https://www.pugetsystems.com/labs/articles/stable-diffusion-performance-nvidia-geforce-vs-amd-radeon/
That is, unfortunately, sorely outdated, particularly with the advent of TensorRT. Best case vs. best case, the 4080 is about twice as fast today:
https://www.tomshardware.com/pc-components/gpus/stable-diffusion-benchmarks#section-stable-diffusion-512x512-performance
The gap would be even larger if, or more precisely when, FP8 and/or sparsity are used on the Ada Lovelace cards.
Of note, TensorRT doesn’t support SDXL yet.
This is no longer true.
If you use NV’s TensorRT plugin with the A1111 development branch, TensorRT works very well with SDXL (it’s actually much less painful to use than SD1.5 TensorRT was initially).
The big constraint is VRAM capacity. I can use it for 1024x1024 (and similar-total-pixel-count) SDXL generations on my 4090, but can’t go much beyond that without tiling (though that is generally what you do anyway for larger resolutions).
Just like for SD1.5, TensorRT speeds up generation by almost a factor of 2 for SDXL (compared to an “optimized” baseline using SDP).
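If you want to check the ~2x figure on your own setup, a minimal, backend-agnostic timing sketch looks like this. Note that `step_fn` here is a hypothetical stand-in for one sampler step of whatever backend you're comparing (SDP baseline vs. TensorRT), not a real A1111 or TensorRT API:

```python
import time

def iterations_per_second(step_fn, n_steps=20, warmup=3):
    """Time a denoising-step-like callable and report it/s.

    step_fn is a placeholder for one sampler step of whatever
    backend is being benchmarked; warmup runs are excluded so
    kernel compilation / cache effects don't skew the result.
    """
    for _ in range(warmup):
        step_fn()
    start = time.perf_counter()
    for _ in range(n_steps):
        step_fn()
    elapsed = time.perf_counter() - start
    return n_steps / elapsed

if __name__ == "__main__":
    # Dummy step that sleeps 10 ms, just to show the harness works:
    its = iterations_per_second(lambda: time.sleep(0.01))
    print(f"{its:.1f} it/s")  # roughly 100 it/s for the dummy step
```

Run the same harness against both backends and divide the two it/s numbers to get the speedup.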
Alright thanks. This stuff is moving very fast, and I was only looking at the master branch.
You can't compare using two different implementations. You compare only on A1111 or only on SHARK.
SHARK doesn't even seem to be taking any advantage of the 4090, which comes out significantly slower than the 7900 XTX there.
The recent A1111 Olive branch brought its performance almost equal to SHARK's, and A1111 also fully utilizes the 4090.
The new results on the same A1111 implementation are here -
https://www.pugetsystems.com/labs/articles/amd-microsoft-olive-optimizations-for-stable-diffusion-performance-analysis/
You can halve the 4090's perf if you want to exclude TensorRT, which gives 35. That's still significantly higher than the 7900 XTX's 23.
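Spelling out the arithmetic from those quoted it/s figures (the ~70 for the 4090 with TensorRT is implied by "halve it to get 35"):

```python
# it/s figures as quoted: 4090 with TensorRT ~70 (implied),
# halved without TensorRT -> 35; 7900 XTX with Olive -> 23.
rtx4090_trt = 70
rtx4090_no_trt = rtx4090_trt / 2   # 35.0
rx7900xtx = 23

print(rtx4090_no_trt / rx7900xtx)  # ~1.52x lead even without TensorRT
print(rtx4090_trt / rx7900xtx)     # ~3.04x lead with TensorRT
```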
It mentions Olive. I don’t know what that is, but it’s suggesting it could cause AMD to catch back up. Is that true? Or is it more likely going to get them an extra 10% performance instead of the extra 110% they need to catch up?
That seems like an arbitrary handicap. You should use whichever solution runs best on the respective hardware.