Unfortunately, due to the complexity and specialized nature of AVX-512, such optimizations are typically reserved for performance-critical applications and require expertise in low-level programming and processor microarchitecture.
Whoever wrote this article is just misleading everyone.
First of all, they’ve done the same for similar instruction sets before, so this is nothing special. Second of all, they measure the speedup against a basic implementation that doesn’t use any optimizations.
They did the same in the past for AVX2, which is 67x faster in the test where AVX-512 got the 94x speed increase. So it’s not 94x faster now, it’s 1.4x faster than the previous iteration using the older AVX2 instruction set. It’s barely twice as fast as the implementation using SSE3 (40x faster than the slow version), an instruction set from 20 years ago…
So yeah, it’s awesome that they did the same quality work for AVX-512, but the 94x boost is just plain bullshit… it’s really sad that great work then gets worded in such a misleading way to create clickbait, rather than getting a proper informative article…
Even more ridiculous since a 1.4x performance increase is already incredible news for anyone who makes regular use of this.
If someone found a software optimization that improved, say, Blender performance by 1.4x, people would be shouting praises from the rooftops.
Indeed, it’s a very nice boost, and great work, but this clickbait nonsense is just so stupid…
And i’m really bothered by how it’s just parroted everywhere… Doesn’t anybody wonder “94x faster is like… really a LOT… that can’t be true”?
deleted by creator
Relevant section:
Intel made waves when it disabled AVX-512 support at the firmware level on 12th-gen Core processors and later models, effectively removing the SIMD ISA from its consumer chips.
As someone who has done some hand coding of AVX-512, I appreciate their willingness to take this on. Getting the input vectors set up correctly for the instructions can be a hassle, especially when the input dataset is not an even multiple of 64.
Removed by mod
Why, do you expect a difference on BSD?
Removed by mod
nice.
you can usually get a pretty good performance increase by hand-writing asm where appropriate.
don’t know if it’s a coincidence, but i’ve never seen someone who’s good at writing assembly say that it’s never useful.
To be fair, people who don’t find assembly useful probably wouldn’t get good at writing assembly
for sure, it’s perfectly reasonable to say “this tool isn’t useful for me”
it’s another thing to say “this tool isn’t useful for anyone”
Though you’d get the same speedup if you used SIMD intrinsics. This is just comparing non-SIMD to SIMD.
from the article it’s not clear what the performance boost is relative to intrinsics (it’s extremely unlikely to be anything close to 94x lol), and it’s not even clear from the article if the avx2 implementation they benchmarked against was intrinsics or handwritten either. in some cases avx2 seems to slightly outperform avx-512 in their implementation
there are also so many different ways to break a problem down that i’m not sure this is an ideal showcase, at least without more information.
to be fair to the presenters, they may not be the ones making the specific flavour of hype that the article writers are.
deleted by creator
yes, as i said
from the article it’s not clear what the performance boost is relative to intrinsics
(they don’t make that comparison in the article)
so it’s not clear exactly how handwritten asm compares to intrinsics in this specific comparison. we can’t assume their handwritten AVX-512 asm and intrinsics AVX-512 will perform identically here; it may be better, or worse.
also worth noting they’re discussing benchmarking of a specific function, so overall performance on executing a given set of commands may be quite different depending on what can and can’t be unrolled and in which order for different dependencies.
Absolute madness. I cringe at the thought of writing modern x86 asm code.
Great work!