Probably impossible without some really hideous equation front-and-center in the whitepaper, and I am not seeing it. Like, I’m fully willing to believe there’s some way to do linear work in parallel, by transforming it via some mathematician’s ayahuasca-fueled master’s thesis. But I’m also holding out hope that P=NP.
And either one of those would work just fine on a GPU, because of Turing completeness. If you’re pushing new hardware to run software betterer: scam.
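To be fair, the kind of transformation being alluded to does exist for narrow cases: when the per-element operation is associative, a loop that looks inherently sequential (a prefix scan) can be reorganized into O(log n) parallel passes, the classic Hillis–Steele / Blelloch trick. A minimal sketch below, in plain sequential Python just to show the dependency structure (the function name is mine, not from any whitepaper); on a GPU, each inner loop would be one parallel step with one thread per index. The point being: this works because of a special algebraic structure, not because of some general method for parallelizing arbitrary linear work.

```python
# Hillis-Steele inclusive prefix scan: O(n log n) work, O(log n) depth.
# Every iteration of the inner loop reads only the previous generation,
# so all of them are independent and could run simultaneously.
def parallel_style_scan(xs):
    out = list(xs)
    step = 1
    while step < len(out):
        prev = list(out)                  # snapshot of the previous pass
        for i in range(step, len(out)):   # on a GPU: one thread per i
            out[i] = prev[i - step] + prev[i]
        step *= 2
    return out

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5, 9, 2, 6]
    # naive sequential prefix sum for comparison
    seq, total = [], 0
    for x in data:
        total += x
        seq.append(total)
    assert parallel_style_scan(data) == seq
    print(parallel_style_scan(data))      # [3, 4, 8, 9, 14, 23, 25, 31]
```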