The input and output are text, which is very easy to send over the internet. Yes, the best models need VRAM; how that's different from "lots of hardware," I don't know. How do you think ChatGPT works? You send them text, rent GPU time, and get text in return. We are proposing the same thing, but distributed. And yes, like Turo, but for GPUs. It's very similar to cloud gaming, another use case for renting GPU time; however, unlike cloud gaming, we don't have bandwidth limitations. We can even have several users with small amounts of VRAM combine it to run very large models by splitting the model into sections. This has already been accomplished: https://github.com/bigscience-workshop/petals. On top of that, we are proposing a validation method to verify whether the output is correct, plus an incentive structure. The problem with Petals is that it relies on charity.
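To make the "splitting the model into sections" idea concrete, here's a minimal sketch of pipeline-style splitting, the basic idea behind Petals: each node holds only a contiguous slice of the model's layers, and activations are passed node-to-node. The layer count, the two-node split, and the toy "layers" are assumptions for illustration, not how Petals is actually implemented.

```python
def make_layer(i):
    # Stand-in for a transformer block: just a simple per-element transform.
    return lambda x: [v * 2 + i for v in x]

layers = [make_layer(i) for i in range(8)]   # the full "model"
sections = [layers[0:4], layers[4:8]]        # split across 2 nodes

def run_node(section, activations):
    # A node only needs enough VRAM for its own section of layers.
    for layer in section:
        activations = layer(activations)
    return activations

# Activations travel over the network from one node's section to the next.
x = [1.0, 2.0]
for section in sections:
    x = run_node(section, x)

# Sanity check: the distributed run matches running the whole model locally.
full = [1.0, 2.0]
for layer in layers:
    full = layer(full)
assert x == full
```

The point is that no single participant ever needs the whole model in memory; only the (small) activation tensors cross the network between sections.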
The key difference, though, is that ChatGPT does not rent GPU time from its users. It rents GPU time from datacenters, for fiat currency.
Datacenters take that money and use it to buy new hardware and to maintain that hardware. If I spent my money on my hardware, I would want to rent it out at a decent rate. After all, unlike datacenters, which buy electricity in bulk, I have to pay residential rates for electricity.
I am not convinced that a crypto payment in an unknown coin could make up for what I spend on electricity. I would expect to earn something similar to proof-of-work models, maybe about $1 after a month's worth of processing. Meanwhile, the electricity is probably going to cost me $10 or more, making it not worth my while.
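For a rough sense of scale, here's a back-of-the-envelope sketch of that electricity cost. The GPU wattage and the residential rate are assumptions for illustration, not measured figures:

```python
# Rough monthly electricity cost for a GPU running inference nonstop.
gpu_watts = 200        # assumed average draw while serving requests
hours = 24 * 30        # one month of continuous processing
rate = 0.15            # assumed residential rate in $/kWh

kwh = gpu_watts / 1000 * hours   # 144 kWh
cost = kwh * rate                # ~$21.60
print(f"{kwh:.0f} kWh, about ${cost:.2f}/month")
```

Even with these fairly modest assumptions, the cost lands well above a $1/month payout.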
Maybe it will be worthwhile for others. But most people could likely just install Serge or LLaMA locally and get more "bang for the buck." I guess it all comes down to how you hand out the coin and what other perks come with being a miner / staking node / person with a GPU.
I mean, if I don’t get paid for it, could I at least use the network for free?