I work with machine learning tasks daily, both as an ML researcher and as a hobbyist. The gap between what I can do at work and at home is significant: an A40 at work can do far more than the 3080 I have at home. That obviously makes sense, given the massively higher price point.
However, what I find odd is that there are no consumer-level server GPUs targeted at ML on the market. The A40 is not just a scaled-up consumer GPU, and with machine learning growing as a hobby, consumer- and enthusiast-level server GPUs seem like a surprising market gap.
Have you considered renting GPU VMs from a cloud provider?
Azure has A10 and A100 instances you can spin up. Not sure whether they also offer the A40, but it might be worth a look.
AWS and Google probably have comparable offerings.
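For reference, spinning one up is pretty quick with the Azure CLI. This is only a rough sketch, not a tested deployment: it assumes you have `az` installed and are logged in, the resource-group and VM names are placeholders, and the exact VM size names (e.g. the NVadsA10_v5 series for the A10) change over time, so check Azure's current size list first.

```shell
# List GPU-capable sizes in a region to see what's currently offered
# (the NVadsA10_v5 series carries the A10).
az vm list-sizes --location eastus --output table | grep -i "A10"

# Create a single A10 VM; resource group and VM name below are placeholders.
az vm create \
  --resource-group my-ml-rg \
  --name a10-box \
  --size Standard_NV36ads_A10_v5 \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys

# Deallocate when done so you stop paying for compute (storage still bills).
az vm deallocate --resource-group my-ml-rg --name a10-box
```

The A100 sizes live in a different series (NC_A100_v4), and the deallocate step matters: a forgotten GPU VM left running is usually the most expensive part of the hobby.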