The most interesting part here, I find, is the cost analysis. I was quite surprised to see that the cost to train it on current hardware would have been a third of what it cost back when they were training it. That is roughly a 3x improvement in a year to a year and a half. I wonder whether this trend will continue.
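To put numbers on it, here's a quick back-of-the-envelope sketch of the implied annualized rate, assuming the ~3x drop over roughly 18 months (both figures are my rough reading of the article, not measured data):

    # Back-of-the-envelope: if training cost dropped to ~1/3 over ~18 months,
    # what is the implied year-over-year cost improvement?
    # (The ~3x and ~18-month figures are rough estimates, not measured data.)

    improvement_factor = 3.0   # cost today is ~1/3 of the original training cost
    period_years = 1.5         # "a year to a year and a half" -- using the upper end

    annualized = improvement_factor ** (1 / period_years)
    print(f"Implied improvement: ~{annualized:.2f}x cheaper per year")
    # -> Implied improvement: ~2.08x cheaper per year

So even on the conservative 18-month reading, that's training costs roughly halving every year.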
This was a really good write-up on how quickly this stuff is evolving: https://steve-yegge.medium.com/were-gonna-need-a-bigger-moat-478a8df6a0d2
I have been out of the ML world for a bit (like 6 months, lol…) and I already feel way out of date. It seems like I should pick up the Vicuna LLM; I didn't want to touch LLaMA initially due to the legal problems with it. I thought that would be a problem for a while, and then they went and solved it. Somehow I even missed the news of it, most likely due to the enormous amount of news coming out of the ML world (I might need a model to summarize it). Anyway, thanks for the article; I know what to do this weekend.