- cross-posted to:
- [email protected]
- [email protected]
I often see people with an outdated understanding of modern LLMs.
This is probably the best interpretability research to date, by the leading interpretability research team.
It’s worth a read if you want a peek behind the curtain on modern models.
Ah yes, it must be the scientists specializing in machine learning studying the model full time who don’t understand it.