StarVector is a foundation model for generating Scalable Vector Graphics (SVG) code from images and text. It utilizes a Vision-Language Modeling architecture to understand both visual and textual inputs, enabling high-quality vectorization and text-guided SVG creation.
I was just thinking about an img2svg generator. I was trying to get Claude to do it earlier today, with poor results.
Claude frequently draws SVGs to illustrate things for me (I'm guessing it's in the prompt), but even though it's better at it than all the other models, it still kinda sucks. It's just a fundamentally dumb task for a purely language model, similar to the ARC-AGI benchmark: it makes more sense for a vision model, and trying to get an LLM to do it is a waste.
It doesn’t look to be that much more effective than a simple autotracer.
autotracers can’t generate svgs from text
True, I was only looking at the img to svg part.
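For context on the comparison above: a classical autotracer (e.g. potrace) fits smooth Bézier paths to a bitmap, while the truly naive image-to-SVG baseline just emits one shape per pixel. Here is a minimal, purely illustrative sketch of that naive baseline in Python (the `bitmap_to_svg` helper is hypothetical, not from StarVector or any tracer library) to show the floor that both autotracers and ML img2svg models have to beat:

```python
# Naive image-to-SVG baseline: one <rect> per filled pixel.
# Real autotracers instead fit curves to region boundaries, producing
# far fewer, smoother elements; ML models like StarVector aim to emit
# semantic primitives (paths, circles) directly.

def bitmap_to_svg(bitmap, cell=10):
    """bitmap: list of rows of 0/1 values; returns an SVG string."""
    h, w = len(bitmap), len(bitmap[0])
    rects = [
        f'<rect x="{x * cell}" y="{y * cell}" width="{cell}" height="{cell}"/>'
        for y, row in enumerate(bitmap)
        for x, val in enumerate(row)
        if val
    ]
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{w * cell}" height="{h * cell}">'
        + "".join(rects)
        + "</svg>"
    )

# A 3x3 "plus" shape: 5 filled pixels -> 5 rects.
plus = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
print(bitmap_to_svg(plus))
```

The pixel-grid output is valid SVG but not really "vector" in spirit; the interesting part of both autotracing and learned vectorization is collapsing those rects into a handful of paths.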