It’s a travesty. The whole LLM “AI” push is a fraud. There’s nothing approaching actual intelligence. It’s simply statistical word strings.
I frankly think Anthropic and OpenAI would struggle to make a hallucination-free AI too. I don’t understand why Apple thinks they are going to be able to fix hallucinations.
I don’t even know if it’s theoretically possible to make a hallucination-free LLM. Hallucination is kind of its basic operating principle.
People are misled by the name. It’s not making stuff up, it’s just less accurate.
Less accurate as in misleading and outright false.
It always predicts the next word based on its tokenisation, its training data, and its context handling. So accuracy is all there is.
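A minimal sketch of what I mean, using a made-up toy bigram table instead of a real model (the names and counts here are purely illustrative, not how any actual LLM is stored): the model only ever picks a statistically likely continuation, so a "hallucination" is just a less probable continuation, not a different kind of operation.

```python
import random

# Hypothetical toy "training statistics": how often each word followed a context.
# A real LLM learns billions of such patterns over token sequences.
bigram_counts = {
    "the capital of": {"France": 8, "Atlantis": 2},  # both appeared in training text
}

def predict_next(context: str) -> str:
    """Sample the next word in proportion to how often it followed `context`."""
    counts = bigram_counts[context]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Usually it says "France", but sometimes it confidently says "Atlantis".
# Both outputs are produced by exactly the same mechanism; one is just less accurate.
print("the capital of", predict_next("the capital of"))
```

There's no separate "truth mode" to switch on, which is why "fixing hallucinations" isn't a patch, it's the whole problem.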