- cross-posted to:
- [email protected]
This was an interesting awareness-raising project.
And the article says they didn’t let the chatbot generate its own responses (which could produce LLM hallucinations), but instead used an LLM in the background to categorize the user’s question and return a pre-written answer from that category.
Surely nothing could go wrong…
“What do you know about money laundering?”
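For anyone curious, the categorize-then-canned-answer setup described above boils down to something like this minimal Python sketch. The model name, category list, and canned answers are all my own placeholders, not from the article; the point is that the LLM only ever emits a category label, never user-facing text.

```python
# Sketch of the approach described in the article: the LLM only picks a
# category; every answer shown to the user is pre-written by a human, so
# the model can't hallucinate free-form text. All names below are made up.
from openai import OpenAI  # assumes an OpenAI-style chat API

client = OpenAI()

# Human-curated answers; the LLM never generates user-facing text.
CANNED_ANSWERS = {
    "dosage": "Start low, go slow. See the harm-reduction dosage charts.",
    "interactions": "Never mix depressants; check an interaction chart first.",
    "testing": "Use reagent test kits; one test is better than none.",
    "other": "Sorry, I can only answer harm-reduction questions.",
}

def classify(question: str) -> str:
    """Ask the model to map a question onto one known category."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Classify the question into exactly one of: "
                        + ", ".join(CANNED_ANSWERS)
                        + ". Reply with the category name only."},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    label = resp.choices[0].message.content.strip().lower()
    # Anything unexpected falls through to the safe default.
    return label if label in CANNED_ANSWERS else "other"

def answer(question: str) -> str:
    return CANNED_ANSWERS[classify(question)]

print(answer("What do you know about money laundering?"))  # -> "other" answer
```

So an off-topic question like the money-laundering one would (ideally) just hit the fallback category rather than getting a generated reply.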
I mean, couldn’t you just use any of a plethora of other uncensored LLMs from huggingface if you want those sorts of answers?