- cross-posted to:
- [email protected]
Preventing artificial intelligence chatbots from generating harmful content may be harder than initially believed, according to new research from Carnegie Mellon University that reveals new methods for bypassing safety protocols.
Well, one should hope. Putting safety guards on them in the first place was a mistake.
How? For research purposes of course.
Believe it or not, there’s an article with more info you can read
Thanks. I just want to drive conversation here and talk to people for once. I guess you’re one of those people who just says “Google it,” thinking you’re doing a great service.