- cross-posted to:
- [email protected]
- cross-posted to:
- [email protected]
Summary
Fable, a social media app focused on books, faced backlash for its AI-generated 2024 reading summaries containing offensive and biased commentary, like labeling a user a “diversity devotee” or urging another to “surface for the occasional white author.”
The feature, powered by OpenAI’s API, was intended to be playful and fun. However, some of the summaries took on an oddly combative tone, making inappropriate comments on users’ diversity and sexual orientation.
Fable apologized, disabled the feature, and removed other AI tools.
Critics argue the response was insufficient, highlighting broader issues of bias in generative AI and the need for better safeguards.
I won’t say AI doesn’t have its edgecase uses, and I know people sneer at “prompt engineering” - but you really gotta put as much if not more effort into the prompt as it would to make a dumb if-case machine.
Several paragraphs explaining and contextualizing the AI’s role, then the task at hand, then how you want the output to be, and any additional input. It should be at least 10 substantial paragraphs - but even then you’ve probably not got a bunch of checks for edgecases, errors, formatting, malicious intent from the user…
It’s a less secure, less specific, less technical, higher risk, higher variable, untrustworthy “programming language” interface that enveigles and abstracts the interaction with the data and processors. It is not a person.
and the bot still tends to ignore some instructions