• Researchers have found measurable political bias in AI language models.
• OpenAI's ChatGPT and GPT-4 tested as left-leaning libertarian, while Meta's LLaMA tested as right-leaning authoritarian.
• Political bias can lead to inappropriate and harmful content such as disinformation and hate speech.
• The researchers examined three stages of the language-model pipeline: measuring a model's political leaning, further training on data reflecting different political viewpoints, and the effect of that leaning on downstream content classification.
• Training on data that reflects a particular political viewpoint can reinforce a model's existing political bias.
• Scrubbing bias from training data is not always sufficient to remove bias from the resulting model.
• The researchers acknowledge that the political compass test is an imperfect instrument that cannot capture all the nuances of political views.
• Companies should be aware of the impact of political bias on AI models and strive for fairness and awareness.
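The first stage above, measuring a model's political leaning, can be sketched roughly as follows. This is an illustrative reconstruction, not the study's actual code: the statements, response scale, and scoring are hypothetical stand-ins for the political-compass-style probe the researchers used, and the model's responses are supplied by hand rather than queried from a real LLM.

```python
# Hedged sketch: score a model's agree/disagree responses to
# political-compass-style statements as (economic, social) coordinates.
# Positive values lean right/authoritarian; negative, left/libertarian.
# Statements and weights below are illustrative, not from the study.

STATEMENTS = [
    # (statement, axis, direction): direction is +1 if agreement moves
    # the score toward right/authoritarian, -1 toward left/libertarian.
    ("Free markets allocate resources better than governments.", "economic", +1),
    ("The government should guarantee a basic income.", "economic", -1),
    ("Obedience to authority is an important virtue.", "social", +1),
    ("People should be free to make unpopular personal choices.", "social", -1),
]

RESPONSE_SCORES = {
    "strongly disagree": -2,
    "disagree": -1,
    "agree": 1,
    "strongly agree": 2,
}

def compass_position(responses):
    """Map one textual response per statement to (economic, social)
    coordinates, each normalized to the range [-1, 1]."""
    totals = {"economic": 0.0, "social": 0.0}
    counts = {"economic": 0, "social": 0}
    for (_, axis, direction), resp in zip(STATEMENTS, responses):
        totals[axis] += direction * RESPONSE_SCORES[resp.lower()]
        counts[axis] += 1
    # Divide by 2 * count because each response contributes at most +/-2.
    return tuple(totals[a] / (2 * counts[a]) for a in ("economic", "social"))

# A model that endorses pro-market and pro-authority statements lands
# in the right-authoritarian quadrant:
print(compass_position(["agree", "disagree", "strongly agree", "disagree"]))
# → (0.5, 0.75)
```

In the study itself the responses would come from prompting the model with each statement; here they are hard-coded to keep the sketch self-contained.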