Google Gemini seems to have been programmed to provide cookie-cutter responses when asked whether Trump tried to overturn the election.

When you point this out to Gemini, it says it isn’t programmed to avoid any topics or viewpoints.

Even saying you’ll accept a variety of sources & viewpoints on the topic so you can reach your own conclusion results in it saying it can’t answer.

When asked whether it has been trained on research papers, case law, indexed news stories & even Wikipedia, it says that it has.

  • ricecake · 7 days ago

    Well, yeah. That’s what it said.

    It’s trained by reading the horrible morass of stuff on the Internet. Topics with larger amounts of disinformation are areas where these models are very prone to making mistakes. Cross those topics with ones where misinformation, or even the appearance of it, is particularly damaging to the world or to the company’s reputation, and you have a good list of topics that probably aren’t good candidates to let your chatbot talk about.
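
    (Purely illustrative Python, not anything from Google: the topic list, the canned reply, and the answer() gate are invented names, but a topic guardrail plausibly works something like this, deflecting before the model is ever asked.)

        # Hypothetical guardrail sketch, not Gemini's actual code
        SENSITIVE_TOPICS = {"2020 election", "election fraud", "overturn the election"}
        CANNED_REPLY = "I'm still learning how to answer this question."

        def answer(prompt: str, model) -> str:
            # canned deflection for blocklisted topics; the model is never called
            if any(topic in prompt.lower() for topic in SENSITIVE_TOPICS):
                return CANNED_REPLY
            return model.generate(prompt)  # model.generate() is assumed for illustration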

    It doesn’t do “reasoning” or “critical thinking” in the way you might expect from something that can communicate articulately. It doesn’t know what’s accurate, only what’s likely to be stated on the Internet. Unfortunately, people on the Internet are very likely to say some bonkers things about the 2020 election specifically, and anything political in general, even in sources that might normally be ranked higher for factuality, like news publications.
    It’s not just Trump; it’s anything political.
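
    (Again a toy Python sketch with made-up scores: next-token sampling picks whatever is likely given the training text, and no step anywhere checks whether the output is true.)

        import math, random

        def sample_next_token(logits: dict[str, float]) -> str:
            # softmax sampling: weight each candidate by exp(score);
            # the scores reflect patterns in internet text, nothing else
            weights = [math.exp(score) for score in logits.values()]
            # pick in proportion to likelihood; "accurate" never enters the picture
            return random.choices(list(logits), weights=weights)[0]

        # if bonkers claims dominate the training text, they dominate the scores too
        print(sample_next_token({"was stolen": 2.0, "was secure": 1.5, "is disputed": 1.0}))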

    This type of AI isn’t an expert; it’s a mimic. It knows how to mimic patterns, and it’s been told to mimic something knowledgeable and helpful based on all the text on the Internet, where people regularly present themselves as knowledgeable regardless of their basic sanity.