Channel 1 AI released a promotional video explaining how the service will provide users with personalized news coverage on topics ranging from finance to entertainment.

  • @restingboredface
88
5 months ago

    “they intend to re-create events not captured by camera using generative AI,”

    That’s going to cause all sorts of problems.

    • @[email protected]
16
5 months ago

Don’t worry, they’ve stated that they’ll clearly mark it with an arbitrary symbol, so no one could possibly think it’s the real deal.

    • Ech
      6
      5 months ago

The deepest of ironies: a company allegedly created to report “facts” is primarily using artificiality to do so.

      Even if they are completely earnest and honest, the tech they are helping spearhead is going to completely change how information is approached and will be one of the biggest challenges, if not the biggest challenge, we are going to face as a society in a long time. This is really unnerving to watch develop in front of my eyes.

      • @[email protected]
4
5 months ago

I feel like they’re shooting themselves in the foot with this. If AI reporting gets good enough, people can just have an AI on their phone tell them the news, rather than watch a video of an AI giving the report, broadcast from a central server. They will lose their viewers.

    • @mindbleach
5
5 months ago

      Seriously! “We’re gonna make shit up and present it as news” is the worst-case scenario everyone’s aghast over, and these idiots openly plan to do it.

      At long last, we have created the torment nexus, from the bestselling novel Do Not Build The Torment Nexus.

    • @fruitycoder
5
5 months ago

The use of unrelated b-roll and stock footage is already a nightmare in this way. One burning trash can becomes the new standard-bearer for any protest they disagree with for the next year or two. Now they can fine-tune it to the location …

      Honestly, for the news, these sorts of tricks for effect NEED to be clearly and constantly marked during their use. It is actually dangerous the way they currently do it, imho.

Granted, I’m biased towards transparency, but I think if they do use AI then they should have to reveal the prompt and model used to make it, on top of clear and constant marking showing it is not actual footage.