I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • CookieJarObserver · 1 year ago

    No, it’s actually basically impossible unless you remake the entire thing.

    • snooggums@kbin.social · 1 year ago

      So remake the entire thing.

      If they did something the wrong way, the fact that it’s hard to change or redo doesn’t mean they get a free pass to keep doing it wrong.

      • FaceDeer@kbin.social · 1 year ago

        If that’s what the law requires, then the AI companies will just move somewhere else, and that jurisdiction will miss out on the next industrial revolution.

        • assassin_aragorn@lemmy.world (OP) · 1 year ago

          I’m fine with that if the cost of having it is our privacy. I think there are some situations where an invasion of privacy can be justified, such as legitimately trying to find online pedos.

          But giving up my privacy to a corporation in an irreversible way so they can make millions? Absolutely not.

    • Cloudless ☼@feddit.uk · 1 year ago

      One way to make an A.I. model forget the things it learns from private user data is to use a technique called differential privacy. Differential privacy is a mathematical framework that adds carefully calibrated noise to the data or the model outputs, so that the privacy of individual users is preserved, while the overall accuracy of the model is maintained. This means that the A.I. model cannot learn any specific information about any user, but can still perform its intended task on aggregate data.
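
      As an illustrative sketch only (not part of the quoted answer): the Laplace mechanism is one concrete way to apply differential privacy to a single aggregate query. The dataset, clipping bounds, and epsilon below are made up for the example.

      ```python
      import numpy as np

      def dp_mean(values, lower, upper, epsilon, rng):
          """Differentially private mean via the Laplace mechanism."""
          # Clipping bounds each person's influence, which bounds the
          # sensitivity of the mean to (upper - lower) / n.
          clipped = np.clip(values, lower, upper)
          sensitivity = (upper - lower) / len(clipped)
          # Laplace noise scaled to sensitivity / epsilon hides any one record.
          noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
          return float(clipped.mean() + noise)

      rng = np.random.default_rng(0)
      ages = np.array([23.0, 35.0, 41.0, 29.0, 52.0, 38.0])
      print(dp_mean(ages, lower=0.0, upper=100.0, epsilon=1.0, rng=rng))
      ```

      Smaller epsilon means more noise and stronger privacy; DP-SGD applies the same clip-and-noise idea to the gradients during model training.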

      Another way to make an A.I. model forget the things it learns from private user data is to use a technique called federated learning. Federated learning is a distributed approach that allows multiple A.I. models to learn from local data on different devices, without sending the data to a central server. This means that the A.I. models only share their updates or parameters with each other, not the raw data, and thus protect the privacy of the users.
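
      Again, a rough sketch rather than anything from the quoted answer (the model, data, and hyperparameters are invented for illustration): federated averaging, the standard baseline, has each client train on its own data and send only updated weights back to the server.

      ```python
      import numpy as np

      def local_update(w, X, y, lr=0.1, steps=10):
          """One client's training: gradient steps on data that never leaves the device."""
          w = w.copy()
          for _ in range(steps):
              grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
              w = w - lr * grad
          return w

      def fed_avg(w, clients):
          """Server round: average the clients' updated weights, never their raw data."""
          return np.mean([local_update(w, X, y) for X, y in clients], axis=0)

      rng = np.random.default_rng(0)
      true_w = np.array([2.0, -1.0])

      # Three clients, each holding a private local dataset.
      clients = []
      for _ in range(3):
          X = rng.normal(size=(20, 2))
          clients.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))

      w = np.zeros(2)
      for _ in range(20):  # 20 communication rounds
          w = fed_avg(w, clients)
      print(w)  # approaches true_w without the data ever being pooled
      ```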

      However, both of these techniques have some limitations and challenges. For example, differential privacy may require a lot of data and computation to achieve a good balance between privacy and accuracy. Federated learning may face issues such as communication overhead, device heterogeneity, and malicious attacks. Moreover, neither of these techniques guarantees that the A.I. model will completely forget the things it learns from private user data, as there may still be some traces or influences left in the model’s behavior or performance.

      Therefore, it is not fair to say that it is virtually impossible to make an A.I. model forget the things it learns from private user data, but it is certainly very difficult and requires careful design and evaluation. There may also be some trade-offs between privacy, accuracy, efficiency, and security that need to be considered.

      ^^^^ According to Bing Chat