• Warl0k3@lemmy.world · 5 months ago

    That’s on the companies to figure out, tbh. “You can’t say we aren’t allowed to build biological weapons, that’s too hard” isn’t what you’re saying, but it’s the hyperbolic version of it. The industry needs to figure out how to control the monster they’ve happily sent staggering towards the village, and really they’re the only people with the knowledge to figure out how to stop it. If that’s not possible, maybe we should restrict this tech until it is. LLMs probably aren’t going to end the world, but a protein-sequencing AI that hallucinates while replicating a flu virus could be really bad for us as a species, to say nothing of the pearl-clutching scenario of bad actors getting hold of it.

    • 5C5C5C@programming.dev · 5 months ago

      Yeah, that’s my big takeaway here: if the people who are rolling out this technology cannot make these assurances, then the technology has no right to exist.

      • mindbleach · 5 months ago

        Show me a computer that can only run benign programs.

        • 5C5C5C@programming.dev · 5 months ago

          A computer will run whatever software you put on it. As long as we’re putting benign software on our computers, the computer will be benign.

          If you knowingly put criminal software on a computer then you are committing a crime. If someone tricks you into putting criminal software onto a computer then the person who tricked you is committing a crime.

          If you are developing software and can’t be sure whether the software you’re developing will commit crimes, then you are guilty of a criminal level of negligence.

          • mindbleach · 5 months ago

            Nah, if the computer manufacturer can’t stop you from running evil software, the technology has no right to exist. Demand these assurances!

            • 5C5C5C@programming.dev · 5 months ago

              You’re being pretty dense if you can’t wrap your head around a basic concept of accountability.

              A human can choose to commit crimes with any product, including … I don’t know … a fork. You could choose to stab someone with a fork, and you’d be a criminal. We wouldn’t blame the fork manufacturer for that, because the person who chose to commit the crime was the person holding the fork. That’s who’s accountable.

              But if a fork manufacturer starts selling forks which might start stabbing people on their own, without any human user intending for the stabbing to take place, then the manufacturer who produced and sold the auto-stabbing forks is absolutely guilty of criminal negligence.

              Edit: But I’ll concede that a law against the technology being used to assist humans in criminal activity in a broad sense is unrealistic. At best there would need to be bounds around the degree of criminal help that the tool is able to provide.

              • mindbleach · 5 months ago

                But a human asking how to make a bomb is somehow the LLM’s fault.

                Or the LLM has to know that you are who you say you are, to prevent you from writing scam e-mails.

                The guy you initially replied to was talking about hooking up an LLM to a virus replication machine. Is that the level of safety you’re asking for? A machine so safe, we can give it to supervillains?

    • tal@lemmy.today · 5 months ago

      1. There are many tools that might be used to help create a biological weapon or the like; you could use a pocket calculator for that. But we don’t put conditions on the sale of pocket calculators requiring proof that nothing hazardous can be done with them. That is, this is a bar substantially higher than exists for any other tool.

      2. While I certainly think that there are legitimate existential risks, we are not looking at a near-term one. OpenAI or whoever isn’t going to be producing something human-level any time soon. Like, Stable Diffusion, a tool used to generate images, would fall under this, yet it’s very questionable that it would be terribly useful for doing anything dangerous.

      3. California putting a restriction like that in place, absent some kind of global restriction, won’t stop development of models. It just ensures that it’ll happen outside California. Like, it’ll have a negative economic impact on California, maybe, but it’s not going to have a globally-restrictive impact.

      • FaceDeer@fedia.io · 5 months ago

        Like, Stable Diffusion, a tool used to generate images, would fall under this. It’s very questionable that it, however, would be terribly useful in doing anything dangerous.

        My concern is how short a hop it is from this to “won’t someone please think of the children?” Then someone uses Stable Diffusion to create a baby in a sexy pose and it all goes down in flames. IMO that sort of thing happens often enough that pushing back against “gateway” legislation is reasonable.

        California putting a restriction like that in place, absent some kind of global restriction, won’t stop development of models.

        I’d be concerned about its impact on the deployment of models too. Companies are not going to want to write software that they can’t sell in California, or that might get them sued if someone takes it into California despite it not being sold there. Silicon Valley is in California; this isn’t like it’s Montana banning it.

      • Mouselemming · 5 months ago

        So, the monster was given a human brain that was already known to be murderous. Why, we don’t know, but a good bet would be childhood abuse and fetal alcohol syndrome, maybe inherited syphilis, given the era. Then that murderer’s brain was given an extra-strong body and subjected to more abuse and rejection. That’s how you create a monster.

      • FaceDeer@fedia.io · 5 months ago

        Indeed. If only Frankenstein’s Monster had been shunned nothing bad would have happened.

        • Warl0k3@lemmy.world · 5 months ago

          You two may not be giving me enough credit for my choice of metaphors here.

    • conciselyverbose · 5 months ago

      It’s not a monster. It doesn’t vaguely resemble a monster.

      It’s a ridiculously simple tool that does not in any way resemble intelligence and has no agency. LLMs do not have the capacity for harm. They do not have the capability to invent or discover (though if they did, that would be a massive boon for humanity and also insane to hold back). They’re just a combination of a mediocre search tool with advanced parsing of requests and the ability to format the output in the structure of sentences.

      AI cannot do anything on its own. If your concern is allowing AI to release proteins into the wild, obviously that is a terrible idea. But that’s already more than covered by all the regulation on research into dangerous diseases and bioweapons. AI does not change anything about the scenario.

      • Carrolade@lemmy.world · 5 months ago

        I largely agree: current LLMs add no capabilities to humanity that it did not already possess. The point of the regulation, though, is to encourage a certain degree of caution in future development.

        Personally I do think it’s a little overly broad. A Google search can aid in a cybersecurity attack, too. The kill-switch idea is also a little silly, and largely a waste of time dreamed up by watching too many Terminator and Matrix movies. While we might eventually reach a point where that becomes a prudent idea, we’re still quite far away.

        • conciselyverbose · 5 months ago

          We’re not anywhere near anything that has anything in common with human-level intelligence, or that poses any threat.

          The only possible cause for support of legislation like this is either a complete absence of understanding of what the technology is, combined with treating Hollywood as reality (the layperson and probably most legislators involved in this), or an aggressive attempt at market control through regulatory capture by big tech. If you understand where we are and what paths we have forward, it’s very clear that this can only do harm.