• @fartsparkles
    link
    English
    1412 days ago

    It’s also a bunch of brainfarting drivel that could be summarized:

    Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

    Or

    Read Asimov’s I, Robot. Then note that in our reality, we’ve not yet invented the Three Laws of Robotics.

    • @[email protected]
      link
      fedilink
      English
      1912 days ago

      Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

      You make his position sound way more measured and responsible than it is.

      His ‘effective safety measures’ are something like A) solve ethics B) hardcode the result into every AI, I.e. garbage philosophy meets garbage sci-fi.

        • AcausalRobotGod
          link
          fedilink
          English
          910 days ago

          A good chunk of philosophers do believe there are moral facts, but this is less useful for these purposes than one would think

          • @[email protected]
            link
            fedilink
            English
            710 days ago

            yeah it’s been absolutely hilarious to watch this play out in LLM space. so many prompt configurations and model deployments with so very many string-based rule inputs, meant to be configuring inviolable behaviour, that still get egregiously broken

            and afaict none of the dipshits have really seemed to internalise that just maybe their approach isn’t working

    • @[email protected]
      link
      fedilink
      English
      1912 days ago

      If yud just got to the point, people would realise he didn’t have anything worth saying.

      It’s all about trying to look smart without having any actual insights to convey. No wonder he’s terrified of being replaced by LLMs.

      • @fartsparkles
        link
        English
        1412 days ago

        LLMs are already more coherent and capable of articulating and arguing a concrete point.

    • @[email protected]
      link
      fedilink
      English
      1512 days ago

      Before we accidentally make an AI capable of posing existential risk to human being safety

      It’s cool to know that this isn’t a real concern and therefore in a clear vantage of how all the downstream anxiety is really a piranha pool of grifts for venture bucks and ad clicks.

    • @[email protected]
      link
      fedilink
      English
      29 days ago

      That’s a summary of his thinking overall but not at all what he wrote in the post. What he wrote in the post is that people assume that his theory depends on an assumption (monomaniacal AIs) but he’s saying that actually, his assumptions don’t rest on that at all. I don’t think he’s shown his work adequately, however, despite going on and on and fucking on.