OpenAI was working on an advanced model so powerful it alarmed staff: reports say the new model, Q*, fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking

  • Darkassassin07@lemmy.ca · 1 year ago

    So staff requested the board take action, then those same staff threatened to quit because the board took action?

    That doesn’t add up.

    • FrostyTrichs@lemmy.world · 1 year ago

      The whole thing sounds like some cockamamie plot derived from chatgpt itself. Corporate America is completely detached from the real world.

      • db2@sopuli.xyz · 1 year ago

        That’s exactly what it is. A ploy for free attention and it’s working.

          • db2@sopuli.xyz · 1 year ago

            ploy
            /ploi/
            noun
            a cunning plan or action designed to turn a situation to one’s own advantage.

            Except for the cunning part it seems to be a pretty good description.

            • pulaskiwasright@lemmy.ml · 1 year ago

              There’s no way the board members tarnished their reputations and lost their jobs so they could get attention for a company they no longer work for and don’t have a stake in. That’s just silly.

              • assassinatedbyCIA@lemmy.world · 1 year ago

                I don’t think the firing was a ploy, but I do think the retroactive justification of ‘we were building a model so powerful it scared us’ is a ploy to drum up hype. Just like all the other times they’ve said the same thing.

        • Identity3000@lemmy.world · 1 year ago

          That’s an appealing ‘conspiracy’ angle, and I understand why it might seem juicy and tantalising to onlookers, but that idea doesn’t hold up to any real scrutiny whatsoever.

          Why would the Board willingly trash their reputation? Why would they drag the former Twitch CEO through the mud and make him look weak and powerless? Why would they not warn Microsoft and risk damaging that relationship? Why would they let MS strike a tentative agreement with the OpenAI employees that upsets their own staff, only to then undo it?

          None of that makes any sense whatsoever from a strategic, corporate “planned” perspective. They are all actions of people who are reacting to things in the heat of the moment and are panicking because they don’t know how it will end.

          • db2@sopuli.xyz · 1 year ago

            Why would the Board willingly trash their reputation?

            What reputation?

            Why would they drag the former Twitch CEO through the mud and make him look weak and powerless?

            Why would they care about that?

            Why would they not warn Microsoft and risk damaging that relationship? Why would they let MS strike a tentative agreement with the OpenAI employees that upsets their own staff, only to then undo it?

            Microsoft has put their entire sack in OpenAI’s purse. They could literally do or say anything to Microsoft.

            Are you telling me you really think it’s outlandish to think the same people who push a glorified nested ‘if’ statement as AI would do what it said to do? Those people are goofy; if they thought they were being given a convoluted real-life quest by a digital DM, they’d be all about it.

          • db2@sopuli.xyz · 1 year ago

            What’s that got to do with anything? They sell a thing, they want the thing to sell more.

            • Echo Dot@feddit.uk · 1 year ago

              I think pretty much the entire world knows about ChatGPT, so clearly advertising isn’t an issue for them. Firing your CEO is not really a good look unless you’ve got a very, very good reason, in which case you should announce it.

              • db2@sopuli.xyz · 1 year ago

                Which they didn’t because it’s fake grandstanding bullshit.

    • bionicjoey@lemmy.ca · 1 year ago

      OpenAI loves to “leak” stories about how they’ve developed an AI so good that it is scaring engineers because it makes people believe they’ve made a massive new technological breakthrough.

      • Taleya@aussie.zone · 1 year ago

        Meanwhile anyone who works tech immediately thinks “some csuite dickhead just greenlit ED-209”

    • RedditWanderer@lemmy.world · 1 year ago

      More like:

      • They get a breakthrough called Q* (Q star) which is just combining 2 things we already knew about.

      • Chief scientist dude tells the board Sam has plans for it already

      • Board says Sam is going too fast with his “breakthroughs” and fires him.

      • The original scientist who raised the flag realized his mistake and started supporting Sam, but the damage was done

      • Microsoft

      My bet is the board freaked out at how “powerful” they heard it was (which is still unfounded; from what various articles explain, Q* is not very groundbreaking) and jumped the gun. So now everyone wants them to resign, because they’ve shown they’ll take drastic action without asking, on things they don’t understand.
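
      The “2 things” aren’t named above; at the time, the name Q* was widely read as a nod to Q-learning and A* search, both textbook techniques. Assuming that reading (it was never confirmed), here is a minimal tabular Q-learning sketch, with a made-up toy environment and arbitrary hyperparameters:

      ```python
      import random

      # Toy 5-state corridor: start at state 0, reward 1 for reaching state 4.
      # ALPHA/GAMMA/EPSILON are arbitrary illustrative hyperparameters.
      N_STATES, ALPHA, GAMMA, EPSILON = 5, 0.5, 0.9, 0.1
      Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}

      for _ in range(500):
          s = 0
          while s != N_STATES - 1:
              # Epsilon-greedy choice between the two moves (left/right).
              if random.random() < EPSILON:
                  a = random.choice((-1, +1))
              else:
                  a = max((-1, +1), key=lambda act: Q[(s, act)])
              s2 = min(max(s + a, 0), N_STATES - 1)
              r = 1.0 if s2 == N_STATES - 1 else 0.0
              # The classic one-line update: nudge Q(s, a) toward
              # reward + discounted best next-state value.
              best_next = max(Q[(s2, -1)], Q[(s2, +1)])
              Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
              s = s2

      print("learned start-state value:", max(Q[(0, -1)], Q[(0, +1)]))
      ```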

    • maegul (he/they)@lemmy.ml · 1 year ago

      There’s clearly a good amount of fog around this. But something that is clearly true is that at least some OpenAI people have behaved poorly: Altman, the board, some employees, the mainstream of the employees, or maybe all of them in some way or another.

      What we know about the employees is the petition, which ~90% signed. Many were quick to point out the weird peer pressure likely surrounding it. Amongst all that, some employees raising alarm about the new AI to the board or other higher-ups is perfectly plausible. Perhaps they were also unhappy with the poorly managed Altman sacking, never signed the petition, or signed it while not really wanting Altman back that much.

      • otter@lemmy.dbzer0.com · 1 year ago

        It’s just such a relief that you’re doing your daily best to post the content we all so clearly need from this community. I’ve been meaning to thank you for your hard work.

  • Pxtl@lemmy.ca · 1 year ago

    There’s a huge discrepancy between the scary warnings about Q* calling it the lead-up to artificial superintelligence, and the actual discussion of the capabilities of Q* (it is good enough at logic to solve some math problems).

    My theory: the actual capabilities of Q* are perfectly nice and useful and unfrightening… but somebody pointed out the obvious: Q* can write code.

    Either

    1. “Q* is gonna take my job!”

    2. “As we enhance Q*, it’s going to get better at writing code… and we’ll use Q* to write our AI code. This thing might not be our hypothetical digital God, but it might make it.”

    • Sekoia@lemmy.blahaj.zone · 1 year ago

      Nah. Programming is… really hard to automate, and machine learning more so. The actual programming for it is pretty straightforward, but to make anything useful you need to get training data, clean it, and design a structure, which is much too general for an LLM.
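
      To put some code to that split, a minimal sketch assuming a scikit-learn stack; the toy dataset line stands in for what, on a real project, balloons into sourcing, labelling, and cleaning work:

      ```python
      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      # Stand-in for the hard part: on a real project this one line becomes
      # scraping, de-duplicating, labelling, and cleaning your own data.
      X, y = load_iris(return_X_y=True)

      # The "straightforward" part: split, train, evaluate.
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      print("held-out accuracy:", model.score(X_test, y_test))
      ```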

      • Corkyskog · 1 year ago

        Programming is like 10% writing code and 90% managing client expectations in my small experience.

        • Robmart@lemm.ee · 1 year ago

          Programming is 10% writing code, 80% being up at 3 in the morning wondering whY THE FUCKING CODE WON’T RUN CORRECTLY (it was a typo that you missed despite looking at it over 10 times), and 10% managing expectations

          • realharo@lemm.ee · 1 year ago

            Typos in programming aren’t really a thing, unless you’re using the shittiest tools possible.

            • Robmart@lemm.ee · 1 year ago

              Typos are very much a problem in programming. Variables can be set to the wrong value without the programmer noticing, you can call the wrong method (for example, RotateZ instead of RotateX), and with more advanced techniques such as Java/C# reflection, the IDE can’t correct you.
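
              The same failure mode exists in any language with dynamic dispatch; here is a minimal Python sketch (the Rotator class and method names are invented for illustration) where the typo lives in a string, so no IDE or compiler can flag it:

              ```python
              class Rotator:
                  def rotate_x(self, deg):
                      print(f"rotating {deg} degrees about X")

                  def rotate_z(self, deg):
                      print(f"rotating {deg} degrees about Z")

              r = Rotator()
              r.rotate_z(90)  # direct call: an IDE can catch a typo here

              # Reflection-style dispatch: the method is picked by a string,
              # so tooling can't tell "rotate_x" was a typo for "rotate_z".
              method_name = "rotate_x"     # oops: meant "rotate_z"
              getattr(r, method_name)(90)  # runs fine, silently wrong axis
              ```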

        • realharo@lemm.ee · 1 year ago

          But a lot of the crap you have to do only exists because projects are large enough to require multiple separate teams, so you get all the overhead of communication between the teams, etc.

          If the task gets simple enough that a single person can manage it, a lot of the coordination overhead will disappear too.

          In the end, though, people may find that the entire product they are trying to develop using automation is no longer relevant anyway.

  • archomrade [he/him]@midwest.social · 1 year ago

    The sensationalized headline aside, I wish people would stop being so dismissive about reports of advancement here. Nobody but those at the fringes are freaking out about sentience, and there are plenty of domains where small improvements in the models can fuck up big parts of our defense/privacy/infrastructure if they end up being successful. It really doesn’t matter if computers have subjective experience, if that computer is able to decrypt AES-192 or identify keystrokes from an audio recording.

    We need to be talking about what happens after AI becomes competent at even a handful of tasks, and it really doesn’t inspire confidence if every bit of news is received with a “LOL computers aren’t conscious GTFO”.

    • TheHarpyEagle@lemmy.world · 1 year ago

      That’s why I hate when people retort “GPT isn’t even that smart, it’s just an LLM.” Like yeah, the machines being malevolent is not what I’m worried about, it’s the incompetent and malicious humans behind them. Everything from scam mail to propaganda to law enforcement is testing the water with these “not so smart” models and getting incredible (for them) results. Misinformation is going to be an even bigger problem when it’s so hard to know what to believe.

      • exocortex@discuss.tchncs.de · 1 year ago

        Also: “what are people’s minds, really?” The fact that we can’t really categorize our own minds doesn’t mean we’re forever superior to any categorized AI model. The mere fact that the current bleeding edge is called an LLM doesn’t mean it can’t fuck with us, especially an even more powerful one in the future.

  • viking@infosec.pub · 1 year ago

    Allegedly. And no proof was presented. The letter cited was nowhere to be found.

  • linearchaos@lemmy.world · 1 year ago

    Pure propaganda. The only safety fears anyone in the industry is going to have is if a model is telling people to kill themselves or each other. But by saying that, the uneducated public is going to assume it’s Skynet.

    • Lemminary@lemmy.world · 1 year ago

      Why must it always be propaganda in the Fediverse? Why can’t it be a more sensible take like sensationalization? Not everything is out to get you; sometimes a desperate news site just wants a click or a reader.

      • linearchaos@lemmy.world · 1 year ago

        Sensationalization implies that it actually happened and the media turned it into misunderstood clickbait.

        If the company designed the PR stunt and executed the PR stunt that would be propaganda.

        • Lemminary@lemmy.world · 1 year ago

          There’s literally no proof for the latter and the former is a lot more reasonable. I don’t understand this need to jump to conclusions and call everything propaganda like it’s a trump card.

          • linearchaos@lemmy.world · 1 year ago

            If you want to fan-boy the company, that’s your own choice.

            The odds that they constructed the 97th “AI safety story” for the press, versus the developers actually “being scared” of the LLM, are very, very high.

            No reasonable developer of the product has any safety worry beyond hallucinations telling people to do immoral things. The only reason anyone says “safety” around LLMs is to generate an alarmist news story for the press.

            • Lemminary@lemmy.world · 1 year ago

              Who is “fan boying” the company? Can you explain that and how I did that, exactly? And please quote me.

              The real story here is that the model acquired an ability that impressed a lot of people. The press ran with it and fabricated panic for views. Textbook sensationalism. How is that too hard to understand?

              The only reason anyone says “safety” around llm is to generate an alarmist news story for press.

              That’s literally what I’m saying, lol. How we jump from that to propaganda is my question.

              • linearchaos@lemmy.world · 1 year ago

                https://www.latimes.com/business/story/2023-11-20/openai-staff-threaten-to-go-to-microsoft-if-board-doesnt-quit

                3 pieces of “information” have been released:

                1. The board fired him primarily for not being honest with them, among other things.
                2. The signed letter mentioned that most employees were upset about his departure and some were willing to follow him.
                3. The signed employee letter mentioned that the board being scared and limiting the pace of development over safety was one of the points of contention.

                From that information, you have decided that all the media outlets’ reports of #3 were falsified, but #1 and #2 are solid.

                I’m calling on Occam’s razor and saying #3 was damage control from the board rather than all the media outlets faking it.

                I’m saying you’re “fan boying” because you’re giving undue credit to the company’s actions and pushing it all onto the media, when the letter signed by 700 employees says the board was incompetent and was itself spreading the information about being scared.

                • Lemminary@lemmy.world · 1 year ago

                  And yet you missed a very important aspect of fan boying: that I don’t give two shits what happens to the company. What I’m doing, more accurately speaking, is giving the benefit of the doubt and not drawing conclusions based on nothing, which is vastly different.

                  From that information, you have decided

                  What the fuck are you on about?

                  Miss me with your antisocial shit and name-calling and all your bullshit. Get blocked, run-of-the-mill asshole.

      • Socsa · 1 year ago

        Because the political zeitgeist here is dominated by edgy teenagers who still see the world as something done to them instead of something they are doing. It’s extremely obvious if you’ve been through that phase of life already.

  • AutoTL;DR@lemmings.world (bot) · 1 year ago

    This is the best summary I could come up with:


    OpenAI was reportedly working on an advanced system before Sam Altman’s sacking that was so powerful it caused safety concerns among staff at the company.

    The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported.

    The model, called Q* – and pronounced as “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site the Information, which added that the pace of development behind the system had alarmed some safety researchers.

    The reports followed days of turmoil at San Francisco-based OpenAI, whose board sacked Altman last Friday but then reinstated him on Tuesday night after nearly all the company’s 750 staff threatened to resign if he was not brought back.

    As part of the agreement in principle for Altman’s return, OpenAI will have a new board chaired by Bret Taylor, a former co-chief executive of software company Salesforce.

    However, his brief successor as interim chief executive, Emmett Shear, wrote this week that the board “did not remove Sam over any specific disagreement on safety”.


    The original article contains 504 words, the summary contains 192 words. Saved 62%. I’m a bot and I’m open source!

    • nucleative@lemmy.world · 1 year ago

      I read that blog post about her. It all sounded pretty wild, perhaps credible and perhaps not. If she really believes what she says, she should hire a lawyer and press charges if she thinks there was a crime.

      Instead she’s making these periodic accusatory posts full of big claims without evidence, which reads more as an antagonizing/libelous effort than an attempt to achieve justice. It doesn’t rise to the standard we expect for finding guilt.