• Flying Squid@lemmy.world · 1 year ago

    Even if ChatGPT advances far beyond where it is now in terms of writing code, at the very least you’re still going to need people to go over the code as a redundancy. Who is going to trust an AI so much that they’re willing to risk it making coding errors? I think the job of, at the very least, understanding how code works will be safe for a very long time, and I don’t think ChatGPT will get that advanced for a very long time either, if ever.

    • kescusay@lemmy.world · 1 year ago

      There’s more to it than that, even. It takes a developer’s level of knowledge to even begin to tell ChatGPT to make something sensible.

      Sit an MBA down in front of a ChatGPT window and tell them to make an application. The application has to save state, it has to use the company’s OAuth login system, it has to store data in a PostgreSQL database, and it has to have granular, roles-based access control.

      Then watch the MBA struggle because they don’t understand that…

      • Saving state is going to vary depending on the front-end. Are we writing a browser application, a desktop application, or a mobile application? The MBA doesn’t know and doesn’t understand what to ask ChatGPT to do.
      • OAuth is a service running separately from the application, and it requires integration steps that the MBA doesn’t know how to do or how to ask ChatGPT to do. Even if they figure out what OAuth is, ChatGPT isn’t trained on their particular corporate flavor of it.
      • They’re actually writing two different applications, a front-end and a back-end. The back-end handles communication with the PostgreSQL services. The MBA has no idea what any of that means, let alone how to ask ChatGPT to produce the right code for separate front-end and back-end features.
      • RBAC is also probably a separate service, requiring separate integration steps. Neither the MBA nor ChatGPT will have any idea what those integration steps are. (A sketch of that glue code follows this list.)
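
      To make those last two bullets concrete, here is a minimal, hypothetical sketch (TypeScript/Express) of the glue code a developer ends up writing by hand before any feature work even starts. The introspection URL, RBAC service, role name, and route below are all invented for illustration; every company’s services differ, which is exactly what ChatGPT isn’t trained on.

        import express, { Request, Response, NextFunction } from "express";

        const app = express();

        // Hypothetical internal endpoints; every company exposes its own
        // flavor of OAuth and RBAC services.
        const OAUTH_INTROSPECT_URL = "https://auth.example.internal/oauth2/introspect";
        const RBAC_CHECK_URL = "https://rbac.example.internal/check";

        // Validate the bearer token against the separate OAuth service.
        async function requireAuth(req: Request, res: Response, next: NextFunction) {
          const token = req.headers.authorization?.replace("Bearer ", "");
          if (!token) return res.status(401).send("missing token");
          const resp = await fetch(OAUTH_INTROSPECT_URL, {
            method: "POST",
            headers: { "Content-Type": "application/x-www-form-urlencoded" },
            body: new URLSearchParams({ token }),
          });
          const info = await resp.json();
          if (!info.active) return res.status(401).send("invalid token");
          res.locals.userId = info.sub; // stash identity for the RBAC check
          next();
        }

        // Ask the separate RBAC service whether this user holds a role.
        function requireRole(role: string) {
          return async (_req: Request, res: Response, next: NextFunction) => {
            const check = await fetch(`${RBAC_CHECK_URL}?user=${res.locals.userId}&role=${role}`);
            if (!check.ok) return res.status(403).send("forbidden");
            next();
          };
        }

        // Only behind both checks does any route get to touch PostgreSQL.
        app.get("/api/reports", requireAuth, requireRole("report-viewer"), (_req, res) => {
          res.json({ todo: "query PostgreSQL here" });
        });

        app.listen(3000);

      And that’s before saving state on the front-end even enters the picture.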

      The level of knowledge and detail required to make ChatGPT produce something useful on a large scale is beyond an MBA’s skillset. They literally don’t know what they don’t know.

      I use an LLM in my job now, and it’s helpful. I can tell it to produce snippets of code for a specific purpose that I know how to describe accurately, and it’ll do it. Saves me time having to do it manually.
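
      For example, something on this order, which I can specify precisely and verify at a glance (a hypothetical sample, not output from a real session):

        // "Write me a debounce helper in TypeScript": a small, self-contained
        // request with an easily checkable result.
        function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
          let timer: ReturnType<typeof setTimeout> | undefined;
          return (...args: Parameters<T>) => {
            if (timer !== undefined) clearTimeout(timer);
            timer = setTimeout(() => fn(...args), waitMs);
          };
        }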

      But if my company ever decided it didn’t need developers anymore because ChatGPT can do it all, it would collapse inside six months, and everything would be broken due to bad pull requests from non-developers who don’t know how badly they’re fucking up. They’d have to rehire me… And I’d be asking for a lot more money to clean up after the poor MBA who’d been stuck trying to do my job.

        • kescusay@lemmy.world · 1 year ago

          You’re welcome! And it occurs to me that the fact that it took a developer to explain all of that is an object lesson in why ChatGPT won’t end software development as a career option - and believe me, I simplified it for a non-developer audience.

    • thisfro@slrpnk.net · 1 year ago

      “Who is going to trust an AI so much that they won’t risk it making coding errors?”

      Sadly, too many

        • Jeena@jemmy.jeena.net · 1 year ago

          I don’t believe it. If it’s good enough, they will ship and make money, and those who put people on it will be so slow that they’ll simply be outperformed by those who don’t.

          • Flying Squid@lemmy.world · 1 year ago

            If your code doesn’t work because you rely entirely on an AI to do it, you don’t have a business you can run unless you want to go back to paper and pencil.

            • Jeena@jemmy.jeena.net · 1 year ago

              If your code doesn’t work because you rely on humans understanding it, you don’t have a business you can run either. We’re already at the point where humans have no idea why the computer makes this or that decision, because it’s so complex, especially with all the machine learning and training data involved. Let’s not pretend it will get less complex with time.

              • Flying Squid@lemmy.world · 1 year ago

                So your argument is that people will rely entirely on AI without any redundancy, unlike now, where they have more than one human checking for these issues because humans make coding errors?

                • enkers · 1 year ago

                  I kinda agree with them. Coding is already an abstraction. The average developer has very little idea what machine code their compiler actually produces, and for the most part they don’t need to care. Feeding an AI a specification is just a higher level of abstraction.

                  For now, we’ll need people to check that the AI produces code that does what we expect, but I believe at some point we’ll mostly take it for granted that it does.

                • Jeena@jemmy.jeena.net · 1 year ago

                  My argument is that already today, no human is able to check it (and nobody does) when it comes to decision-making models, for example whether the car should go left or right around an obstacle. And over time, fewer decisions will be made by straightforward classical programming and more and more by models fed hundreds or thousands of sensor inputs.

                  • lemmyvore@feddit.nl · 1 year ago

                    Except we already have fields, like pharma manufacturing, that deal with hundreds or thousands of inputs and variables and are automated, and we still manage to fully understand the stack and fully check everything.

                    Hint: when someone tells you they “can’t” check or understand what their software is doing, it’s a scam.

                    Normally they should be told to go back and figure it out before being allowed to ship any product. If you tried this in any other industry it would be laughable. Even in software it’s outrageous: imagine getting accounting software, or even a simple file backup tool, that fails some of the time and nobody can tell you why. Yet these companies get a pass to put cars like this on the road.

    • nicetriangle@kbin.social · 1 year ago

      That’s a fuckin bleak outcome for a lot of people if the job transition goes from writing the code to just fixing whatever the AI gets wrong.

      That’s like being an artist and being told your job now is simply to fix the shitty hands Midjourney draws. And your job will only last as long as that remains a problem.

      • Flying Squid@lemmy.world · 1 year ago

        Hey, I didn’t say the future would be bright, just that it will still need people familiar with code for the foreseeable future. At least until the Earth heats up so much that the lack of potable water and the unsurvivable high temperatures destroy civilization.

    • archomrade [he/him]@midwest.social · 1 year ago

      It isn’t surprising that this is the way we conceptualize the potential impact of AI, but it’s frustrating to see it tossed around as if AI disruption is a foregone conclusion.

      AI will start re-defining the problems that code is written to solve long before we get anywhere close to GPT models replacing human workers, and that’s a big enough problem by itself.

      It used to be that before code could even be employed to solve a problem, the problem had to be understood procedurally. That’s increasingly not the case, given that ML is routinely employed to decode things previously thought too chaotic to be understood, like brain waves and image pixel data. I don’t know why we’re so sure of ourselves that machine learning is just a gimmick and poses no real threat, just because anthropomorphizing it seems silly.