• gravitas_deficiency · 7 months ago

    A human can only do bad or dumb things so quickly.

    A human writing code can do bad or dumb things at scale, as well as orders of magnitude more quickly.

    • dual_sport_dork 🐧🗡️@lemmy.world · 7 months ago

      And untangling that clusterfuck can be damn near impossible.

      The reaper may not present his bill immediately, but he will always present his bill eventually. This is a zero-sum thing: there are no net savings, because the work required can be front-loaded or back-loaded, and you, sitting there at the terminal in the present, might not know which. Yet.

      There are three phases where time and effort are input, and wherein asses can be bitten either preemptively or after the fact:

      1. Loading the algorithm with all the data. Where did all that data come from? In the case of LLMs, it came from an infinite number of monkeys typing on an infinite number of keyboards. That is, us. The system is front-loaded with all of this time and effort – stolen, in most cases. Add to that the time and effort spent by those developing the system and loading it with said data.
      2. At execution time. This is the classic example: the algorithm spits something patently absurd into your face. We all point and laugh, and a screenshot gets posted to Lemmy. “Look, Google says you should put glue on your pizza!” Etc.
      3. Lurking horrors. You find out about the problem later. Much later. After the piece went to print, or the code went into production. Time and effort were “saved” producing the article or writing the code. Yes, they appeared to be – then. Now it’s now. Significant expenditure must be made cleaning up the mess. Nobody actually understood the code, but now it has to be debugged. And somebody has to pay the lawyers.