• PeriodicallyPedantic@lemmy.ca
    7 hours ago

    I agree, but the crux of my post is that it doesn’t have to be that way - it’s not inherent to the training and use of LLMs.

    I think your second point is what makes the first point worse: this is happening at an industrial scale, with profit as the only concern. We pay technocrats for the use of their services, and they use that money to train more models without a care for the damage it causes.

    I think a lot of the harm caused by model training could be forgiven if the models were used to better the quality of life of the masses, but they’re not; they’re mainly used to enrich technocrats and business owners at any expense.

    • raspberriesareyummy@lemmy.world
      1 hour ago

      Well, there’s nothing left to argue about. I do believe we have bigger climate killers than large computing centers, but it is a worrying trend to spend that much energy on an investment bubble built around what is essentially a somewhat advanced word predictor. However, if we could somehow get the wish.com version of Tony Stark and other evil pisswads to die out, then yes, using LLMs for some creative ideas is a possibility. Or for references to other sources that you can then check.

      However, the way those models are being trained is aimed at impressing naive people, and that’s very dangerous, because those people mistake impressively coherent sentences for understanding and are willing to talk about automating tasks upon which lives depend.