• BearOfaTime@lemm.ee
    8 months ago

    Not modest enough.

    A Modest AI proposal would be to eat the shills who are dishonestly promoting it.

  • sbv
    8 months ago

    An alternate title could be:

    LLM companies should pay creators for training data

    It sounds like they need tonnes of data to train the models, so that would have a significant effect on the business model.

    (There’s also the question of the quality of the data)

    • Grimy@lemmy.world
      8 months ago

      Training needs so much data that only a handful of companies would be able to do it, mainly Google and Microsoft.

      It wouldn’t go away; we would just have a subscription model as our only option, and as we head towards an AI-driven society, most of our economy would end up being owned by whoever can afford to pay for the data.

      None of that money would go to individuals like journalists and other creators either. All the content is owned by entities like The New York Times, Reddit, publishing houses, etc.

  • RobotToaster@mander.xyz
    8 months ago

    Journalists who join the programs (and they should be allowed to join multiple programs from multiple companies) agree to publish new, well-written articles on a regular basis, in exchange for some level of financial support. It should be abundantly clear that the AI companies have no say over the type of journalism being done, nor do they have any say in editorial beyond the ability to review the quality of the writing to make sure it’s actually useful in training new systems.

    • Who gets to decide who is a “journalist”?
    • Do you want well-written but obviously “wrong” information to be supported? To give an extreme example, David Icke writes pretty well; he’s just wrong. Personally, I don’t have an issue with a few “swivel-eyed loons” being supported by something like this, but a lot of people will, and you would need some way to stop them from becoming the majority.
    • The opposite problem is how you stop the AI companies from simply claiming that reporting they don’t like is poorly written.