• Seasoned_Greetings
    1 year ago

    That’s because Huffman doesn’t believe an ongoing community is his ticket to millions. He believes selling data to AI training companies is. They don’t need continuing users for that when they’re sitting on almost two decades of content.

    Just look at the actual actions of the admins. They’re removing mods for privatizing subs. They’re restoring erased content. They’re shadow banning comments critical of the system. They’re forcefully reverting changes to sub rules.

    They aren’t trying to get $20 million from a $500k/year company like Apollo; they’re trying to get $20 million from billion-dollar companies like Google, Microsoft, Apple, etc. that are maybe more willing to shell out that kind of money for an emerging technology. Killing third-party apps wasn’t their goal; it was just an incredibly unpopular but necessary side effect, because those apps use the same API that AI training programs do.
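
    (For reference, the 20 million figure is roughly the announced API price applied to Apollo’s reported request volume; a quick back-of-the-envelope using those widely reported numbers, not anything official:)

    ```python
    # Rough sketch of the widely reported Apollo math. The request volume and
    # per-call price are approximations cited publicly by Apollo's developer.
    monthly_requests = 7_000_000_000      # ~7 billion API calls per month
    price_per_1000_calls = 0.24           # announced price in USD per 1,000 calls

    monthly_cost = monthly_requests / 1000 * price_per_1000_calls
    yearly_cost = monthly_cost * 12
    print(f"~${monthly_cost / 1e6:.2f}M per month, ~${yearly_cost / 1e6:.0f}M per year")
    # -> ~$1.68M per month, ~$20M per year
    ```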

    • NevermindNoMind
      1 year ago

      The problem with this theory is that they could have done two-tiered pricing. Reddit could have charged TPA developers one price and the LLM trainers a much higher price for API access. In fact, I believe that is exactly what Reddit is doing; they just haven’t been public about what they’re trying to charge the LLM companies. The Verge asked Spez whether the LLM folks are biting and what that price would be, and he just responded that they are “in talks.”
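
      To be concrete, that kind of split is trivial to express; here is a minimal sketch with made-up tier names and prices (not Reddit’s actual numbers):

      ```python
      # Hypothetical two-tiered API pricing: third-party apps pay one rate,
      # bulk/LLM-training customers pay a much higher one. All numbers invented.
      PRICE_PER_1000_CALLS = {
          "third_party_app": 0.24,   # consumer app tier
          "llm_training": 24.00,     # bulk data / model-training tier
      }

      def monthly_bill(client_tier: str, monthly_calls: int) -> float:
          """Return the monthly charge in USD for a given client tier."""
          return monthly_calls / 1000 * PRICE_PER_1000_CALLS[client_tier]

      print(monthly_bill("third_party_app", 1_000_000_000))  # 240000.0
      print(monthly_bill("llm_training", 1_000_000_000))     # 24000000.0
      ```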

      If Reddit didn’t want to kill TPAs, they also could have given them a year or so to figure out their business models, rather than the 30 days they were given. Hell, Reddit could have backed down at any point and extended the time period for implementing charges.

      If Spez thinks he’s going to make money off LLMs, I think he’s delusional. The OpenAIs, Googles, and Metas out there have already used the Reddit data to train their models. That ship has sailed. The focus in the LLM world now is making better models, more compact models, refining their answers, making them more accurate, etc. The days of throwing vast amounts of random data at these models are probably over. For GPT-5, OpenAI is probably not looking to spend $50 million on new Reddit comments. Instead they’ll spend that money hiring experts to revise GPT-4’s outputs and use those as training data.

      • Icalasari
        1 year ago

        Plus, scraping exists. No need to pay for API access when they can just scrape what’s publicly available.
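
        (As a minimal sketch of what that can look like: Reddit serves public listings as JSON if you append .json to the URL, so no official API client is needed. The subreddit and parameters below are just examples, and real scraping is still subject to rate limits and the site’s terms.)

        ```python
        # Minimal sketch: fetch a public listing without the official API.
        # Subreddit and parameters are illustrative; respect rate limits and terms.
        import requests

        resp = requests.get(
            "https://www.reddit.com/r/programming/top.json",
            params={"limit": 5, "t": "week"},
            headers={"User-Agent": "example-scraper/0.1"},
        )
        resp.raise_for_status()

        for child in resp.json()["data"]["children"]:
            post = child["data"]
            print(post["score"], post["title"])
        ```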

        • @[email protected]
          1 year ago

          Plus, storage exists. No need to pay for API access, or even to scrape again, if you’ve already scraped everything to your own storage once.
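
          (In the same hypothetical vein, a one-time scrape can be cached to disk and reused instead of hitting the site or the API again; the paths and names below are made up:)

          ```python
          # Minimal sketch: scrape once, keep a local copy, and read from disk on
          # later runs instead of re-fetching. Paths and names are illustrative.
          import json
          from pathlib import Path

          import requests

          CACHE = Path("reddit_cache/programming_top.json")

          def get_listing() -> dict:
              if CACHE.exists():              # already scraped once: reuse local copy
                  return json.loads(CACHE.read_text())
              resp = requests.get(
                  "https://www.reddit.com/r/programming/top.json",
                  headers={"User-Agent": "example-scraper/0.1"},
              )
              resp.raise_for_status()
              CACHE.parent.mkdir(parents=True, exist_ok=True)
              CACHE.write_text(resp.text)     # persist the raw listing locally
              return resp.json()

          print(len(get_listing()["data"]["children"]))
          ```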

      • Sens
        1 year ago

        Personally, I don’t think it’s all over. The AIs will still need to know about any future changes to coding languages and similar things where parts get deprecated, etc.

        The English-language articulation part is over, though.