• @[email protected]
    45 months ago

    These aren’t simulations that estimate results; they’re language models extrapolating from a ton of human knowledge embedded as artifacts in text. That’s not necessarily going to pick the best long-term solution.

    • @[email protected]
      25 months ago

      Language models can extrapolate but they can also reason (by extrapolating human reasoning).

      • @[email protected]
        45 months ago

        I want to be careful about how the word “reasoning” is used, because when it comes to AI there’s a lot of nuance. LLMs can recall text that contains reasoning, as an artifact of the human knowledge stored in that text. It’s a subtle distinction, but it matters for how we deploy LLMs.