• mindbleach · 4 days ago

    Seconded. LLMs are neat, but they’re fundamentally not oracles. They belong in video games, not in fucking Google.

    They’ve served their purpose, demonstrating that data and training alone will suffice to perform the impossible. We need to move on to better questions than ‘what’s the next word?’ Text diffusion models should be better, but their metric remains ‘that looks about right,’ so their repeated adjustments will be wrong in fascinating new ways.
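    For anyone who hasn’t seen the two loops side by side, here’s a toy sketch of the contrast being drawn: an autoregressive loop that only ever asks ‘what’s the next word?’, versus a diffusion-style loop that starts from noise and keeps adjusting the whole sequence until it ‘looks about right.’ Everything here is a hypothetical stand-in (no real model, names made up), just to show the shape of each procedure.

    ```python
    import random

    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def next_word_model(context):
        # Stand-in for an autoregressive LM: scores each candidate word
        # given the context and returns the "likeliest" one (random here).
        random.seed(hash(tuple(context)) % (2**32))
        return max(VOCAB, key=lambda w: random.random())

    def autoregressive_generate(prompt, length=5):
        # "What's the next word?" asked over and over, one token at a time.
        words = list(prompt)
        for _ in range(length):
            words.append(next_word_model(words))
        return words

    def denoise_step(draft):
        # Stand-in for one text-diffusion step: nudges the whole draft
        # toward something that "looks about right" (random swap here).
        draft = list(draft)
        i = random.randrange(len(draft))
        draft[i] = random.choice(VOCAB)
        return draft

    def diffusion_generate(length=5, steps=10):
        # Start from noise, then repeatedly adjust the entire sequence at once.
        draft = [random.choice(VOCAB) for _ in range(length)]
        for _ in range(steps):
            draft = denoise_step(draft)
        return draft

    if __name__ == "__main__":
        print("autoregressive:", " ".join(autoregressive_generate(["the"])))
        print("diffusion-ish: ", " ".join(diffusion_generate()))
    ```

    The point of the sketch: the left loop is judged one token at a time, the right loop is judged on whether the whole draft looks plausible after each pass, which is exactly the ‘that looks about right’ metric.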