Brian Eno has spent decades pushing the boundaries of music and technology, but when it comes to artificial intelligence, his biggest concern isn’t the tech — it’s who controls it.
For some reason the megacorps have got LLMs on the brain, and they’re the worst “AI” I’ve seen. There are other types of AI that are actually impressive, but the “writes a thing that looks like it might be the answer” machine is way less useful than they think it is.
Seconded. LLMs are neat - but they’re fundamentally not oracles. They belong in video games, not in fucking Google.
They’ve served their purpose demonstrating that data and training will suffice to perform the impossible. We need to move on to better questions than ‘what’s the next word?’ Text diffusion models should be better, but their metric remains ‘that looks about right,’ so their repeated adjustments will be wrong in fascinating new ways.
Most LLMs for chat, pictures, and clips are magical and amazing, for about 4 to 8 hours of fiddling. Then they lose all entertainment value.
As for practical use, the things can’t do math, so they’re useless at work. I write better emails on my own, so I can’t imagine being so lazy and socially inept that I need help writing an email asking for tech support or outlining an audit report. Sometimes the web summaries save me from clicking a result, but I usually click anyway because the things are so prone to very convincing hallucinations. So yeah, utterly useless in their current state.
I usually get some angsty reply when I say this from some techbro-AI-cultist-singularity-head who starts whinging about how it’s reshaped their entire life, but in some deep niche way that is completely irrelevant to the average working adult.
I have also talked to way too many delusional maniacs who are literally planning for the day an Artificial Super Intelligence is created and the whole world becomes like Star Trek and they personally will become wealthy and have all their needs met. They think this is going to happen within the next 5 years.
The delusional maniacs are going to be surprised when they ask the Super AI “how do we solve global warming?” and the answer is “build lots of solar, wind, and storage, and change infrastructure in cities to support walking, biking, and public transportation”.
Which is the answer they will get right before sending the AI back for “repairs.”
As we’ve already seen with Grok, several times.
They absolutely adore AI; it makes them feel in touch with the world and validated, since all it is is a validation machine. They don’t care if it’s right or accurate or even remotely neutral. They want a biased fantasy-crafting system that paints terrible pictures of Donald Trump all ripped and oiled riding on a tank, and they want the AI to say “Look what you made! What a good boy! You did SO good!”