Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.
Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”
He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.
Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.
I’d say the chatbots at least give more immediately useful info. On Google I’ve got to scroll past 5-8 sponsored results, and then the next top results are AI-generated garbage anyway.
Even though I think he’s mostly right, the AI techbro game plan is obvious: position yourself as a better alternative to Google search, burn money by the barrelful to capture the market, then begin the enshittification.
In fact, enshittification has already begun; responses are comparatively expensive to generate, so the more users they onboard, the more they have to scale back the quality of those responses.
Highly doubtful. Mistral 7B just dropped and it’s almost as good as GPT-3.5, which is reportedly around 175B parameters. It’s open(ish) source and can be run locally.
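For what it’s worth, running it locally is already just a few lines. Here’s a minimal sketch using the llama-cpp-python bindings; it assumes you’ve already downloaded a quantized GGUF build of Mistral 7B, and the file name below is only a placeholder:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load a quantized Mistral 7B checkpoint (placeholder path; a Q4 quant is roughly 4-5 GB on disk)
llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

# Ask it something a search engine would bury under ads and verbiage
out = llm("Q: What is the capital of Australia? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"].strip())
```

A Q4-quantized 7B needs roughly 5-6 GB of RAM, which is why it fits on an ordinary laptop rather than needing a data center.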
This is all very much in its infancy.
What percentage of search engine users do you think are technically savvy enough and have the inclination to do this themselves?
There are some relatively friendly front ends for this type of thing already. With a little better packaging, local LLMs could hit mass-market appeal. They run pretty well on Apple devices, which are already popular, so it’s not confined to gamer rigs.
This whole thread is an excellent example of the fact that most people really suck at understanding implications or parsing potential future outcomes. You can’t just take the last year and project the changes onto the next year and call it a day.
It’s already not difficult to install an LLM. It will become much simpler. And besides, I’m talking about this tech as a bulwark to protect us from what you’re talking about.
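To put a number on the “friction”: with the Hugging Face pipeline API the whole thing is a handful of lines. A rough sketch, assuming a machine with enough memory for a 7B model in half precision (~16 GB) and one of the openly available Mistral instruct checkpoints:

```python
# pip install transformers accelerate torch
from transformers import pipeline

# Downloads the weights on first run; device_map picks GPU / Apple silicon / CPU automatically
chat = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    torch_dtype="auto",
    device_map="auto",
)

print(chat("Explain why ad-funded search results get cluttered.", max_new_tokens=120)[0]["generated_text"])
```

The friendly front ends mentioned above are essentially wrappers around calls like this, so an end user never has to see any of it.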
This thread is honestly depressing with how committed people are to their very obviously bad takes. Dunning-Kruger effect, I suppose.
You seem to be vastly underestimating the potential of even the tiniest bits of friction to stop potential users from being onboarded. Do you know what caused one of the biggest spikes in Reddit users in its history? Removing the requirement of direct registration. Eliminating five seconds of friction resulted in a massive influx of users.
Unless you reach convenience parity, you’re not going to have nearly the same level of market penetration.
For someone who bandies about the Dunning-Kruger effect, you don’t seem very self-aware. Maybe you shouldn’t be an ass or just assume everyone else here is an idiot.
You fundamentally don’t understand enough for me to even bother with this discussion. I’m not even talking about everyone individually loading their own LLMs. You don’t understand how a model a fraction of the size, which requires a fraction of the computing and energy costs to run, would temper enshittification? AI haters are so fucking annoying.
You sure enjoy making bad faith assumptions. Good luck with that.
ChatGPT is already getting worse at code commenting and programming.
The problem is that enshittification is basically a requirement in a capitalist economy.