It's a very different process. Having worked on search engines before, I can tell you that the word “generate” means something different in this context. It means, in simple terms, matching your search query against a bunch of results, gathering links to those results, and then sending them to the user to be displayed.
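To make that concrete, here's a toy sketch of that kind of “generate”: look the query terms up in an index, gather the matching links, return them. The index contents and function names here are made up for illustration, not any real engine's internals.

```python
# Toy sketch: a search engine "generates" results by matching the
# query against an index and gathering links -- no text is written.
# The index below is invented purely for illustration.
index = {
    "llm": ["https://example.com/llm-intro", "https://example.com/llm-faq"],
    "search": ["https://example.com/how-search-works"],
}

def generate_results(query: str) -> list[str]:
    links = []
    for term in query.lower().split():
        for link in index.get(term, []):
            if link not in links:  # de-duplicate while keeping order
                links.append(link)
    return links

print(generate_results("llm search"))
```

The user gets back links that already existed; nothing new is authored in this step.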
This is where my understanding breaks. Why would displaying it as a summary mean the backend process is no longer a search engine?
The LLM is going over the search results, taking them as a prompt and then generating a summary of the results as an output.
The search results are generated by the good old search engine, the “AI summary” option at the top is just doing the reading for you.
And of course, if the answer isn't trivial, it's very likely generating an inaccurate or incorrect output from those inputs.
But none of that changes how the underlying search engine works. It’s just doing additional work on the same results the same search engine generates.
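The split is easy to see in code. A rough sketch of that summary step, where the already-retrieved results are just stuffed into a prompt (`call_llm` stands in for whatever model the engine actually uses; everything here is illustrative):

```python
# Sketch of the "AI summary" step: the search results become the
# prompt, and the model's output is the summary shown at the top.
# The results themselves are untouched by this step.

def build_summary_prompt(query: str, results: list[dict]) -> str:
    snippets = "\n".join(f"- {r['title']}: {r['snippet']}" for r in results)
    return (
        f"Summarize these search results for the query '{query}':\n"
        f"{snippets}"
    )

results = [
    {"title": "How search works", "snippet": "Engines match queries to an index."},
    {"title": "LLM basics", "snippet": "Models generate text from a prompt."},
]
prompt = build_summary_prompt("how do search engines work", results)
# summary = call_llm(prompt)  # hypothetical model call; the summary
#                             # is generated, the results are not
print(prompt)
```

The search engine's output is the LLM's input; swapping the model changes the summary, not the results.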
EDIT: Just to clarify, DDG also has a “chat” service that, as far as I can tell, is just a UI overlay over whatever model you select. That works the same way as all the AI chatbots you can use online or host locally, and I presume it's not what we're talking about.
I see, you’re splitting the UI from the backend as two different things, and I’m seeing them as parts of a whole.
Well, yeah, there are multiple things feeding into the results page they generate for you. Not just two. There’s the search results, there’s an algorithmic widget that shows different things (so a calculator if you input some math, a translation box if you input a translation request, a summary of Wikipedia or IMDB if you search for a movie or a performer, that type of thing). And there is a pop-up window with an LLM-generated summary of the search results now.
Those are all different pieces. Your search results for “3 divided by 7” aren’t different because they also pop up a calculator for you at the top of the page.
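A sketch of that composition, assuming a simple dispatch where a math-looking query gets a calculator widget while the organic results are built independently (all names here are illustrative, not any engine's real architecture):

```python
# Sketch: a results page assembled from independent pieces --
# the organic results plus an optional widget. The widget choice
# has no effect on the results themselves.
import re

def pick_widget(query: str):
    # A math-looking query gets a calculator widget.
    m = re.fullmatch(r"(\d+)\s*divided by\s*(\d+)", query)
    if m:
        a, b = int(m.group(1)), int(m.group(2))
        return {"type": "calculator", "value": a / b}
    return None

def build_page(query: str, search_results: list[str]) -> dict:
    return {
        "widget": pick_widget(query),   # independent add-on
        "results": search_results,      # unchanged by the widget
    }

page = build_page("3 divided by 7", ["https://example.com/fractions"])
print(page["widget"])   # the calculator widget, showing 3/7
print(page["results"])  # the same organic results either way
```

Bolting on another widget, or an LLM summary box, just adds another key to that page; the search results underneath come out the same.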
Yeah, for some reason I was thinking you were trying to say that bolting on widgets made it no longer a search engine.