I wonder why nobody seems capable of making a LLM that knows how to do research and cite real sources.
I mean, LLMs pretty much just try to guess what to say in a way that matches their training data, while research usually means testing or measuring things in reality, looking at the data, and drawing conclusions from it. So it doesn’t seem feasible for LLMs to do research.
They may be used as part of research, but they can’t do the whole thing: a crucial part of most research is the actual data, and you’d need a LOT more than an LLM to get that.
Yup! LLMs don’t put facts together. They just look for patterns, without any concept of what they are looking at.
Have you ever tried Bing Chat? It does that. LLMs that do websearches and make use of the results are pretty common now.
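The usual wiring for this kind of search-augmented chat is roughly: run a web search, number the snippets, stuff them into the prompt, and ask the model to cite by number. A minimal sketch, with a placeholder `generate()` standing in for the actual LLM call and made-up example data:

```python
# Minimal sketch of search-augmented chat (Bing Chat-style wiring).
# The results list and generate() are placeholders -- a real system
# would call a search API and an actual LLM here.

def build_prompt(question, results):
    """Number each snippet so the model can cite it as [1], [2], ..."""
    sources = "\n".join(
        f"[{i}] {r['url']}: {r['snippet']}" for i, r in enumerate(results, 1)
    )
    return (
        "Answer using ONLY the sources below, citing them like [1].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\n"
    )

def generate(prompt):
    # Stand-in for the LLM API call.
    return "Ball Corporation had an aerospace division [1]."

results = [
    {"url": "https://example.com/ball",
     "snippet": "Ball Corporation's aerospace division history..."},
]
prompt = build_prompt(
    "Which consumer goods company had an aerospace division?", results
)
answer = generate(prompt)
```

The citations come out checkable precisely because the model is told to reference the numbered snippets it was handed, not its training data.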
Bing uses ChatGPT.
Despite using search results, it also hallucinates, like when it told me last week that IKEA had built a model of aircraft during World War 2 (uncited).
I was trying to remember the name of a well known consumer goods company that had made an aircraft and also had an aerospace division. The answer is Ball, the jar and soda can company.
I had it tell me a certain product had a feature it didn’t and then cite a website that was hosting a copy of the user manual… that didn’t mention said feature. Having it cite sources makes it way easier to double check if it’s spewing bullshit though
Yes, but it shows how an LLM can combine its own AI with information taken from web searches.
The question I’m responding to was: “I wonder why nobody seems capable of making a LLM that knows how to do research and cite real sources.”
And Bing Chat is one example of exactly that. It’s not perfect, but I wasn’t claiming it was. Only that it was an example of what the commenter was asking about.
As you pointed out, when it makes mistakes you can check them by following the citations it has provided.
Because the inherent design of modern AIs is not deterministic.
Progressively bigger models can’t fix that. We’d need an entirely new approach to AI to do that.
Bigger models do start to show more emergent intelligent properties, and components are being added to LLMs to make them more logical and robust. At least, that’s what OpenAI and others are saying about even bigger datasets.
For me the biggest indicator that we’re barking up the wrong tree is energy consumption.
Compare the energy required to feed a human with that required to train and run the current “leading edge” systems.
From a software development perspective, I think machine learning is a very useful way to model unknown variables, but that’s not the same as “intelligence”.
Cohere’s command-r models are trained for exactly this type of task. The real struggle is finding a way to feed relevant sources into the model. Plenty of projects have attempted it, but few do more than pull in the first few search results.
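Doing better than “first few search results” usually means ranking candidate passages by relevance before they go into the prompt. A hedged sketch using simple keyword overlap as the relevance score; real pipelines use embedding similarity, but the selection step looks the same, and the passages here are invented examples:

```python
# Rank candidate passages by relevance to the query and keep only the best,
# rather than feeding the model whatever the search engine returned first.
# Term overlap is a deliberately crude stand-in for embedding similarity.

def score(query, passage):
    """Fraction of query words that appear in the passage."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def top_passages(query, passages, k=2):
    """Return the k passages most relevant to the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

passages = [
    "The manual covers installation and warranty terms.",
    "Ball Corporation's aerospace division builds satellite hardware.",
    "IKEA sells flat-pack furniture worldwide.",
]
best = top_passages("aerospace division satellite", passages)
```

Only the top-ranked passages are spent on the model’s limited context window, which is the whole point of the retrieval step.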