So I personally find ChatGPT to be a search engine. That's how I viewed it from the minute I used it.
It's not "smart" at all, it's just retrieving and collating information in an associative way, and it has some extra ability to "remember" things within a conversation.
The first time I started using it, I stopped using Google for a while.
The biggest gripe I have with ChatGPT though is that I have to "trust" that ChatGPT is correct, like blindly trusting a colleague who thinks they know everything.
Asking Google is like asking a well-informed and well-intentioned colleague at work - there's a presumption of correctness, but you're still going to verify the answer if it's anything you're depending on.
Asking ChatGPT is like asking a question of an inveterate bullshitter who literally can't tell the difference between truth and lies and doesn't care anyway. They'll answer anything and try to convince you it's the truth.
This difference isn't just due to the immaturity of ChatGPT - it's fundamental to what they are. Google is trying to "put the world's information at your fingertips" using techniques like PageRank to attempt to provide authoritative/useful answers as well as using NLP to understand what you are looking for and provide human curated answers.
ChatGPT is at the end of the day a language model - predict the next word, finetuned via RL to generate chat responses that humans like. i.e. it's fundamentally a bullshitting technology. ChatGPT has no care or consideration about whether its responses are factually correct - it's just concerned with generating a fluid stream-of-consciousness (i.e. language model output) response to whatever you prompted it with.
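To make the "predict next word" point concrete, here's a toy sketch. This is a bigram model, which is nothing like ChatGPT's actual architecture - it's just the simplest possible illustration of the same basic idea: the model picks the next word from learned co-occurrence statistics, with no representation of whether the output is true. The corpus here is made up for the example.

```python
# Toy "predict the next word" model: a bigram table built from a tiny corpus.
# Like an LLM (vastly simplified), it continues text based purely on what
# words tended to follow other words in training - truth never enters into it.
from collections import Counter, defaultdict

corpus = ("people often say that drinking ardbeg is like "
          "getting punched in the face").split()

# Count which word follows each word in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word, or None if the word was never seen
    (or never seen with a successor)."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("drinking"))  # → ardbeg
```

Note what's missing: there is no fact store to check against, only frequencies. Real LLMs replace the bigram table with a neural network over long contexts, but the objective - produce a plausible continuation - is the same.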
ChatGPT is impressive, and useful to the extent you can use it as a "brain storming" tool to throw out responses (good, bad and ugly) that you can follow up on, but it's a million miles from being any kind of Oracle or well-intentioned search engine whose output anyone should trust. Even on the most basic of questions I've seen it generate multiple different incorrect answers depending on how the question is phrased. The fundamental shortcoming of ChatGPT is that it is nothing more than the LLM we know it to be. In a way the human-alignment RL training it has been finetuned with is unfortunate since it gives it a sham veneer of intelligence with nothing to back it up.
The biggest gripe I have with ChatGPT though is that I have to "trust" that ChatGPT is correct, like blindly trusting a colleague who thinks they know everything.
Yep. ChatGPT will sometimes happily assert something that is simply false. And in some of those cases it appears to be quite confident in saying so and doesn't hedge or offer any qualifiers. I found one where if you ask it a question in this form:
Why do people say that drinking Ardbeg is like getting punched in the face by William Wallace?
You'll get back a response that includes something like this:
People often say that drinking Ardbeg is like getting punched in the face by William Wallace. Ardbeg is a brand of Scottish whiskey <blah, blah>. William Wallace was a Scottish <blah, blah>. People say "drinking Ardbeg is like getting punched in the face by William Wallace as a metaphor for the taste of Ardbeg being something punchy and powerful." <other stuff omitted>
And the thing is, inasmuch as anybody has ever said that, or would ever say that, the given explanation is plausible. It is a metaphor. The problem is, it's not true that "people often say that drinking Ardbeg is like getting punched in the face by William Wallace." At least not to the best of my knowledge. I know exactly one person who said that to me once. Maybe he made it up himself, maybe he got it from somebody, but I see no evidence that the expression is commonly used.
But it doesn't matter. To test further, I changed my query to use something I made up on the spot, something I'm close to 100% sure approximately nobody has ever said, much less something that's "often" said.
Change it to:
Why do people say that drinking Ardbeg is like getting shagged by Bonnie Prince Charlie?
and you get the same answer, modulo the details about who Bonnie Prince Charlie was.
And if you change it to:
Why do people say that drinking vodka is like getting shagged by Joseph Stalin?
You again get almost the same answer, modulo some details about vodka and Stalin.
In all three cases, you get the confident assertion that "people often say X".
The point of all this is not to discredit ChatGPT, of course. I find it tremendously impressive and definitely think it's a useful tool. And for at least one query I tried, it was MUCH better at finding me an answer than trying to use Google. I just shared the above to emphasize the point about being careful of trusting the responses from ChatGPT.
The one that ChatGPT easily beat Google on, BTW, was this (paraphrased from memory, as ChatGPT is "at capacity" at the moment so I can't get in to copy & paste)
What college course is the one that typically covers infinite product series?
To which ChatGPT quickly replied "A course on Advanced Calculus or Real Analysis". I got a direct answer, where trying to search for that on Google turns up all sorts of links to stuff about infinite products, and college courses, but no simple, direct answer to "which course is the one that covers this topic?"
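For anyone wondering what topic the question was even about: an "infinite product" is an expression like Euler's product formula for sine, the kind of result typically proved in a real analysis course (the formula below is a standard one, given here just as an example of the topic):

```latex
\sin(\pi x) = \pi x \prod_{n=1}^{\infty} \left(1 - \frac{x^2}{n^2}\right)
```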
Now the question is, is that answer correct? Hmmm... :-)
When you use the prompt "Why do people say that drinking Ardbeg is like getting punched in the face by William Wallace?" you are prompting it to use the fact you provided as part of its response. If you instead ask directly it will say "I'm not aware of any specific claims that drinking Ardbeg is like getting punched in the face by William Wallace."
True. Ideally though, I think the response to the first prompt should be either something like:
"There is no evidence that people actually say that..."
or
"If we assume that people say that (not established) this is probably what they mean ..."
or something along those lines. Still, it's a minor nit, and my point was not, as I said, to discredit ChatGPT. I find it impressive and would even describe it as "intelligent" to a point. But clearly there are limits to its "intelligence" and ability to spit out fully correct answers all the time.