Researchers describe how to tell if ChatGPT is confabulating

Finding out whether the AI is uncertain about facts or phrasing is the key.

https://arstechnica.com/ai/2024/06/researchers-describe-how-to-tell-if-chatgpt-is-confabulating/

This is not what you think. It's about how the software can tell that it's probably screwed up. This article won't help an LLM user tell if she's being bullshitted by the LLM.

Also, this reminds me of the old joke...

Q: How can you tell that a politician is lying?
A: His lips move.

The LLM guesses that its answers are bullshit when repeated answers to the same question contradict each other. So this is a breakthrough? How was this not obvious from the beginning?
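In rough pseudocode terms, the idea boils down to something like the sketch below: ask the same question a bunch of times, group answers that say the same thing, and treat a lot of disagreement as a red flag. This is only an illustration of the general idea; `ask_llm` and `answers_agree` are hypothetical callables, not anything from the paper or any real API.

```python
import math

def confabulation_score(question, ask_llm, answers_agree, n_samples=10):
    """Sketch of the 'do my own answers contradict each other?' check.

    `ask_llm(question)` is assumed to return one sampled answer string,
    and `answers_agree(a, b)` is assumed to say whether two answers mean
    the same thing. High score = the model keeps contradicting itself.
    """
    answers = [ask_llm(question) for _ in range(n_samples)]

    # Greedily group answers that agree with each other into "meaning" clusters.
    clusters = []
    for a in answers:
        for cluster in clusters:
            if answers_agree(a, cluster[0]):
                cluster.append(a)
                break
        else:
            clusters.append([a])

    # Entropy over cluster sizes: 0 if every sample says the same thing,
    # large if the samples scatter across contradictory answers.
    total = len(answers)
    return -sum((len(c) / total) * math.log(len(c) / total) for c in clusters)
```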

This means it's not a great idea to simply force the LLM to return "I don't know" ...

A better answer would be "I don't know enough, and I'm not intelligent enough to confidently give you an answer. I'm basically just glorified autocomplete."

#ai #artificial-intelligence #as #artificial-stupidity #chatgpt
