Not Physical or Life Science; more like technology. In any event, with AI becoming ubiquitous, here's an easy experiment: ask an AI about something concrete but obscure that you're 100% certain of, beyond any shadow of a doubt. Nothing "gotcha" like 11 + 11 (22 decimal; 110 binary). Just something straightforward. Which AI you ask doesn't matter.
I tried this, and the results were less than impressive. The only things it got correct were the name of the event and when it happened. It named a supposed participant who wasn't there. The specific event that triggered it wasn't mentioned, nor was the aftermath. Most of the response was fluff worded to pass for something authoritative. It didn't work. It read for all the world like a grade school student trying to bluff his way through an essay question.
In picking something concrete but obscure, you're asking the AI about something for which there isn't much information online to train on. No telling how many words have been written about, oh, Washington crossing the Delaware, or calculating the value of pi, and what the AI regurgitates on those might be passable. Might. Something obscure isn't going to have many words online about it, and what the AI comes up with can be questionable. It can be questionable even when there are a lot of words written about it, because the AI has no reasoning to fact-check itself. This experiment is, in effect, a boundary-condition test of AI's accuracy.
Add to the mix topics that people debate. Add to that things some people don't consider concrete at all. And consider that AI has no reasoning to evaluate its own responses.
Now consider the posts that show up on CF that are essentially "AI says." Then consider: if AI can't be relied on when it comes to the obscure, how can we rely on it for anything where it has to imitate "general knowledge"?