You mean like the California warning labels that mark nearly everything as cancer-causing, to the point that people simply ignore them?

Which is why ChatGPT and other responsible providers include a warning label.
This is pretty much like a gun owner attempting to explain proper marksmanship training and gun handling to people who buy guns but still expect them to be magic wands that somehow just make the bad guys fall down.

By the way, one can guard against inaccurate information to a large extent through careful use of the prompt. People don't understand that the real capability of these systems is that of the ultimate macro processor or text-transformation utility: as if one could speak a word and animate all of the text-processing utilities of the UNIX system, like m4, grep, less, sed, awk, et cetera.
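To make the "text-transformation utility" point concrete, here is a minimal sketch. The prompt is written like a sed/awk one-liner: the instruction is the program, the text is the input stream. `ask_llm` is a hypothetical placeholder, not any real API; for the sketch it emulates the requested filter deterministically with `re`, which is exactly the behavior the prompt asks a model for:

```python
import re

# A prompt in the spirit of a sed/awk one-liner: the instruction is the
# "program", the text after "Input:" is the input stream.
PROMPT = (
    "Act as a text filter. For each input line, if it contains an email "
    "address, print only that address; otherwise print nothing. "
    "Do not add commentary.\n\nInput:\n{text}"
)

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: swap in a real chat-API call here. This
    # sketch emulates the transformation the prompt describes using a
    # plain regular expression.
    text = prompt.split("Input:\n", 1)[1]
    pattern = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    hits = [m.group(0) for line in text.splitlines()
            if (m := pattern.search(line))]
    return "\n".join(hits)

sample = "contact: alice@example.com\nno address here\nbob@test.org said hi"
print(ask_llm(PROMPT.format(text=sample)))
# alice@example.com
# bob@test.org
```

The design point: once you think of the prompt as the program, you can constrain the output format tightly ("print only that address; do not add commentary"), which is where much of the protection against junk output comes from.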
Also, the errors tend to arise primarily from questions posed in natural language and not checked against external data (for example, with a web search); they never relate to the output of questions processed by the built-in programming language.
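A minimal illustration of that distinction: a value recalled from training data can come out garbled, while a value actually computed by the interpreter is exact every time. Plain Python stands in here for the chat tool's built-in interpreter:

```python
# Asked "what is 2**100?" in prose, a model may hallucinate digits,
# because it is recalling rather than computing. Asked to run the
# computation, the interpreter returns the exact value deterministically.
value = 2 ** 100
print(value)  # 1267650600228229401496703205376
```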
So on the one hand I have made the point that people who cite AI as a reliable source of information in their posts are making an appeal to unqualified authority; on the other hand, AI can be a source of information as potentially reliable as any other web search, with additional processing capabilities on top.