AGI claims are a cruel deception
The more advanced the AI large language models (e.g., ChatGPT, Claude, and Grok), the more garbage (a.k.a. hallucinations) they generate!
Today’s New York Times (article gift link) reports:
“A new wave of ‘reasoning’ systems from companies like OpenAI is producing incorrect information more often. Even the companies don’t know why…
The hallucination rates of newer A.I. systems were as high as 79 percent…
[Perhaps because] we still don’t know how these models work exactly.”
That lack of understanding of the why and the how, evident in the quotes above, suggests a real problem!
Understand, most providers of these systems have declared (or strongly implied) that they are getting closer and closer to AGI (artificial GENERAL intelligence).
Unintentional irony, perhaps, or intended deception? What do you think?
Wikipedia, a source of content created by human beings acting naturally (debating or arguing in the process of developing a consensus), defines AGI as follows:
Artificial general intelligence (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence capable of performing the full spectrum of cognitively demanding tasks with proficiency comparable to, or surpassing, that of humans.
Except that they’re getting more error-prone, not less, and, as pointed out in the NY Times article:
“What the system says it is thinking is not necessarily what it is thinking.”
So maybe there’s a failure of human intelligence…or a failure of artificial intelligence…or both.
Don’t misunderstand; I’m not opposed to research on AI, or to the use of AI, but its use has to be guarded, with attention to the possible errors it might make, particularly when dealing with language.
I am opposed to anthropomorphic descriptions (e.g., saying or implying that “AI thinks like a human”) that are at best a deception. There’s nothing new in that statement. I’ve been explicitly saying it since 2012.
Go use OpenAI, Claude, Grok, and the others, but do it while applying conservative assumptions about the quality of those systems.
Act with caution. Question. And enjoy!
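To make “conservative assumptions” a little more concrete, here is a minimal sketch in Python. Everything in it is illustrative: ask_model() is a hypothetical stub standing in for whichever provider’s API you actually use (OpenAI, Claude, Grok, etc.). The point is simply to ask the same question more than once, treat agreement as “still needs verification” rather than truth, and route any disagreement to a human.

```python
# A sketch of "conservative assumptions" in practice: never accept a single
# model answer as fact. Ask more than once, compare, and send disagreement
# to a human reviewer.

def ask_model(question: str, attempt: int) -> str:
    """Hypothetical placeholder -- replace with a real call to your chosen provider."""
    raise NotImplementedError("Wire this up to the LLM client you actually use.")

def cautious_answer(question: str, tries: int = 2) -> str:
    # Collect several independent answers to the same question.
    answers = {ask_model(question, i).strip().lower() for i in range(tries)}
    if len(answers) > 1:
        # The model contradicts itself: do not use the answer without human review.
        return "DISAGREEMENT: have a human check this before relying on it."
    # Consistent answers are not proof of correctness; they only mean the claim
    # is now worth checking against a trusted source.
    return f"UNVERIFIED: {answers.pop()}"
```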
I agree that many/most AI models "hallucinate," and thus offer "wrong," misleading, or "shaded" answers and advice. The reasons for this are:
1. The training data (and data sets) that AIs utilize contain many differing opinions and factual errors.
2. Data set creators are often biased in that they are SEEKING a solution, and thus data are weighted differently.
3. Given the increasing "pollution" throughout the internet (often influenced by "bad actors"), coupled with the escalating exploitation of mis- and disinformation (e.g., for political, business, or other advantage), it's no surprise that AI agents are "error prone."
4. Humans make errors all the time for many of the prior three reasons, plus their added stupidity.
So, I would suggest that people who are using AI must understand and recognize that AI systems and models are assistants that make mistakes, NOT oracles of truth that are right all the time.
If you are looking for an infallible oracle of truth, you should not be using AI or any other unverified information source on the internet. If you can work with AI as an assistant and are willing to do critical thinking and quality checking with it, you will find it to be a very productive assistant. That is the approach I use and always recommend to friends! Hope that helps, folks.