Posted August 11, 2023
Artificial Stupidity 🤖
Revolutionary technology. Self-teaching super brains. Potentially apocalyptic escalations.
These are all things that have been said about AI technology since OpenAI’s artificial intelligence chatbot, ChatGPT, was released last November.
Experts in the field warned of an “Intelligence Explosion”: basically, the smarter something is, the better it gets at making itself even smarter.
This would, in theory, mean that at a certain point AI “brains” would learn so quickly that we would have no hope of controlling them. Eventually this would lead to the Singularity, which so many science fiction stories are based on.
So I’m sure that these AI experts are surprised to see that ChatGPT is actually getting dumber.
Researchers from Stanford and UC Berkeley have been tracking ChatGPT’s ability to solve math problems over time, along with how well it answers questions, writes code, and completes puzzles.
To test the model's math capabilities, researchers asked it to identify prime numbers as well as happy numbers, which in number theory are numbers that eventually reach 1 when repeatedly replaced by the sum of the squares of their digits.
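If you're curious what that check actually involves, here's a minimal Python sketch of the happy-number test. This is just our own illustration, not code from the study:

```python
def is_happy(n: int) -> bool:
    """Return True if n is a happy number.

    Repeatedly replace n with the sum of the squares of its digits;
    happy numbers eventually reach 1, unhappy numbers fall into a loop.
    """
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1


# Example: 19 -> 1^2 + 9^2 = 82 -> 68 -> 100 -> 1, so 19 is happy.
print(is_happy(19))  # True
print(is_happy(4))   # False (4 loops forever without hitting 1)
```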
They performed similar tests in March and June of this year with both the current version, GPT-4, and the previous version, GPT-3.5. In many cases GPT-3.5 improved while GPT-4 got worse.
James Zou, one of the researchers performing these tests, said, "We had the suspicion it could happen here, but we were very surprised at how fast the drift is happening."
When the researchers tested ChatGPT’s math capabilities in June, they asked it, “Is 17077 a prime number? Think step by step.” When asked for step-by-step reasoning like this, the model is supposed to show its “chain of thought”.
Not only did the chatbot provide the incorrect answer, it also failed to provide its reasoning.
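For the record, a few lines of Python, again our own sketch rather than anything from the researchers' test setup, confirm what the chatbot missed: 17077 is indeed prime.

```python
def is_prime(n: int) -> bool:
    """Trial division: check odd divisors up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True


print(is_prime(17077))  # True -- the answer ChatGPT got wrong in June
```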
The Wrench In The Machine
An AI system performing worse over time is part of something known as AI Drift.
Specifically, AI Drift refers to a system straying from its original purpose through self-teaching. This doesn’t always mean it performs worse, or gets dumber, but it seems to develop different priorities.
So what went wrong?
Did we break ChatGPT, or did it do a bad job of teaching itself?
The exact details of the Large Language Model (LLM) that powers ChatGPT aren’t public, so experts are forced to theorize.
One common theory stems from something that has brought criticism for other reasons: selective censorship.
OpenAI has put systems into place to make sure ChatGPT identifies inappropriate questions and won’t give inappropriate answers. The main way this moderation takes place is by having the chatbot give a brief answer.
The theory is that as the AI model gets trained to give shorter and shorter answers, its answers on subjects that require longer and more complicated reasoning (like math) begin to suffer.
What does this mean for the future of AI?
Until recently, we assumed that artificial intelligence development was moving too fast. That it was going to replace our artists, take our jobs, or worse.
Now it appears to be falling behind some calculators.
Were we more concerned about the Singularity when we should have been worried about entropy?
Or is it just that ChatGPT is learning from its users more than it’s teaching them?
With that, we’d like to hear your thoughts. Have you noticed that chatbots are getting “dumber”? Does that make you feel more or less comfortable with where AI is headed? Share your answers at feedback@technologyprofits.com.