ChatGPT, the popular AI chatbot, has been declining in performance on several tasks, leaving experts puzzled. Although these models are often assumed to grow smarter over time as they are refined with user feedback, ChatGPT appears to be getting "progressively dumber." This phenomenon is attributed to "drift," which occurs when large language models (LLMs) deviate from their original behavior in unexpected and unpredictable ways.

To understand this drift, researchers from the University of California at Berkeley and Stanford University conducted a study comparing the performance of ChatGPT's underlying LLMs, GPT-3.5 and GPT-4, across a variety of tasks. The results showed that the March version of GPT-4 outperformed the June version on tasks such as solving math problems, answering sensitive questions, and generating code, a decline consistent with the drift phenomenon.

Despite these setbacks, there were also tasks on which both GPT-4 and GPT-3.5 improved. Users are therefore advised to keep using these models, but with caution, evaluating their performance regularly.
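The advice to keep evaluating a model over time can be made concrete with a minimal sketch: re-run a fixed test set against each model snapshot and compare accuracy. Everything below is illustrative, not from the study; the stand-in model functions and the tiny prime-checking test set (echoing one task the researchers used) are hypothetical placeholders for real API calls.

```python
def evaluate(model_fn, test_cases):
    """Return the fraction of (prompt, expected) pairs model_fn answers correctly."""
    correct = sum(1 for prompt, expected in test_cases
                  if model_fn(prompt) == expected)
    return correct / len(test_cases)

# Hypothetical fixed test set, kept identical across evaluations
# so that any change in accuracy reflects the model, not the test.
TEST_CASES = [
    ("Is 17077 prime? Answer yes or no.", "yes"),
    ("Is 17078 prime? Answer yes or no.", "no"),
]

def march_model(prompt):
    # Stand-in for an older snapshot: answers both questions correctly.
    return "yes" if "17077" in prompt else "no"

def june_model(prompt):
    # Stand-in for a drifted snapshot: answers "no" to everything.
    return "no"

march_score = evaluate(march_model, TEST_CASES)  # 1.0
june_score = evaluate(june_model, TEST_CASES)    # 0.5
```

Comparing `march_score` against `june_score` on the same frozen test set is the basic pattern behind drift detection: the test cases never change, so a drop in the score isolates a change in the model's behavior.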
Understanding the Impact of ‘AI Drift’ on ChatGPT’s Intelligence
Thomas Lyons is a renowned journalist and seasoned reviewer with an illustrious career spanning two decades in the global publishing industry. His expertise is widely sought after, making him a respected figure in publishing. As the founder of Top Rated, he has set a benchmark for authenticity and credibility in information dissemination. Driven by a profound passion for Artificial Intelligence, Thomas cuts through the noise of the AI sector, helping his readers find the most accurate, unbiased, and trusted news and reviews. As your guide in the evolving world of AI, Thomas ensures you're always informed and ahead of the curve.