“The Year of AI” Declared 43 Years Ago: Reflecting on the Impact of AI in 2023
By [Your Name]
With the rapid advances and cultural impact of artificial intelligence (AI) this year, it is tempting to proclaim 2023 “The Year of AI.” Yet, as one academic journal notes, that designation was actually made 43 years ago, in 1980. AI has been part of our lives for a very long time.
In fact, decades ago I wrote an academic thesis on AI ethics, published an article in Computer Design Magazine in 1986 titled “Artificial Intelligence as a Systems Component,” and introduced two AI-based products for the Mac in 1988. Even then, AI was already over 30 years old. Its roots trace back to Professor John McCarthy of Dartmouth, MIT, and later Stanford, who coined the term “artificial intelligence” in a 1955 research proposal, invented the programming language LISP in 1958, and founded the Stanford AI Lab (SAIL) in 1963. Thus, by 2023, AI had been around for at least 68 years.
Moreover, the exploration of AI ethics began even earlier, with Isaac Asimov contemplating the subject in 1940 through his speculative fiction. Despite this extensive history, it is hard to argue against calling 2023 the Year of AI, considering the significant developments and breakthroughs we have witnessed. AI has been utilized in various fields such as expert systems, diagnostic tools, video games, navigation systems, and more for many years.
However, this year saw a unique advance in the field of generative AI. Earlier years could lay claim to the “Year of AI” title, but there is no doubt that 2023 is the “Year of Generative AI.” The key difference lies in how we now train AIs. In the past, most training was supervised: AI designers fed specific, curated information to the system, limiting its knowledge and capabilities. Now we are in the era of large language models (LLMs), built on unsupervised (more precisely, self-supervised) pre-training over raw text.
Instead of feeding limited domain-specific information, AI vendors like OpenAI have been providing the AIs with vast amounts of data, including the entire internet and other digital content. This approach allows AIs to generate a wide range of material with unprecedented breadth. This progress has been facilitated by significant improvements in processor performance, storage capacity, and the availability of cloud computing and broadband connectivity.
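The contrast with supervised training can be sketched with a toy example. A real LLM uses a deep neural network trained on billions of documents; this tiny bigram counter (every name and the corpus below are illustrative, not anything OpenAI uses) only shows the self-supervised objective: the raw text itself supplies the “labels,” because each token is trained to predict the one that follows it.

```python
# Toy illustration of self-supervised pre-training: no human-labeled
# data, just raw text predicting its own next token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate"
tokens = corpus.split()

# For each token, count which tokens were seen following it.
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Predict the most frequent continuation seen in training."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": seen after "the" twice, "mat" once
```

Scale that idea up by many orders of magnitude, swap the counter for a transformer, and you have the essence of why no hand-curated knowledge base is needed.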
To illustrate this transformation, let’s compare it to one of the products I introduced many years ago: House Plant Clinic, an expert system trained in horticultural knowledge. The development process involved painstaking interviews with a plant expert, encoding rules, facts, and best practices into the system’s knowledge base. The system only possessed the knowledge we had encoded, and it worked well within those boundaries.
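A minimal sketch of that kind of rule-based expert system looks like the following. The rules here are illustrative stand-ins, not House Plant Clinic’s actual knowledge base, but the structure is the same: conditions and advice hand-encoded from expert interviews, and nothing beyond them.

```python
# Minimal rule-based expert system sketch: each rule maps a set of
# observed symptoms to a diagnosis hand-encoded by a human expert.
RULES = [
    ({"yellow_leaves", "soggy_soil"}, "Overwatering: let the soil dry out."),
    ({"brown_leaf_tips", "dry_soil"}, "Underwatering: water more often."),
    ({"sticky_residue", "curled_leaves"}, "Likely aphids: rinse and treat."),
]

def diagnose(symptoms):
    """Return every diagnosis whose conditions are all observed."""
    observed = set(symptoms)
    return [advice for conditions, advice in RULES
            if conditions <= observed]

print(diagnose(["yellow_leaves", "soggy_soil"]))
# A symptom no rule covers yields nothing: the system knows only
# what was explicitly encoded.
print(diagnose(["glowing_leaves"]))
```

Within its encoded boundaries such a system is reliable and explainable; outside them it is simply silent, which is exactly the trade-off the article goes on to contrast with generative AI.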
Now consider ChatGPT, a generative AI model. When I asked it about a sick house plant, it walked me through step-by-step questions about soil moisture and leaf condition. When I requested an image of pests that might affect a house plant, it produced a far more sophisticated image than anything our 1988 system could have rendered. However, it failed to identify a “KRIDEFLIT,” highlighting the “truthiness” problem associated with generative AI: it answers confidently even when the underlying facts are shaky or nonexistent.
While ChatGPT can speak confidently on almost any topic, our older expert-system project had a better chance of being right: its knowledge base was created and vetted by a subject matter expert, whereas today’s chatbots synthesize answers from an extensive pool of unverified data. The generative AI we have witnessed this year holds immense potential but comes with inherent challenges.
In conclusion, while 2023 may not be the first “Year of AI,” it has undoubtedly been a groundbreaking year for generative AI. The advancements in training methods and technological capabilities have allowed AI to reach new heights of productivity and creativity. As we continue to explore the potential of AI, it is crucial to address the ethical considerations and ensure that the benefits of this technology are harnessed responsibly.