Artificial intelligence (AI) isn't new. Learn the history of the technology, from early concepts to modern advancements and from Alan Turing to John McCarthy.
Artificial intelligence (AI) is not a new concept. While the innovations of large language models (LLMs) are bringing the full potential of AI directly to internet users across the world, AI has taken a long road to get there.
Understanding the history of AI can provide IT leaders with crucial context on its nature, and it can help you anticipate where the technology is heading next.
Knowing how AI has developed, how it can be leveraged now, and where its longer-term potential lies will empower you to chart a course for your AI initiatives.
Let’s consider the origins of thinking machines as a concept and how far we’ve come since then.
The idea of artificial beings with human-like intelligence has been part of mythology for centuries. Ancient myths such as the Greek tale of Talos, a giant automaton, reflect humanity's fascination with creating life-like machines.
Yet, it wasn’t until the 19th century that mathematician Charles Babbage designed the Analytical Engine, an early mechanical general-purpose computer. Over the next 150 years, the concept of computing expanded to form the foundation for the digital world we live in today.
In this light, artificial intelligence (AI) can be seen as the natural evolution of an ambition that began with those ancient myths: building machines that can think.
The first steps from analog computing to AI were taken in the 1950s.
The 1950s saw artificial intelligence (AI) first recognized as a scientific field of study rather than the realm of myth and legend. Groundbreaking ideas and foundational work from this period set the stage for future developments in AI.
Alan Turing is often considered the father of computer science, partly due to his significant contributions to the conceptualization of AI. In 1950, he published a seminal paper titled "Computing Machinery and Intelligence", in which he proposed a test of whether a machine could exhibit behavior indistinguishable from that of a human, which would become known as the Turing Test.
Alan Turing & Turing Machine, Source: pivot.digital
The Turing Test was based on the "Imitation Game", a party game in which a man and a woman sit in separate rooms and answer an interrogator's questions through typed notes, with the man trying to pass himself off as the woman. The interrogator must work out which respondent is which from the written answers alone.
Turing suggested that if a machine could take the place of one of the players and still fool the interrogator, it could be considered intelligent. This idea laid the conceptual groundwork for what we now consider to be AI.
The Turing Test has since been called into question by John Searle’s “Chinese Room” thought experiment, which he formulated in 1980.
In the Chinese Room argument, Searle proposes that a machine could follow rules for manipulating Chinese symbols and produce convincing answers to questions posed in Chinese without actually understanding the language, thereby passing the Turing Test without possessing genuine intelligence.
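To make the thought experiment concrete, here is a minimal, purely illustrative Python sketch (the phrasebook entries are invented for this example): a simple lookup table can return fluent-looking answers even though the program understands nothing.

```python
# A toy "Chinese Room": the program matches symbols to symbols from a
# hypothetical phrasebook; no understanding of Chinese is involved.
phrasebook = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thank you."
    "你叫什么名字？": "我叫小明。",      # "What is your name?" -> "My name is Xiao Ming."
}

def chinese_room(question: str) -> str:
    # Unknown questions get a canned fallback: "Sorry, I don't understand."
    return phrasebook.get(question, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # prints a fluent reply with zero comprehension
```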
Despite the debate over the validity of the test, Turing's theories still underpin our understanding of AI today, and he certainly deserves his place as a founding contributor to AI theory.
The Dartmouth Workshop, held in the summer of 1956, is often cited as the birthplace of AI as an academic discipline. Organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, the conference brought together leading researchers to discuss the potential of creating intelligent machines.
Marvin Minsky, Claude Shannon, Ray Solomonoff, and other scientists at the Dartmouth Summer Research Project on artificial intelligence (Photo: Margaret Minsky)
John McCarthy coined the term "artificial intelligence" in the proposal for the workshop. With his colleagues, he envisioned a field of academic research that would explore how to make machines use language, form abstractions and concepts, solve problems, and improve themselves.
The Dartmouth Workshop was ambitious in scope, proposing that "every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it”. This optimistic vision set the agenda for AI research and led to the development of early AI programs and significant advancements in the field.
The late 1950s and 1960s saw the creation of some of the first AI programs, which demonstrated the feasibility of machines performing tasks that required intelligence. Notable early AI programs include:
- The Logic Theorist (1956), developed by Allen Newell and Herbert Simon, which proved mathematical theorems
- The General Problem Solver (1957), also from Newell and Simon, which tackled a broad range of formalized problems through means-ends analysis
- ELIZA (1966), Joseph Weizenbaum's program that simulated a conversation with a psychotherapist through simple pattern matching
These early programs showcased the potential of AI but also highlighted the limitations of the technology at the time, including the need for more powerful computers and more sophisticated algorithms.
The 1970s marked a period of stagnation and disillusionment in artificial intelligence (AI) research, often referred to as the "first AI winter." During this time, optimism gave way to frustration as the limitations of early AI technologies became apparent.
The initial excitement surrounding AI research led to high expectations and substantial funding from both government and private sectors. However, as researchers encountered significant technical challenges, such as the limited processing power of computers and the complexity of human intelligence, progress slowed.
By the mid-1970s, funding agencies and investors began to lose confidence in AI. This decline in financial support resulted in fewer research projects and a slowdown in advancements.
Three factors in particular contributed to the AI winter of the 1970s: the limited processing power and memory of the era's computers, the combinatorial explosion that made early methods intractable beyond toy problems, and critical assessments such as the 1973 Lighthill Report in the UK, which triggered deep cuts in research funding.
These challenges underscored the need for more-advanced technology and better theoretical foundations, which would eventually come in the following decades.
The 1980s saw a revival of interest and investment in artificial intelligence (AI), largely driven by the development of “expert systems”. These systems demonstrated that AI could be practically applied to solve real-world problems, leading to renewed optimism and funding.
Expert systems are AI programs that mimic the decision-making abilities of human experts. They use a knowledge base of facts and rules to solve problems in specific domains, such as medicine, engineering, and finance.
One of the most-famous early expert systems was MYCIN, developed in the mid-1970s to diagnose bacterial infections and recommend treatments. MYCIN's success showcased the potential of expert systems to provide valuable assistance in specialized fields.
The architecture of an Expert System, source: techtarget.com
The 1980s saw a proliferation of expert systems, from DENDRAL, a chemical-analysis system begun at Stanford in the 1960s, to XCON, which configured computer systems for Digital Equipment Corporation. Systems like XCON were commercially successful and demonstrated the practical applications of AI.
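To illustrate the approach, the following is a minimal Python sketch of forward-chaining inference over a toy knowledge base; the facts and rules are invented for illustration and are not drawn from MYCIN or any real system.

```python
# A toy expert system: a knowledge base of facts plus if-then rules,
# evaluated by forward chaining until no new conclusions can be derived.
facts = {"fever", "cough"}

rules = [
    ({"fever", "cough"}, "possible_respiratory_infection"),
    ({"possible_respiratory_infection", "positive_culture"}, "bacterial_infection"),
    ({"bacterial_infection"}, "recommend_antibiotics"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        # Fire a rule when all of its conditions are known facts.
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # the derived conclusions depend on the initial facts supplied
```

Real expert systems added explanation facilities and certainty factors on top of this basic match-and-fire loop, but the core idea of separating the knowledge base from the inference engine is the same.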
In addition to expert systems, the 1980s saw the introduction of machine learning techniques that allowed computers to learn from data and improve their performance over time. Researchers like John Hopfield and David Rumelhart developed neural network approaches that could recognize patterns and make predictions based on training data.
This period also saw the popularization of backpropagation, an algorithm for training multi-layer neural networks, which significantly improved their accuracy and efficiency. Machine learning techniques like these laid the foundation for many of the AI advancements that would come in the following decades.
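As a rough illustration of how backpropagation works, the sketch below trains a tiny one-hidden-layer network with plain NumPy; the architecture, learning rate, and target function are arbitrary choices made for this example.

```python
# Backpropagation on a tiny network: forward pass, chain-rule gradients,
# then a gradient-descent update; here the network learns y = 2x.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 1))
y = 2 * X                                     # target function to learn

W1, b1 = rng.standard_normal((1, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.5, np.zeros(1)
lr = 0.1

for epoch in range(500):
    # Forward pass: linear -> tanh -> linear.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: propagate the error from the output back to the weights.
    grad_pred = 2 * (pred - y) / len(X)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T * (1 - h ** 2)  # derivative of tanh
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient descent step.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(round(float(loss), 4))  # mean squared error shrinks as training proceeds
```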
In the early 1980s, Japan launched the Fifth Generation Computer Systems project, an ambitious initiative aimed at developing computers that could perform parallel processing and utilize AI. The project received substantial funding and attracted significant international attention.
While it did not achieve all its goals, it played a crucial role in advancing AI research and fostering collaboration among researchers worldwide. The project highlighted the importance of integrating AI with advanced computing technologies and inspired similar initiatives in other countries, contributing to the global progress in AI research.
The late 1980s and early 1990s witnessed another period of reduced funding and interest in artificial intelligence (AI), known as the "second AI winter." This phase was marked by a re-evaluation of AI's potential and a shift in research priorities.
The success of expert systems in the 1980s led to another increase in expectations about AI's capabilities. However, these systems were often limited to narrow domains and required extensive manual input to build and maintain their knowledge bases.
As a result, many AI applications failed to live up to the hype, leading to disappointment among investors and the public.
Gartner Hype Cycle, source: Wikipedia
Pamela McCorduck, in her book Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, captures the essence of this period's disillusionment, noting that "hopes were high, but so were the stakes, and the technology simply wasn't ready."
The second AI winter was marked by a significant drop in investments, as the industry re-assessed the feasibility of AI technologies. Funding agencies, particularly in the United States and Europe, redirected resources to other fields perceived to have more immediate and tangible benefits.
This period forced researchers to refine their approaches and set more realistic goals, often focusing on incremental improvements rather than revolutionary breakthroughs.
The 1990s and 2000s marked a period of resurgence for artificial intelligence (AI), driven by significant advancements in computational power and data availability. This era saw the re-emergence of AI as a powerful tool across various industries, supported by new technologies and methodologies.
The resurgence of AI was driven by advances in computational power and the availability of large datasets. The development of more powerful processors, such as GPUs, enabled the handling of complex calculations necessary for AI.
Additionally, the explosion of digital data from the internet and other sources provided the raw material for training advanced AI models, while increasingly accessible computing resources allowed for the development of more sophisticated algorithms.
Researchers could now train models on vast amounts of data, improving their accuracy and robustness. The ability to process large datasets quickly and efficiently was a game-changer for AI research.
Key breakthroughs such as backpropagation and the use of deep neural networks led to significant improvements in the performance of AI systems. These advancements were demonstrated through notable milestones that showcased AI's potential.
Deep Blue vs. Garry Kasparov, Source: kasparov.com
Deep Blue vs. Garry Kasparov (1997): IBM's Deep Blue, a chess-playing computer, defeated world chess champion Garry Kasparov in a six-game match. This event highlighted the capabilities of AI in mastering complex strategic games.
AlphaGo vs. Lee Sedol (2016): Google's AlphaGo, developed by DeepMind, defeated Go champion Lee Sedol in a five-game match. Go is considered one of the most complex board games, and AlphaGo's victory demonstrated the power of deep learning and reinforcement learning.
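Chess programs like Deep Blue were built on game-tree search: a minimax search accelerated with alpha-beta pruning and guided by hand-tuned evaluation functions. The Python sketch below shows plain minimax on an invented toy game tree; it is a simplified illustration of the principle, not Deep Blue's actual implementation.

```python
# Minimax on a toy game tree: the maximizing player assumes the opponent
# will always reply with the move that is worst for them.
def minimax(node, maximizing):
    # Leaves are numeric position evaluations; internal nodes are child lists.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 toy tree: three candidate moves, each with two opponent replies.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))  # 3: the best guaranteed outcome
```

AlphaGo took a different route, combining deep neural networks with Monte Carlo tree search rather than exhaustive minimax, which is why it could handle Go's far larger search space.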
The 21st century has seen artificial intelligence (AI) integrated into various aspects of daily life, transforming industries and shaping the future. The rise of Big Data, the development of deep learning, and the introduction of generative models have driven unprecedented advancements in AI.
AI is now integrated into industries from healthcare and finance to entertainment and transportation, leveraging Big Data for improved decision-making. The explosion of data available from digital sources has enabled AI to make more-accurate predictions and offer more-personalized experiences.
Big Data has provided the fuel for AI systems, allowing them to learn from vast amounts of information. This has led to significant improvements in fields such as medical diagnostics, where AI can analyze patient data to identify patterns and predict health outcomes; and finance, where AI algorithms can detect fraudulent transactions and optimize investment strategies.
Deep learning, a subset of machine learning, has enabled significant advancements in AI, particularly in areas like image and speech recognition. This technique involves neural networks with many layers, which can learn to recognize patterns in data with high accuracy.
Deep learning has been instrumental in the development of technologies such as autonomous vehicles, where AI systems can interpret sensor data to navigate complex environments, and virtual assistants, which can understand and respond to natural language queries.
Generative models like GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and transformers have revolutionized AI by enabling the generation of realistic images, videos, and text. These models have applications in diverse fields, from art and entertainment to scientific research and data analysis.
Deep neural network, Source: techtarget.com
GANs, introduced by Ian Goodfellow in 2014, consist of two neural networks—a generator and a discriminator—that compete against each other to create realistic data. VAEs provide a probabilistic approach to data generation, allowing for the creation of new data points from learned distributions.
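As a rough sketch of that adversarial setup, the toy example below (assuming PyTorch is installed) trains a generator to mimic a one-dimensional Gaussian while a discriminator tries to tell real samples from generated ones; all layer sizes and hyperparameters are illustrative.

```python
# A minimal GAN: the generator maps noise to samples, the discriminator
# scores samples as real (1) or fake (0), and the two are trained in turns.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0       # "real" data drawn from N(4, 1.5)
    fake = generator(torch.randn(64, 8))        # generated data from random noise

    # Discriminator step: label real samples 1 and generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator output 1 for fakes.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# The mean of generated samples should drift toward the real mean of ~4.0.
print(generator(torch.randn(1000, 8)).mean().item())
```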
Transformers, such as the ones used in models like GPT-3, have transformed natural language processing by enabling the generation of coherent and contextually relevant text.
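At the heart of transformer models is scaled dot-product attention, which lets every token weigh every other token when building its representation. The NumPy sketch below shows only that core computation on toy dimensions; it omits the multi-head projections, masking, and feed-forward layers of a full transformer.

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of every query to every key
    weights = softmax(scores, axis=-1)  # each row is a probability distribution
    return weights @ V                  # weighted mix of the value vectors

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8                     # 4 tokens, 8-dimensional vectors
Q, K, V = (rng.standard_normal((seq_len, d_k)) for _ in range(3))
print(attention(Q, K, V).shape)         # (4, 8): one context vector per token
```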
As artificial intelligence (AI) becomes more pervasive, ethical considerations and governance frameworks are crucial to ensure responsible use and mitigate potential risks. This includes addressing biases in AI systems, protecting privacy, and ensuring that AI is integrated in ways that benefit people and organizations.
AI technologies, such as virtual assistants, autonomous vehicles, and personalized recommendations, are increasingly integrated into daily life, enhancing convenience and efficiency. These technologies are transforming how we interact with the world and each other.
The future of AI promises continued advancements in areas like natural language processing, robotics, and general AI, with the potential to transform society further. Innovations like quantum computing could further accelerate AI development, making it even more powerful and ubiquitous.
📚 Related: Secure AI in Enterprise Architecture and Shadow AI
- 80% of companies are leveraging generative AI
- 90% of IT experts say they need a clear view of AI use in their organizations
- 14% say they actually have the overview of AI that they need
What is a brief history of AI?
Our modern concept of Artificial Intelligence (AI) began in the 1950s with Alan Turing's proposal of the Turing Test to determine a machine's ability to exhibit human-like intelligence. The Dartmouth Conference in 1956, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, officially established AI as a field of study. Early achievements included programs like the Logic Theorist and ELIZA. The field experienced periods of decline known as "AI winters" in the 1970s and late 1980s, due to unmet expectations and limited computational power. The revival in the 1980s was driven by the development of expert systems. The modern era, from the 1990s onward, has seen tremendous growth due to advances in machine learning, deep learning, and the availability of Big Data, leading to breakthroughs like IBM's Deep Blue and Google's AlphaGo.
Who is the father of AI?
John McCarthy is often referred to as the "father of AI." He was a pivotal figure in the establishment of artificial intelligence as an academic discipline. McCarthy coined the term "artificial intelligence" and organized the Dartmouth Conference in 1956, which is considered the birth of AI as a formal field of study. His contributions laid the foundational concepts and set the agenda for future AI research.
Who first predicted AI?
The concept of artificial beings with intelligence dates back to ancient myths, but in terms of scientific prediction, Alan Turing is one of the first to propose a formal framework for AI. In his 1950 paper "Computing Machinery and Intelligence," Turing discussed the potential for machines to exhibit intelligent behavior and introduced the Turing Test as a measure of machine intelligence.
What is the history of AI class 9?
For class 9 students, the history of AI can be summarized as follows: AI is the study of creating machines that can think and learn like humans. The field started in the 1950s with pioneers like Alan Turing, who proposed tests to evaluate machine intelligence. The 1956 Dartmouth Conference officially launched AI as a field. Early programs could solve problems and mimic human conversation. AI faced challenges and slow periods called "AI winters." In the 1980s, expert systems revived interest in AI. Today, AI is used in various fields, driven by machine learning and big data.
Who is the mother of AI?
There is no single person recognized as the "mother of AI" in the same way John McCarthy is known as the "father of AI." However, Ada Lovelace is often celebrated as an early pioneer for her work on Charles Babbage's Analytical Engine, which laid foundational ideas for computer science. Ada Lovelace's contributions are crucial to the conceptual development of programmable machines, an essential aspect of AI.
Report: 2024 SAP LeanIX AI Report. Find out how 226 IT professionals working for organizations across the world deal with AI governance.