History of AI, Part 1: Origins to Early Experiments

Artificial intelligence is rooted in an old fascination with creating minds beyond our own: first imagined in myths, later explored through logic, mathematics, and machines.

What once belonged to tales became a practical question for scientists and engineers: could human reasoning be replicated in a machine?

With the advent of digital computers in the 1940s and 50s, the conversation shifted from fiction to formal research, as pioneers debated whether processes like logic, language, and learning could be made mechanical.

Early projects set high expectations, prompting ambitious predictions and major funding.

Over the decades, AI has cycled through bursts of progress, waves of skepticism, and periods of reinvention. It moved from simple game-playing programs to sophisticated systems tackling language, vision, and reasoning.

The story of AI is one of ambition, missteps, and persistence: a gradual transformation of philosophical puzzles into real-world technology that now touches almost every part of daily life.

Early Dreams of Artificial Intelligence

Long before computer science existed, cultures imagined artificial beings with intelligence. Ancient Greek legends tell of Talos, a mechanical guardian that protected Crete, and stories from Jewish folklore describe the golem, animated from clay to serve a master.

Medieval thinkers, alchemists, and inventors also engaged with the idea of creating life through art or science, mixing mysticism and early natural philosophy.

In the 19th and early 20th centuries, fictional works like Mary Shelley’s “Frankenstein” (1818) and Karel Čapek’s “R.U.R.” (1920), the play that introduced the word “robot,” used artificial life and machines to question the nature of intelligence and free will.

These early narratives reflected real hopes and anxieties about technology imitating or even surpassing human capabilities.

As mathematics and logic advanced, thinkers began speculating about mechanical reasoning.

The groundwork was laid for a shift from myth and metaphor to scientific inquiry, turning dreams of artificial minds into questions that could be tested and explored with emerging technology.

Logic, Reasoning, and the Birth of Computation

The idea that human thought could be translated into logical steps dates back to ancient philosophers in China, India, and Greece. Early logic paved the way for formal reasoning, with figures like Aristotle mapping out systems of deduction.

In the Middle Ages, scholars including Ramon Llull designed logical “machines” to mechanically combine concepts. Later, Leibniz dreamed of universal symbolic languages that could reduce disputes to calculations.

The 19th and early 20th centuries saw breakthroughs by mathematicians like George Boole and Gottlob Frege, who formalized logic into mathematical notation.

This movement reached a turning point with Kurt Gödel, Alan Turing, and Alonzo Church, whose work revealed both the limits and the possibilities of mechanical reasoning.

The Dawn of Digital Minds: Turing, Neural Networks, and Early AI Concepts

During the 1940s and 1950s, as electronic computers became reality, the question of machine intelligence moved from theory into possible practice.

Alan Turing played a central role, proposing the idea of machines that could reason and learn. In his 1950 paper “Computing Machinery and Intelligence,” Turing introduced the famous “imitation game,” later known as the Turing Test, to evaluate whether a machine’s responses could be mistaken for those of a person.

At the same time, researchers looked to biology for inspiration. Neural networks emerged from the 1943 work of Warren McCulloch and Walter Pitts, who modeled basic artificial neurons after brain cells and showed that simple networks of such units could perform logical functions.
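
To make that claim concrete, here is a minimal Python sketch of a McCulloch-Pitts-style threshold unit. The weights and thresholds are illustrative choices, not values from the 1943 paper: a unit fires (outputs 1) when the weighted sum of its binary inputs reaches its threshold, which is enough to realize gates like AND, OR, and NOT.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire (1) if the weighted sum
    of binary inputs reaches the threshold, else output 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Illustrative gates, each realized by a single threshold unit.
def AND(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # Inhibition modeled here as a negative weight (a simplification
    # of the paper's absolute inhibitory inputs).
    return mcp_neuron([a], weights=[-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND:", AND(a, b), "OR:", OR(a, b))
```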

These concepts fueled experiments in machine learning, where systems might adapt their behavior based on results.

The earliest AI projects were motivated by the hope that computation, guided by logic or inspired by the brain, might eventually produce thinking machines. This launched the field toward concrete goals and led to the first programs to demonstrate elements of reasoning.

The Dartmouth Workshop and the Foundation of AI Research

In the summer of 1956, a small group of mathematicians, scientists, and engineers gathered at Dartmouth College for a now-famous workshop that marked the formal start of artificial intelligence as a research field.

  • The term “artificial intelligence” was coined for the event by John McCarthy
  • The field gained its early goals, research agenda, and founding personalities

The event, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, aimed to explore the conjecture that every aspect of learning or any other feature of intelligence could be described so precisely that a machine could be made to simulate it.

This clear mission shaped the growth of AI and united researchers across logic, psychology, and computer science.

At the workshop, Allen Newell and Herbert Simon demonstrated the Logic Theorist, a program capable of proving theorems from Whitehead and Russell’s “Principia Mathematica.” It offered early evidence that machines could tackle problems requiring reasoning.

Foundations laid at Dartmouth guided decades of progress, combining ideas from symbolic reasoning, learning, and computation into a new scientific discipline.

Early AI Experiments: Symbolic Reasoning, Games, and Language

Following the Dartmouth Workshop, researchers quickly moved to build and test AI systems that manipulated symbols, solved puzzles, and processed language.

Early successes included Newell and Simon’s General Problem Solver, which tackled formalized puzzles through means-ends analysis: a step-by-step process of reducing the difference between the current state and the goal, echoing human-like reasoning.

At the same time, game-playing demonstrated practical applications:

  • Arthur Samuel’s checkers program
  • Dietrich Prinz’s chess prototype

Samuel’s program in particular showed that a machine could learn from experience, improving its checkers play over time, while Prinz’s 1951 program, though limited to mate-in-two problems, demonstrated that a computer could play chess at all.

In the domain of language, Joseph Weizenbaum’s ELIZA engaged in basic conversation by matching user inputs against scripted patterns and echoing back canned responses. In effect an early chatbot, it surprised many users with its apparent realism despite having no genuine understanding.
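
The mechanism behind this kind of scripted conversation can be sketched in a few lines. The rules below are invented for illustration and are not taken from Weizenbaum’s actual DOCTOR script, which was far larger and more elaborate: the program scans the input for a keyword pattern and returns a canned transformation of the user’s own words.

```python
import re

# Hypothetical rules in the spirit of ELIZA's keyword scripts.
RULES = [
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\b(?:mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your family."),
]
DEFAULT = "Please go on."

def respond(user_input):
    """Return the first matching scripted response, echoing back
    captured fragments of the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel anxious about work"))  # Why do you feel anxious about work?
print(respond("My mother calls every day"))  # Tell me more about your family.
```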

Projects like Terry Winograd’s SHRDLU handled instructions and questions about a simple blocks world, connecting language processing with object manipulation.

These efforts established core methods used in AI: search algorithms, rule-based reasoning, and basic learning.
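
As a flavor of those search methods, here is a minimal breadth-first search over a toy state space: a water-jug puzzle with a 4-liter and a 3-liter jug. The puzzle and its representation are illustrative inventions, not drawn from any specific historical program.

```python
from collections import deque

def successors(state, capacities=(4, 3)):
    """All states reachable from (a, b) by filling, emptying,
    or pouring one jug into the other."""
    a, b = state
    ca, cb = capacities
    pour_ab = min(a, cb - b)  # how much jug A can pour into jug B
    pour_ba = min(b, ca - a)
    return {
        (ca, b), (a, cb),               # fill either jug
        (0, b), (a, 0),                 # empty either jug
        (a - pour_ab, b + pour_ab),     # pour A into B
        (a + pour_ba, b - pour_ba),     # pour B into A
    }

def bfs(start, goal_amount):
    """Shortest sequence of states ending with a jug holding
    exactly goal_amount liters."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if goal_amount in path[-1]:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs((0, 0), 2))  # e.g. [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2)]
```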

Challenges and Setbacks: The First AI Winter

By the early 1970s, AI researchers faced problems their early optimism had not anticipated. Progress slowed as it became clear that systems which worked well in controlled demos struggled with real-world complexity.

Machines lacked common sense and had trouble scaling up approaches for things like language and vision. Limited computer power and memory made it hard to run anything more than small “toy” programs.

  • Criticism and official reports, like James Lighthill’s 1973 review in the UK, led to skepticism from governments.
  • In the US, agencies restricted support to projects promising near-term, practical results.
  • Many labs downsized, and researchers often avoided using the term “AI” to escape the stigma.

The combination of overpromised results and technical hurdles created a period of disillusionment later called the “AI winter.”

Still, dedicated work continued in specialized subfields, quietly setting the stage for later breakthroughs in logic, learning, and robotics.

Summary

Time | Ideas and Events | Key Figures
Ancient times – 1800s | Myths of artificial beings, early logic and automata | Talos (Greek myth), golem (folklore); Ramon Llull; Leibniz
19th – early 20th c. | Mechanical reasoning, foundation of mathematical logic | George Boole, Gottlob Frege
1930s – 1950s | Theoretical computation, electronic computers, neural nets | Alan Turing, Kurt Gödel, Alonzo Church, McCulloch & Pitts
1956 | Dartmouth Workshop: “artificial intelligence” coined | John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude Shannon
1950s – 1960s | AI as a discipline, symbolic reasoning, language programs | John McCarthy, Marvin Minsky, Claude Shannon, Allen Newell, Herbert Simon, Joseph Weizenbaum
1950s – 1970s | Game-playing AI, symbolic problem solving, chatbots | Arthur Samuel, Dietrich Prinz, Joseph Weizenbaum, Terry Winograd
1970s | First “AI Winter” – setbacks and skepticism | James Lighthill (report), AI community worldwide
