The Dawn of Silicon Minds: A Complete History of the First Artificial Intelligence

 Introduction: The Moment the Machine "Woke Up"


Look around you. Artificial Intelligence is no longer a dream of the future; it is the fabric of our present. It writes our emails, generates our art, and predicts our behaviors. But every revolution has a Day Zero. Long before Silicon Valley and long before the internet, there was a moment when a machine, for the very first time, did something that wasn’t just math. It was logic. It was a choice.

To find the true origin of the first AI, we have to travel back to the smoke-filled labs of the 1950s—a time when computers were massive, fragile monsters that filled entire rooms. This is the story of the first time a machine learned to think.


The Philosophical Architect — Alan Turing (1950)

In the early 1950s, the concept of a "thinking machine" was considered pure science fiction. Computers were viewed as "super-calculators"—glorified abacuses that could solve equations but possessed no agency.

However, Alan Turing, a British mathematician and WWII codebreaker, saw further. In 1950, he published a landmark paper titled "Computing Machinery and Intelligence" in the journal Mind. Turing opened with a question that would change history: "Can machines think?"

Turing argued that "thinking" was too vague to define. Instead, he proposed the Imitation Game, now famously known as the Turing Test. He suggested that if a human judge, conversing via text, could not distinguish between a human and a machine, then the machine could be said to "think" in any practical sense. While Turing provided the philosophy, he also understood that "Universal" digital computers could, in theory, simulate any logical process given enough memory—setting the stage for the first actual code.
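
To make the setup concrete, here is a minimal Python sketch of the Imitation Game's structure. The ask, identify, human_reply, and machine_reply callables are hypothetical stand-ins for the judge and the two hidden respondents; Turing specified no such interface, so treat this purely as an illustration.

```python
import random

def imitation_game(ask, identify, human_reply, machine_reply, rounds=5):
    """Sketch of Turing's Imitation Game: a judge exchanges typed questions
    with two hidden respondents, then guesses which one is the machine."""
    # Hide the machine behind a random label so the judge cannot rely on position.
    players = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        players["A"], players["B"] = players["B"], players["A"]

    transcript = []
    for _ in range(rounds):
        question = ask()                                  # judge poses a question
        for label, reply in players.items():
            transcript.append((label, question, reply(question)))

    guess = identify(transcript)                          # judge answers "A" or "B"
    actual = "A" if players["A"] is machine_reply else "B"
    return guess == actual                                # True: machine was unmasked

# Toy usage with a trivially detectable "machine" that parrots the question back.
unmasked = imitation_game(
    ask=lambda: "What did you have for breakfast?",
    identify=lambda t: next(label for label, q, a in t if a == q),
    human_reply=lambda q: "Just toast and tea.",
    machine_reply=lambda q: q,
)
print("Machine identified correctly:", unmasked)
```

Note that the test says nothing about how the machine produces its answers; it only scores whether a judge can tell the difference.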

Alan Turing (credit: NPL Archive / Science Museum)


The True First AI — Christopher Strachey’s Checkers (1951)

While the world debated Turing’s philosophy, a schoolteacher and physicist named Christopher Strachey was busy providing the code.

In 1951, Strachey gained access to the National Physical Laboratory’s Pilot ACE, a computer based on Turing’s designs. He wrote a checkers-playing program that was revolutionary for its time. It didn't just follow a set path; it used Heuristic Search to evaluate the board and make decisions.
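
Strachey's exact scoring rules aren't recorded here, but the idea of a heuristic evaluation can be shown with a hedged Python sketch: give each piece a weight and sum them, so "better" positions get higher scores. The board encoding below is an assumption for illustration, not Strachey's.

```python
def evaluate(board):
    """Illustrative heuristic score for a checkers position (not Strachey's
    actual rules). `board` maps square index -> piece code: 'm' (my man),
    'k' (my king), 'M' (opponent man), 'K' (opponent king)."""
    weights = {"m": 1, "k": 3, "M": -1, "K": -3}   # kings count for more
    return sum(weights[piece] for piece in board.values())

print(evaluate({0: "m", 5: "k", 12: "M"}))          # 1 + 3 - 1 = 3
```

Even a scoring rule this small had to compete for space with everything else held in memory.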

However, the Pilot ACE didn't have enough memory to run the full program. In 1952, Strachey traveled to the University of Manchester to meet Turing and use the Ferranti Mark 1—the world's first commercially available general-purpose electronic computer. By July 1952, the program could play a complete game of checkers at a reasonable speed.

Technical Milestone: Strachey’s code was over 20 pages long and contained 1,000 instructions, making it the longest program ever written at that time. It even used the Ferranti's screens to display a "bitboard" of the game—marking a foundational moment for both AI and computer graphics.
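
The display detail is worth pausing on: because the machine's storage tubes held bits as dots of charge, the board state in memory could quite literally be watched on a monitor tube. As a hedged illustration of the bitboard idea (not Strachey's actual encoding or memory layout), here is how the 32 playable squares of a draughts board can be packed into single 32-bit words:

```python
# Illustrative bitboard encoding: each of the 32 playable squares of a
# draughts board maps to one bit, so a whole side's pieces fit in one word.
MY_MEN  = 0b0000_0000_0000_0000_0000_1111_1111_1111   # squares 0-11 at the start
OPP_MEN = 0b1111_1111_1111_0000_0000_0000_0000_0000   # squares 20-31 at the start

def occupied(bitboard: int, square: int) -> bool:
    """True if `square` (0-31) holds a piece in this bitboard."""
    return bool(bitboard >> square & 1)

def render(bitboard: int) -> str:
    """Crude dot-pattern rendering, loosely echoing how stored bits showed
    up as dots on the Mark 1's monitor tubes."""
    rows = []
    for row in range(8):
        cells = []
        for col in range(8):
            if (row + col) % 2 == 1:                  # playable dark square
                square = row * 4 + col // 2
                cells.append("o" if occupied(bitboard, square) else ".")
            else:
                cells.append(" ")
        rows.append("".join(cells))
    return "\n".join(rows)

print(render(MY_MEN | OPP_MEN))                       # starting position as dots
```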


The Technical Anatomy of a Pioneer

Christopher Strachey's 1951 checkers program running on the Ferranti Mark 1.

To understand how Strachey achieved "First AI" status, we must understand the hardware he was wrestling with: the Ferranti Mark 1.

  • The Behemoth: In 1951 there were no high-level languages like Python; coding was done directly in machine code, usually written out in base-32 teleprinter characters. The Ferranti Mark 1 itself was a behemoth of engineering, utilizing approximately 4,000 vacuum tubes and miles of wiring.

  • The Memory Hurdle: The machine used Williams-Kilburn tubes for memory—essentially cathode-ray tubes that stored bits as dots of charge on a phosphor screen. The total memory was roughly 25,600 bits. For comparison, a single photo from a modern phone is roughly a thousand times larger than the entire memory capacity of the world's first AI computer. Strachey had to fit his entire checkers logic—board evaluation, move generation, and the heuristic search—into this tiny space.

  • The Heuristic Breakthrough: Strachey's program didn't just look at the next move. It used a "Minimax" approach: simulate a move, imagine the opponent's best response, and then pick the path that maximized its own advantage (see the sketch after this list). This look-ahead logic is still the fundamental "brain" behind competitive game-playing AI in the 21st century.
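
Here is a minimal Python sketch of that look-ahead idea, written in the spirit of minimax rather than as a reconstruction of Strachey's code. The evaluate, legal_moves, and apply_move callbacks are assumed placeholders for the game-specific rules, not anything documented.

```python
def minimax(board, depth, maximizing, evaluate, legal_moves, apply_move):
    """Two-player minimax look-ahead (illustrative sketch).
    Returns (best_score, best_move) from the maximizing player's view."""
    moves = legal_moves(board, maximizing)
    if depth == 0 or not moves:
        return evaluate(board), None              # static heuristic score at the leaf

    best_move = None
    if maximizing:
        best_score = float("-inf")
        for move in moves:
            score, _ = minimax(apply_move(board, move), depth - 1,
                               False, evaluate, legal_moves, apply_move)
            if score > best_score:
                best_score, best_move = score, move
    else:
        best_score = float("inf")
        for move in moves:
            score, _ = minimax(apply_move(board, move), depth - 1,
                               True, evaluate, legal_moves, apply_move)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move
```

Real game engines add alpha-beta pruning so branches that cannot affect the result are skipped, but the basic alternation of a maximizing and a minimizing player is unchanged.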


Coining the Name — The Dartmouth Workshop (1956)

Despite Strachey’s success, the field didn't have a name. That changed in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence.

Organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, this two-month gathering in New Hampshire is known as the "Big Bang" of AI. McCarthy officially coined the term "Artificial Intelligence" to distinguish the field from "Cybernetics" or "Automata Studies".

The attendees were incredibly optimistic. They proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". They expected to solve the major problems of AI in a single summer. While they failed that timeline, they birthed the entire academic discipline we work in today.


The First Artificial Brain — Neural Networks (1943–1960)

While some researchers focused on logic (Symbolic AI), others looked at the brain.

  • 1943: Warren McCulloch and Walter Pitts published the first mathematical model of a biological neuron, showing that connected "nerve nets" could perform logical calculations.

  • 1957: Frank Rosenblatt took this further by creating the Perceptron, an artificial neuron that could learn its own weights from examples (a minimal sketch follows this list).

  • 1960: The Mark I Perceptron was demonstrated publicly. It was a custom-built hardware machine designed for image recognition. It was the great-grandfather of the "Deep Learning" that powers modern AI like Midjourney or ChatGPT.
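
As a hedged sketch of both ideas in Python: a McCulloch-Pitts unit fires when a weighted sum of binary inputs crosses a threshold, and Rosenblatt's learning rule nudges the weights toward each misclassified example. The specific weights, thresholds, and training loop below are illustrative choices, not the historical parameters.

```python
def threshold_unit(inputs, weights, threshold):
    """McCulloch-Pitts style neuron: fires (1) if the weighted sum of binary
    inputs reaches the threshold, otherwise stays silent (0)."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Logic gates built from "nerve nets", echoing the 1943 paper's point.
AND = lambda a, b: threshold_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: threshold_unit([a, b], [1, 1], threshold=1)

def train_perceptron(samples, epochs=20, lr=0.1):
    """Rosenblatt-style perceptron learning rule (illustrative sketch):
    after each mistake, shift the weights toward the correct answer."""
    dim = len(samples[0][0])
    weights, bias = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, target in samples:
            output = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = target - output                      # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy usage: learn the OR function from its four-row truth table.
weights, bias = train_perceptron([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)])
print(weights, bias)
```

The Mark I Perceptron implemented essentially this update rule in hardware, with motor-driven potentiometers standing in for the adjustable weights.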


ELIZA — The First Virtual Mind (1966)

In 1966, Joseph Weizenbaum at MIT created ELIZA, the world’s first chatbot. Using the DOCTOR script, ELIZA mimicked a Rogerian psychotherapist by reflecting the user's words back to them.
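
Under the hood, ELIZA matched the user's sentence against ranked patterns and slotted transformed fragments into canned templates. The rules below are a hedged, minimal Python imitation of that style; they are not Weizenbaum's actual DOCTOR script.

```python
import re

# Illustrative reflection-style rules in the spirit of ELIZA's DOCTOR script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*) mother(.*)", re.I), "Tell me more about your family."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(sentence.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."          # generic fallback, much as ELIZA used

print(respond("I need a holiday"))          # -> "Why do you need a holiday?"
```

Run it and the limitation is obvious: the program understands nothing, which is exactly why the emotional reactions Weizenbaum observed were so striking.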

Weizenbaum was shocked by the results. Even though users knew ELIZA was a simple pattern-matching script, they began sharing their deepest secrets with the machine. This phenomenon became known as the "ELIZA Effect"—the human tendency to attribute human-like intelligence and emotion to computer programs.

Weizenbaum later became a critic of AI, arguing that while machines can decide (calculate), they can never truly choose (exercise judgment).

The ELIZA Effect


The Dark Years — The AI Winters

The early optimism of the 1950s eventually met a wall of technical reality. Computers of the 1970s lacked the memory and speed to handle real-world complexity.

  • The First AI Winter (1974–1980): Following the UK's 1973 Lighthill Report, which criticized the field's lack of practical results, government funding was slashed.

  • The Second AI Winter (1987–1993): After a brief boom in "Expert Systems," the market for specialized AI hardware collapsed, leading to another period of skepticism.

These "winters" were caused by over-promising and under-delivering. It wasn't until the 2010s, with the explosion of "Big Data" and GPU processing power, that AI finally broke through the frost.


The Human Element — The Architects of Logic

Every "First" in our archive is driven by a human story. Let's look at some of the men who dared to dream of silicon thought.

The Visionary: Alan Turing (1912–1954) - Turing was more than a mathematician; he was a philosopher of the future. After cracking the Enigma code in WWII, he became obsessed with the "Universal Machine." His colleagues recalled him talking about "building a brain" as early as 1945. He was a man of intense quirks—he used to chain his tea mug to the radiator to prevent it from being stolen and ran marathons to relieve the stress of his world-changing thoughts.

The Coder: Christopher Strachey (1916–1975) - Strachey was a schoolteacher at Harrow when he wrote the first AI program. He was known for his "encyclopedic" mind and an ability to see the beauty in complex systems. While Turing provided the "Why," Strachey provided the "How." He famously spent a single night at the Manchester lab, working through the early hours of the morning to get the checkers program to run, fueled by nothing but coffee and logic.


The Legacy Timeline — 70+ Years of Intelligence

To give you, our readers, the benefit of our archive, here is the chronological path from Strachey's first move to the modern era:

  • 1950: Alan Turing publishes Computing Machinery and Intelligence.

  • 1951: Christopher Strachey writes the first successful AI program (Checkers) on the Pilot ACE.

  • 1952: The program is successfully ported to the Ferranti Mark 1, playing its first full game.

  • 1956: The Dartmouth Workshop officially names the field "Artificial Intelligence."

  • 1958: John McCarthy creates LISP, the programming language that would dominate AI research for 30 years.

  • 1966: ELIZA, the first chatbot, is born at MIT.

  • 1973: The Lighthill Report is published in the UK, triggering the first AI Winter.

  • 1997: IBM’s Deep Blue defeats World Chess Champion Garry Kasparov, using an evolved version of the search logic Strachey pioneered.

  • 2012: The "Deep Learning" revolution begins with AlexNet’s success in image recognition.

  • 2023-2026: Large Language Models (LLMs) like GPT-4 and beyond become household names.


Conclusion — Why the "First" Matters

Teaching Machines to Dream

From the vacuum tubes of the Ferranti Mark 1 to the billions of parameters in today’s Large Language Models (LLMs), the journey of AI has been a climb up a mountain we are still scaling.

The "First AI" wasn’t an all-knowing god; it was a checkers program that proved intelligence is a logical process rather than a biological miracle. As we stand on the edge of the next leap in technology, we look back at the pioneers like Strachey, Turing, and McCarthy—those who dared to treat a machine like a student rather than a tool.

The machines didn’t just wake up; we spent over 70 years teaching them how to dream.

70 Years of Developing AI


