No history of artificial intelligence (AI) would be complete without reference to the 1956 summer workshop at Dartmouth College that gave the field its name and inaugurated it as a discipline.
Attended by John McCarthy, Marvin Minsky and Claude Shannon, the Dartmouth workshop brought together some of the brightest minds in American computer science, who are today celebrated as the founding fathers of artificial intelligence. But one attendee, Peter M. Milner, is often overlooked.
While other attendees at the event came from a mathematical or engineering background, Milner is best known for his contributions to neuroscience, which include the co-discovery, with James Olds, of the brain stimulation reward (BSR) phenomenon in rats.
However, before he trained as a neuroscientist, Milner worked as an electrical engineer. And it was perhaps this unique interdisciplinary perspective that earned him an invitation to Dartmouth in the first place.
Even before McCarthy decided to organize the Dartmouth conference, questions of machine intelligence had attracted a diverse range of thinkers.
Alan Turing’s seminal 1950 paper “Computing Machinery and Intelligence” raised many of the questions Milner and his fellow attendees would take up six years later. However, Turing’s explorations in philosophy and logic were highly abstract, and it would take the work of applied scientists to really kickstart the AI revolution.
To that end, the Dartmouth conference was attended not only by mathematicians and logicians but also by electrical engineers, computer scientists, the political scientist Herbert Alexander Simon, the physicist Donald MacCrimmon MacKay and Milner, with his expertise in neuroscience.
Several attendees, namely Milner, Minsky, Simon and Allen Newell, also had a strong background in cognitive psychology.
Milner’s participation in the Dartmouth conference reflects the key role neuroscientific thinking played in the development of AI.
Early AI pioneers were inspired by the structure and function of the human brain, which led to the creation of models that mimic neural processing. One of the most significant developments was the concept of artificial neural networks—computational models designed to simulate the way neurons in the brain process and transmit information.
Neuroscience provided the foundational ideas for these networks, particularly through the work of Warren McCulloch and Walter Pitts, who in 1943 first proposed a model of artificial neurons that could perform logical functions, laying the groundwork for later developments in AI.
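To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold unit in Python. The weights and thresholds below are illustrative choices for this example, not values taken from the original 1943 paper: the unit simply fires when the weighted sum of its binary inputs reaches a threshold, which is enough to realize logical functions such as AND and OR.

```python
def mcp_neuron(inputs, weights, threshold):
    # A McCulloch-Pitts-style unit: fires (outputs 1) when the
    # weighted sum of its binary inputs reaches the threshold.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both inputs must be active to reach the threshold of 2.
def AND(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)

# Logical OR: a single active input clears the threshold of 1.
def OR(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=1)

# Print the full truth table for both gates.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b}  AND={AND(a, b)}  OR={OR(a, b)}")
```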
Another important contributor to the development of neural networks was Donald O. Hebb, a Canadian psychologist who attempted to explain how the function of neurons contributed to psychological processes such as learning.
In the early years of AI, Hebb’s neurophysiological account of learning and memory would prove invaluable to computer scientists attempting to mimic the workings of the brain.
Today, artificial neural networks still draw on the learning principle Hebb proposed: connections between neurons that are repeatedly active together are strengthened.
For his part, Milner discovered Hebb’s work through his wife Brenda, who was enrolled in one of Hebb’s graduate seminars at McGill.
Upon reading Hebb’s “The Organization of Behavior,” Milner was inspired to abandon his career as an electrical engineer at the Chalk River nuclear reactor and pursue studies in neuropsychology instead.
With Hebb as his supervisor, Milner expanded on the concept of “cell assembly”—the second stage of a three-part model for the neurophysiological changes that underpin learning and memory.
At the time of the Dartmouth workshop, Milner would have been toying with ideas that would eventually be published as “The Cell Assembly: Mark II,” in which he describes a parallel between Hebb’s cell assembly process and the atomic chain reactions he had observed at Chalk River.
Just as an atomic chain reaction can snowball out of control if it is not properly moderated, Milner argued that there must be mechanisms that maintain equilibrium in neural activity and prevent “uncontrolled epileptic discharge.”
His model of dynamically forming and reinforcing connections between neurons paralleled the way early AI researchers sought to design systems that could learn from experience, adapt to new information and recognize patterns.
Milner’s work also influenced the development of Hebbian learning rules, algorithms that adjust the strength of connections between artificial neurons based on their activation patterns. These principles laid the groundwork for more advanced machine-learning techniques that would be developed in the latter part of the 20th century.
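These rules are simple enough to sketch in a few lines. The Python example below is a minimal illustration rather than anything drawn from Milner’s paper: the learning rate, the input pattern and the use of Oja’s rule as the stabilized variant are assumptions made for the example. It contrasts a plain Hebbian update, whose weights grow without limit, with a stabilized variant that settles into equilibrium, echoing Milner’s argument that runaway reinforcement must be held in check.

```python
import numpy as np

def hebbian_step(w, x, y, lr=0.05):
    # Plain Hebbian update: weights grow in proportion to the
    # co-activity of input x and output y. Left unchecked, the
    # weights grow without bound.
    return w + lr * y * x

def oja_step(w, x, y, lr=0.05):
    # Oja's stabilized variant of the Hebbian rule: the extra
    # -y*w term keeps the weights bounded, the kind of
    # equilibrium-preserving mechanism Milner argued for.
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0])           # a recurring input pattern
w_hebb = w_oja = rng.normal(scale=0.1, size=3)

# Repeatedly present the same pattern and let each rule learn.
for _ in range(200):
    w_hebb = hebbian_step(w_hebb, x, w_hebb @ x)
    w_oja = oja_step(w_oja, x, w_oja @ x)

print(np.linalg.norm(w_hebb))  # explodes: runaway reinforcement
print(np.linalg.norm(w_oja))   # settles near 1: bounded learning
```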
To be sure, modern neuroscience has moved beyond Hebb and Milner’s electro-centric model of neural activity, placing far greater emphasis on chemical synaptic transmission.
Nonetheless, their theories formed an important foundation for artificial neural networks, one that remains relevant today as AI researchers continue to model human perception computationally.