The roots of modern Artificial Intelligence, or AI, can be traced back to the classical philosophers of
Greece, and their efforts to model human thinking as a system of symbols. More recently, a school
of thought called "Connectionism" was developed to study the process of thinking. In 1950, Alan
Turing published a paper suggesting how to test a "thinking" machine. He believed that if a
machine could carry on a conversation by way of a teleprinter, imitating a human with no noticeable
differences, the machine could be described as thinking. These ideas converged at a conference
sponsored by Dartmouth College in 1956, which helped to spark the concept of Artificial Intelligence.
The development of AI has been neither smooth nor efficient. Although Artificial Intelligence began
as an exciting, imaginative concept in 1956, research funding was cut in the 1970s after several
reports criticized a lack of progress. Efforts to imitate the human brain, called "neural networks," were
experimented with, and dropped. The most impressive, functional programs were only able to
handle simplistic problems, and were described as toys by the unimpressed. AI researchers had been
overly optimistic in establishing their goals, and had made naive assumptions about the problems
they would encounter. When the results they promised failed to materialize, it came as no surprise
that their funding was cut.
The First AI Winter
AI researchers had to deal with two very basic limitations: not enough memory, and processing
speeds that would seem abysmal by today’s standards. Much like gravity research at the time,
Artificial Intelligence research had its government funding cut, and interest dropped off. However,
unlike gravity, AI research resumed in the 1980s, with the U.S. and Britain providing funding to
compete with Japan’s new “fifth generation” computer project, and their goal of becoming the world
leader in computer technology. The stretch of time between 1974 and 1980 has become known as
"The First AI Winter."
The First AI Winter ended with the introduction of “Expert Systems,” which were developed and
quickly adopted by competitive corporations all around the world. The primary focus of AI research
was now on the theme of accumulating knowledge from various experts. AI also benefited from the
revival of Connectionism in the 1980s.
Expert Systems
Expert Systems represent an approach in Artificial Intelligence research that was developed
throughout the 1970s and widely adopted in the 1980s. An Expert System captures the knowledge of
human experts in a program. It can answer questions and solve problems within a clearly defined
arena of knowledge, using "rules" of logic. This simple design made such programs reasonably easy to
design, build, and modify.
built, and modified. Bank loan screening programs provide a good example of an Expert System from
the early 1980s, but there were also medical and sales applications using Expert Systems. Generally
speaking, these simple programs became quite useful, and started saving businesses large amounts
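To make the "rules" of logic concrete, here is a minimal sketch in modern Python of what a loan-screening Expert System might look like. The rule names, thresholds, and applicant fields are illustrative assumptions, not taken from any historical system.

def screen_loan(applicant):
    # Each rule pairs a human-readable reason with an if-then test,
    # the kind of knowledge an expert would supply. (Illustrative only.)
    rules = [
        ("credit score below 600", lambda a: a["credit_score"] < 600),
        ("debt-to-income ratio above 40%", lambda a: a["debt"] / a["income"] > 0.40),
        ("employed for less than one year", lambda a: a["years_employed"] < 1),
    ]
    # Collect every rule that fires; any firing rule blocks the loan.
    reasons = [name for name, failed in rules if failed(applicant)]
    return ("deny", reasons) if reasons else ("approve", [])

decision, reasons = screen_loan(
    {"credit_score": 640, "debt": 12000, "income": 50000, "years_employed": 3}
)
print(decision, reasons)  # approve []

Because all of the knowledge lives in a hand-written rule list rather than in anything learned from data, adding or changing a rule is straightforward, but the program cannot improve on its own, a limitation that becomes important below.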
The Second AI Winter
The AI field experienced another major winter from 1987 to 1993. This second slowdown in AI
research coincided with early Expert System computers coming to be seen as slow and clumsy.
Desktop computers were becoming very popular and displacing the older, bulkier, much less user-
friendly computer banks. Eventually, Expert Systems simply became too expensive to maintain when
compared to desktops. They were difficult to update, and could not “learn.” These were problems