The roots of modern Artificial Intelligence, or AI, can be traced back to the classical philosophers of Greece and their efforts to model human thinking as a system of symbols. More recently, a school of thought called “Connectionism” was developed to study the process of thinking. In 1950, Alan Turing wrote a paper suggesting how to test a “thinking” machine. He believed that if a machine could carry on a conversation by way of a teleprinter, imitating a human with no noticeable differences, the machine could be described as thinking. These ideas were taken up at a conference sponsored by Dartmouth College in 1956, which helped to spark the concept of Artificial Intelligence.

The development of AI has not been streamlined and efficient. Starting as an exciting, imaginative concept in 1956, Artificial Intelligence research saw its funding cut in the 1970s, after several reports criticized a lack of progress. Efforts to imitate the human brain, called “neural networks,” were experimented with and then dropped. The most impressive, functional programs could only handle simplistic problems, and were described as toys by the unimpressed. AI researchers had been overly optimistic in setting their goals and had made naive assumptions about the problems they would encounter. When the results they promised never materialized, it should come as no surprise that their funding was cut.

The First AI Winter

AI researchers had to deal with two very basic limitations: not enough memory, and processing speeds that would seem abysmal by today’s standards. Much like gravity research at the time, Artificial Intelligence research had its government funding cut, and interest dropped off. Unlike gravity research, however, AI research resumed in the 1980s, with the U.S. and Britain providing funding to compete with Japan’s new “fifth generation” computer project and its goal of becoming the world leader in computer technology.
The stretch of time between 1974 and 1980 has become known as “The First AI Winter.” It ended with the introduction of “Expert Systems,” which were developed and quickly adopted by competitive corporations around the world. The primary focus of AI research was now on accumulating knowledge from various experts. AI also benefited from the revival of Connectionism in the 1980s.

Expert Systems

Expert Systems represent an approach in Artificial Intelligence research that became popular throughout the 1970s. An Expert System uses the knowledge of experts to create a program. Expert Systems can answer questions and solve problems within a clearly defined arena of knowledge, using “rules” of logic. Their simple design made it reasonably easy for programs to be designed, built, and modified. Bank loan screening programs provide a good example of an Expert System from the early 1980s, but there were also medical and sales applications. Generally speaking, these simple programs became quite useful and started saving businesses large amounts of money.

The Second AI Winter

The AI field experienced another major winter from 1987 to 1993. This second slowdown in AI research coincided with early Expert System computers being seen as slow and clumsy. Desktop computers were becoming very popular and displacing the older, bulkier, much less user-friendly computer banks. Eventually, Expert Systems simply became too expensive to maintain when compared to desktops. They were difficult to update, and they could not “learn.” These were problems desktop computers did not have. At about the same time, DARPA (the Defense Advanced Research Projects Agency) concluded that AI would not be “the next wave” and redirected its funds to projects deemed more likely to produce quick results. As a consequence, funding for AI research was cut deeply in the late 1980s, creating the Second AI Winter.
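The rule-based design of Expert Systems described above can be sketched in a few lines of modern code. This is a minimal, hypothetical illustration in the spirit of the early-1980s bank loan screening programs, not any historical system’s actual logic; the rules, thresholds, and field names are all invented for the example.

```python
def screen_loan(applicant):
    """Apply simple if-then rules and return a decision with reasons.

    All rules and thresholds below are hypothetical illustrations of
    how an Expert System encodes expert knowledge as logic rules.
    """
    reasons = []

    # Rule 1: the requested payment must fit within the applicant's income.
    if applicant["monthly_payment"] > 0.4 * applicant["monthly_income"]:
        reasons.append("payment exceeds 40% of income")

    # Rule 2: a minimum credit score is required.
    if applicant["credit_score"] < 600:
        reasons.append("credit score below 600")

    # Rule 3: recent defaults are disqualifying.
    if applicant["recent_defaults"] > 0:
        reasons.append("recent loan default on record")

    decision = "approve" if not reasons else "deny"
    return decision, reasons


if __name__ == "__main__":
    applicant = {
        "monthly_income": 5000,
        "monthly_payment": 1500,
        "credit_score": 700,
        "recent_defaults": 0,
    }
    print(screen_loan(applicant))  # ('approve', [])
```

Because each rule is an independent if-then statement, adding or modifying a rule does not require touching the rest of the program, which is exactly why these systems were reasonably easy to design, build, and modify. It also shows their limitation: the rules are fixed by hand, so the system cannot “learn.”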
Conversation with Computers Becomes a Reality

In the early 1990s, Artificial Intelligence research shifted its focus to something called an intelligent agent. Intelligent agents, also sometimes called simply agents or bots, can be used for news retrieval services, online shopping, and browsing the web. With the use of Big Data programs, they have gradually evolved into personal digital assistants, or virtual assistants.

Currently, giant tech businesses such as Google, Facebook, IBM, and Microsoft are researching a number of Artificial Intelligence projects, including virtual assistants. They are all competing to create assistants such as Facebook’s M, Microsoft’s Cortana, and Apple’s Siri. The goal of Artificial Intelligence is no longer to create an intelligent machine capable of imitating human conversation over a teletype. The use of Big Data has allowed AI to take the next evolutionary step. Now the goal is to develop software capable of speaking in a natural language, such as English, and acting as your virtual assistant. These virtual assistants represent the future of AI research: they may take the form of robots providing physical help, be housed in laptops to help make business decisions, or be integrated into a business’s customer service program to answer the phone. Artificial Intelligence is still evolving and finding new uses.