Appendix I: A Short History of AI

This Appendix is based primarily on Nilsson's book[140] and written from the prevalent current perspective, which focuses on data-intensive methods and big data. Important as this focus is, it has not yet shown itself to be the solution to all problems. A complete and fully balanced history of the field is beyond the scope of this document.

The field of Artificial Intelligence (AI) was officially born and christened at a workshop organized by John McCarthy in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence. The goal was to investigate ways in which machines could be made to simulate aspects of intelligence—the essential idea that has continued to drive the field forward ever since. McCarthy is credited with the first use of the term “artificial intelligence” in the proposal he co-authored for the workshop with Marvin Minsky, Nathaniel Rochester, and Claude Shannon.[141] Many of the people who attended soon led significant projects under the banner of AI, including Arthur Samuel, Oliver Selfridge, Ray Solomonoff, Allen Newell, and Herbert Simon.

Although the Dartmouth workshop created a unified identity for the field and a dedicated research community, many of the technical ideas that have come to characterize AI existed much earlier. In the eighteenth century, Thomas Bayes provided a framework for reasoning about the probability of events.[142] In the nineteenth century, George Boole showed that logical reasoning—dating back to Aristotle—could be performed systematically in the same manner as solving a system of equations.[143] By the turn of the twentieth century, progress in the experimental sciences had led to the emergence of the field of statistics,[144] which enables inferences to be drawn rigorously from data. The idea of physically engineering a machine to execute sequences of instructions, which had captured the imagination of pioneers such as Charles Babbage, had matured by the 1950s and resulted in the construction of the first electronic computers.[145] Primitive robots, which could sense and act autonomously, had also been built by that time.[146]
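
In modern notation, the rule at the heart of Bayes' framework gives the probability of a hypothesis H after observing evidence E:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```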

The most influential ideas underpinning computer science came from Alan Turing, who proposed a formal model of computing. Turing's classic essay, “Computing Machinery and Intelligence,”[147] imagines the possibility of computers created for simulating intelligence and explores many of the ingredients now associated with AI, including how intelligence might be tested, and how machines might automatically learn. Though these ideas inspired AI, Turing did not have access to the computing resources needed to translate his ideas into action.

Several focal areas in the quest for AI emerged between the 1950s and the 1970s.[148] Newell and Simon pioneered the foray into heuristic search, an efficient procedure for finding solutions in large, combinatorial spaces. In particular, they applied this idea to construct proofs of mathematical theorems, first through their Logic Theorist program, and then through the General Problem Solver.[149] In the area of computer vision, early work in character recognition by Selfridge and colleagues[150] laid the basis for more complex applications such as face recognition.[151] By the late 1960s, work had also begun on natural language processing.[152] “Shakey”, a wheeled robot built at SRI International, launched the field of mobile robotics.[155] Samuel's Checkers-playing program, which improved itself through self-play, was one of the first working instances of a machine learning system.[153] Rosenblatt's Perceptron,[154] a computational model based on biological neurons, became the basis for the field of artificial neural networks. Feigenbaum and others made the case for building expert systems—knowledge repositories tailored for specialized domains such as chemistry and medical diagnosis.[156]
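
Rosenblatt's learning rule is compact enough to state directly. The sketch below is a modern, illustrative rendering in Python rather than the original 1957 formulation; the AND-function training data and the learning-rate and epoch settings are choices made purely for this example:

```python
def train_perceptron(examples, epochs=10, lr=1.0):
    """Rosenblatt's perceptron rule on (features, label) pairs, labels in {-1, +1}.

    Whenever an example is misclassified, nudge the weights toward (for a
    positive example) or away from (for a negative one) its feature vector.
    """
    n = len(examples[0][0])
    w = [0.0] * n  # one weight per input feature
    b = 0.0        # bias term
    for _ in range(epochs):
        for x, y in examples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation >= 0 else -1
            if prediction != y:  # misclassified: apply the update rule
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Illustrative task: learn the logical AND function, which is linearly separable.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
weights, bias = train_perceptron(data)
print(weights, bias)
```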

Early conceptual progress assumed the existence of a symbolic system that could be reasoned about and built upon. But by the 1980s, despite this promising headway made into different aspects of artificial intelligence, the field could still boast no significant practical successes. This gap between theory and practice arose in part from an insufficient emphasis within the AI community on grounding systems physically, with direct access to environmental signals and data. There was also an overemphasis on Boolean (True/False) logic, overlooking the need to quantify uncertainty. The field was forced to take cognizance of these shortcomings in the mid-1980s, when interest in AI began to drop and funding dried up. Nilsson calls this period the “AI winter.”

A much-needed resurgence in the nineties built upon the idea that “Good Old-Fashioned AI”[157] was inadequate as an end-to-end approach to building intelligent systems. Rather, intelligent systems needed to be built from the ground up, at all times solving the task at hand, albeit with different degrees of proficiency.[158] Technological progress had also made the task of building systems driven by real-world data more feasible. Cheaper and more reliable hardware for sensing and actuation made robots easier to build. Further, the Internet’s capacity for gathering large amounts of data, and the availability of computing power and storage to process that data, enabled statistical techniques that, by design, derive solutions from data. These developments have allowed AI to emerge in the past two decades as a profound influence on our daily lives, as detailed in Section II.

In summary, the following is a list of some of the traditional sub-areas of AI. As described in Section II, some of them are currently “hotter” than others for various reasons. But that is neither to minimize the historical importance of the others, nor to say that they may not re-emerge as hot areas in the future.

  • Search and Planning deal with reasoning about goal-directed behavior. Search plays a key role, for example, in chess-playing programs such as Deep Blue, in deciding which move (behavior) will ultimately lead to a win (goal); a minimal sketch of this style of game-tree search appears after this list.
  • The area of Knowledge Representation and Reasoning involves processing information (typically in large amounts) into a structured form that can be queried more reliably and efficiently. IBM's Watson program, which beat human contenders to win the Jeopardy! challenge in 2011, was largely based on an efficient scheme for organizing, indexing, and retrieving large amounts of information gathered from various sources.[159]
  • Machine Learning is a paradigm that enables systems to automatically improve their performance at a task by observing relevant data. Indeed, machine learning has been the key contributor to the AI surge in the past few decades, ranging from search and product recommendation engines, to systems for speech recognition, fraud detection, image understanding, and countless other tasks that once relied on human skill and judgment. The automation of these tasks has enabled the scaling up of services such as e-commerce.
  • As more and more intelligent systems get built, a natural question to consider is how such systems will interact with each other. The field of Multi-Agent Systems considers this question, which is becoming increasingly important in on-line marketplaces and transportation systems.
  • From its early days, AI has taken up the design and construction of systems that are embodied in the real world. The area of Robotics investigates fundamental aspects of sensing and acting—and especially their integration—that enable a robot to behave effectively. Since robots and other computer systems share the living world with human beings, the specialized subject of Human-Robot Interaction has also become prominent in recent decades.
  • Machine perception has always played a central role in AI, partly in developing robotics, but also as a completely independent area of study. The most commonly studied perception modalities are Computer Vision and Natural Language Processing, each of which is attended to by large and vibrant communities.
  • Several other focus areas within AI today are consequences of the growth of the Internet. Social Network Analysis investigates the effect of neighborhood relations in influencing the behavior of individuals and communities. Crowdsourcing is yet another innovative problem-solving technique, which relies on harnessing human intelligence (typically from thousands of humans) to solve hard computational problems.
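
To make the game-tree search mentioned in the first bullet concrete: programs like Deep Blue are built around minimax search, in practice augmented with alpha-beta pruning and handcrafted evaluation functions. The sketch below runs minimax on a toy take-away game rather than chess; the game and every function name here are inventions for illustration only:

```python
# Toy game: a pile of stones; each player removes 1 or 2 stones per turn,
# and whoever takes the last stone wins.
def moves(stones):
    return [m for m in (1, 2) if m <= stones]

def minimax(stones, maximizing):
    """Value of the position for the maximizing player: +1 a win, -1 a loss."""
    if stones == 0:
        # The previous player took the last stone, so the side to move has lost.
        return -1 if maximizing else 1
    values = [minimax(stones - m, not maximizing) for m in moves(stones)]
    return max(values) if maximizing else min(values)

def best_move(stones):
    """Pick the move whose resulting position has the best minimax value."""
    return max(moves(stones), key=lambda m: minimax(stones - m, False))

print(best_move(5))  # -> 2: leaves a pile of 3, a lost position for the opponent
```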

Although the separation of AI into sub-fields has enabled deep technical progress along several different fronts, synthesizing intelligence at any reasonable scale invariably requires many different ideas to be integrated. For example, the AlphaGo program[160][161] that recently defeated the current human champion at the game of Go used multiple machine learning algorithms for training itself, and also used a sophisticated search procedure while playing the game.

[140] Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (Cambridge, UK: Cambridge University Press, 2010).

[141] John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” August 31, 1955, accessed August 1, 2016, http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.

[142] Thomas Bayes, “An Essay towards Solving a Problem in the Doctrine of Chances,” Philosophical Transactions of the Royal Society of London 53 (January 1, 1763): 370-418, accessed August 1, 2016, http://rstl.royalsocietypublishing.org/search?fulltext=an+essay+towards+solving&submit=yes&andorexactfulltext=and&x=0&y=0.

[143] George Boole, An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities (Macmillan, 1854; reprinted with corrections, Dover Publications, New York, NY, 1958; reissued by Cambridge University Press, 2009), accessed August 1, 2016, http://ebooks.cambridge.org/ebook.jsf?bid=CBO9780511693090.

[144] “History of statistics,” Wikipedia, last modified June 3, 2016, accessed August 1, 2016, https://en.wikipedia.org/wiki/History_of_statistics.

[145] Joel N. Shurkin, Engines of the Mind: The Evolution of the Computer from Mainframes to Microprocessors (New York: W. W. Norton & Company, 1996).

[146] William Grey Walter, “An Electromechanical Animal,” Dialectica 4 (1950): 42-49.

[147] A. M. Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (1950): 433-460.

[148] Marvin Minsky, “Steps toward Artificial Intelligence,” MIT Media Laboratory, October 24, 1960, accessed August 1, 2016, http://web.media.mit.edu/~minsky/papers/steps.html.

[149] Allen Newell, John C. Shaw, and Herbert A. Simon, “Report on a general problem-solving program,” Proceedings of the International Conference on Information Processing, UNESCO, Paris, 15-20 June 1959 (Unesco/Oldenbourg/Butterworths, 1960), 256-264.

[150] O. G. Selfridge, “Pandemonium: A paradigm for learning,” Proceedings of the Symposium on Mechanization of Thought Processes (London: H. M. Stationery Office, 1959): 511-531.

[151] Woodrow W. Bledsoe and Helen Chan, “A Man-Machine Facial Recognition System: Some Preliminary Results,” Technical Report PRI 19A (Palo Alto, California: Panoramic Research, Inc., 1965).

[152] D. Raj Reddy, “Speech Recognition by Machine: A Review,” Proceedings of the IEEE 64, no.4 (April 1976), 501-531.

[153] Arthur Samuel, “Some Studies in Machine Learning Using the Game of Checkers,” IBM Journal of Research and Development 3, no. 3 (1959): 210-229.

[154] Frank Rosenblatt, “The Perceptron—A Perceiving and Recognizing Automaton,” Report 85-460-1, (Buffalo, New York: Cornell Aeronautical Laboratory, 1957).

[155] “Shakey the robot,” Wikipedia, last modified July 11, 2016, accessed August 1, 2016, https://en.wikipedia.org/wiki/Shakey_the_robot.

[156] Edward A. Feigenbaum and Bruce G. Buchanan, “DENDRAL and Meta-DENDRAL: Roots of Knowledge Systems and Expert System Applications,” Artificial Intelligence 59, no. 1-2 (1993), 233-240.

[157] John Haugeland, Artificial Intelligence: The Very Idea (Cambridge, Massachusetts: MIT Press, 1985).

[158] Rodney A. Brooks, “Elephants Don't Play Chess,” Robotics and Autonomous Systems 6, no. 1-2 (June 1990): 3-15.

[159] David A. Ferrucci, “Introduction to ‘This is Watson,’” IBM Journal of Research and Development 56, no. 3-4 (2012): 1.

[160] David Silver et al., “Mastering the Game of Go with Deep Neural Networks and Tree Search,” Nature 529 (2016): 484-489.

[161] Steven Borowiec and Tracey Lien, "AlphaGo beats human Go champ in milestone for artificial intelligence," Los Angeles Times, March 12, 2016, accessed August 1, 2016, http://www.latimes.com/world/asia/la-fg-korea-alphago-20160312-story.html.
