AI@50
AI@50, formally known as the "Dartmouth Artificial Intelligence Conference: The Next Fifty Years" (July 13–15, 2006), was a conference organized by James Moor to commemorate the 50th anniversary of the 1956 Dartmouth workshop, which effectively inaugurated the field of artificial intelligence. Five of the original ten attendees were present: Marvin Minsky, Ray Solomonoff, Oliver Selfridge, Trenchard More, and John McCarthy.[1]
The conference was sponsored by Dartmouth College, General Electric, and the Frederick Whittemore Foundation. In addition, a $200,000 grant from the Defense Advanced Research Projects Agency (DARPA) called for a report of the proceedings that would:
- Analyze progress on AI's original challenges during the first 50 years, and assess whether the challenges were "easier" or "harder" than originally thought and why
- Document what the AI@50 participants believe are the major research and development challenges facing this field over the next 50 years, and identify what breakthroughs will be needed to meet those challenges
- Relate those challenges and breakthroughs to developments and trends in other areas such as control theory, signal processing, information theory, statistics, and optimization theory.[2]
A summary report by the conference director, James Moor, was published in AI Magazine.[3]
Conference Program and links to published papers
- James Moor, Conference Director, Introduction
- Carol Folt and Barry Scherr, Welcome[4]
- Carey Heckman, Tonypandy and the Origins of Science
AI: Past, Present, Future
- John McCarthy, What Was Expected, What We Did, and AI Today
- Marvin Minsky, The Emotion Machine
The Future Model of Thinking
- Ron Brachman and Hector Levesque, A Large Part of Human Thought
- David Mumford, What is the Right Model for 'Thought'?
- Stuart Russell, The Approach of Modern AI[5]
The Future of Network Models
- Geoffrey Hinton & Simon Osindero, From Pandemonium to Graphical Models and Back Again
- Rick Granger, From Brain Circuits to Mind Manufacture
The Future of Learning & Search
- Oliver Selfridge, Learning and Education for Software: New Approaches in Machine Learning
- Ray Solomonoff, Machine Learning — Past and Future [6]
- Leslie Pack Kaelbling, Learning to be Intelligent
- Peter Norvig, Web Search as a Product of and Catalyst for AI
The Future of AI
- Rod Brooks, Intelligence and Bodies
- Nils Nilsson, Routes to the Summit
- Eric Horvitz, In Pursuit of Artificial Intelligence: Reflections on Challenges and Trajectories
The Future of Vision
- Eric Grimson, Intelligent Medical Image Analysis: Computer Assisted Surgery and Disease Monitoring
- Takeo Kanade, Artificial Intelligence Vision: Progress and Non-Progress
- Terry Sejnowski, A Critique of Pure Vision
The Future of Reasoning
- Alan Bundy, Constructing, Selecting and Repairing Representations of Knowledge
- Edwina Rissland, The Exquisite Centrality of Examples
- Bart Selman, The Challenge and Promise of Automated Reasoning
The Future of Language and Cognition
- Trenchard More, The Birth of Array Theory and Nial
- Eugene Charniak, Why Natural Language Processing is Now Statistical Natural Language Processing
- Pat Langley, Intelligent Behavior in Humans and Machines [7]
The Future of the Future
- Ray Kurzweil, Why We Can Be Confident of Turing Test Capability Within a Quarter Century [8]
- George Cybenko, The Future Trajectory of AI
- Charles J. Holland, DARPA's Perspective
AI and Games
- Jonathan Schaeffer, Games as a Test-bed for Artificial Intelligence Research
- Danny Kopec, Chess and AI
- Shay Bushinsky, Principle Positions in Deep Junior's Development
Future Interactions with Intelligent Machines
- Daniela Rus, Making Bodies Smart
- Sherry Turkle, From Building Intelligences to Nurturing Sensibilities
Selected Submitted Papers: Future Strategies for AI
- J. Storrs Hall, Self-improving AI: An Analysis[9]
- Selmer Bringsjord, The Logicist Manifesto[10]
- Vincent C. Müller, Is There a Future for AI Without Representation?[11]
- Kristinn R. Thórisson, Integrated A.I. Systems[12]
Selected Submitted Papers: Future Possibilities for AI
References
- Nilsson, Nils J. (2009). The Quest for Artificial Intelligence. Cambridge University Press. ISBN 978-0-521-12293-1. pp. 80–81.
- Knapp, Susan (2006-07-06). "Dartmouth receives grant from DARPA to support AI@50 conference". Dartmouth College Office of Public Affairs. Archived from the original on 2010-06-07. Retrieved 2010-06-11.
- Moor, James (2006). "The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years" (PDF). AI Magazine. 27 (4): 87–91. ISSN 0738-4602.
- Knapp, Susan (2006-07-24). "Artificial Intelligence: Past, Present, and Future". Vox of Dartmouth. Archived from the original on 2020-10-25. Retrieved 2010-06-11.
- Russell, Stuart (2006-07-12). "The Approach of Modern AI". Archived from the original (PPT) on 2012-03-24. Retrieved 2010-06-11.
- Solomonoff, Ray J. (2006). "Machine Learning — Past and Future" (PDF). Retrieved 2008-07-25.
- Langley, Pat (2006). "Intelligent Behavior in Humans and Machines" (PDF). Retrieved 2008-07-25.
- Kurzweil, Ray (14 July 2006). "Why We Can Be Confident of Turing Test Capability Within a Quarter Century". Archived from the original on 10 August 2006. Retrieved 25 July 2006.
- Hall, J. Storrs (2007). "Self-improving AI: An Analysis". Minds and Machines. 17 (3): 249–259. doi:10.1007/s11023-007-9065-3. S2CID 15347250.
Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a "child machine" which could be taught in the human manner to attain adult human-level intelligence. In latter days, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible, producing, if implemented, a feedback loop that would lead to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.
- Bringsjord, Selmer (December 2008). "The Logicist Manifesto: At Long Last Let Logic-Based AI Become a Field Unto Itself". Journal of Applied Logic. 6 (4): 502–525. doi:10.1016/j.jal.2008.09.001.
This paper is a sustained argument for the view that logic-based AI should become a self-contained field, entirely divorced from paradigms that are currently still included under the AI "umbrella"—paradigms such as connectionism and the continuous systems approach. The paper includes a self-contained summary of logic-based AI, as well as rebuttals to a number of objections that will inevitably be brought against the declaration of independence herein expressed.
- Müller, Vincent C. (March 2007). "Is There a Future for AI Without Representation?". Minds and Machines. 17 (1): 101–115. doi:10.1007/s11023-007-9067-1. S2CID 14355608.
This paper investigates the prospects of Rodney Brooks' proposal for AI without representation. It turns out that the supposedly characteristic features of "new AI" (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: "new AI" is just like old AI. Brooks' proposal boils down to the architectural rejection of central control in intelligent agents, which, however, turns out to be crucial. Some of the more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks' proposal for cognition without representation appears promising for full-blown intelligent agents, though not for conscious agents.
- Thórisson, Kristinn R. (March 2007). "Integrated A.I. systems". Minds and Machines. 17 (1): 11–25. doi:10.1007/s11023-007-9055-5. S2CID 21891058.
The broad range of capabilities exhibited by humans and animals is achieved through a large set of heterogeneous, tightly integrated cognitive mechanisms. To move artificial systems closer to such general-purpose intelligence we cannot avoid replicating some subset—quite possibly a substantial portion—of this large set. Progress in this direction requires that systems integration be taken more seriously as a fundamental research problem. In this paper I make the argument that intelligence must be studied holistically. I present key issues that must be addressed in the area of integration and propose solutions for speeding up the rate of progress towards more powerful, integrated A.I. systems, including (a) tools for building large, complex architectures, (b) a design methodology for building realtime A.I. systems and (c) methods for facilitating code sharing at the community level.
- Steinhart, Eric (October 2007). "Survival as a Digital Ghost". Minds and Machines. 17 (3): 261–271. doi:10.1007/s11023-007-9068-0. S2CID 2741620.
You can survive after death in various kinds of artifacts. You can survive in diaries, photographs, sound recordings, and movies. But these artifacts record only superficial features of yourself. We are already close to the construction of programs that partially and approximately replicate entire human lives (by storing their memories and duplicating their personalities). A digital ghost is an artificially intelligent program that knows all about your life. It is an animated auto-biography. It replicates your patterns of belief and desire. You can survive after death in a digital ghost. We discuss a series of digital ghosts over the next 50 years. As time goes by and technology advances, they are progressively more perfect replicas of the lives of their original authors.
- Schmidt, Colin T. A. (October 2007). "Children, Robots and... the Parental Role". Minds and Machines. 17 (3): 273–286. doi:10.1007/s11023-007-9069-z. S2CID 6578298.
The raison d'être of this article is that many a spry-eyed analyst of the works in intelligent computing and robotics fails to see the essential point concerning applications development: that of expressing their ultimate goal. Alternatively, they fail to state it suitably for the lesser-informed public eye. The author does not claim to be able to remedy this. Instead, the visionary investigation offered couples learning and computing with other related fields as part of a larger spectrum to fully simulate people in their embodied image. For the first time, the social roles attributed to the technical objects produced are questioned, and so with a humorous illustration.
- Anderson, Michael; Susan Leigh Anderson (March 2007). "The status of machine ethics: a report from the AAAI Symposium". Minds and Machines. 17 (1): 1–10. doi:10.1007/s11023-007-9053-7. S2CID 33329318.
This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
- Guarini, Marcello (March 2007). "Computation, Coherence, and Ethical Reasoning". Minds and Machines. 17 (1): 27–46. doi:10.1007/s11023-007-9056-4. S2CID 7794353.
Theories of moral, and more generally, practical reasoning sometimes draw on the notion of coherence. Admirably, Paul Thagard has attempted to give a computationally detailed account of the kind of coherence involved in practical reasoning, claiming that it will help overcome problems in foundationalist approaches to ethics. The arguments herein rebut the alleged role of coherence in practical reasoning endorsed by Thagard. While there are some general lessons to be learned from the preceding, no attempt is made to argue against all forms of coherence in all contexts. Nor is the usefulness of computational modelling called into question. The point will be that coherence cannot be as useful in understanding moral reasoning as coherentists may think. This result has clear implications for the future of Machine Ethics, a newly emerging subfield of AI.
External links
- Dartmouth Artificial Intelligence Conference: The Next Fifty Years. Official conference Web site.
- James Moor. The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years. AI Magazine 27 (4) (2006): 87–91. ISSN 0738-4602. Official conference report, with photos; freely available online as a PDF.
- Peter Norvig, Pictures from AI@50. Photographs of conference presenters.
Notes and comments
Conference blogger Meg Houston Maker provided on-the-scene coverage of the conference, including entries on:
- AI@50 Opening - Brief abstracts of opening remarks, including Carey Heckman's on the original conference and first usage of the term "artificial intelligence"
- AI — Past, Present, Future — Brief abstracts of papers by John McCarthy and Marvin Minsky
- The Future Model of Thinking — Brief abstracts of papers by Ron Brachman, David Mumford, and Stuart Russell
- The Future of Network Models — Brief abstracts of papers by Geoffrey Hinton, Simon Osindero, and Rick Granger
- The Future of Learning and Search — Brief abstracts of papers by Oliver Selfridge, Ray Solomonoff, Leslie Pack Kaelbling, and Peter Norvig
- The Future of AI — Brief abstracts of papers by Rod Brooks, Nils Nilsson, and Eric Horvitz
- The Future of Vision — Brief abstracts of papers by Eric Grimson, Takeo Kanade, and Terry Sejnowski
- The Future of Reasoning — Brief abstracts of papers by Alan Bundy, Edwina Rissland, and Bart Selman
- The Future of Language and Cognition — Brief abstracts of papers by Trenchard More, Eugene Charniak, and Pat Langley
- The Future of the Future — Brief abstract of Ray Kurzweil's paper
- AI and Games — Brief abstracts of papers by Jonathan Schaeffer and Danny Kopec
- Future Interactions with Intelligent Machines — Brief abstracts of papers by Daniela Rus and Sherry Turkle
- Selected Submitted Papers: Future Strategies for AI — Brief abstracts of papers by J. Storrs Hall and Selmer Bringsjord
- Selected Submitted Papers: Future Possibilities for AI — Brief abstracts of papers by Eric Steinhart, C. T. A. Schmidt, Michael Anderson, and Susan Leigh Anderson