Friday, December 8, 2006

“Artificially Intelligent”

In this paper, I argue that the development of Artificially Intelligent computer programs can benefit from Dewey's model of thought, and that the development of an autonomous techné can benefit from Evolutionary Theory. First, I provide examples of how current Artificial Intelligence can benefit from modeling critical inquiry. Our tools for testing Intelligence, such as chess, the Turing Test, and John Searle's Chinese Room, are useful but not thorough. However, defining Artificial Intelligence is not necessary for its refinement. I postulate that for a program to be considered Intelligent, it need only be capable of critical inquiry into the problem it was designed to solve. I support this by analyzing human interactions with speech recognition programs and with AI in video games, and offer these as examples of how such programs are capable of inquiry.

Next, as an approach to “Strong AI”, or a truly autonomous Intelligence, I argue that it would be useful to model the mainstay of Evolutionary Theory, Natural Selection, in an infinitely complex environment. I support this by showing that current self-organizing algorithms, such as Stephen Wolfram's cellular automata and Evolutionary Computation, are not truly self-organizing because they exist in a construct. To approach the problem of creating Strong AI, we might benefit from creating a dynamic environment for the computational substrate. This environment must have a limited amount of “food” and space, and must favor only the programs which most effectively use inquiry, deleting the ones less suited to retain information from past inquiry. The result is an Intelligence which is capable of inquiry into the problem of its own survival.

Our definition of Intelligence, when applied to computer models thereof, is forever changing. Here I summarize some methods of applying these definitions. Alan Turing proposed that a conversational program could be tested against a human in a controlled experiment, and that successfully “tricking” a human judge would constitute Intelligence. (Turing) John Searle countered this argument with the “Chinese Room” thought experiment. In his hypothetical “room”, Searle placed a man with no knowledge of the Chinese language, who is fed slips of paper with Chinese characters written on them. The man has access to a set of instructions which outline how he should respond to the papers he is fed. Outside the room, it becomes apparent that the man's responses are indistinguishable from a real conversation between two people fluent in Chinese. According to the Turing Test, one would reasonably say that the “program” (the book the man is using) is intelligent. We can now see how Searle was effective in disputing the Turing Test; a set of instructions, however complex, cannot be considered “intelligent”. (Searle) There must be an understanding of natural language, rather than a mimicking thereof. Thus, there is a sense that a machine that understands and learns would be considered “Strong AI”.
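
To make Searle's point concrete, here is a minimal sketch of the “room” as a program: a lookup table of canned responses. The table and its entries are invented for illustration; a real rule book would be enormously larger, but the principle is the same at any scale.

```python
# A lookup table standing in for Searle's rule book. The entries are
# invented; a real "room" would need vastly more rules, but the
# principle is identical however large the table grows.
RULE_BOOK = {
    "你好": "你好！",             # "Hello" -> "Hello!"
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "Fine, thank you."
}

def chinese_room(slip: str) -> str:
    """Produce a reply by pure symbol manipulation; nothing here
    understands Chinese, yet the output can pass for conversation."""
    return RULE_BOOK.get(slip, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # a fluent reply, minus any understanding
```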

Intelligence, as defined above, is therefore a description of human thought rather than a definition. Simply modeling a certain aspect of thought, such as natural language, does not denote Intelligence in a hard sense. A set of instructions for mimicking this thought process (an artificial model of intelligent thought) will never be intelligent in this regard if the problem it was created to solve is merely to mimic natural language. When we say that a machine is intelligent because it can perform a task as well as a human, that does not imply that it is capable of understanding the task on as many levels as a human might. AI might therefore seem ambiguous, an impossible goal of modeling the infinitely complex world of human interaction. It is only ambiguous to the extent that we fall into the trap of ontologically categorizing programs.

In Philosophical Tools for Technological Culture, Larry Hickman highlights the importance of avoiding the critique of techné in this way. Text-type determinists reject what texts can be, just as some reject what AI could be. Just as ontological conclusions about texts can be dangerous, so can defining the function of a program's essence. To say that the “essence” of a computer program is intelligent because it resembles human behavior, and vice-versa, is erroneous. What we do with these programs therefore determines their functions. (Hickman 119) It might be interesting to imagine what the text-type determinists would say about a novel produced by AI.

However, ontology is an integral part of AI, as the Semantic Web, knowledge engineering, and modeling thought with heuristic techniques all employ ontological methods. This school of AI seeks to maximize the potential for HCI (human-computer interaction) by developing tools which aid us in our inquiries, rather than take over for us. Knowledge representation will forever remain an imperfect science, as “there is no single 'best' theory that explains all human knowledge organization”. (Gašević 14) The Semantic Web project seeks to develop intelligent services for the end user (humans) to access the vast information on the Internet in a more efficient fashion than current markup languages allow. In effect, a Semantic Web would guide human inquiry. Ontological categorization techniques have become highly useful in our search for knowledge, and reciprocally aid in the development of stronger AI. A “stronger” AI here implies one that is closer to self-training, or more autonomous.
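
As a rough illustration of how such ontological categorization is stored and queried, here is a minimal sketch of a triple store, the subject-predicate-object structure underlying RDF-style knowledge representation on the Semantic Web. The facts and the vocabulary ("isA", "hasAuthor") are invented for illustration.

```python
# Facts stored as subject-predicate-object triples, the basic data
# structure behind RDF-style knowledge representation.
triples = [
    ("HowWeThink", "isA", "Book"),
    ("HowWeThink", "hasAuthor", "JohnDewey"),
    ("JohnDewey", "isA", "Philosopher"),
]

def query(s=None, p=None, o=None):
    """Return every triple matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

print(query(p="isA"))          # everything that has been categorized
print(query(s="HowWeThink"))   # everything known about one subject
```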

To say that a computer is intelligent because it can consistently defeat a human at chess offers us another definition of Intelligence. John Dewey would disagree here. The computer acts in accordance with its programmer's instructions without fail. The instructions might seem to guide the machine toward a goal, and one might say that the computer “perceives” losing as a problem. However, even if the machine can predict all future moves, or develop new strategies tailored to its opponent, it cannot rewrite its own “dogmatic” programming if it anticipates a loss, or even give up, unless that is already part of its program. In a sense it exists in a finite construct, and it cannot “learn”.
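
A minimal sketch of the game-tree search behind such chess programs makes this concrete: minimax optimizes whatever score its author wrote into it, and never steps outside those rules. The toy game and the `moves` and `score` functions below are invented stand-ins for a real engine's move generator and position evaluation.

```python
# Exhaustive game-tree search; the machine optimizes the score its
# author wrote into it and never questions the rules themselves.
def minimax(state, maximizing, moves, score):
    options = moves(state)
    if not options:                        # terminal position
        return score(state, maximizing)
    values = [minimax(m, not maximizing, moves, score) for m in options]
    return max(values) if maximizing else min(values)

# Toy game: a pile of coins, each move removes one or two coins, and
# taking the last coin wins. A pile of four is a forced loss.
def moves(n):
    return [n - k for k in (1, 2) if n - k >= 0]

def score(n, maximizing):
    return -1 if maximizing else +1        # the player to move at 0 lost

print(minimax(4, True, moves, score))      # -> -1, a forced loss
```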

The ability to learn, therefore, seems a much better definition of Intelligence, whether it be applied to humans or to a machine. For insofar as a machine can “choose” in accordance with its own programming, this is analogous to our choosing in accordance with our own beliefs. What is necessary, then, is a “felt difficulty” when the computer's programming is insufficient to solve a problem, and the ability to change its own programming while still allowing for further inquiry. Dewey posits that “logical ideas are like keys” [and] “are a method of evading, circumventing, or surmounting through reflection obstacles that otherwise would have to be attacked by brute force” (Dewey 110). This correlates to experimentation, from which new beliefs can arise, or, in the case of the computer, new programming. If dogmatism is analogous to static programming, and “knowing” to dynamic, self-altered programming, we can readily apply some of Dewey's models of thought. A machine capable of experimentation and scientific inquiry might therefore have a dichotomy which includes these two sets of instructions. Neural Networks coupled with Semantic understanding are a direct method of modeling this dichotomy.
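
As a hedged sketch of this dichotomy, assuming a toy environment and an invented update rule (this is just online regression, not Dewey's method itself), consider an agent whose failed prediction is the “felt difficulty” and whose response is to revise its own beliefs:

```python
import random

# The agent's "beliefs" are two numbers it uses to predict its world.
# A wrong prediction is the felt difficulty; inquiry is the act of
# revising the beliefs themselves rather than applying them blindly.
def environment(x):
    return 2 * x + 1                      # the hidden regularity

beliefs = {"slope": 0.0, "offset": 0.0}

def predict(x):
    return beliefs["slope"] * x + beliefs["offset"]

random.seed(0)
for trial in range(2000):
    x = random.randint(0, 10)
    error = environment(x) - predict(x)   # nonzero error = felt difficulty
    beliefs["slope"] += 0.01 * error * x  # inquiry: self-altered programming
    beliefs["offset"] += 0.01 * error

print(beliefs)  # drifts toward slope 2, offset 1
```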

Neurons and their connections can be modeled by Neural Networks, but sensory input remains the hurdle which cognitive science has yet to surmount. In quantifying the importance of certain input, we are left with a quandary: does the human brain have a mechanism which filters the information it receives according to rules that can, in fact, be modeled by these Neural Networks? The cognitive scientists working in this area are currently trying to model this mechanism using Parallel Distributed Processing. “If the PDP model fulfills its promises we would develop Artificial Intelligence systems that are really intelligent rather than programs that only appear to be intelligent.” (Whitson) In his article, Whitson shows how a Semantic knowledge base might be mapped alongside a Neural Network. A large area of AI concentrates on this type of interdisciplinary model building.
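
A minimal sketch of a single PDP-style unit may help: many weighted inputs summed in parallel, with learning as small error-driven adjustments to the connection weights. The task (logical AND) and all parameters are illustrative choices, not anything drawn from Whitson's article.

```python
import numpy as np

# One threshold unit learning logical AND by error correction: the
# simplest instance of parallel weighted connections plus learning.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                    # target: AND

w, b = np.zeros(2), 0.0
for epoch in range(20):
    for xi, target in zip(X, y):
        output = int(w @ xi + b > 0)          # threshold activation
        error = target - output
        w += 0.1 * error * xi                 # adjust connection weights
        b += 0.1 * error

print([int(w @ xi + b > 0) for xi in X])      # -> [0, 0, 0, 1]
```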

Another tool prominent in our culture is the video game, the development of which constitutes a multi-billion-dollar-a-year industry. Direct descendants of early thought in AI, video games today increasingly utilize these concepts to improve HCI and create a more human-like opponent. We could use a Massively Multiplayer Online Game (MMOG) to obtain semantic data. This could cut the development time for a Semantic application, thus improving HCI with a less “brittle” AI that actually understands. Currently, the Cyc project is doing this, albeit on a small scale compared to other MMOGs, to develop a Semantic application with “common sense”. Dr. Douglas Lenat, CEO of Cycorp, refers to the current state of Semantic AI as an “idiot savant” with a poor understanding of natural language. (Lenat)

The artificially intelligent tools currently available to us might already model logical thought more effectively than I have outlined here, but these tools are obviously not yet fully autonomous. Notwithstanding the fact that AI research is a massive undertaking, the research is, in fact, still being carried out by humans. Once human thought is modeled in a way that allows for autonomous self-programming, or learning, the hard part is over. This will be a major milestone, for once a computer can train its own thought and become intelligent with respect to how it interacts with humans, we will have passed through a gate of sorts. Any further research would take the form of teaching, just as one teaches a child. Here, John Dewey's thoughts on education and thinking become even more applicable. We will be, and in some sense already are, teaching these machines “know-how” and letting them develop at their own pace. How fast this pace will be, however, is still up in the air.

There are three current modes of thinking about Strong AI, an AI that can truly be said to equal, or surpass, human Intelligence. The first concerns whether or not it is possible. Looking at the progress of knowledge-based Semantic applications, it does appear that human understanding of our universe is increasing at an exponential rate, along with the speed of the substrate upon which these applications run. However, current models of the human brain fall short of the parallel processing power needed by orders of magnitude. That we cannot yet create a computational substrate “fast” enough to plug this model into is not the main opposition to this mode of thought; the main opposition is that we cannot actually create a complete model of human thought. Those who believe Strong AI is impossible (while thinking of it in this mode) look at it from “the top down”. This sort of “top-down” thinking has been a part of AI since the beginning; even before computational substrates existed, philosophers struggled to ontologically categorize human behavior and thought. Lee Spector calls this top-down design in regards to AI “pre-Darwinian thinking”, and notes that “the shortest path to achieving AI's long term goals will probably involve both human design and Evolution.” He also rejects the notion that general AI can only be created by intelligent, human design. (Spector)

Alongside the Theory of Evolution, we need to place a robust Theory of the Evolution of Intelligence. Cognitive Science can be thought of as a “static” science, in which we study the apparent “end state” of a network of biological entities: Intelligence. Viewing Intelligence in this manner (as static, not evolving) and attempting to model it seems redundant, or even foolish. If we are truly trying to model human thought, we must look toward Evolutionary Theory as a model, and begin from “the bottom up”.

The second mode of thought about Strong AI concerns what the machine would look like, or what the mechanism for thought will be. As I mentioned above, it might be beneficial to use Evolutionary Theory and model Natural Selection as a mechanism for a bottom-up approach to designing the computational substrate. This becomes problematic, however, when the substrate exists in a static environment. Currently, some stronger forms of AI model Evolutionary Theory and can be said to “evolve”, in that they increase in complexity and in general behave much as biological organisms do. Evolutionary Computation and Genetic Algorithms both seek to do this, although running iterations in this manner will never result in anything more complex than the computer itself. HCI may improve dramatically, but the machine remains just that: a machine.
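
A bare-bones Genetic Algorithm illustrates both the mechanism and the limitation: the target, the fitness function, and the mutation rate all live inside a construct the programmer chose, so nothing here can exceed that frame. The numbers below are arbitrary.

```python
import random

# A population of bit strings evolves toward a fixed target through
# selection, reproduction, and mutation.
random.seed(1)
TARGET = [1] * 20

def fitness(genome):
    return sum(bit == goal for bit, goal in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                        # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]      # reproduction

print(max(map(fitness, population)), "of", len(TARGET))  # approaches 20
```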

For example, Cellular Automata, pioneered by John von Neumann, and later expounded upon by John Horton Conway in the “Game of Life” and Stephen Wolfram in A New Kind of Science, model nature quite effectively. Phenomena apparent in the natural world appear readily when these Cellular Automata run their algorithms inside their respective programs. There appears to be self-organization, as groups of Automata are subject to their artificial environment, as well as artificial selection as they compete. Wolfram also noted that some Natural Phenomena which appear to be Chaotic are reproduced by these simple algorithms. (Wolfram) In Freedom Evolves, Daniel C. Dennett describes this idea of computer programmers creating life inside a computational substrate as “Hacker Gods”. (Dennett 44) He looks at Neural Networks and these Cellular Automata and describes them as deterministic. Although they create an illusion of being alive, at their base level their actions are determined by a program. Thus Automata can evolve inside the construct into more complex Automata, but can never evolve into something truly “alive”, such as a bacterium or a duck.
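
As a concrete illustration, here is a sketch of one of Wolfram's elementary automata, Rule 30, whose fully deterministic update rule nonetheless prints an apparently chaotic pattern. The width and step count are arbitrary choices.

```python
# Rule 30: each cell's next state is read from the bits of the number
# 30, indexed by the cell's three-neighbor pattern. Deterministic at
# the base level, chaotic in appearance, which is Dennett's point.
RULE = 30
WIDTH, STEPS = 64, 24

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # single live cell in the middle
for _ in range(STEPS):
    print("".join("#" if cell else " " for cell in row))
    row = [(RULE >> (row[(i - 1) % WIDTH] * 4 +
                     row[i] * 2 +
                     row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```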

Strong AI, if it resembled an Automaton as outlined above, would still exist in a construct. It might be accurately described as “intelligent”, but it would not exist on the level that we humans do. When von Neumann postulated his thought experiment of a Cellular Automaton, he was also envisioning a self-reproducing machine that existed in the real world. The development of AI along these lines gave rise to the field of Artificial Life, and may be the route to a truly intelligent substrate: one that can interact with humans on the level that humans interact with each other.

Subjecting these machines to Natural Selection in the real sense, and allowing for competition, would negate the necessity of building a model of Intelligence. The model would in fact build itself as a necessity for survival. However, there still exists a need to “program” these entities with the need to survive, and to create a model for their “food”, which would obviously involve electricity. Thus, Strong AI would lie halfway between top-down and bottom-up, as some form of modeling is absolutely necessary as a starting point. This becomes problematic, however, if Strong AI exists halfway between biological organisms and computers.
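
Here is a heavily hedged sketch of this proposal: programs compete for limited “food”, and only those whose inquiry (here, an invented foraging task) earns enough energy survive to reproduce, while the rest are deleted. Every quantity below is illustrative.

```python
import random

# Agents carry a fixed list of guesses; food goes to those whose
# inquiry locates a hidden value quickly. Limited food and space
# delete the unfit, so the selective pressure is survival itself.
random.seed(2)
POP, GENOME = 20, 10

def forage(strategy, secret):
    """Energy earned: the earlier the agent's guess list hits the
    hidden value, the more food it receives; a miss earns nothing."""
    return GENOME - strategy.index(secret) if secret in strategy else 0

population = [[random.randint(0, 9) for _ in range(GENOME)]
              for _ in range(POP)]
for season in range(100):
    secret = random.randint(0, 9)
    population.sort(key=lambda s: forage(s, secret), reverse=True)
    survivors = population[:POP // 2]          # limited food and space
    offspring = [s[:] for s in survivors]      # the fed reproduce
    for child in offspring:                    # imperfect copying
        child[random.randrange(GENOME)] = random.randint(0, 9)
    population = survivors + offspring

best = max(population, key=lambda s: sum(forage(s, d) for d in range(10)))
print(best)   # selection tends to favor strategies that cover every value
```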

The third mode of thought in regards to Strong AI concerns what we should do when it arrives. Many Science Fiction writers have approached this idea. For example, Isaac Asimov's “Three Laws” provided a model for how Strong AI (in his case, Intelligent Robots) might be programmed to interact with humans. Such a simple solution seems foolhardy, for as I mentioned above, a truly intelligent AI would have the capability to change its own beliefs/programming. “To maintain the state of doubt, and to carry on a systematic and protracted inquiry-these are the essentials of thinking.” (Dewey 13) As Dewey deplores dogmatic thinking and elevates critical inquiry to the main tenet of human thought, forcing a machine to behave this way would relegate it to a weaker, non-intelligent state.

In conclusion, AI research runs parallel to many fields of science, and should be thought of as a Philosophical endeavor rather than just an exercise in programming. Progress in AI has dual implications: we can use this tool to expand our own knowledge about the natural world, as well as use it to refine itself. As HCI becomes robust enough to allow for the integration of AI into our current tools, we are seeing the emergence of an Intelligence. How we define this Intelligence is therefore subject to critical analysis, and can be fed back into its development. Because Natural Intelligence is a result of Evolution, AI must model not only human thought but the natural world as a whole.



Works Cited

Dennett, Daniel C. Freedom Evolves. Viking Penguin, 2003.

Dewey, John. How We Think. Boston: D.C. Heath, 1910.

Gašević, Dragan, Dragan Djurić, and Vladan Devedžić. Model Driven Architecture and Ontology Development. Springer, 2006. ISBN-10: 3-540-32180-2.

Hickman, Larry. Philosophical Tools for Technological Culture. Indiana University Press, 2001.

Lenat, Douglas. Google TechTalks, May 30, 2006.

Searle, John R. "Minds, Brains, and Programs." The Behavioral and Brain Sciences, vol. 3. Cambridge University Press, 1980.

Spector, Lee. "Evolution of Artificial Intelligence." Artificial Intelligence, vol. 170, no. 18, December 2006, pp. 1251-1253.

Turing, Alan. "Computing Machinery and Intelligence." Mind, vol. LIX, no. 236, October 1950, pp. 433-460.

Whitson, George. "An Introduction to the Parallel Distributed Processing Model of Cognition and Some Examples of How It Is Changing the Teaching of Artificial Intelligence." ACM SIGCSE Bulletin, vol. 20, no. 1, February 1988.