Friday, December 8, 2006

“Artificially Intelligent”

In this paper, I argue that the development of Artificially Intelligent computer programs can benefit from Dewey's model of thought, and that the development of an autonomous techné can benefit from Evolutionary Theory. First, I provide examples of how current Artificial Intelligence can benefit from modeling critical inquiry. Our tools for testing Intelligence, such as chess, the Turing Test, and John Searle's Chinese Room, are useful but not thorough. However, defining Artificial Intelligence is not necessary for its refinement. I postulate that for a program to be considered Intelligent, it need only be capable of critical inquiry into the problem it was designed to solve. I support this by analyzing human interactions with speech recognition programs and AI in video games, and offer these as examples of programs capable of inquiry.

Next, as an approach to “Strong AI”, or a truly autonomous Intelligence, I argue that it would be useful to model the mainstay of Evolutionary Theory, Natural Selection, in an infinitely complex environment. I support this by showing that current self-organizing algorithms, such as Stephen Wolfram's cellular automata or Evolutionary Computation, are not truly self-organizing because they exist in a construct. To approach the problem of creating Strong AI, we might benefit from creating a dynamic environment for the computational substrate. This environment must have a limited amount of “food” and space, and must favor only the programs which most effectively use inquiry, deleting the ones less suited to retain information from past inquiry. The result is an Intelligence capable of inquiry into the problem of its own survival.

Our definition of Intelligence, when applied to computer models thereof, is forever changing. Here I summarize some methods of applying these definitions. Alan Turing proposed that a conversational program could be tested against a human in a control experiment, and that the successful “tricking” of a human judge would constitute Intelligence. (Turing) John Searle countered this argument with the “Chinese Room” thought experiment. In his hypothetical “room”, Searle placed a man with no knowledge of the Chinese language, who is fed slips of paper with Chinese characters written on them. The man has access to a set of instructions which outline how he should respond to the papers he is fed. Outside the room, it becomes apparent that the man's responses are indistinguishable from a real conversation between two people fluent in Chinese. According to the Turing Test, one would reasonably say that the “program” (the book the man is using) is intelligent. We can now see how Searle was effective in disputing the Turing Test: a set of instructions, however complex, cannot be considered “intelligent”. (Searle) There must be an understanding of natural language, rather than a mimicking thereof. Thus, there is a sense that a machine that understands and learns would be considered “Strong AI”.
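To make Searle's point concrete, consider a minimal sketch of the Room in code. The phrases and the rule book below are invented for illustration, not taken from Searle; the point is that the program produces plausible replies by pure lookup, with no understanding anywhere in the system.

```python
# A minimal sketch of Searle's "Chinese Room": the rule book is just a
# lookup table, so plausible output requires no understanding at all.
# The rules and phrases here are invented for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather?" -> "The weather is nice."
}

def chinese_room(slip_of_paper: str) -> str:
    """Follow the instructions blindly; no symbol is ever 'understood'."""
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))
```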

Intelligence, as defined above, is therefore a description of human thought, rather than a definition. Simply modeling a certain aspect of thought, such as natural language, does not denote Intelligence in a hard sense. A set of instructions for mimicking this thought process (an artificial model of intelligent thought) will never be intelligent in this regard if the only problem it was created to solve is to mimic natural language. When we say that a machine is intelligent because it can perform a task as well as a human, that does not imply that it is capable of understanding the task on as many levels as a human might. AI might therefore seem ambiguous, an impossible goal of modeling the infinitely complex world of human interaction. It is only ambiguous to the extent that we fall into the trap of ontologically categorizing programs.

In Philosophical Tools for Technological Culture, Larry Hickman highlighted the importance of avoiding the critique of techné in this way. Text-type determinists reject what texts can be, just as some reject what AI could be. Just as ontological conclusions about texts can be dangerous, so can defining the function of a program's essence. To say that the “essence” of a computer program is intelligent because it resembles human behavior, and vice-versa, is erroneous. What we do with these programs therefore determines their functions. (Hickman 119) It might be interesting to imagine what the text-type determinists would say about a novel produced by AI.

However, ontology is an integral part of AI, as the Semantic Web, knowledge engineering, and modeling thought with heuristic techniques, all employ ontological methods. This school of AI seeks to maximize the potential for HCI (human computer interaction) by developing tools which aid us in our inquiries, rather than take over for us. Knowledge representation will forever remain an imperfect science, as “there is no single 'best' theory that explains all human knowledge organization”. (Gaševic 14) The Semantic Web project seeks to develop intelligent services for the end user (humans) to access the vast information on the Internet in a fashion more efficiently than current markup languages. In effect, a Semantic Web would guide human inquiry. Ontological categorization techniques have become highly useful in our search for knowledge, and reciprocally aid in the development of stronger AI. A “stronger” AI here implies one that is closer to self -training, or more autonomous.
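As a rough sketch of what ontological, machine-readable knowledge looks like in practice, here is a hypothetical miniature triple store of the sort Semantic Web tools are built upon; the subjects, predicates, and objects are invented examples rather than part of any real ontology.

```python
# A minimal sketch of Semantic Web-style knowledge representation:
# facts stored as (subject, predicate, object) triples that a program
# can query. The vocabulary below is an invented example.

TRIPLES = [
    ("Deep Blue", "is_a", "chess computer"),
    ("chess computer", "is_a", "computer program"),
    ("Deep Blue", "defeated", "Garry Kasparov"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None matches anything)."""
    return [
        (s, p, o)
        for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

if __name__ == "__main__":
    print(query(subject="Deep Blue"))   # everything asserted about Deep Blue
    print(query(predicate="is_a"))      # the toy ontology's class hierarchy
```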

To say that a computer is intelligent because it can consistently defeat a human at chess offers us another definition of Intelligence. John Dewey would disagree here. The computer acts in accordance with its programmer's instructions without fail. The instructions might seem to be guiding the machine toward a goal, and one might say that the computer “perceives” losing as a problem. However, even if the machine can predict all future moves, or develop new strategies tailored to its opponent, it cannot rewrite its own “dogmatic” programming if it anticipates a loss, or even give up, unless that is already part of its program. In a sense it exists in a finite construct, and it cannot “learn”.
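The point can be made concrete with a minimal sketch of the kind of fixed search procedure a chess program runs. The toy game tree and scores below are invented, standing in for the far larger search a real program performs.

```python
# A minimal sketch of the fixed "dogmatic" search a chess program runs:
# minimax over a game tree. The tree and leaf scores below are invented;
# the point is that the procedure itself can never be revised from inside.

GAME_TREE = {                 # node -> list of child nodes
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
SCORES = {"a1": 3, "a2": -2, "b1": 0, "b2": 5}   # static leaf evaluations

def minimax(node, maximizing=True):
    if node in SCORES:                            # leaf: just apply the evaluation
        return SCORES[node]
    children = (minimax(c, not maximizing) for c in GAME_TREE[node])
    return max(children) if maximizing else min(children)

if __name__ == "__main__":
    # The program "chooses" whatever the fixed rules dictate; it cannot
    # decide to give up or rewrite the rules when it foresees a loss.
    print(minimax("start"))
```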

The ability to learn, therefore, seems a much better definition of Intelligence, whether it be applied to humans or a machine. Insofar as a machine can “choose” in accordance with its own programming, this is analogous to our choosing in accordance with our own beliefs. What is necessary, then, is a “felt difficulty” when the computer's programming is insufficient to solve a problem, and the ability to change its own programming while still allowing for further inquiry. Dewey posits that “logical ideas are like keys” [and] “are a method of evading, circumventing, or surmounting through reflection obstacles that otherwise would have to be attacked by brute force” (Dewey 110). This correlates to experimentation, from which new beliefs can arise, or in the case of the computer, new programming. If dogmatism is analogous to static programming, and “knowing” to dynamic, self-altered programming, we can readily apply some of Dewey's models of thought. A machine capable of experimentation and scientific inquiry might therefore have a dichotomy which includes these two sets of instructions. Neural Networks coupled with Semantic understanding are a direct method of modeling this dichotomy.

Neurons and their connections can be modeled by Neural Networks, but sensory input remains the hurdle which cognitive science has yet to surmount. In quantifying the importance of certain input, we are left with a quandary: Does the human brain have a mechanism which filters the information it receives according to rules that can, in fact, be modeled by these Neural Networks? The cognitive scientists working in this area are currently trying to model this mechanism using Parallel Distributed Processing. “If the PDP model fulfills its promises we would develop Artificial Intelligence systems that are really intelligent rather than programs that only appear to be intelligent.” (Whitson) In his article, Whitson shows how a Semantic knowledge base might be mapped alongside a Neural Network. A large area of AI concentrates on this type of interdisciplinary model building.
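As a rough illustration of the kind of unit PDP-style networks are built from, here is a minimal sketch of a single artificial neuron learning a toy problem. It is not Whitson's model, only an assumed, simplified example of weight adjustment in response to error.

```python
# A minimal sketch of the kind of unit PDP-style Neural Networks are built
# from: a single artificial neuron that learns a weighting of its inputs.
# The training data (a logical OR) is an invented toy example.

def step(x):
    return 1 if x > 0 else 0

# inputs -> desired output (logical OR)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                         # a few passes over the data
    for (x1, x2), target in DATA:
        output = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - output             # the "felt difficulty"
        weights[0] += rate * error * x1     # adjust the connection strengths
        weights[1] += rate * error * x2
        bias += rate * error

print(weights, bias)
print([step(weights[0] * x1 + weights[1] * x2 + bias) for (x1, x2), _ in DATA])
```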

Another tool prominent in our culture is video games, the development of which constitutes a multi-billion-dollar-a-year industry. Direct descendants of early thought in AI, video games today increasingly utilize these concepts to improve HCI and create a more human-like opponent. We could use a Massively Multiplayer Online Game (MMOG) to obtain semantic data. This can cut the development time for a Semantic application, thus improving HCI with a less “brittle” AI that actually understands. Currently, the Cyc project is doing this, albeit on a small scale compared to other MMOGs, to develop a Semantic application with “common sense”. Dr. Douglas Lenat, CEO of Cycorp, refers to the current state of Semantic AI as an “idiot savant” with a poor understanding of natural language. (Lenat)

The artificially intelligent tools that are currently available to us might already model logical thought more effectively than I have outlined here, but these tools are obviously not yet fully autonomous. Notwithstanding the fact that AI research is a massive undertaking, the research is, in fact, still being carried out by humans. Once human thought is modeled in a way that allows for autonomous self-programming, or learning, the hard part is over. This will be a major milestone, as once a computer can train its own thought, and become intelligent with respect to how it interacts with humans, we will have passed through a gate of sorts. Any further research would take the form of teaching, just as one teaches a child. Here, John Dewey's thoughts on education and thought become even more applicable. We will be, and in some sense already are, teaching these machines “know-how”, and letting them develop at their own pace. How fast this pace will be, however, is still up in the air.

There are three current modes of thinking about Strong AI, an Intelligence that can truly be said to equal or surpass human Intelligence. The first concerns whether or not it is possible. Looking at the progress of knowledge-based, Semantic applications, it does appear that human understanding of our universe is increasing at an exponential rate, along with the speed of the substrate upon which these applications run. However, current models of the human brain fall short of the parallel processing power needed by orders of magnitude. That we cannot yet create a computational substrate “fast” enough to plug this model into is not the main objection to this mode of thought; the main objection is that we cannot actually create a complete model of human thought. Those who believe Strong AI is impossible (while thinking of it in this mode) look at it from “the top down”. This sort of “top-down” thinking has been a part of AI since the beginning, and even before computational substrates existed, philosophers struggled to ontologically categorize human behavior and thought. Lee Spector calls this top-down design in AI “pre-Darwinian thinking”, and notes that “the shortest path to achieving AI's long term goals will probably involve both human design and Evolution.” He also rejects the notion that general AI can only be created by intelligent, human design. (Spector)

Alongside the Theory of Evolution, we need to place a robust Theory of the Evolution of Intelligence. Cognitive Science can be thought of as a “static” science, in which we study the apparent “end state” of a network of biological entities: Intelligence. Viewing Intelligence in this manner (as static, not evolving) and attempting to model it seems redundant, or even foolish. If we are truly trying to model human thought, we must look toward Evolutionary Theory as a model, and begin from “the bottom up”.

The second mode of thought about Strong AI concerns what the machine would look like, or what the mechanism for thought will be. As I mentioned above, it might be beneficial to use Evolutionary Theory and model Natural Selection as a mechanism for a bottom-up approach to designing the computational substrate. This becomes problematic, however, when the substrate exists in a static environment. Currently, some stronger forms of AI model Evolutionary Theory and can be said to “evolve”, in that they increase in complexity and in general behave much as biological organisms do. Evolutionary Computation and Genetic Algorithms both seek to do this, although running iterations in this manner will never result in anything more complex than the computer itself. HCI may improve dramatically, but the machine remains just that: a machine.
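A minimal sketch of a Genetic Algorithm makes the objection concrete. The genome, fitness function, and parameters below are invented for illustration; the key point is that the “environment” is fixed in advance by the programmer.

```python
# A minimal sketch of a Genetic Algorithm. Candidate "organisms" are bit
# strings; fitness is simply how many 1s they carry. The fitness function
# and environment are fixed by the programmer in advance, which is exactly
# the "static construct" criticized above.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(genome):
    return sum(genome)                       # the programmer-defined "food"

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]          # artificial selection
    offspring = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("best fitness:", fitness(max(population, key=fitness)))
```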

For example, Cellular Automata, pioneered by John von Neumann and later expounded upon by John Horton Conway in “The Game of Life” and Stephen Wolfram in A New Kind of Science, model nature quite effectively. Phenomena apparent in the natural world appear readily when these Cellular Automata run their algorithms inside their respective programs. There appears to be a self-organization, as groups of Automata are subject to their artificial environment, as well as artificial selection as they compete. Wolfram also noted that some Natural Phenomena which appear to be Chaotic are reproduced by these simple algorithms. (Wolfram) In Freedom Evolves, Daniel C. Dennett describes this idea of computer programmers creating life inside a computational substrate as “Hacker Gods”. (Dennett 44) He looks at Neural Networks and these Cellular Automata and describes them as deterministic. Although they create an illusion of being alive, at their base level their actions are determined by a program. Thus Automata can evolve inside the construct into more complex Automata, but can never evolve into something truly “alive”, such as a bacterium or a duck.
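For the curious, here is a minimal sketch of Conway's Game of Life rule. The glider pattern is standard, but the code is only an illustration of how a fixed, deterministic rule produces the appearance of self-organization.

```python
# A minimal sketch of Conway's "Game of Life": each cell lives or dies by a
# fixed local rule, yet gliders and other "organisms" appear to self-organize.
# The deterministic rule is the whole "physics" of the construct.
from collections import Counter

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A standard "glider"; iterating the fixed rule makes it crawl diagonally
# across the grid, an apparent organism with no program of its own.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))   # the same glider shape, shifted by (1, 1)
```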

Strong AI, if it resembled an Automaton as outlined above, would still exist in a construct. It might be accurately described as “intelligent”, but it would not exist on the level that we humans do. When von Neumann postulated his thought experiment of Cellular Automata, he was also envisioning a self-reproducing machine that existed in the real world. The development of AI along these lines gave rise to the field of Artificial Life, and may be the route to a truly intelligent substrate; one that can interact with humans on the level that humans interact with each other.

Subjecting these machines to Natural Selection in the real sense, and allowing for competition, would negate the necessity to build a model of Intelligence. The model would in fact build itself as a necessity for survival. However, there still exists a need to “program” these entities with the drive to survive, and to create a model for their “food”, which would obviously involve electricity. Thus, Strong AI would be halfway between top-down and bottom-up, as some form of modeling is absolutely necessary as a starting point. This becomes problematic, however, if Strong AI exists halfway between biological organisms and computers.

The third mode of thought in regards to Strong AI concerns what we should do when it arrives. Many Science Fiction writers have approached this idea. For example, Isaac Asimov's “Three Laws” provided a model for how Strong AI (in his case, Intelligent Robots) might be programmed to interact with humans. Such a simple solution seems foolhardy, for as I mentioned above, a truly intelligent AI would have the capability to change its own beliefs/programming. “To maintain the state of doubt, and to carry on a systematic and protracted inquiry-these are the essentials of thinking.” (Dewey 13) As Dewey deplores dogmatic thinking and elevates critical inquiry to the main tenet of human thought, forcing a machine to behave dogmatically would relegate it to a weaker, non-intelligent state.

In conclusion, AI research spans many fields of science, and should be thought of as a Philosophical endeavor rather than just an exercise in programming. Progress in AI has dual implications: we can use this tool to expand our own knowledge about the natural world, as well as use it to refine itself. As HCI becomes robust enough to allow for the integration of AI into our current tools, we are seeing the emergence of an Intelligence. How we define this Intelligence is therefore subject to critical analysis, and can be fed back into its development. Because Natural Intelligence is a result of Evolution, AI must model not only human thought, but the natural world as a whole.



Works Cited

Dewey, John. How We Think. Boston: D.C. Heath, 1910.

Dennett, Daniel C. Freedom Evolves. Viking Penguin, 2003.

Gašević, Dragan, Dragan Djurić, and Vladan Devedžić. Model Driven Architecture and Ontology Development. Springer, 2006. ISBN-10: 3-540-32180-2.

Hickman, Larry. Philosophical Tools for Technological Culture. Indiana University Press, 2001.

Lenat, Douglas. Google TechTalks, 30 May 2006.

Searle, John R. "Minds, Brains, and Programs." The Behavioral and Brain Sciences, vol. 3, 1980. Cambridge University Press.

Spector, Lee. "Evolution of Artificial Intelligence." Artificial Intelligence, vol. 170, no. 18, December 2006, pp. 1251-1253. Special Review Issue.

Turing, Alan. "Computing Machinery and Intelligence." Mind, vol. LIX, no. 236, October 1950, pp. 433-460.

Whitson, George. "An Introduction to the Parallel Distributed Processing Model of Cognition and Some Examples of How It Is Changing the Teaching of Artificial Intelligence." ACM SIGCSE Bulletin, vol. 20, no. 1, Feb. 1988.

Wednesday, November 8, 2006

Network Neutrality


On June 8th, 2006, the US House of Representatives passed the Communications Opportunity, Promotion and Enhancement Act (COPE) without provisions for what has widely become known as “Network Neutrality”. With the Democrats regaining control of the House and Senate, there has been increasing hope for a bipartisan agreement, and for moves to include language which restricts the large telecommunication companies from creating a “tiered” Internet. “The Senate bill fails to promote innovation and competition by prohibiting broadband network operators from unfairly discriminating against their rivals.” (Inouye et al.) This has attracted much criticism. “The primary problem with the proposals is more complicated than their advocates have admitted in their calls to ensure that all Internet content is treated equally.” (Koprowski) Once the 110th Congress convenes on January 3rd, 2007, there should be a better understanding among congressmen and women about what Net Neutrality restrictions mean. Broadband services, the FCC's involvement therein, and Infrastructure development are among the many issues that the COPE Act addresses.

The large majority of Americans haven't the slightest idea what a non-neutral net would entail, or why companies such as AT&T and Verizon are taking great steps to avoid Net Neutrality requirements. Some Americans trust that their ISP has their best interests in mind, and that they will have access to whatever they want because they pay for access to the World Wide Web. We sometimes forget that it is each individual's contribution, however small, taken collectively, that makes the Internet so robust. A non-neutral net would mean that Internet Service Providers would be able to prioritize certain traffic, and thus drastically alter the way we surf the Internet. The creation of a “tiered” Internet would mean favoring services owned by the cable providers; small Internet start-ups, as well as major players such as Google and YouTube, would need to pay more to the Internet Service Providers (ISPs) for equal exposure.

The idea of “Network Neutrality”, and the potential for abuse by service providers, is highlighted at length by Columbia Law School professor Tim Wu. He has been credited with coining the phrase, and has accomplished much in the way of alerting the public to its importance. In his 2003 paper “Network Neutrality, Broadband Discrimination”, he explains the restrictions providers place on those who subscribe to their services, and advocates for the preservation of an “evolutionary”, or “Darwinian”, Internet. Professor Wu argues that the “end to end” model of the Internet, in which alternative forms of traffic are not discriminated against, allows for a kind of “Natural Selection” in which the most innovative applications survive. “Email, the web, and streaming applications are in battle for the attention and interest of the end users. It is therefore important that the platform be neutral to ensure the competition remains meritocratic.” (Wu 6)

The traffic on the Information Super Highway we have all come to know as “free and open” is already being modulated by highly integrated media-giant gatekeepers, and is in danger of being “taxed” if not kept neutral. These companies, such as AOL/Time Warner, Verizon, and AT&T, do not own the traffic they ferry, just as the telephone companies do not own the voices traveling along their lines, and if Net Neutrality restrictions are not imposed, the opportunity exists for absolute and unmitigated control of how and when you can send and receive information on the Internet. The effect on E-commerce could be to change it from an open market to a highly controlled, advertisement-driven arena where any new and innovative idea dies when it can't compete with the vertically integrated holdings of Big Telco.

The Internet's infrastructure is unique. Most people would agree with the analogy to our public highway system. Our government awards contracts to certain companies to improve the infrastructure, paving the way for more development and innovation, and improving the Public Good. There are, however, toll systems set up to help maintain it. This analogy does not take into account the fact that the Internet grew up on a diet of small, independent, and private innovators. This was the American Dream in action; during the Dot-Com Boom of the 1990s, anyone with a good idea could become rich, and many did. Multi-billion-dollar companies arose from these start-ups, and a few survived the popped bubble at the beginning of the 21st century. If we do not learn from that lesson, we will be damned to another collapse due to a frantic grab at revenue. It is easy to see how discriminatory an Internet “tax” would be, and an unjust, unchecked highway owned by Media Giants could cripple our economy.

In figure 1, we have a basic representation of what is referred to as the “Internet Backbone”. The physical structure itself is comprised of several different networks, each one owned by an ISP, such as AT&T or Sprint/NEXTEL, or run by the military. Local ISPs then connect to this backbone. This becomes problematic, however, when bandwidth promised to the customer is restricted (Wu 37), and inefficiencies in the service become apparent because of a lack of foresight (Wu 38). Until the telecommunication companies are forced by the FCC to keep up with the innovation of new technologies, and divorce themselves from the trends which led to the bubble burst in 2002, we will see the Internet decline as a tool; instead, it will come to resemble mainstream television.

Figure 1: Map of the Internet backbone (edatarack.com)

Currently, many Internet Service Providers are “throttling down” certain forms of data without any input, guidance, or control by the FCC. This is called Traffic Shaping, and some forms of data being altered include VOIP, P2P, and BitTorrent traffic. “The wicked ISPs have, increasingly, opted to block BitTorrent (and indeed other P2P protocols as well) using technology known as traffic shaping.” (Livingstone) The problem with this is that only some of this data may be illegal, and the only way to tell for sure is to violate the privacy of their own customers. So why would these companies go to such great lengths to shape what traffic passes through them, the Gatekeepers? The answer is simple: to control, if only slightly, what we get when we pay for our bandwidth. With the United States lagging behind 14 other countries in Broadband Penetration, it is clear that the infrastructure of the Internet is underdeveloped and in danger of dying. (see fig. 2) It is therefore our elected officials' job to place restrictions that prevent this monopolizing of web content, and to encourage progress in Infrastructure development and multimedia innovation through regulation by the Federal Government.
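To see what “throttling down” means mechanically, here is a minimal sketch of one common shaping mechanism, the token bucket. The rates and the BitTorrent classification are assumed for illustration and do not describe any particular ISP's implementation.

```python
# A minimal sketch of one common traffic-shaping mechanism, the token
# bucket: packets of a targeted protocol are forwarded only when enough
# "tokens" have accumulated, which caps that traffic class at a chosen
# rate. This is an illustration, not any ISP's actual implementation.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True          # forward the packet
        return False             # delay or drop it: the traffic is "shaped"

# Hypothetical policy: cap anything classified as BitTorrent at 64 KB/s.
bittorrent_bucket = TokenBucket(rate_bytes_per_sec=64 * 1024, burst_bytes=128 * 1024)
print(bittorrent_bucket.allow(1500))    # a typical packet gets through...
print(bittorrent_bucket.allow(10**9))   # ...a flood does not
```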

Figure 2: Broadband subscribers per 100 inhabitants (International Telecommunications Union)

The lack of a robust Internet Backbone and the slow implementation of fiber optics, coupled with overcharging of the consumer, lead to mistrust of Big Telco and necessitate a supplementary network. Metropolitan Wireless Mesh Networks are an obvious choice; although they can't yet fully supplant our current Infrastructure, even small cities are trying to take advantage of this technology. It offers a solution which is not heavily regulated by the FCC, and can increase Broadband Penetration. Providing High-Speed Internet for a large number of people is now being taken up by local governments, spurring innovation. Recently, the City of Carbondale's Information Systems Division awarded a contract to do just that. Commenting that “Technology is crucial to growing commerce and tourism in downtown areas”, Lt. Governor Pat Quinn captures the essence of how Infrastructure development and an open Internet create opportunities for small businesses.

We can, however, see exactly how this opportunity can be closed to the general public and to future start-ups on the Internet. If the telecommunication companies and broadband service providers wished to slow, or even completely ban, the traffic of certain applications, advertisements, or even certain E-commerce, there would currently be nothing stopping them. If the people who write programs, such as web browsers, face these extra hurdles, how could they compete in such a market? If a company's advertisements load more slowly than others', or not at all, how can it survive?

We can also see that the public is still mostly unaware of the subject of Network Neutrality, or at least is being misled. Currently, “[T]he National Cable and Telecommunications Association is taking its Net Neutrality campaign to a bigger stage, showing 30-second TV spots which dismiss the debate as 'mumbo jumbo,' nationally.” (Wilson) Ironically, this campaign has extended beyond their own media outlets to posting videos on sites such as YouTube.com, which is exactly the kind of innovative web application threatened by their proposed “hands off”, non-neutral Internet.

A large majority of consumers would agree that placing restrictions on companies which traffic in information might seem to violate our right to Free Speech. However, the telecommunications companies have been enjoying the status quo, and any move to prevent them from monopolizing the infrastructure for a hegemonic control of E-commerce threatens their hold on it and is met by their lobbyists. What most people fail to realize is that the most wonderful recent innovations are bandwidth-intensive, and our infrastructure must improve at a faster rate, rather than capping customers' speeds or charging them more. This leads me to my final observation: how grassroots organizations are taking up the issue, transcending partisan politics, and what the Internet could resemble in the near future if the dialog of Net Neutrality is included in the COPE Act.

The organization Freepress.net is a media watch group affiliated with SaveTheInternet.org, and both are working to change public policy regarding Big Media. With the increasing integration of large media conglomerates, such as AT&T's potential merger with BellSouth, groups such as these need a unique approach to combat special interest groups and lobbyists. The Save The Internet coalition spans political boundaries, and works to educate and petition rather than lobby. It includes Conservatives, such as the Christian Coalition of America, and Liberals, like MoveOn.org, among many others. They appeal to a public that has come to need and love the Internet. Success for Net Neutrality could have a domino effect on the world of politics, and remove obstacles blocking our path to a superior Internet. Bridging the partisan gap with social networking sites, Americans could see a profound and positive surge in the Internet experience as a whole, and create a demand for a truly “open and free” forum for change. For this we need a robust and neutral environment driven by ideas rather than special interests.


Works Cited

Inouye, Dorgan, and Boxer. United States. Communications Opportunity, Promotion and Enhancement Act of 2006. 109th Cong.

Koprowski, Gene. "Putting the Net in Neutral: The Communications, Opportunity, Promotion, and Enhancement Act." EContent, vol. 29, no. 7, Sep. 2006, p. 8.

Livingstone, Adam. "A Bit of BitTorrent Bother." BBC Newsnight, 28 Feb. 2006. Accessed 27 Nov. 2006. http://news.bbc.co.uk/2/hi/programmes/newsnight/4758636.stm

Press Release. "Lt. Gov. Quinn awards $17,875 grant to create a Wi-Fi community: Downtown Carbondale Goes Wireless!" www.illinois.gov, 9 Oct. 2006. Accessed 27 Nov. 2006.

Wilson, Carol. "Net Neutrality Not Going Away." Telephony, vol. 247, no. 14, 11 Sep. 2006, p. 9.

Wu, Tim. "Network Neutrality, Broadband Discrimination." Journal of Telecommunications and High Technology Law, vol. 2, p. 141, 2003.

Figure 1: "Ipnetmapbig." Map. edatarack.com. Accessed 27 Nov. 2006.

Figure 2: "Broadband Subscribers Per 100 Inhabitants." Chart. Website Optimization / International Telecommunications Union. Accessed 27 Nov. 2006.