
Why Artificial Intelligence is Impossible

by dagnabitt

Both GOFAI (Good Old Fashioned Artificial Intelligence) and Connectionism attempt to [B]describe[/B] the way intelligent systems must operate in order to account for the vast reserve of human behaviors. Each theory contests the other based on the extent to which its own position can explain various behavioral phenomena found in human intelligence, from formal deduction to pattern recognition. The unifying claim of both theories is that an accurate exposition of some internally communicative physical network, neural or otherwise, that can imitate to a greater or lesser degree the intelligent behavior of humans, is a sufficient condition for describing the intelligence of humans. Further, both theories maintain that we as investigators can attribute a similar intelligence to any non-human entity or physical system that can reproduce such behavior. Hence if a machine can mimic the behavior of humans, so that it reproduces the same output as humans given similar input, then this is a sufficient condition to say that both machine and human have the same intelligent capacities.

My thesis in this paper is to show that regardless of how human intelligence may be described, or how intelligent behavior may be replicated in non-human entities, we as human investigators can never [B]necessarily[/B] attribute a similar intelligence to machines, such that all possible doubt is removed. I wish to argue that when we consider "human intelligence" we cannot help but do so within the context of our own intimate or personal understanding of what it [B]means[/B] to be human. That is, the agendas, motivations, and biopsychophysical coercions that are intrinsic to human entities are fundamental aspects of human behavior that cannot be removed from how we think about or judge human intelligence. Regarding our own behavior, we are implicitly aware that our intelligence is always in service of some motivation or need, apart from which such intelligence would be of little value or comprehension. We, as humans, intimately know that our intelligence is not gratuitous, but is rather in service of our [B]purpose[/B] as humans, however that may be defined. We also clearly attribute this quality to other humans, as we take for granted the intelligence of fellow investigators or laypersons in this venture, and argue only about machines. In this way we demonstrate in practice that machines are foreign and abstract entities to us, while the same is clearly not true of other humans. It is thus not possible to make judgements about the intellectual equality of humans and machines, as we as humans can in no way relate to a machine, which has no such agenda or intrinsic motivation, as our ontological equal.

Hence both GOFAI and Connectionism, while placing emphasis only on the description and reproduction of intelligent behavior, are mistaken in extrapolating further that such efficacy is identical with a total conceptualization of human intelligence. So long as we do not share with our creations the instinctual and motivational necessity inherent in us, and that we take for granted in other humans, we shall never be able to say that they are intelligent in the same way. We do not look at our own intelligence independent of our motivations and agendas, so we surely cannot do this for machines. Between humans and machines there is a necessary difference, and this difference must be respected when comparing the intelligence of the two kinds of entity.

GOFAI claims that intelligence involves the manipulation of atomic symbols, which act as templates of experience. Such templates are atomic in that they are elemental and irreducible, and GOFAI presumes they act as the fundamental building blocks from which all knowledge is formed. That is, the ability to use and store symbols within some physical system, be it organic or otherwise, is a necessary condition for intelligence. These symbols are manipulated according to rules of formal logic, such as deduction. Within such a system each symbol may act as an axiom, so that the epistemic value of earlier elements in a sequence (input) can determine the logical conclusion of the sequence (output). Insofar as such behavior is replicable in physical symbol systems that are not human, it is to this same extent that such systems are to be considered intelligent on par with humans. That is, a physical symbol system has both the necessary and sufficient means for intelligent behavior, and is limited only to the extent that it lacks applicable symbols to utilize. Hence GOFAI claims that a physical symbol system with the same sheer number of symbolic representations as a human would possess a comparable intelligence.

GOFAI sees intelligence as essentially reducible to the ability of a dynamic system to manipulate and store signifiers or symbols in a logical way. Such symbols might be letters, which in themselves are arbitrary representations of entities (in the case of letters, phonemic elements). These can be combined via logical syntactic rules (for example, I before E except after C) to form words, sentences, and paragraphs, or expressions that can be communicated (output). When the rules of syntax are applied, any such letters, provided they are authentic, can be formed into comprehensible combinations that allow for semantic expression and some degree of understanding.

For GOFAI, the elements themselves become somewhat inconsequential during this process, while the process itself, that is, logical syntactic manipulation, can be presumed fundamental. The various elemental letters become arbitrary expressions of a logical system that operates independent of its particular content. This process, for GOFAI, is thought to express the functionality of human intelligence, as well as to be fairly easily replicable in non-organic entities. That is to say that flesh is not the only medium in which such behavior can manifest. (1) A machine, for example, can be programmed according to basic input/output rules that will reproduce logical deduction in an automated fashion. Such a machine, given certain symbolic input, for example a proposition (all crows are black), will change, eliminate or reproduce further symbolic composites later in the sequence so that they logically follow from this base symbolic axiom. Hence, if our original propositional input is "all crows are black", and later input is "this is a crow", the system can properly deduce as output that the second proposition relates to the first in such a way as to conclude that "this crow must be black." The process itself, however, operates independent of specific content, so that the computer need never "know" a crow to be able to deduce these truths, but only the similarities and differences among the symbolic representations involved.
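Purely for illustration, a toy version of this kind of content-independent rule application might look like the following sketch (the data structures and names are my own, not drawn from the GOFAI literature); it reaches the conclusion without the system "knowing" anything about crows:

[code]
# Toy physical-symbol-system deduction, in the spirit of GOFAI.
# The program never needs to know what a crow is; it only matches
# and recombines the symbols themselves.

rules = {("crow", "black")}              # "all crows are black"
facts = {("this bird", "crow")}          # "this is a crow"

def deduce(facts, rules):
    """Simple modus ponens over symbol pairs: if X is a C, and all C are P,
    then conclude that X is P."""
    conclusions = set()
    for individual, category in facts:
        for rule_category, prop in rules:
            if category == rule_category:
                conclusions.add((individual, prop))
    return conclusions

print(deduce(facts, rules))              # {('this bird', 'black')} -- "this crow must be black"
[/code]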

GOFAI looks at intelligence as if it were a collection of modular representations upon which basic logic can be performed to derive certainties, and hence knowledge. That is, each possible symbolic axiom has a unitary position within the intellectual model, and an entity's intelligence is limited only by the number of representations, and hence the number of possible logical conclusions that can be deduced. Further, it is also presumed that human intelligence is merely the interaction of basic logical operations with a large number of individual representations. So "all crows are black" may relate to other semantically similar propositions like "all crows are birds", then "all birds are animals", then "all animals are alive", then "all birds are alive", and so on. All knowledge in this framework is derived from the semantic relatedness of similar signifiers, while each particular meaning is derived from a particular causal interaction with this logical web of meanings and relations.
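Again purely as an illustration (the relation format below is my own shorthand, not part of either theory), such a web of propositions can be chained mechanically, with no grasp of what the terms mean:

[code]
# Toy "web of meanings": each pair (A, B) reads "all A are B".
# Chaining the pairs (a transitive closure) yields new conclusions
# such as "all crows are alive", with no semantic understanding involved.

relations = {
    ("crow", "black"),
    ("crow", "bird"),
    ("bird", "animal"),
    ("animal", "alive"),
}

def forward_chain(relations):
    """Keep combining 'all A are B' with 'all B are C' into 'all A are C'
    until nothing new can be added."""
    known = set(relations)
    changed = True
    while changed:
        changed = False
        for a, b in list(known):
            for c, d in list(known):
                if b == c and (a, d) not in known:
                    known.add((a, d))
                    changed = True
    return known

print(("crow", "alive") in forward_chain(relations))   # True
[/code]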

This is problematic for several reasons. First, GOFAI ignores the behavior of creatures that do not appear to use explicit symbol manipulation in their behavior. Dolphins, for example, or even dogs, cannot be presumed to use logic in the way the GOFAI model presumes, yet it is clear we cannot consider them "unintelligent" creatures. Perhaps, under my thesis, we have even more grounds for calling a dog more intelligent than a machine, despite the machine's obvious use of symbols, as it is clear that whatever intelligence the animal does possess is clearly related to its purpose as an animal. It has motivations and drives (for example sleep and hunger) to which we can more easily relate than to a machine, and hence we more readily understand that a dog [B]must[/B] be intelligent to some degree. Such animals clearly use procedural rather than propositional intelligence, regardless of what their mental states may be like. This cannot be ignored when we consider intelligence as a whole.

Also, GOFAI focuses only on top-down, or conceptually based, deduction as its criterion for intelligence. Even if we grant that such behavior is a necessary constituent of intelligence, we cannot say that it is an exhaustive expression of intelligence. Intelligence clearly involves many pre-epistemic, bottom-up, or perceptual behaviors that are as fundamental as the concepts we may later derive from them. Object and pattern recognition, for example, are pre-conceptual psychological processes that can in no way be excluded from the process of intelligence; yet GOFAI fails to account for such behavior.

A third possible complaint against GOFAI is that it cannot account for novelty and spontaneity in human expression. Searching previously stored templates of information for a given output is clearly uneconomical: the time such a search should take to solve any problem far surpasses the immediacy with which humans actually perform these actions. Also, humans are clearly adept at adjusting their behavior to new situations that have never been experienced before, or more specifically, at generating output for situations for which they have previously received no input. Given the GOFAI framework, it is difficult to see how humans can so easily adapt to novelty, as they theoretically should not have the epistemic resources.

Lastly, GOFAI cannot account for ambiguity or contextual variability in the use of certain phrases, as a human easily can. To say "Stan has a big heart", for example, carries at least two distinct semantic meanings, while the phrase in itself expresses neither with any certainty. It is difficult to see how a physical symbol system could possess a representation of such a phrase independent of the context within which it is used. So even if a computer can manipulate such phrases logically, it is difficult to see how it could have any semantic comprehension of the terms involved. For these reasons GOFAI is left wanting.

Connectionism, modeled on Hebb's "cell assembly" theory of neurons, takes a functionalist approach to intelligence, taking the biological (versus the theoretical) human brain as its physical model. According to Connectionism, intelligence does not so much work by linearly deducing logical conclusions from symbolic propositions as it works in parallel, performing many relevant operations simultaneously. Knowledge in such a system is represented not in static nodes that require individual reference during each task, but rather in the [B]connections[/B] between groups of units. In this way, intelligence manifests not so much in logical deduction and the manipulation of coherent symbols as in patterns of neural operation that activate representations epiphenomenal to an assembly's neural activity. Symbolism is not purely representative of static concepts and entities in the isomorphic way presumed by GOFAI, but rather exists in the [B]patterns[/B] of neural firing.

Neural networks need not worry about the unrealistic search procedures and storage problems implicit in physical symbol systems. According to Connectionism, "neurons that fire together, wire together". This embodies the idea that when a neuron is activated, perhaps during a task such as pattern recognition, it is not activated alone, but rather in accompaniment with several other neurons, perhaps those for colour and size. The theory is that when one neuron in an assembly fires, it works so as to create an affinity between the other neurons that fire simultaneously. This affinity allows for greater sensitivity of each neuron in the assembly, increasing the likelihood that a given neuron will fire if one of its assembly partners fires. Thus, if an apple is the original representation, then we have perhaps three (for the sake of argument) separate neurons that will fire upon presentation and encode some information. We'll say one fires for colour, one for size, and the other for shape. Because these neurons all fire in unison upon the original presentation of the apple, the subsequent activation of only one or two of the neurons, for example colour and size, will be sufficient to activate the "shape" neuron as well, even in that quality's absence. Hence the representation of the apple is made more concrete with each presentation, and the salience and familiarity we experience are the result of this learning process. This whole scheme implies that the knowledge most relevant to us, defined as such by being most often presented to us in our learning environment or by past internal manipulations, is also the most accessible to us, so that when we observe something familiar, or encounter previously experienced problems, we are quicker to proclaim "apple" than some other possibility.
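To make the mechanism concrete, here is a minimal sketch of the "fire together, wire together" idea using the three apple neurons above (the numbers, threshold and update rule are my own illustrative choices, not taken from Hebb or from any particular connectionist model):

[code]
import numpy as np

# Three units stand in for the apple's colour, size and shape.
n_units = 3
weights = np.zeros((n_units, n_units))

def present(pattern, weights, rate=1.0):
    """Hebbian update: strengthen the connection between every pair of
    units that are active together during this presentation."""
    p = np.asarray(pattern, dtype=float)
    weights += rate * np.outer(p, p)
    np.fill_diagonal(weights, 0.0)        # no self-connections
    return weights

def complete(partial, weights, threshold=0.5):
    """Pattern completion: a unit switches on if it receives enough input
    from its assembly partners, even when it was not cued directly."""
    cue = np.asarray(partial, dtype=int)
    activation = weights @ cue
    return (activation > threshold).astype(int) | cue

# Repeated presentations of the whole apple (colour, size and shape together).
for _ in range(5):
    weights = present([1, 1, 1], weights)

# Cue only colour and size; the "shape" unit is recalled by its partners.
print(complete([1, 1, 0], weights))       # [1 1 1]
[/code]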

Such a theory accounts for our necessary connection to an environment to which we must adapt, and has particular educational significance for us. The solutions we come up with for the problems we encounter are not the result of searching a potentially infinite (or storage-limited) number of discrete representations, but rather of the activation of like-wired neurons that are familiar with the problem via experience.

Connectionism thus addresses many of the problems that beset GOFAI. For example, spontaneity is easily accounted for, as no search process is needed. Novelty is accounted for by the degree to which a novel situation resembles situations encountered previously. This can be evidenced by the obvious human tendency to simply pause before, or approach with caution, novel situations. The degree of novelty, in theory, should approximate the latency in problem solving.

Animals are better understood within the Connectionist framework, as such a neural model can be applied to all behaviors, even if we do not wish to ascribe a like consciousness to other species. The tendency of a dog, for example, to retrieve its owner to get it some food, or some such behavior, can be explained by simple stimulus-response behaviorism that can easily be integrated into the Connectionist model. The animal has learned to associate its owner with food, and hence these co-representations have become strongly attached. Symbolic representations become unnecessary in this scheme.

Connectionism can also account for lower-order, pre-epistemic, or bottom-up behaviors such as pattern or object recognition. This can be exemplified by my apple example above. The representations of particular objects are not so much modular concepts held in store as the strengthened affiliations between basic qualities that are continuously presented in the environment. So where GOFAI would have a deduction from propositions such as "it is a red thing", "it is a thing of this size", "it is of this shape", therefore "it must be an apple", Connectionism would recognize that all of these attributes are constantly experienced together (along with the word "apple") and hence are more likely to all fire simultaneously the next time the problem of whether or not an object is an apple arises.

Lastly, the contextual variability and relative appropriateness of phrases such as "Stan has a big heart" is no longer a problem, as Connectionism assumes precisely this context. All things held constant, a person devoid of all experience would not be able to identify the specific contextual meaning of this phrase, whereas given Connectionism such differentiation comes for free with experience.

While Connectionism does appear to be the more comprehensive of the two theories, it is not without its problems. For example, it does not do a good job of accounting for what does seem to be the logical structure of our thought. If all intelligence is merely a matter of degrees of efficacy in pattern recognition, then why do all humans tend to share a rational structure both in thought and in discourse? It would be congruent with Connectionism to say that such order comes from the rational structure of the world itself, and that our understanding merely reflects this order. However, as the philosophical tradition has demonstrated, such claims ultimately refer to a world independent of our actual experience, and are hence unwarranted and problematic. Thus while GOFAI is faulty for many practical reasons, Connectionism is not without its philosophical downfalls.

While both GOFAI and Connectionism attempt to describe the necessary internal operations of intelligent systems, their analysis can only go this far. When either theory attempts to extrapolate beyond description, and credits itself with the ability to attribute intelligence to non-humans, both fail. As stated above, what is independent of the description of an intelligent system is the inherent purpose of that system. In humans, such a system is clearly in service of an agenda within which the operations of the system can ultimately be given meaning. Independent of this agenda, such operations remain [B]solely[/B] descriptive. Humans deduce meanings from formal propositions for a reason: to solve a problem. While a machine may mimic this, it cannot be said to [B]have[/B] the same problem. That is, it cannot be said to have its "intelligence" in service of any greater motivation than the act of solving the problem in and of itself. If a computer differentiates an apple from a series of other objects, say rocks, it does so gratuitously. It is concerned only with the immediate problem at hand. It is not concerned with the reason why it is doing so in the first place. Simply, it is not hungry. It has no explanation for why it engages in such behavior, and without such a reason the behavior is itself rendered meaningless.

I think it can be said with some confidence that the human organism, with its needs and desires, is itself the base logical axiom from which all behavior, be it deduction or pattern recognition, is ultimately justified. Humans locate biologically significant stimuli in the environment, such as apples, because such stimuli facilitate the changing of some drive or motivational state within the human. That is, humans locate apples apart from rocks because humans get hungry, and only apples, not rocks, will help solve the problem of being hungry. It is difficult to imagine a human behavior of any kind that is not similarly in service of some motivation, which is itself in service of some need, some removal of discomfort, or some increase in pleasure. These motivations are unique to humans (and probably other animals), and are not presumed to occur in machines. Hence, a machine has no explanation for its behavior. Its actions are not in service of its own needs, but rather in service of ours. Its action is gratuitous and has no purpose other than what is programmed into it by another. The human condition, that of survival, problem solving, and the production of comfort and removal of pain, is the only framework within which behavior of any kind can be called "intelligent". Our intelligence is the degree to which we can solve the problems we are faced with, and reduce our discomfort in the face of difficulties. But such intelligence can only be recognized in an entity that is at risk of losing, of not succeeding, or of dying. That is, a being is only intelligent if it is possible that failing to use such methods would cause the being to suffer. A being would simply cease to exist if it failed to secure its subsistence; a computer has no such subsistence to secure.

It is also clear that, in practice, we take for granted the intelligence of others who share our condition. We look not just at their behaviors; we make judgements about whether what they do is in fact in their best interest. These judgements would be pointless if we knew their existence didn't depend on anything they could in some way control. We presume they have struggles. We presume they will die if they don't eat, or if they don't slow down when driving in bad weather. We presume that they will fail to reproduce if they exert themselves too strongly or too weakly, and that this will cause them discomfort. We presume these things because we can empathize with their condition; it is natural for us to do so. They share our struggle, and we cannot help but recognize this. In turn we cannot help but recognize their intelligence. It is also clear that we do not share this bond with machines. We can look at them only in terms of the behaviors they can reproduce, and thus our understanding of them is limited to our description of them. We cannot go beyond it. The machine is our creation, and is truly alienated from us like any art from its artisan. Its structure and content are, for us, beautiful, as would be a painting of a beautiful image. But we cannot mistake this painted image for the beautiful object that inspired it. We cannot relate to a machine in any way that would let us sympathize with its condition as we do with other humans. And we cannot attribute to a machine an intelligence that is only given meaning within the context of our unique human experience. To attempt to do this is pure anthropomorphism of the oldest kind. We may only understand intelligence relative to ourselves. We may not do so for our creations, which have no purpose other than the extension of human purpose and accomplishment. We can in no way attribute to our creations a purpose of their own, independent of us. It is like building a kite and then insisting that the kite must fly to secure its own existence. It does not. It flies for our enjoyment only.

So both GOFAI and Connectionism are descriptive, to various degrees, of how intelligent systems must operate. But they cannot explain in any way [B]why[/B] such systems operate the way they do, or what condition such intelligence will be in service of. Only the human condition, with its needs and desires, motivations and agendas, can explain this. Since we do not presume machines to have such attributes, and since we do not relate to machines as we do to each other, we are in no position to presume any equality between humans and machines concerning their intelligence. Hence both theories, while more or less descriptive of human intelligence, are fundamentally mistaken if they believe they can extrapolate so far as to claim intelligence in a mechanical entity simply because it mimics such described behavior.

Notes

1) This claim has been validated to some degree by the theory of the "Turing machine".

 