What is Artificial Intelligence? Consider this excerpt from Tom Holt’s novel “Almost Human”:
“The robot hesitated, while the Appeal Court of its mind pondered the nuances of the Laws of Robotics. Eventually they handed down a decision stating that the overriding law which supervened all others was that no robot shall say anything, no matter how true, that will inevitably earn it a smack in the mouth with a 5/8” Whitworth spanner. “Sure thing, boss.” it said”
Is “artificial intelligence” then the point at which a machine’s ability to think can override programming, or is it the lesser test of applying mere rules/programming to provide answers to a variety of problems?
At present our best efforts to create artificial intelligence have produced little more than the amazing, human-like ability of a computer program to understand that the letter Y means “yes” and the letter N means “no”. This may seem a little facetious, but it is ironically not far from the truth of the situation.
If we forgo any preconceptions about the semantics applied to the word “intelligence” with respect to a technological form as opposed to a human one, it becomes apparent that this is akin to using the word “flying” to describe both birds (biological) and aircraft (technological) as forms of heavier-than-air flight.
The field of study into the possibility of artificial intelligence necessarily assumes that it is possible to synthesise something that satisfies the conditions for “intelligence”. Not everybody accepts the current presumptions made about human cogitation and deductive systems, which are from time to time ridiculed by critics who argue on a variety of grounds that artificial intelligence is doomed to failure. A good example of such a philosophy is Tesler’s law, which defines artificial intelligence as “that which machines cannot do”, implying that artificial intelligence is impossible and that concepts and attributes such as intuition are abilities unique to humans.
At this point I would like to draw a distinction between, on the one hand, artificial intelligence as inferred from the hypothetical interrogation procedures of the Turing test, which in effect merely tests a system’s ability to imitate human-scale performance through programming, and as such is a simulation of the desired effect, and, on the other hand, a system’s genuine intellectual capacity to learn, manage, and manipulate natural language or exhibit free will, et cetera.
For example, using the Turing test as a model: if a computer exhibited the ability to make decisions that, if made by a human, would indicate the use of intuition, the system would pass, not because it possesses intuition, but because the test measures only its stimulus-response replies to input, not action of its own accord.
The study of artificial intelligence is a sub-field of computer science primarily concerned with the goal of achieving human-scale performance that is indistinguishable from a human’s, using symbolic inference (the derivation of new facts from known facts) and symbolic knowledge representation to build the ability to make inferences into programmable systems.
An example of inference: given that all men are mortal and that Socrates is a man, it is a trivial step to infer that Socrates is mortal. Humans can express these concepts symbolically, as this is a basic part of human reasoning; in this manner artificial intelligence can be seen as an attempt to model aspects of human thought, and this is the underlying approach to artificial intelligence research.
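The Socrates syllogism above can be sketched as a tiny forward-chaining inference engine, the kind of symbolic mechanism this paragraph describes. This is a minimal illustration, not any particular system’s implementation; the fact and rule representations are invented for the example.

```python
# A minimal forward-chaining inference sketch: derive new facts from
# known facts by repeatedly applying if-then rules until nothing new
# can be derived. Representations here are purely illustrative.

# A fact is a (predicate, subject) pair, e.g. ("man", "Socrates").
facts = {("man", "Socrates")}

# A rule is (premise_predicate, conclusion_predicate):
# "for every X: man(X) implies mortal(X)".
rules = [("man", "mortal")]

def infer(facts, rules):
    """Apply rules to the fact base until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                new_fact = (conclusion, subject)
                if predicate == premise and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(infer(facts, rules))  # includes ("mortal", "Socrates")
```

Running the sketch derives `("mortal", "Socrates")` from the single rule and fact, which is exactly the trivial step of inference described above, carried out mechanically.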
If for the sake of argument we were to assume that ‘intelligent’ processes are reducible to a computational system of binary representation, then the general consensus amongst artificial intelligence authorities, that there is nothing fundamental about computers that could prevent them from eventually behaving in such a way as to simulate human reasoning, is a logical one. However, this necessarily assumes that practical everyday reasoning is not the optimum form of human cogitation, and that deductive, mathematical, and logical reasoning is all that is required to be ‘intelligent’.
If, however, we assume for the sake of argument that intelligence is not a mutually exclusive entity, but rather the convergence of characteristics other than logical deduction or mathematical reasoning, such as emotional characteristics, which together play a collective role in thought, decision making and creativity, then the greatest part of human intelligence is not computational, and consequently not precise. The development of artificial intelligence based on the current model of pure binary logic would then potentially result in only the precise forms of human thought being simulated.
A great deal of research has been done on inference mechanisms and neural networks, which has ironically been of more use in learning about human intelligence through the process of simulating intelligence in the machine than the other way around. Such research has, however, produced an uncertainty about our own thought processes.