Cyc was a natural consequence of history

Building Cyc within the above design space may seem like an odd way of doing things. To a student of history, it will seem more like a natural outcome of the developments preceding it. Let's take a quick look at how things got here over two thousand years.

Logic is a representational framework with long roots in history. As with almost every other science, its beginnings can be traced back to the dialogues of Socrates and the writings of Aristotle (4th century BC). At the time, the axiomatization of mathematics had just begun: they were preceded by Pythagoras by 200 years, and immediately followed by Euclid. Aristotle set down the rules for dialectic (the art of logical discussion employed in finding out the truth of a theory or opinion), and he categorized syllogisms (the early form of deduction). Although the rules he set down were not as clear-cut as modern logicians would like them to be, a collection of regularities for making logical arguments was discovered [Stevenson, 1996].

Leibniz (1646-1716) is the founder of mathematical logic. Unfortunately he abstained from publishing his results, because he kept finding evidence that Aristotle's doctrine of the syllogism was wrong on some points; respect for Aristotle made it impossible for him to believe this, and he mistakenly supposed that the errors must be his own. Thus mathematical logic went undiscovered for another century and a half. His inspiration was the hope of discovering a kind of generalized mathematics, which he called the Characteristica Universalis, by means of which thinking could be replaced by calculation. He is quoted as saying:

``If we had it, we should be able to reason in metaphysics and morals in much the same way as in geometry and analysis. If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, to sit down to their slates, and to say to each other: Let us calculate.'' [Russell, 1945]

With a little imagination we can replace the accountants and their pencils with computers. Maybe we should think of Leibniz as the founder of AI. Almost two centuries later, inspired by the writings of Leibniz, Boole developed the theory of propositional logic, and modestly called it The Laws of Thought (1854) [Kurzweil, 1990].

Frege finally laid the mature foundations of modern logic as we know it (1879). From his work it followed that arithmetic, and pure mathematics generally, is nothing but a prolongation of deductive logic. He remained without recognition until Russell drew attention to him in 1903. Russell's own work with Whitehead culminated in Principia Mathematica, which laid the foundations of mathematics on set-theoretic principles and logic.

Logic went through its own evolution over the centuries. We can separate this evolution into three stages. The idea of classical logic was to start from a limited number of axioms (the core), and to show that each theorem follows logically from the axioms and the theorems preceding it, according to a limited number of rules of inference. The idea of modern logic is to represent axioms and theorems as sequences of opaque symbols, and to specify the mechanical manipulations of these symbols that correspond to the chains of deduction in classical logic. The idea of meta-mathematics, originated by Hilbert at the turn of the century, is to construct the symbols and their mechanisms of manipulation independently of any interpretation. The system can then be interpreted as representing a deductive system if a valid isomorphism can be found. Each of these steps is an abstraction away from the actual subject matter represented. This abstraction is important for proving theorems about the symbol manipulation system itself. In fact, this effort culminated in Gödel's incompleteness theorem (1931), which states, roughly, that the two thousand year effort of completely formalizing mathematics can never succeed [Gödel, 1962].

Now let's look at how this development might have affected Artificial Intelligence. At around the same time as Gödel, Turing's theorems and abstract machines hinted at the fundamental idea that the computer could be used to model the symbol-manipulating process. This was the bridge between Leibniz's dream and its computer implementation.

In about thirty years, the field of Artificial Intelligence was born. Being a new science, it drew from its ancestors. Psychology at the time did not provide an adequate foundation: it was a collection of simple learning rules, Gestalt principles, and Freudian theories, none of which was sufficient as a root to build on. Mathematical logic seemed much closer to the computational way of thinking. The idea that one can specify a set of symbols and mechanical rules of manipulation that correspond to some external reality seemed appealing. This led to the first principle of the new science, the physical symbol system hypothesis.

Some people take the physical symbol system hypothesis as a form of the Church-Turing thesis: the claim that a universal machine can simulate anything in the universe given enough time and memory. I want to make clear that the physical symbol system hypothesis should not be understood in this trivial sense. Remember the point about the Doability Argument from the introduction: it is pointless to argue about the impossibility of an approach. If you push too hard, the physical symbol system hypothesis might include simulating each atom in someone's brain, in which case we have no argument. What is meant by the hypothesis is that the trick pulled by the modern logicians to formalize mathematics can be applied to ``thought'' in general: tying each separate ``concept'' (not atom or neuron) to a symbol, and then specifying some mechanical rules to push these symbols around, provides sufficient means for intelligent action. It basically boils down to the claim that a system like Cyc should work.
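The mechanical character of this idea can be illustrated with a toy deduction engine. The following is a minimal sketch, not Cyc's actual machinery; the symbols (e.g. bird(tweety)) and rules are hypothetical names chosen for illustration. The point is that the program derives new facts by blind symbol matching, with no access to what the symbols mean.

```python
# A toy forward-chaining engine: facts are opaque symbols (strings),
# and each rule pairs a list of premise symbols with a conclusion symbol.
# The engine never interprets the symbols; it only matches them.

def forward_chain(facts, rules):
    """Mechanically apply rules until no new symbol can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge, for illustration only:
rules = [
    (["bird(tweety)"], "has-wings(tweety)"),
    (["has-wings(tweety)"], "can-fly(tweety)"),
]
derived = forward_chain(["bird(tweety)"], rules)
# derived now includes "can-fly(tweety)", though the program has
# no idea what a bird, a wing, or Tweety is.
```

The hypothesis, in this light, is the claim that intelligent action requires nothing more than this kind of manipulation, scaled up to millions of symbols and rules.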

It should also be noted that this was not an original hypothesis. In fact, Frege was the first person to explicitly defend the role of symbolic formalism as a general representation to be applied to arbitrary domains. The definition of truth by Tarski [Tarski, 1935], which later led to Montague's implementation of model-theoretic semantics in linguistics [Montague, 1974], pre-dates the Dartmouth Conference (1956).

Now let's trace the thirty years of AI experience that led to Cyc. The first era of AI focused on isolated problems and search methodologies. Soon the need arose to attack real-world problems as opposed to investigating toy domains. The link between the programs and the real world had to be more knowledge. In the 1970s the ``knowledge is power'' hypothesis gained popularity, and expert systems flourished. But by the 1980s it was evident that these narrowly focused problem solvers were no closer to achieving intelligence than their search-based predecessors. An obvious difference between the programs that pushed meaningless symbols around and the intelligent beings that interpreted them was access to the meanings of those symbols. Thus Cyc was born.



Deniz Yuret
Tue Apr 1 21:26:01 EST 1997