
The ``high level'' reasoning

Imagine standing in front of an ice pond, wearing flat-soled shoes, and thinking ``what would happen if I tried to walk on this pond?'' Let's pose this question to Cyc and try to visualize the steps it would take. It would dig up what it knows about ice, shoes, and walking. It would find that ice typically has a low-friction surface. It would see that shoes are what you wear when you walk. It would go down the chain of its concepts to as deep a level as necessary to get the information it needs. Somewhere in its ontology there is an explanation of what friction is, and of why it is needed for walking. If we are lucky, it would finally conclude that we would slip and fall if we walked on an ice pond with flat-soled shoes.
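As a toy sketch of this style of inference (a hypothetical mini-ontology and rule, not Cyc's actual CycL representation or API), the system collects the facts attached to each concept in the query and fires any rule whose premises are all satisfied:

    # Toy ontology: each concept maps to the facts known about it.
    # Illustrative only; not Cyc's actual representation.
    FACTS = {
        "ice": ["has-low-friction-surface"],
        "flat-soled-shoes": ["provides-low-friction"],
        "walking": ["requires-friction"],
    }

    RULES = [
        # (premises, conclusion)
        ({"has-low-friction-surface", "provides-low-friction",
          "requires-friction"}, "agent-slips-and-falls"),
    ]

    def infer(concepts):
        """Collect facts about the given concepts and fire any
        rule whose premises are all satisfied."""
        known = set()
        for c in concepts:
            known.update(FACTS.get(c, []))
        return [conclusion for premises, conclusion in RULES
                if premises <= known]

    print(infer(["ice", "flat-soled-shoes", "walking"]))
    # -> ['agent-slips-and-falls']

Real Cyc would chase these links through many more levels of its ontology; the point is only that the conclusion comes from chaining through explicit facts.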

You, on the other hand, have first-hand experience of falling down on ice. You can imagine the sequence: your foot sliding, the sense of losing balance, the landing on your back. In fact you have low-level machinery that supports each step you take when you walk on ice, on rocky terrain, in a crowd, and so on. This means that you have a native system whose primitives are things like walking, running, sliding, and falling.

Cyc has a hierarchy in its ontology: high-level concepts are defined in terms of lower-level concepts. Cyc understands sliding and falling in terms of friction. We understood friction in high school in terms of our memories of sliding and falling. Lakoff characterizes this phenomenon with the term basic-level category [Lakoff, 1987]. We tend to understand more abstract concepts in terms of our bodies. We also understand more primitive concepts (if you agree that friction is more primitive than sliding) in terms of our bodies. It is as if our bodily concepts cut an ontology like Cyc's horizontally in half, and we reduce both the upper branches and the lower roots to the midline.

Why this twisted organization of things, when there is a natural hierarchy of generalizations and specializations? The simple reason is that we have the computational machinery to efficiently evaluate the things that come from the midline. We learn speaking before grammar, we learn walking before physics, we learn how to interact with our friends before sociology. It is only natural that we build the later layers of knowledge on top of what already exists, irrespective of whether they are higher or lower in some hierarchy. Papert's Mindstorms has wonderful examples of this structure [Papert, 1980].

Humans come into the world with powerful computers and optimized representations in a number of domains. The visuo-spatial system gives us powerful tools to perceive and think about the forms and positions of objects. The motor system senses and directs our posture, orientation, and movements. The language system is specialized in syntactic manipulation. The socio-emotional system not only provides motivation for our behavior but also allows us to understand the behavior and internal states of others by putting us in their shoes and running simulations. And probably a conceptual system, not unlike Cyc, organizes our explicit memories and leverages the knowledge and simulation capabilities of all the other systems.

Each system has its own independent memory, its own regularity-detection and learning mechanisms, and its own representation of concepts. Consider the concept ``vertical'', for example. It means quite different things to your visual system (imagine a vertical bar), to your motor system (the sensation of a vertical posture), and to your vestibular system (the sense of balance; compare being vertical with being tilted like the tower of Pisa). When you think about ``vertical'', though, all these systems stand ready to help you with their own specialized expertise in efficient inference.
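A minimal sketch of this organization (the class and method names are invented for illustration) would give each subsystem its own encoding of the same concept and its own cheap test for it:

    # Hypothetical sketch: each subsystem represents "vertical" in
    # its own terms and answers with its own specialized machinery.
    class VisualSystem:
        def is_vertical(self, segment):
            # A segment ((x1, y1), (x2, y2)) is vertical
            # if its x coordinate stays fixed.
            (x1, _), (x2, _) = segment
            return x1 == x2

    class VestibularSystem:
        def is_vertical(self, tilt_degrees):
            # Balanced if the tilt from upright is small.
            return abs(tilt_degrees) < 2.0

    visual, vestibular = VisualSystem(), VestibularSystem()
    print(visual.is_vertical(((3, 0), (3, 5))))  # True: a vertical bar
    print(vestibular.is_vertical(4.0))           # False: tilted like Pisa

Neither representation is derived from the other; they are independent encodings of the same concept, each tuned to its own inference problem.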

What does this tell us about how to organize a system with common sense? One would first have to select some primitive domains as a basis for the system [Rao, 1995a]. There are two requirements a basis domain has to satisfy to be useful:

(1) It has to be expressive enough that other domains can be mapped onto it.
(2) It has to have efficient inference engines.
A visual system satisfies both of the above criteria. A theory of mathematical groups satisfies neither.
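In code, the two requirements might be captured by an interface like the following sketch (the names BasisDomain, encode, and infer are invented for illustration, not taken from any existing system):

    # Illustrative sketch: what a "basis domain" would have to
    # implement, reflecting the two requirements above.
    from abc import ABC, abstractmethod

    class BasisDomain(ABC):
        @abstractmethod
        def encode(self, foreign_concept):
            """Requirement (1): expressive enough that concepts
            from other domains can be mapped onto this domain's
            primitives."""

        @abstractmethod
        def infer(self, query):
            """Requirement (2): an efficient, specialized engine
            that answers queries over the encoded concepts."""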

Once the basis systems are built and are able to use each other as computational ``demons'', the system can start learning other domains by mapping them onto the existing ones. We frequently do this sort of mapping. For example, the Venn diagram representation of events is often useful in understanding probability. Each time I forget Bayes' formula, I draw a Venn diagram of two events and read the formula off it.
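Spelled out, the reading goes like this (standard probability, nothing specific to Cyc): conditioning on $B$ restricts the diagram to $B$'s region, so $P(A \mid B)$ is the fraction of $B$'s area covered by the overlap $A \cap B$, and symmetrically for $P(B \mid A)$:

\[ P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B \mid A) = \frac{P(A \cap B)}{P(A)}. \]

Eliminating the overlap term via $P(A \cap B) = P(B \mid A)\,P(A)$ yields Bayes' formula:

\[ P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}. \]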

The mapping will typically not be perfect. For example, when there are too many events, Venn diagrams become confusing. Not all the inferences you can draw in the basis domain will be valid in the target domain. However, different basis domains can cover different parts of the inference space. If you have powerful engines handling a great deal of your inference, you can focus on the exceptions and structure them further with little effort.

There is a section on analogy in the Cyc book. Judging from the lack of any positive evidence in recent publications, I conclude that it is one of the ideas that did not give satisfactory results. Typically, ``analogy'' in the knowledge-representation world means that you go and look for similarities between two existing structures. What I am suggesting in this section goes a step further: you define a new domain as analogies or metaphors to existing ones. Thus analogies are links that already exist in the system. We use spatial metaphors for quantities not because they are a hot trend in literary fashion, but because we understand quantities in terms of spatial primitives. Our inferences about them depend on the powerful engines of spatial representation that exist in our minds.
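As a toy illustration of defining one domain in terms of another (invented names, not a real system): if quantities are defined by mapping them onto positions on a spatial number line, then comparisons between quantities come for free from the spatial engine's notion of ``to the right of''.

    # Toy sketch: a quantity domain defined as a metaphor onto a
    # spatial basis domain.  Names are illustrative only.
    class SpatialLine:
        def right_of(self, p, q):
            # The spatial primitive.
            return p > q

    class QuantityDomain:
        def __init__(self, spatial):
            self.spatial = spatial
            self.positions = {}      # quantity -> point on the line

        def define(self, name, position):
            self.positions[name] = position

        def more_than(self, a, b):
            # "More" is inherited from the spatial engine:
            # farther to the right.
            return self.spatial.right_of(self.positions[a],
                                         self.positions[b])

    q = QuantityDomain(SpatialLine())
    q.define("three", 3); q.define("five", 5)
    print(q.more_than("five", "three"))  # True, inferred spatially

Here the analogy is not discovered after the fact by comparing two finished structures; it is the definition of the quantity domain itself.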


