Lakoff to Pinker –

Fellow at The Rockridge Institute; Author, The Little Blue Book

To: Steven Pinker

I am absolutely delighted to hear that Steve Pinker believes that the Computer Program Theory of Mind is “mad.” I agree with him completely. It is mad.

However, the discussion I cited from “How The Mind Works” might lead other readers to interpret Pinker as saying something that he does not believe. If I misread Pinker (as I hope I have), other readers may misread him too. This is a fine opportunity to set the record straight.

The issue needs a bit of elaboration. One possible source of confusion is that there is not one “Computational Theory of Mind” but two, with variations on each. Those two principal computational theories are at odds with one another and the disagreement defines one of the major divisions within contemporary cognitive science. Here are the two computational theories of mind:

1. The Neural Computational Theory of Mind.

The neural structure of the brain is conceptualized as “circuitry,” with axons and dendrites seen as “connections”, with activation and inhibition as positive and negative numerical values. Neural cell bodies are conceptualized as “units” that can do basic numerical computations such as adding, multiplying, etc. Synapses are seen as points of contact between connections and units. Chemical action at the synapses determines a “synaptic weight” — a multiplicative factor. Learning is modeled as change in these synaptic weights. Neural “firing” is modeled in terms of a “threshold”, a number indicating the amount of charge required for the “neural unit” to fire. The computations are all numerical.
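As a minimal illustration (a hypothetical sketch, not any particular model from the literature), the numerical computation performed by a single such “unit” can be written in a few lines of Python:

```python
def neural_unit(inputs, weights, threshold):
    """One 'unit': each incoming activation is multiplied by its
    synaptic weight (positive = excitatory, negative = inhibitory),
    the products are summed, and the unit 'fires' (outputs 1.0)
    only if the total charge reaches the threshold."""
    total = sum(a * w for a, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

# Two active inputs, one excitatory and one inhibitory connection:
print(neural_unit([1.0, 1.0], [0.8, -0.5], 0.2))  # fires: prints 1.0
```

Learning, on this picture, is nothing more than adjustment of the numbers in `weights`; every step of the computation is numerical.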

The Neural Computational Theory comes in a number of flavors, each reflecting a research program that focuses on modeling a different kind of phenomenon: (1) highly structured special-purpose neural circuits that describe low-level functions, e.g., topographic maps of the visual field or assemblies of center-surround structures that form line detectors; (2) highly structured, sparsely connected, special-purpose neural circuits that model higher-level functions, e.g., high-level motor control, spatial relations, abstract reasoning, language, etc. (the so-called “structured connectionist models”); (3) layered, densely connected neural circuits for modeling general learning mechanisms (the so-called “PDP connectionist models”). These are not necessarily mutually exclusive approaches. Given the complexity of the brain, it would not be surprising if each were used in different regions for different purposes.

The fundamental claim is that “higher level” rational functions like language and thought are carried out in the same way as “lower-level” descriptions of the details of the visual system, of motor synergies, etc.

The Neural Computational Theory of Mind states that the mind is constituted by neural computations carried out by the brain, and that those neural computations are the ONLY computations involved in the characterization of mind. The result is a Brain-Mind, a single entity characterized by (1) the specific detailed neural architecture of the brain, (2) the neural connections between the brain and the rest of the body, and (3) neural computation.

The connections between the brain and the rest of the body are crucial to all this. The brain, after all, is structured to function in combination with a body. Its specific neural architectures, which are central to the neural computational theory, are there to perform bodily functions – movement, vision, audition, olfaction, and so on, and their structures have evolved to work with the bodies we have and with the kinds of systems that neurochemistry allows (e.g., topographic maps). Thus, the Neural Computational Theory is inherently an embodied theory.

Patricia Churchland and Terry Sejnowski’s wonderful book, The Computational Brain, is about the Neural Computational Theory of Mind.

2. The Computer Program Theory of Mind (aka “The Symbolic Computational Theory of Mind”)

In the Symbolic Computational Theory, a “mind” is characterized via the manipulation of uninterpreted symbols, as in a computer program. The “symbols” are arbitrary: they could be strings of zeroes and ones, or letters of some alphabet, or any other symbols as long as they can be distinguished from one another. Nothing the symbols “mean” can enter into the computations. Computations are performed by strictly stated formal rules that convert one sequence of symbols into another.
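A toy sketch in Python (hypothetical, not drawn from any actual program) makes the meaning-free character of such computation concrete: the rule below converts one sequence of symbols into another purely by matching their shapes.

```python
def rewrite(symbols, rules):
    """Apply the first formal rule whose pattern occurs in the
    sequence, replacing the matched symbols. The symbols are
    uninterpreted: nothing they 'mean' enters the computation."""
    for pattern, replacement in rules:
        n = len(pattern)
        for i in range(len(symbols) - n + 1):
            if symbols[i:i + n] == pattern:
                return symbols[:i] + replacement + symbols[i + n:]
    return symbols  # no rule applies

# A modus-ponens-shaped rule, operating on arbitrary tokens:
rules = [(["P", "P->Q"], ["Q"])]
print(rewrite(["P", "P->Q"], rules))  # prints ['Q']
```

The tokens could just as well be strings of zeroes and ones; the computation would be unchanged, since only distinguishability of symbols matters.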

The symbols and the computations are abstract mathematical entities. In the general case, this kind of symbolic computational “mind” is disembodied, and nothing about real human bodies or brains is needed to define what symbols or rules can be. A mind is conceptualized as a large computer program, written in symbols. It is an abstract, disembodied entity with a computational structure of its own.

The symbol system becomes physical when it is “implemented” in some physical system, like a physical computer made of silicon chips or (it is often claimed) a human brain. The manner of “implementation” doesn’t matter to the characterization of mind; only the symbolic program does. This kind of “mind” thus has an existence independent of how it is implemented. It is in the form of software that can be run on any suitable hardware.

Of course, it is possible to MODEL a neural computational model of brain structure using such a general symbol system, and it is done all the time: you simply impose severe limitations. Model only neural units, connections, levels of activation, weights, thresholds, delays, firing frequencies, etc., and compute only the numerical functions that the neural units compute. The result is a symbolic model of a very specific kind of model of a physical system, the brain.

But this fact is not really germane to the Symbolic Computational Theory of Mind. Such models of how the physical BRAIN computes are not what the Symbolic Computational Theory claims a MIND is. Minds are to be characterized by symbolic computations that are supposed to characterize reasoning, for example, the kind of “reasoning” carried out by the pure manipulations of symbols in symbolic logic or in “problem solving” programs.

A special case of the Computer Program Theory of Mind is obtained by adding a constraint, namely, that the program be implementable by a human brain. Let us call this the Brain-Implementable version of the Computer Program Theory. In a Brain-Implementable Computer Program Theory, the program is LIMITED by what a brain could implement, but nothing in it is DETERMINED by the structure of the brain. Its computations are not brain computations – they are still computer software that can presumably be “run on brain hardware.” Naturally, such a brain-implementable Computer Program Theory would allow the program to be implementable on all kinds of hardware other than a brain as well. The “mind” defined by the computations of the program would be unaffected by how the program was implemented.

There is in addition a Two Minds Theory, in which the mind is separated into two parts: one part of the mind works by the Neural Computational Theory and the other part works by the Symbolic Computational (or Computer Program) Theory. The Two Minds Theory separates mind and body: it posits a form of faculty psychology in which there is a rational faculty governing thought and a language faculty governing language, which are autonomous and distinct from the bodily faculties governing perception, motor activity, emotion, and all other bodily activities. In the Two Minds Theory, the Neural Computational Theory is reserved for the bodily functions: low-level vision, motor synergies, the governing of heart rate, and so on are left to neural computation alone. But the “higher” faculties of mind and language are characterized by the Brain-Implementable version of the Computer Program Theory, which works by symbolic computation. The Computer Program parts of the mind in this theory – the rational faculties and language – are characterized in a disembodied way, with no structure imposed by the brain, and can be implemented on either brain or non-brain hardware.

Reading Pinker, I was (I hope mistakenly) led to believe that he had accepted the Computer Program Theory in the Brain-implementable version for rational functions and language. Here are some passages from both The Language Instinct and How The Mind Works that led me to the conclusion that he held such a theory.

In The Language Instinct, there is a chapter called “Mentalese.” The title comes from Jerry Fodor’s Language of Thought theory of mind, which is a version of the Computer Program Theory. On pages 73-77, Pinker describes a Turing machine, an instance of the Computer Program Theory of Mind, as “intelligent” (p. 76). On p. 77, he describes how the abstract symbolic representations might be implemented neurally. At this point he adds: “Or the whole thing might be done in silicon chips. . . Add an eye that might detect certain contours in the world and turn on representations that symbolize them, and muscles that can act on the world whenever certain representations symbolizing goals are turned on, and you have a behaving organism (or add a TV camera and a set of levers and wheels, and you have a robot).”

“This, in a nutshell, is the theory of thinking called “the physical symbol system hypothesis” or the “computational” or “representational” theory of mind. It is as fundamental to cognitive science as the cell doctrine is to biology. . . The representations that one posits in the mind have to be arrangements of the symbols.”

There are also passages in How The Mind Works that sound as if Pinker is advocating a version of the Computer Program Theory. On page 24, Pinker says,

“This book is about the brain, but I will not say much about neurons . . . The brain’s special status comes from a special thing the brain does . . . information processing, or computation.”

One might think that here Pinker was leading up to the Neural Computational Theory of Mind, but then he says:

“Information and computation reside in patterns of data and in relations of logic that are independent of the physical medium that carries them.” He describes how a message might be carried by neurons, and continues, “Likewise a given program can run on computers made of vacuum tubes, electromagnetic switches, transistors, integrated circuits, or well-trained pigeons, and it accomplishes the same things for the same reasons . . . The computational theory of mind . . . says that beliefs and desires are information, incarnated as configurations of symbols. The symbols are physical states of bits of matter, like chips in the computer or neurons in the brain.”

This sure sounds as though Pinker is accepting the Computer Program Theory of Mind in its Brain-implementable version. Other readers may not have been as badly misled on this matter as I was, but it will be useful to hear from Pinker why these passages are not versions of the Computer Program Theory of Mind (aka The Symbolic Computational Theory) in its brain-implementable version.

Indeed, later in the book (p. 112), Pinker seems to be advocating the Two Minds Theory:

“Where do the rules and representations in mentalese leave off and the neural networks begin? Most cognitive scientists agree on the extremes. At the highest level of cognition, where we consciously plod through steps and invoke rules we learned in school or discovered ourselves, the mind is something like a production system, with symbolic inscriptions in memory and demons that carry out procedures. At a lower level, the inscriptions and rules are implemented in something like neural networks, which respond to familiar patterns and associate them with other patterns. But the boundary is in dispute. Do simple neural networks handle the bulk of everyday thought, leaving only the products of book learning to be handled by explicit rules and propositions? … The other view – which I favor – is that those neural networks alone cannot do the job. It is the structuring of networks into programs for manipulating symbols that explains much of human intelligence. That’s not all of cognition, but it is a lot of it; it’s everything we can talk about to ourselves and others.”

This sure sounds like the Two Minds Theory with the Computer Program Theory of Mind applying to rational thought and language – “everything we can talk about to ourselves and others.”

At this point, the Dehaene book becomes relevant. Since mathematics is part of rational thought – part of “everything we can talk about to ourselves and others” – it would seem that Pinker is implicitly claiming that mathematical cognition, too, is to be characterized not by the Neural Computational Theory of Mind but by the Computer Program (or Symbolic Computational) Theory. If so, this would seem to directly contradict Dehaene, who claims that very elementary arithmetic is characterized by neural circuitry in the brain, not by a symbol-manipulation system. Again, I may be misreading Pinker, and he can explain the apparent discrepancy. But Dehaene’s research seems to contradict what Pinker takes as basic assumptions.

The issue, of course, is not just who advocates what position, but what the evidence is. What kind of evidence could separate the Neural Computational Theory from the Two Minds Theory, in which concepts, reason, and language are all characterized by the Computer Program Theory (aka Symbolic Computation) in its Brain-Implementable version, while the bodily functions are characterized by the Neural Computational Theory? There is such evidence, and it comes down on the side of the pure Neural Computational Theory.

The argument hinges on the Two Minds Theory’s use of faculty psychology, in which visual perception, mental imagery, motor activity, and so on are NOT part of the rational/linguistic faculty (or faculties). Neither Pinker nor anyone else these days proposes that the human visual and motor systems work by symbolic rather than neural computation. So, if we can assume that the visual and motor systems work according to the Neural Computational Theory of Mind, can we show that the conceptual system, including human reason and language, makes use of aspects of the motor and visual systems that use neural computation, not symbolic computation?

The first evidence for such a view came in the mid-1970s, when Eleanor Rosch showed that basic-level categories in the conceptual system – categories like Car and Chair – made essential use of mental imagery, gestalt perception, and motor programs. (For discussion, see my Women, Fire, and Dangerous Things, pp. 46-52.) Similarly, research on the neuroscience of color vision indicated that the linguistic and conceptual properties of color concepts are consequences of the neural structure of color vision. More recently, neuroscience research has shown that visual and motor areas are active during linguistic activity.

Recent neural modeling research also supports the idea that the sensory-motor system enters into CONCEPTS and LANGUAGE. Terry Regier has argued that models of topographic maps of the visual field, orientation-sensitive cell assemblies, and center-surround receptive fields are necessary to characterize and learn spatial-relations CONCEPTS and linguistic expressions. (See the discussion in Regier’s The Human Semantic Potential, MIT Press, 1995, especially Chapter 5, pp. 81-120.) In the past year, David Bailey and Srini Narayanan, in their Berkeley dissertations, have provided further arguments. Bailey demonstrated that verbs of hand motion in various of the world’s languages, and hand-motion CONCEPTS, can be defined and learned on the basis of the motor characteristics of the hand – neural motor schemas and motor synergies. Narayanan, even more dramatically, showed that the semantics of aspect (event structure) in the world’s languages, and its logic, arise from motor-control systems, and that the same neural control system involved in moving your body can perform abstract reasoning about the structure of events. (For details, the dissertations can be found on the website of the Neural Theory of Language group at the International Computer Science Institute at Berkeley.)

These results should not be surprising. Our spatial-relations concepts are about space, and it is not surprising that our neural systems for vision and for negotiating space should shape those CONCEPTS, their LOGIC, and the LANGUAGE that expresses them. Nor should it be surprising that our CONCEPTS of bodily movement, and their LOGIC and LANGUAGE, should be shaped by our actual motor schemas and motor parameters. And one should not have been surprised to learn that our aspectual concepts – that is, our conceptual system for structuring, reasoning about, and talking about actions and events in general – are shaped by the most important actions we perform, moving our bodies, and that general neural motor-control schemas should be used for structuring and reasoning about events in general. Furthermore, given that conceptual metaphor maps body-based concepts onto abstract concepts, preserving their logic and often their language, it should be no surprise that the Neural Computational Theory governing the detailed structures of our sensory-motor system applies not only to sensory-motor concepts but also to abstract concepts based on them. This is exactly what studies over two decades have confirmed.

Dehaene’s book presents an important piece of that evidence: that the rational activity of basic arithmetic is neural in character and is to be characterized by the Neural Computational Theory of Mind. Dehaene’s work fits perfectly with recent work on conceptual systems and language in cognitive linguistics and structured neural modeling. The research that Dehaene cites – by himself, Changeux, and others – seems to disconfirm the Two Minds Theory and the idea from faculty psychology that there is an autonomous faculty of reason that humans possess in full and that animals lack entirely, with mathematics as an example of that faculty of reason.

If these results about basic arithmetic and body-based concepts are correct, as they seem to be, then the assumed faculty psychology is wrong. There are no separate faculties of reason and language that are fully autonomous and independent of the visual and motor systems. Instead, CONCEPTS, REASONING, and LANGUAGE make use of parts of the visual and motor systems. Since these must be characterized using the Neural Computational Theory, it follows that the Neural Computational Theory must be used for concepts, reasoning, and language. If Dehaene is right, as he seems to be, then the Neural Computational Theory needed to characterize the structure of basic arithmetic is also used in REASONING about basic arithmetic, which is a rational capacity. For this reason, the rational and language capacities cannot be characterized purely in terms of the Symbolic Computational Theory. Therefore, it would appear that the evidence falls on the side of the pure Neural Computational Theory of Mind. The Two Minds Theory does not work. What makes the Symbolic Theory of Mind for reason and language a “mad theory” (in Pinker’s terminology) is that it does not fit the facts. Read the sources and make up your own minds.

Despite Pinker’s writings advocating the Symbolic Computational Theory for reason and language, Pinker really ought to like the version of the Neural Computational Theory of Mind coming out of the Berkeley group, Regier’s Chicago group, and other groups. In that version of the theory, neural modeling is done by highly STRUCTURED connectionist models (rather than PDP connectionist models). We agree with Pinker that conceptual structure, reasoning, and language require structure, and that is just what structured neural models of the sort we and others have been developing over the past couple of decades provide.

The field of neural modeling is evolving very quickly. At the time Pinker was writing How The Mind Works, Regier’s book had not yet been published and some of the more important recent research on structured neural models of mind and language had not been completed. Perhaps Pinker was under the impression that the Neural Computational Theory could not characterize the kinds of conceptual and linguistic structures we now know it can characterize. Perhaps Pinker, correctly seeing that important parts of the structure in thought and language cannot be characterized by PDP connectionist models, and not being aware of structured neural models, was driven to what he saw as the only alternative: the “mad” Two Minds Theory, with Symbolic Computation providing the structure to thought and language.

I am encouraged by Pinker’s present dismissal of the Computer Program Theory of Mind, even though he previously espoused it in his books. With the field developing this rapidly, changes in position are natural. There is no reason for us to disagree on this matter, given that we both recognize the need for conceptual and linguistic structuring, and given that structured neural modeling provides that structure in a biologically responsible way. I hope Pinker’s dismissal of the Computer Program Theory means that he has given up on the Two Minds Theory and has adopted the sensible alternative that also best fits the facts – the Neural Computational Theory of Mind. The evidence warrants it.
