JM: Noam, let me ask about what you take to be your most important contributions. Do you want to say anything about that?
NC: Well, I think that the idea of studying language in all its variety as a biological object ought to become a part of future science – and the recognition that something very similar has to be true of every other aspect of human capacity. The idea that – there was talk of this in Aspects, but I didn't really spell it out – the belief . . .
[Wait; I'll start over.] B. F. Skinner's observation is correct that the logic of behaviorism and the logic of evolution are very similar – that observation is correct. But I think his conclusion – and the conclusion of others – is wrong. Namely, that that shows that they're both correct. Rather, it shows that they're both incorrect, because the logic of behaviorism doesn't work for growth and development, and for the same reason, the notion of natural selection is only going to work in a limited way for evolution. So there are other factors. As I said in Aspects, there's certainly no possibility of thinking that what a child knows is based on a general procedure applied to experience, and there's also no reason to assume that the genetic endowment is just the result of various different things that happen to have happened in evolutionary history. There must be further factors involved – the kind that Turing [in his work on morphogenesis] was looking for, and others were and are looking for. And the idea that maybe you can do something with that notion is potentially important. It's now more or less agreed that you can do something with that notion for, say, bacteria. If you can also do something with it for the most recent – and by some dimension most complex – outcomes of evolutionary history like language, that would suggest that maybe it holds all the way through.
JM: You've suggested many times that human cognitive capacities have limitations; they must have, because they're biologically based. You've also suggested that one could investigate those limitations.
NC: . . . in principle.
JM: . . . in principle. Unlike Kant, you're not going to simply exclude that kind of study. He seems to have thought that it's beyond the capacity of human beings to define the limits . . .
NC: . . . well, it might be beyond a human capacity; but that's just another empirical statement about limitations, like the statement that I can't see ultraviolet light, that it's beyond my capacity.
JM: OK; but is the investigation of our cognitive limitations in effect an investigation of the concepts that we have?
NC: Well, it may be contradictory, but I don't see any internal contradiction in the idea that we can investigate the nature of our science-forming capacities and discover something about their scope and limits. There's no internal contradiction in that program; whether we can carry it out or not is another question.
JM: And common sense has its limitations too.
NC: Unless we're angels. Either we're angels or we're organic creatures. If we're organic creatures, every capacity is going to have its scope and limits. That's the nature of the organic world. You ask “Can we ever find the truth in science?” – well, we've run into this question. Peirce, for example, thought that truth is just the limit that science reaches. That's not a good definition of truth. If our cognitive capacities are organic entities, which I take for granted they are, there is some limit they'll reach; but we have no confidence that that's the truth about the world. It may be a part of the truth; but maybe some Martian with different cognitive capacities is laughing at us and asking why we're going off in this false direction all the time. And the Martian might be right.
JM: What is the relationship between – I know you've been asked this several times (including by me), and you as often have for various reasons dodged the question –
NC: . . . then I'll dodge it again; because I'm sure that the reasons still hold . . .
JM: Well, I'll try anyway: what is the relationship between your linguistic work and your political work?
NC: Well, it's principled, but it's weak . . .
JM: You've said that there's no deep intellectual connection; I've always read that as saying that there's no way of deducing . . .
NC: . . . there's no deductive connection. You could take any view on either of these topics, and it wouldn't be inconsistent to hold them . . . You know the line, and I don't have to repeat it. There's some point at which a commitment to human freedom enters into both. But you can't do much with that in itself.
JM: Still in the vein we've been talking about, I'd like to ask about linguistic development (language growth) in the individual. You've employed the concept of – or at least alluded to the concept of – canalization, C. H. Waddington's term from about fifty or sixty years ago, and suggested that the linguistic development of the child is like canalization. Can parameters be understood as a way of capturing canalization?
NC: Canalization sounds like the right idea, but as far as I know, there are not a lot of empirical applications for it in biology.
With regard to parameters, there are some basic questions that have to be answered. One question is: why isn't there only a single language? Why do languages vary at all? So suppose this mutation – the great leap forward – took place; why didn't it fix the language exactly? We don't know what the parameters are, but whatever they are, why is it these, and not those? So those questions have got to come up, but they are really at the edge of research. There's a conceivable answer in terms of optimal efficiency – efficiency of computation. That answer could be something like this, although no one's proposed it; it's really speculation. To the extent that biology yields a single language, that increases the genetic load: you have to have more genetic information to determine a single language than you do to allow for a variety of languages. So there's a kind of saving in not fixing the language completely. On the other hand, that makes acquisition much harder: it's easier to acquire a minimal language. And it could be that there's a mathematical solution to this problem of simultaneous maximization: how can you optimize these two conflicting factors? It would be a nice problem; but so far no one can formulate it precisely.
And there are other speculations around; you've read Mark Baker's book (Atoms of Language), haven't you?
JM: You used to draw a distinction between the language faculty narrowly conceived and the language faculty more broadly conceived, where it might include some performance systems. Is that distinction understood in that way still plausible?
NC: We're assuming – it's not a certainty – but we're basically adopting the Aristotelian framework that there's sound and meaning and something connecting them. So just starting with that as a crude approximation, there is a sensory-motor system for externalization and there is a conceptual system that involves thought and action, and these are, at least in part, language-independent – internal, but language-independent. The broad faculty of language includes those and whatever interconnects them. And then the narrow faculty of language is whatever interconnects them. Whatever interconnects them is what we call syntax, ‘semantics’ [in the above sense, not the usual one], phonology, morphology . . ., and the assumption is that the faculty narrowly conceived yields the infinite variety of expressions that provide information which is used by the two interfaces. Beyond that, the sensory-motor system – which is the easier one to study, and probably the peripheral one (in fact, it's pretty much external to language) – does what it does. And when we look at the conceptual system, we're looking at human action, which is much too complicated a topic to study. You can try to pick pieces out of it in the way Galileo hoped to with inclined planes, and maybe we'll come up with something, with luck. But no matter what you do, that's still going to connect it with the way people refer to things, talk about the world, ask questions and – more or less in [John] Austin style – perform speech acts, which is going to be extremely hard to get anywhere with. If you want, it's pragmatics, as it's understood in the traditional framework [that distinguishes syntax, semantics, and pragmatics].
All of these conceptual distinctions persist. Very interesting questions arise as to just where the boundaries are. As soon as you begin to get into the real way it works in detail, I think there's persuasive – never conclusive, but very persuasive – evidence that the connecting system really is based on some merge-like operation, so that it's compositional to the core. It's building up pieces and then transferring them over to the interfaces for interpretation. So everything is compositional, or cyclic in linguistic terms. Then what you would expect from a well-functioning system is that there are constraints on memory load, which means that when you send something over the interface, you process it and forget about it; you don't have to re-process it. Then you go on to the next stage, and you don't re-process that. Well, that seems to work pretty well and to give lots of good empirical results.
JM: Could we get back to human nature again? I'm still trying to figure out just what is distinctive about human nature. What I mean by ‘distinctive' is: distinguishing us from other sorts of primates, or apes. Clearly Merge – some kind of recursive system – and human conceptual systems; in those, we are distinct. Is there anything else you've thought of?
NC: If you look at language, you can find a thousand things that look different. If you look at a system you don't understand, everything looks special. As you begin to understand it, things begin to fall into place, and you see that some things that look special, really aren't. Take Move – the displacement phenomenon. It's just a fact about language that displacement is ubiquitous. All over the place, you're pronouncing something in one position, and interpreting it in some other position. That's the crude phenomenon of displacement – it's just inescapable. It's always seemed to me some kind of imperfection in language – a strange phenomenon of language that has to be explained somehow. And now, I think, we can see that it's an inevitable part of language: you'd have to explain why it isn't around. Because if you do have the fundamental recursive operation which forms hierarchic structures of discrete infinity, one of the possibilities – which you'd have to stipulate to eliminate – is what amounts to movement – taking something from within one of the units you've formed and putting it at the edge; that's movement. So what looked like a fundamental property of language and also looked like a strange imperfection of language turns out to be an inevitable property of language – and then the question is, how is it used, how does it work, and so on and so forth. That's a serious rethinking of perspective. And that's what happens when you learn something about what looks like a chaotic system.
While this book will be of interest to the specialist, it is intended for a general audience. The title, The Science of Language, might appear daunting, but Professor Chomsky's contributions to the interview can be understood by all and – where readers might want some additional information or aid in understanding why Chomsky adopts the unusual views that he does – I provide ample explanations. However, some might still ask why they should be interested in the science of language at all, and in Chomsky's views of it in particular.
A recent (January 2010) PBS series called The Human Spark, hosted by Alan Alda, explored the question of what makes modern humans distinctive. After all, there have been humanoid creatures around for hundreds of thousands of years, but it was only relatively recently in evolutionary time – on a reasonable guess, somewhere between fifty thousand and a hundred thousand years ago – that humans began to display the remarkable cognitive powers that so clearly distinguish us from chimps and other ‘higher’ apes. We form non-kin communities that do not involve direct contact or acquaintance with others; we have science and mathematics and seek ultimate explanations, sometimes in the form of religions; we think about things both temporally and spatially distant, and produce and enjoy fiction and fantasy; we organize and plan for the future in ways that go beyond anything other creatures can manage; we speculate; we draw and employ other forms of artistic media; we produce and enjoy music; we see connections between distant events and seek explanations that will prove reliable and yield good policies; and so on. The conclusion the PBS series reached was that the introduction of language must surely be among the most important factors explaining how these remarkable capacities came to us.
JM: I'd like to better understand your view of what might – this is a question that is partly driven by what a graduate student asked me to ask you – of what you think a philosopher could plausibly contribute now. It seems that some philosophers – philosophers after Descartes's and Hume's time – have been behind the times. They have not fully comprehended how advanced the sciences [and in particular, linguistics] are . . .
NC: There are some philosophers who know the sciences very well, and who have contributed to [the sciences]. They don't question the sciences; they try to clarify what they are doing and even contribute to them at some conceptual level. That's pretty much what Descartes and Kant and others did who were called philosophers. You can be connected to the sciences and know them extremely well. Take someone like Jim Higginbotham. He knows linguistics very well, and contributes to it.
JM: Indeed . . .
NC: and is doing it not the way technical linguists do, but with philosophical interests that relate to traditional questions of philosophy, and so on. I think that that's always a possibility. I suspect that John Austin was right when he said that philosophy should be the mother of the sciences. It's clearing away the thickets and the underbrush and trying to set things up in such a way that the sciences can take over.
JM: So the job of philosophers is to beat around in the bushes and see if they can scare up any birds . . .?
NC: Not only in the sciences, but in people's lives. . . . Take for example [John] Rawls[, the political philosopher]. He's not working in the sciences. He's trying to figure out what concepts of justice we have that underlie our moral systems, and so on. And it does verge on the sciences. So when John Mikhail [who has a degree in philosophy but is also developing a science of a moral faculty that distinguishes permissible from impermissible actions] picks it up, it becomes a science.
Chomsky's notion of an I-language was introduced in part (in 1986) by appeal to a contrast with what he called an “E-language” approach to the study of language. An E-language approach is one that studies language that is ‘externalized.’ One form that externalization might take is found in a philosophers’ favorite, the notion of a public language. What is a public language? David Lewis and Wilfrid Sellars, among many others, assume that a language is an institution shared by individuals in a population, taught by training procedures with the aim of getting the child to conform to the rules for word and sentence usage (for Lewis, “conventions,” and for Sellars, “practices”) of the relevant population. This view turns out to be hopeless as a basis for scientific research for reasons taken up in appendices VI and XI. It does, however, conform quite nicely to a commonsense conception of language.
Another version of an E-language approach is found in Quine, where he insists that there is no “fact of the matter” with regard to deciding between two grammars for ‘a language,’ so long as they are “extensionally equivalent.” To say that they are extensionally equivalent, each would have to generate all and only the same set of sentences, where a sentence is understood to be a ‘string’ of ‘words.’ To make sense of this, one must think that it is possible to identify a language for purposes of scientific investigation with a set – an infinite set – of strings. However, that belief is erroneous, for several reasons that become clear below; essentially, a language is a system in the head that has the competence to generate a potential infinity of sound–meaning pairs, where these pairs are defined by appeal to the theory, as is the recursive procedure that can yield them. What a person actually produces in various contexts during his or her lifetime is a very different creature: in Chomsky's terminology, it is an “epiphenomenon,” not a grouping of strings that can be the subject matter of a naturalistic scientific effort.
JM: To get back to business . . . can we talk about the place of language in the mind?
JM: It's not a peripheral system; you've mentioned that it has some of the characteristics of a central system. What do you mean by that?
NC: Well, peripheral systems are systems that are input systems and output systems. So, the visual system receives data from the outside and transmits some information to the inside. And the articulatory system takes some information from the inside and does some things, and has an effect on the outside world. That's what input and output systems are. Language makes use of those systems, obviously; I'm hearing what you say and I'm producing something. But that's just something being done with language. There's some internal system that you and I pretty much share that enables the noises that I make to get into your auditory system and the internal system that you have is doing something with those noises and understanding them pretty much the way my own internal system is creating them. And those are systems of knowledge; those are fixed capacities. If that's not an internal system, I don't know what the word means.
JM: OK; there are other systems, such as facial recognition. That also is not a peripheral system. It gets information from the visual system.
NC: Well, the facial recognition system is an input system, but of course it makes use of internal knowledge that you have about how to interpret faces. People interpret faces very differently from other objects. Show a person a face upside down; he or she can't recognize it.
JM: Incidentally, your LSA paper and the emphasis on the third factor threw a bit of a monkey wrench into my efforts to write a chapter on innateness as a contribution to a book on cognitive science . . .
NC: Well, you just don't know . . . The more you can attribute to the third factor, the better – that's the way science ought to go; the goal of any serious scientist interested in this is to see how much of the complexity of an organism you can explain in terms of general properties of the world. That's almost like the nature of science. Insofar as there is a residue, you have to attribute it to some specific genetic encoding; and then you've got to worry about where that came from. Obviously, there's got to be something there; we're not all amoebas. Something has got to be there; so, what is it?
JM: It might be nice to have answers.
NC: I'm not sure; I like the edges of the puzzle.
JM: OK, you're right. They're much more fun.
NC: Think of how boring the world would be if we knew everything we can know, and even knew that we can't understand the rest.
JM: Yes, Peirce's millennial form of science does sound boring.
NC: Well, the nice thing about it is that his view can't be true, because he was making a serious error about evolution – assuming that we're basically angels by natural selection. But you could have something like it. You could imagine that the species would reach the point that everything knowable is known, including the limits of knowledge. So you could know that there are puzzles out there that can't be formulated. That would be ultimately boring.
JM: Yes, worse than heaven.
JM: Now we switch to human nature . . .
JM: Human beings as a species are remarkably uniform, genetically speaking. Yet humans have proven extraordinarily adaptable in various environments, extremely flexible in their ability to solve practical problems, endlessly productive in their linguistic output, and unique in their capacity for inventing scientific explanations. Some, a great many in fact, have taken all this as reason to think that human nature is plastic, perhaps molded by environment – including social environment – and individual invention. The engines of this flexibility and invention are claimed to lie in some recognition of similarities, in induction, or in some other unspecified but general learning and invention technique. This plastic view of human nature has even been thought to be a progressive, socially responsible one. Clearly you disagree. Could you explain why you think that a fixed biologically determined and uniform human nature is compatible with and perhaps even underlies such flexibility, productivity, adaptability, and conceptual inventiveness?
NC: First of all, there's a factual question – does a fixed biological capacity underlie these human capacities? I don't know of any alternative. If somebody can tell me what a general learning mechanism is, we can discuss the question. But if you can't tell me what it is, then there's nothing to discuss. So let's wait for a proposal. Hilary Putnam, for example, has argued for years that you can account for cognitive growth, language growth and so on, by general learning mechanisms. Fine, let's see one.
Actually, there is some work on this which is not uninteresting. Charles Yang (2004) tries to combine a rather sensible and sophisticated general learning mechanism with the principles of Universal Grammar – meaning either the first or the third factor; we don't really know, but something other than experience – and tries to show how, by integrating those two, you can account for some interesting aspects of language growth and development. I think that's perfectly sensible.
JM: Can we talk about Nelson Goodman for a while? Your relationship to him as an undergraduate and after is not often documented, while your relationship to Zellig Harris is, in several places, although not always accurately. But as for Goodman: there's not much discussion of what you got from him and what you think is valuable in his work. He was your teacher at Penn. He . . .
NC: We stayed quite close for many years later.
JM: And you must have been his protégé; certainly he must have gone to considerable effort to ensure that you got the position as a Harvard Junior Fellow, which made a very big difference in your life. I know that there are important philosophical differences between you, but there are also respects in which, it seems to me, you owe a debt to him. His conception of constructional systems, for example . . . What about your conception of simplicity – is that in any way owed to Goodman?
NC: My interest in it was certainly stimulated by his work. And you'll see the occasional footnote in his writings where we talked about . . .
I met Goodman when I was about 17 or so. I had never had any background in philosophy and I started taking his graduate courses with people who had a serious background, and he was very accommodating and helpful, and he didn't consider it inappropriate in any way that I didn't know anything. He'd direct me to read things, and the like. He was teaching at the time what became The Structure of Appearance, and the later courses that I took with him were teaching what became Fact, Fiction, and Forecast. What struck me particularly was that, whether you agreed with his conclusions or not, whatever he was doing was pursued with absolute intellectual integrity – and that's unusual, and striking. And it was worth pursuing in detail, even if you didn't agree, because you were witnessing a serious mind at work, taking what he's doing very, very seriously, pursuing the difficulties – trying to find the difficulties, and seeing if he could overcome them – all of it clearly on issues of very considerable significance. And he had a very ambitious project – more ambitious, I think, than I at first appreciated. The Structure of Appearance was supposed to be a preliminary study to the Structure of Reality – and, of course, that never came. It got sidetracked into Fact, Fiction, and Forecast.
JM: Who was the person who did the interesting work on the eye and the PAX-6 gene; I forgot.
NC: Walter Gehring.
JM: Gehring in Switzerland. That kind of work might throw quite a different kind of light on the question of how a system that had Merge built into it . . .
NC: His work is extremely interesting; and basically, what he shows – I don't have any expert judgment, but it seems to be pretty well accepted – is that all visual systems (maybe even phototropic plants) seem to begin with some stochastic event that got a particular class of molecules into a cell – the rhodopsin molecules that happen to have the property that they transmit light energy in the form of chemical energy. So you have the basis for reacting to light. And after that comes a series of developments which apparently are very restrictive. There's a regulatory gene that seems to show up all over the place, and the further developments, according to his account, are highly restricted by the possibilities of inserting genes into a collection of genes, which probably has only certain physical possibilities . . .
JM: the third factor . . .
NC: . . . yes, the third factor, which gives you the variety of eyes. That's very suggestive; it's quite different from the traditional view.
JM: Does it have any bearing on language?
NC: Only that it suggests that there is another system that seems to have powerful third factor effects.