
Ray Jackendoff

From Wikipedia, the free encyclopedia

Ray Jackendoff
Born: January 23, 1945 (age 79)
Alma mater: MIT, Swarthmore
Awards: Fellow of the AAAS; Jean Nicod Prize (2003); Rumelhart Prize (2014)
Scientific career
Fields: Generative grammar, cognitive science, music cognition
Institutions: Tufts, Brandeis
Doctoral advisor: Noam Chomsky
Notable students: Neil Cohn

Ray Jackendoff (born January 23, 1945) is an American linguist. He is professor of philosophy, Seth Merrin Chair in the Humanities, and, with Daniel Dennett, co-director of the Center for Cognitive Studies at Tufts University. He has always straddled the boundary between generative linguistics and cognitive linguistics, committed both to the existence of an innate universal grammar (an important thesis of generative linguistics) and to giving an account of language consistent with the current understanding of the human mind and cognition (the main purpose of cognitive linguistics).

Jackendoff's research deals with the semantics of natural language, its bearing on the formal structure of cognition, and its lexical and syntactic expression. He has conducted extensive research on the relationship between conscious awareness and the computational theory of mind, on syntactic theory, and, with Fred Lerdahl, on musical cognition, culminating in their generative theory of tonal music. His theory of conceptual semantics developed into a comprehensive theory of the foundations of language, which is indeed the title of his 2002 monograph, Foundations of Language: Brain, Meaning, Grammar, Evolution. In his 1983 Semantics and Cognition, he was one of the first linguists to integrate the visual faculty into his account of meaning and human language.

Jackendoff studied under linguists Noam Chomsky and Morris Halle at the Massachusetts Institute of Technology, where he received his PhD in linguistics in 1969. Before moving to Tufts in 2005, Jackendoff was professor of linguistics and chair of the linguistics program at Brandeis University from 1971 to 2005. During the 2009 spring semester, he was an external professor at the Santa Fe Institute. Jackendoff was awarded the Jean Nicod Prize in 2003. He received the 2014 David E. Rumelhart Prize. He has also been granted honorary degrees by the Université du Québec à Montréal (2010), the National Music University of Bucharest (2011), the Music Academy of Cluj-Napoca (2011), the Ohio State University (2012), and Tel Aviv University (2013).

YouTube Encyclopedic

  • CARTA: How Language Evolves: Ray Jackendoff: What Can You Say without Syntax?
  • Ray Jackendoff: Rumelhart Prize Lecture
  • Reduplication | Learn English | Canguro English

Transcription

♪ [music] ♪ - We are the paradoxical ape: bipedal, naked, large-brained, long the master of fire, tools, and language, but still trying to understand ourselves. Aware that death is inevitable, yet filled with optimism. We grow up slowly. We hand down knowledge. We empathize and deceive. We shape the future from our shared understanding of the past. CARTA brings together experts from diverse disciplines to exchange insights on who we are and how we got here. An exploration made possible by the generosity of humans like you. ♪ [music] ♪

- [Ray] I should first give equal billing to Eva Wittenberg, who did a lot of this work with me. Can you identify yourself there? Yay, right there. One of the big questions for anthropogeny is, "How did language evolve in our species?" This is a big question. My almost five-year-old granddaughter asked a couple of weeks ago, "How did people make up words when there weren't words before?" So I mean, it's an obvious question. Well, here's the thing. Languages are learned by children, and their progress looks like learning other things, like motor skills and social skills. So a lot of people think there's nothing more to it than that: humans just got to be good at learning stuff. So I want to restate the question not as "How did human language evolve?" but "How did humans evolve so as to be able to learn language, as well as all those other things?" And in order to answer this, you have to ask another question: "When you learn a language, what do you end up knowing?"

For starters, knowing a language is being able to map between patterns of sounds or gestures on one hand and meanings, or thoughts or concepts, whatever they are, on the other. So something like this. When you're speaking, you're going upwards in this picture, from meanings to producing sounds, and when you're understanding, you're going downward. Now let me add a little bit of the big picture. Conceptual structures are also linked to perception and action, so you can talk about what you see and what actions you plan to carry out, and so on. Now, just for fun, what do you get if you take away phonetic patterns like this? Well, you get an organism that can perceive, and have thoughts, and act based on those thoughts. And I think this is a plausible sketch of ape cognition, which is pretty sophisticated, but it doesn't have language. So the evolution of language had to involve at least a new ability to map concepts to sounds and gestures and to use these communicatively. Well, we've been hearing this all day.

Well, linguists actually think there's a good deal more to language than that. First comes phonological structure, which is the systematized organization of sounds or gestures, and we just heard about that from David Perlmutter in ASL. Second is morphology, the internal structure of words, such that a word like, say, "procedurally" can be seen as built from "proceed" plus "-ure" to form "procedure," plus "-al" to form "procedural," plus "-ly" to form "procedurally." And third is syntax, the organization of words into phrases and sentences. So syntax determines things like canonical word order. If you hear something like "The boy kissed the girl," you know who did the kissing and who got kissed. It also allows you to elaborate descriptions of characters and events into phrases.
So in something like "The boy in the blue hat and red sneakers tried to kiss the girl that he loved," you still know that the boy did the kissing, even though the word "boy" is nowhere near "kiss," and you also know that he loved the girl, even though "boy" is nowhere near "love." So language in modern humans involves this part, circled in red, of the network of cognitive organization. And what's evolved in humans is the ability to learn to do this kind of thing: to turn thoughts into sounds by structuring them into words and phrases, and to be able to pull thoughts out of the sounds that other people make.

Well, how could this have happened? Basically, there's no direct evidence for what our ancestors talked about and when they started talking. There aren't any fossil vowels. The usual way to test evolutionary scenarios is comparison to other species, but here it isn't too helpful. As we've heard already today, modern apes don't learn very much in the way of human languages, and they certainly don't invent language spontaneously, as deaf children do in the absence of sign language input, as we also have heard about today. So there's a big cognitive gap between apes and humans here. Another way to form plausible hypotheses about evolution is through reverse engineering: figuring out what components could have been useful in the absence of others. So think about the eye. A primitive retina would have been useful for vision without the muscles that move the eyeball, although it might be more limited than our modern vision. On the other hand, without a retina the muscles wouldn't help you see at all. So it makes sense that something like the retina probably evolved before the muscles.

I want to propose something like that for language. A primitive system for communicating thoughts via sounds or gestures is useful without phonology, morphology, or syntax. These components can improve an existing communication system, but they're useless on their own. So if the components of language evolved in some order, it makes sense that the connection between phonetics and meaning came first, with these further refinements going from there. So the hypothesis is that this is the kind of system that some ancestors of modern humans could learn. I can't prove that this is the way language evolved, but what I'll try to do today is show you that simpler systems of this sort exist in the languages of today and show you a little bit about how these systems work.

And the basic idea comes from Derek Bickerton. He proposed that there's a form of language that he called "protolanguage," which surfaces in many different circumstances, and he proposed that this form of language is a relic of early stages in the human or hominid lineage. I'm going to suggest that this form of language is a subset of the full language system: it omits morphology and syntax, and I'll call it a linear grammar. What is a linear grammar like? Well, it has words, and the words have to come in some order. They map to meanings, but there's no structure. There aren't syntactic phrases like "the boy in the blue hat and red sneakers," or structure inside words, as we saw with the word "procedurally." Now, in this kind of language, word order could still matter. You could say "Boy kissed girl" and mean that the boy did the kissing and the girl got kissed, not vice versa, but it wouldn't be because the subject precedes the verb and the verb precedes the object.
That's because a linear grammar can't have syntactic things like nouns and verbs, and subjects and objects. What it still has is semantic notions, like the word denoting an actor (here, "the boy") preceding the word denoting the action (namely "kiss"). That is, this kind of language would map directly between linear order in phonology and the meaning. Now, a linear grammar doesn't have morphology, so it can't have things like tenses and agreement on verbs. You'll get "Boy kiss girl," not "Boy kissed girl" or "Boy kisses girl." You'll leave it up to the context to indicate when this kissing took place. And you might also expect it not to have functional items like definite articles, which perform more of a syntactic role, namely marking noun phrases, than a semantic one. And a linear grammar is linear, so it can't have subordinate clauses like the relative clause in "the girl that he loved." You might still express this thought, but maybe as two sentences. You might say something like, "Boy love girl. Kiss girl." Something like that.

Well, I want to tell you about some systems that look like this. The first one is pidgins. These are the early stages of contact languages, as we also heard about earlier today. This is great, this symposium just set up my talk perfectly. Pidgins are often described as having no subordination, no morphology, no grammatical words like "the," and unstable word order that's governed by semantic principles like actor before action. If the context permits, you can leave out characters in the action. So if you already knew about the girl, you might just say "Boy kiss," where English would make you use a pronoun, like "The boy kissed her." So from the perspective of linear grammar we can ask: is there any evidence that pidgins have parts of speech like nouns and verbs? Is there any evidence for syntactic phrases? And my preliminary conclusion is that there's very little evidence for it, which suggests that pidgins would be a good example of a linear grammar. Later on, of course, contact languages add many features of more complex languages, like conventionalized word order, grammatical categories, and syntactic subordination, again as we heard today. And these kinds of languages are called creoles. And Eva and I see the transition from a pidgin to a creole as going not from "it's not a language, it's just junk" to "now it's a language," but as just adding some syntactic and morphological principles that weren't there in the pidgin, just goosing it up a little bit.

For a second case, Wolfgang Klein and Clive Perdue did a multi-language longitudinal study of immigrants all over Europe learning second languages. And they found that all speakers achieved a stage of semi-proficiency that Klein and Perdue called the "basic variety." Many speakers went on to improve on it, but others just stopped there. That's as far as they got. And in this stage, as they describe it, there's no inflection or morphology, no tenses, no plurals and so on, and no sentential subordination. You can leave out known characters freely. There are simple, semantically based principles of word order, including our favorite, actor before action. That is, from our standpoint, the basic variety also looks like another kind of linear grammar.

The third case is home sign. As we've heard already, these are the languages invented by deaf children who have no exposure to a signed language, and Susan Goldin-Meadow has shown that they have at most rudimentary morphology. They freely omit known characters.
And on our analysis, they have only a semantic distinction of object versus action, not a syntactic distinction of noun versus verb. The word order is kind of probabilistic, but if anything it's based on semantic roles. Home signers do produce some sentences with multiple verbs, or action words if you don't think they have verbs. Goldin-Meadow has in the past described these as embedding. We think these are rudimentary serial verb constructions, which I can't explain to non-linguists, I'm sorry, but they involve no embedding. So it looks to us like a linear grammar with possibly a bit of morphology added. And I should add, to be perfectly honest, that Goldin-Meadow doesn't agree with us altogether. So this is controversial.

Another case we've looked at is village sign languages, which develop in isolated communities where there's a significant occurrence of hereditary deafness. The best known of these is ABSL, which we've just heard about from Mark Aronoff. I'm going to talk about Central Taurus Sign Language, or CTSL, which is spoken in a couple of remote villages in the mountains of Turkey. This language came to my attention two years ago through my student Rabia Ergin, who remarkably has deaf family members who live in the village and speak it. You want to raise your hand, Rabia, our new celebrity? There she is. Rabia, along with Naomi Caselli, Irit Meir, and some of the people from Carol Padden's group, about which we've heard already, has been documenting this language. And what they find is that the language has a fair amount of morphology, but there's very little evidence for syntactic structure. In sentences involving one character, so somebody jumped or somebody fell down or something, the word order is normally the actor preceding the action. In two-character sentences with inanimate patients, like "The boy rolled the ball," it's normally, well, maybe you put in the actor, maybe you don't, and then you put in the thing that's acted upon, and then the action. But if there are two animate characters, so the semantics alone can't resolve the potential ambiguity, word order isn't very stable. It's a bit vague whether the boy kissed the girl or vice versa, and people rely a lot on pragmatics and context. In fact, there's a very strong tendency to mention only one of the characters in an event, no matter how many there are. So CTSL looks like a linear grammar augmented by a substantial amount of morphology, and this isn't too far off what we've seen in ABSL and the earlier stages of Nicaraguan Sign Language, which we heard about from Annie Senghas.

Now, what's cool, I think, is that these less complex systems aren't confined to emerging languages. Townsend and Bever discuss what they call "semantically-based interpretive strategies" that influence language comprehension. Hearers tend to rely in part on semantically based principles of word order, like actor precedes action, which is why, on our story, they have more difficulty with constructions such as reversible passives and object relatives (again, for the non-linguist, the details don't matter), where the actor doesn't precede the action. That's the crucial thing. Similarly, Fernanda Ferreira and her colleagues discuss what they call "good enough" parsing, where people apparently rely on linear order and semantic plausibility rather than syntactic structure. And it's well known that we see similar symptoms in language comprehension by agrammatic aphasics.
And Heather van der Lely has argued that a particular population of children with specific language impairment behave as though they're processing language through something like a linear grammar. The literature describes these so-called strategies or heuristics as something separate from language, but they're still mappings between phonology and meaning; they're just simpler ones that bypass syntax. So we conjecture that the language processor makes use of both syntactic grammar and the simpler linear grammar. And when the two kinds of rules produce conflicting analyses, interpretation is slower and less stable, even when the syntax wins out. And when the syntactic rules break down, under conditions of stress or disability, the linear grammar is still there doing its thing.

We've also come across two full-blown languages whose grammar appears to be close to a linear grammar, and others have come along that I haven't looked at very closely yet. One of them is Riau Indonesian. This is a vernacular with several million speakers, described by David Gil. Gil argues that this language has no syntactic parts of speech and no inflectional morphology like tense, plural, or agreement. Known characters in the discourse are freely omitted, and things that English expresses with syntactic subordination are expressed in Riau by simply jamming simple sentences together, like "Boy kiss love girl." The word order is pretty free, but actors tend to precede actions and actions tend to precede patients. But here's some illustration of the freedom of this language. If you have the expression "chicken eat," it can mean all of these different things depending on the context. And the ones at the bottom, where you say "chicken eat" and you mean someone is eating with the chicken, or where the chicken is eating, require a lot of contextual support, but people do say these things. So this again looks like a linear grammar. So here's basically a full language that's syntactically simple in our sense.

Another example is the controversial case of Pirahã, studied extensively by Dan Everett. This has exuberant morphology, so it's not simple in that respect. It seems to have a syntactic noun-verb distinction and fairly fixed word order, but no definite or indefinite articles, no markers of plurality, no agreement. Now, Everett's most famous claim is that Pirahã lacks recursion, that is, subordinate clauses. Everything that's expressed in English with recursive syntax either just jams simple sentences together or requires some sort of circumlocution. So this looks like a syntactically relatively simple language, though not as simple as Riau Indonesian.

So, to sum up, we find that remarkably similar grammatical systems turn up in all these different scenarios, and this suggests that linear grammar is quite a robust phenomenon, entrenched in the modern human brain. It provides a scaffolding on top of which fully syntactic languages can develop, either in an individual, as in the case of the basic variety, or in a community, as we've seen with sign languages and creoles, and it provides a sort of safety net when syntactic grammar is damaged, as we've seen with aphasia and specific language impairment. And we've also seen that you can say a lot without syntax, for example in Riau Indonesian, though having syntax gives you a lot more fancy tools for expressing yourself. So let me go back to the original question about the evolution of the human ability to learn language.
I suggest that we can think about it through reverse engineering, asking what kind of system there could have been that preceded the modern human language faculty, and I think linear grammar is a good candidate. As I said at the beginning, I have no idea how we could prove it, nor when the hominid line achieved either linear grammar or syntactic grammar. Maybe someday we'll get better evidence from genetics. But for now, I'm happy to see it as an intriguing hypothesis. So altogether, then, we think this is telling us a lot of interesting and new things about the texture of the human language faculty, and I'm eager to get on with filling in the picture further. Thank you. [applause] ♪ [music] ♪
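The talk above describes linear grammar only informally. As a minimal sketch of the idea, not anything from Jackendoff's or Wittenberg's own work, the following Python fragment shows how purely linear, semantically based principles such as "actor precedes action" can assign a meaning to a word string with no syntactic categories, morphology, or embedding. The lexicon and role names are invented for illustration.

```python
# A minimal sketch of a "linear grammar": words map straight to concepts,
# and a semantic heuristic ("actor precedes action") assigns roles.
# No nouns/verbs, no tense, no subordination; all names are illustrative.

# Toy lexicon: each word maps directly to a concept and a semantic type.
LEXICON = {
    "boy":  ("BOY",  "object"),
    "girl": ("GIRL", "object"),
    "kiss": ("KISS", "action"),
    "ball": ("BALL", "object"),
    "roll": ("ROLL", "action"),
}

def interpret(utterance):
    """Assign semantic roles by linear order alone: the object word
    before the action is the actor, the one after it is the patient."""
    concepts = [LEXICON[w] for w in utterance.split()]
    frame = {"actor": None, "action": None, "patient": None}
    for concept, kind in concepts:
        if kind == "action":
            frame["action"] = concept
        elif frame["action"] is None:
            frame["actor"] = concept      # object word before the action
        else:
            frame["patient"] = concept    # object word after the action
    return frame

# "Boy kiss girl" -> actor=BOY, action=KISS, patient=GIRL.
# "Boy kiss" leaves the patient to context, as in pidgins and home sign.
print(interpret("boy kiss girl"))
print(interpret("boy kiss"))
```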

Interfaces and generative grammar

Jackendoff argues against a syntax-centered view of generative grammar, which he calls syntactocentrism, at variance with earlier models such as the standard theory (1968), the extended standard theory (1972), the revised extended standard theory (1975), government and binding theory (1981), and the minimalist program (1993), in which syntax is the sole generative component of the language. Jackendoff instead takes syntax, semantics, and phonology all to be generative, interconnected via interface components; the task of his theory is to formalize the proper interface rules.

While rejecting mainstream generative grammar for its syntactocentrism, Jackendoff sympathizes with an insight offered by the cognitive semantics school: that meaning is a separate combinatorial system not entirely dependent upon syntax. Unlike many cognitive semantics approaches, however, he contends that syntax alone should not determine semantics, nor vice versa; syntax need only interface with semantics to the degree necessary to produce properly ordered phonological output (see Jackendoff 1996, 2002; Culicover & Jackendoff 2005).
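As a rough illustration of this parallel architecture, the sketch below represents phonological, syntactic, and conceptual structures as independently specified layers whose pieces are tied together by shared indices, so that an interface rule only states correspondences rather than deriving one level from another. This is a simplified reading for illustration, not Jackendoff's own formalism; the data layout and example labels are invented.

```python
# A minimal sketch, assuming a simplified reading of the parallel
# architecture: phonology, syntax, and semantics are each independent
# structures, and interface links (here, shared indices) tie pieces of
# one structure to pieces of another.

from dataclasses import dataclass, field

@dataclass
class ParallelStructure:
    phonology: dict = field(default_factory=dict)  # index -> phonological word
    syntax: dict = field(default_factory=dict)     # index -> syntactic category
    semantics: dict = field(default_factory=dict)  # index -> conceptual piece

# "The cat sleeps": three coindexed structures. Index 2 links the
# phonological word "sleeps" to the category V and the concept SLEEP(x);
# no level is generated from another.
s = ParallelStructure(
    phonology={1: "the cat", 2: "sleeps"},
    syntax={1: "NP", 2: "V"},
    semantics={1: "CAT (definite)", 2: "SLEEP(x)"},
)

# An interface rule need only say which indices line up; syntax
# constrains the correspondence without determining the semantics.
for i in s.syntax:
    print(i, s.phonology[i], "<->", s.syntax[i], "<->", s.semantics[i])
```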

Contribution to musical cognition

Jackendoff, together with Fred Lerdahl, has been interested in the human capacity for music and its relationship to the human capacity for language. In particular, music has structure as well as a "grammar" (a means by which sounds are combined into structures). When listeners hear music in an idiom they are familiar with, they do not merely hear a stream of sounds; rather, they construct an unconscious understanding of the music and are able to understand pieces of music never heard previously. Jackendoff is interested in what cognitive structures or "mental representations" this understanding consists of in the listener's mind, how a listener comes to acquire the musical grammar necessary to understand a particular musical idiom, what innate resources in the human mind make this acquisition possible, and, finally, what parts of the human music capacity are governed by general cognitive functions and what parts result from specialized functions geared specifically for music (Lerdahl & Jackendoff, 1983; Lerdahl, 2001). Similar questions have been raised regarding human language, although there are differences. For instance, it is more likely that humans evolved a specialized language module than that they evolved one for music, since even the specialized aspects of music comprehension are tied to more general cognitive functions.[1]
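As a loose illustration of the kind of hierarchical structure the theory attributes to listeners, the sketch below computes a simple metrical grid for beats in 4/4 time, where each event is heard not as a flat stream but as belonging to nested levels (beat, half-bar, bar). The single strength rule used here is a deliberate simplification invented for this example; it stands in for, and should not be mistaken for, the metrical well-formedness and preference rules of Lerdahl and Jackendoff's theory.

```python
# A minimal sketch, loosely inspired by the layered metrical structure in
# generative theories of tonal music: a beat's "strength" is the number
# of hierarchical levels it initiates. The rule is an invented
# simplification, not the theory's actual rule system.

def metrical_grid(n_beats, levels=(1, 2, 4)):
    """For each beat, count how many levels (beat, half-bar, bar in 4/4)
    begin on it; more levels means a metrically stronger position."""
    return [sum(1 for period in levels if beat % period == 0)
            for beat in range(n_beats)]

# Eight quarter-note beats in 4/4: downbeats come out strongest,
# mirroring the claim that listeners impose structure on the sound stream.
for beat, strength in enumerate(metrical_grid(8)):
    print(f"beat {beat}: " + "x" * strength)
```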

Selected works

  • Jackendoff, Ray (1972). Semantic Interpretation in Generative Grammar. Cambridge, Massachusetts: MIT Press. pp. 400. ISBN 0-262-10013-4.
  • Jackendoff, Ray (1977). X-Bar Syntax: A Study of Phrase Structure. Cambridge, Massachusetts: MIT Press. pp. 248. ISBN 0-262-10018-5.
  • Jackendoff, Ray (1983). Semantics and Cognition. Cambridge, Massachusetts: MIT Press. pp. 283. ISBN 0-262-10027-4.
  • Lerdahl, Fred; Ray Jackendoff (1983). A Generative Theory of Tonal Music. Cambridge, Massachusetts: MIT Press. pp. 369. ISBN 0-262-12094-1.
  • Jackendoff, Ray (1987). Consciousness and the Computational Mind. Cambridge, Massachusetts: MIT Press. pp. 356. ISBN 0-262-10037-1.
  • Jackendoff, Ray (1990). Semantic Structures. Cambridge, Massachusetts: MIT Press. pp. 322. ISBN 0-262-10043-6.
  • Jackendoff, Ray (1992). Languages of the Mind: Essays on Mental Representation. Cambridge, Massachusetts: MIT Press. p. 200. ISBN 0-262-10047-9.
  • Jackendoff, Ray (1993). Patterns in the Mind: Language and Human Nature. New York, NY: Harvester Wheatsheaf. p. 243. ISBN 0-7450-0962-X.
  • Jackendoff, Ray (1997). The Architecture of the Language Faculty. Cambridge, Massachusetts: MIT Press. pp. 262. ISBN 0-262-10059-2.
  • Jackendoff, Ray (2002). Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford: Oxford University Press. p. 477. ISBN 0-19-827012-7.
  • Culicover, Peter W.; Ray Jackendoff (2005). Simpler Syntax. Oxford: Oxford University Press. p. 589. ISBN 0-19-927108-9.
  • Jackendoff, Ray (2007). Language, Consciousness, Culture: Essays on Mental Structure (Jean Nicod Lectures). Cambridge, Massachusetts: MIT Press. pp. 403. ISBN 978-0-262-10119-6.
  • Jackendoff, Ray (2010). Meaning and the Lexicon: The Parallel Architecture 1975–2010. Oxford: Oxford University Press. p. 504. ISBN 978-0-19-956888-8.
  • Jackendoff, Ray (2012). A User's Guide to Thought and Meaning. Oxford: Oxford University Press. p. 274. ISBN 978-0-19-969320-7.

References
