From Wikipedia, the free encyclopedia

Linguistics is the scientific study of language.[1] It involves analysing language form, language meaning, and language in context.[2] The earliest activities in the documentation and description of language have been attributed to the 6th-century-BC Indian grammarian Pāṇini[3][4] who wrote a formal description of the Sanskrit language in his Aṣṭādhyāyī.[5]

Linguists traditionally analyse human language by observing an interplay between sound and meaning.[6] Phonetics is the study of speech and non-speech sounds, and delves into their acoustic and articulatory properties. The study of language meaning, on the other hand, deals with how languages encode relations between entities, properties, and other aspects of the world to convey, process, and assign meaning, as well as manage and resolve ambiguity.[7] While the study of semantics typically concerns itself with truth conditions, pragmatics deals with how situational context influences the production of meaning.[8]

Grammar is a system of rules which governs the production and use of utterances in a given language. These rules apply to sound[9] as well as meaning, and include componential subsets of rules, such as those pertaining to phonology (the organisation of phonetic sound systems), morphology (the formation and composition of words), and syntax (the formation and composition of phrases and sentences).[10] Modern theories that deal with the principles of grammar are largely based within Noam Chomsky's framework of generative linguistics.[11]

In the early 20th century, Ferdinand de Saussure distinguished between the notions of langue and parole in his formulation of structural linguistics. According to him, parole is the specific utterance of speech, whereas langue refers to an abstract phenomenon that theoretically defines the principles and system of rules that govern a language.[12] This distinction resembles the one made by Noam Chomsky between competence and performance in his theory of transformative or generative grammar. According to Chomsky, competence is an individual's innate capacity and potential for language (like in Saussure's langue), while performance is the specific way in which it is used by individuals, groups, and communities (i.e., parole, in Saussurean terms).[13]

The study of parole (which manifests through cultural discourses and dialects) is the domain of sociolinguistics, the sub-discipline that comprises the study of a complex system of linguistic facets within a certain speech community (governed by its own set of grammatical rules and laws). Discourse analysis further examines the structure of texts and conversations emerging out of a speech community's usage of language.[14] This is done through the collection of linguistic data, or through the formal discipline of corpus linguistics, which takes naturally occurring texts and studies the variation of grammatical and other features based on such corpora (or corpus data).
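The corpus-based approach described above can be made concrete with a minimal sketch in Python (the two mini-corpora and the feature set are invented for the example): it measures how often one grammatical feature, contracted negation, occurs in two small samples of naturally occurring text.

```python
def feature_frequency(corpus, feature_words):
    """Relative frequency of a set of word forms in a tokenized corpus."""
    tokens = [w.lower() for text in corpus for w in text.split()]
    hits = sum(1 for w in tokens if w in feature_words)
    return hits / len(tokens)

# Two hypothetical mini-corpora: compare how often contracted negation
# ("don't", "can't") appears in each variety of text.
casual = ["I don't know", "we can't stay", "they don't mind"]
formal = ["we do not know", "one cannot stay", "they do not mind"]

contractions = {"don't", "can't"}
print(feature_frequency(casual, contractions))
print(feature_frequency(formal, contractions))
```

A real corpus study would of course use much larger samples and control for genre and register; the point here is only the shape of the method, counting the variation of a feature across corpus data.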

Stylistics also involves the study of written, signed, or spoken discourse through varying speech communities, genres, and editorial or narrative formats in the mass media.[15] In the 1960s, Jacques Derrida, for instance, further distinguished between speech and writing, by proposing that written language be studied as a linguistic medium of communication in itself.[16] Palaeography is therefore the discipline that studies the evolution of written scripts (as signs and symbols) in language.[17] The formal study of language also led to the growth of fields like psycholinguistics, which explores the representation and function of language in the mind; neurolinguistics, which studies language processing in the brain; biolinguistics, which studies the biology and evolution of language; and language acquisition, which investigates how children and adults acquire the knowledge of one or more languages.

Linguistics also deals with the social, cultural, historical and political factors that influence language, through which linguistic and language-based context is often determined.[18] Research on language through the sub-branches of historical and evolutionary linguistics also focuses on how languages change and grow, particularly over an extended period of time.

Language documentation combines anthropological inquiry (into the history and culture of language) with linguistic inquiry, in order to describe languages and their grammars. Lexicography involves the documentation of words that form a vocabulary. Such a documentation of a linguistic vocabulary from a particular language is usually compiled in a dictionary. Computational linguistics is concerned with the statistical or rule-based modeling of natural language from a computational perspective. Specific knowledge of language is applied by speakers during the act of translation and interpretation, as well as in language education – the teaching of a second or foreign language. Policy makers work with governments to implement new plans in education and teaching which are based on linguistic research.
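To make the "statistical modeling of natural language" idea concrete, here is a toy bigram model in Python. This is a deliberately minimal sketch, not the method of any particular toolkit, and the three-sentence corpus is invented: it estimates the probability of one word following another by relative frequency.

```python
from collections import defaultdict

def train_bigram_counts(sentences):
    """Count word-pair frequencies over a list of tokenized sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def bigram_probability(counts, prev, nxt):
    """Relative-frequency estimate of P(nxt | prev); 0.0 if prev is unseen."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

corpus = [
    "the dog barks".split(),
    "the dog sleeps".split(),
    "the cat sleeps".split(),
]
counts = train_bigram_counts(corpus)
print(bigram_probability(counts, "the", "dog"))  # "dog" follows "the" in 2 of 3 cases
```

Production systems add smoothing for unseen pairs and much longer contexts, but the core of a statistical language model is exactly this kind of counting.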

Related areas of study also include the disciplines of semiotics (the study of direct and indirect language through signs and symbols), literary criticism (the historical and ideological analysis of literature, cinema, art, or published material), translation (the conversion and documentation of meaning in written/spoken text from one language or dialect into another), and speech-language pathology (a corrective method to treat phonetic disabilities and dysfunctions at the cognitive level).

YouTube Encyclopedic

  • ✪ Steven Pinker: Linguistics as a Window to Understanding the Brain
  • ✪ [Introduction to Linguistics] Introduction and Overview
  • ✪ What Is Linguistics? | All About My Major
  • ✪ What is Linguistics? | Definition and Branches of Linguistics
  • ✪ Misconceptions about Linguistics

Transcription

My name is Steve Pinker, and I'm Professor of Psychology at Harvard University. And today I'm going to speak to you about language. I'm actually not a linguist, but a cognitive scientist. I'm not so much interested in language as an object in its own right as in language as a window to the human mind. Language is one of the fundamental topics in the human sciences. It's the trait that most conspicuously distinguishes humans from other species, and it's essential to human cooperation; we accomplish amazing things by sharing our knowledge or coordinating our actions by means of words. It poses profound scientific mysteries, such as: how did language evolve in this particular species? How does the brain compute language? But language also has many practical applications, not surprisingly, given how central it is to human life. Language comes so naturally to us that we're apt to forget what a strange and miraculous gift it is. But think about what you're doing for the next hour. You're going to be listening patiently as a guy makes noise as he exhales. Now, why would you do something like that? It's not that I can claim that the sounds I'm going to make are particularly mellifluous, but rather that I've coded information into the exact sequences of hisses and hums and squeaks and pops that I'll be making. You have the ability to recover the information from that stream of noises, allowing us to share ideas. Now, the ideas we are going to share are about this talent, language, but with a slightly different sequence of hisses and squeaks I could cause you to be thinking thoughts about a vast array of topics, anything from the latest developments in your favorite reality show to theories of the origin of the universe. This is what I think of as the miracle of language: its vast expressive power. It's a phenomenon that still fills me with wonder, even after having studied language for 35 years, and it is the prime phenomenon that the science of language aims to explain.
Not surprisingly, language is central to human life. The Biblical story of the Tower of Babel reminds us that humans accomplish great things because they can exchange information about their knowledge and intentions via the medium of language. Language, moreover, is not a peculiarity of one culture; it has been found in every society ever studied by anthropologists. There are some 6,000 languages spoken on Earth, all of them complex, and no one has ever discovered a human society that lacks complex language. For this and other reasons, Charles Darwin wrote, "Man has an instinctive tendency to speak, as we see in the babble of our young children, while no child has an instinctive tendency to bake, brew or write." Language is an intricate talent, and it's not surprising that the science of language should be a complex discipline. It includes the study of how language itself works, including: grammar, the assembly of words, phrases and sentences; phonology, the study of sound; semantics, the study of meaning; and pragmatics, the study of the use of language in conversation. Scientists interested in language also study how it is processed in real time, a field called psycholinguistics; how it is acquired by children, the study of language acquisition; and how it is computed in the brain, the discipline called neurolinguistics.
Now, before we begin, it's important not to confuse language with three other things that are closely related to language. One of them is written language. Unlike spoken language, which is found in all human cultures throughout history, writing was invented a very small number of times in human history, about 5,000 years ago. And alphabetic writing, where each mark on the page stands for a vowel or a consonant, appears to have been invented only once in all of human history, by the Canaanites about 3,700 years ago. And as Darwin pointed out, children have no instinctive tendency to write, but have to learn it through instruction and schooling. A second thing not to confuse language with is proper grammar. Linguists distinguish between descriptive grammar - the rules that characterize how people actually speak - and prescriptive grammar - the rules that characterize how people ought to speak if they are writing careful prose. A dirty secret from linguistics is that not only are these not the same kinds of rules, but many of the prescriptive rules of language make no sense whatsoever. Take one of the most famous of these rules, the rule not to split infinitives. According to this rule, Captain Kirk made a grievous grammatical error when he said that the mission of the Enterprise was "to boldly go where no man has gone before." He should have said, according to these editors, "to go boldly where no man has gone before," which immediately clashes with the rhythm and structure of ordinary English. In fact, this prescriptive rule was based on a clumsy analogy with Latin, where you can't split an infinitive because it's a single word, as in "facere," to do. Julius Caesar couldn't have split an infinitive if he wanted to. That rule was carried over literally into English, where it really should not apply. Another famous prescriptive rule is that one should never use a so-called double negative.
Mick Jagger should not have sung, "I can't get no satisfaction"; he really should have sung, "I can't get any satisfaction." Now, this is often promoted as a rule of logical speaking, but "can't" and "any" is just as much of a double negative as "can't" and "no." The only reason that "can't get any satisfaction" is deemed correct and "can't get no satisfaction" is deemed ungrammatical is that the dialect of English spoken in the south of England in the 17th century used "can't" "any" rather than "can't" "no." If the capital of England had been in the north of the country instead of the south, then "can't get no" would have been correct and "can't get any" would have been deemed incorrect. There's nothing special about a language that happens to be chosen as the standard for a given country. In fact, if you compare the rules of languages and so-called dialects, each one is complex in different ways. Take, for example, African-American vernacular English, also called Black English or Ebonics. There is a construction in African-American English where you can say, "He be workin'," which is not an error or a bastardization or a corruption of Standard English, but in fact conveys a subtle distinction, one that's different from simply "He workin'." "He be workin'" means that he is employed; he has a job. "He workin'" means that he happens to be working at the moment that you and I are speaking. Now, this is a tense difference that can be made in African-American English that is not made in Standard English, one of many examples in which the dialects have their own set of rules that is just as sophisticated and complex as the one in the standard language. Now, a third thing not to confuse language with is thought. Many people report that they think in language, but cognitive psychologists have shown that there are many kinds of thought that don't actually take place in the form of sentences.
(1.) Babies (and other mammals) communicate without speech. For example, we know from ingenious experiments that non-linguistic creatures, such as babies before they've learned to speak, or other kinds of animals, have sophisticated kinds of cognition: they register cause and effect, and objects, and the intentions of other people, all without the benefit of speech.
(2.) Types of thinking go on without language - visual thinking, for example. We also know that even in creatures that do have language, namely adults, a lot of thinking goes on in forms other than language, for example, visual imagery. If you look at the top two three-dimensional figures in this display and I ask you, do they have the same shape or a different shape? People don't solve that problem by describing those strings of cubes in words, but rather by taking an image of one and mentally rotating it into the orientation of the other, a form of non-linguistic thinking.
(3.) We use tacit knowledge to understand language and remember the gist. For that matter, even when you understand language, what you come away with is not in itself the actual language that you hear. Another important finding in cognitive psychology is that long-term memory for verbal material records the gist or the meaning or the content of the words, rather than the exact form of the words. For example, I like to think that you retain some memory of what I have been saying for the last 10 minutes. But I suspect that if I were to ask you to reproduce any sentence that I have uttered, you would be incapable of doing so. What sticks in memory is far more abstract than the actual sentences, something that we can call meaning or content or semantics. In fact, when it even comes to understanding a sentence, the actual words are the tip of a vast iceberg of very rapid, unconscious, non-linguistic processing that's necessary even to make sense of the language itself.
And I'll illustrate this with a classic bit of poetry, the lines from the shampoo bottle: "Wet hair, lather, rinse, repeat." Now, in understanding that very simple snatch of language, you have to know, for example, that when you repeat, you don't wet your hair a second time, because it's already wet, and when you get to the end of it and you see "repeat," you don't keep repeating over and over in an infinite loop; "repeat" here means "repeat just once." Now, this tacit knowledge of what the writers **** of language had in mind is necessary to understand language, but it, itself, is not language.
(4.) If language is thinking, then where did it come from? Finally, if language were really thought, it would raise the question of where language would come from if we were incapable of thinking without language. After all, the English language was not designed by some committee of Martians who came down to Earth and gave it to us. Rather, language is a grassroots phenomenon. It's the original wiki, which aggregates the contributions of hundreds of thousands of people who invent jargon and slang and new constructions; some of them get accumulated into the language as people seek out new ways of expressing their thoughts, and that's how we get a language in the first place. Now, this is not to deny that language can affect thought, and linguistics has long been interested in what has sometimes been called the linguistic relativity hypothesis, or the Sapir-Whorf hypothesis, named after the two linguists who first formulated it: the hypothesis that language can affect thought. There's a lot of controversy over the status of the linguistic relativity hypothesis, but no one believes that language is the same thing as thought and that all of our mental life consists of reciting sentences. Now that we have set aside what language is not, let's turn to what language is, beginning with the question of how language works.
In a nutshell, you can divide language into three topics. There are the words, the basic components of sentences, which are stored in a part of long-term memory that we can call the mental lexicon or the mental dictionary. There are rules, the recipes or algorithms that we use to assemble bits of language into more complex stretches of language, including syntax, the rules that allow us to assemble words into phrases and sentences; morphology, the rules that allow us to assemble bits of words, like prefixes and suffixes, into complex words; and phonology, the rules that allow us to combine vowels and consonants into the smallest words. And then all of this knowledge of language has to connect to the world through interfaces that allow us to understand language coming from others and to produce language that others can understand: the language interfaces. Let's start with words. The basic principle of a word was identified by the Swiss linguist Ferdinand de Saussure more than 100 years ago, when he called attention to the arbitrariness of the sign. Take, for example, the word "duck." The word "duck" doesn't look like a duck or walk like a duck or quack like a duck, but I can use it to get you to think the thought of a duck, because all of us at some point in our lives have memorized that brute-force association between that sound and that meaning, which means that it has to be stored in memory in some format. In a very simplified form, an entry in the mental lexicon might look something like this: there is a symbol for the word itself, there is some kind of specification of its sound, and there's some kind of specification of its meaning. Now, one of the remarkable facts about the mental lexicon is how capacious it is.
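The simplified lexical entry just described - a word form linked to a sound and a meaning by brute-force association - can be mocked up as a small data structure. This is only a sketch; the phonemic notation and the gloss are illustrative, not drawn from any real lexicon.

```python
# A toy mental-lexicon entry in the spirit of Saussure's arbitrary sign:
# the form "duck" is linked to a sound and a meaning purely by convention.
mental_lexicon = {
    "duck": {
        "sound": "/dʌk/",                            # rough phonemic specification
        "meaning": "waterfowl of the family Anatidae",  # illustrative gloss
        "category": "noun",
    },
}

def look_up(word):
    """Retrieve the stored sound/meaning pairing for a memorized word."""
    return mental_lexicon.get(word)

entry = look_up("duck")
print(entry["sound"])
```

Nothing about the string "duck" predicts its sound or meaning; the association has to be stored, which is exactly the arbitrariness of the sign.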
Using dictionary sampling techniques - where you take, say, the top left-hand word on every 20th page of the dictionary, give it to people in a multiple-choice test, correct for guessing, and multiply by the size of the dictionary - you can estimate that a typical high school graduate has a vocabulary of around 60,000 words, which works out to a rate of learning of about one new word every two hours, starting from the age of one. When you consider that every one of these words is as arbitrary as a telephone number or a date in history, you're reminded of the remarkable capacity of human long-term memory to store the meanings and sounds of words. But of course, we don't just blurt out individual words; we combine them into phrases and sentences. And that brings up the second major component of language, namely, grammar. Now, the modern study of grammar is inseparable from the contributions of one linguist, the famous scholar Noam Chomsky, who has set the agenda for the field of linguistics for the last 60 years. To begin with, Chomsky noted that the main puzzle that we have to explain in understanding language is creativity, or as linguists often call it, productivity: the ability to produce and understand new sentences. Except for a small number of clichéd formulas, just about any sentence that you produce or understand is a brand-new combination, produced perhaps for the first time in your life, perhaps even in the history of the species. We have to explain how people are capable of doing that. It shows that when we know a language, we haven't just memorized a very long list of sentences, but rather have internalized a grammar or algorithm or recipe for combining elements into brand-new assemblies. For that reason, Chomsky has insisted that linguistics is really properly a branch of psychology and is a window into the human mind. A second insight is that languages have a syntax which can't be identified with their meaning.
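Returning for a moment to the dictionary-sampling estimate above: the arithmetic can be sketched directly. All of the numbers in the example are hypothetical; the correction for guessing is the standard one for a multiple-choice test (known = correct minus wrong divided by choices minus one).

```python
def estimate_vocabulary(n_sampled, n_correct, n_choices, dictionary_size):
    """Estimate vocabulary size by dictionary sampling.

    Corrects for guessing (known = correct - wrong/(choices - 1)),
    then scales the known proportion up to the whole dictionary.
    """
    n_wrong = n_sampled - n_correct
    known = n_correct - n_wrong / (n_choices - 1)
    proportion_known = max(known, 0) / n_sampled
    return round(proportion_known * dictionary_size)

# Hypothetical numbers: 100 sampled entries, 64 answered correctly on a
# 4-choice test, scaled up to a 150,000-entry dictionary.
print(estimate_vocabulary(100, 64, 4, 150_000))  # 78000
```

With 64 correct out of 100, 12 of those are attributed to lucky guessing, so the subject is credited with knowing 52% of the sample, and hence an estimated 52% of the dictionary.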
Now, the only quotation by a linguist that I know of that has actually made it into Bartlett's Familiar Quotations is the following sentence from Chomsky, from 1956: "Colorless green ideas sleep furiously." Well, what's the point of that sentence? The point is that it is very close to meaningless. On the other hand, any English speaker can instantly recognize that it conforms to the patterns of English syntax. Compare, for example, "furiously sleep ideas green colorless," which is also meaningless, but which we perceive as word salad. A third insight is that syntax doesn't consist of a string of word-by-word associations, as in stimulus-response theories in psychology, where producing a word is a response which you then hear, and it becomes a stimulus to producing the next word, and so on. Again, the sentence "colorless green ideas sleep furiously" can help make this point. If you look at the word-by-word transition probabilities in that sentence - for example, "colorless" and then "green": how often have you heard "colorless" and "green" in succession? Probably zero times. "Green" and "ideas": those two words never occur together; likewise "ideas" and "sleep," and "sleep" and "furiously." Every one of the transition probabilities is very close to zero; nonetheless, the sentence as a whole can be perceived as a well-formed English sentence. Language in general has long-distance dependencies. A word in one position in a sentence can dictate the choice of a word several positions downstream. For example, if you begin a sentence with "either," somewhere down the line there has to be an "or." If you have an "if," generally you expect somewhere down the line there to be a "then." There's a story about a child who says to his father, "Daddy, why did you bring that book that I don't want to be read to out of, up for?", where you have a set of nested or embedded long-distance dependencies.
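The "either … or" dependency just mentioned can be checked mechanically, which makes the point vivid: a word early in the sentence obligates a word arbitrarily far downstream, so a checker needs memory (here a counter) rather than word-by-word association. A toy sketch:

```python
def check_either_or(tokens):
    """Check the long-distance dependency: every 'either' must be
    followed, somewhere downstream, by a matching 'or'."""
    pending = 0  # count of 'either's still waiting for their 'or'
    for word in tokens:
        if word == "either":
            pending += 1
        elif word == "or" and pending:
            pending -= 1
    return pending == 0

print(check_either_or("you can either stay or go".split()))  # True
print(check_either_or("you can either stay home".split()))   # False
```

Nested dependencies, as in the child's "book that I don't want to be read to out of, up for" sentence, need a stack rather than a counter, which is one classical argument for hierarchical rather than purely linear sentence structure.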
Indeed, one of the applications of linguistics to the study of good prose style is that sentences can be rendered difficult to understand if they have too many long-distance dependencies, because those can put a strain on the short-term memory of the reader or listener while trying to understand them. Rather than a set of word-by-word associations, sentences are assembled in a hierarchical structure that looks like an upside-down tree. Let me give you an example of how that works in the case of English. One of the basic rules of English is that a sentence consists of a noun phrase, the subject, followed by a verb phrase, the predicate. A second rule in turn expands the verb phrase: a verb phrase consists of a verb, followed by a noun phrase, the object, followed by a sentence, the complement, as in "I told him that it was sunny outside." Now, why do linguists insist that language must be composed out of phrase structure rules?
(1.) Rules allow for open-ended creativity. For one thing, that helps explain the main phenomenon that we want to explain, namely the open-ended creativity of language.
(2.) Rules allow for the expression of unfamiliar meanings. There's a cliché in journalism, for example, that when a dog bites a man, that isn't news, but when a man bites a dog, that is news. The beauty of grammar is that it allows us to convey news by assembling familiar words in brand-new combinations.
(3.) Rules allow for the production of vast numbers of combinations. Because of the way phrase structure rules work, they produce a vast number of possible combinations. Moreover, the number of different thoughts that we can express through the combinatorial power of grammar is not just humongous, but in a technical sense, it's infinite.
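The phrase structure rules just described can be sketched as a toy generative grammar. The vocabulary and rules below are invented for illustration; the key feature is the recursive rule (a sentence inside a verb phrase, as in "I told him that …"), which is what makes the set of possible sentences open-ended.

```python
import random

# A toy phrase-structure grammar in the spirit of the rules above:
# S -> NP VP; VP may contain another S, giving recursion.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "man"], ["the", "dog"], ["it"]],
    "VP": [["slept"], ["saw", "NP"], ["said", "that", "S"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Rewrite a symbol by recursively expanding non-terminals."""
    if symbol not in GRAMMAR:
        return [symbol]  # a terminal word: emit it as-is
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        # Past the cutoff, drop the recursive option so generation halts.
        options = [o for o in options if "S" not in o] or options
    words = []
    for sym in random.choice(options):
        words.extend(generate(sym, depth + 1, max_depth))
    return words

random.seed(7)
print(" ".join(generate()))
```

Raising `max_depth` yields ever longer sentences ("the man said that the dog said that it slept…"), a finite rule set generating an unbounded set of sentences.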
Now, of course, no one lives an infinite number of years and therefore can show off their ability to understand an infinite number of sentences, but you can make the point in the same way that a mathematician can say that someone who understands the rules of arithmetic knows that there are an infinite number of numbers: namely, if anyone ever claimed to have found the largest one, you could always come up with one that's even bigger by adding one to it. And you can do the same thing with language. Let me illustrate it in the following way. As a matter of fact, there has been a claim that there is a world's longest sentence. Who would make such a claim? Well, who else? The Guinness Book of World Records. You can look it up. There is an entry for the world's longest sentence. It is 1,300 words long, and it comes from a novel by William Faulkner. Now, I won't read all 1,300 words, but I'll just tell you how it begins: "They both bore it as though in deliberate flatulent exaltation…" and it runs on from there. But I'm here to tell you that in fact this is not the world's longest sentence. And I've been tempted to obtain immortality in Guinness by submitting the following record-breaker: "Faulkner wrote, 'They both bore it as though in deliberate flatulent exaltation…'" But sadly, this would not be immortality after all, but only the proverbial 15 minutes of fame, because based on what you now know, you could submit a record-breaker for the record-breaker, namely "Guinness noted that Faulkner wrote…", or "Pinker mentioned that Guinness noted that Faulkner wrote…", or "Who cares that Pinker mentioned that Guinness noted that Faulkner wrote…" Phrase structure matters not only for producing sentences but also for understanding them. Take, for example, the following wonderfully ambiguous sentence that appeared in TV Guide: "On tonight's program, Conan will discuss sex with Dr. Ruth." Now, this has a perfectly innocent meaning in which the verb "discuss" involves two things, namely the topic of discussion, "sex," and the person with whom it's being discussed, in this case, Dr. Ruth. But it has a somewhat naughtier meaning if you rearrange the words into phrases according to a different structure, in which case "sex with Dr. Ruth" is the topic of conversation, and that's what's being discussed. Now, phrase structure not only can account for our ability to produce so many sentences, but it's also necessary for us to understand what they mean. The geometry of branches in a phrase structure is essential to figuring out who did what to whom. Another important contribution of Chomsky to the science of language is the focus on language acquisition by children. Now, children can't just memorize sentences, because knowledge of language isn't one long list of memorized sentences; somehow they must distill out, or abstract out, the rules that go into assembling sentences, based on what they heard coming out of their parents' mouths when they were little. And the talent of using rules to produce combinations is in evidence from the moment that kids begin to speak. Children create sentences unheard from adults. At the two-word stage, which you typically see in children who are 18 months or a bit older, kids are producing the smallest sentences that deserve to be counted as sentences, namely two words long. But already it's clear that they are putting them together using rules in their own minds. To take an example, a child might say "more outside," meaning "take me outside" or "let me stay outside." Now, adults don't say "more outside," so it's not a phrase that the child simply memorized by rote; it shows that already children are using rules to put together new combinations. Another example: a child having jam washed from his fingers said to his mother, "all gone sticky."
Again, not a phrase that the child could ever have copied from a parent, but one that shows the child producing new combinations.
The past tense rule. An easy way of showing that children assimilate rules of grammar unconsciously from the moment they begin to speak is the use of the past tense rule. For example, children go through a long stage in which they make errors like "We holded the baby rabbits" or "He teared the paper and then he sticked it" - cases in which they overgeneralize the regular rule of forming the past tense, add "-ed," to irregular verbs like "hold," "stick" or "tear." And it's easy to get children to flaunt this ability to apply rules productively in a laboratory demonstration called the Wug Test. You bring a kid into a lab, you show them a picture of a little bird, and you say, "This is a wug." And you show them another picture and you say, "Well, now there are two of them. There are two…," and children will fill in the gap by saying "wugs." Again, a form they could not have memorized, because it's invented for the experiment, but it shows that they have productive mastery of the regular plural rule in English. And famously, Chomsky claimed that children solve the problem of language acquisition by having the general design of language already wired into them in the form of a universal grammar: a spec sheet for what the rules of any language have to look like. What is the evidence that children are born with a universal grammar? Well, surprisingly, Chomsky didn't propose this by actually studying kids in the lab or kids in the home, but through a more abstract argument called "the poverty of the input": namely, if you look at what goes into the ears of a child, and look at the talent they end up with as adults, there is a big chasm between them that can only be filled by assuming that the child has a lot of knowledge of the way that language works already built in. Here's how the argument works.
One of the things that children have to learn when they learn English is how to form a question. Now, children will get evidence from their parents' speech as to how the question rule works, from sentences like "The man is here" and the corresponding question "Is the man here?" Now, logically speaking, a child getting that kind of input could posit two different kinds of rules. There's a simple word-by-word linear rule: find the first "is" in the sentence and move it to the front. "The man is here" - "Is the man here?" And there's a more complex rule that the child could posit, called a structure-dependent rule, one that looks at the geometry of the phrase structure tree. In this case, the rule would be: find the first "is" after the subject noun phrase and move that to the front of the sentence. A diagram of what that rule would look like is as follows: you look for the "is" that occurs after the subject noun phrase, and that's what gets moved to the front of the sentence. Now, what's the difference between the simple word-by-word rule and the more complex structure-dependent rule? Well, you can see the difference when it comes to forming the question from a slightly more complex sentence like "The man who is tall is in the room." The word-by-word rule moves the first "is" and produces the ungrammatical "Is the man who tall is in the room?", whereas the structure-dependent rule skips past the "is" inside the subject noun phrase and correctly yields "Is the man who is tall in the room?" But how is the child supposed to learn that? How did all of us end up with the correct structure-dependent version of the rule, rather than the far simpler word-by-word version? Well, Chomsky argues, if you were actually to look at the kind of language that all of us hear, it's actually quite rare to hear a sentence like "Is the man who is tall in the room?" - the kind of input that would logically inform you that the word-by-word rule is wrong and the structure-dependent rule is right. Nonetheless, we all grow up into adults who unconsciously use the structure-dependent rule rather than the word-by-word rule.
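The two candidate question rules can be sketched directly. In this toy version the subject noun phrase's length is supplied by hand (standing in for a real parse, which the structure-dependent rule presupposes); everything else follows the description above.

```python
def linear_question(tokens):
    """The simple word-by-word rule: move the FIRST 'is' to the front."""
    i = tokens.index("is")
    return ["is"] + tokens[:i] + tokens[i + 1:]

def structure_dependent_question(tokens, subject_len):
    """The structure-dependent rule: move the first 'is' AFTER the subject
    noun phrase to the front. subject_len stands in for a real parse."""
    i = tokens.index("is", subject_len)
    return ["is"] + tokens[:i] + tokens[i + 1:]

simple = "the man is here".split()
complex_ = "the man who is tall is in the room".split()

print(" ".join(linear_question(simple)))    # is the man here
print(" ".join(linear_question(complex_)))  # the ungrammatical variant
print(" ".join(structure_dependent_question(complex_, 5)))
```

Both rules agree on the simple sentence; only the complex sentence, rare in a child's input, distinguishes them, which is exactly the poverty-of-the-input point.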
Moreover, children don't make errors like "Is the man who tall is in the room?" As soon as they begin to form complex questions, they use the structure-dependent rule. And that, Chomsky argues, is evidence that structure-dependent rules are part of the definition of universal grammar that children are born with.

Now, though Chomsky has been fantastically influential in the science of language, that does not mean that all language scientists agree with him, and there have been a number of critiques of Chomsky over the years. For one thing, the critics point out, Chomsky hasn't really shown that the principles of universal grammar are specific to language itself, as opposed to general ways in which the human mind works across multiple domains: language, vision, control of motion, memory, and so on. We don't really know that universal grammar is specific to language, according to this critique. Secondly, Chomsky and the linguists working with him have not examined all 6,000 of the world's languages and shown that the principles of universal grammar apply to all of them. They've posited it based on a small number of languages and the logic of the poverty of the input, but haven't actually come through with the data that would be necessary to prove that universal grammar is really universal. Finally, the critics argue, Chomsky has not shown that more general-purpose learning models, such as neural network models, are incapable of learning language together with all the other things that children learn, and therefore has not proven that the child needs specific knowledge of how grammar works in order to learn grammar.

Another component of language governs the sound pattern of language, the ways that vowels and consonants can be assembled into the minimal units that go into words.
Phonology, as this branch of linguistics is called, consists of formation rules that capture what is a possible word in a language according to the way that it sounds.

To give you an example, the sequence "bluk" is not an English word, but you get a sense that it could be one: someone could coin a new term of English that we pronounce "bluk." But when you hear the sound ****, you instantly know that not only is it not an English word, it really couldn't be an English word. ****, by the way, comes from Yiddish, and it means, roughly, to sigh or to moan. Oi. That's to ****. The reason we recognize that it's not English is that it has sounds like **** and sequences like ****, which aren't part of the formation rules of English phonology.

But together with the rules that define the basic words of a language, there are also phonological rules that make adjustments to the sounds, depending on the other sounds a word appears with. Very few of us realize, for example, that in English the past tense suffix "ed" is actually pronounced in three different ways. When we say "He walked," we pronounce the "ed" as a "t": walked. When we say "jogged," we pronounce it as a "d": jogged. And when we say "patted," we stick in a vowel: pat-ted. The same suffix "ed" is readjusted in its pronunciation according to the rules of English phonology.

Now, when someone acquires English as a foreign language, or acquires a foreign language in general, they carry over the rules of phonology of their first language and apply them to their second language. We have a word for it; we call it an "accent." And when a language user deliberately manipulates the rules of phonology, that is, when they don't just speak to convey content but pay attention to which phonological structures are being used, we call it poetry and rhetoric.
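The three pronunciations of the past-tense suffix described above follow a regular pattern: a vowel plus "d" after t or d, a "t" after voiceless sounds, and a "d" otherwise. Here is a rough sketch, again keyed to spelling rather than actual sounds, with an assumed (and deliberately incomplete) list of voiceless-final letters:

```python
# Rough sketch of English past-tense allomorphy ("walked", "jogged",
# "patted"). The final-letter test is a stand-in for a real check of
# the verb's final sound.

VOICELESS_FINALS = set("pkfsx")  # assumption: letters that typically end voiceless

def past_suffix(verb):
    last = verb[-1]
    if last in "td":
        return "id"   # "patted": a vowel is inserted before the "d"
    if last in VOICELESS_FINALS:
        return "t"    # "walked": suffix devoiced to match the "k"
    return "d"        # "jogged": plain voiced "d"

for v in ("walk", "jog", "pat"):
    print(v + "ed", "ends with the sound", past_suffix(v))
```

The same three-way pattern governs the plural suffix ("cats," "dogs," "horses"), which is part of why linguists treat it as a general phonological rule rather than a quirk of one suffix.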
So far, I've been talking about knowledge of language, the rules that go into defining what are possible sequences of language. But those sequences have to get into the brain during speech comprehension, and they have to get out during speech production. That takes us to the topic of language interfaces. Let's start with production.

This diagram is literally a human cadaver that has been sawn in half. An anatomist took a saw and [sound], allowing us to see the human vocal tract in cross section. And that can illustrate how we get our knowledge of language out into the world as a sequence of sounds.

Now, each of us has at the top of the windpipe, or trachea, a complex structure called the larynx, or voice box; it's behind your Adam's apple. The air coming out of your lungs has to go past two cartilaginous flaps that vibrate and produce a rich, buzzy sound source, full of harmonics. Before that vibrating sound gets out to the world, it has to pass through a gauntlet of chambers of the vocal tract: the throat behind the tongue, the cavity above the tongue, the cavity formed by the lips, and, when you block off airflow through the mouth, the nose.

Now, each one of those cavities has a shape that, thanks to the laws of physics, will amplify some of the harmonics in that buzzy sound source and suppress others. We can change the shape of those cavities when we move our tongue around. When we move our tongue forward and backward, for example, as in "eh," "aa," "eh," "aa," we change the shape of the cavity behind the tongue, change the frequencies that are amplified or suppressed, and the listener hears them as two different vowels. Likewise, when we raise or lower the tongue, we change the shape of the resonant cavity above the tongue, as in "eh," "ah," "eh," "ah." Once again, the change in the mixture of harmonics is perceived as a change in the nature of the vowel.
When we stop the flow of air and then release it, as in "t," "ca," "ba," we hear a consonant rather than a vowel; even when we merely restrict the flow of air, as in "f," "ss," we produce a chaotic, noisy sound. Each one of those sounds, sculpted by different articulators, is perceived by the brain as a qualitatively different vowel or consonant.

Now, an interesting peculiarity of the human vocal tract is that it obviously co-opts structures that evolved for different purposes, for breathing and for swallowing and so on. And it's an interesting fact, first noted by Darwin, that the larynx over the course of evolution has descended in the throat, so that every particle of food going from the mouth through the esophagus to the stomach has to pass over the opening into the larynx, with some probability of being inhaled, leading to the danger of death by choking. In fact, until the invention of the Heimlich maneuver, several thousand people every year died of choking because of this maladaptive design of the human vocal tract. Why did we evolve a mouth and throat that leave us vulnerable to choking? Well, a plausible hypothesis is that it's a compromise made in the course of evolution to allow us to speak. By opening up a variety of possibilities for altering the resonant cavities, for moving the tongue back and forth and up and down, we expanded the range of speech sounds we could make and improved the efficiency of language, but suffered the compromise of an increased risk of choking, showing that language presumably had some survival advantage that compensated for the disadvantage in choking.

What about the flow of information in the other direction, that is, from the world into the brain, the process of speech comprehension? Speech comprehension turns out to be an extraordinarily complex computational process, which we're reminded of every time we interact with a voicemail menu on a telephone or use dictation on our computers.
For example, one writer, using a state-of-the-art speech-to-text system, dictated the following words into his computer. He dictated "book tour," and it came out on the screen as "back to work." Another example: he said, "I truly couldn't see," and it came out on the screen as "a cruelly good MC." Even more disconcertingly, he started a letter to his parents by saying, "Dear mom and dad," and what came out on the screen was, "The man is dead."

Now, dictation systems have gotten better and better, but they still have a way to go before they can duplicate a human stenographer. What is it about the problem of speech understanding that makes it so easy for a human but so hard for a computer? Well, there are two main contributors. One of them is the fact that each phoneme, each vowel or consonant, actually comes out very differently depending on what comes before and what comes after, a phenomenon sometimes called co-articulation. Let me give you an example. The place called Cape Cod has two "c" sounds.
Each of them is symbolized by the letter "c," the hard "c." Nonetheless, when you pay attention to the way you pronounce them, you notice that you in fact pronounce them in very different parts of the mouth. Try it. Cape Cod, Cape Cod... "c," "c." In one case, the "c" is produced way back in the mouth; in the other, it's produced much farther forward. We don't notice that we pronounce "c" in two different ways depending on whether it comes before an "a" or an "ah," but that difference produces a difference in the shape of the resonant cavity in our mouth, which produces a very different wave form. And unless a computer is specifically programmed to take that variability into account, it will perceive those two "c's" as the different sounds that, objectively speaking, they really are: "c-eh," "c-oa." They really are different sounds, but our brain lumps them together.

The other reason that speech recognition is such a difficult problem is the absence of segmentation. We have an illusion, when we listen to speech, that it consists of a sequence of sounds corresponding to words. But if you were actually to look at the wave form of a sentence on an oscilloscope, there would not be little silences between the words, the way there are little bits of white space between printed words on a page, but rather a continuous ribbon in which the end of one word leads right into the beginning of the next. It's something that we become aware of when we listen to speech in a foreign language and have no idea where one word ends and the next one begins. In our own language, we detect the word boundaries simply because our mental lexicon contains stretches of sound that correspond to each word and tell us where it ends. But you can't get that information from the wave form itself.

In fact, there's a whole genre of wordplay that takes advantage of the fact that word boundaries are not physically present in the speech wave.
Novelty songs like Mairzy doats and dozy doats and liddle lamzy divey 
A kiddley divey too, wooden shoe? 

Now, it turns out that this is actually a grammatical sequence of words in English: Mares eat oats and does eat oats and little lambs eat ivy; a kid'll eat ivy too, wouldn't you? When it is spoken or sung normally, the boundaries between words are obliterated, and so the same sequence of sounds can be perceived either as nonsense or, if you know what it's meant to convey, as sentences. Another example, familiar to most children: Fuzzy Wuzzy was a bear, Fuzzy Wuzzy had no hair; Fuzzy Wuzzy wasn't very fuzzy, was he? And the famous doggerel: I scream, you scream, we all scream for ice cream.

We are generally unaware of how ambiguous language is. In context, we effortlessly and unconsciously derive the intended meaning of a sentence, but a poor computer, not equipped with all of our common sense and human abilities and just going by the words and the rules, is often flabbergasted by all the different possibilities. Take a sentence as simple as "Mary had a little lamb." You might think that's a perfectly simple, unambiguous sentence. But now imagine that it continued with "with mint sauce." You realize that "have" is actually a highly ambiguous word. As a result, computer translations can often deliver comically incorrect results. According to legend, one of the first computer systems designed to translate from English to Russian and back again, given the sentence "The spirit is willing, but the flesh is weak," translated it back as "The vodka is agreeable, but the meat is rotten."

So why do people understand language so much better than computers? What is the knowledge that we have that has been so hard to program into our machines?
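The segmentation problem behind "Mairzy Doats" can be sketched as lexicon lookup: given a boundary-free stream (letters standing in for sound) and a stored word list, recover where the words end. The tiny lexicon and the recursive search are illustrative assumptions; listeners do something far richer, but the sketch shows why the lexicon, not the signal, supplies the boundaries.

```python
# Toy word segmentation: recover word boundaries from a spaceless
# stream using only a stored lexicon, mimicking how listeners carve
# the continuous ribbon of speech into words.

LEXICON = {"mares", "eat", "oats", "and", "does"}

def segment(stream):
    """Return one segmentation of `stream` into lexicon words, or None."""
    if not stream:
        return []
    for end in range(1, len(stream) + 1):
        word = stream[:end]
        if word in LEXICON:
            rest = segment(stream[end:])
            if rest is not None:
                return [word] + rest
    return None

print(segment("mareseatoatsanddoeseatoats"))
# -> ['mares', 'eat', 'oats', 'and', 'does', 'eat', 'oats']
```

A listener without the right lexicon, like the computer given a stream it cannot carve, gets no segmentation at all, which is exactly the "Mairzy doats" effect.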
Well, there's a third interface between language and the rest of the mind, and that is the subject matter of the branch of linguistics called pragmatics: how people understand language in context, using their knowledge of the world and their expectations about how other speakers communicate. The most important principle of pragmatics is called "the cooperative principle," namely: assume that your conversational partner is working with you to try to get a meaning across truthfully and clearly. And our knowledge of pragmatics, like our knowledge of syntax and phonology and so on, is deployed effortlessly but involves many intricate computations. For example, if I were to say, "If you could pass the guacamole, that would be awesome," you understand that as a polite request meaning "give me the guacamole." You don't interpret it literally as a rumination about a hypothetical state of affairs; you just assume that the person wanted something and was using that string of words to convey the request politely.

Comedies will often use the absence of pragmatics in robots as a source of humor, as in the old "Get Smart" situation comedy, which had a robot named Hymie. A recurring joke in the series was that Maxwell Smart would say to Hymie, "Hymie, can you give me a hand?" And then Hymie would go {sound}, remove his hand, and pass it over to Maxwell Smart, not understanding that "give me a hand," in context, means "help me" rather than "literally transfer the hand over to me."

Or take the following example of pragmatics in action. Consider the following dialogue. Martha says, "I'm leaving you." John says, "Who is he?" Now, understanding language requires finding the antecedents of pronouns, in this case who the "he" refers to, and any competent English speaker knows exactly who the "he" is, presumably John's romantic rival, even though it was never stated explicitly in any part of the dialogue.
This shows how we bring to bear on language understanding a vast store of knowledge about human behavior, human interactions, and human relationships. And we often have to use that background knowledge even to solve mechanical problems like whom a pronoun like "he" refers to. It's that knowledge that is extraordinarily difficult, to say the least, to program into a computer.

Language is a miracle of the natural world because it allows us to exchange an unlimited number of ideas using a finite set of mental tools. Those mental tools comprise a large lexicon of memorized words and a powerful mental grammar that can combine them. Language, thought of in this way, should not be confused with writing, with the prescriptive rules of proper grammar or style, or with thought itself.

Modern linguistics is guided by the questions, though not always the answers, suggested by the linguist Noam Chomsky, namely: How is the unlimited creativity of language possible? What are the abstract mental structures that relate words to one another? How do children acquire them? What is universal across languages? And what does that say about the human mind?

The study of language has many practical applications, including computers that understand and speak, the diagnosis and treatment of language disorders, the teaching of reading, writing, and foreign languages, and the interpreting of the language of law, politics, and literature. But for someone like me, language is eternally fascinating because it speaks to such fundamental questions of the human condition. Language is really at the center of a number of different concerns, of thought, of social relationships, of human biology, of human evolution, that all speak to what's special about the human species. Language is the most distinctively human talent. Language is a window into human nature, and most significantly, the vast expressive power of language is one of the wonders of the natural world. Thank you.


Nomenclature

Before the 20th century, the term philology, first attested in 1716,[19] was commonly used to refer to the study of language, which was then predominantly historical in focus.[20][21] Since Ferdinand de Saussure's insistence on the importance of synchronic analysis, however, this focus has shifted[22] and the term philology is now generally used for the "study of a language's grammar, history, and literary tradition", especially in the United States[23] (where philology has never been very popularly considered as the "science of language").[19]

Although the term "linguist" in the sense of "a student of language" dates from 1641,[24] the term "linguistics" is first attested in 1847.[24] It is now the usual term in English for the scientific study of language,[citation needed] though linguistic science is sometimes used.

Linguistics is a multi-disciplinary field of research that combines tools from natural sciences, social sciences, and the humanities.[25][26][27] Many linguists, such as David Crystal, conceptualize the field as being primarily scientific.[28] The term linguist applies to someone who studies language or is a researcher within the field, or to someone who uses the tools of the discipline to describe and analyse specific languages.[29]

Variation and universality

While some theories of linguistics focus on the different varieties that language produces among different sections of society, others focus on the universal properties that are common to all human languages. The theory of variation would therefore elaborate on the different usages of widely spoken languages like French and English across the globe, as well as their smaller dialects and regional permutations within national boundaries. The theory of variation looks at the cultural stages that a particular language undergoes; these include the following.

Pidgin

The pidgin stage in a language is a stage when communication occurs through a grammatically simplified means that develops between two or more groups that do not have a language in common. Typically, it is a mixture of languages, arising when elements of a primary language mix with elements of other languages.

Creole

A creole stage in language occurs when there is a stable natural language developed from a mixture of different languages. It is a stage that occurs after a language undergoes its pidgin stage. At the creole stage, a language is a complete language, used in a community and acquired by children as their native language.

Dialect

A dialect is a variety of language that is characteristic of a particular group among the language's speakers.[30] The group of people who speak a dialect are usually bound to each other by social identity. This is what differentiates a dialect from a register or a discourse, where in the latter case, cultural identity does not always play a role. Dialects are speech varieties that have their own grammatical and phonological rules, linguistic features, and stylistic aspects, but have not been given an official status as a language. Dialects often gain the status of a language for political and social reasons. Differentiation amongst dialects (and subsequently, languages too) is based upon the use of grammatical rules, syntactic rules, and stylistic features, though not always on lexical use or vocabulary. The popular saying that "a language is a dialect with an army and navy" is attributed to Max Weinreich.

Universal grammar takes into account general formal structures and features that are common to all dialects and languages, and the template of which pre-exists in the mind of an infant child. This idea is based on the theory of generative grammar and the formal school of linguistics, whose proponents include Noam Chomsky and those who follow his theory and work.

"We may as individuals be rather fond of our own dialect. This should not make us think, though, that it is actually any better than any other dialect. Dialects are not good or bad, nice or nasty, right or wrong – they are just different from one another, and it is the mark of a civilised society that it tolerates different dialects just as it tolerates different races, religions and sexes."[31]

Discourse

Discourse is language as social practice (Baynham, 1995) and is a multilayered concept. As a social practice, discourse embodies different ideologies through written and spoken texts. Discourse analysis can examine or expose these ideologies. Discourse influences genre, which is chosen in response to different situations and finally, at micro level, discourse influences language as text (spoken or written) at the phonological or lexico-grammatical level. Grammar and discourse are linked as parts of a system.[32] A particular discourse becomes a language variety when it is used in this way for a particular purpose, and is referred to as a register.[33] There may be certain lexical additions (new words) that are brought into play because of the expertise of the community of people within a certain domain of specialization. Registers and discourses therefore differentiate themselves through the use of vocabulary, and at times through the use of style too. People in the medical fraternity, for example, may use some medical terminology in their communication that is specialized to the field of medicine. This is often referred to as being part of the "medical discourse", and so on.

Standard language

When a dialect is documented sufficiently through the linguistic description of its grammar, which has emerged through consensual laws from within its community, it gains political and national recognition through a country's or region's policies. That is the stage when a language is considered a standard variety, one whose grammatical laws have stabilised through the consent of speech community participants, after sufficient evolution, improvisation, correction, and growth. The English and French languages may be examples of languages that have arrived at a stage where they are said to have become standard varieties.

The study of a language's universal properties, on the other hand, includes some of the following concepts.

Lexicon

The lexicon is a catalogue of words and terms that are stored in a speaker's mind. The lexicon consists of words and bound morphemes, which are parts of words that can't stand alone, like affixes. In some analyses, compound words and certain classes of idiomatic expressions and other collocations are also considered to be part of the lexicon. Dictionaries represent attempts at listing, in alphabetical order, the lexicon of a given language; usually, however, bound morphemes are not included. Lexicography, closely linked with the domain of semantics, is the science of compiling the words of a language into a dictionary or encyclopedia. The creation and addition of new words (into the lexicon) is called coining or neologization,[34] and the new words are called neologisms.

It is often believed that a speaker's capacity for language lies in the quantity of words stored in the lexicon. However, this is often considered a myth by linguists. The capacity for the use of language is considered by many linguists to lie primarily in the domain of grammar, and to be linked with competence, rather than with the growth of vocabulary. Even a very small lexicon is theoretically capable of producing an infinite number of sentences.

Relativity

As constructed popularly through the Sapir–Whorf hypothesis, relativists believe that the structure of a particular language is capable of influencing the cognitive patterns through which a person shapes his or her world view. Universalists believe that there are commonalities in human perception, just as there are in the human capacity for language, while relativists believe that these vary from language to language and person to person. While the Sapir–Whorf hypothesis is an elaboration of this idea expressed through the writings of American linguists Edward Sapir and Benjamin Lee Whorf, it was Sapir's student Harry Hoijer who termed it thus. The 20th-century German linguist Leo Weisgerber also wrote extensively about the theory of relativity. Relativists argue for the case of differentiation at the level of cognition and in semantic domains. The emergence of cognitive linguistics in the 1980s also revived an interest in linguistic relativity. Thinkers like George Lakoff have argued that language reflects different cultural metaphors, while the writings of the French philosopher of language Jacques Derrida have been seen as closely associated with the relativist movement in linguistics, especially through deconstruction;[35] Derrida was even heavily criticized in the media at the time of his death for his theory of relativism.[36]

Structures

Linguistic structures are pairings of meaning and form. Any particular pairing of meaning and form is a Saussurean sign. For instance, the meaning "cat" is represented worldwide with a wide variety of different sound patterns (in oral languages), movements of the hands and face (in sign languages), and written symbols (in written languages). Linguistic patterns have proven their importance for the knowledge engineering field especially with the ever-increasing amount of available data.

Linguists focusing on structure attempt to understand the rules regarding language use that native speakers know (not always consciously). All linguistic structures can be broken down into component parts that are combined according to (sub)conscious rules, over multiple levels of analysis. For instance, consider the structure of the word "tenth" on two different levels of analysis. On the level of internal word structure (known as morphology), the word "tenth" is made up of one linguistic form indicating a number and another form indicating ordinality. The rule governing the combination of these forms ensures that the ordinality marker "th" follows the number "ten." On the level of sound structure (known as phonology), structural analysis shows that the "n" sound in "tenth" is made differently from the "n" sound in "ten" spoken alone. Although most speakers of English are consciously aware of the rules governing internal structure of the word pieces of "tenth", they are less often aware of the rule governing its sound structure. Linguists focused on structure find and analyze rules such as these, which govern how native speakers use language.
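The morphological level of the "tenth" analysis can be sketched as a rule plus lexical exceptions: an ordinality marker "th" follows the number form, with irregular ordinals and adjusted stems ("fif-", "nin-") stored separately. The stem table here is a simplified assumption for illustration.

```python
# Toy sketch of the morphological rule behind "tenth": the ordinality
# marker "th" follows the number form. Irregular ordinals and adjusted
# stems are listed as lexical exceptions, as in a real mental lexicon.

IRREGULAR = {1: "first", 2: "second", 3: "third"}
STEMS = {4: "four", 5: "fif", 6: "six", 7: "seven",
         8: "eigh", 9: "nin", 10: "ten"}

def ordinal(n):
    if n in IRREGULAR:
        return IRREGULAR[n]          # memorized exceptions
    return STEMS[n] + "th"           # rule: number form + ordinality marker

print(ordinal(10))  # -> tenth
print(ordinal(5))   # -> fifth
```

The division of labor in the sketch mirrors the one linguists describe: a productive combining rule for the regular cases and stored forms for the exceptions.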

Linguistics has many sub-fields concerned with particular aspects of linguistic structure. The theory that elucidates these, as propounded by Noam Chomsky, is known as generative theory or universal grammar. These sub-fields range from those focused primarily on form to those focused primarily on meaning. They also run the gamut of levels of analysis of language, from individual sounds, to words, to phrases, up to cultural discourse.

Grammar

Sub-fields that focus on a grammatical study of language include the following.

  • Phonetics, the study of the physical properties of speech sound production and perception
  • Phonology, the study of sounds as abstract elements in the speaker's mind that distinguish meaning (phonemes)
  • Morphology, the study of morphemes, or the internal structures of words and how they can be modified
  • Syntax, the study of how words combine to form grammatical phrases and sentences
  • Semantics, the study of the meaning of words (lexical semantics) and fixed word combinations (phraseology), and how these combine to form the meanings of sentences
  • Pragmatics, the study of how utterances are used in communicative acts, and the role played by context and non-linguistic knowledge in the transmission of meaning
  • Discourse analysis, the analysis of language use in texts (spoken, written, or signed)
  • Stylistics, the study of linguistic factors (rhetoric, diction, stress) that place a discourse in context
  • Semiotics, the study of signs and sign processes (semiosis), indication, designation, likeness, analogy, metaphor, symbolism, signification, and communication

Style

Stylistics is the study and interpretation of texts for aspects of their linguistic and tonal style. Stylistic analysis entails the analysis and description of particular dialects and registers used by speech communities. Stylistic features include rhetoric,[37] diction, stress, satire, irony, dialogue, and other forms of phonetic variation. Stylistic analysis can also include the study of language in canonical works of literature, popular fiction, news, advertisements, and other forms of communication in popular culture. It is usually seen as a variation in communication that changes from speaker to speaker and community to community. In short, stylistics is the interpretation of text.

Approaches

Theoretical

One major debate in linguistics concerns the very nature of language and how it should be understood. Some linguists hypothesize that there is a module in the human brain that allows people to undertake linguistic behaviour, which is part of the formalist approach. This "universal grammar" is considered to guide children when they learn language and to constrain what sentences are considered grammatical in any human language. Proponents of this view, which is predominant in those schools of linguistics that are based on the generative theory of Noam Chomsky, do not necessarily consider that language evolved for communication in particular. They consider instead that it has more to do with the process of structuring human thought (see also formal grammar).

Functional

Another group of linguists, by contrast, use the term "language" to refer to a communication system that developed to support cooperative activity and extend cooperative networks. Such theories of grammar, called "functional", view language as a tool that emerged and is adapted to the communicative needs of its users, and the role of cultural evolutionary processes are often emphasized over that of biological evolution.[38]

Methodology

Linguistics is primarily descriptive.[2] Linguists describe and explain features of language without making subjective judgments on whether a particular feature or usage is "good" or "bad". This is analogous to practice in other sciences: a zoologist studies the animal kingdom without making subjective judgments on whether a particular species is "better" or "worse" than another.

Prescription, on the other hand, is an attempt to promote particular linguistic usages over others, often favouring a particular dialect or "acrolect". This may have the aim of establishing a linguistic standard, which can aid communication over large geographical areas. It may also, however, be an attempt by speakers of one language or dialect to exert influence over speakers of other languages or dialects (see Linguistic imperialism). An extreme version of prescriptivism can be found among censors, who attempt to eradicate words and structures that they consider to be destructive to society. Prescription, however, may be practised appropriately in the teaching of language, like in ELT, where certain fundamental grammatical rules and lexical terms need to be introduced to a second-language speaker who is attempting to acquire the language.

Anthropology

The objective of describing languages is often to uncover cultural knowledge about communities. The use of anthropological methods of investigation on linguistic sources leads to the discovery of certain cultural traits among a speech community through its linguistic features. It is also widely used as a tool in language documentation, with an endeavour to curate endangered languages. Today, linguistic inquiry uses the anthropological method to understand the cognitive, historical, and sociolinguistic processes that languages undergo as they change and evolve, just as general anthropological inquiry uses the linguistic method to excavate culture. In all aspects, anthropological inquiry usually uncovers the different variations and relativities that underlie the usage of language.

Sources

Most contemporary linguists work under the assumption that spoken data and signed data are more fundamental than written data. This is because

  • Speech appears to be universal to all human beings capable of producing and perceiving it, while there have been many cultures and speech communities that lack written communication;
  • Features appear in speech which are not always recorded in writing, including phonological rules, sound changes, and speech errors;
  • All natural writing systems reflect a spoken language (or potentially a signed one); even pictographic scripts such as Dongba write Naxi homophones with the same pictogram, and text in a writing system used for two languages changes to fit the spoken language being recorded;
  • Speech evolved before human beings invented writing;
  • People learnt to speak and process spoken language more easily and earlier than they did with writing.

Nonetheless, linguists agree that the study of written language can be worthwhile and valuable. For research that relies on corpus linguistics and computational linguistics, written language is often much more convenient for processing large amounts of linguistic data. Large corpora of spoken language are difficult to create and hard to find, and are typically transcribed and written. In addition, linguists have turned to text-based discourse occurring in various formats of computer-mediated communication as a viable site for linguistic inquiry.
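
As a minimal illustration of why written corpora are convenient for large-scale processing, the sketch below counts word frequencies over a tiny written sample (the sample sentences and the crude tokenizer are invented for illustration, not drawn from any real corpus):

```python
import re
from collections import Counter

# A tiny illustrative "corpus" of written text (hypothetical sample).
corpus = [
    "The cat sat on the mat.",
    "The dog sat on the log.",
]

def tokenize(text):
    """Lowercase and split on runs of letters -- a deliberately crude tokenizer."""
    return re.findall(r"[a-z]+", text.lower())

# Count token frequencies across the whole corpus.
freq = Counter(tok for line in corpus for tok in tokenize(line))
print(freq["the"], freq.most_common(1))  # → 4 [('the', 4)]
```

A spoken corpus would require transcription before any such processing could begin, which is one reason written data dominates corpus and computational work.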

The study of writing systems themselves, graphemics, is, in any case, considered a branch of linguistics.

Analysis

Before the 20th century, linguists analysed language on a diachronic plane, which was historical in focus. This meant that they would compare linguistic features and try to analyse language from the point of view of how it had changed between earlier and later periods. However, with Saussurean linguistics in the 20th century, the focus shifted to a more synchronic approach, in which the study was geared towards the analysis and comparison of different language variations existing at the same given point of time.

At another level, the syntagmatic plane of linguistic analysis entails the comparison between the way words are sequenced within the syntax of a sentence. For example, the article "the" is followed by a noun because of the syntagmatic relation between the words. The paradigmatic plane, on the other hand, focuses on an analysis that is based on the paradigms or concepts that are embedded in a given text. In this case, words of the same type or class may be replaced in the text with each other to achieve the same conceptual understanding.

History

Early grammarians

The formal study of language began in India with Pāṇini, the 6th-century-BC grammarian who formulated 3,959 rules of Sanskrit morphology. Pāṇini's systematic classification of the sounds of Sanskrit into consonants and vowels, and word classes such as nouns and verbs, was the first known instance of its kind. In the Middle East, Sibawayh, a non-Arab, made a detailed description of Arabic in AD 760 in his monumental work, Al-kitab fi al-nahw (الكتاب في النحو, The Book on Grammar), and was the first known author to distinguish between sounds and phonemes (sounds as units of a linguistic system).

Western interest in the study of languages began somewhat later than in the East,[39] but the grammarians of the classical languages did not use the same methods or reach the same conclusions as their contemporaries in the Indic world. Early interest in language in the West was a part of philosophy, not of grammatical description. The first insights into semantic theory were made by Plato in his Cratylus dialogue, where he argues that words denote concepts that are eternal and exist in the world of ideas. This work is the first to use the word etymology to describe the history of a word's meaning. Around 280 BC, one of Alexander the Great's successors founded a university (see Musaeum) in Alexandria, where a school of philologists studied the ancient texts in Greek and taught the language to speakers of other languages. While this school was the first to use the word "grammar" in its modern sense, Plato had used the word in its original meaning as "téchnē grammatikḗ" (Τέχνη Γραμματική), the "art of writing", which is also the title of one of the most important works of the Alexandrine school, by Dionysius Thrax.[40] Throughout the Middle Ages, the study of language was subsumed under the topic of philology, the study of ancient languages and texts, practised by such educators as Roger Ascham, Wolfgang Ratke, and John Amos Comenius.[41]

Comparative philology

In the 18th century, the first use of the comparative method by William Jones sparked the rise of comparative linguistics.[42] Bloomfield attributes "the first great scientific linguistic work of the world" to Jacob Grimm, who wrote Deutsche Grammatik.[43] It was soon followed by other authors writing similar comparative studies on other language groups of Europe. The study of language was broadened from Indo-European to language in general by Wilhelm von Humboldt, of whom Bloomfield asserts:[43]

This study received its foundation at the hands of the Prussian statesman and scholar Wilhelm von Humboldt (1767–1835), especially in the first volume of his work on Kavi, the literary language of Java, entitled Über die Verschiedenheit des menschlichen Sprachbaues und ihren Einfluß auf die geistige Entwickelung des Menschengeschlechts (On the Variety of the Structure of Human Language and its Influence upon the Mental Development of the Human Race).

Structuralism

Early in the 20th century, Saussure introduced the idea of language as a static system of interconnected units, defined through the oppositions between them. By introducing a distinction between diachronic and synchronic analyses of language, he laid the foundation of the modern discipline of linguistics. Saussure also introduced several basic dimensions of linguistic analysis that are still foundational in many contemporary linguistic theories, such as the distinctions between syntagm and paradigm, and the langue-parole distinction, distinguishing language as an abstract system (langue) from language as a concrete manifestation of this system (parole).[44] Substantial additional contributions following Saussure's definition of a structural approach to language came from The Prague school, Leonard Bloomfield, Charles F. Hockett, Louis Hjelmslev, Émile Benveniste and Roman Jakobson.[45][46]

Generativism

During the last half of the 20th century, following the work of Noam Chomsky, linguistics was dominated by the generativist school. While formulated by Chomsky in part as a way to explain how human beings acquire language and the biological constraints on this acquisition, in practice generative theory has largely been concerned with giving formal accounts of specific phenomena in natural languages. It is modularist and formalist in character. Chomsky built on the earlier work of Zellig Harris to formulate the generative theory of language. According to this theory, the most basic form of language is a set of syntactic rules that is universal for all humans and underlies the grammars of all human languages. This set of rules is called Universal Grammar, and for Chomsky, describing it is the primary objective of the discipline of linguistics. For this reason, the grammars of individual languages are of importance to linguistics only insofar as they allow us to discern the universal underlying rules from which the observable linguistic variability is generated.

In the classic formalization of generative grammars first proposed by Noam Chomsky in the 1950s,[47][48] a grammar G consists of the following components:

  • a finite set N of nonterminal symbols;
  • a finite set Σ of terminal symbols, disjoint from N;
  • a finite set P of production rules, each rewriting a string of symbols as another string;
  • a distinguished start symbol S, a member of N.

A formal description of language attempts to replicate a speaker's knowledge of the rules of their language, and the aim is to produce a set of rules that is minimally sufficient to successfully model valid linguistic forms.
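
To make the generative idea concrete, here is a small sketch of a toy grammar: a handful of invented rewrite rules and a recursive procedure that expands the start symbol S into a sentence. The rules and vocabulary are illustrative only, not drawn from any published grammar:

```python
import random

# Toy production rules in the spirit of a generative grammar G.
# Keys are nonterminals; each maps to a list of possible expansions.
RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["linguist"], ["grammar"]],
    "V":   [["studies"], ["describes"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively applying one of its productions."""
    if symbol not in RULES:  # terminal symbol: emit the word itself
        return [symbol]
    expansion = random.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the linguist describes a grammar"
```

Every string the procedure can emit is, by construction, "grammatical" with respect to G, which is the sense in which a generative grammar models a speaker's knowledge of valid forms.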

Functionalism

Functional theories of language propose that since language is fundamentally a tool, it is reasonable to assume that its structures are best analysed and understood with reference to the functions they carry out. Functional theories of grammar differ from formal theories of grammar in that the latter seek to define the different elements of language and describe the way they relate to each other as systems of formal rules or operations, whereas the former define the functions performed by language and then relate these functions to the linguistic elements that carry them out. This means that functional theories of grammar tend to pay attention to the way language is actually used, and not just to the formal relations between linguistic elements.[49]

Functional theories describe language in terms of the functions existing at all levels of language:

  • Phonological function: the function of the phoneme is to distinguish between different lexical material.
  • Semantic function: (Agent, Patient, Recipient, etc.), describing the role of participants in states of affairs or actions expressed.
  • Syntactic functions: (e.g. Subject and Object), defining different perspectives in the presentation of a linguistic expression.
  • Pragmatic functions: (Theme and Rheme, Topic and Focus, Predicate), defining the informational status of constituents, determined by the pragmatic context of the verbal interaction.

Functional descriptions of grammar strive to explain how linguistic functions are performed in communication through the use of linguistic forms.

Cognitive linguistics

Cognitive linguistics emerged as a reaction to generativist theory in the 1970s and 1980s. Led by theorists like Ronald Langacker and George Lakoff, cognitive linguists propose that language is an emergent property of basic, general-purpose cognitive processes. In contrast to the generativist school of linguistics, cognitive linguistics is non-modularist and functionalist in character. Important developments in cognitive linguistics include cognitive grammar, frame semantics, and conceptual metaphor, all of which are based on the idea that form–function correspondences based on representations derived from embodied experience constitute the basic units of language.

Cognitive linguistics interprets language in terms of concepts (sometimes universal, sometimes specific to a particular tongue) that underlie its form. It is thus closely associated with semantics but is distinct from psycholinguistics, which draws upon empirical findings from cognitive psychology in order to explain the mental processes that underlie the acquisition, storage, production and understanding of speech and writing. Unlike generative theory, cognitive linguistics denies that there is an autonomous linguistic faculty in the mind; it understands grammar in terms of conceptualization; and claims that knowledge of language arises out of language use.[50] Because of its conviction that knowledge of language is learned through use, cognitive linguistics is sometimes considered to be a functional approach, but it differs from other functional approaches in that it is primarily concerned with how the mind creates meaning through language, and not with the use of language as a tool of communication.

Areas of research

Historical linguistics

Historical linguists study the history of specific languages as well as general characteristics of language change. The study of language change is also referred to as "diachronic linguistics" (the study of how one particular language has changed over time), which can be distinguished from "synchronic linguistics" (the comparative study of more than one language at a given moment in time without regard to previous stages). Historical linguistics was among the first sub-disciplines to emerge in linguistics, and was the most widely practised form of linguistics in the late 19th century. However, there was a shift to the synchronic approach in the early twentieth century with Saussure, and this approach became more predominant in western linguistics with the work of Noam Chomsky.

Ecolinguistics

Ecolinguistics explores the role of language in the life-sustaining interactions of humans, other species and the physical environment. The first aim is to develop linguistic theories which see humans not only as part of society, but also as part of the larger ecosystems that life depends on. The second aim is to show how linguistics can be used to address key ecological issues, from climate change and biodiversity loss to environmental justice.[51]

Sociolinguistics

Sociolinguistics is the study of how language is shaped by social factors. This sub-discipline focuses on the synchronic approach of linguistics, and looks at how a language in general, or a set of languages, display variation and varieties at a given point in time. The study of language variation and of the different varieties of language through dialects, registers, and idiolects can be tackled through a study of style, as well as through analysis of discourse. Sociolinguists research both style and discourse in language, and also study the theoretical factors that are at play between language and society.

Developmental linguistics

Developmental linguistics is the study of the development of linguistic ability in individuals, particularly the acquisition of language in childhood. Some of the questions that developmental linguistics investigates are how children acquire different languages, how adults can acquire a second language, and what the process of language acquisition is.

Neurolinguistics

Neurolinguistics is the study of the structures in the human brain that underlie grammar and communication. Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models in psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the processes that theoretical and psycholinguistics propose are necessary in producing and comprehending language. Neurolinguists study the physiological mechanisms by which the brain processes information related to language, and evaluate linguistic and psycholinguistic theories using aphasiology, brain imaging, electrophysiology, and computer modelling. Among the brain structures involved in these mechanisms, the cerebellum, which contains the highest number of neurons, plays a major role in the predictions required to produce language.[52]

Applied linguistics

Linguists are largely concerned with finding and describing the generalities and varieties both within particular languages and among all languages. Applied linguistics takes the results of those findings and "applies" them to other areas. Linguistic research is commonly applied to areas such as language education, lexicography, translation, language planning (which involves the implementation of governmental policy related to language use), and natural language processing. "Applied linguistics" has been argued to be something of a misnomer:[53] applied linguists actually focus on making sense of and engineering solutions for real-world linguistic problems, rather than literally "applying" existing technical knowledge from linguistics. Moreover, they commonly apply technical knowledge from multiple sources, such as sociology (e.g., conversation analysis) and anthropology. (Constructed languages also fall within applied linguistics.)

Today, computers are widely used in many areas of applied linguistics. Speech synthesis and speech recognition use phonetic and phonemic knowledge to provide voice interfaces to computers. Applications of computational linguistics in machine translation, computer-assisted translation, and natural language processing are areas of applied linguistics that have come to the forefront. Their influence has had an effect on theories of syntax and semantics, as modelling syntactic and semantic theories on computers constrains them.

Linguistic analysis is a sub-discipline of applied linguistics used by many governments to verify the claimed nationality of people seeking asylum who do not hold the necessary documentation to prove their claim.[54] This often takes the form of an interview by personnel in an immigration department. Depending on the country, this interview is conducted either in the asylum seeker's native language through an interpreter or in an international lingua franca like English.[54] Australia uses the former method, while Germany employs the latter; the Netherlands uses either method depending on the languages involved.[54] Tape recordings of the interview then undergo language analysis, which can be done either by private contractors or within a department of the government. In this analysis, linguistic features of the asylum seeker are used by analysts to make a determination about the speaker's nationality. The reported findings of the linguistic analysis can play a critical role in the government's decision on the refugee status of the asylum seeker.[54]

Interdisciplinary fields

Within the broad discipline of linguistics, various emerging sub-disciplines focus on a more detailed description and analysis of language, and are often organized on the basis of the school of thought and theoretical approach that they presuppose, or the external factors that influence them.

Semiotics

Semiotics is the study of sign processes (semiosis), or signification and communication, signs, and symbols, both individually and grouped into sign systems, including the study of how meaning is constructed and understood. Semioticians often do not restrict themselves to linguistic communication when studying the use of signs but extend the meaning of "sign" to cover all kinds of cultural symbols. Nonetheless, semiotic disciplines closely related to linguistics are literary studies, discourse analysis, text linguistics, and philosophy of language. Semiotics, within the linguistics paradigm, is the study of the relationship between language and culture. Historically, Edward Sapir and Ferdinand de Saussure's structuralist theories influenced the study of signs extensively until the late 20th century; subsequently, postmodern and post-structural thought, through language philosophers including Jacques Derrida, Mikhail Bakhtin, Michel Foucault, and others, has also been a considerable influence on the discipline in the late 20th and early 21st centuries.[55] These theories emphasize the role of language variation and the idea of subjective usage, depending on external elements like social and cultural factors, rather than merely on the interplay of formal elements.

Language documentation

Since the inception of the discipline of linguistics, linguists have been concerned with describing and analysing previously undocumented languages. Starting with Franz Boas in the early 1900s, this became the main focus of American linguistics until the rise of formal structural linguistics in the mid-20th century. This focus on language documentation was partly motivated by a concern to document the rapidly disappearing languages of indigenous peoples. The ethnographic dimension of the Boasian approach to language description played a role in the development of disciplines such as sociolinguistics, anthropological linguistics, and linguistic anthropology, which investigate the relations between language, culture, and society.

The emphasis on linguistic description and documentation has also gained prominence outside North America, with the documentation of rapidly dying indigenous languages becoming a primary focus in many university programmes in linguistics. Language description is a work-intensive endeavour, usually requiring years of field work in the language concerned, so as to equip the linguist to write a sufficiently accurate reference grammar. Further, the task of documentation requires the linguist to collect a substantial corpus in the language in question, consisting of texts and recordings, both sound and video, which can be stored in an accessible format within open repositories, and used for further research.[56]

Translation

The sub-field of translation includes the translation of written and spoken texts across mediums, from digital to print to speech. To translate literally means to transmute meaning from one language into another. Translators are often employed by organizations such as travel agencies and government embassies to facilitate communication between two speakers who do not know each other's language. Translators also work within computational linguistics settings such as Google Translate, an automated facility that translates words and phrases between any two or more given languages. Translation is also conducted by publishing houses, which convert works of writing from one language to another in order to reach varied audiences. Academic translators specialize in various disciplines such as technology, science, law, and economics.

Biolinguistics

Biolinguistics is the study of the biology and evolution of language. It is a highly interdisciplinary field, including linguists, biologists, neuroscientists, psychologists, mathematicians, and others. By shifting the focus of investigation in linguistics to a comprehensive scheme that embraces natural sciences, it seeks to yield a framework by which the fundamentals of the faculty of language are understood.

Clinical linguistics

Clinical linguistics is the application of linguistic theory to the field of speech-language pathology. Speech-language pathologists work on corrective measures to treat communication and swallowing disorders.

Chaika (1990) showed that people with schizophrenia who display speech disorders, like rhyming inappropriately, have attentional dysfunction, as when a patient, shown a colour chip and then asked to identify it, responded "looks like clay. Sounds like gray. Take you for a roll in the hay. Heyday, May Day." The colour chip was actually clay-coloured, so his first response was correct.

However, most people suppress or ignore words which rhyme with what they've said unless they are deliberately producing a pun, poem or rap. Even then, the speaker shows connection between words chosen for rhyme and an overall meaning in discourse. People with schizophrenia with speech dysfunction show no such relation between rhyme and reason. Some even produce stretches of gibberish combined with recognizable words.[57]

Computational linguistics

Computational linguistics is the study of linguistic issues in a way that is "computationally responsible", i.e., taking careful note of computational considerations such as algorithmic specification and computational complexity, so that the linguistic theories devised can be shown to exhibit certain desirable computational properties, as can their implementations. Computational linguists also work on computer language and software development.
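
A classic example of such computational responsibility is the CYK algorithm, which recognizes sentences of a context-free grammar in Chomsky normal form in O(n³) time, a provable complexity bound of the kind the field cares about. The sketch below uses a small invented grammar for illustration:

```python
# CYK recognizer for a grammar in Chomsky normal form.
# The toy grammar below (Det, N, V, NP, VP, S) is invented for illustration.
UNARY = {("Det", "the"), ("Det", "a"), ("N", "cat"), ("N", "mat"), ("V", "saw")}
BINARY = {("S", "NP", "VP"), ("NP", "Det", "N"), ("VP", "V", "NP")}

def cyk(words):
    """Return True iff the start symbol S derives the word sequence."""
    n = len(words)
    # table[i][j] holds the nonterminals that derive words[i..j] inclusive.
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = {A for (A, t) in UNARY if t == w}
    for span in range(2, n + 1):            # widen the span: O(n)
        for i in range(n - span + 1):       # start position:  O(n)
            j = i + span - 1
            for k in range(i, j):           # split point:     O(n)
                for (A, B, C) in BINARY:
                    if B in table[i][k] and C in table[k + 1][j]:
                        table[i][j].add(A)
    return "S" in table[0][n - 1]

print(cyk("the cat saw a mat".split()))  # True
```

The three nested loops over positions make the cubic bound visible directly in the code, which is precisely the kind of property a "computationally responsible" account makes explicit.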

Evolutionary linguistics

Evolutionary linguistics is the interdisciplinary study of the emergence of the language faculty through human evolution, and also the application of evolutionary theory to the study of cultural evolution among different languages. It is also a study of the dispersal of various languages across the globe, through movements among ancient communities.[58]

Forensic linguistics

Forensic linguistics is the application of linguistic analysis to forensics. Forensic analysis investigates the style, language, lexical use, and other linguistic and grammatical features used in the legal context to provide evidence in courts of law. Forensic linguists have also contributed expertise in criminal cases.

References

  1. ^ Halliday, Michael A.K.; Jonathan Webster (2006). On Language and Linguistics. Continuum International Publishing Group. p. vii. ISBN 978-0-8264-8824-4.
  2. ^ a b Martinet, André (1960). Elements of General Linguistics. Studies in General Linguistics, vol. i. Translated by Elisabeth Palmer Rubbert. London: Faber. p. 15.
  3. ^ Rens Bod (2014). A New History of the Humanities: The Search for Principles and Patterns from Antiquity to the Present. Oxford University Press. ISBN 978-0-19-966521-1.
  4. ^ "Chapter VI: Sanskrit Literature". The Imperial Gazetteer of India. 2. 1908. p. 263.
  5. ^ S.C. Vasu (Tr.) (1996). The Ashtadhyayi of Panini (2 Vols.). Vedic Books. ISBN 978-81-208-0409-8.
  6. ^ Jakobson, Roman (1937). Six Lectures on Sound and Meaning. MIT Press, Cambridge, Massachusetts. ISBN 978-0-262-60010-1.
  7. ^ Sharada Narayanan (2010). "Vakyapadiya: Sphota, Jati, and Dravya". The Hindu.
  8. ^ Chierchia, Gennaro & Sally McConnell-Ginet (2000). Meaning and Grammar: An Introduction to Semantics. MIT Press, Cambridge, Massachusetts. ISBN 978-0-262-53164-1.
  9. ^ All references in this article to the study of sound should be taken to include the manual and non-manual signs used in sign languages.
  10. ^ Adrian Akmajian; Richard A. Demers; Ann K. Farmer; Robert M. Harnish (2010). Linguistics (6th ed.). The MIT Press. ISBN 978-0-262-51370-8. Retrieved 25 July 2012.
  11. ^ Syntax: A Generative Introduction (Second Edition), 2013. Andrew Carnie. Blackwell Publishing.
  12. ^ de Saussure, F. (1986). Course in general linguistics (3rd ed.). (R. Harris, Trans.). Chicago: Open Court Publishing Company. (Original work published 1972). pp. 9–10, 15.
  13. ^ Chomsky, Noam. (1965). Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
  14. ^ Raymond Mougeon & Terry Nadasdi (1998). "Sociolinguistic Discontinuity in Minority Language Communities". Language. 74 (1): 40–55. JSTOR 417564.
  15. ^ ""Stylistics" by Joybrato Mukherjee. Chapter 49. Encyclopedia of Linguistics" (PDF). Archived from the original (PDF) on 4 October 2013. Retrieved 4 October 2013.
  16. ^ Writing and Difference by Jacques Derrida, 1967, and Of Grammatology
  17. ^ Chapter 1, section 1.1 in Elmer H. Antonsen (2002). Trends in Linguistics: Runes and Germanic Linguistics (6th ed.). Mouton de Gruyter. ISBN 978-3-11-017462-5.
  18. ^ Journal of Language and Politics
  19. ^ a b Harper, Douglas. "philology". Online Etymology Dictionary. Retrieved 2018-03-05.
  20. ^ Nichols, Stephen G. (1990). "Introduction: Philology in a Manuscript Culture". Speculum. 65 (1): 1–10. doi:10.2307/2864468. JSTOR 2864468.
  21. ^ McMahon, A.M.S. (1994). Understanding Language Change. Cambridge University Press. p. 19. ISBN 978-0-521-44665-5.
  22. ^ McMahon, A.M.S. (1994). Understanding Language Change. Cambridge University Press. p. 9. ISBN 978-0-521-44665-5.
  23. ^ A. Morpurgo Davies Hist. Linguistics (1998) 4 I. 22.
  24. ^ a b Harper, Douglas. "linguist". Online Etymology Dictionary. Retrieved 2018-03-05.
  25. ^ Spolsky, Bernard; Hult, Francis M. (February 2010). The Handbook of Educational Linguistics. John Wiley & Sons. ISBN 978-1-4443-3104-2.
  26. ^ Berns, Margie (2010-03-20). Concise Encyclopedia of Applied Linguistics. Elsevier. pp. 23–25. ISBN 978-0-08-096503-1.
  27. ^ "The Science of Linguistics". Linguistic Society of America. Retrieved 2018-04-17. Modern linguists approach their work with a scientific perspective, although they use methods that used to be thought of as solely an academic discipline of the humanities. Contrary to previous belief, linguistics is multidisciplinary. It overlaps each of the human sciences including psychology, neurology, anthropology, and sociology. Linguists conduct formal studies of sound structure, grammar and meaning, but they also investigate the history of language families, and research language acquisition.
  28. ^ Crystal, David (1990). Linguistics. Penguin Books. ISBN 978-0-14-013531-2.
  29. ^ "Linguist". The American Heritage Dictionary of the English Language. Houghton Mifflin Harcourt. 2000. ISBN 978-0-395-82517-4.
  30. ^ Oxford English dictionary.
  31. ^ Trudgill, P. (1994). Dialects. Ebooks Online Routledge. Florence, KY.
  32. ^ Ariel, Mira (2009). "Discourse, grammar, discourse". Discourse Studies. 11 (1): 5–36. JSTOR 24049745.
  33. ^ Helen Leckie-Tarry, Language and Context: a Functional Linguistic Theory of Register, Continuum International Publishing Group, 1995, p. 6. ISBN 1-85567-272-3
  34. ^ Zuckermann, Ghil'ad (2003). Language Contact and Lexical Enrichment in Israeli Hebrew. Palgrave Macmillan. pp. 2ff. ISBN 978-1-4039-1723-2.
  35. ^ Jacques Derrida (1978). Writing and Difference. Translated by Alan Bass. University of Chicago Press. ISBN 978-0-226-14329-3.
  36. ^ Lea, Richard (18 November 2004). "Relative Thinking". The Guardian.
  37. ^ IA Richards (1965). The Philosophy of Rhetoric. Oxford University Press (New York).
  38. ^ Isac, Daniela; Charles Reiss (2013). I-language: An Introduction to Linguistics as Cognitive Science, 2nd edition. Oxford University Press. ISBN 978-0-19-966017-9.
  39. ^ Bloomfield 1983, p. 307.
  40. ^ Seuren, Pieter A. M. (1998). Western linguistics: An historical introduction. Wiley-Blackwell. pp. 2–24. ISBN 978-0-631-20891-4.
  41. ^ Bloomfield 1983, p. 308.
  42. ^ Bloomfield 1983, p. 310.
  43. ^ a b Bloomfield 1983, p. 311.
  44. ^ Clarke, David S. (1990). Sources of Semiotic: Readings with Commentary from Antiquity to the Present. Carbondale: Southern Illinois University Press. pp. 143–44. ISBN 978-0-8093-1614-4.
  45. ^ Holquist 1981, pp. xvii–xviii.[citation not found]
  46. ^ de Saussure, Ferdinand. Course in General Linguistics. New York: McGraw Hill. ISBN 978-0-8022-1493-5.
  47. ^ Chomsky, Noam (1956). "Three Models for the Description of Language". IRE Transactions on Information Theory. 2 (2): 113–24. doi:10.1109/TIT.1956.1056813.
  48. ^ Chomsky, Noam (1957). Syntactic Structures. The Hague: Mouton.
  49. ^ Nichols, Johanna (1984). "Functional Theories of Grammar". Annual Review of Anthropology. 13: 97–117. doi:10.1146/annurev.an.13.100184.000525. [Functional grammar] analyzes grammatical structure, as do formal and structural grammar; but it also analyses the entire communicative situation: the purpose of the speech event, its participants, its discourse context. Functionalists maintain that the communicative situation motivates, constrains, explains, or otherwise determines grammatical structure, and that a structural or formal approach is not merely limited to an artificially restricted data base, but is inadequate as a structural account. Functional grammar, then, differs from formulae and structural grammar in that it purports not to model but to explain; and the explanation is grounded in the communicative situation.
  50. ^ Croft, William & D. Alan Cruse (2004). Cognitive Linguistics. Cambridge: Cambridge University Press. p. 1.
  51. ^ "Ecolinguistics Association".
  52. ^ Mariën, Peter; Manto, Mario (2017-10-25). "Cerebellum as a Master-Piece for Linguistic Predictability". Cerebellum (London, England). 17 (2): 101–03. doi:10.1007/s12311-017-0894-1. ISSN 1473-4230. PMID 29071518.
  53. ^ Barbara Seidlhofer (2003). Controversies in Applied Linguistics (pp. 288). Oxford University Press. ISBN 978-0-19-437444-6.
  54. ^ a b c d Eades, Diana (2005). "Applied Linguistics and Language Analysis in Asylum Seeker Cases" (PDF). Applied Linguistics. 26 (4): 503–26. doi:10.1093/applin/ami021.
  55. ^ Miller, Paul Allen (1998). "The Classical Roots of Post-Structuralism: Lacan, Derrida and Foucault". International Journal of the Classical Tradition. 5 (2): 204–25. doi:10.1007/bf02688423. JSTOR 30222818.
  56. ^ Himmelman, Nikolaus "Language documentation: What is it and what is it good for?" in P. Gippert, Jost, Nikolaus P Himmelmann & Ulrike Mosel. (2006) Essentials of Language documentation. Mouton de Gruyter, Berlin & New York.
  57. ^ Chaika, Elaine Ostrach. 1990. Understanding Psychotic Speech: Between Freud and Chomsky. Chas. Thomas Publishers.
  58. ^ Croft, William (October 2008). "Evolutionary Linguistics". Annual Review of Anthropology. 37: 219–34. doi:10.1146/annurev.anthro.37.081407.085156.

This page was last edited on 11 February 2019, at 01:08
Basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses.