Tree-adjoining grammar

Tree-adjoining grammar (TAG) is a grammar formalism defined by Aravind Joshi. Tree-adjoining grammars are somewhat similar to context-free grammars, but the elementary unit of rewriting is the tree rather than the symbol. Whereas context-free grammars have rules for rewriting symbols as strings of other symbols, tree-adjoining grammars have rules for rewriting the nodes of trees as other trees (see tree (graph theory) and tree (data structure)).
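
To make the contrast concrete, here is a minimal sketch in Python (my own illustration; the data layouts are hypothetical, not from any parsing library): a context-free step rewrites a symbol inside a string, while a tree-adjoining step grafts a whole tree onto a node of another tree.

    # CFG step: the rule S -> a S b rewrites the *symbol* S within a string.
    sentential_form = ["a", "S", "b"]
    i = sentential_form.index("S")
    sentential_form[i:i + 1] = ["a", "S", "b"]   # now: a a S b b

    # TAG step: a whole *tree* replaces a node; a tree here is (label, children).
    tree = ("S", ["a", ("S", []), "b"])          # S with an unexpanded S node
    tree[1][1] = ("S", ["a", ("S", []), "b"])    # graft a tree at that node
    print(sentential_form)  # ['a', 'a', 'S', 'b', 'b']
    print(tree)             # ('S', ['a', ('S', ['a', ('S', []), 'b']), 'b'])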

History

TAG originated in investigations by Joshi and his students into the family of adjunction grammars (AG),[1] the "string grammar" of Zellig Harris.[2] AGs handle exocentric properties of language in a natural and effective way, but do not have a good characterization of endocentric constructions; the converse is true of rewrite grammars, or phrase-structure grammars (PSG). In 1969, Joshi introduced a family of grammars that exploits this complementarity by mixing the two types of rules: a few very simple rewrite rules suffice to generate the vocabulary of strings for the adjunction rules. This family is distinct from the Chomsky–Schützenberger hierarchy but intersects it in interesting and linguistically relevant ways.[3] The center strings and adjunct strings can also be generated by a dependency grammar, avoiding the limitations of rewrite systems entirely.[4][5]

Description

The rules in a TAG are trees with a special leaf node known as the foot node, which is anchored to a word. There are two types of basic trees in TAG: initial trees (often represented as 'α') and auxiliary trees ('β'). Initial trees represent basic valency relations, while auxiliary trees allow for recursion.[6] Auxiliary trees have the root (top) node and foot node labeled with the same symbol. A derivation starts with an initial tree, combining further trees via either substitution or adjunction. Substitution replaces a frontier node with another tree whose top node has the same label. The root and foot labels of an auxiliary tree must match the label of the node at which it adjoins; the subtree previously below that node is re-attached at the foot. Adjunction can thus have the effect of inserting an auxiliary tree into the center of another tree.[4]
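
As a concrete illustration, the following minimal Python sketch (my own; the class and function names are hypothetical, not drawn from any TAG implementation) shows both operations: substitution fills a frontier node, while adjunction splices a copy of an auxiliary tree in at a matching interior node and re-hangs the displaced subtree at the foot. A real implementation would also enforce per-node adjunction constraints (null, selective, or obligatory adjunction), which the sketch omits.

    from copy import deepcopy

    class Node:
        """A labeled tree node; foot=True marks an auxiliary tree's foot node."""
        def __init__(self, label, children=None, foot=False):
            self.label = label
            self.children = children or []
            self.foot = foot

        def __repr__(self):
            if not self.children:
                return self.label + ("*" if self.foot else "")
            return f"({self.label} {' '.join(map(repr, self.children))})"

    def nodes(tree):
        yield tree
        for child in tree.children:
            yield from nodes(child)

    def substitute(target, initial):
        """Replace a frontier node with an initial tree bearing the same label."""
        assert not target.children and target.label == initial.label
        target.children = deepcopy(initial.children)

    def adjoin(target, auxiliary):
        """Splice an auxiliary tree in at target: the auxiliary root takes
        target's place, and target's old children move to the foot node."""
        aux = deepcopy(auxiliary)
        assert aux.label == target.label     # root/foot label must match target
        foot = next(n for n in nodes(aux) if n.foot)
        foot.children, foot.foot = target.children, False
        target.children = aux.children

    # Initial tree (S (NP) (VP (V sleeps))) with NP as a substitution site.
    s = Node("S", [Node("NP"), Node("VP", [Node("V", [Node("sleeps")])])])
    substitute(s.children[0], Node("NP", [Node("N", [Node("John")])]))

    # Auxiliary tree (VP VP* (Adv soundly)); VP* is the foot node.
    adjoin(s.children[1], Node("VP", [Node("VP", foot=True),
                                      Node("Adv", [Node("soundly")])]))

    print(s)  # (S (NP (N John)) (VP (VP (V sleeps)) (Adv soundly)))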

Other variants of TAG allow multi-component trees, trees with multiple foot nodes, and other extensions.

Complexity and application

Tree-adjoining grammars are more powerful (in terms of weak generative capacity) than context-free grammars, but less powerful than linear context-free rewriting systems,[7] indexed,[note 1] or context-sensitive grammars.

A TAG can describe the language of squares {ww | w is any string} (in which an arbitrary string is repeated), and the language {a^n b^n c^n d^n | n ≥ 1}. This type of processing can be represented by an embedded pushdown automaton. Languages with cubes (i.e. triplicated strings, {www}) or with more than four distinct character strings of equal length (e.g. {a^n b^n c^n d^n e^n}) cannot be generated by tree-adjoining grammars.
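
As a sketch of how the second language arises (an illustration of the textbook derivation, not code from the cited sources): a single auxiliary tree of the shape S(a S(b S* c) d), adjoined repeatedly into the empty initial tree, adds one a and one d outside and one b and one c around the material that moves down to the foot node. The Python simulation below tracks the string yield only.

    def tag_yield(n: int) -> str:
        """Yield after n adjunctions of S(a S(b S* c) d) into S -> epsilon."""
        inner = ""                        # material that ends up below the foot
        for _ in range(n):
            inner = "b" + inner + "c"     # old subtree re-hangs at the foot S*
        return "a" * n + inner + "d" * n  # each adjunction adds an outer a ... d

    for n in range(1, 4):
        print(tag_yield(n))               # abcd, aabbccdd, aaabbbcccddd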

For these reasons, tree-adjoining grammars are often described as mildly context-sensitive. These grammar classes are conjectured to be powerful enough to model natural languages while remaining efficiently parsable in the general case.[8]

Equivalences

Vijay-Shanker and Weir (1994)[9] demonstrate that linear indexed grammars, combinatory categorial grammar, tree-adjoining grammars, and head grammars are weakly equivalent formalisms, in that they all define the same string languages.

Lexicalized

Lexicalized tree-adjoining grammars (LTAG) are a variant of TAG in which each elementary tree (initial or auxiliary) is associated with a lexical item. A lexicalized grammar for English has been developed by the XTAG Research Group of the Institute for Research in Cognitive Science at the University of Pennsylvania.[5]
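
Illustratively, an LTAG lexicon pairs each word with the elementary trees it anchors. The fragment below is a hedged sketch in Python (tree shapes and names are my own, not the XTAG grammar's actual tree families); "NP!" marks a substitution site and "VP*" a foot node.

    # Each entry anchors one elementary tree to one lexical item.
    LEXICON = {
        "John":    ("NP", [("N", ["John"])]),                    # initial tree
        "sleeps":  ("S", [("NP!", []),
                          ("VP", [("V", ["sleeps"])])]),         # initial tree
        "soundly": ("VP", [("VP*", []), ("Adv", ["soundly"])]),  # auxiliary tree
    }

    # One payoff of lexicalization: a parser need only consider the trees
    # anchored by words that actually occur in the input sentence.
    sentence = ["John", "sleeps", "soundly"]
    candidate_trees = [LEXICON[w] for w in sentence]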

Notes

  1. ^ since for each tree-adjoining grammar, a weakly equivalent linear indexed grammar can be found (see Equivalences below), and for the latter, in turn, a weakly equivalent (proper) indexed grammar; see Indexed grammar#Computational Power

References

  1. ^ Joshi, Aravind K.; Kosaraju, S. Rao; Yamada, H. M. (1969). "String Adjunct Grammars". Proceedings of the Tenth Annual Symposium on Automata Theory, Waterloo, Canada.
     Joshi, Aravind K.; Kosaraju, S. Rao; Yamada, H. M. (1972). "String Adjunct Grammars: I. Local and Distributed Adjunction". Information and Control 21 (2): 93–116. doi:10.1016/S0019-9958(72)90051-4.
     Joshi, Aravind K.; Kosaraju, S. Rao; Yamada, H. M. (1972). "String Adjunct Grammars: II. Equational Representation, Null Symbols, and Linguistic Relevance". Information and Control 21 (3): 235–260. doi:10.1016/S0019-9958(72)80005-6.
  2. ^ Harris, Zellig S. (1962). String analysis of sentence structure. Papers on Formal Linguistics. Vol. 1. The Hague: Mouton & Co.
  3. ^ Joshi, Aravind (1969). "Properties of Formal Grammars with Mixed Types of Rules and Their Linguistic Relevance" (Document). Proceedings Third International Symposium on Computational Linguistics, Stockholm, Sweden.
  4. ^ a b Joshi, Aravind; Owen Rambow (2003). "A Formalism for Dependency Grammar Based on Tree Adjoining Grammar" (PDF). Proceedings of the Conference on Meaning-Text Theory.
  5. ^ a b "A Lexicalized Tree Adjoining Grammar for English".
  6. ^ Jurafsky, Daniel; James H. Martin (2000). Speech and Language Processing. Upper Saddle River, NJ: Prentice Hall. p. 354.
  7. ^ Kallmeyer, Laura (2010). Parsing Beyond Context-Free Grammars. Springer. pp. 215–216.
  8. ^ Joshi, Aravind (1985). "How much context-sensitivity is necessary for characterizing structural descriptions". In D. Dowty; L. Karttunen; A. Zwicky (eds.). Natural Language Processing: Theoretical, Computational, and Psychological Perspectives. New York, NY: Cambridge University Press. pp. 206–250. ISBN 9780521262033.
  9. ^ Vijay-Shanker, K.; Weir, David J. (1994). "The Equivalence of Four Extensions of Context-Free Grammars". Mathematical Systems Theory 27 (6): 511–546.

External links

  • The XTAG project, which uses a TAG for natural language processing.
  • A tutorial on TAG
  • SemConst Documentation: a quick survey of the syntax–semantics interface within the TAG framework.
  • The TuLiPa project: the Tübingen Linguistic Parsing Architecture (TuLiPA) is a multi-formalism syntactic (and semantic) parsing environment, designed mainly for multi-component tree-adjoining grammars with tree tuples.
  • The Metagrammar Toolkit, which provides several tools to edit and compile MetaGrammars into TAGs. It also includes wide-coverage French MetaGrammars.
  • LLP2, a lexicalized tree-adjoining grammar parser that provides an easy-to-use graphical environment (page in French).