Availability
Library | Item Barcode | Call Number | Material Type | Item Category 1 | Status |
---|---|---|---|---|---|
 | 30000010113506 | QA76.9.N38 N83 2006 | Open Access Book | Book | |
On Order
Summary
The areas of natural language processing and computational linguistics have continued to grow in recent years, driven by the demand to automatically process text and spoken data. With the processing power and techniques now available, research is scaling up from lab prototypes to real-world, proven applications.
This book teaches the principles of natural language processing, first covering linguistic issues such as encoding, entropy, and annotation schemes; defining words, tokens, and parts of speech; and morphology. It then details the language-processing functions involved, including part-of-speech tagging using rules and stochastic techniques; using Prolog to write phrase-structure grammars; parsing techniques and syntactic formalisms; semantics, predicate logic, and lexical semantics; and the analysis of discourse, with applications in dialogue systems. The key feature of the book is the author's hands-on approach throughout, with extensive exercises, sample code in Prolog and Perl, and a detailed introduction to Prolog. The reader is supported with a companion website that contains teaching slides, programs, and additional material.
The book is suitable for researchers and students of natural language processing and computational linguistics.
Author Notes
Pierre Nugues' research is focused on natural language processing for advanced user interfaces and spoken dialogue. This includes the design and implementation of conversational agents within a multimodal framework, as well as text visualization. He led the team that designed Ulysse, a navigation agent that enables a user to navigate a virtual reality environment using language, and the team that designed the CarSim system, which generates animated 3D scenes from written texts.
Pierre Nugues has taught natural language processing and computational linguistics at the following institutions: ISMRA, Caen, France; University of Nottingham, UK; Staffordshire University, UK; FH Konstanz, Germany; Lund University, Sweden; and Ghent University, Belgium.
Table of Contents
1 An Overview of Language Processing | p. 1 |
1.1 Linguistics and Language Processing | p. 1 |
1.2 Applications of Language Processing | p. 2 |
1.3 The Different Domains of Language Processing | p. 3 |
1.4 Phonetics | p. 4 |
1.5 Lexicon and Morphology | p. 6 |
1.6 Syntax | p. 8 |
1.6.1 Syntax as Defined by Noam Chomsky | p. 8 |
1.6.2 Syntax as Relations and Dependencies | p. 10 |
1.7 Semantics | p. 11 |
1.8 Discourse and Dialogue | p. 14 |
1.9 Why Speech and Language Processing Are Difficult | p. 14 |
1.9.1 Ambiguity | p. 15 |
1.9.2 Models and Their Implementation | p. 16 |
1.10 An Example of Language Technology in Action: the Persona Project | p. 17 |
1.10.1 Overview of Persona | p. 17 |
1.10.2 The Persona's Modules | p. 18 |
1.11 Further Reading | p. 19 |
2 Corpus Processing Tools | p. 23 |
2.1 Corpora | p. 23 |
2.1.1 Types of Corpora | p. 23 |
2.1.2 Corpora and Lexicon Building | p. 24 |
2.1.3 Corpora as Knowledge Sources for the Linguist | p. 26 |
2.2 Finite-State Automata | p. 27 |
2.2.1 A Description | p. 27 |
2.2.2 Mathematical Definition of Finite-State Automata | p. 28 |
2.2.3 Finite-State Automata in Prolog | p. 29 |
2.2.4 Deterministic and Nondeterministic Automata | p. 30 |
2.2.5 Building a Deterministic Automaton from a Nondeterministic One | p. 31 |
2.2.6 Searching a String with a Finite-State Automaton | p. 31 |
2.2.7 Operations on Finite-State Automata | p. 33 |
2.3 Regular Expressions | p. 35 |
2.3.1 Repetition Metacharacters | p. 36 |
2.3.2 The Longest Match | p. 37 |
2.3.3 Character Classes | p. 38 |
2.3.4 Nonprintable Symbols or Positions | p. 39 |
2.3.5 Union and Boolean Operators | p. 41 |
2.3.6 Operator Combination and Precedence | p. 41 |
2.4 Programming with Regular Expressions | p. 42 |
2.4.1 Perl | p. 42 |
2.4.2 Matching | p. 42 |
2.4.3 Substitutions | p. 43 |
2.4.4 Translating Characters | p. 44 |
2.4.5 String Operators | p. 44 |
2.4.6 Back References | p. 45 |
2.5 Finding Concordances | p. 46 |
2.5.1 Concordances in Prolog | p. 46 |
2.5.2 Concordances in Perl | p. 48 |
2.6 Approximate String Matching | p. 50 |
2.6.1 Edit Operations | p. 50 |
2.6.2 Minimum Edit Distance | p. 51 |
2.6.3 Searching Edits in Prolog | p. 54 |
2.7 Further Reading | p. 55 |
3 Encoding, Entropy, and Annotation Schemes | p. 59 |
3.1 Encoding Texts | p. 59 |
3.2 Character Sets | p. 60 |
3.2.1 Representing Characters | p. 60 |
3.2.2 Unicode | p. 61 |
3.2.3 The Unicode Encoding Schemes | p. 63 |
3.3 Locales and Word Order | p. 66 |
3.3.1 Presenting Time, Numerical Information, and Ordered Words | p. 66 |
3.3.2 The Unicode Collation Algorithm | p. 67 |
3.4 Markup Languages | p. 69 |
3.4.1 A Brief Background | p. 69 |
3.4.2 An Outline of XML | p. 69 |
3.4.3 Writing a DTD | p. 71 |
3.4.4 Writing an XML Document | p. 74 |
3.4.5 Namespaces | p. 75 |
3.5 Codes and Information Theory | p. 76 |
3.5.1 Entropy | p. 76 |
3.5.2 Huffman Encoding | p. 77 |
3.5.3 Cross Entropy | p. 80 |
3.5.4 Perplexity and Cross Perplexity | p. 81 |
3.6 Entropy and Decision Trees | p. 82 |
3.6.1 Decision Trees | p. 82 |
3.6.2 Inducing Decision Trees Automatically | p. 82 |
3.7 Further Reading | p. 84 |
4 Counting Words | p. 87 |
4.1 Counting Words and Word Sequences | p. 87 |
4.2 Words and Tokens | p. 87 |
4.2.1 What Is a Word? | p. 87 |
4.2.2 Breaking a Text into Words: Tokenization | p. 88 |
4.3 Tokenizing Texts | p. 89 |
4.3.1 Tokenizing Texts in Prolog | p. 89 |
4.3.2 Tokenizing Texts in Perl | p. 91 |
4.4 N-grams | p. 92 |
4.4.1 Some Definitions | p. 92 |
4.4.2 Counting Unigrams in Prolog | p. 93 |
4.4.3 Counting Unigrams with Perl | p. 93 |
4.4.4 Counting Bigrams with Perl | p. 95 |
4.5 Probabilistic Models of a Word Sequence | p. 95 |
4.5.1 The Maximum Likelihood Estimation | p. 95 |
4.5.2 Using ML Estimates with Nineteen Eighty-Four | p. 97 |
4.6 Smoothing N-gram Probabilities | p. 99 |
4.6.1 Sparse Data | p. 99 |
4.6.2 Laplace's Rule | p. 100 |
4.6.3 Good-Turing Estimation | p. 101 |
4.7 Using N-grams of Variable Length | p. 102 |
4.7.1 Linear Interpolation | p. 103 |
4.7.2 Back-off | p. 104 |
4.8 Quality of a Language Model | p. 104 |
4.8.1 Intuitive Presentation | p. 104 |
4.8.2 Entropy Rate | p. 105 |
4.8.3 Cross Entropy | p. 105 |
4.8.4 Perplexity | p. 106 |
4.9 Collocations | p. 106 |
4.9.1 Word Preference Measurements | p. 107 |
4.9.2 Extracting Collocations with Perl | p. 108 |
4.10 Application: Retrieval and Ranking of Documents on the Web | p. 109 |
4.11 Further Reading | p. 111 |
5 Words, Parts of Speech, and Morphology | p. 113 |
5.1 Words | p. 113 |
5.1.1 Parts of Speech | p. 113 |
5.1.2 Features | p. 114 |
5.1.3 Two Significant Parts of Speech: The Noun and the Verb | p. 115 |
5.2 Lexicons | p. 117 |
5.2.1 Encoding a Dictionary | p. 119 |
5.2.2 Building a Trie in Prolog | p. 121 |
5.2.3 Finding a Word in a Trie | p. 123 |
5.3 Morphology | p. 123 |
5.3.1 Morphemes | p. 123 |
5.3.2 Morphs | p. 124 |
5.3.3 Inflection and Derivation | p. 125 |
5.3.4 Language Differences | p. 129 |
5.4 Morphological Parsing | p. 130 |
5.4.1 Two-Level Model of Morphology | p. 130 |
5.4.2 Interpreting the Morphs | p. 131 |
5.4.3 Finite-State Transducers | p. 131 |
5.4.4 Conjugating a French Verb | p. 133 |
5.4.5 Prolog Implementation | p. 134 |
5.4.6 Ambiguity | p. 136 |
5.4.7 Operations on Finite-State Transducers | p. 137 |
5.5 Morphological Rules | p. 138 |
5.5.1 Two-Level Rules | p. 138 |
5.5.2 Rules and Finite-State Transducers | p. 139 |
5.5.3 Rule Composition: An Example with French Irregular Verbs | p. 141 |
5.6 Application Examples | p. 142 |
5.7 Further Reading | p. 142 |
6 Part-of-Speech Tagging Using Rules | p. 147 |
6.1 Resolving Part-of-Speech Ambiguity | p. 147 |
6.1.1 A Manual Method | p. 147 |
6.1.2 Which Method to Use to Automatically Assign Parts of Speech | p. 147 |
6.2 Tagging with Rules | p. 149 |
6.2.1 Brill's Tagger | p. 149 |
6.2.2 Implementation in Prolog | p. 151 |
6.2.3 Deriving Rules Automatically | p. 153 |
6.2.4 Confusion Matrices | p. 154 |
6.3 Unknown Words | p. 154 |
6.4 Standardized Part-of-Speech Tagsets | p. 156 |
6.4.1 Multilingual Part-of-Speech Tags | p. 156 |
6.4.2 Parts of Speech for English | p. 158 |
6.4.3 An Annotation Scheme for Swedish | p. 160 |
6.5 Further Reading | p. 162 |
7 Part-of-Speech Tagging Using Stochastic Techniques | p. 163 |
7.1 The Noisy Channel Model | p. 163 |
7.1.1 Presentation | p. 163 |
7.1.2 The N-gram Approximation | p. 164 |
7.1.3 Tagging a Sentence | p. 165 |
7.1.4 The Viterbi Algorithm: An Intuitive Presentation | p. 166 |
7.2 Markov Models | p. 167 |
7.2.1 Markov Chains | p. 167 |
7.2.2 Hidden Markov Models | p. 169 |
7.2.3 Three Fundamental Algorithms to Solve Problems with HMMs | p. 170 |
7.2.4 The Forward Procedure | p. 171 |
7.2.5 Viterbi Algorithm | p. 173 |
7.2.6 The Backward Procedure | p. 174 |
7.2.7 The Forward-Backward Algorithm | p. 175 |
7.3 Tagging with Decision Trees | p. 177 |
7.4 Unknown Words | p. 179 |
7.5 An Application of the Noisy Channel Model: Spell Checking | p. 179 |
7.6 A Second Application: Language Models for Machine Translation | p. 180 |
7.6.1 Parallel Corpora | p. 180 |
7.6.2 Alignment | p. 181 |
7.6.3 Translation | p. 183 |
7.7 Further Reading | p. 184 |
8 Phrase-Structure Grammars in Prolog | p. 185 |
8.1 Using Prolog to Write Phrase-Structure Grammars | p. 185 |
8.2 Representing Chomsky's Syntactic Formalism in Prolog | p. 185 |
8.2.1 Constituents | p. 185 |
8.2.2 Tree Structures | p. 186 |
8.2.3 Phrase-Structure Rules | p. 187 |
8.2.4 The Definite Clause Grammar (DCG) Notation | p. 188 |
8.3 Parsing with DCGs | p. 190 |
8.3.1 Translating DCGs into Prolog Clauses | p. 190 |
8.3.2 Parsing and Generation | p. 192 |
8.3.3 Left-Recursive Rules | p. 193 |
8.4 Parsing Ambiguity | p. 194 |
8.5 Using Variables | p. 196 |
8.5.1 Gender and Number Agreement | p. 196 |
8.5.2 Obtaining the Syntactic Structure | p. 198 |
8.6 Application: Tokenizing Texts Using DCG Rules | p. 200 |
8.6.1 Word Breaking | p. 200 |
8.6.2 Recognition of Sentence Boundaries | p. 201 |
8.7 Semantic Representation | p. 202 |
8.7.1 λ-Calculus | p. 202 |
8.7.2 Embedding λ-Expressions into DCG Rules | p. 203 |
8.7.3 Semantic Composition of Verbs | p. 205 |
8.8 An Application of Phrase-Structure Grammars and a Worked Example | p. 206 |
8.9 Further Reading | p. 210 |
9 Partial Parsing | p. 213 |
9.1 Is Syntax Necessary? | p. 213 |
9.2 Word Spotting and Template Matching | p. 213 |
9.2.1 ELIZA | p. 213 |
9.2.2 Word Spotting in Prolog | p. 214 |
9.3 Multiword Detection | p. 217 |
9.3.1 Multiwords | p. 217 |
9.3.2 A Standard Multiword Annotation | p. 217 |
9.3.3 Detecting Multiwords with Rules | p. 219 |
9.3.4 The Longest Match | p. 219 |
9.3.5 Running the Program | p. 220 |
9.4 Noun Groups and Verb Groups | p. 222 |
9.4.1 Groups Versus Recursive Phrases | p. 223 |
9.4.2 DCG Rules to Detect Noun Groups | p. 223 |
9.4.3 DCG Rules to Detect Verb Groups | p. 225 |
9.4.4 Running the Rules | p. 226 |
9.5 Group Detection as a Tagging Problem | p. 227 |
9.5.1 Tagging Gaps | p. 227 |
9.5.2 Tagging Words | p. 228 |
9.5.3 Using Symbolic Rules | p. 229 |
9.5.4 Using Statistical Tagging | p. 229 |
9.6 Cascading Partial Parsers | p. 230 |
9.7 Elementary Analysis of Grammatical Functions | p. 231 |
9.7.1 Main Functions | p. 231 |
9.7.2 Extracting Other Groups | p. 232 |
9.8 An Annotation Scheme for Groups in French | p. 235 |
9.9 Application: The FASTUS System | p. 237 |
9.9.1 The Message Understanding Conferences | p. 237 |
9.9.2 The Syntactic Layers of the FASTUS System | p. 238 |
9.9.3 Evaluation of Information Extraction Systems | p. 239 |
9.10 Further Reading | p. 240 |
10 Syntactic Formalisms | p. 243 |
10.1 Introduction | p. 243 |
10.2 Chomsky's Grammar in Syntactic Structures | p. 244 |
10.2.1 Constituency: A Formal Definition | p. 244 |
10.2.2 Transformations | p. 246 |
10.2.3 Transformations and Movements | p. 248 |
10.2.4 Gap Threading | p. 248 |
10.2.5 Gap Threading to Parse Relative Clauses | p. 250 |
10.3 Standardized Phrase Categories for English | p. 252 |
10.4 Unification-Based Grammars | p. 254 |
10.4.1 Features | p. 254 |
10.4.2 Representing Features in Prolog | p. 255 |
10.4.3 A Formalism for Features and Rules | p. 257 |
10.4.4 Features Organization | p. 258 |
10.4.5 Features and Unification | p. 260 |
10.4.6 A Unification Algorithm for Feature Structures | p. 261 |
10.5 Dependency Grammars | p. 263 |
10.5.1 Presentation | p. 263 |
10.5.2 Properties of a Dependency Graph | p. 266 |
10.5.3 Valence | p. 268 |
10.5.4 Dependencies and Functions | p. 270 |
10.6 Further Reading | p. 273 |
11 Parsing Techniques | p. 277 |
11.1 Introduction | p. 277 |
11.2 Bottom-up Parsing | p. 278 |
11.2.1 The Shift-Reduce Algorithm | p. 278 |
11.2.2 Implementing Shift-Reduce Parsing in Prolog | p. 279 |
11.2.3 Differences Between Bottom-up and Top-down Parsing | p. 281 |
11.3 Chart Parsing | p. 282 |
11.3.1 Backtracking and Efficiency | p. 282 |
11.3.2 Structure of a Chart | p. 282 |
11.3.3 The Active Chart | p. 283 |
11.3.4 Modules of an Earley Parser | p. 285 |
11.3.5 The Earley Algorithm in Prolog | p. 288 |
11.3.6 The Earley Parser to Handle Left-Recursive Rules and Empty Symbols | p. 293 |
11.4 Probabilistic Parsing of Context-Free Grammars | p. 294 |
11.5 A Description of PCFGs | p. 294 |
11.5.1 The Bottom-up Chart | p. 297 |
11.5.2 The Cocke-Younger-Kasami Algorithm in Prolog | p. 298 |
11.5.3 Adding Probabilities to the CYK Parser | p. 300 |
11.6 Parser Evaluation | p. 301 |
11.6.1 Constituency-Based Evaluation | p. 301 |
11.6.2 Dependency-Based Evaluation | p. 302 |
11.6.3 Performance of PCFG Parsing | p. 302 |
11.7 Parsing Dependencies | p. 303 |
11.7.1 Dependency Rules | p. 304 |
11.7.2 Extending the Shift-Reduce Algorithm to Parse Dependencies | p. 305 |
11.7.3 Nivre's Parser in Prolog | p. 306 |
11.7.4 Finding Dependencies Using Constraints | p. 309 |
11.7.5 Parsing Dependencies Using Statistical Techniques | p. 310 |
11.8 Further Reading | p. 313 |
12 Semantics and Predicate Logic | p. 317 |
12.1 Introduction | p. 317 |
12.2 Language Meaning and Logic: An Illustrative Example | p. 317 |
12.3 Formal Semantics | p. 319 |
12.4 First-Order Predicate Calculus to Represent the State of Affairs | p. 319 |
12.4.1 Variables and Constants | p. 320 |
12.4.2 Predicates | p. 320 |
12.5 Querying the Universe of Discourse | p. 322 |
12.6 Mapping Phrases onto Logical Formulas | p. 322 |
12.6.1 Representing Nouns and Adjectives | p. 323 |
12.6.2 Representing Noun Groups | p. 324 |
12.6.3 Representing Verbs and Prepositions | p. 324 |
12.7 The Case of Determiners | p. 325 |
12.7.1 Determiners and Logic Quantifiers | p. 325 |
12.7.2 Translating Sentences Using Quantifiers | p. 326 |
12.7.3 A General Representation of Sentences | p. 327 |
12.8 Compositionality to Translate Phrases to Logical Forms | p. 329 |
12.8.1 Translating the Noun Phrase | p. 329 |
12.8.2 Translating the Verb Phrase | p. 330 |
12.9 Augmenting the Database and Answering Questions | p. 331 |
12.9.1 Declarations | p. 332 |
12.9.2 Questions with Existential and Universal Quantifiers | p. 332 |
12.9.3 Prolog and Unknown Predicates | p. 334 |
12.9.4 Other Determiners and Questions | p. 335 |
12.10 Application: The Spoken Language Translator | p. 335 |
12.10.1 Translating Spoken Sentences | p. 335 |
12.10.2 Compositional Semantics | p. 336 |
12.10.3 Semantic Representation Transfer | p. 338 |
12.11 Further Reading | p. 340 |
13 Lexical Semantics | p. 343 |
13.1 Beyond Formal Semantics | p. 343 |
13.1.1 La langue et la parole | p. 343 |
13.1.2 Language and the Structure of the World | p. 343 |
13.2 Lexical Structures | p. 344 |
13.2.1 Some Basic Terms and Concepts | p. 344 |
13.2.2 Ontological Organization | p. 344 |
13.2.3 Lexical Classes and Relations | p. 345 |
13.2.4 Semantic Networks | p. 347 |
13.3 Building a Lexicon | p. 347 |
13.3.1 The Lexicon and Word Senses | p. 349 |
13.3.2 Verb Models | p. 350 |
13.3.3 Definitions | p. 351 |
13.4 An Example of Exhaustive Lexical Organization: WordNet | p. 352 |
13.4.1 Nouns | p. 353 |
13.4.2 Adjectives | p. 354 |
13.4.3 Verbs | p. 355 |
13.5 Automatic Word Sense Disambiguation | p. 356 |
13.5.1 Senses as Tags | p. 356 |
13.5.2 Associating a Word with a Context | p. 357 |
13.5.3 Guessing the Topic | p. 357 |
13.5.4 Naive Bayes | p. 358 |
13.5.5 Using Constraints on Verbs | p. 359 |
13.5.6 Using Dictionary Definitions | p. 359 |
13.5.7 An Unsupervised Algorithm to Tag Senses | p. 360 |
13.5.8 Senses and Languages | p. 362 |
13.6 Case Grammars | p. 363 |
13.6.1 Cases in Latin | p. 363 |
13.6.2 Cases and Thematic Roles | p. 364 |
13.6.3 Parsing with Cases | p. 365 |
13.6.4 Semantic Grammars | p. 366 |
13.7 Extending Case Grammars | p. 367 |
13.7.1 FrameNet | p. 367 |
13.7.2 A Statistical Method to Identify Semantic Roles | p. 368 |
13.8 An Example of Case Grammar Application: EVAR | p. 371 |
13.8.1 EVAR's Ontology and Syntactic Classes | p. 371 |
13.8.2 Cases in EVAR | p. 373 |
13.9 Further Reading | p. 373 |
14 Discourse | p. 377 |
14.1 Introduction | p. 377 |
14.2 Discourse: A Minimalist Definition | p. 378 |
14.2.1 A Description of Discourse | p. 378 |
14.2.2 Discourse Entities | p. 378 |
14.3 References: An Application-Oriented View | p. 379 |
14.3.1 References and Noun Phrases | p. 379 |
14.3.2 Finding Names - Proper Nouns | p. 380 |
14.4 Coreference | p. 381 |
14.4.1 Anaphora | p. 381 |
14.4.2 Solving Coreferences in an Example | p. 382 |
14.4.3 A Standard Coreference Annotation | p. 383 |
14.5 References: A More Formal View | p. 384 |
14.5.1 Generating Discourse Entities: The Existential Quantifier | p. 384 |
14.5.2 Retrieving Discourse Entities: Definite Descriptions | p. 385 |
14.5.3 Generating Discourse Entities: The Universal Quantifier | p. 386 |
14.6 Centering: A Theory on Discourse Structure | p. 387 |
14.7 Solving Coreferences | p. 388 |
14.7.1 A Simplistic Method: Using Syntactic and Semantic Compatibility | p. 389 |
14.7.2 Solving Coreferences with Shallow Grammatical Information | p. 390 |
14.7.3 Salience in a Multimodal Context | p. 391 |
14.7.4 Using a Machine-Learning Technique to Resolve Coreferences | p. 391 |
14.7.5 More Complex Phenomena: Ellipses | p. 396 |
14.8 Discourse and Rhetoric | p. 396 |
14.8.1 Ancient Rhetoric: An Outline | p. 397 |
14.8.2 Rhetorical Structure Theory | p. 397 |
14.8.3 Types of Relations | p. 399 |
14.8.4 Implementing Rhetorical Structure Theory | p. 400 |
14.9 Events and Time | p. 401 |
14.9.1 Events | p. 403 |
14.9.2 Event Types | p. 404 |
14.9.3 Temporal Representation of Events | p. 404 |
14.9.4 Events and Tenses | p. 406 |
14.10 TimeML, an Annotation Scheme for Time and Events | p. 407 |
14.11 Further Reading | p. 409 |
15 Dialogue | p. 411 |
15.1 Introduction | p. 411 |
15.2 Why a Dialogue? | p. 411 |
15.3 Simple Dialogue Systems | p. 412 |
15.3.1 Dialogue Systems Based on Automata | p. 412 |
15.3.2 Dialogue Modeling | p. 413 |
15.4 Speech Acts: A Theory of Language Interaction | p. 414 |
15.5 Speech Acts and Human-Machine Dialogue | p. 417 |
15.5.1 Speech Acts as a Tagging Model | p. 417 |
15.5.2 Speech Acts Tags Used in the SUNDIAL Project | p. 418 |
15.5.3 Dialogue Parsing | p. 419 |
15.5.4 Interpreting Speech Acts | p. 421 |
15.5.5 EVAR: A Dialogue Application Using Speech Acts | p. 422 |
15.6 Taking Beliefs and Intentions into Account | p. 423 |
15.6.1 Representing Mental States | p. 425 |
15.6.2 The STRIPS Planning Algorithm | p. 427 |
15.6.3 Causality | p. 429 |
15.7 Further Reading | p. 430 |
A An Introduction to Prolog | p. 433 |
A.1 A Short Background | p. 433 |
A.2 Basic Features of Prolog | p. 434 |
A.2.1 Facts | p. 434 |
A.2.2 Terms | p. 435 |
A.2.3 Queries | p. 437 |
A.2.4 Logical Variables | p. 437 |
A.2.5 Shared Variables | p. 438 |
A.2.6 Data Types in Prolog | p. 439 |
A.2.7 Rules | p. 440 |
A.3 Running a Program | p. 442 |
A.4 Unification | p. 443 |
A.4.1 Substitution and Instances | p. 443 |
A.4.2 Terms and Unification | p. 444 |
A.4.3 The Herbrand Unification Algorithm | p. 445 |
A.4.4 Example | p. 445 |
A.4.5 The Occurs-Check | p. 446 |
A.5 Resolution | p. 447 |
A.5.1 Modus Ponens | p. 447 |
A.5.2 A Resolution Algorithm | p. 447 |
A.5.3 Derivation Trees and Backtracking | p. 448 |
A.6 Tracing and Debugging | p. 450 |
A.7 Cuts, Negation, and Related Predicates | p. 452 |
A.7.1 Cuts | p. 452 |
A.7.2 Negation | p. 453 |
A.7.3 The once/1 Predicate | p. 454 |
A.8 Lists | p. 455 |
A.9 Some List-Handling Predicates | p. 456 |
A.9.1 The member/2 Predicate | p. 456 |
A.9.2 The append/3 Predicate | p. 457 |
A.9.3 The delete/3 Predicate | p. 458 |
A.9.4 The intersection/3 Predicate | p. 458 |
A.9.5 The reverse/2 Predicate | p. 459 |
A.9.6 The Mode of an Argument | p. 459 |
A.10 Operators and Arithmetic | p. 460 |
A.10.1 Operators | p. 460 |
A.10.2 Arithmetic Operations | p. 460 |
A.10.3 Comparison Operators | p. 462 |
A.10.4 Lists and Arithmetic: The length/2 Predicate | p. 463 |
A.10.5 Lists and Comparison: The quicksort/2 Predicate | p. 463 |
A.11 Some Other Built-in Predicates | p. 464 |
A.11.1 Type Predicates | p. 464 |
A.11.2 Term Manipulation Predicates | p. 465 |
A.12 Handling Run-Time Errors and Exceptions | p. 466 |
A.13 Dynamically Accessing and Updating the Database | p. 467 |
A.13.1 Accessing a Clause: The clause/2 Predicate | p. 467 |
A.13.2 Dynamic and Static Predicates | p. 468 |
A.13.3 Adding a Clause: The asserta/1 and assertz/1 Predicates | p. 468 |
A.13.4 Removing Clauses: The retract/1 and abolish/2 Predicates | p. 469 |
A.13.5 Handling Unknown Predicates | p. 470 |
A.14 All-Solutions Predicates | p. 470 |
A.15 Fundamental Search Algorithms | p. 471 |
A.15.1 Representing the Graph | p. 472 |
A.15.2 Depth-First Search | p. 473 |
A.15.3 Breadth-First Search | p. 474 |
A.15.4 A* Search | p. 475 |
A.16 Input/Output | p. 476 |
A.16.1 Reading and Writing Characters with Edinburgh Prolog | p. 476 |
A.16.2 Reading and Writing Terms with Edinburgh Prolog | p. 476 |
A.16.3 Opening and Closing Files with Edinburgh Prolog | p. 477 |
A.16.4 Reading and Writing Characters with Standard Prolog | p. 478 |
A.16.5 Reading and Writing Terms with Standard Prolog | p. 479 |
A.16.6 Opening and Closing Files with Standard Prolog | p. 479 |
A.16.7 Writing Loops | p. 480 |
A.17 Developing Prolog Programs | p. 481 |
A.17.1 Presentation Style | p. 481 |
A.17.2 Improving Programs | p. 482 |
Index | p. 487 |
References | p. 497 |