Availability

| Item Barcode | Call Number | Material Type | Item Category |
|---|---|---|---|
| 30000010064562 | QA279.5 K67 2004 | Open Access Book | Book |
| 30000010138982 | QA279.5 K67 2004 | Open Access Book | Book |

On Order
Summary
As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and Bayesian network learning, and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.
Author Notes
Ann Nicholson is a Senior Lecturer and Kevin Korb is a Reader in the School of Computer Science and Software Engineering at Monash University, Victoria, Australia.
Reviews (1)
Choice Review
Korb and Nicholson (both, Monash Univ., Australia) say in their preface that this book is aimed at advanced undergraduates in computer science who have some background in artificial intelligence, and at those who wish to engage in applied or pure research in applications of Bayesian inference in AI. They also explain how this book is different: an emphasis on causal discovery and interpretation of Bayesian networks, and a discussion of applications. The book has the following noteworthy features: clear and helpful introductions and summaries for each chapter, problems for each chapter for reinforcement, lists of up-to-date references, and an annotated list of relevant software showing feature availability, etc. The three parts of the book, dealing respectively with fundamental background material on probabilistic reasoning, causal models, and knowledge engineering, treat a wide span of relevant material and case studies, written lucidly and carrying readers smoothly from the simple to the complex. This book certainly deserves to be in the library of any institution where undergraduate or graduate courses in computer science are taught, and would also be an excellent resource for anyone who wants to learn more about this cutting-edge area of computing. Summing Up: Essential. General readers; upper-division undergraduates through professionals; two-year technical program students. R. Bharath, emeritus, Northern Michigan University
Table of Contents
Part I Probabilistic Reasoning | p. 1 |
Chapter 1 Bayesian Reasoning | p. 3 |
1.1 Reasoning under uncertainty | p. 3 |
1.2 Uncertainty in AI | p. 4 |
1.3 Probability calculus | p. 5 |
1.3.1 Conditional probability theorems | p. 8 |
1.3.2 Variables | p. 9 |
1.4 Interpretations of probability | p. 10 |
1.5 Bayesian philosophy | p. 12 |
1.5.1 Bayes' theorem | p. 12 |
1.5.2 Betting and odds | p. 13 |
1.5.3 Expected utility | p. 15 |
1.5.4 Dutch books | p. 16 |
1.5.5 Bayesian reasoning examples | p. 17 |
1.6 The goal of Bayesian AI | p. 21 |
1.7 Achieving Bayesian AI | p. 22 |
1.8 Are Bayesian networks Bayesian? | p. 22 |
1.9 Summary | p. 23 |
1.10 Bibliographic notes | p. 23 |
1.11 Technical notes | p. 24 |
1.12 Problems | p. 25 |
Chapter 2 Introducing Bayesian Networks | p. 29 |
2.1 Introduction | p. 29 |
2.2 Bayesian network basics | p. 29 |
2.2.1 Nodes and values | p. 30 |
2.2.2 Structure | p. 31 |
2.2.3 Conditional probabilities | p. 32 |
2.2.4 The Markov property | p. 33 |
2.3 Reasoning with Bayesian networks | p. 33 |
2.3.1 Types of reasoning | p. 34 |
2.3.2 Types of evidence | p. 35 |
2.3.3 Reasoning with numbers | p. 36 |
2.4 Understanding Bayesian networks | p. 37 |
2.4.1 Representing the joint probability distribution | p. 37 |
2.4.2 Pearl's network construction algorithm | p. 37 |
2.4.3 Compactness and node ordering | p. 38 |
2.4.4 Conditional independence | p. 39 |
2.4.5 d-separation | p. 41 |
2.5 More examples | p. 43 |
2.5.1 Earthquake | p. 43 |
2.5.2 Metastatic cancer | p. 43 |
2.5.3 Asia | p. 43 |
2.6 Summary | p. 44 |
2.7 Bibliographic notes | p. 45 |
2.8 Problems | p. 47 |
Chapter 3 Inference in Bayesian Networks | p. 53 |
3.1 Introduction | p. 53 |
3.2 Exact inference in chains | p. 54 |
3.2.1 Two node network | p. 54 |
3.2.2 Three node chain | p. 55 |
3.3 Exact inference in polytrees | p. 56 |
3.3.1 Kim and Pearl's message passing algorithm | p. 57 |
3.3.2 Message passing example | p. 60 |
3.3.3 Algorithm features | p. 61 |
3.4 Inference with uncertain evidence | p. 62 |
3.4.1 Using a virtual node | p. 63 |
3.4.2 Virtual nodes in the message passing algorithm | p. 65 |
3.5 Exact inference in multiply-connected networks | p. 66 |
3.5.1 Clustering methods | p. 66 |
3.5.2 Junction trees | p. 68 |
3.6 Approximate inference with stochastic simulation | p. 72 |
3.6.1 Logic sampling | p. 72 |
3.6.2 Likelihood weighting | p. 74 |
3.6.3 Markov Chain Monte Carlo (MCMC) | p. 75 |
3.6.4 Using virtual evidence | p. 75 |
3.6.5 Assessing approximate inference algorithms | p. 76 |
3.7 Other computations | p. 77 |
3.7.1 Belief revision | p. 77 |
3.7.2 Probability of evidence | p. 78 |
3.8 Causal inference | p. 79 |
3.9 Summary | p. 81 |
3.10 Bibliographic notes | p. 81 |
3.11 Problems | p. 82 |
Chapter 4 Decision Networks | p. 89 |
4.1 Introduction | p. 89 |
4.2 Utilities | p. 89 |
4.3 Decision network basics | p. 91 |
4.3.1 Node types | p. 91 |
4.3.2 Football team example | p. 92 |
4.3.3 Evaluating decision networks | p. 93 |
4.3.4 Information links | p. 94 |
4.3.5 Fever example | p. 96 |
4.3.6 Types of actions | p. 96 |
4.4 Sequential decision making | p. 98 |
4.4.1 Test-action combination | p. 98 |
4.4.2 Real estate investment example | p. 99 |
4.4.3 Evaluation using a decision tree model | p. 101 |
4.4.4 Value of information | p. 103 |
4.4.5 Direct evaluation of decision networks | p. 104 |
4.5 Dynamic Bayesian networks | p. 104 |
4.5.1 Nodes, structure and CPTs | p. 105 |
4.5.2 Reasoning | p. 107 |
4.5.3 Inference algorithms for DBNs | p. 109 |
4.6 Dynamic decision networks | p. 110 |
4.6.1 Mobile robot example | p. 111 |
4.7 Summary | p. 112 |
4.8 Bibliographic notes | p. 113 |
4.9 Problems | p. 114 |
Chapter 5 Applications of Bayesian Networks | p. 117 |
5.1 Introduction | p. 117 |
5.2 A brief survey of BN applications | p. 118 |
5.2.1 Types of reasoning | p. 118 |
5.2.2 BN structures for medical problems | p. 118 |
5.2.3 Other medical applications | p. 120 |
5.2.4 Non-medical applications | p. 121 |
5.3 Bayesian poker | p. 122 |
5.3.1 Five-card stud poker | p. 123 |
5.3.2 A decision network for poker | p. 124 |
5.3.3 Betting with randomization | p. 127 |
5.3.4 Bluffing | p. 128 |
5.3.5 Experimental evaluation | p. 129 |
5.4 Ambulation monitoring and fall detection | p. 129 |
5.4.1 The domain | p. 129 |
5.4.2 The DBN model | p. 130 |
5.4.3 Case-based evaluation | p. 133 |
5.4.4 An extended sensor model | p. 134 |
5.5 A Nice Argument Generator (NAG) | p. 136 |
5.5.1 NAG architecture | p. 137 |
5.5.2 Example: An asteroid strike | p. 138 |
5.5.3 The psychology of inference | p. 139 |
5.5.4 Example: The asteroid strike continues | p. 141 |
5.5.5 The future of argumentation | p. 142 |
5.6 Summary | p. 142 |
5.7 Bibliographic notes | p. 143 |
5.8 Problems | p. 144 |
Part II Learning Causal Models | p. 147 |
Chapter 6 Learning Linear Causal Models | p. 151 |
6.1 Introduction | p. 151 |
6.2 Path models | p. 153 |
6.2.1 Wright's first decomposition rule | p. 155 |
6.2.2 Parameterizing linear models | p. 159 |
6.2.3 Learning linear models is complex | p. 159 |
6.3 Conditional independence learners | p. 161 |
6.3.1 Markov equivalence | p. 164 |
6.3.2 PC algorithm | p. 167 |
6.3.3 Causal discovery versus regression | p. 169 |
6.4 Summary | p. 170 |
6.5 Bibliographic notes | p. 170 |
6.6 Technical notes | p. 171 |
6.7 Problems | p. 172 |
Chapter 7 Learning Probabilities | p. 175 |
7.1 Introduction | p. 175 |
7.2 Parameterizing discrete models | p. 176 |
7.2.1 Parameterizing a binomial model | p. 176 |
7.2.2 Parameterizing a multinomial model | p. 179 |
7.3 Incomplete data | p. 181 |
7.3.1 The Bayesian solution | p. 182 |
7.3.2 Approximate solutions | p. 182 |
7.3.3 Incomplete data: summary | p. 187 |
7.4 Learning local structure | p. 187 |
7.4.1 Causal interaction | p. 187 |
7.4.2 Noisy-or connections | p. 188 |
7.4.3 Classification trees and graphs | p. 189 |
7.4.4 Logit models | p. 191 |
7.4.5 Dual model discovery | p. 191 |
7.5 Summary | p. 192 |
7.6 Bibliographic notes | p. 192 |
7.7 Technical notes | p. 193 |
7.8 Problems | p. 194 |
Chapter 8 Learning Discrete Causal Structure | p. 197 |
8.1 Introduction | p. 197 |
8.2 Cooper & Herskovits' K2 | p. 198 |
8.2.1 Learning variable order | p. 200 |
8.3 MDL causal discovery | p. 201 |
8.3.1 Lam and Bacchus's MDL code for causal models | p. 203 |
8.3.2 Suzuki's MDL code for causal discovery | p. 205 |
8.4 Metric pattern discovery | p. 205 |
8.5 CaMML: Causal discovery via MML | p. 207 |
8.5.1 An MML code for causal structures | p. 207 |
8.5.2 An MML metric for linear models | p. 210 |
8.6 CaMML stochastic search | p. 211 |
8.6.1 Genetic algorithm (GA) search | p. 211 |
8.6.2 Metropolis search | p. 211 |
8.6.3 Prior constraints | p. 213 |
8.6.4 MML models | p. 214 |
8.6.5 An MML metric for discrete models | p. 215 |
8.7 Experimental evaluation | p. 215 |
8.7.1 Qualitative evaluation | p. 216 |
8.7.2 Quantitative evaluation | p. 216 |
8.8 Summary | p. 217 |
8.9 Bibliographic notes | p. 218 |
8.10 Technical notes | p. 218 |
8.11 Problems | p. 219 |
Part III Knowledge Engineering | p. 221 |
Chapter 9 Knowledge Engineering with Bayesian Networks | p. 225 |
9.1 Introduction | p. 225 |
9.1.1 Bayesian network modeling tasks | p. 225 |
9.2 The KEBN process | p. 226 |
9.2.1 KEBN lifecycle model | p. 226 |
9.2.2 Prototyping and spiral KEBN | p. 227 |
9.2.3 Are BNs suitable for the domain problem? | p. 228 |
9.2.4 Process management | p. 229 |
9.3 Modeling and elicitation | p. 230 |
9.3.1 Variables and values | p. 230 |
9.3.2 Graphical structure | p. 233 |
9.3.3 Probabilities | p. 241 |
9.3.4 Local structure | p. 247 |
9.3.5 Variants of Bayesian networks | p. 250 |
9.3.6 Modeling example: missing car | p. 251 |
9.3.7 Decision networks | p. 254 |
9.4 Adaptation | p. 257 |
9.4.1 Adapting parameters | p. 258 |
9.4.2 Structural adaptation | p. 259 |
9.5 Summary | p. 260 |
9.6 Bibliographic notes | p. 260 |
9.7 Problems | p. 261 |
Chapter 10 Evaluation | p. 263 |
10.1 Introduction | p. 263 |
10.2 Elicitation review | p. 263 |
10.3 Sensitivity analysis | p. 264 |
10.3.1 Sensitivity to evidence | p. 264 |
10.3.2 Sensitivity to changes in parameters | p. 271 |
10.4 Case-based evaluation | p. 272 |
10.4.1 Explanation methods | p. 273 |
10.5 Validation methods | p. 274 |
10.5.1 Predictive accuracy | p. 276 |
10.5.2 Expected value | p. 277 |
10.5.3 Kullback-Leibler divergence | p. 278 |
10.5.4 Information reward | p. 280 |
10.5.5 Bayesian information reward | p. 281 |
10.6 Summary | p. 283 |
10.7 Bibliographic notes | p. 284 |
10.8 Technical notes | p. 284 |
10.9 Problems | p. 286 |
Chapter 11 KEBN Case Studies | p. 287 |
11.1 Introduction | p. 287 |
11.2 Bayesian poker revisited | p. 287 |
11.2.1 The initial prototype | p. 287 |
11.2.2 Subsequent developments | p. 288 |
11.2.3 Ongoing Bayesian poker | p. 289 |
11.2.4 KEBN aspects | p. 290 |
11.3 An intelligent tutoring system for decimal understanding | p. 290 |
11.3.1 The ITS domain | p. 291 |
11.3.2 ITS system architecture | p. 293 |
11.3.3 Expert elicitation | p. 294 |
11.3.4 Automated methods | p. 301 |
11.3.5 Field trial evaluation | p. 303 |
11.3.6 KEBN aspects | p. 304 |
11.4 Seabreeze prediction | p. 305 |
11.4.1 The seabreeze prediction problem | p. 305 |
11.4.2 The data | p. 306 |
11.4.3 Bayesian network modeling | p. 307 |
11.4.4 Experimental evaluation | p. 308 |
11.4.5 KEBN aspects | p. 312 |
11.5 Summary | p. 313 |
Appendix A Notation | p. 315 |
Appendix B Software Packages | p. 317 |
B.1 Introduction | p. 317 |
B.2 History | p. 318 |
B.3 Murphy's Software Package Survey | p. 318 |
B.4 BN software | p. 323 |
B.4.1 Analytica | p. 323 |
B.4.2 BayesiaLab | p. 324 |
B.4.3 Bayes Net Toolbox (BNT) | p. 325 |
B.4.4 GeNIe | p. 326 |
B.4.5 Hugin | p. 327 |
B.4.6 JavaBayes | p. 328 |
B.4.7 MSBNx | p. 329 |
B.4.8 Netica | p. 329 |
B.5 Bayesian statistical modeling | p. 330 |
B.5.1 BUGS | p. 330 |
B.5.2 First Bayes | p. 331 |
B.6 Causal discovery programs | p. 331 |
B.6.1 Bayesware Discoverer | p. 331 |
B.6.2 CaMML | p. 331 |
B.6.3 TETRAD | p. 332 |
B.6.4 WinMine | p. 332 |
References | p. 333 |
Index | p. 355 |