Title:
Bayesian artificial intelligence
Personal Author:
Korb, Kevin
Series:
Series in computer science and data analysis
Publication Information:
Boca Raton : Chapman & Hall/CRC, 2004
ISBN:
9781584883876
Added Author:
Nicholson, Ann

Available:*

Item Barcode     Call Number         Material Type       Item Category 1
30000010064562   QA279.5 K67 2004    Open Access Book    Book
30000010138982   QA279.5 K67 2004    Open Access Book    Book

On Order

Summary

As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum while covering a broad range of topics. The authors integrate Bayesian network technology with Bayesian network learning and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.


Author Notes

Ann Nicholson is a Senior Lecturer and Kevin Korb is a Reader in the School of Computer Science and Software Engineering at Monash University, Victoria, Australia.


Reviews 1

Choice Review

Korb and Nicholson (both, Monash Univ., Australia) say in their preface that this book is aimed at advanced undergraduates in computer science who have some background in artificial intelligence, and at those who wish to engage in applied or pure research in applications of Bayesian inference in AI. They also explain how this book is different: an emphasis on causal discovery and interpretation of Bayesian networks, and discussion of applications. The book has the following noteworthy features: clear and helpful introductions and summaries for each chapter, problems for each chapter for reinforcement, lists of up-to-date references, and an annotated list of relevant software showing feature availability, etc. The three parts of the book, dealing respectively with fundamental background material on probabilistic reasoning, causal models, and knowledge engineering, treat a wide span of relevant material and case studies, written lucidly and carrying readers smoothly from the simple to the complex. This book certainly deserves to be in the library of any institution where undergraduate or graduate courses in computer science are taught, and would also be an excellent resource for anyone who wants to learn more about this cutting-edge area of computing. Summing Up: Essential. General readers; upper-division undergraduates through professionals; two-year technical program students. R. Bharath emeritus, Northern Michigan University


Table of Contents

Part I Probabilistic Reasoning p. 1
Chapter 1 Bayesian Reasoning p. 3
1.1 Reasoning under uncertainty p. 3
1.2 Uncertainty in AI p. 4
1.3 Probability calculus p. 5
1.3.1 Conditional probability theorems p. 8
1.3.2 Variables p. 9
1.4 Interpretations of probability p. 10
1.5 Bayesian philosophy p. 12
1.5.1 Bayes' theorem p. 12
1.5.2 Betting and odds p. 13
1.5.3 Expected utility p. 15
1.5.4 Dutch books p. 16
1.5.5 Bayesian reasoning examples p. 17
1.6 The goal of Bayesian AI p. 21
1.7 Achieving Bayesian AI p. 22
1.8 Are Bayesian networks Bayesian? p. 22
1.9 Summary p. 23
1.10 Bibliographic notes p. 23
1.11 Technical notes p. 24
1.12 Problems p. 25
Chapter 2 Introducing Bayesian Networks p. 29
2.1 Introduction p. 29
2.2 Bayesian network basics p. 29
2.2.1 Nodes and values p. 30
2.2.2 Structure p. 31
2.2.3 Conditional probabilities p. 32
2.2.4 The Markov property p. 33
2.3 Reasoning with Bayesian networks p. 33
2.3.1 Types of reasoning p. 34
2.3.2 Types of evidence p. 35
2.3.3 Reasoning with numbers p. 36
2.4 Understanding Bayesian networks p. 37
2.4.1 Representing the joint probability distribution p. 37
2.4.2 Pearl's network construction algorithm p. 37
2.4.3 Compactness and node ordering p. 38
2.4.4 Conditional independence p. 39
2.4.5 d-separation p. 41
2.5 More examples p. 43
2.5.1 Earthquake p. 43
2.5.2 Metastatic cancer p. 43
2.5.3 Asia p. 43
2.6 Summary p. 44
2.7 Bibliographic notes p. 45
2.8 Problems p. 47
Chapter 3 Inference in Bayesian Networks p. 53
3.1 Introduction p. 53
3.2 Exact inference in chains p. 54
3.2.1 Two node network p. 54
3.2.2 Three node chain p. 55
3.3 Exact inference in polytrees p. 56
3.3.1 Kim and Pearl's message passing algorithm p. 57
3.3.2 Message passing example p. 60
3.3.3 Algorithm features p. 61
3.4 Inference with uncertain evidence p. 62
3.4.1 Using a virtual node p. 63
3.4.2 Virtual nodes in the message passing algorithm p. 65
3.5 Exact inference in multiply-connected networks p. 66
3.5.1 Clustering methods p. 66
3.5.2 Junction trees p. 68
3.6 Approximate inference with stochastic simulation p. 72
3.6.1 Logic sampling p. 72
3.6.2 Likelihood weighting p. 74
3.6.3 Markov Chain Monte Carlo (MCMC) p. 75
3.6.4 Using virtual evidence p. 75
3.6.5 Assessing approximate inference algorithms p. 76
3.7 Other computations p. 77
3.7.1 Belief revision p. 77
3.7.2 Probability of evidence p. 78
3.8 Causal inference p. 79
3.9 Summary p. 81
3.10 Bibliographic notes p. 81
3.11 Problems p. 82
Chapter 4 Decision Networks p. 89
4.1 Introduction p. 89
4.2 Utilities p. 89
4.3 Decision network basics p. 91
4.3.1 Node types p. 91
4.3.2 Football team example p. 92
4.3.3 Evaluating decision networks p. 93
4.3.4 Information links p. 94
4.3.5 Fever example p. 96
4.3.6 Types of actions p. 96
4.4 Sequential decision making p. 98
4.4.1 Test-action combination p. 98
4.4.2 Real estate investment example p. 99
4.4.3 Evaluation using a decision tree model p. 101
4.4.4 Value of information p. 103
4.4.5 Direct evaluation of decision networks p. 104
4.5 Dynamic Bayesian networks p. 104
4.5.1 Nodes, structure and CPTs p. 105
4.5.2 Reasoning p. 107
4.5.3 Inference algorithms for DBNs p. 109
4.6 Dynamic decision networks p. 110
4.6.1 Mobile robot example p. 111
4.7 Summary p. 112
4.8 Bibliographic notes p. 113
4.9 Problems p. 114
Chapter 5 Applications of Bayesian Networks p. 117
5.1 Introduction p. 117
5.2 A brief survey of BN applications p. 118
5.2.1 Types of reasoning p. 118
5.2.2 BN structures for medical problems p. 118
5.2.3 Other medical applications p. 120
5.2.4 Non-medical applications p. 121
5.3 Bayesian poker p. 122
5.3.1 Five-card stud poker p. 123
5.3.2 A decision network for poker p. 124
5.3.3 Betting with randomization p. 127
5.3.4 Bluffing p. 128
5.3.5 Experimental evaluation p. 129
5.4 Ambulation monitoring and fall detection p. 129
5.4.1 The domain p. 129
5.4.2 The DBN model p. 130
5.4.3 Case-based evaluation p. 133
5.4.4 An extended sensor model p. 134
5.5 A Nice Argument Generator (NAG) p. 136
5.5.1 NAG architecture p. 137
5.5.2 Example: An asteroid strike p. 138
5.5.3 The psychology of inference p. 139
5.5.4 Example: The asteroid strike continues p. 141
5.5.5 The future of argumentation p. 142
5.6 Summary p. 142
5.7 Bibliographic notes p. 143
5.8 Problems p. 144
Part II Learning Causal Models p. 147
Chapter 6 Learning Linear Causal Models p. 151
6.1 Introduction p. 151
6.2 Path models p. 153
6.2.1 Wright's first decomposition rule p. 155
6.2.2 Parameterizing linear models p. 159
6.2.3 Learning linear models is complex p. 159
6.3 Conditional independence learners p. 161
6.3.1 Markov equivalence p. 164
6.3.2 PC algorithm p. 167
6.3.3 Causal discovery versus regression p. 169
6.4 Summary p. 170
6.5 Bibliographic notes p. 170
6.6 Technical notes p. 171
6.7 Problems p. 172
Chapter 7 Learning Probabilities p. 175
7.1 Introduction p. 175
7.2 Parameterizing discrete models p. 176
7.2.1 Parameterizing a binomial model p. 176
7.2.2 Parameterizing a multinomial model p. 179
7.3 Incomplete data p. 181
7.3.1 The Bayesian solution p. 182
7.3.2 Approximate solutions p. 182
7.3.3 Incomplete data: summary p. 187
7.4 Learning local structure p. 187
7.4.1 Causal interaction p. 187
7.4.2 Noisy-or connections p. 188
7.4.3 Classification trees and graphs p. 189
7.4.4 Logit models p. 191
7.4.5 Dual model discovery p. 191
7.5 Summary p. 192
7.6 Bibliographic notes p. 192
7.7 Technical notes p. 193
7.8 Problems p. 194
Chapter 8 Learning Discrete Causal Structure p. 197
8.1 Introduction p. 197
8.2 Cooper & Herskovits' K2 p. 198
8.2.1 Learning variable order p. 200
8.3 MDL causal discovery p. 201
8.3.1 Lam and Bacchus's MDL code for causal models p. 203
8.3.2 Suzuki's MDL code for causal discovery p. 205
8.4 Metric pattern discovery p. 205
8.5 CaMML: Causal discovery via MML p. 207
8.5.1 An MML code for causal structures p. 207
8.5.2 An MML metric for linear models p. 210
8.6 CaMML stochastic search p. 211
8.6.1 Genetic algorithm (GA) search p. 211
8.6.2 Metropolis search p. 211
8.6.3 Prior constraints p. 213
8.6.4 MML models p. 214
8.6.5 An MML metric for discrete models p. 215
8.7 Experimental evaluation p. 215
8.7.1 Qualitative evaluation p. 216
8.7.2 Quantitative evaluation p. 216
8.8 Summary p. 217
8.9 Bibliographic notes p. 218
8.10 Technical notes p. 218
8.11 Problems p. 219
Part III Knowledge Engineering p. 221
Chapter 9 Knowledge Engineering with Bayesian Networks p. 225
9.1 Introduction p. 225
9.1.1 Bayesian network modeling tasks p. 225
9.2 The KEBN process p. 226
9.2.1 KEBN lifecycle model p. 226
9.2.2 Prototyping and spiral KEBN p. 227
9.2.3 Are BNs suitable for the domain problem? p. 228
9.2.4 Process management p. 229
9.3 Modeling and elicitation p. 230
9.3.1 Variables and values p. 230
9.3.2 Graphical structure p. 233
9.3.3 Probabilities p. 241
9.3.4 Local structure p. 247
9.3.5 Variants of Bayesian networks p. 250
9.3.6 Modeling example: missing car p. 251
9.3.7 Decision networks p. 254
9.4 Adaptation p. 257
9.4.1 Adapting parameters p. 258
9.4.2 Structural adaptation p. 259
9.5 Summary p. 260
9.6 Bibliographic notes p. 260
9.7 Problems p. 261
Chapter 10 Evaluation p. 263
10.1 Introduction p. 263
10.2 Elicitation review p. 263
10.3 Sensitivity analysis p. 264
10.3.1 Sensitivity to evidence p. 264
10.3.2 Sensitivity to changes in parameters p. 271
10.4 Case-based evaluation p. 272
10.4.1 Explanation methods p. 273
10.5 Validation methods p. 274
10.5.1 Predictive accuracy p. 276
10.5.2 Expected value p. 277
10.5.3 Kullback-Leibler divergence p. 278
10.5.4 Information reward p. 280
10.5.5 Bayesian information reward p. 281
10.6 Summary p. 283
10.7 Bibliographic notes p. 284
10.8 Technical notes p. 284
10.9 Problems p. 286
Chapter 11 KEBN Case Studies p. 287
11.1 Introduction p. 287
11.2 Bayesian poker revisited p. 287
11.2.1 The initial prototype p. 287
11.2.2 Subsequent developments p. 288
11.2.3 Ongoing Bayesian poker p. 289
11.2.4 KEBN aspects p. 290
11.3 An intelligent tutoring system for decimal understanding p. 290
11.3.1 The ITS domain p. 291
11.3.2 ITS system architecture p. 293
11.3.3 Expert elicitation p. 294
11.3.4 Automated methods p. 301
11.3.5 Field trial evaluation p. 303
11.3.6 KEBN aspects p. 304
11.4 Seabreeze prediction p. 305
11.4.1 The seabreeze prediction problem p. 305
11.4.2 The data p. 306
11.4.3 Bayesian network modeling p. 307
11.4.4 Experimental evaluation p. 308
11.4.5 KEBN aspects p. 312
11.5 Summary p. 313
Appendix A Notation p. 315
Appendix B Software Packages p. 317
B.1 Introduction p. 317
B.2 History p. 318
B.3 Murphy's Software Package Survey p. 318
B.4 BN software p. 323
B.4.1 Analytica p. 323
B.4.2 BayesiaLab p. 324
B.4.3 Bayes Net Toolbox (BNT) p. 325
B.4.4 GeNIe p. 326
B.4.5 Hugin p. 327
B.4.6 JavaBayes p. 328
B.4.7 MSBNx p. 329
B.4.8 Netica p. 329
B.5 Bayesian statistical modeling p. 330
B.5.1 BUGS p. 330
B.5.2 First Bayes p. 331
B.6 Causal discovery programs p. 331
B.6.1 Bayesware Discoverer p. 331
B.6.2 CaMML p. 331
B.6.3 TETRAD p. 332
B.6.4 WinMine p. 332
References p. 333
Index p. 355