Availability
Item Barcode | Call Number | Material Type | Item Category | Status |
---|---|---|---|---|
30000010177443 | QA279.5 J46 2007 | Open Access Book | Book | On Order |
Summary
Probabilistic graphical models and decision graphs are powerful modeling tools for reasoning and decision making under uncertainty. As modeling languages they allow a natural specification of problem domains with inherent uncertainty, and from a computational perspective they support efficient algorithms for automatic construction and query answering. This includes belief updating, finding the most probable explanation for observed evidence, detecting conflicts in the evidence entered into the network, determining optimal strategies, performing relevance analysis, and performing sensitivity analysis.
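As a toy illustration of what "belief updating" means here (this is a minimal sketch with invented numbers, not an excerpt from the book): in a two-node network A → B, the chain rule gives the joint as P(A, B) = P(A) · P(B | A), and the posterior P(A | B = b) is obtained by normalizing the joint over A.

```python
# Belief updating by brute-force enumeration in a two-node
# Bayesian network A -> B. All probabilities are invented
# for illustration; they do not come from the book.

# Prior P(A) and conditional table P(B | A), both variables boolean.
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {
    True:  {True: 0.9, False: 0.1},   # P(B | A=true)
    False: {True: 0.2, False: 0.8},   # P(B | A=false)
}

def joint(a, b):
    """Chain rule for this network: P(A, B) = P(A) * P(B | A)."""
    return p_a[a] * p_b_given_a[a][b]

def posterior_a_given_b(b_obs):
    """P(A | B = b_obs): restrict the joint to the evidence, then normalize."""
    unnorm = {a: joint(a, b_obs) for a in (True, False)}
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

post = posterior_a_given_b(True)
```

Observing B = true raises the belief in A from the prior 0.3 to 0.27 / 0.41 ≈ 0.66. Real networks make the same computation tractable over many variables via the junction-tree and sampling algorithms covered in Chapters 4 and 5.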
The book introduces probabilistic graphical models and decision graphs, including Bayesian networks and influence diagrams. The reader is introduced to the two types of frameworks through examples and exercises, which also instruct the reader on how to build these models.
The book is a new edition of Bayesian Networks and Decision Graphs by Finn V. Jensen. The new edition is structured into two parts. The first part focuses on probabilistic graphical models. Compared with the previous book, the new edition also includes a thorough description of recent extensions to the Bayesian network modeling language, advances in exact and approximate belief updating algorithms, and methods for learning both the structure and the parameters of a Bayesian network. The second part deals with decision graphs, and in addition to the frameworks described in the previous edition, it also introduces Markov decision processes and partially ordered decision problems. The authors also
provide a well-founded practical introduction to Bayesian networks, object-oriented Bayesian networks, decision trees, influence diagrams (and variants thereof), and Markov decision processes, and give practical advice on the construction of Bayesian networks, decision trees, and influence diagrams from domain knowledge.
Author Notes
Thomas D. Nielsen is an associate professor in the Department of Computer Science at Aalborg University, Denmark.
Table of Contents
Preface | p. V |
1 Prerequisites on Probability Theory | p. 1 |
1.1 Two Perspectives on Probability Theory | p. 1 |
1.2 Fundamentals of Probability Theory | p. 2 |
1.2.1 Conditional Probabilities | p. 4 |
1.2.2 Probability Calculus | p. 5 |
1.2.3 Conditional Independence | p. 6 |
1.3 Probability Calculus for Variables | p. 7 |
1.3.1 Calculations with Probability Tables: An Example | p. 11 |
1.4 An Algebra of Potentials | p. 13 |
1.5 Random Variables | p. 15 |
1.5.1 Continuous Distributions | p. 15 |
1.6 Exercises | p. 16 |
Part I Probabilistic Graphical Models | |
2 Causal and Bayesian Networks | p. 23 |
2.1 Reasoning Under Uncertainty | p. 23 |
2.1.1 Car Start Problem | p. 23 |
2.1.2 A Causal Perspective on the Car Start Problem | p. 24 |
2.2 Causal Networks and d-Separation | p. 26 |
2.2.1 d-Separation | p. 30 |
2.3 Bayesian Networks | p. 32 |
2.3.1 Definition of Bayesian Networks | p. 32 |
2.3.2 The Chain Rule for Bayesian Networks | p. 35 |
2.3.3 Inserting Evidence | p. 39 |
2.3.4 Calculating Probabilities in Practice | p. 41 |
2.4 Graphical Models - Formal Languages for Model Specification | p. 42 |
2.5 Summary | p. 44 |
2.6 Bibliographical Notes | p. 45 |
2.7 Exercises | p. 45 |
3 Building Models | p. 51 |
3.1 Catching the Structure | p. 51 |
3.1.1 Milk Test | p. 52 |
3.1.2 Cold or Angina? | p. 54 |
3.1.3 Insemination | p. 55 |
3.1.4 A Simplified Poker Game | p. 57 |
3.1.5 Naive Bayes Models | p. 58 |
3.1.6 Causality | p. 60 |
3.2 Determining the Conditional Probabilities | p. 60 |
3.2.1 Milk Test | p. 60 |
3.2.2 Stud Farm | p. 62 |
3.2.3 Poker Game | p. 66 |
3.2.4 Transmission of Symbol Strings | p. 68 |
3.2.5 Cold or Angina? | p. 71 |
3.2.6 Why Causal Networks? | p. 72 |
3.3 Modeling Methods | p. 73 |
3.3.1 Undirected Relations | p. 73 |
3.3.2 Noisy-Or | p. 75 |
3.3.3 Divorcing | p. 78 |
3.3.4 Noisy Functional Dependence | p. 80 |
3.3.5 Expert Disagreements | p. 81 |
3.3.6 Object-Oriented Bayesian Networks | p. 84 |
3.3.7 Dynamic Bayesian Networks | p. 91 |
3.3.8 How to Deal with Continuous Variables | p. 93 |
3.3.9 Interventions | p. 96 |
3.4 Special Features | p. 97 |
3.4.1 Joint Probability Tables | p. 98 |
3.4.2 Most-Probable Explanation | p. 98 |
3.4.3 Data Conflict | p. 98 |
3.4.4 Sensitivity Analysis | p. 99 |
3.5 Summary | p. 100 |
3.6 Bibliographical Notes | p. 101 |
3.7 Exercises | p. 102 |
4 Belief Updating in Bayesian Networks | p. 109 |
4.1 Introductory Examples | p. 109 |
4.1.1 A Single Marginal | p. 110 |
4.1.2 Different Evidence Scenarios | p. 111 |
4.1.3 All Marginals | p. 114 |
4.2 Graph-Theoretic Representation | p. 115 |
4.2.1 Task and Notation | p. 115 |
4.2.2 Domain Graphs | p. 116 |
4.3 Triangulated Graphs and Join Trees | p. 119 |
4.3.1 Join Trees | p. 122 |
4.4 Propagation in Junction Trees | p. 124 |
4.4.1 Lazy Propagation in Junction Trees | p. 127 |
4.5 Exploiting the Information Scenario | p. 130 |
4.5.1 Barren Nodes | p. 130 |
4.5.2 d-Separation | p. 131 |
4.6 Nontriangulated Domain Graphs | p. 132 |
4.6.1 Triangulation of Graphs | p. 134 |
4.6.2 Triangulation of Dynamic Bayesian Networks | p. 137 |
4.7 Exact Propagation with Bounded Space | p. 140 |
4.7.1 Recursive Conditioning | p. 140 |
4.8 Stochastic Simulation in Bayesian Networks | p. 145 |
4.8.1 Probabilistic Logic Sampling | p. 146 |
4.8.2 Likelihood Weighting | p. 148 |
4.8.3 Gibbs Sampling | p. 150 |
4.9 Loopy Belief Propagation | p. 152 |
4.10 Summary | p. 154 |
4.11 Bibliographical Notes | p. 156 |
4.12 Exercises | p. 157 |
5 Analysis Tools for Bayesian Networks | p. 167 |
5.1 IEJ Trees | p. 168 |
5.2 Joint Probabilities and A-Saturated Junction Trees | p. 169 |
5.2.1 A-Saturated Junction Trees | p. 169 |
5.3 Configuration of Maximal Probability | p. 171 |
5.4 Axioms for Propagation in Junction Trees | p. 173 |
5.5 Data Conflict | p. 174 |
5.5.1 Insemination | p. 175 |
5.5.2 The Conflict Measure conf | p. 175 |
5.5.3 Conflict or Rare Case | p. 176 |
5.5.4 Tracing of Conflicts | p. 177 |
5.5.5 Other Approaches to Conflict Detection | p. 179 |
5.6 SE Analysis | p. 179 |
5.6.1 Example and Definitions | p. 179 |
5.6.2 h-Saturated Junction Trees and SE Analysis | p. 182 |
5.7 Sensitivity to Parameters | p. 184 |
5.7.1 One-Way Sensitivity Analysis | p. 187 |
5.7.2 Two-Way Sensitivity Analysis | p. 188 |
5.8 Summary | p. 188 |
5.9 Bibliographical Notes | p. 190 |
5.10 Exercises | p. 191 |
6 Parameter Estimation | p. 195 |
6.1 Complete Data | p. 195 |
6.1.1 Maximum Likelihood Estimation | p. 196 |
6.1.2 Bayesian Estimation | p. 197 |
6.2 Incomplete Data | p. 200 |
6.2.1 Approximate Parameter Estimation: The EM Algorithm | p. 201 |
6.2.2 Why We Cannot Perform Exact Parameter Estimation | p. 207 |
6.3 Adaptation | p. 207 |
6.3.1 Fractional Updating | p. 210 |
6.3.2 Fading | p. 211 |
6.3.3 Specification of an Initial Sample Size | p. 212 |
6.3.4 Example: Strings of Symbols | p. 213 |
6.3.5 Adaptation to Structure | p. 214 |
6.3.6 Fractional Updating as an Approximation | p. 215 |
6.4 Tuning | p. 218 |
6.4.1 Example | p. 220 |
6.4.2 Determining grad dist(x, y) as a Function of t | p. 222 |
6.5 Summary | p. 223 |
6.6 Bibliographical Notes | p. 225 |
6.7 Exercises | p. 226 |
7 Learning the Structure of Bayesian Networks | p. 229 |
7.1 Constraint-Based Learning Methods | p. 230 |
7.1.1 From Skeleton to DAG | p. 231 |
7.1.2 From Independence Tests to Skeleton | p. 234 |
7.1.3 Example | p. 235 |
7.1.4 Constraint-Based Learning on Data Sets | p. 237 |
7.2 Ockham's Razor | p. 240 |
7.3 Score-Based Learning | p. 241 |
7.3.1 Score Functions | p. 242 |
7.3.2 Search Procedures | p. 245 |
7.3.3 Chow-Liu Trees | p. 250 |
7.3.4 Bayesian Score Functions | p. 253 |
7.4 Summary | p. 258 |
7.5 Bibliographical Notes | p. 260 |
7.6 Exercises | p. 261 |
8 Bayesian Networks as Classifiers | p. 265 |
8.1 Naive Bayes Classifiers | p. 266 |
8.2 Evaluation of Classifiers | p. 268 |
8.3 Extensions of Naive Bayes Classifiers | p. 270 |
8.4 Classification Trees | p. 272 |
8.5 Summary | p. 274 |
8.6 Bibliographical Notes | p. 275 |
8.7 Exercises | p. 276 |
Part II Decision Graphs | |
9 Graphical Languages for Specification of Decision Problems | p. 279 |
9.1 One-Shot Decision Problems | p. 280 |
9.1.1 Fold or Call? | p. 281 |
9.1.2 Mildew | p. 282 |
9.1.3 One Decision in General | p. 283 |
9.2 Utilities | p. 284 |
9.2.1 Instrumental Rationality | p. 287 |
9.3 Decision Trees | p. 290 |
9.3.1 A Couple of Examples | p. 293 |
9.3.2 Coalesced Decision Trees | p. 295 |
9.3.3 Solving Decision Trees | p. 296 |
9.4 Influence Diagrams | p. 302 |
9.4.1 Extended Poker Model | p. 302 |
9.4.2 Definition of Influence Diagrams | p. 305 |
9.4.3 Repetitive Decision Problems | p. 308 |
9.5 Asymmetric Decision Problems | p. 310 |
9.5.1 Different Sources of Asymmetry | p. 314 |
9.5.2 Unconstrained Influence Diagrams | p. 316 |
9.5.3 Sequential Influence Diagrams | p. 322 |
9.6 Decision Problems with Unbounded Time Horizons | p. 324 |
9.6.1 Markov Decision Processes | p. 324 |
9.6.2 Partially Observable Markov Decision Processes | p. 330 |
9.7 Summary | p. 332 |
9.8 Bibliographical Notes | p. 337 |
9.9 Exercises | p. 337 |
10 Solution Methods for Decision Graphs | p. 343 |
10.1 Solutions to Influence Diagrams | p. 343 |
10.1.1 The Chain Rule for Influence Diagrams | p. 345 |
10.1.2 Strategies and Expected Utilities | p. 346 |
10.1.3 An Example | p. 352 |
10.2 Variable Elimination | p. 353 |
10.2.1 Strong Junction Trees | p. 355 |
10.2.2 Required Past | p. 358 |
10.2.3 Policy Networks | p. 360 |
10.3 Node Removal and Arc Reversal | p. 362 |
10.3.1 Node Removal | p. 362 |
10.3.2 Arc Reversal | p. 363 |
10.3.3 An Example | p. 365 |
10.4 Solutions to Unconstrained Influence Diagrams | p. 367 |
10.4.1 Minimizing the S-DAG | p. 367 |
10.4.2 Determining Policies and Step Functions | p. 371 |
10.5 Decision Problems Without a Temporal Ordering: Troubleshooting | p. 373 |
10.5.1 Action Sequences | p. 373 |
10.5.2 A Greedy Approach | p. 375 |
10.5.3 Call Service | p. 378 |
10.5.4 Questions | p. 378 |
10.6 Solutions to Decision Problems with Unbounded Time Horizon | p. 380 |
10.6.1 A Basic Solution | p. 380 |
10.6.2 Value Iteration | p. 381 |
10.6.3 Policy Iteration | p. 385 |
10.6.4 Solving Partially Observable Markov Decision Processes | p. 388 |
10.7 Limited Memory Influence Diagrams | p. 392 |
10.8 Summary | p. 395 |
10.9 Bibliographical Notes | p. 400 |
10.10 Exercises | p. 401 |
11 Methods for Analyzing Decision Problems | p. 407 |
11.1 Value of Information | p. 407 |
11.1.1 Test for Infected Milk? | p. 407 |
11.1.2 Myopic Hypothesis-Driven Data Request | p. 409 |
11.1.3 Non-Utility-Based Value Functions | p. 411 |
11.2 Finding the Relevant Past and Future of a Decision Problem | p. 413 |
11.2.1 Identifying the Required Past | p. 415 |
11.3 Sensitivity Analysis | p. 420 |
11.3.1 Example | p. 421 |
11.3.2 One-Way Sensitivity Analysis in General | p. 423 |
11.4 Summary | p. 426 |
11.5 Bibliographical Notes | p. 427 |
11.6 Exercises | p. 427 |
List of Notation | p. 429 |
References | p. 431 |
Index | p. 441 |