Availability
Item Barcode | Call Number | Material Type | Item Category |
---|---|---|---|
30000010214233 | QA279.4 P37 2009 | Open Access Book | Book |
On Order
Summary
Decision theory provides a formal framework for making logical choices in the face of uncertainty. Given a set of alternatives, a set of consequences, and a correspondence between those sets, decision theory offers conceptually simple procedures for choice. This book presents an overview of the fundamental concepts and outcomes of rational decision making under uncertainty, highlighting the implications for statistical practice.
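The procedure the summary describes — score each alternative by its probability-weighted consequences and pick the best — can be sketched in a few lines. This is an illustrative toy, not code from the book; the action names, state probabilities, and utility values below are invented for the example.

```python
def expected_utility(action, probs, utility):
    """Expected utility of an action: sum over states of p(state) * u(action, state)."""
    return sum(p * utility[action][state] for state, p in probs.items())

def best_action(actions, probs, utility):
    """Choose the alternative with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, probs, utility))

# Toy example: carry an umbrella or not, given a 30% chance of rain.
probs = {"rain": 0.3, "sun": 0.7}
utility = {
    "umbrella":    {"rain": 0.8, "sun": 0.6},
    "no_umbrella": {"rain": 0.0, "sun": 1.0},
}
print(best_action(["umbrella", "no_umbrella"], probs, utility))  # -> no_umbrella
```

Here EU(umbrella) = 0.3·0.8 + 0.7·0.6 = 0.66 while EU(no_umbrella) = 0.70, so the rule picks `no_umbrella`; changing the rain probability or the utilities changes the choice, which is exactly the sensitivity the book's framework makes explicit.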
The authors have developed a series of self-contained chapters that bridge the gaps between the different fields that have contributed to rational decision making, presenting ideas in a unified framework and notation while respecting and highlighting the different, and sometimes conflicting, perspectives.
This book:
* Provides a rich collection of techniques and procedures.
* Discusses the foundational aspects and modern-day practice.
* Links foundations to practical applications in biostatistics, computer science, engineering and economics.
* Presents different perspectives and controversies to encourage readers to form their own opinion of decision making and statistics.
Decision Theory is fundamental to all scientific disciplines, including biostatistics, computer science, economics and engineering. Anyone interested in the whys and wherefores of statistical science will find much to enjoy in this book.
Author Notes
Giovanni Parmigiani is coauthor of Decision Theory: Principles and Approaches, published by Wiley.
Lurdes Yoshiko Tani Inoue is a Brazilian-born statistician of Japanese descent who specializes in Bayesian inference. She is a professor of biostatistics at the University of Washington School of Public Health.
Reviews (1)
Choice Review
This book is designed for graduate students in statistics and biostatistics at both the master's and PhD levels. The work's novel feature, as Parmigiani (Johns Hopkins) and Inoue (Univ. of Washington) point out in the preface, is that instead of using a standard textbook format, they have "selected a set of exciting papers and book chapters, and developed a self-contained lecture around each one." These selections fall into three broad categories that constitute the three parts of the book. The first part, "Foundations," discusses utility in two separate chapters, Ramsey and Savage theories, and more. Part 2, "Statistical Decision Theory," includes a chapter titled "Decision Functions." "Optimal Design," the final section, contains chapters titled "Dynamic Programming" and "Sample Size." The authors are to be commended on their attractive approach, which seems to work very well. The extensive list of references at the end, in addition to the key articles in each chapter, makes the book a valuable reference that should be in all libraries supporting advanced students in statistics and its applications. It can also serve as a good textbook in these areas. Summing Up: Recommended. Graduate students and above. R. Bharath emeritus, Northern Michigan University
Table of Contents
Preface | p. xiii |
Acknowledgments | p. xvii |
1 Introduction | p. 1 |
1.1 Controversies | p. 1 |
1.2 A guided tour of decision theory | p. 6 |
Part 1 Foundations | p. 11 |
2 Coherence | p. 13 |
2.1 The "Dutch Book" theorem | p. 15 |
2.1.1 Betting odds | p. 15 |
2.1.2 Coherence and the axioms of probability | p. 17 |
2.1.3 Coherent conditional probabilities | p. 20 |
2.1.4 The implications of Dutch Book theorems | p. 21 |
2.2 Temporal coherence | p. 24 |
2.3 Scoring rules and the axioms of probabilities | p. 26 |
2.4 Exercises | p. 27 |
3 Utility | p. 33 |
3.1 St. Petersburg paradox | p. 34 |
3.2 Expected utility theory and the theory of means | p. 37 |
3.2.1 Utility and means | p. 37 |
3.2.2 Associative means | p. 38 |
3.2.3 Functional means | p. 39 |
3.3 The expected utility principle | p. 40 |
3.4 The von Neumann-Morgenstern representation theorem | p. 42 |
3.4.1 Axioms | p. 42 |
3.4.2 Representation of preferences via expected utility | p. 44 |
3.5 Allais' criticism | p. 48 |
3.6 Extensions | p. 50 |
3.7 Exercises | p. 50 |
4 Utility in action | p. 55 |
4.1 The "standard gamble" | p. 56 |
4.2 Utility of money | p. 57 |
4.2.1 Certainty equivalents | p. 57 |
4.2.2 Risk aversion | p. 57 |
4.2.3 A measure of risk aversion | p. 60 |
4.3 Utility functions for medical decisions | p. 63 |
4.3.1 Length and quality of life | p. 63 |
4.3.2 Standard gamble for health states | p. 64 |
4.3.3 The time trade-off methods | p. 64 |
4.3.4 Relation between QALYs and utilities | p. 65 |
4.3.5 Utilities for time in ill health | p. 66 |
4.3.6 Difficulties in assessing utility | p. 69 |
4.4 Exercises | p. 70 |
5 Ramsey and Savage | p. 75 |
5.1 Ramsey's theory | p. 76 |
5.2 Savage's theory | p. 81 |
5.2.1 Notation and overview | p. 81 |
5.2.2 The sure thing principle | p. 82 |
5.2.3 Conditional and a posteriori preferences | p. 85 |
5.2.4 Subjective probability | p. 85 |
5.2.5 Utility and expected utility | p. 90 |
5.3 Allais revisited | p. 91 |
5.4 Ellsberg paradox | p. 92 |
5.5 Exercises | p. 93 |
6 State independence | p. 97 |
6.1 Horse lotteries | p. 98 |
6.2 State-dependent utilities | p. 100 |
6.3 State-independent utilities | p. 101 |
6.4 Anscombe-Aumann representation theorem | p. 103 |
6.5 Exercises | p. 105 |
Part 2 Statistical Decision Theory | p. 109 |
7 Decision functions | p. 111 |
7.1 Basic concepts | p. 112 |
7.1.1 The loss function | p. 112 |
7.1.2 Minimax | p. 114 |
7.1.3 Expected utility principle | p. 116 |
7.1.4 Illustrations | p. 117 |
7.2 Data-based decisions | p. 120 |
7.2.1 Risk | p. 120 |
7.2.2 Optimality principles | p. 121 |
7.2.3 Rationality principles and the Likelihood Principle | p. 123 |
7.2.4 Nuisance parameters | p. 125 |
7.3 The travel insurance example | p. 126 |
7.4 Randomized decision rules | p. 131 |
7.5 Classification and hypothesis tests | p. 133 |
7.5.1 Hypothesis testing | p. 133 |
7.5.2 Multiple hypothesis testing | p. 136 |
7.5.3 Classification | p. 139 |
7.6 Estimation | p. 140 |
7.6.1 Point estimation | p. 140 |
7.6.2 Interval inference | p. 143 |
7.7 Minimax-Bayes connection | p. 144 |
7.8 Exercises | p. 150 |
8 Admissibility | p. 155 |
8.1 Admissibility and completeness | p. 156 |
8.2 Admissibility and minimax | p. 158 |
8.3 Admissibility and Bayes | p. 159 |
8.3.1 Proper Bayes rules | p. 159 |
8.3.2 Generalized Bayes rules | p. 160 |
8.4 Complete classes | p. 164 |
8.4.1 Completeness and Bayes | p. 164 |
8.4.2 Sufficiency and the Rao-Blackwell inequality | p. 165 |
8.4.3 The Neyman-Pearson lemma | p. 167 |
8.5 Using the same α level across studies with different sample sizes is inadmissible | p. 168 |
8.6 Exercises | p. 171 |
9 Shrinkage | p. 175 |
9.1 The Stein effect | p. 176 |
9.2 Geometric and empirical Bayes heuristics | p. 179 |
9.2.1 Is x too big for θ? | p. 179 |
9.2.2 Empirical Bayes shrinkage | p. 181 |
9.3 General shrinkage functions | p. 183 |
9.3.1 Unbiased estimation of the risk of x+g(x) | p. 183 |
9.3.2 Bayes and minimax shrinkage | p. 185 |
9.4 Shrinkage with different likelihood and losses | p. 188 |
9.5 Exercises | p. 188 |
10 Scoring rules | p. 191 |
10.1 Betting and forecasting | p. 192 |
10.2 Scoring rules | p. 193 |
10.2.1 Definition | p. 193 |
10.2.2 Proper scoring rules | p. 194 |
10.2.3 The quadratic scoring rules | p. 195 |
10.2.4 Scoring rules that are not proper | p. 196 |
10.3 Local scoring rules | p. 197 |
10.4 Calibration and refinement | p. 200 |
10.4.1 The well-calibrated forecaster | p. 200 |
10.4.2 Are Bayesians well calibrated? | p. 205 |
10.5 Exercises | p. 207 |
11 Choosing models | p. 209 |
11.1 The "true model" perspective | p. 210 |
11.1.1 Model probabilities | p. 210 |
11.1.2 Model selection and Bayes factors | p. 212 |
11.1.3 Model averaging for prediction and selection | p. 213 |
11.2 Model elaborations | p. 216 |
11.3 Exercises | p. 219 |
Part 3 Optimal Design | p. 221 |
12 Dynamic programming | p. 223 |
12.1 History | p. 224 |
12.2 The travel insurance example revisited | p. 226 |
12.3 Dynamic programming | p. 230 |
12.3.1 Two-stage finite decision problems | p. 230 |
12.3.2 More than two stages | p. 233 |
12.4 Trading off immediate gains and information | p. 235 |
12.4.1 The secretary problem | p. 235 |
12.4.2 The prophet inequality | p. 239 |
12.5 Sequential clinical trials | p. 241 |
12.5.1 Two-armed bandit problems | p. 241 |
12.5.2 Adaptive designs for binary outcomes | p. 242 |
12.6 Variable selection in multiple regression | p. 245 |
12.7 Computing | p. 248 |
12.8 Exercises | p. 251 |
13 Changes in utility as information | p. 255 |
13.1 Measuring the value of information | p. 256 |
13.1.1 The value function | p. 256 |
13.1.2 Information from a perfect experiment | p. 258 |
13.1.3 Information from a statistical experiment | p. 259 |
13.1.4 The distribution of information | p. 264 |
13.2 Examples | p. 265 |
13.2.1 Tasting grapes | p. 265 |
13.2.2 Medical testing | p. 266 |
13.2.3 Hypothesis testing | p. 273 |
13.3 Lindley information | p. 276 |
13.3.1 Definition | p. 276 |
13.3.2 Properties | p. 278 |
13.3.3 Computing | p. 280 |
13.3.4 Optimal design | p. 281 |
13.4 Minimax and the value of information | p. 283 |
13.5 Exercises | p. 285 |
14 Sample size | p. 289 |
14.1 Decision-theoretic approaches to sample size | p. 290 |
14.1.1 Sample size and power | p. 290 |
14.1.2 Sample size as a decision problem | p. 290 |
14.1.3 Bayes and minimax optimal sample size | p. 292 |
14.1.4 A minimax paradox | p. 293 |
14.1.5 Goal sampling | p. 295 |
14.2 Computing | p. 298 |
14.3 Examples | p. 302 |
14.3.1 Point estimation with quadratic loss | p. 302 |
14.3.2 Composite hypothesis testing | p. 304 |
14.3.3 A two-action problem with linear utility | p. 306 |
14.3.4 Lindley information for exponential data | p. 309 |
14.3.5 Multicenter clinical trials | p. 311 |
14.4 Exercises | p. 316 |
15 Stopping | p. 323 |
15.1 Historical note | p. 324 |
15.2 A motivating example | p. 326 |
15.3 Bayesian optimal stopping | p. 328 |
15.3.1 Notation | p. 328 |
15.3.2 Bayes sequential procedure | p. 329 |
15.3.3 Bayes truncated procedure | p. 330 |
15.4 Examples | p. 332 |
15.4.1 Hypothesis testing | p. 332 |
15.4.2 An example with equivalence between sequential and fixed sample size designs | p. 336 |
15.5 Sequential sampling to reduce uncertainty | p. 337 |
15.6 The stopping rule principle | p. 339 |
15.6.1 Stopping rules and the Likelihood Principle | p. 339 |
15.6.2 Sampling to a foregone conclusion | p. 340 |
15.7 Exercises | p. 342 |
Appendix | p. 345 |
A.1 Notation | p. 345 |
A.2 Relations | p. 349 |
A.3 Probability (density) functions of some distributions | p. 350 |
A.4 Conjugate updating | p. 350 |
References | p. 353 |
Index | p. 367 |