Available:
Library | Item Barcode | Call Number | Material Type | Item Category 1 | Status |
---|---|---|---|---|---|
 | 30000010082249 | QC174.85.M64 B47 2004 | Open Access Book | Book | On Order |
Summary
This book teaches modern Markov chain Monte Carlo (MC) simulation techniques step by step. The material should be accessible to advanced undergraduate students and is suitable for a course. It ranges from elementary statistics concepts (the theory behind MC simulations), through conventional Metropolis and heat bath algorithms, autocorrelations and the analysis of the performance of MC algorithms, to advanced topics including the multicanonical approach, cluster algorithms and parallel computing. It is therefore also of interest to researchers in the field. The book relates the theory directly to Web-based computer code, allowing readers to get started quickly with their own simulations and to verify many numerical examples easily. The present code is in Fortran 77, for which compilers are freely available. The principles taught are also important for users of other programming languages, such as C or C++.
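The book's code is Fortran 77, but the core technique it teaches translates directly to other languages. As a minimal, hedged sketch (not the book's routines), the Metropolis algorithm for the 2d Ising model of chapter 3 can be written in a few lines of Python; the lattice size `L`, inverse temperature `beta`, and sweep count below are illustrative choices, not values from the book:

```python
import math
import random

def metropolis_ising(L=8, beta=0.4, sweeps=200, seed=1):
    """Metropolis simulation of the 2d Ising model on an L x L lattice
    with periodic boundaries; returns the mean energy per spin."""
    rng = random.Random(seed)
    # Random (hot) start: each spin is +1 or -1.
    spin = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    energies = []
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                # Sum of the four nearest neighbours (periodic boundaries).
                nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                      + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
                dE = 2 * spin[i][j] * nb  # energy change if this spin flips
                # Metropolis acceptance: always accept dE <= 0,
                # otherwise accept with probability exp(-beta * dE).
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    spin[i][j] = -spin[i][j]
        # E = -sum over bonds of s_i * s_j, counting each bond once.
        e = -sum(spin[i][j] * (spin[(i + 1) % L][j] + spin[i][(j + 1) % L])
                 for i in range(L) for j in range(L))
        energies.append(e / (L * L))
    return sum(energies) / len(energies)

print(metropolis_ising())
```

A production run would discard the initial sweeps for equilibration and bin the time series for error analysis, exactly the topics of chapters 2 and 4.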
Table of Contents
Preface | p. vii |
1 Sampling, Statistics and Computer Code | p. 1 |
1.1 Probability Distributions and Sampling | p. 1 |
1.1.1 Assignments for section 1.1 | p. 5 |
1.2 Random Numbers | p. 6 |
1.2.1 Assignments for section 1.2 | p. 12 |
1.3 About the Fortran Code | p. 13 |
1.3.1 CPU time measurements under Linux | p. 22 |
1.4 Gaussian Distribution | p. 23 |
1.4.1 Assignments for section 1.4 | p. 25 |
1.5 Confidence Intervals | p. 26 |
1.5.1 Assignment for section 1.5 | p. 30 |
1.6 Order Statistics and HeapSort | p. 30 |
1.6.1 Assignments for section 1.6 | p. 34 |
1.7 Functions and Expectation Values | p. 35 |
1.7.1 Moments and Tchebychev's inequality | p. 36 |
1.7.2 The sum of two independent random variables | p. 40 |
1.7.3 Characteristic functions and sums of N independent random variables | p. 41 |
1.7.4 Linear transformations, error propagation and covariance | p. 43 |
1.7.5 Assignments for section 1.7 | p. 46 |
1.8 Sample Mean and the Central Limit Theorem | p. 47 |
1.8.1 Probability density of the sample mean | p. 47 |
1.8.2 The central limit theorem | p. 50 |
1.8.2.1 Counter example | p. 51 |
1.8.3 Binning | p. 52 |
1.8.4 Assignments for section 1.8 | p. 53 |
2 Error Analysis for Independent Random Variables | p. 54 |
2.1 Gaussian Confidence Intervals and Error Bars | p. 54 |
2.1.1 Estimator of the variance and bias | p. 56 |
2.1.2 Statistical error bar routines (steb) | p. 57 |
2.1.2.1 Ratio of two means with error bars | p. 60 |
2.1.3 Gaussian difference test | p. 60 |
2.1.3.1 Combining more than two data points | p. 62 |
2.1.4 Assignments for section 2.1 | p. 64 |
2.2 The χ² Distribution | p. 66 |
2.2.1 Sample variance distribution | p. 67 |
2.2.2 The χ² distribution function and probability density | p. 70 |
2.2.3 Assignments for section 2.2 | p. 72 |
2.3 Gosset's Student Distribution | p. 73 |
2.3.1 Student difference test | p. 77 |
2.3.2 Assignments for section 2.3 | p. 81 |
2.4 The Error of the Error Bar | p. 81 |
2.4.1 Assignments for section 2.4 | p. 84 |
2.5 Variance Ratio Test (F-test) | p. 85 |
2.5.1 F ratio confidence limits | p. 88 |
2.5.2 Assignments for section 2.5 | p. 89 |
2.6 When are Distributions Consistent? | p. 89 |
2.6.1 χ² Test | p. 89 |
2.6.2 The one-sided Kolmogorov test | p. 92 |
2.6.3 The two-sided Kolmogorov test | p. 98 |
2.6.4 Assignments for section 2.6 | p. 101 |
2.7 The Jackknife Approach | p. 103 |
2.7.1 Bias corrected estimators | p. 106 |
2.7.2 Assignments for section 2.7 | p. 108 |
2.8 Determination of Parameters (Fitting) | p. 109 |
2.8.1 Linear regression | p. 111 |
2.8.1.1 Confidence limits of the regression line | p. 114 |
2.8.1.2 Related functional forms | p. 115 |
2.8.1.3 Examples | p. 117 |
2.8.2 Levenberg-Marquardt fitting | p. 121 |
2.8.2.1 Examples | p. 125 |
2.8.3 Assignments for section 2.8 | p. 127 |
3 Markov Chain Monte Carlo | p. 128 |
3.1 Preliminaries and the Two-Dimensional Ising Model | p. 129 |
3.1.1 Lattice labeling | p. 133 |
3.1.2 Sampling and Re-weighting | p. 138 |
3.1.2.1 Important configurations and re-weighting range | p. 141 |
3.1.3 Assignments for section 3.1 | p. 142 |
3.2 Importance Sampling | p. 142 |
3.2.1 The Metropolis algorithm | p. 147 |
3.2.2 The O(3) σ-model and the heat bath algorithm | p. 148 |
3.2.3 Assignments for section 3.2 | p. 152 |
3.3 Potts Model Monte Carlo Simulations | p. 152 |
3.3.1 The Metropolis code | p. 156 |
3.3.1.1 Initialization | p. 158 |
3.3.1.2 Updating routines | p. 160 |
3.3.1.3 Start and equilibration | p. 163 |
3.3.1.4 More updating routines | p. 164 |
3.3.2 Heat bath code | p. 165 |
3.3.3 Timing and time series comparison of the routines | p. 168 |
3.3.4 Energy references, data production and analysis code | p. 169 |
3.3.4.1 2d Ising model | p. 171 |
3.3.4.2 Data analysis | p. 173 |
3.3.4.3 2d 4-state and 10-state Potts models | p. 174 |
3.3.4.4 3d Ising model | p. 177 |
3.3.4.5 3d 3-state Potts model | p. 177 |
3.3.4.6 4d Ising model with non-zero magnetic field | p. 178 |
3.3.5 Assignments for section 3.3 | p. 179 |
3.4 Continuous Systems | p. 181 |
3.4.1 Simple Metropolis code for the O(n) spin models | p. 182 |
3.4.2 Metropolis code for the XY model | p. 186 |
3.4.2.1 Timing, discretization and rounding errors | p. 187 |
3.4.2.2 Acceptance rate | p. 189 |
3.4.3 Heat bath code for the O(3) model | p. 192 |
3.4.3.1 Rounding errors | p. 194 |
3.4.4 Assignments for section 3.4 | p. 194 |
4 Error Analysis for Markov Chain Data | p. 196 |
4.1 Autocorrelations | p. 197 |
4.1.1 Integrated autocorrelation time and binning | p. 202 |
4.1.2 Illustration: Metropolis generation of normally distributed data | p. 205 |
4.1.2.1 Autocorrelation function | p. 205 |
4.1.2.2 Integrated autocorrelation time | p. 207 |
4.1.2.3 Corrections to the confidence intervals of the binning procedure | p. 210 |
4.1.3 Self-consistent versus reasonable error analysis | p. 211 |
4.1.4 Assignments for section 4.1 | p. 213 |
4.2 Analysis of Statistical Physics Data | p. 214 |
4.2.1 The d = 2 Ising model off and on the critical point | p. 214 |
4.2.2 Comparison of Markov chain MC algorithms | p. 218 |
4.2.2.1 Random versus sequential updating | p. 218 |
4.2.2.2 Tuning the Metropolis acceptance rate | p. 219 |
4.2.2.3 Metropolis versus heat bath: 2d q = 10 Potts | p. 221 |
4.2.2.4 Metropolis versus heat bath: 3d Ising | p. 222 |
4.2.2.5 Metropolis versus heat bath: 2d O(3) σ model | p. 223 |
4.2.3 Small fluctuations | p. 224 |
4.2.4 Assignments for section 4.2 | p. 227 |
4.3 Fitting of Markov Chain Monte Carlo Data | p. 229 |
4.3.1 One exponential autocorrelation time | p. 230 |
4.3.2 More than one exponential autocorrelation time | p. 233 |
4.3.3 Assignments for section 4.3 | p. 235 |
5 Advanced Monte Carlo | p. 236 |
5.1 Multicanonical Simulations | p. 236 |
5.1.1 Recursion for the weights | p. 239 |
5.1.2 Fortran implementation | p. 244 |
5.1.3 Example runs | p. 247 |
5.1.4 Performance | p. 250 |
5.1.5 Re-weighting to the canonical ensemble | p. 251 |
5.1.6 Energy and specific heat calculation | p. 254 |
5.1.7 Free energy and entropy calculation | p. 261 |
5.1.8 Time series analysis | p. 264 |
5.1.9 Assignments for section 5.1 | p. 267 |
5.2 Event Driven Simulations | p. 268 |
5.2.1 Computer implementation | p. 270 |
5.2.2 MC runs with the EDS code | p. 276 |
5.2.3 Assignments for section 5.2 | p. 278 |
5.3 Cluster Algorithms | p. 279 |
5.3.1 Autocorrelation times | p. 284 |
5.3.2 Assignments for section 5.3 | p. 286 |
5.4 Large Scale Simulations | p. 287 |
5.4.1 Assignments for section 5.4 | p. 289 |
6 Parallel Computing | p. 292 |
6.1 Trivially Parallel Computing | p. 292 |
6.2 Message Passing Interface (MPI) | p. 294 |
6.3 Parallel Tempering | p. 303 |
6.3.1 Computer implementation | p. 305 |
6.3.2 Illustration for the 2d 10-state Potts model | p. 310 |
6.3.3 Gaussian Multiple Markov chains | p. 315 |
6.3.4 Assignments for section 6.3 | p. 316 |
6.4 Checkerboard algorithms | p. 316 |
6.4.1 Assignment for section 6.4 | p. 318 |
7 Conclusions, History and Outlook | p. 319 |
Appendix A Computational Supplements | p. 326 |
A.1 Calculation of Special Functions | p. 326 |
A.2 Linear Algebraic Equations | p. 328 |
Appendix B More Exercises and some Solutions | p. 331 |
B.1 Exercises | p. 331 |
B.2 Solutions | p. 333 |
Appendix C More Fortran Routines | p. 338 |
Bibliography | p. 339 |
Index | p. 349 |