Available
Library | Item Barcode | Call Number | Material Type | Item Category 1 | Status |
---|---|---|---|---|---|
 | 30000010103382 | QA76.58 S36 2005 | Open Access Book | Book | |
 | 30000010102329 | QA76.58 S36 2005 | Open Access Book | Book | |
On Order
Summary
What does Google's management of billions of Web pages have in common with the analysis of a genome with billions of nucleotides? Both apply methods that coordinate many processors to accomplish a single task. From mining genomes to the World Wide Web, from modeling financial markets to global weather patterns, parallel computing enables computations that would otherwise be impractical, if not impossible, with sequential approaches alone. Its fundamental role as an enabler of simulations and data analysis continues to advance a wide range of application areas.
Scientific Parallel Computing is the first textbook to integrate all the fundamentals of parallel computing in a single volume while also providing a basis for a deeper understanding of the subject. Designed for graduate and advanced undergraduate courses in the sciences and in engineering, computer science, and mathematics, it focuses on the three key areas of algorithms, architectures, and languages, and on their crucial synthesis in performance.
The book's computational examples, whose math prerequisites are not beyond the level of advanced calculus, derive from a breadth of topics in scientific and engineering simulation and data analysis. The programming exercises presented early in the book are designed to bring students up to speed quickly, while the book later develops projects challenging enough to guide students toward research questions in the field. The new paradigm of cluster computing is fully addressed. A supporting web site provides access to all the codes and software mentioned in the book, and offers topical information on popular parallel computing systems.
Integrates all the fundamentals of parallel computing essential for today's high-performance requirements
Ideal for graduate and advanced undergraduate students in the sciences and in engineering, computer science, and mathematics
Extensive programming and theoretical exercises enable students to write parallel codes quickly
More challenging projects later in the book introduce research questions
New paradigm of cluster computing fully addressed
Supporting web site provides access to all the codes and software mentioned in the book
Author Notes
L. Ridgway Scott is Louis Block Professor of Computer Science and of Mathematics at the University of Chicago. He is the coauthor of The Mathematical Theory of Finite Element Methods. Terry Clark is Assistant Professor of Computer Science in the Department of Electrical Engineering and Computer Science at the University of Kansas. Babak Bagheri is a software architect at PROS Revenue Management, a company that designs software for pricing and revenue management. Scott, Clark, and Bagheri codeveloped the P-languages.
Reviews 1
Choice Review
Scott (computer science and mathematics, Univ. of Chicago), Clark (computer science, Univ. of Kansas), and Bagheri have prepared a thorough treatment of the foundational and advanced principles of parallel computing. Their book is quite mathematical in its treatment of the topic, and as such, is most relevant to upper-division undergraduate and graduate-level courses of study. It covers topics ranging from computer architecture through parallel processing languages. It drills far into the details, to the particulars of how the underlying hardware elements (processors, switches, memory) participate in parallel algorithms and where those physical elements present limitations. The book comes with numerous examples, illustrations, and exercises. It also contains a rich bibliography to help with further research on this topic. If readers are seeking a book on grid computing, they will find an introduction to the topic here under the heading of mesh methods; for a more complete description they will need other resources. Nevertheless, this book provides an excellent background for understanding grids and parallel algorithms in general. Summing Up: Recommended. Upper-division undergraduates through professionals. F. H. Wild III, University of Rhode Island
Table of Contents
Preface ix | |
Notation xiii | |
Chapter 1 Introduction | p. 1 |
1.1 Overview | p. 1 |
1.2 What is parallel computing? | p. 3 |
1.3 Performance | p. 4 |
1.4 Why parallel? | p. 11 |
1.5 Two simple examples | p. 15 |
1.6 Mesh-based applications | p. 24 |
1.7 Parallel perspectives | p. 30 |
1.8 Exercises | p. 33 |
Chapter 2 Parallel Performance | p. 37 |
2.1 Summation example | p. 37 |
2.2 Performance measures | p. 38 |
2.3 Limits to performance | p. 44 |
2.4 Scalability | p. 48 |
2.5 Parallel performance analysis | p. 56 |
2.6 Parallel payoff | p. 59 |
2.7 Real world parallelism | p. 64 |
2.8 Starting SPMD programming | p. 66 |
2.9 Exercises | p. 66 |
Chapter 3 Computer Architecture | p. 71 |
3.1 PMS notation | p. 71 |
3.2 Shared memory multiprocessor | p. 75 |
3.3 Distributed memory multicomputer | p. 79 |
3.4 Pipeline and vector processors | p. 87 |
3.5 Comparison of parallel architectures | p. 89 |
3.6 Taxonomies | p. 92 |
3.7 Current trends | p. 94 |
3.8 Exercises | p. 95 |
Chapter 4 Dependences | p. 99 |
4.1 Data dependences | p. 100 |
4.2 Loop-carried data dependences | p. 103 |
4.3 Dependence examples | p. 110 |
4.4 Testing for loop-carried dependences | p. 112 |
4.5 Loop transformations | p. 114 |
4.6 Dependence examples continued | p. 120 |
4.7 Exercises | p. 123 |
Chapter 5 Parallel Languages | p. 127 |
5.1 Critical factors | p. 129 |
5.2 Command and control | p. 134 |
5.3 Memory models | p. 136 |
5.4 Shared memory programming | p. 139 |
5.5 Message passing | p. 143 |
5.6 Examples and comments | p. 148 |
5.7 Parallel language developments | p. 153 |
5.8 Exercises | p. 154 |
Chapter 6 Collective Operations | p. 157 |
6.1 The @ notation | p. 157 |
6.2 Tree/ring algorithms | p. 158 |
6.3 Reduction operations | p. 162 |
6.4 Reduction operation applications | p. 164 |
6.5 Parallel prefix algorithms | p. 168 |
6.6 Performance of reduction operations | p. 169 |
6.7 Data movement operations | p. 173 |
6.8 Exercises | p. 174 |
Chapter 7 Current Programming Standards | p. 177 |
7.1 Introduction to MPI | p. 177 |
7.2 Collective operations in MPI | p. 181 |
7.3 Introduction to POSIX threads | p. 184 |
7.4 Exercises | p. 187 |
Chapter 8 The Planguage Model | p. 191 |
8.1 Planguage details | p. 192 |
8.2 Ranges and arrays | p. 198 |
8.3 Reduction operations in Pfortran | p. 200 |
8.4 Introduction to PC | p. 204 |
8.5 Reduction operations in PC | p. 206 |
8.6 Planguages versus message passing | p. 207 |
8.7 Exercises | p. 208 |
Chapter 9 High Performance Fortran | p. 213 |
9.1 HPF data distribution directives | p. 214 |
9.2 Other mechanisms for expressing concurrency | p. 219 |
9.3 Compiling HPF | p. 220 |
9.4 HPF comparisons and review | p. 221 |
9.5 Exercises | p. 222 |
Chapter 10 Loop Tiling | p. 227 |
10.1 Loop tiling | p. 227 |
10.2 Work vs. data decomposition | p. 228 |
10.3 Tiling in OpenMP | p. 228 |
10.4 Teams | p. 232 |
10.5 Parallel regions | p. 233 |
10.6 Exercises | p. 234 |
Chapter 11 Matrix Eigen Analysis | p. 237 |
11.1 The Leslie matrix model | p. 237 |
11.2 The power method | p. 242 |
11.3 A parallel Leslie matrix program | p. 244 |
11.4 Matrix-vector product | p. 249 |
11.5 Power method applications | p. 251 |
11.6 Exercises | p. 253 |
Chapter 12 Linear Systems | p. 257 |
12.1 Gaussian elimination | p. 257 |
12.2 Solving triangular systems in parallel | p. 262 |
12.3 Divide-and-conquer algorithms | p. 271 |
12.4 Exercises | p. 277 |
12.5 Projects | p. 281 |
Chapter 13 Particle Dynamics | p. 283 |
13.1 Model assumptions | p. 284 |
13.2 Using Newton's third law | p. 285 |
13.3 Further code complications | p. 288 |
13.4 Pair list generation | p. 290 |
13.5 Force calculation with a pair list | p. 296 |
13.6 Performance of replication algorithm | p. 299 |
13.7 Case study: particle dynamics in HPF | p. 302 |
13.8 Exercises | p. 307 |
13.9 Projects | p. 310 |
Chapter 14 Mesh Methods | p. 315 |
14.1 Boundary value problems | p. 315 |
14.2 Iterative methods | p. 319 |
14.3 Multigrid methods | p. 322 |
14.4 Multidimensional problems | p. 327 |
14.5 Initial value problems | p. 328 |
14.6 Exercises | p. 333 |
14.7 Projects | p. 334 |
Chapter 15 Sorting | p. 335 |
15.1 Introduction | p. 335 |
15.2 Parallel sorting | p. 337 |
15.3 Spatial sorting | p. 342 |
15.4 Exercises | p. 353 |
15.5 Projects | p. 355 |
Bibliography | p. 357 |
Index | p. 369 |