Title: Scientific Parallel Computing
Personal Author: Scott, L. Ridgway; Clark, Terry; Bagheri, Babak
Publication Information: Princeton, NJ : Princeton University Press, 2005
ISBN: 9780691119359

Available:

Item Barcode     Call Number        Material Type
30000010103382   QA76.58 S36 2005   Open Access Book
30000010102329   QA76.58 S36 2005   Open Access Book

On Order
Summary

What does Google's management of billions of Web pages have in common with the analysis of a genome with billions of nucleotides? Both apply methods that coordinate many processors to accomplish a single task. From mining genomes to the World Wide Web, from modeling financial markets to global weather patterns, parallel computing enables computations that would otherwise be impractical, if not impossible, with sequential approaches alone. Its fundamental role as an enabler of simulation and data analysis continues to advance a wide range of application areas.



Scientific Parallel Computing is the first textbook to integrate all the fundamentals of parallel computing in a single volume while also providing a basis for a deeper understanding of the subject. Designed for graduate and advanced undergraduate courses in the sciences and in engineering, computer science, and mathematics, it focuses on the three key areas of algorithms, architecture, and languages, and their crucial synthesis in performance.


The book's computational examples, whose math prerequisites are not beyond the level of advanced calculus, derive from a breadth of topics in scientific and engineering simulation and data analysis. The programming exercises presented early in the book are designed to bring students up to speed quickly, while the book later develops projects challenging enough to guide students toward research questions in the field. The new paradigm of cluster computing is fully addressed. A supporting web site provides access to all the codes and software mentioned in the book, and offers topical information on popular parallel computing systems.



Integrates all the fundamentals of parallel computing essential for today's high-performance requirements
Ideal for graduate and advanced undergraduate students in the sciences and in engineering, computer science, and mathematics
Extensive programming and theoretical exercises enable students to write parallel codes quickly
More challenging projects later in the book introduce research questions
New paradigm of cluster computing fully addressed
Supporting web site provides access to all the codes and software mentioned in the book


Author Notes

L. Ridgway Scott is Louis Block Professor of Computer Science and of Mathematics at the University of Chicago. He is the coauthor of The Mathematical Theory of Finite Element Methods. Terry Clark is Assistant Professor of Computer Science in the Department of Electrical Engineering and Computer Science at the University of Kansas. Babak Bagheri is a software architect at PROS Revenue Management, a company that designs software for pricing and revenue management. Scott, Clark, and Bagheri codeveloped the P-languages.


Reviews (1)

Choice Review

Scott (computer science and mathematics, Univ. of Chicago), Clark (computer science, Univ. of Kansas), and Bagheri have prepared a thorough treatment of the foundational and advanced principles of parallel computing. Their book is quite mathematical in its treatment of the topic and, as such, is most relevant to upper-division undergraduate and graduate-level courses of study. It covers topics ranging from computer architecture through parallel processing languages. It drills far into the details, down to the particulars of how the underlying hardware elements (processors, switches, memory) participate in parallel algorithms and where those physical elements present limitations. The book comes with numerous examples, illustrations, and exercises. It also contains a rich bibliography to help with further research on this topic. If readers are seeking a book on grid computing, they will find an introduction to the topic here under the heading of mesh methods; for a more complete description they will need other resources. Nevertheless, this book provides an excellent background for understanding grids and parallel algorithms in general. Summing Up: Recommended. Upper-division undergraduates through professionals. F. H. Wild III, University of Rhode Island


Table of Contents

Preface p. ix
Notation p. xiii
Chapter 1 Introduction p. 1
1.1 Overview p. 1
1.2 What is parallel computing? p. 3
1.3 Performance p. 4
1.4 Why parallel? p. 11
1.5 Two simple examples p. 15
1.6 Mesh-based applications p. 24
1.7 Parallel perspectives p. 30
1.8 Exercises p. 33
Chapter 2 Parallel Performance p. 37
2.1 Summation example p. 37
2.2 Performance measures p. 38
2.3 Limits to performance p. 44
2.4 Scalability p. 48
2.5 Parallel performance analysis p. 56
2.6 Parallel payoff p. 59
2.7 Real world parallelism p. 64
2.8 Starting SPMD programming p. 66
2.9 Exercises p. 66
Chapter 3 Computer Architecture p. 71
3.1 PMS notation p. 71
3.2 Shared memory multiprocessor p. 75
3.3 Distributed memory multicomputer p. 79
3.4 Pipeline and vector processors p. 87
3.5 Comparison of parallel architectures p. 89
3.6 Taxonomies p. 92
3.7 Current trends p. 94
3.8 Exercises p. 95
Chapter 4 Dependences p. 99
4.1 Data dependences p. 100
4.2 Loop-carried data dependences p. 103
4.3 Dependence examples p. 110
4.4 Testing for loop-carried dependences p. 112
4.5 Loop transformations p. 114
4.6 Dependence examples continued p. 120
4.7 Exercises p. 123
Chapter 5 Parallel Languages p. 127
5.1 Critical factors p. 129
5.2 Command and control p. 134
5.3 Memory models p. 136
5.4 Shared memory programming p. 139
5.5 Message passing p. 143
5.6 Examples and comments p. 148
5.7 Parallel language developments p. 153
5.8 Exercises p. 154
Chapter 6 Collective Operations p. 157
6.1 The @ notation p. 157
6.2 Tree/ring algorithms p. 158
6.3 Reduction operations p. 162
6.4 Reduction operation applications p. 164
6.5 Parallel prefix algorithms p. 168
6.6 Performance of reduction operations p. 169
6.7 Data movement operations p. 173
6.8 Exercises p. 174
Chapter 7 Current Programming Standards p. 177
7.1 Introduction to MPI p. 177
7.2 Collective operations in MPI p. 181
7.3 Introduction to POSIX threads p. 184
7.4 Exercises p. 187
Chapter 8 The Planguage Model p. 191
8.1 Planguage details p. 192
8.2 Ranges and arrays p. 198
8.3 Reduction operations in Pfortran p. 200
8.4 Introduction to PC p. 204
8.5 Reduction operations in PC p. 206
8.6 Planguages versus message passing p. 207
8.7 Exercises p. 208
Chapter 9 High Performance Fortran p. 213
9.1 HPF data distribution directives p. 214
9.2 Other mechanisms for expressing concurrency p. 219
9.3 Compiling HPF p. 220
9.4 HPF comparisons and review p. 221
9.5 Exercises p. 222
Chapter 10 Loop Tiling p. 227
10.1 Loop tiling p. 227
10.2 Work vs. data decomposition p. 228
10.3 Tiling in OpenMP p. 228
10.4 Teams p. 232
10.5 Parallel regions p. 233
10.6 Exercises p. 234
Chapter 11 Matrix Eigen Analysis p. 237
11.1 The Leslie matrix model p. 237
11.2 The power method p. 242
11.3 A parallel Leslie matrix program p. 244
11.4 Matrix-vector product p. 249
11.5 Power method applications p. 251
11.6 Exercises p. 253
Chapter 12 Linear Systems p. 257
12.1 Gaussian elimination p. 257
12.2 Solving triangular systems in parallel p. 262
12.3 Divide-and-conquer algorithms p. 271
12.4 Exercises p. 277
12.5 Projects p. 281
Chapter 13 Particle Dynamics p. 283
13.1 Model assumptions p. 284
13.2 Using Newton's third law p. 285
13.3 Further code complications p. 288
13.4 Pair list generation p. 290
13.5 Force calculation with a pair list p. 296
13.6 Performance of replication algorithm p. 299
13.7 Case study: particle dynamics in HPF p. 302
13.8 Exercises p. 307
13.9 Projects p. 310
Chapter 14 Mesh Methods p. 315
14.1 Boundary value problems p. 315
14.2 Iterative methods p. 319
14.3 Multigrid methods p. 322
14.4 Multidimensional problems p. 327
14.5 Initial value problems p. 328
14.6 Exercises p. 333
14.7 Projects p. 334
Chapter 15 Sorting p. 335
15.1 Introduction p. 335
15.2 Parallel sorting p. 337
15.3 Spatial sorting p. 342
15.4 Exercises p. 353
15.5 Projects p. 355
Bibliography p. 357
Index p. 369