Title:
Introduction to parallel computing
Personal Author:
Petersen, W. P.
Series:
Oxford texts in applied and engineering mathematics ; 9
Publication Information:
Oxford : Oxford University Press, 2004
ISBN:
9780198515760
Added Author:
Arbenz, Peter

Available:*

Library    Item Barcode      Call Number        Material Type      Item Category 1    Status
           30000010060048    QA76.58 P47 2004   Open Access Book   Book
           30000004735522    QA76.58 P47 2004   Open Access Book   Book

On Order

Summary

In recent years, courses on parallel computation have been developed and offered at many institutions in the UK, Europe, and the US, in recognition of the growing significance of this topic in mathematics and computer science. There is a clear need for texts that meet the needs of students and lecturers, and this book, based on the authors' lectures at ETH Zurich, is an ideal practical student guide to scientific computing on parallel computers, working up from the hardware instruction level, to shared-memory machines, and finally to distributed-memory machines. Aimed at advanced undergraduate and graduate students in applied mathematics, computer science, and engineering, it covers subjects including linear algebra, the fast Fourier transform, and Monte Carlo simulations, with examples in C and, in some cases, Fortran. The book is also ideal for practitioners and programmers.


Author Notes

Peter Arbenz is at the Institute for Scientific Computing, Departement Informatik, ETHZ, Switzerland. Wesley Petersen is at the Seminar for Applied Mathematics, Department of Mathematics, ETHZ, Switzerland.


Reviews 1

Choice Review

Parallelism in computing occurs on many levels: architectural, organizational, network, and algorithmic. This book is unique in that it provides a balanced treatment of the concepts of parallelism on all these levels. Following a concise summary of basic computer architecture, Petersen and Arbenz (both, ETHZ, Zurich) discuss parallelism on the algorithmic level. Instruction-level parallelism through loop unrolling, pipelining, and vectorizing is then treated, with examples drawn from popular microprocessors. Shared-memory parallelism is discussed next, with examples drawn from supercomputers. Lastly, network parallelism through message passing is explained. Both theoretical concepts and practical examples are plainly presented and clearly discussed, and many examples are illustrated with program segments written in C. The book is a tutorial that carefully compiles practical tricks and methods that have been successfully used in building fast machines and supercomputers; hence, it emphasizes what can be done given a real problem. It will serve computer science undergraduates learning about parallelism, as well as those who develop programs and systems that exploit as much parallelism as possible so as to maximize the desired performance. Summing Up: Highly recommended. Upper-division undergraduates through professionals. J. Y. Cheung, University of Oklahoma


Table of Contents

1 Basic issues
2 Applications
3 SIMD, Single Instruction Multiple Data
4 Shared Memory Parallelism
5 MIMD, Multiple Instruction Multiple Data
A SSE Intrinsics for Floating Point
B AltiVec Intrinsics for Floating Point
C OpenMP commands
D Summary of MPI commands
E Fortran and C communication
F Glossary of terms
G Notation and symbols