Title:
UPC : distributed shared memory programming
Publication Information:
Hoboken, NJ : John Wiley & Sons, 2005
ISBN:
9780471220480
Added Author:

Available:

Item Barcode: 30000010128754
Call Number: QA76.73.U63 U62 2005
Material Type: Open Access Book
Item Category: Book

On Order
Summary

This is the first book to explain the language Unified Parallel C and its use. Authors El-Ghazawi, Carlson, and Sterling are among the developers of UPC, with close links to the industrial members of the UPC consortium. Their text covers background material on parallel architectures and algorithms, and includes UPC programming case studies. This book represents an invaluable resource for the growing number of UPC users and applications developers. More information about UPC can be found at: http://upc.gwu.edu/

An Instructor Support FTP site is available from the Wiley editorial department.


Author Notes

Tarek El-Ghazawi received his PhD in electrical and computer engineering from New Mexico State University. Currently, he is an associate professor in the Electrical and Computer Engineering Department at the George Washington University. His research interests are in high-performance computing, computer architecture, reconfigurable computing, embedded systems, and experimental performance evaluation. He has over 70 technical journal and conference publications in these areas. He has served as the principal investigator for over two dozen funded research projects, and his research has been supported by NASA, DoD, NSF, and industry. He has served as a guest editor for IEEE Concurrency and was an associate editor for the International Journal of Parallel and Distributed Computing and Networking. El-Ghazawi has also served as a visiting scientist at NASA GSFC and NASA Ames Research Center. He is a senior member of the IEEE and a member of the advisory board for the IEEE Task Force on Cluster Computing.

William Carlson received his PhD in Electrical Engineering from Purdue University. From 1988 to 1990, he was an assistant professor at the University of Wisconsin-Madison. His research interests include performance evaluation of advanced computer architectures, operating systems, and languages and compilers for parallel and distributed computers.

Thomas Sterling received his PhD as a Hertz Fellow from the Massachusetts Institute of Technology. His research interests include parallel computer architecture, system software, and evaluation. He holds six patents, is the co-author of several books, and has published dozens of papers in the field of parallel computing.

Katherine Yelick received her PhD in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. Her research interests include parallel computing, memory hierarchy optimizations, programming languages, and compilers. Currently, she is a Professor of Computer Science at the University of California, Berkeley.


Table of Contents

Preface
1 Introductory Tutorial
1.1 Getting Started
1.2 Private and Shared Data
1.3 Shared Arrays and Affinity of Shared Data
1.4 Synchronization and Memory Consistency
1.5 Work Sharing
1.6 UPC Pointers
1.7 Summary
Exercises
2 Programming View and UPC Data Types
2.1 Programming Models
2.2 UPC Programming Model
2.3 Shared and Private Variables
2.4 Shared and Private Arrays
2.5 Blocked Shared Arrays
2.6 Compiling Environments and Shared Arrays
2.7 Summary
Exercises
3 Pointers and Arrays
3.1 UPC Pointers
3.2 Pointer Arithmetic
3.3 Pointer Casting and Usage Practices
3.4 Pointer Information and Manipulation Functions
3.5 More Pointer Examples
3.6 Summary
Exercises
4 Work Sharing and Domain Decomposition
4.1 Basic Work Distribution
4.2 Parallel Iterations
4.3 Multidimensional Data
4.4 Distributing Trees
4.5 Summary
Exercises
5 Dynamic Shared Memory Allocation
5.1 Allocating a Global Shared Memory Space Collectively
5.2 Allocating Multiple Global Spaces
5.3 Allocating Local Shared Spaces
5.4 Freeing Allocated Spaces
5.5 Summary
Exercises
6 Synchronization and Memory Consistency
6.1 Barriers
6.2 Split-Phase Barriers
6.3 Locks
6.4 Memory Consistency
6.5 Summary
Exercises
7 Performance Tuning and Optimization
7.1 Parallel System Architectures
7.2 Performance Issues in Parallel Programming
7.3 Role of Compilers and Run-Time Systems
7.4 UPC Hand Optimization
7.5 Case Studies
7.6 Summary
Exercises
8 UPC Libraries
8.1 UPC Collective Library
8.2 UPC-IO Library
8.3 Summary
References
Appendix A UPC Language Specifications, v1.1.1
Appendix B UPC Collective Operations Specifications, v1.0
Appendix C UPC-IO Specifications, v1.0
Appendix D How to Compile and Run UPC Programs
Appendix E Quick UPC Reference
Index