Title:
Task scheduling for parallel systems
Personal Author:
Sinnen, Oliver
Publication Information:
Hoboken, NJ : John Wiley, 2007
ISBN:
9780471735762

Available:

Item Barcode: 30000010159389
Call Number: QA76.58 S564 2007
Material Type: Open Access Book
Item Category 1: Book
Summary

A new model for task scheduling that dramatically improves the efficiency of parallel systems

Task scheduling for parallel systems can become a quagmire of heuristics, models, and methods that have been developed over the past decades. The author of this innovative text cuts through the confusion and complexity by presenting a consistent and comprehensive theoretical framework along with realistic parallel system models. These new models, based on an investigation of the concepts and principles underlying task scheduling, take into account heterogeneity, contention for communication resources, and the involvement of the processor in communications.

For readers who may be new to task scheduling, the first chapters are essential. They serve as an excellent introduction to programming parallel systems, and they place task scheduling within the context of the program parallelization process. The author then reviews the basics of graph theory, discussing the major graph models used to represent parallel programs. Next, the author introduces his task scheduling framework. He carefully explains the theoretical background of this framework and provides several examples to enable readers to fully understand how it both simplifies task scheduling and enhances its power.
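To give a flavor of the ideas the book develops, the core concepts — a task graph (DAG) with node weights and a list-scheduling heuristic driven by node levels — can be illustrated in a few lines of Python. This is a hedged sketch, not code from the book: the toy graph, its costs, and the two-processor setup are hypothetical, and communication costs are ignored for simplicity.

```python
# Hypothetical task graph: node -> (computation cost, list of successors).
# Edges point from a task to the tasks that depend on its result.
tasks = {
    "A": (2, ["B", "C"]),
    "B": (3, ["D"]),
    "C": (1, ["D"]),
    "D": (2, []),
}

def bottom_level(node):
    # Length of the longest path from `node` to an exit node.
    # Node levels like this are a standard list-scheduling priority.
    cost, succs = tasks[node]
    return cost + max((bottom_level(s) for s in succs), default=0)

def list_schedule(num_procs=2):
    # Visit nodes in descending bottom-level order; since a predecessor
    # always has a larger bottom level than its successors, this order
    # respects the precedence constraints of the DAG.
    order = sorted(tasks, key=bottom_level, reverse=True)
    proc_free = [0] * num_procs   # time each processor becomes idle
    finish = {}                   # node -> finish time
    schedule = {}                 # node -> (processor, start time)
    for node in order:
        cost, _ = tasks[node]
        # A node is ready once all of its predecessors have finished.
        ready = max((finish[p] for p, (_, ss) in tasks.items() if node in ss),
                    default=0)
        # Start-time minimization: pick the processor with earliest start.
        proc = min(range(num_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[proc], ready)
        schedule[node] = (proc, start)
        finish[node] = start + cost
        proc_free[proc] = finish[node]
    return schedule, max(finish.values())
```

For this toy graph the heuristic finds a schedule of length 7, which matches the critical path A-B-D and is therefore optimal here; with communication costs, contention, or processor involvement — the subjects of the later chapters — the problem becomes far harder.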

The second half of the text examines both basic and advanced scheduling techniques, offering readers a thorough understanding of the principles underlying scheduling algorithms. The final two chapters address communication contention in scheduling and processor involvement in communications.

Each chapter features exercises that help readers put their new skills into practice. An extensive bibliography leads to additional information for further research. Finally, the use of figures and examples helps readers better visualize and understand complex concepts and processes.

Researchers and students in distributed and parallel computer systems will find that this text dramatically improves their ability to schedule tasks accurately and efficiently.


Author Notes

Oliver Sinnen is a senior lecturer in the Department of Electrical and Computer Engineering at the University of Auckland, New Zealand.


Table of Contents

Preface p. xi
Acknowledgments p. xiii
1 Introduction p. 1
1.1 Overview p. 1
1.2 Organization p. 5
2 Parallel Systems and Programming p. 7
2.1 Parallel Architectures p. 7
2.1.1 Flynn's Taxonomy p. 7
2.1.2 Memory Architectures p. 9
2.1.3 Programming Paradigms and Models p. 11
2.2 Communication Networks p. 13
2.2.1 Static Networks p. 13
2.2.2 Dynamic Networks p. 18
2.3 Parallelization p. 22
2.4 Subtask Decomposition p. 24
2.4.1 Concurrency and Granularity p. 24
2.4.2 Decomposition Techniques p. 25
2.4.3 Computation Type and Program Formulation p. 27
2.4.4 Parallelization Techniques p. 28
2.4.5 Target Parallel System p. 28
2.5 Dependence Analysis p. 29
2.5.1 Data Dependence p. 29
2.5.2 Data Dependence in Loops p. 32
2.5.3 Control Dependence p. 35
2.6 Concluding Remarks p. 36
2.7 Exercises p. 37
3 Graph Representations p. 40
3.1 Basic Graph Concepts p. 40
3.1.1 Computer Representation of Graphs p. 43
3.1.2 Elementary Graph Algorithms p. 46
3.2 Graph as a Program Model p. 49
3.2.1 Computation and Communication Costs p. 50
3.2.2 Comparison Criteria p. 50
3.3 Dependence Graph (DG) p. 51
3.3.1 Iteration Dependence Graph p. 53
3.3.2 Summary p. 55
3.4 Flow Graph (FG) p. 56
3.4.1 Data-Driven Execution Model p. 60
3.4.2 Summary p. 61
3.5 Task Graph (DAG) p. 62
3.5.1 Graph Transformations and Conversions p. 64
3.5.2 Motivations and Limitations p. 68
3.5.3 Summary p. 69
3.6 Concluding Remarks p. 69
3.7 Exercises p. 70
4 Task Scheduling p. 74
4.1 Fundamentals p. 74
4.2 With Communication Costs p. 76
4.2.1 Schedule Example p. 81
4.2.2 Scheduling Complexity p. 82
4.3 Without Communication Costs p. 86
4.3.1 Schedule Example p. 87
4.3.2 Scheduling Complexity p. 88
4.4 Task Graph Properties p. 92
4.4.1 Critical Path p. 93
4.4.2 Node Levels p. 95
4.4.3 Granularity p. 101
4.5 Concluding Remarks p. 105
4.6 Exercises p. 105
5 Fundamental Heuristics p. 108
5.1 List Scheduling p. 108
5.1.1 Start Time Minimization p. 111
5.1.2 With Dynamic Priorities p. 114
5.1.3 Node Priorities p. 115
5.2 Scheduling with Given Processor Allocation p. 118
5.2.1 Phase Two p. 119
5.3 Clustering p. 119
5.3.1 Clustering Algorithms p. 121
5.3.2 Linear Clustering p. 124
5.3.3 Single Edge Clustering p. 128
5.3.4 List Scheduling as Clustering p. 135
5.3.5 Other Algorithms p. 138
5.4 From Clustering to Scheduling p. 139
5.4.1 Assigning Clusters to Processors p. 139
5.4.2 Scheduling on Processors p. 141
5.5 Concluding Remarks p. 141
5.6 Exercises p. 142
6 Advanced Task Scheduling p. 145
6.1 Insertion Technique p. 145
6.1.1 List Scheduling with Node Insertion p. 148
6.2 Node Duplication p. 150
6.2.1 Node Duplication Heuristics p. 153
6.3 Heterogeneous Processors p. 154
6.3.1 Scheduling p. 157
6.4 Complexity Results p. 158
6.4.1 α|β|γ Classification p. 158
6.4.2 Without Communication Costs p. 165
6.4.3 With Communication Costs p. 165
6.4.4 With Node Duplication p. 168
6.4.5 Heterogeneous Processors p. 170
6.5 Genetic Algorithms p. 170
6.5.1 Basics p. 171
6.5.2 Chromosomes p. 172
6.5.3 Reproduction p. 177
6.5.4 Selection, Complexity, and Flexibility p. 180
6.6 Concluding Remarks p. 182
6.7 Exercises p. 183
7 Communication Contention in Scheduling p. 187
7.1 Contention Awareness p. 188
7.1.1 End-Point Contention p. 189
7.1.2 Network Contention p. 190
7.1.3 Integrating End-Point and Network Contention p. 192
7.2 Network Model p. 192
7.2.1 Topology Graph p. 192
7.2.2 Routing p. 198
7.2.3 Scheduling Network Model p. 202
7.3 Edge Scheduling p. 203
7.3.1 Scheduling Edge on Route p. 204
7.3.2 The Edge Scheduling p. 208
7.4 Contention Aware Scheduling p. 209
7.4.1 Basics p. 209
7.4.2 NP-Completeness p. 211
7.5 Heuristics p. 216
7.5.1 List Scheduling p. 216
7.5.2 Priority Schemes - Task Graph Properties p. 219
7.5.3 Clustering p. 220
7.5.4 Experimental Results p. 221
7.6 Concluding Remarks p. 223
7.7 Exercises p. 224
8 Processor Involvement in Communication p. 228
8.1 Processor Involvement - Types and Characteristics p. 229
8.1.1 Involvement Types p. 229
8.1.2 Involvement Characteristics p. 232
8.1.3 Relation to LogP and Its Variants p. 236
8.2 Involvement Scheduling p. 238
8.2.1 Scheduling Edges on the Processors p. 240
8.2.2 Node and Edge Scheduling p. 246
8.2.3 Task Graph p. 247
8.2.4 NP-Completeness p. 248
8.3 Algorithmic Approaches p. 250
8.3.1 Direct Scheduling p. 251
8.3.2 Scheduling with Given Processor Allocation p. 254
8.4 Heuristics p. 257
8.4.1 List Scheduling p. 257
8.4.2 Two-Phase Heuristics p. 261
8.4.3 Experimental Results p. 263
8.5 Concluding Remarks p. 264
8.6 Exercises p. 265
Bibliography p. 269
Author Index p. 281
Subject Index p. 285