Title:
Scheduling for parallel processing
Personal Author:
Drozdowski, Maciej
Series:
Computer communications and networks
Publication Information:
Dordrecht : Springer, 2009.
Physical Description:
xiii, 386 p. : ill. ; 24 cm.
ISBN:
9781848823099

Available:

Item Barcode: 30000010241772
Call Number: QA76.58 D757 2009 Open Access
Material Type: Book
Item Category 1: Gift Book

Summary

Overview and Goals

This book is dedicated to scheduling for parallel processing. Presenting a research field as broad as this one poses considerable difficulties. Scheduling for parallel computing is an interdisciplinary subject joining many fields of science and technology. Thus, to understand the scheduling problems and the methods of solving them it is necessary to know the limitations in related areas. Another difficulty is that the subject of scheduling parallel computations is immense. Even a simple search in bibliographical databases reveals thousands of publications on this topic. The diversity in understanding scheduling problems is so great that it seems impossible to juxtapose them in one scheduling taxonomy. Therefore, most of the papers on scheduling for parallel processing refer to one scheduling problem resulting from one way of perceiving reality. Only a few publications attempt to arrange this field of knowledge systematically. In this book we follow two guidelines. One guideline is a distinction between scheduling models, each of which comprises a set of scheduling problems solved by dedicated algorithms. Thus, the aim of this book is to present scheduling models for parallel processing, problems defined on the grounds of certain scheduling models, and algorithms solving the scheduling problems. Most of the scheduling problems are combinatorial in nature. Therefore, the second guideline is the methodology of computational complexity theory. In this book we present four examples of scheduling models. We go deep into the models, problems, and algorithms so that, after acquiring some understanding of them, we can attempt to draw conclusions on their mutual relationships.


Table of Contents

Preface p. v
1 Introduction p. 1
1.1 Field of Scheduling for Parallel Processing p. 1
1.2 Basic Scheduling Notions p. 2
1.2.1 Scheduling Theory Notions p. 2
1.2.2 Computer Systems Notions p. 3
1.3 Why We Need Scheduling p. 4
1.4 Problems, Models, Algorithms, and Schedules p. 5
References p. 7
2 Basics p. 9
2.1 Selected Definitions From Graph Theory p. 9
2.2 Methodology of Complexity Theory p. 13
2.2.1 Problems, Machines, Complexity Functions p. 14
2.2.2 Basic Complexity Classes p. 15
2.2.3 Approximability of Hard Problems p. 17
2.3 Solving Hard Combinatorial Problems p. 19
2.3.1 Branch and Bound Algorithm p. 20
2.3.2 Dynamic Programming p. 21
2.3.3 Linear Programming p. 22
2.3.4 Metaheuristics p. 23
2.4 Parallel Performance Metrics p. 25
References p. 28
3 Vision of Scheduling in Parallel Systems p. 31
3.1 Hardware p. 31
3.1.1 CPU p. 31
3.1.2 Memory p. 32
3.1.3 Interconnection p. 34
3.2 Programming Environments p. 37
3.3 Runtime Environments p. 42
3.3.1 Operating System Peculiarities p. 43
3.3.2 Resource Managers p. 45
References p. 51
4 Classic Scheduling Theory p. 55
4.1 Definitions p. 55
4.2 α|β|γ Notation and Complexity Inference p. 59
4.3 Scheduling Parallel Processors p. 61
4.3.1 Cmax Criterion p. 61
4.3.2 Lmax Criterion p. 67
4.3.3 Σcj, Σwjcj Criteria p. 72
4.4 Beyond the Classics p. 74
4.4.1 New Criteria p. 75
4.4.2 Multicriteria Scheduling p. 77
4.4.3 Online Scheduling p. 80
4.5 Remarks on the Classic Scheduling Theory p. 82
References p. 83
5 Parallel Tasks p. 87
5.1 Parallel Tasks in Practice p. 87
5.1.1 Parallel Applications p. 87
5.1.2 Bandwidth and Storage Allocation p. 91
5.1.3 Reliable Computing p. 92
5.2 Assumptions and Definitions p. 92
5.2.1 Types of Parallel Tasks p. 92
5.2.2 Processing Time Functions p. 95
5.2.3 Extension of α|β|γ Notation p. 96
5.3 Rigid Tasks p. 98
5.3.1 Cmax, Lmax Criteria p. 99
5.3.2 Minsum Criteria p. 108
5.3.3 Other Heuristics for Rigid Task Scheduling p. 111
5.4 Moldable Tasks p. 117
5.4.1 Cmax Criterion p. 117
5.4.2 Minsum Criteria p. 127
5.4.3 Other Heuristics for Moldable Task Scheduling p. 128
5.5 Malleable Tasks p. 130
5.5.1 Cmax, Lmax, max wjsj Criteria p. 131
5.5.2 Minsum Criteria p. 136
5.5.3 Other Heuristics for Malleable Task Scheduling p. 138
5.6 Tasks with Hypercube Shape p. 139
5.6.1 Subcube Allocation p. 140
5.6.2 Cmax, Lmax Criteria p. 147
5.6.3 Minsum Criteria p. 153
5.6.4 Other Heuristics for Tasks with Hypercube Shape p. 153
5.7 Tasks with Mesh Shape p. 156
5.7.1 Submesh Allocation p. 156
5.7.2 Heuristics with a Guarantee p. 168
5.7.3 Heuristics Without a Guarantee p. 176
5.8 Multiprocessor Tasks p. 180
5.8.1 Cmax, Lmax Criteria p. 180
5.8.2 Minsum Criteria p. 193
5.9 Concluding Remarks on Parallel Task Model p. 196
References p. 197
6 Scheduling with Communication Delays p. 209
6.1 Scheduling with Communication Delays in Practice p. 210
6.2 Formulation of the Problem p. 213
6.2.1 Notation and Preliminaries p. 213
6.2.2 Interconnection Topology and Communication Delay Models p. 215
6.2.3 Technical Terminology p. 218
6.3 Extension of α|β|γ Notation p. 218
6.4 Limited Processor Number and No Duplication p. 219
6.4.1 Hard Cases p. 219
6.4.2 Polynomial Cases p. 220
6.4.3 Heuristics with a Guarantee p. 234
6.4.4 Heuristics Without a Guarantee p. 238
6.5 Limited Processor Number and Duplication p. 246
6.5.1 Complexity of the Problem p. 246
6.5.2 Heuristics with a Guarantee p. 246
6.5.3 Heuristics Without a Guarantee p. 247
6.6 Unlimited Processor Number and No Duplication p. 250
6.6.1 Hard Cases p. 250
6.6.2 Polynomial Cases p. 250
6.6.3 Heuristics with a Guarantee p. 257
6.6.4 Heuristics Without a Guarantee p. 263
6.7 Unlimited Processor Number and Duplication p. 265
6.7.1 Hard Cases p. 265
6.7.2 Polynomial Cases p. 265
6.7.3 Heuristics with a Guarantee p. 268
6.7.4 Heuristics Without a Guarantee p. 270
6.8 Scheduling in Processor Networks p. 273
6.9 Scheduling in LogP Model p. 275
6.9.1 Notation p. 275
6.9.2 Hard Cases p. 278
6.9.3 Polynomial Cases p. 279
6.9.4 Heuristics with a Guarantee p. 281
6.9.5 Heuristics Without a Guarantee p. 284
6.9.6 Remarks on Scheduling in LogP Model p. 285
6.10 Scheduling with Hierarchical Communication p. 286
6.10.1 Problem Formulation and Notation p. 286
6.10.2 Complexity of the Problem p. 287
6.10.3 Heuristics with a Guarantee p. 289
6.11 Further Reading and Conclusions p. 291
6.11.1 Other Branches of Scheduling with Communication Delays p. 291
6.11.2 Observations on Scheduling with Communication Delays p. 292
References p. 293
7 Divisible Loads p. 301
7.1 Star - Basic Formulation p. 302
7.1.1 Base Model p. 302
7.1.2 Start-Up Times, Message Sequencing, Complexity p. 304
7.1.3 Communication Options p. 305
7.1.4 Equivalent Speed p. 307
7.2 Interconnection Topologies p. 307
7.2.1 Chains p. 307
7.2.2 Trees p. 309
7.2.3 Meshes p. 310
7.2.4 Hypercubes p. 313
7.2.5 Arbitrary Graphs p. 315
7.3 Multi-installment Processing p. 315
7.4 Memory Constraints p. 317
7.4.1 Single Installment Processing p. 317
7.4.2 Multi-installment Processing p. 321
7.5 Processing Cost Optimization p. 325
7.5.1 Negligible Start-Up Times p. 325
7.5.2 Nonzero Start-Up Times p. 326
7.6 Multiple Tasks p. 327
7.6.1 Single Source Loads p. 327
7.6.2 Multiple Source Load p. 331
7.7 Time-Varying Environment p. 333
7.8 Expected Search Time p. 336
7.9 Steady-State Divisible Load Scheduling p. 337
7.9.1 Single Originator p. 337
7.9.2 Multiple Originators p. 339
7.10 Online Scheduling p. 340
7.10.1 Measuring Heuristics p. 340
7.10.2 Loop Scheduling p. 341
7.10.3 Other Heuristics p. 343
7.11 Toward Discrete Load Granularity p. 346
7.11.1 Rounding Techniques p. 346
7.11.2 Discretely Divisible Loads p. 348
7.12 DLT and Performance Evaluation p. 349
7.12.1 DLT and the Classic Performance Measures p. 349
7.12.2 Equivalent Processors and Ultimate Performance p. 351
7.12.3 Isoefficiency p. 352
7.13 Divisible Loads in Practice p. 353
7.13.1 Divisible Applications p. 353
7.13.2 Empirical Confirmations of DLT p. 356
7.14 Concluding Remarks on DLT p. 359
References p. 360
8 Back to Scheduling Models p. 367
8.1 On Scheduling Models p. 367
8.1.1 What Is a Parallel Application? p. 368
8.1.2 Parallel Systems in Scheduling Models p. 369
8.1.3 On Bridging the Models p. 369
8.1.4 The Case of Granularity p. 370
8.1.5 Criteria and Constraints p. 371
8.2 On Scheduling Algorithms p. 372
8.2.1 Computational Costs p. 372
8.2.2 Where Is My Scheduling Information? p. 373
8.3 It Is a Matter of Time p. 375
8.4 Toward Scheduling Problem Taxonomy Anyway? p. 376
References p. 377
Appendix A Summary of the Notation p. 379
Index p. 381