Availability
Item Barcode | Call Number | Material Type | Item Category 1 |
---|---|---|---|
30000010241772 | QA76.58 D757 2009 | Open Access Book | Gift Book |
Summary
Overview and Goals

This book is dedicated to scheduling for parallel processing. Presenting a research field as broad as this one poses considerable difficulties. Scheduling for parallel computing is an interdisciplinary subject joining many fields of science and technology. Thus, to understand the scheduling problems and the methods of solving them it is necessary to know the limitations in related areas. Another difficulty is that the subject of scheduling parallel computations is immense. Even a simple search in bibliographical databases reveals thousands of publications on this topic. The diversity in understanding scheduling problems is so great that it seems impossible to juxtapose them in one scheduling taxonomy. Therefore, most of the papers on scheduling for parallel processing refer to one scheduling problem resulting from one way of perceiving the reality. Only a few publications attempt to arrange this field of knowledge systematically.

In this book we will follow two guidelines. One guideline is a distinction between scheduling models which comprise a set of scheduling problems solved by dedicated algorithms. Thus, the aim of this book is to present scheduling models for parallel processing, problems defined on the grounds of certain scheduling models, and algorithms solving the scheduling problems. Most of the scheduling problems are combinatorial in nature. Therefore, the second guideline is the methodology of computational complexity theory. In this book we present four examples of scheduling models. We will go deep into the models, problems, and algorithms so that after acquiring some understanding of them we will attempt to draw conclusions on their mutual relationships.
Table of Contents
Preface | p. v |
1 Introduction | p. 1 |
1.1 Field of Scheduling for Parallel Processing | p. 1 |
1.2 Basic Scheduling Notions | p. 2 |
1.2.1 Scheduling Theory Notions | p. 2 |
1.2.2 Computer Systems Notions | p. 3 |
1.3 Why We Need Scheduling | p. 4 |
1.4 Problems, Models, Algorithms, and Schedules | p. 5 |
References | p. 7 |
2 Basics | p. 9 |
2.1 Selected Definitions from Graph Theory | p. 9 |
2.2 Methodology of Complexity Theory | p. 13 |
2.2.1 Problems, Machines, Complexity Functions | p. 14 |
2.2.2 Basic Complexity Classes | p. 15 |
2.2.3 Approximability of Hard Problems | p. 17 |
2.3 Solving Hard Combinatorial Problems | p. 19 |
2.3.1 Branch and Bound Algorithm | p. 20 |
2.3.2 Dynamic Programming | p. 21 |
2.3.3 Linear Programming | p. 22 |
2.3.4 Metaheuristics | p. 23 |
2.4 Parallel Performance Metrics | p. 25 |
References | p. 28 |
3 Vision of Scheduling in Parallel Systems | p. 31 |
3.1 Hardware | p. 31 |
3.1.1 CPU | p. 31 |
3.1.2 Memory | p. 32 |
3.1.3 Interconnection | p. 34 |
3.2 Programming Environments | p. 37 |
3.3 Runtime Environments | p. 42 |
3.3.1 Operating System Peculiarities | p. 43 |
3.3.2 Resource Managers | p. 45 |
References | p. 51 |
4 Classic Scheduling Theory | p. 55 |
4.1 Definitions | p. 55 |
4.2 α/β/γ Notation and Complexity Inference | p. 59 |
4.3 Scheduling Parallel Processors | p. 61 |
4.3.1 Cmax Criterion | p. 61 |
4.3.2 Lmax Criterion | p. 67 |
4.3.3 ∑cj, ∑wjcj Criteria | p. 72 |
4.4 Beyond the Classics | p. 74 |
4.4.1 New Criteria | p. 75 |
4.4.2 Multicriteria Scheduling | p. 77 |
4.4.3 Online Scheduling | p. 80 |
4.5 Remarks on the Classic Scheduling Theory | p. 82 |
References | p. 83 |
5 Parallel Tasks | p. 87 |
5.1 Parallel Tasks in Practice | p. 87 |
5.1.1 Parallel Applications | p. 87 |
5.1.2 Bandwidth and Storage Allocation | p. 91 |
5.1.3 Reliable Computing | p. 92 |
5.2 Assumptions and Definitions | p. 92 |
5.2.1 Types of Parallel Tasks | p. 92 |
5.2.2 Processing Time Functions | p. 95 |
5.2.3 Extension of α/β/γ Notation | p. 96 |
5.3 Rigid Tasks | p. 98 |
5.3.1 Cmax, Lmax Criteria | p. 99 |
5.3.2 Minsum Criteria | p. 108 |
5.3.3 Other Heuristics for Rigid Task Scheduling | p. 111 |
5.4 Moldable Tasks | p. 117 |
5.4.1 Cmax Criterion | p. 117 |
5.4.2 Minsum Criteria | p. 127 |
5.4.3 Other Heuristics for Moldable Task Scheduling | p. 128 |
5.5 Malleable Tasks | p. 130 |
5.5.1 Cmax, Lmax, max wjsj Criteria | p. 131 |
5.5.2 Minsum Criteria | p. 136 |
5.5.3 Other Heuristics for Malleable Task Scheduling | p. 138 |
5.6 Tasks with Hypercube Shape | p. 139 |
5.6.1 Subcube Allocation | p. 140 |
5.6.2 Cmax, Lmax Criteria | p. 147 |
5.6.3 Minsum Criteria | p. 153 |
5.6.4 Other Heuristics for Tasks with Hypercube Shape | p. 153 |
5.7 Tasks with Mesh Shape | p. 156 |
5.7.1 Submesh Allocation | p. 156 |
5.7.2 Heuristics with a Guarantee | p. 168 |
5.7.3 Heuristics Without a Guarantee | p. 176 |
5.8 Multiprocessor Tasks | p. 180 |
5.8.1 Cmax, Lmax Criteria | p. 180 |
5.8.2 Minsum Criteria | p. 193 |
5.9 Concluding Remarks on Parallel Task Model | p. 196 |
References | p. 197 |
6 Scheduling with Communication Delays | p. 209 |
6.1 Scheduling with Communication Delays in Practice | p. 210 |
6.2 Formulation of the Problem | p. 213 |
6.2.1 Notation and Preliminaries | p. 213 |
6.2.2 Interconnection Topology and Communication Delay Models | p. 215 |
6.2.3 Technical Terminology | p. 218 |
6.3 Extension of α/β/γ Notation | p. 218 |
6.4 Limited Processor Number and No Duplication | p. 219 |
6.4.1 Hard Cases | p. 219 |
6.4.2 Polynomial Cases | p. 220 |
6.4.3 Heuristics with a Guarantee | p. 234 |
6.4.4 Heuristics Without a Guarantee | p. 238 |
6.5 Limited Processor Number and Duplication | p. 246 |
6.5.1 Complexity of the Problem | p. 246 |
6.5.2 Heuristics with a Guarantee | p. 246 |
6.5.3 Heuristics Without a Guarantee | p. 247 |
6.6 Unlimited Processor Number and No Duplication | p. 250 |
6.6.1 Hard Cases | p. 250 |
6.6.2 Polynomial Cases | p. 250 |
6.6.3 Heuristics with a Guarantee | p. 257 |
6.6.4 Heuristics Without a Guarantee | p. 263 |
6.7 Unlimited Processor Number and Duplication | p. 265 |
6.7.1 Hard Cases | p. 265 |
6.7.2 Polynomial Cases | p. 265 |
6.7.3 Heuristics with a Guarantee | p. 268 |
6.7.4 Heuristics Without a Guarantee | p. 270 |
6.8 Scheduling in Processor Networks | p. 273 |
6.9 Scheduling in LogP Model | p. 275 |
6.9.1 Notation | p. 275 |
6.9.2 Hard Cases | p. 278 |
6.9.3 Polynomial Cases | p. 279 |
6.9.4 Heuristics with a Guarantee | p. 281 |
6.9.5 Heuristics Without a Guarantee | p. 284 |
6.9.6 Remarks on Scheduling in LogP Model | p. 285 |
6.10 Scheduling with Hierarchical Communication | p. 286 |
6.10.1 Problem Formulation and Notation | p. 286 |
6.10.2 Complexity of the Problem | p. 287 |
6.10.3 Heuristics with a Guarantee | p. 289 |
6.11 Further Reading and Conclusions | p. 291 |
6.11.1 Other Branches of Scheduling with Communication Delays | p. 291 |
6.11.2 Observations on Scheduling with Communication Delays | p. 292 |
References | p. 293 |
7 Divisible Loads | p. 301 |
7.1 Star - Basic Formulation | p. 302 |
7.1.1 Base Model | p. 302 |
7.1.2 Start-Up Times, Message Sequencing, Complexity | p. 304 |
7.1.3 Communication Options | p. 305 |
7.1.4 Equivalent Speed | p. 307 |
7.2 Interconnection Topologies | p. 307 |
7.2.1 Chains | p. 307 |
7.2.2 Trees | p. 309 |
7.2.3 Meshes | p. 310 |
7.2.4 Hypercubes | p. 313 |
7.2.5 Arbitrary Graphs | p. 315 |
7.3 Multi-installment Processing | p. 315 |
7.4 Memory Constraints | p. 317 |
7.4.1 Single Installment Processing | p. 317 |
7.4.2 Multi-installment Processing | p. 321 |
7.5 Processing Cost Optimization | p. 325 |
7.5.1 Negligible Start-Up Times | p. 325 |
7.5.2 Nonzero Start-Up Times | p. 326 |
7.6 Multiple Tasks | p. 327 |
7.6.1 Single Source Loads | p. 327 |
7.6.2 Multiple Source Load | p. 331 |
7.7 Time-Varying Environment | p. 333 |
7.8 Expected Search Time | p. 336 |
7.9 Steady-State Divisible Load Scheduling | p. 337 |
7.9.1 Single Originator | p. 337 |
7.9.2 Multiple Originators | p. 339 |
7.10 Online Scheduling | p. 340 |
7.10.1 Measuring Heuristics | p. 340 |
7.10.2 Loop Scheduling | p. 341 |
7.10.3 Other Heuristics | p. 343 |
7.11 Toward Discrete Load Granularity | p. 346 |
7.11.1 Rounding Techniques | p. 346 |
7.11.2 Discretely Divisible Loads | p. 348 |
7.12 DLT and Performance Evaluation | p. 349 |
7.12.1 DLT and the Classic Performance Measures | p. 349 |
7.12.2 Equivalent Processors and Ultimate Performance | p. 351 |
7.12.3 Isoefficiency | p. 352 |
7.13 Divisible Loads in Practice | p. 353 |
7.13.1 Divisible Applications | p. 353 |
7.13.2 Empirical Confirmations of DLT | p. 356 |
7.14 Concluding Remarks on DLT | p. 359 |
References | p. 360 |
8 Back to Scheduling Models | p. 367 |
8.1 On Scheduling Models | p. 367 |
8.1.1 What Is a Parallel Application? | p. 368 |
8.1.2 Parallel Systems in Scheduling Models | p. 369 |
8.1.3 On Bridging the Models | p. 369 |
8.1.4 The Case of Granularity | p. 370 |
8.1.5 Criteria and Constraints | p. 371 |
8.2 On Scheduling Algorithms | p. 372 |
8.2.1 Computational Costs | p. 372 |
8.2.2 Where Is My Scheduling Information? | p. 373 |
8.3 It Is a Matter of Time | p. 375 |
8.4 Toward Scheduling Problem Taxonomy Anyway? | p. 376 |
References | p. 377 |
Appendix A Summary of the Notation | p. 379 |
Index | p. 381 |