Title:
Dependability benchmarking for computer systems
Publication Information:
Los Alamitos, CA : IEEE Computer Society, 2008
Physical Description:
xviii, 362 p. : ill. ; 26 cm.
ISBN:
9780470230558

Available:

Item Barcode: 30000010191554
Call Number: QA76.76.R44 D464 2008
Material Type: Open Access Book
Item Category 1: Book

Summary

A comprehensive collection of benchmarks for measuring dependability in hardware-software systems

As computer systems have become more complex and mission-critical, it is imperative for systems engineers and researchers to have metrics for a system's dependability, reliability, availability, and serviceability. Dependability benchmarks are useful for guiding development efforts for system providers, acquisition choices of system purchasers, and evaluations of new concepts by researchers in academia and industry.

This book gathers all dependability benchmarks developed to date by industry and academia and explains the principles and concepts of dependability benchmarking. It collects the expert knowledge of DBench, a research project funded by the European Union, and of the IFIP Special Interest Group on Dependability Benchmarking, to shed light on this important area. It also provides a broad panorama of examples and recommendations for defining dependability benchmarks.

Dependability Benchmarking for Computer Systems includes contributions from a credible mix of industrial and academic sources: IBM, Intel, Microsoft, Sun Microsystems, Critical Software, Carnegie Mellon University, LAAS-CNRS, Technical University of Valencia, University of Coimbra, and University of Illinois. It is an invaluable resource for engineers, researchers, system vendors, system purchasers, computer industry consultants, and system integrators.


Author Notes

Karama Kanoun is Directeur de Recherche at LAAS-CNRS, France. Her research interests include the modeling and evaluation of computer system dependability. She was the principal investigator for the DBench (Dependability Benchmarking) European project, and has been a consultant for the European Space Agency, Ansaldo Trasporti, and the International Telecommunication Union. Kanoun is vice-chair of the IFIP WG 10.4 on Dependable Computing and Fault Tolerance and chairs its SIG on Dependability Benchmarking. She also chairs the French SEE Technical Committee on Trustworthy Computer Systems.

Lisa Spainhower is an IBM Distinguished Engineer in the System Design organization of the Systems and Technology Group (STG). STG designs and develops IBM's semiconductor technology; servers ranging from small x86-based machines to clusters of mainframes; operating systems; and storage subsystems. She is also a member of the IBM Academy of Technology, IEEE, the IEEE Computer Society, and the Executive Committee of the Technical Committee on Fault-Tolerant Computing. Spainhower is vice-chair of the IFIP WG 10.4 SIG on Dependability Benchmarking.


Table of Contents

Preface p. vii
Contributors p. xi
Prologue: Dependability Benchmarking: A Reality or a Dream? (Karama Kanoun, Phil Koopman, Henrique Madeira, and Lisa Spainhower) p. xiii
1 The Autonomic Computing Benchmark (Joyce Coleman, Tony Lau, Bhushan Lokhande, Peter Shum, Robert Wisniewski, and Mary Peterson Yost) p. 3
2 Analytical Reliability, Availability, and Serviceability Benchmarks (Richard Elling, Ira Pramanick, James Mauro, William Bryson, and Dong Tang) p. 23
3 System Recovery Benchmarks (Richard Elling, Ira Pramanick, James Mauro, William Bryson, and Dong Tang) p. 35
4 Dependability Benchmarking Using Environmental Test Tools (Cristian Constantinescu) p. 55
5 Dependability Benchmark for OLTP Systems (Marco Vieira, Joao Duraes, and Henrique Madeira) p. 63
6 Dependability Benchmarking of Web Servers (Joao Duraes, Marco Vieira, and Henrique Madeira) p. 91
7 Dependability Benchmark of Automotive Engine Control Systems (Juan-Carlos Ruiz, Pedro Gil, Pedro Yuste, and David de-Andres) p. 111
8 Toward Evaluating the Dependability of Anomaly Detectors (Kymie M. C. Tan and Roy A. Maxion) p. 141
9 Vajra: Evaluating Byzantine-Fault-Tolerant Distributed Systems (Sonya J. Wierman and Priya Narasimhan) p. 163
10 User-Relevant Software Reliability Benchmarking (Mario R. Garzia) p. 185
11 Interface Robustness Testing: Experience and Lessons Learned from the Ballista Project (Philip Koopman, Kobey DeVale, and John DeVale) p. 201
12 Windows and Linux Robustness Benchmarks with Respect to Application Erroneous Behavior (Karama Kanoun, Yves Crouzet, Ali Kalakech, and Ana-Elena Rugina) p. 227
13 DeBERT: Dependability Benchmarking of Embedded Real-Time Off-the-Shelf Components for Space Applications (Diamantino Costa, Ricardo Barbosa, Ricardo Maia, and Francisco Moreira) p. 255
14 Benchmarking the Impact of Faulty Drivers: Application to the Linux Kernel (Arnaud Albinet, Jean Arlat, and Jean-Charles Fabre) p. 285
15 Benchmarking the Operating System against Faults Impacting Operating System Functions (Ravishankar Iyer, Zbigniew Kalbarczyk, and Weining Gu) p. 311
16 Neutron Soft Error Rate Characterization of Microprocessors (Cristian Constantinescu) p. 341
Index p. 351