Availability
Library | Item Barcode | Call Number | Material Type | Item Category 1 | Status |
---|---|---|---|---|---|
| 30000010113008 | QA268 M67 2006 | Open Access Book | Book | On Order |
Summary
Rapid advances in electronic and optical technology have enabled the implementation of powerful error-control codes, which are now used in almost the entire range of information systems with close to optimal performance. These codes and decoding methods are required for the detection and correction of the errors and erasures which inevitably occur in digital information during transmission, storage and processing because of noise, interference and other imperfections.
Error-control coding is a complex, novel and unfamiliar area, not yet widely understood and appreciated. This book sets out to provide a clear description of the essentials of the subject, with comprehensive and up-to-date coverage of the most useful codes and their decoding algorithms. A practical engineering and information technology emphasis, as well as relevant background material and fundamental theoretical aspects, provides an in-depth guide to the essentials of Error-Control Coding.
- Provides extensive and detailed coverage of Block, Cyclic, BCH, Reed-Solomon, Convolutional, Turbo, and Low-Density Parity Check (LDPC) codes, together with relevant aspects of Information Theory
- EXIT chart performance analysis for iteratively decoded error-control techniques
- Heavily illustrated with tables, diagrams, graphs, worked examples, and exercises
- Invaluable companion website features slides of figures, algorithm software, updates and solutions to problems

Offering a complete overview of Error Control Coding, this book is an indispensable resource for students, engineers and researchers in the areas of telecommunications engineering, communication networks, electronic engineering, computer science, information systems and technology, digital signal processing and applied mathematics.
Author Notes
Jorge Castiñeira Moreira is Associate Professor (Senior Lecturer) in Communication Systems in the Electronics Department, School of Engineering, Mar del Plata University, Argentina.
He is Director of the Communications Laboratory, Director of the research project "Open Source software applications for wireless networks" and co-director of the research project "Information Theory. Data Networks. Chaos and Communications". Jorge is also responsible for the teaching area "Communications".
Patrick G. Farrell is Visiting Professor in the Department of Communication Systems at Lancaster University, UK, where he supervises 7 research assistants, 50 PhD and 35 MSc students. His research interests include error-control coding, coded modulation, digital communications, multi-user communications, information theory and source coding. Patrick has over 350 publications, reports and presentations, and is Editor of two book series (Academic Press and Research Studies Press).
Table of Contents
Preface | p. xiii |
Acknowledgements | p. xv |
List of Symbols | p. xvii |
Abbreviations | p. xxv |
1 Information and Coding Theory | p. 1 |
1.1 Information | p. 3 |
1.1.1 A Measure of Information | p. 3 |
1.2 Entropy and Information Rate | p. 4 |
1.3 Extended DMSs | p. 9 |
1.4 Channels and Mutual Information | p. 10 |
1.4.1 Information Transmission over Discrete Channels | p. 10 |
1.4.2 Information Channels | p. 10 |
1.5 Channel Probability Relationships | p. 13 |
1.6 The A Priori and A Posteriori Entropies | p. 15 |
1.7 Mutual Information | p. 16 |
1.7.1 Mutual Information: Definition | p. 16 |
1.7.2 Mutual Information: Properties | p. 17 |
1.8 Capacity of a Discrete Channel | p. 21 |
1.9 The Shannon Theorems | p. 22 |
1.9.1 Source Coding Theorem | p. 22 |
1.9.2 Channel Capacity and Coding | p. 23 |
1.9.3 Channel Coding Theorem | p. 25 |
1.10 Signal Spaces and the Channel Coding Theorem | p. 27 |
1.10.1 Capacity of the Gaussian Channel | p. 28 |
1.11 Error-Control Coding | p. 32 |
1.12 Limits to Communication and their Consequences | p. 34 |
Bibliography and References | p. 38 |
Problems | p. 38 |
2 Block Codes | p. 41 |
2.1 Error-Control Coding | p. 41 |
2.2 Error Detection and Correction | p. 41 |
2.2.1 Simple Codes: The Repetition Code | p. 42 |
2.3 Block Codes: Introduction and Parameters | p. 43 |
2.4 The Vector Space over the Binary Field | p. 44 |
2.4.1 Vector Subspaces | p. 46 |
2.4.2 Dual Subspace | p. 48 |
2.4.3 Matrix Form | p. 48 |
2.4.4 Dual Subspace Matrix | p. 49 |
2.5 Linear Block Codes | p. 50 |
2.5.1 Generator Matrix G | p. 51 |
2.5.2 Block Codes in Systematic Form | p. 52 |
2.5.3 Parity Check Matrix H | p. 54 |
2.6 Syndrome Error Detection | p. 55 |
2.7 Minimum Distance of a Block Code | p. 58 |
2.7.1 Minimum Distance and the Structure of the H Matrix | p. 58 |
2.8 Error-Correction Capability of a Block Code | p. 59 |
2.9 Syndrome Detection and the Standard Array | p. 61 |
2.10 Hamming Codes | p. 64 |
2.11 Forward Error Correction and Automatic Repeat ReQuest | p. 65 |
2.11.1 Forward Error Correction | p. 65 |
2.11.2 Automatic Repeat ReQuest | p. 68 |
2.11.3 ARQ Schemes | p. 69 |
2.11.4 ARQ Scheme Efficiencies | p. 71 |
2.11.5 Hybrid-ARQ Schemes | p. 72 |
Bibliography and References | p. 76 |
Problems | p. 77 |
3 Cyclic Codes | p. 81 |
3.1 Description | p. 81 |
3.2 Polynomial Representation of Codewords | p. 81 |
3.3 Generator Polynomial of a Cyclic Code | p. 83 |
3.4 Cyclic Codes in Systematic Form | p. 85 |
3.5 Generator Matrix of a Cyclic Code | p. 87 |
3.6 Syndrome Calculation and Error Detection | p. 89 |
3.7 Decoding of Cyclic Codes | p. 90 |
3.8 An Application Example: Cyclic Redundancy Check Code for the Ethernet Standard | p. 92 |
Bibliography and References | p. 93 |
Problems | p. 94 |
4 BCH Codes | p. 97 |
4.1 Introduction: The Minimal Polynomial | p. 97 |
4.2 Description of BCH Cyclic Codes | p. 99 |
4.2.1 Bounds on the Error-Correction Capability of a BCH Code: The Vandermonde Determinant | p. 102 |
4.3 Decoding of BCH Codes | p. 104 |
4.4 Error-Location and Error-Evaluation Polynomials | p. 105 |
4.5 The Key Equation | p. 107 |
4.6 Decoding of Binary BCH Codes Using the Euclidean Algorithm | p. 108 |
4.6.1 The Euclidean Algorithm | p. 108 |
Bibliography and References | p. 112 |
Problems | p. 112 |
5 Reed-Solomon Codes | p. 115 |
5.1 Introduction | p. 115 |
5.2 Error-Correction Capability of RS Codes: The Vandermonde Determinant | p. 117 |
5.3 RS Codes in Systematic Form | p. 119 |
5.4 Syndrome Decoding of RS Codes | p. 120 |
5.5 The Euclidean Algorithm: Error-Location and Error-Evaluation Polynomials | p. 122 |
5.6 Decoding of RS Codes Using the Euclidean Algorithm | p. 125 |
5.6.1 Steps of the Euclidean Algorithm | p. 127 |
5.7 Decoding of RS and BCH Codes Using the Berlekamp-Massey Algorithm | p. 128 |
5.7.1 B-M Iterative Algorithm for Finding the Error-Location Polynomial | p. 130 |
5.7.2 B-M Decoding of RS Codes | p. 133 |
5.7.3 Relationship Between the Error-Location Polynomials of the Euclidean and B-M Algorithms | p. 136 |
5.8 A Practical Application: Error-Control Coding for the Compact Disk | p. 136 |
5.8.1 Compact Disk Characteristics | p. 136 |
5.8.2 Channel Characteristics | p. 138 |
5.8.3 Coding Procedure | p. 138 |
5.9 Encoding for RS Codes C[subscript RS](28, 24), C[subscript RS](32, 28) and C[subscript RS](255, 251) | p. 139 |
5.10 Decoding of RS Codes C[subscript RS](28, 24) and C[subscript RS](32, 28) | p. 142 |
5.10.1 B-M Decoding | p. 142 |
5.10.2 Alternative Decoding Methods | p. 145 |
5.10.3 Direct Solution of Syndrome Equations | p. 146 |
5.11 Importance of Interleaving | p. 148 |
Bibliography and References | p. 152 |
Problems | p. 153 |
6 Convolutional Codes | p. 157 |
6.1 Linear Sequential Circuits | p. 158 |
6.2 Convolutional Codes and Encoders | p. 158 |
6.3 Description in the D-Transform Domain | p. 161 |
6.4 Convolutional Encoder Representations | p. 166 |
6.4.1 Representation of Connections | p. 166 |
6.4.2 State Diagram Representation | p. 166 |
6.4.3 Trellis Representation | p. 168 |
6.5 Convolutional Codes in Systematic Form | p. 168 |
6.6 General Structure of Finite Impulse Response and Infinite Impulse Response FSSMs | p. 170 |
6.6.1 Finite Impulse Response FSSMs | p. 170 |
6.6.2 Infinite Impulse Response FSSMs | p. 171 |
6.7 State Transfer Function Matrix: Calculation of the Transfer Function | p. 172 |
6.7.1 State Transfer Function for FIR FSSMs | p. 172 |
6.7.2 State Transfer Function for IIR FSSMs | p. 173 |
6.8 Relationship Between the Systematic and the Non-Systematic Forms | p. 175 |
6.9 Distance Properties of Convolutional Codes | p. 177 |
6.10 Minimum Free Distance of a Convolutional Code | p. 180 |
6.11 Maximum Likelihood Detection | p. 181 |
6.12 Decoding of Convolutional Codes: The Viterbi Algorithm | p. 182 |
6.13 Extended and Modified State Diagram | p. 185 |
6.14 Error Probability Analysis for Convolutional Codes | p. 186 |
6.15 Hard and Soft Decisions | p. 189 |
6.15.1 Maximum Likelihood Criterion for the Gaussian Channel | p. 192 |
6.15.2 Bounds for Soft-Decision Detection | p. 194 |
6.15.3 An Example of Soft-Decision Decoding of Convolutional Codes | p. 196 |
6.16 Punctured Convolutional Codes and Rate-Compatible Schemes | p. 200 |
Bibliography and References | p. 203 |
Problems | p. 205 |
7 Turbo Codes | p. 209 |
7.1 A Turbo Encoder | p. 210 |
7.2 Decoding of Turbo Codes | p. 211 |
7.2.1 The Turbo Decoder | p. 211 |
7.2.2 Probabilities and Estimates | p. 212 |
7.2.3 Symbol Detection | p. 213 |
7.2.4 The Log Likelihood Ratio | p. 214 |
7.3 Markov Sources and Discrete Channels | p. 215 |
7.4 The BCJR Algorithm: Trellis Coding and Discrete Memoryless Channels | p. 218 |
7.5 Iterative Coefficient Calculation | p. 221 |
7.6 The BCJR MAP Algorithm and the LLR | p. 234 |
7.6.1 The BCJR MAP Algorithm: LLR Calculation | p. 235 |
7.6.2 Calculation of Coefficients [gamma][subscript i](u[prime], u) | p. 236 |
7.7 Turbo Decoding | p. 239 |
7.7.1 Initial Conditions of Coefficients [alpha][subscript i-1](u[prime]) and [beta][subscript i](u) | p. 248 |
7.8 Construction Methods for Turbo Codes | p. 249 |
7.8.1 Interleavers | p. 249 |
7.8.2 Block Interleavers | p. 250 |
7.8.3 Convolutional Interleavers | p. 250 |
7.8.4 Random Interleavers | p. 251 |
7.8.5 Linear Interleavers | p. 253 |
7.8.6 Code Concatenation Methods | p. 253 |
7.8.7 Turbo Code Performance as a Function of Size and Type of Interleaver | p. 257 |
7.9 Other Decoding Algorithms for Turbo Codes | p. 257 |
7.10 EXIT Charts for Turbo Codes | p. 257 |
7.10.1 Introduction to EXIT Charts | p. 258 |
7.10.2 Construction of the EXIT Chart | p. 259 |
7.10.3 Extrinsic Transfer Characteristics of the Constituent Decoders | p. 261 |
Bibliography and References | p. 269 |
Problems | p. 271 |
8 Low-Density Parity Check Codes | p. 277 |
8.1 Different Systematic Forms of a Block Code | p. 278 |
8.2 Description of LDPC Codes | p. 279 |
8.3 Construction of LDPC Codes | p. 280 |
8.3.1 Regular LDPC Codes | p. 280 |
8.3.2 Irregular LDPC Codes | p. 281 |
8.3.3 Decoding of LDPC Codes: The Tanner Graph | p. 281 |
8.4 The Sum-Product Algorithm | p. 282 |
8.5 Sum-Product Algorithm for LDPC Codes: An Example | p. 284 |
8.6 Simplifications of the Sum-Product Algorithm | p. 297 |
8.7 A Logarithmic LDPC Decoder | p. 302 |
8.7.1 Initialization | p. 302 |
8.7.2 Horizontal Step | p. 302 |
8.7.3 Vertical Step | p. 304 |
8.7.4 Summary of the Logarithmic Decoding Algorithm | p. 305 |
8.7.5 Construction of the Look-up Tables | p. 306 |
8.8 Extrinsic Information Transfer Charts for LDPC Codes | p. 306 |
8.8.1 Introduction | p. 306 |
8.8.2 Iterative Decoding of Block Codes | p. 310 |
8.8.3 EXIT Chart Construction for LDPC Codes | p. 312 |
8.8.4 Mutual Information Function | p. 312 |
8.8.5 EXIT Chart for the SND | p. 314 |
8.8.6 EXIT Chart for the PCND | p. 315 |
8.9 Fountain and LT Codes | p. 317 |
8.9.1 Introduction | p. 317 |
8.9.2 Fountain Codes | p. 318 |
8.9.3 Linear Random Codes | p. 318 |
8.9.4 Luby Transform Codes | p. 320 |
8.10 LDPC and Turbo Codes | p. 322 |
Bibliography and References | p. 323 |
Problems | p. 324 |
Appendix A Error Probability in the Transmission of Digital Signals | p. 327 |
Appendix B Galois Fields GF(q) | p. 339 |
Answers to Problems | p. 351 |
Index | p. 357 |