Summary
A tested and proven strategy for developing optimal automated process fault analyzers
Process fault analyzers monitor process operations in order to identify the underlying causes of operational problems. Several diagnostic strategies exist for automating process fault analysis; however, automated fault analysis is still not widely used within the processing industries due to problems of cost and performance as well as the difficulty of modeling process behavior at needed levels of detail.
In response, this book presents the method of minimal evidence (MOME), a model-based diagnostic strategy that facilitates the development and implementation of optimal automated process fault analyzers. MOME was created at the University of Delaware by the researchers who developed the FALCON system, a real-time, online process fault analyzer. The authors demonstrate how MOME is used to diagnose single and multiple fault situations, determine the strategic placement of process sensors, and distribute fault analyzers within large processing systems.
Optimal Automated Process Fault Analysis begins by exploring the need to automate process fault analysis. Next, the book examines:
- Logic of model-based reasoning as used in MOME
- MOME logic for performing single and multiple fault diagnoses
- Fuzzy logic algorithms for automating MOME
- Distributing process fault analyzers throughout large processing systems
- Virtual SPC analysis and its use in FALCONEER™ IV
- Process state transition logic and its use in FALCONEER™ IV

The book concludes with a summary of the lessons learned by employing FALCONEER™ IV in actual process applications, including the benefits of "intelligent supervision" of process operations.
With this book as their guide, readers have a powerful new tool for ensuring the safety and reliability of any chemical processing system.
Author Notes
Richard J. Fickelscherer, PE, is one of the key developers of the FALCON system, a real-time, online, knowledge-based system that performs process fault diagnoses. FALCON led to a generalized quantitative model-based diagnostic strategy known as the method of minimal evidence (MOME). Dr. Fickelscherer went on to develop advanced process control and process monitoring programs for Exxon, Merck, and Koch Industries. Working as a consultant to FMC Corporation, he developed FALCONEER, a process fault analyzer based on MOME.
Daniel L. Chester, PhD, is Associate Chair of the Department of Computer and Information Sciences at the University of Delaware. He is a cofounder of FALCONEER Technologies, which offers and installs advanced software for auditing process plant operations, including the online, real-time detection and diagnosis of process faults.
Table of Contents
Foreword | p. xiii |
Preface | p. xv |
Acknowledgments | p. xix |
1 Motivations for Automating Process Fault Analysis | p. 1 |
1.1 Introduction | p. 1 |
1.2 CPI Trends to Date | p. 1 |
1.3 The Changing Role of Process Operators in Plant Operations | p. 3 |
1.4 Methods Currently Used to Perform Process Fault Management | p. 5 |
1.5 Limitations of Human Operators in Performing Process Fault Management | p. 10 |
1.6 The Role of Automated Process Fault Analysis | p. 12 |
1.7 Anticipated Future CPI Trends | p. 13 |
1.8 Process Fault Analysis Concept Terminology | p. 14 |
References | p. 16 |
2 Method of Minimal Evidence: Model-Based Reasoning | p. 21 |
2.1 Overview | p. 21 |
2.2 Introduction | p. 22 |
2.3 Method of Minimal Evidence Overview | p. 23 |
2.3.1 Process Model and Modeling Assumption Variable Classifications | p. 28 |
2.3.2 Example of a MOME Primary Model | p. 31 |
2.3.3 Example of MOME Secondary Models | p. 36 |
2.3.4 Primary Model Residuals' Normal Distributions | p. 39 |
2.3.5 Minimum Assumption Variable Deviations | p. 41 |
2.3.6 Primary Model Derivation Issues | p. 44 |
2.3.7 Method for Improving the Diagnostic Sensitivity of the Resulting Fault Analyzer | p. 47 |
2.3.8 Intermediate Assumption Deviations, Process Noise, and Process Transients | p. 48 |
2.4 Verifying the Validity and Accuracy of the Various Primary Models | p. 49 |
2.5 Summary | p. 51 |
References | p. 52 |
3 Method of Minimal Evidence: Diagnostic Strategy Details | p. 55 |
3.1 Overview | p. 55 |
3.2 Introduction | p. 56 |
3.3 MOME Diagnostic Strategy | p. 57 |
3.3.1 Example of MOME SV&PFA Diagnostic Rules' Logic | p. 57 |
3.3.2 Example of Key Performance Indicator Validation | p. 67 |
3.3.3 Example of MOME SV&PFA Diagnostic Rules with Measurement Redundancy | p. 71 |
3.3.4 Example of MOME SV&PFA Diagnostic Rules for Interactive Multiple-Faults | p. 74 |
3.4 General Procedure for Developing and Verifying Competent Model-Based Process Fault Analyzers | p. 79 |
3.5 MOME SV&PFA Diagnostic Rules' Logic Compiler Motivations | p. 80 |
3.6 MOME Diagnostic Strategy Summary | p. 83 |
References | p. 84 |
4 Method of Minimal Evidence: Fuzzy Logic Algorithm | p. 87 |
4.1 Overview | p. 87 |
4.2 Introduction | p. 88 |
4.3 Fuzzy Logic Overview | p. 90 |
4.4 MOME Fuzzy Logic Algorithm | p. 91 |
4.4.1 Single-Fault Fuzzy Logic Diagnostic Rule | p. 93 |
4.4.2 Multiple-Fault Fuzzy Logic Diagnostic Rule | p. 97 |
4.5 Certainty Factor Calculation Review | p. 102 |
4.6 MOME Fuzzy Logic Algorithm Summary | p. 104 |
References | p. 105 |
5 Method of Minimal Evidence: Criteria for Shrewdly Distributing Fault Analyzers and Strategic Process Sensor Placement | p. 109 |
5.1 Overview | p. 109 |
5.2 Criteria for Shrewdly Distributing Process Fault Analyzers | p. 109 |
5.2.1 Introduction | p. 110 |
5.2.2 Practical Limitations on Target Process System Size | p. 110 |
5.2.3 Distributed Fault Analyzers | p. 112 |
5.3 Criteria for Strategic Process Sensor Placement | p. 113 |
References | p. 114 |
6 Virtual SPC Analysis and Its Routine Use in FALCONEER™ IV | p. 117 |
6.1 Overview | p. 117 |
6.2 Introduction | p. 118 |
6.3 EWMA Calculations and Specific Virtual SPC Analysis Configurations | p. 118 |
6.3.1 Controlled Variables | p. 119 |
6.3.2 Uncontrolled Variables and Performance Equation Variables | p. 120 |
6.4 Virtual SPC Alarm Trigger Summary | p. 123 |
6.5 Virtual SPC Analysis Conclusions | p. 124 |
References | p. 124 |
7 Process State Transition Logic and Its Routine Use in FALCONEER™ IV | p. 125 |
7.1 Temporal Reasoning Philosophy | p. 125 |
7.2 Introduction | p. 126 |
7.3 State Identification Analysis Currently Used in FALCONEER™ IV | p. 128 |
7.4 State Identification Analysis Summary | p. 131 |
References | p. 131 |
8 Conclusions | p. 133 |
8.1 Overview | p. 133 |
8.2 Summary of the MOME Diagnostic Strategy | p. 133 |
8.3 FALCON, FALCONEER, and FALCONEER™ IV Actual KBS Application Performance Results | p. 134 |
8.4 FALCONEER™ IV KBS Application Project Procedure | p. 136 |
8.5 Optimal Automated Process Fault Analysis Conclusions | p. 138 |
References | p. 139 |
Appendix A Various Diagnostic Strategies for Automating Process Fault Analysis | p. 141 |
A.1 Introduction | p. 141 |
A.2 Fault Tree Analysis | p. 142 |
A.3 Alarm Analysis | p. 143 |
A.4 Decision Tables | p. 143 |
A.5 Sign-Directed Graphs | p. 144 |
A.6 Diagnostic Strategies Based on Qualitative Models | p. 145 |
A.7 Diagnostic Strategies Based on Quantitative Models | p. 145 |
A.8 Artificial Neural Network Strategies | p. 147 |
A.9 Knowledge-Based System Strategies | p. 147 |
A.10 Methodology Choice Conclusions | p. 148 |
References | p. 149 |
Appendix B The FALCON Project | p. 163 |
B.1 Introduction | p. 163 |
B.2 Overview | p. 164 |
B.3 The Diagnostic Philosophy Underlying the FALCON System | p. 164 |
B.4 Target Process System | p. 165 |
B.5 The FALCON System | p. 167 |
B.5.1 The Inference Engine | p. 168 |
B.5.2 The Human-Machine Interface | p. 169 |
B.5.3 The Dynamic Simulation Model | p. 169 |
B.5.4 The Diagnostic Knowledge Base | p. 172 |
B.6 Derivation of the FALCON Diagnostic Knowledge Base | p. 173 |
B.6.1 First Rapid Prototype of the FALCON System KBS | p. 173 |
B.6.2 FALCON System Development | p. 173 |
B.6.3 The FALCON System's Performance Results | p. 182 |
B.7 The Ideal FALCON System | p. 183 |
B.8 Use of the Knowledge-Based System Paradigm in Problem Solving | p. 184 |
References | p. 185 |
Appendix C Process State Transition Logic Used by the Original FALCONEER KBS | p. 187 |
C.1 Introduction | p. 187 |
C.2 Possible Process Operating States | p. 187 |
C.3 Significance of Process State Identification and Transition Detection | p. 189 |
C.4 Methodology for Determining Process State Identification | p. 189 |
C.4.1 Present-Value States of All Key Sensor Data | p. 189 |
C.4.2 Predicted Next-Value States of All Key Sensor Data | p. 190 |
C.5 Process State Identification and Transition Logic Pseudocode | p. 191 |
C.5.1 Attributes of the Current Data Vector | p. 191 |
C.5.2 Method Applied to Each Data Vector | p. 192 |
C.6 Summary | p. 196 |
Appendix D FALCONEER™ IV Real-Time Suite Process Performance Solutions Demos | p. 197 |
D.1 FALCONEER™ IV Demos Overview | p. 197 |
D.2 FALCONEER™ IV Demos | p. 197 |
D.2.1 Wastewater Treatment Process Demo | p. 197 |
D.2.2 Pulp and Paper Stock Chest Demo | p. 199 |
Index | p. 203 |