Transcript

ELSEVIER

Progress in Nuclear Energy, Vol. 46, No. 3-4, pp. 176-189, 2005
Available online at www.sciencedirect.com
www.elsevier.com/locate/pnucene
© 2005 Elsevier Ltd. All rights reserved. Printed in Great Britain.
0149-1970/$ - see front matter
doi:10.1016/j.pnucene.2005.03.003

LESSONS LEARNED FROM THE U.S. NUCLEAR POWER PLANT ON-LINE MONITORING PROGRAMS

J. W. HINES a, E. DAVIS b

aNuclear Engineering Department, The University of Tennessee, Knoxville, Tennessee 37996-2300
bEdan Engineering Corporation, 900 Washington St., Suite 830, Vancouver, Washington 98660

ABSTRACT

The investigation and application of on-line monitoring programs has been ongoing for over two decades by the U.S. nuclear industry and researchers. To date, only limited pilot installations have been demonstrated, and the original objectives have changed significantly. Much of the early work centered on safety-critical sensor calibration monitoring and calibration reduction. The current focus is on both sensor and equipment monitoring. This paper presents the major lessons learned that contributed to the lengthy development process, including model development and implementation issues, and the results of a recently completed cost-benefit analysis.

KEYWORDS

On-line monitoring; sensor calibration verification; empirical models; fault detection and isolation.

1. INTRODUCTION AND BACKGROUND

For the past two decades, Nuclear Power Plants (NPPs) have attempted to move towards condition-based maintenance philosophies using new technologies developed to ascertain the condition of plant equipment. Specifically, techniques have been developed to monitor the condition of sensors and their associated instrument chains. Historically, periodic manual calibrations have been used to assure sensors are operating correctly. This technique is not optimal in that sensor conditions are only checked periodically; therefore, faulty sensors can continue to operate for periods up to the length of the calibration interval. Faulty sensors can cause poor economic performance and unsafe conditions. Periodic techniques also cause the unnecessary calibration of instruments that are not faulted, which can result in damaged equipment, plant downtime, and improper calibration under non-service conditions.


Early pioneers in the use of advanced information processing techniques for instrument condition monitoring included researchers at the University of Tennessee (UT) and Argonne National Laboratory. Dr. Belle Upadhyaya was one of the original investigators in the early 1980s [Upadhyaya 1985, 1989, 1992], through a Department of Energy funded research project to investigate the application of artificial intelligence techniques to nuclear power plants. Researchers at Argonne National Laboratory continued with similar research from the late 1980s [Mott 1987], in which they developed the Multivariate State Estimation Technique (MSET), which has gained wide interest in the U.S. nuclear industry. Lisle, IL-based SmartSignal Corporation licensed the MSET technology for applications in all industries, and subsequently extended and modified the basic MSET technology in developing their commercial Equipment Condition Monitoring (SmartSignal eCM™) software [Wegerich 2001]. The Electric Power Research Institute (EPRI) has used a product from Expert Microsystems called SureSense [Bickford 2003], which also uses the MSET algorithm. Several other U.S. companies such as Pavilion Technologies, ASPEN IQ, and Performance Consulting Services [Griebenow 1995] have also developed sensor validation products. The major European participant in this area is the Halden Research Project, where Dr. Paolo Fantoni and his multi-national research team have developed a system termed Plant Evaluation and Analysis by Neural Operators (PEANO) [Fantoni 1998, 1999] and applied it to the monitoring of nuclear power plant sensors. Several other researchers have been involved with inferential sensing and on-line sensor monitoring. A survey of the methods is given by Hines [2000a].

Early EPRI research included the development of the Instrument Calibration and Monitoring Program (ICMP) for monitoring physically redundant sensors [EPRI 1993a, 1993b]. Subsequent work expanded to monitoring both redundant and non-redundant sensors.

Research and development in the 1990s resulted in Topical Report TR-104965, On-Line Monitoring of Instrument Channel Performance, developed by the EPRI/Utility On-Line Monitoring Working Group. In July 2000, the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation issued a safety evaluation (SE), which was released in September 2000. This report focused on the generic application of on-line monitoring techniques to be used as a tool for assessing instrument performance. It proposed to relax the frequency of instrument calibrations required by the U.S. nuclear power plant Technical Specifications (TS) from once every fuel cycle to once in a maximum of 8 years, based on the on-line monitoring results.

1.1 EPRI on-line monitoring (OLM) group

The EPRI Instrument Monitoring and Calibration (IMC) Users Group formed in 2000 with an objective to demonstrate OLM technology in operating nuclear power plants for a variety of systems and applications. A second objective is to verify that OLM is capable of identifying instrument drift or failure. The On-Line Monitoring Implementation Users Group formed in mid-2001 to demonstrate OLM in multiple applications at many nuclear power plants and has a four-year time frame.

Current United States nuclear plant participants include Limerick, Salem, Sequoyah, TMI, and VC Summer, using a system produced by Expert Microsystems Inc. (expmicrosys.com), and Harris and Palo Verde, which use a system developed by SmartSignal Inc. (smartsignal.com). Each of these plants is currently using OLM technology to monitor the calibration of process instrumentation. In addition to monitoring instrumentation, the systems have an inherent dual purpose of monitoring the condition of equipment, which is expected to improve plant performance and reliability. The Sizewell B nuclear power plant in Great Britain is using the OLM services supplied by AMS (www.ams-corp.com).

1.2 Lessons learned

This paper presents a brief description of the development activities and the major lessons learned. These lessons will be divided into three main categories. First, the technology changes will be briefly discussed.


Next, implementation issues will be presented along with several examples. Lastly, a recently completed cost benefit study will be summarized to show where economics will drive the future application of On-Line Monitoring technologies.

2. ON-LINE MONITORING TECHNIQUES

The OLM systems use historical plant data to develop empirical models that capture the relationships between correlated plant variables. These models are then used to verify that the relationships have not changed. A change can occur due to sensor drift, equipment faults, or operational error. The systems currently in use in the US are based on the Multivariate State Estimation Technique developed at Argonne National Laboratory (ANL) [Singer 1996, Gross 1997] and further studied at the University of Tennessee (UT) [Gribok 2000].

Numerous data-based technologies have been used by major researchers in the field, including autoassociative neural networks [Fantoni 1998, Hines 1998, Upadhyaya 1992], fuzzy logic [Hines 1997], non-linear partial least squares [Qin 1992, Rasmussen 2000a], and kernel-based techniques such as MSET [Singer 1996] and the Advanced Calibration Monitor (ACM) [Hansen 1994]. Three technologies that use different data-based prediction methods have emerged and have been used in the electric power industry: a kernel-based method (MSET), a neural network based method (PEANO and the University of Tennessee AANN), and a transformation method (NLPLS). These methods are described and compared in Hines [2000a].
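
To make the kernel-based approach concrete, the sketch below shows a minimal similarity-weighted auto-associative estimator in the spirit of MSET. The Gaussian kernel and bandwidth are illustrative assumptions, not the licensed MSET operator; the commercial products add considerable machinery around this core idea.

```python
import numpy as np

def kernel_estimate(X_proto, x_obs, bandwidth=1.0):
    """Auto-associative estimate of a current observation from a
    prototype (memory) matrix of historical exemplar vectors.

    X_proto : (n_prototypes, n_signals) historical exemplar vectors
    x_obs   : (n_signals,) current observation
    Returns a corrected estimate of x_obs as a similarity-weighted
    average of the prototypes (Gaussian kernel; the bandwidth is a
    tuning assumption, not part of the published MSET operator).
    """
    d2 = np.sum((X_proto - x_obs) ** 2, axis=1)   # squared distances to prototypes
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # similarity weights
    w /= w.sum()                                  # normalize to sum to one
    return w @ X_proto                            # weighted-average estimate

# The OLM residual for each signal is the observation minus this estimate.
```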

The major lessons learned in applying empirical modeling strategies are that the methods should:

• produce accurate results,
• produce repeatable and robust results,
• have an analytical method to estimate the uncertainty of the predictions,
• be easily trained and easily retrained for new or expanded operating conditions.

2.1 Accurate results

Early applications of autoassociative techniques, such as MSET, were publicized to perform well with virtually no engineering judgment necessary. One item of interest is the choice of inputs for a model. Early application limits were said to be around 100 inputs per model [EPRI 2000], with no need to choose and subgroup correlated variables. However, experience has shown that models should be constructed with groups of highly correlated sensors, resulting in models commonly containing fewer than 30 signals [EPRI 2002a]. It has been shown that adding irrelevant signals to a model increases the prediction variance, while not including a relevant variable biases the estimate [Rasmussen 2003b]. Additionally, automated techniques for sensor grouping have been developed for the MSET model [Hines 2004].
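
As an illustration of automated sensor grouping (not the specific algorithm of Hines [2004]), the sketch below greedily groups signals by absolute correlation, capped at roughly the 30-signal model size cited above; the correlation threshold is an assumed tuning value.

```python
import numpy as np

def group_correlated_sensors(data, threshold=0.8, max_group=30):
    """Greedy grouping of sensors by absolute correlation.

    data      : (n_samples, n_sensors) training data matrix
    threshold : minimum |correlation| to join a group (assumed value)
    max_group : cap reflecting the <30-signal guidance cited above
    Returns a list of sensor-index lists, one list per candidate model.
    """
    corr = np.abs(np.corrcoef(data, rowvar=False))
    unassigned = set(range(data.shape[1]))
    groups = []
    while unassigned:
        seed = unassigned.pop()
        group = [seed]
        # add the sensors most correlated with the seed, up to the cap
        for j in sorted(unassigned, key=lambda j: corr[seed, j], reverse=True):
            if corr[seed, j] >= threshold and len(group) < max_group:
                group.append(j)
                unassigned.remove(j)
        groups.append(group)
    return groups
```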

2.2 Repeatable and robust results

When empirical modelling techniques are applied to collinear (highly correlated) data sets, ill-conditioning can result in highly accurate performance on the training data but highly variable, inaccurate results on unseen data. Robust models perform well when inputs are degraded, as in the expected noisy environments or when a sensor input is faulted. Regularization techniques can be applied to make the predictions repeatable and robust, with lower variability [Hines 1999, 2000b, Gribok 2000, 2001]. A summary of the methods is given in Gribok [2002], and regularization methods have been applied to many of the systems currently in use.
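
The essence of regularization can be shown with a minimal ridge (Tikhonov) example on a linear inferential model; the cited systems apply more elaborate regularized kernel and neural network formulations, so this is only a sketch of the principle.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Ridge-regularized least squares: penalizing the weight norm
    stabilizes the solution when the columns of X are collinear,
    trading a little training-set accuracy for repeatable, low-variance
    predictions on unseen data. alpha is an assumed regularization strength."""
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)   # regularized normal equations
    return np.linalg.solve(A, X.T @ y)         # weight vector

def ridge_predict(X, w):
    """Predict the target signal from the correlated inputs."""
    return X @ w
```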


2.3 Uncertainty analysis

The most basic requirement outlined in the NRC safety evaluation [2000] is an analysis of the uncertainty in the empirical estimates. Argonne National Laboratory has performed Monte Carlo based simulations to estimate the uncertainty of MSET-based estimates [Zavaljevski 2000, 2003]. These techniques produce average results for a particular model trained with a particular data set. Researchers at The University of Tennessee have developed analytical techniques to estimate prediction intervals for all of the major techniques (MSET, AANN, PEANO, and NLPLS). The analytical results were verified using Monte Carlo based simulations and provide the desired 95% coverage [Rasmussen 2003a, 2003b, 2004, Gribok 2004]. Each of the techniques performs well, some better than others, on various data sets.
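
The coverage verification step can be illustrated empirically: given a model's point predictions and claimed 95% interval half-widths, count how often held-out observations fall inside the interval. The function and argument names below are placeholders, not the cited analytical derivations.

```python
import numpy as np

def empirical_coverage(y_true, y_pred, halfwidth):
    """Fraction of held-out observations that fall inside the claimed
    prediction interval [y_pred - halfwidth, y_pred + halfwidth].
    For a valid 95% interval this fraction should be close to 0.95."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    covered = np.abs(y_true - y_pred) <= np.asarray(halfwidth)
    return covered.mean()
```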

2.4 Ease of training and retraining

As will be shown in section 3, it is virtually impossible for the original training data to cover the entire range of operation. The operating conditions may change over time and the models may need to be retrained to incorporate the new data.

MSET-based methods are not trained in the parametric sense; they are non-parametric modelling techniques. They adapt easily in that new data vectors can simply be added to the prototype data matrix.
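
A minimal sketch of that retraining step is shown below: new operating conditions are incorporated by appending exemplar vectors to the memory, with an assumed size cap and downsampling rule (not part of any cited product) to keep look-ups fast.

```python
import numpy as np

def extend_memory(X_proto, new_vectors, max_size=500):
    """Incorporate new operating conditions into a memory-based model
    by appending exemplar vectors to the prototype matrix; no weight
    re-optimization is required. max_size and the even downsampling
    are assumptions made for this illustration."""
    X_new = np.vstack([X_proto, np.atleast_2d(new_vectors)])
    if X_new.shape[0] > max_size:
        # keep an evenly spaced subset so the memory stays representative
        idx = np.linspace(0, X_new.shape[0] - 1, max_size).astype(int)
        X_new = X_new[idx]
    return X_new
```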

Artificial Neural Networks require fairly long training times. Other parametric techniques, such as Non-Linear Partial Least Squares (NLPLS), can be trained much faster. Recently, the PEANO system has incorporated an NLPLS algorithm that performs with accuracy equal to that of the original AANN algorithm and can be trained in minutes rather than days [Fantoni 2002].

3. OLM PLANT IMPLEMENTATION

In 2000, EPRI's focus moved from OLM product development to its implementation. In 2001, the On-Line Monitoring Implementation project started with a strategic role to facilitate OLM's implementation and cost-effective use in numerous applications at power plants. Specifically, EPRI sponsored on-line monitoring implementations at multiple nuclear power plants. After three years of implementation and installation experience, several lessons have been learned. The major areas include data acquisition and quality, model development, and results interpretation.

3.1 Data acquisition and quality

In order to build a robust model for OLM, one must first collect data covering all the operating conditions in which the system is expected to operate and for which signal validation is desired. This historical data has been collected and stored, and may not represent the plant state due to several anomalies that commonly occur. These include interpolation errors, random data errors, missing data, loss of significant figures, stuck data, and others. Data should always be visually inspected and corrected or deleted before use.

3.1.1 Interpolation errors

The first problem usually encountered in using historical data for model training is that it is usually not actual data but, instead, data resulting from compression routines normally implemented in data archival programs. For example, the PI Data Historian from OSI Software creates a data archive that is a time-series database. However, not all of the data is stored at each collection time. Only data values that have changed by more than a tolerance are stored along with their time stamp. This method requires much less storage but results in a loss of data fidelity. When data is extracted from the historian, data values between logged data points are calculated through a simple linear interpolation. The resulting data appears to be a saw-tooth time series, and the correlations between sensors may be severely changed. Figure 1 below is a plot of data output by a data historian. The plot shows a perfectly linear increase in power between April 6 and April 7, although this was not the actual operation. Data collected for model training should be actual data, and tolerances should be set as small as possible or not used.

Fig. 1. Data interpolation (power versus time, 5 April to 11 April).
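
The distortion described above can be reproduced with a simplified deadband compression routine followed by the linear interpolation a historian applies on extraction; the tolerance value and the synthetic signal are assumptions for illustration, and the real PI compression algorithm is more elaborate.

```python
import numpy as np

def deadband_compress(t, x, tol):
    """Store a point only when the value moves more than `tol` away from
    the last stored value -- a simplified stand-in for historian
    compression (the actual PI algorithm is more elaborate)."""
    keep_t, keep_x = [t[0]], [x[0]]
    for ti, xi in zip(t[1:], x[1:]):
        if abs(xi - keep_x[-1]) > tol:
            keep_t.append(ti)
            keep_x.append(xi)
    return np.array(keep_t), np.array(keep_x)

# On extraction, the historian linearly interpolates between stored points,
# which flattens real structure and can distort cross-correlations:
t = np.linspace(0.0, 10.0, 1000)                 # days (synthetic)
x = 100.0 + 0.2 * np.sin(2 * np.pi * t / 3.0)    # synthetic power signal (%)
tc, xc = deadband_compress(t, x, tol=0.15)
x_reconstructed = np.interp(t, tc, xc)           # saw-tooth approximation
```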

3.1.2 Data quality issues

Several data quality issues are common. These include:

• Lost or missing data.
• Single or multiple outliers in one sensor or several.
• Stuck data in which the data value does not update.
• Random data values.
• Unreasonable data values.
• Loss of significant digits.

The figures below show several of these issues:



Fig. 3. Loss of Significant Digits

Fig. 4. Unreasonable Data

Most of these data problems can be visually identified or can be detected by a data clean-up utility. These utilities remove bad data or replace it with the most probable data value using some algorithm. It is most common to delete all bad data observations from the training data set. Most OLM software systems include automated tools for data cleanup; these tools easily identify extreme outlying data but are typically insensitive to data errors that occur within the expected region of operation. The addition of bad data points in a training set can invalidate a model. The figure below shows the prediction results with (Figure 5a) and without (Figure 5b) two bad data points. The actual data is black while the predicted data is grey.

Fig. 5a. Predictions with bad data.

Fig. 5b. Predictions with bad data removed.
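
Two of the most common problems, stuck data and gross outliers, can be flagged automatically before training; the run length and outlier thresholds below are assumed values to be tuned per signal, not parameters of any cited clean-up utility.

```python
import numpy as np

def flag_bad_samples(x, stuck_len=20, z_thresh=5.0):
    """Flag samples to drop from a training set for two common
    data-quality problems: stuck data (value unchanged for at least
    `stuck_len` consecutive samples) and gross outliers (more than
    `z_thresh` robust standard deviations from the median)."""
    x = np.asarray(x, dtype=float)
    bad = np.zeros(x.size, dtype=bool)

    # stuck data: long runs with no change at all
    same = np.concatenate(([False], np.diff(x) == 0.0))
    run = 0
    for i, s in enumerate(same):
        run = run + 1 if s else 0
        if run >= stuck_len:
            bad[i - run:i + 1] = True

    # gross outliers via the median absolute deviation (robust z-score)
    mad = np.median(np.abs(x - np.median(x))) or 1e-12
    bad |= np.abs(x - np.median(x)) / (1.4826 * mad) > z_thresh
    return bad
```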

3.2 Model development

Model development is not the simple "click and go" process once claimed. There are several decisions that need to be made, including:

• Defining models and selecting relevant inputs.
• Selecting relevant operating regions.
• Selecting relevant training data.

Grouping the sensors into related (correlated) groups has been discussed in section 2.1. This can be done with automated systems or with engineering judgment; a combination of the techniques probably works best. Most nuclear plants tend to operate for extended periods at 100 percent power, and some system data tends to exhibit little variation, which complicates any correlation analysis, especially for noisy signals.


The model must be trained with data covering all operating regions in which it is expected to operate. These operating regions can vary significantly between nuclear plants since regions are defined by system structure, sensor values, and operating procedures.

One example of a system structure change is the periodic usage of standby pumps or the cycled usage of redundant pumps. A model must be trained for each operating condition for the system to work properly, but excessive training on unusual conditions may degrade the performance on the most usual operating conditions. Therefore, some plant line-ups may never be included in the training set.

Operating conditions can also change due to equipment repair. In this case the model must be retrained to account for the new condition. Figure 6 presents an example in which a pump impeller was repaired resulting in an increased flow rate. The sensors are operating properly before and after the repair, but they are obviously sensing different operating states. In this case, the model must be completely retrained.

Fig. 6. Repaired pump impeller (flow signals before and after repair).

Operating conditions also change due to cyclic changes such as seasonal variations. If a model is trained during mild summers and then monitoring occurs in a hotter summer with higher cooling water temperatures, the model will not perform correctly. In this case, data from the more severe operating conditions must be added to the training data. The figure below shows an example of this anomaly.

Fig. 7. Cyclic operating condition requiring retraining


Out-of-the-ordinary transients can also cause modeling problems. One example of this is programmed control rod changes that occur in boiling water reactors. This is a common procedure, but one for which retraining might or might not be conducted, depending on the user's preference regarding false alarms. The figure below shows this example of short-term transients.

Fig. 8. Short-term transients that exceed the maximum trained values.

3.3 Results interpretation

Once a model is trained and put into operation, the predictions must be evaluated to determine if the system is operating correctly, if a sensor is drifting, if an operating condition has changed, or if an equipment failure has occurred. The choice of which has occurred can be made using logic, and this logic has been programmed into expert-system-type advisors with some success [Wegerich 2001]. The logical rules operate on the residuals, which are the difference between the predictions and the observations. Under normal conditions, the residuals should be small random values. If only one residual grows, the hypothesis is that a sensor has degraded or failed. An example of a drifting sensor is shown below, with the first plot showing the sensor value drifting down while the predicted value remains fairly constant, and the second plot showing the residual deviating down from zero.

Fig. 9. Sensor drifting and its associated residual.
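
The first-pass isolation logic described above can be summarized in a few lines; the window, band width, and return labels are assumptions for illustration rather than the rules of any cited advisor.

```python
import numpy as np

def classify_residuals(residuals, sigma, k=3.0):
    """Simple fault-isolation logic on a window of residuals
    (observation minus prediction) for one model.

    residuals : (n_samples, n_signals) recent residual window
    sigma     : per-signal residual standard deviations from training
    k         : assumed band width in standard deviations
    One deviating signal suggests a sensor fault; several deviating
    signals suggest an operating-state change or equipment fault that
    needs engineering review."""
    mean_res = np.mean(residuals, axis=0)
    deviating = np.abs(mean_res) > k * np.asarray(sigma)
    n_dev = int(deviating.sum())
    if n_dev == 0:
        return "normal", deviating
    if n_dev == 1:
        return "suspected sensor drift or failure", deviating
    return "suspected operating-state change or equipment fault", deviating
```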


If several residuals significantly differ from zero, the operating state has probably changed or an equipment failure has occurred. More in-depth knowledge and engineering judgment must be used to ascertain which has occurred, and a fault detection and identification system may be necessary to make this decision.

Early architectures used a statistical technique termed the Sequential Probability Ratio Test (SPRT), developed by Wald [1945] and improved by Gross [1992], to determine when a sensor's residual has deviated from zero. This method assumes normally distributed noise, which is rarely the case, and this degrades its operation. Simpler threshold checking techniques have been used with success. The thresholds have been set using different methods, such as three-sigma bands with sigma equal to the standard deviation of the residual from the training set. More complex techniques use changing threshold bands that switch when models change [Fantoni 1998] or change as the uncertainty in the prediction changes [Rasmussen 2003b].
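
For reference, a basic Wald-style SPRT for a positive mean shift in a Gaussian residual is sketched below; the shift size and the false-alarm and missed-alarm probabilities are assumed inputs, and this is the textbook form rather than the spectrum-transformed variant of Gross [1992].

```python
import numpy as np

def sprt_mean_shift(residuals, sigma, m1, alpha=0.01, beta=0.01):
    """Sequential test of H0: residual ~ N(0, sigma^2) against
    H1: residual ~ N(m1, sigma^2). Returns 'faulted', 'healthy', or
    'continue'. alpha and beta are the assumed false-alarm and
    missed-alarm probabilities."""
    A = np.log((1 - beta) / alpha)   # upper decision limit (declare fault)
    B = np.log(beta / (1 - alpha))   # lower decision limit (declare healthy)
    llr = 0.0
    for r in residuals:
        # log-likelihood ratio increment for a Gaussian mean shift
        llr += (m1 / sigma**2) * (r - m1 / 2.0)
        if llr >= A:
            return "faulted"
        if llr <= B:
            return "healthy"
    return "continue"
```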

4. COST BENEFIT ANALYSIS

Recently EPRI has completed a Cost Benefit Analysis Guide [EPRI, 2003]. The objective of this document was to determine the economic impact of the installation, operation, and upkeep of an On-Line Monitoring system and quantify the associated costs and benefits. This section summarizes the results presented in that document.

4.1 Costs

The costs of an on-line implementation include software licensing, equipment, model development, training, and maintenance. If the system is used to monitor Technical Specification sensors to defer manual calibrations, then additional costs will be incurred to obtain a license amendment. These costs are summarized in Table 1 below. The expected costs are very sensitive to the software costs, and the values below apply specifically to the Expert Microsystems SureSense software used by the EPRI OLM Implementation project.

4.2 Benefits

The benefits of an on-line monitoring system include direct benefits from a reduction in manual calibrations, and indirect benefits including performance enhancements and equipment monitoring. It has been determined that the cost of a manual calibration is approximately $910 for one sensor. The number of safety critical sensors covered by Technical Specifications commonly ranges between 60 and 100 sensors. However, more than 200 sensors are suitable for calibration monitoring, and the range of savings depends on the number of calibrations avoided each cycle, which depends on the number of sensors being monitored. The typical anticipated savings are:

• 50 calibrations deferred each operating cycle - $45,500
• 75 calibrations deferred each operating cycle - $68,250
• 100 calibrations deferred each operating cycle - $91,000

Figure 10 graphically shows the payback due to the installation of an OLM system, considering the benefits of calibration reductions. The payback period is strongly affected by the number of sensors being monitored. A system monitoring 300 sensors has a payback of 6 years, while a system monitoring only 100 sensors may never have a positive net present value. This shows that the techniques must be applied to non-Technical Specification sensors for the investment to be worthy of consideration. It is also apparent that the benefits of calibration reduction alone may not be a strong incentive for installation of an OLM system, and other indirect benefits should be considered.
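
A rough cumulative cash-flow sketch using the figures quoted here ($910 per calibration, roughly $210,000 initial cost with the Technical Specification application, and $26,000 per year recurring from Table 1) is shown below. The 1.5-year fuel cycle, the horizon, and the absence of discounting are assumptions; the EPRI guide's full net-present-value treatment (Figure 10) is the authoritative analysis and is less favorable, particularly for small channel counts.

```python
def simple_payback(n_deferred_per_cycle, cycle_years=1.5,
                   cost_per_calibration=910.0,
                   initial_cost=210_000.0, annual_cost=26_000.0,
                   horizon_years=15):
    """Return the first year in which undiscounted cumulative savings
    from deferred calibrations exceed the initial and recurring costs,
    or None if payback is not reached within the horizon."""
    balance = -initial_cost
    for year in range(1, horizon_years + 1):
        annual_savings = n_deferred_per_cycle * cost_per_calibration / cycle_years
        balance += annual_savings - annual_cost
        if balance >= 0:
            return year
    return None

# e.g. simple_payback(100) estimates the undiscounted break-even year for
# 100 deferred calibrations per cycle; discounting pushes this out further.
```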


Table 1. Costs of On-Line Monitoring Implementation [EPRI 2003]

Cost Element                                     Cost per Channel   Cost per Model   Total Cost

Initial Program Set-Up
  Software license                                                                    $25,000
  Computers for project personnel                                                     $5,000
  Personnel training                              $45                $450             $9,000
  Extract historical data                                                             $6,000
  Initial model development                       $300               $3,000           $60,000
  Configure on-line interfaces                                                        $25,000
  Initial model deployment                        $60                $600             $12,000
  On-site procedures                                                                  $20,000
  Plant-specific software acceptance                                                  $3,000
  Initial Program Set-Up Total:                                                       $165,000

Technical Specification Applications (100 channels)
  Technical Specification change request prep                                         $15,000
  NRC review fee                                                                      $15,000
  Uncertainty analysis                            $75                $750             $7,500
  On-site procedures                              $75                $750             $7,500
  Total with Technical Specifications Included:                                       $210,000

Recurring Annual Costs
  Maintenance Agreement                                                               $5,000
  Periodic review and analysis                    $45                $450             $9,000
  Model Maintenance                               $60                $600             $12,000
  Total Annual Costs:                                                                 $26,000

Fig. 10. Payback analysis of OLM installation (payback versus number of channels modeled).


The indirect benefits related to OLM are more difficult to quantify. One example is being able to schedule maintenance for failed sensors. In 2001, a participating NPP detected a first stage turbine pressure sensor drift. This sensor is an input to a pressure control system and has a redundancy of only two. Therefore, without an OLM system, the operator would not have been able to determine which of the two sensors was drifting and would have had to perform maintenance immediately. Because the faulty sensor was readily identified, the plant was able to continue normal operation, and maintenance was scheduled for a more opportune time. Additionally, the time to troubleshoot sensor anomalies is reduced with OLM.

Indirect benefits can also be attributed to increased performance. Several plants use performance monitoring software to increase thermal efficiency. If a faulty sensor were used as input to the performance monitoring system, incorrect plant operational changes could be made that would reduce the performance of the plant. Having validated signals as inputs to these systems can have an economic advantage.

The benefits of on-line equipment monitoring are also difficult to quantify but may be extremely important. The benefits range from more efficient maintenance scheduling to a reduction in down-time. The largest potential savings comes from the possible avoidance of an incident. A study of the loss of power incidents at eight selected U.S. units between 2000 and 2004 shows that the average number of incidents is 5.5. The average dollar loss per incident ranges from $0.6 million to $6.2 million, with a mean estimate of $1.5 million. These values lead to losses ranging from $3.4 million to $33.9 million, with a mean of $8.1 million. It is apparent that just one avoided incident would pay for the installation costs many times over. However, a cursory investigation also shows that only a small percentage of the incidents would have been avoided through the use of an OLM system.

5. CONCLUSIONS

The development and application of On-Line Monitoring systems has occurred over the past 20 years. Through that time period, much has been learned about improving the modeling techniques, implementing the systems at a plant site, evaluating the results, and the economic basis for such an installation. The original objective of extending Technical Specification sensor calibrations to meet extended fuel cycles has changed to monitoring both safety and non-safety related signals, performance, and equipment. As plants fully field these technologies, the efforts and experiences of plant personnel, researchers, and EPRI project managers will prove invaluable.

REFERENCES

Bickford, R., R.E. Holzworth, R.D. Griebenow, and A. Hussey (2003), "An Advanced Equipment Condition Monitoring System for Power Plants", Transactions of the American Nuclear Society, New Orleans, LA, Nov 16-20, 2003.

EPRI (1993a), Instrument Calibration and Monitoring Program, Volume 1: Basis for the Method, EPRI, Palo Alto, CA: 103436-V1.

EPRI (1993b), Instrument Calibration and Monitoring Program, Volume 2: Failure Modes and Effects Analysis, EPRI, Palo Alto, CA: 103436-V2.

EPRI (2000), On-Line Monitoring of Instrument Channel Performance, EPRI, Palo Alto, CA: 1000604.

EPRI (2002a), Plant Systems Modelling Guidelines to Implement On-Line Monitoring, EPRI, Palo Alto, CA: 1003661.

EPRI (2002b), On-Line Monitoring Implementation Guidelines, EPRI, Palo Alto, CA:

EPRI (2002c), Implementation of On-Line Monitoring for Technical Specification Instruments, EPRI, Palo Alto, CA: 1006833.


EPRI (2003), On-Line Monitoring Cost Benefit Guide, Final Report, EPRI, Palo Alto, CA: 1006777.

Fantoni, P., S. Figedy, A. Racz, (1998), "A Neuro-Fuzzy Model Applied to Full Range Signal Validation of PWR Nuclear Power Plant Data", FLINS-98, Antwerpen, Belgium.

Fantoni, P., (1999), "On-Line Calibration Monitoring of Process Instrumentation in Power Plants", EPRI Plant Maintenance Conference, Atlanta, Georgia, June 21, 1999.

Fantoni, P., M. Hoffmann, B. Rasmussen, J.W. Hines, and A. Kirschner (2002), "The use of non linear partial least square methods for on-line process monitoring as an alternative to artificial neural networks," 5th International Conference on Fuzzy Logic and Intelligent Technologies in Nuclear Science (FLINS), Gent, Belgium, Sept. 16-18.

Gribok, A.V., J.W. Hines, R.E. Uhrig (2000), "Use of Kernal Based Techniques for Sensor Validation in Nuclear Power Plants", The Third American Nuclear Society International Topical Meeting on Nuclear Plant Instrumentation and Control and Human-Machine Interface Technologies, Washington DC, November 13-17, 2000.

Gribok, A.V., J.W. Hines, I. Attieh, and R.E. Uhrig (2000), "Stochastic Regularization of Feedwater Flow Rate Evaluation for the Venturi Meter Fouling Problem in Nuclear Power Plants", Inverse Problems in Engineering.

Gribok, A.V., J.W. Hines, I Attieh, and R.E. Uhrig, (2001), "Regularization of Feedwater Flow Rate Evaluation for the Venturi Meter Fouling Problems in Nuclear Power Plants", Nuclear Technology, Vol. 134, No. 1, April 2001.

Gribok, A.V., J.W. Hines, A. Urmanov, and R.E. Uhrig (2002), "Regularization of Ill-Posed Surveillance and Diagnostic Measurements", Power Plant Surveillance and Diagnostics, eds. Da Ruan and P. Fantoni, Springer.

Gribok, A.V., J.W. Hines, and A.M. Urmanov (2004), "Uncertainty Analysis of Memory Based Sensor Validation Techniques", accepted for publication in the special issue of Real Time Systems on "Applications of Intelligent Real-Time Systems for Nuclear Engineering".

Griebenow, R.D., and A.L. Sudduth (1995), "Applied Pattern Recognition for Plant Monitoring and Data Validation", The 1995 ISA POWID Conference.

Gross, K.C. (1992), "Spectrum-Transformed Sequential Testing Method for Signal Validation Applications", 8th Power Plant Dynamics, Control & Testing Symp., Knoxville, Tennessee, Vol. I, May 1992, pp. 36.01-36.12.

Gross, K.C., R.M. Singer, J.P. Herzog, R. VanAlstine, and S.W. Wegerich (1997), "Application of a Model-based Fault Detection System to Nuclear Plant Signals", Proceedings, Intelligent System Applications to Power Systems (ISAP), Seoul, Korea, July 6-10, pp. 66-70.

Hansen, E.J., and M.B. Caudill (1994), "Similarity Based Regression: Applied Advanced Pattern Recognition for Power Plant Analysis", 1994 EPRI-ASME Heat Rate Improvement Conference, Baltimore, Maryland.

Hines, J.W., and D.J. Wrest (1997), "Signal Validation Using an Adaptive Neural Fuzzy Inference System", Nuclear Technology, August, pp. 181-193.

Hines, J.W., and R.E. Uhrig, (1998), "Use of Autoassociative Neural Networks for Signal Validation", Journal of Intelligent and Robotic Systems, Kluwer Academic Press, February, pp. 143-154.

Hines, J.W., A.V. Gribok, I. Attieh, and R.E. Uhrig (1999), "Regularization Methods for Inferential Sensing in Nuclear Power Plants", Fuzzy Systems and Soft Computing in Nuclear Engineering, Ed. Da Ruan, Springer, 1999.

Hines, J.W., and B. Rasmussen (2000a), "On-Line Sensor Calibration Verification: A Survey", 14th International Congress and Exhibition on Condition Monitoring and Diagnostic Engineering Management, Manchester, England, September 2000.

Hines, J.W., A.V. Gribok, R.E. Uhrig, and I. Attieh (2000b), "Neural network regularization techniques for a sensor validation system," Transactions of the American Nuclear Society, San Diego, California, June 4-8.

Hines, J.W., A. Usynin, and S. Wegerich (2004), "Autoassociative Model Input Variable Selection for Process Modeling", 58th Meeting of the Society for Machinery Failure Prevention Technology, Virginia Beach, Virginia, April 26-30, 2004.

Mott, Young, and R.W. King (1987), "Pattern Recognition Software for Plant Surveillance", US DOE Report.


Qin, S.J., and T.J. McAvoy (1992), "Nonlinear PLS Modelling Using Neural Networks," Computers in Chemical Engineering, vol. 16, no. 4, pp. 379-391.

Rasmussen, B., J.W. Hines, and R.E. Uhrig (2000), "Nonlinear Partial Least Squares Modeling for Instrument Surveillance and Calibration Verification", Proc. Maintenance and Reliability Conference, Knoxville, TN.

Rasmussen, B., J.W. Hines, and A.V. Gribok (2003a), "An Applied Comparison of the Prediction Intervals of Common Empirical Modeling Strategies", Proc. Maintenance and Reliability Conference, Knoxville, TN.

Rasmussen, B., (2003b), "Prediction Interval Estimation Techniques for Empirical Modeling Strategies and their Applications to Signal Validation Tasks", Ph.D. dissertation, Nuclear Engineering Department, The University of Tennessee, Knoxville.

Rasmussen, B. and J.W. Hines (2004), "Uncertainty Estimation Techniques for Empirical Model Based Condition Monitoring", 6th International FLINS Conference on Applied Computational Intelligence, Blankenberge, Belgium, Sept 1-4, 2004.

Singer, R.M., K.C. Gross, J.P. Herzog, R.W. King, and S.W. Wegerich (1996), "Model-Based Nuclear Power Plant Monitoring and Fault Detection: Theoretical Foundations", Proc. 9th Intl. Conf. on Intelligent Systems Applications to Power Systems, Seoul, Korea.

Upadhyaya, B.R., (1985), "Sensor Failure Detection and Estimation", Nuclear Safety.

Upadhyaya, B.R., and K. Holbert (1989), "Development and Testing of an Integrated Signal Validation System for Nuclear Power Plants", DOE Contract DE-AC02-86NE37959.

Upadhyaya, B.R., and E. Eryurek (1992), "Application of Neural Networks for Sensor Validation and Plant Monitoring," Nuclear Technology, vol. 97, pp. 170-176, February, 1992.

Wald, A. (1945), "Sequential Tests of Statistical Hypotheses," Annals of Mathematical Statistics, Vol. 16, pp. 117-186.

Wegerich, S., R. Singer, J. Herzog, and A. Wilks (2001), "Challenges Facing Equipment Condition Monitoring Systems", Proc. Maintenance and Reliability Conference, Gatlinburg, TN.

Zavaljevski, N and K. Gross (2000), "Uncertainty Analysis for Multivariate State Estimation in Safety Critical and Mission Critical Maintenance Applications", Proc. Maintenance and Reliability Conference, Knoxville, TN.

Zavaljevski, N., A. Miron, C. Yu, and E. Davis (2003), "Uncertainty Analysis for the Multivariate State Estimation Technique (MSET) Based on Latin Hypercube Sampling and Wavelet De-Noising", Transactions of the American Nuclear Society, New Orleans, LA, No

