
Fault Detection and Diagnosis of Brahmanbaria Gas

Processing Plant Using Artificial Neural Network Analysis

MASTER OF SCIENCE IN ENGINEERING (CHEMICAL)

Suman Ahmed

Department of Chemical Engineering

BANGLADESH UNIVERSITY OF ENGINEERING AND TECHNOLOGY, DHAKA

March, 2018


Fault Detection and Diagnosis of Brahmanbaria Gas

Processing Plant Using Artificial Neural Network Analysis

by

Suman Ahmed

A thesis submitted to the Department of Chemical Engineering

in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE IN ENGINEERING (CHEMICAL)

Department of Chemical Engineering

BANGLADESH UNIVERSITY OF ENGINEERING AND

TECHNOLOGY, DHAKA

March, 2018


Abstract

Natural gas (NG) plays an important role in several sectors of Bangladesh, such as power generation, fertilizer production, and industrial and commercial use. There are many gas producing fields in Bangladesh. Although process safety technology has been gradually implemented over the years, several accidents have occurred in the gas field industry in Bangladesh, in fields operated by BGFCL, Niko, Occidental, Tullow, and Chevron. A massive blowout took place in an Occidental operated field, and a similar incident happened at the Tengratila field of Niko. Major gas leakage was found in the Titas gas field (BGFCL). Gas processing plants associated with the gas fields have suffered process upsets and outages due to a lack of proper monitoring: the Bangura gas plant was shut down for lack of proper monitoring, and the Bibiyana gas plant was shut down to repair a gas leak. These incidents place emphasis on fault detection and diagnosis. Advanced process control systems, such as supervisory control and data acquisition (SCADA) and distributed control systems, help to operate the plant more reliably. However, operators are saturated by alarms when disturbances occur in a chemical process, and they need tools that rapidly identify the root cause of a fault and allow rapid intervention to mitigate its consequences. To reduce the frequency and consequences of accidents, several techniques for hazard identification and fault diagnosis have been developed and implemented. Over the last few years, several studies have been carried out on the detection and diagnosis of process plant disturbances using NN based fault diagnosis techniques.

In this thesis, an attempt has been made to study fault detection and diagnosis of a gas processing plant using an NN based system. Firstly, a steady state model of the gas processing plant was developed in HYSYS and validated using Brahmanbaria gas plant data. Secondly, a dynamic model was developed within Aspen HYSYS to study the transient behaviour and the different states (normal and abnormal) of the plant. Thirdly, data for the different states of the process plant were generated using the dynamic model.


Finally, a multi-layered feed-forward NN based fault detection and diagnosis model has been developed to distinguish fault (disturbance) from no-fault (normal) operation. The developed NN based fault detection and diagnosis system has been trained using the backpropagation algorithm, and has been trained, validated and tested using the dynamic model data. Several neural networks with different configurations and various learning strategies were employed in the training process to obtain the optimum NN architecture for fault detection and diagnosis. Preliminary results show that the NN based method successfully detects faults in the gas processing plant. It is expected that ANN based fault detection and diagnosis tools will become popular in petrochemical processes because of their simplicity of development.


Acknowledgements

Firstly, the author would like to express sincere thanks to Dr. Md. Tanvir Sowgath for proposing this research topic, and to thank him for his valuable suggestions, excellent guidance, support, encouragement and supervision, which made this dissertation possible.

The author is grateful to Dr. Md. Ali Ahammad Shoukat Choudhury, whose proper guidance brought this thesis work to light even while the author was working for Chevron Bangladesh on a national rotator shift schedule (14/14).

The author gratefully acknowledges Dr. Syeda Sultana Razia, whose kind help and guidance enabled the completion and write-up of this thesis.

The author would also like to thank Dr. Syed Farid Ahmed for his proper guidance in completing the thesis work.

The author owes a debt of gratitude to Professor I. M. Mujtaba, University of Bradford, UK, whose unceasing research in neural networks and whose support have been a continual challenge and inspiration.

The author would like to thank Mr. Shafiq of the Brahmanbaria gas processing plant (BGFCL) for providing live plant data, which is gratefully acknowledged.

Finally, the author would like to thank his family members, all his Chevron Bangladesh colleagues and all his friends for their support and encouragement during the research work.


Table of contents

Abstract…………………………………………………………………………..............v Acknowledgements……………………………………………………………………..vii List of Figures……………………………………………………………………...........xi List of Tables…………………………………………………………………………...xiii Nomenclature………………………………………………………………………. ….xiv 1. INTRODUCTION

1.1 Background...............................................................................................................1

1.2. Problem Statement…….…………………………………………………………..3 1.3 Objective and Scope Research…………...………………………………………..4 1.4 Organization of the Thesis…………………………….……………………….…..5 2. LITERATURE REVIEW 2.1Introduction...............................................................................................................6

2.2 Principal of fault ......................................................................................................7 2.3 Quantitative fault detection method………………………………………….….....9 2.3.1 Process monitoring and Diagnosis……………………………………………10 2.4 Neural Network......................................................................................................11 2.4.1Neural Network Classification….………………………………………….....11 2.4.2 Neural Network Application in Chemical and Process Engineering……........12

2.4.3 Basics to artificial neural networks………………………..............................14

Page 9: Fault Detection and Diagnosis of Brahmanbaria Gas

ix

3.0 Process modeling using neural network 3.1 Data preprocessing………………………………………………………………..22

3.2 Data Cleaning…………………………………………………………………….23

3.3 Normalization of Input and Output Data Sets……………....................................24

3.4 Coding for Data Pre-processing…..........................................................................26

3.5 ANN Structure………………………………………………….…………….…...27

3.5.1 Structure Selection…………………………….………...................................27

3.5.2 Sizing the Network Structure………………………………………………..29

3.5.3 Algorithm for Optimum Network Structure………………………………..30 3.6 Selection of Proper Transfer Function……………………………………..….....32

3.7 Initializing the Weight Factor Distribution……………………............................34

3.8 Selection of ANN Parameters………………………………................................34

3.9 Train the Network…………………………………………………………….….37 3.10 Validation & Testing………………………………..……………………….….37 4. PLANT SIMULATION 4.1 Introduction………………………………………………………………………38

4.2 Process description of Brahmanbaria gas processing plant (BGFCL), Brahmanbaria, Bangladesh…...........................................................................................39

4.3 Steady state simulation of Brahmanbaria Gas Processing Plant………….…..….40

4.4 Dynamic state simulation conversion from steady state simulation of Brahmanbaria Gas Processing Plant…………………………………………………….41 4.5 Dynamics Monitoring…………………………………………………………….42

Page 10: Fault Detection and Diagnosis of Brahmanbaria Gas

x

4.6 Steady State Data and Dynamic Data…………………………………………….47

4.7 Simulation & Validation…………………………………………………………..48

4.8 Summary…………………………………………………………………………..52 5. ANN Fault detection and diagnosis modeling of Brahmanbaria gas

processing plant

5.1 Introduction………………………………………………………………………53

5.2 ANN Model Development of Brahmanbaria Gas Processing Plant……………..54

5.3 ANN Based Data Prediction for Brahmanbaria Gas Processing Plant (Normal Modeling)……………………………………………………………………………57

5.4 Neural Network Fault detection scheme………………………………………...59 5.5 Result……………………………………………………………………………66 6.0 CONCLUSION AND RECOMENDATION FOR FUTURE WORK

6.1 Conclusion………………………………………………………………………76 6.2 Recommendation for future work………………………………………………77 REFFERENCES…………………………………………………………………..78 Publication………………………………………………………………………….80 Appendix A…………………………………………………………………………81 Appendix B…………………………………………………………………………85 Appendix C…………………………………………………………………………87

List of Figures

2.1 Time-dependency of faults
2.2 Basic models of faults: (a) additive fault, (b) multiplicative fault
2.3 Classification of Neural Network
2.4 Components of a biological neuron
2.5 Model of an artificial neuron
2.6 A representation of a simple 3-layer feed-forward ANN
2.7 Several activation functions
3.1 Preprocessing and post-processing within the network object
3.2 The hyperbolic tangent function superimposed over the sigmoid function
4.1 Steady state HYSYS simulation
4.2 Temperature profile of the fractionation column, normal operation
4.3 Disturbance profile of the fractionation column during feed valve full close
4.4 Disturbance profile of the fractionation column during feed valve full open
4.5 Disturbance profile of the fractionation column during separator valve full close
4.6 Dynamic state simulation of Brahmanbaria Gas Processing Plant (BGFCL)
5.1 Online NN based fault detection system
5.2 Offline NN based fault detection
5.3 Neural network backpropagation training scheme
5.4 NN based fault detection system
5.5 Algorithm to extract weights and biases from the optimized network
5.6 Normal operation mode by HYSYS
5.7 Tower valve disturbance operation mode by HYSYS
5.8 Normal operational trends
5.9 Training of the NN fault detection system
5.10 Statistical regression analysis of NN predicted data with fault
5.11 Statistical regression analysis of NN predicted data with fault
5.12 Statistical regression analysis of NN predicted test data

List of Tables

3.1 Hierarchy of Artificial Neural Networks
4.1 Sales gas composition
4.2 Steady state and dynamic state data comparison
4.3 Steady state and dynamic state data comparison
4.4 Steady state and dynamic state data comparison
4.5 Steady state and dynamic state data comparison
4.6 Steady state and dynamic state data comparison
5.1 Input-output parameters
5.2 Types of operation modes/disturbance criteria for neural network analysis
5.3 NN architecture for different conditions


Nomenclature

ANN Artificial Neural Network

ANNs Artificial Neural Networks

NN Neural Network

AI Artificial Intelligence

FF Feedforward

FFANN Feed-Forward Artificial Neural Network

FLNs Functional Link Networks

RBFNs Radial Basis Function Networks

RBF Radial Basis Function

AF Activation Function

BA Backpropagation Algorithm

BP Backpropagation

FFBPN Feed-forward Backpropagation Network

LM Levenberg-Marquardt

RFTS Rapid Foundry Tooling System

FTA Fault Tree Analysis

BGPP Brahmanbaria Gas Processing Plant

HP & LTS High Pressure & Low Temperature Separator

FMEA Failure Mode and Effect Analysis


HAZOP Hazard and Operability

PHA Process Hazard Analysis

PSM Process Safety Management


Chapter 1

INTRODUCTION

1.1 Background

The chemical and petrochemical industry in Bangladesh is becoming larger and more complex. Many of the gas fields are operated by a government organization, Bangladesh Gas Fields Company Limited (BGFCL), while the rest are operated by international oil companies such as Chevron, KrisEnergy and Santos. Process safety management has not developed in proportion to the growth of the chemical and petrochemical industry, and as a consequence several accident histories are on record (Lee, 1996). The study of fault detection and diagnosis plays a vital role in reducing the occurrence of sudden, disruptive or dangerous outages, equipment damage and personal accidents, and assists operations with the maintenance program.

Research on quantitative fault detection systems has gained increasing interest in recent times, not only for economic reasons but, more importantly, because such a system functions as a safety mechanism. The application of an advanced controller for fault detection helps to reduce the probability of accidents and losses resulting from human or mechanical error. Real time fault detection enables the supervisory control system to intervene rapidly and prevent an incipient fault from escalating into a process incident or accident, thereby preventing process outages and potential losses. The disasters at Bhopal and Chernobyl are prominent examples of why an advanced controller can play a vital role in preventing an incident from happening in the first place.

Fault detection is essentially a pattern recognition problem, in which a functional mapping from the measurement space to a fault space is calculated. A wide variety of techniques have been proposed to detect and diagnose faults. Generally, there are three different options available for approaching a fault diagnosis problem: state estimation methods, statistical process control methods, and knowledge-based methods.

The emergence of artificial intelligence (AI) also plays a role in the development of control systems. The AI approach focuses on imitating the rational thinking of humans (Lee, 2006). AI systems such as fuzzy logic, neural networks and genetic programming have been integrated with conventional control systems to produce intelligent controller systems. A neural network, a type of knowledge-based system, possesses many desirable and preferred properties for chemical process fault diagnosis. These properties include its ability to learn from examples, extract salient features from data, reason in the presence of novel, imprecise or incomplete information, tolerate noisy and random data, and degrade gracefully in performance when encountering data beyond its range of training (Venkatasubramanian, 2003).

Reviewing the development of neural network fault detection and diagnosis systems, the

general trend in research is to increase the robustness of the system to un-modelled

patterns, realize fast and reliable diagnosis in dynamic processes and dynamically filter

noisy data used for detection (Hamid, 2004).

Aspen HYSYS solves the critical engineering and operating problems that arise

throughout the lifecycle of a chemical process, such as designing a new process,

troubleshooting a process unit or optimizing operations of a full process like an Acrylic

Acid plant. The process simulation capabilities of Aspen HYSYS enable engineers to

predict the behavior of a process using basic engineering relationships such as mass and

energy balances, phase and chemical equilibrium, and reaction kinetics. With reliable

thermodynamic data, realistic operating conditions and the rigorous Aspen HYSYS

equipment models, they can simulate actual plant behavior. In this case, the intelligent controller will help the operator handle various abnormal conditions or faults more reliably, efficiently and quickly. The implementation of a neural network for fault detection in the Brahmanbaria gas processing plant is proposed here. A steady state simulation has been built using the plant operating parameters of the Brahmanbaria gas processing plant (Sultana R, Syeda., Ahmed, Suman., Rahman, Md Bazlur., Mehfuz, Omit., and Shamsuzzaman, Razib., 2008). The steady state simulation data are compared with the plant live data (Kamruzzaman, 1999). The dynamic state simulation model helps in understanding the real plant behavior (HYSYS 3.2), and the dynamic state simulation data are compared with the steady state simulation data.


1.2 Problem Statement

As we head toward the future, advances in knowledge and technology are contributing to improvements in the reliability, safety and efficiency of fault detection and diagnosis systems. Such a system is very important because it can prevent accidents, failures and disasters and save many lives. Today, safety and health are a crucial agenda in developing and managing technical processes. As a result, the development of neural networks in various fields, especially in fault detection, has shown great progress. Neural networks have the potential to be developed further and applied in chemical plants such as the Brahmanbaria gas processing plant.

Furthermore, MATLAB 7.0 is used to model and simulate the neural network for monitoring and supervising the Brahmanbaria gas processing plant. MATLAB is a high-performance language for technical computing that is widely used in engineering to solve mathematical and technical problems. Thus, this research focuses on fault detection in the Brahmanbaria gas processing plant using a neural network, and examines how, and to what extent, a neural network can contribute to overcoming failure and fault problems in the plant.


1.3 Objective and Scope of Research

The main aim of this research is to develop a fault detection system using neural network analysis. Using the Brahmanbaria gas processing plant as the case study, the implementation of a neural network will help the controller detect faults more efficiently.

The work covered the following scope:

To develop a steady state model of the Brahmanbaria gas processing plant using HYSYS and validate it using real plant data.

To develop a dynamic model to understand the real plant behavior and

disturbances.

To develop a NN based fault detection and diagnosis system.

The possible outcome of the thesis is a fault detection and diagnosis tool for the Brahmanbaria gas processing plant.


1.4 Organization of the Thesis

Chapter 1 is the introduction to this thesis. A summary of the background, the problem statement, the objective and scope of the research, and the organization of the thesis is included in this chapter.

Chapter 2 provides an overview of quantitative fault detection systems and artificial neural networks; their uses in the chemical and petrochemical industries and feed-forward and backpropagation systems are briefly discussed.

Chapter 3 depicts the model development process using ANN; data pre-processing, data cleaning, normalization of the input and output data sets, coding for data pre-processing, the ANN structure, and the training, validation and testing systems are described.

Chapter 4 describes the Brahmanbaria gas processing plant and illustrates the steady state and dynamic state simulations carried out using plant design data and the HYSYS simulator. It then shows the validation of the steady state simulation data against plant live data and explains the conversion from the steady state plant simulation to the dynamic state plant simulation. It also shows how dynamic disturbances are monitored using the dynamic simulation. Finally, this chapter shows how different faults are constructed in the HYSYS dynamic simulation and how data are collected for the artificial neural network analysis.

Chapter 5 presents the ANN fault detection and diagnosis model development process and provides the results of the simulation and experimental studies after applying the proposed methods. The observed results are also discussed.

Chapter 6 states the conclusions drawn from the current work and suggests possible directions for future research.


Chapter 2

LITERATURE REVIEW

2.1 Introduction

In the area of plant-wide control at the supervisory level, the process fault detection and diagnosis system plays a key role. Fault detection, isolation, and recovery (FDIR) is a subfield of control engineering concerned with monitoring a system, identifying when a fault has occurred, and pinpointing the type of fault and its location. Two approaches can be distinguished: direct pattern recognition of sensor readings that indicate a fault, and analysis of the discrepancy between the sensor readings and the expected values derived from some model. In the latter case, a fault is typically said to be detected if the discrepancy, or residual, rises above a certain threshold. Fault detection usually includes fault diagnosis and fault correction. Fault diagnosis is the identification of the root causes of a process upset, while fault correction is the provision of recommended corrective actions to restore the process to its normal operating condition.
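To make the residual idea above concrete, a minimal MATLAB sketch of threshold-based detection is given below; the numeric values are illustrative assumptions, not plant data.

    % Illustrative residual check (values assumed, not taken from the Brahmanbaria plant):
    expected  = 95.0;    % value predicted by a process model
    measured  = 101.3;   % sensor reading
    threshold = 5.0;     % tolerance chosen for this variable
    residual  = measured - expected;
    if abs(residual) > threshold
        disp('Fault detected: residual exceeds threshold')
    else
        disp('No fault: residual within tolerance')
    end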

In this regard, appropriate real-time actions must be taken in present-day chemical and petrochemical manufacturing plants. The technical personnel in most of these industries are responsible for monitoring the process status, detecting abnormal events, diagnosing the root causes and administering proper intervention to bring the process back to normal operation.

A large variety of techniques for fault detection have been proposed in the literature in recent times. Due to the broad scope of the process fault diagnosis problem and the difficulties in its real-time solution, various computer-aided approaches have been developed over the years (Himmelblau and Hussain, 1978). These cover a wide variety of techniques, from early attempts using fault trees and digraphs to analytical approaches and, in more recent studies, knowledge-based systems and neural networks. From a modeling perspective, these methods require either accurate process models, semi-quantitative models, or qualitative models.


Neural networks have been studied very intensively, and new architectures and learning algorithms are being developed all the time. Even though present neural network models do not achieve human-like performance, they offer interesting means for pattern recognition and classification. Traditional pattern recognition involves a large collection of very different mathematical tools (preprocessing, feature extraction, final recognition), and in many cases it is difficult to say what kind of tool would best fit a particular problem. Neural networks make it possible to combine these steps, because they are able to extract the features autonomously. They are practical to use because they are nonparametric. It has also been reported that the accuracy of neural classifiers is better than that of traditional ones.


2.2 Principle of Fault

A fault is defined as an unpermitted deviation of at least one characteristic property of a variable from acceptable behavior (Isermann, 1997). Himmelblau (1978), meanwhile, defines a fault as a process abnormality or symptom, such as a high temperature in a reactor or low product quality. In general, a fault is a deviation from the normal operating behavior of the plant that is not due to a disturbance or set point change in the process, and which may cause performance deterioration, malfunctions or breakdowns in the monitored plant or its instrumentation. A fault is therefore a state that may lead to a malfunction or failure of the system. As shown in Figure 2.1, faults can be distinguished by their time dependency as abrupt faults, such as overheating and overpressure; incipient faults, such as a continuing overflow; and intermittent faults, such as a fault in a gear or valve.

Fig. 2.1: Time-dependency of faults: abrupt (a), incipient (b), and intermittent (c) (Isermann, 1997).


According to Gertler (1998), faults can be categorized as follows:

i. Additive process faults: unknown inputs acting on the plant which are normally zero. They cause a change in the plant outputs independent of the known inputs. Such faults are best exemplified by plant leaks and loads.

ii. Multiplicative process faults: gradual or abrupt changes in some plant parameters. They cause changes in the plant outputs that also depend on the magnitude of the known inputs. Such faults are best exemplified by the deterioration of plant equipment, such as surface contamination, clogging, or the partial or total loss of power.

iii. Sensor faults: differences between the measured and actual values of individual plant variables. These faults are usually considered additive (independent of the measured magnitude), though some sensor faults (such as sticking or complete failure) may be better characterized as multiplicative.

iv. Actuator faults: differences between the input command of an actuator and its actual output. Actuator faults are usually handled as additive, though some kinds (such as sticking or complete failure) may be described as multiplicative.


2.3 Quantitative Fault Detection Method

There are three common approaches to quantitative fault detection: state estimation approaches, the statistical process control approach, and knowledge-based approaches:

i. State estimation approaches: these can estimate unmeasurable parameters, provided they are observable; however, they require an exact process model. A fundamental model of the process is combined with on-line measurements to provide on-line, recursive estimates of the underlying theoretical states of the process.

ii. Statistical process control approach: its capability to model nonlinear relationships is limited by the basis functions assumed in the regression.

iii. Knowledge-based approaches: these use expert systems or artificial intelligence methods to process data. In rule-based expert systems, the process model is represented by a set of qualitative and quantitative governing rules. Another method is the neural network (NN) based system. An NN is used to model complex relationships between inputs and outputs or to find patterns in data. It serves as a pattern recognizer, identifying the process fault by reasoning based on generalization of a data set. First, the different fault situations are used for training, validation and testing; after the learning process, the network can predict faults.

One of the knowledge-based approaches uses a neural network. Neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data. ANNs are attractive because of their information processing characteristics, such as nonlinearity, high parallelism, fault tolerance, and the capability to generalize and handle imprecise information (Basheer and Hajmeer, 2000). These characteristics have made ANNs suitable for solving a variety of problems. In the fault detection case, the neural network serves as a pattern recognizer, identifying the process fault by reasoning based on generalization of a data set. With parallel computation and the ability to adapt to changes, a neural network is a good choice for a fault detection system. NN analysis is one of the intelligent tools widely used for fault detection and diagnosis, and it can handle large amounts of data that many other techniques cannot.
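As an illustration of this pattern-recognition view, the sketch below trains a small feed-forward network in MATLAB to map measurement patterns to a fault/no-fault label. The data, layer sizes and functions are assumptions for demonstration only, not the configuration developed later in this thesis; the Neural Network Toolbox is required.

    % Hypothetical example: P holds measurement patterns (one sample per column),
    % T holds labels (0 = normal, 1 = fault).
    P = rand(5, 200);                                    % placeholder measurement patterns
    T = double(sum(P, 1) > 2.5);                         % placeholder fault labels
    net = newff(minmax(P), [10 1], {'tansig','logsig'}, 'trainlm');
    net = train(net, P, T);                              % supervised (backpropagation) training
    y  = sim(net, P);                                    % outputs between 0 and 1
    isFault = y > 0.5;                                   % threshold to obtain fault / no-fault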

2.3.1 Process Monitoring and Diagnosis

In process monitoring, neural network systems have been successfully applied to reduce the cost of emission monitoring in modern chemical industries (Chementator, 2000). AlphaMOS, France, and Neural Computer Sciences, Southampton, UK, have teamed up to develop intelligent odor-sensing systems using neural networks, which makes them suitable for online process monitoring, such as continuously monitoring perfumes in soaps and cosmetic bases and aromas in the food industry.

The electronic nose called FOX 2000, developed by AlphaMOS, is being used for monitoring odors from sulfur compounds and natural gas (which is odorless to the human nose). The next generation of electronic noses will be equipped with hybrid arrays of different types of sensors. For example, sensors incorporating conducting polymers, which are more discriminating than metal oxide silicon, were expected to be commercially available in mid-2005. Sensors based on surface acoustic waves allow monitoring of small molecules of toxic gases in the range of 1-10 parts per billion. The devices can typically acquire, analyze and recognize a sample within seconds, compared with the 30-60 minutes required for conventional chromatography tests (Chementator, 2000; Blanchar, 1994).

Fujitsu and Nippon Steel Corporation, Japan, have implemented a neural network system for monitoring and detecting process faults in the continuous casting of steel. In such an operation, the molten steel enters a water-cooled mold. The outside surface of the continuous slab of steel gradually solidifies, but it must be kept continuously moving and contained within the walls of the continuous caster. On occasion, a processing defect called breakout occurs.


2.4 Neural Network

The use of neural networks (NNs) in all aspects of process engineering activities, such as modeling, design, optimization and control, has increased considerably in recent years (Venkatasubramanian, 2003). Different NN based techniques (architectures, training methods) have been adopted in different fields of science to overcome the difficulties of first-principles modeling. The non-linear relationship between the input and output of a system can be built up cost-effectively by NNs.

2.4.1 Neural Network Classification

Neural networks can generally be separated into two groups (Lee, 2006):

i. Supervised neural networks: networks operating with supervised learning and training strategies, which cover the majority of ANNs, such as the Hopfield network, FFBPN (feed-forward backpropagation network), RBF (radial basis function) networks, etc.

ii. Unsupervised neural networks: networks that do not need any supervised learning and training strategies, including all kinds of self-organizing, self-clustering and learning networks such as SOM, ART (Adaptive Resonance Theory), and so on.

Fig. 2.3: Classification of Neural Network (Patterson, 1996). The ANN system structure is grouped into single-layer ANNs, multi-layer ANNs and recurrent ANNs, with examples including the Adaline, Perceptron, Hopfield network, LVQ, Madaline, FFBPN, RBFN, Neocognitron, BAM, ART and the Boltzmann machine.


A feed-forward network is one in which the data flow from the input to the output units. The data processing can extend over multiple layers of units, but no feedback connections are present. Recurrent networks, in contrast, do contain feedback connections. Unlike feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the network evolves to a stable state in which these activations no longer change. In other applications, the changes of the activation values of the output neurons are significant, such that the dynamical behavior constitutes the output of the network.

2.4.2 Neural Network Application in Chemical and Process Engineering

A realistic process model is very complicated and time-consuming to build because it involves many non-linear relations; it may even be unachievable when the basic mechanism is not understood. Neural networks can learn a non-linear relation from examples (input-output pairs) and solve the problem easily (Montague et al., 1994). NNs have been used extensively in chemical engineering over the years, for example in process modeling, adaptive control, model based control, hybrid process monitoring, fault detection and diagnosis, dynamic modeling, parameter estimation, process flowsheet simulation, on-line process optimization and visualization, error detection, data reconciliation, process analysis, oil and gas exploration, manufacturing, process control, product design and analysis, visual quality inspection systems, machine analysis, project bidding, and the dynamics of chemical process systems.

In chemistry, neural networks determine molecular structure by comparing data obtained from spectroscopic analysis. In process control, NNs determine the complex relationship between the controlled and manipulated variables by comparing data obtained from process monitoring and fault detection. NNs have shown great promise in recent years for solving problems that have proven difficult for standard techniques using digital computers. An NN is inherently parallel in structure, like the human brain, and has the capability to store knowledge from analyzing information. In recent history, NNs have become very popular among researchers from a wide range of disciplines, i.e. aerospace, automotive, transportation, telecommunications, electronics, robotics, speech, financial, insurance, securities, banking, manufacturing, oil and gas, medical and defense (Hagan et al., 1996).

Applications of neural networks to bio-processing and chemical engineering have increased significantly since 1988. One of the first applications of a neural network was to fault diagnosis of a chemical reactor system (Anderson, 1992). Since then, the number of research publications on neural network applications in bio-processing and chemical engineering has risen significantly. Neural computing provides a good overview of the potential applications of neural networks, as listed below (Jacobsson, 2001):

i. Classification: use input values to determine the classification, e.g. is the input the letter A, is the blob in the video data a plane, and what kind of plane is it?

ii. Prediction: ANNs have been shown to be successful predictive tools, predicting whether some event will occur, the time at which an event will occur, or the level of some event outcome. To predict with an acceptable level of accuracy, an ANN must be trained with a sizable set of examples of past pattern/future outcome pairs. The ANN must then be able to generalize and extrapolate from new patterns to predict associated outcomes. Many process industries now use ANNs extensively for production, stock selection, predicting the time-series temperature profile of reactors, and portfolio management (Bishop, 1996).

iii. Data association: like classification, but it also recognizes data that contain errors, e.g. not only identifying the characters that were scanned but also identifying when the scanner is not working properly.

iv. Data conceptualization: analyze the inputs so that grouping relationships can be inferred, e.g. extract from a database the names of those most likely to buy a particular product.

v. Data filtering: smooth an input signal, e.g. take the noise out of a telephone signal (Hoskins, 1988).

vi. Optimization: ANNs have been used for a number of problems that require finding an optimal or near-optimal solution, e.g. the scheduling of manufacturing operations, finding the shortest of all possible routes, profit maximization, and minimization of some cost function under a set of constraints (Bishop, 1996).

Some integrated product formulation and optimization systems, such as CAD/Chem, consist of neural networks to estimate the properties of a given product formulation; an expert system to run the formulation model repeatedly, in essence asking many what-if questions relating to product formulation; a set of user-defined design goals that apply a fuzzy-logic-inspired method to specify rankings and preferences of product properties and constraints on ingredients, processing and costs; and a product optimizer to drive the repeated what-if trials in the direction required to meet these goals (Patterson, 1995). Commercial applications of the CAD/Chem system have included the optimal design of formulated products such as coatings, plastics, rubber, specialty polymers and building materials (Islam and Hossain, 2001).

2.4.3 Basics of Artificial Neural Networks

Artificial Intelligence (AI)

The term neural network originated from artificial intelligence (AI) research, which attempts to understand and model brain behavior. According to Barr and Feigenbaum (1981): "Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems that exhibit characteristics we associate with intelligence in human behavior." This definition simply states that the goal of AI is to make computers "think", that is, to make them solve problems requiring human intelligence. Focusing on the means of achieving this goal, Buchanan and Shortliffe (1983) offer another definition of AI:


"Artificial intelligence is the branch of computer science dealing with symbolic, non-algorithmic methods of problem solving." This second definition emphasizes two aspects of AI-based methods for problem solving. First, AI deals with non-algorithmic problem solving, in contrast to a formal procedure that specifies a step-by-step execution path guaranteed to reach a correct or optimal solution at some point. Second, AI involves symbolic processing, a branch of computer science that deals with non-numerical symbols and names. In contrast, the more classical numerical processing deals with numerical calculation.

Artificial Neural Networks (ANNs)

Artificial neural networks (ANNs) are simplified models of the central nervous system. They are networks of highly interconnected neural computing elements that have the ability to respond to input stimuli and to learn to adapt to the environment. Robert Hecht-Nielsen defines neural networks as follows: "A neural network is a computing system made up of a number of simple, highly interconnected nodes or processing elements, which process information by its dynamic state response to external inputs." To achieve good performance, neural networks employ a massive interconnection of simple computing cells referred to as "neurons" or "processing units". The following definition is given by Aleksander and Morton (Haykin, 1999): "A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:

i. Knowledge is acquired by the network through a learning process.

ii. Inter-neuron connection strengths known as synaptic weights are used to store the knowledge."

The procedure used to perform the learning process is called a learning algorithm, the function of which is to modify the synaptic weights of the network in an orderly fashion so as to attain a desired design objective. Neural


networks are also referred to in the literature as neural computers, connectionist

networks, parallel distributed processors, etc. (Haykin, 1999).

Neuron

In the human brain, neurons within the nervous system interact in a complex fashion. A typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons. When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes. In simulating biological neurons, the essential features of neurons and their interconnections are first deduced, and a computer is then programmed to simulate these features. However, knowledge about neurons is incomplete and computing power is limited, so models are necessarily gross idealizations of real networks of neurons. A neuron is an information-processing unit that is fundamental to the operation of a neural network. It behaves as an activation or mapping function and produces an output when the cumulative effect of the input stimuli exceeds a threshold value. Figure 2.5 shows the model of a neuron. We may identify three basic elements of the neuron, as described here (Haykin, 1999):


Fig. 2.4: Components of a biological neuron

Fig. 2.5: Model of an artificial neuron

i. A set of synapses, or connecting links, each of which is characterized by a weight or strength of its own. This weight is a sort of filter forming part of the linkage connecting the input to the neuron; it models the synaptic connection in biological nets, acting to either increase (excitatory input) or decrease (inhibitory input) the signal.

ii. An adder for summing the input signals, weighted by the respective synapses of the neuron; the operation described here constitutes a linear combiner.


iii. An activation function for limiting the amplitude of the output of a neuron. The activation function is also referred to in the literature as a squashing function, in that it squashes (limits) the permissible amplitude range of the output signal to some finite value.

A detailed mathematical model of a neuron is shown in Figure 2.6. Models may

include an externally applied threshold/bias that has the effect of lowering the net

input of the activation function. On the other hand, the net input of the activation

function may be increased by employing a bias term rather than a threshold. In

mathematical terms, we may describe a neuron m by writing the following pair of

equations:

r_m = \sum_{n=1}^{N} w_{mn} x_n + b_m        (2.1)

y_m = f(r_m)        (2.2)

where x_1, x_2, ..., x_N are the inputs; w_{m1}, w_{m2}, ..., w_{mN} are the synaptic weights of neuron m; r_m is the linear combiner output; b_m is the bias term; f is the activation function; and y_m is the output signal of the neuron. A scalar input x is transmitted through a connection that multiplies it by the scalar weight w to form the product wx, again a scalar. The neuron in equation 2.1 has a bias b_m, but the bias term may be omitted depending on the situation. The bias can be viewed either as simply being added to the product wx at the summing junction, or as shifting the function f to the left by an amount b_m. The bias is much like a weight, except that it has a constant input of 1. The net input to the transfer function, r_m, again a scalar, is the sum of the weighted inputs and the bias b_m. This sum is the argument of the transfer function f, which takes the argument r_m and produces the output y_m. Note that w and b are both adjustable scalar parameters of the neuron.
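A small numerical illustration of equations 2.1 and 2.2, with arbitrarily chosen weights and inputs, can be written in MATLAB as follows (tansig is the toolbox form of the hyperbolic tangent activation):

    x = [0.2; 0.7; 0.1];     % inputs x1..xN (assumed values)
    w = [0.5 -0.3 0.8];      % synaptic weights w_m1..w_mN (assumed values)
    b = 0.1;                 % bias term b_m
    r = w*x + b;             % linear combiner output r_m, equation 2.1
    y = tansig(r);           % neuron output y_m, equation 2.2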


Feed-Forward Artificial Neural Network (FANN)

The power of a single neuron can be greatly amplified by using multiple neurons in a network with a layered connectionist architecture. Neurons layered in such a way form what is called a feed-forward artificial neural network, abbreviated to FANN. A feed-forward network has a layered structure. Feed-forward artificial neural networks include MLPs, functional link networks (FLNs) and radial basis function networks (RBFNs) (Looney, 1996).

MLPs are perhaps the most popular network architecture in use for many problems. This is the type of network shown in Figure 2.6. The network has a simple interpretation as a form of input-output model, with the weights and thresholds (biases) as the free parameters of the model. Such networks can model functions of almost arbitrary complexity, with the number of layers and the number of units in each layer determining the function complexity. Important issues in MLP design include specification of the number of hidden layers and the number of units in these layers (Neural Networks, 1984-2003).

Fig. 2.6: A representation of a simple 3-layer feed-forward ANN


On the left is the layer of inputs, or branching nodes, which are not artificial neurons. A feature vector x = (x_1, ..., x_N) representing a pattern enters the input layer on the left, with each component x_n entering one and only one input node. From each nth input (branching) node, the component x_n fans out to each of the M neurons in the middle layer. Thus each mth hidden (middle) neuron has a fan-in of all N input

components. As each x_n enters the mth neuron of the hidden layer, it is modified by multiplication with the synaptic weight w_{mn} of that connection line. All resulting products w_{mn} x_n at the mth hidden neuron are summed over n to yield:

r_m = \sum_{n=1}^{N} w_{mn} x_n        (2.3)

and

y_m = h(r_m)        (2.4)

is the activation output. Doing the same for the output layer, we have

s_j = \sum_{m=1}^{M} u_{mj} y_m        (2.5)

and

z_j = g(s_j)        (2.6)

Backpropagation Algorithm: The Levenberg-Marquardt Method

A single-layer network has severe restrictions: the class of tasks that can be accomplished is very limited. A two-layer feed-forward network overcomes many of these restrictions, but early work did not present a solution to the problem of how to adjust the weights from the input to the hidden units. The central idea behind this solution is that the errors for the units of the hidden layer are determined by backpropagating the errors of the units of the output layer. For this reason the method is often called the backpropagation learning rule.
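Although the text does not spell it out, the weight adjustment behind this learning rule is commonly written as the standard gradient-descent update

    \Delta w_{mn} = -\eta \, \frac{\partial E}{\partial w_{mn}}, \qquad w_{mn} \leftarrow w_{mn} + \Delta w_{mn}

where \eta is the learning rate and E is the error function defined in equation 2.7 below; this is a textbook form of gradient descent rather than a detail taken from this thesis.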

Understanding the Backpropagation

The backpropagation supervised learning algorithm is used to find weights in

multilayer feedforward networks. The backpropagation algorithm is conceptually

simple. Following each input data vector, the network performance is evaluated on the

target values in the validation set. The errors resulting from the comparison of the


actual and target output values are propagated backward through the network, and

the weight values are adjusted to minimize error. With respect to neural networks,

the performance criterion is the minimization of squared error. Therefore, the total

system error is expressed as follows:

E = \frac{1}{2} \sum_{n} \varepsilon_n^2 = \frac{1}{2} \lVert \boldsymbol{\varepsilon} \rVert^2        (2.7)

where \varepsilon_n is the error of the nth pattern, and \boldsymbol{\varepsilon} is a vector with elements \varepsilon_n.

The problem of learning in neural networks is formulated in terms of the minimization of the error function E. This error is a function of the adaptive parameters (weights and biases) of the network. Many learning laws are in common use; most are variations of the best-known and oldest learning law, Hebb's rule. Research into different learning functions continues, and new ideas routinely show up in trade publications. Some researchers have the modeling of biological learning as their main objective, while others are experimenting with adaptations of their perceptions of how nature handles learning. Either way, our understanding of how neural processing actually works is very limited, and learning is certainly more complex than the simplifications represented by the learning laws currently developed (Anderson, 1992).

The backpropagation algorithm is the most practical and commonly used model for neural networks. Some of the well-known types of backpropagation algorithm are (Jacobsson, 2001):

i. Gradient descent with adaptive learning rate backpropagation: a network training function that updates weight and bias values according to gradient descent with an adaptive learning rate.

ii. Gradient descent with momentum and adaptive learning rate backpropagation: a network training function that updates weight and bias values according to gradient descent with momentum and an adaptive learning rate.

iii. Levenberg-Marquardt backpropagation: a network training function that updates weight and bias values according to Levenberg-Marquardt optimization.
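In the MATLAB Neural Network Toolbox, these three variants correspond to the training functions 'traingda', 'traingdx' and 'trainlm'. A minimal sketch, assuming an input matrix P and a target matrix T already exist, is:

    % Any of the training functions can be passed to newff (or set via net.trainFcn):
    net = newff(minmax(P), [10 1], {'tansig','purelin'}, 'trainlm');  % Levenberg-Marquardt
    % net.trainFcn = 'traingda';   % gradient descent with adaptive learning rate
    % net.trainFcn = 'traingdx';   % gradient descent with momentum and adaptive rate
    net.trainParam.epochs = 1000;  % maximum number of training epochs
    net.trainParam.goal   = 1e-4;  % stop when the mean squared error reaches this goal
    net = train(net, P, T);        % run backpropagation training
    Y   = sim(net, P);             % simulate the trained network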


Chapter 3

MODEL DEVELOPMENT USING ANN

An artificial neural network is used for model development. Four steps are followed to develop the model; these four steps in the network design process are laid down below:

i. Assemble and pre-process the raw data to obtain the training data.

ii. Create the network object.

iii. Train the network.

iv. Simulate the network response to new inputs.

Neural network design requires a lot of hard work: the collected raw data are pre-processed before being used in network training, the network object is created, performance is monitored, parameters are adjusted, connections are added and rules are modified, and so on, until the network achieves the desired results.

3.1 Data Pre-Processing

The objective of data pre-processing is to produce the training set of the NN, which represents the relationship between the network inputs and outputs. Pre-processing means that the existing data are processed (in some way) before the network is trained on them. By pre-processing the data, the problem may be made much more suitable for the network. The major tasks in data pre-processing are (Williams, 2000): data cleaning, and normalization of the input and output data sets.


3.2 Data Cleaning

Many data sets are imperfect due to the presence of missing values and noise. Incomplete data arise when data values are not available at the time of collection, from differences between the time when the data were collected and when they are analyzed, and from human, hardware or software problems. Noisy data come from the processes of data collection, entry and transmission. Inconsistent data come from different data sources and from functional dependency violations. To handle data imperfection, data cleaning algorithms must be developed which can fill in missing values, identify outliers and smooth out noisy data, correct inconsistent data, and resolve redundancy caused by data integration. The various methods that can be used to meet data cleaning requirements are thoroughly discussed in the data mining literature. In this section a program is written to identify outliers and remove them before the data are used to train a network.
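The program itself is not reproduced in this extract; one possible cleaning rule, sketched below, drops any sample that lies more than three standard deviations from the mean of any variable (X is assumed to be an n_samples-by-n_variables matrix):

    mu    = mean(X);                                  % column-wise means
    sigma = std(X);                                   % column-wise standard deviations
    dev   = abs(X - repmat(mu, size(X,1), 1));        % absolute deviation of each entry
    keep  = all(dev <= 3*repmat(sigma, size(X,1), 1), 2);
    Xclean = X(keep, :);                              % keep rows within three sigma everywhere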

3.3 Normalization of Input and Output Data Sets

This normalization is very critical: if the input and output variables are not of the same order of magnitude, some variables may appear more significant than they otherwise would. The training algorithm is then forced to compensate for order-of-magnitude differences by adjusting the network weights, which is not very effective for


many training algorithms, e.g. the backpropagation algorithm. For example, if input variable 1 has a value of 10000 and input variable 2 has a value of 10, the weight assigned to the second variable going into a node of hidden layer 1 must be greater than that of the first variable for it to have any significance. In addition, a typical transfer function, such as the sigmoid function, cannot distinguish between two different input values when both are large, because they yield essentially identical output values of 1.0. For example, using the sigmoid function, when x_i = 5 we find f(x_i) = 0.993.

This section describes three normalization procedures. The first normalizes each variable x_i in the data set to between 0 and 1 by dividing its value by the upper limit of that variable, x_{i,max}, to give a normalized variable x_{i,norm}:

x_{i,norm} = x_i / x_{i,max}        (3.1)

When x_i ranges between 1500 and 4500 and a normalization factor of x_{i,max} = 5000 is assigned, the normalized values fall between 0.3 and 0.9. One limitation of this method is that it does not utilize the entire range of the transfer function; more importantly, when the data oscillate between high and low values a uniform distribution is not ensured. Equation 3.1 illustrates that only a small portion of the transfer function corresponds to x_{i,norm} values of 0.3 to 0.9 (or -0.3 to -0.9). The weight factors can broaden and shift this range to include a larger region of the transfer function.

However, as the number of variables and weight factors increases, these adjustments become more difficult for the training algorithm. As a result, this normalization method is adequate for many simple networks, but problems can arise as the network architecture becomes more complex. The second method expands the normalization range so that the minimum value of the normalized variable x_{i,norm} is set at 0 and its maximum value is set at 1. The normalized variable x_{i,norm} is defined using the minimum and maximum values of the original variable, x_{i,min} and x_{i,max} respectively:

x_{i,norm} = (x_i - x_{i,min}) / (x_{i,max} - x_{i,min})        (3.2)


This method significantly improves on the first by using the entire range of the transfer function, as equation 3.2 illustrates. Another benefit of this method is that every input variable in the data set has a similar distribution range, which improves training efficiency.

The third technique normalizes the data set between limits of -1 and +1, with the average value set at 0. We call this technique the zero-mean normalization method and represent the normalized variable x_{i,norm} by:

x_{i,norm} = (x_i - x_{i,avg}) / R_{i,max}        (3.3)

and

R_{i,max} = \max(x_{i,max} - x_{i,avg},\; x_{i,avg} - x_{i,min})        (3.4)

where x_i is an input or output variable, x_{i,avg} is the average value of the variable, x_{i,min} and x_{i,max} are its minimum and maximum values, and R_{i,max} is the maximum range between the average value and either the minimum or the maximum value.

As in the second method, the zero-mean method utilizes the entire range of the transfer function, and every input variable in the data set has a similar distribution range. Moreover, this method gives some meaning to the values of the normalized variable: 0 represents the normal state of the variable, -1 represents a very low level of the variable and +1 represents a very high level of the variable.

In addition, by setting all of the normal states of the variables to zero, the network will always have a standard structure that makes training more consistent from one problem to the next and also more efficient. Specifically, all networks should normally predict output responses of approximately 0 (the nominal value) for a set of input variables at their nominal values of 0. Therefore, the network essentially only has to be trained to relate deviations in the input variables to deviations in the output variable.
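For concreteness, a minimal MATLAB sketch of the zero-mean method of equations 3.3 and 3.4 could look as follows; the function and variable names (zeroMeanNormalize, x) are illustrative and not taken from the thesis code.

```matlab
function xNorm = zeroMeanNormalize(x)
% Zero-mean normalization (equations 3.3 and 3.4).
% Each row of x is one variable, each column is one sample.
xAvg = mean(x, 2);                       % average value of each variable
xMin = min(x, [], 2);
xMax = max(x, [], 2);
rMax = max(xMax - xAvg, xAvg - xMin);    % Ri,max: larger of the two half-ranges
n    = size(x, 2);
xNorm = (x - repmat(xAvg, 1, n)) ./ repmat(rMax, 1, n);
% Result: 0 = normal (average) state, -1 = very low, +1 = very high
end
```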


3.4 Coding for Data Pre-processing

The accuracy of the network entirely depends on the data that are used to train the

network. During pre-processing, it is important that the data cover the range of inputs

for which the network will be used. Multilayer networks can be trained to generalize

well within the range of inputs for which they have been trained. However, they do

not have the ability to accurately extrapolate beyond this range, so it is important that

the training data span the full range of the input space. Sigmoid transfer functions are generally used in the hidden layers. These functions become essentially saturated when the net input is greater than three. If this happens at the beginning of the training process, the gradients will be small and the network training will be very slow. In the first layer of the network, the net input is the product of the input and the weight, plus the bias, so the transfer functions saturate when the raw inputs are large in magnitude. Normalizing the data before applying them to the network is therefore standard practice. Generally, the normalization

step is applied to both the input vectors and the target vectors in the data set. In this

way, the network output always falls into a normalized range. The network output

can then be reverse transformed back into the units of the original target data when

the network is put to use in the field. It is easiest to think of the neural network as

having a preprocessing block that appears between the input and the first layer of the

network and a post-processing block that appears between the last layer of the

network and the output, as shown in the following figure.

Fig. 3.1: Preprocessing and Post-processing within network object.

Processing functions are assigned to the input and output of a network through net.inputs{1}.processFcns and net.outputs{2}.processFcns, where the indices 1 and 2 refer to the first input vector and the output returned from a two-layer network. Following the data transformation and integration steps, all data are normalized to a range between 0.15 and 0.85. Afterwards, all pre-processed input and output data are arranged in array form; a sketch of such a MATLAB script is shown below.
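The original listing is not reproduced here; the following is only a sketch of the pre-processing it describes (scaling to 0.15-0.85 followed by a 50/25/25 split), with assumed variable names such as rawIn and rawOut (one variable per row, one sample per column).

```matlab
lo = 0.15;  hi = 0.85;
n  = size(rawIn, 2);

% Scale every row (variable) to 0-1, then shift into the 0.15-0.85 range
scale01 = @(x) (x - repmat(min(x,[],2), 1, size(x,2))) ./ ...
               repmat(max(x,[],2) - min(x,[],2), 1, size(x,2));
inNorm  = lo + (hi - lo) * scale01(rawIn);    % normalized inputs
outNorm = lo + (hi - lo) * scale01(rawOut);   % normalized targets

% Split the samples: 50% training, 25% testing, 25% validation
nTrain = round(0.50 * n);
nTest  = round(0.25 * n);
trainIn  = inNorm(:, 1:nTrain);                trainOut = outNorm(:, 1:nTrain);
testIn   = inNorm(:, nTrain+1:nTrain+nTest);   testOut  = outNorm(:, nTrain+1:nTrain+nTest);
valIn    = inNorm(:, nTrain+nTest+1:end);      valOut   = outNorm(:, nTrain+nTest+1:end);
```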

In this step, the data set is divided into 50% for training, 25% for testing and 25% for validation.

3.5 ANN Structure

After getting the training data set, the neural network can be built. To do this the

network structure should be defined first. Defining the network structure includes:

Network structure selection

Sizing the network structure

Training the neural network

3.5.1 Structure Selection

There is no well-defined procedure or rule to be used in building a neural network.

Data preprocessing – structure selection – network sizing – network training steps

are interrelated, and the designer needs to establish methods to develop a set of competing neural network models and a performance measure to select the best one.

Artificial neural networks are a set of several different models featuring a wide

variety of different architectures, learning strategies and applications. At present

several types of networks specialized in carrying out various tasks are distinguished.

Table 3.1 categorizes the different types of artificial neural networks.

Category 1 networks are the most powerful, versatile, and reliable nonlinear

classifier recognizers (Looney, 1997). The networks in Category 2 may be trained to

some extent to adjust the field of attraction for the different classes, but are not yet

sufficiently reliable or efficient. Category 3 networks are self-organizing and

perform linearly separable clustering of data. The nature of the problem we are trying

to solve determines which neural network will be employed.


MLPs are perhaps the most popular network architecture in use today; this network

has a simple interpretation as a form of input-output model, with the weights and biases as the free parameters of the model. Such networks can model functions of almost arbitrary complexity. Most researchers who use artificial neural networks use the feed-forward type, and MLPs in particular (Looney, 1997). Basing the network topology selection on this, this thesis applies MLP-type neural networks to different chemical engineering modeling problems. Network sizing and network training, discussed in the following sections, are mainly concerned with issues related to MLPs.

Table 3.1: Hierarchy of Artificial Neural Networks

Category   Type                          Network types
1          Feedforward (FANNs)           MLPs (multiple-layered perceptrons); FLNs (functional link networks); RBFNs (radial basis function networks); LVQNs (learning vector quantization networks)
2          Recurrent (RNNs)              Hopfield networks (random serial)
3          Self-organizing maps (SOMs)   Kohonen's SOFMs (self-organizing feature maps); Specht's probabilistic networks; Bezdek's fuzzy c-means networks; hybrid learning vector quantization; Grossberg's ART networks (adaptive resonance theory); SOLVQNs (self-organizing LVQNs)

3.5.2 Sizing the network structure

Let N be the number of input branching nodes, M be the number of hidden neurons, J

the number of output neurons, Q be the number of exemplar vectors for training. The

network architecture is determined by the numbers N, M, and J. The main questions in

sizing a neural network are:

How many layers of neurons to use?

How many input nodes to use?

How many neurons in the hidden layers should we use?

How many neurons should we use in the output layer?

The number of layers to use is guided by the Hornik-Stinchcombe-White result, which states that a feed-forward artificial neural network with two layers of neurodes and a non-constant, non-decreasing activation function at each hidden neurode can approximate any piecewise continuous function from a closed bounded subset of Euclidean N-dimensional space to Euclidean J-dimensional space with any pre-specified accuracy, provided that sufficiently many neurodes are used in the single hidden layer.

The data determine N, J and Q. The number N of input nodes must equal the number of features in the feature vectors, so that once a set of features is chosen, N is fixed. Similarly, the required outputs fix the number J of output neurons. The remaining task in network sizing is to set the number of hidden neurons, M. Unfortunately, there is currently no universal guideline for determining the optimal number of hidden neurons; the selection is often the result of empirical experimentation combined with trial and error.

Selection of Proper Transfer Function

Three main transfer functions used in network training are the sigmoid, the hyperbolic tangent and the purelin (linear) function. The sigmoid and hyperbolic tangent transfer functions perform well for prediction and process forecasting, and the hyperbolic tangent generally outperforms the sigmoid. As biological and chemical processing systems become more complex and nonlinear, the advantages of the hyperbolic tangent transfer function become more apparent. When the hyperbolic tangent function is superimposed over the sigmoid function, as Figure 3.2 shows, two features distinguish the hyperbolic tangent function:

i. The slope of the hyperbolic tangent function is much greater than the slope of the

sigmoid function.

ii. The hyperbolic tangent function has a negative response for a negative input value and

a positive response for a positive input value, while the sigmoid function always has a

positive response.

The fact that the hyperbolic tangent function has the greater slope means that it shows a

greater response to a small deviation in the input variable. Therefore, it can better distinguish

between small deviations in the input variable and can generate a much more nonlinear

response.

The second main feature of the hyperbolic tangent transfer function (the output response has the same sign as the input value) is critical for network nodes. This feature gives some meaning to a node's output value: 0 represents the normal (average) state of a node, -1 represents a very low response level and +1 represents a very high response level.

With the zero-mean normalization method and the hyperbolic tangent transfer function, the network should normally predict output responses of approximately 0 (the nominal value) for a set of input variables at their nominal values of 0. Within this structure, when the input variables are nominally 0, the inputs to the nodes of the first hidden layer are also 0. Consequently, the outputs of those nodes are 0 when using a hyperbolic tangent transfer function.

Similarly, the inputs and outputs of the remaining hidden layers and the output layer are also 0. In

short, before any training takes place, the network already correctly predicts the nominal case

and essentially only has to be trained for deviations from that case. In comparison, a 0 input to

a sigmoid transfer function produces an output response of 0.5, which means that the network

must also adjust the initial weights to train the nominal case.

Fig.3.2: The hyperbolic tangent function superimposed over the sigmoid function.
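A small illustrative MATLAB snippet (not part of the thesis code) makes the two differences concrete: at the nominal input of 0 the hyperbolic tangent already returns the nominal output 0, whereas the sigmoid returns 0.5, and for small deviations the hyperbolic tangent keeps the sign of the input and responds more strongly.

```matlab
sigmoid = @(x) 1 ./ (1 + exp(-x));   % logistic sigmoid, output in (0, 1)
hypTan  = @(x) tanh(x);              % hyperbolic tangent, output in (-1, +1)

x = [-0.2 0 0.2];                    % small deviations around the nominal value
disp([x; sigmoid(x); hypTan(x)]);
% sigmoid: 0.45  0.50  0.55  -> always positive, offset of 0.5 at the nominal input
% tanh:   -0.20  0.00  0.20  -> same sign as the input, steeper slope at the origin (1 vs 0.25)
```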

3.7 Initializing the weight Factor Distribution

Prior to training a neural network, one must first initialize the weight factors, wij, between

the nodes of the hidden layers. The weight factors are set randomly with either a uniform or

Gaussian distribution. Here, a Gaussian distribution was found effective for the case study. For neural networks that are relatively simple, the initial distribution of the weight factors is not

particularly critical. The initial distribution set by the NN Tool of MATLAB, for instance,

almost always performs adequately in network training. The initial weight-factor distribution

is normally set to a fairly narrow range and allowed to broaden using high learning rates and

high momentum coefficients in the early stages of the training process. Problems can occur,


however, when using very large data sets and/or complex network architectures.

For complex networks, the weight-factor distribution does not broaden much during network training, and the initial values are therefore set to coincide with the range of our normalized input and output variables.

3.8 Selection of ANN Parameters

The ANN topology corresponds to a feed-forward multilayer perceptron, the most common ANN architecture. Multilayer feed-forward ANNs have two different phases: a training phase (sometimes also referred to as the learning phase) and an execution phase. In the training phase the ANN is trained to output a specific value when given a set of inputs. This is done using a set of input/output pairs called the training set.

An ANN training algorithm uses a set of parameters that determine how the weights between the layers of the ANN are adjusted. If these parameters are chosen properly, the training will adjust the weights successfully and the ANN will perform well on the training set. This is why it is extremely important to determine these parameters correctly.

This part proposes a way to detect which parameters affect the training the most. This can be done by trying different sets of parameters and then measuring the performance on the training set and on the validation set. With these two measures, one can use a feature selection algorithm to find those variables (parameters) that are most correlated with the results obtained. The performances on the training and validation sets are easily obtained by testing the trained ANN on both sets and then calculating the absolute percentage error.

Parameter Influence Determination picks an ANN with the same architecture, initial weights

on its layers and the same training algorithm but with a different set of parameters every time

and trains it for 1000 epochs. The number of epochs was chosen arbitrarily and is sufficiently large that any erratic behaviour of the training algorithm will become apparent.

This particular set of parameters is then added as a row to an observation matrix that the feature selection algorithm needs as an input. Similarly, the performance on the training set and the performance on the validation set are added as entries of two different vectors. At the end, the variable selection algorithm solves the systems given by equations 3.5 and 3.6, where A is the observation matrix and b is the performance on either the training or the validation set:

A x = bT 3.5

A x = bV 3.6

The variable selection algorithm obtains the coefficient vector x that best fits the performance vector bT or bV. Even more important than the coefficients x is the order of importance assigned by the feature selection algorithm to the columns of the matrix A. The columns of A are the parameters of the learning algorithm, so the column selected first by the feature selection algorithm is the parameter most correlated with the performance of the ANN.

This procedure is carried out on the training and validation sets. One has to keep in mind that the procedure varies the parameters: certain parameters have well-known default values, as is the case for the learning rate, and the combinations of parameters should contain these default values as well as others that may lead to better results. Obviously, the time needed to train the network must be taken into consideration, so one cannot test an infinite number of different parameter values. The procedure is performed for a time series as follows:

1. Creation of a set containing sets of different values for the parameters.

2. Training of an ANN with a set of values for the parameters and the time series.

3. Evaluation of the performance of the ANN in the training and validation set.

4. Adding this set of parameters as a row of the feature selection input matrix.

5. Addition of the two measurements obtained in 3 to the input vectors.

6. If there is still a set with values that has not been evaluated go to 2.

7. Run the feature selection algorithm two times, each time with the matrix formed in 4 and one vector created in 5.
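A minimal sketch of this loop is given below. It assumes a simple two-parameter grid (learning rate and momentum), the older newff syntax of the MATLAB 7-era Neural Network Toolbox, the illustrative trainIn/trainOut/valIn/valOut arrays from the Section 3.4 sketch, a 9-neuron hidden layer, and plain correlation (corrcoef) as a stand-in for the unspecified feature selection algorithm.

```matlab
% One row per trial: [learning rate, momentum]  (illustrative values only)
paramSets = [0.01 0.5; 0.01 0.9; 0.05 0.5; 0.05 0.9; 0.10 0.5; 0.10 0.9];
A = [];  bT = [];  bV = [];

for k = 1:size(paramSets, 1)
    net = newff(minmax(trainIn), [9 size(trainOut,1)], {'tansig','purelin'}, 'traingdm');
    net.trainParam.lr     = paramSets(k, 1);
    net.trainParam.mc     = paramSets(k, 2);
    net.trainParam.epochs = 1000;
    net.trainParam.show   = NaN;                 % suppress training displays
    net = train(net, trainIn, trainOut);

    errT = abs(sim(net, trainIn) - trainOut);    % absolute errors, training set
    errV = abs(sim(net, valIn)   - valOut);      % absolute errors, validation set

    A  = [A;  paramSets(k, :)];                  % observation matrix of equations 3.5-3.6
    bT = [bT; mean(errT(:))];
    bV = [bV; mean(errV(:))];
end

% Simple stand-in for the feature selection step: rank parameters by correlation
cT = corrcoef([A bT]);   cV = corrcoef([A bV]);
disp(cT(1:end-1, end));  % correlation of each parameter with training performance
disp(cV(1:end-1, end));  % correlation of each parameter with validation performance
```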


3.9 Train the Network

Once a network has been structured for a particular application, that network is ready to be

trained. There are two approaches to training - supervised and unsupervised. Supervised

training involves a mechanism of providing the network with the desired output either by

manually "grading" the network's performance or by providing the desired outputs with the

inputs. Unsupervised training is where the network has to make sense of the inputs without

outside help.

The vast bulk of networks utilize supervised training. Un-supervised training is used to

perform some initial characterization on inputs. However, in the full blown sense of being

truly self-learning, it is still just a shining promise that is not fully understood, does not

completely work, and thus is relegated to the lab (Anderson, 1995).

3.10 Validation & Testing

Artificial neural networks are increasingly used as non-linear, non-parametric prediction

models for many engineering tasks such as pattern classification, control and sensor

integration. Neural network models are data driven and therefore resist analytical or

theoretical validation. Neural network models are constructed by training using a data set, i.e.

the model alters from a random state to a “trained” state, and must be empirically validated.

The evaluation and validation of an artificial neural network prediction model are based upon

one or more selected error metrics. Generally, neural network models which perform a

function approximation task will use a continuous error metric such as mean absolute error

(MAE), mean squared error (MSE) or root mean squared error (RMSE). The errors will be

summed over the validation set of inputs and outputs, and then normalized by the size of the

validation set. Some practitioners will also normalize to the cardinality of the output vector if

there is more than one output decision, so the resulting error is the mean per input vector and

per output decision.
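As a minimal illustration (pred and target are assumed matrices of network outputs and targets, one output per row and one validation sample per column), these metrics can be computed as:

```matlab
err  = pred - target;
nVal = size(target, 2);                      % size of the validation set
nOut = size(target, 1);                      % cardinality of the output vector

mae  = sum(abs(err(:))) / (nVal * nOut);     % mean absolute error per sample and output
mse  = sum(err(:).^2)   / (nVal * nOut);     % mean squared error
rmse = sqrt(mse);                            % root mean squared error
```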


Chapter 4

PLANT SIMULATION

4.1 Introduction

In recent years, studies on plant design, control and optimization have been carried out on plant simulations in order to generate better control systems and to optimize the process. The gas processing plant was simulated in both steady state and dynamic state with the HYSYS simulator. The Brahmanbaria gas processing plant contains several standard unit operations that are typical of many chemical plants; in particular, the plant has one distillation unit operation. This chapter focuses on the steady state and dynamic state simulation of the Brahmanbaria gas processing plant and on its fault detection and diagnosis using a neural network.

4.2 PROCESS DESCRIPTION OF BRAHMANBARIA GAS

PROCESSING PLANT (BGFCL), BRAHMANBARIA, BANGLADESH

Fig.4.1: Process Block Diagram for process description.

Wet gas comes from two wells through pipelines. The two wells have different capacities, and their total capacity is 40 MMscfd. As the gas contains different types of contaminants, it requires processing. The gas first goes to an air cooler (AC-101), which reduces the gas temperature and pressure. Because the gas contains heavier hydrocarbons and water, it flows through two-phase vertical separators (V-100 and V-101), where liquid separates from the gas. The overhead product gas goes to the shell side of the gas/gas exchanger (E-201), where it loses temperature. After the gas/gas exchanger (E-201) the gas goes to the second two-phase vertical separator (V-101), where it again loses pressure and temperature, and the overhead product passes through the tube side of the gas/gas exchanger (E-201). The bottom liquid product from the two-phase separators is stored in the liquid storage tank (V-102). From the gas/gas exchanger the gas goes to the sales line through the meter skid.

The heavier liquid collected from the two-phase vertical separators in the liquid storage tank (V-102) flows to the distillation column after pre-heating by a steam heater (E-100). The distillation column (D-501) has ten trays and is equipped with a condenser and a re-boiler unit. The feed liquid is fractionated according to the boiling-temperature differences to produce different products. The ratio of overhead product to bottom product depends on feed quality, feed capacity, number of trays and reflux ratio. In this plant, motor spirit (MS) amounts to 15-20% of the total feed and high speed diesel (HSD) to 80-85% of the total feed. The overhead and bottom products can be maximized or minimized by changing parameters such as the reflux ratio and temperature, since the number of trays and the feed are fixed.

4.3 Steady state simulation of Brahmanbaria Gas Processing Plant

The plant simulation was completed using the Brahmanbaria gas processing plant well-1 and well-7 gas compositions, flow rates, temperatures and pressures, along with the specifications of the different unit operations, such as the valves, coolers, vessels, heat exchangers, pumps and the distillation column. The whole-plant steady state simulation was configured in the HYSYS simulator.

Fig.4.2: Steady state HYSYS Simulation.


4.4 Dynamic state simulation conversion from steady state simulation of

Brahmanbaria Gas Processing Plant

The dynamic HYSYS simulation model has been developed from the steady state HYSYS model by adding unit operations such as PID controllers and control valves, sizing the column, and using the dynamics assistant. Valve operations have been added between the separator, mixer and column operations, and a heater operation has been added between the mixer and the column for dynamic simulation purposes; the installed heater allows the temperature of the feed entering the column to be varied.

Before running the simulation case in dynamic mode, the degrees of freedom of the flowsheet were reduced to zero by setting the pressure-flow specifications. It is also necessary to size the existing valves, vessels, coolers and heat exchangers in the main flowsheet and the column sub-flowsheet: sizing parameters such as the valve Cv value, the vessel volume and the cooler/heat exchanger K-value must be specified for these unit operations. The dynamics assistant can also set the sizing parameters of the equipment in the simulation flowsheet automatically.

Later on, the process parameters were monitored for normal and abnormal operations in the dynamic simulation.

Process simulation diagram for the dynamic state condition:

Fig.4.3: Dynamic state simulation of Brahmanbaria Gas Processing Plant (BGFCL)


4.5 Dynamics Monitoring

Different parameters are observed in the dynamic simulation for normal and abnormal conditions. A strip chart was created to monitor the general trend of the key variables; from the data book, all of the variables to be manipulated or modelled were added. In the dynamic simulation, the parameters can be observed by running the strip chart.

For the neural network fault detection and diagnosis analysis, data were created by considering different fault scenarios: tower feed valve (D-501) full open, tower feed valve (D-501) full close, and the high pressure separator (V-101) liquid outlet valve and low temperature separator (V-201) liquid outlet valve in full open and full close conditions.

A disturbance was created in the dynamic simulation utility of HYSYS to simulate the process condition with the fractionation column feed valve fully closed, and its trend was monitored. From the trend analysis shown in Figure 4.4, it is observed that the feed line temperature profile and the molar feed profile are disturbed. In the operating plant, the operations team faces difficulties during a fractionation column feed line valve problem: as a consequence, the top and bottom production become low, the re-boiler temperature becomes abnormal, and the unit trips at high-high temperature. In the long term, production is hampered and the lost production opportunity (LPO) has to be counted.

Fig. 4.4: Disturbance profile of fractionation column during feed valve full close.

A fault was also created by fully closing the high pressure separator liquid valve and the low temperature separator liquid valve in order to observe the tower feed trend. From the trend analysis in Figure 4.5, it is clear that the tower molar feed, temperature and pressure profiles are disturbed and the tower top and bottom products move into an abnormal range, whereas under normal conditions production stays within the design range. For this reason, the top and bottom production become low, the re-boiler temperature becomes abnormal, and the unit trips at high-high temperature. In the long term, production is hampered and the lost production opportunity (LPO) has to be counted.

Fig. 4.5: Disturbance profile of fractionation column during separators valve full close.

4.6 Steady State Data and Dynamic Data

The steady state simulation was built using the Brahmanbaria gas processing plant unit operations design data, well gas compositions, well flows, pressures, temperatures, etc., and the steady state simulation data were compared with the plant live data. The dynamic simulation was derived from the steady state simulation in the HYSYS simulator. In the dynamic simulation, different process parameters were monitored using strip charts, and sample data were collected for normal and abnormal operations for the NN analysis. From the comparison, it is found that the steady state and dynamic state simulation data are almost identical.

4.7 Simulation & Validation

The Brahmanbaria gas processing plant simulation data had to be validated against the actual process to ensure their reliability, accuracy and relevance. Several variables from the steady state and dynamic state simulations are compared with the actual plant data in Tables 4.1 to 4.6. The agreement is very good, and the Brahmanbaria gas processing plant simulation is shown to be a reliable and relevant plant simulation.

Table 4.1: Sales gas composition comparison

Component   Plant data   Steady state data   Dynamic state data
Methane     0.95649      0.95649             0.948688234
Ethane      2.59E-02     2.59E-02            2.63E-02
Propane     7.16E-03     7.16E-03            7.65E-03
i-Butane    1.82E-03     1.82E-03            2.12E-03
n-Butane    1.46E-03     1.46E-03            1.80E-03
i-Pentane   1.16E-03     1.16E-03            1.76E-03
n-Pentane   7.19E-04     7.19E-04            1.23E-03
n-Hexane    4.87E-04     4.87E-04            1.44E-03
n-Heptane   4.42E-04     4.42E-04            2.73E-03
n-Octane    8.94E-05     8.94E-05            1.29E-03
H2O         2.94E-05     2.94E-05            8.46E-04
CO2         1.66E-03     1.66E-03            1.66E-03
N2          2.56E-03     2.56E-03            2.53E-03

Table 4.2: Steady state and dynamic state data comparison

Stream           Parameter          Steady state data   Plant data   Dynamic state data
Sales gas        Flow (MMscfd)      39.17               39.17        39.7
Tower feed       Pressure (psia)    200                 200          199.61
Top product      Pressure (psia)    200                 200          198.35
                 Temperature (°F)   338                 338          304
Bottom product   Pressure (psia)    200                 200          199.24
                 Temperature (°F)   480                 482          482.98

Table 4.3: Steady state and dynamic state data comparison (sales gas)

                                Steady state               Dynamic state
Parameter                       Sales gas    Vapor phase   Sales gas   Vapor phase
Vapor/phase fraction            1            1             0.99        0.99
Temperature (°F)                39.38        39.38         90.31       90.31
Pressure (psia)                 1030         1030          1090        1090
Molar flow (MMscfd)             39.17        39.17         39.7        39.7
Mass flow (lb/hr)               73162.94     73162.94      76271.71    76254.1
Std ideal liq vol (bbl/d)       16182.04     16182.04      16553.65    16551.71
Molar entropy (Btu/lbmole-°F)   34.17        34.17         35.16       35.16

Table 4.4: Steady state and dynamic state data comparison (tower feed)

                                Steady state                              Dynamic state
Parameter                       Tower feed   Vapor phase   Liquid phase   Tower feed   Vapor phase   Liquid phase
Vapor/phase fraction            0.0013       0.00136       0.785          0.21         0.21          0.48
Temperature (°F)                247          247           24.73          106.25       106.25        106.25
Pressure (psia)                 200          200           200            199.61       199.61        199.61
Molar flow (MMscfd)             0.57         0.0078        0.45           0.27         0.006         0.13
Mass flow (lb/hr)               4350.11      156           4106.47        1703.8       129.73        1411.1
Std ideal liq vol (bbl/d)       452.51       3.34          433.52         181.2        26.51         143.5
Molar entropy (Btu/lbmole-°F)   17.16        38.37         18.4           24.08        39.96         23.21

Table 4.5: Steady state and dynamic state data comparison (top product)

                                Steady state            Dynamic state
Parameter                       Ovhd      Vapor phase   Ovhd-1    Vapor phase   Liquid phase
Vapor/phase fraction            1         1             1         1             0
Temperature (°F)                338.07    338.07        304.31    304.31        304.31
Pressure (psia)                 200       200           198.35    198.35        198.35
Molar flow (MMscfd)             0.49      0.49          0.23      0.23          0
Mass flow (lb/hr)               3445.95   3445.95       1181.92   1181.92       0
Std ideal liq vol (bbl/d)       364.48    364.48        130.43    130.43        0
Molar entropy (Btu/lbmole-°F)   51.2      51.2          47.67     47.67         41.84

Table 4.6: Steady state and dynamic state data comparison (bottom product)

                                  Steady state                               Dynamic state
Parameter                         Liquid prod   Vapor phase   Liquid phase   Liquid prod-1   Vapor phase   Liquid phase
Vapor/phase fraction              0.0033        0.0033        0.96           0.2             0.2           0.79
Temperature (°F)                  480.42        480.42        480.42         482.98          482.98        482.98
Pressure (psia)                   200           200           200            199.24          199.24        199.24
Molar flow (MMscfd)               0.0073        0.00244       0.0707         0.0042          0.00867       0.0033
Mass flow (lb/hr)                 904.16        30.06         874.1          521.86          107.39        414.46
Std ideal liq vol (bbl/d)         88.03         2.92          85.1           50.75           10.45         40.3
Molar entropy (Btu/lbmole-°F)     52.94         62.08         52.62          54.21           61.62         52.28
Liq vol flow @ std cond (bbl/d)   87.75         2.92          84.83          50.59           10.41         40.17

4.8 Summary

The steady state and dynamic state simulations were built following the Aspen HYSYS simulator guidelines. The feed well data for the steady state simulation were collected from Brahmanbaria gas processing plant data, and the vessel, mixer, cooler, pump, heat exchanger and distillation column data were collected from the plant specifications. Further information can be found in Sultana R, Syeda., Ahmed, Suman., Rahman, Md Bazlur., Mehfuz, Omit., and Shamsuzzaman, Razib., 2008, B.Sc. Design.

Chapter 5

ANN Fault detection and diagnosis modeling

5.1 Introduction

The use of Neural Networks (NNs), in all aspects of process engineering activities, such as

modeling, design, optimization and control, has considerably increased in recent years

(Mujtaba and Hussain, 2001). Different NN based techniques (architecture, training) have

been adopted in different fields of science to overcome the difficulties of first-principles based modeling. The non-linear relationship between the input and output of a system can be built up cost-effectively by NNs.

In this chapter, a wide range of non-linear data sets from the HYSYS-simulated Brahmanbaria gas processing plant is presented for training, testing and validation.

5.2 NN architecture

NN provides a non-linear mapping between input and output variables and is useful in providing cross-correlation among these variables without modeling and simulating the system. The mapping is performed by the use of processing elements and connection weights (Aldrich and Slater, 2001). The architecture of an NN consists of a number of layers, a number of neurons, transfer functions, weights and biases, and the way the layers are connected among themselves. With an increasing number of layers and neurons, the NN's capability of approximating complex functions increases (provided the data are not over-fitted). In process engineering, feed-forward networks, whose signals flow in the forward direction from the input units to the output units without feedback in their operation, are widely used because of their simplicity and the available mathematical algorithms to perform their function. A typical NN architecture is shown in Figure 5.1.

Figure 5.1: A Typical NN Architecture


5.3 ANN Model Developments

The neural network model comparison is mainly used to choose the optimum number of

neurons in the hidden layer. This program lets the user experiment with the effect of changing different network parameters such as:

Number of neurons in the hidden layer.

Data transformation.

Transfer function in the hidden layer.

Transfer function in the output layer.

Training algorithm.

Parameters of the given training algorithm.

The performance of a neural network during training is measured based on the mean squared

error. After completing a training step the overall performance of a model is measured. Using

only numeric values of the error functions is usually deceptive when judging whether the model is efficient or not. So, in the neural network modeling approach of this thesis, graphical outputs of the error distributions were also used to support the model selection step.
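A sketch of such a comparison loop is shown below, again assuming the older newff syntax of the MATLAB 7-era toolbox and the illustrative trainIn/trainOut/valIn/valOut arrays from the Section 3.4 sketch; the maximum of 15 hidden neurons and 300 epochs are arbitrary choices made only for illustration.

```matlab
maxNeurons = 15;
valMSE = zeros(maxNeurons, 1);

for h = 1:maxNeurons                             % baseline of 1 hidden neuron, then grow
    net = newff(minmax(trainIn), [h size(trainOut,1)], {'tansig','purelin'}, 'trainlm');
    net.trainParam.epochs = 300;
    net.trainParam.show   = NaN;
    net = train(net, trainIn, trainOut);
    e = sim(net, valIn) - valOut;
    valMSE(h) = mean(e(:).^2);                   % performance on the unseen validation set
end

plot(1:maxNeurons, valMSE, 'o-');
xlabel('Neurons in hidden layer'); ylabel('Validation MSE');
[bestMSE, bestH] = min(valMSE);                  % candidate optimum number of hidden neurons
```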


Figure 5.5: Algorithm to extract weights and biases from the optimized network. The flowchart steps are: data normalization/pre-processing; grouping the data into training, validation and testing sets; constructing a set of neural networks with a varying number of neurons in the hidden layer (baseline of 1 neuron); selecting the network architecture; selecting the training algorithm and stopping criteria; training and validating the network; post-processing the data and calculating the percentage absolute error and average absolute error for each network model; if the result is not satisfactory, repeating with another architecture, otherwise showing a graphical error evaluation of the different network models in terms of the number of neurons in the hidden layer and selecting the optimum network architecture (number of neurons in the hidden layer).

Neural Network model development for Brahmanbaria gas processing plant includes the

following steps:

Step 1: Data for the stated problem are collected from the simulated HYSYS model. Data integration is done manually before using the MATLAB programs, and the collected and integrated data are stored in a separate data file.

Step 2: Data transformation is done before starting the network training. The pre-processed data are divided into three different sets: a training set (50%), a testing set (25%) and a validation set (25%).

Step 3: The first section of the MATLAB program does the data transformation, network construction and network training, and selects the best model.

Step 4: The second section of the MATLAB program further analyzes the performance of the model selected in Step 3; the neural network description is then extracted and saved in a separate file.

The computer program referred to in Step 3 constructs a set of competing neural network models. After training them, it compares the performance of each model. Graphical outputs from this computer program are used to select the best neural network topology.

5.4 Neural Network Fault detection scheme

Any non-linear relationship between the input and output of a system can be captured effectively using NNs. NNs consist of a large number of primitive computational elements called neurons. NN based fault detection is based on the classification of historic process knowledge. The development of the NN based system involves selecting a suitable architecture to differentiate the faults of the process from the normal condition. The steps of the NN based fault detection system in this work are shown in Fig. 5.1 and Fig. 5.2, and the NN is trained, validated and tested as shown in Fig. 5.3 and Fig. 5.4. Historic data of 10 key variables, which reflect the different modes of operation (types of faults and normal), are fed to the input layer, and the modes of operation are fed to the output layer of the NN. The NN is trained, validated and tested using dynamic data. Several multi-layered feedforward neural networks with varying configurations and the Levenberg-Marquardt back propagation algorithm are employed in the training and testing process to obtain the optimum neural network architecture.

Fig. 5.2: Online NN based fault detection system. (Block diagram: input parameters and disturbances enter the Aspen HYSYS dynamic model; the key parameters monitored by the NN form the input signal to the input layer of the online neural network with fixed architecture; the output signal from the output layer identifies normal operation and the types of faults.)

Fig. 5.3: Offline NN based fault detection system. (Block diagram: input parameters and disturbances enter the Aspen HYSYS dynamic model; the key parameters of 1000-second simulations of the different states (normal and abnormal) are stored and fed to the input layer of the offline neural network for training, validation and testing; the output layer identifies normal operation and the types of faults.)

After the optimum network architecture is obtained, the offline process is completed. The fixed-architecture NN is then used for online testing: in the online mode, data are fed directly from the HYSYS simulator, and the fixed-architecture online NN identifies normal operation and the types of faults.

Fig. 5.4: Neural Network Back propagation Training Scheme.

The development of the neural network model follows the standard procedure of system identification. Generally, this procedure involves several steps in order to make sure that the model is properly developed:

Selection of input and output variable

Training data generation

Selection of network structure

Selection of training and validation

Selection of input and output variables

For the application of machine learning approaches, it is important to properly select the input variables, as ANNs are supposed to learn the relationship between input and output variables on the basis of the input-output pairs provided during training. In the neural network based fault detection model, the input variables represent the operating state of the pneumatic actuator, and the output is the normal or abnormal condition which may, in turn, indicate the faults. These normal and abnormal conditions are taken as the output of the ANN model.

Training Data Generation

The generation of training data is an important step in the development of ANN models. To

achieve a good performance of the neural network, the training data should represent the complete range of operating conditions of the pneumatic actuator and contain all possible fault occurrences.

Selection of Network Structure

To make a neural network perform a specific task, one must choose how the units are connected to one another. This includes the selection of the number of hidden nodes and the type of transfer function used. The number of hidden units is directly related to the capabilities of the network. For the best network performance, an optimal number of hidden units must be properly determined using a trial and error procedure.

The ANN model used here has two hidden layers of logarithmic sigmoid neurons, which receive the inputs and then broadcast their outputs to an output layer of linear neurons, which computes the corresponding values. The back propagation training algorithm, which propagates the error from the output layer back to the hidden layers to update the weight matrices, is most commonly used for feed forward neural networks.

The generated training data are normalized and applied to the neural network together with the corresponding outputs, so that it learns the input-output relationship. The neural network model was trained by a MATLAB program using the Neural Network Toolbox; based on this program, the feed forward neural network model is trained using the back propagation method. At the end of the training process, the model obtained consists of the optimal weight matrices and bias vectors. After training, the generalization performance of the network is evaluated with the help of the test data, which shows that the trained ANN is able to produce the correct output even for new inputs.
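The trained model is fully described by its weight matrices and bias vectors, which the MATLAB toolbox exposes on the network object. The sketch below is illustrative only: it assumes a toolbox net object with a single hidden layer (a second hidden layer would add net.LW{3,2} and net.b{3}) and the illustrative testIn/testOut arrays from the Section 3.4 sketch.

```matlab
% Extract the parameters that define the trained model
W1 = net.IW{1,1};    % weights from the inputs to the hidden layer
b1 = net.b{1};       % hidden layer biases
W2 = net.LW{2,1};    % weights from the hidden layer to the output layer
b2 = net.b{2};       % output layer biases

% Generalization check on unseen data
testPred = sim(net, testIn);
testMSE  = mean(mean((testPred - testOut).^2));
save('nn_fault_model.mat', 'W1', 'b1', 'W2', 'b2');   % model description saved to a separate file
```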

A multi-layered feed forward network (Fig. 5.2) has been employed along with the back propagation algorithm. The NN based fault detection system is trained, validated and tested using data generated with the dynamic model. Key process parameters and the fault states are fed to the NN. Data sets are sorted to avoid uneven distribution, and all data are scaled to the symmetrical range of -1 to +1. The data sets are divided into training, validation and test subsets. Several neural networks with varying configurations and various learning strategies are also employed in the training process. The following steps have been taken when developing the model for fault detection (Fig. 5.3).

Fig. 5.5 NN based fault detection system

Selection of training and validation

After the output variables had been selected, the neural network had to be trained and validated before being ready to be implemented on the Brahmanbaria gas processing plant. A neural network can be trained in two different styles. In incremental training, the weights and biases of the network are updated each time an input is presented to the network. In batch training, the weights and biases are only updated after all the inputs are presented. Training is important in order to obtain weights and biases that give estimates similar to the actual plant. Meanwhile, validation is a process of verifying the neural network using unseen data to test the reliability and robustness of the created network.
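In the toolbox these two styles correspond, roughly, to train (batch updates after the whole set is presented) and adapt (updates after each presented sample). The brief sketch below assumes an existing toolbox network object net and the illustrative trainIn/trainOut arrays; whether adapt is appropriate depends on the network's adaptation settings, so this is only an illustration of the two calling styles.

```matlab
% Batch style: weights and biases updated only after all inputs are presented
netBatch = train(net, trainIn, trainOut);

% Incremental style: inputs passed as a sequence (cell array, one sample per cell)
% so the weights and biases are updated after each presentation
Pseq = num2cell(trainIn, 1);
Tseq = num2cell(trainOut, 1);
netIncr = adapt(net, Pseq, Tseq);
```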


5.5 Result

This section presents the details of the development and testing of the ANN model for fault detection on control valves in the Brahmanbaria gas processing plant. The back propagation algorithm was implemented using the MATLAB 7 Neural Network Toolbox. Plant test data generation was designed and conducted in HYSYS, and the dynamic behavior of the Brahmanbaria gas processing plant was captured. Table 5.1 shows the design of the plant model (Fig. 4.3) within Aspen HYSYS. 1700 samples of the key parameters, covering the normal situation and each type of fault, were simulated using the Aspen HYSYS simulator. The data were divided into three sets: a training set, a validation set and a testing set. The key data (Table 5.2) of the normal operation mode are shown in chart form in Fig. 5.6, and a sample fault (Table 5.2) generated in the process plant using the dynamic model is shown in chart form in Fig. 5.7. However, visually distinguishing such data patterns is quite difficult.

Table 5.1: Input-output parameters

Stream name              S-101        S-302    Heater-501 inlet
Temperature (°F)         150          98       62
Pressure (psia)          3000         1024     19
Mass flow rate (lb/hr)   57112        39789    11314

Stream name              Tower feed   S 601    S 611
Temperature (°F)         62           14       350
Pressure (psia)          19           14       16
Mass flow rate (lb/hr)   11314        10113    1201

Table 5.2: Types of operation mode/disturbance criteria for the Neural Network analysis.

Columns (left to right): tower feed temperature [F]; S 601-1 pressure [psia]; tower feed pressure [psia]; S 601-1 temperature [F]; S 611-1 temperature [F]; S 611-1 pressure [psia]; S 302 molar flow [MMscfd]; S 302 pressure [psia]; S 401 temperature [F]; S 401 pressure [psia].

Normal operation:
121  10  19  214  225  20  24  75  101  10
121  10  19  214  225  20  24  75  104  10
119  10  17  188  225  20  25  75  104  10
121  10  19  214  225  20  24  75  104  10
119  10  17  188  225  20  25  75  104  10

Tower valve full open:
79  15  192  10  225  20  23  75  10  66
79  15  192  10  225  20  24  75  10  67
79  15  192  10  225  20  24  75  10  65
79  15  192  10  225  20  24  75  10  67
79  15  192  10  225  20  24  75  10  65

Tower valve full close:
181  7  -76  10  225  20  24  75  10  68
182  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68
182  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68

HP & LTS valve full open:
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374

HP & LTS valve full close:
153  18  219  10  225  20  24  75  10  166
153  18  219  10  225  20  24  75  10  166
152  18  218  10  225  20  24  75  10  166
152  18  218  10  225  20  24  75  10  166

Table 5.3: NN architecture for different conditions.

Operations Output neuron value Output neuron value by NN

Normal operation

1 0 0 0 0 1 0 0 0 0

1 0 0 0 0 1 0 0 0 0

1 0 0 0 0 1 0 0 0 0

1 0 0 0 0 1 0 0 0 0

1 0 0 0 0 1 0 0 0 0

Tower valve full open fault

0 1 0 0 0 0 1 0 0 0

0 1 0 0 0 0 1 0 0 0

0 1 0 0 0 0 1 0 0 0

0 1 0 0 0 0 1 0 0 0

0 1 0 0 0 0 1 0 0 0

Tower valve full close fault

0 0 1 0 0 0 0 1 0 0

0 0 1 0 0 0 0 1 0 0

0 0 1 0 0 0 0 1 0 0

0 0 1 0 0 0 0 1 0 0

0 0 1 0 0 0 0 1 0 0

HP & LTS valve full open fault

0 0 0 1 0 0 0 0 1 0

0 0 0 1 0 0 0 0 1 0

0 0 0 1 0 0 0 0 1 0

0 0 0 1 0 0 0 0 1 0

0 0 0 1 0 0 0 0 1 0

HP & LTS valve full close fault

0 0 0 0 1 0 0 0 0 1

0 0 0 0 1 0 0 0 0 1

0 0 0 0 1 0 0 0 0 1

0 0 0 0 1 0 0 0 0 1

0 0 0 0 1 0 0 0 0 1


The data for each key parameter, as used by the NN based fault detection system, are fed to the neural network input layer (Table 5.2). The types of operation mode/disturbance criteria (Table 5.2) are associated with the output layer neurons of the NN based system. The output layer neuron for a given state is set equal to 1 when that state is present and equal to 0 when it is absent (Table 5.3). Data sets are sorted to avoid uneven distribution. All data are scaled to the symmetrical range of -1 to +1.
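A short sketch of how such 0/1 target vectors can be assembled for the five operation modes is given below; modeIndex (the mode label of each sample) and the other names are illustrative assumptions, not the thesis code.

```matlab
% Operation modes, in the order used in Table 5.3
modes = {'Normal', 'Tower valve full open', 'Tower valve full close', ...
         'HP & LTS valve full open', 'HP & LTS valve full close'};

% modeIndex: 1 x nSamples vector holding the mode (1-5) of each sample
targets = zeros(numel(modes), numel(modeIndex));
for k = 1:numel(modeIndex)
    targets(modeIndex(k), k) = 1;     % 1 for the present state, 0 for all others
end

% After simulation, the detected mode is the output neuron with the largest value
[score, detected] = max(sim(net, testIn), [], 1);
```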

Fig.5.6: Normal Operation mode by HYSYS


Fig.5.7: Tower valve disturbance Operation mode by HYSYS

Training of the NN fault detection system is shown in Fig. 5.8. The statistical regression plots (Fig. 5.9 and Fig. 5.10) between the predicted and target data of the different operation modes are plotted. The network architecture is updated until the regression value is close to 1. The optimum network for this work is found with 9 neurons in the hidden layer.

However, if the set point of normal operation is changed to one whose response is not similar to the training patterns, the NN fault detection system may not classify it correctly as normal operation. Hyperbolic tangent functions are used in the input and first hidden nodes, and linear functions are used in the second hidden layer and output nodes.

The NN based fault detection system (Fig. 5.4) is trained, validated and tested using data generated with the dynamic model. The different valve faults are labelled before being sent to the NN. The neural network relates the measurements to the faults and discriminates between normal and abnormal states. The output neurons of the NN are set between the values of 0 and 1 depending on the type of fault (Fig. 5.4).

The design considerations of the neural network fault detection are shown in Fig. 5.5. Data sets are sorted to avoid uneven distribution. All data are scaled to the symmetrical range of -1 to +1. The data sets are divided into training, validation and test subsets. Several neural networks with varying configurations and various learning strategies are also employed in the training process.

In this work, a three-layered feed forward network has been employed along with the back propagation algorithm. The input layer contains 4 nodes, the 1st hidden layer 4, the 2nd hidden layer 1 and the output layer 4 nodes. Hyperbolic tangent functions are used in the input and 1st hidden nodes, and linear functions in the 2nd hidden layer and output nodes. The input and output architecture is shown in Tables 5.2 and 5.3.

Fig.5.8: Training of NN Fault detection system


Fig.5.9: Statistical regression analysis of NN predicted data with fault

Fig.5.10: Statistical regression analysis of NN predicted data with fault

Predictions by the optimum NN within the training range follow the expected trends and are within engineering accuracy (Fig. 5.11 and Tables 5.2 and 5.3). This shows that the optimum network is able to predict the types of fault (here, liquid control valve failure leading to product quality loss) even when the network is presented with new inputs (test data).

Training of the NN fault detection system is shown in Fig. 5.8. The statistical regression plot (Fig. 5.11) between the predicted and target data is examined to ensure that the generated results are satisfactory. Predictions by the different NNs within the training range follow the expected trends and are within engineering accuracy.

Fig. 5.11: Statistical regression analysis of NN predicted test data


Chapter 6

6.0 CONCLUSION AND RECOMMENDATION

Overview

A fault detection system is one of the main elements of safety measures in a chemical plant. It is remarkable that such a small system can make such a big difference to the safety, reliability and cost effectiveness of the process. Neural networks have information-processing characteristics such as nonlinearity, high parallelism and fault tolerance, as well as the capability to generalize and handle imprecise information.

6.1 Conclusion

The development of neural networks in various fields, especially in fault detection, has shown great progress, and the priority of fault detection and diagnosis systems in the chemical and petrochemical industry has increased. The implemented neural network provided reliable predictions, and fault detection for the Brahmanbaria gas processing plant was successfully developed.

In this research, preliminary results show that the NN based method successfully detects faults in the Brahmanbaria gas processing plant. The plant behavior and disturbances were studied using the HYSYS dynamic model, and a feedforward NN based fault detection system was developed to identify fault (disturbance) and no-fault (normal) conditions in plant operation. The NN based fault detection system was trained, validated and tested using the dynamic model data, and the preliminary results show that it is able to identify realistic fault outputs. We believe that NN based fault detection will help to avoid accidents and productivity losses in the gas processing industry in Bangladesh and will help operators to identify and monitor multiple faults in real time.

In future, more faults and multiple faults need to be accommodated to represent the real plant situation.

6.2 Recommendation for future work

Although the development of the neural network can be considered successful, there are still areas and aspects that can be improved in future work:

In this thesis, preliminary results show that the NN based method successfully detects the faults of the Brahmanbaria gas processing plant. In future, more faults and multiple faults need to be accommodated to represent the real plant situation.

HYSYS dynamic simulation data were used to perform the neural network analysis of the Brahmanbaria gas processing plant; it would be better to use plant live data for the best fault detection and diagnosis analysis.

The fault detection and diagnosis method should also be compared with expert systems such as fuzzy logic.

REFERENCES

Lees, Frank P., 1996. Loss Prevention in the Process Industries, 2nd edn, Chap. 9, pp. 364-398, Butterworth-Heinemann / Reed Educational and Professional Publishing, London.

Venkatasubramanian, V., Rengaswamy, R., Kavuri, S. N., and Yin, K., 2003. "A review of process fault detection and diagnosis Part III: Process history based methods," Computers and Chemical Engineering, vol. 27, pp. 327-346.

Mohd. Kamaruddin Bin Abd. Hamid, 2004. Multiple faults detection using artificial neural network, Master's thesis, Universiti Teknologi Malaysia, Malaysia.

Sultana R, Syeda., Ahmed, Suman., Rahman, Md Bazlur., Mehfuz, Omit., and Shamsuzzaman, Razib., 2008. "Simulation of Brahmanbaria gas processing plant," B.Sc. Design, Department of Chemical Engineering, Bangladesh University of Engineering & Technology, Dhaka, Bangladesh.

Kamruzzaman, S., 1999. "Simulation of Kailashtilla II Gas Processing Plant," M.Sc. thesis, Bangladesh University of Engineering & Technology, Dhaka.

HYSYS, HYSYS 3.2 user guide, Hyprotech Ltd, http://www.hyprotech.com/

Himmelblau, D. M. and Hussain, M. A., 1978. Fault Detection and Diagnosis in Chemical and Petrochemical Processes, Vol. 8, Elsevier Scientific Publishing Company, Berlin, Amsterdam.

Himmelblau, D. M., 1978. Fault Detection and Diagnosis in Chemical and Petrochemical Processes, Elsevier Scientific Publishing Company, Amsterdam, Oxford, New York.

Isermann, R., 1997. "Supervision, fault-detection and fault-diagnosis methods - an introduction," Control Engineering Practice, vol. 5, no. 5, pp. 639-652.

Gertler, J. J., 1998. Fault Detection and Diagnosis in Engineering Systems, Marcel Dekker, New York.

Basheer, I. A. and Hajmeer, M., 2000. "Artificial neural networks: fundamentals, computing, design, and application," Journal of Microbiological Methods, vol. 43, pp. 3-31.

Chementator, 2000. "Neural networks optimize chemical production," Chemical Engineering, vol. 97, no. 8, p. 29.

Blanchar, D., 1994. "Applied AI News," AI Magazine, vol. 15, no. 4, p. 79.

Lee, R. S. T., 2006. Fuzzy-Neuro Approach to Agent Applications, Springer-Verlag, Berlin, Heidelberg.

Patterson, D. W., 1996. Artificial Neural Networks: Theory and Application, Prentice Hall.

Anderson, D. and McNeil, G., 1992. Artificial Neural Networks Technology, Kaman Sciences Corporation, New York.

Jacobsson, H., Bergfeldt, N. and Lundell, S., 2001. Matlab and Neural Network Toolbox Tutorial, Oxford University Press, London.

Bishop, C. M., 1996. Neural Networks for Pattern Recognition, Oxford University Press, London.

Hoskins, J. C. and Himmelblau, D. M., 1988. "Artificial neural network models of knowledge representation in chemical engineering," Computers & Chemical Engineering, vol. 12, no. 9-10, pp. 881-890.

Patterson, D. W., 1995. Artificial Neural Networks, 1st edn, pp. 1-20, Prentice Hall, London.

Islam, A. and Hossain, T. Z., 2001. Modeling of a chemical process using artificial neural network, B.Sc. thesis, Department of Chemical Engineering, Bangladesh University of Engineering & Technology (BUET), Dhaka, Bangladesh.

Haykin, S., 1999. Neural Networks: A Comprehensive Foundation, 2nd edn, Prentice Hall, London.

Looney, Carl G., 1997. Pattern Recognition Using Neural Networks: Theories and Algorithms for Engineers and Scientists, Oxford University Press, London.

Neural Networks, 1984-2003, <http://www.statsoftinc.com/textbook/stneunet>, accessed on 18 January 2013.

Sarle, W., 2001. Why use activation functions?, <http://www.faqs.org/faqs/ai-faq/neuralnets/part2/section-10.html>, accessed on 17 October 2012.

Williams, C., 2000. Data Preprocessing, School of Informatics, University of Edinburgh.

Mujtaba, I. M. and Hussain, M. A., 2004. Neural Networks and Other Learning Technologies in Process Engineering, Vol. 3, Imperial College Press, London.

Anderson, D. and McNeil, G., 1997. Artificial Neural Networks Technology, Data & Analysis Center for Software, Daedalian.


PUBLICATIONS

Sowgath, M.S., and Ahmed, Suman, 2014, "Fault Detection of Brahmanbaria Gas Plant using Neural Network," 8th International Conference, IEEE, Bangladesh.

Sowgath, M.S., and Ahmed, Suman, 2012, "Study of Fault Detection and Diagnosis of Brahmanbaria Gas Processing Plant, Bangladesh using Neural Network," in the Proceedings of the 62nd Canadian Chemical Engineering Conference, October 14-17, 2012, Vancouver, Canada.


Appendix A.1

Steady-state material streams.

Stream Name                    Well 1       Well 7       Well 1 D/S   Well 7 D/S   MIX-100 D/S
Vapor Fraction                 0.99         0.99         0.99         0.99         0.99754
Temperature (°F)               140.00       145.00       112.23       111.79       111.9894
Pressure (psia)                1890.00      2100.00      1130.00      1130.00      1130.0000
Molar flow (MMSCFD)            18.00        22.00        18.00        22.00        40.0000
Mass flow (lb/hr)              3.492e+4     4.309e+4     3.492e+4     4.308e+4     7.803e+4
Liq vol flow (bbl/day)         7520         9220         7520         9220         1674
Heat flow (Btu/hr)             -6.648e+7    -8.160e+7    -6.648e+7    -8.160e+7    -1.4808e+8
Molar Enthalpy (Btu/lbmole)    -3.364e+4    -3.378e+4    -3.364e+4    -3.378e+4    -3.372e+4

Stream Name                    AC-101 D/S   V-100 G/O    V-100 L/O    E-201 G/O    Sales gas
Vapor Fraction                 0.99         1.00         0.00         0.99         1.00
Temperature (°F)               91.62        90.38        90.38103     49.38        39.38
Pressure (psia)                1120.00      1090.00      1090.00      1080.00      1030.00
Molar flow (MMSCFD)            40.00        39.71        0.28         39.71        39.17
Mass flow (lb/hr)              7.8003e+4    7.628e+4     1721         7.628e+4     7.312e+4
Liq vol flow (bbl/day)         16740.76     16558.02     182.75       16558.02     16182.04
Heat flow (Btu/hr)             -1.493e+8    -1.466e+8    -2.725e+6    -1.488e+8    -1.453e+8
Molar Enthalpy (Btu/lbmole)    -3.398e+4    -3.360e+4    -8.856e+4    -3.412e+4    -3.378e+4

Stream Name                    Chiller D/S  V-101 G/O    V-101 L/O    LV-100 D/S   LV-101 L/O
Vapor Fraction                 0.98         1.00         0.00         0.15025      0.25350
Temperature (°F)               0.00         -1.89        -1.89        85.67        -17.60711
Pressure (psia)                1070.00      1040.00      1040.00      440.00       440.00
Molar flow (MMSCFD)            39.71        39.17        0.54406      0.28012      0.54406
Mass flow (lb/hr)              7.628e+4     7.316e+4     3.118e+4     1.721e+4     3.118e+4
Liq vol flow (bbl/day)         16558.02     16182.05     375.97       182.75       375.97
Heat flow (Btu/hr)             -1.520e+8    -1.475e+8    -4.068e+6    -2.725e+6    -4.068e+6
Molar Enthalpy (Btu/lbmole)    -3.476e+4    -3.430e+4    -6.809e+4    -8.859e+4    -6.809e+4


Appendix A.2

Steady-state material streams (continued).

Stream Name                    Mix-101 L/O  V-102 inlet  Tower valve in  Vapor HC     E-100 U/S
Vapor Fraction                 0.236        0.294        0.00            1.00         4.00e-003
Temperature (°F)               19.89        14.39        13.28           13.28        13.078
Pressure (psia)                440.00       250.00       220.00          220.00       210.00
Molar flow (MMSCFD)            0.82         0.82         0.57            0.25         0.57318
Mass flow (lb/hr)              4.84e+4      4.84e+4      4.35e+4         4.89e+3      4.35e+4
Liq vol flow (bbl/day)         558.72       558.72       452.51          106.20       452.51
Heat flow (Btu/hr)             -6.79e+6     -6.79e+6     -5.85e+6        -9.37e+5     -5.85e+6
Molar Enthalpy (Btu/lbmole)    -7.50e+4     -7.50e+4     -9.30e+4        -3.39e+4     -9.30e+4

Stream Name                    Tower Feed   Ovhd         Liquid Prod     C3Duty       H-Q
Vapor Fraction                 1.36e-002    1.00         3.33e-002       --           --
Temperature (°F)               24.73        338.07       480.42          --           --
Pressure (psia)                200.00       200.00       200.00          --           --
Molar flow (MMSCFD)            0.57         0.49         7.31e-002       --           --
Mass flow (lb/hr)              4.35e+4      3.44e+4      904.16          --           --
Liq vol flow (bbl/day)         452.52       364.48       88.03           --           --
Heat flow (Btu/hr)             -5.82e+6     -3.73e+6     -6.07e+5        2.84e+7      2.73e+7
Molar Enthalpy (Btu/lbmole)    -9.26e+4     -6.81e+4     -7.56e+4        --           --

(C3Duty and H-Q are energy streams; only their heat flows are reported.)


Appendix B.1

Dynamic-state material streams.

Stream Name                    Well 1          Well 7          Well 1 D/S      Well 7 D/S      MIX-100 D/S
Vapor Fraction                 0.99948         0.999428        0.99871         0.99684         0.99769
Temperature (°F)               140.00          145.00          112.56          112.39          112.46
Pressure (psia)                1890.00         2090.00         1138.49         1138.49         1138.49
Molar flow (MMSCFD)            17.98           21.99999        17.98           21.99           39.98
Mass flow (lb/hr)              34888.77        43086.79        34888.77        43086.79        77975.56
Liq vol flow (bbl/day)         7514.28         9220.57         7514.28         9220.57         16734.85
Heat flow (Btu/hr)             -70088849.40    -86082905.57    -70088849.40    -86082905.0     -156171754.97
Molar Enthalpy (Btu/lbmole)    -78239.64       -78560.30       -78239.64       -78560.30       0.99769

Stream Name                    AC-101 D/S      V-100 G/O       V-100 L/O       E-201 G/O       Sales gas
Vapor Fraction                 0.9934          1.00            0.00            0.99973         0.99994
Temperature (°F)               91.9295         91.9295         91.9295         90.6975         90.31277
Pressure (psia)                1128.4991       1128.4991       1128.4991       1118.547        1090.00
Molar flow (MMSCFD)            39.985850       39.72327        0.262578        39.7232         39.7060
Mass flow (lb/hr)              77975.55        76398.93        1576.61         76398.93        76271.71
Liq vol flow (bbl/day)         16734.85        16567.56        167.28          16567.56        16553.65
Heat flow (Btu/hr)             -157417362.67   -154708707.51   -2708655.16     -154757102.95   -1.545e+8
Molar Enthalpy (Btu/lbmole)    -79041.50       -78194.93       -207111.44      -78219.39       -78149.89


Appendix B.2

Dynamic-state material streams (continued).

Stream Name                    Chiller D/S     V-101 G/O       V-101 L/O       LV-100 D/S      LV-101 L/O
Vapor Fraction                 0.99            1.00            0.00            0.10            0.13
Temperature (°F)               90.09           90.09           90.09           89.07475        86.70
Pressure (psia)                1104.51         1104.52         1104.51         651.85          651.85
Molar flow (MMSCFD)            39.72           39.70           1.72e-002       0.26            1.72e-002
Mass flow (lb/hr)              76398.93        76271.71        127.22          1576.61         127.22
Liq vol flow (bbl/day)         16567.57        16553.65        13.92           167.28          13.91
Heat flow (Btu/hr)             -154757102.95   -154600754.51   -156348.44      -2708655.16     -156348.44
Molar Enthalpy (Btu/lbmole)    -78219.39       -78174.36       -181755.16      -207111.44      -181755.16

Stream Name                    Mix-101 L/O     V-102 inlet     Tower valve in  Vapor HC        E-100 U/S
Vapor Fraction                 0.11            0.17            0.17            1.00            0.20
Temperature (°F)               88.90           85.50           85.50           85.50           83.91
Pressure (psia)                651.85          332.22          332.22          332.22          238.46
Molar flow (MMSCFD)            0.27            0.27            0.27            0.00            0.27
Mass flow (lb/hr)              1703.83         1703.84         1703.83         0.00            1703.83
Liq vol flow (bbl/day)         181.20          181.20          181.20          0.00            181.20
Heat flow (Btu/hr)             -2865003.60     -2865003.60     -2865003.72     0.00            -2865003.72
Molar Enthalpy (Btu/lbmole)    -205546.57      -205546.58      -205546.58      -78276.46       -205546.58

Stream Name                    Tower Feed      Ovhd            Liquid Prod     C3Duty          H-Q
Vapor Fraction                 0.21            1.00            0.20            --              --
Temperature (°F)               106.25          304.40          483.50          --              --
Pressure (psia)                199.61          199.28          200.14          --              --
Molar flow (MMSCFD)            0.27            0.23            4.20e-002       --              --
Mass flow (lb/hr)              1703.84         1181.95         521.87          --              --
Liq vol flow (bbl/day)         181.20          130.44          50.75           --              --
Heat flow (Btu/hr)             -2839120.05     -1903315.98     -361543.40      0000            25886.33
Molar Enthalpy (Btu/lbmole)    -203689.34      -160686.52      -172709.31      --              --

(C3Duty and H-Q are energy streams; only their heat flows are reported.)


Appendix C.1

Neural Network Data Analysis.

Table: Normal operation parameters of BGPP

Columns (left to right): Tower feed Temp [°F]; S 601-1 Press [psia]; Tower feed Press [psia]; S 601-1 Temp [°F]; S 611-1 Temp [°F]; S 611-1 Press [psia]; S 302 Molar Flow [MMSCFD]; S 302 Press [psia]; S 401 Temp [°F]; S 401 Press [psia]

122  10  19  215  225  20  32  75  100  10
122  10  19  215  225  20  24  75  104  10
122  10  19  215  225  20  24  75  105  10
122  10  19  215  225  20  24  75  101  10
122  10  19  215  225  20  24  75   99  10
122  10  19  215  225  20  24  75   99  10
122  10  19  215  225  20  24  75  101  10
122  10  19  215  225  20  24  75  106  10
122  10  19  215  225  20  24  75  105  10
122  10  19  215  225  20  24  75  104  10
122  10  19  215  225  20  24  75  103  10
122  10  19  215  225  20  23  75  104  10
122  10  19  215  225  20  24  75  102  10
122  10  19  215  225  20  24  75  104  10
122  10  19  215  225  20  24  75  102  10
122  10  19  215  225  20  25  75  101  10
122  10  19  215  225  20  24  75  106  10
122  10  19  215  225  20  24  75  102  10
122  10  19  215  225  20  24  75  105  10
122  10  19  215  225  20  24  75  100  10
122  10  19  215  225  20  24  75  105  10
122  10  19  215  225  20  24  75  101  10
122  10  19  215  225  20  24  75   99  10
122  10  19  215  225  20  24  75   99  10
122  10  19  215  225  20  24  75  101  10
122  10  19  215  225  20  24  75  106  10
122  10  19  215  225  20  24  75  105  10
122  10  19  215  225  20  24  75  104  10
122  10  19  215  225  20  24  75  103  10
122  10  19  215  225  20  23  75  104  10
122  10  19  215  225  20  24  75  102  10
122  10  19  215  225  20  24  75  104  10


Appendix C.2

Table: Tower valve full open operation parameters of BGPP

Columns (left to right): Tower feed Temp [°F]; S 601-1 Press [psia]; Tower feed Press [psia]; S 601-1 Temp [°F]; S 611-1 Temp [°F]; S 611-1 Press [psia]; S 302 Molar Flow [MMSCFD]; S 302 Press [psia]; S 401 Temp [°F]; S 401 Press [psia]

79  15  192  10  225  20  23  75  10  66
79  15  192  10  225  20  24  75  10  67
79  15  192  10  225  20  24  75  10  65
79  14  192  10  225  20  24  75  10  67
79  14  192  10  225  20  24  75  10  65
79  14  192  10  225  20  24  75  10  68
79  14  192  10  225  20  24  75  10  69
79  14  192  10  225  20  24  75  10  67
79  14  192  10  225  20  24  75  10  68
79  14  191  10  225  20  24  75  10  68
79  14  191  10  225  20  25  75  10  67
79  14  191  10  225  20  24  75  10  68
79  14  191  10  225  20  23  75  10  68
79  14  191  10  225  20  25  75  10  68
79  14  191  10  225  20  24  75  10  68
79  14  191  10  225  20  24  75  10  68
79  14  191  10  225  20  26  75  10  68
79  14  191  10  225  20  24  75  10  67
79  14  191  10  225  20  24  75  10  68
79  14  191  10  225  20  24  75  10  67
79  14  192  10  225  20  24  75  10  67
79  14  192  10  225  20  24  75  10  65
79  14  192  10  225  20  24  75  10  68
79  14  192  10  225  20  24  75  10  69
79  14  192  10  225  20  24  75  10  67
79  14  192  10  225  20  24  75  10  68
79  14  191  10  225  20  24  75  10  68
79  14  191  10  225  20  25  75  10  67
79  14  191  10  225  20  24  75  10  68
79  14  191  10  225  20  23  75  10  68
79  14  191  10  225  20  25  75  10  68
79  14  191  10  225  20  24  75  10  68


Appendix C.3

Table: Tower valve full close operation parameters of BGPP

Columns (left to right): Tower feed Temp [°F]; S 601-1 Press [psia]; Tower feed Press [psia]; S 601-1 Temp [°F]; S 611-1 Temp [°F]; S 611-1 Press [psia]; S 302 Molar Flow [MMSCFD]; S 302 Press [psia]; S 401 Temp [°F]; S 401 Press [psia]

181  7  -76  10  225  20  24  75  10  68
182  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68
179  7  -76  10  225  20  24  75  10  68
179  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  23  75  10  68
180  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68
186  7  -76  10  225  20  25  75  10  68
181  7  -76  10  225  20  24  75  10  68
181  7  -76  10  225  20  24  75  10  68
181  7  -76  10  225  20  24  75  10  68
181  7  -76  10  225  20  25  75  10  68
181  7  -76  10  225  20  24  75  10  68
181  7  -76  10  225  20  25  75  10  68
182  7  -76  10  225  20  23  75  10  68
180  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68
181  7  -76  10  225  20  24  75  10  68
182  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68
179  7  -76  10  225  20  24  75  10  68
179  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  23  75  10  68
180  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68
180  7  -76  10  225  20  24  75  10  68
186  7  -76  10  225  20  25  75  10  68
181  7  -76  10  225  20  24  75  10  68


Appendix C.4

Table: HP separator and low temperature separator valve full open operation parameters of BGPP

Columns (left to right): Tower feed Temp [°F]; S 601-1 Press [psia]; Tower feed Press [psia]; S 601-1 Temp [°F]; S 611-1 Temp [°F]; S 611-1 Press [psia]; S 302 Molar Flow [MMSCFD]; S 302 Press [psia]; S 401 Temp [°F]; S 401 Press [psia]

373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
374  25  81  10  260  20  24  75  69  375
373  25  81  10  260  20  24  75  69  375
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374
374  25  81  10  260  20  24  75  69  375
373  25  81  10  260  20  24  75  69  375
373  25  81  10  260  20  24  75  69  374
373  25  81  10  260  20  24  75  69  374


Appendix C.5

Table: HP separator and low temperature separator valve full open operation parameters of Brahmanbaria gas processing plant

Columns (left to right): Tower feed Temp [°F]; S 601-1 Press [psia]; Tower feed Press [psia]; S 601-1 Temp [°F]; S 611-1 Temp [°F]; S 611-1 Press [psia]; S 302 Molar Flow [MMSCFD]; S 302 Press [psia]; S 401 Temp [°F]; S 401 Press [psia]

153  18  219  10  225  20  24  75  10  166
153  18  219  10  225  20  24  75  10  166
152  18  218  10  225  20  24  75  10  166
152  18  218  10  225  20  24  75  10  166
152  18  218  10  225  20  24  75  10  166
152  18  218  10  225  20  24  75  10  166
152  18  218  10  225  20  24  75  10  166
152  18  217  10  225  20  24  75  10  166
151  18  217  10  225  20  24  75  10  166
151  18  217  10  225  20  24  75  10  166
151  18  217  10  225  20  23  75  10  166
151  18  217  10  225  20  24  75  10  166
151  18  217  10  225  20  24  75  10  166
151  18  216  10  225  20  24  75  10  166
151  18  216  10  225  20  24  75  10  166
150  18  216  10  225  20  24  75  10  166
150  18  216  10  225  20  24  75  10  166
150  18  216  10  225  20  23  75  10  166
150  18  216  10  225  20  26  75  10  166
150  18  216  10  225  20  25  75  10  166
152  18  218  10  225  20  24  75  10  166
152  18  217  10  225  20  24  75  10  166
151  18  217  10  225  20  24  75  10  166
151  18  217  10  225  20  24  75  10  166
151  18  217  10  225  20  23  75  10  166
151  18  217  10  225  20  24  75  10  166
151  18  217  10  225  20  24  75  10  166
151  18  216  10  225  20  24  75  10  166
151  18  216  10  225  20  24  75  10  166
150  18  216  10  225  20  24  75  10  166
150  18  216  10  225  20  24  75  10  166
150  18  216  10  225  20  23  75  10  166


Appendix D.1

Algorithm for Optimum Network Structure

A MATLAB script is first used to determine the optimum network structure in terms of the number of hidden-layer neurons. The preprocessed data are grouped into training, validation and testing sets. A loop is then launched that calculates the percentage error for each candidate number of hidden-layer neurons; a sketch of how the truncated loop body could be completed is given after the listing.

% Construct a set of neural networks with the number of hidden neurons
% varying between "hni" and "hnf" at a step of "del"
hni = 1;
hnf = 20;
del = 1;

for hn = hni:del:hnf

    net = network;                          % create an empty custom network
    net.numInputs = 1;                      % one input source
    net.inputs{1}.size = 2;                 % two elements in the input vector
    net.numLayers = 2;                      % one hidden layer plus one output layer
    net.layers{1}.size = hn;                % hn neurons in the hidden layer
    net.layers{2}.size = 2;                 % two neurons in the output layer
    net.inputConnect(1) = 1;                % connect the input to layer 1
    net.layerConnect(2,1) = 1;              % connect layer 1 to layer 2
    net.outputConnect(2) = 1;               % layer 2 supplies the network output
    net.targetConnect(2) = 1;               % targets are compared at layer 2
    net.layers{1}.transferFcn = 'tansig';   % hyperbolic tangent sigmoid in layer 1
    net.layers{2}.transferFcn = 'purelin';  % linear transfer function in layer 2
    net.biasConnect = [1; 1];               % bias connections in both layers

    % The network performance measure and the training algorithm are set here
    net.performFcn = 'mse';                 % mean squared error
    net.trainFcn = 'trainlm';               % Levenberg-Marquardt training
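The listing breaks off at this point in the source, before the part of the loop body that trains each candidate network and records its error. The lines below are a minimal sketch, not part of the original code, of how that remainder could look; ptrain, ttrain, pval and tval are hypothetical names for the preprocessed training and validation input/target matrices mentioned above, and the training settings shown are assumed.

    % --- Sketch only: the rest of the loop body is not in the original listing ---
    net.trainParam.epochs = 300;            % assumed maximum number of training epochs
    net.trainParam.show = NaN;              % suppress the training progress display
    net = train(net, ptrain, ttrain);       % train with Levenberg-Marquardt (trainlm)
    yval = sim(net, pval);                  % network response on the validation set
    errPct(hn) = 100 * mean(abs(yval(:) - tval(:)));  % percentage error (indexing assumes del = 1)
end

% The hidden-layer size giving the smallest validation error is then selected.
[minErr, bestHn] = min(errPct);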


Appendix D.2

The terms used in the above code are described below:

hni = 1, the initial number of neurons in the hidden layer is 1.
hnf = 20, the maximum number of neurons in the hidden layer is 20.
del = 1, the step size.
net = network, creates a custom network object.
net.numInputs, the number of input sources is 1; this is not the number of elements in the input vector.
net.inputs{1}.size, the number of elements in the input vector.
net.numLayers, the number of layers in the network.
net.layers{1}.size, the number of neurons in the first layer; the index 1 refers to the first (hidden) layer.
net.layers{2}.size, the number of neurons in the second layer.
net.inputConnect(i, j), the input-weight connection going to the ith layer from the jth input.
net.layerConnect(i, j), the layer-weight connection going to the ith layer from the jth layer.
net.outputConnect(2), connects the output of the second layer to the external world.
net.layers{1}.transferFcn, the activation function used in the first layer; here the hyperbolic tangent sigmoid function (tansig) is used.
net.layers{2}.transferFcn, the activation function used in the second layer; here the linear transfer function (purelin) is used.
net.biasConnect = [1; 1], bias connections are present in both the first and the second layer.
net.performFcn, the performance function used for training feedforward neural networks; here the mean squared error (mse) function is used to measure network performance.
net.trainFcn, the network training function; for the Levenberg-Marquardt method the trainlm function is used.
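For comparison, a two-layer structure with the same settings can be created more compactly with the newff function of the older Neural Network Toolbox API assumed throughout this appendix. The snippet below is a sketch rather than part of the thesis code; p stands for a hypothetical preprocessed input matrix, and the layer sizes, transfer functions and training function mirror the settings described above.

% Compact construction of an equivalent two-layer feedforward network (sketch only).
hn  = 10;                                              % example hidden-layer size
net = newff(minmax(p), [hn 2], {'tansig','purelin'}, 'trainlm');
net.performFcn = 'mse';                                % mean squared error, as above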