Reservoir Computing Methods for
Prognostics and Health Management (PHM)
Piero Baraldi
Energy Department
Politecnico di Milano
Italy
Industry 4.0
[Figure: available data vs. time, 2012–2017; Data + Analytics enable Predictive Maintenance]
• Digitalization: 2.8 trillion GB (ZB) of data generated in 2016
• Analytics
Predictive Maintenance
[Diagram: Industry 4.0 analytics; the monitored signals $u_1, u_2, \ldots, u_N$ and the ambient & operating conditions feed a prognostic model, which predicts the Remaining Useful Life (RUL), i.e. the time between the present time $t_p$ and the failure time]
Predictive Maintenance
[Diagram: Industry 4.0 analytics; the monitored signals $u_1, u_2, \ldots, u_N$, the ambient & operating conditions, the future demand, the logistics options, … feed a prognostic model, which predicts the Remaining Useful Life (RUL) from the present time $t_p$ to the failure time and supports optimal maintenance decisions]
Safety improvement, cost saving, new business:
• Maximum availability
• Business continuity
• Warehouse savings
• Zero-defect production
• Zero-waste production
In this Presentation
• Prognostics
• Recurrent Neural Network (RNN)
• Reservoir Computing
• Echo State Network
• Application to the Prediction of Turbofan Engine RUL
Prognostics: What is the Problem?
Aircraft Turbofan Engine
N Monitored Signals: Signal 1, Signal 2, …, Signal N
[Figure: example of a monitored signal (temperature) vs. time]
Prognostics: What is the Problem?
Aircraft Turbofan Engine
N Monitored Signals: Signal 1, Signal 2, …, Signal N
[Figure: the monitored signals (e.g. temperature vs. time) feed a prognostic model, which outputs the aircraft engine RUL prediction vs. time]
Prognostics: the Challenge
• 2 identical components
• Same measurements $u(t_p)$ at the present time $t_p$ for both components
• System evolution depends on present and past signal values (the memory of the history)
[Figure: two monitored-signal trajectories with the same value $u$ at $t_p$ but different histories and different failure times $t_f$]
Prognostics: Methods
Feedforward Neural Network
• Connection: carries the weighted input, e.g. $u_1 w_{11}$
• Computational unit (neuron): combines its inputs and applies the activation function, e.g. $f\left(\sum_{i=1}^{3} u_i w_{i1}\right)$
[Diagram: feedforward network mapping the inputs $u_1, u_2, u_3$ to the output $RUL(t_p)$]
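As a minimal numerical illustration of the neuron above (the inputs, weights and tanh activation are made-up choices, not values from the slides):

```python
import numpy as np

# Hypothetical inputs u_1..u_3 and connection weights w_11..w_31
u = np.array([0.2, -1.0, 0.5])
w = np.array([0.4, 0.1, -0.3])

# Neuron output: activation applied to the weighted sum, f(sum_i u_i * w_i1)
f = np.tanh
print(f(np.dot(u, w)))
```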
Prognostics: Methods
Feedforward Neural Network
• connections only "from left to right", no connection cycle
• no memory
[Diagram: feedforward network producing $RUL(t_p)$]
Prognostics: Methods
Feedforward Neural Network
• connections only "from left to right", no connection cycle
• no memory
Recurrent Neural Network
• at least one connection cycle
• activation can "reverberate" and persist even with no input
• system with memory
[Diagram: feedforward and recurrent networks, both producing $RUL(t_p)$]
In this Presentation
• Prognostics
• Recurrent Neural Network (RNN)
• Reservoir Computing
• Echo State Network
• Application to the Prediction of Turbofan Engine RUL
Recurrent NN: General Idea
[Diagram: the time trajectory $\boldsymbol{u}(1{:}t_p)$ of the signals $u_1, \ldots, u_N$, from $t = 1$ to $t = t_p$, is fed to the PROGNOSTIC MODEL, which outputs $RUL(t_p)$]
Recurrent NN: General Idea
[Diagram: the input trajectory $\boldsymbol{u}(1{:}t_p)$ ($u_1, \ldots, u_N$, from $t = 1$ to $t = t_p$) is non-linearly expanded into the state $\boldsymbol{x}(t_p) = (x_1, x_2, \ldots, x_M)$, with $M \gg N$; a linear regression maps the state to the output $rul$]
• Non-linear expansion: $\boldsymbol{x}(t_p) = f\big(\boldsymbol{u}(1{:}t_p)\big)$
• Recursive definition: $\boldsymbol{x}(t_p) = f\big(\boldsymbol{x}(t_p - 1), \boldsymbol{u}(t_p)\big)$
• Linear regression: $RUL(t_p) = \boldsymbol{W}^{out}\boldsymbol{x}(t_p)$
Recurrent NN
[Diagram: non-linear expansion; the inputs $u_1(t_p), u_2(t_p), u_3(t_p)$ enter through the weights $\boldsymbol{W}^{in}$, the internal states are connected through $\boldsymbol{W}$]
$$x_1(t_p) = f\left(\sum_{i=1}^{N} w_{i1}^{in}\, u_i(t_p) + \sum_{i=1}^{M} w_{i1}\, x_i(t_p - 1)\right)$$
Recurrent NN
[Diagram: the inputs $u_1(t_p), u_2(t_p), u_3(t_p)$ enter the non-linear expansion through $\boldsymbol{W}^{in}$, the internal states are connected through $\boldsymbol{W}$, and the linear regression $\boldsymbol{W}^{out}$ produces the output $RUL(t_p)$]
In vector form, for the input trajectory $\boldsymbol{u}(1{:}t_p)$:
$$\boldsymbol{x}(t_p) = f\big(\boldsymbol{W}^{in}\boldsymbol{u}(t_p) + \boldsymbol{W}\boldsymbol{x}(t_p - 1)\big), \qquad RUL(t_p) = \boldsymbol{W}^{out}\boldsymbol{x}(t_p)$$
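A rough NumPy sketch of the vector-form recursion and readout above (the weight matrices and the input trajectory are random placeholders; only the structure of the computation is meant to match the slide):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 50                            # N input signals, M internal states (M >> N in practice)
W_in = rng.uniform(-0.5, 0.5, (M, N))   # input weights W^in
W = rng.uniform(-0.5, 0.5, (M, M))      # recurrent weights W
W_out = rng.uniform(-0.5, 0.5, (1, M))  # readout weights W^out (how they are learned comes later)

def predict_rul(u_traj):
    """x(t) = f(W_in u(t) + W x(t-1)), then RUL(t_p) = W_out x(t_p)."""
    x = np.zeros(M)
    for u_t in u_traj:                  # u_traj has shape (t_p, N)
        x = np.tanh(W_in @ u_t + W @ x) # non-linear temporal expansion
    return (W_out @ x).item()           # linear readout

u_traj = rng.normal(size=(100, N))      # made-up input trajectory u(1:t_p)
print(predict_rul(u_traj))
```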
RNN: Training
Parameters to learn: $\boldsymbol{W}^{in}, \boldsymbol{W}, \boldsymbol{W}^{out}$
TRAINING SET (built from a run-to-failure degradation trajectory):
$\big(\boldsymbol{u}(1),\ RUL^{GT}(1) = t_f - 1\big)$
…
$\big(\boldsymbol{u}(5),\ RUL^{GT}(5) = t_f - 5\big)$
…
$\big(\boldsymbol{u}(t_f - 1),\ RUL^{GT}(t_f - 1) = 1\big)$
[Figure: run-to-failure degradation trajectory of $\boldsymbol{u}$ vs. $t$, with the pair $\boldsymbol{u}(5)$, $RUL^{GT}(5) = t_f - 5$ highlighted]
RNN: Training
Parameters to learn: $\boldsymbol{W}^{in}, \boldsymbol{W}, \boldsymbol{W}^{out}$
Training objective: minimize the error function
$$E\big(RUL, RUL^{GT}\big) = \mathrm{RMSE} = \sqrt{\frac{1}{t_f - 1}\sum_{t=1}^{t_f - 1}\big(RUL(t) - RUL^{GT}(t)\big)^2}$$
TRAINING SET:
$\big(\boldsymbol{u}(1),\ RUL^{GT}(1) = t_f - 1\big)$
$\big(\boldsymbol{u}(2),\ RUL^{GT}(2) = t_f - 2\big)$
…
$\big(\boldsymbol{u}(t_f - 1),\ RUL^{GT}(t_f - 1) = 1\big)$
Training Methods:
• Gradient-descent-based methods
• Reservoir Computing
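As a sketch of the slide above, the training pairs from one run-to-failure trajectory and the RMSE error function can be computed as follows (the failure time and the predictions are made-up numbers, not results from the lecture):

```python
import numpy as np

t_f = 120                              # failure time of one run-to-failure trajectory
t = np.arange(1, t_f)                  # t = 1, ..., t_f - 1
rul_gt = t_f - t                       # targets RUL_GT(t) = t_f - t

# Hypothetical model predictions; a real model would be the RNN/ESN of the other slides
rul_pred = rul_gt + np.random.default_rng(1).normal(scale=5.0, size=rul_gt.shape)

# E(RUL, RUL_GT) = RMSE = sqrt( (1/(t_f-1)) * sum_t (RUL(t) - RUL_GT(t))^2 )
rmse = np.sqrt(np.mean((rul_pred - rul_gt) ** 2))
print(rmse)
```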
Gradient-Descent-Based Methods for RNN
RNNs are difficult to train using gradient-descent-based methods:
• Bifurcations
• Many updating cycles → too long training times
• Hard to obtain long-range memory
[Diagram: the input $\boldsymbol{u}(t)$ flows through $\boldsymbol{W}^{in}$, $\boldsymbol{W}$ and $\boldsymbol{W}^{out}$; the prediction $RUL(t)$ is compared with $RUL^{GT}(t)$ and the resulting $Error(t)$ is used to update all the weights]
In this Presentation
• Prognostics
• Recurrent Neural Network (RNN)
• Reservoir Computing
• Echo State Network
• Application to the Prediction of Turbofan Engine RUL
Reservoir Computing (RC): Terminology
• Reservoir. What is it? A non-linear temporal expansion function. Purpose: expand the input history $\boldsymbol{u}(1{:}t_p)$ into a rich-enough reservoir space $\boldsymbol{x}(t_p)$.
• Readout. What is it? A linear function. Purpose: combine the neuron signals $\boldsymbol{x}(t_p)$ into the desired output target $RUL(t_p)$.
Reservoir Computing (RC): Basic Idea
The reservoir and the readout serve different purposes, so they can be trained separately.
Reservoir Methods
• Echo State Networks
• Liquid State Machines
• Evolino
• Backpropagation-Decorrelation
• Temporal Recurrent Networks
• …
In this Presentation
• Prognostics
• Recurrent Neural Network (RNN)
• Reservoir Computing
• Echo State Network
• Application to the Prediction of Turbofan Engine RUL
Generate the Reservoir
• Purpose: obtain a rich-enough reservoir space $\boldsymbol{x}(n)$
• Recipe (a code sketch follows after this list):
  ➢ Big reservoir ($M$ up to $10^4$) → rich-enough reservoir space
  ➢ Sparsely connected → $\boldsymbol{W}$ is sparse (no more than 20% of the possible connections)
  ➢ Randomly connected → the connection weights are randomly generated from a uniform distribution symmetric around zero
[Diagram: reservoir neurons connected through $\boldsymbol{W}$]
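A possible sketch of the recipe above (the size, connectivity and weight range are illustrative choices, not the lecture's settings):

```python
import numpy as np

def generate_reservoir(M=500, connectivity=0.2, seed=0):
    """Random sparse reservoir: at most `connectivity` of the M*M connections are kept,
    with weights drawn from a uniform distribution symmetric around zero."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(-1.0, 1.0, (M, M))      # symmetric around zero
    mask = rng.random((M, M)) < connectivity      # keep ~20% (or fewer) of the connections
    return weights * mask

W = generate_reservoir()
print(np.count_nonzero(W) / W.size)               # effective connectivity
```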
Readout
• Purpose: learn the weights $\boldsymbol{W}^{out}$ which minimize
$$E\big(RUL, RUL^{GT}\big) = \mathrm{RMSE} = \sqrt{\frac{1}{t_f - 1}\sum_{t=1}^{t_f - 1}\big(\boldsymbol{W}^{out}\boldsymbol{x}(t) - RUL^{GT}(t)\big)^2}$$
[Diagram: the inputs $u_1(t), u_2(t), \ldots, u_N(t)$ enter through $\boldsymbol{W}^{in}$, the reservoir $\boldsymbol{W}$ produces the states $\boldsymbol{x}(t)$, and the readout gives $RUL(t) = \boldsymbol{W}^{out}\boldsymbol{x}(t)$]
• Readout training: solve $\boldsymbol{W}^{out}\boldsymbol{x}(t) = RUL^{GT}(t)$ by linear regression
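Since only $\boldsymbol{W}^{out}$ is learned, the readout is an ordinary linear least-squares problem. A sketch (the small ridge term is a common regularization choice, not something stated on the slide):

```python
import numpy as np

def train_readout(X, rul_gt, ridge=1e-6):
    """Find W_out such that W_out x(t) ~ RUL_GT(t).
    X: (T, M) matrix whose rows are the reservoir states x(t); rul_gt: (T,) targets."""
    M = X.shape[1]
    # Regularized normal equations: (X^T X + ridge I) W_out = X^T RUL_GT
    return np.linalg.solve(X.T @ X + ridge * np.eye(M), X.T @ rul_gt)

# Made-up states and targets, just to show the call
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 50))
rul_gt = rng.uniform(1, 100, size=200)
W_out = train_readout(X, rul_gt)                      # shape (50,)
print(np.sqrt(np.mean((X @ W_out - rul_gt) ** 2)))    # training RMSE
```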
Training: Traditional RNN vs. ESN
[Diagram: in a traditional RNN, the error between $RUL$ and $RUL^{GT}$ is used to train all the weights $\boldsymbol{W}^{in}$, $\boldsymbol{W}$ and $\boldsymbol{W}^{out}$; in an ESN, $\boldsymbol{W}^{in}$ and $\boldsymbol{W}$ are random and only $\boldsymbol{W}^{out}$ is trained on the error]
The Echo State Property
• The effect of $\boldsymbol{x}(t)$ and $\boldsymbol{u}(t)$ on a future state $\boldsymbol{x}(t + k)$ should vanish gradually as time passes (i.e., as $k \to \infty$), and not persist or even get amplified.
• For most practical purposes: if the spectral radius $\rho(\boldsymbol{W})$, i.e. the largest absolute eigenvalue of $\boldsymbol{W}$, is smaller than 1, the Echo State Property is satisfied.
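In practice the random reservoir is often rescaled so that its spectral radius falls below 1; a sketch of that step (the target value 0.9 is an arbitrary example):

```python
import numpy as np

def scale_spectral_radius(W, target_rho=0.9):
    """Rescale W so that rho(W), the largest absolute eigenvalue, equals target_rho < 1,
    which for most practical purposes ensures the Echo State Property."""
    rho = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (target_rho / rho)

W = np.random.default_rng(3).uniform(-1.0, 1.0, (100, 100))
W = scale_spectral_radius(W)
print(np.max(np.abs(np.linalg.eigvals(W))))   # ~0.9
```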
In this Presentation
• Prognostics
• Recurrent Neural Network (RNN)
• Reservoir Computing
• Echo State Network
• Application to the Prediction of Turbofan Engine RUL
Prognostics: What is the Problem?
Aircraft Turbofan Engine
N Monitored Signals: Signal 1, Signal 2, …, Signal N
[Figure: the monitored signals (e.g. temperature vs. time) feed a prognostic model, which outputs the aircraft engine RUL prediction vs. time]
The C-MAPSS Dataset*
• 260 run-to-failure trajectories
• 21 measured signals + 3 signals representative of the operating conditions
• 6 different operating conditions
• Data preprocessing**
* A. Saxena, K. Goebel, D. Simon, N. Eklund, "Damage propagation modeling for aircraft engine run-to-failure simulation", PHM 2008.
** M. Rigamonti, P. Baraldi, E. Zio, I. Roychoudhury, K. Goebel, S. Poll, "Echo State Network for Remaining Useful Life Prediction of a Turbofan Engine", PHM 2016, Bilbao.
ESN Architecture Optimization
Network Architecture Optimization: Parameters
1) Network Dimensions
2) Spectral Radius
3) Connectivity
4) Input Scaling / Shifting
5) Output Scaling / Shifting
6) Output Feedback
[Diagram: ESN with sigmoidal activation functions in the reservoir, producing RUL(t)]
ESN Architecture Optimization
• Experience + trial & error: difficult, good performance not guaranteed
• Optimization algorithm:
  • Population-based
  • Evolutionary-based
  • Differential evolution (sketched below)
CHROMOSOME: Network Dimensions, Connectivity, Spectral Radius, Input Scaling, Input Shifting
Steps: Initialization → Mutation → Crossover → Selection
Objective function: $RA = \sum \dfrac{RUL^{GT} - RUL}{RUL^{GT}}$
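The differential-evolution search over the five chromosome parameters could look roughly like the sketch below; `evaluate_esn` is a hypothetical helper (not from the slides) that would build an ESN with the candidate parameters, train its readout, and return the objective on validation trajectories:

```python
from scipy.optimize import differential_evolution

def evaluate_esn(params):
    """Hypothetical helper: build an ESN with the candidate parameters, train the readout,
    and return the objective (e.g. the relative RUL error) on validation trajectories."""
    n_neurons, connectivity, spectral_radius, input_scaling, input_shifting = params
    # ... build reservoir, run the trajectories, train W_out, compute the objective ...
    return 0.0  # placeholder

# Search ranges for (dimensions, connectivity, spectral radius, input scaling, input shifting)
bounds = [(50, 1000), (0.05, 0.5), (0.1, 0.99), (0.1, 2.0), (-1.0, 1.0)]
# result = differential_evolution(evaluate_esn, bounds, maxiter=50, seed=0)
# print(result.x)
```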
Optimal Architecture
Optimized parameters:
• Network Dimensions: 385
• Connectivity: 0.17
• Spectral Radius: 0.67
• Input Scaling: 0.45
• Input Shifting: -0.05
[Diagram: optimized ESN producing RUL(t)]
ESN for Prognostics: Results (I)
[Figure: RUL prediction for transient 157; RUL (cycles) vs. time (cycles), comparing the true RUL with the ESN, FS and ELM predictions]
ESN = Echo State Network
FS = Fuzzy Similarity-based Prognostic Method
ELM = Extreme Learning Machine
ESN for Prognostics: Results (II)
➢ Results: prognostic metrics (70 test trajectories)
Method: Cumulative Relative Accuracy | Alpha-Lambda (α = 0.2) | Steadiness
• Extreme Learning Machine: 0.42 ± 0.03 | 0.31 ± 0.04 | 15.3 ± 2.2
• Fuzzy Similarity-based Method: 0.48 ± 0.04 | 0.34 ± 0.04 | 12.7 ± 0.7
• Echo State Network: 0.37 ± 0.03 | 0.38 ± 0.04 | 12.4 ± 1.2
[Metric definitions: the relative accuracy RA is computed from $RUL^{GT}$ and $\widehat{RUL}$; the steadiness index SI(t) is based on the variance of the RUL predictions]
Conclusions
Dynamic problem (RUL prediction over time) → Recurrent Neural Network → Training by Reservoir Computing → Echo State Network
• Accurate RUL prediction
• Short training time
• Able to capture the system dynamics
Acknowledgments
• Dr. Sameer Al-Dahidi
• Francesco Cannarile
• Dr. Michele Compare
• Dr. Francesco Di Maio
• Dr. Marco Rigamonti
• Mingjing Xu
• Zhe Yang
• Prof. Enrico Zio
Thank You!