Chapter 3 Neural Network based MPPT
30
NEURAL NETWORK BASED MAXIMUM POWER POINT
TRACKING
3.1 Introduction
This chapter introduces the concept of neural networks and presents a novel
approach to continuously track the maximum power from a PV array using neural networks.
An MPPT control algorithm is trained using neural networks, and the simulation results are
presented.
3.2 Introduction to neural networks
• What are Neural Networks?
• Neural Networks (NNs) are networks of neurons, for example, as found in real (i.e. biological) brains.
• Artificial Neurons are crude approximations of the neurons found in brains. They may be physical devices, or purely mathematical constructs.
• Artificial Neural Networks (ANNs) are networks of Artificial Neurons, and hence constitute crude approximations to parts of real brains. They may be physical devices, or simulated on conventional computers.
• From a practical point of view, an ANN is just a parallel computational system consisting of many simple processing elements connected together in a specific way in order to perform a particular task.
• One should never lose sight of how crude the approximations are, and how over-simplified our ANNs are compared to real brains.
• Why are Artificial Neural Networks worth studying?
• They are extremely powerful computational devices (universal computers).
• Massive parallelism makes them very efficient.
• They can learn and generalize from training data, so there is no need for enormous feats of programming.
• They are particularly fault tolerant – this is equivalent to the “graceful degradation” found in biological systems.
• They are very noise tolerant – so they can cope with situations where normal symbolic systems would have difficulty.
• In principle, they can do anything a symbolic/logic system can do, and more. (In practice, getting them to do it can be rather difficult.)
• What are Artificial Neural Networks used for?
As with the field of AI in general, there are two basic goals for neural network
research:
• Brain modeling: The scientific goal of building models of how real brains work. This can potentially help us understand the nature of human intelligence, formulate better teaching strategies, or better remedial actions for brain-damaged patients.
• Artificial System Building: The engineering goal of building efficient systems for real-world applications. This may make machines more powerful, relieve humans of tedious tasks, and may even improve upon human performance.
These should not be thought of as competing goals. We often use exactly the same networks
and techniques for both. Frequently progress is made when the two approaches are allowed
to feed into each other. There are fundamental differences though, e.g. the need for
biological plausibility in brain modeling, and the need for computational efficiency in
artificial system building.
3.3 A framework for distributed representation
An artificial neural network consists of a pool of simple processing units which
communicate by sending signals to each other over a large number of weighted connections.
A set of major aspects of a parallel distributed model can be distinguished as follows:
• A set of processing units ('neurons', 'cells');
• A state of activation yk for every unit, which is equivalent to the output of the unit;
• Connections between the units. Generally each connection is defined by a weight wjk which determines the effect which the signal of unit j has on unit k;
• A propagation rule, which determines the effective input sk of a unit from its external inputs;
• An activation function Fk, which determines the new level of activation based on the effective input sk(t) and the current activation yk(t) (i.e., the update);
• An external input (bias, offset) θk for each unit;
• A method for information gathering (the learning rule);
• An environment within which the system must operate, providing input signals and, if necessary, error signals.
Figure 3.1 illustrates these basics, some of which will be discussed in the next sections.
Fig. 3.1: The basic components of an artificial neural network
3.3.1 Processing units
Each unit performs a relatively simple job: receive input from neighbours or external
sources and use this to compute an output signal which is propagated to other units. Apart
from this processing, a second task is the adjustment of the weights. The system is
inherently parallel in the sense that many units can carry out their computations at the same
time.
Within neural systems it is useful to distinguish three types of units: input units
which receive data from outside the neural network, output units which send data out of the
neural network, and hidden units whose input and output signals remain within the neural
network.
During operation, units can be updated either synchronously or asynchronously.
With synchronous updating, all units update their activation simultaneously; with
asynchronous updating, each unit has a (usually fixed) probability of updating its activation
at a time t, and usually only one unit will be able to do this at a time. In some cases the latter
model has some advantages.
3.3.2 Connections between units
In most cases we assume that each unit provides an additive contribution to the input
of the unit with which it is connected. The total input to unit k is simply the weighted sum of
the separate outputs from each of the connected units plus a bias or offset term θk:
s_k(t) = \sum_j w_{jk}(t)\, y_j(t) + \theta_k(t) \qquad (3.1)
The contribution for positive wjk is considered as an excitation and for negative wjk as
inhibition. In some cases more complex rules for combining inputs are used, in which a
distinction is made between excitatory and inhibitory inputs.
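The additive propagation rule above can be sketched in Python/NumPy (the function name and the numeric values are illustrative, not from the thesis):

```python
import numpy as np

def net_input(w_k, y, theta_k):
    """Effective input s_k of unit k (Eq. 3.1): the weighted sum of the
    outputs y_j of the connected units plus the bias/offset theta_k."""
    return float(np.dot(w_k, y) + theta_k)

# Positive weights act as excitation, negative weights as inhibition.
s = net_input(np.array([0.5, -0.3, 0.2]), np.array([1.0, 1.0, 1.0]), 0.1)
print(s)  # approximately 0.5 (= 0.5 - 0.3 + 0.2 + 0.1)
```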
3.3.3 Activation and output rules
We also need a rule which gives the effect of the total input on the activation of the
unit. We need a function Fk which takes the total input sk(t) and the current activation yk(t)
and produces a new value of the activation of the unit k:
y_k(t+1) = F_k\big(y_k(t),\, s_k(t)\big) \qquad (3.2)
Often, the activation function is a non-decreasing function of the total input of the unit:
y_k(t+1) = F_k\big(s_k(t)\big) = F_k\Big(\sum_j w_{jk}(t)\, y_j(t) + \theta_k(t)\Big) \qquad (3.3)
although activation functions are not restricted to non-decreasing functions. Generally, some
sort of threshold function is used: a hard limiting threshold function (a sgn function), or a
linear or semi-linear function, or a smoothly limiting threshold (figure 3.2). For this
smoothly limiting function often a sigmoid (S-shaped) function like

y_k = F(s_k) = \frac{1}{1 + e^{-s_k}} \qquad (3.4)

is used. In some applications a hyperbolic tangent is used, yielding output values in the
range [-1, +1].
Fig. 3.2: Various activation functions for a unit.
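The activation-function families just described can be sketched in Python/NumPy (function names are illustrative):

```python
import numpy as np

def hard_limit(s):
    # Hard limiting threshold (sgn-type) function
    return np.where(np.asarray(s) >= 0, 1.0, -1.0)

def semi_linear(s):
    # Linear response, clipped (semi-linear) to the range [0, 1]
    return np.clip(s, 0.0, 1.0)

def sigmoid(s):
    # Smoothly limiting S-shaped function of Eq. (3.4), range (0, 1)
    return 1.0 / (1.0 + np.exp(-np.asarray(s, float)))

def tanh_act(s):
    # Hyperbolic tangent, yielding outputs in the range (-1, +1)
    return np.tanh(s)
```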
3.3.4 Network topologies
In the previous section we discussed the properties of the basic processing unit in an
artificial neural network. This section focuses on the pattern of connections between the
units and the propagation of data.
As for this pattern of connections, the main distinction we can make is between:
• Feed-forward networks, where the data flow from input to output units is strictly feed-forward. The data processing can extend over multiple (layers of) units, but no feedback connections are present, that is, connections extending from outputs of units to inputs of units in the same layer or previous layers.
• Recurrent networks, which do contain feedback connections. Contrary to feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the network will evolve to a stable state in which these activations do not change anymore. In other applications, the changes of the activation values of the output neurons are significant, such that the dynamical behavior constitutes the output of the network.
3.3.5 Paradigms of learning
We can categorize the learning situations in two distinct sorts. These are:
• Supervised learning or Associative learning, in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the network (self-supervised).
• Unsupervised learning or Self-organisation, in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.
3.3.6 Modifying patterns of connectivity
Both learning paradigms discussed above result in an adjustment of the weights of
the connections between units, according to some modification rule. Virtually all learning
rules for models of this type can be considered as a variant of the Hebbian learning rule
suggested by Hebb in his classic book Organization of Behaviour (1949) (Hebb, 1949). The
basic idea is that if two units j and k are active simultaneously, their interconnection must be
strengthened. If j receives input from k, the simplest version of Hebbian learning prescribes
to modify the weight wjk with:
\Delta w_{jk} = \gamma\, y_j\, y_k \qquad (3.5)
where γ is a positive constant of proportionality representing the learning rate. Another
common rule uses not the actual activation of unit k but the difference between the actual
and desired activation for adjusting the weights:
\Delta w_{jk} = \gamma\, y_j\, (d_k - y_k) \qquad (3.6)
in which dk is the desired activation provided by a teacher. This is often called the Widrow-
Hoff rule or the delta rule.
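In code, the two update rules differ only in the term multiplying the learning rate (a Python sketch; γ = 0.1 is an arbitrary illustrative value):

```python
def hebbian_update(w_jk, y_j, y_k, gamma=0.1):
    """Eq. (3.5): if units j and k are active simultaneously,
    their interconnection is strengthened."""
    return w_jk + gamma * y_j * y_k

def delta_rule_update(w_jk, y_j, y_k, d_k, gamma=0.1):
    """Eq. (3.6), the Widrow-Hoff or delta rule: the weight is adjusted
    using the difference between desired (d_k) and actual activation."""
    return w_jk + gamma * y_j * (d_k - y_k)
```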
3.4 STRUCTURES OF ARTIFICIAL NEURAL NETWORK
3.4.1 Network Models
The interconnection of artificial neurons results in neural networks, NNW (often
called neurocomputers or connectionist systems in the literature), whose objective is to emulate
the function of a human brain in a certain domain to solve scientific, engineering, and many
other real-life problems. The structure of biological neural networks is not well understood,
and therefore, many NNW models have been proposed. A few NNW models from the
literature are listed as follows.
1) Perceptron
2) Adaline and Madaline
3) Backpropagation (BP) Network
4) Radial Basis Function Network (RBFN)
5) Modular Neural Network (MNN)
6) Learning Vector Quantization (LVQ) Network
7) Fuzzy Neural Network (FNN)
8) Kohonen’s Self-Organizing Feature Map (SOFM)
9) Adaptive Resonance Theory (ART) Network
10) Real-Time Recurrent Network
11) Hopfield Network
12) Boltzmann Machine
13) Recirculation Network
14) Brain-State-In-A-Box (BSB)
15) Bi-Directional Associative Memory (BAM) Network
For the training of the neural network in this thesis, a backpropagation network is
used. An introduction to the backpropagation network follows.
3.4.2 Backpropagation Network
The feedforward multilayer backpropagation topology, shown in Fig. 3.3, is most
commonly used in power electronics and motor drives. The name “backpropagation” comes
from the method of supervised training used for the NNW, shown by the lower two blocks in
the figure. The network is normally called a multilayer perceptron (MLP) type, but its
activation function (AF) can be different from the threshold function. The MLP-type NNW is
computationally very powerful compared with the perceptron NNW. It uses the error
backpropagation supervised training algorithm, which was first described by Paul Werbos in
1974. Subsequently, Rumelhart, Hinton, Williams, McClelland, Parker, LeCun, etc., further
contributed to this method. The example NNW shown in the figure has three input
signals (X1, X2, X3) and two output signals (Y1 and Y2). The circles represent the neurons,
which have associated AFs (not shown), and the weights are indicated by dots.
Fig.3.3: Three-layer back propagation network
The network shown has three layers: (a) input layer, (b) hidden layer, and (c) output layer.
With five neurons in the hidden layer as indicated, it is normally defined as a 3-5-2 network.
The input layer is nothing but the nodes that distribute the signals to the middle layer.
Therefore, the topology is often defined as a two-layer network. The bias source is normally
coupled to both the hidden and output layer neurons, although it is shown here for the hidden
layer only, for simplicity. Although the network handles continuous variables, the input and
output signals can be continuous, logical, or discrete bidirectional. If the I/O signals are
bipolar, the hidden layer neurons usually have a hyperbolic tangent AF and the output layer a
bipolar linear AF. On the other hand, for unipolar signals, these AFs can be sigmoidal and
unipolar linear, respectively. The signals within the NNW are processed in per-unit (pu)
magnitude, and therefore, input signal normalization and output signal descaling or
denormalization are used as shown. Although, theoretically, a three-layer NNW can solve all
mapping problems, occasionally more than three layers are used.
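A forward pass through such a 3-5-2 network, with per-unit normalization at the input, tanh AFs in the hidden layer and linear AFs at the output, can be sketched as follows (the weights are random placeholders, not trained values):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.uniform(-0.1, 0.1, size=(5, 3))   # input -> hidden weights
b1 = rng.uniform(-0.1, 0.1, size=5)        # hidden-layer bias
W2 = rng.uniform(-0.1, 0.1, size=(2, 5))   # hidden -> output weights
b2 = rng.uniform(-0.1, 0.1, size=2)        # output-layer bias

def forward(x, x_min, x_max):
    """Forward pass of a 3-5-2 network: normalize the input to per-unit
    magnitude, tanh AF in the hidden layer, linear AF in the output layer."""
    x_pu = 2.0 * (x - x_min) / (x_max - x_min) - 1.0  # scale to [-1, 1]
    h = np.tanh(W1 @ x_pu + b1)
    return W2 @ h + b2

y = forward(np.array([1.0, 2.0, 3.0]), x_min=0.0, x_max=4.0)
print(y.shape)  # (2,)
```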
Like a biological network, where the memory or intelligence is contributed in a
distributed manner by the synaptic junctions of neurons, the NNW synaptic weights
contribute similar distributed intelligence. This intelligence permits the basic input–output
mapping or pattern recognition property of NNW. This is also defined as associative
memory by which when one signal pattern is impressed at the input, the corresponding
pattern is generated at the output. This pattern generation or pattern recognition is possible
by adequate training of the NNW. Because of parallel computation, the computation is fault-
tolerant, i.e., deterioration of a few weights and/or missing links will not significantly
deteriorate the output signal quality. Besides, NNW has noise or harmonic filtering property.
3.4.3 Backpropagation Training
A NNW requires supervised training by example data rather than the traditional
programming in a computer system. This is similar to the training of a biological neural
network. Backpropagation is the most popular training method for a multilayer feed forward
network. In the beginning, input-output example data patterns can be obtained from
experimental results or from a simulation study if a mathematical model of the plant is
available. Analytical data patterns can also be used. An initial NNW configuration is created
with the desired input and output layer neurons dictated by the number of respective signals,
a hidden layer with a few neurons, and appropriate AFs. Small random weights are selected
so that neuron outputs do not get saturated. With one input pattern, the output is calculated
(defined as forward pass) and compared with the desired or target output pattern. With the
calculated error (mean-squared-error (MSE)), the weights are then altered in the backward
direction by backpropagation algorithm until the error between the output pattern and the
desired pattern is very small and acceptable (see Fig.3.3). A round trip (forward and reverse
passes) of calculations is defined as an epoch. Similar training is repeated with all the
patterns so that matching occurs for all the patterns. At this point, the network is said to have
been trained satisfactorily to perform useful functions. If the error does not converge
sufficiently, it may be necessary to increase the number of neurons in the hidden layer, and/or
add extra hidden layer(s). It has been explained that a three-layer NNW can solve practically
all pattern matching problems. Instead of selecting one pattern at a time in sequence, batch
method training can be used, where all the patterns are presented simultaneously and final
weight updates are made after processing all the patterns.
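The training procedure described above — forward pass, error computation, backward weight adjustment, repeated over epochs in batch mode — can be sketched in Python/NumPy for a small two-weight-layer network. The toy data set and all hyperparameters here are illustrative, not taken from the thesis:

```python
import numpy as np

def train(X, D, n_hidden=5, eta=0.2, epochs=20000, tol=1e-4, seed=1):
    """Batch backpropagation: forward pass, MSE, then gradient-descent
    weight updates, repeated until the error is small and acceptable."""
    rng = np.random.default_rng(seed)
    # Small random initial weights so the neuron outputs do not saturate
    V = rng.uniform(-0.5, 0.5, (X.shape[1] + 1, n_hidden))   # input -> hidden
    W = rng.uniform(-0.5, 0.5, (n_hidden + 1, D.shape[1]))   # hidden -> output
    Xb = np.hstack([X, np.ones((len(X), 1))])                # append bias input
    mse = np.inf
    for _ in range(epochs):
        H = np.tanh(Xb @ V)                        # forward pass, hidden layer
        Hb = np.hstack([H, np.ones((len(H), 1))])
        O = Hb @ W                                 # linear output layer
        E = D - O
        mse = np.mean(E ** 2)
        if mse < tol:                              # error acceptable: stop
            break
        dW = Hb.T @ E / len(X)                     # output-layer gradient
        dH = (E @ W[:-1].T) * (1.0 - H ** 2)       # backpropagated hidden error
        dV = Xb.T @ dH / len(X)
        W += eta * dW                              # batch weight updates
        V += eta * dV
    return V, W, mse

# Toy patterns (XOR) just to exercise the training loop:
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
D = np.array([[0.], [1.], [1.], [0.]])
V, W, mse = train(X, D)
print(round(float(mse), 4))
```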
The weight adjustment for the minimization of error uses the standard gradient
descent technique, where the weights are iterated one at a time starting backward from the
output layer. Consider that a network is being trained with input pattern p, and the
squared output error for all the output layer neurons of the network is given by:

E_p = \sum_{j=1}^{Q} (d_{jp} - o_{jp})^2 \qquad (3.7)

where d_{jp} = the desired output of the j-th neuron in the output layer, o_{jp} = the
corresponding actual output, Q = the dimension of the output vector, o_p = the actual network
output vector, and d_p = the corresponding desired output vector. The total Mean Squared
Error (MSE) for the P set of patterns is then given by

\text{MSE} = \frac{1}{P} \sum_{p=1}^{P} E_p \qquad (3.8)
The weights of the neurons are altered to minimize the value of the objective
function MSE by the gradient descent method, as mentioned before. The weight update equation
is then given as:

w_{ij}(\text{new}) = w_{ij}(\text{old}) - \eta\, \frac{\partial(\text{MSE})}{\partial w_{ij}} \qquad (3.9)

where η = the learning rate, w_{ij}(new) = the new weight between the i-th and j-th neurons,
and w_{ij}(old) = the corresponding old weight. The weights are iteratively updated for all the
P training patterns.
Sometimes the per-pattern squared error E_p is taken as the objective function instead of the
total MSE. In order to be sure that the MSE converges to a global minimum (i.e., does not get
locked into a local minimum), a momentum term \mu\,\Delta w_{ij}(\text{previous}) is added
to the right-hand side of (3.9), where µ is a small value.
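A single weight update with the momentum term can be sketched as follows (the function name and the default values of η and µ are illustrative assumptions):

```python
def update_with_momentum(w, grad, prev_delta, eta=0.1, mu=0.05):
    """One gradient-descent step in the style of Eq. (3.9) with a momentum
    term added: delta = -eta * dE/dw + mu * prev_delta, where mu is a small
    value. The carried-over delta helps avoid getting locked into a local
    minimum of the error surface."""
    delta = -eta * grad + mu * prev_delta
    return w + delta, delta

w, d = update_with_momentum(1.0, 0.5, 0.0)
w, d = update_with_momentum(w, 0.5, d)   # the next step reuses the last delta
```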
Fig. 3.4 illustrates the flowchart of the error backpropagation training algorithm for
a basic two-layer network as shown in Fig. 3.3.
Fig. 3.4: Error backpropagation training algorithm flowchart
The learning begins with the feedforward recall phase (step 2). After a single pattern
vector Z is submitted at the input, the layer responses ‘y’ and ‘o’ are computed in this
phase. Then the error signal computation phase (step 4) follows. The error signal vector
must be determined in the output layer first, and then it is propagated towards the input
nodes. The K×J weights within the matrix W are subsequently adjusted in step 5. Finally, the
J×I weights within the matrix V are adjusted in step 6. The cumulative error of the input-to-
output mapping is computed in step 3 as a sum over all continuous output errors in the entire
training set. The final error for the entire training cycle is calculated after each completed
pass through the training set. The learning procedure stops when the final error value falls
below the upper bound Emax, as shown in step 8.
3.5 Maximum Power Point Tracking of PV cell Using Neural Networks
The block diagram for identifying the maximum power point of a PV cell using
neural networks is shown in Fig. 3.5.
Fig.3.5: Block Diagram for the identification of optimal operating point
Fig.3.6: Configuration of a Neural Network
The configuration of the 3-layer feedforward neural network is shown in Fig. 3.6. The neural
network is utilized to identify the maximum power point current Imp of the PV module. The
network has 3 layers: an input, a hidden, and an output layer. The numbers of neurons are 3,
5, and 1 in the input, hidden and output layers, respectively. The neurons in the input layer
receive the solar irradiance (G) and the cell temperature (Tc). These signals are directly
passed to the neurons in the hidden layer. The neuron in the output layer provides the
identified maximum Imp.
For each neuron in the hidden and the output layer, the output Oi (k) is given as follows:
O_i(k) = \frac{1}{1 + e^{-I_i(k)}} \qquad (3.10)

The term Ii(k) is the input signal given to the neuron i at the k-th sampling instant. The input
Ii(k) is given by the weighted sum from the previous nodes as follows:
I_i(k) = \sum_j W_{ij}\, O_j(k) \qquad (3.11)
In the above equation, Wij is the connection weight from neuron j to neuron i,
and Oj(k) is the output from neuron j. The process of determining the connection weights is
referred to as the training process. In the training process, we need a set of input-output patterns
for the neural network. These computations are performed off-line during the training
process. With the training patterns, the connection weights Wij are adjusted recursively until
the best fit is achieved for the input-output patterns in the training data. A commonly used
approach is the generalized delta rule, where the sum of the squared errors described below
is minimized during the training process.
E = \sum_{k=1}^{N} \big(T(k) - O(k)\big)^2 \qquad (3.12)
where N is the total number of training patterns, T(k) is the target output from the output
node, and O(k) is the computed output. For all the training patterns, the error function E is
evaluated, and the connection weights are updated to minimize the error function E.
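The quantities above can be expressed compactly in Python (a sketch; the sigmoid AF and any weight values supplied to these helpers are illustrative assumptions):

```python
import numpy as np

def neuron_output(I_ik):
    """Eq. (3.10): sigmoidal output O_i(k) of neuron i at the k-th sampling."""
    return 1.0 / (1.0 + np.exp(-I_ik))

def neuron_input(W_i, O_prev):
    """Eq. (3.11): weighted sum of the previous layer's outputs O_j(k)."""
    return float(np.dot(W_i, O_prev))

def sum_squared_error(T, O):
    """Eq. (3.12): squared error summed over all N training patterns."""
    T, O = np.asarray(T, float), np.asarray(O, float)
    return float(np.sum((T - O) ** 2))
```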
3.5.1 Neural Network Model Development in MATLAB/Simulink
The set of input-output patterns required for the training process of the neural
network is calculated from the simulated model of the PV array presented in the last section. In
this training process, the cell temperature is stepped in increments of 5°C from 0°C to 75°C,
in combination with the irradiance stepped in increments of 200 W/m² from 0 to 2000 W/m²,
and Imp is calculated for each combination of cell temperature and irradiance. The sample
training data used to train the neural network is shown below for constant TC = 25°C and
varying irradiance from 200 to 2000 W/m².
Table-3.1: Training data to train the neural network

TC      G (W/m²)   Imp (A)   Vmp (V)   P (W)
25°C    200        0.477     56.5      27.3
        400        0.956     59        57.4
        600        1.437     62.2      88.4
        800        1.913     61        118.5
        1000       2.394     61.2      149.5
        1200       2.875     64.4      182
        1400       3.346     62.8      212
        1600       3.827     62        241
        1800       4.298     61.8      270
        2000       4.798     61.6      300
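For training, the patterns of Table 3.1 can be held as arrays, with the inputs scaled to per-unit magnitude as discussed in Section 3.4.2 (the normalization bases of 2000 W/m² and 75°C are illustrative choices, not prescribed by the thesis):

```python
import numpy as np

# Training patterns from Table 3.1 (Tc = 25 degC, G in W/m^2, target Imp in A)
G   = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000], float)
Tc  = np.full_like(G, 25.0)
Imp = np.array([0.477, 0.956, 1.437, 1.913, 2.394,
                2.875, 3.346, 3.827, 4.298, 4.798])

# Normalize the inputs to per-unit magnitude before training
X = np.column_stack([G / 2000.0, Tc / 75.0])
print(X.shape)  # (10, 2)
```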
3.5.2 Training of a Neural Network
Fig. 3.7: Training of the neural network
The training of the neural network uses the solar irradiance and the cell temperature as
the input patterns. The target pattern is given by the measured Imp for training the neural network.
The Imp is calculated for different values of irradiance and cell temperature using the PV
module modeled above. These calculated Imp values are given as training data to the neural
network. Fig. 3.7 shows the convergence of the error during the training process.
During the training process, the convergence error is taken as 0.01.
The graphs for the Imp of the neural network and the calculated values of the PV model are
combined to show the error between the two. The graphs are drawn for constant Tc = 25°C
and varying G from 200 W/m² to 2000 W/m².
Fig. 3.8(a): Calculated Imp of the PV model
Fig. 3.8(b): Imp of the neural network
Fig. 3.8(c): Combined graph of Imp for both the neural network and the calculated values
From the above graph (Fig. 3.8(c)), it is to be noted that the neural network finds the Imp of
a PV array for MPPT with high accuracy. Therefore, a neural network based control
algorithm is implemented for MPPT in this thesis.
3.6 Summary
In this chapter, an introduction to neural networks was given, and then an approach to
continuously track the maximum power from a PV array using neural networks was
presented. An MPPT control algorithm was trained using neural networks, the simulation
results were presented, and the Imp identified by the neural network was compared with the
data used to train it. The efficiency of the proposed neural network has been presented for
identifying the optimal operating point for the maximum power tracking control of the PV
modules. Despite the small set of patterns utilized for the training of the neural network, the
network gives accurate predictions over a wide variety of operating modes. The accuracy
does not degrade following the seasonal variations of insolation and temperature.