Convolutional Operation
Vertical Edge Detection
Filter / kernel. Implementations: Python conv_forward, TensorFlow tf.nn.conv2d, Keras Conv2D.
Vertical edge filter:
 1  0 -1
 1  0 -1
 1  0 -1
How it works: take a 6x6 image whose left half is bright (10s) and right half is dark (0s). Convolving it with this 3x3 filter gives a 4x4 output with a high-valued band down the middle: the vertical edge is detected.
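The vertical-edge example above can be reproduced in a few lines of NumPy. This is a sketch, not the course's conv_forward: the helper name conv2d is mine, and it implements the valid cross-correlation that deep learning calls "convolution".

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation (what deep learning calls 'convolution')."""
    n, f = image.shape[0], kernel.shape[0]
    out = np.zeros((n - f + 1, n - f + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + f, j:j + f] * kernel)
    return out

# 6x6 image: bright (10s) on the left half, dark (0s) on the right half
image = np.array([[10, 10, 10, 0, 0, 0]] * 6, dtype=float)

vertical = np.array([[1, 0, -1],
                     [1, 0, -1],
                     [1, 0, -1]], dtype=float)

# The 4x4 result has 30s in its two middle columns: the edge shows up
# as a bright band.
print(conv2d(image, vertical))
```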
Horizontal Edge Detection
Horizontal filter / kernel (the vertical filter rotated):
 1  1  1
 0  0  0
-1 -1 -1
A 6x6 image * a 3x3 filter gives a 4x4 output.
In general: n x n * f x f → (n - f + 1) x (n - f + 1)
Other filter options:
 1 0 -1
 2 0 -2     Sobel filter: puts extra emphasis on the central row pixel
 1 0 -1

 3  0  -3
10  0 -10    Scharr filter: puts even more emphasis on the central row pixel
 3  0  -3

Flip (transpose) these to get horizontal edge detection.
Rather than hand-picking the filter values, treat them as parameters:
 w1 w2 w3
 w4 w5 w6     learn these as parameters such that this gives us a good edge detector
 w7 w8 w9
Learning is even more robust: it helps the network learn the features it is trying to detect, and it can learn edges at angles too, not just vertical and horizontal ones.
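To make "learn these as parameters" concrete, the sketch below (plain NumPy, all helper names mine) fits the nine weights w1..w9 by gradient descent so that their output matches a hand-designed vertical edge detector. The learned weights need not equal that filter exactly; they only need to produce the same outputs on this image.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation."""
    n, f = image.shape[0], kernel.shape[0]
    out = np.zeros((n - f + 1, n - f + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + f, j:j + f] * kernel)
    return out

image = np.array([[10, 10, 10, 0, 0, 0]] * 6, dtype=float)
# target: the output of the hand-designed vertical edge detector
target = conv2d(image, np.array([[1, 0, -1]] * 3, dtype=float))

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3)) * 0.01   # the 9 weights w1..w9, random init
lr = 1e-3
for _ in range(500):
    pred = conv2d(image, w)
    d_pred = 2 * (pred - target) / pred.size    # dMSE/dpred
    grad = np.zeros_like(w)                      # dMSE/dw
    for i in range(pred.shape[0]):
        for j in range(pred.shape[1]):
            # each output position contributes its input patch, scaled
            grad += d_pred[i, j] * image[i:i + 3, j:j + 3]
    w -= lr * grad

loss = np.mean((conv2d(image, w) - target) ** 2)
print(loss)   # close to zero: the learned weights act as an edge detector
```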
Padding
6x6 mat * 3x3 mat → 4x4 mat
Problems:
1. shrinking output
2. throwing away info from the edges (corner pixels appear in far fewer windows than central ones)
Fix: pad the border with zeros before convolving.
p = padding amount (1 in the example)
6x6 mat padded with p = 1 → 8x8 padded mat; 8x8 * 3x3 → 6x6 mat
Output size: (n + 2p - f + 1) x (n + 2p - f + 1)
Valid / Same convolutions
Valid: no padding applied
  n x n * f x f → (n - f + 1) x (n - f + 1)
Same: pad so that the output is the same size as the input
  n + 2p - f + 1 = n  →  p = (f - 1)/2
f is usually odd (so that p is an integer and the filter has a central pixel).
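The "same" padding amount follows directly from the formula n + 2p - f + 1 = n; a tiny helper (name mine) makes the odd-filter-size point explicit:

```python
# Solving n + 2p - f + 1 = n gives p = (f - 1)/2, an integer exactly
# when f is odd -- one reason filter sizes are usually odd.
def same_padding(f):
    assert f % 2 == 1, "same padding needs an odd filter size"
    return (f - 1) // 2

print(same_padding(3))  # 1
print(same_padding(5))  # 2
```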
Strided Convolutions
Example: 7x7 input * 3x3 filter with stride s = 2 → 3x3 output.
If a window would hang over the edge, it is dropped because of the floor.
Dimensions, with padding p and stride s:
n x n * f x f → floor((n + 2p - f)/s + 1) x floor((n + 2p - f)/s + 1)
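The output-size formula with stride can be wrapped in a one-line helper (the name conv_output_dim is my own):

```python
import math

def conv_output_dim(n, f, p=0, s=1):
    """Output size of one spatial dimension: floor((n + 2p - f)/s) + 1."""
    return math.floor((n + 2 * p - f) / s) + 1

# 7x7 input, 3x3 filter, stride 2, no padding -> 3x3 output
print(conv_output_dim(7, 3, p=0, s=2))  # 3
```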
Convolutions Over Volume
6x6x3 * 3x3x3 → 4x4. The dimensions are height x width x channels (R, G, B); the filter must have the same number of channels as the input.
To detect edges only in the red channel, use the vertical edge detector in the filter's red channel and all zeros in the green and blue channels:
R:  1 0 -1    G:  0 0 0    B:  0 0 0
    1 0 -1        0 0 0        0 0 0
    1 0 -1        0 0 0        0 0 0
Multiple Filters
Applying a vertical 3x3x3 filter to a 6x6x3 image gives one 4x4 output; a horizontal filter gives another 4x4. Stacking the 2 different filters' outputs gives 4x4x2.
Dimensions: n x n x n_c * f x f x n_c → (n - f + 1) x (n - f + 1) x n_c', where n_c' = number of filters.
There is 1 bias parameter per filter.
One layer of a convolutional network: each filter's output (vertical and horizontal alike) gets its bias added and ReLU applied, i.e. ReLU(w * a + b), and the results are stacked:
z[1] = W[1] a[0] + b[1]
a[1] = g(z[1])
where a[0] is the input image, the filters play the role of W[1], and g is ReLU.
Derivatives (backprop, per filter): dW = Σ_h Σ_w dZ_hw · a_slice,   db = Σ_h Σ_w dZ_hw
So a 6x6x3 input with 2 filters → a[1] of shape 4x4x2.
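One forward pass of such a layer can be sketched in NumPy (conv_layer_forward is my own illustrative helper; stride 1, no padding, ReLU activation):

```python
import numpy as np

def conv_layer_forward(a_prev, W, b):
    """
    One convolution layer (valid, stride 1):
      a_prev: (n, n, n_c)        input volume
      W:      (f, f, n_c, n_cp)  n_cp filters, each f x f x n_c
      b:      (n_cp,)            one bias per filter
    Returns a: (n-f+1, n-f+1, n_cp) after ReLU.
    """
    n, _, n_c = a_prev.shape
    f, _, _, n_cp = W.shape
    out = n - f + 1
    z = np.zeros((out, out, n_cp))
    for i in range(out):
        for j in range(out):
            patch = a_prev[i:i + f, j:j + f, :]      # f x f x n_c slice
            for k in range(n_cp):
                z[i, j, k] = np.sum(patch * W[:, :, :, k]) + b[k]
    return np.maximum(z, 0)                          # ReLU

rng = np.random.default_rng(0)
a0 = rng.normal(size=(6, 6, 3))      # 6x6x3 input
W1 = rng.normal(size=(3, 3, 3, 2))   # 2 filters, each 3x3x3
b1 = np.zeros(2)
a1 = conv_layer_forward(a0, W1, b1)
print(a1.shape)  # (4, 4, 2)
```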
Notation
If layer l is a convolution layer:
f[l] = filter size
p[l] = padding
s[l] = stride
Input:  n_H[l-1] x n_W[l-1] x n_c[l-1]
Output: n_H[l] x n_W[l] x n_c[l], where
n_H[l] = floor((n_H[l-1] + 2p[l] - f[l]) / s[l] + 1)   (and the same for n_W[l])
Each filter: f[l] x f[l] x n_c[l-1]
Activations: a[l] is n_H[l] x n_W[l] x n_c[l]
Weights: f[l] x f[l] x n_c[l-1] x n_c[l]   (n_c[l-1] channels from the last layer; n_c[l] = number of filters in layer l)
Bias: n_c[l], stored as (1, 1, 1, n_c[l])
Example:
Input a[0]: 39x39x3 (n_H = n_W = 39, n_c = 3)
Layer 1: f[1] = 3, s[1] = 1, p[1] = 0, 10 filters → 37x37x10
Layer 2: f[2] = 5, s[2] = 2, p[2] = 0, 20 filters → 17x17x20
Layer 3: f[3] = 5, s[3] = 2, p[3] = 0, 40 filters → 7x7x40
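The layer sizes in this example follow from the output-size formula; a quick script checks them and also counts parameters per layer from the weight and bias shapes in the notation above (the helper name conv_dims is mine):

```python
def conv_dims(n, f, p, s):
    """floor((n + 2p - f)/s) + 1 for a non-negative numerator."""
    return (n + 2 * p - f) // s + 1

layers = [  # (f, s, p, number of filters) per layer of the example
    (3, 1, 0, 10),
    (5, 2, 0, 20),
    (5, 2, 0, 40),
]
n, n_c = 39, 3
for f, s, p, nf in layers:
    n = conv_dims(n, f, p, s)
    params = (f * f * n_c + 1) * nf   # weights per filter + 1 bias, times filters
    print(f"{n}x{n}x{nf}, {params} parameters")
    n_c = nf
```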
Vectorizing: unroll the final 7x7x40 = 1960 values into a vector and feed it to a logistic regression / softmax unit for the final prediction.
Types of layers in a convolutional network
convolution (CONV)
pooling (POOL)
fully connected (FC)
Pooling Layers
Max Pooling
Hyperparameters: f = 2, s = 2
Split the input into f x f regions and take the max from each region.
Intuition: the input numbers are features or activations from the previous layer; if max pooling finds a particular feature anywhere in a region, it preserves it through the high number.
No parameters to learn, only hyperparameters; pooling is a fixed computation.
Example: 5x5 input, f = 3, s = 1
(n + 2p - f)/s + 1 = (5 - 3)/1 + 1 = 3 → 3x3 output
Pooling is applied to each channel independently, so n_c stays the same.
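A minimal max-pooling sketch in NumPy (max_pool is my own helper name; single channel, same floor formula as convolution):

```python
import numpy as np

def max_pool(a, f, s):
    """Max pooling on one channel; output dim is floor((n - f)/s) + 1."""
    n = a.shape[0]
    out = (n - f) // s + 1
    pooled = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            pooled[i, j] = a[i * s:i * s + f, j * s:j * s + f].max()
    return pooled

a = np.array([[1, 3, 2, 1],
              [2, 9, 1, 1],
              [1, 3, 2, 3],
              [5, 6, 1, 2]], dtype=float)
# f = 2, s = 2: each 2x2 region collapses to its max
print(max_pool(a, f=2, s=2))
```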
Average Pooling
Hyperparameters: f = 2, s = 2
Takes the average instead of the max; helps make the size of the matrix smaller by taking the averages.
Used occasionally very deep in a network to collapse a representation, e.g. 7x7x1000 → 1x1x1000.
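The 7x7x1000 → 1x1x1000 collapse is average pooling with f equal to the whole spatial extent; in NumPy that is just a mean over the spatial axes (the toy 7x7x3 volume here is my own example):

```python
import numpy as np

# Average pooling with f = n collapses each channel to one number,
# e.g. 7x7x1000 -> 1x1x1000. Shown here on a toy 7x7x3 volume.
a = np.arange(7 * 7 * 3, dtype=float).reshape(7, 7, 3)
pooled = a.mean(axis=(0, 1), keepdims=True)   # shape (1, 1, 3)
print(pooled.shape)
```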
Example of a fullConvolutional Network
CONV1 → POOL1 → CONV2 → POOL2 → FC3 → FC4 → Softmax
(max pooling after each convolution; a softmax unit at the end)
Because a pooling layer has no weights, a CONV and its POOL are usually counted together: layer 1 = CONV1 + POOL1, layer 2 = CONV2 + POOL2, and layers 3 and 4 are the fully connected layers FC3 and FC4.
As we take a picture (suppose RGB) and apply a filter/kernel to it, we make that picture go through a filter such as edge detection and leave out other features of the image. The image we get is particular to that filter. As we apply more filters/kernels to the image, we can stack the output images; we get images that have certain filters applied to them, and these images tend to be smaller in width and height than the original.

We can view this as a neural network if we stack all the pixels of the original picture and connect them to the weights, which are the values of the pixels in the filter. The idea of convolutional neural networks is that we connect many of these pixel values / input units to the weights / hidden units, but not all of them to every hidden unit. The missing connections leave us with a simpler network with fewer parameters than a traditional neural network, where every input unit is connected to every hidden unit.

As we use a filter to get another image that we feed to another convolutional layer, we separate out features and feed them individually to further layers, which eventually helps us identify what the image is. We train the parameters, i.e. the pixel values of the filter, to make new filters that detect the features important for identifying the image. As we apply a filter, we extract a part or a feature of the image that matters for our classification.

What pooling does is average out (or take the max over) regions and make the height and width smaller, to prevent having a large number of parameters. It is like a control check on the network and helps keep the number of parameters under control.
Why Convolutions
CONV nets have comparatively far fewer parameters:
Parameter sharing: a feature detector (such as a vertical or horizontal edge detector) that is useful in one part of the image is useful in all parts of the image.
Sparsity of connections: in each layer, each output value depends only on a small number of inputs.
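Parameter sharing in numbers. The sizes below are a standard illustration of my own choosing (not taken from the notes above): mapping a 32x32x3 image to a 28x28x6 volume with a fully connected layer versus six 5x5 convolution filters.

```python
n_in = 32 * 32 * 3    # 3072 input values
n_out = 28 * 28 * 6   # 4704 output values

fc_params = n_in * n_out              # fully connected: every input to every output
conv_params = (5 * 5 * 3 + 1) * 6     # 6 filters of 5x5x3, plus one bias each

print(fc_params)    # about 14.5 million
print(conv_params)  # 456
```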
[Figure: the pixel matrix of the original image connected, through the trained filter's weight values w1, w2, w3, ..., to the output; training these weights is how the filter is learned.]