
CHAPTER 1

INTRODUCTION

1.1 BASIC CONCEPTS

Intrusions are the result of flaws in the design and implementation of

computer systems, operating systems, applications, and communication

protocols. Exploitation of these vulnerabilities is becoming easier because the

knowledge and tools to launch attacks are readily available and usable. It has become easy for a novice to find attack programs on the Internet and use them without understanding how they were designed by security specialists.

There are two types of intrusion detection systems: network intrusion detection systems and host-based intrusion detection systems. A host-based intrusion detection system consists of an agent on a host that identifies intrusions by analyzing system calls, application logs, file-system modifications (binaries, password files, capability databases, access control lists, etc.), and other host activities and state. In a HIDS, sensors usually consist of a software agent. Some application-based IDS are also part of this category.

1.2 OBJECTIVE

Intrusion detection (ID) is a type of security management system for

computers and networks. An ID system gathers and analyzes information from

various areas within a computer or a network to identify possible security

breaches, which include both intrusions (attacks from outside the organization)

and misuse (attacks from within the organization). ID uses vulnerability assessment (sometimes referred to as scanning), which is a technology developed to assess the security of a computer system or network.


Intrusion detection functions include:

1. Monitoring and analyzing both user and system activities

2. Analyzing system configurations and vulnerabilities

3. Assessing system and file integrity

4. Ability to recognize patterns typical of attacks

5. Analysis of abnormal activity patterns

6. Tracking user policy violations

The physical and data link layers are vulnerable to intrusions specific to these communication layers. The mainstay of this project is to design a tool that identifies the intruder in a system, prevents the intruder from working, captures the intruder's activity during system login, and determines the hidden layer of the ANN. It is, therefore, essential to take these considerations into account when designing and deploying an intrusion detection system. The final chapter addresses aspects lacking in existing studies, concludes this study, and outlines some directions for future research work.

1.3 APPLICATION DOMAIN

Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.


Distributed systems are groups of networked computers whose processors cooperate on a common task. The terms "concurrent computing", "parallel computing", and

"distributed computing" have a lot of overlap, and no clear distinction exists

between them. The same system may be characterized both as "parallel" and

"distributed"; the processors in a typical distributed system run concurrently in

parallel.[14] Parallel computing may be seen as a particular tightly-coupled form

of distributed computing,[15] and distributed computing may be seen as a

loosely-coupled form of parallel computing.[5] Nevertheless, it is possible to

roughly classify concurrent systems as "parallel" or "distributed" using the

following criteria:

1. In parallel computing, all processors have access to a shared memory.

Shared memory can be used to exchange information between

processors.

2. In distributed computing, each processor has its own private memory

(distributed memory). Information is exchanged by passing messages

between the processors.

Artificial Neural Networks (ANNs) are computational models which mimic

the properties of biological neurons. A neuron, which is the base of an ANN, is

described by a state, synapses, a combination function, and a transfer function.

The state of the neuron, which is a Boolean or real value, is the output of the

neuron. Each neuron is connected to other neurons via synapses. Synapses are

associated with weights that are used by the combination function to achieve a

precomputation, generally a weighted sum, of the inputs. The activation function, also known as the transfer function, computes the output of the neuron


from the output of the combination function. An artificial neural network is

composed of a set of neurons grouped in layers that are connected by synapses.

Application areas include system identification and control (vehicle control,

process control), quantum chemistry,[2] game-playing and decision making

(backgammon, chess, racing), pattern recognition (radar systems, face

identification, object recognition and more), sequence recognition (gesture,

speech, handwritten text recognition), medical diagnosis, financial applications

(automated trading systems), data mining (or knowledge discovery in databases,

"KDD"), visualization and e-mail spam filtering.

1. Function approximation, or regression analysis, including time series

prediction, fitness approximation and modeling.

2. Classification, including pattern and sequence recognition, novelty

detection and sequential decision making.

3. Data processing, including filtering, clustering, blind source separation

and compression.

4. Robotics, including directing manipulators, Computer numerical control.


CHAPTER 2

SYSTEM ANALYSIS

2.1 EXISTING SYSTEM

A key problem that many researchers face is how to choose the optimal set of features, as not all features are relevant to the learning algorithm; in some cases, irrelevant and redundant features can introduce noisy data that distracts the learning algorithm, severely degrading the accuracy of the detector. These detectors are not capable of detecting specific attacks. Some existing work tried to build an IDS that functioned at the data link layer, but most intrusion detection systems examine only the network layer and higher abstraction layers for extracting and selecting features and ignore the MAC layer header, even though MAC layer header attributes can serve as input features for the learning algorithm that detects intrusions.

2.1.1 DRAWBACKS

1. De-authentication attacks

2. Unable to identify intruder

2.2 PROPOSED SYSTEM

We propose a model that efficiently detects specific intrusions. The mainstay of this project is to design a tool that identifies the intruder in a system, which greatly increases the accuracy of the IDS. Agents at the user side transmit information to the management station; the system thus prevents the intruder from working and captures the intruder's activity during system login. We also show an implementation of artificial neural networks that determines the hidden layer of the network.

2.2.1 Advantages of proposed system:

1. Increase the accuracy

2. Measure the relevance of each feature

3. Detection and ranking of the attacks

2.3. FEASIBILITY STUDY

2.3.1 Economical Feasibility

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that can be poured into research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

2.3.2 Operational Feasibility

This study checks the level of acceptance of the system by the

user. This includes the process of training the user to use the system efficiently.

The user must not feel threatened by the system, instead must accept it as a

necessity. The level of acceptance by the users solely depends on the methods

that are employed to educate the user about the system and to make him

familiar with it. His level of confidence must be raised so that he is also able to

make some constructive criticism, which is welcomed, as he is the final user of

the system.


2.3.3 Technical Feasibility

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system. Once technical feasibility is established, it is important to consider the monetary factors as well, since developing a particular system may be technically possible yet require huge investments while yielding fewer benefits. For evaluating this, economic feasibility is assessed.


CHAPTER 3

SYSTEM SPECIFICATION

3.1 HARDWARE SPECIFICATION

Processor Type : Intel Core 2 Duo

Ram : 2 GB RAM

Hard disk : 40 GB

Device : Web Camera

3.2 SOFTWARE SPECIFICATION

Operating System : Windows XP

Programming Package : Visual Studio.Net 2008 (C#.Net)

Facility : Net


CHAPTER 4

PROJECT DESCRIPTION

4.1 PROBLEM DEFINITION

Some existing work tried to build an IDS that functioned at the data link layer. The intrusion detection systems examine only the network layer and higher abstraction layers for extracting and selecting features. The IDS cannot detect the specific attack and cannot identify the intruder.

4.2 OVERVIEW OF THE PROJECT

An intrusion detection system (IDS) is a device or software application

that monitors network and/or system activities for malicious activities or policy

violations and produces reports to a Management Station. Intrusion prevention

is the process of performing intrusion detection and attempting to stop detected

possible incidents. Intrusion detection and prevention systems (IDPS) are

primarily focused on identifying possible incidents, logging information about

them, attempting to stop them, and reporting them to security administrators. In

addition, organizations use IDPSs for other purposes, such as identifying

problems with security policies, documenting existing threats, and deterring

individuals from violating security policies. IDPSs have become a necessary

addition to the security infrastructure of nearly every organization.

Our model efficiently detects specific intrusions. The mainstay of this project is to design a tool that identifies the intruder in a system, which greatly increases the accuracy of the IDS. The agents at the user side transmit information to the management station; the system thus prevents the intruder from working and captures the intruder's activity during system login. We also show an implementation of artificial neural networks that determines the hidden layer of the network.

4.3 MODULE DESCRIPTION

Based on the problem under study, the project can be divided into four modules and implemented to give a final product:

1. User authentication

2. Perceptron

3. Multilayer Perceptron

4. Hybrid Multilayer Perceptron

4.3.1 USER AUTHENTICATION

Authentication is the act of establishing or confirming something

as authentic, that is, that claims made by or about the subject are true. An

authentication factor is a piece of information used to authenticate or verify a

person's identity on appearance or in a procedure for security purposes and with

respect to individually granted access rights. If user login fails, the IDS captures the intruder's image using the web camera and sends it to the user's mail ID.
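The failed-login flow above can be sketched as follows. This is a minimal Python sketch, not the project's C#.Net code; `capture_webcam_frame` is a hypothetical stand-in for the real camera API, and the SMTP send is shown but left commented out.

```python
import smtplib
from email.message import EmailMessage

def capture_webcam_frame():
    # Hypothetical stand-in for the real web-camera capture call;
    # here we just return placeholder JPEG-like bytes.
    return b"\xff\xd8\xff\xe0" + b"\x00" * 16

def alert_user(user_email, failed_username):
    # Build a mail carrying the intruder's snapshot as an attachment.
    msg = EmailMessage()
    msg["Subject"] = f"Intrusion alert: failed login for '{failed_username}'"
    msg["From"] = "ids@localhost"
    msg["To"] = user_email
    msg.set_content("A failed login was detected; snapshot attached.")
    msg.add_attachment(capture_webcam_frame(), maintype="image",
                       subtype="jpeg", filename="intruder.jpg")
    # In the real system the IDS would now send the mail, e.g.:
    # with smtplib.SMTP("localhost") as s:
    #     s.send_message(msg)
    return msg

msg = alert_user("owner@example.com", "admin")
```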

4.3.2 PERCEPTRON

There are three types of layers: input, hidden, and output layers. The

input layer is composed of input neurons that receive their values from external

devices such as data files or input signals. The hidden layer is an intermediary

layer containing neurons with the same combination and transfer functions.

Finally, the output layer provides the output of the computation to the external

applications.


Fig 4.1 Neural network

An interesting property of ANNs is their capacity to dynamically adjust the weights of the synapses to solve a specific problem. There are two phases in the operation of artificial neural networks. The first phase is the learning phase, in which the network receives the input values with their corresponding outputs, called the desired outputs. In this phase, the weights of the synapses are dynamically adjusted according to a learning algorithm. The difference between the output of the neural network and the desired output gives a measure of the performance of the network. The most widely used learning algorithm is the back-propagation algorithm. In the second phase, called the generalization phase, the neural network is capable of extending the learned examples to new examples not seen before. The learning phase is resource demanding owing to the iterative nature of the operation mode of the ANN. Once the network is trained, the processing of a new input is generally fast.

The perceptron is a binary classifier used for classification of linearly separable problems. It has three layers: an input layer, a hidden layer, and an output layer. The input layer is composed of input neurons that receive their values from external devices. The hidden layer is an intermediary layer containing neurons. The output layer provides the output of the computation to external applications.

Fig 4.2 Perceptron

4.3.2.1. BINARY GATE

To use the BrainNet library in this project, we create a reference from our project to the BrainNet.NeuralFramework.Dll library file. In the constructor of the class, we create a neural network with two neurons in the first layer, two neurons in the hidden layer, and one neuron in the output layer. The Train function passes a training data object (consisting of inputs and outputs) to the Train Network function of the library. The Run function passes an array list as input to the Run Network function of the library.
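BrainNet's internals are not shown in this report, so the following is a generic Python sketch of the same 2-2-1 topology trained by back propagation; the class and method names are illustrative, not BrainNet's API.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TwoTwoOneNet:
    """Two input, two hidden, one output neuron, as in the gate example."""
    def __init__(self):
        r = lambda: random.uniform(-1.0, 1.0)
        self.w_h = [[r(), r(), r()], [r(), r(), r()]]  # hidden weights + bias
        self.w_o = [r(), r(), r()]                     # output weights + bias

    def run(self, x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in self.w_h]
        return sigmoid(self.w_o[0] * h[0] + self.w_o[1] * h[1] + self.w_o[2]), h

    def train(self, samples, epochs=4000, lr=0.5):
        for _ in range(epochs):
            for x, d in samples:
                y, h = self.run(x)
                # Back-propagate the error d - y through both layers.
                delta_o = (d - y) * y * (1 - y)
                for i in range(2):
                    delta_h = delta_o * self.w_o[i] * h[i] * (1 - h[i])
                    self.w_h[i][0] += lr * delta_h * x[0]
                    self.w_h[i][1] += lr * delta_h * x[1]
                    self.w_h[i][2] += lr * delta_h
                self.w_o[0] += lr * delta_o * h[0]
                self.w_o[1] += lr * delta_o * h[1]
                self.w_o[2] += lr * delta_o

# Truth table of the XOR gate, as in the demo interface described below.
xor_table = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net = TwoTwoOneNet()
err_before = sum(abs(d - net.run(x)[0]) for x, d in xor_table)
net.train(xor_table)
err_after = sum(abs(d - net.run(x)[0]) for x, d in xor_table)
```

Repeated training passes drive the total error down, which mirrors the behaviour seen when clicking 'Train 1000 Times' several times in the demo.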

4.3.2.2. TRAIN ( ) FUNCTION

Training can be done by calling the Train function of the network. The input

to the train network function is a Training Data object. A TrainingData object


consists of two array lists - Inputs and Outputs. The number of elements in

TrainingData.Inputs should match exactly with the number of neurons in your

input layer. The number of elements in TrainingData.Outputs should match

exactly with the number of neurons in your output layer.

4.3.2.3. RUN ( ) FUNCTION

You can call the Run Network function of the network, to run the

network after training it. The input parameter to the Run function is an array list

which consists of the inputs to the input layer. Again, the number of elements in

this array list should match the number of neurons in input layer. The Run

function will return an array list which consists of the output values. The

number of elements in this array list will be equal to the number of elements in

the output layer.

4.3.2.4. DIGITALNEURALGATE CLASS

To test the digital neural gate, let us create a simple interface which can

create a gate, read the inputs to train the gate, and obtain the output to display it.

Here, we create a new object of our DigitalNeuralGate when the form loads.

Also, the user can create a new DigitalNeuralGate by clicking the 'Reset Gate'

button. In the beginning, the truth table provided in the training text boxes is initialized to match the truth table of the XOR gate. However, you can change the truth table by clicking the links, or provide a custom truth table by entering values directly in the text boxes. Run the project and try it: first reset the gate by clicking 'Reset Gate', then click the 'Run Network' button and observe the output. The output does not yet match the truth-table output. Now we can train the network using the values in the truth table.


Click the 'Train 1000 Times' button and click the 'Run Network' button. You

can see the output is getting closer to the expected output - that is, the network

is learning. Do this a couple of times, and see the improvement in accuracy.

4.3.2.5. SAVING AND LOADING A NETWORK:

BrainNet offers built-in support for persistence of neural networks. For

example, in the above case, after training a Gate, we may need to save its state

to load it later. For this, we can use the NetworkSerializer class in the BrainNet

library. To demonstrate this feature, let us add two functions to our

DigitalNeuralGate class. The Save Network method within the NetworkSerializer class will save the network to a specified path, and the Load Network function will load the network back. The steps of the learning algorithm are:

a. Initialise weights and threshold:

1. Set wi(t) to be the weight i at time t for all input nodes.

2. Set the firing threshold at y = 0 and all xi inputs in this initial case to 1.

3. Set wi(1) to small random values, thus initialising the weights.

b. Present input and desired output:

Present from our training samples D the input and the desired output dj for this training example.

c. Calculate the actual output:

y(t) = f(Σ wi(t)·xi), where f is the transfer function. These steps are repeated until the iteration error dj − y(t) is less than a user-specified error threshold.

d. Feeding data through the net:

(1 × 0.25) + (0.5 × (−1.5)) = 0.25 + (−0.75) = −0.5

Squashing: the combined sum is then passed through the transfer (squashing) function to produce the neuron's output.
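The steps above can be sketched in Python as a generic single-layer perceptron with a step transfer function and a firing threshold at 0; the weight-update rule shown is the standard perceptron rule, which the step list leaves implicit.

```python
import random

random.seed(0)

def step(s):
    # Squashing / transfer function: fire when the weighted sum reaches 0.
    return 1 if s >= 0 else 0

def train_perceptron(samples, n_inputs, lr=0.1, max_epochs=500):
    # a. Initialise weights to small random values; x0 = 1 is the bias input.
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]
    for _ in range(max_epochs):
        total_error = 0
        for x, d in samples:          # b. present input and desired output
            xs = [1] + list(x)        # prepend the constant bias input
            y = step(sum(wi * xi for wi, xi in zip(w, xs)))  # c. actual output
            err = d - y               # iteration error dj - y(t)
            total_error += abs(err)
            w = [wi + lr * err * xi for wi, xi in zip(w, xs)]  # adapt weights
        if total_error == 0:          # stop once below the error threshold
            break
    return w

# A linearly separable example the perceptron can learn: the AND gate.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(AND, 2)
preds = [step(w[0] + w[1] * x[0] + w[2] * x[1]) for x, _ in AND]

# d. Feeding data through the net, as in the worked example above:
s = (1 * 0.25) + (0.5 * (-1.5))  # = -0.5, so the step function outputs 0
```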

4.3.3 MULTI-LAYER PERCEPTRON NEURAL NETWORK

A multilayer perceptron (MLP) is a feed-forward artificial neural network model that maps sets of input data onto a set of appropriate outputs. An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. The MLP utilizes a supervised learning technique called back propagation for training the network. The MLP is a modification of the standard linear perceptron and can distinguish data that is not linearly separable.


The MLP network implemented for the purpose of this project is composed of three layers: one input, one hidden, and one output (Fig. 4.1). The input layer consists of 150 neurons which receive pixel binary data from a 10x15 pixel matrix. The size of this matrix was decided taking into consideration the average height and width of a character image that can be mapped without introducing significant pixel noise. The hidden layer consists of 250 neurons, a number decided on the basis of optimal results on a trial-and-error basis. The output layer is composed of 16 neurons corresponding to the 16 bits of Unicode encoding.
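The 150-250-16 topology described above amounts to two fully connected layers; the following illustrative Python sketch shows a plain forward pass, with random weights standing in for trained ones.

```python
import math
import random

random.seed(42)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def make_layer(n_in, n_out):
    # One weight row (plus a trailing bias) per output neuron.
    return [[random.uniform(-0.1, 0.1) for _ in range(n_in + 1)]
            for _ in range(n_out)]

def forward(layer, inputs):
    # zip stops at len(inputs), so w[-1] is the bias term.
    return [sigmoid(w[-1] + sum(wi * xi for wi, xi in zip(w, inputs)))
            for w in layer]

# 150 inputs (10x15 binary pixel matrix), 250 hidden, 16 outputs (Unicode bits)
hidden = make_layer(150, 250)
output = make_layer(250, 16)

pixels = [random.randint(0, 1) for _ in range(150)]  # a linearised 10x15 symbol
out_bits = forward(output, forward(hidden, pixels))
```

After training, each of the 16 sigmoid outputs would be thresholded to a bit of the recognised character's Unicode value.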

Multilayer perceptrons using a back-propagation algorithm are the standard choice for supervised-learning pattern recognition and the subject of ongoing research in computational neuroscience and parallel distributed processing. They are useful in research in terms of their ability to solve problems stochastically, which often allows one to obtain approximate solutions for extremely complex problems such as fitness approximation.

Currently, they are most commonly seen in speech recognition, image

recognition, and machine translation software, but they have also seen

applications in other fields such as cyber security.

In general, their most important use has been in the growing field of artificial intelligence, although the multilayer perceptron has looser connections to biological neural networks than the initial neural-network models did.


Fig 4.3 MLP Network

4.3.3.1. SYMBOL IMAGE DETECTION

The process of image analysis to detect character symbols by examining pixels

is the core part of input set preparation in both the training and testing phase.

Symbolic extents are recognized in an input image file based on the color value of individual pixels, which for the purposes of this project is assumed to be either black, ARGB (255, 0, 0, 0), or white, ARGB (255, 255, 255, 255). The input images are assumed to be in bitmap form at any resolution that can be mapped to an internal bitmap object in the Microsoft Visual Studio environment. The procedure also assumes the input image is composed only of characters; any other type of bounding object, such as a border line, is not taken into consideration.


Enumeration of character lines in a character image (page) is essential in

delimiting the bounds within which the detection can proceed. Thus detecting

the next character in an image does not necessarily involve scanning the whole

image all over again.

1. start at the first x and first y pixel of the image pixel(0,0), Set number of

lines to 0

2. scan up to the width of the image on the same y-component of the image

a. if a black pixel is detected register y as top of the first line

b. if not continue to the next pixel

c. if no black pixel found up to the width increment y and reset x to scan

the next horizontal line

3. start at the top of the line found and first x-component pixel(0,line_top)

4. scan up to the width of the image on the same y-component of the image

a. If no black pixel is detected register y-1 as bottom of the first line.

Increment number of lines

b. If a black pixel is detected increment y and reset x to scan the next

horizontal line

5. start below the bottom of the last line found and repeat steps 1-4 to detect

subsequent lines

6. If bottom of image (image height) is reached stop.
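The line-enumeration steps above amount to scanning horizontal rows for black pixels; a compact Python sketch over a binary pixel grid (1 = black, an illustrative simplification of the bitmap scan) could look like this:

```python
def find_lines(img):
    """Return (top, bottom) row pairs for each character line in a binary
    image given as a list of rows, where 1 marks a black pixel."""
    lines, y, height = [], 0, len(img)
    while y < height:
        # Steps 1-2: scan rows until one contains a black pixel (line top).
        while y < height and not any(img[y]):
            y += 1
        if y == height:
            break                     # step 6: bottom of image reached
        top = y
        # Steps 3-4: keep scanning while rows still contain black pixels;
        # the first all-white row marks y-1 as the line bottom.
        while y < height and any(img[y]):
            y += 1
        lines.append((top, y - 1))
        # Step 5: the outer loop continues below the last line found.
    return lines

# Two "lines" of black pixels separated by a white row:
img = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 1, 0],
    [0, 0, 0],
    [0, 1, 0],
]
```

On this sample, the first line spans rows 1-2 and the second is the single row 4, so subsequent character detection never rescans rows above a finished line.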

The detection of individual symbols involves scanning character lines for

orthogonally separable images composed of black pixels. The steps are

1. start at the first character line top and first x-component

2. scan up to image width on the same y-component

a. if black pixel is detected register y as top of the first line


b. if not continue to the next pixel

3. start at the top of the character found and first x-component,

pixel(0,character_top)

4. scan up to the line bottom on the same x-component

a. if black pixel found register x as the left of the symbol

b. if not continue to the next pixel

c. if no black pixels are found increment x and reset y to scan the next

vertical line

5. start at the left of the symbol found and top of the current line,

pixel(character_left, line_top)

6. scan up to the width of the image on the same x-component

a. if no black pixels are found register x-1 as right of the symbol

b. if a black pixel is found increment x and reset y to scan the next

vertical line

7. start at the bottom of the current line and left of the symbol,

pixel(character_left,line_bottom)

8. scan up to the right of the character on the same y-component

a. if a black pixel is found register y as the bottom of the character

b. if no black pixels are found decrement y and reset x to scan the next

vertical line
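Within a detected line, steps 1-8 reduce to finding the left, right, top, and bottom extents of the next run of black pixel columns; an equivalent Python sketch (again over a 1 = black grid) is:

```python
def find_symbol_bounds(img, line_top, line_bottom, start_x):
    """Locate the bounding box (left, top, right, bottom) of the next
    symbol in the given character line; 1 marks a black pixel."""
    width = len(img[0])
    # Steps 3-4: move right, column by column, until a column contains
    # a black pixel; that column is the symbol's left edge.
    x = start_x
    while x < width and not any(img[y][x] for y in range(line_top, line_bottom + 1)):
        x += 1
    if x == width:
        return None                   # no further symbol on this line
    left = x
    # Steps 5-6: keep moving right while columns still contain black
    # pixels; the first all-white column marks x-1 as the right edge.
    while x < width and any(img[y][x] for y in range(line_top, line_bottom + 1)):
        x += 1
    right = x - 1
    # Steps 1-2 and 7-8: first and last rows within the line that
    # contain black pixels between the left and right edges.
    rows = [y for y in range(line_top, line_bottom + 1)
            if any(img[y][left:right + 1])]
    return (left, rows[0], right, rows[-1])

img = [
    [0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1],
]
```

Calling the function again with `start_x` just past the previous symbol's right edge yields the next symbol, matching the left-to-right scan described above.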

From the procedure followed and the above figure it is obvious that the detected

character bound might not be the actual bound for the character in question.


Fig : 4.4 Line and Character boundary detection

This is an issue that arises with the height and bottom alignment irregularity

that exists with printed alphabetic symbols. Thus a line top does not necessarily

mean top of all characters and a line bottom might not mean bottom of all

characters as well. An optional confirmation algorithm implemented in the

project is:

A. start at the top of the current line and left of the character

B. scan up to the right of the character

1. if a black pixels is detected register y as the confirmed top

2. if not continue to the next pixel

3. if no black pixels are found increment y and reset x to scan the next

horizontal line

Fig: 4.5 Confirmation of Character boundaries


4.3.3.2. SYMBOL IMAGE MATRIX MAPPING

The next step is to map the symbol image into a corresponding two-dimensional binary matrix. If all the pixels of the symbol were mapped into the matrix, one would be able to capture all the distinguishing pixel features of the symbol and minimize overlap with other symbols. However, this strategy would imply maintaining and processing a very large matrix (15,000 elements for a 100x150 pixel image). Since the height and width of individual images vary, an adaptive sampling algorithm was implemented. The algorithm is listed below:

a. For the width (initially 20 elements wide)

1. Map the first (0,y) and last (width,y) pixel components directly to the

first (0,y) and last (20,y) elements of the matrix

2. Map the middle pixel component (width/2,y) to the 10th matrix element

3. subdivide further divisions and map accordingly to the matrix

b. For the height (initially 30 elements high)

1. Map the first (x,0) and last (x,height) pixel components directly to the first (x,0) and last (x,30) elements of the matrix

2. Map the middle pixel component (x,height/2) to the 15th matrix element

3. Subdivide further divisions and map accordingly to the matrix

c. Further reduce the matrix to 10x15 by sampling by a factor of 2 on both the width and the height
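The net effect of the subdivision-and-halving scheme above is to resample a symbol image of arbitrary size down to a fixed 10x15 grid. The following Python sketch shows that idea with simple nearest-neighbour sampling, a deliberate simplification of the subdivision scheme described in the text:

```python
def resample_to_matrix(img, out_w=10, out_h=15):
    """Map a binary symbol image of any size onto a fixed out_w x out_h
    matrix by nearest-neighbour sampling (a simplification of the
    adaptive subdivision scheme described in the text)."""
    in_h, in_w = len(img), len(img[0])
    return [[img[min(in_h - 1, y * in_h // out_h)]
                [min(in_w - 1, x * in_w // out_w)]
             for x in range(out_w)]
            for y in range(out_h)]

# A 20x30 all-black symbol maps to a 10x15 all-black matrix.
big = [[1] * 20 for _ in range(30)]
small = resample_to_matrix(big)
```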


Fig : 4.6 Mapping symbol images onto a binary matrix

In order to be able to feed the matrix data to the network (which is of a single

dimension) the matrix must first be linearized to a single dimension. This is

accomplished with a simple routine with the following steps:

1. start with the first matrix element (0,0)

2. increment x keeping y constant up to the matrix width

a. map each element to an element of a linear array (increment array

index)

b. if matrix width is reached reset x, increment y

3. repeat up to the matrix height (x,y)=(width, height)
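The row-by-row linearisation described above is a standard row-major flatten, sketched here in Python:

```python
def linearise(matrix):
    """Flatten a 2-D matrix row by row into the single-dimension input
    vector expected by the MLP network (steps 1-3 above)."""
    vector = []
    for row in matrix:          # increment y after each full row
        for element in row:     # increment x up to the matrix width
            vector.append(element)
    return vector

matrix = [[1, 0], [0, 1], [1, 1]]
```

A 10x15 symbol matrix flattened this way yields exactly the 150-element input vector the network's input layer expects.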

Hence the linear array is our input vector for the MLP Network. In a training

phase all such symbols from the trainer set image file are mapped into their own

linear array and as a whole constitute an input space. The trainer set would also

contain a file of character strings that directly correspond to the input symbol

images to serve as the desired output of the training.


Once the network has been initialized and the training input space prepared the

network is ready to be trained. Some issues that need to be addressed upon

training the network are:

1. How chaotic is the input space? A chaotic input varies randomly and in

extreme range without any predictable flow among its members.

2. How complex are the patterns for which we train the network? Complex

patterns are usually characterized by feature overlap and high data size.

3. What should be used for the values of:

a. Learning rate

b. Sigmoid slope

c. Weight bias

4. How many Iterations (Epochs) are needed to train the network for a given

number of input sets?

5. What error threshold value must be used to compare against in order to

prematurely stop iterations if the need arises?

Alphabetic optical symbols are among the most chaotic input sets in pattern recognition studies. This is due to the unpredictable nature of their pictorial representation when viewed in the sequence of their order. For instance, the consecutive Latin characters A and B have little similarity in features when represented in their pictorial symbolic form. The figure below demonstrates chaotic and non-chaotic sequences with the Latin and a fictitious character set:


Fig: 4.7 Example of chaotic and non-chaotic symbol sequences

The complexity of the individual pattern data is another issue in character recognition. Each symbol has a large number of distinct features that need to be accounted for in order to recognize it correctly. Elimination of some features might result in pattern overlap, and the amount of data required makes this one of the most complex classes of input space in pattern recognition. Other than the issues mentioned, the remaining numeric parameters of the network are determined at run time. They also vary greatly from one implementation to another according to the number of input symbols fed and the network topology.

For the purpose of this project the parameters used are:

1. Learning rate = 150

2. Sigmoid Slope = 0.014

3. Weight bias = 30 (determined by trial and error)

4. Number of Epochs = 300-600 (depending on the complexity of the font

types)

5. Mean error threshold value = 0.0002 (determined by trial and error)
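A common reading of the "sigmoid slope" parameter above is a scaling factor inside the logistic function, f(s) = 1 / (1 + e^(-slope·s)); this formula is an assumption, since the report does not give it explicitly. A small slope such as 0.014 keeps the neuron responsive over the large weighted sums produced by 150 inputs:

```python
import math

def sigmoid(s, slope=0.014):
    # Logistic activation with an explicit slope parameter; this form
    # is an assumption about how the report's 'sigmoid slope' is used.
    return 1.0 / (1.0 + math.exp(-slope * s))

# With slope 0.014 a weighted sum of 100 is squashed gently,
# whereas a unit slope would already saturate near 1.
gentle = sigmoid(100)
saturated = sigmoid(100, slope=1.0)
```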

4.3.4. HYBRID MULTILAYER PERCEPTRON

4.3.4.1 PATTERN DETECTION

The Hybrid Multilayer Perceptron architecture is the superposition of a perceptron with multilayer perceptron networks. This type of network is


capable of identifying linear and nonlinear correlation between the input and output vectors. In the above process, we developed a simple application, a two-input gate that can be trained to perform the function of any digital gate, using the BrainNet library. Now it is time to go for something more exciting and powerful: a pattern/image detection program using the BrainNet library. We provide a set of images as input to the network along with an ASCII character that corresponds to each input, and we will examine whether the network can predict a character.

Surprisingly, the project is pretty easy to develop. This is because BrainNet

library provides some functionality to deal directly with images. This project

will demonstrate:

1. Built in support for image processing/detection and pattern processing in

BrainNet library

2. Built in support for advanced training using Training Queues in BrainNet

library.

Before going to the code and explanation, let us see what the application really

does. You can find the application and source code in the attached zip file. Load

the solution in Microsoft Visual Studio.NET, set the startup project as

PatternDetector, and run the project.

To train the network, after adding the images to the training queue as explained earlier, click the 'Start Training' button. Train the network at least 1000 times; with only this much training, accuracy will still be below average. When we click the 'Start Training' button, training will start. To detect a pattern once the training is completed, go to the 'Detect using Network' pane, load an image by clicking the browse button, and click the 'Detect This Image Now' button to detect the pattern. If we trained the network a sufficient number of times, and if we provided enough samples, we will get the correct output.

4.3.4.2. TRAINING DATA

Click 'Browse' to load an image into the picture box (some sample images
are in the 'bin' folder of Pattern Detector; you can also create 20 x 20
monochrome images in Paintbrush if you want). Enter the ASCII character that
corresponds to the image - for example, if we are loading an image of the
character 'A', enter 'A' in the text box - and click the 'Add To Queue' button.
Repeat this for each training sample, then train and test the network as
described above.
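The conversion between a 20 x 20 training image, its ASCII label, and the network's input/output vectors can be sketched as follows (a hedged Python illustration; BrainNet performs the equivalent conversions internally, and the helper names here are our own):

```python
def image_to_inputs(pixels):
    """Flatten a 20 x 20 monochrome image (nested lists of 0/1,
    1 = black pixel) into a flat 400-element input vector."""
    return [bit for row in pixels for bit in row]

def char_to_outputs(ch, bits=8):
    """Encode an ASCII label as the bit pattern the output layer is
    trained to reproduce (most significant bit first)."""
    code = ord(ch)
    return [(code >> (bits - 1 - i)) & 1 for i in range(bits)]

def outputs_to_char(bits_out):
    """Inverse mapping: threshold the output activations at 0.5 and
    reassemble the character code."""
    code = 0
    for b in bits_out:
        code = (code << 1) | (1 if b >= 0.5 else 0)
    return chr(code)

# A blank 20 x 20 image yields 400 inputs; 'A' (code 65) is 01000001.
blank = [[0] * 20 for _ in range(20)]
assert len(image_to_inputs(blank)) == 400
pattern = char_to_outputs('A')
```

During detection the network's real-valued outputs are thresholded and mapped back to a character, which is why enough training rounds are needed for the bits to settle near 0 or 1.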

4.4. DATA FLOW DIAGRAM

LEVEL 0:

Fig. 4.8 Dataflow diagram−Level 0

LEVEL 1:

Fig. 4.9 Dataflow diagram−Level 1

[Level 1 diagram nodes: User → Login → Validation → Access information;
Intruder → Block user → Camera enabled → Mail image to user.]

LEVEL 2:

[Level 2 flowchart: Start → User authentication → check whether username &
password are correct. Yes: Perceptron → Multilayer Perceptron → Hybrid
Multilayer Perceptron → Stop. No: Webcam activation → capture intruder image
& send it to the user's mail.]

Fig. 4.10 Dataflow diagram−Level 2

4.5 E-R DIAGRAM

4.5.1. SYSTEM ARCHITECTURE

Fig. 4.11 System architecture of IDS

4.5.2. USE CASE DIAGRAM

Fig. 4.12 Use case diagram

[Use case diagram: the User actor interacts with the Login, Perceptron
Service, Binary Gate Service, Pattern Detection Service, MLP Service, Hybrid
Info Service, and Web Cam Process use cases, hosted on the Web Server.]
4.5.3. SEQUENCE DIAGRAM:

Fig. 4.13 Sequence diagram

4.5.4. CLASS DIAGRAM

Fig. 4.14 Class diagram

CHAPTER 5

SYSTEM TESTING

5.1. TESTING

Testing is a process of executing a program with the intent of finding an
error. A good test case is one that has a high probability of finding an
as-yet-undiscovered error, and a successful test is one that uncovers such an
error. System testing is the stage of implementation aimed at ensuring that
the system works accurately and efficiently as expected before live operation
commences. It verifies that the whole set of programs hangs together. System
testing consists of several key activities and steps - program, string, and
system testing - and is important in adopting a successful new system. This is
the last chance to detect and correct errors before the system is installed
for user acceptance testing.

The software testing process commences once the program is created and
the documentation and related data structures are designed. Software testing
is essential for correcting errors; without it, the program or the project is
not said to be complete. Software testing is the critical element of software
quality assurance and represents the ultimate review of specification, design,
and coding.

5.2. UNIT TESTING

Unit testing is conducted to verify the functional performance of each
modular component of the software. Unit testing focuses on the smallest unit
of the software design, i.e., the module. White box testing techniques were
heavily employed for unit testing.
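As a minimal illustration of the idea (a Python sketch, not taken from the project), a unit test exercises one small module in isolation:

```python
import unittest

def threshold(value, cutoff=0.5):
    """Module under test: binarize a neuron output, as the pattern
    detector does when reading the output layer."""
    return 1 if value >= cutoff else 0

class ThresholdTest(unittest.TestCase):
    # Each test checks one behavior of the unit, independent of the rest
    # of the system.
    def test_above_cutoff(self):
        self.assertEqual(threshold(0.9), 1)

    def test_below_cutoff(self):
        self.assertEqual(threshold(0.1), 0)

    def test_boundary(self):
        self.assertEqual(threshold(0.5), 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ThresholdTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A failing unit test pinpoints the defect to a single module before integration begins.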

5.3 ACCEPTANCE TESTING

Testing is performed to identify errors and is used for quality assurance.
Testing is an integral part of the entire development and maintenance process.
The goal of testing during this phase is to verify that the specification has
been accurately and completely incorporated into the design, as well as to
ensure the correctness of the design itself. For example, the design must not
have any logic faults; if a design fault is detected before coding commences,
the cost of fixing it is considerably lower than it would be later. Detection
of design faults can be achieved by means of inspections as well as
walkthroughs.

Testing is one of the important steps in the software development phase.
Testing checks for errors; as a whole, testing of the project involves the
following kinds of analysis:

1. Static analysis is used to investigate the structural properties of the
source code.

2. Dynamic testing is used to investigate the behavior of the source code by
executing the program on test data.

5.4 INTEGRATION TESTING

Integration tests are designed to test integrated software components to
determine if they actually run as one program. Testing is event driven and is
more concerned with the basic outcome of screens or fields. Integration tests
demonstrate that although the components were individually satisfactory, as
shown by successful unit testing, the combination of components is correct
and consistent. Integration testing is specifically aimed at exposing the
problems that arise from the combination of components.

5.5 FUNCTIONAL TESTING

Functional tests provide a systematic demonstration that functions tested

are available as specified by the business and technical requirements, system

documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised

Systems : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements,
key functions, or special test cases. In addition, systematic coverage of
business process flows, data fields, predefined processes, and successive
processes must be considered for testing.

5.6 TEST CASES

Any engineering product can be tested in one of two ways:

1. White Box Testing

2. Black Box Testing

5.6.1. WHITE BOX TESTING

This testing is also called glass box testing. In this testing, knowing
the internal operation of a product, tests can be conducted to ensure that
"all gears mesh", that is, that the internal operations perform according to
specification and all internal components have been adequately exercised. It
is a test case design method that uses the control structure of the
procedural design to derive test cases. Basis path testing is a white box
technique. It includes:

1. Flow graph notation

2. Cyclomatic complexity

3. Deriving test cases

4. Graph matrices
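For example, cyclomatic complexity for a connected flow graph is V(G) = E - N + 2, and it bounds the number of independent basis paths a white box test suite must cover (a small illustrative helper, not project code):

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a single connected flow graph; equals the
    number of linearly independent paths through the program."""
    return len(edges) - len(nodes) + 2

# Flow graph of a single if/else: entry -> test -> (then | else) -> exit
nodes = ["entry", "test", "then", "else", "exit"]
edges = [("entry", "test"), ("test", "then"), ("test", "else"),
         ("then", "exit"), ("else", "exit")]
complexity = cyclomatic_complexity(edges, nodes)
```

Here V(G) = 5 - 5 + 2 = 2, so two basis paths (the then-branch and the else-branch) suffice to cover the graph.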

5.6.2. BLACK BOX TESTING

In this testing, knowing the specified functions that a product has been
designed to perform, tests can be conducted that demonstrate each function is
fully operational while simultaneously searching for errors in each function.
It fundamentally focuses on the functional requirements of the software. The
steps involved in black box test case design are:

1. Graph-based testing methods

2. Equivalence partitioning

3. Boundary value analysis

4. Comparison testing
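Equivalence partitioning and boundary value analysis can be sketched on a toy validator (the 8-16 character rule is hypothetical, chosen only to illustrate the test design):

```python
def accept_password(pw):
    """Toy validator: a password is valid when it is 8-16 characters
    long. (Hypothetical rule, used only to illustrate test design.)"""
    return 8 <= len(pw) <= 16

# Equivalence partitioning: one representative input per class.
assert accept_password("a" * 10)        # valid class
assert not accept_password("a" * 3)     # too-short class
assert not accept_password("a" * 20)    # too-long class

# Boundary value analysis: probe each edge of the valid range.
results = [accept_password("a" * n) for n in (7, 8, 16, 17)]
```

Partitioning keeps the test count small, while the boundary probes catch the off-by-one errors that cluster at the edges of the specification.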

CHAPTER 6

SYSTEM IMPLEMENTATION

6.1 FRONT END SOFTWARE

6.1.1 INTRODUCTION TO .NET FRAMEWORK

.NET (dot-net) is the name Microsoft gives to its general vision of the

future of computing, the view being of a world in which many applications run

in a distributed manner across the Internet. We can identify a number of

different motivations driving this vision.

Firstly, distributed computing is rather like object oriented programming,

in that it encourages specialized code to be collected in one place, rather than

copied redundantly in lots of places. There are thus potential efficiency gains to

be made in moving to the distributed model.

Secondly, by collecting specialized code in one place and opening up a

generally accessible interface to it, different types of machines (phones,

handhelds, desktops, etc.) can all be supported with the same code. Hence

Microsoft's 'run-anywhere' aspiration.

Thirdly, by controlling real-time access to some of the distributed nodes
(especially those concerning authentication), companies like Microsoft can
more easily control the running of their applications. It moves applications
further into the area of 'services provided' rather than 'objects owned'.

The C# language is intended to be a simple, modern, general-purpose,
object-oriented programming language. The language, and implementations
thereof, should provide support for software engineering principles such as
strong type checking, array bounds checking, detection of attempts to use
uninitialized variables, and automatic garbage collection. Software
robustness, durability, and programmer productivity are important.

The language is intended for use in developing software components
suitable for deployment in distributed environments. Source code portability
is very important, as is programmer portability, especially for those
programmers already familiar with C and C++. Support for internationalization
is very important.

C# is intended to be suitable for writing applications for both hosted
and embedded systems, ranging from very large applications that use
sophisticated operating systems down to very small devices having dedicated
functions. Although C# applications are intended to be economical with regard
to memory and processing power requirements, the language was not intended to
compete directly on performance and size with C or assembly language.

6.2 MAIL SERVER

A mail server (also known as a mail transfer agent or MTA, a mail transport

agent, a mail router or an Internet mailer) is an application that receives

incoming e-mail from local users (people within the same domain) and remote

senders and forwards outgoing e-mail for delivery. A computer dedicated to

running such applications is also called a mail server. Microsoft Exchange,

qmail, Exim and sendmail are among the more common mail server programs.

An email client or email program allows a user to send and receive email by

communicating with mail servers. There are many types of email clients with

differing features, but they all handle email messages and mail servers in the

same basic way.

The mail server works in conjunction with other programs to make up

what is sometimes referred to as a messaging system. A messaging system

includes all the applications necessary to keep e-mail moving as it should.

When you send an e-mail message, your e-mail program, such as Outlook or

Eudora, forwards the message to your mail server, which in turn forwards it

either to another mail server or to a holding area on the same server called

a message store to be forwarded later. As a rule, the system uses SMTP (Simple

Mail Transfer Protocol) or ESMTP (extended SMTP) for sending e-mail, and

either POP3 (Post Office Protocol 3) or IMAP (Internet Message Access

Protocol) for receiving e-mail.
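The sending path described above can be sketched with Python's standard email/smtplib modules (the addresses and server are placeholders; actual delivery, commented out below, would require a reachable SMTP server):

```python
import smtplib
from email.message import EmailMessage

def build_alert(sender, recipient, subject, body):
    """Compose the message that an MTA would relay over SMTP."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

# Placeholder addresses; the real report redacts the actual accounts.
msg = build_alert("ids@example.com", "owner@example.com",
                  "Intrusion Detected",
                  "Login failed more than 3 times.")

# Delivery over SMTP with STARTTLS (e.g. port 587), omitted here:
# with smtplib.SMTP("smtp.example.com", 587) as smtp:
#     smtp.starttls()
#     smtp.login("user", "password")
#     smtp.send_message(msg)
```

The receiving side would fetch the same message via POP3 or IMAP, exactly as the paragraph above describes.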

CHAPTER 7

CONCLUSION & FUTURE ENHANCEMENTS

7.1 CONCLUSION

This project provides a novel approach to selecting the best features for
detecting intrusions and identifying the intruder. The accuracy of the
intrusion detection system is improved, and de-authentication attacks are
overcome. We have also studied the impact of feature selection on the
performance of different classifiers based on neural networks.

7.2 FUTURE ENHANCEMENTS

More than one camera can be used in the future to identify the intruder
more accurately. The ANN can be applied in biomedical environments to
determine brain diseases, and ANN applications can be used in military
sensor-network scenarios. Feature selection was proven to have a significant
impact on the performance of the classifiers.

CHAPTER 8

APPENDIX

8.1. SOURCE CODE

8.1.1. USER AUTHENTICATION

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Net.Mail;

namespace Intrusion_Detection
{
    public partial class FrmLogin : Form
    {
        public MailMessage MlMsg = new MailMessage();
        public System.Net.Mail.SmtpClient SMTPclnt =
            new System.Net.Mail.SmtpClient("smtp.gmail.com", 587);
        public string myEmailAddress;
        public string[] toEmailAddress = new string[1];
        public string myPassword;
        public string filenm;

        public FrmLogin()
        {
            InitializeComponent();
        }

        public void sendmail()
        {
            oWebCam.OpenConnection();
            //PauseForMilliSeconds(1000);
            oWebCam.SaveImage();
            oWebCam.Dispose();
            myEmailAddress = "[email protected]";
            myPassword = "jrmnkcamjse";

            SMTPclnt.UseDefaultCredentials = false;
            SMTPclnt.Credentials =
                new System.Net.NetworkCredential(myEmailAddress, myPassword);
            SMTPclnt.EnableSsl = true;
            MlMsg.From = new MailAddress(myEmailAddress);
            MlMsg.Subject = "Intrusion Detected";
            MlMsg.Body = "An intrusion was Detected at " + DateTime.Now
                + ", Tried to Authenticate with User name: '" + TBusrnm.Text + "'";
            toEmailAddress[0] = "[email protected]";
            foreach (string Toadd in toEmailAddress)
            {
                if (Toadd != null && Toadd != "")
                    MlMsg.To.Add(Toadd);
            }
            //filenm = System.IO.Path.GetTempPath() + "//Intruder.BMP";
            filenm = "c://Intruder.BMP";
            if (filenm != null && filenm != "")
            {
                Attachment attached = new Attachment(filenm,
                    System.Net.Mime.MediaTypeNames.Application.Octet);
                attached.ContentDisposition.Inline = true;
                MlMsg.Attachments.Add(attached);
            }
            try
            {
                //INFO>> SMTPclnt.Send(FROM, TO, SUBJECT, BODY)
                MessageBox.Show("Wait for some time, You have entered wrong Password more than 3 times");
                SMTPclnt.Send(MlMsg);
                //System.Environment.Exit(0);
                MessageBox.Show("Now you can access again");
            }
            catch (Exception ex)
            {
                //INFO>> ERROR show
                MessageBox.Show(ex.Message.ToString());
            }
        }

        private void Blogin_Click(object sender, EventArgs e)
        {
            if (TBusrnm.Text != "")
            {
                if (TBpwd.Text != "")
                {
                    // Bug fix: the original compared TBpwd.Text against "admin"
                    // twice; the username must be checked as well.
                    if (TBusrnm.Text == "admin" && TBpwd.Text == "admin")
                    {
                        MessageBox.Show("You have successfully logged in ");
                        Form frm = (Form)this.MdiParent;
                        MenuStrip ms = (MenuStrip)frm.Controls["menuStrip"];
                        ToolStripMenuItem tsmLO =
                            (ToolStripMenuItem)ms.Items["logOutToolStripMenuItem"];

                        ToolStripMenuItem tsmLI =
                            (ToolStripMenuItem)ms.Items["fileMenu"];
                        ToolStripMenuItem tsmperc =
                            (ToolStripMenuItem)ms.Items["perceptronToolStripMenuItem"];
                        ToolStripMenuItem tsmmulti =
                            (ToolStripMenuItem)ms.Items["multiLayerPerceptronToolStripMenuItem"];
                        ToolStripMenuItem tsmhybrid =
                            (ToolStripMenuItem)ms.Items["hybridToolStripMenuItem"];
                        tsmLO.Name = "Log&Out";
                        tsmLI.Visible = false;
                        tsmperc.Enabled = true;
                        tsmmulti.Enabled = true;
                        tsmhybrid.Enabled = true;
                        this.Close();
                    }
                    else
                    {
                        TBpwd.Text = "";
                        TBpwd.Focus();
                        MessageBox.Show("Enter correct Password");

                        if (Intrusion_Detection.Globvar.GlobalVar == "")
                            Intrusion_Detection.Globvar.GlobalVar = "0";
                        int tempi = int.Parse(Intrusion_Detection.Globvar.GlobalVar) + 1;

                        Intrusion_Detection.Globvar.GlobalVar = Convert.ToString(tempi);
                        if (int.Parse(Intrusion_Detection.Globvar.GlobalVar) > 3)
                        {
                            sendmail();
                        }
                    }
                }
                else
                {
                    MessageBox.Show("Enter Password");
                    TBpwd.Focus();
                    if (Intrusion_Detection.Globvar.GlobalVar == "")
                        Intrusion_Detection.Globvar.GlobalVar = "0";
                    int tempi = int.Parse(Intrusion_Detection.Globvar.GlobalVar) + 1;
                    Intrusion_Detection.Globvar.GlobalVar = Convert.ToString(tempi);
                    if (int.Parse(Intrusion_Detection.Globvar.GlobalVar) > 3)
                    {
                        sendmail();
                    }
                }
            }
            else
            {
                MessageBox.Show("Enter Username");
                TBusrnm.Focus();

                if (Intrusion_Detection.Globvar.GlobalVar == "")
                    Intrusion_Detection.Globvar.GlobalVar = "0";

                int tempi = int.Parse(Intrusion_Detection.Globvar.GlobalVar) + 1;
                Intrusion_Detection.Globvar.GlobalVar = Convert.ToString(tempi);
                if (int.Parse(Intrusion_Detection.Globvar.GlobalVar) > 3)
                {
                    sendmail();
                }
            }
        }

EXPLANATION

This code checks whether the user is an authenticated person. If the user
fails authentication more than three times, the code enables the camera and,
using the sendmail() function, captures the intruder's image and mails it to
the owner's mail id together with the date and time at which the intrusion
occurred.
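The retry-counter logic described here can be sketched independently of the GUI (a Python sketch; the 'admin' credentials come from the listing above, while the callback stands in for the webcam-capture-and-mail step):

```python
MAX_ATTEMPTS = 3

class LoginGuard:
    """Counts failed logins and fires an alarm after the limit is
    exceeded, mirroring the GlobalVar counter and sendmail() call."""
    def __init__(self, on_intrusion):
        self.failures = 0
        self.on_intrusion = on_intrusion

    def attempt(self, username, password):
        if username == "admin" and password == "admin":
            self.failures = 0          # successful login resets the count
            return True
        self.failures += 1
        if self.failures > MAX_ATTEMPTS:
            self.on_intrusion()        # capture webcam image and mail it
        return False

alerts = []
guard = LoginGuard(on_intrusion=lambda: alerts.append("mail sent"))
for _ in range(4):                     # fourth wrong password trips alarm
    guard.attempt("admin", "wrong")
```

Keeping the counter in one object (rather than a global string, as in the C# listing) makes the behavior easy to unit test.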

8.1.2. PERCEPTRON

using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Microsoft.VisualBasic;
using System.Diagnostics;

namespace Intrusion_Detection
{
    public partial class frmGate : Form
    {
        private DigitalNeuralGate gate;

        public frmGate()
        {
            InitializeComponent();
        }
        //Form overrides dispose to clean up the component list.

        private void MsgBox(string p)
        {
            throw new NotImplementedException();
        }

        //Run the network to get the output, and show it in the text boxes
        private void cmdRun_Click(object sender, EventArgs e)
        {

            double t1, t2, t3, t4;

            try
            {
                //rout1, rinp11, rinp12 etc. are textbox names
                t1 = gate.Run(Convert.ToInt64(this.rinp11.Text),
                              Convert.ToInt64(this.rinp12.Text));
                this.rout1.Text = t1.ToString();

                t2 = gate.Run(Convert.ToInt64(this.rinp21.Text),
                              Convert.ToInt64(this.rinp22.Text));
                this.rout2.Text = t2.ToString();

                t3 = gate.Run(Convert.ToInt64(this.rinp31.Text),
                              Convert.ToInt64(this.rinp32.Text));
                this.rout3.Text = t3.ToString();

                t4 = gate.Run(Convert.ToInt64(this.rinp41.Text),
                              Convert.ToInt64(this.rinp42.Text));
                this.rout4.Text = t4.ToString();
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        //Train only once
        private void cmdTrainOnce_Click(object sender, EventArgs e)
        {
            try
            {
                TrainOnce();
            }
            catch (Exception ex)
            {
                MsgBox("Error. Check whether the input is valid - " + ex.Message);
            }
        }

        private void cmdSave_Click(object sender, EventArgs e)
        {
            gate.Save("c:\\test.xml");
        }

        private void cmdLoad_Click(object sender, EventArgs e)
        {
            gate.Load("c:\\test.xml");
        }

EXPLANATION

BrainNet offers built-in support for persistence of neural networks. For
example, in the above case, after training a gate we may need to save its
state so that we can load it later. For this, we can use the NetworkSerializer
class in the BrainNet library. To demonstrate this feature, we add two
functions to our DigitalNeuralGate class: the SaveNetwork method within the
NetworkSerializer class saves the network to a specified path, and the
LoadNetwork function loads the network back.
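The save/load round trip can be sketched generically (a Python illustration using JSON in place of BrainNet's XML serializer; the weight layout is an assumption made for the example):

```python
import json
import os
import tempfile

def save_network(path, weights):
    """Persist the network state to disk; JSON stands in here for the
    XML that BrainNet's NetworkSerializer actually writes."""
    with open(path, "w") as f:
        json.dump(weights, f)

def load_network(path):
    """Restore the previously saved network state."""
    with open(path) as f:
        return json.load(f)

# Hypothetical state: weights and biases of a tiny two-neuron layer.
weights = {"w": [[0.5, -1.2], [0.3, 0.8]], "bias": [0.1, -0.4]}
path = os.path.join(tempfile.gettempdir(), "test_net.json")
save_network(path, weights)
restored = load_network(path)
```

The point of persistence is exactly this round trip: a network trained once can be reloaded later without repeating the (expensive) training phase.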

8.1.3. MULTILAYER PERCEPTRON

        public void load_character_trainer_set()
        {
            string line;
            openFileDialog1.Filter = "Character Trainer Set (*.cts)|*.cts";
            if (openFileDialog1.ShowDialog() == DialogResult.OK)
            {
                character_trainer_set_file_stream =
                    new System.IO.StreamReader(openFileDialog1.FileName);
                trainer_string = "";
                while ((line = character_trainer_set_file_stream.ReadLine()) != null)
                    trainer_string = trainer_string + line;
                number_of_input_sets = trainer_string.Length;

                character_trainer_set_file_name =
                    Path.GetFileNameWithoutExtension(openFileDialog1.FileName);
                character_trainer_set_file_path =
                    Path.GetDirectoryName(openFileDialog1.FileName);
                //label20.Text = character_trainer_set_file_name;
                character_trainer_set_file_stream.Close();

                image_file_name = character_trainer_set_file_path + "\\"
                    + character_trainer_set_file_name + ".bmp";
                image_file_stream = new System.IO.StreamReader(image_file_name);
                input_image = new Bitmap(image_file_name);
                pictureBox1.Image = input_image;
                input_image_height = input_image.Height;
                input_image_width = input_image.Width;
                if (input_image_width > pictureBox1.Width)
                    pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
                else
                    pictureBox1.SizeMode = PictureBoxSizeMode.Normal;
                right = 1;
                image_start_pixel_x = 0;
                image_start_pixel_y = 0;
                identify_lines();
                current_line = 0;
                character_present = true;
                character_valid = true;
                output_string = "";
                label5.Text = "Input Image : ["
                    + character_trainer_set_file_name + ".bmp]";
            }
        }

        public void detect_next_character()
        {
            number_of_input_sets = 1;
            get_next_character();
            if (character_present)
            {
                for (int i = 0; i < 10; i++)
                    for (int j = 0; j < 15; j++)

                        input_set[i * 15 + j, 0] = ann_input_value[i * 2 + 1, j * 2 + 1];
                get_inputs(0);
                calculate_outputs();
                //comboBox3.Items.Clear();
                //comboBox3.BeginUpdate();
                for (int i = 0; i < number_of_output_nodes; i++)
                {
                    output_bit[i] = threshold(node_output[number_of_layers - 1, i]);
                    //comboBox3.Items.Add("bit[" + (i).ToString() + "] " + output_bit[i].ToString());
                }
                //comboBox3.EndUpdate();
                char character = unicode_to_character();
                output_string = output_string + character.ToString();
                //textBox8.Text = " " + character.ToString();
                string hexadecimal = binary_to_hex();
            }
        }

        public void analyze_image()
        {
            int analyzed_line = current_line;
            comboBox1.Items.Clear();
            comboBox2.Items.Clear();
            get_character_bounds();
            if (character_present)
            {
                map_character_image_pixel_matrix();
                create_character_image();
                map_ann_input_matrix();
            }
            else
                MessageBox.Show("Character Recognition Complete!", "Unicode OCR",
                    MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
        }

        private void Form1_Paint(object sender, System.Windows.Forms.PaintEventArgs e)

        {
        }

        public int confirm_top()
        {
            int local_top = top;
            for (int j = top; j <= bottom; j++)
                for (int i = left; i <= right; i++)
                    if (Convert.ToString(input_image.GetPixel(i, j)) ==
                        "Color [A=255, R=0, G=0, B=0]")
                    {
                        local_top = j;
                        return local_top;
                    }
            return local_top;
        }

        public int confirm_bottom()
        {
            int local_bottom = bottom;
            for (int j = bottom; j >= 0; j--)
                for (int i = left; i <= right; i++)
                    if (Convert.ToString(input_image.GetPixel(i, j)) !=
                        "Color [A=255, R=255, G=255, B=255]")
                    {
                        local_bottom = j;
                        return local_bottom;
                    }
            return local_bottom;
        }

        public void map_character_image_pixel_matrix()
        {
            for (int j = 0; j < character_height; j++)
                for (int i = 0; i < character_width; i++)
                    character_image_pixel[i, j] = input_image.GetPixel(i + left, j + top);
        }

        public void map_ann_input_matrix()
        {

            pick_sampling_pixels();
            for (int j = 0; j < matrix_height; j++)
                for (int i = 0; i < matrix_width; i++)
                {
                    ann_input_pixel[i, j] =
                        character_image.GetPixel(sample_pixel_x[i], sample_pixel_y[j]);
                    if (ann_input_pixel[i, j].ToString() == "Color [A=255, R=0, G=0, B=0]")
                        ann_input_value[i, j] = 1;
                    else
                        ann_input_value[i, j] = 0;
                }
            groupBox6.Invalidate();
            groupBox6.Update();
        }

        private void groupBox6_Paint(object sender, System.Windows.Forms.PaintEventArgs e)
        {
            SolidBrush blueBrush = new SolidBrush(Color.Blue);
            Pen blackpen = new Pen(Color.Black, 1);
            for (int j = 0; j < matrix_height; j++)
                for (int i = 0; i < matrix_width; i++)
                {
                    e.Graphics.DrawRectangle(blackpen,
                        (x_org + rec_width * i), (y_org + rec_height * j),
                        (rec_width), (rec_height));
                    if (ann_input_value[i, j] == 1)
                        e.Graphics.FillRectangle(blueBrush,
                            x_org + rec_width * i, y_org + rec_height * j,
                            rec_width, rec_height);
                }
        }

        //// PERCEPTRON NEURAL NETWORK IMPLEMENTATION
        private void button4_Click(object sender, System.EventArgs e)
        {
            System.Threading.Thread t1 = new System.Threading.Thread(startProgress);
            t1.Start();

            reset_controls();
            //label27.Text = "Analyzing Image. Please Wait . . .";
            //label27.Update();
            form_network();
            initialize_weights();
            form_input_set();
            form_desired_output_set();
            right = 1;
        }

        void startProgress()
        {
        }

        public void form_network()
        {
            layers[0] = number_of_input_nodes;
            layers[number_of_layers - 1] = number_of_output_nodes;
            for (int i = 1; i < number_of_layers - 1; i++)
                layers[i] = maximum_layers;
        }

        public void initialize_weights()
        {   //missing opening brace restored
            for (int i = 1; i < number_of_layers; i++)
                for (int j = 0; j < layers[i]; j++)
                    for (int k = 0; k < layers[i - 1]; k++)
                        weight[i, j, k] = (float)(rnd.Next(-weight_bias, weight_bias));
        }

8.1.4. HYBRID MULTILAYER PERCEPTRON

        private void cmdClearAll_Click(object sender, EventArgs e)
        {
            lvMain.Items.Clear();
            imlMain.Images.Clear();
        }

        public void ShowProgress(long CurrentRound, long MaxRound, ref bool cancel)
        {
            this.pbTrain.Maximum = System.Convert.ToInt32(MaxRound);

            this.pbTrain.Value = System.Convert.ToInt32(CurrentRound);

            //Check whether our user clicked the cancel button
            //if (this.StopTraining == true)
            //    cancel = true;
            lblTrainInfo.Text = CurrentRound + " rounds finished of " + MaxRound + " times";
        }

        public void DetectPattern()
        {
            //Step 1: Convert the image to detect to an arraylist
            BrainNet.NeuralFramework.ImageProcessingHelper imgHelper =
                new BrainNet.NeuralFramework.ImageProcessingHelper();
            ArrayList input = null;
            input = imgHelper.ArrayListFromImage(this.picImgDetect.Image);

            //Step 2: Run the network and obtain the output
            ArrayList output = null;
            output = network.RunNetwork(input);

            //Step 3: Convert the output arraylist to a long value
            //so that we will get the ASCII character code
            BrainNet.NeuralFramework.PatternProcessingHelper patternHelper =
                new BrainNet.NeuralFramework.PatternProcessingHelper();
            string character = Chr(patternHelper.NumberFromArraylist(output));
            string bitpattern = patternHelper.PatternFromArraylist(output);

            //Display the result
            this.txtAsciiDetect.Text = character;
            this.txtPatternDetect.Text = bitpattern;
        }

        private void cmdSave_Click(object sender, EventArgs e)

        {
            //Serialize our network to a file
            BrainNet.NeuralFramework.NetworkSerializer ser =
                new BrainNet.NeuralFramework.NetworkSerializer();

            SaveFileDialog dlg = new SaveFileDialog();
            dlg.Filter = "XML Files|*.xml";
            dlg.DefaultExt = "xml";
            dlg.ShowDialog();

            try
            {
                if (!string.IsNullOrEmpty(dlg.FileName))
                {
                    ser.SaveNetwork(dlg.FileName, network);
                    MsgBox("Saved to file " + dlg.FileName);
                }
            }
            catch (Exception ex)
            {
                MsgBox("Error: Invalid File? " + ex.Message);
            }
        }

        private void cmdLoad_Click(object sender, EventArgs e)
        {
            //Serialize our network to a file
            BrainNet.NeuralFramework.NetworkSerializer ser =
                new BrainNet.NeuralFramework.NetworkSerializer();

            OpenFileDialog dlg = new OpenFileDialog();
            dlg.Filter = "XML Files|*.xml";
            dlg.ShowDialog();
            try
            {
                if (!string.IsNullOrEmpty(dlg.FileName))
                {
                    ser.LoadNetwork(dlg.FileName, ref network);

                    MsgBox("File " + dlg.FileName + " loaded");
                }
            }
            catch (Exception ex)
            {
                MsgBox("Error: Invalid File? " + ex.Message);
            }
        }

        private void cmdExit_Click(object sender, EventArgs e)
        {
            this.Close();
            //System.Environment.Exit(0);
        }

EXPLANATION

In the above process, we developed a simple application - a two input gate
that can be trained to perform the function of any digital gate - using the
BrainNet library. Now it is time to go for something more exciting and
powerful: a pattern/image detection program using the BrainNet library. We
provide a set of images as input to the network, along with an ASCII character
that corresponds to each input, and we examine whether the network can predict
a character when an arbitrary image is given.

8.2 SCREEN SHOTS

USER AUTHENTICATION

Fig 4.11 Invalid user tried to enter as an authenticated person

Fig. 4.12 Tool identifies the intruder and enables the camera

Fig. 4.13. Intruder image was captured and mailed to owner’s mail id

PERCEPTRON

Fig. 4.14 Module 2 - Perceptron. Use the run network function to find the hidden layer

XML OUTPUT FILE:

Fig. 4.16. Output of module 2 shows the hidden layer

MULTILAYER PERCEPTRON:

Fig 4.17. Module 3 - Multilayer perceptron

HYBRID MULTILAYER PERCEPTRON:

Fig 4.18. Module 4 - Hybrid multilayer perceptron

XML OUTPUT FILE:

Fig. 4.19 Output of hybrid multilayer perceptron shows the hidden layer

CHAPTER 9

REFERENCES

[1] A. Hofmann, T. Horeis, and B. Sick, "Feature Selection for Intrusion
Detection: An Evolutionary Wrapper Approach," Proc. IEEE Int'l Joint Conf.
Neural Networks, July 2004.

[2] A.H. Sung and S. Mukkamala, "Identifying Important Features for Intrusion
Detection Using Support Vector Machines and Neural Networks," Proc. Symp.
Applications and the Internet (SAINT '03), Jan. 2003.

[3] M. Guennoun, A. Lbekkouri, and K. El-Khatib, "Optimizing the Feature Set
of Wireless Intrusion Detection Systems," Int'l J. Computer Science and
Network Security, vol. 8, no. 10, October 2008.

[4] M. Guennoun and K. El-Khatib, "The Scalable Wireless Intrusion
Detection," Proc. IEEE Symp. Security and Privacy, May 2009.

[5] K. El-Khatib, "Impact of Feature Reduction on the Efficiency of Wireless
Intrusion Detection Systems," IEEE Transactions on Parallel and Distributed
Systems, vol. 21, no. 8, August 2010.

[6] J.A. Anderson, "Introduction to Artificial Neural Networks," Prentice Hall.
