
Table of Contents

1. Topic of the Project.

2. Category of the Project.

3. System Analysis

Preliminary Investigation.

Feasibility Study.

Identification of Needs.

Objectives/Close Boundary of System.

S/W Engineer Paradigm Applied.

S/W Requirement Specification.

H/W Requirement Specification.

DFD.

E-R Diagram.

4. System Design.

Module Design.

Database Design.

Object Oriented Design.

User Interface Design.

Output Design.

Test Case Design.

5. Coding

Complete Coding.

Comments and Description.

Coding Efficiency.

Error Handling.

Back-End Procedure Design.

Validation Checking.

6. Testing


Testing Techniques and Strategies Used.

Test Case Applied.

Test Case Result.

7. System Security Measure.

Database/Data Security.

Creation of User Profile And Access Rights.

8. Cost Estimation of Project.

9. PERT Chart.

10. Gantt Chart.

11. Future Scope of the Project.

12. Reference Book Used.


IMAGE ENCRYPTION

Category of the Project


We are using web design for developing this project. We are developing the project in Java using JSP and HTML pages.

WEB DESIGN :- Web design is the skill of creating presentations of content that are delivered to an end-user through the World Wide Web, by way of a Web browser or other Web-enabled software such as Internet television clients, microblogging clients and RSS readers.

The intent of web design is to create a web site—a collection of electronic documents and applications that reside on a web server or servers and present content and interactive features/interfaces to the end user in the form of Web pages once requested. Elements such as text, bit-mapped images and forms can be placed on the page using HTML tags.


SYSTEM ANALYSIS


PRELIMINARY INVESTIGATION

A geologic preliminary investigation is a survey of the subsoil conducted by

an engineering geologist in conjunction with a civil engineer. Typically, the

footprint of the structure is established on the proposed building site and

trenches up to fourteen feet deep are dug both outside, and more

importantly, inside, the proposed footprint using the bucket-end of a

backhoe. In extreme cases, a larger, more powerful tracked excavator is

used. The geologist is looking for potential failure planes, expansive clays,

excessive moisture, potential for proper compaction, and other variables that

go into the construction of a solid foundation. Materials are also gathered to

determine the maximum compaction value of the subsurface. Prelims should

always be conducted prior to the construction of any permanent structure.

FEASIBILITY STUDY


A feasibility study is the process by which analysts determine how beneficial and practical the development of the information system is for the business/institution.

Generally, a feasibility study precedes technical development and project implementation. In other words, a feasibility study is an evaluation or analysis of the potential impact of a proposed project. It is a very important step in system development, as it determines whether the system should be developed or not.

The English word feasible means capable of being carried out successfully. An event is said to be feasible if it is considered possible and practicable. The feasibility study is an investigation into how possible a proposed scheme might be, and whether it is capable of being carried out successfully. It is usually assessed on four standard criteria for feasibility, but other considerations may be relevant and necessary depending on the specific nature of the project and on whether it is really required under the present working conditions.

TYPES OF FEASIBILITY STUDY :

(I) ECONOMIC FEASIBILITY

(II) TECHNICAL FEASIBILITY

(III) BEHAVIORAL FEASIBILITY

Economic Feasibility :- Economic analysis, more commonly known as cost/benefit analysis, is the technique most frequently used to evaluate the effectiveness of a system. The procedure is to determine the benefits and savings that are expected from the system and compare them with the costs; if the benefits outweigh the costs, the decision is made to design and implement the system. This part of the feasibility study gives top management the economic justification for the new system. This is an important input to management, because very often top management does not like to get confounded by the various technicalities that are bound to be associated with a project of this kind. A simple economic analysis that gives the actual comparison of costs and benefits is much more meaningful in such cases. In this system, the organization is most satisfied by economic feasibility, because if the organization implements the system it does not require any additional hardware resources and it will save a lot of time.

Technical Feasibility :- Technical feasibility centers on the existing manual system of the test management process and on the extent to which it can support the proposed system. Following the feasibility analysis procedure, the technical feasibility of the system is analyzed and the technical requirements, such as s/w facilities, procedures and inputs, are identified. This is also one of the important phases of the system development activities. The system offers greater levels of user friendliness combined with greater processing speed. Therefore, the cost of maintenance can be reduced, since the processing speed is very high and the maintenance workload is reduced; from this point of view, management is convinced that the project is operationally feasible.

Behavioral Feasibility :- People are inherently resistant to change, and computers have been known to facilitate change. An estimate should be made of how strongly users are likely to react to the development of a computerized system. There are various levels of users, in order to ensure proper authentication and authorization and the security of the organization's sensitive data.


IDENTIFICATION OF NEED

The user or the system analyst thinks of developing an alternative system

only when he feels necessity of it. The user hopes for an alternative for

external information requirements such as supplying the government

regulations or fulfilling the request of his own management to generate more

information.

The user might be well acquainted with the unsatisfactory performance of the operations for which he is responsible. For instance, the frequent late billing of customers can be a matter of worry for the manager of the accounts receivable department, or an increase in the percentage of delinquent accounts might be a point to ponder over.

Likewise, the system analyst, who is familiar with the operational or administrative field, can offer advice for improvement. The system analysts maintain an interaction with the users and try to learn what the drawbacks in the operation are. Problems are also identified through joint discussion between the analyst and the user.

OBJECTIVES & CLOSE BOUNDARY OF SYSTEM

The purpose of the on-line test simulator is to conduct online tests in an efficient manner, with no time wasted on checking the papers.

The main objective of the on-line test simulator is to efficiently evaluate the candidate thoroughly through a fully automated system that not only saves a lot of time but also gives fast results.


Analysis will be very easy in the proposed system, as it is automated. Results will be very precise and accurate, and will be declared in a very short span of time, because the calculations are done by the simulator itself.

The proposed system is very secure, as there is no chance of the question paper leaking, since it depends on the administrator only.

SOFTWARE ENGINEERING PARADIGM MODELS

A software life cycle model is either a descriptive or prescriptive

characterization of how software is or should be developed. In contrast to

software life cycle models, software process models often represent a

networked sequence of activities, objects, transformations, and events that

embody strategies for accomplishing software evolution. Such models can

be used to develop more precise and formalized descriptions of


software life cycle activities. Each process model follows a particular life

cycle to ensure success in process of software development.

The primary functions of a software process model are to

determine the order of the stages involved in software development and

evolution and to establish the transition criteria for progressing from one

stage to the next.

There are various software development approaches, defined and designed, which are used/employed during the development process of software. A process model for software engineering is chosen based on the nature of the project and application, the methods and tools to be used, and the controls and deliverables that are required. An example of such a model is the waterfall model.

Waterfall model :- The waterfall approach was the first process model to be introduced and followed widely in software engineering to ensure the success of a project. This model is sometimes referred to as the linear sequential model. The waterfall model was developed in 1970 by Winston W. Royce. It was the only widely accepted life cycle model until the early 1980s. The waterfall model is a sequential software development process, in which progress is seen as flowing steadily downwards, like water, through all the phases of the development cycle. All the phases of the waterfall model are related to, and dependent on, each other.

DIAGRAM OF WATERFALL MODEL


Requirement Analysis & Definition :- The requirements are gathered from the end-user by consultation. These requirements are analyzed for their validity, and the possibility of incorporating them in the system to be developed is also studied.

System & Software Design :- System design helps in specifying the hardware and system requirements and in defining the overall system architecture.

[Diagram: the waterfall phases Requirement → Design → Implementation → Verification & Validation → Operation & Maintenance, each flowing into the next.]


Implementation & Unit Testing :- On the basis of the design documents, the work is divided into modules/units and actual coding is started. The system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality; this is referred to as unit testing. Unit testing mainly verifies whether the modules/units meet their specifications.

Integration & System Testing :- The various units developed and tested for their functionality in the previous phase are integrated into a complete system during the integration phase and tested to check whether all modules/units coordinate with each other and the system as a whole behaves as per the specifications. After successfully testing the software, it is delivered to the customer.

Operations & Maintenance :- This is a virtually never-ending phase. Generally, problems with the developed system (which were not found during the development life cycle) come up after its practical use starts, so issues related to the system are solved after deployment. Not all problems come to light directly; they arise from time to time and need to be solved. Hence this process is referred to as maintenance.

SOFTWARE REQUIREMENT SPECIFICATION

A software requirements specification (SRS) is a comprehensive description of

the intended purpose and environment for software under development.

The SRS fully describes what the software will do and how it will be

expected to perform.

APACHE TOMCAT :- This is the top-level entry point of the documentation bundle for the Apache Jakarta Tomcat Servlet/JSP container. Tomcat version 5.5 implements the Servlet 2.4 and JavaServer Pages 2.0 specifications from the Java Community Process, and


includes many additional features that make it a useful platform for

developing and deploying web applications and web services.

ADVANCED JAVA :- Java is a platform-independent programming language used to create secure and robust applications. Java is used to create applications that run on a single computer or are distributed among servers and clients over a network. While developing enterprise applications, Java provides complete independence from problems related to hardware, network and operating system, since it provides features such as platform independence and portability.

NOTEPAD :- Notepad is a basic text editor that you can use to create

simple documents. The most common use for Notepad is to view or edit

text (.txt) files, but many users find Notepad a simple tool for creating

Web pages. Because Notepad supports only very basic formatting, you

cannot accidentally save special formatting in documents that need to

remain pure text. This is especially useful when creating HTML

documents for a Web page because special characters or other

formatting may not appear in your published Web page or may even

cause errors. You can save your Notepad files as Unicode, ANSI, UTF-8,

or big-endian Unicode. These formats provide you greater flexibility

when working with documents that use different character sets.

INTERNET EXPLORER :- Here are some ways that Internet Explorer

makes browsing the web easier, safer, and more enjoyable. New

security and privacy features allow you to browse the web more safely.

Phishing Filter can help protect you from phishing attacks, online

fraud, and spoofed websites.

Higher security levels can help protect you from hackers and web

attacks.

The Security Status bar displays the identity of secure websites to

help you make informed decisions when using online banking or

merchants.


Internet Explorer's add-on disabled mode lets you start Internet

Explorer without toolbars, ActiveX controls, or other add-ons that

might slow your computer or prevent you from getting online.

HARDWARE REQUIREMENT SPECIFICATION

The means of HRS is to specify the hardware requirement for our project. The

hardware’s configuration necessary for our project are given below –

MICROPROCESSOR :- INTEL PENTIUM DUAL-CORE / CORE 2 DUO PROCESSOR.

RAM :- 1GB

HARD-DISK :- 160 GB


OTHER PERIPHERALS :- VISUAL DISPLAY UNIT, KEYBOARD, MOUSE, CD-ROM, DVD-ROM, PEN DRIVE, etc.


SOFTWARE REQUIREMENT SPECIFICATION

Before starting the development of the online examination project, install the following:

JDK1.7

Apache Tomcat 5.5

JSP

Internet Explorer


DATA FLOW DIAGRAM (DFD)


DATA FLOW DIAGRAM

A data flow diagram is graphical tool used to describe and analyze movement

of data through a system. These are the central tool and the basis from which

the other components associated with the system. These are known as the

logical data flow diagrams. The data flow diagrams show the actual

implements and movement of data between people, departments and

workstations.

The data flow diagram is an important tool of structured analysis, introduced by Larry Constantine. A data flow diagram is a complete network that describes the flow of data through the whole system, the data stores, and the processes that change the flow of data. It is a formal, logical abstraction of a system which may have many possible physical configurations. For this reason, a set of symbols that does not indicate physical form is used to denote data sources, data flows, data transformations and so on. A data flow is a directed line that identifies the input data flow of each process circle. A data store is denoted by a labeled rectangle, open at one end, which identifies a data store or file. The DFD is a graphical tool to explain and analyze the movement of data through a system, whether manual or automated.

Components of a DFD :-

a.) Process :- Processes show what the system does. Each process has one or more data inputs and produces one or more data outputs. Processes are represented by circles in a DFD.

b.) Data storage :- A component of a DFD that describes a repository of data in the system.

c.) External entity :- An object outside the scope of the system. It is represented by a box.

d.) Data flow :- Shows how data flows between processes, data stores and external entities. Data flows model the passage of data in the system and are represented by lines joining system components.

[DFD symbols: Data Storage, External Entity, Data Flow]


ENTITY RELATIONSHIP DIAGRAM (ER DIAGRAM)


In software engineering, an entity-relationship model (ERM) is an abstract and conceptual representation of data. Entity-relationship modeling is a database modeling method used to produce a type of conceptual schema or semantic data model of a system, often a relational database, and its requirements in a top-down fashion. Diagrams created by this process are called entity-relationship diagrams, or ER diagrams.

An entity may be defined as a thing which is recognized as being capable of

an independent existence and which can be uniquely identified. An entity is

an abstraction from the complexities of some domain. When we speak of an

entity we normally speak of some aspect of the real world which can be

distinguished from other aspects of the real world.

An entity may be a physical object such as a house or a car, an event such

as a house sale or a car service, or a concept such as a customer transaction

or order. Although the term entity is the one most commonly used, following

Chen we should really distinguish between an entity and an entity-type. An

entity-type is a category. An entity, strictly speaking, is an instance of a given

entity-type. There are usually many instances of an entity-type. Because the

term entity-type is somewhat cumbersome, most people tend to use the term

entity as a synonym for this term.

Entities can be thought of as nouns. Examples: a computer, an employee, a

song, a mathematical theorem.

A relationship captures how two or more entities are related to one another.

Relationships can be thought of as verbs, linking two or more nouns.

Examples: an owns relationship between a company and a computer, a

supervises relationship between an employee and a department, a performs

relationship between an artist and a song, a proved relationship between a

mathematician and a theorem.


Entities and relationships can both have attributes. Every entity must have a

minimal set of uniquely identifying attributes, which is called the entity's

primary key.
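As an illustration (ours, not the document's), an entity-type with its primary key can be modeled in Java as a simple class; the Song entity-type from the examples above might look like this:

// Hypothetical sketch: the entity-type "Song" as a Java class.
// The id field plays the role of the primary key: a minimal set of
// attributes that uniquely identifies each entity (instance).
public class Song {
    private final int id;        // primary key: uniquely identifies a song
    private final String title;  // ordinary attribute
    private final String artist; // ordinary attribute

    public Song(int id, String title, String artist) {
        this.id = id;
        this.title = title;
        this.artist = artist;
    }

    public int getId() { return id; }
}

Each Song object is an entity; the set of all Song objects corresponds to the entity set.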

Entity-relationship diagrams don't show single entities or single instances of

relations. Rather, they show entity sets and relationship sets. Example: a

particular song is an entity. The collection of all songs in a database is an

entity set. The eaten relationship between a child and her lunch is a single

relationship. The set of all such child-lunch relationships in a database is a

relationship set. In other words, a relationship set corresponds to a relation in

mathematics, while a relationship corresponds to a member of the relation.


SYSTEM DESIGN

MODULE DESIGN


Module design, which is also called "low-level design", has to consider the programming language which shall be used in the implementation. This will determine the kind of interfaces you can use and a number of other subjects. In this project we focus on module design for the Java (Servlet, JSP) programming language and show some crucial principles for a successful design, which are the following:

ENCAPSULATION :- The principle of encapsulation, which is sometimes also called "information hiding", is part of the idea of object orientation (see high-level design). The principle is that only the data which are part of the interface of an object are visible to the outside world. Preferably these data are only available via function calls, rather than being presented as global variables. An encapsulated module design (described here in terms of C programs) can be achieved by:

The use of local variables inside functions as far as possible, i.e., avoiding variables with validity across functions or even modules.

The use of function interfaces, i.e., passing parameters and return values for data exchange, rather than global or static variables.

If values have to have a lifetime beyond one execution loop, the use of static variables rather than global variables.

Designing your software with a synchronized control and data flow, as outlined below. (A small Java sketch of encapsulation follows this list.)
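A minimal sketch of the encapsulation principle in Java (ours; the points above are phrased for C programs):

// Hidden state is reachable only through method calls, never as a
// global variable, so the interface of the object stays small.
public class Counter {
    private int count;            // local to the object: not visible outside

    public void increment() {     // data exchange via the interface only
        count++;
    }

    public int getCount() {       // read access through a function call
        return count;
    }
}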

OBJECT ORIENTED DESIGN


Object-oriented design is one of the modern techniques. The function of this technique is to refine the requirements of the earlier identified objects and to define design-specific objects.

The function of object-oriented analysis is to determine, by studying the existing objects, whether they can be reused or adapted for new uses. This technique can be used to define new or modified objects which will be linked with the existing objects.

During object-oriented design, the designer might have to revise the data or process characteristics of objects which were defined during system analysis. Likewise, design and implementation decisions might require the designer to define a new set of objects which form the interface through which the user interacts with the new system.

USER INTERFACE DESIGN

The cost of collecting input data and the cost of processing information are two principal costs of a system. Since most data that enters and leaves a system is recorded on forms, form design can strongly affect the cost-effectiveness of a system.

It is the task of the system analyst to help the user design suitable forms and to coordinate the user's form-production activities. It is also the analyst's task to control and regulate the creation of new and changed forms within the organization, to check costly duplication of forms and of form-design effort.

OUTPUT DESIGN

Output is the primary purpose of any system. These guidelines apply for

the most part to both paper and screen outputs. Output design is often

discussed before other aspects of design because, from the client's point of


view, the output is the system. Output is what the client is buying when he or

she pays for a development project. Inputs, databases, and processes exist to

provide output.

Problems often associated with business information output are

information delay, information (data) overload, paper domination,

excessive distribution, and no tailoring.

Mainframe printers: high volume, high speed, located in the data center

Remote site printers: medium speed, close to end user.

COM is Computer Output Microfilm. It is more compact than traditional

output and may be produced as fast as non-impact printer output.

Turnaround documents reduce the cost of internal information

processing by reducing both data entry and associated errors.

Periodic reports have set frequencies such as daily or weekly; ad hoc

reports are produced at irregular intervals.

Detail and summary reports differ in that the former supports day-to-day operation of the business while the latter includes statistics and ratios used by managers to assess the health of operations.

Page breaks and control breaks allow for summary totals on key fields.

Report requirements documents contain general report information and

field specifications; print layout sheets present a picture of what the

report will actually look like.

Page decoupling is the separation of pages into cohesive groups.

Two ways to design output for strategic purposes are (1) make it

compatible with processes outside the immediate scope of the system,

and (2) turn action documents into turnaround documents.

People often receive reports they do not need because the number of

reports received is perceived as a measure of power.


Fields on a report should be selected carefully to provide uncluttered

reports, facilitate 80-column remote printing, and reduce information

(data) overload.

The types of fields which should be considered for business output are:

key fields for access to information, fields for control breaks, fields that

change, and exception fields.

Output may be designed to aid future change by stressing unstructured

reports, defining field size for future growth, making field constants into

variables, and leaving room on summary reports for added ratios and

statistics.

Output can now be more easily tailored to the needs of individual users

because inquiry-based systems allow users themselves to create ad

hoc reports.

An output intermediary can restrict access to key information and

prevent unauthorized access.

An information clearinghouse (or information center) is a service center

that provides consultation, assistance, and documentation to

encourage end-user development and use of applications.

The specifications needed to describe the output of a system are: data flow diagrams, data flow specifications, data structure specifications, and data element specifications.


TEST CASE DESIGN

Boris Beizer defines a test as “A Sequence of one or more subtests executed

as a sequence because the outcome and/or final state of one subtest is the

input and/or initial state of the next. The word ‘test’ is used to include

subtests, tests proper and test suites”.

A good test has a high probability of finding an error. To achieve this

goal, the tester must understand the software and attempt to develop a

mental picture of how the software might fail. Ideally, the classes of failure

are probed. For example, one class of potential failure in a GUI (Graphical

user interface) is a failure to recognize proper mouse position. A set of tests

would be designed to exercise the mouse in an attempt to demonstrate an

error in mouse position recognition.

A good test is not redundant. Testing time and resources are

limited. There is no point in conducting a test that has the same purpose as

another test. Every test should have a different purpose. A good test should be "best of breed". In a group of tests that have a similar intent, time and resource limitations may mandate the execution of only a subset of these tests. In such cases, the test that has the highest likelihood of uncovering a whole class of errors should be used.

A good test should be neither too simple nor too complex. Although it is sometimes possible to combine a series of tests into one test case, the possible side effects associated with this approach may mask errors. In general, each test should be executed separately.

A rich variety of test case design methods have evolved for software.

These methods provide the developer with a systematic approach to testing.

More important, methods provide a mechanism that can help to ensure the

completeness of tests and provide the highest likelihood for uncovering

errors in software.

CODING


THIS IS THE HOME PAGE OF MY PROJECT


// Project IMAGE ENCRYPTION
//*********************************************
// INCLUDED Packages
//*********************************************

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
<%@ page import="java.util.*" %>

//*********************************************
// USED Methods/Functions
//*********************************************


function validate_required(field, alerttext)
{
    // body omitted in the original document; typically this would test
    // whether field.value is empty and alert(alerttext) if so
}

function validate_pass(field1, field2, alerttxt)
{
    // body omitted in the original document; typically this would compare
    // the two password fields and alert(alerttxt) on a mismatch
}

//*********************************************

// USED HTML TAGS

//*********************************************

<html> </html>

<head> </head>

<title> </title>

<body> </body>

<div> </div>

<ul> </ul>

<li> </li>

<br>

<img src=""/>

<a href=""> </a>

<p> </p>

//*********************************************
// Java SCRIPT TAGS
//*********************************************

<script>
<!--
-->
</script>

//*********************************************

// Jsp Scripting TAGS

//*********************************************


Start with <%

End with %>
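For illustration (this fragment is ours, not part of the project listing), a JSP scriptlet using these delimiters might look like:

<%-- hypothetical JSP fragment: Java between <% and %> runs on the server --%>
<%@ page import="java.util.Date" %>
<html>
  <body>
    <% Date now = new Date(); %>
    <p>Page served at: <%= now %></p>
  </body>
</html>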

CODING OF HOME PAGE:

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;

public class Home extends JFrame implements ActionListener
{
    // the two buttons shown on the home screen
    private JButton compose, breakmsg;

    Home()
    {
        super("Cryptography");                          // window title
        Container con = getContentPane();
        con.setLayout(null);                            // absolute positioning

        compose = new JButton("Hide Information");      // opens the hide (encrypt) page
        compose.addActionListener(this);
        compose.setBounds(300, 350, 150, 50);

        breakmsg = new JButton("Un-Hide Information");  // opens the un-hide (decrypt) page
        breakmsg.addActionListener(this);
        breakmsg.setBounds(550, 350, 150, 50);

        con.add(compose);
        con.add(breakmsg);
    }

    public void actionPerformed(ActionEvent ae)
    {
        if (ae.getSource() == compose)
        {
            this.dispose();                             // close the home window
            ComposePage cp = new ComposePage();         // defined elsewhere in the project
            cp.setSize(1035, 790);
            cp.setVisible(true);
        }
        if (ae.getSource() == breakmsg)
        {
            this.dispose();
            BreakPage bp = new BreakPage();             // defined elsewhere in the project
            bp.setSize(1035, 790);
            bp.setVisible(true);
        }
    }

    public static void main(String args[])
    {
        Home h = new Home();
        h.setSize(1035, 790);
        h.setVisible(true);
    }
}


VALIDATIONS

VALIDATION CHECKING

VALIDATION : According to ISO 9000:2000, validation is defined as "confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled". In contrast with verification, validation focuses on the question of whether a system can perform its desired functions. Another definition of validation is "answering the question of whether the customer will be able to use the product in its intended manner." To validate something is to test it for use, not to check it for physical properties. Something that has failed verification can still be declared fit for purpose after it has been validated.

There are two types of validation, as given below –

1. Data Validation

2. Form Validation


1. Data Validation : In computer science, data validation is the process of ensuring that a program operates on clean, correct and useful data. It uses routines, often called "validation rules" or "check routines", that check for the correctness, meaningfulness, and security of data that are input to the system. The rules may be implemented through the automated facilities of a data dictionary, or by the inclusion of explicit application program validation logic.

For business applications, data validation can be defined through declarative data integrity rules, or procedure-based business rules. Data that does not conform to these rules will negatively affect business process execution. Therefore, data validation should start with the business process definition and the set of business rules within this process. Rules can be collected through the requirements capture exercise.

The simplest data validation verifies that the characters provided come from a valid set. For example, telephone numbers should include the digits and possibly the characters +, -, ( and ) (plus, minus, and parentheses). A more sophisticated data validation routine would check to see that the user had entered a valid country code, i.e., that the number of digits entered matched the convention for the country or area specified.

Incorrect data validation can lead to data corruption or security vulnerability.

Data validation checks that data are valid, sensible, reasonable, and secure

before they are processed.

DATA VALIDATION CHECKS : There are many types of data validation checks, as given below –

BATCH TOTALS : Checks for missing records. Numerical fields may be

added for all records in a batch. The batch total is entered and the


computer checks that the total is correct, e.g., add the 'Total Cost' field

of a number of transactions together.

CARDINALITY CHECK : Checks that a record has a valid number of related records. For example, if a contact record is classified as a customer, it must have at least one associated order (cardinality > 0). If no order exists for a "customer" record, then it must either be changed to "seed" or the order must be created. This type of rule can be complicated by additional conditions. For example, if a contact record in a payroll database is marked as "former employee", then this record must not have any associated salary payments after the date on which the employee left the organization (cardinality = 0).

CHECK DIGITS : Used for numerical data. An extra digit is added to a number and is calculated from the other digits. The computer checks this calculation when data are entered, e.g., the ISBN for a book: the last digit is a check digit calculated using a modulus 11 method (a sketch follows below).
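As a concrete illustration (ours, not the document's), the modulus 11 method for an ISBN-10 can be coded in Java as follows:

// Minimal sketch: verify an ISBN-10 check digit (modulus 11 method).
// The weighted sum of all ten digits must be divisible by 11; the
// character 'X' stands for the value 10 in the last position.
public class IsbnCheck {
    public static boolean isValidIsbn10(String isbn) {
        if (isbn.length() != 10) return false;
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            char c = isbn.charAt(i);
            int value;
            if (c == 'X' && i == 9)        value = 10;   // 'X' = 10, last position only
            else if (Character.isDigit(c)) value = c - '0';
            else                           return false;  // invalid character
            sum += (10 - i) * value;                      // weights 10 down to 1
        }
        return sum % 11 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isValidIsbn10("0306406152")); // prints true
    }
}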

CONSISTENCY CHECKS : Checks fields to ensure that data in these fields correspond, e.g., if Title = "Mr.", then Gender = "M".

CONTROL TOTALS : This is a total done on one or more numeric

fields which appears in every record. This is a meaningful total, e.g.,

add the total payment for a number of Customers.

CROSS - SYSTEM CONSISTENCY CHECKS : Compares data in

different systems to ensure it is consistent, e.g., The address for the

customer with the same id is the same in both systems. The data may

be represented differently in different systems and may need to be

transformed to a common format to be compared, e.g., one system

may store customer name in a single Name field as 'Doe, John Q', while

another in three different fields : First Name (John), Last Name (Doe)


and Middle Name (Quality); to compare the two, the validation engine would have to transform data from the second system to match the data from the first, for example, using SQL: Last Name || ', ' || First Name || substring(Middle Name, 1, 1) would convert the data from the second system to look like the data from the first: 'Doe, John Q'.

DATA TYPE CHECKS : Checks the data type of the input and gives an error message if the input data does not match the chosen data type, e.g., in an input box accepting numeric data, if the letter 'O' was typed instead of the number zero, an error message would appear.

FILE EXISTENCE CHECK : Checks that a file with a specified name

exists. This check is essential for programs that use file handling.

FORMAT OR PICTURE CHECK : Checks that the data is in a specified format (template), e.g., dates have to be in the format DD/MM/YYYY.

HASH TOTALS : This is just a batch total done on one or more

numeric fields which appears in every record. This is a meaningless

total, e.g., add the Telephone Numbers together for a number of

Customers.

LIMIT CHECK : Unlike range checks, data is checked for one limit

only, upper OR lower, e.g., data should not be greater than 2 (<=2).

LOGIC CHECK : Checks that an input does not yield a logical error,

e.g., an input value should not be 0 when there will be a number that

divides it somewhere in a program.

PRESENCE CHECK : Checks that important data are actually present

and have not been missed out, e.g., customers may be required to

have their telephone numbers listed.


RANGE CHECK : Checks that the data lie within a specified range of values, e.g., the month of a person's date of birth should lie between 1 and 12.

REFERENTIAL INTEGRITY : In a modern relational database, values in two tables can be linked through a foreign key and a primary key. If values in the primary key field are not constrained by a database-internal mechanism, then they should be validated. Validation of the foreign key field checks that the referencing table always refers to a valid row in the referenced table.

SPELLING AND GRAMMAR CHECK : Looks for spelling and

grammatical errors.

UNIQUENESS CHECK : Checks that each value is unique. This can be applied to several fields (e.g., Login, Password).

FORM VALIDATION CHECKS : There are many types of form validation checks, as given below –

CHECK WHOLE FORM ( ) : A master function, called checkWholeForm(), is placed at the top of the page that contains a form. This function calls a series of sub-functions, each of which checks a single form element for compliance with a specific string format and returns a message describing the error. If a function returns an empty string, we know the element complies.

CHECK USER NAME ( ) : Here's the routine that checks to see if the user entered anything at all in the username field. (We'll use the same routine to check each form field for blankness.) We pass the value of the username field to this function, which compares that value to an empty string (""). If the two are the same, we know that the username field is blank, so we return the warning string to our master function. If


it’s not blank, we move along to the next hurdle. We want to permit

only usernames that are between 4 and 10 characters. We check the

length of the string, and reject it if it’s too short or too long.

Next, we want to forbid certain characters from appearing in

usernames. Specifically, we want to allow only letters, numbers, and

underscores. We can test for that using regular expressions and the

test() method. The regular expression functions found in JavaScript 1.2

are similar to Perl’s regular expressions, with a bit of simplification

when it comes to syntax. If you know Perl, you should have no trouble

wielding JavaScript’s regular expressions. The JavaScript regular

expression /\W/ is a standard character class that's handily predefined to mean "any character other than letters, numbers, and underscores." So we set the variable illegalChars equal to that regular expression,

and then test the username string against that variable to see if a

match is found. If it is, we throw up a warning.

By now, we've run the username through three tests. If it's passed all three, it's OK by us. We give the username a passing grade and move along to the next field. (A Java rendering of these checks is sketched below.)
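A minimal Java rendering of the username rules above (ours; the document describes the JavaScript version):

import java.util.regex.Pattern;

// Sketch: non-blank, 4-10 characters, and only letters, digits and
// underscores (\W matches any character outside that set).
public class UsernameCheck {
    private static final Pattern ILLEGAL_CHARS = Pattern.compile("\\W");

    public static String checkUsername(String username) {
        if (username.equals("")) return "Please enter a username.";
        if (username.length() < 4 || username.length() > 10)
            return "Usernames must be between 4 and 10 characters.";
        if (ILLEGAL_CHARS.matcher(username).find())
            return "Only letters, numbers, and underscores are allowed.";
        return ""; // an empty string means the field complies
    }
}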

CHECK PASSWORD ( ) : For the password field, we want to constrain the length again (this time, we'll keep it between 6 and 8 characters), and we want to allow only letters and numbers — no underscores this time. So we have to use a new regular expression to define which characters we're banning. This one, like the last one, includes \W — everything but letters, numbers, and underscores — but we also need to explicitly mention underscores, so as to permit only letters and numbers. Hence: /[\W_]/.

When it comes to passwords, we want to be strict with our

users. It’s for their own good; we don’t want them choosing a password


that's easy for intruders to guess, like a dictionary word or their kid's birthday. So we want to insist that every password contain a mix of uppercase and lowercase letters and at least one numeral. We specify that with three regular expressions, a-z, A-Z, and 0-9, each followed by the + quantifier, which means "one or more," and we use the search() method to make sure they're all there, as sketched below.
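A Java sketch of the three-part password rule (ours; the original uses JavaScript's search() method):

import java.util.regex.Pattern;

// Sketch: 6-8 characters, letters and digits only, with at least one
// lowercase letter, one uppercase letter, and one numeral.
public class PasswordCheck {
    public static String checkPassword(String pw) {
        if (pw.length() < 6 || pw.length() > 8)
            return "Passwords must be between 6 and 8 characters.";
        if (Pattern.compile("[\\W_]").matcher(pw).find())
            return "Only letters and numbers are allowed.";
        if (!Pattern.compile("[a-z]").matcher(pw).find()
                || !Pattern.compile("[A-Z]").matcher(pw).find()
                || !Pattern.compile("[0-9]").matcher(pw).find())
            return "Passwords need upper- and lowercase letters and a numeral.";
        return ""; // complies
    }
}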

CHECK PHONE ( ) : To validate a phone number, first we want to clear out any spacer characters, such as parentheses, dashes, spaces, and dots. We can do this with a regular expression and the replace() method, replacing anything that matches our regular expression with a null string. Having done that, we look at what we have left with the isNaN() function (which checks to see if a given input is Not A Number) to test whether it's an integer or not. If it contains anything other than digits, we reject it. Then we count the length of the number: it should have exactly ten digits — any more or less, and we reject it. (A Java version is sketched below.)
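The same phone check in Java (our sketch of the routine described above):

// Sketch: strip spacer characters, then require exactly ten digits.
public class PhoneCheck {
    public static String checkPhone(String phone) {
        String digits = phone.replaceAll("[()\\-\\s.]", ""); // drop spacers
        if (!digits.matches("\\d+"))
            return "Phone numbers may contain digits only.";
        if (digits.length() != 10)
            return "Phone numbers must be exactly ten digits.";
        return ""; // complies
    }
}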

IS DIFFERENT ( ) : We want to do a few more kinds of validation. If you present a license or something similar in a text box for the user to accept, you want to make sure that it hasn't been altered when the form is submitted. That's done very simply by comparing the submitted string with the string you were expecting. Alternately, you can use the onChange() method to catch the user in the act of modifying the text and stop them before they submit the form.

CHECK RADIO ( ) : To make sure that a radio button has been chosen from a selection, we run through the array of radio buttons and count the number that have been checked. Rather than sending the whole radio object to a sub-function, which can be problematic (because the radio object has no property indicating which value has been chosen), we pre-process the radio form element in a for loop and send that result to a sub-function for evaluation.

ERROR HANDLING

There are many types of errors that occur in programming; checklists for detecting them are given below:-

Data reference errors

Is an uninitialized variable referenced?

Are array subscripts integer values and are they within the

array’s bounds?

Are there off-by-one errors in indexing operations or references

to arrays?

Is a variable used where a constant would work better?

Is a variable assigned a value that’s of a different type than the

variable?

Are data structures that are referenced in different functions

defined identically?

Data declaration errors

E.g. should a variable be declared a string instead of an array of

characters?

Are the variables assigned the correct length, type, storage

class?

If a variable is initialized at its declaration, is it properly

initialized and consistent with its type?

Are there any variables with similar names?


Are there any variables declared that are never referenced or

just referenced once (should be a constant)?

Are all variables explicitly declared within a specific module?

Computation errors

Do any calculations that use variables have different data

types?

E.g., add a floating-point number to an integer

Do any calculations that use variables have the same data type

but are different size?

E.g., add a long integer to a short integer

Are the compiler’s conversion rules for variables of inconsistent

type or size understood?

Is overflow or underflow in the middle of a numeric calculation

possible?

Is it ever possible for a divisor/modulus to be 0?

Can a variable’s value go outside its meaningful range?

E.g., can a probability be less than 0% or greater than 100%?

Are parentheses needed to clarify operator precedence rules? (See the sketch after this list.)
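For illustration (ours), two of the computation checks above in Java: mixed-type arithmetic and a guarded divisor.

// Sketch: int/double promotion vs. integer division, and a zero-divisor guard.
public class ComputationChecks {
    public static void main(String[] args) {
        int count = 7;
        double total = 10.0;
        double avg = total / count;          // int promoted to double: ~1.43
        int truncated = (int) total / count; // integer division: 10 / 7 == 1

        int divisor = 0;
        if (divisor != 0) {                  // guard: divisor/modulus must not be 0
            System.out.println(10 / divisor);
        }
        System.out.println(avg + " vs " + truncated);
    }
}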

Control flow errors

Do the loops terminate? If not, is that by design?

Does every switch statement have a default clause?

Are there switch statements nested in loops? Be careful: break statements in switch statements will not exit the loop, but break statements not in switch statements will exit the loop (see the sketch after this list).

Is it possible that a loop never executes? Is it acceptable if it doesn't?

Does the compiler support short-circuiting in expression

evaluation?
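A small Java illustration (ours) of the break pitfall noted in the list above:

// Sketch: a break inside a switch ends the switch case, not the loop.
public class SwitchInLoop {
    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            switch (i) {
                case 1:
                    break;         // exits the switch only; the loop continues
                default:
                    System.out.println("processing " + i);
            }
            if (i == 2) break;     // this break DOES exit the loop
        }
    }
}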

Subroutine parameter errors


If constants are passed to the subroutine as arguments are they

accidentally changed in the subroutine?

Do the units of each parameter match the units of each

corresponding argument?

E.g., English versus metric

This is especially pertinent for SOA components

Do the types and sizes of the parameters received by a

subroutine match those sent by the calling code?

Input/output errors

If the file or peripheral is not ready, is that error condition

handled?

Does the software handle the situation of the external device

being disconnected?

Have all error messages been checked for correctness,

appropriateness, grammar, and spelling?

Are all exceptions handled by some part of the code?

Does the software adhere to the specified format of the data being read from or written to the external device?

Other checks

Does your code pass the lint test?

E.g., how about gcc compiler warnings?

Is your code portable to other OS platforms?

Does the code handle ASCII and Unicode?

How about internationalization issues?

Does your code rely on deprecated APIs?

Will your code port to architectures with different byte

orderings?


TESTING


TESTING TECHNIQUES & STRATEGIES USED

1. BLACK BOX TESTING :- Black-box test design treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure. In black-box testing, no knowledge of the internal logic or code structure is required. The types of testing under this strategy are totally based/focused on testing the requirements and functionality of the work product/software application. Black-box testing is also known as specification-based testing, behavioral testing, functional testing, opaque-box testing, or closed-box testing. Engineers engaged in black-box testing only know the set of inputs and expected outputs and are unaware of how those inputs are transformed into outputs by the software.

Black Box testing refers to test activities using specification-based

testing methods and criteria to discover program errors based on program

requirements and product specifications. Black Box testing assumes no

knowledge of code and is intended to simulate the end-user experience.

Black Box testing is not an alternative to White-Box techniques. Rather, it is a

complementary approach that is likely to uncover a different class of errors

than White Box methods.

Black Box testing focuses on the output to various types of stimuli in

the targeted deployment environments. Black-Box testing attempts to find

errors in the following categories:

Incorrect or missing functions,

Interface errors,


Errors in data structures or external data base access,

Behavior or performance errors, and

Initialization and termination errors.

2. INTEGRATION TESTING :- Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. Integration testing is meant to focus on component integration. After unit testing, modules are assembled or integrated to form the complete software package, as indicated by the high-level design.

Integration testing is a systematic technique for verifying the software

structure and sequence of execution while conducting tests to uncover errors

associated with interfacing. In other words, integration testing means testing the interfaces between two or more modules (internal interfaces) or interfaces with other systems (external interfaces). So in integration testing, we test the interfaces.

3. TOP-DOWN INTEGRATION :- Top-down integration testing is an incremental integration testing technique which begins by testing the top-level module and progressively adds in lower-level modules one by one. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

In depth-first integration we integrate all components on a major control path of the structure, as per the figure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, components M1, M2 and M5 would be integrated first. Then the central and right-hand control paths are built. Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. From the figure, components M2, M3 and M4 (a replacement for stub S4) would be integrated first. The next control level, M5, M6, and so on, follows.

The integration process is performed in a series of 5 steps (a stub sketch follows these steps):-

The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.

Depending on the integration approach selected (i.e., depth first or breadth first), subordinate stubs are replaced one at a time with actual components.

Tests are conducted as each component is integrated.

On completion of each set of tests, another stub is replaced with the real component.

Regression testing may be conducted to ensure that new errors have not been introduced.

The process continues from step 2 until the entire program structure is built.
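For illustration (ours), a stub in the sense used above: a stand-in for a lower-level component that is not yet integrated, returning canned data so the upper-level module can be tested.

// Sketch: the real TaxService is not ready, so a stub simulates it.
interface TaxService {
    double taxFor(double amount);
}

class TaxServiceStub implements TaxService {
    public double taxFor(double amount) {
        return 0.10 * amount;                    // canned, simplified behavior
    }
}

public class BillingDriver {
    public static void main(String[] args) {
        TaxService tax = new TaxServiceStub();   // stub substituted for the real component
        double total = 100.0 + tax.taxFor(100.0);
        System.out.println("total with tax = " + total); // 110.0
    }
}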

The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, consider a classic transaction structure in which a complex series of interactive inputs is requested, acquired and validated via an incoming path. The incoming path may be integrated in a top-down manner. All input processing (for subsequent transaction dispatching) may be demonstrated before other elements of the structure have been integrated. Early demonstration of functional capability is a confidence builder for both the developer and the customer.

The top-down strategy sounds relatively uncomplicated, but in practice logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test


upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure. The tester is left with three choices:

Delay many tests until stubs are replaced with actual modules,

Develop stubs that perform limited functions that simulate the actual module,

or

Integrate the software from the bottom of the hierarchy upward.

The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between specific tests and incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become more and more complex. The third approach, called bottom-up testing, is discussed in the next section.

Top-down integration assumes that component M1 provides all the interface requirements of other components even while those components are getting ready, and does not require modification at a later stage.

The top-down integration approach is best suited to the waterfall and V models.

4. BOTTOM-UP INTEGRATION :- Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., modules at the lowest level in the program structure). Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated.


A bottom-up integration strategy may be implemented with the following steps:

Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub-function.

A driver (a control program for testing) is written to coordinate test case input and output.

The cluster is tested.

Drivers are removed and clusters are combined, moving upward in the program structure.

Integration follows the pattern illustrated in the figure. Components are combined to form clusters 1, 2 and 3. Each of the clusters is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.

As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of the program structure are integrated top down, the number of drivers can be reduced substantially and the integration of clusters is greatly simplified.

The bottom-up approach is best suited to the iterative and agile methodologies.

5. FUNCTIONAL & NON-FUNCTIONAL TESTING :- Functional testing involves testing a product's functionality and features. Non-functional testing involves testing the product's quality factors. System testing supports both functional and non-functional test verification. Functional testing helps in verifying what the system is supposed to do. It aids in testing the product's features or functionality.

Functional testing requires in-depth customer, product and domain knowledge, so as to develop different test cases and find critical defects, as the focus of the testing is to find defects.

Non-functional testing is performed to verify quality factors such as reliability, scalability, etc. Non-functional testing is very complex due to the large amount of data that needs to be collected and analyzed. It requires a large amount of resources, and the results differ for different configurations and resources.

Non-functional testing requires understanding the product's behavior, design and architecture, and also knowing what the competition provides.

6. OBJECT ORIENTED TESTING :- The object-oriented programming language features of inheritance and polymorphism present new technical challenges to testers. The adoption of object-oriented technologies brings changes not only to the programming languages we use but to most aspects of software development. We use incremental development processes, refocus and use new notations for analysis and design, and utilize new programming language features. The changes promise to make software more maintainable, reusable, flexible and so on.

Object-oriented programming features in programming languages obviously impact some aspects of testing. Features such as class inheritance and interfaces support polymorphism, in which code manipulates objects without their exact class being known.

Many object-oriented testing activities are carried over from the traditional process. We still have a use for unit testing, although the meaning of a unit has changed. We still do integration testing to make sure various subsystems can work correctly in concert. We still need system testing to verify that the software meets requirements. And we still do regression testing to make sure the latest round of changes to the software has not adversely affected what it could do before.
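A short Java sketch can show why polymorphism matters to testers: one test written against an interface must hold for every implementation, known or unknown. The Cipher interface and both implementations below are invented for illustration.

interface Cipher {
    byte[] apply(byte[] data);
}

class XorCipher implements Cipher {
    private final byte key;
    XorCipher(byte key) { this.key = key; }
    public byte[] apply(byte[] data) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) out[i] = (byte) (data[i] ^ key);
        return out;
    }
}

class NotCipher implements Cipher {  // bitwise complement of every byte
    public byte[] apply(byte[] data) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) out[i] = (byte) ~data[i];
        return out;
    }
}

public class PolymorphicTest {
    // One test, written against the interface: applying either cipher
    // twice must restore the original bytes (both are involutions).
    static boolean roundTrips(Cipher c, byte[] data) {
        return java.util.Arrays.equals(data, c.apply(c.apply(data)));
    }

    public static void main(String[] args) {
        byte[] sample = {1, 2, 3, 4};
        for (Cipher c : new Cipher[]{new XorCipher((byte) 0x7F), new NotCipher()}) {
            System.out.println(c.getClass().getSimpleName() + ": "
                    + (roundTrips(c, sample) ? "pass" : "fail"));
        }
    }
}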

TEST CASE APPLIED

Boris Beizer defines a test as “A Sequence of one or more subtests executed

as a sequence because the outcome and/or final state of one subtest is the

input and/or initial state of the next. The word ‘test’ is used to include

subtests, tests proper and test suites”.

A good test has a high probability of finding an error. To achieve this goal,

the tester must understand the software and attempt to develop a mental

picture of how the software might fail. Ideally, the classes of failure are

probed. For example, one class of potential failure in a GUI (Graphical user

interface) is a failure to recognize proper mouse position. A set of tests would


be designed to exercise the mouse in an attempt to demonstrate an error in

mouse position recognition.

A good test is not redundant. Testing time and

resources are limited. There is no point in conducting a test that has the

same purpose as another test. Every test should have a different purpose.

A good test should be “best of breed”. In a group of tests that have a similar intent, time and resource limitations may permit the execution of only a subset of these tests. In such cases, the test that has the highest likelihood of uncovering a whole class of errors should be used.

A good test should be neither too simple nor too complex. Although it is sometimes possible to combine a series of tests into one test case, the possible side effects associated with this approach may mask errors. In general, each test should be executed separately.

A rich variety of test-case design methods has evolved for software. These methods provide the developer with a systematic approach to testing. More importantly, they provide a mechanism that can help ensure the completeness of testing and give the highest likelihood of uncovering errors in software.
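As a small illustration of these guidelines, the sketch below defines a hypothetical validateKey routine and a non-redundant set of test cases, each probing a different class of failure (empty input, too short, boundary-valid, too long). All names and limits are assumed for the example.

public class KeyValidationTests {

    // Hypothetical rule: a key is valid when it is 8 to 16 characters long.
    static boolean validateKey(String key) {
        return key != null && key.length() >= 8 && key.length() <= 16;
    }

    public static void main(String[] args) {
        String[]  inputs   = {"", "short", "exactly8", "seventeen-chars!!"};
        boolean[] expected = {false, false, true, false};

        // Each case targets a different failure class; none repeats another.
        for (int i = 0; i < inputs.length; i++) {
            boolean got = validateKey(inputs[i]);
            System.out.printf("case %d: %s%n", i + 1,
                    got == expected[i] ? "pass" : "FAIL");
        }
    }
}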

TEST CASE RESULT


SYSTEM SECURITY

MEASURES

Computer security is a branch of computer technology known as

information security as applied to computers and networks. The objective of

computer security includes protection of information and property from theft,

corruption, or natural disaster, while allowing the information and property to

remain accessible and productive to its intended users.

Customers can only be given rights to read various reports. Once a customer

is assigned to the project, the project manager has to select at least one


report the customer will be able to view. All customers have access to the

same set of reports.
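A minimal Java sketch of this access-rights idea follows; the class and method names are illustrative, not taken from the project's source. Each customer is mapped to the set of reports the project manager has granted, and every view request is checked against that set.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ReportAccess {

    // customer -> reports that customer has been granted
    private final Map<String, Set<String>> rights = new HashMap<>();

    void grant(String customer, String report) {
        rights.computeIfAbsent(customer, c -> new HashSet<>()).add(report);
    }

    boolean canView(String customer, String report) {
        return rights.getOrDefault(customer, Set.of()).contains(report);
    }

    public static void main(String[] args) {
        ReportAccess acl = new ReportAccess();
        acl.grant("alice", "monthly-summary");
        System.out.println(acl.canView("alice", "monthly-summary")); // true
        System.out.println(acl.canView("alice", "audit-log"));       // false
    }
}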


COST ESTIMATION OF

PROJECT

Software cost estimation is the process of predicting the effort required to develop a software system. Cost estimation is closely related to design activities, where the interaction between these activities is iterated many times as part of design trade studies and early risk analysis. Later in the life cycle, cost estimation supports management activities, primarily detailed planning, scheduling and risk management. The purpose of software cost estimation is to:

Define the resources needed to produce, verify and validate the software product, and to manage these activities.

Quantify, insofar as is practical, the uncertainty and risk inherent in the estimate.

Accurate software cost estimates are important to both developers and customers: they can be used for generating requests for proposals, contract negotiation, scheduling, monitoring and control. Underestimating the costs may result in management approving proposed systems that then exceed their budgets, with underdeveloped functions and poor quality, and failure to complete on time.
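As a worked illustration (not part of the project's own estimate), the basic COCOMO model predicts effort in person-months as E = a * KLOC^b; the constants below are the standard organic-mode values, and the 4,000-line size is assumed.

public class CocomoEstimate {
    public static void main(String[] args) {
        double kloc = 4.0;                    // assumed size: 4,000 lines of code
        double a = 2.4, b = 1.05;             // basic COCOMO, organic mode
        double effort = a * Math.pow(kloc, b);            // person-months
        double duration = 2.5 * Math.pow(effort, 0.38);   // calendar months
        System.out.printf("effort   = %.1f person-months%n", effort);
        System.out.printf("duration = %.1f months%n", duration);
    }
}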


PERT CHART

Complex projects require a series of activities, some of which must be

performed sequentially and others that can be performed in parallel with

other activities. This collection of series and parallel tasks can be modeled as

a network.

The Program Evaluation and Review Technique (PERT) is a network model that allows for randomness in activity completion times. PERT was developed in the late 1950s for the U.S. Navy's Polaris project. It has the potential to reduce both the time and cost required to complete a project.

In 1957 the Critical Path Method (CPM) was developed as a network model for project management. CPM is a deterministic method that uses a fixed time estimate for each activity. While CPM is easy to understand and use, it does not consider the time variations that can have a great impact on the completion time of a complex project.

Planning, scheduling and control are considered to be basic managerial functions, and CPM/PERT has been rightfully accorded due importance in the literature of operations research and quantitative analysis. Far more than the technical benefits, it was found that PERT/CPM provided a focus around which managers could brainstorm and put their ideas together. It proved to be a great communication medium by which thinkers and planners at one level could communicate their ideas, their doubts and fears to another level. Most important, it became a useful tool for evaluating the performance of individuals and teams.

There are many variations of CPM/PERT which have been useful in planning costs and scheduling manpower and machine time. CPM/PERT can answer the following important questions:

How long will the entire project take to be completed? What are the

risks involved?

Which are the critical activities or tasks in the project which could delay

the entire project if they were not completed on time?

Is the project on schedule, behind schedule or ahead of schedule?

If the project has to be finished earlier than planned, what is the best

way to do this at the least cost?
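The standard PERT three-point estimate can be shown with a short Java sketch: for each activity, the expected time is te = (o + 4m + p) / 6 and the variance is ((p - o) / 6)^2, where o, m and p are the optimistic, most likely and pessimistic durations. The activities and durations below are invented for illustration.

public class PertEstimate {
    public static void main(String[] args) {
        String[] tasks       = {"design", "coding", "testing"};
        double[] optimistic  = {2, 5, 3};   // durations in days
        double[] mostLikely  = {4, 8, 5};
        double[] pessimistic = {8, 14, 9};

        double total = 0;
        for (int i = 0; i < tasks.length; i++) {
            double te  = (optimistic[i] + 4 * mostLikely[i] + pessimistic[i]) / 6;
            double var = Math.pow((pessimistic[i] - optimistic[i]) / 6, 2);
            System.out.printf("%-8s te = %.2f days, variance = %.2f%n",
                    tasks[i], te, var);
            total += te;  // valid here because the tasks form one sequential path
        }
        System.out.printf("expected path length = %.2f days%n", total);
    }
}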


GANTT CHART


Developed by Henry Gantt, a Gantt chart is a type of bar chart that illustrates a project schedule: it shows the tasks of a project, when each must take place and how long each will take. As the project progresses, bars are shaded to show which tasks have been completed. The people assigned to each task can also be represented.

The figure below shows a basic Gantt chart example. It shows tasks in a security and access control project. Tasks are outlined in two sections. Each task uses a yellow triangle to indicate its start date and a green down-triangle to indicate its finish date. Also shown on this schedule are the responsible sub-contractors for the project (in the column labeled R-E-S-P).

WHEN TO USE GANTT CHARTS :-

When scheduling and monitoring tasks within a project.

When communicating plans or status of a project.

When the steps of the project or process, their sequence and their duration are known.

When it’s not necessary to show which tasks depend on completion of

previous tasks.

CONSTRUCTION OF A GANTT CHART :-

Identify the tasks needed to complete the project.

Identify key milestones in the project by brainstorming a list, or by drawing a flow chart, storyboard or arrow diagram for the project.

Identify the time required for each task.

Identify the sequence: which tasks must be finished before a following task can begin, and which can happen simultaneously? Which tasks must be completed before each milestone? (A small text-based sketch of the finished chart follows this list.)
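The sketch referred to above renders an invented four-task schedule as a text-based Gantt chart, one row per task and one column per week, to make the construction steps concrete.

public class TextGantt {
    public static void main(String[] args) {
        String[] tasks  = {"analysis", "design", "coding", "testing"};
        int[]    start  = {0, 2, 4, 9};   // week each task begins
        int[]    length = {3, 3, 6, 3};   // duration in weeks

        // One row per task; '#' marks the weeks the task occupies.
        for (int i = 0; i < tasks.length; i++) {
            StringBuilder row = new StringBuilder(String.format("%-9s|", tasks[i]));
            for (int w = 0; w < 12; w++) {
                row.append(w >= start[i] && w < start[i] + length[i] ? '#' : ' ');
            }
            System.out.println(row);
        }
    }
}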


Future Scope

Of the

Project


The scope of the project is very broad in comparison with manually conducted examinations; a few of the benefits are:

It can be used in educational institutions as well as in the corporate world, anywhere and at any time; since it is a Web-based application, the user's location doesn't matter.

There is no restriction that the examiner has to be present when the candidates take the test.


Reference Books Used

Or

Bibliography

The reference books we have used in our project are given below:


Java Server Programming

Java

Internet & Web

Software Testing & Project Management

System Analysis & Design

Java for the Web With Servlet, JSP & EJB
