
INTRODUCTION


PROJECT OVERVIEW

This is a network-based intelligent multiple-choice-question examination system, named Network-based Intelligent Quiz System (NIQS), for assessing students. It is a system by which students can appear in a quiz where there is no interaction between pencil and paper, but rather between computer and human being. We use the term intelligent here because the system generates questions intelligently. There are many definitions of intelligence, but we use the term in the sense that the system generates each new question of a quiz depending on the result of the question last answered, and for the same quiz of a subject it generates different questions for different students. The questions vary from student to student for the same quiz even while the students are sitting for it at the same time. Teachers can use NIQS to evaluate students effectively and efficiently. Any university, college, school or other educational institute can use this system to conduct quizzes. Today it is among the more efficient and effective methods of assessing students. One of the main benefits of our system is automated marking: teachers do not need to check answer scripts as they do in a manual quiz, which saves a teacher's valuable time. Students, in turn, score according to their merit level, and the system gives feedback about the areas in which a student is weak. In recent years, the use of this type of quiz system has become quite popular due to the pressures of increasing class sizes and the need for more efficient and effective methods of assessing students.

PURPOSE

The Network Based Exam System fulfills the requirement of institutes to conduct exams over a network. Many students can take an exam simultaneously and view their results at the same time. Thus the purpose of the application is to provide a system that saves the effort and time of both the institutes and the students.


What is Network Based Intelligent Quiz System all about?

Network Based Intelligent Quiz System is an application that establishes a network between the institutes and the students. Institutes enter into the application the questions they want in the exam. These questions are displayed as a test to the eligible students. The answers entered by the students are then evaluated, and their scores are calculated and saved. These scores can then be accessed by the institutes to determine which students passed or to evaluate their performance.

The system entitled “Network Based Intelligent Quiz System” is application software which aims at providing services to the institutes, giving them the option of selecting the eligible students themselves. It is developed using Visual Basic 6.0 technology and a related database.

This project assesses students through objective quizzes. The tests are highly customizable. The project enables educational institutes to conduct tests and have answers checked automatically based on the responses of the candidates. It allows faculties to create their own tests, and it enables educational institutes to conduct tests and quizzes and to create feedback forms. It asks each faculty member to create his/her own set of questions. The results of the responses are available to the faculty member who set the questions, and the result is also mailed to the student. This project would be helpful for creating practice tests, for example for educational institutes, and as a feedback form.

Responses by the candidates will be checked automatically and instantly. Network examination will reduce the hectic job of assessing the answers given by the candidates. Being an integrated network examination system, it will reduce paper work, and it can generate various reports almost instantly when and where required.


SCOPE

This project would be very useful for educational institutes where regular evaluation of students is required. Further, it can also be useful for anyone who requires feedback based on objective-type responses.

The required software is for conducting network-based 'objective'-type examinations and providing immediate results. The system should satisfy the following requirements:

Administrator Aspect

1. Taking a backup of the database

2. Editing/deleting/creating the database

3. Adding or deleting faculty

4. Changing the super password

Faculty Aspect

1. Logging into the system.

2. Sending invitations to specific students by mail

3. Accepting registrations of candidates

4. Creating a test

5. Posting questions in the above test

6. Posting multiple options to the respective questions

7. Marking the correct answer within the given options

8. Setting the time limit of the test, if any

9. Choosing whether to randomize the questions

10. Choosing whether to randomize the options displayed

11. Allowing the test to be taken in practice mode, where the correct answer is shown immediately after the candidate selects an option


Student Aspect:

1. Requesting registration

2. Logging into the system

3. Editing user information

4. Selecting the test

5. Selecting whether the test is to be taken in practice mode, where the correct answer is shown immediately after the candidate selects an option

6. Appearing for the examination

7. Printing the result at the end of the examination

8. Reviewing the given responses

9. Changing the password

10. Resetting a forgotten password

Analysis

1. Authenticating users based on username and password

2. Recording candidates’ responses to every question

3. Checking whether the given response is correct or not

4. Keeping history of test reports of all users

Mailing

1. The reports are required to be mailed to the candidates at their registered mail addresses.

2. A temporary password will be mailed to the user in case the user forgets the password.

3. Invitations to appear for a new test will be mailed.


ORGANIZATION PROFILE

Chinmaya Vidyapeet was started in 1995 in keeping with the ideals of the great visionary and sage, Swami Chinmayananda. The college has been housed in the original school building of the Ernakulam Chinmaya Vidyalaya, a reverse L-shaped structure on Warriam Road. The building saw the metamorphosis of the school into a college in 1995, when the Vidyalaya was moved to Vaduthala. Since then, the Vidyapeet has endeavored to maintain the highest standards in the field of higher education. From its inception, Chinmaya Vidyapeet had a dual educational scheme, B.Com cum CA/ACS, which attracted many students. Every year its students won laurels and even All India ranks in the CA and ACS courses.

For the last six years, students from the college have been winning university ranks in the B. B. M. degree course. Several have done well in the university sports and cultural festivals, obtaining top sports and grace marks. Some have even represented the university at the national level. Though there is an emphasis on discipline, students are encouraged to take part in a diverse range of activities and develop themselves in a positive way.

The Vidyapeet seeks to impart the finest education in the fields of Commerce, Management and Economics. It takes great pride in its faculty, which is drawn from diverse streams. It hopes to instill in the students a sense of community, serious purpose, social commitment and high academic achievement. It is its endeavor to provide a worthwhile experience to the students.

Chinmaya believes in inculcating the values of discipline, integrity and hard work in the students. The Chinmaya culture embodied by the students can only benefit society and the nation.


There is a dress code which is followed by all the students. The college uniform was designed by the first batch of students in 1995-1996 and continues to be worn to date. Girls are requested to wear a half-sleeved salwar kameez and black slip-on footwear. Boys have to wear trousers, half-sleeved shirts and black shoes. They have to be clean-shaven and neatly attired.

Punctuality and regular attendance are also important habits to be followed by the

students.

As per Government Order No.318/10/H.Edn.Dept of 16.2.2010 and University Order Circular No. DSS/AC.A1/2/195/2010, MOBILE PHONES are banned from the college campus, and possession of one is liable to punishment. The college will confiscate the phone and report the matter.

Ragging is similarly prohibited by law. The Supreme Court has passed a ruling making it a cognizable offence, and University Circular No A1/2/1647/2007 makes this legally enforceable.

The college encourages students to interact in a friendly manner.

Disciplinary action will be taken against any kind of VIOLENCE on the campus

and verbal abuse is also disallowed. SMOKING, DRUGS and ALCOHOL are

all prohibited on the campus.

The faculty is the principal driver of change through their direct involvement in

every aspect of the Institute: academics, governance, research, and consultancy. They

combine the very highest standards of teaching and mentoring with diverse backgrounds

as eminent entrepreneurs, policy makers, researchers, theoreticians and consultants. The

rich diversity of their backgrounds instills in the students a continuous desire to achieve

excellence.


The staff of Chinmaya is a dedicated group, ably assisted by a loyal non-teaching

staff. Several teachers have been with the college since its very inception and have been

part of the development process. All of them share a common interest - the individual development of the students - and work in cohesion. The faculty comprises 24 members

and is able to give individualized attention to the students. The staff has formed

committees to advise students on various matters. They are highly experienced and have

been responsible for the success of many of the students.

DEPARTMENTS

Chinmaya Vidyapeet accommodates the following departments:-

1. Department of English

2. Department of Hindi

3. Department of Mathematics

4. Department of Economics

5. Department of Commerce

6. Department of Management

7. Department of Computer Science.

The college, like the school, was founded by Janaki N. Menon, one of Swami

Chinmayanandaji’s oldest and earliest disciples. She was inspired by Gurudev to

establish institutions in Cochin which would be world class and reflect his wonderful

vision.

 

The young Janchechi was attracted to the sublime beauty of Vedanta and the

principles espoused by Poojya Swami Chinmayanandaji. When Gurudev came to Cochin

in 1954 she and her sister Meena Haridas were inspired to work tirelessly for the Mission,

setting up Balavihars and eventually the Vidyalaya and Vidyapeet.


Each brick of these institutions told the story of how they were established

without taxing the parents. This great work was achieved by Janchechi and her younger

sister Kamakshi Balakrishna whose name has become synonymous with education in

Kochi. Both school and college have endeavored to become centres of excellence and have established themselves as a brand.

Janchechi, as she was fondly called, believed in selfless service and was a great

Karma yogi. She left us in April 2002 but continues to inspire us by her work and ideals.

The Chinmaya Mission, a worldwide organization, is headed by H. H. Swami Tejomayananda, a direct disciple of Poojya Swami Chinmayanandaji. Swami Tejomayanandaji took over as Supreme Head in 1993, after the Samadhi of Gurudev.

Chinmaya seeks to form a class comprising students drawn from different backgrounds who all share a common goal of excellence and achievement. It believes

in enhancing the educational experience for the students. The school and 12th standard /

HSC marks are important. Serious consideration is also given to an applicant’s promise

of making a contribution to the class by way of a particular attainment, unusual academic

achievement or non-academic performance. The faculty also relies on a face-to-face

meeting to assess the student’s potential, as there is no complete dependence on marks alone to admit a candidate. The guiding principle for selection is to ensure an

effective learning environment for students so that eventually society and public will

benefit.

 

Over the years its students have done well in life. While in college they garnered

All India ranks, State ranks and University ranks in the CA stream, ACS examinations

and the BBM examinations. Several have attained positions of importance in various

industries and promise to become harbingers of change in the country. It is a matter of

pride that the Chinmaya name and stamp evoke respect for us.


ORGANIZATION CHART

MANAGEMENT

PRINCIPAL

HEAD of DEPARTMENTS

TEACHERS

NON-TEACHING STAFF

STUDENTS


SYSTEM ANALYSIS


System Analysis refers to the process of examining a situation with the intent of improving it through better procedures and methods. System analysis is, therefore, the process of gathering and interpreting facts, diagnosing problems and using the information to recommend improvements to the system; in other words, it means a detailed explanation or description. Before computerizing a system under consideration, it has to be analyzed. We need to study how it functions currently, what the problems are, and what requirements the proposed software should meet. The main components of making software are:

System and software requirements analysis

Design and implementation of software

Ensuring, verifying and maintaining software integrity

REQUIREMENT ANALYSIS

I have divided my project into two modules. The first module, the administrator module, handles the duties of the administrator, such as adding teachers and students, deleting users, preparing rank lists, etc. The second module, the quiz module, enables the teachers added by the administrator to add new questions to the database and to set an exam for the students. The students, on the other hand, can take the exam, and the result will be available to them in a very short time.

Information Gathering

Users of the system

Presentations and documents used in the organization

Prospectus, manuals and rule books, which specify how various activities are carried out in the organization

Evolving a method of obtaining information from the identified sources

Using information from modules of the organization

Computer presentation of the existing system

PROBLEM DEFINITION

Software has to be designed to conduct tests. Unlike other examination systems, this software should not be just for the students; it should also provide a facility for institutes to host tests/exams. This will help institutes because:

There will be no need to get new software every time a test is conducted.

Like other software, it will also help students by:

Saving the extra time of going to a faraway exam centre.

Students need not wait for their results.

This software will also remove the flaws of existing manual systems by:

Reducing the manual labour (decreasing overheads).

Avoiding mistakes due to human error (accuracy).

Increasing efficiency and saving time.

Allowing neat handling of data rather than error-prone records.

The institutes will register themselves with a unique login name and password; a unique id will be issued to the institutes by the system.

After login:

They will enter exam details such as the number of questions and positive marks.

Then they will enter the questions along with the answers, which can later be deleted and edited.

They will also enter the list of eligible candidates with their ids and names, which can also be edited later.

Institutes will be able to view the student list along with their respective results.


Also for students:

They will be able to login with their id, name and institute id.

They will be able to give the exam as per the details entered by the respective institutes.

They will also be able to view their scores after the test finishes.

If they have already given the test, they will just be able to view their scores.

Other users can take sample tests to get a feel for how the tests are conducted.

Other key points:

Different sets of questions will be given to different students.

The questions will be selected randomly from the database, as sketched below.
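A minimal T-SQL sketch of such random selection (the table and column names are assumptions for illustration, not the project's actual schema):

    -- Pick 10 questions at random for one student. NEWID() generates a
    -- fresh random sort key per row, so each student gets a different set.
    SELECT TOP (10) QuestionId, QuestionText
    FROM   dbo.Question
    WHERE  SubjectId = 1
    ORDER BY NEWID();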

FUNCTIONAL REQUIREMENTS

It deals with the functionalities required from the system, which are as follows: the software will help colleges/organizations/companies to conduct their exams.

Functional Requirements

Functional requirements capture the intended behavior of the system. This behavior may be expressed as services, tasks or functions the system is required to perform. This section lays out the important concepts and discusses capturing functional requirements in such a way that they can drive architectural decisions and be used to validate the architecture.

Functional requirements contain:

Identification of the inputs that the system should accept under different conditions.

Identification of the outputs that the system will produce under different conditions.

Identification of the data that the system should store, which other systems might use.


The computations the system should perform.

Only authorized persons can access related details.

Organizations will register themselves on the software for conducting their exams.

Organizations can change their information regarding themselves.

The students can login through TEST-ID and PASSWORD and give their exams.

The administrator will be responsible for updating the system.

The organization can change questions and test papers whenever they want.

Non Functional Requirements

A. Portability

B. Reliability

C. Performance

I. Types of Non Functional Requirements are:

a. Interface Requirements

b. Performance Requirements

c. Time/Space Bounds

d. Reliability

e. Security

f. Survivability

g. Operating Requirements

EXISTING SYSTEM


The existing system here is manual, i.e. all transactions and information are recorded in registers and as simple text files on the computers. A person in need of particular information has to go through the registers and the text files and then prepare the needed information manually.

MANUAL QUIZ SYSTEM

In the early days it was the most popular method of assessing students. Even now the system is quite popular with students as well as teachers. In this system there are several problems that we commonly face. Some of those problems are:

1. The manual system requires pens/pencils and paper.

2. The teacher needs to spend time checking scripts.

3. Students need to wait for their results until the teacher finishes checking the scripts.

These are the most common problems of the manual quiz system, and they recur every time a quiz is held. For these reasons the popularity of the manual system decreases day by day, and the intelligent quiz system is taking its place.

Bottlenecks Identified in Existing System

The first problem is that loads of hard-copied documents are being generated. This brings us to the age-old discussion of keeping information in the form of databases versus keeping the same on sheets of paper. Keeping the information in the form of hard-copied documents leads to the following problems:

1) Lack of space – It becomes a problem in itself to find space to keep the sheets of paper being generated. The documents being generated are too important to be ill-treated.


2) Filing poses a problem – Filing the documents categorically is a time-consuming and tedious exercise.

3) Filtering is not easy – It becomes hard to filter the relevant documents from the irrelevant ones if their count crosses a certain manageable number.

4) Reviewing becomes time-consuming – All processes are done manually at the centers and all records are maintained on paper, so maintaining records is very difficult in the departments, and it is also very difficult for the workers to check a record. The existing system is paper based, time consuming, monotonous, less flexible and imposes a very hectic working schedule. The chance of loss of records is high, and record searching is difficult. Maintenance of the system is also very difficult and takes a lot of time.

5) Result processing is slow due to paper work and staff requirements.

Need for the New System

To solve these problems they require a computerized system to handle all the work. They require a network-based application that will provide a flexible working environment, provide ease of work, and reduce the time for report generation and other paper work.

PROPOSED SYSTEM

In recent years, the use of network-based quiz systems has become quite popular due to the pressures of increasing class sizes and the need for more efficient methods of assessing distance students. This motivated me to work on a network-based quiz system. I have also tried to eliminate the problems of general web-based quiz systems and decided to create a network-based intelligent quiz system. In my project, I have tried to develop a “Network Based Intelligent Quiz System” which will be popular with both the students and teachers.


Aims and Objective

The main purpose behind the proposed system is to provide a comprehensive computerized system which can capture, collate and analyze examination data and evaluate the impact of the program.

Constraints, Assumptions, Dependencies

Constraints

As this system is based on client-server technology, a minimum of 64 MB of RAM will be required on all clients for normal operation.

Assumptions

In general it has been assumed that the user has complete knowledge of the system, that is, the user is not a naïve user, and any data entered by him/her will be valid. The aim is to make the software as user-friendly as possible while keeping user requirements in mind.

1. The server OS should be Windows NT/2000/XP.

2. The client PC should be Windows 9X/NT/Workstation or Windows 2000 with the latest service pack.

Dependencies

The system depends on users following international standards for generating the User ID and filling in the related information in the proper format.

Software System Attributes

Usability:

The links are provided for each form. The user can view and make entries in the forms. Validations are provided on each field to avoid inconsistent or invalid entries in the databases. The reports screen contains text boxes and drop-down lists so that reports can be produced.

Security:

The application will allow only valid users to access the system. Access to any application resource will depend upon the user's designation. There are two types of users, namely Administrator and Student. Security is based upon the individual user ID and password.
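As a rough illustration of this check (the table, columns and hashing scheme below are assumptions for the sketch, not the project's actual code), a T-SQL login validation might look like:

    -- Validate a user and fetch the designation that controls access.
    -- Passwords are compared as hashes rather than stored in plain text.
    DECLARE @UserName nvarchar(50) = N'student01';   -- sample input
    DECLARE @Password nvarchar(50) = N'secret';      -- sample input

    SELECT UserId, Designation        -- 'Administrator' or 'Student'
    FROM   dbo.AppUser
    WHERE  UserName     = @UserName
      AND  PasswordHash = HASHBYTES('SHA1', @Password);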

Maintainability:

The installation and operation manual of the examination management system will be provided to the user.

Availability:

The system will be available around the clock, except for the time required for data backup.

Portability:

The application is developed in VB 6. It would be portable to other operating systems provided VB6 is available for the OS. As the database is made in SQL Server 2008, porting the database to another database server would require some development effort.

Acceptance Criteria

The software should meet the functional requirements and perform the functionality effectively and efficiently.

A user-friendly interface with proper menus.

Data transfer should be accurate and within a reasonable amount of time, keeping in mind the network traffic.

The system should not allow entry of duplicate key values.

The system should have the ability to generate transaction logs to avoid any accidental loss of data.


A log file should also be generated.

Computerized vs. Manual Examination System

The automated process of examination is much better than the manual system, as it has the following advantages:

Time saving

Increased efficiency

Allows neat handling of data rather than error-prone records

Decreases overhead

Accurate

The user requirement for this system is to make the system fast, flexible and less prone to error, and to reduce expenses and save time.

Time can be saved in scheduling exams if a question bank is available to store questions for different subjects.

The system can assign marks by checking the students' answers and give the result as soon as a student finishes the exam.

A facility to generate a result chart as required, without manual intervention.

The system should keep records of students and faculty, with access restricted to authorized persons only.

The system should be secure for managing user records and reliable enough to work under any conditions.

The product and process features:

The system must be designed as the user requires, so the complete requirements must be identified:

Quick scheduling:


The system helps the faculty member generate an automatic exam instead of using paper, which saves time in writing, checking and entering marks. Also, a student can see the exam when he/she logs in to the system individually.

Immediate results and solutions:

When the student finishes the exam, the system checks his/her answers against the correct answers, saves the incorrect and correct answers, and calculates the mark from the correct answers. It then gives the total mark and sends the student a report showing where he/she went wrong.
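A hedged sketch of that marking step in T-SQL (all table and column names are invented for illustration):

    -- Compare each recorded response with the stored correct option
    -- and total the marks for one student.
    SELECT r.StudentId,
           SUM(CASE WHEN r.Answer = q.CorrectAnswer THEN 1 ELSE 0 END) AS total_mark,
           COUNT(*) AS questions_answered
    FROM   dbo.Response r
    JOIN   dbo.Question q ON q.QuestionId = r.QuestionId
    WHERE  r.StudentId = 1
    GROUP BY r.StudentId;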

Easy to store and retrieve information:

Rather than saving the information on paper or in separate sheets, a database management system stores and retrieves the information needed by the administrator, faculty member or student, according to the reports generated by the system.


FEASIBILITY STUDY


The user needs a network-based system which will remove all the above-mentioned problems that the user is facing. The user wants a network-based system which will reduce the bulk of paper work and provide ease of work, flexibility, fast record finding, modifying, adding, removing and report generation.

We proposed our perception of the system, in accordance with the problems of the existing system, by making a full layout of the system on paper. We tallied the problems and needs of the existing system against the requirements, and further updated the layout on the basis of the redefined problems. In the feasibility study phase we went through various steps, which are described below:

Cost:

The cost required for the proposed system is comparatively less than that of the existing system.

Effort:

Compared to the existing system, the proposed system will provide a better working environment in which there will be ease of work, and the effort required will be comparatively less than in the existing system.

Time:

Also, the time required to generate a report or do any other work will be comparatively much less than in the existing system. Record finding and updating will take less time than in the existing system.

Labor:

In the existing system the number of staff required for completing the work is greater, while the new system will require considerably fewer staff.

Economic Feasibility

In my project I make use of the existing systems, which have Visual Basic 6 installed on them along with the database, which is SQL Server 2008. Though the cost of buying SQL Server is often high, once it is bought there is no other maintenance cost, and we are not buying any other software specially for this project, so we can say that the project is economically feasible.

The only possible fixed costs involved with the system would be paying people to write the code. It is possible that faculty would be willing to write the code for free, or students would be willing to work on it as a project. There are no variable costs associated with this system: since it operates on the servers, the department does not pay anything for each use of the system. The tangible benefits will mostly be time savings for the current administrators, as well as a simplified process for activities. The intangible benefits would be increased system involvement among faculty members and a decreased workload on the current administrators.

Technical Feasibility

The project can be said to be technically feasible because there will be few errors: the whole project is divided into two modules, so any errors that are found can be debugged and removed easily. Since the system is implemented over a network, it is technically practical for all actors. The system can be implemented on the servers that the department currently has access to. The system requires no special expertise to operate, although some expertise will be required to code it.

Behavioral Feasibility

 

The proposed system can be easily accepted as it is very easy to understand and very user-friendly. The organization will not be disturbed by the use of this system because the users will be provided with prompts that will enable them to use the software very easily.


People are inherently resistant to change, and computers have been known to facilitate change. An estimate should be made of how strongly the users are likely to move towards the development of a computerized system. There are various levels of users, in order to ensure proper authentication, authorization and security of the organization's sensitive data. Therefore it is understandable that the introduction of a candidate system requires special effort to educate and train the staff. The software being developed is user-friendly and easy to learn. In this way, the developed software is truly efficient and can work under any circumstances, traditions and locales. The behavioral study strives to ensure that the equilibrium and status quo of the organization are not disturbed and that changes are readily accepted by the users.


SOFTWARE SELECTION AND JUSTIFICATION


HARDWARE SPECIFICATION

The selection of hardware is very important to the existence and proper working of any software. When selecting hardware, the size and capacity requirements are also important. Below is some of the hardware required by the system:

1. 40 GB hard disk

2. 256 MB RAM

3. Monitor

4. Keyboard

5. Processor- Pentium 4 or above

HARD DISK

A hard disk drive (HDD; also hard drive, hard disk, or disk drive) is a data storage device used for storing and retrieving digital information on non-volatile memory (retaining its data even when powered off) in a random-access manner (data can be retrieved in any order rather than just sequentially). An HDD consists of one or more rigid ("hard") rapidly rotating discs (platters) coated with magnetic material, with magnetic heads arranged on a moving actuator arm to read and write data to the surfaces.

We need a hard disk of greater storage capacity because we have a large amount of data to store, and that can be done only if we have a hard disk of higher capacity.

RAM

Random-access memory (RAM) is a form of computer data storage. A random-

access device allows stored data to be accessed in very nearly the same amount of time

for any storage location, so data can be accessed quickly in any random order. In contrast,


other data storage media such as hard disks, CDs, DVDs and magnetic tape, as well as

early primary memory types such as drum memory, read and write data only in a

predetermined order, consecutively, because of mechanical design limitations. Therefore

the time to access a given data location varies significantly depending on its physical

location.

We need at least 256 MB of RAM because we have to store a large amount of data in primary memory that must be accessed every now and then. This type of memory is volatile, so data is erased when the power goes off; we therefore also need secondary memory, which can store data permanently, and for that we use the hard disk.

MONITOR

A monitor or display (also called screen or visual display unit) is an electronic

visual display for computers. The monitor comprises the display device, circuitry and an

enclosure. The display device in modern monitors is typically a thin film transistor liquid

crystal display (TFT-LCD) thin panel, while older monitors use a cathode ray tube (CRT)

about as deep as the screen size.

We use monitors which give us a better quality view of the data being displayed.

KEYBOARD

In computing, a keyboard is a typewriter-style device, which uses an

arrangement of buttons or keys, to act as mechanical levers or electronic switches.

Following the decline of punch cards and paper tape, interaction via teleprinter-style

keyboards became the main input device for computers.

A keyboard typically has characters engraved or printed on the keys and each

press of a key typically corresponds to a single written symbol. However, to produce

some symbols requires pressing and holding several keys simultaneously or in sequence.

While most keyboard keys produce letters, numbers or signs (characters), other keys or

simultaneous key presses can produce actions or computer commands.


PROCESSOR

A central processing unit (CPU), also referred to as a central processor unit, is

the hardware within a computer system which carries out the instructions of a computer

program by performing the basic arithmetical, logical, and input/output operations of the

system. The term has been in use in the computer industry at least since the early 1960s.

The form, design, and implementation of CPUs have changed over the course of their

history, but their fundamental operation remains much the same.

On large machines, CPUs require one or more printed circuit boards. On personal

computers and small workstations, the CPU is housed in a single silicon chip called a

microprocessor. Since the 1970s the microprocessor class of CPUs has almost completely

overtaken all other CPU implementations. Modern CPUs are large scale integrated

circuits in packages typically less than four centimeters square, with hundreds of

connecting pins.

Two typical components of a CPU are the arithmetic logic unit (ALU), which

performs arithmetic and logical operations, and the control unit (CU), which extracts

instructions from memory and decodes and executes them, calling on the ALU when

necessary.

Not all computational systems rely on a central processing unit. An array

processor or vector processor has multiple parallel computing elements, with no one unit

considered the "center". In the distributed computing model, problems are solved by a

distributed interconnected set of processors.

In my project, the processor required is a Pentium 4 or a higher version, for better performance with very low time consumption.


SOFTWARE SPECIFICATION

We require several different software packages to make the application under development work efficiently. It is very important to select the appropriate software so that the software works properly.

Below is the software required to make the new system.

1. Windows XP or higher versions

2. SQL Server Management Studio 2008

3. Visual Basic 6.0

WINDOWS XP

Windows XP is an operating system produced by Microsoft for use on personal

computers, including home and business desktops, laptops and media centers. First

released to computer manufacturers on August 24, 2001, it is the second most popular

version of Windows, based on installed user base. The name "XP" is short for

"eXPerience", highlighting the enhanced user experience.

Windows XP, the successor to Windows 2000 and Windows Me, was the first

consumer-oriented operating system produced by Microsoft to be built on the Windows

NT kernel. Windows XP was released worldwide for retail sale on October 25, 2001, and

over 400 million copies were in use in January 2006. It was succeeded by Windows Vista

in January 2007. Direct OEM and retail sales of Windows XP ceased on June 30, 2008.

Microsoft continued to sell Windows XP through their System Builders (smaller OEMs

who sell assembled computers) program until January 31, 2009. On April 10, 2012,

Microsoft reaffirmed that extended support for Windows XP and Office 2003 would end


on April 8, 2014 and suggested that administrators begin preparing to migrate to a newer

OS.

We make use of a recent OS because nowadays all users are familiar with the latest technologies; the more recent the OS, the better the user will be able to handle the new software. Moreover, SQL Server and VB 6.0 are easily available and can be used on these OSs.

SQL SERVER MANAGEMENT STUDIO 2008

SQL Server 2008 (formerly codenamed "Katmai") was released on August 6,

2008 and aims to make data management self-tuning, self-organizing and self-maintaining with the development of SQL Server Always On technologies, to provide

near-zero downtime. SQL Server 2008 also includes support for structured and semi-

structured data, including digital media formats for pictures, audio, video and other

multimedia data. In current versions, such multimedia data can be stored as BLOBs

(binary large objects), but they are generic bit streams. Intrinsic awareness of multimedia

data will allow specialized functions to be performed on them. According to Paul

Flessner, senior Vice President, Server Applications, Microsoft Corp., SQL Server 2008

can be a data storage backend for different varieties of data: XML, email, time/calendar, file, document, spatial, etc., as well as perform search, query, analysis, sharing, and

synchronization across all data types.

Other new data types include specialized date and time types and a Spatial data

type for location-dependent data. Better support for unstructured and semi-structured data

is provided using the new FILESTREAM data type, which can be used to reference any

file stored on the file system. Structured data and metadata about the file is stored in SQL

Server database, whereas the unstructured component is stored in the file system. Such

files can be accessed both via Win32 file handling APIs as well as via SQL Server using


T-SQL; doing the latter accesses the file data as a BLOB. Backing up and restoring the

database backs up or restores the referenced files as well. SQL Server 2008 also natively

supports hierarchical data, and includes T-SQL constructs to directly deal with them,

without using recursive queries.
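For instance, the hierarchyid type and its methods can walk a tree without a recursive query; the following sketch uses a made-up question-category table, not anything from this project:

    -- A small category tree stored with hierarchyid (SQL Server 2008+).
    CREATE TABLE dbo.Category
    (
        Node hierarchyid PRIMARY KEY,
        Name nvarchar(50) NOT NULL
    );

    INSERT INTO dbo.Category VALUES
        (hierarchyid::GetRoot(),      N'All subjects'),
        (hierarchyid::Parse('/1/'),   N'Mathematics'),
        (hierarchyid::Parse('/1/1/'), N'Algebra');

    -- All descendants of Mathematics, with no recursive query needed.
    DECLARE @math hierarchyid = hierarchyid::Parse('/1/');
    SELECT Name
    FROM   dbo.Category
    WHERE  Node.IsDescendantOf(@math) = 1;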

The Full-text search functionality has been integrated with the database engine.

According to a Microsoft technical article, this simplifies management and improves

performance.

Spatial data will be stored in two types. A "Flat Earth" (GEOMETRY or planar)

data type represents geospatial data which has been projected from its native, spherical,

coordinate system into a plane. A "Round Earth" data type (GEOGRAPHY) uses an

ellipsoidal model in which the Earth is defined as a single continuous entity which does

not suffer from the singularities such as the international dateline, poles, or map

projection zone "edges". Approximately 70 methods are available to represent spatial

operations for the Open Geospatial Consortium Simple Features for SQL, Version 1.1.
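A short, illustrative use of the two spatial types (the coordinate values are arbitrary):

    -- GEOMETRY works on a flat plane; GEOGRAPHY on the ellipsoidal Earth.
    DECLARE @plane geometry  = geometry::STGeomFromText('POINT(3 4)', 0);
    DECLARE @globe geography = geography::STGeomFromText('POINT(-122.349 47.651)', 4326);

    SELECT @plane.STX  AS planar_x,
           @globe.Lat  AS latitude,
           @globe.Long AS longitude;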

SQL Server includes better compression features, which also helps in improving

scalability. It enhanced the indexing algorithms and introduced the notion of filtered

indexes. It also includes Resource Governor that allows reserving resources for certain

users or workflows. It also includes capabilities for transparent encryption of data (TDE)

as well as compression of backups. SQL Server 2008 supports the ADO.NET Entity

Framework and the reporting tools, replication, and data definition will be built around

the Entity Data Model. SQL Server Reporting Services will gain charting capabilities

from the integration of the data visualization products from Dundas Data Visualization,

Inc., which was acquired by Microsoft. On the management side, SQL Server 2008

includes the Declarative Management Framework which allows configuring policies and

constraints, on the entire database or certain tables, declaratively. The version of SQL

Server Management Studio included with SQL Server 2008 supports IntelliSense for

SQL queries against a SQL Server 2008 Database Engine. SQL Server 2008 also makes


the databases available via Windows PowerShell providers and management

functionality available as Cmdlets, so that the server and all the running instances can be

managed from Windows PowerShell.

Microsoft SQL Server is a relational database management system developed by

Microsoft. As a database, it is a software product whose primary function is to store and

retrieve data as requested by other software applications, be it those on the same

computer or those running on another computer across a network (including the Internet).

There are at least a dozen different editions of Microsoft SQL Server aimed at different

audiences and for different workloads (ranging from small applications that store and

retrieve data on the same computer, to millions of users and computers that access huge

amounts of data from the Internet at the same time).

True to its name, Microsoft SQL Server's primary query languages are T-SQL and ANSI SQL.

Prior to version 7.0 the code base for MS SQL Server was sold to Microsoft by Sybase, and it was Microsoft's entry into the enterprise-level database market, competing against Oracle, IBM and, later, Sybase itself. Microsoft, Sybase and Ashton-Tate

originally teamed up to create and market the first version named SQL Server 1.0 for

OS/2 (about 1989) which was essentially the same as Sybase SQL Server 3.0 on Unix,

VMS, etc. Microsoft SQL Server 4.2 was shipped around 1992 (available bundled with

IBM OS/2 version 1.3). Later Microsoft SQL Server 4.21 for Windows NT was released

at the same time as Windows NT 3.1. Microsoft SQL Server v6.0 was the first version

designed for NT, and did not include any direction from Sybase.

About the time Windows NT was released, Sybase and Microsoft parted ways and

each pursued its own design and marketing schemes. Microsoft negotiated exclusive

rights to all versions of SQL Server written for Microsoft operating systems. Later,

Sybase changed the name of its product to Adaptive Server Enterprise to avoid confusion


with Microsoft SQL Server. Until 1994, Microsoft's SQL Server carried three Sybase

copyright notices as an indication of its origin.

SQL Server 7.0 and SQL Server 2000 included modifications and extensions to

the Sybase code base, adding support for the IA-64 architecture. By SQL Server 2005 the

legacy Sybase code had been completely rewritten.

In the ten years since the release of Microsoft's previous SQL Server product (SQL Server 2000), advancements have been made in performance, the client IDE tools, and

several complementary systems that are packaged with SQL Server 2005. These include:

an ETL tool (SQL Server Integration Services or SSIS), a Reporting Server, an OLAP

and data mining server (Analysis Services), and several messaging technologies,

specifically Service Broker and Notification Services.

Microsoft makes SQL Server available in multiple editions, with different feature

sets and targeting different users. These editions are:

Mainstream editions

Datacenter

SQL Server 2008 R2 Datacenter is the full-featured edition of SQL Server and is designed for datacenters that need high levels of application support and scalability. It supports 256 logical processors and virtually unlimited memory, and comes with the StreamInsight Premium edition. The Datacenter edition has been retired in SQL Server 2012; all its features are available in SQL Server 2012 Enterprise Edition.

Enterprise

SQL Server Enterprise Edition includes both the core database engine and add-on services, with a range of tools for creating and managing a SQL Server cluster. It can manage databases as large as 524 petabytes, address 2 terabytes of memory and supports 8 physical processors. The SQL Server 2012 edition supports 160 physical processors.


Standard

SQL Server Standard edition includes the core database engine, along with the stand-

alone services. It differs from Enterprise edition in that it supports fewer active instances

(number of nodes in a cluster) and does not include some high-availability functions such

as hot-add memory (allowing memory to be added while the server is still running), and

parallel indexes.

Web

SQL Server Web Edition is a low-TCO option for Web hosting.

Business Intelligence

Introduced in SQL Server 2012 and focusing on Self Service and Corporate

Business Intelligence. It includes the Standard Edition capabilities and Business

Intelligence tools: Power Pivot, Power View, and the BI Semantic Model, Master Data

Services, Data Quality Services and xVelocity in-memory analytics.

Workgroup

SQL Server Workgroup Edition includes the core database functionality but does

not include the additional services. Note that this edition has been retired in SQL Server

2012.

Express

SQL Server Express Edition is a scaled down, free edition of SQL Server, which

includes the core database engine. While there are no limitations on the number of

databases or users supported, it is limited to using one processor, 1 GB memory and 4 GB

database files (10 GB database files from SQL Server Express 2008 R2). It is intended as

a replacement for MSDE. Two additional editions provide a superset of features not in


the original Express Edition. The first is SQL Server Express with Tools, which

includes SQL Server Management Studio Basic. SQL Server Express with Advanced

Services adds full-text search capability and reporting services.

Specialized editions

Azure

Microsoft SQL Azure Database is the cloud-based version of Microsoft SQL

Server, presented as software as a service on Azure Services Platform.

Compact (SQL CE)

The compact edition is an embedded database engine. Unlike the other editions of

SQL Server, the SQL CE engine is based on SQL Mobile (initially designed for use with

hand-held devices) and does not share the same binaries. Due to its small size (1 MB

DLL footprint), it has a markedly reduced feature set compared to the other editions. For

example, it supports a subset of the standard data types, does not support stored

procedures or Views or multiple-statement batches (among other limitations). It is limited

to a 4 GB maximum database size and cannot be run as a Windows service; Compact Edition must be hosted by the application using it. The 3.5 version includes support for ADO.NET Synchronization Services. SQL CE does not support ODBC connectivity,

unlike SQL Server proper.

Developer

SQL Server Developer Edition includes the same features as SQL Server 2012

Enterprise Edition, but is limited by the license to be only used as a development and test

system, and not as production server. This edition is available to download by students

free of charge as a part of Microsoft's DreamSpark program.


Embedded (SSEE)

SQL Server 2005 Embedded Edition is a specially configured named instance of

the SQL Server Express database engine which can be accessed only by certain Windows

Services.

Evaluation

SQL Server Evaluation Edition, also known as the Trial Edition, has all the

features of the Enterprise Edition, but is limited to 180 days, after which the tools will

continue to run, but the server services will stop.

Fast Track

SQL Server Fast Track is specifically for enterprise-scale data warehousing

storage and business intelligence processing, and runs on reference-architecture hardware

that is optimized for Fast Track.

Parallel Data Warehouse (PDW)

A massively parallel processing (MPP) SQL Server appliance optimized for large-

scale data warehousing such as hundreds of terabytes.

Datawarehouse Appliance Edition

Pre-installed and configured as part of an appliance, in partnership with Dell & HP, based on the Fast Track architecture. This edition does not include SQL Server Integration Services, Analysis Services, or Reporting Services.

Architecture

The architecture of MS SQL Server contains different layers and services.


Protocol layer

Protocol layer implements the external interface to SQL Server. All operations

that can be invoked on SQL Server are communicated to it via a Microsoft-defined

format, called Tabular Data Stream (TDS). TDS is an application layer protocol, used to

transfer data between a database server and a client. Initially designed and developed by

Sybase Inc. for their Sybase SQL Server relational database engine in 1984, and later by

Microsoft in Microsoft SQL Server, TDS packets can be encased in other physical

transport dependent protocols, including TCP/IP, Named pipes, and Shared memory.

Consequently, access to SQL Server is available over these protocols. In addition, the

SQL Server API is also exposed over web services.

Data storage

The main unit of data storage is a database, which is a collection of tables with

typed columns. SQL Server supports different data types, including primary types such as

Integer, Float, Decimal, Char (including character strings), Varchar (variable length

character strings), binary (for unstructured blobs of data), Text (for textual data) among

others. The rounding of floats to integers uses either Symmetric Arithmetic Rounding or Symmetric Round Down (Fix) depending on arguments: SELECT ROUND(2.5, 0) gives 3.
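A quick check of this behaviour (the results in the comments follow the rule just described):

    SELECT ROUND(2.5, 0)  AS round_pos,   -- 3.0, rounded away from zero
           ROUND(-2.5, 0) AS round_neg,   -- -3.0, symmetric with the above
           ROUND(2.4, 0)  AS round_down;  -- 2.0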

Microsoft SQL Server also allows user-defined composite types (UDTs) to be

defined and used. It also makes server statistics available as virtual tables and views

(called Dynamic Management Views or DMVs). In addition to tables, a database can also

contain other objects including views, stored procedures, indexes and constraints, along

with a transaction log. A SQL Server database can contain a maximum of 2^31 objects, and can span multiple OS-level files with a maximum file size of 2^60 bytes. The data in the

database are stored in primary data files with an extension .mdf. Secondary data files,

identified with a .ndf extension, are used to store optional metadata. Log files are

identified with the .ldf extension.
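A minimal sketch of how these files are declared when creating a database (the names, paths and sizes here are hypothetical):

    CREATE DATABASE Quiz
    ON PRIMARY
        (NAME = Quiz_data,  FILENAME = 'C:\Data\Quiz.mdf', SIZE = 100MB),
    FILEGROUP Extra
        (NAME = Quiz_data2, FILENAME = 'C:\Data\Quiz.ndf', SIZE = 50MB)
    LOG ON
        (NAME = Quiz_log,   FILENAME = 'C:\Data\Quiz.ldf', SIZE = 25MB);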


Storage space allocated to a database is divided into sequentially numbered pages,

each 8 KB in size. A page is the basic unit of I/O for SQL Server operations. A page is

marked with a 96-byte header which stores metadata about the page including the page

number, page type, free space on the page and the ID of the object that owns it. Page type

defines the data contained in the page - data stored in the database, index, allocation map

which holds information about how pages are allocated to tables and indexes, change

map which holds information about the changes made to other pages since last backup or

logging, or contain large data types such as image or text. While the page is the basic unit of

an I/O operation, space is actually managed in terms of an extent which consists of 8

pages. A database object can either span all 8 pages in an extent ("uniform extent") or

share an extent with up to 7 more objects ("mixed extent"). A row in a database table

cannot span more than one page, so is limited to 8 KB in size. However, if the data

exceeds 8 KB and the row contains Varchar or Varbinary data, the data in those columns

are moved to a new page (or possibly a sequence of pages, called an Allocation unit) and

replaced with a pointer to the data.
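One way to observe this page layout is the standard sys.dm_db_index_physical_stats function; the database and table names below are hypothetical:

    SELECT index_type_desc,
           alloc_unit_type_desc,
           page_count,
           avg_page_space_used_in_percent
    FROM   sys.dm_db_index_physical_stats(
               DB_ID('Quiz'), OBJECT_ID('dbo.Results'), NULL, NULL, 'DETAILED');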

For physical storage of a table, its rows are divided into a series of partitions (numbered 1

to n). The partition size is user defined; by default all rows are in a single partition. A

table is split into multiple partitions in order to spread a database over a cluster. Rows in

each partition are stored in either B-tree or heap structure. If the table has an associated

index to allow fast retrieval of rows, the rows are stored in-order according to their index

values, with a B-tree providing the index. The data is in the leaf nodes of the B-tree, with the other nodes storing the index values for the leaf data reachable from the respective nodes.

If the index is non-clustered, the rows are not sorted according to the index keys. An

indexed view has the same storage structure as an indexed table. A table without an index

is stored in an unordered heap structure. Both heaps and B-trees can span multiple

allocation units.
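A hypothetical sketch of such user-defined partitioning (the function, scheme and table names are made up for illustration):

    -- Map rows to partitions by exam year.
    CREATE PARTITION FUNCTION pfByYear (int)
        AS RANGE RIGHT FOR VALUES (2011, 2012, 2013);

    CREATE PARTITION SCHEME psByYear
        AS PARTITION pfByYear ALL TO ([PRIMARY]);

    CREATE TABLE dbo.ExamResult
    (
        StudentId int NOT NULL,
        ExamYear  int NOT NULL,
        Score     int NOT NULL
    ) ON psByYear (ExamYear);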


Buffer management

SQL Server buffers pages in RAM to minimize disc I/O. Any 8 KB page can be

buffered in-memory, and the set of all pages currently buffered is called the buffer cache.

The amount of memory available to SQL Server decides how many pages will be cached

in memory. The buffer cache is managed by the Buffer Manager. Either reading from or

writing to any page copies it to the buffer cache. Subsequent reads or writes are

redirected to the in-memory copy, rather than the on-disc version. The page is updated on

the disc by the Buffer Manager only if the in-memory cache has not been referenced for

some time. While writing pages back to disc, asynchronous I/O is used whereby the I/O

operation is done in a background thread so that other operations do not have to wait for

the I/O operation to complete. Each page is written along with its checksum. When reading the page back, its checksum is computed again and matched with

the stored version to ensure the page has not been damaged or tampered with in the

meantime.
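The buffer cache can be inspected through the standard sys.dm_os_buffer_descriptors view; the database name here is hypothetical:

    -- Roughly how much of the buffer cache one database occupies.
    SELECT COUNT(*)            AS cached_pages,
           COUNT(*) * 8 / 1024 AS cached_mb     -- pages are 8 KB each
    FROM   sys.dm_os_buffer_descriptors
    WHERE  database_id = DB_ID('Quiz');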

Logging and Transaction

SQL Server ensures that any change to the data is ACID-compliant, i.e. it uses

transactions to ensure that the database will always revert to a known consistent state on

failure. Each transaction may consist of multiple SQL statements all of which will only

make a permanent change to the database if the last statement in the transaction (a

COMMIT statement) completes successfully. If the COMMIT completes successfully, the transaction is safely on disk.

SQL Server implements transactions using a write-ahead log.
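A minimal sketch of such a transaction (table and column names are invented): either both statements become permanent at COMMIT, or neither does.

    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO dbo.Response (StudentId, QuestionId, Answer)
        VALUES (1, 42, 'B');

        UPDATE dbo.Result
        SET    Score = Score + 1
        WHERE  StudentId = 1;

        COMMIT TRANSACTION;       -- both changes are now durable
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION; -- revert to the last consistent state
    END CATCH;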

Any changes made to any page will update the in-memory cache of the page,

simultaneously all the operations performed will be written to a log, along with the

transaction ID which the operation was a part of. Each log entry is identified by an

increasing Log Sequence Number (LSN) which is used to ensure that all changes are


written to the data files. Also during a log restore it is used to check that no logs are

duplicated or skipped. SQL Server requires that the log is written onto the disc before the

data page is written back. It must also ensure that all operations in a transaction are

written to the log before any COMMIT operation is reported as completed.

At a later point the server will checkpoint the database and ensure that all pages in

the data files have the state of their contents synchronized to a point at or after the LSN

that the checkpoint started. When completed the checkpoint marks that portion of the log

file as complete and may free it (see Simple transaction logging vs. Full transaction

logging). This enables SQL Server to ensure integrity of the data, even if the system fails.

On failure the database log has to be replayed to ensure the data files are in a

consistent state. All pages stored in the roll-forward part of the log (not marked as completed) are rewritten to the database; when the end of the log is reached, all open transactions are rolled back using the roll-back portion of the log.

The database engine usually checkpoints quite frequently. However, in a heavily

loaded database this can have a significant performance impact. It is possible to reduce

the frequency of checkpoints or disable them completely, but the roll forward during a recovery will then take much longer.
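
For illustration, a checkpoint can also be forced manually, and the logging behaviour referred to above is selected per database (the database name is a placeholder; only one RECOVERY setting would be in effect at a time):

    CHECKPOINT;                                 -- force a checkpoint now
    ALTER DATABASE QuizDB SET RECOVERY SIMPLE;  -- simple transaction logging
    ALTER DATABASE QuizDB SET RECOVERY FULL;    -- full transaction logging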

Concurrency and locking

SQL Server allows multiple clients to use the same database concurrently. As

such, it needs to control concurrent access to shared data, to ensure data integrity - when

multiple clients update the same data, or clients attempt to read data that is in the process

of being changed by another client. SQL Server provides two modes of concurrency

control: pessimistic concurrency and optimistic concurrency. When pessimistic

concurrency control is being used, SQL Server controls concurrent access by using locks.

Locks can be either shared or exclusive. An exclusive lock grants the user exclusive access

to the data - no other user can access the data as long as the lock is held. Shared locks are


used when some data is being read - multiple users can read from data locked with a

shared lock, but not acquire an exclusive lock. The latter would have to wait for all

shared locks to be released. Locks can be applied on different levels of granularity - on

entire tables, pages, or even on a per-row basis on tables. For indexes, it can either be on

the entire index or on index leaves. The level of granularity to be used is defined on a

per-database basis by the database administrator. While a fine grained locking system

allows more users to use the table or index simultaneously, it requires more resources, so it does not automatically yield a higher performing solution. SQL Server also includes

two more lightweight mutual exclusion solutions - latches and spin locks - which are less

robust than locks but are less resource intensive. SQL Server uses them for DMVs and

other resources that are usually not busy. SQL Server also monitors all worker threads

that acquire locks to ensure that they do not end up in deadlocks - in case they do, SQL

Server takes remedial measures, which in many cases is to kill one of the threads

entangled in the deadlock and roll back the transaction it started. To implement locking,

SQL Server contains the Lock Manager. The Lock Manager maintains an in-memory

table that manages the database objects and locks, if any, on them along with other

metadata about the lock. Access to any shared object is mediated by the lock manager,

which either grants access to the resource or blocks it.
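
A hedged sketch of pessimistic locking in T-SQL, where table hints request a particular lock explicitly (the table and key value are placeholders from this project's schema):

    BEGIN TRANSACTION;
    -- take an update lock and hold it until the transaction ends, so no other
    -- writer can intervene between the read and the write
    SELECT marks FROM result WITH (UPDLOCK, HOLDLOCK) WHERE quiz_id = 'Q1';
    UPDATE result SET marks = 10 WHERE quiz_id = 'Q1';
    COMMIT TRANSACTION;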

SQL Server also provides the optimistic concurrency control mechanism, which is

similar to the multiversion concurrency control used in other databases. The mechanism

allows a new version of a row to be created whenever the row is updated, as opposed to

overwriting the row, i.e., a row is additionally identified by the ID of the transaction that

created the version of the row. Both the old as well as the new versions of the row are

stored and maintained, though the old versions are moved out of the database into a

system database identified as Tempdb. When a row is in the process of being updated, any

other requests are not blocked (unlike locking) but are executed on the older version of

the row. If the other request is an update statement, it will result in two different versions

of the row - both of them will be stored by the database, identified by their respective

transaction IDs.
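
A hedged sketch of enabling and using this row-versioning behaviour, which SQL Server exposes as snapshot isolation (the database name is a placeholder):

    ALTER DATABASE QuizDB SET ALLOW_SNAPSHOT_ISOLATION ON;

    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
    -- reads the last committed version from the version store in tempdb
    -- instead of blocking on concurrent writers
    SELECT marks FROM result WHERE quiz_id = 'Q1';
    COMMIT TRANSACTION;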


Data retrieval

The main mode of retrieving data from an SQL Server database is querying for it.

The query is expressed using a variant of SQL called T-SQL, a dialect Microsoft SQL

Server shares with Sybase SQL Server due to its legacy. The query declaratively specifies

what is to be retrieved. It is processed by the query processor, which figures out the

sequence of steps that will be necessary to retrieve the requested data. The sequence of

actions necessary to execute a query is called a query plan. There might be multiple ways

to process the same query. For example, for a query that contains a join statement and a

select statement, executing join on both the tables and then executing select on the results

would give the same result as selecting from each table and then executing the join, but

result in different execution plans. In such cases, SQL Server chooses the plan that is

expected to yield the results in the shortest possible time. This is called query

optimization and is performed by the query processor itself.

SQL Server includes a cost-based query optimizer which tries to optimize on the cost, in terms of the resources it will take to execute the query. Given a query, the query optimizer looks at the database schema, the database statistics and the system load at that time. It then decides the sequence in which to access the tables referred to in the query, the sequence in which to execute the operations, and the access method to be used to access the tables. For example, if the table has an associated index, it decides whether the index should be used: if the index is on a column whose value is not unique for most of the rows (low "selectivity"), it might not be worthwhile to use the index to access the data. Finally, it decides whether to execute the query concurrently or not. While a concurrent execution is more costly in terms of total processor time, the fact that the execution is actually split across different processors might mean it will execute faster. Once a query plan is generated for

a query, it is temporarily cached. For further invocations of the same query, the cached

plan is used. Unused plans are discarded after some time.
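
As a hedged illustration, the plan the optimizer chose for a query can be inspected from T-SQL (table and column names are taken from this project's schema; the batch is compiled and its plan returned, but not executed):

    SET SHOWPLAN_XML ON;
    GO
    SELECT s.student_name, r.marks
    FROM student AS s JOIN result AS r ON r.Student_id = s.student_id;
    GO
    SET SHOWPLAN_XML OFF;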


SQL Server also allows stored procedures to be defined. Stored procedures are

parameterized T-SQL queries, which are stored in the server itself (and not issued by the

client application as is the case with general queries). Stored procedures can accept

values sent by the client as input parameters, and send back results as output parameters.

They can call defined functions and other stored procedures, including the same stored procedure (up to a set number of times). Access to them can be selectively granted.

Unlike other queries, stored procedures have an associated name, which is used at

runtime to resolve into the actual queries. Also because the code need not be sent from

the client every time (as it can be accessed by name), it reduces network traffic and

somewhat improves performance. Execution plans for stored procedures are also cached

as necessary.
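
A hedged sketch of such a parameterized stored procedure, using the result table from this project's schema (the procedure name and sample values are placeholders):

    CREATE PROCEDURE GetQuizScore
        @student_id varchar(5),
        @quiz_id    varchar(5),
        @marks      int OUTPUT      -- result sent back as an output parameter
    AS
    BEGIN
        SELECT @marks = marks FROM result
        WHERE Student_id = @student_id AND quiz_id = @quiz_id;
    END;
    GO

    -- invoked by name, so only the call travels over the network
    DECLARE @m int;
    EXEC GetQuizScore 'S001', 'Q1', @m OUTPUT;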

SQL CLR

Microsoft SQL Server 2005 includes a component named SQL CLR ("Common

Language Runtime") via which it integrates with .NET Framework. Unlike most other

applications that use .NET Framework, SQL Server itself hosts the .NET Framework

runtime, i.e., memory, threading and resource management requirements of .NET

Framework are satisfied by SQLOS itself, rather than the underlying Windows operating

system. SQLOS provides deadlock detection and resolution services for .NET code as

well. With SQL CLR, stored procedures and triggers can be written in any

managed .NET language, including C# and VB.NET. Managed code can also be used to

define UDTs (user defined types), which can persist in the database. Managed code is

compiled to CLI assemblies and after being verified for type safety, registered at the

database. After that, they can be invoked like any other procedure. However, only a

subset of the Base Class Library is available, when running code under SQL CLR. Most

APIs relating to user interface functionality are not available.

When writing code for SQL CLR, data stored in SQL Server databases can be

accessed using the ADO.NET APIs like any other managed application that accesses


SQL Server data. However, doing that creates a new database session, different from the

one in which the code is executing. To avoid this, SQL Server provides some

enhancements to the ADO.NET provider that allows the connection to be redirected to

the same session which already hosts the running code. Such connections are called

context connections and are set by setting context connection parameter to true in the

connection string. SQL Server also provides several other enhancements to the

ADO.NET API, including classes to work with tabular data or a single row of data as

well as classes to work with internal metadata about the data stored in the database. It

also provides access to the XML features in SQL Server, including XQuery support.

These enhancements are also available in T-SQL procedures as a consequence of the introduction of the new xml data type (its query, value and nodes functions).
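
For illustration, the xml data type methods named above can be used directly from T-SQL (the document content is a placeholder):

    DECLARE @doc xml;
    SET @doc = '<quiz id="Q1"><question no="1">What is SQL?</question></quiz>';
    SELECT @doc.value('(/quiz/@id)[1]', 'varchar(5)') AS quiz_id,    -- scalar value
           @doc.query('/quiz/question')               AS questions;  -- xml fragment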

Services

SQL Server also includes an assortment of add-on services. While these are not

essential for the operation of the database system, they provide value added services on

top of the core database management system. These services run either as a part of some SQL Server component or out-of-process as a Windows service, and present their own APIs to control and interact with them.

Service Broker

Used inside an instance, Service Broker provides an asynchronous programming environment. For cross-instance applications, Service Broker communicates over TCP/IP

and allows the different components to be synchronized together, via exchange of

messages. The Service Broker, which runs as a part of the database engine, provides a

reliable messaging and message queuing platform for SQL Server applications.


Replication Services

SQL Server Replication Services are used by SQL Server to replicate and

synchronize database objects, either in entirety or a subset of the objects present, across

replication agents, which might be other database servers across the network, or database

caches on the client side. Replication follows a publisher/subscriber model, i.e., the

changes are sent out by one database server ("publisher") and are received by others

("subscribers"). SQL Server supports three different types of replication.

Transaction replication

Each transaction made to the publisher database (master database) is synced out to

subscribers, who update their databases with the transaction. Transactional replication

synchronizes databases in near real time.

Merge replication

Changes made at both the publisher and subscriber databases are tracked, and

periodically the changes are synchronized bi-directionally between the publisher and the

subscribers. If the same data has been modified differently in both the publisher and the

subscriber databases, synchronization will result in a conflict which has to be resolved -

either manually or by using pre-defined policies. A rowguid column needs to be configured on the table if merge replication is used.

Snapshot replication

Snapshot replication publishes a copy of the entire database (the then-snapshot of

the data) and replicates out to the subscribers. Further changes to the snapshot are not

tracked.

Analysis Services


SQL Server Analysis Services adds OLAP and data mining capabilities for SQL

Server databases. The OLAP engine supports MOLAP, ROLAP and HOLAP storage

modes for data. Analysis Services supports the XML for Analysis standard as the

underlying communication protocol. The cube data can be accessed using MDX and

LINQ queries. Data mining specific functionality is exposed via the DMX query

language. Analysis Services includes various algorithms - Decision trees, clustering

algorithm, Naive Bayes algorithm, time series analysis, sequence clustering algorithm,

linear and logistic regression analysis, and neural networks - for use in data mining.

Reporting Services

SQL Server Reporting Services is a report generation environment for data

gathered from SQL Server databases. It is administered via a web interface. Reporting

services features a web services interface to support the development of custom reporting

applications. Reports are created as RDL files.

Reports can be designed using recent versions of Microsoft Visual Studio (Visual

Studio.NET 2003, 2005, and 2008) with Business Intelligence Development Studio,

installed or with the included Report Builder. Once created, RDL files can be rendered in

a variety of formats including Excel, PDF, CSV, XML, TIFF (and other image formats),

and HTML Web Archive.

Notification Services

Originally introduced as a post-release add-on for SQL Server 2000, Notification

Services was bundled as part of the Microsoft SQL Server platform for the first and only

time with SQL Server 2005. SQL Server Notification Services is a mechanism for

generating data-driven notifications, which are sent to Notification Services subscribers.

A subscriber registers for a specific event or transaction (which is registered on the


database server as a trigger); when the event occurs, Notification Services can use one of

three methods to send a message to the subscriber informing about the occurrence of the

event. These methods include SMTP, SOAP, or by writing to a file in the file system.

Notification Services was discontinued by Microsoft with the release of SQL Server 2008

in August 2008, and is no longer an officially supported component of the SQL Server

database platform.

Integration Services

SQL Server Integration Services is used to integrate data from different data sources. It is

used for the ETL capabilities for SQL Server for data warehousing needs. Integration

Services includes GUI tools to build data extraction workflows integrating various functions, such as extracting data from various sources, querying data, transforming data (including aggregating, de-duplicating and merging data), and then loading the transformed data onto other destinations, or sending e-mails detailing the status of the operation as defined by the user.

Full Text Search Service

SQL Server Full Text Search service is a specialized indexing and querying

service for unstructured text stored in SQL Server databases. The full text search index

can be created on any column with character based text data. It allows for words to be

searched for in the text columns. While it can be performed with the SQL LIKE operator,

using the SQL Server Full Text Search service can be more efficient. Full text search allows for inexact matching of the source string, indicated by a Rank value which can range from 0 to 1000

- a higher rank means a more accurate match. It also allows linguistic matching

("inflectional search"), i.e., linguistic variants of a word (such as a verb in a different

tense) will also be a match for a given word (but with a lower rank than an exact match).

Proximity searches are also supported, i.e., if the words searched for do not occur in the


sequence they are specified in the query but are near each other, they are also considered

a match. T-SQL exposes special operators that can be used to access the FTS capabilities.
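
A hedged sketch of those operators, assuming a full text index has been created on the question column of this project's question table:

    -- inflectional search: matches linguistic variants of "assess"
    SELECT question_id FROM question
    WHERE CONTAINS(question, 'FORMSOF(INFLECTIONAL, assess)');

    -- proximity search, with the rank of each match exposed via CONTAINSTABLE
    SELECT q.question_id, k.RANK
    FROM CONTAINSTABLE(question, question, 'database NEAR index') AS k
    JOIN question AS q ON q.question_id = k.[KEY];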

The Full Text Search engine is divided into two processes - the Filter Daemon

process (msftefd.exe) and the Search process (msftesql.exe). These processes interact

with the SQL Server. The Search process includes the indexer (that creates the full text

indexes) and the full text query processor. The indexer scans through text columns in the

database. It can also index through binary columns, and use iFilters to extract meaningful

text from the binary blob (for example, when a Microsoft Word document is stored as an

unstructured binary file in a database). The iFilters are hosted by the Filter Daemon

process. Once the text is extracted, the Filter Daemon process breaks it up into a

sequence of words and hands it over to the indexer. The indexer filters out noise words,

i.e., words like A, And etc., which occur frequently and are not useful for search. With the

remaining words, an inverted index is created, associating each word with the columns

they were found in. SQL Server itself includes a Gatherer component that monitors

changes to tables and invokes the indexer in case of updates.

When a full text query is received by the SQL Server query processor, it is handed

over to the FTS query processor in the Search process. The FTS query processor breaks

up the query into the constituent words, filters out the noise words, and uses an inbuilt

thesaurus to find out the linguistic variants for each word. The words are then queried

against the inverted index and a rank of their accurateness is computed. The results are

returned to the client via the SQL Server process.

SQLCMD

SQLCMD is a command line application that comes with Microsoft SQL Server,

and exposes the management features of SQL Server. It allows SQL queries to be written


and executed from the command prompt. It can also act as a scripting language to create

and run a set of SQL statements as a script. Such scripts are stored as a .sql file, and are

used either for management of databases or to create the database schema during the

deployment of a database.
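
A hedged sketch of such a deployment script (the file name and server name are placeholders; the table matches this project's course table):

    -- deploy.sql, run from the command prompt with:
    --     sqlcmd -S .\SQLEXPRESS -i deploy.sql
    CREATE DATABASE Quiz;
    GO
    USE Quiz;
    GO
    CREATE TABLE course (
        course_id   varchar(5) PRIMARY KEY,
        course_name char(20)   NOT NULL
    );
    GO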

SQLCMD was introduced with SQL Server 2005 and continues with SQL Server 2008. Its predecessors for earlier versions were OSQL and ISQL, which are functionally equivalent as far as T-SQL execution is concerned, and many of the command line parameters are identical, although SQLCMD adds extra versatility.

Visual Studio

Microsoft Visual Studio includes native support for data programming with

Microsoft SQL Server. It can be used to write and debug code to be executed by SQL

CLR. It also includes a data designer that can be used to graphically create, view or edit

database schemas. Queries can be created either visually or using code. From SSMS 2008 onwards, IntelliSense for SQL queries is provided as well.

SQL Server Management Studio

SQL Server Management Studio is a GUI tool included with SQL Server 2005

and later for configuring, managing, and administering all components within Microsoft

SQL Server. The tool includes both script editors and graphical tools that work with

objects and features of the server. SQL Server Management Studio replaces Enterprise

Manager as the primary management interface for Microsoft SQL Server since SQL

Server 2005. A version of SQL Server Management Studio is also available for SQL

Server Express Edition, for which it is known as SQL Server Management Studio Express

(SSMSE).

A central feature of SQL Server Management Studio is the Object Explorer,

which allows the user to browse, select, and act upon any of the objects within the server.


It can be used to visually observe and analyze query plans and optimize the database

performance, among others. SQL Server Management Studio can also be used to create a

new database, alter any existing database schema by adding or modifying tables and

indexes, or analyze performance. It includes the query windows which provide a GUI

based interface to write and execute queries.

Business Intelligence Development Studio

Business Intelligence Development Studio (BIDS) is the IDE from Microsoft

used for developing data analysis and Business Intelligence solutions utilizing the

Microsoft SQL Server Analysis Services, Reporting Services and Integration Services. It

is based on the Microsoft Visual Studio development environment but is customized with

the SQL Server services-specific extensions and project types, including tools, controls

and projects for reports (using Reporting Services), Cubes and data mining structures

(using Analysis Services).

Programmability

T-SQL

T-SQL (Transact-SQL) is the primary means of programming and managing SQL

Server. It exposes keywords for the operations that can be performed on SQL Server,

including creating and altering database schemas, entering and editing data in the

database as well as monitoring and managing the server itself. Client applications that

consume data or manage the server will leverage SQL Server functionality by sending T-

SQL queries and statements which are then processed by the server and results (or errors)

returned to the client application. SQL Server allows it to be managed using T-SQL. For

this it exposes read-only tables from which server statistics can be read. Management

functionality is exposed via system-defined stored procedures which can be invoked from

T-SQL queries to perform the management operation. It is also possible to create linked servers using T-SQL. A linked server allows a single query to operate on multiple servers.
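
As a hedged illustration (the server and database names are placeholders), a linked server is created with the system stored procedure sp_addlinkedserver and then queried using a four-part name:

    EXEC sp_addlinkedserver @server = N'REMOTESRV',
                            @srvproduct = N'SQL Server';

    -- four-part name: server.database.schema.table
    SELECT * FROM REMOTESRV.Quiz.dbo.result;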


SQL Native Client (aka SNAC)

SQL Native Client is the native client side data access library for Microsoft SQL

Server, version 2005 onwards. It natively implements support for the SQL Server features

including the Tabular Data Stream implementation, support for mirrored SQL Server

databases, full support for all data types supported by SQL Server, asynchronous

operations, query notifications, encryption support, as well as receiving multiple result

sets in a single database session. SQL Native Client is used under the hood by SQL

Server plug-ins for other data access technologies, including ADO or OLE DB. The SQL

Native Client can also be directly used, bypassing the generic data access layers.

VISUAL BASIC 6.0

Visual Basic is a third-generation event-driven programming language and

integrated development environment (IDE) from Microsoft for its COM programming

model first released in 1991. Visual Basic is designed to be relatively easy to learn and

use. Visual Basic was derived from BASIC and enables the rapid application

development (RAD) of graphical user interface (GUI) applications, access to databases

using Data Access Objects, Remote Data Objects, or ActiveX Data Objects, and creation

of ActiveX controls and objects. Scripting languages such as VBA and VBScript are

syntactically similar to Visual Basic, but perform differently.

A programmer can put together an application using the components provided

with Visual Basic itself. Programs written in Visual Basic can also use the Windows API,

but doing so requires external function declarations. Though the program has received

criticism for its perceived faults, from version 3 Visual Basic was a runaway commercial

success, and many companies offered third party controls greatly extending its

functionality.


The final release was version 6 in 1998. Microsoft's extended support ended in

March 2008 and the designated successor was Visual Basic .NET (now known simply as

Visual Basic).

Like the BASIC programming language, Visual Basic was designed to be easily

learned and used by beginner programmers. The language not only allows programmers

to create simple GUI applications, but to also develop complex applications.

Programming in VB is a combination of visually arranging components or controls on a

form, specifying attributes and actions of those components, and writing additional lines

of code for more functionality. Since default attributes and actions are defined for the

components, a simple program can be created without the programmer having to write

many lines of code. Performance problems were experienced by earlier versions, but with

faster computers and native code compilation this has become less of an issue.

Although VB programs can be compiled into native code executables from

version 5 onwards, they still require the presence of runtime libraries of approximately 1

MB in size. Runtime libraries are included by default in Windows 2000 and later,

however for earlier versions of Windows, i.e. 95/98/NT, runtime libraries must be

distributed together with the executable.

Forms are created using drag-and-drop techniques. A tool is used to place controls

(e.g., text boxes, buttons, etc.) on the form (window). Controls have attributes and event

handlers associated with them. Default values are provided when the control is created,

but may be changed by the programmer. Many attribute values can be modified during

run time based on user actions or changes in the environment, providing a dynamic

application. For example, code can be inserted into the form resize event handler to

reposition a control so that it remains centered on the form, expands to fill up the form,

etc. By inserting code into the event handler for a key press in a text box, the program can

automatically translate the case of the text being entered, or even prevent certain

characters from being inserted.


Visual Basic can create executables (EXE files), ActiveX controls, or DLL files,

but is primarily used to develop Windows applications and to interface database systems.

Dialog boxes with less functionality can be used to provide pop-up capabilities. Controls

provide the basic functionality of the application, while programmers can insert

additional logic within the appropriate event handlers. For example, a drop-down

combination box will automatically display its list and allow the user to select any

element. An event handler is called when an item is selected, which can then execute

additional code created by the programmer to perform some action based on which

element was selected, such as populating a related list.

Alternatively, a Visual Basic component can have no user interface, and instead

provide ActiveX objects to other programs via Component Object Model (COM). This

allows for server-side processing or an add-in module.

The runtime recovers unused memory using reference counting which depends on

variables passing out of scope or being set to "Nothing", resulting in the very common

problem of memory leaks. There is a large library of utility objects, and the language

provides basic object oriented support. Unlike many other programming languages,

Visual Basic is generally not case sensitive, although it will transform keywords into a

standard case configuration and force the case of variable names to conform to the case

of the entry within the symbol table. String comparisons are case sensitive by default.

The Visual Basic compiler is shared with other Visual Studio languages (C, C++),

but restrictions in the IDE do not allow the creation of some targets (Windows model

DLLs) and threading models.

Characteristics

Visual Basic has the following traits which differ from C-derived languages:


Statements tend to be terminated with keywords such as "End If", instead of using

"{}"s to group statements.

Multiple variable assignments are not possible. A = B = C does not imply that the

values of A, B and C are equal. The boolean result of "Is B = C?" is stored in A.

The result stored in A would therefore be either false or true.

Boolean constant True has numeric value −1. This is because the Boolean data

type is stored as a 16-bit signed integer. In this construct −1 evaluates to 16 binary

1s (the Boolean value True), and 0 as 16 0s (the Boolean value False). This is

apparent when performing a Not operation on a 16 bit signed integer value 0

which will return the integer value −1, in other words True = Not False. This

inherent functionality becomes especially useful when performing logical

operations on the individual bits of an integer such as And, Or, Xor and Not. This

definition of True is also consistent with BASIC since the early 1970s Microsoft

BASIC implementation and is also related to the characteristics of CPU

instructions at the time.

Logical and bitwise operators are unified. This is unlike some C-derived

languages (such as Perl), which have separate logical and bitwise operators. This

again is a traditional feature of BASIC.

Variable array base. Arrays are declared by specifying the upper and lower

bounds in a way similar to Pascal and FORTRAN. It is also possible to use the

Option Base statement to set the default lower bound. Use of the Option Base

statement can lead to confusion when reading Visual Basic code and is best

avoided by always explicitly specifying the lower bound of the array. This lower

bound is not limited to 0 or 1, because it can also be set by declaration. In this

way, both the lower and upper bounds are programmable. In more subscript-

limited languages, the lower bound of the array is not variable. This uncommon

trait does exist in Visual Basic .NET but not in VBScript.

OPTION BASE was introduced by ANSI, with the standard for ANSI Minimal

BASIC in the late 1970s.


Relatively strong integration with the Windows operating system and the

Component Object Model. The native types for strings and arrays are the

dedicated COM types, BSTR and SAFEARRAY.

Banker's rounding as the default behavior when converting real numbers to integers with the Round function: ? Round(2.5, 0) gives 2, while ? Round(3.5, 0) gives 4.

Integers are automatically promoted to reals in expressions involving the normal

division operator (/) so that division of one integer by another produces the

intuitively correct result. There is a specific integer divide operator (\) which does

truncate.

By default, if a variable has not been declared or if no type declaration character

is specified, the variable is of type Variant. However this can be changed with

Deftype statements such as DefInt, DefBool, DefVar, DefObj, DefStr. There are

12 Deftype statements in total offered by Visual Basic 6.0. The default type may

be overridden for a specific declaration by using a special suffix character on the

variable name (# for Double, ! for Single, & for Long, % for Integer, $ for String,

and @ for Currency) or using the key phrase As (type). VB can also be set into a mode in which only explicitly declared variables can be used, via the command Option Explicit.

History

VB 1.0 was introduced in 1991. The drag and drop design for creating the user

interface is derived from a prototype form generator developed by Alan Cooper and his

company called Tripod. Microsoft contracted with Cooper and his associates to develop

Tripod into a programmable form system for Windows 3.0, under the code name Ruby (no relation to the later Ruby programming language). Tripod did not include a programming language at all. Microsoft decided to combine Ruby with the Basic language to create Visual Basic.


The Ruby interface generator provided the "visual" part of Visual Basic and this

was combined with the "EB" Embedded BASIC engine designed for Microsoft's

abandoned "Omega" database system. Ruby also provided the ability to load dynamic

link libraries containing additional controls (then called "gizmos"), which later became

the VBX interface.

Timeline

Project 'Thunder' was initiated in 1990.

Visual Basic 1.0 (May 1991) was released for Windows at the Comdex/Windows

World trade show in Atlanta, Georgia.

Visual Basic 1.0 for DOS was released in September 1992. The language itself

was not quite compatible with Visual Basic for Windows, as it was actually the

next version of Microsoft's DOS-based BASIC compilers, QuickBasic and

BASIC Professional Development System. The interface used a Text user

interface, using extended ASCII characters to simulate the appearance of a GUI.

Visual Basic 2.0 was released in November 1992. The programming environment

was easier to use, and its speed was improved. Notably, forms became

instantiable objects, thus laying the foundational concepts of class modules as

were later offered in VB4.

Visual Basic 3.0 was released in the summer of 1993 and came in Standard and

Professional versions. VB3 included version 1.1 of the Microsoft Jet Database

Engine that could read and write Jet (or Access) 1.x databases.

Visual Basic 4.0 (August 1995) was the first version that could create 32-bit as

well as 16-bit Windows programs. It has three editions: Standard, Professional,

and Enterprise. It also introduced the ability to write non-GUI classes in Visual

Basic. Incompatibilities between different releases of VB4 caused installation and

operation problems. While previous versions of Visual Basic had used VBX


controls, Visual Basic now used OLE controls (with files names ending in .OCX)

instead. These were later to be named ActiveX controls.

With version 5.0 (February 1997), Microsoft released Visual Basic exclusively for

32-bit versions of Windows. Programmers who preferred to write 16-bit programs

were able to import programs written in Visual Basic 4.0 into Visual Basic 5.0, and Visual Basic 5.0 programs can easily be converted to the format of Visual Basic 4.0. Visual

Basic 5.0 also introduced the ability to create custom user controls, as well as the

ability to compile to native Windows executable code, speeding up calculation-

intensive code execution. A free, downloadable Control Creation Edition was also

released for creation of ActiveX controls. It was also used as an introductory form

of Visual Basic: a regular .exe project could be created and run in the IDE, but not

compiled.

Visual Basic 6.0 (Mid 1998) improved in a number of areas including the ability

to create web-based applications. VB6 has entered Microsoft's "non-supported

phase" as of March 2008. Although the Visual Basic 6.0 development

environment is no longer supported, the runtime is supported on Windows Vista,

Windows Server 2008 and Windows 7.

Mainstream Support for Microsoft Visual Basic 6.0 ended on March 31, 2005.

Extended support ended in March 2008. In response, the Visual Basic user

community expressed its grave concern and lobbied users to sign a petition to

keep the product alive. Microsoft has so far refused to change their position on the

matter. Ironically, around this time (2005), it was exposed that Microsoft's new

anti-spyware offering, Microsoft AntiSpyware (part of the GIANT Company

Software purchase), was coded in Visual Basic 6.0. Its replacement, Windows

Defender, was rewritten as C++ code.


Derivative languages

Microsoft has developed derivatives of Visual Basic for use in scripting. Visual

Basic itself is derived heavily from BASIC, and subsequently has been replaced with

a .NET platform version.

Some of the derived languages are:

Visual Basic for Applications (VBA) is included in many Microsoft applications

(Microsoft Office), and also in many third-party products like Solid Works,

AutoCAD, WordPerfect Office 2002, ArcGIS, Sage 300 ERP, and Business

Objects Desktop Intelligence. There are small inconsistencies in the way VBA is

implemented in different applications, but it is largely the same language as VB6

and uses the same runtime library. Although Visual Basic development ended

with 6.0, in 2010 Microsoft introduced VBA 7 to provide extended features and

64-bit support for VBA.

VBScript is the default language for Active Server Pages. It can be used in

Windows scripting and client-side web page scripting. Although it resembles VB

in syntax, it is a separate language and it is executed by vbscript.dll as opposed to

the VB runtime. ASP and VBScript should not be confused with ASP.NET which

uses the .NET Framework for compiled web pages.

Visual Basic .NET is Microsoft's designated successor to Visual Basic 6.0, and is

part of Microsoft's .NET platform. Visual Basic.Net compiles and runs using

the .NET Framework. It is not backwards compatible with VB6. An automated

conversion tool exists, but fully automated conversion for most projects is

impossible.

StarOffice Basic is a Visual Basic compatible interpreter included in StarOffice

suite, developed by Sun Microsystems.


Gambas is a Visual Basic inspired free software programming language. It is not a

clone of Visual Basic, but it does have the ability to convert Visual Basic

programs to Gambas.

Performance and other issues

Earlier versions of Visual Basic (prior to version 5) compiled the code to P-Code

only. The P-Code is interpreted by the language runtime. The benefits of P-Code include

portability and smaller binary file sizes, but it usually slows down the execution, since

having a runtime adds an additional layer of interpretation. However, small amounts of

code and algorithms can be constructed to run faster than compiled native code.

Visual Basic applications require Microsoft Visual Basic runtime

MSVBVMxx.DLL, where xx is the relevant version number, either 50 or 60.

MSVBVM60.dll comes as standard with Windows in all editions after Windows 98 while

MSVBVM50.dll comes with all editions after Windows 95. A Windows 95 machine

would however require inclusion with the installer of whichever dll was needed by the

program.

Visual Basic 5 and 6 can compile code to either native or P-Code but in either

case the runtime is still required for built in functions and forms management.


DATA FLOW DIAGRAM


A DFD, also known as a ‘bubble chart’, has the purpose of clarifying system

requirements and identifying major transformations. It shows the flow of data through a

system. It is a graphical tool because it presents a picture. The DFD may be partitioned

into levels that represent increasing information flow and functional detail. Four simple

notations are used to complete a DFD. These notations are given below:-

DATA FLOW:-

The data flow is used to describe the movement of information from one part of

the system to another part. Flows represent data in motion. A flow is a pipeline through which

information flows. Data flow is represented by an arrow.

PROCESS:-

A circle or bubble represents a process that transforms incoming data to outgoing

data. Process shows a part of the system that transforms inputs to outputs.

EXTERNAL ENTITY:-

A square defines a source or destination of system data. External entities represent any entity that supplies information to or receives information from the system but is not a part of the system.


DATA STORE:-

The data store represents a logical file, which can be either a data structure or a physical file on disk. The data store is used to hold data at rest, acting as a temporary repository of data. It is represented by an open rectangle.

LEVEL 0 DFD for Network based Quiz System


LEVEL 1 DFD for Network Based Quiz System


SYSTEM DESIGN


The system under development is built by working on two different modules and combining them to work as a single unit; that single unit is the new software. We go through different design strategies to design the system. In the input design we decide which types of input screens are going to be used for the system. In the output design we decide the output screens and the reports that will be used to present the output, and in the database design we decide which tables will be required and which fields those tables will contain. Each of them is discussed briefly below.

INPUT DESIGN

Input design is the process of converting the user-originated inputs to a computer-

based format. For my project I will be making use of forms which will enable me to get the user inputs with the help of various tools like text boxes and combo boxes. The

forms that will be used are based on GUI and so any user will be able to use it with ease.

The design for handling input specifies how data are accepted for computer processing.

Input design is a part of overall system design that needs careful attention, and it includes specifying the means by which actions are taken. A system user interacting through a workstation must be able to tell the system whether to accept input, produce a report, or end processing. The collection of input data is considered to be the most expensive part of the system design. Since the inputs have to be planned in such a manner as to capture the relevant information, extreme care is taken while obtaining it. If the

data going into the system is incorrect then the processing and outputs will magnify these

errors. The major activities carried out are

1. Collection of needed data from the source

2. Conversion of data into computer accepted form.

3. Verification of converted data.

4. Checking data for accuracy.


The following are the major input screens used for the application:

Login screen: used for providing user id and password.

Registration form: used for storing the details of different users.

Add Exam form: The input screen used to add various examination details.

Edit Exam form: The input screen used to edit examination details.

Create Question form: used by Admin to set up question for a particular exam.

Edit Questions form: used by staff to edit the questions of an exam.

Add Subject form: used by the Admin to add the subjects.

Add Chapter form: used by Admin to add chapters.

The screen shots of the input forms are given in Appendix B

OUTPUT DESIGN

Output design is used to provide outputs to the users of the system. Here, in this case, I make use of forms which contain flex grids to show the outputs of the processed data. I also make use of reports to show the results. I have included pie-charts and bar diagrams to show the results in the form of graphs so that the results can be analyzed properly.

The output design has been done so that the results of processing are communicated to the user. Effective output design will improve the clarity and performance of outputs. Output is the main reason for developing the system and the basis on which users will evaluate the usefulness of the application. The output design phase of the system is concerned with the conveyance of information to the end user in a friendly manner. The output design should be efficient and intelligible, so that the system's relationship with the end user is improved, thereby enhancing the process of decision making.

Exam Info form: This output screen enables the administrator to view details of exam.


Exam form: This output screen enables the user to take the exam.
Result form: This output screen enables the candidates to view results.

The screen shots of the output forms such as flex grids and reports are shown in

Appendix C.

DATABASE DESIGN

In the database design, we create a database with different tables that is used to

store the data. We normalize the data in the table. Database normalization is the process

of organizing the fields and tables in a relational database to minimize redundancy and

dependency. Normalization usually involves dividing large tables into smaller (and less

redundant) tables and defining relationships between them. The objective is to isolate

data so that additions, deletions, and modifications of a field can be made in just one table

and then propagated through the rest of the database via the defined relationships. In the project I have made use of the Third Normal Form (3NF), a property of database tables: a relation is in third normal form if it is in Second Normal Form and there are no functional (transitive) dependencies between two (or more) non-primary-key attributes.

The overall objective in the development of database technology has been to treat data as an organizational resource and as an integrated whole. A Database Management System allows data to be protected and organized separately from other resources; a database is an integrated collection of data, organized independently of how it is physically stored. This separation is the difference between logical and physical data.

In my project, I have made use of twelve tables which are stored in the database

named Quiz. The tables are used to store the values that are generated by the application.

The tables are as follows.

1. autotable

2. chapter

3. course


4. dept

5. design

6. login

7. question

8. student

9. subject

10. teacher

11. schedulequiz

12. result

The field names and the key constraints of all the tables are shown below in

detail.

Table 1: autotable

Column Name Data Type Key Constraints Null/Not Null

table_name char(20) Primary key Not null

Number varchar(6) Foreign key Not null

Table 2: chapter

Column Name Data Type Key Constraints Null/Not Null

chapter_id varchar(6) Primary key Not null

subject_id varchar(5) Foreign key Not null

chapter_name varchar(50) Not null

course_id varchar(5) Foreign key Not null

Table 3: course

Column Name Data Type Key Constraints Null/Not Null


course_id varchar(5) Primary key Not null

course_name char(20) Not null

Table 4: dept

Column Name Data Type Key Constraints Null/Not Null

department_id varchar(5) Primary key Not null

department_name char(20) Not null

course_id varchar(5) Foreign key Not null

Table 5: design

Column Name Data Type Key Constraints Null/Not Null

designation_id varchar(6) Primary key Not null

designation_name char(20) Not null

department_id varchar(5) Foreign key Not null

Table 6: login

Column Name Data Type Key Constraints Null/Not Null

user_name varchar(15) Primary key Not null

password varchar(15) Not null

user_type char(15) Not null


Table 7: question

Column Name Data Type Key Constraints Null/Not Null

course_id varchar(5) Foreign key Not null

subject_id varchar(5) Foreign key Not null

chapter_id varchar(6) Foreign key Not null

question_id varchar(5) Primary key Not null

question Varchar(500) Not null

opt1 varchar(50) Not null

opt2 varchar(50) Not null

opt3 varchar(50) Not null

opt4 varchar(50) Not null

answer varchar(50) Not null

Table 8: student

Column Name Data Type Key Constraints Null/Not Null

student_id varchar(5) Primary key Not null

student_name char(20) Not null

address1 varchar(50) Not null

address2 varchar(50) Not null

city Char(25) Not null

phone varchar(12) Not null

email varchar(50) Not null

gender char(8) Not null

class int Not null

department_id varchar(5) Foreign key Not null

course_id varchar(5) Foreign key Not null


password varchar(20) Not null

Table 9: subject

Column Name Data Type Key Constraints Null/Not Null

course_id varchar(5) Foreign key Not null

subject_id varchar(5) Primary key Not null

subject_name varchar(20) Not null

Table 10: teacher

Column Name Data Type Key Constraints Null/Not Null

teacher_id varchar(5) Primary key Not null

teacher_name char(20) Not null

email varchar(40) Not null

address1 varchar(50) Not null

address2 varchar(50) Not null

city char(20) Not null

qualification char(20) Not null

password varchar(15) Not null

department_id varchar(5) Foreign key Not null

designation_id varchar(6) Foreign key Not null

subject_id varchar(5) Foreign key Not null


course_id varchar(5) Foreign key Not null

Table 11: schedulequiz

Column Name Data Type Key Constraints Null/Not Null

quiz_id varchar(5) Primary key Not null

course_id varchar(5) Foreign key Not null

subject_id varchar(5) Foreign key Not null

chapter_id varchar(6) Foreign key Not null

question_no int Not null

marks int Not null

start_date date Not null

end_date date Not null

start_time float Not null

end_time float Not null

teacher_id varchar(5) Foreign key Not null

Table 12: result

Column Name Data Type Key Constraints Null/Not Null

course_id varchar(5) Foreign key Not null

Student_id varchar(5) Foreign key Not null

chapter_id varchar(6) Foreign key Not null

quiz_id varchar(5) Foreign key Not null

marks int Not null

res char(4) Not null
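
As a hedged T-SQL sketch, two of the tables above might be declared as follows, with the stated primary and foreign key constraints (assuming the referenced course and chapter tables have been created first):

    CREATE TABLE subject (
        course_id    varchar(5)  NOT NULL REFERENCES course(course_id),
        subject_id   varchar(5)  NOT NULL PRIMARY KEY,
        subject_name varchar(20) NOT NULL
    );

    CREATE TABLE question (
        course_id   varchar(5)   NOT NULL REFERENCES course(course_id),
        subject_id  varchar(5)   NOT NULL REFERENCES subject(subject_id),
        chapter_id  varchar(6)   NOT NULL REFERENCES chapter(chapter_id),
        question_id varchar(5)   NOT NULL PRIMARY KEY,
        question    varchar(500) NOT NULL,
        opt1        varchar(50)  NOT NULL,
        opt2        varchar(50)  NOT NULL,
        opt3        varchar(50)  NOT NULL,
        opt4        varchar(50)  NOT NULL,
        answer      varchar(50)  NOT NULL
    );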


CODING

PROGRAM CODE PREPARATION

When considered as a step in software engineering, coding is viewed as a natural consequence of design. However, programming language characteristics and coding style can profoundly affect software quality and maintainability. The coding step translates a detail design representation into a programming language realization. The translation process continues when a compiler accepts source code as input and produces machine-dependent object code as output. The initial translation step from detail design to programming language is a primary concern in the software engineering context. Improper interpretation of a detail design specification can lead to erroneous source code. Style is an important attribute of source code and can determine the intelligibility of a program. The elements of style include internal documentation, methods for data declaration, procedures for statement construction, and I/O coding and declaration. In all cases, simplicity and clarity are key characteristics. An offshoot of coding style is the execution time and/or memory efficiency that is achieved. Coding is the phase in which we actually write programs using a programming language. In the coding phase, the design must be translated into a machine readable form. If design is performed in a detailed manner, coding can be accomplished mechanistically. It was the only recognized development phase in early or unsystematic development processes, but it is just one of several phases in a waterfall process. The output of this phase is an implemented and tested collection of modules.

In my project I have used Visual Basic 6 to code the whole project and SQL Server 2008 as the database to store the results of the processed data, which is the output of the project.

The full code that is written for the project is given in Appendix A for reference.


SYSTEM TESTING

Testing is the penultimate step of software development. An elaborate set of test data is prepared and the system is tested using that data. While testing, errors are noted and corrections are made. The users are trained to operate the developed system. Both hardware and software securities are put in place to run the developed system successfully. System testing is aimed at ensuring the system works accurately before live operation commences. Testing is vital to the system. System testing makes a logical assumption that if all parts of the system are correct, the goal will be successfully achieved. The candidate system is subjected to a variety of tests: online response, volume, stress, recovery and security, and usability tests.

A series of tests is performed on the proposed system before the system is ready for user acceptance testing. Nothing is complete without testing, as it is vital to the success of the system. The entire testing process can be divided into 4 phases:

1. Unit Testing

2. Integration Testing

3. User Acceptance Testing

4. Use Cases/ Test Cases

UNIT TESTING

In my project, I have done unit testing to test each module to know whether each

module works properly or not. Some test data were passed for testing to check whether

they produced correct outputs and their flow is in the right path or not.

Unit testing focuses verification effort on the smallest unit of software design: the module. It checks whether each module in the software works properly, giving the desired outputs for the given inputs. All validations and conditions are tested at the module level in the unit test. Control paths are tested to ensure that information properly flows into and out of the program unit under test. Boundary conditions are tested to ensure that the modules operate correctly at their boundaries. All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once.

INTEGRATION TESTING


The major concerns of integration testing in my project are developing an incremental strategy that will limit the complexity of interactions among components as they are added to the system; developing an implementation and integration schedule that will make the modules available when needed; and designing test cases that will demonstrate the viability of the evolving system. Though each program works individually, the programs should also work after being linked together. Data may be lost across an interface, and one module can have an adverse effect on another. Subroutines, after linking, may not perform the function expected by the main routine. Integration testing is a systematic technique for constructing the program structure while, at the same time, conducting tests to uncover errors associated with the interfaces. In this testing, the programs are constructed and tested in small segments.

ACCEPTANCE TESTING

An acceptance test has the objective of selling the user on the validity and reliability of the system. It verifies that the system's procedures operate to system specification and that the integrity of vital data is maintained. It tests how far the users accept the new system and how easily they can work on it as they did on the old system. I tested the system with a large collection of records. The system was found to be user friendly and to work efficiently. All the above testing was successfully done.

USE CASES/ TEST CASES


Use cases are used during the analysis phase of a project to identify and partition

system functionality. They separate the system into actors and use cases.

Actors represent roles that are played by users of the system. Those users can be humans, other computers, pieces of hardware, or even other software systems. The only criterion is that they must be external to the part of the system being partitioned into use cases. They must supply stimuli to that part of the system, and they must receive outputs from it.

Use cases describe the behavior of the system when one of these actors sends one

particular stimulus. This behavior is described textually. It describes the nature of the

stimulus that triggers the use case; the inputs from and outputs to other actors, and the

behaviors that convert the inputs to the outputs. The text of the use case also usually

describes everything that can go wrong during the course of the specified behavior, and

what remedial action the system will take.

USE CASE 1: Add Question

Primary Actor: Teacher
Stakeholders Interest: Wants to keep the question bank up to date.
Pre Conditions: Teacher is identified and authenticated.
Post Conditions: All questions are saved.
Task: Adding a new question.
Scenario: Teacher starts adding a new question, specifying the question number, subject id, chapter id, question, options and answer.
Output: System checks whether the question is already present and whether the question bank is full; if the question bank is full or the question is already present, the question is rejected.

Use Case 2: Sign In

Primary Actors: Administrator, Teacher, Student
Stakeholders Interest: Access the system as a valid administrator, teacher or student.
Pre Conditions: Users are registered in the system.
Post Conditions: Successful login for all registered users.
Task: Login to the system.
Scenarios and Outputs:
1. User name and password are valid: successful login.
2. User type is not valid: message to re-enter a valid user type.
3. User name is invalid: message to re-enter a valid user name.
4. Password is invalid: message to enter the correct password.
5. User name and password are not correct: message to enter a valid user name and password.
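
A hedged sketch of the credential check behind this use case, against the login table defined earlier (the sample values are placeholders):

    DECLARE @user_name varchar(15), @password varchar(15);
    SET @user_name = 'admin';   -- placeholder credentials
    SET @password  = 'secret';

    -- one row returned means a successful login; no rows means the user is
    -- prompted to re-enter valid credentials
    SELECT user_type FROM login
    WHERE user_name = @user_name AND password = @password;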

Use Case 3: Edit Questions

Primary Actor: Teacher
Stakeholders Interest: Wants to keep the question bank up to date.
Pre Conditions: Teacher is identified and authenticated.
Post Conditions: All questions are saved.
Task: To edit questions.
Scenario: Teacher enters the question number, chapter name and subject name, and tries to edit the question by changing some options or phrases of the question.
Output: If the question is not present in the question bank, the system shows an error message that the question is not present and asks the teacher to enter the correct one.

Use Case 4: Schedule for Quiz

Primary Actor: Teacher
Stakeholder's Interest: Wants to take a quiz at a scheduled time and to make more students interested in taking the quiz.
Pre-Conditions: Teacher is identified and authenticated.
Post-Conditions: Given schedule is saved.
Task: Wants to create a new schedule for a quiz.
Scenario: The teacher enters the quiz number, start time, end time and other information to schedule a quiz.
Output: If any of the information is wrong, the system will prompt the user that the information is invalid and ask them to re-enter the details to schedule the quiz.
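
The schedule check can be expressed as a small validation routine. This is a sketch under the assumption that the start and end times arrive as Date values; the real form would also validate the quiz number and the other details.

    Public Function ValidSchedule(ByVal startTime As Date, _
                                  ByVal endTime As Date) As Boolean
        If startTime < Now Then
            ' Scheduled start lies in the past
            MsgBox "Invalid information - please re-enter the details"
            ValidSchedule = False
        ElseIf endTime <= startTime Then
            ' End time must come after the start time
            MsgBox "Invalid information - please re-enter the details"
            ValidSchedule = False
        Else
            ValidSchedule = True   ' the schedule can be saved
        End If
    End Function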

Use Case 5: Quiz Result

Primary Actor: Teacher
Stakeholder's Interest: Wants to evaluate the students.
Pre-Conditions: Teacher is identified and authenticated.
Post-Conditions: Quiz result is saved.
Task: Teacher wants to see the quiz result.
Scenario: The teacher enters the quiz number, course name, subject name and chapter name to get the result.
Output: The system checks whether these inputs are valid. If they are not valid, it prompts the user to enter a valid quiz number and other details to get the quiz result.
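
The result a teacher retrieves here comes from the system's automated marking. The document does not show the marking logic itself, so the following is only a minimal sketch, assuming the answer key and the student's answers are available as string arrays; CorrectAnswer and StudentAnswer are hypothetical names.

    ' CorrectAnswer() and StudentAnswer() are hypothetical arrays; the
    ' real system would read both from its database.
    Public Function CalculateScore(CorrectAnswer() As String, _
                                   StudentAnswer() As String) As Integer
        Dim i As Integer
        Dim score As Integer
        score = 0
        For i = LBound(CorrectAnswer) To UBound(CorrectAnswer)
            ' Award one mark for each answer that matches the key
            If StrComp(StudentAnswer(i), CorrectAnswer(i), vbTextCompare) = 0 Then
                score = score + 1
            End If
        Next i
        CalculateScore = score
    End Function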

Use Case 6: Add User Info

Primary Actor: Admin
Stakeholder's Interest: Wants to keep the user information up to date.
Pre-Conditions: Admin is identified and authenticated.
Post-Conditions: All information is saved.
Task: Wants to add new user information.
Scenario: The admin enters the user name, password and user type.
Output: The system checks whether the information is valid. If valid, the details are saved; otherwise they are not saved.

Use Case 7: Edit User Info

Primary Actor: Admin
Stakeholder's Interest: Wants to keep the user info up to date.
Pre-Conditions: Admin is identified and authenticated.
Post-Conditions: All information is saved.
Task: Admin wants to edit the user information.
Scenario: The admin types the user id and the user type of the user whose information is to be edited.
Output: If the user id and user type are valid, the admin can edit the information; otherwise an error message will be displayed.

Use Case 8: Get Feedback

Primary Actor: Student
Stakeholder's Interest: Wants to give feedback.
Pre-Conditions: Student is identified and authenticated.
Post-Conditions: All feedback is saved.
Task: Student selects a particular quiz.
Scenario: The student wants to give feedback, so he/she selects a quiz.
Output: If the quiz time is over, the student can give feedback; otherwise a message is shown stating "Your time is not over".

Use Case 9: Appear in Quiz

Primary Actor: Student
Stakeholder's Interest: Wants to attend the quiz.
Pre-Conditions: Student is identified and authenticated.
Post-Conditions: All answers are saved.
Task: Student starts to take the quiz.
Scenarios and Outputs:
1. The system checks whether the quiz time has started. If it has, the student can start the quiz.
2. The system checks whether the quiz time is over. If it is over and the student has not submitted, the system automatically finishes the quiz by showing the message "Your time is over".
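
In VB6 these two timing checks would typically run in a Timer control's event on the quiz form. The sketch below assumes quizStart and quizEnd hold the scheduled times; FinishQuiz is a hypothetical routine standing in for whatever the real system does to save the answers and close the quiz.

    ' Called periodically, e.g. from Timer1_Timer on the quiz form.
    Public Sub CheckQuizTime(ByVal quizStart As Date, ByVal quizEnd As Date)
        If Now < quizStart Then
            ' Scenario 1: the quiz time has not started yet
            MsgBox "The quiz has not started yet"
        ElseIf Now > quizEnd Then
            ' Scenario 2: time is over and the student has not submitted
            MsgBox "Your time is over"
            FinishQuiz   ' hypothetical: saves the answers and ends the quiz
        End If
    End Sub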


SYSTEM IMPLEMENTATION AND MAINTENANCE


SYSTEM IMPLEMENTATION

Implementation is an activity that continues throughout the development phase. It is the process of bringing a developed system into operational use and turning it over to the user. The new system and its components are to be tested in a structured and planned manner. A successful system should be delivered, and users should have confidence that the system will work efficiently and effectively. The more complex the system being implemented, the more involved the system analysis and design effort required for implementation will be. Implementation is the stage of the system when the theoretical design is turned into a working system. It involves careful planning, investigation of the current system and its constraints on implementation, design of methods to achieve the changeover, training of users in the new procedures, and evaluation of the changeover method. There are three types of implementation:

1. Implementation of a computer system to replace a manual system. The problems

involved are converting files, training users, creating accurate files, and verifying

printouts for integrity.

2. Implementation of a new computer system to replace an existing one. This is usually a

difficult conversion. If not properly planned, there can be many problems. Some larger

systems have taken as long as a year to convert.

3. Implementation of a modified application to replace an existing one using the same

computer. This type of conversion is relatively easy to handle, provided there are no

major changes in files.

SYSTEM MAINTENANCE


Maintenance corresponds to restoring something to its original condition, covering a wide range of activities including correcting coding and design errors and updating user support. Maintenance is performed most often to improve the existing software rather than in response to a crisis or failure, although the system would fail if not properly maintained. Software maintenance is an important part of software development because we have to spend considerable effort on maintenance. Software maintenance improves the software quality according to the requirements. After a system is successfully implemented, it should be maintained in a proper manner. The need for system maintenance is to make the system adaptable to changes in the system environment. There may be social, economic or technical changes which affect the implemented system. Software product enhancements may involve providing new functional capabilities, improving user displays and modes of interaction, and upgrading the performance characteristics of the system. So only through proper system maintenance procedures can the system be adapted to cope with these changes. We may define maintenance by describing four activities that are undertaken after a program is released for use.

The first maintenance activity occurs because it is unreasonable to assume that software testing will uncover all latent errors in a large software system. During the use of any large program, errors will occur and be reported to the developer. The process that includes the diagnosis and correction of one or more errors is called corrective maintenance. The second activity that contributes to a definition of maintenance occurs because of the rapid change that is encountered in every aspect of computing. Adaptive maintenance, an activity that modifies software to properly interface with a changing environment, is therefore both necessary and commonplace. The third activity that may be applied to a definition of maintenance occurs when a software package is successful. As the software is used, recommendations for new capabilities, modifications to existing functions, and general enhancements are received from users. To satisfy requests in this category, perfective maintenance is performed. This activity accounts for the majority of all effort expended on software maintenance. The fourth maintenance activity occurs when software is changed to improve future maintainability or reliability, or to provide a better basis for future enhancements. Often called preventive maintenance, this activity is characterized by reverse engineering and re-engineering techniques.


SCOPE OF PROJECT

Initially this project is to be implemented at the intranet level, but there is scope to implement it on a client-server network and later to upgrade it into a web site.


This project would be very useful for educational institutes where regular evaluation of students is required. Further, it can also be useful for anyone who requires feedback based on objective-type responses.

The Network Based Exam System is very useful for an educational institution to prepare a quiz and to save the time it would otherwise take to check the papers and prepare mark sheets. It helps the institute to test students and develop their skills.

Other main features of this quiz system are as follows:

- There is no restriction that the examiner has to be present when the candidate takes the test.
- It can be used in educational institutions as well as in the corporate world.
- It efficiently reduces the administrator's load in mass educational evaluation.
- It reduces the time candidates waste on extra materials such as paper, pen and scale.
- It gives quick results and prevents unfair scoring.
- It is focused on the benefit of both the students and the exam-conducting section.


FUTURE ENHANCEMENT

Nothing can be done in a single step, and this examination application naturally has some possible future enhancements. The enhancements that can improve the value of this application are the following:


- Descriptive answering.
- Facility for feedback on test questions and results.
- Sound can be included as part of a question.
- Video can be included as part of a question.
- Animations can be included as part of a question.

This system is currently implemented on the client machine only, but as a future enhancement it can be modified to work on a client-server network. The system can be enhanced even further by making it an internet based quiz system, a so-called online quiz system.


CONCLUSION

In the conventional examination system, students have to travel to examination centers and take proctored exams. A network based examination system takes advantage of network technologies to conduct exams, and the results are published without any delay. Almost every project is subject to change depending on the client's requirements. The system and the architecture of the proposed system are compatible, so new modules can be added without much difficulty. Since this module has its own unique properties, it can be extended further to make the system a complete one. The application was implemented and tested with real data and was found to be error free. Also, the system is protected from any unauthorized access. All the necessary validations are carried out in this project, so that any kind of user can make use of this application, and the necessary messages make them conscious of any errors they have made.