1.1 PREAMBLE OF THE PROJECT
The “DOCKET CHUNK SYSTEM” is a web-based project. The objective of this
application is to manage a file sharing system. The company can create an employee database,
with each employee assigned a folder. Within this folder, the employee can upload his own files
or download the ones already placed there by the company. When the company uploads a file
into an employee’s folder, the employee receives an email alerting him to the new file, with a
link to download it without logging in to the File Sharing System. The employee can also log in
to the system at any time and look for previously uploaded files.
1.2 OBJECTIVES OF THE PROJECT
The project is fully integrated with an Employee Relationship Management (ERM)
solution and is developed to be easily manageable, time-saving, and free of manual effort.
The objective of this application is to manage a file sharing system. The
company can create an employee database, with each employee assigned a folder. Within this
folder, the employee can upload his own files or download the ones already placed there by the
company. When the company uploads a file into an employee’s folder, the employee receives an
email alerting him to the new file, with a link to download it without logging in to the File
Sharing System. The employee can also log in to the system at any time and look for previously
uploaded files. Every user thinks about how to store information in files for future use.
Sometimes he may share his data with others; such data may be called public. Other data may
need protection, remaining private to a single user or a limited set of people.
2.1 Previous Methods
The existing system is semi-automated. In it, users save information in the form of Excel
sheets or on disk drives. No sharing is possible when the data is kept on disk drives, and the
system offers very little security for the saved data; some data may be lost through
mismanagement. It is a limited, less user-friendly system. Searching for particular information is
difficult and takes a lot of time. Users cannot restrict the file sharing options, each user knows
only his own information, and sharing public information with all users is very difficult.
2.2 Project Approach/Motivation of the Project
The development of the new system involves the following activities, which aim to
automate the entire process with a database-integration approach. User friendliness is provided
through the various controls of the system's rich user interface, which makes overall project
management much easier and more flexible. User information files are stored in a centralized
database maintained by the system; this gives good security for user information because the
data does not reside on the client machine. Authentication is provided, so only registered users
can access the application. A user can share his data with others and can also obtain data from
others. Report generation is provided, using data reports to produce different kinds of reports.
Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (e-mail)
transmission across Internet Protocol (IP) networks. SMTP was first defined by RFC 821 and
last updated by RFC 5321 (2008) which includes the extended SMTP (ESMTP) additions, and is
the protocol in widespread use today. SMTP is specified for outgoing mail transport and uses
TCP port 25. SMTP connections secured by SSL are known by the shorthand SMTPS, though
SMTPS is not a protocol in its own right.
While electronic mail servers and other mail transfer agents use SMTP to send and receive mail
messages, user-level client mail applications typically only use SMTP for sending messages to a
mail server for relaying.
Fig 3.1: Working of SMTP Protocol
SMTP is a connection-oriented, text-based protocol in which a mail sender communicates with a
mail receiver by issuing command strings and supplying necessary data over a reliable ordered
data stream channel. An SMTP session consists of commands originated by an SMTP client and
corresponding responses from the SMTP server so that the session is opened, and session
parameters are exchanged. A session may include zero or more SMTP transactions. An SMTP
transaction consists of three command/reply sequences. They are:
1. MAIL command, to establish the return address, a.k.a. Return-Path, 5321.From, mfrom,
or envelope sender.
2. RCPT command, to establish a recipient of this message. This command can be issued
multiple times, one for each recipient. These addresses are also part of the envelope.
3. DATA to send the message text. This is the content of the message, as opposed to its
envelope. It consists of a message header and a message body separated by an empty
line.
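The three command/reply sequences above can be sketched as a small function that builds the client side of one SMTP transaction. This is a Python sketch of the protocol flow only, not the project's C# code; server replies are omitted and the addresses are hypothetical.

```python
def smtp_envelope(sender, recipients, message):
    """Build the client-side command sequence for one SMTP transaction.

    Illustrates the MAIL/RCPT/DATA exchange described above; a real client
    would use a library (e.g. Python's smtplib) and handle server replies.
    """
    commands = [f"MAIL FROM:<{sender}>"]        # 1. return address (envelope sender)
    for rcpt in recipients:                     # 2. one RCPT per recipient
        commands.append(f"RCPT TO:<{rcpt}>")
    commands.append("DATA")                     # 3. message content follows
    commands.append(message + "\r\n.")          # header + body, ended by a lone dot
    return commands

# Example: the alert mail sent when the company uploads a file to a folder
cmds = smtp_envelope("noreply@company.example",
                     ["employee@company.example"],
                     "Subject: New file\r\n\r\nA file was uploaded to your folder.")
```

In a real deployment the system would hand the message to a library or mail server rather than speak raw SMTP itself.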
File sharing is the practice of distributing or providing access to digitally stored information,
such as computer programs, multimedia (audio, images and video), documents, or electronic
books. It may be implemented through a variety of ways. Common methods of storage,
transmission and dispersion include manual sharing utilizing removable media, centralized
servers on computer networks, World Wide Web-based hyperlinked documents, and the use of
distributed peer-to-peer networking.
What are folders?
A folder is a representation of result set data. The visual concept of a folder is
analogous to a directory in Windows, where folders are the containers and items are the files held
in the folders. A folder represents a group of related items. Discoverer end users select items
from one or more folders to get information from the database.
There are three types of folder:
simple folders, which contain items based on columns in a single database table or view
complex folders, which can contain items based on columns from multiple database
tables or views
custom folders, which are based on SQL statements
To the Discoverer end user, the type of a particular folder is immaterial.
Folders can include items, calculated items, joins, conditions, item classes, and hierarchies. Items
in a folder can be used in summary folders and to define hierarchies.
Simple folders contain items based on columns in a single database table or view. Items in a
simple folder can also represent calculations on other items in the folder.
Fig 3.2: How Discoverer Administrator represents a table/view
Complex folders contain items from one or more folders. Complex folders enable you to create
a combined view of data from multiple folders. This is analogous to a view in the database.
Fig 3.3: How a complex folder groups items from multiple folders
What are the benefits of using complex folders instead of database views?
You could produce the same result set using a database view instead of a complex folder.
However, using a complex folder instead of a database view offers several advantages. You can:
create a complex folder without the database privileges required to create a database view
control access to a complex folder using the folder's business area
manage complex folders entirely within Discoverer Administrator, whereas database
views can be complicated to maintain
What are custom folders?
Custom folders are folders based on a SQL statement which could include SET operators or a
synonym that you type directly into a dialog. By defining a custom folder, you can quickly create
a folder that represents a complicated result set. When you save the custom folder, Discoverer
Administrator creates items using the 'Select' part of the SQL statement you have entered.
A centralized database keeps all of its data in one place. Because everything resides at a single
site, a bottleneck can occur, and data availability is less efficient than in a distributed database.
Users can still issue commands from any location to access the data without affecting the
working of the database. A distributed database, by contrast, stores copies of the data at different
locations; its advantage is that when a user wants data, the nearest site provides it, so access
takes less time.
Fig 3.4 Sample figure on Centralized Database
4.1 Study of the system:
After careful analysis, the system has been identified as consisting of the following
modules:
1. Administration
2. Users
3. File Uploading and Downloading
4. File Sharing
5. Authentication
4.1.1 Administration
The administrator is authenticated to monitor the complete system and
has all privileges. The administrator can register a user directly and edit the
information of any registered user.
File Type: The administrator can add different file types, and users can upload files of
those types only. The file types normally include
Word File
PDF file
Text File
Images
Video/Audio File
User Login Tracking: The administrator can track user logins. To trace a
user's activity, the administrator provides the user id and the date for which the
details are needed.
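As a sketch of this lookup, assuming a hypothetical in-memory login log (the real system would read these records from its database), the administrator's query reduces to a filter on user id and date:

```python
from datetime import date

# Hypothetical login log; illustrative data, not the project's schema.
login_log = [
    {"user_id": "u01", "date": date(2024, 3, 1), "time": "09:12"},
    {"user_id": "u01", "date": date(2024, 3, 2), "time": "10:05"},
    {"user_id": "u02", "date": date(2024, 3, 1), "time": "11:30"},
]

def track_logins(user_id, on_date):
    """Return the login entries for one user on one date, as 4.1.1 describes."""
    return [e for e in login_log if e["user_id"] == user_id and e["date"] == on_date]
```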
4.1.2 Users
A user is simply a registered user.
A registered user uploads information directly into the system. Before uploading, he
must supply his credentials for authentication.
An uploaded file has one of two priorities.
Public priority
Public priority means the uploaded file is shared with all users who are
registered in the system.
Private priority
Private priority means the uploaded file is accessible only to the user who
uploaded it. Other users have no permission to download it.
4.1.3 File Uploading and Downloading
All registered users have the authority to upload any file into the system, and the
admin can track all uploaded files. Based on the priority, users can download files from the
system.
For uploading and downloading, the system provides a good user interface that is easy to
use. The user can save a downloaded file at any desired location.
4.1.4 File Sharing
For every uploaded file, the user can set a priority.
Private means no access for other users; the file is simply personal.
Public means every user can access the file.
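The priority rules above amount to a one-line access check. The function and names below are illustrative Python, not the project's actual C# code:

```python
def can_download(file_owner, file_priority, requesting_user):
    """Decide download access from a file's priority.

    Public files are downloadable by every registered user; private
    files only by the user who uploaded them.
    """
    if file_priority == "public":
        return True                          # public: shared with all users
    return requesting_user == file_owner     # private: owner only
```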
4.1.5 Authentication:
Authentication is a technique for providing security to the system.
Every user must enter the system through the login page.
The login page restricts unauthorized users. A user must provide credentials,
a user id and password, to log in to the system. For this purpose, the system
maintains data for all users.
Whenever a user enters his user id and password, the system checks the database for
the user's existence. If the user exists, he is treated as a valid user; otherwise the
request is denied.
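A minimal sketch of this credential check, assuming a hypothetical user store and SHA-256 hashing (the report does not specify how passwords are stored; a production system should use a salted, slow hash such as PBKDF2):

```python
import hashlib

# Hypothetical credential store; the real system keeps users in SQL Server.
_users = {"alice": hashlib.sha256(b"s3cret").hexdigest()}

def authenticate(user_id, password):
    """Return True only if the user exists and the password hash matches."""
    stored = _users.get(user_id)
    if stored is None:                                   # unknown user: denied
        return False
    return hashlib.sha256(password.encode()).hexdigest() == stored
```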
4.2 Operating Environment:
4.2.1 Software Interface
Operating System Server : Windows XP or later
Database Server : Microsoft SQL Server-2005
Tools : Microsoft Visual Studio .Net-2008, Microsoft
Internet Explorer
User Interface : Asp.Net with Ajax
Code Behind : VC#.Net
4.2.2 Hardware Interface
Processor : Dual Core 2.6 GHz or higher
RAM : 1 GB or more
Hard Disk : 40 GB or more
4.3 Technologies:
INTRODUCTION TO .NET FRAMEWORK
The Microsoft .NET Framework is a software technology that is available with several
Microsoft Windows operating systems. It includes a large library of pre-coded solutions to
common programming problems and a virtual machine that manages the execution of programs
written specifically for the framework. The .NET Framework is a key Microsoft offering and is
intended to be used by most new applications created for the Windows platform.
The pre-coded solutions that form the framework's Base Class Library cover a large range of
programming needs in a number of areas, including user interface, data access, database
connectivity, cryptography, web application development, numeric algorithms, and network
communications. The class library is used by programmers, who combine it with their own code
to produce applications.
Principal design features
Interoperability
Because interaction between new and older applications is commonly required, the .NET
Framework provides means to access functionality that is implemented in programs that
execute outside the .NET environment. Access to COM components is provided in the
System.Runtime.InteropServices and System.EnterpriseServices namespaces of the
framework; access to other functionality is provided using the P/Invoke feature.
Common Runtime Engine
The Common Language Runtime (CLR) is the virtual machine component of the .NET
framework. All .NET programs execute under the supervision of the CLR, guaranteeing
certain properties and behaviors in the areas of memory management, security, and
exception handling.
Base Class Library
The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library
of functionality available to all languages using the .NET Framework. The BCL provides
classes which encapsulate a number of common functions, including file reading and
writing, graphic rendering, database interaction and XML document manipulation.
Simplified Deployment
Installation of computer software must be carefully managed to ensure that it does not
interfere with previously installed software, and that it conforms to security requirements.
The .NET framework includes design features and tools that help address these
requirements.
Security
The design is meant to address some of the vulnerabilities, such as buffer overflows, that
have been exploited by malicious software. Additionally, .NET provides a common
security model for all applications.
Portability
The design of the .NET Framework allows it to theoretically be platform agnostic, and
thus cross-platform compatible. That is, a program written to use the framework should
run without change on any type of system for which the framework is implemented.
Microsoft's commercial implementations of the framework cover Windows, Windows
CE, and the Xbox 360.
Architecture
Fig 4.1: Visual overview of the Common Language Infrastructure (CLI)
Common Language Infrastructure
The core aspects of the .NET framework lie within the Common Language
Infrastructure, or CLI. The purpose of the CLI is to provide a language-neutral platform for
application development and execution, including functions for exception handling, garbage
collection, security, and interoperability. Microsoft's implementation of the CLI is called the
Common Language Runtime or CLR.
Assemblies
The intermediate CIL code is housed in .NET assemblies. As mandated by specification,
assemblies are stored in the Portable Executable (PE) format, common on the Windows platform
for all DLL and EXE files. The assembly consists of one or more files, one of which must
contain the manifest, which has the metadata for the assembly. The complete name of an
assembly (not to be confused with the filename on disk) contains its simple text name, version
number, culture, and public key token.
Metadata
All CLI code is self-describing through .NET metadata. The CLR checks the metadata to
ensure that the correct method is called. Metadata is usually generated by language compilers but
developers can create their own metadata through custom attributes. Metadata contains
information about the assembly, and is also used to implement the reflective programming
capabilities of .NET Framework.
Security
.NET has its own security mechanism with two general features: Code Access Security
(CAS), and validation and verification. Code Access Security is based on evidence that is
associated with a specific assembly. Typically the evidence is the source of the assembly
(whether it is installed on the local machine or has been downloaded from the intranet or
Internet). Code Access Security uses evidence to determine the permissions granted to the code.
Other code can demand that calling code be granted a specified permission.
Class library
Namespaces in the BCL
System
System.CodeDom
System.Collections
System.Diagnostics
System.Globalization
System.IO
System.Resources
System.Text
System.Text.RegularExpressions
Microsoft .NET Framework includes a set of standard class libraries. The class library
is organized in a hierarchy of namespaces. Most of the built in APIs are part of either System.* or
Microsoft.* namespaces. The .NET class libraries are available to all .NET languages. The .NET
Framework class library is divided into two parts: the Base Class Library and the Framework
Class Library.
The Base Class Library (BCL) includes a small subset of the entire class library and is
the core set of classes that serve as the basic API of the Common Language Runtime. The
Framework Class Library (FCL) is a superset of the BCL classes and refers to the entire class
library that ships with .NET Framework.
Memory management
The .NET Framework CLR frees the developer from the burden of managing memory
(allocating and freeing up when done); instead it does the memory management itself. To this
end, the memory allocated to instantiations of .NET types (objects) is done contiguously from
the managed heap, a pool of memory managed by the CLR.
The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep
garbage collector. The GC runs only when a certain amount of memory has been used or there is
enough pressure for memory on the system. Since it is not guaranteed when the conditions to
reclaim memory are reached, the GC runs are non-deterministic. Each .NET application has a set
of roots, which are pointers to objects on the managed heap (managed objects). These include
references to static objects and objects defined as local variables or method parameters currently
in scope, as well as objects referred to by CPU registers.
The GC used by .NET Framework is actually generational. Objects are assigned a
generation; newly created objects belong to Generation 0. The objects that survive a garbage
collection are tagged as Generation 1, and the Generation 1 objects that survive another
collection are Generation 2 objects.
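The promotion rule described above can be modeled with a toy mark-and-sweep pass. The dict-based objects and the `survive_collection` helper are purely illustrative, not .NET APIs:

```python
def survive_collection(objects):
    """Reclaim dead objects and promote survivors one generation.

    Toy model of .NET's generational GC: 'alive' stands for reachability
    from the application's roots; generations are capped at Gen 2.
    """
    survivors = [o for o in objects if o["alive"]]   # sweep: dead objects reclaimed
    for o in survivors:
        o["gen"] = min(o["gen"] + 1, 2)              # survivors move up a generation
    return survivors

# Two reachable objects and one garbage object before a collection
heap = [{"gen": 0, "alive": True},
        {"gen": 0, "alive": False},
        {"gen": 1, "alive": True}]
heap = survive_collection(heap)
```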
Versions
Microsoft started development on the .NET Framework in the late 1990s originally under the
name of Next Generation Windows Services (NGWS). By late 2000 the first beta versions
of .NET 1.0 were released.
Fig 4.2: The .NET Framework stack
Version Version Number Release Date
1.0 1.0.3705.0 2002-01-05
1.1 1.1.4322.573 2003-04-01
2.0 2.0.50727.42 2005-11-07
3.0 3.0.4506.30 2006-11-06
3.5 3.5.21022.8 2007-11-09
ASP.NET
SERVER APPLICATION DEVELOPMENT
Server-side applications in the managed world are implemented through runtime hosts.
Unmanaged applications host the common language runtime, which allows your custom
managed code to control the behavior of the server. This model provides you with all the features
of the common language runtime and class library while gaining the performance and scalability
of the host server.
The following illustration shows a basic network schema with managed code running in
different server environments. Servers such as IIS and SQL Server can perform standard
operations while your application logic executes through the managed code.
SERVER-SIDE MANAGED CODE
ASP.NET is the hosting environment that enables developers to use the .NET Framework
to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a
complete architecture for developing Web sites and Internet-distributed objects using managed
code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing
mechanism for applications, and both have a collection of supporting classes in the .NET
Framework.
XML Web services, an important evolution in Web-based technology, are distributed,
server-side application components similar to common Web sites. However, unlike Web-based
applications, XML Web services components have no UI and are not targeted for browsers such
as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable
software components designed to be consumed by other applications, such as traditional client
applications, Web-based applications, or even other XML Web services. As a result, XML Web
services technology is rapidly moving application development and deployment into the highly
distributed environment of the Internet.
If you develop and publish your own XML Web service, the .NET Framework provides a
set of classes that conform to all the underlying communication standards, such as SOAP,
WSDL, and XML. Using those classes enables you to focus on the logic of your service, without
concerning yourself with the communications infrastructure required by distributed software
development.
Finally, like Web Forms pages in the managed environment, your XML Web service will run
with the speed of native machine language using the scalable communication of IIS.
ACTIVE SERVER PAGES.NET
ASP.NET is a programming framework built on the common language runtime that can
be used on a server to build powerful Web applications. ASP.NET offers several important
advantages over previous Web development models:
Enhanced Performance. ASP.NET is compiled common language runtime code running
on the server. Unlike its interpreted predecessors, ASP.NET can take advantage of early
binding, just-in-time compilation, native optimization, and caching services right out of the
box. This amounts to dramatically better performance before you ever write a line of code.
World-Class Tool Support. The ASP.NET framework is complemented by a rich
toolbox and designer in the Visual Studio integrated development environment. WYSIWYG
editing, drag-and-drop server controls, and automatic deployment are just a few of the
features this powerful tool provides.
Power and Flexibility. Because ASP.NET is based on the common language runtime,
the power and flexibility of that entire platform is available to Web application developers.
The .NET Framework class library, Messaging, and Data Access solutions are all seamlessly
accessible from the Web. ASP.NET is also language-independent, so you can choose the
language that best applies to your application or partition your application across many
languages. Further, common language runtime interoperability guarantees that your existing
investment in COM-based development is preserved when migrating to ASP.NET.
Simplicity. ASP.NET makes it easy to perform common tasks, from simple form
submission and client authentication to deployment and site configuration. For example, the
ASP.NET page framework allows you to build user interfaces that cleanly separate
application logic from presentation code and to handle events in a simple, Visual Basic - like
forms processing model. Additionally, the common language runtime simplifies
development, with managed code services such as automatic reference counting and garbage
collection.
Manageability. ASP.NET employs a text-based, hierarchical configuration system,
which simplifies applying settings to your server environment and Web applications.
Because configuration information is stored as plain text, new settings may be applied
without the aid of local administration tools. This "zero local administration" philosophy
extends to deploying ASP.NET Framework applications as well. An ASP.NET Framework
application is deployed to a server simply by copying the necessary files to the server. No
server restart is required, even to deploy or replace running compiled code.
Scalability and Availability. ASP.NET has been designed with scalability in mind, with
features specifically tailored to improve performance in clustered and multiprocessor
environments. Further, processes are closely monitored and managed by the ASP.NET
runtime, so that if one misbehaves (leaks, deadlocks), a new process can be created in its
place, which helps keep your application constantly available to handle requests.
Customizability and Extensibility. ASP.NET delivers a well-factored architecture that
allows developers to "plug-in" their code at the appropriate level. In fact, it is possible to
extend or replace any subcomponent of the ASP.NET runtime with your own custom-written
component. Implementing custom authentication or state services has never been easier.
Security. With built in Windows authentication and per-application configuration, you
can be assured that your applications are secure.
LANGUAGE SUPPORT
The Microsoft .NET Platform currently offers built-in support for three languages: C#,
Visual Basic, and JScript.
WHAT IS ASP.NET WEB FORMS?
The ASP.NET Web Forms page framework is a scalable common language runtime
programming model that can be used on the server to dynamically generate Web pages.
Intended as a logical evolution of ASP (ASP.NET provides syntax compatibility with
existing pages), the ASP.NET Web Forms framework has been specifically designed to address a
number of key deficiencies in the previous model. In particular, it provides:
The ability to create and use reusable UI controls that can encapsulate common functionality
and thus reduce the amount of code that a page developer has to write.
The ability for developers to cleanly structure their page logic in an orderly fashion (not
"spaghetti code").
The ability for development tools to provide strong WYSIWYG design support for pages
(existing ASP code is opaque to tools).
ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be
deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx
resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework
class. This class can then be used to dynamically process incoming requests. (Note that the .aspx
file is compiled only the first time it is accessed; the compiled type instance is then reused across
multiple requests).
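The compile-once behaviour described above can be sketched as a cache keyed by path. `get_page_class` and `compile_fn` are hypothetical stand-ins for the runtime's parse-and-compile step, not real ASP.NET APIs:

```python
compiled_cache = {}

def get_page_class(path, compile_fn):
    """Compile a page on first request, then reuse the compiled result."""
    if path not in compiled_cache:
        compiled_cache[path] = compile_fn(path)   # first request: parse and compile
    return compiled_cache[path]                   # later requests: cached type reused

calls = []
compile_fn = lambda p: calls.append(p) or f"class:{p}"   # records each compile
page = get_page_class("/default.aspx", compile_fn)
page2 = get_page_class("/default.aspx", compile_fn)      # served from the cache
```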
CODE-BEHIND WEB FORMS
ASP.NET supports two methods of authoring dynamic pages. The first is the method
shown in the preceding samples, where the page code is physically declared within the
originating .aspx file. An alternative approach--known as the code-behind method--enables the
page code to be more cleanly separated from the HTML content into an entirely separate file.
INTRODUCTION TO ASP.NET SERVER CONTROLS
In addition to (or instead of) using <% %> code blocks to program dynamic content,
ASP.NET page developers can use ASP.NET server controls to program Web pages. Server
controls are declared within an .aspx file using custom tags or intrinsic HTML tags that contain a
runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the
System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the
controls is assigned the type of System.Web.UI.HtmlControls.HtmlGenericControl.
In addition to supporting standard HTML input controls, ASP.NET enables developers to
utilize richer custom controls on their pages. For example, the following sample demonstrates
how the <asp:adrotator> control can be used to dynamically display rotating ads on a page.
1. ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.
2. ASP.NET Web Forms pages can target any browser client (there are no script library or
cookie requirements).
3. ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.
4. ASP.NET server controls provide an easy way to encapsulate common functionality.
5. ASP.NET ships with 45 built-in server controls. Developers can also use controls built by
third parties.
6. ASP.NET server controls can automatically project both uplevel and downlevel HTML.
7. ASP.NET templates provide an easy way to customize the look and feel of list server
controls.
8. ASP.NET validation controls provide an easy way to do declarative client or server data
validation.
C#.NET
ADO.NET OVERVIEW
ADO.NET is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically for the web with
scalability, statelessness, and XML in mind.
ADO.NET uses some ADO objects, such as the Connection and Command objects, and also
introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and
DataAdapter.
The important distinction between this evolved stage of ADO.NET and previous data
architectures is that there exists an object -- the DataSet -- that is separate and distinct from any
data stores. Because of that, the DataSet functions as a standalone entity. You can think of the
DataSet as an always disconnected recordset that knows nothing about the source or destination
of the data it contains. Inside a DataSet, much like in a database, there are tables, columns,
relationships, constraints, views, and so forth.
A Data Adapter is the object that connects to the database to fill the DataSet. Then, it
connects back to the database to update the data there, based on operations performed while the
DataSet held the data. In the past, data processing has been primarily connection-based. Now, in
an effort to make multi-tiered apps more efficient, data processing is turning to a message-based
approach that revolves around chunks of information. At the center of this approach is the
DataAdapter, which provides a bridge to retrieve and save data between a DataSet and its
source data store. It accomplishes this by means of requests to the appropriate SQL commands
made against the data store.
While the DataSet has no knowledge of the source of its data, the managed provider has
detailed and specific information. The role of the managed provider is to connect, fill, and persist
the DataSet to and from data stores. The OLE DB and SQL Server .NET Data Providers
(System.Data.OleDb and System.Data.SqlClient) that are part of the .NET Framework provide
four basic objects: the Command, Connection, DataReader and DataAdapter. The following
sections will introduce you to some objects that have evolved, and some that are new. These
objects are:
Connections. For connection to and managing transactions against a database.
Commands. For issuing SQL commands against a database.
DataReaders. For reading a forward-only stream of data records from a SQL Server data
source.
DataSet. For storing, Remoting and programming against flat data, XML data and
relational data.
DataAdapters. For pushing data into a DataSet, and reconciling data against a database.
When dealing with connections to a database, there are two different options: SQL Server
.NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider
(System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider. These
are written to talk directly to Microsoft SQL Server. The OLE DB .NET Data Provider is used to
talk to any OLE DB provider (as it uses OLE DB underneath).
Connections:
Connections are used to 'talk to' databases, and are represented by provider-specific
classes such as SqlConnection. Commands travel over connections and resultsets are returned in
the form of streams which can be read by a DataReader object, or pushed into a DataSet object.
Commands:
Commands contain the information that is submitted to a database, and are represented by
provider-specific classes such as SqlCommand. A command can be a stored procedure call, an
UPDATE statement, or a statement that returns results. You can also use input and output
parameters, and return values as part of your command syntax. The example below shows how
to issue an INSERT statement against the Northwind database.
DataReaders:
The DataReader object is somewhat synonymous with a read-only/forward-only cursor over
data. The DataReader API supports flat as well as hierarchical data. A DataReader object is
returned after executing a command against a database. The format of the returned DataReader
object is different from a recordset. For example, you might use the DataReader to show the
results of a search list in a web page.
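A minimal sketch of the forward-only read pattern (SqlDataReader in the project; Python's sqlite3 cursor is used here as a runnable stand-in, with hypothetical table data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (Name TEXT)")
conn.executemany("INSERT INTO Employees VALUES (?)", [("Asha",), ("Ravi",)])

# Executing a command returns a cursor that is read one row at a time,
# front to back: the same forward-only model as a DataReader.
names = []
for (name,) in conn.execute("SELECT Name FROM Employees ORDER BY Name"):
    names.append(name)
print(names)  # ['Asha', 'Ravi']
```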
DATASETS AND DATAADAPTERS:
DataSets
The DataSet object is similar to the ADO Recordset object, but more powerful, and with one
other important distinction: the DataSet is always disconnected. The DataSet object represents a
cache of data, with database-like structures such as tables, columns, relationships, and
constraints. However, though a DataSet can and does behave much like a database, it is
important to remember that DataSet objects do not interact directly with databases, or other
source data. This DataSet is then used by a DataAdapter (or other objects) to update the
original data source.
The DataSet has many XML characteristics, including the ability to produce and consume XML
data and XML schemas. XML schemas can be used to describe schemas interchanged via
WebServices. In fact, a DataSet with a schema can actually be compiled for type safety and
statement completion.
DATAADAPTERS (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source data.
Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and
SqlConnection) can increase overall performance when working with a Microsoft SQL Server
database.
The DataAdapter object uses commands to update the data source after changes have been
made to the DataSet. Calling the Fill method of the DataAdapter executes the SELECT command;
calling the Update method executes the INSERT, UPDATE, or DELETE command for each changed
row. You can explicitly set these commands in order to control the statements used at runtime to
resolve changes, including the use of stored procedures; otherwise they are generated at run
time. However, this run-time generation requires an extra round-trip to the server in order to
gather the required metadata, so explicitly providing the INSERT, UPDATE, and DELETE
commands at design time will result in better run-time performance.
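The Fill/Update round-trip can be sketched as follows. This is a hand-rolled illustration of the disconnected pattern (fetch into a local cache, edit offline, push changes back), not the actual SqlDataAdapter API, and it uses Python's sqlite3 as a stand-in for SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Files (FileId INTEGER PRIMARY KEY, FileName TEXT)")
conn.execute("INSERT INTO Files VALUES (1, 'report.doc')")
conn.commit()

# "Fill": run the SELECT command and cache the rows in memory (the DataSet role).
dataset = [dict(FileId=r[0], FileName=r[1])
           for r in conn.execute("SELECT FileId, FileName FROM Files")]

# Work against the cache while disconnected from the source.
dataset[0]["FileName"] = "report_v2.doc"

# "Update": issue an UPDATE command per changed row to reconcile the source.
for row in dataset:
    conn.execute("UPDATE Files SET FileName = ? WHERE FileId = ?",
                 (row["FileName"], row["FileId"]))
conn.commit()

current = conn.execute("SELECT FileName FROM Files").fetchone()[0]
print(current)  # report_v2.doc
```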
1. ADO.NET is the next evolution of ADO for the .NET Framework.
2. ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new
objects, the DataSet and DataAdapter, are provided for these scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in a cache for updates.
4. There is a lot more information about ADO.NET in the documentation.
5. Remember, you can execute a command directly against the database in order to do
inserts, updates, and deletes. You don't need to first put data into a DataSet in order to insert,
update, or delete it.
SQL SERVER -2005
A database management system, or DBMS, gives the user access to their data and helps them
transform the data into information. Such database management systems include dBase, Paradox,
IMS, and SQL Server. These systems allow users to create, update and extract
information from their databases.
A database is a structured collection of data. Data refers to the characteristics of people,
things and events. SQL Server stores each data item in its own field. In SQL Server, the fields
relating to a particular person, thing or event are bundled together to form a single complete unit
of data, called a record (it can also be referred to as a row or an occurrence). Each record is made
up of a number of fields. No two fields in a record can have the same field name.
During an SQL Server Database design project, the analysis of your business needs
identifies all the fields or attributes of interest. If your business needs change over time, you
define any additional fields or change the definition of existing fields.
SQL SERVER TABLES
SQL Server stores records relating to each other in a table. Different tables are created
for the various groups of information. Related tables are grouped together to form a database.
Every table in SQL Server has a field or a combination of fields that uniquely identifies
each record in the table. The Unique identifier is called the Primary Key, or simply the Key.
The primary key provides the means to distinguish one record from all others in a table. It allows
the user and the database system to identify, locate and refer to one particular record in the
database.
RELATIONAL DATABASE
Sometimes all the information of interest to a business operation can be stored in one
table. SQL Server makes it very easy to link the data in multiple tables. Matching an employee
to the department in which they work is one example. This is what makes SQL Server a
relational database management system, or RDBMS. It stores data in two or more tables and
enables you to define relationships between the tables.
FOREIGN KEY
When a field in one table matches the primary key of another table, that field is referred to as a
foreign key. A foreign key is a field or a group of fields in one table whose values match those
of the primary key of another table.
REFERENTIAL INTEGRITY
Not only does SQL Server allow you to link multiple tables, it also maintains consistency
between them. Ensuring that the data among related tables is correctly matched is referred to as
maintaining referential integrity.
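The primary key, foreign key, and referential integrity concepts above can be demonstrated with a small runnable sketch. SQLite (via Python's stdlib sqlite3) stands in for SQL Server here, and the Department/Employee tables are hypothetical examples, not tables from this project:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE Department (DeptId INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("""CREATE TABLE Employee (
    EmpId  INTEGER PRIMARY KEY,
    DeptId INTEGER REFERENCES Department(DeptId))""")
conn.execute("INSERT INTO Department VALUES (10, 'Accounts')")
conn.execute("INSERT INTO Employee VALUES (1, 10)")  # matches a Department row

# Referential integrity: a row pointing at a missing department is rejected.
try:
    conn.execute("INSERT INTO Employee VALUES (2, 99)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```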
DATA ABSTRACTION
A major purpose of a database system is to provide users with an abstract view of the
data. This system hides certain details of how the data is stored and maintained. Data abstraction
is divided into three levels.
Physical level: This is the lowest level of abstraction at which one describes how the data are
actually stored.
Conceptual Level: At this level of abstraction, the description covers what data are actually
stored, the entities and their attributes, and the relationships among them.
View level: This is the highest level of abstraction at which one describes only part of the
database.
ADVANTAGES OF RDBMS
Redundancy can be avoided
Inconsistency can be eliminated
Data can be Shared
Standards can be enforced
Security restrictions can be applied
DISADVANTAGES OF DBMS
A significant disadvantage of the DBMS system is cost. In addition to the cost of
purchasing or developing the software, the hardware has to be upgraded to allow for the
extensive programs and the workspace required for their execution and storage. While
centralization reduces duplication, the lack of duplication requires that the database be
adequately backed up so that in case of failure the data can be recovered.
FEATURES OF SQL SERVER (RDBMS)
SQL SERVER is one of the leading database management systems (DBMS) because it
meets the uncompromising requirements of today's most demanding information systems. From
complex decision support systems (DSS) to the most rigorous online transaction processing
(OLTP) applications, even applications that require simultaneous DSS and OLTP access to the
same critical data, SQL Server leads the industry in both performance and capability.
SQL SERVER is a truly portable, distributed, and open DBMS that delivers unmatched
performance, continuous operation and support for every database.
SQL SERVER RDBMS is a high-performance, fault-tolerant DBMS specially designed
for online transaction processing and for handling large database applications.
With the transaction processing option, SQL SERVER offers features that contribute to a very
high level of transaction processing throughput, such as
the row-level lock manager.
ENTERPRISE WIDE DATA SHARING
The unrivaled portability and connectivity of the SQL SERVER DBMS enables all the
systems in the organization to be linked into a single, integrated computing resource.
PORTABILITY
SQL SERVER is fully portable to more than 80 distinct hardware and operating systems
platforms, including UNIX, MSDOS, OS/2, Macintosh and dozens of proprietary platforms.
This portability gives complete freedom to choose the database server platform that meets the
system requirements.
OPEN SYSTEMS
SQL SERVER offers a leading implementation of industry-standard SQL. SQL
Server's open architecture integrates SQL SERVER and non-SQL SERVER DBMSs with the
industry's most comprehensive collection of tools, applications, and third-party software
products. This open architecture provides transparent access to data from other relational
databases and even non-relational databases.
DISTRIBUTED DATA SHARING
SQL Server's networking and distributed database capabilities allow access to data stored on a
remote server with the same ease as if the information were stored on a single local computer.
UNMATCHED PERFORMANCE
The most advanced architecture in the industry allows the SQL SERVER DBMS to
deliver unmatched performance.
4.4 UML Diagrams
Unified modeling language:
The Unified Modeling Language (UML) is a standard language for writing software
blueprints. The UML may be used to visualize, specify, construct, and document the artifacts of a
software-intensive system. It is a very expressive language, addressing all the views needed to
develop and then deploy such systems.
The UML is used most effectively in the design phase of a system.
The vocabulary of the UML encompasses three kinds of building blocks:
Things
Relationships
Diagrams
The diagrams that are employed to design this project are:
Class diagram
Use Case diagram
Sequence diagram
Collaboration diagram
Activity diagram
All these Diagrams give the users a clear idea of design of the system.
4.4.1 Class Diagram:
Class diagrams are widely used to describe the types of objects in a system and their
relationships. Class diagrams describe three different perspectives when designing a system:
conceptual, specification, and implementation.
Class
A class diagram is built from classes and three main kinds of relationships: dependency,
generalization, and association.
Dependency:
It is a using relationship that states a change in specification of one thing may affect another
thing that uses it, but not necessarily the reverse.
Generalization:
It is a relationship between general things and a more specific kind of those things
that may be used to replace the general things. It is sometimes called as “is a kind of”
relationship. This gives the parent and child relation between actors in a usecase diagram.
Association:
It is a structural relationship specifying that objects of one thing are connected to objects of
another. A plain association between two classes represents a structural relationship between
peers. Aggregation is a special form of association, also called a whole-part relationship, in
which one class represents a larger thing that consists of smaller things.
Class Diagram of the System:
The diagram comprises the following classes (attributes; operations), with multiplicities such
as 1..*, 0..*, and 1..n relating them as drawn in the original figure:
cls_admin (a_code : varchar, a_name : char, a_password : varchar, a_files : varchar, a_reports : varchar; login(), reports(), logout(), fileupload())
cls_user (u_code : varchar, u_name : char, u_password : varchar, u_foldername : varchar, u_filedetails : varchar; login(), createfolder(), userdata(), addusers(), opname())
cls_folder (fld_id : varchar, fld_name : char, fld_size : int, fld_type : char, fld_owner : char; create(), delete(), modify())
cls_subfolder (subfld_id : varchar, subfld_name : char, subfld_data : varchar, subfld_size : int, subfld_type : char; create(), delete(), modify())
cls_file (f_id : varchar, f_name : char, f_size : int, f_type : char, f_permission : char; addfiles(), removefiles(), size(), share())
cls_fileupload (f_id : varchar, uf_time : datetime, uf_details : varchar; upload())
cls_download (f_id : varchar, fd_time : datetime, fd_owner : char; download())
cls_share (f_id : varchar, f_mode : varchar, f_viewers; putmode(), selectusers(), removeshare())
cls_email (toaddr : varchar, bodymsg : varchar, fromaddr : varchar; generate(), recieve(), send())
cls_reports (r_time : datetime, r_sender : char, r_reciever : char, r_address : char; send(), recieve(), save())
Fig 4.4.1.1: Class Diagram of the System
4.4.2 Use case Diagrams:
A use case is a set of scenarios that describe the interaction between the
user and a system. These are used for modeling and organizing the behaviors of a
system. A use case diagram displays the relationship among actors and use cases.
The two main components of a use case diagram are use cases and actors.
The relationships used in these diagrams are Association, Dependency, and
Generalization.
Use case diagram on overview of system
(Actors: admin, user. Use cases: registration, login, user login track, reports, services, file
upload/download, logout.)
Fig 4.4.2.1: Use case diagram on overview of system
Use case diagram for user
(Actor: user. Use cases: registration, login, folder operations, file upload/download, logout.)
Fig 4.4.2.2: Use case diagram for user
Use case diagram for administrator
(Actor: admin. Use cases: login, user login track, reports, services, file upload, logout.)
Fig 4.4.2.3: Use case diagram for Administrator
4.4.3 Sequence Diagrams:
The sequence diagram emphasizes the time ordering of messages between objects. It
models collaboration of objects based on a time sequence. Sequence diagrams show a detailed
flow for a specific use case or even just part of a specific use case. They are almost self-
explanatory; they show the calls between the different objects in their sequence and can show, at
a detailed level, different calls to different objects.
Object Lifeline:
An object lifeline is the vertical dashed line that represents the existence of an object over
a period of time. Objects may be created during the interaction; their lifelines start with the
receipt of the message stereotyped as create. Objects may also be destroyed during the
interaction; their lifelines end with the receipt of the message stereotyped as destroy.
Focus of control:
Focus of control is a tall thin rectangle that shows the period of time during which an
object is performing an action, either directly or through a subordinate procedure. The top of the
rectangle is aligned with the start of the action; the bottom is aligned with its completion.
Sequence diagram for user login
(Objects: u:user, h:homepage, l:loginpage, d:database, uh:user homepage. Messages: goto(),
select login(), login button click with empty values, enter values & submit(), Valid User(),
Not valid().)
Fig 4.4.3.1: Sequence diagram for user login
Sequence diagram for user Registration
(Objects: u:user, h:homepage, r:registration page, d:database. Messages: 1. goto(), 2. select
registration(), 3. submit(), 4. verify details(), 5. response().)
Fig 4.4.3.2: Sequence diagram for user Registration
Sequence diagram for file download by user
(Objects: u:user, uh:user homepage, fp:file download page, d:database. Messages: 1. login(),
2. select download(), 3. submit(), 4. response(), 5. click on home().)
Fig 4.4.3.3: Sequence diagram for file download by user
Sequence diagram for creating a subfolder
(Objects: u:user, uh:user homepage, cs:cls subfolder, d:database. Messages: 1. login(),
2. operation on subfolder(), 3. submit(), 4. response(), 5. click on home button.)
Fig 4.4.3.4: Sequence diagram for creating a sub folder
Sequence diagram for file sharing
(Objects: U:User, Hp:Homepage, U:Upload, D:DataBase. Messages: 1. Goto(), 2. Upload the
file(), 3. Click on FileSharing(), 4. Response().)
Fig 4.4.3.5: Sequence diagram for file sharing
Sequence diagram for sending an e-mail
(Objects: A:Admin, H:HomePage, D:DataBase, U:User. Messages: 1. Goto(), 2. Upload File(),
3. Send Mail(), 4. Download file(), 5. Response().)
Fig 4.4.3.6: Sequence diagram for sending an e-mail
4.4.4 Collaboration Diagrams:
A collaboration diagram emphasizes the structural organization of the objects that send and
receive messages. It describes interactions among objects in terms of sequenced messages.
Collaboration diagrams represent a combination of information taken from class, sequence, and
use case diagrams, describing both the static structure and dynamic behavior of a system.
Collaboration diagram for user login
(Objects: u:user, h:homepage, l:loginpage, d:database, uh:user homepage. Messages: 1. goto(),
2. select login(), 3. login button click with empty values, 4. enter values & submit(),
5. Valid User(), 6. Not valid().)
Fig 4.4.4.1: Collaboration diagram for user login
Collaboration diagram for user Registration
(Objects: u:user, h:homepage, r:registration page, d:database. Messages: 1. goto(), 2. select
registration(), 3. submit(), 4. verify details(), 5. response().)
Fig 4.4.4.2: Collaboration diagram for user Registration
Collaboration diagram for file download by user
(Objects: u:user, uh:user homepage, fp:file download page, d:database. Messages: 1. login(),
2. select download(), 3. submit(), 4. response(), 5. click on home().)
Fig 4.4.4.3: Collaboration diagram for file download by user
Collaboration diagram for creating a sub folder
(Objects: u:user, uh:user homepage, cs:cls subfolder, d:database. Messages: 1. login(),
2. operation on subfolder(), 3. submit(), 4. response(), 5. click on home button.)
Fig 4.4.4.4: Collaboration diagram for creating a sub folder
Collaboration diagram for file sharing
(Objects: User, H:HomePage, U:Upload, DB:DataBase. Messages: 1. Goto(), 2. Upload the
file(), 3. Click on file sharing(), 4. Response().)
Fig 4.4.4.5: Collaboration diagram for file sharing
Collaboration diagram for sending an e-mail
(Objects: A:Admin, H:HomePage, D:DataBase, U:User. Messages: 1. Goto(), 2. Upload File(),
3. Send Mail(), 4. Download file(), 5. Response().)
Fig 4.4.4.6: Collaboration diagram for sending an e-mail
4.4.5 Activity diagrams:
An activity diagram shows the flow from activity to activity. It describes the workflow behavior
of a system. Activity diagrams are similar to statechart diagrams because activities are the states
of doing something. They describe the state of an activity by showing the sequence of activities
performed, and they can show activities that are conditional or parallel.
Symbol Name: Meaning
Start State: the starting point of an activity
Stop State: the ending point of an activity
Action State: an activity state that cannot be further decomposed
Activity State: an activity state that can be decomposed further (non-atomic)
Branch: specifies alternate paths taken based on some Boolean expression
Transition/Link: shows the flow of control from one activity/action state to another
Activity diagram for user
(After login, validity is checked. A valid user may view files, modify the account, upload or
download files, create a new subfolder, share files, receive mail, and log out.)
Fig 4.4.5.1: Activity diagram for User
Activity Diagram for file upload/download by user
(After login, the user gives the file details for upload/download; the details are validated and
either accepted, leading to a successful upload/download, or rejected.)
Fig 4.4.5.2: Activity diagram for file upload/download by User
4.5 DATA FLOW DIAGRAMS
A data flow diagram is a graphical tool used to describe and analyze the movement of
data through a system. DFDs are the central tool, and the other components are developed from
them. The transformation of data from input to output, through processes, may be described
logically and independently of the physical components associated with the system. Such
diagrams are known as logical data flow diagrams. Physical data flow diagrams show the actual
implementation and movement of data between people, departments and workstations. A full
description of a system actually consists of a set of data flow diagrams, developed using one of
two familiar notations: Yourdon or Gane and Sarson. Each component in a DFD is labeled with
a descriptive name.
Larry Constantine first developed the DFD as a way of expressing system requirements
in a graphical form; this led to modular design.
A DFD, also known as a "bubble chart", has the purpose of clarifying system
requirements and identifying the major transformations that will become programs in system
design. It is therefore the starting point of design, carried down to the lowest level of detail. A
DFD consists of a series of bubbles joined by data flows in the system.
DFD SYMBOLS:
In the DFD, there are four symbols:
1. A square defines a source (originator) or destination of system data.
2. An arrow identifies data flow; it is the pipeline through which information flows.
3. A circle or a bubble represents a process that transforms incoming data flows into outgoing
data flows.
4. An open rectangle is a data store: data at rest, or a temporary repository of data.
CONSTRUCTING A DFD:
Several rules of thumb are used in drawing DFD’S:
1. Processes should be named and numbered for easy reference. Each name should be
representative of the process.
2. The direction of flow is from top to bottom and from left to right. Data traditionally flow
from source to destination, although they may flow back to the source. One way to
indicate this is to draw a long flow line back to the source. An alternative way is to repeat the
source symbol as a destination; since it is used more than once in the DFD, it is marked with
a short diagonal.
3. When a process is exploded into lower-level details, the details are numbered.
4. The names of data stores and destinations are written in capital letters. Process and data flow
names have the first letter of each word capitalized.
SALIENT FEATURES OF DFD'S
1. The DFD shows the flow of data, not of control; loops and decisions are control
considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process, whether the data flow
takes place daily, weekly, monthly or yearly.
3. The sequence of events is not brought out on the DFD.
RULES GOVERNING THE DFD’S
PROCESS
1) No process can have only outputs.
2) No process can have only inputs. If an object has only inputs, then it must be a sink.
3) A process has a verb phrase label.
DATA STORE
1) Data cannot move directly from one data store to another data store; a process must move the
data.
2) Data cannot move directly from an outside source to a data store; a process must receive the
data from the source and place it into the data store.
3) A data store has a noun phrase label.
SOURCE OR SINK
The origin and destination of data.
1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun phrase label.
DATA FLOW
1) A data flow has only one direction of flow between symbols. It may flow in both directions
between a process and a data store to show a read before an update.
2) A join in a DFD means that exactly the same data comes from any of two or more different
processes, data stores, or sources to a common location.
3) A data flow from a data store means retrieve or use.
DFD Diagrams:
Context Level (0th level DFD)
Fig 4.5.1: Data Flow Diagram on Context Level
Login DFD
Fig 4.5.2: Data Flow Diagram on Login
Fig 4.5.3: Data Flow Diagram on Admin (1st level)
Fig 4.5.4: Data Flow Diagram on Admin (2nd level)
Fig 4.5.5: Data Flow Diagram on User Details Data Flow (1st level)
Fig 4.5.6: Data Flow Diagram on User Details Data Flow (2nd level)
4.6 Database Design:
A database is a collection of large amounts of data which is integrated, shared and
concurrently accessed by users.
The database design gives a detailed description of the classes and attributes used in the
design.
4.6.1 ER-Diagram:
In software engineering, an Entity-Relationship model is an abstract and conceptual
representation of data. Entity-relationship modeling is a database modeling method used to
produce a type of conceptual schema or semantic data model of a system, often a relational
database, and its requirements in a top-down fashion.
Diagrams created using this process are called entity-relationship diagrams, ER diagrams, or
ERDs for short.
The Entity Relationship diagram (ERD) depicts the relationship between the data objects.
The set of primary components that are identified by the ERD are Data Objects,
Relationships, Attributes, and various types of indicators.
The main components of ER diagram are
Entity:
An entity is an object or concept about which you want to store information.
Attribute:
A single data item related to a database object. The database schema associates one or more
attributes with each database entity.
Relationship:
A relationship is an association among two or more entities.
Connectives:
These are the links which join entities, attributes and relationships
Key constraint:
This is used to give the unique relationship between entities. i.e., at most one
relation of that kind exists between the entities.
This is rendered as a directed arrow.
Primary key:
This identifies each tuple uniquely and does not take null values. In ERDs it is
represented by underlining the attribute name.
Foreign key:
This is used to place a relation between two tables. The values considered for a foreign
key are primary key values of the referenced table. A foreign key can take null values.
E-R Diagram of the System:
Fig 4.6.1.1: ER Diagram of the System
4.6.2 DATABASE TABLE DESIGN
DownLoadMaster
Column name Data type Allow null Type
User id Varchar(50) Not Allow Foreign Key(user master)
DownLoad id Varchar(50) Not allow Primary key
DownLoadDate Date time Allow
DownLoadTime Date time Allow
FieldForDownLoad Varchar(50) Allow
DownLoadStatus Bit Allow
DownLoadFolderId Varchar(50) Allow
DownLoadSubFolderId Varchar(50) Allow
Table 4.6.2.1: Table on DownLoadMaster
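The DownLoadMaster design above maps to DDL along these lines. This is a sketch run through SQLite (via Python) so it can be executed as-is; SQL Server would use the same VARCHAR(50), DATETIME, and BIT declarations, while SQLite merely treats them as type affinities, and the UserMaster stub here carries only the referenced key column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Minimal stub of the referenced UserMaster table (only its primary key).
conn.execute("CREATE TABLE UserMaster (UserId VARCHAR(50) PRIMARY KEY)")
conn.execute("""CREATE TABLE DownLoadMaster (
    DownLoadId          VARCHAR(50) NOT NULL PRIMARY KEY,
    UserId              VARCHAR(50) NOT NULL REFERENCES UserMaster(UserId),
    DownLoadDate        DATETIME,           -- nullable, as in the design
    DownLoadTime        DATETIME,
    FieldForDownLoad    VARCHAR(50),
    DownLoadStatus      BIT,
    DownLoadFolderId    VARCHAR(50),
    DownLoadSubFolderId VARCHAR(50))""")
conn.execute("INSERT INTO UserMaster VALUES ('U001')")
conn.execute(
    "INSERT INTO DownLoadMaster (DownLoadId, UserId) VALUES ('D001', 'U001')")
rows = conn.execute("SELECT COUNT(*) FROM DownLoadMaster").fetchone()[0]
print(rows)  # 1
```

The remaining tables in this section follow the same pattern: primary key and foreign key columns declared NOT NULL, with the nullable descriptive columns left unconstrained.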
FileTypeMaster
Column name Data type Allow null Type
FileTypeId Int Not allow Primary key
TypeName Varchar(50) Allow
FileExtension Varchar(50) Allow
Table 4.6.2.2: Table on FileTypeMaster
FileUploadMaster
Column name Data type Allow null Type
UserFolderId Varchar(50) Not allow ForeignKey(Folder Master)
UserSubFolderId Varchar(50) Not allow ForeignKey(SubFolder Master)
UserId Varchar(50) Not allow ForeignKey(user Master)
UserFileId Varchar(50) Not allow Primary key
FileTypeId Int Allow
FileName Varchar(50) Allow
FileSize Int Allow
UploadedDate Date time Allow
FileStatus Date time Allow
FilePermission Varchar(50) Allow
Table 4.6.2.3: Table on FileUploadMaster
LoginMaster
Column name Data type Allow null Type
User id Varchar(50) Not allow ForeignKey(User master)
UserLoginDate Date time Not allow
UserLoginTime Varchar(50) Not allow Primary key
UserLogoffTime Varchar(50) Allow
Table 4.6.2.4: Table on LoginMaster
FolderMaster
Column name Data type Allow null Type
UserFolderId Varchar(50) Not allow Primary key
UserId Int Not allow
FolderName Varchar(50) Allow
TotalStorageCapacity Int Allow
Creating Date Date time Allow
Table 4.6.2.5: Table on FolderMaster
SubFolderMaster
Column name Data type Allow null Type
UserFolderId Varchar(50) Not allow ForeignKey(FolderMaster)
UserId Varchar(50) Not allow ForeignKey(UserMaster)
UserSubFolderId Varchar(50) Not allow Primary key
SubFolderName NVarchar(10) Allow
SubFolderStatus Varchar(50) Allow
Created Date Date time Allow
Table 4.6.2.6: Table on SubFolderMaster
UserFileDetails
Column name Data type Allow null Type
UserId Varchar(50) Not Allow Foreign Key(UserMaster)
SubFolderId Varchar(50) Allow
FileId Varchar(50) Not allow Primary key
FileName Varchar(50) Allow
File Type Id Int Allow
FileSize Int Allow
Table 4.6.2.7: Table on UserFileDetails
UserMaster
Column name Data type Allow null Type
UserId Varchar(50) Not allow Primary key
First Name Varchar(50) Not allow
Last Name Varchar(50) Allow
Gender Varchar(50) Allow
User DOB Date time Not allow
User DOR Date time Not allow
UserLoginId Varchar(50) Not allow
Password Varchar(50) Allow
HintQuestion Varchar(50) Allow
HintAnswer Varchar(50) Allow
Table 4.6.2.8: Table on UserMaster
FileStatusMaster
Column name Data type Allow null Type
FileOwner Id Int Not allow Unique key
UserField Varchar(50) Not allow
RecipientUserId Int Not allow Primary key
DownLoadStatusBit Bit Allow
Shared date Date time Allow
DownLoadedDate Date time Allow
StatusBit Bit Allow
Email Message Varchar(1000) Allow
Table 4.6.2.9: Table on FileStatusMaster
5. INTRODUCTION
Software testing is a critical element of software quality assurance and represents the ultimate
review of specification, design and coding. In fact, testing is the one step in the software engineering
process that could be viewed as destructive rather than constructive.
A strategy for software testing integrates software test case design methods into a well-planned
series of steps that result in the successful construction of software. Testing is a set of
activities that can be planned in advance and conducted systematically. The underlying
motivation of program testing is to affirm software quality with methods that can be applied
economically and effectively to both large and small-scale systems.
5.1. STRATEGIC APPROACH TO SOFTWARE TESTING
The software engineering process can be viewed as a spiral. Initially system engineering
defines the role of software and leads to software requirement analysis where the information
domain, functions, behavior, performance, constraints and validation criteria for software are
established. Moving inward along the spiral, we come to design and finally to coding. To
develop computer software we spiral in along streamlines that decrease the level of abstraction
on each turn.
A strategy for software testing may also be viewed in the context of the spiral. Unit
testing begins at the vertex of the spiral and concentrates on each unit of the software as
implemented in source code. Testing progresses by moving outward along the spiral to integration
testing, where the focus is on the design and the construction of the software architecture.
Taking another turn outward on the spiral, we encounter validation testing, where
requirements established as part of software requirements analysis are validated against the
software that has been constructed. Finally we arrive at system testing, where the software and
other system elements are tested as a whole.
5.2. UNIT TESTING
Unit testing focuses verification effort on the smallest unit of software design: the module. The
unit testing here is white-box oriented, and for some modules the steps are conducted in parallel.
1. WHITE BOX TESTING
This type of testing ensures that
All independent paths have been exercised at least once
All logical decisions have been exercised on their true and false sides
All loops are executed at their boundaries and within their operational bounds
All internal data structures have been exercised to assure their validity.
To follow the concept of white box testing, we have tested each form we created
independently to verify that data flow is correct, all conditions are exercised to check their
validity, and all loops are executed on their boundaries.
(Testing phases: unit testing and module testing (component testing); sub-system testing and
system testing (integration testing); acceptance testing (user testing).)
2. CONDITIONAL TESTING
In this part of the testing, each condition was tested for both its true and false outcomes, and all
the resulting paths were tested, so that each path that may be generated by a particular condition
is traced to uncover any possible errors.
3. DATA FLOW TESTING
This type of testing selects the paths of the program according to the locations of definitions and
uses of variables. This kind of testing was used only when some local variables were declared.
The definition-use chain method was used in this type of testing. It was particularly useful in
nested statements.
4. LOOP TESTING
In this type of testing all the loops are tested to all the limits possible. The following exercise
was adopted for all loops:
All the loops were tested at their limits, just above them and just below them.
All the loops were skipped at least once.
For nested loops, the innermost loop was tested first, working outwards.
Unstructured loops were resolved into nested loops or concatenated loops and tested
as above.
Each unit has been separately tested by the development team itself, and all the inputs
have been validated.
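The loop-boundary exercise described above can be sketched as a small test. The routine under test here is hypothetical (a function summing the first n items of a list, not code from this project); the loop is driven at zero iterations, one iteration, its upper limit, and just above it:

```python
def sum_first(items, n):
    """Hypothetical routine under test: sum the first n elements."""
    total = 0
    # The guard min(n, len(items)) keeps the loop within bounds.
    for i in range(min(n, len(items))):
        total += items[i]
    return total

data = [5, 10, 15]

# Loop skipped entirely (zero iterations).
assert sum_first(data, 0) == 0
# Loop executed exactly once (lower boundary).
assert sum_first(data, 1) == 5
# Loop at its upper limit.
assert sum_first(data, 3) == 30
# Just above the limit: the guard prevents an out-of-range access.
assert sum_first(data, 4) == 30
print("all loop boundary cases pass")
```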
5. System Testing
The philosophy behind testing is to find errors. Test cases are devised with this in mind.
A strategy employed for system testing is code testing.
6. Code Testing
This strategy examines the logic of the program. To follow this method we developed
test data that resulted in executing every instruction in the program and module, i.e. every
path is tested. Systems are not designed as entire units, nor are they tested as single systems.
To ensure that the coding is correct, two types of testing are performed on all systems.
7. Integration Testing
After unit testing we perform integration testing. The goal here is to see whether the
modules can be integrated properly, the emphasis being on testing the interfaces between modules.
This testing activity can be considered as testing the design, hence the emphasis on testing
module interactions. In this project, integrating all the modules forms the main system. While
integrating the modules, I checked whether the integration affects the working of any of the
services by giving different combinations of inputs with which the services ran perfectly
before integration.
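The idea of testing module interfaces rather than module internals can be sketched as follows (hypothetical registration and login modules sharing a user store, not the project's actual code):

```python
# Illustrative sketch of integration testing: two "modules"
# (registration and login) exercised together through their shared
# interface, an in-memory user store.

class UserStore:
    def __init__(self):
        self._users = {}

    def add(self, name, password):
        self._users[name] = password

    def check(self, name, password):
        return self._users.get(name) == password

def register(store, name, password):   # module 1
    store.add(name, password)

def login(store, name, password):      # module 2
    return store.check(name, password)

# The integration test drives both modules across the interface:
store = UserStore()
register(store, "alice", "secret")
assert login(store, "alice", "secret") is True
assert login(store, "alice", "wrong") is False
```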
8. System Testing
Here the entire software system is tested. The reference document for this process is the
requirements document, and the goal is to see if the software meets its requirements. Here the entire
'Docket Chunk System' has been tested against the requirements of the project, and it has been
checked whether all the requirements of the project have been satisfied or not.
9. Acceptance Testing
Acceptance testing is performed with realistic data of the client to demonstrate that the software is
working satisfactorily. Testing here is focused on the external behaviour of the system; the internal
logic of the program is not emphasized. In this project, 'Docket Chunk System', I collected some data
and tested whether the project works correctly or not.
Test cases should be selected so that the largest number of attributes of an equivalence class is
exercised at once. The testing phase is an important part of software development. It is the process of
finding errors and missing operations and also a complete verification to determine whether the objectives
are met and the user requirements are satisfied.
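Equivalence-class selection can be sketched as follows (a hypothetical password-length rule, illustrative only, not the project's actual validation logic):

```python
# Illustrative sketch of equivalence partitioning: test one
# representative input per class instead of every possible value.

def password_ok(pw):
    # Valid class: 6 to 20 characters.
    # Invalid classes: too short, too long.
    return 6 <= len(pw) <= 20

assert password_ok("secret") is True       # representative of the valid class
assert password_ok("abc") is False         # representative of "too short"
assert password_ok("x" * 21) is False      # representative of "too long"
```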
5.3. Test Cases:
Test Case # | Test Case Description | Expected Result | Pass/Fail | Actual Result
01 | Click on the login button | The home page should open without missing any themes | Pass | The proper home page opened on clicking the login button
02 | Click on the login button without giving username & password | It should ask the user to enter username & password | Pass | It showed an error
03 | Enter username without password | It should ask the user to enter the password | Pass | It displayed an error
04 | Enter invalid username & password | It should show a message for invalid username/password | Pass | It displayed the error message "User Not Exists"
05 | Enter valid username & password | It should redirect to the next page | Pass | It redirected to the next page
Table 5.3.1: Login test cases
Test Case # | Test Case Description | Expected Result | Pass/Fail | Actual Result
01 | Click on Sign up! | The register page should open without missing any themes | Pass | The proper register page opened on clicking the Sign up! link
02 | Click on the register button without giving mailing & contact details | It should ask for mailing & contact details | Pass | It showed an error message for "mailing & contact details"
03 | Enter valid username, password, address, email id and personal details | It should redirect to the next page | Pass | It redirected to the next page
Table 5.3.2: Registration test cases
When the user enters the web address: website home page
Fig 6.1: Screen shot on website Home page
Sign up: Registration for new users
Fig 6.2: Screen shot on Registration for new users
After successful registration
Fig 6.3: Screen shot after successful registration for user
User login page
Fig 6.4: Screen shot on login page for a user
Successful login of a user
Fig 6.5: Screen shot on user home page
Edit profile
Fig 6.6: Screen shot on changing user details
Successful change of details
Fig 6.7: Screen shot on successful modification of details
Folder options page
Fig 6.8: Screen shot on user folder options page
Creating a folder
Fig 6.9: Screen shot on folder creating page for user
Deleting a folder
Fig 6.10: Screen shot on folder deleting page for user
Renaming a folder
Fig 6.11: Screen shot on folder renaming page for user
Moving a folder
Fig 6.12: Screen shot on folder moving page for user
Successful uploading of a file
Fig 6.13: Screen shot on file uploading page
Downloading a file
Fig 6.14: Screen shot on file downloading page
Logout from the website
Fig 6.15: Screen shot on logout from the system
Admin home page
Fig 6.16: Screen shot on admin home page
Adding a file type
Fig 6.17: Screen shot on Adding a file type
Search user with name
Fig 6.18: Screen shot on Search user with name
Login track of a user
Fig 6.19: Screen shot on Login track of a user
Contact us
Fig 6.20: Screen shot on contact us
About us
Fig 6.21: Screen shot on about us
CONCLUSION
It has been a great pleasure for me to work on this exciting and challenging project. The
project proved valuable, as it provided practical knowledge not only of programming web-based
applications in ASP.NET and C#.NET, and to some extent Windows applications and SQL
Server, but also of all the handling procedures related to the "Docket Chunk System". It also
provided knowledge of the latest technologies used in developing web-enabled and
client-server applications, which will be in great demand in the future. This will provide better
opportunities and guidance for developing projects independently in the future.
BENEFITS:
The project is identified by the merits of the system offered to the user. The merits of this
project are as follows:
It is a web-enabled project.
The project allows the user to enter data through simple and interactive forms. This is very
helpful for the client, who can enter the desired information with great simplicity.
The user is mainly concerned about the validity of the data being entered.
There are checks at every stage of any new creation, data entry or updating, so that the user
cannot enter invalid data, which could create problems at a later date.
Sometimes, in the later stages of using the project, the user finds that he needs to update some of
the information entered earlier. There are options by which he can update the
records.
From every part of the project the user is provided with links, through framing, so that he
can go from one option of the project to another as required. This makes the project
simple and friendly from the user's point of view; that is, the project is
user friendly, which is one of the primary concerns of any good project.
Data storage and retrieval become faster and easier to maintain because data is stored in
a systematic manner in a single database.
Easier and faster data transfer through the latest computer and communication
technology.
Through these features the system increases efficiency, accuracy and transparency.
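As a sketch of the kind of entry-stage check described above (the field rules and names here are illustrative assumptions, not the project's actual schema or validation code):

```python
# Illustrative sketch of server-side validation before new-user data
# reaches the database; field names and rules are hypothetical.

import re

def validate_registration(username, email):
    errors = []
    if not username or not username.isalnum():
        errors.append("username must be alphanumeric")
    # Simple shape check: something@something.something, no spaces.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("email address is not valid")
    return errors

assert validate_registration("alice", "alice@example.com") == []
assert validate_registration("", "bad-email") == [
    "username must be alphanumeric",
    "email address is not valid",
]
```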
LIMITATIONS:
The size of the database increases day by day, increasing the load on the database backup
and data maintenance activity.
Training in simple computer operations is necessary for the users working on the
system.
FUTURE IMPROVEMENT
It can be extended to upload files of very large sizes, with support for
various file formats.
This system, being web-based and an undertaking of a Cyber Security Division, needs to be
thoroughly tested to find out any security gaps.
A console for the data centre may be made available to allow personnel to monitor the
sites that were cleared for hosting during a particular period.
Moreover, this is just a beginning; the system may further be utilized in various other types
of auditing operations, viz. network auditing or similar process/workflow-based
applications.
As technology emerges, it is possible to upgrade the system and adapt it to the
desired environment.
Based on future security issues, security can be improved using emerging
technologies.
1. Acronyms
SRS Software Requirement Specification.
IEEE Institute of Electrical and Electronic Engineers.
VB.NET Visual Basic .NET
ASP Active Server Pages
SQL Structured Query Language
SQL Server A relational database server that stores records related to each other in tables.
HTML Hyper Text Markup Language
UML Unified Modeling Language
User A person who uses the system.
User id The unique id provided by the user at the time of registration, which is
used as the user id for login.
Password A unique, secret code word used by the user for secure login.
GUI Graphical User Interface.
Folder A container in which a user's files are stored and organized.
SMTP Simple Mail Transfer Protocol
File sharing It is the practice of distributing or providing access to digitally
stored information.
HTTP Hypertext Transfer Protocol.
HTTPS Hypertext Transfer Protocol Secure.
TCP Transmission Control Protocol.
IP Internet Protocol.
DBMS Data Base Management System.
DDL Data Definition Language.
DML Data Manipulation Language.
API Application Programming Interface.
DFD Data Flow Diagram
IDE Integrated Development Environment
RDBMS Relational Data Base Management System