
DATA INTEGRITY PROOFS IN CLOUD STORAGE

By

A

PROJECT REPORT

Submitted to the Department of Computer Science & Engineering in the

FACULTY OF ENGINEERING & TECHNOLOGY

In partial fulfillment of the requirements for the award of the degree

Of

MASTER OF TECHNOLOGY

IN

COMPUTER SCIENCE & ENGINEERING

APRIL 2012


BONAFIDE CERTIFICATE

Certified that this project report titled “DATA INTEGRITY PROOFS IN CLOUD STORAGE” is the bonafide work of Mr. _____________ who carried out the research under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other project report or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

Signature of the Guide                    Signature of the H.O.D

Name                                      Name


CHAPTER 01

ABSTRACT:

Cloud computing has been envisioned as the de-facto solution to the rising storage costs of IT enterprises. With the high cost of data storage devices and the rapid rate at which data is being generated, it proves costly for enterprises or individual users to frequently upgrade their hardware. Apart from reducing storage costs, data outsourcing to the cloud also helps in reducing maintenance. Cloud storage moves the user’s data to large, remotely located data centers over which the user does not have any control. However, this unique feature of the cloud poses many new security challenges which need to be clearly understood and resolved. We provide a scheme which gives a proof of data integrity in the cloud which the customer can employ to check the correctness of his data in the cloud. This proof can be agreed upon by both the cloud and the customer and can be incorporated in the Service Level Agreement (SLA).

PROJECT PURPOSE:

In developing proofs of data possession at untrusted cloud storage servers, we are often limited by the resources at the cloud server as well as at the client. Given that the data sizes are large and are stored at remote servers, accessing the entire file can be expensive in I/O costs to the storage server. Transmitting the file across the network to the client can also consume heavy bandwidth. Since growth in storage capacity has far outpaced the growth in data access as well as network bandwidth, accessing and transmitting the entire archive even occasionally greatly limits the scalability of the network resources. Furthermore, the I/O required to establish the data proof interferes with the on-demand bandwidth of the server used for normal storage and retrieval.


PROJECT SCOPE:

Before storing its data file F in the cloud, the client should process the file and create suitable metadata which is used in the later stage of verifying the data integrity at the cloud storage. When checking for data integrity, the client queries the cloud storage for suitable replies, based on which it concludes the integrity of its data stored in the cloud. In our data integrity protocol the verifier needs to store only a single cryptographic key, irrespective of the size of the data file F, and two functions which generate a random sequence. The verifier does not store any data with it. Before storing the file at the archive, the verifier preprocesses the file, appends some metadata to it and stores it at the archive.
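As a rough, hypothetical sketch in C# (the project's implementation language), the verifier's state can be pictured as one secret key plus the two keyed generators mentioned above; an HMAC stands in for the random-sequence functions purely for illustration, and the class and method names are not taken from the report.

using System;
using System.Security.Cryptography;

class Verifier
{
    private readonly byte[] key;              // the only secret the verifier stores

    public Verifier(byte[] key)
    {
        this.key = key;
    }

    // First generator: a pseudo-random bit position inside block i.
    public int Position(int blockIndex, int blockSizeInBits)
    {
        return (int)(Prf(blockIndex, 0) % (uint)blockSizeInBits);
    }

    // Second generator: a pseudo-random pattern used to mask the metadata of block i.
    public byte Pattern(int blockIndex)
    {
        return (byte)Prf(blockIndex, 1);
    }

    private uint Prf(int blockIndex, int domain)
    {
        using (var hmac = new HMACSHA256(key))
        {
            byte[] message = BitConverter.GetBytes(((long)domain << 32) | (uint)blockIndex);
            byte[] mac = hmac.ComputeHash(message);
            return BitConverter.ToUInt32(mac, 0);
        }
    }
}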

PRODUCT FEATURES:

Our scheme was developed to reduce the computational and storage overhead of the client as well as to minimize the computational overhead of the cloud storage server. We also minimized the size of the proof of data integrity so as to reduce the network bandwidth consumption. The storage required at the client is therefore very small compared to all other schemes that were developed, which makes this scheme advantageous for thin clients like PDAs and mobile phones.

The operation of encrypting data generally consumes a large amount of computational power. In our scheme the encryption is limited to only a fraction of the whole data, thereby saving on the computational time of the client. Many of the schemes proposed earlier require the archive to perform tasks that need a lot of computational power to generate the proof of data integrity, but in our scheme the archive just needs to fetch and send a few bits of data to the client.


INTRODUCTION:

Data outsourcing to cloud storage servers is a rising trend among many firms and users owing to its economic advantages. This essentially means that the owner (client) of the data moves its data to a third-party cloud storage server which is supposed to, presumably for a fee, faithfully store the data and provide it back to the owner whenever required.

As data generation is far outpacing data storage, it proves costly for small firms to frequently upgrade their hardware whenever additional data is created. Maintaining the storage can also be a difficult task. Outsourcing data to cloud storage helps such firms by reducing the costs of storage, maintenance and personnel. It can also assure reliable storage of important data by keeping multiple copies of the data, thereby reducing the chance of losing data through hardware failures.

Storing user data in the cloud, despite its advantages, has many interesting security concerns which need to be extensively investigated for it to become a reliable solution to the problem of avoiding local storage of data. In this paper we deal with the problem of implementing a protocol for obtaining a proof of data possession in the cloud, sometimes referred to as Proof of Retrievability (POR). This problem tries to obtain and verify a proof that the data stored by a user at a remote data storage in the cloud (called cloud storage archives or simply archives) is not modified by the archive, and thereby the integrity of the data is assured.

Such verification systems prevent the cloud storage archives from misrepresenting or modifying the data stored at them without the consent of the data owner, by using frequent checks on the storage archives. Such checks must allow the data owner to efficiently, frequently, quickly and securely verify that the cloud archive is not cheating the owner. Cheating, in this context, means that the storage archive might delete some of the data or may modify some of the data.


CHAPTER 02

SYSTEM ANALYSIS:

PROBLEM DEFINITION:

Storing user data in the cloud, despite its advantages, has many interesting security concerns which need to be extensively investigated for it to become a reliable solution to the problem of avoiding local storage of data. Many problems, such as data authentication and integrity (i.e., how to efficiently and securely ensure that the cloud storage server returns correct and complete results in response to its clients’ queries), outsourcing encrypted data, and the associated difficult problems of querying over an encrypted domain, have been discussed in the research literature.

EXISTING SYSTEM:

As data generation is far outpacing data storage, it proves costly for small firms to frequently upgrade their hardware whenever additional data is created. Maintaining the storage can also be a difficult task. Transmitting the file across the network to the client can consume heavy bandwidth. The problem is further complicated by the fact that the owner of the data may be a small device, like a PDA (personal digital assistant) or a mobile phone, which has limited CPU power, battery power and communication bandwidth.

LIMITATIONS OF EXISTING SYSTEM:

The main drawback of this scheme is the high resource cost it requires for the implementation.

Also, computing the hash value of even a moderately large data file can be computationally burdensome for some clients (PDAs, mobile phones, etc.).

Encrypting the entire data is expensive, which is a disadvantage for small users with limited computational power (PDAs, mobile phones, etc.).


PROPOSED SYSTEM:

One of the important concerns that needs to be addressed is assuring the customer of the integrity, i.e. the correctness, of his data in the cloud. As the data is not physically accessible to the user, the cloud should provide a way for the user to check whether the integrity of his data is maintained or has been compromised. In this paper we provide a scheme which gives a proof of data integrity in the cloud which the customer can employ to check the correctness of his data in the cloud. This proof can be agreed upon by both the cloud and the customer and can be incorporated in the Service Level Agreement (SLA). It is important to note that our proof of data integrity protocol just checks the integrity of data, i.e. whether the data has been illegally modified or deleted.

ADVANTAGES OF PROPOSED SYSTEM:

Apart from reducing storage costs, data outsourcing to the cloud also helps in reducing maintenance.

It avoids local storage of data.

It reduces the costs of storage, maintenance and personnel.

It reduces the chance of losing data through hardware failures.

It lets the owner verify that the archive is not cheating him.


PROCESS FLOW DIAGRAMS FOR EXISTING AND PROPOSED SYSTEM:

FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are:

ECONOMICAL FEASIBILITY

TECHNICAL FEASIBILITY

SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources; otherwise, high demands would be placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system; instead, the user must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.


HARDWARE AND SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENTS:

• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15'' VGA Colour
• Mouse : Logitech
• RAM : 512 MB

SOFTWARE REQUIREMENTS:

• Operating System : Windows XP
• Coding Language : ASP.NET with C#
• Database : SQL Server 2005


FUNCTIONAL REQUIREMENTS:

Functional requirements specify which outputs should be produced from the given inputs. They describe the relationship between the input and output of the system. For each functional requirement, a detailed description of all data inputs and their source and the range of valid inputs must be specified.

NON-FUNCTIONAL REQUIREMENTS:

Non-functional requirements describe user-visible aspects of the system that are not directly related to the functional behavior of the system. They include quantitative constraints such as response time (i.e. how fast the system reacts to user commands) or accuracy (i.e. how precise the system's numerical answers are).

PSEUDO REQUIREMENTS:

These requirements are imposed by the client and restrict the implementation of the system. Typical pseudo requirements are the implementation language and the platform on which the system is to be implemented. They usually have no direct effect on the user's view of the system.


LITERATURE SURVEY:

The literature survey is the most important step in the software development process. Before developing the tool it is necessary to determine the time factor, economy and company strength. Once these things are satisfied, the next step is to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books or from websites. Before building the system, the above considerations are taken into account while developing the proposed system.

We have to analyze the following outline survey of cloud computing:

Cloud Computing

• Cloud computing provides unlimited infrastructure to store and execute customer data and programs. Customers do not need to own the infrastructure; they are merely accessing or renting it. They can forego capital expenditure and consume resources as a service, paying only for what they use.

Benefits of Cloud Computing:

• Minimized Capital expenditure

• Location and Device independence

• Utilization and efficiency improvement

• Very high Scalability

• High Computing power


Security a major Concern:

Security concerns arise because both customer data and programs reside on the provider's premises. Security is always a major concern in open system architectures.

Data centre security:

• Professional security staff utilizing video surveillance, state-of-the-art intrusion detection systems, and other electronic means.

• When an employee no longer has a business need to access the datacenter, his privileges to access the datacenter should be immediately revoked.

• All physical and electronic access to data centers by employees should be logged and audited routinely.

• Audit tools should be provided so that users can easily determine how their data is stored, protected and used, and verify policy enforcement.


Data Location:

When users use the cloud, they probably won't know exactly where their data is hosted or in which country it will be stored.

Data should be stored and processed only in specific jurisdictions as defined by the user.

The provider should also make a contractual commitment to obey local privacy requirements on behalf of its customers.

Data-centered policies should be generated when a user provides personal or sensitive information; such a policy travels with that information throughout its lifetime to ensure that the information is used only in accordance with the policy.

Backups of Data:

Data stored in the provider's database should be redundantly stored in multiple physical locations.

Data that is generated while running programs on instances is also customer data, and therefore the provider should not perform backups of it.

Control of the administrator over the databases should also be considered.


Data Sanitization:

Sanitization is the process of removing sensitive information from a storage device.

What happens to data stored in a cloud computing environment once it has passed its user’s “use by” date?

What data sanitization practices does the cloud computing service provider propose to implement for redundant and retiring data storage devices as and when these devices are retired or taken out of service?

Network Security:

• Denial of Service: servers and networks are brought down by a huge amount of network traffic and users are denied access to a certain Internet-based service, for example through DNS hacking, routing table “poisoning” or XDoS attacks.

• QoS Violation: through congestion, delaying or dropping packets, or through resource hacking.

• Man-in-the-Middle Attack: to overcome it, always use SSL.

• IP Spoofing: spoofing is the creation of TCP/IP packets using somebody else's IP address. Solution: the infrastructure should not permit an instance to send traffic with a source IP or MAC address other than its own.


How secure is the encryption scheme?

Is it possible for all of my data to be fully encrypted? What algorithms are used? Who holds, maintains and issues the keys?

Problem: encryption accidents can make data totally unusable, and encryption can complicate availability.

Solution: the cloud provider should provide evidence that encryption schemes were designed and tested by experienced specialists.

Information Security:

Security related to the information exchanged between different hosts or between hosts and users. These are issues pertaining to secure communication, authentication, and issues concerning single sign-on and delegation.

Secure communication issues include those security concerns that arise during the communication between two entities. These include confidentiality and integrity issues. Confidentiality indicates that all data sent by users should be accessible only to “legitimate” receivers, and integrity indicates that all data received should only be sent/modified by “legitimate” senders.

Solution: public key encryption, X.509 certificates, and the Secure Sockets Layer (SSL) enable secure authentication and communication over computer networks.


MODULES DESCRIPTION:

CLOUD STORAGE:

Data outsourcing to cloud storage servers is a rising trend among many firms and users owing to its economic advantages. This essentially means that the owner (client) of the data moves its data to a third-party cloud storage server which is supposed to, presumably for a fee, faithfully store the data and provide it back to the owner whenever required.

SIMPLY ARCHIVES:

This problem tries to obtain and verify a proof that the data stored by a user at a remote data storage in the cloud (called cloud storage archives or simply archives) is not modified by the archive, and thereby the integrity of the data is assured. The cloud archive must not cheat the owner; cheating, in this context, means that the storage archive might delete some of the data or may modify some of the data. While developing proofs of data possession at untrusted cloud storage servers, we are often limited by the resources at the cloud server as well as at the client.

SENTINELS:

In this scheme, unlike in the key-hash approach, only a single key can be used irrespective of the size of the file or the number of files whose retrievability the verifier wants to check. Also, the archive needs to access only a small portion of the file F, unlike in the key-hash scheme, which required the archive to process the entire file F for each protocol verification. If the prover has modified or deleted a substantial portion of F, then with high probability it will also have suppressed a number of sentinels.
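The sentinel check can be pictured roughly as in the C# sketch below. This is a simplified illustration only: it assumes the verifier recorded the sentinel positions and values before archiving, and the byte-level interface to the archive (readByteFromArchive) is hypothetical.

using System;
using System.Collections.Generic;

class SentinelVerifier
{
    // Sentinel position -> expected sentinel byte, recorded before the file was archived.
    private readonly Dictionary<long, byte> sentinels;
    private readonly Random rng = new Random();

    public SentinelVerifier(Dictionary<long, byte> sentinels)
    {
        this.sentinels = sentinels;
    }

    // Challenge the archive for a few randomly chosen sentinel positions.
    // If a substantial portion of F was deleted or modified, some sentinels are
    // likely to have been suppressed and the comparison fails.
    public bool SpotCheck(Func<long, byte> readByteFromArchive, int challenges)
    {
        var positions = new List<long>(sentinels.Keys);
        for (int i = 0; i < challenges; i++)
        {
            long pos = positions[rng.Next(positions.Count)];
            if (readByteFromArchive(pos) != sentinels[pos])
            {
                return false;   // integrity violation detected
            }
        }
        return true;            // every sampled sentinel was intact
    }
}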


VERIFICATION PHASE:

Before storing the file at the archive, the verifier preprocesses the file, appends some metadata to the file and stores it at the archive. At the time of verification the verifier uses this metadata to verify the integrity of the data. It is important to note that our proof of data integrity protocol just checks the integrity of data, i.e. whether the data has been illegally modified or deleted. It does not prevent the archive from modifying the data.


CHAPTER 03

SYSTEM DESIGN:

Data Flow Diagram / Use Case Diagram / Flow Diagram:

The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.

The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.

A DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output. A DFD may be used to represent a system at any level of abstraction, and it may be partitioned into levels that represent increasing information flow and functional detail.


NOTATION:

SOURCE OR DESTINATION OF DATA:

External sources or destinations, which may be people or organizations or other entities.

DATA STORE:

Here the data referenced by a process is stored and retrieved.

PROCESS:

People, procedures or devices that produce data. The physical component is not

identified.

DATA FLOW:

Data moves in a specific direction from an origin to a destination. The data flow is a

“packet” of data.

MODELING RULES:

There are several common modeling rules when creating DFDs:

1. All processes must have at least one data flow in and one data flow out.

2. All processes should modify the incoming data, producing new forms of outgoing

data.

3. Each data store must be involved with at least one data flow.

4. Each external entity must be involved with at least one data flow.

5. A data flow must be attached to at least one process.


SDLC:

SPIRAL MODEL:

PROJECT ARCHITECTURE:


UML DIAGRAMS:

USE CASE:


CLASS:


SEQUENCE:


ACTIVITY:


DATA DICTIONARY:

ER DIAGRAM:


DFD DIAGRAMS:


CHAPTER 04

PROCESS SPECIFICATION (Techniques and Algorithm Used):

ALGORITHM:

META-DATA GENERATION:

Let the verifier V wish to store the file F with the archive, and let this file F consist of n file blocks. We initially preprocess the file and create metadata to be appended to the file. Let each of the n data blocks have m bits in them; this is a typical data file F which the client wishes to store in the cloud.

The metadata derived from each data block mi is encrypted using a suitable algorithm to give new, modified metadata Mi. Without loss of generality we show this process using a simple XOR operation. The encryption method can be improved to provide still stronger protection for the verifier’s data.

All the metadata bit blocks that are generated using the above procedure are concatenated together. This concatenated metadata should be appended to the file F before storing it at the cloud server. The file F, along with the appended metadata F̃, is archived with the cloud.
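A minimal sketch of this XOR step in C# is shown below. The per-block metadata is reduced to a single byte and the pseudo-random pattern comes from an HMAC; both are simplifying assumptions for illustration, not the exact construction used in the report.

using System;
using System.IO;
using System.Security.Cryptography;

static class MetadataGenerator
{
    // Produces F~ = F || M1 || ... || Mn for a file split into fixed-size blocks.
    public static byte[] AppendMetadata(byte[] file, int blockSize, byte[] key)
    {
        int blockCount = (file.Length + blockSize - 1) / blockSize;
        byte[] metadata = new byte[blockCount];

        using (var hmac = new HMACSHA256(key))
        {
            for (int i = 0; i < blockCount; i++)
            {
                // Stand-in for the selected bits of block i: here just its first byte.
                byte mi = file[i * blockSize];

                // Mi = mi XOR h(key, i), where h is a keyed pseudo-random pattern.
                byte pattern = hmac.ComputeHash(BitConverter.GetBytes(i))[0];
                metadata[i] = (byte)(mi ^ pattern);
            }
        }

        using (var output = new MemoryStream())
        {
            output.Write(file, 0, file.Length);            // original file F
            output.Write(metadata, 0, metadata.Length);    // appended metadata
            return output.ToArray();
        }
    }
}

At verification time the verifier can ask the archive for the relevant bits of a randomly chosen block and its stored Mi, recompute the pattern from its key, and compare; a mismatch indicates that the data was modified or deleted.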


SCREEN SHOTS:

OWNER


TPA


ADMIN


CHAPTER 05

TECHNOLOGY DESCRIPTION:

Software Environment

FEATURES OF .NET

Microsoft .NET is a set of Microsoft software technologies for rapidly building and

integrating XML Web services, Microsoft Windows-based applications, and Web

solutions. The .NET Framework is a language-neutral platform for writing programs that

can easily and securely interoperate. There’s no language barrier with .NET: there are

numerous languages available to the developer including Managed C++, C#, Visual

Basic and Java Script. The .NET framework provides the foundation for components to

interact seamlessly, whether locally or remotely on different platforms. It standardizes

common data types and communications protocols so that components created in

different languages can easily interoperate.

“.NET” is also the collective name given to various software components built upon

the .NET platform. These will be both products (Visual Studio.NET and Windows.NET

Server, for instance) and services (like Passport, .NET My Services, and so on).

THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment

within which programs run. The most important features are


Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.

Memory management, notably including garbage collection.

Checking and enforcing security restrictions on the running code.

Loading and executing programs, with version control and other such features.

The following features of the .NET framework are also worth description:

Managed Code

Managed code is code that targets .NET and which contains certain extra information, “metadata”, to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.

Managed Data

With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you’re using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications: data that doesn’t get garbage collected but instead is looked after by unmanaged code.

Common Type System

The CLR uses the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.


Common Language Specification

The CLR provides built-in support for language interoperability. To ensure that you can

develop managed code that can be fully used by developers using any programming

language, a set of language features and rules for using them called the Common

Language Specification (CLS) has been defined. Components that follow these rules and

expose only CLS features are considered CLS-compliant.

THE CLASS LIBRARY:

.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root

of the namespace is called System; this contains basic types like Byte, Double, Boolean,

and String, as well as Object. All objects derive from System.Object. As well as objects,

there are value types. Value types can be allocated on the stack, which can provide useful

flexibility. There are also efficient means of converting value types to object types if and

when necessary.

The set of classes is pretty comprehensive, providing collections, file, screen, and

network I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each providing

distinct areas of functionality, with dependencies between the namespaces kept to a

minimum.

LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET enables

developers to use their existing programming skills to build all types of applications and

XML Web services. The .NET framework supports new versions of Microsoft’s old

favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a

number of new additions to the family.


Visual Basic .NET has been updated to include many new and improved language

features that make it a powerful object-oriented programming language. These features

include inheritance, interfaces, and overloading, among others. Visual Basic now also supports structured exception handling, custom attributes and multithreading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant

language can use the classes, objects, and components you create in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the

enhancements made to the C++ language. Managed Extensions simplify the task of

migrating existing C++ applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid

Application Development”. Unlike other languages, its specification is just the grammar

of the language. It has no standard library of its own, and instead has been designed with

the intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language developers

into the world of XML Web Services and dramatically improves the interoperability of

Java-language programs with existing software written in a variety of other programming

languages.

ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for ActiveState’s Perl Dev Kit.

Other languages for which .NET compilers are available include

FORTRAN

COBOL

Eiffel


Fig. 1: The .NET Framework stack (ASP.NET and XML Web Services, Windows Forms, Base Class Libraries, Common Language Runtime, Operating System).

C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.

CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET the Finalize procedure is available. The Finalize procedure is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the Finalize procedure can be called only from the class it belongs to or from derived classes.


GARBAGE COLLECTION

Garbage collection is another feature of C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use. In C#.NET, the garbage collector checks for objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.

OVERLOADING

Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
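For example, a small illustration of method overloading (the class and method names here are only illustrative):

using System;

class Logger
{
    // Two methods share the same name but differ in their parameter lists;
    // the compiler selects the correct one from the arguments supplied.
    public void Write(string message)
    {
        Console.WriteLine(message);
    }

    public void Write(string message, int severity)
    {
        Console.WriteLine("[" + severity + "] " + message);
    }
}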

MULTITHREADING:

C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use try…catch…finally statements to create exception handlers. Using try…catch…finally statements, we can create robust and effective exception handlers to improve the performance of our application.
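A small example of such a handler (illustrative only):

using System;

class SafeDivision
{
    static int Divide(int a, int b)
    {
        try
        {
            return a / b;                         // may throw DivideByZeroException
        }
        catch (DivideByZeroException ex)
        {
            Console.WriteLine("Error: " + ex.Message);
            return 0;                             // handled gracefully
        }
        finally
        {
            // Always runs, whether or not an exception occurred,
            // so clean-up code can be placed here.
            Console.WriteLine("Division attempt finished.");
        }
    }
}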

THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.


OBJECTIVES OF .NET FRAMEWORK

1. To provide a consistent object-oriented programming environment whether object code is stored and executed locally, distributed over the Internet, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.

3. To eliminate performance problems.

There are different types of applications, such as Windows-based applications and Web-based applications.

FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called SQL

Server 2000 Analysis Services. The term OLAP Services has been replaced with the term

Analysis Services. Analysis Services also includes a new data mining component. The

Repository component available in SQL Server version 7.0 is now called Microsoft SQL

Server 2000 Meta Data Services. References to the component now use the term Meta

Data Services. The term repository is used only in reference to the repository engine

within Meta Data Services.

A SQL-SERVER database consists of several types of objects. They are:

1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO


TABLE:

A table is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View
2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view. We can specify what kind of data will be held.

Datasheet View

To add, edit or analyse the data itself, we work in the table's datasheet view mode.

QUERY:

A query is a question that has to be asked of the data. Access gathers data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view or performs an action on it, such as deleting or updating.


FULL PROJECT CODING, DATABASE WITH VIDEO TUTORIAL


HOW TO INSTALL DOCUMENT:

Execution help file

REQUIRED SOFTWARE:
1. MS Visual Studio 2008
2. SQL Server 2005
For WAP:
3. JDK 1.6
4. Nokia 5100 SDK

HOW TO ATTACH DATABASE:

STEP 1: Copy the database to the following path.
Path: C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data
or
Path: C:\Program Files\Microsoft SQL Server\MSSQL\Data

STEP 2: Then open SQL Server.

STEP 3: To attach the database, right click on Databases and click Attach.


Then the Attach Databases window will open.

STEP 4: Click the Add button in that window and choose the required database. Then click OK.


The database will be added to the database details.


Finally, click OK.

STEP 5: Then open MS Visual Studio 2008 for our project. In Server Explorer, right click on Data Connections and click Add Connection.


The Add Connection window will open. In that window, choose the data source as Microsoft SQL Server, give the server name, choose the database name and then click OK.


Then our database will be attached in Server Explorer.

STEP 6: Then change the appSettings in the web.config file. For that, right click on our database in Server Explorer and click Properties.


The Properties window will open.

STEP 7: Copy that connection string to the value attribute of the appSettings tag in the web.config file.


<appSettings>
  <add key="ConnectionString" value="Data Source=HOME\SQLEXPRESS;Initial Catalog=opinion;Integrated Security=True" />
</appSettings>
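Once this key is in place, code in the project can typically read it as in the sketch below. This assumes the web application references System.Configuration; the helper class name is illustrative and not taken from the project sources.

using System.Configuration;
using System.Data.SqlClient;

public static class Db
{
    public static SqlConnection Open()
    {
        // Reads the value configured under <appSettings> in web.config above.
        string connectionString = ConfigurationManager.AppSettings["ConnectionString"];

        SqlConnection connection = new SqlConnection(connectionString);
        connection.Open();
        return connection;
    }
}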

STEP 8: We used AJAX in our project, so add the AJAX tools to your system using the steps below.

1. Copy the AjaxControlToolkitBinary folder to any directory (i.e. any path) in your system.

2. Open any design page in our project, then click the Toolbox.

3. Then keep the mouse pointer on the General tab and right click on it.

4. Choose Add Tab.


A new tab will be created in the Toolbox.

5. Give a name to that tab, like “Ajax toolkits”.


6. Right click on that new tab and click Choose Items. The Choose Toolbox Items window will then open.


7. Click the Browse button and select the AjaxControlToolkit.dll file from the AjaxControlToolkitBinary folder (from wherever you saved that folder).

8. Then click OK.


Now all the AJAX tools are added to the Toolbox.

STEP 9: Finally, follow the given video file.


CHAPTER 06

TYPES OF TESTING:

BLACK & WHITE BOX TESTING:

Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.

White Box Testing

White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

UNIT TESTING:

Unit testing is usually conducted as part of a combined code and unit test phase of the

software lifecycle, although it is not uncommon for coding and unit testing to be

conducted as two distinct phases.

Test strategy and approach

Field testing will be performed manually and functional tests will be written in

detail.

Test objectives

All field entries must work properly.


Pages must be activated from the identified link.

The entry screen, messages and responses must not be delayed.

Features to be tested

Verify that the entries are of the correct format

No duplicate entries should be allowed

All links should take the user to the correct page.

SYSTEM TESTING:

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner.

There are various types of test. Each test type addresses a specific testing requirement.

INTEGRATION TESTING:

Software integration testing is the incremental integration testing of two or more

integrated software components on a single platform to produce failures caused by

interface defects.

The task of the integration test is to check that components or software

applications, e.g. components in a software system or – one step up – software

applications at the company level – interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects

encountered.


CHAPTER 07

CONCLUSION:

In this paper we have worked to facilitate the client in getting a proof of integrity of the data which he wishes to store in the cloud storage servers with bare minimum cost and effort. Our scheme was developed to reduce the computational and storage overhead of the client as well as to minimize the computational overhead of the cloud storage server. We also minimized the size of the proof of data integrity so as to reduce the network bandwidth consumption. Many of the schemes proposed earlier require the archive to perform tasks that need a lot of computational power to generate the proof of data integrity, but in our scheme the archive just needs to fetch and send a few bits of data to the client.

LIMITATIONS & FUTURE ENHANCEMENTS :

Apart from reducing storage costs, data outsourcing to the cloud also helps in reducing maintenance.

It avoids local storage of data.

It reduces the costs of storage, maintenance and personnel.

It reduces the chance of losing data through hardware failures.

It lets the owner verify that the archive is not cheating him.


REFERENCE & BIBLIOGRAPHY:

Good teachers are worth more than a thousand books; we have them in our department.

References Made From:

1. Beginning ASP.NET 4: in C# and VB by Imar Spaanjaars.

2. ASP.NET 4 Unleashed by Stephen Walther.

3. Programming ASP.NET 3.5 by Jesse Liberty, Dan Maharry, Dan Hurwitz.

4. Beginning ASP.NET 3.5 in C# 2008: From Novice to Professional, Second Edition by Matthew MacDonald.

5. Amazon Web Services (AWS), online at http://aws.amazon.com.

6. Google App Engine, online at http://code.google.com/appengine/.

7. Microsoft Azure, http://www.microsoft.com/azure/.

8. A. Agrawal et al. WS-BPEL extension for people (BPEL4People), version 1.0, 2007.

9. M. Amend et al. Web services human task (WS-HumanTask), version 1.0, 2007.

10. D. Brabham. Crowdsourcing as a model for problem solving: An introduction and cases.

11. Data Communications and Networking, by Behrouz A. Forouzan.

12. E. Mykletun, M. Narasimha, and G. Tsudik, “Authentication and integrity in outsourced databases,” Trans. Storage, vol. 2, no. 2, pp. 107–138, 2006.

13. D. X. Song, D. Wagner, and A. Perrig, “Practical techniques for searches on encrypted data,” in SP ’00: Proceedings of the 2000 IEEE Symposium on Security and Privacy. Washington, DC, USA: IEEE Computer Society, 2000.

14. A. Juels and B. S. Kaliski, Jr., “PORs: proofs of retrievability for large files,” in CCS ’07: Proceedings of the 14th ACM Conference on Computer and Communications Security. New York, NY, USA: ACM, 2007, pp. 584–597.

15. G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, “Provable data possession at untrusted stores,” in CCS ’07: Proceedings of the 14th ACM Conference on Computer and Communications Security. New York, NY, USA: ACM, 2007, pp. 598–609.

Sites Referred:

http://www.asp.net.com

http://www.dotnetspider.com/

http://www.dotnetspark.com

http://www.almaden.ibm.com/software/quest/Resources/

http://www.computer.org/publications/dlib

http://www.developerfusion.com/

Abbreviations:

POR - Proof of Retrievability

CLS - Common Language Specification

PDA - Personal Digital Assistant