
Data Security Technologies


Page 1: Data Security Technologies

8/6/2019 Data Security Technologies

http://slidepdf.com/reader/full/data-security-technologies 1/24

• 1 Data Security Technologies
  o 1.1 Disk Encryption
  o 1.2 Hardware based Mechanisms for Protecting Data
  o 1.3 Backups
  o 1.4 Data Masking
  o 1.5 Data Erasure
• 2 International Laws and Standards
  o 2.1 International Laws
  o 2.2 International Standards
• 3 See also

Disk encryption refers to encryption technology that encrypts data on a hard disk drive. Disk encryption typically takes the form of either software (see disk encryption software) or hardware (see disk encryption hardware). Disk encryption is often referred to as on-the-fly encryption ("OTFE") or transparent encryption.

Full disk encryption (or whole disk encryption) uses disk encryption software or hardware to encrypt every bit of data that goes on a disk or disk volume. Full disk encryption prevents unauthorized access to data storage. The term "full disk encryption" is often used to signify that everything on a disk is encrypted, including the programs that can encrypt bootable operating system partitions. Such software must still leave the master boot record (MBR), and thus part of the disk, unencrypted. There are, however, hardware-based full disk encryption and hybrid full disk encryption systems that can truly encrypt the entire boot disk, including the MBR.

Hardware based Mechanisms for Protecting Data

Software-based security solutions encrypt data to prevent it from being stolen. However, a malicious program or a hacker may corrupt the data in order to make it unrecoverable or unusable. Similarly, encrypted operating systems can be corrupted by a malicious program or a hacker, making the system unusable. Hardware-based security solutions can prevent read and write access to data and hence offer very strong protection against tampering and unauthorized access.

Hardware-based or hardware-assisted computer security offers an alternative to software-only computer security. Security tokens such as those using PKCS#11 may be more secure due to the physical access required in order to be compromised. Access is enabled only when the token is connected and the correct PIN is entered (see two-factor authentication). However, dongles can be used by anyone who gains physical access to them. Newer technologies in hardware-based security solve this problem, offering foolproof security for data.

How hardware-based security works: a hardware device allows a user to log in, log out, and set different privilege levels through manual actions. The device uses biometric technology to prevent malicious users from logging in, logging out, and changing privilege levels. The current state of a user of the device is read by controllers in peripheral devices such as hard disks. Illegal access by a malicious user or a malicious program is interrupted by the hard disk and DVD controllers based on the user's current state, making illegal access to data impossible. Hardware-based access control is more secure than protection provided by operating systems, since operating systems are vulnerable to malicious attacks by viruses and hackers. The data on hard disks can be corrupted after malicious access is obtained. With hardware-based protection, software cannot manipulate user privilege levels, so it is impossible for a hacker or a malicious program to gain access to secure data protected by hardware or to perform unauthorized privileged operations. The hardware protects the operating system image and file system privileges from being tampered with. Therefore, a completely secure system can be created using a combination of hardware-based security and secure system administration policies.

Backups

Backups are used to ensure that data which is lost can be recovered.

In information technology, a backup, or the process of backing up, refers to making copies of data so that these additional copies may be used to restore the original after a data loss event. The verb is back up, in two words, whereas the noun is backup (often used like an adjective in compound nouns).[1]

Backups are useful primarily for two purposes. The first is to restore a state following a disaster (called disaster recovery). The second is to restore small numbers of files after they have been accidentally deleted or corrupted. Data loss is also very common: 66% of internet users have suffered from serious data loss.[2]

Since a backup system contains at least one copy of all data worth saving, the data storage requirements are considerable. Organizing this storage space and managing the backup process is a complicated undertaking. A data repository model can be used to provide structure to the storage. In the modern era of computing there are many different types of data storage devices that are useful for making backups. There are also many different ways in which these devices can be arranged to provide geographic redundancy, data security, and portability.

Before data is sent to its storage location, it is selected, extracted, and manipulated. Many different techniques have been developed to optimize the backup procedure. These include optimizations for dealing with open files and live data sources, as well as compression, encryption, and de-duplication, among others. Many organizations and individuals try to have confidence that the process is working as expected, and work to define measurements and validation techniques. It is also important to recognize the limitations and human factors involved in any backup scheme.

Data repository models

Any backup strategy starts with a concept of a data repository. The backup data needs to be stored somehow, and probably should be organized to a degree. It can be as simple as a sheet of paper with a list of all backup tapes and the dates they were written, or a more sophisticated setup with a computerized index, catalog, or relational database. Different repository models have different advantages. This is closely related to choosing a backup rotation scheme.

Unstructured

An unstructured repository may simply be a stack of floppy disks or CD-R/DVD-R media with minimal information about what was backed up and when. This is the easiest to implement, but probably the least likely to achieve a high level of recoverability.

Full + incrementals

A full + incremental repository aims to make it more feasible to store several copies of the source data. At first, a full backup (of all files) is made. After that, any number of incremental backups can be made. There are many different types of incremental backups, but they all attempt to back up only a small amount of data (when compared to the size of a full backup). An incremental backup copies everything that changed after the last backup (full, differential or incremental). Restoring a whole system to a certain point in time would require locating the last full backup taken before that time and all the incremental backups that cover the period between the full backup and the particular point in time to which the system is supposed to be restored.[3] The scope of an incremental backup is typically defined as a range of time relative to other full or incremental backups. Different implementations of backup systems frequently use specialized or conflicting definitions of these terms.
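The point-in-time restore logic described above can be sketched in a few lines. This is an illustrative model rather than any particular backup product's API; the `Backup` record and `restore_chain` helper are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Backup:
    kind: str       # "full" or "incremental"
    taken_at: int   # timestamp (illustrative integer "hours")

def restore_chain(backups, point_in_time):
    """Return the backups needed to restore to point_in_time:
    the last full backup at or before that time, plus every
    incremental taken between that full and the target time."""
    eligible = [b for b in backups if b.taken_at <= point_in_time]
    fulls = [b for b in eligible if b.kind == "full"]
    if not fulls:
        raise ValueError("no full backup precedes the requested time")
    base = max(fulls, key=lambda b: b.taken_at)
    incs = sorted(
        (b for b in eligible
         if b.kind == "incremental" and b.taken_at > base.taken_at),
        key=lambda b: b.taken_at,
    )
    return [base] + incs
```

Note that the chain grows with every incremental taken since the last full, which is why full backups are still scheduled periodically.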

Differential backup

A differential backup copies files that have been created or changed since the last full backup. It does not mark files as having been backed up (in other words, the archive attribute is not cleared). If you are performing a combination of full and differential backups, restoring files and folders requires that you have the last full as well as the last differential backup.
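By contrast with an incremental chain, a differential restore never needs more than two sets. A minimal sketch, with backups modeled as hypothetical `(kind, timestamp)` tuples:

```python
def differential_restore_set(backups, point_in_time):
    """Return the backup sets needed under a full + differential
    scheme: the last full backup at or before the target time, plus
    the most recent differential taken after that full (if any)."""
    eligible = [b for b in backups if b[1] <= point_in_time]
    fulls = [b for b in eligible if b[0] == "full"]
    if not fulls:
        raise ValueError("no full backup precedes the requested time")
    base = max(fulls, key=lambda b: b[1])
    diffs = [b for b in eligible
             if b[0] == "differential" and b[1] > base[1]]
    return [base] + ([max(diffs, key=lambda b: b[1])] if diffs else [])
```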

Reverse delta

A reverse delta system stores the differences between current versions of a system and previous versions. A reverse delta backup will start with a normal full backup. After the full backup is performed, the system will periodically synchronize the full backup with the live copy, while storing the data necessary to reconstruct older versions. This can be done either using hard links or using binary diffs. This system works particularly well for large, slowly changing data sets. Examples of programs that use this method are rdiff-backup and Time Machine.
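The reverse-delta idea can be illustrated with a toy model in which a "system" is just a dict of file names to contents. The helper names are hypothetical; real tools such as rdiff-backup store compact binary diffs rather than whole old values.

```python
def make_reverse_delta(new_state, old_state):
    """Record what must change to turn new_state back into old_state.
    States are dicts of file name -> content; None marks a file that
    must be deleted to go back."""
    delta = {}
    for name, content in old_state.items():
        if new_state.get(name) != content:
            delta[name] = content          # restore the old content
    for name in new_state:
        if name not in old_state:
            delta[name] = None             # file did not exist before
    return delta

def apply_reverse_delta(state, delta):
    """Apply a reverse delta to the current state, yielding the older version."""
    out = dict(state)
    for name, content in delta.items():
        if content is None:
            out.pop(name, None)
        else:
            out[name] = content
    return out
```

After each synchronization, only the newest full copy and a stack of these small deltas need to be kept; applying deltas in reverse order walks back through history.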

Continuous data protection

Instead of scheduling periodic backups, the system immediately logs every change on the host system. This is generally done by saving byte- or block-level differences rather than file-level differences.[4] It differs from simple disk mirroring in that it enables a roll-back of the log and thus restoration of an old image of the data.
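A continuous-data-protection log can be mimicked with an append-only journal of block writes; replaying the journal up to a chosen timestamp reconstructs the image at that moment. `ChangeJournal` is an illustrative name, not a real product's API:

```python
class ChangeJournal:
    """Toy continuous-data-protection journal: every write to a block
    is logged with its timestamp, so any past image can be rebuilt."""
    def __init__(self):
        self.log = []  # (timestamp, block_number, data), appended in time order

    def write(self, timestamp, block, data):
        self.log.append((timestamp, block, data))

    def image_at(self, timestamp):
        """Replay the log up to the given time to reconstruct the image."""
        image = {}
        for when, block, data in self.log:
            if when > timestamp:
                break
            image[block] = data
        return image
```

This is the roll-back property that distinguishes it from plain mirroring: a mirror only ever holds the latest state, while the journal holds every state.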

Full system backup

This type of backup is designed to allow an entire PC to be recovered to "bare metal" without any installation of operating system, application software or data. Most users understand that a backup will prevent "data" from being lost. The expense in a full system recovery is in the hours that it takes for a technician to rebuild a machine to the point of restoring the last data backup. So, a full system backup makes a complete image of the computer so that, if needed, it can be copied back to the PC, usually using some type of bespoke software such as Ghost, and the user can carry on from that point.

Backups can also be used when migrating from one operating system to another.

Storage media

Regardless of the repository model that is used, the data has to be stored on some data storage medium somewhere.

Magnetic tape

Magnetic tape has long been the most commonly used medium for bulk data storage, backup, archiving, and interchange. Tape has typically had an order of magnitude better capacity/price ratio when compared to hard disk, but recently the ratios for tape and hard disk have become a lot closer.[5] There are myriad formats, many of which are proprietary or specific to certain markets like mainframes or a particular brand of personal computer. Tape is a sequential access medium, so even though access times may be poor, the rate of continuously writing or reading data can actually be very fast. Some new tape drives are even faster than modern hard disks. A principal advantage of tape is that it has been used for this purpose for decades (much longer than any alternative) and its characteristics are well understood.

Hard disk

The capacity/price ratio of hard disk has been rapidly improving for many years. This is making it more competitive with magnetic tape as a bulk storage medium. The main advantages of hard disk storage are low access times, availability, capacity and ease of use.[6] External disks can be connected via local interfaces like SCSI, USB, FireWire, or eSATA, or via longer-distance technologies like Ethernet, iSCSI, or Fibre Channel. Some disk-based backup systems, such as Virtual Tape Libraries, support data deduplication, which can dramatically reduce the amount of disk storage capacity consumed by daily and weekly backup data. The main disadvantages of hard disk backups are that they are easily damaged, especially while being transported (e.g., for off-site backups), and that their stability over periods of years is a relative unknown.

Optical storage

Blu-ray Discs dramatically increase the amount of data possible on a single optical storage disc. Systems containing Blu-ray discs can store massive amounts of data and be more cost-efficient than hard drives and magnetic tape. Some optical storage systems allow for cataloged data backups without human contact with the discs, allowing for longer data integrity. A recordable CD can be used as a backup device. One advantage of CDs is that they can be restored on any machine with a CD-ROM drive. (In practice, writable CD-ROMs are not always universally readable.) In addition, recordable CDs are relatively cheap. Another common format is recordable DVD. Many optical disk formats are WORM type, which makes them useful for archival purposes since the data can't be changed. Other rewritable formats, such as CD-RW or DVD-RAM, can also be utilized.

Floppy disk

During the 1980s and early 1990s, many personal/home computer users associated backing up mostly with copying to floppy disks. The low data capacity of a floppy disk makes it an unpopular and obsolete choice today.[7]

Solid state storage

Also known as flash memory, thumb drives, USB flash drives, CompactFlash, SmartMedia, Memory Stick, Secure Digital cards, etc., these devices are relatively costly for their low capacity, but offer excellent portability and ease of use.

Remote backup service

As broadband internet access becomes more widespread, remote backup services are gaining in popularity. Backing up via the internet to a remote location can protect against some worst-case scenarios, such as fires, floods, or earthquakes, which would destroy any backups in the immediate vicinity along with everything else. There are, however, a number of drawbacks to remote backup services. First, internet connections (particularly domestic broadband connections) are generally substantially slower than local data storage devices, which can be a problem for people who generate or modify large amounts of data. Secondly, users need to trust a third-party service provider with both the privacy and the integrity of backed-up data. The risk associated with putting control of personal or sensitive data in the hands of a third party can be managed by encrypting sensitive data so that its contents cannot be viewed without access to the secret key. Ultimately the backup service must itself be using one of the above methods, so this could be seen as a more complex way of doing traditional backups.

Managing the data repository

Regardless of the data repository model or data storage media used for backups, a balance needs to be struck between accessibility, security and cost. These media management methods are not mutually exclusive and are frequently combined to meet the needs of the situation. Using on-line disks for staging data before it is sent to a near-line tape library is a common example.

On-line

On-line backup storage is typically the most accessible type of data storage, which can begin a restore in milliseconds. A good example would be an internal hard disk or a disk array (perhaps connected to a SAN). This type of storage is very convenient and speedy, but is relatively expensive. On-line storage is quite vulnerable to being deleted or overwritten, either by accident, by intentional malevolent action, or in the wake of a data-deleting virus payload.

Near-line

Near-line storage is typically less accessible and less expensive than on-line storage, but still useful for backup data storage. A good example would be a tape library with restore times ranging from seconds to a few minutes. A mechanical device is usually involved in moving media units from storage into a drive where the data can be read or written. Generally it has safety properties similar to on-line storage.

Off-line

Off-line storage requires some direct human action to make access to the storage media physically possible. This action is typically inserting a tape into a tape drive or plugging in a cable that allows a device to be accessed. Because the data is not accessible via any computer except during limited periods in which it is written or read back, it is largely immune to a whole class of on-line backup failure modes. Access time will vary depending on whether the media is on-site or off-site.

Off-site data protection

To protect against a disaster or other site-specific problem, many people choose to send backup media to an off-site vault. The vault can be as simple as a system administrator's home office or as sophisticated as a disaster-hardened, temperature-controlled, high-security bunker with facilities for backup media storage. Importantly, a data replica can be off-site but also on-line (e.g., an off-site RAID mirror). Such a replica has fairly limited value as a backup, and should not be confused with an off-line backup.

Backup site or disaster recovery center (DR center)

In the event of a disaster, the data on backup media alone will not be sufficient to recover. Computer systems onto which the data can be restored, and properly configured networks, are necessary too. Some organizations have their own data recovery centers that are equipped for this scenario. Other organizations contract this out to a third-party recovery center. Because a DR site is itself a huge investment, backing up is very rarely considered the preferred method of moving data to a DR site. A more typical way would be remote disk mirroring, which keeps the DR data as up to date as possible.

Selection and extraction of data

A successful backup job starts with selecting and extracting coherent units of data. Most data on modern computer systems is stored in discrete units known as files. These files are organized into filesystems. Files that are actively being updated can be thought of as "live" and present a challenge to back up. It is also useful to save metadata that describes the computer or the filesystem being backed up.

Deciding what to back up at any given time is a harder process than it seems. By backing up too much redundant data, the data repository will fill up too quickly. Backing up an insufficient amount of data can eventually lead to the loss of critical information.

Files

Copying files

Making copies of files is the simplest and most common way to perform a backup. A means to perform this basic function is included in all backup software and all operating systems.
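In Python, for example, this basic function is a thin wrapper around the standard library; `shutil.copy2` preserves timestamps and permission bits where the platform allows. A minimal sketch using throwaway directories:

```python
import os
import shutil
import tempfile

def copy_tree(source, destination):
    """Simplest possible backup: recursively copy every file,
    preserving metadata with shutil.copy2."""
    shutil.copytree(source, destination, copy_function=shutil.copy2)

# Usage sketch with temporary directories:
src = tempfile.mkdtemp()
with open(os.path.join(src, "notes.txt"), "w") as f:
    f.write("important data")
dst = os.path.join(tempfile.mkdtemp(), "backup")
copy_tree(src, dst)
```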

Partial file copying

Instead of copying whole files, one can limit the backup to only the blocks or bytes within a file that have changed in a given period of time. This technique can use substantially less storage space on the backup medium, but requires a high level of sophistication to reconstruct files in a restore situation. Some implementations require integration with the source filesystem.
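The changed-block detection at the heart of this technique can be sketched by hashing fixed-size blocks of the old and new file contents and keeping only the blocks whose hashes differ (the function name and block size are illustrative):

```python
import hashlib

BLOCK_SIZE = 4096

def changed_blocks(old_bytes, new_bytes, block_size=BLOCK_SIZE):
    """Compare two versions of a file block by block and return
    {block_index: new_block_data} for blocks whose contents differ."""
    changes = {}
    length = max(len(old_bytes), len(new_bytes))
    for i in range(0, length, block_size):
        old = old_bytes[i:i + block_size]
        new = new_bytes[i:i + block_size]
        if hashlib.sha256(old).digest() != hashlib.sha256(new).digest():
            changes[i // block_size] = new
    return changes
```

Only the returned blocks need to be written to the backup medium; the restore side must splice them back into the previous version, which is the "high level of sophistication" referred to above.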

Filesystems

Filesystem dump

Instead of copying files within a filesystem, a copy of the whole filesystem itself can be made. This is also known as a raw partition backup and is related to disk imaging. The process usually involves unmounting the filesystem and running a program like dump. This type of backup has the possibility of running faster than a backup that simply copies files. A feature of some dump software is the ability to restore specific files from the dump image.

Identification of changes

Some filesystems have an archive bit for each file that says it was recently changed. Some backup software instead looks at the date of the file and compares it with the last backup to determine whether the file was changed.
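The date-comparison approach can be sketched with the standard library: walk the tree and keep any file whose modification time is newer than the previous backup's timestamp (the helper name is hypothetical):

```python
import os

def files_changed_since(root, last_backup_time):
    """Walk a directory tree and return paths whose modification
    time is newer than the previous backup, mimicking software that
    compares file dates instead of using an archive bit."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return changed
```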

Versioning file system

A versioning filesystem keeps track of all changes to a file and makes those changes accessible to the user. Generally this gives access to any previous version, all the way back to the file's creation time. An example of this is the Wayback versioning filesystem for Linux.[8]

If a computer system is in use while it is being backed up, the possibility of files being open for reading or writing is real. If a file is open, the contents on disk may not correctly represent what the owner of the file intends. This is especially true for database files of all kinds. The term fuzzy backup can be used to describe a backup of live data that looks like it ran correctly, but does not represent the state of the data at any single point in time. This is because the data being backed up changed in the period of time between when the backup started and when it finished. For databases in particular, fuzzy backups are worthless.

Snapshot backup

A snapshot is an instantaneous function of some storage systems that presents a copy of the file system as if it were frozen at a specific point in time, often by a copy-on-write mechanism. An effective way to back up live data is to temporarily quiesce it (e.g., close all files), take a snapshot, and then resume live operations. At this point the snapshot can be backed up through normal methods.[9] While a snapshot is very handy for viewing a filesystem as it was at a different point in time, it is hardly an effective backup mechanism by itself.
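The copy-on-write mechanism behind snapshots can be illustrated with a toy block store: taking a snapshot records only which blocks exist at that instant, and a block's old contents are copied aside only when the live copy later overwrites it. All names here are illustrative:

```python
class SnapshotStore:
    """Toy copy-on-write snapshot: taking a snapshot is O(1); block
    contents are duplicated only when the live copy overwrites them."""
    def __init__(self):
        self.live = {}        # block -> current data
        self.snapshots = []   # each: {"frozen": set of blocks, "saved": {block: old data}}

    def take_snapshot(self):
        snap = {"frozen": set(self.live), "saved": {}}
        self.snapshots.append(snap)
        return len(self.snapshots) - 1

    def write(self, block, data):
        for snap in self.snapshots:
            # Preserve the pre-write contents the first time a frozen
            # block changes after the snapshot was taken.
            if block in snap["frozen"] and block not in snap["saved"]:
                snap["saved"][block] = self.live[block]
        self.live[block] = data

    def read_snapshot(self, snap_id):
        """View the store as it was when the snapshot was taken."""
        snap = self.snapshots[snap_id]
        return {b: snap["saved"].get(b, self.live[b]) for b in snap["frozen"]}
```

Because unchanged blocks are shared with the live copy, the snapshot alone is not a backup: losing the underlying store loses the snapshot too, which is why the text recommends copying the snapshot out through normal backup methods.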

Open file backup

Many backup software packages feature the ability to handle open files in backup operations. Some simply check for openness and try again later. File locking is useful for regulating access to open files.

When attempting to understand the logistics of backing up open files, one must consider that the backup process could take several minutes to back up a large file such as a database. In order to back up a file that is in use, it is vital that the entire backup represent a single-moment snapshot of the file, rather than a simple copy of a read-through. This represents a challenge when backing up a file that is constantly changing. Either the database file must be locked to prevent changes, or a method must be implemented to ensure that the original snapshot is preserved long enough to be copied, all while changes are being preserved. Backing up a file while it is being changed, in a manner that causes the first part of the backup to represent data before changes occur to be combined with later parts of the backup after the change, results in a corrupted file that is unusable, as most large files contain internal references between their various parts that must remain consistent throughout the file.

Cold database backup

During a cold backup, the database is closed or locked and not available to users. The datafiles do not change during the backup process, so the database is in a consistent state when it is returned to normal operation.[10]

Hot database backup

Some database management systems offer a means to generate a backup image of the database while it is online and usable ("hot"). This usually includes an inconsistent image of the data files plus a log of changes made while the procedure is running. Upon a restore, the changes in the log files are reapplied to bring the database in sync.[11]

Metadata

Not all information stored on the computer is stored in files. Accurately recovering a complete system from scratch requires keeping track of this non-file data too.

System description

System specifications are needed to procure an exact replacement after a disaster.

Boot sector

The boot sector can sometimes be recreated more easily than it can be saved. Still, it usually isn't a normal file and the system won't boot without it.

Partition layout

The layout of the original disk, as well as partition tables and filesystem settings, is needed to properly recreate the original system.

File metadata

Each file's permissions, owner, group, ACLs, and any other metadata need to be backed up for a restore to properly recreate the original environment.

System metadata

Different operating systems have different ways of storing configuration information. Microsoft Windows keeps a registry of system information that is more difficult to restore than a typical file.

Manipulation of data and dataset optimisation

It is frequently useful or required to manipulate the data being backed up to optimize the backup process. These manipulations provide many benefits, including improved backup speed, restore speed, data security, media usage and reduced bandwidth requirements.

Compression

Various schemes can be employed to shrink the size of the source data to be stored so that it uses less storage space. Compression is frequently a built-in feature of tape drive hardware.

De-duplication

When multiple similar systems are backed up to the same destination storage device, there exists the potential for much redundancy within the backed-up data. For example, if 20 Windows workstations were backed up to the same data repository, they might share a common set of system files. The data repository only needs to store one copy of those files to be able to restore any one of those workstations. This technique can be applied at the file level or even on raw blocks of data, potentially resulting in a massive reduction in required storage space. Deduplication can occur on a server before any data moves to backup media, sometimes referred to as source/client-side deduplication. This approach also reduces the bandwidth required to send backup data to its target media. The process can also occur at the target storage device, sometimes referred to as inline or back-end deduplication.
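Block-level deduplication is commonly built on content-addressed storage: each unique chunk is stored once under its cryptographic hash, and files become lists of hashes. A minimal sketch (class and method names are illustrative; real systems use variable-size chunking and far larger chunks than the toy 4-byte size here):

```python
import hashlib

class DedupStore:
    """Content-addressed repository sketch: each unique chunk is kept
    once under its SHA-256 digest; a file is a list of digests."""
    def __init__(self):
        self.chunks = {}   # digest -> chunk data
        self.files = {}    # file name -> [digest, ...]

    def add_file(self, name, data, chunk_size=4):
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # stored only once
            digests.append(digest)
        self.files[name] = digests

    def restore_file(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])
```

Computing the digests on the client before sending anything is the source-side variant described above; computing them at the repository is the inline/back-end variant.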

Duplication

Sometimes backup jobs are duplicated to a second set of storage media. This can be done to rearrange the backup images to optimize restore speed, or to have a second copy at a different location or on a different storage medium.

Encryption

High-capacity removable storage media such as backup tapes present a data security risk if they are lost or stolen.[12] Encrypting the data on these media can mitigate this problem, but presents new problems. Encryption is a CPU-intensive process that can slow down backup speeds, and the security of the encrypted backups is only as effective as the security of the key management policy.

Multiplexing

When there are many more computers to be backed up than there are destination storage devices, the ability to use a single storage device with several simultaneous backups can be useful.

Refactoring

The process of rearranging the backup sets in a data repository is known as refactoring. For example, if a backup system uses a single tape each day to store the incremental backups for all the protected computers, restoring one of the computers could potentially require many tapes. Refactoring could be used to consolidate all the backups for a single computer onto a single tape. This is especially useful for backup systems that do incrementals-forever style backups.
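Refactoring is essentially a regrouping pass over the repository. A sketch of the tape-consolidation example above, modeling each day's shared tape as a dict of machine name to backup image (all names are illustrative):

```python
def refactor_by_machine(daily_tapes):
    """Regroup backup images: the input is a list of per-day tapes,
    each holding images for many machines; the output maps each
    machine to all of its (day, image) pairs, so that restoring one
    machine needs only its consolidated set."""
    consolidated = {}
    for day, images in enumerate(daily_tapes):
        for machine, image in images.items():
            consolidated.setdefault(machine, []).append((day, image))
    return consolidated
```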

Staging

Sometimes backup jobs are copied to a staging disk before being copied to tape. This process is sometimes referred to as D2D2T, an acronym for Disk-to-Disk-to-Tape. This can be useful if there is a problem matching the speed of the final destination device with the source device, as is frequently faced in network-based backup systems. It can also serve as a centralized location for applying other data manipulation techniques.

Managing the backup process

It is important to understand that backing up is a process. As long as new data is being created and changes are being made, backups will need to be updated. Individuals and organizations with anything from one computer to thousands (or even millions) of computer systems all have requirements for protecting data. While the scale is different, the objectives and limitations are essentially the same. Likewise, those who perform backups need to know to what extent they were successful, regardless of scale.

Objectives

Recovery point objective (RPO)

The point in time that the restarted infrastructure will reflect. Essentially, this is the roll-back that will be experienced as a result of the recovery. The most desirable RPO would be the point just prior to the data loss event. Making a more recent recovery point achievable requires increasing the frequency of synchronization between the source data and the backup repository.[13]

Recovery time objective (RTO)

 The amount of time elapsed between disaster and restoration of business

functions.[14] 
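These two objectives can be made concrete with a small calculation. The sketch below (hypothetical function names, not from any backup product) computes the achieved RPO and RTO from three timestamps: the last good backup, the failure, and the moment service is restored.

```python
from datetime import datetime

def achieved_rpo(last_backup: datetime, failure: datetime) -> float:
    """Hours of lost changes: the gap between the last good backup and the failure."""
    return (failure - last_backup).total_seconds() / 3600

def achieved_rto(failure: datetime, restored: datetime) -> float:
    """Hours of downtime: the gap between the failure and restored service."""
    return (restored - failure).total_seconds() / 3600

# A nightly backup at 02:00, a failure at 14:30, service restored at 18:30:
rpo = achieved_rpo(datetime(2019, 8, 6, 2, 0), datetime(2019, 8, 6, 14, 30))
rto = achieved_rto(datetime(2019, 8, 6, 14, 30), datetime(2019, 8, 6, 18, 30))
print(rpo, rto)  # 12.5 hours of lost changes, 4.0 hours of downtime
```

Shortening the achieved RPO means backing up more often; shortening the RTO means speeding up the restore path.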

Data security 

In addition to preserving access to data for its owners, data must be restricted from unauthorized access. Backups must be performed in a

manner that does not compromise the original owner's undertaking. This can

be achieved with data encryption and proper media handling policies.

Limitations

An effective backup scheme will take into consideration the limitations of the situation.

Backup window

The period of time when backups are permitted to run on a system is called the backup window. This is typically the time when the system sees the least

usage and the backup process will have the least amount of interference with

normal operations. The backup window is usually planned with users'

convenience in mind. If a backup extends past the defined backup window, a

decision is made whether it is more beneficial to abort the backup or to

lengthen the backup window.
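A backup window that crosses midnight is a common source of scheduling mistakes. As a minimal sketch (a hypothetical helper, not part of any particular backup tool), a scheduler might check whether the current time falls inside the window before starting a job:

```python
from datetime import datetime, time

def within_backup_window(now: datetime, start: time, end: time) -> bool:
    """True if `now` falls inside the backup window, which may cross midnight."""
    t = now.time()
    if start <= end:                 # same-day window, e.g. 01:00-05:00
        return start <= t <= end
    return t >= start or t <= end    # window crossing midnight, e.g. 22:00-04:00

print(within_backup_window(datetime(2019, 8, 6, 23, 15), time(22, 0), time(4, 0)))  # True
```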

Performance impact

All backup schemes have some performance impact on the system being

backed up. For example, for the period of time that a computer system is being backed up, the hard drive is busy reading files for the purpose of

backing up, and its full bandwidth is no longer available for other tasks. Such

impacts should be analyzed.

Costs of hardware, software, labor


All types of storage media have a finite capacity with a real cost. Matching

the correct amount of storage capacity (over time) with the backup needs is

an important part of the design of a backup scheme. Any backup scheme has

some labor requirement, but complicated schemes have considerably higher

labor requirements. The cost of commercial backup software can also be

considerable.

Network bandwidth

Distributed backup systems can be affected by limited network bandwidth.

Implementation

Meeting the defined objectives in the face of the above limitations can be a difficult task. The tools and concepts below can make that task more achievable.

Scheduling

Using a job scheduler can greatly improve the reliability and consistency of 

backups by removing part of the human element. Many backup software

packages include this functionality.
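The standard library's `sched` module is enough to sketch the idea; a real backup package would add persistence, retries, and logging (the job function below is a placeholder, not a real backup routine):

```python
import sched
import time

def run_backup(name: str) -> None:
    # Placeholder for the real backup job (copy files, write to tape, etc.)
    print(f"backup of {name} started")

scheduler = sched.scheduler(time.time, time.sleep)
# Queue two jobs; delays are in seconds, and the lower priority number wins on ties.
scheduler.enter(0.1, 1, run_backup, argument=("home directories",))
scheduler.enter(0.2, 1, run_backup, argument=("mail spool",))
scheduler.run()  # blocks until all queued jobs have run
```

In practice this role is usually filled by cron or the scheduler built into the backup software itself; the point is that a machine, not a person, decides when each job starts.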

Authentication

Over the course of regular operations, the user accounts and/or system

agents that perform the backups need to be authenticated at some level. The

power to copy all data off or onto a system requires unrestricted access.

Using an authentication mechanism is a good way to prevent the backup

scheme from being used for unauthorized activity.

Chain of trust 

Removable storage media are physical items and must only be handled by

trusted individuals. Establishing a chain of trusted individuals (and vendors)

is critical to defining the security of the data.

Measuring the process

To ensure that the backup scheme is working as expected, the process needs to include

monitoring key factors and maintaining historical data.

Backup validation 

(also known as "backup success validation") The process by which owners of 

data can get information about how their data was backed up. This same

process is also used to demonstrate compliance with regulatory bodies outside the organization; for example, an insurance company might be required under


HIPAA to show "proof" that their patient data are meeting records retention

requirements.[15] Disaster, data complexity, data value and increasing

dependence upon ever-growing volumes of data all contribute to the anxiety

around and dependence upon successful backups to ensure business

continuity. For that reason, many organizations rely on third-party or

"independent" solutions to test, validate, and optimize their backup operations (backup reporting).

Reporting

In larger configurations, reports are useful for monitoring media usage,

device status, errors, vault coordination and other information about the

backup process.

Logging

In addition to the history of computer generated reports, activity and change

logs are useful for monitoring backup system events.

Validation

Many backup programs make use of checksums or hashes to validate that

the data was accurately copied. These offer several advantages. First, they

allow data integrity to be verified without reference to the original file: if the

file as stored on the backup medium has the same checksum as the saved

value, then it is very probably correct. Second, some backup programs can

use checksums to avoid making redundant copies of files, to improve backup

speed. This is particularly useful for the de-duplication process.
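As a minimal sketch of checksum-based validation and redundant-copy avoidance (function names are illustrative, not from any particular backup product):

```python
import hashlib

def file_checksum(path: str, algo: str = "sha256") -> str:
    """Hash the file in fixed-size chunks so large files need not fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_backup(path: str, recorded_checksum: str) -> bool:
    """Skip copying when the file's current checksum matches the recorded one."""
    return file_checksum(path) != recorded_checksum
```

The same digests can drive de-duplication: a file whose checksum matches an already-stored copy needs only a reference, not a second copy.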

Monitored backup

Backup processes are monitored by a third party monitoring center. This

center alerts users to any errors that occur during automated backups.

Monitored backup requires software capable of pinging the monitoring

center's servers in the case of errors. Some monitoring services also allow

collection of historical metadata that can be used for Storage Resource Management purposes, such as projecting data growth and locating redundant primary storage capacity and reclaimable backup capacity. The Wizards Storage Portal is an example of a solution that monitors IBM's well-known Tivoli Storage Manager (TSM) solution.



Confusion

Due to a considerable overlap in technology, backups and backup systems are frequently confused with archives and fault-tolerant systems. Backups differ from archives in the sense that archives are the primary copy of data, usually put away for future use, while backups are a secondary copy of data, kept on hand to replace the original item. Backup systems differ from fault-tolerant systems in the sense that backup systems assume that a fault will cause a data loss event, while fault-tolerant systems assume that it will not.

Advice

• The more important the data stored on the computer, the greater the need for backing it up.

• A backup is only as useful as its associated restore strategy. For critical systems and data, the restoration process must be tested.

• Storing the copy near the original is unwise, since many disasters such as fire, flood, theft, and electrical surges are likely to damage the backup at the same time. In these cases, both the original and the backup medium are likely to be lost.

• Automated backup and scheduling should be considered, as manual backups can be affected by human error.

• Backups can fail for a wide variety of reasons. A verification or monitoring strategy is an important part of a successful backup plan.

• Multiple backups on different media, stored in different locations, should be used for all critical information.

• Backed-up archives should be stored in open and standard formats, especially when the goal is long-term archiving. Recovery software and processes may have changed, and software may not be available to restore data saved in proprietary formats.

• System administrators and others working in the information technology field are routinely fired for not devising and maintaining backup processes suitable to their organization.

• Even with a tape backup system in place, an additional backup to an external hard disk with an automatic backup program provides a second, independent copy and makes it easy to verify the backed-up files.

Events

• In 1996, during a fire at the headquarters of Crédit Lyonnais, a major bank in Paris, system administrators ran into the burning building to rescue backup tapes because they did not have off-site copies. Crucial bank archives and computer data were lost.[16][17]

• Privacy Rights Clearinghouse has documented[18] 16 instances of stolen or lost backup tapes (among major organizations) in 2005 and 2006. Affected organizations included Bank of America, Ameritrade, Citigroup, and Time Warner.

• On 3 January 2008, an email server crashed at TeliaSonera, a major Nordic telecom company and internet service provider. It was subsequently discovered that the last serviceable backup set was from 15 December 2007. Three hundred thousand customer email accounts were affected.

Data Masking

Data masking of structured data is the process of obscuring (masking) specific data within a database table or cell to ensure that data security is maintained and sensitive information is not exposed to unauthorized personnel. This may include masking the data from users (for example, so banking customer representatives can only see the last 4 digits of a customer's national identity number), developers (who need real production data to test new software releases but should not be able to see sensitive financial data), outsourcing vendors, etc.
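A minimal masking routine for the banking example above might look like the following sketch (real products apply masking inside the database or at the application layer; the function name is illustrative):

```python
def mask_identifier(value: str, visible: int = 4, mask_char: str = "*") -> str:
    """Mask all but the last `visible` characters of a sensitive identifier."""
    digits = value.replace("-", "").replace(" ", "")  # drop common separators
    if len(digits) <= visible:
        return mask_char * len(digits)  # too short to reveal anything
    return mask_char * (len(digits) - visible) + digits[-visible:]

print(mask_identifier("123-45-6789"))  # *****6789
```

The masked value stays the same length and keeps the trailing digits, so support staff can still confirm an identity without ever seeing the full number.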

Data erasure (also called data clearing) is a software-based method of overwriting data that completely destroys all electronic data residing on a hard disk drive or other digital media. Permanent data erasure goes beyond basic file deletion commands, which only remove direct pointers to data disk sectors and make data recovery possible with common software tools. Unlike degaussing and physical destruction, which render the storage media unusable, data erasure removes all information while leaving the disk operable, preserving IT assets and the environment.

Software-based overwriting uses a software application to write patterns of meaningless data onto each of a hard drive's sectors. There are key differentiators between data erasure and other overwriting methods, which can leave data intact and raise the risk of data breach or spill, identity theft, and failure to achieve regulatory compliance. Data erasure also provides multiple overwrites so that it supports recognized government and industry standards. It provides verification of data removal, which is necessary for meeting certain standards.

To protect data on lost or stolen media, some data erasure applications remotely destroy data if the password is incorrectly entered. Data erasure tools can also target specific data on a disk for routine erasure, providing a hacking protection method that is less time-consuming than encryption.

Information technology (IT) assets commonly hold large volumes of confidential data. Social security numbers, credit card numbers, bank details, medical history, and classified information are often stored on computer hard drives or servers and can inadvertently or intentionally make their way onto other media such as printer, USB, flash, Zip, Jaz, and REV drives.

Data breach


Increased storage of sensitive data, combined with rapid technological change and the shorter lifespan of IT assets, has driven the need for permanent data erasure of electronic devices as they are retired or refurbished. Also, compromised networks and laptop theft and loss, as well as that of other portable media, are increasingly common sources of data breaches.

If data erasure does not occur when a disk is retired or lost, an organization or user faces the possibility that the data will be stolen and compromised, leading to identity theft, loss of corporate reputation, threats to regulatory compliance, and financial impacts. Companies have spent nearly $5 million on average to recover when corporate data was lost or stolen.[1] High-profile incidents of data theft include:

• Oklahoma Corporation Commission (2008-05-21): Server sold at auction compromises more than 5,000 Social Security numbers.[2]

• University of Florida College of Medicine, Jacksonville (2008-05-20): Photographs and identifying information of 1,900 people on an improperly disposed computer.[3]

• Compass Bank (2008-03-21): Stolen hard drive contains 1,000,000 customer records.[4]

• Lifeblood (2008-02-13): Missing laptops contain personal information, including dates of birth and some Social Security numbers, of 321,000 people.[5]

• Hannaford (2008-03-17): Breach exposes 4.2 million credit and debit cards.[6]

• CardSystems Solutions (2005-06-19): Credit card breach exposes 40 million accounts.[7]

Regulatory compliance

Strict industry standards and government regulations are in place that force organizations to mitigate the risk of unauthorized exposure of confidential corporate and government data. These regulations include HIPAA (Health Insurance Portability and Accountability Act); FACTA (The Fair and Accurate Credit Transactions Act of 2003); GLB (Gramm-Leach-Bliley); the Sarbanes-Oxley Act (SOx); and the Payment Card Industry Data Security Standard (PCI DSS). Failure to comply can result in fines and damage to company reputation, as well as civil and criminal liability.

Preserving assets and the environment

Data erasure offers an alternative to physical destruction and degaussing for secure removal of all disk data. Physical destruction and degaussing destroy the digital media, requiring disposal and contributing to electronic waste while negatively impacting the carbon footprint of individuals and companies.[8] Hard drives are nearly 100% recyclable and can be collected at no charge from a variety of hard drive recyclers after they have been sanitized.

Limitations

Data erasure through overwriting only works on hard drives that are functioning and writing to all sectors. Bad sectors cannot usually be overwritten but may contain recoverable information. Software-driven data erasure could also be compromised by malicious code.[9]


Differentiators

Software-based data erasure uses a special application to write a combination of 1's and 0's onto each hard drive sector. The level of security depends on the number of times the entire hard drive is written over.
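The idea can be illustrated on an ordinary file. This is a sketch only: real sanitization tools address the raw device, and the hidden areas and remapped sectors that file-level I/O cannot reach.

```python
import os
import secrets

def overwrite_file(path: str, passes=(b"\x55", b"\xaa", None)) -> None:
    """Overwrite a file's contents in place: a fixed byte, its complement,
    then random data (None selects a random pass)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in passes:
            f.seek(0)
            data = secrets.token_bytes(size) if pattern is None else pattern * size
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force each pass out to stable storage
```

The default three passes mirror the character/complement/random scheme described in the standards below; the number and content of passes is exactly the knob the various standards disagree on.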

Full disk overwriting

There are many overwriting programs, but data erasure offers complete security by destroying data on all areas of a hard drive. Disk overwriting programs that cannot access the entire hard drive, including hidden/locked areas like the host protected area (HPA), device configuration overlay (DCO), and remapped sectors, perform an incomplete erasure, leaving some of the data intact. By accessing the entire hard drive, data erasure eliminates the risk of data remanence.

Data erasure also bypasses the BIOS and OS. Overwriting programs that operate through the BIOS and OS will not always perform a complete erasure due to altered or corrupted BIOS data and may report back a complete and successful erasure even if they do not access the entire hard disk, leaving data accessible.

Hardware support

Data erasure can be deployed over a network to target multiple PCs rather than having to erase each one sequentially. In contrast with DOS-based overwriting programs that may not detect all network hardware, Linux-based data erasure software supports high-end server and storage area network (SAN) environments with hardware support for Serial ATA, Serial Attached SCSI (SAS), and Fibre Channel disks and remapped sectors. It operates directly with sector sizes such as 520, 524, and 528, removing the need to first reformat back to 512-byte sector size.

Standards

Many government and industry standards exist for software-based overwriting that removes data. A key factor in meeting these standards is the number of times the data is overwritten. Also, some standards require a method to verify that all data has been removed from the entire hard drive and to view the overwrite pattern. Complete data erasure should account for hidden areas, typically the DCO, HPA, and remapped sectors.

The 1995 edition of the National Industrial Security Program Operating Manual (DoD 5220.22-M) permitted the use of overwriting techniques to sanitize some types of media by writing all addressable locations with a character, its complement, and then a random character. This provision was removed in a 2001 change to the manual and was never permitted for Top Secret media, but it is still listed as a technique by many providers of data erasure software.[10]

Data erasure software should provide the user with a validation certificate indicating that the overwriting procedure was completed properly. Data erasure software should also comply with requirements to erase hidden areas, provide a defects log list, and list bad sectors that could not be overwritten.

Page 19: Data Security Technologies

8/6/2019 Data Security Technologies

http://slidepdf.com/reader/full/data-security-technologies 19/24

Overwriting standards (name, date, number of overwriting rounds, pattern, notes):

• NIST SP-800-88[11] (2006): 1 round; pattern not specified.
• NSA/CSS Storage Device Declassification Manual (SDDM)[12] (2007): overwriting not approved; degauss or destroy.
• U.S. National Industrial Security Program Operating Manual (DoD 5220.22-M)[10] (2006): rounds and pattern not specified.
• U.S. DoD Unclassified Computer Hard Drive Disposition[13] (2001): 3 rounds; a character, its complement, another pattern.
• U.S. Navy Staff Office Publication NAVSO P-5239-26[14] (1993): 3 rounds; a character, its complement, random; verification is mandatory.
• U.S. Air Force System Security Instruction 5020[15] (1996): 4 rounds; all 0's, all 1's, any character; verification is mandatory.
• British HMG Infosec Standard 5, Baseline Standard: 1 round; all 0's; verification is optional.
• British HMG Infosec Standard 5, Enhanced Standard: 3 rounds; all 0's, all 1's, random; verification is mandatory.
• Communications Security Establishment Canada ITSG-06[16] (2006): 3 rounds; all 1's or 0's, its complement, a pseudo-random pattern; for unclassified media.
• German Federal Office for Information Security[17] (2004): 2-3 rounds; non-uniform pattern, its complement.
• Australian Government ICT Security Manual[18] (2008): 1 round; pattern not specified; degauss or destroy Top Secret media.
• New Zealand Government Communications Security Bureau NZSIT 402[19] (2008): 1 round; pattern not specified; for data up to Confidential.
• Peter Gutmann's algorithm (1996): up to 35 rounds; originally intended for MFM and RLL disks, which are now obsolete.
• Bruce Schneier's algorithm[20] (1996): 7 rounds; all 1's, all 0's, pseudo-random sequence five times.

Data can sometimes be recovered from a broken hard drive. However, if the platters on a hard drive are damaged, such as by drilling a hole through the drive (and the platters inside), then data can only be recovered by bit-by-bit analysis of each platter with advanced forensic technology. Seagate is the only company in the world to have credibly claimed such technology, although some governments may also be able to do this.

Number of overwrites needed

Data on floppy disks can sometimes be recovered by forensic analysis even after the disks have been overwritten once with zeros (or random zeros and ones).[21] This is not the case with modern hard drives:

• According to the 2006 NIST Special Publication 800-88, Section 2.3 (p. 6): "Basically the change in track density and the related changes in the storage medium have created a situation where the acts of clearing and purging the media have converged. That is, for ATA disk drives manufactured after 2001 (over 15 GB) clearing by overwriting the media once is adequate to protect the media from both keyboard and laboratory attack."[11]

• According to the 2006 CMRR Tutorial on Disk Drive Data Sanitization Document (p. 8): "Secure erase does a single on-track erasure of the data on the disk drive. The U.S. National Security Agency published an Information Assurance Approval of single pass overwrite, after technical testing at CMRR showed that multiple on-track overwrite passes gave no additional erasure."[22] "Secure erase" is a utility built into modern ATA hard drives that overwrites all data on a disk, including remapped (error) sectors.

• Further analysis by Wright et al. seems to also indicate that one overwrite is all that is generally required.[23]

Data recovery


Data recovery is the process of salvaging data from damaged, failed, corrupted, or inaccessible secondary storage media when it cannot be accessed normally. Often the data are being salvaged from storage media such as hard disk drives, storage tapes, CDs, DVDs, RAID arrays, and other electronics. Recovery may be required due to physical damage to the storage device or logical damage to the file system that prevents it from being mounted by the host operating system.

The most common "data recovery" scenario involves an operating system (OS) failure (typically on a single-disk, single-partition, single-OS system), in which case the goal is simply to copy all wanted files to another disk. This can be easily accomplished with a Live CD, most of which provide a means to mount the system drive and backup disks or removable media, and to move the files from the system disk to the backup media with a file manager or optical disc authoring software. Such cases can often be mitigated by disk partitioning and consistently storing valuable data files (or copies of them) on a different partition from the replaceable OS system files.

Another scenario involves a disk-level failure, such as a compromised file system or disk partition, or a hard disk failure. In any of these cases, the data cannot be easily read. Depending on the situation, solutions involve repairing the file system, partition table, or master boot record, or hard disk recovery techniques ranging from software-based recovery of corrupted data to hardware replacement on a physically damaged disk. If hard disk recovery is necessary, the disk itself has typically failed permanently, and the focus is rather on a one-time recovery, salvaging whatever data can be read.

In a third scenario, files have been "deleted" from a storage medium. Typically, deleted files are not erased immediately; instead, references to them in the directory structure are removed, and the space they occupy is made available for later overwriting. In the meantime, the original file may be restored. Although there is some confusion over the term, "data recovery" may also be used in the context of forensic applications or espionage.
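Because only the directory references are removed, deleted files can often be located by scanning the raw device for content signatures, a technique known as file carving. A minimal sketch over an in-memory image (the JPEG start-of-image marker is a real signature; the function name is illustrative):

```python
def find_jpeg_offsets(image: bytes) -> list:
    """Return byte offsets of JPEG start-of-image markers (FF D8 FF) in a raw image."""
    magic = b"\xff\xd8\xff"
    offsets = []
    pos = image.find(magic)
    while pos != -1:
        offsets.append(pos)
        pos = image.find(magic, pos + 1)
    return offsets

# Four bytes of filler, then a JPEG header: the marker is found at offset 4.
print(find_jpeg_offsets(b"junk" + b"\xff\xd8\xff\xe0" + b"\x00" * 8))  # [4]
```

Real undelete tools combine such signature scans with file-system metadata to recover both the content and, where possible, the original file names.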

Recovering data after physical damage

A wide variety of failures can cause physical damage to storage media. CD-ROMs can have their metallic substrate or dye layer scratched off; hard disks can suffer any of several mechanical failures, such as head crashes and failed motors; tapes can simply break. Physical damage always causes at least some data loss, and in many cases the logical structures of the file system are damaged as well. Any logical damage must be dealt with before files can be salvaged from the failed media.

Most physical damage cannot be repaired by end users. For example, opening a hard disk in a normal environment can allow airborne dust to settle on the platter and become caught between the platter and the read/write head, causing new head crashes that further damage the platter and thus compromise the recovery process. Furthermore, end users generally do not have the hardware or technical expertise required to make these repairs. Consequently, costly data recovery companies are often employed to salvage important data.


Recovery techniques

Recovering data from physically damaged hardware can involve multiple techniques. Some damage can be repaired by replacing parts in the hard disk. This alone may make the disk usable, but there may still be logical damage. A specialized disk-imaging procedure is used to recover every readable bit from the surface. Once this image is acquired and saved on a reliable medium, the image can be safely analysed for logical damage and will possibly allow for much of the original file system to be reconstructed.

Hardware repair

Media that has suffered a catastrophic electronic failure will require data recovery in order to salvage its contents.

Examples of physical recovery procedures include: removing a damaged PCB (printed circuit board) and replacing it with a matching PCB from a healthy drive; performing a live PCB swap (in which the System Area of the HDD is damaged on the target drive, which is then instead read from the donor drive, the PCB then disconnected while still under power and transferred to the target drive); replacing the read/write head assembly with matching parts from a healthy drive; removing the hard disk platters from the original damaged drive and installing them into a healthy drive; and often a combination of all of these procedures. Some data recovery companies have procedures that are highly technical in nature and are not recommended for an untrained individual. Each of these procedures will void the manufacturer's warranty. For recovery that does not void a warranty, see companies such as Kroll Ontrack, SalvageData, and DriveSavers.


Recovering from logical (non-hardware) damage

Result of a failed data recovery from a hard disk drive.

Overwritten data

See also: Data erasure

When data have been physically overwritten on a hard disk, it is generally assumed that the previous data are no longer possible to recover. In 1996, Peter Gutmann, a computer scientist, presented a paper that suggested overwritten data could be recovered through the use of scanning transmission electron microscopy.[1] In 2001, he presented another paper on a similar topic.[2] Substantial criticism has followed, primarily dealing with the lack of any concrete examples of significant amounts of overwritten data being recovered.[3][4] To guard against this type of data recovery, he and Colin Plumb designed the Gutmann method, which is used by several disk scrubbing software packages.

Although Gutmann's theory may be correct, there is no practical evidence that overwritten data can be recovered. Moreover, there are good reasons to think that it cannot.[5][6][7]

Corrupt filesystems

In some cases, data on a hard drive can be unreadable due to damage to the filesystem. In the majority of these cases, at least a portion of the original data can be recovered by repairing the damaged filesystem using specialized data recovery software. This type of data recovery can be performed by knowledgeable end-users, as it requires no special physical equipment. However, more serious cases can still require expert intervention.

Online Data Recovery


"Online" or "remote" data recovery is another method of restoring lost or deleted data. It is the same as performing regular software-based recoveries, except that this kind of recovery is performed over the Internet without physically having the drive or computer in possession. The recovery technician, located elsewhere, gains access to the user's computer and completes the recovery job online. In this scenario, the user does not have to travel or physically send the media anywhere.

Although online data recovery is convenient and useful in many cases, some drawbacks make it less popular than classic data recovery methods. First of all, it requires a stable broadband Internet connection, which is not available everywhere. Also, it cannot be performed in the case of physical damage to the media; in such cases, traditional in-lab recovery has to take place.