
Security Planning

Archived content. No warranty is made as to technical accuracy. Content may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

Written by: Christopher Benson, Inobits Consulting (Pty) Ltd

Contributors: Denis Bensch, Dawie Human, Louis De Klerk, and Johan Grobler, all of Inobits Consulting (Pty) Ltd

Reviewed by: Glenn Berg

Microsoft Solutions Framework

Best Practices for Enterprise Security

Note: This white paper is one of a series. Best Practices for Enterprise Security (http://www.microsoft.com/technet/archive/security/bestprac/bpent/bpentsec.mspx) contains a complete list of all the articles in this series. See also the Security Entities Building Block Architecture (http://www.microsoft.com/technet/archive/security/bestprac/bpent/sec2/secentbb.mspx).

On This Page

The Focus of This Paper
Basic Risk Assessment
Proactive Security Planning
Reactive Security Planning
References

The Focus of This Paper

Overview

The most important part of deployment is planning. It is not possible to plan for security, however, until a full risk assessment has been performed. Security planning involves developing security policies and implementing controls to prevent computer risks from becoming reality.

The policies outlined in this paper are merely guidelines. Each organization is different and will need to plan and create policies based upon its individual security goals and needs.

The discussion of tools and technologies in this paper is focused on features rather than technical depth. This emphasis allows security officials and IT managers to choose which tools and techniques are best suited to their organizations' security needs.


Basic Risk Assessment

Overview

Risk assessment is a very important part of computer security planning. No plan of action can be put into place before a risk assessment has been performed. The risk assessment provides a baseline for implementing security plans to protect assets against various threats. There are three basic questions one needs to ask in order to improve the security of a system:

• What assets within the organization need protection?
• What are the risks to each of these assets?
• How much time, effort, and money is the organization willing to expend to upgrade existing protection or obtain adequate new protection against these threats?

You cannot protect your assets if you do not know what to protect against. Computers need protection against risks, but what are risks? In simple terms, a risk is realized when a threat takes advantage of a vulnerability to cause harm to your system. After you know your risks, you can then create policies and plans to reduce those risks.

There are many ways to go about identifying all the risks to your assets. One way is to gather personnel from within your organization and have a brainstorming session where you list the various assets and the risks to those assets. This will also help to increase security awareness within your organization.

Risks can come from three sources: natural disaster risks, intentional risks, and unintentional risks.

In Security Strategies, another paper in the Best Practices for Enterprise Security white paper series, a methodology for defining security strategies is outlined in a flowchart. The first step in that flowchart is assessing risk.


The risk assessment step in the Security Strategy flowchart can be divided further into the following steps:

1. Identify the assets you want to protect and the value of these assets.
2. Identify the risks to each asset.
3. Determine the category of the cause of the risk (natural disaster risk, intentional risk, or unintentional risk).
4. Identify the methods, tools, or techniques the threats use.

Once these steps have been completed, it is possible to plan security policies and controls to minimize the realization of risks. In this paper, we will discuss primarily the first two steps. For information about steps three and four, please see the Security Strategies paper.

Companies are dynamic, and your security plan must be too. Update your risk assessment periodically. In addition, redo the risk assessment whenever you have a significant change in operation or structure. Thus, if you reorganize, move to a new building, switch vendors, or undergo other major changes, you should reassess the risks and potential losses.

Identifying the Assets

One important step toward determining the risks to assets is performing an information asset inventory by identifying the various items you need to protect within your organization. The inventory should be based on your business plan and the sensitivity of those items. Consider, for example, a server versus a workstation. A server has a higher level of sensitivity than a typical user's workstation. Organizations should store the inventory online and categorize each item by its importance. The inventory should include everything that the organization would consider to be valuable. To determine if something is valuable, consider what the loss or damage of the item might be in terms of lost revenue, lost time, or the cost of repair or replacement. Some of the items that should be in your inventory are:


• Physical items
  ◦ Sensitive data and other information
  ◦ Computers, laptops, palmtops, etc.
  ◦ Backups and archives
  ◦ Manuals, books, and guides
  ◦ Communications equipment and wiring
  ◦ Personnel records
  ◦ Audit records
  ◦ Commercial software distribution media
• Non-physical items
  ◦ Personnel passwords
  ◦ Public image and reputation
  ◦ Processing availability and continuity of operations
  ◦ Configuration information
  ◦ Data integrity
  ◦ Confidentiality of information

For each asset, the following information should be defined (a sketch of such a record follows the list):

• Type: hardware, software, data
• General support system or a critical application system
• Designated owner of the information
• Physical or logical location
• Inventory item number where applicable
• Service levels, warranties, key contacts, where it fits into supplying availability and/or security, and the replacement process
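As a minimal sketch, this is one way such an inventory record could be captured in code. The field names and example values below are illustrative assumptions, not prescribed by this paper.

    from dataclasses import dataclass

    # A minimal sketch of an inventory record capturing the fields listed above.
    @dataclass
    class AssetRecord:
        name: str
        asset_type: str             # "hardware", "software", or "data"
        system_class: str           # "general support system" or "critical application"
        owner: str                  # designated owner of the information
        location: str               # physical or logical location
        inventory_number: str = ""  # where applicable
        service_notes: str = ""     # service levels, warranties, key contacts, replacement process

    inventory = [
        AssetRecord(
            name="Finance database server",
            asset_type="hardware",
            system_class="critical application",
            owner="Finance IT",
            location="Server room, floor 2",
            inventory_number="SRV-0042",
            service_notes="4-hour vendor SLA; replacement via approved supplier list",
        ),
    ]

    for asset in inventory:
        print(f"{asset.inventory_number or 'n/a'}: {asset.name} ({asset.asset_type}), owner: {asset.owner}")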

Identifying Risks to the Assets


After identifying the assets, it is necessary to determine all the risks that can affect each asset. One way of doing this is by identifying all the different ways an asset can be damaged, altered, stolen, or destroyed. For example:

The asset:

• Financial information stored on a database system

The risks:

• Component failure
• Misuse of software and hardware
• Viruses, Trojan horses, or worms
• Unauthorized deletion or modification
• Unauthorized disclosure of information
• Penetration ("hackers" getting into your machines)
• Software bugs and flaws
• Fires, floods, or earthquakes
• Riots

In order to develop an effective information security policy, the information produced or processed during the risk analysis should be categorized according to its sensitivity to loss or disclosure. Most organizations use some set of information categories, such as Proprietary, For Internal Use Only, or Organization Sensitive. The categories used in the security policy should be consistent with any existing categories. Data should be broken into four sensitivity classifications with separate handling requirements: sensitive, confidential, private, and public. This standard data sensitivity classification system should be used throughout the organization. These classifications are defined as follows:

• Sensitive. This classification applies to information that needs protection from unauthorized modification or deletion to assure its integrity. It is information that requires a higher than normal assurance of accuracy and completeness. Examples of sensitive information include organizational financial transactions and regulatory actions.

• Confidential. This classification applies to the most sensitive business information that is intended strictly for use within the organization. Its unauthorized disclosure could seriously and adversely impact the organization, its stockholders, its business partners, and/or its customers. Health care-related information should be considered at least confidential.

• Private. This classification applies to personal information that is intended for use within the organization. Its unauthorized disclosure could seriously and adversely impact the organization and/or its employees.

• Public. This classification applies to all other information that does not clearly fit into any of the above three classifications. While its unauthorized disclosure is against policy, it is not expected to seriously or adversely impact the organization, its employees, and/or its customers.

After identifying the risks and the sensitivity of data, estimate the likelihood of each risk occurring. Quantifying the threat of a risk is hard work. Some ways to estimate risk include:

• Obtaining estimates from third parties, such as insurance companies.
• Basing estimates on your records, if the event happens on a regular basis.
• Investigating collected statistics or published reports from industry organizations.
• Basing estimates on educated guesses extrapolated from past experience. For instance:
  ◦ Your power company can provide an official estimate of the likelihood that your building will experience a power outage in the next year.
  ◦ Past experience and best guess can be used to estimate the probability of a serious bug being discovered in your vendor software.

Once all the risks for each asset have been identified, it is necessary to determine whether the damage caused would be intentional or accidental. One common way to combine asset values and likelihood estimates is sketched below.
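The paper does not prescribe a formula, but one widely used way to turn these estimates into comparable numbers is annualized loss expectancy (ALE). The asset values, exposure factors, and occurrence rates below are made-up examples.

    # A sketch of annualized loss expectancy, offered as an illustration of
    # quantifying risk; all numbers below are hypothetical.

    def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
        """ALE = (asset value x fraction lost per incident) x incidents per year."""
        single_loss_expectancy = asset_value * exposure_factor
        return single_loss_expectancy * annual_rate

    risks = [
        # (description, asset value, exposure factor, estimated occurrences per year)
        ("Power outage halts order processing", 200_000, 0.05, 2.0),
        ("Serious bug in vendor software",       200_000, 0.10, 0.5),
    ]

    # Rank risks by expected annual loss, highest first.
    for description, value, exposure, rate in sorted(
            risks,
            key=lambda r: annualized_loss_expectancy(r[1], r[2], r[3]),
            reverse=True):
        ale = annualized_loss_expectancy(value, exposure, rate)
        print(f"{description}: expected annual loss ${ale:,.0f}")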

Identifying Type of Threat and Method of Attack

A threat is any action or incident with the potential to cause harm to an organization through the disclosure, modification, or destruction of information, or by the denial of critical services. Security threats can be divided into human threats and natural disaster threats.

Human threats can be further divided into malicious (intentional) threats and non-malicious (unintentional) threats. A malicious threat exploits vulnerabilities in security policies and controls to launch an attack. Malicious threats can range from opportunistic attacks to well-planned attacks.

Non-malicious human threats can occur through employee error or ignorance. These employees may accidentally cause data corruption, deletion, or modification while trying to capture data or change information. (Hardware or software failures, while not a human threat, are other non-malicious threats.)

By understanding these various threats, it is possible to determine which vulnerabilities may be exploited and which assets are targeted during an attack. Some methods of attack include:

• Social engineering
• Viruses, worms, and Trojan horses
• Denial of service attack tools
• Packet replaying
• Packet modification
• IP spoofing
• Password cracking


Proactive Security Planning

Overview

After assessing your risk, the next step is proactive planning. Proactive planning involves developing security policies and controls and implementing tools and techniques to aid in security.

As with security strategies, it is necessary to define a plan for proactive and reactive security planning. The proactive plan is developed to protect assets by preventing attacks and employee mistakes. The reactive plan is a contingency plan to implement when proactive plans have failed.

Developing Security Policies and Controls

A company's security plan consists of security policies. Security policies give specific guidelines for areas of responsibility, and consist of plans that provide steps to take and rules to follow to implement the policies.

Policies should define what you consider valuable, and should specify what steps should be taken to safeguard those assets. Policies can be drafted in many ways. One example is a general policy of only a few pages that covers most possibilities. Another example is a draft policy for different sets of assets, including e-mail policies, password policies, Internet access policies, and remote access policies.

Two common problems with organizational policies are:

1. The policy is a platitude rather than a decision or direction.

2. The policy is not really used by the organization. Instead, it is a piece of paper to show to auditors, lawyers, other organizational components, or customers, but it does not affect behavior.

A good risk assessment will determine whether good security policies and controls are implemented. Vulnerabilities and weaknesses arise from poor security policies and from the human factor. Security policies that are too stringent are often bypassed because people get tired of adhering to them (the human factor), which creates vulnerabilities for security breaches and attacks.

For example, specifying a restrictive account lockout policy increases the potential for denial of service attacks. Another example is implementing a security keypad on the server room door. Administrators may get tired of entering the security PIN and stop the door from closing by using a book or broom, thereby bypassing the security control. Specifying a restrictive password policy can actually reduce the security of the network. For example, if you require passwords longer than seven characters, most users have difficulty remembering them. They might write their passwords down and leave them where an intruder can find them.


To be effective, policy requires visibility. Visibility aids implementation of policy by helping to ensure that policy is fully communicated throughout the organization. Visibility is achieved through each policy's plan: a written set of steps and rules that defines when, how, and by whom they are implemented. Management presentations, videos, panel discussions, guest speakers, question-and-answer forums, and newsletters also increase visibility. A computer security training and awareness program makes it possible to effectively notify users of new policies and to familiarize new employees with the organization's policies.

Computer security policies should be introduced in a manner that ensures that management's unqualified support is clear, especially in environments where employees feel inundated with policies, directives, guidelines, and procedures. The organization's policy is the vehicle for emphasizing management's commitment to computer security and making clear their expectations for employee performance, behavior, and accountability.

Types of Security Policies


Policies can be defined for any area of security. It is up to the security administrator and IT manager to classify what policies need to be defined and who should plan the policies. There could be policies for the whole company or policies for various sections within the company. The various types of policies that could be included are:

• Password policies
  ◦ Administrative Responsibilities
  ◦ User Responsibilities
• E-mail policies
• Internet policies
• Backup and restore policies

Password Policies

The security provided by a password system depends on the passwords being kept secret at all times. Thus, a password is vulnerable to compromise whenever it is used, stored, or even known. In a password-based authentication mechanism implemented on a system, passwords are vulnerable to compromise due to several essential aspects of the password system:

• A password must be initially assigned to a user when enrolled on the system.
• A user's password must be changed periodically.
• The system must maintain a "password database."
• Users must remember their passwords.
• Users must enter their passwords into the system at authentication time.
• Employees may not disclose their passwords to anyone, including administrators and IT managers.

Password policies can be set depending on the needs of the organization. For example, it is possible to specify a minimum password length, disallow blank passwords, and set maximum and minimum password age. It is also possible to prevent users from reusing passwords and to require specific characters in passwords, making them more difficult to crack. These settings can be configured through the Windows 2000 account policies discussed later in this paper; the kinds of checks involved are sketched below.
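As a minimal sketch, the rules just described can be expressed as a simple validation routine. The thresholds and the password-history list are hypothetical; in practice these rules are enforced by the operating system's account policy, not by application code like this.

    import string

    # Hypothetical policy values for illustration.
    MIN_LENGTH = 8
    PASSWORD_HISTORY = ["OldSecret1!"]  # previously used passwords to disallow

    def check_password(candidate: str) -> list[str]:
        """Return a list of policy violations; an empty list means acceptable."""
        problems = []
        if not candidate:
            problems.append("blank passwords are not allowed")
        if len(candidate) < MIN_LENGTH:
            problems.append(f"must be at least {MIN_LENGTH} characters")
        if not any(c in string.digits for c in candidate):
            problems.append("must contain a digit")
        if not any(c in string.punctuation for c in candidate):
            problems.append("must contain a punctuation character")
        if candidate in PASSWORD_HISTORY:
            problems.append("password was used before")
        return problems

    print(check_password("short"))         # several violations
    print(check_password("L0nger&Safer"))  # [] -> acceptable under these rules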

Administrative Responsibilities

Many systems come from the vendor with a few standard user logins already enrolled in the system. Change the passwords for all standard user logins before allowing the general user population to access the system. For example, change the administrator password when installing the system.

The administrator is responsible for generating and assigning the initial password for each user login. The user must then be informed of this password. In some areas, it may be necessary to prevent exposure of the password to the administrator. In other cases, the user can easily nullify this exposure. To prevent the exposure of a password, it is possible to use smart card encryption in conjunction with the user's username and password. Even if the administrator knows the password, he or she will be unable to use it without the smart card. When a user's initial password must be exposed to the administrator, this exposure may be nullified by having the user immediately change the password by the normal procedure.

Occasionally, a user will forget the password, or the administrator may determine that a user's password may have been compromised. To be able to correct these problems, it is recommended that the administrator be permitted to change the password of any user by generating a new one. The administrator should not have to know the user's password in order to do this, but should follow the same rules for distributing the new password that apply to initial password assignment. Positive identification of the user by the administrator is required when a forgotten password must be replaced.

User Responsibilities

Users should understand their responsibility to keep passwords private and to report changes in their user status, suspected security violations, and so forth. To assure security awareness among the user population, we recommend that each user be required to sign a statement acknowledging these responsibilities.

The simplest way to recover from the compromise of a password is to change it. Therefore, passwords should be changed on a periodic basis to counter the possibility of undetected password compromise. They should be changed often enough so that there is an acceptably low probability of compromise during a password's lifetime. To avoid needless exposure of users' passwords to the administrator, users should be able to change their passwords without intervention by the administrator.

E-mail Policies

E-mail is increasingly critical to the normal conduct of business. Organizations need policies for e-mail to help employees use e-mail properly, to reduce the risk of intentional or inadvertent misuse, and to assure that official records transferred via e-mail are properly handled. Similar to policies for appropriate use of the telephone, organizations need to define appropriate use of e-mail. Organizational policies are needed to establish general guidance in such areas as:

• The use of e-mail to conduct official business
• The use of e-mail for personal business
• Access control and confidential protection of messages
• The management and retention of e-mail messages

It is easy to have e-mail accidents. E-mail folders can grow until the e-mail system crashes. Badly configured discussion group software can send messages to the wrong groups. Errors in e-mail lists can flood the subscribers with hundreds of error messages. Sometimes error messages will bounce back and forth between e-mail servers. Some ways to prevent accidents are to:

• Train users what to do when things go wrong, as well as how to do it right.
• Configure e-mail software so that the default behavior is the safest behavior.
• Use software that follows Internet e-mail protocols and conventions religiously. Every time an online service gateways its proprietary e-mail system to the Internet, there are howls of protest because of the flood of error messages that result from the online service's misbehaving e-mail servers.

Using encryption algorithms to digitally sign the e-mail message can prevent impersonation. Encrypting the contents of the message or the channel that it's transmitted over can prevent eavesdropping. E-mail encryption is discussed later in this paper under "Public Key Infrastructures."

Using public locations like Internet cafes and chat rooms to access e-mail can lead to the user leaving valuable information cached or downloaded onto public computers. Users need to clean up the computer after they use it, so no important documents are left behind. This is often a problem in places like airport lounges.

Internet Policies

The World Wide Web has a body of software and a set of protocols and conventions used to traverse and find information over the Internet. Through the use of hypertext and multimedia techniques, the Web is easy for anyone to roam, browse, and contribute to.

Web clients, also known as Web browsers, provide a user interface to navigate through information by pointing and clicking. Browsers also introduce vulnerabilities to an organization, although generally less severe than the threat posed by servers. Various settings can be set on Internet Explorer browsers by using Group Policy in Windows 2000.

Web servers can be attacked directly, or used as jumping-off points to attack an organization's internal networks. There are many areas of Web servers to secure: the underlying operating system, the Web server software, server scripts and other software, and so forth. Firewalls and proper configuration of routers and the IP protocol can help to fend off denial of service attacks.

Backup and Restore Policies

Backups are worthwhile only if the information stored on the system is of value and importance. Backups are important for a number of reasons:

• Computer hardware failure. Hardware devices such as hard drives or RAID systems can fail.
• Software failure. Some software applications can have flaws whereby information is interpreted or stored incorrectly.
• User error. Users often delete or modify files accidentally. Making regular backups can help restore deleted or modified files.
• Administrator error. Administrators also make mistakes, such as accidentally deleting active user accounts.
• Hacking and vandalism. Computer hackers sometimes alter or delete data.
• Theft. Computers are expensive and usually easy to sell. Sometimes a thief will steal just the hardware inside the computer, such as hard drives, video cards, and sound cards.
• Natural disasters. Floods, earthquakes, fires, and hurricanes can have disastrous effects on computer systems. Buildings can be demolished or washed away.
• Other disasters. Unforeseeable accidents can cause damage, for example if a plane crashes into a building or gas pipes leak and cause explosions.

When doing hardware and software upgrades:

• Never upgrade without backing up the data files that you must have.
• Be sure to back up system information such as registries, master boot records, and the partition boot sector.
• In operating systems such as Microsoft Windows 2000 and Microsoft Windows NT, make sure that an up-to-date emergency repair disk exists.

Information that should be backed up includes:

• Important information that is sensitive to the organization and to the continuity of operations. This includes databases, mail servers, and any user files.
• System databases, such as registries and user account databases.

Backup Policies

The backup policies should include plans for:

• Regularly scheduled backups.
• Types of backups. Most backup systems support normal backups, incremental backups, and differential backups (the difference is sketched in the example after this list).
• A schedule for backups. The schedule should normally be during the night, when the company has the fewest active users.
• The information to be backed up.
• The type of media used for backups: tapes, CD-ROMs, other hard drives, and so forth.
• The type of backup devices: tape devices, CD writers, other hard drives, swappable hard drives, and perhaps a network share. Devices come in various speeds, normally measured in megabytes backed up per minute; device speed determines how long backups take to perform.
• Onsite and offsite storage of backups.
  ◦ Onsite storage: Store backups in a fireproof safe. Backups should not be stored in the drawer of the table on which the computer sits. Secure storage protects against natural disaster, theft, and sabotage of critical data. All software, including operating system software, service packs, and other critical application software, should also be safely stored.
  ◦ Offsite storage: Important data should also be stored offsite. Certain companies specialize in storing data. An alternative solution could be using a safe deposit box at a bank.
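As a sketch of how the three backup types differ in what they copy, the routine below selects files by modification time. Real backup software tracks archive bits or catalogs, and a normal backup also resets the baseline; comparing modification times, as here, is a simplification for illustration.

    import os
    import time

    def files_to_back_up(root, backup_type, last_full, last_backup):
        """Select files under `root` according to the backup type.

        normal:       everything
        incremental:  files changed since the last backup of any kind
        differential: files changed since the last full (normal) backup
        """
        cutoff = {"normal": 0.0,
                  "incremental": last_backup,
                  "differential": last_full}[backup_type]
        selected = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) > cutoff:
                    selected.append(path)
        return selected

    # Example: a nightly incremental against a weekly full backup.
    week_ago = time.time() - 7 * 24 * 3600
    yesterday = time.time() - 24 * 3600
    print(files_to_back_up(".", "incremental", last_full=week_ago, last_backup=yesterday))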

Emergency Repair Disks

In Microsoft Windows 2000 and Microsoft Windows NT, there is an option to create an emergency repair disk (ERD). The ERD contains certain registry information and other system files to help recover or repair a corrupted Windows installation. The repair disk should be updated periodically, and every time users are added or the system configuration changes, such as when adding or deleting disk partitions. ERDs should be stored with backups, both onsite and offsite if possible.

Windows 2000 Software Policies

Account Policies

In Windows 2000, account policies are the first subcategory of Security Settings. Account policies include:

• Password Policy. Password policies can be set depending on the needs of the organization. For example, it is possible to specify a minimum password length, disallow blank passwords, and set maximum and minimum password age. It is also possible to prevent users from reusing passwords and to require specific characters in passwords, making them more difficult to crack.

• Account Lockout Policy. With this policy, it is possible to determine what happens when users fail to enter the correct password for an account. Users can be locked out after a specified number of failed logon attempts, and the period of time that accounts remain locked out can be set. A sketch of this bookkeeping follows the list.

• Kerberos Authentication Policy. You can modify the default Kerberos settings for each domain. For example, you can set the maximum lifetime of a user ticket.
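As a minimal sketch of the lockout behavior just described, the code below locks an account after a set number of failed logons for a set period. The threshold and duration are hypothetical example values; the operating system performs this bookkeeping itself.

    import time

    LOCKOUT_THRESHOLD = 5       # failed attempts before lockout (example value)
    LOCKOUT_DURATION = 30 * 60  # seconds the account stays locked (example value)

    failed_attempts: dict[str, int] = {}
    locked_until: dict[str, float] = {}

    def record_failed_logon(account: str) -> None:
        # On reaching the threshold, lock the account and reset the counter.
        if failed_attempts.get(account, 0) + 1 >= LOCKOUT_THRESHOLD:
            locked_until[account] = time.time() + LOCKOUT_DURATION
            failed_attempts[account] = 0
        else:
            failed_attempts[account] = failed_attempts.get(account, 0) + 1

    def is_locked_out(account: str) -> bool:
        return time.time() < locked_until.get(account, 0.0)

    for _ in range(5):
        record_failed_logon("jsmith")
    print(is_locked_out("jsmith"))  # True: fifth failure triggered the lockout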

Group Policy


Group Policy is a way of enforcing rules about computer configuration and user behavior. It is possible to have different policies throughout the company. When a user connects to a Windows 2000 domain controller that has Group Policy settings enabled, the policies are automatically downloaded to the user's computer and stored in the registry. Some of the settings include:

• Addition or removal of items from the desktop and Control Panel.
• Automatic installation of software on users' computers without user interaction.
• Configuration of Internet Explorer options for users, including security zones.
• Configuration of network settings such as mapped network drives and permissions to view the computer browse list.
• Configuration of system settings such as disabling computer shutdown options and the ability to run Task Manager.

IP Security Policies

The Internet Protocol (IP) underlies the majority of corporate networks as well as the Internet. It has worked well for decades. It is powerful, highly efficient, and cost-effective. Its strength lies in its flexibly routed packets, in which data is broken up into manageable pieces for transmission over networks. And it can be used by any operating system.

In spite of its strengths, IP was never designed to be secure. Due to its method of routing packets, IP-based networks are vulnerable to spoofing, sniffing, session hijacking, and man-in-the-middle attacks, threats that were unheard of when IP was first introduced.

The initial attempts to provide security over the Internet have been application-level protocols and software, such as Secure Sockets Layer (SSL) for securing Web traffic and Pretty Good Privacy (PGP) for securing e-mail. These solutions, however, are limited to specific applications.

Using IP security, it is possible to secure and encrypt all IP traffic. IP security policies in Windows 2000 control how, when, and on whom IP security works. An IP security policy can define many rules, such as:

• Which IP addresses to filter for
• How to encrypt packets
• Filters that examine all IP traffic passing through the object on which the IP security policy is applied

Tools and Techniques to Aid in Security

There are various technologies, tools, and techniques to help aid in securing networks and computers. This section deals with some of those technologies, outlining the features and uses rather than providing an in-depth technical evaluation. The idea is to allow security officials and IT managers to gain an overall impression of these techniques and then to decide what techniques and tools will best suit the organization. In-depth technical studies of some of the concepts discussed can be found in the Windows 2000 Resource Kit and in the links to various sites in the References section at the end of the chapter.

Secure Access, Secure Data, Secure Code

People like confidentiality and privacy; however, attackers can eavesdrop on or steal information that is sensitive to a person or organization. If a company comes up with an innovative new product and would like to store the ideas on a computer system, it is going to want protection for the data on that system and for data transferred from one system to another. Networks and data communication channels are often insecure, subjecting messages transmitted over the channels to passive and active threats. With a passive threat, an intruder intercepts messages to view the data. This intrusion is also known as eavesdropping. With an active threat, the intruder modifies the intercepted messages. An effective tool for protecting messages against both the active and passive threats inherent in data communications is cryptography.

Cryptography is the science of mapping readable text, called plaintext, into an unreadable format, called ciphertext, and vice versa. The mapping process is a sequence of mathematical computations. The computations affect the appearance of the data without changing its meaning.

To protect a message, an originator transforms a plaintext message into ciphertext. This process is called encryption. The ciphertext is transmitted over a network or data communications channel. If the message is intercepted, the intruder has access only to the unreadable ciphertext. Upon receipt, the message recipient transforms the ciphertext into its original plaintext format. This process is called decryption.

The mathematical operations used to map between plaintext and ciphertext are cryptographic algorithms. Cryptographic algorithms require the text to be mapped and, at a minimum, some value that controls the mapping process. This value is called a key. Given the same text and the same algorithm, different keys produce different mappings, as the sketch below illustrates.
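As a minimal sketch of the encrypt/decrypt round trip, the example below uses the third-party Python `cryptography` package (pip install cryptography), which the paper does not mention; any symmetric cipher would illustrate the same point.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # the value that controls the mapping
    cipher = Fernet(key)

    plaintext = b"Quarterly results: confidential"
    ciphertext = cipher.encrypt(plaintext)  # originator: plaintext -> ciphertext
    print(ciphertext)                       # unreadable without the key

    recovered = cipher.decrypt(ciphertext)  # recipient: ciphertext -> plaintext
    assert recovered == plaintext

    # A different key produces a different mapping and cannot decrypt this
    # ciphertext.
    other = Fernet(Fernet.generate_key())
    try:
        other.decrypt(ciphertext)
    except Exception as exc:
        print("decryption with the wrong key fails:", type(exc).__name__)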

Cryptography is used to provide the following services: authentication, integrity, non-repudiation, and secrecy. In an e-mail message, for example, cryptography provides:

• Authentication. Allows the recipient of a message to validate its origin. It prevents an imposter from masquerading as the sender of the message.

• Integrity. Assures the recipient that the message was not modified en route. Note that the integrity service allows the recipient to detect message modification, but not to prevent it.

• Non-repudiation. There are two types of non-repudiation service. Non-repudiation with proof of origin provides the recipient assurance of the identity of the sender. Non-repudiation with proof of delivery provides the sender assurance of message delivery.

• Secrecy. Also known as confidentiality; prevents disclosure of the message to unauthorized users.

Public Key Infrastructures

Public key cryptography can play an important role in helping provide the needed security services, including confidentiality, authentication, digital signatures, and integrity. Public key cryptography uses two electronic keys: a public key and a private key. These keys are mathematically related, but the private key cannot be determined from the public key. The public key can be known by anyone, while the owner keeps the private key secret.

A Public Key Infrastructure (PKI) provides the means to bind public keys to their owners and helps in the distribution of reliable public keys in large heterogeneous networks. Public keys are bound to their owners by public key certificates. These certificates contain information such as the owner's name and the associated public key, and are issued by a reliable certification authority (CA). Digital certificates, also called digital IDs, are the electronic counterparts of driver licenses, passports, and membership cards. A digital certificate can be presented electronically to prove your identity or your right to access information or services online. Digital certificates are used not only to identify people, but also to identify Web sites (crucial to e-business) and software that is being sent over the Web. Digital certificates bring trust and security when you are communicating or doing business on the Internet.

A PKI is often composed of many CAs linked by trust paths. The CAs may be linked in several ways. They may be arranged hierarchically under a "root CA" that issues certificates to subordinate CAs. The CAs can also be arranged independently in a network. Together, these links make up the PKI architecture.

Digital Signatures

Electronic transactions are becoming increasingly important. Many companies offering online services and e-commerce would like to have mechanisms in place to increase confidence in electronic transactions. When a buyer purchasing a product from a seller hands a bank check (bill of exchange) to the seller, he or she has to sign the check, verifying his or her identity and making the transaction legal.

The widespread use of PKI technology to support digital signatures can help increase confidence in electronic transactions. For example, the use of a digital signature allows a seller to prove that goods or services were requested by a buyer and therefore to demand payment. The use of a PKI allows parties without prior knowledge of each other to engage in verifiable transactions.


For example, a buyer interested in purchasing goods electronically would need to obtain a public key certificate from a CA. The process of obtaining a certificate begins with generating a public-private key pair. The buyer sends the public key, with valid information about the company, to a registration authority (RA) and asks for a certificate. The RA verifies the buyer's identity based on the information provided and vouches for the identity of the buyer to a CA, which then issues the certificate.

The newly certified buyer can now sign electronic purchase orders for the goods. The goods vendor receiving the purchase order can obtain the buyer's certificate and the certificate revocation list (CRL) of the CA that issued the buyer's certificate, check that the certificate has not been revoked, and verify the buyer's signature. By verifying the validity of the certificate, the vendor ensures receipt of a valid public key for the buyer; by verifying the signature on the purchase order, the vendor ensures the order was not altered after the buyer issued it.

Once the validity of the certificate and the signature are established, the vendor can ship the requested goods to the buyer with the knowledge that the buyer ordered the goods. This transaction can occur without any prior business relationship between the buyer and the seller. The signing and verification step is sketched below.
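As a sketch of the sign-and-verify step in the purchase-order example, the code below uses RSA from the third-party Python `cryptography` package. A real deployment would wrap the public key in an X.509 certificate issued by a CA and check a CRL; both of those steps are omitted here.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The buyer's key pair; the public key would be distributed via a certificate.
    buyer_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    buyer_public_key = buyer_private_key.public_key()

    order = b"Purchase order 1138: 200 units, net 30"
    signature = buyer_private_key.sign(
        order,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # The vendor verifies the order was signed by the buyer and not altered;
    # verify() raises InvalidSignature if the order was tampered with.
    buyer_public_key.verify(
        signature,
        order,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid")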

Secure Sockets Layer

Secure Sockets Layer (SSL) is a protocol that protects data sent between Web browsers and Web servers. SSL also ensures that the data came from the Web site it is supposed to have originated from and that no one tampered with the data while it was being sent. Any Web site address that starts with "https" has been SSL-enabled.

SSL provides a level of security and privacy for those wishing to conduct secure transactions over the Internet. The SSL protocol protects HTTP transmissions over the Internet by adding a layer of encryption. This ensures that your transactions are not subject to "sniffing" by a third party.

SSL provides visitors to your Web site with the confidence to communicate securely via an encrypted session. For companies wishing to conduct serious e-commerce, such as receiving credit card numbers or other sensitive information, SSL is a must. Web users can tell when they have reached an SSL-protected site by the "https" designation at the start of the Web page's address. The "s" added to the familiar HTTP (the Hypertext Transfer Protocol) stands for secure.

Companies that want to conduct business via the Internet using the capabilities of SSL need to contact a certificate authority, such as VeriSign Inc., a third-party organization that confirms a company is indeed what it claims to be. Once that is complete, the company can set up its Web servers for SSL connections. Users don't have to do anything to trigger an SSL connection; the client portion of SSL is built into the Web browser. The sketch below shows the equivalent steps performed explicitly.
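As a sketch of an SSL-protected connection using Python's standard `ssl` module, the code below negotiates an encrypted session and verifies the server's certificate, which is what a browser does automatically. The host name is just an example.

    import socket
    import ssl

    hostname = "example.com"
    context = ssl.create_default_context()  # verifies the server certificate chain

    with socket.create_connection((hostname, 443)) as sock:
        # server_hostname enables hostname checking against the certificate.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("negotiated protocol:", tls.version())  # e.g. TLSv1.3
            cert = tls.getpeercert()
            print("certificate subject:", cert.get("subject"))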

Secure E-mail

Standard Internet e-mail is usually sent as plaintext over networks. Intruders can monitor mail servers and network traffic to obtain sensitive information.

There are currently two actively proposed methods for providing secure e-mail services: Pretty Good Privacy (PGP) and Secure/Multipurpose Internet Mail Extensions (S/MIME). These services typically include authentication of the originator and privacy for the data. They can also provide a signed receipt from the recipient. At the core of these capabilities is the use of public key technology, and large-scale use of public keys requires a method of certifying that a given key belongs to a given user.


PGP is a military-grade encryption scheme available to all computer users. It works using paired sets of keys. The public key can be used to encode a message that can only be decoded with the matching private key. Likewise, e-mail "signed" with a private key can be verified as authentic with its matching public key.

S/MIME is a cryptographic method for secure e-mail that has been adopted by every major e-mail vendor in the industry. S/MIME uses public key cryptography to digitally sign and encrypt each message sent between trading partners. This ensures not only that the message cannot be read by others, but also that it came from the claimed sender and was not altered in transport.

Encrypting File System

Data encryption has become an increasingly important factor in everyday work. Users seek a method of securing their data with maximum comfort and minimum additional requirements on their part. They want a security system that protects any files used by any of their applications, without resorting to application-specific encryption methods.

In today's world of advanced technology, your electronic records are your business. Previously, using networked computers or remote laptops meant either sacrificing productivity or risking loss. Traveling with copies of important business databases was out of the question, but not anymore.

Today, critical enterprise information no longer resides solely on mainframe computers or central servers. Strategic planning, research, product development, marketing data, third-party information, and other corporate secrets are widely distributed on individual computers throughout an enterprise. These workstations, regular desktop computers, individual computers in home offices, and notebook computers are the most numerous, most vulnerable entry points to any enterprise, and they're all open to intrusion and theft. Even if an enterprise uses advanced network access security, an unattended workstation offers instant access to files on the hard drive and also to the network. Similarly, a stolen notebook computer offers easy access to critical data by competitors, unauthorized employees, and others who can profit from such knowledge at the expense of the victimized organization.

To solve the problem of attackers being able to read the files on the disks, you can use Encrypting File System (EFS). EFS is a feature of Microsoft Windows 2000 that protects the confidentiality of sensitive data by using symmetric key encryption in conjunction with public key technology. Only the owner of the protected file can open it and read it just like a normal document. EFS is integrated into the NT file system (NTFS); you can set the encryption attribute for folders and files just as you would set other attributes. EFS provides users with privacy: besides the user who encrypts the file, only a designated administrator can decrypt the file in cases of emergency recovery. EFS operates transparently, so file encryption does not require the user to manually encrypt and decrypt each file.

Authentication

Modern computer systems provide a service to multiple users and require the ability to accurately identify the user making a request. In traditional systems, the user's identity is verified by checking a password typed during login; the system records the identity and uses it to determine what operations may be performed. The process of verifying the user's identity is called authentication. Password-based authentication is not suitable for use on computer networks, however. Passwords sent across the network can be intercepted and subsequently used by eavesdroppers to impersonate the user.

Verifying the identity of someone or something is important. Administrators do not want unauthorized users or imposters to impersonate legitimate users. Administrators want to be able to verify that whoever is logging on to a system is who they say they are. Microsoft Windows 2000 supports two authentication protocols: the Kerberos authentication protocol and the NTLM authentication protocol. Kerberos is the default authentication protocol for computers running Windows 2000. NTLM is provided for backward compatibility with other Microsoft operating systems. This section outlines the features and applications of each protocol.

Kerberos Authentication

Kerberos is designed to provide strong authentication for client/server applications by using secret-key cryptography. The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection. Kerberos is a trusted third-party authentication system whose main purpose is to allow people and processes (known to Kerberos as principals) to prove their identity in a reliable manner over an insecure network. Instead of transmitting secret passwords in the clear, where they may be intercepted and read by unauthorized parties, principals obtain special Kerberos vouchers (known as session tickets) from Kerberos, which they can use to authenticate themselves to each other. A session ticket lasts only for the session during which a user is logged on.

Kerberos authentication requires the existence of a trusted network entity that acts as an authentication server for clients and servers requesting authentication information. This authentication server is known as the key distribution center (KDC). It has access to a database consisting of a list of users and client services, their default authentication parameters, their secret encryption keys, and other data. Authentication is typically a one-way process: the service authenticates the client. An advantage of Kerberos over NTLM is that it also allows for mutual authentication, where the client authenticates the service.

Kerberos authentication occurs when special authentication messages, session tickets, are passed among client applications, server applications, and one or more KDCs. Client processes acting on behalf of users authenticate themselves to servers by means of session tickets. The KDC generates tickets, which are sent to the requesting client processes. Kerberos maintains a set of secret keys, one for every entity to be authenticated within a particular realm (a realm is the protocol's equivalent of a Windows 2000 domain). A client presents a ticket to the server as evidence that the principal is who it claims to be. The ticket presented to the server "proves" that a KDC authenticated the client.

Kerberos streamlines the process of logging on and accessing resources compared with NTLM. In Kerberos authentication, the computer first contacts the KDC for authentication to the network. Then, when the user is ready to access a resource for the first time, the computer contacts the KDC for a session ticket to access the resource. On each subsequent attempt, the computer can simply contact the resource directly, using the same ticket, without having to go to a domain controller first. In this way, unnecessary communication with the domain controller is eliminated. This process allows users to log on faster and gain access to network resources more quickly. A toy model of the ticket exchange follows.
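The toy model below uses Fernet symmetric encryption from the third-party `cryptography` package in place of Kerberos's actual ticket format. It shows only the core idea: the KDC shares a secret key with each principal and issues a session ticket sealed under the service's key, so the service can trust that the KDC authenticated the client.

    from cryptography.fernet import Fernet

    # Long-term secret keys, one per principal, held by the KDC.
    kdc_keys = {"alice": Fernet.generate_key(), "fileserver": Fernet.generate_key()}

    def kdc_issue_ticket(client: str, service: str) -> tuple[bytes, bytes]:
        """Return (ticket sealed for the service, session key sealed for the client)."""
        session_key = Fernet.generate_key()
        ticket = Fernet(kdc_keys[service]).encrypt(
            b"client=" + client.encode() + b";key=" + session_key)
        sealed_for_client = Fernet(kdc_keys[client]).encrypt(session_key)
        return ticket, sealed_for_client

    # The client asks the KDC for a ticket to the file server, then presents it.
    ticket, sealed = kdc_issue_ticket("alice", "fileserver")
    session_key = Fernet(kdc_keys["alice"]).decrypt(sealed)  # client unseals its copy

    # The service decrypts the ticket with its own long-term key; success proves
    # the ticket came from the KDC, and the embedded name identifies the client.
    contents = Fernet(kdc_keys["fileserver"]).decrypt(ticket)
    print(contents.split(b";")[0])  # b'client=alice'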

NTLM Authentication

In NTLM authentication, a challenge-response system is used to avoid revealing passwords directly over an untrusted network. At its simplest, the server sends the user some sort of challenge, typically a random string. The user then computes a response, usually some function based on both the challenge and the password. This way, even if an intruder captures a valid challenge-response pair, it will not help the intruder gain access to the system, since future challenges are likely to be different and thus require different responses. The sketch below illustrates this generic scheme.
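As a minimal sketch of the generic challenge-response idea, the code below uses HMAC from Python's standard library. This is not the actual NTLM algorithm (NTLM uses its own hash computations); it only illustrates that the password itself never crosses the network.

    import hashlib
    import hmac
    import secrets

    password = b"correct horse battery staple"  # shared secret, never transmitted

    # Server side: issue a random challenge.
    challenge = secrets.token_bytes(16)

    # Client side: compute a response from the challenge and the password.
    response = hmac.new(password, challenge, hashlib.sha256).digest()

    # Server side: recompute and compare. A captured (challenge, response) pair
    # is useless later, because the next challenge will be different.
    expected = hmac.new(password, challenge, hashlib.sha256).digest()
    print(hmac.compare_digest(response, expected))  # True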

In Microsoft Windows NT, the client contacts a primary domain controller (PDC) or a backup domain controller (BDC) to log on to the domain. Then, when the client is ready to establish a session with a particular resource, such as a printer share, it contacts the server that maintains the resource. The server, in turn, contacts the domain controller that maintains the resource in order to obtain the client's required credentials or access token. NTLM is used in Windows 2000 for backward compatibility with other Windows products such as Windows NT. NTLM is also used with the Telnet service in Windows 2000 so users do not transmit their passwords in clear text to the Telnet service. The Telnet service is only implemented on Windows 2000 when Services for Unix is installed.

Smart Cards

Smart cards are typically credit card-sized cards that contain a small amount of memory and sometimes a processor. Since smart cards contain more memory than a typical magnetic stripe card and can process information, they are used in security situations where these features are a necessity. They can be used to hold system logon information, such as the user's private key, along with other personal information about the user, including passwords. In a typical smart card logon environment, the user is required to insert his or her smart card into a reader device connected to the computer. The software then uses the information stored on the smart card for authentication. When paired with a password and/or a biometric identifier, the level of security is increased. For example, requiring the user to simply enter a password for logon is less secure than having them insert a smart card and enter a password. File encryption utilities that use the smart card as the key to the electronic lock are another security use of smart cards.

Secure Code

Electronic software distribution over any network involves potential security problems. Software can contain programs such as viruses and Trojan horses. To help address some of these problems, you can associate digital signatures with the files. A digital certificate is a means of establishing identity via public key cryptography; code signed with a digital certificate verifies the identity of the publisher and ensures that the code has not been tampered with after it was signed. Certificates and object signing establish identity and let the user make decisions about the validity of a person's identity. When the user executes the code for the first time, a dialog box appears that provides information on the certificate and a link to the certificate authority.

Microsoft developed the Authenticode technology, which enables developers and programmers to digitally sign software. Before software is released to the public or internally to the organization, developers can digitally sign the code. If the software is modified after it has been signed, the digital signature becomes invalid. In Internet Explorer, you can specify security settings that prevent users from downloading and running unsigned software from any security zone. Internet Explorer can also be configured to automatically trust certain software vendors and authorities so that their software and other information is automatically accepted. The hash comparison at the heart of tamper detection is sketched below.
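As a simplified sketch of why modifying signed code invalidates the signature: the signature covers a cryptographic hash of the file, so any change to the file changes the hash. Authenticode additionally signs the hash with the publisher's certificate; the example below shows only the hash comparison, and the file name "setup.exe" is hypothetical.

    import hashlib

    def file_digest(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        sha256 = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                sha256.update(chunk)
        return sha256.hexdigest()

    published_digest = file_digest("setup.exe")  # recorded at signing time

    # Later, any modification to the file is detected by rehashing it.
    if file_digest("setup.exe") != published_digest:
        print("file was modified after signing; do not run it")
    else:
        print("digest matches the signed value")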

Technologies to Secure Network Connectivity

Businesses and other organizations use the Internet because it provides useful services. Organizations can choose to support or not support Internet-based services based on a business plan or an information technology strategic plan. In other words, organizations should analyze their business needs, identify potential methods of meeting those needs, and consider the security ramifications of those methods along with cost and other factors.

Most organizations use Internet-based services to provide enhanced communications between business units, or between the business and its customers, or to provide a cost-saving means of automating business processes. Security is a key consideration: a single security incident can wipe out any cost savings or revenue provided by Internet connectivity.

Some of the ways to protect the organization from outside intrusions include firewalls and virtual private networks (VPNs).


Firewalls

Many organizations have connected or want to connect their private LANs to the Internet so that their users

can have convenient access to Internet services. Since the Internet as a whole is not trustworthy, their private

systems are vulnerable to misuse and attack. A firewall is a safeguard that one can use to control access

between a trusted network and a less trusted one. A firewall is not a single component; it is a strategy for

protecting an organization's Internet-reachable resources. A firewall serves as the gatekeeper between the

untrustworthy Internet and the more trustworthy internal networks.

The main function of a firewall is to centralize access control. If outsiders or remote users can access the internal networks without going through the firewall, its effectiveness is diluted. For example, if a traveling manager has a modem connected to his or her office computer that can be dialed into while traveling, and that computer is also on the protected internal network, then an attacker who can dial into that computer has circumvented the firewall. Similarly, if a user has a dial-up Internet account with a commercial ISP, and sometimes connects to the Internet from his or her office computer via modem, he or she is opening an unsecured connection to the Internet that circumvents the firewall. Firewalls provide several types of protection:

• They can block unwanted traffic.

• They can direct incoming traffic to more trustworthy internal systems.

• They hide vulnerable systems that cannot easily be secured from the Internet.

• They can log traffic to and from the private network.

• They can hide information such as system names, network topology, network device types, and

internal user IDs from the Internet.

• They can provide more robust authentication than standard applications might be able to do.

As with any safeguard, there are trade-offs between convenience and security. Transparency refers to how visible the firewall is to inside users and to outsiders passing through it. A firewall is transparent to users if they do not notice or stop at the firewall in order to access a network. Firewalls are typically configured to be transparent to internal network users (while going outside the firewall) and non-transparent to outside traffic coming in through the firewall. This generally provides the highest level of security without placing an undue burden on internal users.

Types of firewalls include packet filtering gateways, application gateways, and hybrid or complex gateways.

Packet Filtering Gateways

Packet filtering firewalls use routers with packet filtering rules to grant or deny access based on source address, destination address, and port. They offer minimum security but at a very low cost, and can be an appropriate choice for a low-risk environment. They are fast, flexible, and transparent. Filtering rules are often not easily maintained on a router, but tools are available to simplify the tasks of creating and maintaining the rules.
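At its core, a packet filter is a first-match walk over a rule table keyed on exactly the header fields named above. The following sketch is purely illustrative (the addresses, ports, and naive prefix matching are invented for the example, not taken from any particular router):

    # Each rule is (action, source prefix, destination prefix, destination
    # port); "any" matches every address and None matches any port.
    RULES = [
        ("permit", "any",   "10.0.0.5", 25),    # inbound mail to the mail host
        ("permit", "10.0.", "any",      None),  # internal hosts may go anywhere
        ("deny",   "any",   "any",      None),  # default: deny everything else
    ]

    def match(value, pattern):
        return pattern == "any" or value == pattern or value.startswith(pattern)

    def filter_packet(src, dst, port):
        """Return the action of the first rule matching the packet header."""
        for action, r_src, r_dst, r_port in RULES:
            if match(src, r_src) and match(dst, r_dst) and r_port in (None, port):
                return action
        return "deny"

    print(filter_packet("192.0.2.7", "10.0.0.5", 25))  # permit (mail)
    print(filter_packet("192.0.2.7", "10.0.0.9", 23))  # deny (inbound telnet)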

Filtering gateways do have inherent risks, including:

• The source and destination addresses and ports contained in the IP packet header are the only information available to the router in deciding whether or not to permit traffic into an internal network.

• They do not protect against IP or DNS address spoofing.

• An attacker will have direct access to any host on the internal network once access has been granted by the firewall.

• Strong user authentication isn't supported with some packet filtering gateways.

• They provide little or no useful logging.

Application Gateways

An application gateway uses server programs (called proxies) that run on the firewall. These proxies take external requests, examine them, and forward legitimate requests to the internal host that provides the appropriate service. Application gateways can support functions such as user authentication and logging.
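A proxy's core behavior can be sketched as a small relay: accept an external connection, authenticate and examine the request, and forward it to the internal server. In the hypothetical sketch below, the listener address and internal host are invented for illustration, and a production proxy would inspect and log the traffic rather than blindly relay it:

    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 8080)   # illustrative: the only address outsiders see
    INTERNAL_HOST = ("10.0.0.8", 80)  # illustrative: the hidden internal server

    def relay(src, dst):
        # Copy bytes one way until the connection closes.
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def handle(client):
        # A real proxy would authenticate the user, examine the request, and
        # log it here before forwarding; this sketch only relays the bytes.
        upstream = socket.create_connection(INTERNAL_HOST)
        threading.Thread(target=relay, args=(upstream, client), daemon=True).start()
        relay(client, upstream)

    with socket.create_server(LISTEN_ADDR) as server:
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()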

Because the application gateway is considered the most secure type of firewall, this configuration provides a number of advantages for the medium- to high-risk site:

• The firewall can be configured as the only host address that is visible to the outside network, requiring

all connections to and from the internal network to go through the firewall.

• The use of proxies for different services prevents direct access to services on the internal network,

protecting the enterprise against insecure or badly configured internal hosts.

• Strong user authentication can be enforced with application gateways.

• Proxies can provide detailed logging at the application level.

Hybrid or Complex Gateways

Hybrid gateways combine two or more of the above firewall types, implemented in series or in parallel. If they are connected in series, the overall security is enhanced; on the other hand, if they are connected in parallel, the network security perimeter will be only as secure as the least secure of the methods used. In medium- to high-risk environments, a hybrid gateway may be the ideal firewall implementation.

Virtual Private Networks and Wide Area Networks

Many organizations have local area networks and information servers spread across multiple locations. When

organization-wide access to information or other LAN-based resources is required, leased lines are often

used to connect the LANs into a Wide Area Network. Leased lines are relatively expensive to set up and

maintain, making the Internet an attractive alternative for connecting physically separate LANs.

The major shortcoming of using the Internet for this purpose is the lack of confidentiality of the data flowing over the Internet between the LANs, as well as the vulnerability to spoofing and other attacks. Virtual private networks use encryption to provide the required security services. Typically, encryption is performed between firewalls, and secure connectivity is limited to a small number of sites.

One important consideration when creating virtual private networks is that the security policies in use at each

site must be equivalent. A VPN essentially creates one large network out of what were previously multiple

independent networks. The security of the VPN will essentially fall to that of the lowest common

denominator—if one LAN allows unprotected dial-up access, all resources on the VPN are potentially at risk.

Remote Access

Increasingly, businesses require remote access to their information systems. This may be driven by the need for traveling employees to access e-mail, by salespeople remotely entering orders, or by a business decision to promote telecommuting. By its very nature, remote access to computer systems adds vulnerabilities by increasing the number of access points.

Dial-in

Typically, the remote computer uses an analog modem to dial an auto-answer modem at the corporate location. Security methods for protecting this connection include:

• Controlling knowledge of the dial-in access numbers. This approach is vulnerable to automated attacks

by "war dialers," simple pieces of software that use auto-dial modems to scan blocks of telephone

numbers and locate and log modems.

• Username/password pairs. Since an attacker would need to be tapping the telephone line, dial-in

connections are less vulnerable to password sniffer attacks that have made reusable passwords almost

useless over public networks. However, the use of network sniffers on internal networks, the lack of

password discipline, and social engineering make obtaining or guessing passwords easy.

• Advanced authentication. There are many methods that can be used to supplement or replace traditional passwords. A few examples follow, with a generic challenge-response sketch after the list:

◦ Dial-back modems. These devices require the user to enter a username/password upon initial connection. The corporate modem then disconnects, looks up the authorized remote telephone number for the connecting user, and dials the remote modem to establish the connection.

◦ Public key certificates. The public key certificates described earlier can be used when logging on.

◦ Microsoft Challenge Handshake Authentication Protocol (MS-CHAP). This is a variant of CHAP

that does not require a plaintext version of the password on the authenticating server.

◦ Microsoft Challenge Handshake Authentication Protocol version 2 (MS-CHAP v2). This provides

mutual authentication, stronger initial data encryption keys, and different encryption keys for

sending and receiving.

◦ Extensible Authentication Protocol (EAP). This is an extension to the Point-to-Point Protocol (PPP) that works with dial-up clients.
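All of the challenge-handshake variants share one idea: prove knowledge of a secret without ever sending the secret over the line. The generic sketch below illustrates that idea with an HMAC; it is not the MS-CHAP wire protocol, and the user name and secret are invented for the example:

    import hashlib
    import hmac
    import os

    # The server holds a shared secret per user. (MS-CHAP additionally avoids
    # storing a plaintext password; this generic sketch does not model that.)
    SECRETS = {"alice": b"correct horse battery staple"}

    def server_challenge():
        return os.urandom(16)  # fresh random nonce, never reused

    def client_response(secret, challenge):
        # The client proves knowledge of the secret without transmitting it.
        return hmac.new(secret, challenge, hashlib.sha256).digest()

    def server_verify(user, challenge, response):
        expected = hmac.new(SECRETS[user], challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = server_challenge()
    response = client_response(SECRETS["alice"], challenge)
    print(server_verify("alice", challenge, response))  # True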

The organization's ability to monitor the use of remote access capabilities can also become an issue. The most effective approach is to centralize the modems into remote access servers or modem pools. Allowing users to connect their own modems to their work computers should be controlled; in most cases it should not be allowed, because modems distributed throughout the organization and not accessed through the firewall are difficult to monitor and are potential security risks.

Information regarding access to company computer and communication systems, such as dial-up modem phone numbers, should be considered confidential. This information should not be posted on electronic bulletin boards, listed in telephone directories, placed on business cards, or otherwise made available. The Network Services Manager should periodically scan direct dial-in lines to monitor compliance with policies and should periodically change the telephone numbers to make it more difficult for unauthorized parties to locate company communications numbers.

Intrusion Detection Tools

Intrusion detection is the process of detecting unauthorized use of, or attack upon, a computer or network.

Intrusion Detection Systems (IDSs) are software or hardware systems that detect such misuse. IDSs can detect

attempts to compromise the confidentiality, integrity, and availability of a computer or network. The attacks

can come from attackers on the Internet, authorized insiders who misuse the privileges given them, and

unauthorized insiders who attempt to gain unauthorized privileges.

Intrusion detection capabilities are rapidly becoming necessary additions to every large organization's

security infrastructure. The question for security professionals should not be whether to use intrusion

detection, but which features and capabilities to use. However, one must still justify the purchase of an IDS.

There are at least three good reasons for acquiring an IDS: to detect attacks and other security violations that cannot be prevented, to prevent attackers from probing a network, and to document the intrusion threat to an organization.

There are several types of IDSs available today, characterized by different monitoring and analysis

approaches. Each has distinct uses, advantages, and disadvantages. IDSs can monitor events at three different

levels: network, host, and application. IDSs can analyze these events using two techniques: signature

detection and anomaly detection. Some IDSs also have the ability to automatically respond to the detected

attacks. These variations are discussed in the following sections.
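Of the two analysis techniques, signature detection is the easier to illustrate: flag any event that matches a known pattern of misuse. In the sketch below, the signatures and log lines are invented for the example; real IDS signatures are far richer:

    import re

    # Illustrative attack signatures: known patterns of misuse in event logs.
    SIGNATURES = {
        "directory traversal": re.compile(r"GET /.*\.\./\.\."),
        "repeated failed logons": re.compile(r"failed logon for '\w+' \(attempt [5-9]\)"),
    }

    def scan_events(events):
        """Signature detection: yield an alert for each matching event."""
        for event in events:
            for name, pattern in SIGNATURES.items():
                if pattern.search(event):
                    yield name, event

    log = [
        "GET /../../etc/passwd HTTP/1.0",
        "failed logon for 'admin' (attempt 7)",
        "user alice read report.doc",          # benign, no alert
    ]
    for name, event in scan_events(log):
        print(f"ALERT [{name}]: {event}")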

Virus Detection

Anti-virus tools perform three basic functions. Tools may be used to detect, identify, or remove viruses.

Detection tools perform proactive detection, active detection, or reactive detection. That is, they detect a

virus before it executes, during execution, or after execution. Identification and removal tools are more

straightforward in their application; neither is of use until a virus has been detected.

Detection tools detect the existence of a virus on a system, and they perform detection at a variety of points in the system. The virus may be actively executing, residing in memory, or stored in executable code, and it may be detected before execution, during execution, or after execution and replication. There are three categories of analysis detection tools:

• Static Detection. Static analysis detection tools examine executables without executing them. They can

be used to detect infected code before it is introduced to a system.

• Detection by Interception. To propagate, a virus must infect other host programs. Some detection tools

are intended to intercept attempts to perform such activities. These tools halt the execution of virus-

infected programs as the virus attempts to replicate or become resident.

• Detection of Modification. All viruses modify executables in their replication process. As a result, the presence of viruses can also be detected by searching for the unexpected modification of executables. This process is sometimes called integrity checking; a minimal sketch follows this list. Note that this type of detection tool works only after infected executables have been introduced to the system and the virus has replicated.
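A bare-bones integrity checker needs only two operations: record a cryptographic hash of every executable, and later compare current hashes against that baseline. The following sketch shows the idea; the baseline file name and the *.exe pattern are illustrative choices, not part of any particular product:

    import hashlib
    import json
    import sys
    from pathlib import Path

    BASELINE = Path("baseline.json")  # illustrative baseline location

    def digest(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def snapshot(root):
        """Record a hash of every executable under root."""
        hashes = {str(p): digest(p) for p in Path(root).rglob("*.exe")}
        BASELINE.write_text(json.dumps(hashes))

    def check(root):
        """Report executables that appeared or changed since the snapshot."""
        baseline = json.loads(BASELINE.read_text())
        for p in Path(root).rglob("*.exe"):
            known = baseline.get(str(p))
            if known is None:
                print(f"NEW FILE: {p}")
            elif digest(p) != known:
                print(f"MODIFIED (possible infection): {p}")

    if __name__ == "__main__":
        {"snapshot": snapshot, "check": check}[sys.argv[1]](sys.argv[2])

Run it once with snapshot on a known-clean system, then run check periodically; any unexpected modification is a candidate infection.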

Identification tools are used to identify which virus has infected a particular executable. This allows the user to

obtain additional information about the virus. This is a useful practice, since it may provide clues about other

types of damage incurred and appropriate clean-up procedures.

Removal tools attempt to efficiently restore the system to its uninfected state by removing the virus code

from the infected executable. In many cases, once a virus has been detected, it is found on numerous systems

or in numerous executables on a single system. Recovery from original diskettes or clean backups can be a

tedious process.

Many third-party vendors develop the previously mentioned tools and release updates for new viruses. Acquiring the correct type of tool will depend on the organization's needs for virus scanning and removal.

Auditing

After you have established the protection mechanisms on your system, you will need to monitor them. You

want to be sure that your protection mechanisms actually work. You will also want to observe any indications

of misbehavior or other problems. This process of monitoring the behavior of the system is known as

auditing.

Various operating systems maintain a number of log files that keep track of what has been happening to the computer. Log files are an important building block of a secure system: they form a recorded history, or audit trail, of your computer's past, making it easier to track down intermittent problems or attacks. By using log files, you may be able to piece together enough information to discover the cause of a bug, the source of a break-in, and the scope of the damage involved. In cases where you cannot stop damage from occurring, at least you will have some record of it. Those logs could be exactly what you need to rebuild your system, conduct an investigation, give testimony, recover insurance money, or get accurate field service performed. Log files also have a fundamental vulnerability: because they are often recorded on the system itself, they are subject to alteration or deletion.

Events to Audit

Careful consideration should be given to which events to audit, because auditing carries a potential performance cost. If all events on a system are audited, its performance will degrade substantially. The events to audit should therefore be chosen carefully, depending on what you need to monitor.

Operating systems audit a variety of events:

• Logon and logoff information

• System shutdown and restart information

• File and folder access

• Password changes

• Object access

• Policy changes

Most audit logs are able to keep a history or backlog of events. Log files can be set up in various ways (a short logging sketch follows the list), including:

• Setting the log file to a certain size and overwriting the oldest events as needed when the log file fills up, on a first in, first out basis.

• Setting the log file to retain events for a certain number of days.

• Setting the log file to a specified size; once the log file fills up, it must be cleared manually.
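For illustration, Python's standard logging module implements the first-in, first-out policy directly. The sketch below caps a log at roughly 1 MB with five generations kept, so the oldest events are discarded as the log fills; the file name, sizes, and events are illustrative:

    import logging
    from logging.handlers import RotatingFileHandler

    # First in, first out: rotate at ~1 MB and keep five older generations.
    handler = RotatingFileHandler("audit.log", maxBytes=1_000_000, backupCount=5)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

    audit = logging.getLogger("audit")
    audit.setLevel(logging.INFO)
    audit.addHandler(handler)

    audit.info("logon: user=alice workstation=WS04")
    audit.info("policy change: audit settings updated by admin")

    # For the days-based policy, logging.handlers.TimedRotatingFileHandler
    # (when="D", backupCount=7) rotates by time instead of size.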

Technologies to Keep the System Running in the Event of a Failure

Computers are not failure proof; you can only make them more failure resistant. Faulty hardware, attackers, natural disasters, power failures, and user errors can corrupt, damage, or delete data from a system. In the event that any of these threats materializes, a disaster recovery plan needs to be in place.

To prevent these disasters from becoming a financial burden on the organization, you should develop plans

for the recovery and restoration of data. There are several questions one needs to ask in order to establish

what plans and recovery systems are currently in use:

• What information needs to be backed up and what backup strategies and plans need to be

considered?

• Are backups stored onsite and offsite? If onsite, are the backups stored in fireproof safes? If offsite,

how readily are the backups available in case of emergencies? Are backups tested regularly?

• Are technologies such as Microsoft Cluster Server in place?

• What Redundant Array of Independent Disks (RAID) system implementations are in place?

• Is there a record of critical systems' hardware and software configurations?

• What training is required so operators and administrators can respond in a timely and professional

manner?

• What records need to be maintained in order to recover from a failure or disaster?

• Is there an incident response team available in case of emergency?

• Where are licensed software packages kept and what onsite support is there from vendors?

• Have fire drills been practiced by the incident response team and security officials?

Other components and procedures could also be included; this is just a guideline for getting started on a disaster recovery plan. One important step is to always test the plans you have implemented. Most administrators know that it takes money, equipment, and time to test recovery procedures. If plans and procedures are structured and tested correctly, recovery will become easier. Here is a general list of some things that can make it easier to recover from disasters:

• Plans and procedures should already be developed before a failure occurs. Most of the time, when a failure halts continuity of operations for a prolonged period, it is because plans and procedures were not developed correctly.

• The software configuration of systems should be maintained. This includes operating system versions,

service pack updates, and any other software.

• You should keep track of hardware configurations such as disks and partitions; peripheral devices

installed; and IRQ, DMA, and I/O addresses.

• Always ensure that backups are current and up to date. If possible, perform trial restore operations to

test backups.

• Implement new technologies such as Microsoft Cluster Server. Microsoft Cluster Server technology is discussed later in this paper.

• Implement RAID technologies. These are also discussed later in the paper.

• It is also possible in some cases to implement standby servers, where backed-up information is restored on a computer kept purely for redundancy.

Spares

It's always a good idea to have spares readily available in case of emergency. This includes both hardware

and software spares. The following is a basic inventory that lists the hardware and software components that

should be stored as emergency spares:

• Motherboards, CPUs, memory modules, video cards and screens, and power supplies

• Hard drives, floppy drives, tape drives, CD-ROM drives, etc.

• Network interface cards and modems

• Network cables, hubs, switches, bridges, routers, and other networking hardware

• Original copies of currently installed software and service packs

• Original copies of currently installed operating systems and service packs

• Any additional hardware cards like serial cards and printer port cards

• Any peripheral components like printers, scanners, and multimedia devices

Once you decide which hardware and software components to keep spares of, general maintenance and record keeping will help you discover impending errors. Many organizations keep a configuration management database or record book for each critical system. Configuration databases help to track when patches are applied and when hardware or software changes are made to a system. The database should include general system information such as:

• Hardware configuration

• Software configuration including operating system versions, service packs applied, software packages

installed, and disk configurations such as partition information

• Network configuration such as network cards, protocols, and any physical and logical addresses

Errors and failures should also be logged in the database. This creates a history in which certain patterns and events often become apparent.

Maintenance schedules should be set up to check general systems. Audit logs and general system and application logs should be checked on a regular basis. If possible, run defragmentation utilities on disks and partitions where general data is stored. Run integrity-checking utilities on databases like Microsoft SQL Server and Exchange Server. Run registry-monitoring utilities like Regmon to track registry changes, and file-monitoring utilities like Filemon. You can find the Regmon and Filemon utilities at http://www.sysinternals.com/.

Develop an Incident Response Team

Develop an incident response team to help control and recover systems in the event of a disaster. The

incident response team should document:

• Notification plan of who to contact for which kinds of problems or emergencies, and how to notify

them

• Contact information for administrators who need to be notified

• Contact information for vendor and consultant support

• Management personnel who need to be notified

• Any other critical users

Fault Tolerance

To minimize the loss of data and allow for the continuity of operations, you can use technologies such as Redundant Array of Independent Disks (RAID) and Microsoft Cluster technology. In this section we concentrate on RAID technologies. RAID is a fault-tolerant disk configuration in which part of the physical storage capacity contains redundant information about the data stored on the disks. This redundant information helps to keep the system running in the event of a single disk failure.

RAID technology is implemented through either hardware or software. Hardware implementations of RAID are more expensive than software implementations, but faster. Some hardware implementations of RAID support hot swapping of disks, which enables administrators to swap a failed hard disk while the computer is running. Software fault-tolerant RAID systems are cheaper and are available on Microsoft Windows NT and Microsoft Windows 2000. Both hardware and software fault-tolerant RAID systems regenerate data when a drive fails and reconstruct the data onto the new disk when the failed disk is replaced.

There are various types of RAID techniques in use. For simplicity's sake, we discuss only the two most common techniques: disk mirroring and disk striping with parity.

Disk Mirroring

In disk mirroring, only two disks are used; when data is written to one disk, it is duplicated on the other. This can cause a slight loss in write performance. A variation of mirroring is disk duplexing, where each disk has its own controller. This helps to speed up write operations and provides redundancy in case a controller fails. Read operations are fast on both mirrored and duplexed sets, because data can be read from either disk.
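The behavior is easy to model. In the toy sketch below (purely illustrative, nothing like a real disk driver), every write lands on both members, so a read can be served from either disk and a single disk failure loses no data:

    class MirrorSet:
        """Minimal model of a two-disk mirror set."""

        def __init__(self, size):
            self.disks = [bytearray(size), bytearray(size)]

        def write(self, offset, data):
            for disk in self.disks:  # duplicated write: the slight cost
                disk[offset:offset + len(data)] = data

        def read(self, offset, length, failed=None):
            # Reads can be serviced by either member, so losing one disk
            # loses no data.
            disk = self.disks[1 if failed == 0 else 0]
            return bytes(disk[offset:offset + length])

    m = MirrorSet(16)
    m.write(0, b"payroll")
    print(m.read(0, 7, failed=0))  # b'payroll', even with disk 0 failed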

Advantages of using mirror sets are:

• Read operations are fast.

• Recovery from failure is rapid.

• In software implementations of mirror sets, the system and boot partitions can be mirrored.

Disadvantages of mirror sets are:

• There is a slight loss in performance during write operations.

• Only fifty percent of the total storage space can be used to store data. For example, with two 1-GB hard drives, one drive holds the duplicate copy while the other stores the data.

• If you use software mirror sets, you will be required to create a fault tolerant boot disk.

Disk Striping with Parity

Strips of equal size on each disk in the volume make up a stripe set. A stripe set with parity adds parity to a

stripe set configuration.

Data is written across two or more hard drives, while another hard drive holds the parity information. The data and parity information are written on the volume in such a way that they are always on different disks. This way, if one of the hard drives fails, the remaining drives can recalculate the lost information using the parity information. When the faulty hard drive is replaced, the information can be regenerated onto the newly installed working hard drive by using the parity information. The minimum number of hard drives involved in disk striping with parity is 3, and the maximum is 32.
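The parity arithmetic is plain XOR, which is why any single lost strip can always be recalculated from the survivors. A minimal sketch (the strip contents are invented for illustration):

    from functools import reduce

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # One stripe across a four-disk set: three data strips plus their parity.
    strips = [b"AAAA", b"BBBB", b"CCCC"]
    parity = reduce(xor, strips)  # the parity strip stored on the fourth disk

    # Disk 1 fails: XOR-ing the surviving strips with the parity strip
    # regenerates the lost strip exactly.
    recovered = reduce(xor, [strips[0], strips[2], parity])
    assert recovered == strips[1]
    print(recovered)  # b'BBBB'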

A stripe set with parity works well when large databases are implemented on a system and read operations are performed more often than write operations, because a stripe set with parity offers excellent read performance. Stripe sets with parity should be avoided where applications require high-speed data collection from a process, and for database applications where records are continually being updated, because performance degrades as the percentage of write operations increases.

Advantages of using stripe set with parity are:

• Read operations are faster than with a single disk drive. The more drives you put into the system, the faster the read operations.

• Only one disk's worth of capacity is used for parity information, so the more disks you add, the more space there is for data.

• Replacing a faulty disk requires little administrative effort.

Disadvantages are:

• In software implementations of stripe sets with parity, neither the boot nor the system partition can be on the stripe set.

• Write operations are slower because of the parity information that needs to be generated.

• When a hard disk in the stripe set fails, the performance of the system degrades, because the missing information must be recalculated from parity whenever it is requested.

• Stripe sets with parity consume more memory than mirror sets because of the parity information that

needs to be generated.

Cluster Server Technology

Certain organizations would like to keep computer systems operational continuously: 24 hours a day, 7 days a week, 365 days a year. One way to do this is by implementing cluster server technology. A cluster is an interconnected group of servers that act as a single unit in sharing some resource or responsibility, and cluster server technology allows users to view the clustered computers as one entity. Cables and cluster server software connect the computers into a cluster. Microsoft Windows 2000 Advanced Server includes cluster software that allows you to manage clustering. Microsoft Cluster Service and Network Load Balancing offer availability and scalability to organizations that build applications using a multi-tier model.

Cluster server technology allows features such as:

• Fault tolerance. In the event of a computer or node failure in a cluster, the other computers keep

running. Fault-tolerant systems employ redundant hardware and operating systems that work

together at every level in exact synchronization across two server units. Think of a fault-tolerant

system as a failover cluster with very high responsiveness (often on the order of milliseconds).

• High availability. This focuses on maximizing uptime through automated responses to failure and failover systems. To enhance availability, you add more servers and backup systems to the cluster to take over responsibility in the event of a failure. The servers need to keep monitoring each other's activities and must maintain consistency every few milliseconds; this is usually implemented by a high-speed interconnect directly between the servers.

• Resource sharing. Resource sharing involves making server components, such as disk storage and

printers, available across all the nodes in the cluster. This is especially important for database servers,

which need to share large volumes between machines while maintaining consistency of data.

• Load sharing. Load sharing involves balancing application processing across the various nodes in the cluster. This can be implemented by distributing new logins to different servers, based on their load at the moment. It could also involve directly moving a running application from one server to another.

• High throughput. High throughput focuses on the ability to process network requests or packets

quickly. This becomes most important in applications like Web or FTP servers, whose primary job is to

push out data. This kind of clustering focuses on improving the network interfaces and the routing of

network requests to servers. It can be built into the cluster nodes themselves, or may be a property of

an external balancing device.

Using a two-node cluster, Microsoft Cluster Service provides reliable application, transactional, and file and print services. To create reliable database and messaging services, combine Microsoft Cluster Service with Microsoft SQL Server and Exchange Server.

In multitier applications designed for the Internet, Network Load Balancing can extend the functionality of IIS

5.0 by supplying load balancing and high availability to the first tier—the user interface. Up to 32 servers can

be used in a Web cluster.

Organizations can combine both Cluster Service and Network Load Balancing to provide comprehensive enterprise e-commerce solutions. For example, an e-commerce Web site might cluster its front-end Web servers running IIS 5.0 with Network Load Balancing and have them access a back-end cluster running SQL Server Enterprise Edition.

Standby Servers

It is possible to set up a standby server in case the production server fails. The standby server should mirror

the production server. You can use the standby server to replace the production server in the event of a

failure or as a read-only server.

Create the standby server by loading the same operating system and applications as on the production server. Make backups of the data on the production server and restore these backups on the standby server; this also helps to verify the backups being performed. The standby server will have a different IP address and name if it is connected to the network, so you will have to change its IP address and name if the production server fails and the standby server needs to become the production server.

To maintain the standby server, regular backups and restorations need to be performed. For example, say you make a full backup on Mondays and incremental backups on every other day of the week. You would restore the full backup on the standby server and then restore each subsequent incremental backup on the day it is performed.

Reactive Security Planning

Overview

In reactive planning, the goal is to get the business back to normal operations as fast as possible in the event of a disaster. This goal can be achieved by having efficient, well-thought-out contingency plans.

Contingency Plan

A contingency plan is an alternative plan that should be developed in case an attack damages data or any other assets, stopping normal business operations and productivity and requiring time to restore them. The ultimate goal of the contingency plan is to maintain the availability, integrity, and confidentiality of data; it is the proverbial "Plan B." There should be a plan per type of attack and/or per type of threat. A contingency plan is a set of steps to be taken if an attack breaks through the security policies and controls, and it should address who must do what, when, and where to keep the organization functional.

For example:

• Moving productivity to another location or site

• Implementing disaster recovery plans

• Contacting vendors and consultants

• Contacting clients

• Rehearsing the plan periodically to keep staff up to date with current contingency steps

The following points outline the various tasks to develop a contingency plan:

• Address the organization's current emergency plan and procedures and how they are integrated into the contingency plan.

• Evaluate the current emergency response procedures and their effect on the continuous operation of the business.

• Develop planned responses to attacks, assess whether they are adequate to limit damage and minimize the impact on data processing operations, and integrate them into the contingency plan.

• Review backup procedures, including the most recent documentation and disaster recovery tests.

• Add disaster recovery plans to provide a temporary or longer-term operating environment. Disaster recovery plans should cover the required levels of security, so that security continues to be enforced during recovery, during temporary operations, and when the organization moves back to its original processing site or to a new processing site.

Draw up a detailed document outlining the various findings in the above tasks. The document should list:

• Any scenarios to test the contingency plan.

• The impact that any dependencies, assistance outside the organization, and difficulties in obtaining

essential resources will have on the plan.

• A list of priorities observed in the recovery operations and the rationale for establishing those priorities.

A contingency plan should be tested and revised by someone other than the person who created and wrote it, to verify that the plan is outlined clearly enough that anybody who reads it can implement it.


References

Microsoft Windows 2000 Resource Kit

Microsoft Windows NT 4.0 Server Resource Kit

Microsoft Windows NT 4.0 Workstation Resource Kit

Practical Unix and Internet Security by Simson Garfinkel and Gene Spafford

Computer Security by Dieter Gollmann

An Intro to Computer Security by Del Armstrong and John Simonson

Internet Hoaxes: http://ciac.llnl.gov

Viruses: http://ciac.llnl.gov

Automated Security by Donn Parker: http://www.infosecurity.com/

Distributed Denial Of Service Attacks: http://www.icsa.net/

Viruses: http://e-comm.webopedia.com/

Trojan Horses: http://www.cert.org

Electronic Sabotage by Carol E. Brown and Alan Sangster: http://www.bus.orst.edu/

Special Report: DDOS wreaks havoc on the Internet: http://www.infosecurity.com/

Have Script, Will Destroy (Lessons in DoS) by Brian Martin: http://www.attrition.org/

Back-End System issues for online financial sites: http://www.incurrent.com/

Internet Security Policy: A Technical Guide by Barbara Guttman and Robert Bagwill: National Institute of Standards and Technology Computer Security Division, http://csrc.nist.gov/

Threat Assessment of Malicious Code and Human Threats by Lawrence E. Bassham and W. Timothy Polk: National Institute of Standards and Technology Computer Security Division, http://csrc.nist.gov/

Pretty Good Privacy: http://www.pgp.com

A False Sense of Security by Julie Bort: LAN Times, http://www.lantimes.com

Is the Hacker Threat Real? by Christopher Null: LAN Times, http://www.lantimes.com

The Latest in Denial of Service Attacks: "Smurfing": http://www.pentics.net/denial-of-service/white-papers/smurf.cgi

Things that Go Bump in the Net by David Chess: http://www.research.ibm.com

Trusted Computer System Evaluation Criteria (Orange Book): National Computer Security Center

The Trusted Network Interpretation (Red Book): National Computer Security Center

Macintosh is a registered trademark of Apple Computer, Inc.


© 2013 Microsoft. All rights reserved.
