a study on Network security and management - full report.docx


7/27/2019 a study on Network security and management - full report.docx

http://slidepdf.com/reader/full/a-study-on-network-security-and-management-full-reportdocx 1/78


CHAPTER 1

INTRODUCTION


1. INTRODUCTION

The fundamental philosophy behind the Internet is expressed by the scalability argument: no

 protocol, mechanism, or service should be introduced into the Internet if it does not scale well. A

key corollary to the scalability argument is the end-to-end argument: to maintain scalability,

algorithmic complexity should be pushed to the edges of the network whenever possible.

Perhaps the best example of the Internet philosophy is TCP congestion control, which is

implemented primarily through algorithms operating at end systems. Unfortunately, TCP

congestion control also illustrates some of the shortcomings of the end-to-end argument. As a result of its strict adherence to end-to-end congestion control, the current Internet suffers from a serious malady: congestion collapse from undelivered packets.
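The additive-increase/multiplicative-decrease (AIMD) rule at the heart of end-system congestion control can be sketched as follows. This is a minimal illustration, not full TCP, which also includes slow start, timeouts, and fast recovery:

```python
# Minimal sketch of AIMD congestion-window adjustment at an end system.
# Real TCP (RFC 5681) adds slow start, retransmission timeouts, and
# fast recovery; this shows only the core additive-increase /
# multiplicative-decrease rule.

def aimd_step(cwnd: float, loss_detected: bool,
              increase: float = 1.0, decrease: float = 0.5) -> float:
    """Return the next congestion window (in segments)."""
    if loss_detected:
        return max(1.0, cwnd * decrease)  # back off multiplicatively on loss
    return cwnd + increase                # otherwise probe additively

# A responsive flow shrinks its window on loss; an unresponsive
# constant-rate sender would simply keep transmitting.
cwnd = 10.0
for loss in [False, False, True, False]:
    cwnd = aimd_step(cwnd, loss)
print(cwnd)  # 7.0  (10 -> 11 -> 12 -> 6 -> 7)
```

Because this back-off happens only at cooperating end systems, a flow that ignores the rule suffers no penalty, which is exactly the gap the mechanism below targets.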

The Internet’s excellent scalability and robustness result in part from the end-to-end nature of 

Internet congestion control. End-to-end congestion control algorithms alone, however, are unable

to prevent the congestion collapse and unfairness created by applications that are unresponsive to

network congestion. To address these maladies, we propose and investigate a novel congestion-

avoidance mechanism called Network Security and Maintenance (NSM).

 NSM entails the exchange of feedback between routers at the borders of a network in order to

detect and restrict unresponsive traffic flows before they enter the network, thereby preventing

congestion within the network.
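The border-feedback idea can be sketched roughly as follows. The function name, slack factor, and rate units are illustrative assumptions for this report's concept, not an actual protocol specification:

```python
# Hypothetical sketch of border feedback: an ingress router compares a
# flow's arrival rate with the delivery rate reported back by the
# egress router, and throttles flows that keep injecting packets the
# network cannot deliver. All names and the slack factor are
# illustrative assumptions.

def ingress_limit(arrival_bps: float, delivered_bps: float,
                  slack: float = 1.1) -> float:
    """Return the rate (bits/s) to which the flow should be policed."""
    if arrival_bps > delivered_bps * slack:
        # Unresponsive flow: clamp it to what actually gets through,
        # so undelivered packets stop consuming upstream capacity.
        return delivered_bps
    return arrival_bps  # responsive flow: leave it alone

print(ingress_limit(10e6, 2e6))   # 2000000.0 (throttled)
print(ingress_limit(2e6, 1.9e6))  # 2000000.0 (within slack, untouched)
```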

Modern organizations have a huge challenge on their hands, on a scale unlike anything they’ve

seen since the Y2K crisis. They must "secure" the organization in the face of increasing

complexity, uncertainty, and interconnection brought about by an unprecedented reliance on

technology to accomplish their mission. They must also stay mindful of the heavy hand of 

regulation as legislators discover the importance of security. This paper explores some of the

challenges that organizations must overcome to be successful in this environment and introduces

ways in which a change in perspective might be the impetus for an emerging mission-driven

approach to security.


Let’s start by recalling 9th grade biology class when you were introduced to the concept of a cell.

You might remember that each cell in the body performs a specific function. To carry out this

function, cells receive input from their environment, transform it, and create output — a

continuous cycle that lasts throughout the cell's life. The success of the cell in performing this

cycle is important to the larger environment in which it exists — the organs, systems, and

functions of the body. Interestingly, this cycle can also be used to describe the basic functions of 

information systems and organizations as well. In particular, an organization can be described as

an open system that gives and takes from its environment to exist, to be sustained, and to

succeed.

1.2 PROBLEM OF THE STUDY

The scope of security management:
Security as it is traditionally defined in organizations is one of the most pervasive problems that an organization must address. Rarely has there been an

organizational issue, problem, or challenge that requires the mobilization of everyone in the

organization to solve. (In this case, the Y2K effort comes to mind with one significant

distinction: security is an ongoing issue that must be managed well beyond New Year's Eve!)

The sheer expanse of any problem that traverses the entire organization poses many management

challenges, particularly when the focus is security.

First, the most important areas of the organization must be identified and targeted. This requires the organization to take an inventory to determine what needs to be protected and why. In a

large, complex organization, this can result in the identification of hundreds of assets that are

important to strategic drivers. Second, to secure this collection of organizational assets requires

many skills and resources that are typically scattered throughout the organization.

Because security is a problem for the whole organization, it simply is no longer effective or 

acceptable to manage it from the information technology department. Chief Security Officers

have one of the most difficult jobs in executive-level management because their success

depends on utilizing many of the organization’s skills and resources. 

In effect, CSOs must mobilize many disparate parts of the organization to work together and to

expand their core responsibilities to include security. This is not unlike a similar problem faced

 by U. S. automakers in the early 1980s. Faced with the need to improve quality to compete with


their Asian counterparts, some U. S. automakers wisely focused on quality as a core element of 

their mission. As a result, quality became a part of every worker’s core responsibilities and was

integrated into key business processes, and thus, the entire organization worked together to

overcome these deficiencies.

This presents another challenge for managing security because the security strategy must be

sufficiently dynamic to keep pace with the rate of organizational and technical change. On

 balance, security management must support the organization’s quest to be sensing, flexible, and

adaptive to its environment and must be able to make a measurable contribution to the

organization’s bottom-line and long-term resiliency.

PROBLEM DEFINITION

- Packets are buffered in the routers, which causes congestion collapse from undelivered packets.
- Retransmission of undelivered packets is required to ensure no loss of data.

1.3 NEED FOR THIS STUDY

In the same way, the ultimate benefactor of the security activities that an organization undertakes

should be the organization itself. Organizations deploy their internal structures — core assets and

 processes — with the goal of accomplishing their mission and providing benefits to stakeholders.

As with the cell example, anything that impedes assets and processes from doing their jobs

 potentially derails the organization’s ability to be successful. From this perspective, ensuring that

assets and processes remain productive is the real benefit and focus of the organization’s

investment in security.

Managing security in the context of the organization’s strategic drivers provides both advantages

and conflict. On the one hand, this approach ensures that the goals of security management are

forged from and aligned with the high-level goals of the organization. On the other hand, the

strategic drivers and needs of the organization are often in conflict with the actions required to

ensure that assets and processes remain productive.


In addition, as the organization is exposed to more complexity and uncertainty (because of the

increasing use of technology and the pace at which the organization’s risk environment changes),

keeping security activities and strategic drivers aligned becomes more difficult. In the end,

finding the right balance between protecting the organization’s core assets and processes and

enabling them to do their job becomes a challenge for security management — and a significant

 barrier to effectiveness.

1.4 OBJECTIVES

Security as an investment:

Dealing with a complex operating environment is costly and can significantly impact an

organization’s profitability. Protecting the financial condition and stability of an organization is

one of the most important issues for management. The resulting pressures from managing to the

 bottom line are a rich source of challenges for many activities throughout an organization,

especially for security management.

Expenditures receive much of the focus in organizations because they directly affect the

organization’s bottom line. Responsible financial managers scrutinize all expenses and make

implicit, if not direct, risk-versus-reward decisions. Security management is no exception — it is

often an expense-driven activity that can directly affect an organization’s profitability. It is no

wonder then that organizations are reluctant to view security as an investment that can generate

 benefits to the bottom line.

The view of security as overhead is an unfortunate outgrowth of the lack of inclusion of 

measurement and metrics as an essential element of security management. Organizations do not

routinely require return on investment calculations on security investments, nor do they attempt

to measure or gather metrics on the performance of security investments.

Absent a set of established and accepted metrics for measuring security ROI, there is little an

organization can do on its own in this area other than perform measurement in the context of 

incident avoidance or impact of a realized risk (i.e., the impact costs less than the control, and

therefore provides a return). And organizations are faced with another problem: Which security

investments should be measured? Technical controls, monitoring software, security staff, CSOs?

The measurement dilemma is pervasive across the entire security community, and lacking


specific guidance, organizations have become comfortable characterizing security activities as an

expense on their balance sheets.
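One common way to put numbers on incident avoidance is a return-on-security-investment (ROSI) calculation based on annualized loss expectancy (ALE). The arithmetic below is a generic illustration with hypothetical figures, not a method prescribed by this study:

```python
# Illustrative ROSI arithmetic: the "return" on a control is the risk
# reduction it buys, net of its own cost. All dollar figures are
# hypothetical.

def rosi(ale_before: float, ale_after: float, control_cost: float) -> float:
    """Return on security investment: (risk reduction - cost) / cost."""
    return (ale_before - ale_after - control_cost) / control_cost

# An exposure expected to cost $200k/yr, reduced to $50k/yr by a
# control costing $60k/yr, returns 1.5x the control's cost.
print(rosi(200_000, 50_000, 60_000))  # 1.5
```

This is exactly the "impact costs less than the control" reasoning described above, made explicit; its weakness is that both ALE figures must be estimated.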

In much the same way that information technology investments are now commonly capitalized,

the challenge for security management is to drive the organization in the same direction for 

security. The shift to characterizing security as an organizational investment promotes the view

that security can, at a minimum, preserve an organization’s bottom line, if not improve it.

Consider this: an organization that successfully approaches security as an investment may

increase its overall value in the marketplace, and may even be able to capture this value as

"goodwill" on their balance sheet.

In the future, a determinant of an organization’s value may be the amount of goodwill on its

 balance sheet that is directly due to its ability to secure critical assets and processes and improve

its resiliency. Certainly, an organization that can keep its core assets and processes in service in

the face of an attack, accident, or failure (and actually improve their ability to adapt to future

events) may be worth more than one that cannot, if only because of the competitive advantage

they create. Until organizations shift their view away from security as a burden, the ability of 

security management to effectively do its job at the organizational level will be impeded.


CHAPTER 2

LITERATURE SURVEY

- Deploy the Cisco Monitoring Analysis and Response System (MARS)
- Configure and use the Q1 Labs QRadar SIEM system
- Implement ArcSight Enterprise Security Management (ESM) v4.5
- Develop your SIEM security analyst skills

Network Security Bible by Eric Cole (Sep 8, 2009)

 Network security is constantly evolving, and this comprehensive guide has been

thoroughly updated to cover the newest developments. If you are responsible for network 

security, this is the reference you need at your side.

Covering new techniques, technology, and methods for approaching security, it also

examines new trends and best practices being used by many organizations. The revised Network Security Bible complements the Cisco Academy course instruction in networking security.

- Covers all core areas of network security and how they interrelate
- Fully revised to address new techniques, technology, and methods for securing an enterprise worldwide
- Examines new trends and best practices in use by organizations to secure their enterprises
- Features additional chapters on areas related to data protection/correlation and forensics
- Includes cutting-edge topics such as integrated cybersecurity and sections on Security Landscape, with chapters on validating security, data protection, forensics, and attacks and threats


CHAPTER 3

METHODOLOGY

3. METHODOLOGY


Technological biases:

The view of security as a financial impediment for the organization is often a consequence of the

tendency of organizations to consider security as a technology-driven activity. The security

industry itself contributes greatly to this characterization.

Framing security in technical terms is a logical outgrowth of the expansive (and ever-increasing)

number of technical products and services that are available to "help" organizations get a handle

on security management.

Worse yet, there is a propensity for organizations to frame security problems in technical terms,

often ignoring the management and operational weaknesses that are root causes or contributing

factors. The bias toward technological solutions or the framing of security issues in technical

terms has done a great disservice to organizations in their pursuit of adequate security.

Security is a business problem:

Security is a business or organizational problem that must be framed and solved in the context of 

the organization’s strategic drivers. However, many organizations adopt a technology-centric

approach to security by default.

There are several reasons why this has occurred. As stated previously, the largest contributor to

the technology-centric view of security is the industry itself: there is a strong technology bias to

security approaches and solutions, and even in the selection of skilled security personnel. Not

only has this made organizations more likely to view security as a technical specialty, but it has

also led them to misplace their most prized security resources in the IT department,

further alienating them from connecting to and aligning with the organization’s strategic drivers.  

The evolution of a risk-based paradigm for security has made it clear that a secure organization does not result from securing technical infrastructure alone. A security approach that is mission-centric (i.e., based on strategic drivers) strives to secure the organization's critical assets and processes regardless of where they "live."

This can be illustrated by examining the importance of information as an organizational asset.

Information is frequently stored, transported, and processed through technical means, and


therefore is considered a technical asset. However, this characterization is a distortion that can

lead organizations inappropriately to a technical security approach.

For example, an organization may store its product designs on paper or keep its medical records

in paper form —both of which may be critical for meeting the organization’s mission.

Securing the organization’s technical infrastructure will not provide a proper level of protection

for these assets, nor will it protect many other information assets that are in no way dependent on

technology for their existence or protection. Thus, the organization would be lulled into a false

sense of security if it relied on protecting its technical infrastructure alone.

In the end, the "network" that most matters is the one that defines the organization and its related

 boundaries. The importance of the organization’s technical network is established in its role in

enabling the organization’s assets and processes, but it provides little context for which of these

assets and processes matter most to strategic drivers.

It is only in the organizational network that the context for the importance of each asset and process is found, along with the rationale for what needs to be protected and why.

Regulatory biases:
A final consideration for security management is the organization's regulatory

environment. Just as the organization must expose itself to its environment to operate, so must it

 be willing to accept some of the limitations imposed on like organizations that operate in its

competitive space. This brings another level of challenges that affects the organization’s ability

to be effective at security management.

Regulations reflect the need for organizations in a particular industry to look critically at their 

 protection needs and to implement corresponding security strategies and controls. While this has

had a positive effect in elevating the need to focus on security, for some organizations it can also

 be deleterious in that regulations can become an organization’s security strategy by default.

Regulations can draw the organization's focus away from organizational drivers and onto the

compliance requirements of the moment.

Complying with regulations is certainly an important activity in an organization, but it cannot

substitute for a mission-focused, strategic security management process. Regulation is intended

to improve the core industries on which it is focused, but compliance activities can give

organizations a false sense of the overall effectiveness of their security programs.


For example, compliance with HIPAA regulations may improve the security over core assets

that are subject to the regulations, but other assets and processes are left unprotected. A

compliance-driven approach to security can also cause costly and inefficient investments in

 protection mechanisms and controls to protect those assets and processes that are subject to

regulation, when in fact this may not be the best use of limited resources for the organization.

Organization-centric approaches to security management consider the impact of risks and their 

effect on the organization to determine which security activities and practices are best for them.

In effect, this allows the organization to focus on their true security needs.

Security management that is subsumed by a need to comply with regulations can detour an

organization from this strategy by diverting their attention away from what is best for their 

unique organizational context.

Security as a core competency:

Organizations want to focus their energy on their core competencies — those functions and

activities that define the organization’s existence and its value to stakeholders. The upsurge in

outsourcing of horizontal business functions by organizations supports this claim. For many

functions, such as payroll processing or benefits administration, this may be perfectly

acceptable — if an organization cannot realize a strategic and competitive advantage from

excelling at payroll processing, it may not make sense to develop a core competency in this area.

Security, however, is different: organizations may need to develop a core competency in security management based on their strategic drivers.


CHAPTER 4

DATA ANALYSIS &

INTERPRETATION

4. ANALYSIS OF THE SITUATION 


Achieving a security goal in a networked system requires the cooperation of a variety of devices,

each device potentially requiring a different configuration. Many information security problems

may be solved given appropriate models of these devices and their interactions, together with a systematic way to handle the complexity of real situations.

We present an approach, rigorous automated network security management, which front-loads

formal modeling and analysis before problem solving, thereby providing easy-to-run tools with

rigorously justified results.

With this approach, we model the network and a class of practically important security goals.

The models derived suggest algorithms which, given system configuration information,

determine the security goals satisfied by the system.

The modeling provides rigorous justification for the algorithms, which may then be

implemented as ordinary computer programs requiring no formal methods training to operate.

We have applied this approach to several problems.

In this paper we describe two: distributed packet filtering and the use of IP security (IPsec)

gateways. We also describe how to piece together the two separate solutions to these problems,

 jointly enforcing packet filtering as well as IPsec authentication and confidentiality on a single

network.

Supported by the National Security Agency under contract DAAB07-99-C-C201, and the United

States Air Force under contract F19628-99-C-0001. Preliminary versions of parts of this material

appeared in Proceedings, 1997 IEEE Symposium on Security and Privacy; Proceedings,

ESORICS 2000; and Proceedings, VERIFY 2002.

Controlling complexity is a core problem in information security. In a networked system many

devices, such as routers, firewalls, virtual private network gateways, and individual host

operating systems must cooperate to achieve security goals.

These devices may require different configurations, depending on their purposes and network 

locations. To solve many information security problems, one needs models of these devices and

their interactions. We have focused for several years on these problems, using rigorous

automated network security management as our approach.


Rigorous automated security management front-loads the mathematical work needed for 

 problem-solving. Rigorous analysis is needed to solve many information security problems, but

unfortunately specialists in modeling are in short supply.

We focus the modeling work on representing behavior as a function of configurations, and

 predicting the consequences of interactions among differently configured devices. A useful

model must also allow one to express a class of practically important security goals.

The models suggest algorithms that take as input information about system configuration, and

tell us the security goals satisfied in that system. Sometimes we can also derive algorithms to

generate configurations to satisfy given security goals.

The soundness of the algorithms follows from the models. However, the algorithms are

implemented as computer programs requiring no logical expertise to use. Resolving individual

practical problems then requires little time and no formal methods specialists, while offering a higher level of security assurance.

Our purpose in this paper is to illustrate the rigorous security management approach. We

describe a group of problems and the modeling frameworks that lead to their solutions. One

 problem concerns distributed packet filtering, in which packet-filtering routers are located at

various points in a network.

The problem is to constrain the flow of different types of packets through the network. Another 

 problem concerns gateways running the IP security protocols (IPsec); the problem is to ensure

that authentication and confidentiality goals are achieved for specific types of packets traversing

the network.

We have implemented solutions to these problems [7, 8, and 10]. The goal of the present paper is

to provide an integrated description of our methods, and also to unify the two solutions, so that

 packet filtering goals and IPsec goals are jointly enforced on a network.

Steps in Rigorous Automated Security Management. The method requires four steps:

Modeling: 


Construct a simple formal model of the problem domain. For instance, the packet filtering

system model contains a bipartite graph, in which nodes are either routers or network areas.

Edges represent interfaces, and each interface may have packet filters representing the set of 

 packets permitted to traverse that edge in each direction.
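The bipartite model might be encoded as in the following sketch. The class, method, and label names are illustrative assumptions, not the tool the authors built:

```python
# Sketch of the bipartite network model: nodes are areas or filtering
# devices, and each (device, area) edge carries, per direction, the set
# of packet kinds permitted to cross it. Names are illustrative.

from collections import defaultdict

class Network:
    def __init__(self):
        # edges[(device, area)] = {"in": allowed_in, "out": allowed_out}
        self.edges = {}
        self.adj = defaultdict(set)  # undirected adjacency

    def connect(self, device, area, inbound, outbound):
        """Attach an interface of `device` to `area` with its filters."""
        self.edges[(device, area)] = {"in": set(inbound), "out": set(outbound)}
        self.adj[device].add(area)
        self.adj[area].add(device)

net = Network()
# A topology in the style of the example: a filtering device between
# the External area and the corporate Periphery.
net.connect("fw1", "External", inbound={"http", "smtp"}, outbound={"http"})
net.connect("fw1", "Periphery", inbound={"http"}, outbound={"http", "smtp"})
print(sorted(net.adj["fw1"]))  # ['External', 'Periphery']
```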

Expressing Security Goals: 

Selecting a model constrains which security goals are expressible, so that model simplicity must

 be balanced against the ability to represent core security problems. Within each model, identify

one or a few security goal forms that define security-critical aspects of the system.

In our treatment of IPsec, one goal form characterizes assertions about authenticity;

confidentiality is expressed using a different goal form. People managing a particular system will

choose a number of goals of these general forms as expressing the properties they need for their 

system.

Thus, it is crucial that these goal forms express at least some of the most important security

considerations that the system managers need to achieve.

Deriving Algorithms: 

The system model and security goal forms must be chosen so that algorithms can determine if 

goals are enforced in a particular system. Each specific system configuration (abstracting from

characteristics not reflected in the model) is a problem instance, for which the analysis

algorithms must determine whether given goals are achieved.

In some cases, another algorithm may, given some information about a system and some desired

goal statements, fill in the details to determine a way for the system to satisfy the goals. The rigor 

in our method lies in the mathematical character of the model and the opportunity to give

convincing proofs of correctness for these algorithms.

Implementing:
Having defined and verified one or several goal enforcement algorithms in the

 previous step, one writes a program to check goal enforcement. The inputs to this program

consist of goal statements that should be enforced, and system configuration information. For 

instance, in our packet filtering example, the system configuration information consists of 

network topology information, and the router configuration files.


The program then enumerates which goals are met, and gives counterexamples for unmet goals.

The program may also generate new and better configuration files.
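A goal-checking algorithm of the kind described can be sketched as a reachability search. This simplified encoding (directed edges labeled with the packet kinds they permit) is an assumption for illustration, not the authors' implementation:

```python
# Hedged sketch of checking a goal of the form "packets of kind k must
# never reach area dst from area src": search only over edges that
# permit k; the goal holds iff no permitting path connects src to dst.

from collections import deque

def goal_holds(edges, src, dst, kind):
    """edges: {(from_node, to_node): set_of_permitted_kinds}."""
    frontier, seen = deque([src]), {src}
    while frontier:
        node = frontier.popleft()
        if node == dst:
            return False  # a counterexample path exists
        for (a, b), allowed in edges.items():
            if a == node and kind in allowed and b not in seen:
                seen.add(b)
                frontier.append(b)
    return True

edges = {
    ("External", "fw1"): {"http", "telnet"},
    ("fw1", "Engineering"): {"http"},  # telnet filtered at this hop
}
print(goal_holds(edges, "External", "Engineering", "telnet"))  # True
print(goal_holds(edges, "External", "Engineering", "http"))    # False
```

The `False` case is exactly the counterexample the text mentions: the permitting path is evidence that the goal is unmet.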

Packet-Filtering Devices:

Packet filtering devices such as routers, firewall systems, and IPsec gateways are an important

component of network layer access control. Since packets passing from one area in a network to

another often traverse many intermediate points, and may travel via alternate routes, filtering

devices in several locations may need to cooperate.

It is difficult to determine manually what division of labor among devices at different points in a

network ensures policy enforcement, particularly given multiple routes. This is a problem of 

localization. We will describe this problem from the rigorous automated security management vantage point, stressing the four steps.

Modeling:

Our model has two parts, namely a model of networks as undirected bipartite graphs (Section

2.1.1), and a model of packets as having certain distinguishable characteristics.

Modeling Networks:

We regard a network as a bipartite graph. The nodes of the graph are areas, collections of hosts

and networks which are similar in terms of security policy; and devices, which are dual-homed

hosts or packet filtering routers connecting the areas and moving packets between them. There is

an (undirected) edge between a filtering device and an area if the device has at least one interface

on that area.

Thus, areas and devices are the two sorts of node in our network graph, and we use variables

such as a and d to range over them (respectively). In Figure 1, Engineering, External, Allied, etc. are areas,

and the black squares indicate filtering devices.

This diagram represents a corporation that owns the three networks marked Engineering,

Finance, and Periphery. The Internet, indicated as External, is connected to the corporation via a

filtering device at Periphery. However, the engineering staffs have long term collaborative

relations with another organization called Allied, and must exchange different network services


with their collaborators than would be acceptable with other organizations. Hence, there is also a

dedicated network connection (and filtering point) between them.

Such situations where different trust relations attach to different portions of a network are

common. Our problem is to reliably enforce a security policy sensitive to these differences. Trust

relations must be translated to filtering decisions in a way sensitive to the topology of the

network.

Formalizing a real-world network takes some care. We can express access control policies on the

network only if they involve the flow of packets from one area to a different area. We cannot

express requirements on packets traveling within a single area, nor could we enforce them.

Thus, security goals for a particular network must determine the granularity of its model. In

addition, we must ensure that all of the real-world connectivity between distinct areas in our 

networks is represented.

We cannot enforce access controls on the traffic between areas if we do not know what filtering

devices (or dual homed hosts) may move packets from one area to another.

On the other hand, the areas may represent large collections of physical networks that have

many routers within them. Those internal devices are of no interest for our analysis. This point is

highly relevant to the usability of our method. Large networks, administered by a single

authority, typically use relatively few routers as filtering devices.

Thus, substantial practical problems arise with networks not much more complicated than the

one shown in Figure 1. Indeed, often there are different layers of administrative control. For 

example, enterprise-level network administration may be concerned with policy among large

corporate units, the Internet, and strategic business partners.

By contrast, a corporate laboratory may have more specific requirements concerning

communication among its distributed portions, located within different large corporate units.

Their point of view distinguishes smaller components within them, and may lump the remainder 

together as a single area providing transport among these smaller units. Enforcement for their 

 policy may depend on an entirely separate collection of filtering devices, typically also of modest

size.


Modeling Packets:

In our modeling, we need only consider an aspect of packets if a device that filters and routes

 packets may be sensitive to it. Generally speaking, these are the source and destination

addresses, the protocol (such as tcp, icmp, and so forth), and certain protocol-specific fields.

Protocol-specific fields include the source and destination ports in the case of tcp or udp;

message type in the case of icmp and igmp, and for icmp additionally a message code; and an

additional bit in the case of tcp indicating whether the packet belongs to an already established

connection. We ignore many other characteristics such as the payload, the source-routing history,

time-to-live, and checksum.

These are the only packet properties that are available to configure a Cisco router for packet

filtering [5]. Other filtering methods such as ipchains and ipfilter give loosely similar expressiveness for properties of packets [21, 20]. We refer to a possible value of these fields as an abstract packet. We regard an abstract packet as a single item in our model, even though it represents many concrete ip packets. These ip packets are indiscernible, as far as we are concerned, so our theory identifies them as a single abstract packet.
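A minimal sketch of an abstract packet, assuming the field set listed above (addresses, protocol, ports, and the tcp "established" bit); the class and field names are illustrative stand-ins, not taken from the report:

```python
from dataclasses import dataclass
from typing import Optional

# A minimal sketch of an "abstract packet": only the fields a filtering
# device can test. Field names here are illustrative, not from the report.
@dataclass(frozen=True)
class AbstractPacket:
    src: str                         # symbolic source area/host class, e.g. "External"
    dst: str                         # symbolic destination area/host class
    proto: str                       # "tcp", "udp", "icmp", ...
    src_port: Optional[int] = None   # tcp/udp only
    dst_port: Optional[int] = None   # tcp/udp only
    established: bool = False        # tcp "established" bit

# Because the class is frozen (hashable), ordinary Python sets of abstract
# packets model the boolean algebra used throughout: union, intersection,
# and difference of sets of indiscernible ip packets.
p = AbstractPacket("External", "Engineering", "tcp", 1025, 53)
```

Since the class is hashable, each instance can stand for the whole equivalence class of concrete ip packets it represents.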

The boolean algebra of sets of abstract packets allows us to represent filter behavior and to

express what dangers a security policy seeks to prevent. Our model consists of essentially only

these two ingredients, namely a bipartite graph representing the network and this notion of 

abstract packet (together with the boolean algebra of sets of them). The remaining notions we need are defined in terms of these ingredients, notably paths, trajectories, and security postures.

Devices and Filtering Postures:

A filtering device in our model is a node with interfaces on one or more network areas. Thus, we

regard an interface as an edge between that node and the node representing the network area. Packets flow in both directions across this edge.

Most filtering devices can be configured to discard packets passing in either direction across any

interface, and they typically pass different packets depending on the direction of flow. Thus,


from our point of view, an edge between an area a and a device d is characterized by two sets of 

abstract packets:

inb(a, d) defines the packets permitted to traverse the edge inbound into the device from the area,

while outb(a, d) defines the packets permitted to traverse the edge outbound from the device to

the area.

A filtering posture for a particular network (i.e. a bipartite graph) consists of an assignment of 

sets inb(a, d) and outb(a, d) to each pair a, d such that d is a device having an interface onto the

area a.

We have no interest in distinguishing different interfaces that the filtering device may have on

the same area. They cannot control the flow of packets in any additional useful ways.

Paths and Trajectories:

A path through the network is a sequence of immediately connected nodes on the associated

bipartite graph. We ignore issues of routing in our current presentation, so that our conclusions will hold even on the conservative assumption that routing tables may change unpredictably. Thus, a packet may potentially traverse any path through the network. Routing information may be easily incorporated into the model if desired, and the tool we describe has an option to determine whether to consider routing.

A trajectory is a path π taken by a packet p, which we regard as simply the pair (π, p) consisting of the path and the packet (i.e. an abstract packet). The trajectory is a crucial notion. The purpose of filtering devices is to ensure that some trajectories cannot occur.

These are trajectories in which packets exercising vulnerable services are transmitted from

untrusted hosts, and allowed to reach endpoints we would like to protect. Thus, security goals

will be certain sets of trajectories, interpreted as the trajectories acceptable according to that

 policy. A configuration enforces a goal if it filters and discards a packet before it traverses a

trajectory not in the set, interpreted as a trajectory contrary to the policy.


Our definition of trajectory, as just given, effectively assumes that the state of the packet does

not change as the packet traverses the network. A trajectory as defined here does not associate

different packets (or distinguishable states of the traveling packet) with successive locations. (In

Section 3 we consider an elaborated notion of trajectory that does, as is necessary because the

 protocols we consider there depend on transforming the packet as it travels.)

Expressing Security Goals:

Security goals rely upon two sorts of ingredient:

1. Which areas has the packet traversed? For instance, was it once in the External area, and has it

now reached the Engineering area?

2. What does the packet say? The contents of the packet are simply its properties as an abstract

 packet. For instance, if the destination port of the packet is port 53, then given the vulnerabilities

in many dns implementations, we may wish to discard it unless the destination address is a

carefully administered host we make available for external dns queries.

Ingredient 1 concerns the actual path of the packet as it traverses the network, regardless of what

it claims. Ingredient 2 concerns only what the packet claims, not where it has really passed.

These two kinds of information diverge when filtering devices send packets through unexpected

 paths, or packets are spoofed, or packets are intercepted before reaching their nominal

destinations.

A useful notion of security policy must consider both kinds of information. As an example,

suppose that in the network shown in Figure 1 we wish to have a dns server located in the

Engineering area accessible only by company hosts.

Then we may have a policy that no packet with destination port 53 that was ever in the External area should be delivered to the Engineering area. Here, the source address field of the packet is irrelevant. Even if the source address claims to be in the Periphery network, the packet should not be delivered.
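The dns example above can be sketched as a two-area constraint φ(p) for the pair (External, Engineering); the function name and the dictionary representation of packets are hypothetical stand-ins:

```python
# Hedged sketch of the example two-area policy statement: the constraint
# rejects dns traffic (destination port 53) from External to Engineering,
# regardless of what the packet's source field claims.
def phi_external_to_engineering(packet: dict) -> bool:
    """True iff the packet may travel from External to Engineering."""
    return not (packet.get("proto") in ("tcp", "udp")
                and packet.get("dst_port") == 53)

# smtp traffic passes; dns is rejected even with a Periphery source claim.
assert phi_external_to_engineering({"proto": "tcp", "dst_port": 25})
assert not phi_external_to_engineering(
    {"proto": "udp", "dst_port": 53, "src": "Periphery"})
```

Note that the constraint never consults the source field: the policy concerns where the packet has actually been, not where it claims to be from.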


1. In this policy statement, we aim to protect hosts in the Engineering area from attacks that

might be transmitted from the External area; the only exception being that specific mail hosts are

not protected against packets participating in the smtp protocol, i.e. tcp packets with destination

 port 25. It is also possible to interpret this form of statement as offering some confidentiality

protection. In this interpretation, it states that a1 is protected against loss of data to a2, if that data is carried only in packets p such that φ(p). For instance, a corporation may use outbound filtering

to ensure that traffic with a database server cannot be misrouted outside its networks, since much

sensitive business information is carried in these packets.

It would also be possible to consider more complicated policy statements, involving e.g. three

areas. As an example, we might require a packet that came from the External area via the Allied

area and eventually reached the Engineering area to have:

• An external address as its ip source field; 

• An internal address as its ip destination field;

• A source or destination port of 25, indicating that it is an smtp packet. 

Other packets could not pass through the Allied area. However, realistic security goals appear to

 be expressible using two-area policy statements. In the case of our example, we could replace

this three area policy statement with a (slightly stronger) pair of two-area policy statements.

The first would require that if a packet p that was in the External area reaches the Allied area,

and if p has a destination address in the internal areas, then p’s source address should be in the

External area and p’s service should be smtp.

The second would require that if a packet p that was in the Allied area reaches the Engineering

area, then p’s destination address should be in one of the internal areas. If this pair of two-area

statements are satisfied, then the three-area requirement will also be satisfied. The extra strength

of these two-area statements was probably desired anyway: namely, that the corporation’s

internal networks should not be used as a pass-through from the Allied organization.

Therefore, for the remainder of this discussion, a policy statement will be a two-area statement, asserting that any packet p that was in one area and later arrives in a different area meets some constraint φ(p). A policy will mean a set of policy statements, one for each pair of distinct areas a1, a2. The


constraint may be vacuously true, allowing everything to pass between them; or else at the other 

extreme, unsatisfiable, requiring that nothing pass.

The security properties we would like to achieve are essentially the same as in the preceding

sections, namely filtering goals, authentication goals, and confidentiality goals. Since

authentication and confidentiality goals were already formalized within the same modeling

framework in Section 3.3, we leave them unchanged here.

The same methods still suffice to ensure that they are met in a particular network configuration.

In defining filtering goals, we have additional degrees of freedom. In Section 2.1.4, trajectories

associate a single packet with all of the locations traversed.

By contrast, in the trajectories formalized as histories of the state machines of Section 3.2,

Definitions 1 and 2, the packet may have different sequences of headers while traversing

different locations. If we retain the idea from Section 2.2.1 that a filtering policy statement will concern two locations and the packets that can travel from one to the other, then we have a choice of how to characterize those packets. We may require the state of the packet (its headers) to be the same at the early and the later position; statements of this kind we call "symmetric two-location filtering statements."

This decision is motivated by two considerations:

• Known attacks do not involve IPsec. That is, a packet with IPsec headers is not known to

exercise serious vulnerabilities, for instance in the tcp/ip stack and its processing of IPsec

messages. Such vulnerabilities, if discovered, must be corrected very quickly, since it is

intolerable to have vulnerabilities within the infrastructure intended to provide security itself.

Known attacks may be transmitted via packets that, for part of their trajectory, are IPsec-

 protected; however, they do their harm only when restored to their original form.

Thus, we consider that symmetric two-location filtering statements express the practically important security objectives.

• An adaptation of the algorithms of Section 2.3 provides good ways of reasoning about symmetric two-location filtering statements. One other decision is needed: whether to interpret the first location of a two-area symmetric statement as the location at which the packet originates, or simply as some location traversed before the other. We choose to


interpret it as the point of origin of the message, a notion that makes sense in a model in which

data origin authentication is one of the security services provided.

• Buffering of packets is carried out in the edge routers rather than in the core routers.

• The packets are sent into the network based on the capacity of the network in terms of packets.

• Congestion was controlled at the ingress router.

• Congestion was controlled by introducing dynamic routing, and hop-by-hop routing was used.

• Congestion is reduced due to the absence of undelivered packets.

• Data transmission is secured.

• Fair allocation of bandwidth is ensured.

We now want to develop algorithms which, given a network configuration M and a filtering statement (ℓ, ℓ′, φ), will definitely tell us when M does not satisfy (ℓ, ℓ′, φ), and will rarely report failure unless there exists a counterexample to the filtering statement. To do so, we reduce the problem from the enriched graph G of M to an ordinary undirected, bipartite graph G′ to which the NPE algorithms apply.

In the process, we will also use the confidentiality and authentication properties known to hold of 

M to give additional information reducing false positives. That is, the additional information will

help reduce the cases in which we report that there may be violations, when in fact there is no

counterexample.

For the remainder of this section, fix a network configuration M. We "reabsorb" the interfaces of a router into the router, and replace each pair of contrary directed edges by a single undirected edge, thus turning a graph having the form shown in Figure 5 into the corresponding reabsorbed form.

If an enriched graph G is reabsorbed in this way, we refer to the result as Gr.

IPsec Tunnels:

More importantly, we create new fictional interfaces between two potentially distant routers to

model the IPsec tunnels that may lie between them. In this way, we recognize IPsec as a service

that transports packets via a safe medium. We regard these fictional interfaces as filtering points

for each of the two routers. One permits outbound (from the router) all those packets that the


endpoint pushes IPsec headers onto, and the other permits inbound (into that router) all packets

that the router accepts, having popped IPsec headers off them.

More precisely, let o_d[a] be the outgoing interface from device d onto an area a. We say that o_d[a] is a tunnel entry point for d′ if there are any packet states ⟨h⟩ and protocol data p such that o_d[a] pushes [d, d′, p] onto ⟨h⟩, eventually readying [d, d′, p] · ⟨h⟩, or the result of further pushes, to move from the interface.

We say that an incoming interface i_d′[a] is a tunnel exit point from d if there are any packets [d, d′, p] · ⟨h⟩ off which i_d′[a] pops [d, d′, p], eventually readying ⟨h⟩, or the result of further pops, to move from the interface. We say that there is a tunnel between o_d[a] and i_d′[a] if the former is a tunnel entry point for the latter, and the latter is a tunnel exit point for the former.

We say that the contents of a tunnel are the set of packet states ⟨h⟩ such that, if ⟨h⟩ reaches o_d[a], then it readies some [d, d′, p] · ⟨h⟩, and moreover there is some [d, d′, p] · ⟨h⟩ which, if it reaches i_d′[a], will cause ⟨h⟩ to be readied by i_d′[a].

Given a graph Gr, constructed by reabsorbing an enriched network graph G, we add an interface between d and d′ whenever there is a tunnel between any pair of their interfaces. The device d is assumed to pass outbound over this interface the union of the contents of all tunnels to interfaces of d′, and d′ is assumed to pass inbound the same set of packet states.

In order to make the result formally a bipartite graph, we must also add a fictitious area to which only these two new interfaces are connected. We will refer to the resulting bipartite graph as Grt.

Exploiting Authentication and Confidentiality Goals:

In our original NPE work, we never had a guarantee that the source field of a packet could be trusted. Since IPsec gives us such guarantees, we propose to take advantage of them.

Likewise, IPsec also gives us a confidentiality assertion that certain packets, having originated in

one location, will always be in encrypted form when traversing another location.

We extract, therefore, two families of sets of packet states. For each pair of locations ℓ, ℓ′, we let A_{ℓ,ℓ′} be the set of ⟨[s, d, p]⟩ such that address s belongs to the area or device ℓ, and M provides authentication services for ⟨[s, d, p]⟩, as determined for instance by CAIC.


Likewise, let C_{ℓ,ℓ′} be the set of packets ⟨[s, d, p]⟩ such that address s belongs to the area or device ℓ and M provides confidentiality services for ⟨[s, d, p]⟩, as determined likewise by CAIC. We may now use the same algorithms provided by NPE, though when calculating the packets that may flow from ℓ0 to ℓ1, we may omit packets in A_{ℓ,ℓ1} for ℓ ≠ ℓ0.

These packets cannot reach ℓ1 unless they originate in ℓ, not ℓ0. Likewise, we may omit packets in C_{ℓ0,ℓ1}, as these packets will never be in their original state when they reach ℓ1, but will necessarily have an IPsec confidentiality header.

In this way, we may use the same methods as in NPE, but sharpened to reflect both the transport

opportunities created by IPsec and also the authentication and confidentiality assertions that it

offers. We have not yet incorporated these methods into a tool such as NPE or CAIC.

Automated Network Security Management 

Deriving Algorithms:

The ideas introduced in previous sections suggest two algorithms that exploit the boolean

operations on constraints φ(p) in combination with the graph structure of the underlying network

specification. These algorithms may be used to check a putative filtering posture, or to generate a

filtering posture that will enforce a given policy.

Both of these algorithms depend on the notion of the feasibility set of a path. Given a filtering posture ⟨inb, outb⟩, the feasibility set of a path π is the set of all abstract packets that survive all of the filters traversed along the path.

That is, if π traverses device d, entering it from area a1, then an abstract packet p is in the feasibility set of π only if p ∈ inb(a1, d). If π enters area a2 from d, then p is in the feasibility set of π only if p ∈ outb(a2, d).

We can compute the feasibility set of a path iteratively by starting with the set of all packets; as we traverse the inbound step from a1 to d, we take an intersection with inb(a1, d); as we traverse the outbound step from d to a2, we take an intersection with outb(a2, d). Binary Decision Diagrams allow us to carry out such computations reasonably efficiently.
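The iterative computation just described can be sketched with plain Python sets standing in for BDDs; the data shapes and names here are illustrative assumptions, not the NPT representation:

```python
# Iterative feasibility-set computation, with plain sets in place of BDDs.
# A path alternates areas and devices: [a0, d0, a1, d1, ...].
# inb and outb are dicts keyed by (area, device) pairs; all names are
# illustrative, not taken from the NPT implementation.
def feasibility_set(path, all_packets, inb, outb):
    feasible = set(all_packets)           # start with every abstract packet
    for i in range(1, len(path)):
        prev, cur = path[i - 1], path[i]
        if i % 2 == 1:                    # area -> device: inbound filter
            feasible &= inb[(prev, cur)]
        else:                             # device -> area: outbound filter
            feasible &= outb[(cur, prev)]
    return feasible

# Tiny example: abstract packets are plain labels.
inb = {("External", "fw"): {"smtp", "dns"}}
outb = {("Engineering", "fw"): {"smtp"}}
print(feasibility_set(["External", "fw", "Engineering"],
                      {"smtp", "dns", "telnet"}, inb, outb))   # → {'smtp'}
```

Replacing the sets with BDDs changes only the representation of the intersections, not the traversal.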


Checking a Posture:

To check that a posture enforces a policy P, we examine each path between areas to ensure that

the feasibility set for that path is included in the policy statement for the areas it connects.

If π is a path starting at area a0 and terminating at area ai, we must check that the feasibility set for π is included in P(a0, ai), i.e., the set of abstract packets that can actually traverse the path is a subset of the set of abstract packets permitted to travel from a0 to ai. Algorithmically, it is enough to check this property for non-cyclic paths, as the feasibility set for a cyclic path π1 must be a subset of the feasibility set for any non-cyclic sub-path π0.

The set of non-cyclic paths is fairly small for reasonable examples; in the case of the corporate example, 40 non-cyclic paths begin and end at areas (rather than at filtering devices).
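The path-by-path check can be sketched as follows, again with plain sets in place of BDDs and with illustrative data shapes:

```python
# Sketch of posture checking: every non-cyclic area-to-area path must have a
# feasibility set included in the policy entry for its endpoints. The dict
# shapes are illustrative, not the NPT representation.
def check_posture(paths, inb, outb, policy, all_packets):
    violations = []
    for path in paths:                    # paths alternate area, device, area, ...
        feasible = set(all_packets)
        for i in range(1, len(path)):
            prev, cur = path[i - 1], path[i]
            key = (prev, cur) if i % 2 == 1 else (cur, prev)
            feasible &= (inb if i % 2 == 1 else outb)[key]
        allowed = policy[(path[0], path[-1])]
        if not feasible <= allowed:       # subset test = policy compliance
            violations.append((tuple(path), feasible - allowed))
    return violations

inb = {("External", "fw"): {"smtp", "dns"}}
outb = {("Engineering", "fw"): {"smtp", "dns"}}
policy = {("External", "Engineering"): {"smtp"}}   # only smtp may cross
print(check_posture([["External", "fw", "Engineering"]],
                    inb, outb, policy, {"smtp", "dns"}))
```

Each reported violation pairs a path with the set of packets that can traverse it despite being forbidden, which is exactly the diagnostic NPE gives the administrator.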

Generating a Posture:

Creating a posture is a more open-ended problem. There are many essentially different solutions: different ways to assign filtering behavior, possibly to different devices or to different interfaces of a device, such that the net result enforces the global security policy.

Outbound Filtering:

Various posture generation algorithms can be based on the idea of "correcting" a preexisting filtering posture F = ⟨inb, outb⟩. We say that F′ tightens F if inb′(a, d) ⊆ inb(a, d) and outb′(a, d) ⊆ outb(a, d), for all a and d. Suppose that π is a path from area a0 to ai that enters ai from device d, and suppose that the feasibility set for π is Φ. If Φ is not a subset of the policy constraint P(a0, ai), then we can update F to a new filtering posture F′ = ⟨inb′, outb′⟩ where F′ differs from F only in that outb′(ai, d) = outb(ai, d) \ (Φ \ P(a0, ai)), where Φ \ P(a0, ai) is the set difference of Φ and P(a0, ai). F′ tightens F to prevent any policy violations that would otherwise occur on the last step of π. This change cannot cause any new policy violations, because it cannot increase any feasibility set.

It can only reduce the feasibility sets of other paths that also traverse this edge. Hence, if we start from an arbitrary filtering posture F0 and iterate this correction process for every cycle-free path π, we will obtain a filtering posture that satisfies the policy P.
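The last-hop correction can be sketched like so; the function name and dictionary shapes are assumptions for illustration:

```python
# "Tightening" correction, sketched: when a path's feasibility set Phi
# exceeds the policy constraint P(a0, ai), remove the excess from the
# outbound filter on the path's last edge (area_i, device).
def tighten_last_hop(outb, area_i, device, feasible, allowed):
    excess = feasible - allowed                  # Phi \ P(a0, ai)
    outb[(area_i, device)] = outb[(area_i, device)] - excess
    return outb

outb = {("Engineering", "fw"): {"smtp", "dns", "telnet"}}
tighten_last_hop(outb, "Engineering", "fw",
                 feasible={"smtp", "dns"},       # what actually survives the path
                 allowed={"smtp"})               # what the policy permits
print(outb[("Engineering", "fw")])               # dns removed, telnet untouched
```

Since the operation only shrinks outb, it can never enlarge any feasibility set, which is why iterating it over all cycle-free paths converges to a compliant posture.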


We organize this process as a depth-first traversal of the graph starting from each area in turn. It performs the tightening by side-effecting the data structures that hold the filters for the individual filtering device interfaces. However, this recipe for generating a posture does not say how to use the inbound filters effectively.

Inbound Filtering:

We use the inbound filters for protection against spoofing, because they know which interface the packet has arrived through, which the outbound filter does not. Many human-constructed firewalls use inbound filters for this purpose.

As a heuristic, we assume that packets from one area should not take a detour through another 

area to reach a directly connected filtering device. Our expectation is that there will normally be

good connectivity within any one area, and that a packet originating anywhere in an area will

easily be able to reach a device if the device has an interface anywhere in that area.

Although this expectation may not always be met — for instance when an area, like External, consists of most of the Internet — a security policy may choose to require that packets arrive as expected, and act defensively otherwise.

We may easily formalize this heuristic. Suppose a packet p reaches a device d through its interface to area a, but the source field of p asserts that it originates in area a′ where a′ ≠ a. If d also has an interface on a′, then we want to discard p. For, if p had really originated where it claims to have originated, then p should have reached d through its interface on a′. We will refer to the inbound filters that implement this idea as inb0.
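This heuristic is easy to sketch in code; the function and map names below are hypothetical:

```python
# Sketch of the anti-spoofing inbound filter inb0: device d, receiving a
# packet on its interface to area a, discards the packet when the claimed
# source area a' differs from a yet d also has an interface on a'.
# interfaces_of maps a device to the set of areas it touches (illustrative).
def inb0_accepts(device, arrival_area, claimed_source_area, interfaces_of):
    if claimed_source_area == arrival_area:
        return True                       # consistent with the arrival interface
    # Spoofing is suspected only if d could have received it directly from a'.
    return claimed_source_area not in interfaces_of[device]

interfaces_of = {"fw": {"External", "Periphery", "Engineering"}}
assert inb0_accepts("fw", "Periphery", "Periphery", interfaces_of)
# A Periphery source arriving on the External interface is discarded:
assert not inb0_accepts("fw", "External", "Periphery", interfaces_of)
```

Packets claiming a distant area are still accepted, since the device has no direct interface through which they "should" have arrived; only the policy and the structure of the network are consulted.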

We apply our correction technique starting with inb0 as inbound filters. In constructing inb0 we

have used only the structure of the network specification, not the policy or any pre-existing

filtering posture.

These ingredients may be consulted to produce somewhat more finely tuned filtering postures.

Network Policy Enforcement:

In this section, we first describe earlier implementation efforts, and then summarize the functionality of a tool kit currently undergoing technical transition to operational use.

NPT and the Atomizer:


This method for checking a filtering posture against a policy was implemented in 1996 in the Network Policy Tool (NPT). The following year it was reimplemented in Objective Caml. NPT also implemented posture generation, recommending filtering behavior for each of the routers on which a network's security depends. NPT did a symbolic analysis, working not with sets of concrete IP addresses, for instance, but rather with symbolic names representing non-overlapping sets of hosts. Similarly, a service was a symbolic name representing a set of ports within a particular protocol.

An abstract packet was then essentially a triple, consisting of a symbolic name representing the

source field, a symbolic name representing the destination field, and an oriented service.

An oriented service consisted of a service together with a flag saying whether the well-known

 port was the destination port or the source port. Thus, it indicated whether the packet was

traveling from client to server or vice-versa. Special data structures offered representations for 

sets of abstract packets.

 NPT was highly efficient, and executed the algorithms we described in Sections 2.3.1 – 2.3.2 in

seconds, when run on examples of quite realistic size. These included a dozen and a half areas

and a dozen filtering devices.

Unfortunately, the abstract representation made it hard for a system administrator to construct

input to NPT that would accurately represent the real world network and its policy. Likewise,

output from NPT was hard to translate back into filtering router access lists for devices such as

Cisco routers. Firmato and Fang were developed soon after NPT, and provide similar 

functionality with more emphasis on reading and reconstructing actual configurations, and more

emphasis on management convenience, but with less attention to modeling and rigor.

The first of our problems — constructing NPT models of real systems — was solved in the Atomizer, a tool that would read Cisco access lists and generate NPT specifications, by discovering which sets of hosts and ports were always treated the same way, and could therefore be fused into a single symbolic name. Sets were represented via Binary Decision Diagrams, and the algorithm to fuse them ("atomize" them) exploited the bdd representation essentially.


Interpreting Access Lists:

From our point of view, a configuration file, such as one for a Cisco router, contains interface

declarations and access lists. An interface declaration may specify a particular access list to

apply to packets arriving inbound over the interface or being transmitted outbound over the

interface.

An access list is a list of lines. Each line specifies that certain matching packets should be accepted ("permitted") or discarded ("denied"). When a packet traverses the interface in the appropriate direction, the router examines each line in turn. If the first line that matches is a "deny" line, then the packet is discarded. If the first line that matches is a "permit" line, then the packet is permitted to pass. If no line matches, then the default action (with Cisco routers) is to discard the packet.

For instance, the lines in Figure 2 permit two hosts (at ip addresses 129.83.10.1 and 11.1) to talk to the network 129.83.114.*. They also permit the other hosts on the networks 129.83.10.* and 129.83.11.* to talk to the network 129.83.115.*. The asterisks are expressed using a netmask 0.0.0.255, meaning that the last octet is a wildcard. For simplicity, in this particular example there is no filtering on tcp or udp ports, which can also be mentioned in access list lines.

Each line of an access list defines a set of sources φ_s, destinations φ_d, and service characteristics φ_v, and stipulates whether matching packets should be discarded or passed. A datagram δ matches a line if δ.src ∈ φ_s ∧ δ.dst ∈ φ_d ∧ δ.svc ∈ φ_v.

At any stage in processing, a packet that has not yet been accepted or rejected is tested against the first remaining line of the list. If the line is a "permit" line, the packet has two chances to be permitted: it may match the specification for the first line, or it may be permitted somewhere later in the list.

If the line is a "deny" line, the packet has to meet two tests to be permitted: it must not match the specification for the first line, and it must be permitted somewhere later in the list. Since the default is to deny packets, the empty list corresponds to the null set of permissible packets. Thus, we have a recursive function, written A here, of the access list:
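Since the displayed recursive definition is not reproduced here, the following is a hedged reconstruction of it from the prose description; representing each line's matching condition as a plain set of abstract packets is an illustrative simplification:

```python
# Sketch of the recursive access-list meaning function: each line carries
# an action and a match set; the empty list denies everything (the Cisco
# default). A match set stands in for the line's (source, destination,
# service) predicate over abstract packets.
def acl_packets(lines):
    """Return the set of packets the whole access list permits."""
    if not lines:
        return set()                          # empty list: deny all
    action, match = lines[0]
    rest = lines[1:]
    if action == "permit":
        return match | acl_packets(rest)      # match now, or be permitted later
    else:  # "deny"
        return acl_packets(rest) - match      # must dodge this line AND pass later

acl = [("deny", {"telnet"}), ("permit", {"smtp", "telnet"})]
print(acl_packets(acl))                       # telnet denied first: {'smtp'}
```

Note how line order matters: the later "permit" cannot resurrect telnet, because the earlier "deny" subtracts it from whatever the rest of the list allows.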


The function A allows us to transform a parser for the individual configuration file lines (emitting sets describing the matching conditions) into a parser that emits a set describing the meaning of the whole access list.

Filtering devices other than Cisco routers use different languages to express which packets are

 permitted to traverse each interface, but their semantic content is similar.

The NPE Tools:

Recently, NPT has been updated and packaged with a suite of complementary tools to form the

 Network Policy Enforcement (NPE) software. NPE supports a cycle of policy discovery,

analysis, and enforcement.

Its primary input is a file listing the routers to be queried, with some annotations. The

annotations include an IP address and a password for the router, which is required to connect to it

and retrieve the full configuration.

The router probing component uses telnet or ssh to connect with each of the routers, retrieving its

currently effective configuration. The probe tool records:

1. the IP address and network mask for each interface,

2. the access list for filtering across each interface in each direction, and

3. the routing table.

The information in item (1) determines a network map. Items (2) and optionally (3) determine

what packets can traverse each interface in each direction. When routing information from item

(3) is used, we take account of the fact that a packet cannot traverse an interface outbound unless

the routing table routes it that way.

Since this information is dynamic, however, and changes as a consequence of routing protocols, some sites do not want to rely on it to ensure their security; for this reason, our tools may be configured not to consider it.
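The three items recorded by the probe might be represented as follows (a hypothetical record layout; the field names are ours, not the tool's):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Interface:
    ip: str                                        # item (1): address and mask
    netmask: str                                   #   together give the network map
    acl_in: list = field(default_factory=list)     # item (2): per-direction
    acl_out: list = field(default_factory=list)    #   access lists

@dataclass
class RouterProbe:
    name: str
    interfaces: dict                               # interface name -> Interface
    routing_table: Optional[list] = None           # item (3): dynamic, optional
```

Leaving `routing_table` as `None` models a site that chooses not to rely on dynamic routing information.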

A second input to NPE is an optional file describing a desired policy for the network under analysis. This file defines sets of addresses and states policies. A policy concerns two sets of addresses, s1 and s2, and a set Φ of packets.

The semantics is that if any two areas a1 and a2 are such that any address in s1 resides in a1 and any address in s2 resides in a2, and any packet p can pass from a1 to a2, then p ∈ Φ. NPE constructs BDDs (binary decision diagrams) representing the sets of packets that may traverse each interface in each direction.

It calculates from these BDDs an "effective policy" containing the most permissive policy enforced by the given collection of configuration files. This is done using a relatively straightforward depth-first search through the network graph derived from item (1). It also identifies violations of the desired policy.
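The depth-first computation of the effective policy can be sketched as follows (explicit packet sets stand in for the BDDs, and the graph encoding is our own):

```python
def effective_policy(graph, src, dst, universe):
    """Most permissive set of packets that can travel from src to dst.

    graph maps a node to a list of (neighbor, allowed) edges, where
    allowed is the set of packets permitted across that interface
    (a stand-in for the BDD the real tool would use).
    """
    best = set()

    def dfs(node, carried, visited):
        nonlocal best
        if node == dst:
            best |= carried                  # packets surviving this path
            return
        for nxt, allowed in graph.get(node, []):
            surviving = carried & allowed    # filtered at each interface
            if nxt not in visited and surviving:
                dfs(nxt, surviving, visited | {nxt})

    dfs(src, set(universe), {src})
    return best
```

The effective policy between two nodes is the union, over all simple paths, of the packets surviving every filter on that path; any packet in this set but forbidden by the desired policy is a violation.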

For each violation, the system administrator is given a path through the network and a set of 

 packets that are capable of traversing this path, all of which are incompatible with the desired

policy. Given a set of violations, NPE can also recommend a number of routers which, if reconfigured, suffice to eliminate all of these violations.

It determines the set of packets that should be permitted to cross each interface of these routers.

The choice of routers can be made according to several different strategies. For instance, packets

that cannot be delivered may be stopped as late as possible, or alternatively as early as possible.

The latter strategy avoids wasting network bandwidth, but may also prohibit packets

unnecessarily, on the grounds that they may later be misrouted. Another strategy is to choose a

cut set of routers that lie on the paths of all of the violations reported, selecting the set to have

minimal cardinality. This strategy is based on the idea that frequently the cost of reconfiguring

and retesting a router dwarfs the cost of some lost bandwidth, or the fine points of whether a few

services are deliverable.

As of the current writing, the descriptions of what packets to allow over an interface are given in

a vendor-independent language, rather than in the configuration language for specific routers. An

alternative, vendor-specific back-end is under development.

 NPE has been tested with large router files containing a total of 1,300 access list lines in a single

run. The resulting process requires 175 MB and runs for two minutes on a 550 MHz Pentium III

Linux machine, hardly an unreasonable burden. However, it has not been tested with large

numbers of routers or highly interconnected networks.

It is relatively difficult to find installations that depend on more than a few routers for filtering.

Moreover, most networks are interconnected in a very sparse, highly structured way, as is

required in order to manage them in a reasonable way.

Thus, despite the fact that our algorithms would become infeasible in highly interconnected networks relying on large numbers of filtering routers, they are quite usable in practice.

The IP Security Protocols (IPsec):

The IP security protocols, collectively termed IPsec, are an important set of security protocols that provide confidentiality, integrity, and authentication for data communications in an IP network.

A major advantage of IPsec is that the security protection occurs at the IP layer. This renders

application modifications unnecessary, and one security infrastructure is capable of protecting all

traffic. In addition, because of the way IPsec is usually deployed, it requires no changes to

individual computers;

IPsec-enabled routers are generally put in place at strategic points in the network. This flexibility has led to great interest from the commercial market; many IPsec products are now available.

However, to provide such flexibility, the IPsec protocol set is fairly complex, and the chances that a product will be misconfigured, or that several devices will be configured in inconsistent ways, are high. Even with good products, the way they are used can compromise the security they are capable of providing. Many organizations set up their IPsec infrastructure too quickly and get it wrong, an anxiety also expressed by Ferguson and Schneier, who detail several ways in which misconfigurations can compromise security.

The IPsec protocols specify headers and processing that can be used to ensure that packet payloads are encrypted during some portion of their trajectories.

They also allow packets to be accompanied by a message authentication code (MAC) for part of the trajectory, enabling a recipient to determine the point in the network at which this MAC was applied, and making any subsequent alterations detectable. We abstract from the actual state of the packet payload or MAC, and simply regard associated IPsec headers as representing the security-relevant state of the packet.

IPsec operations may be applied repeatedly at different points in the network. All decisions are made locally about when to add encryption or MACs. Likewise, payload decryption and verifying

MACs when removing them are local operations. This presents us with another problem of localization: to determine what meaningful global security properties are achieved by some set of local IPsec operations.

An example of the difficulty of localization can be seen in Figure 3. Assume that companies A

and B each have their own connection to the Internet, with IPsec gateways at the perimeters.

Further assume that the two engineering divisions, EngA and EngB, have a direct connection to a

collaborative testing network, PrivNet. There are likely several different granularities of security

 policy being implemented: a company-wide policy dictating what traffic should leave the

company in an IPsec tunnel; a financial policy about what traffic should leave SG1 unencrypted;

an engineering policy about what sorts of traffic should be allowed in the PrivNet area and with

what protection; and so on.

It is clear that these localized policies can affect one another, and we desire a way to determine

in what ways they actually do. In prior work, we formalized the types of security goals that IPsec is capable of achieving.

We then provided criteria that entail that a particular network achieves its IPsec security goals.

We present this work here as an example of rigorous automated security management. Before

 presenting our network model and defining the security properties of interest to us here, we

 present a brief introduction to the IPsec protocols.

We develop algorithms for checking whether security goals are met, and we prove that the algorithms are correct. The problem is more demanding than that of Section 2.3, so we have given the correctness proofs in much more detail. In Section 3.5, we describe the configuration-checking software we developed to automate this procedure.

IP Security:

IPsec is a set of Internet standards-track protocols designed to provide high-quality, IP-layer security services for IPv4 and IPv6. These services include connectionless integrity and data origin authentication (hereafter referred to jointly as authentication), rejection of replayed packets, confidentiality, and limited traffic-flow confidentiality, and can be used by any higher-level protocol.

IPsec processing can be done either within a host or else at a security gateway. In an entirely

host-based environment, each individual machine would handle its own IPsec processing,

applying cryptographic transforms (and the IPsec headers describing them) before emitting

 packets onto the network.

For incoming packets, IPsec processing requires undoing cryptographic operations, checking the

results, and deleting the corresponding IPsec headers on packets received off the network. IPsec

 processing can also dictate that certain packets without cryptographic processing be passed or 

dropped. In a host-based implementation, this typically requires changes to the IP stack 

executing in each host’s kernel. 

Alternately, one can deploy a smaller number of IPsec-enabled gateways, each of which performs IPsec processing on behalf of the machines "behind" it. This end-to-end, network-level

 protection makes IPsec a good protocol for Virtual Private Networks (VPNs), and many current

VPNs conform to the IPsec standards. Keys may be manually placed, or alternatively key

exchange may be handled by the Internet Key Exchange, a protocol designed for use with IPsec.

In this case, there are supplementary needs, including a certain amount of public key

infrastructure. In our discussion in this paper, we will not discuss key management further, but

will assume that it is handled by some secure means.

IPsec processing for outbound packets consists of the following elements. First, a Security

Association (SA) is selected based on properties of the existing IP and transport layer headers of 

the packet. The SA specifies the type of protection, including what cryptographic operations are

to be performed, together with parameters including the key.

Encryption and cryptographic hashes are used to produce a new payload or a test for integrity (respectively) to achieve confidentiality or authenticated integrity. SAs can be combined into bundles for composite protection. Second, the processing specified in the SA (or sequence of SAs making up the bundle) is applied. Third, IPsec header information is added to the resulting ciphertext or hash.

The IPsec standard defines two security protocols for traffic protection: the Authentication Header, or AH, and the Encapsulating Security Payload, or ESP. AH provides, as one would expect, authentication services, and some protection against replay attacks. ESP provides confidentiality

and limited traffic flow confidentiality; it may be configured to provide authentication with data

integrity as well.

There are some fine points: AH ensures the integrity of certain header fields that ESP does not

cover, even when ESP is used with authentication. Also ESP may be used to provide

authentication without encryption. We will assume that when encryption is used, authentication

services are also used, because this is advisable, and because it is hard to see the real meaning of 

 providing confidentiality for data that may change without notice.

The security protocols can be used in one of two modes: tunnel and transport. In transport mode,

the IPsec header fields are combined with the original IP headers of the packet. This mode can

therefore be used only if the entities performing IPsec processing at both endpoints of the

communication are the same as the entities communicating. If either end is a security gateway,

tunnel mode must be used instead. In tunnel mode, the entire IP packet is protected; a new IP

header is created with a new source field and a new destination field.

These fields give the addresses of the entry point and the exit point of the tunnel. We focus on

IPsec in networks where at least some of the IPsec processing occurs at security gateways. The

reasons for this are as follows:

• Most networks relying on IPsec do involve security gateways.

• Firewalls typically already exist at strategic locations for IP security gateways, and in fact

many products provide both firewall functionality and IPsec processing.

• Gateways have the advantage that an organization can define an organization wide security

 policy; the lion’s share of the enforcement of this policy may be carried out using a limited

amount of equipment directly under the management of the organization. Sub-organizations (or 

individual users on particular hosts) may be completely unaware of this policy, assuming that

they do not use IPsec on their own. If they desire to use IPsec, then there are constraints that they must follow to ensure that their processing is compatible with the organization-wide policy; these constraints are described below.

Thus, while our model allows for individual hosts to be IPsec-enabled, our primary interest lies

in the case where this is not the exclusive form of IPsec. In this situation, there are things that

can certainly go wrong.

For instance, suppose that we want packets from a particular peer s to be authenticated when

they arrive at a destination d, and in fact there is a pair of gateways offering IPsec processing

 between them. There are still ways that packets not originating at s can reach d. For instance,

suppose that a spoofer can route packets to d without traversing the gateways.

Or suppose the spoofer can route a packet to s’s gateway that arrives on the same interface that

traffic from s traverses. To achieve this, the spoofer may even be able to send IPsec-protected

traffic to a security gateway near s, where the protected packets claim to be from s but are not.

When s wants to ensure confidentiality for packets to d, there are dual concerns. Possibly the

 packet may be misrouted onto a public network without traversing a gateway. Or possibly a

gateway will provide ESP protection, but transmit it to a gateway distant from d, so its contents

will be disclosed before it reaches d. We seek a systematic way to express these sorts of 

 problems, and to ensure that they do not occur.

Modeling:

We view systems as composed of areas and devices capable of IPsec operations or packet

filtering. A device has interfaces on one or more areas. Any machine (such as a switch or host)

that performs no IPsec operations or filtering we may simply ignore.

In the example shown in Figure 3, areas appear as ovals and devices appear as black squares. An

edge represents the interfaces between a device and the area to which it is connected. We will

never connect two areas directly via an edge; this would not give a security enforcement point to

control the flow of packets between them. Instead, we merge any areas that are connected by a device that provides no security enforcement, representing them by the same oval.

While this simple, bipartite graph representation is useful heuristically, it is inconvenient for 

rigorous modeling of IPsec processing. Several steps of processing may need to occur while the

packet is associated with a particular interface, and they may depend as well on the direction in which the packet is traversing that interface.

Therefore we prefer a system model in which there are two nodes corresponding to each

interface. They represent the conceptual location of a packet when IPsec processing is occurring,

either as it traverses the interface inbound into the device or as it traverses the interface outbound

from the device. We call these conceptual locations directed interfaces.

To incorporate this notion, we will now introduce an enriched system model consisting of a

directed graph containing three kinds of nodes. These represent areas, devices, and directed

interfaces. To construct a model from a system representation in the style of Figure 3, for each

edge between a device d and an area a we insert two directed interface nodes, which we will call

id[a] and od[a].

A header is a member of the set H = V × V × P, consisting of a source location, a destination

location, and a protocol data value. Packet states are members of H_, that is, possibly empty

sequences hh1, . . . , hni. We use · as prefixing operator: h · hh1, . . . , hni = hh, h1, . . . , hni.
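Concretely (an illustrative Python encoding of headers and the prefixing operator; the tuple layout and names are ours):

```python
# Headers h ∈ H = V × V × P modeled as (source, destination, protocol) tuples;
# a packet state is a tuple of headers, outermost first.

def prefix(h, state):
    """h · ⟨h1, ..., hn⟩ = ⟨h, h1, ..., hn⟩ — push a header onto the packet."""
    return (h,) + state

inner = prefix(("A", "B", "tcp"), ())             # original packet, one header
tunneled = prefix(("SG1", "SG2", "esp"), inner)   # nested inside an IPsec tunnel
```

The outermost header of `tunneled` is the tunnel header between the two gateways; removing it recovers the original packet state.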

Let K be a set of "processing states," with a distinguished element ready ∈ K. Intuitively, when an interface has taken all of the processing steps in the Security Association (SA, see Section 3.1) for a packet p, it enters the processing state ready, indicating that the packet is now ready to move across the arc from that interface.

If this is an outbound interface, it means that the packet may now go out onto the attached area;

if it is an inbound interface it means that the packet may now enter the device, whether to be

routed to some outbound interface or for local delivery. Other members of K are used to keep

track of multistep IPsec processing, when several header layers must be added or removed before

 processing is complete at a particular interface.

These clusters of behavior represent IPsec Security Association bundles. We regard the travels of 

a packet through a system as the evolution of a state machine. The packet may not yet have

started to travel; this is the start state. The packet may no longer be traveling; this is the finished

state.

Every other state is a triple consisting of a node ℓ ∈ V, indicating where the packet is currently situated; a processing state κ ∈ K, indicating whether the packet is ready to move, or how much additional processing remains; and a packet state σ ∈ H*, indicating the sequence of headers nested around the payload of the packet.

Definition 1. Σ(G, K, P, A, C) is the set of network states over the graph G = (V, E), the processing states K, and the protocol data P, with C ⊆ A ⊆ P. Let H = V × V × P. Σ(G, K, P, A, C) is the disjoint union of

1. start,

2. stop, and

3. the triples (ℓ, κ, σ), for ℓ ∈ V, κ ∈ K, and σ ∈ H*.

The transition relation of a network state machine is a union of the following parameterized

 partial functions. We define what the resulting state is, assuming that the function is defined for 

the state given. We also constrain when some of these functions may be defined. Different IPsec

security postures are determined by different choices of domain for each of these partial

functions (subject to the constraints given).

The sets of authenticated headers A and confidentiality headers C play no role in determining the

evolution of network states, but they play a central role in expressing and verifying the security

goals for IPsec, as formulated in Section 3.3.

Definition 2. A network operation is any partial function of one of the following forms:

1. Packet creation operators createℓ,h(start) = (ℓ, ready, ⟨h⟩), when defined, for (ℓ, h) ∈ V × H. createℓ,h is not defined unless its argument is the state start.

2. The packet discard operator discard(ℓ, κ, σ) = stop, when defined. discard is undefined for start.

3. Packet movement operators movee,κ(ℓ, ready, σ) = (ℓ′, κ, σ), when e ∈ E, e is an arc from ℓ to ℓ′, and κ ≠ ready. movee,κ is undefined for all other network states.

4. Header prefixing operators prefixh,κ(ℓ, κ′, σ) = (ℓ, κ, h · σ), when defined. The function prefixh,κ is nowhere defined when h ∉ A.

5. Header pop operators popκ(ℓ, κ′, h · σ) = (ℓ, κ, σ), when defined.

6. Null operators nullκ(ℓ, κ′, σ) = (ℓ, κ, σ), when defined.

A transition relation → ⊆ Σ(G, K, P, A, C) × Σ(G, K, P, A, C) is a union of operators create, discard, move, prefix, pop, and null.

When → is a transition relation for Σ(G, K, P, A, C), we regard each history of the associated state machine as representing a possible trajectory for a packet through the network.
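The operators of Definition 2 can be sketched concretely (a simplified encoding of our own; the processing-state names are invented, and the motion restrictions are enforced by assertions):

```python
READY = "ready"

def create(state, loc, h):
    """Packet creation: only from the start state, born ready with one header."""
    assert state == "start"
    return (loc, READY, (h,))

def move(state, new_loc, new_proc):
    """Movement: the motion restriction (must be ready to move) and the inbound
    motion restriction (cannot arrive already ready) are the two assertions."""
    loc, proc, headers = state
    assert proc == READY and new_proc != READY
    return (new_loc, new_proc, headers)

def prefix_op(state, h, new_proc):
    """Header prefixing: push an IPsec header onto the packet."""
    loc, _, headers = state
    return (loc, new_proc, (h,) + headers)

def pop_op(state, new_proc):
    """Header pop: remove the outermost header."""
    loc, _, headers = state
    return (loc, new_proc, headers[1:])

def discard(state):
    """Packet discard: subsumes packet filtering."""
    assert state != "start"
    return "stop"
```

A trajectory is then a sequence of these calls: create a packet at its source, move it to a gateway interface, prefix a tunnel header, and so on until it stops.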

The assumption that prefixh,κ is nowhere defined when h ∉ A means that the only nested headers we consider are IPsec headers. The discard operator allows us to subsume packet filtering, as in Section 2, as part of IPsec functionality, which matches the intent of the IPsec RFC [14].

We call the assumption that movee,κ′(ℓ, κ, σ) = (ℓ′, κ′, σ) is not defined when κ ≠ ready the motion restriction. We call the assumption that it is not defined when κ′ = ready the inbound motion restriction.

The motion restriction codifies the assumption that a device will not move a packet until it is

ready. The inbound motion restriction codifies the assumption that there will always be a chance

to process a packet when it arrives at a location, if needed, before it is declared ready to move to

the next location.

Given a packet p, we call the address in the source header field of its topmost header src(p). We

call the address in the destination header field of the topmost header dst(p). We also call a packet

 p an IPsec packet if its outermost header h is in A.

We say that a header h provides confidentiality if h ∈ C, and that it provides only authentication if h ∈ A \ C. Our treatment need not distinguish between ESP used only for authentication and

AH; however, these headers may be different members of A, and individual systems may have

security goals requiring one rather than the other.

Cryptographic Assumptions:

We make two assumptions about the IPsec cryptographic headers. First, we assume that cryptographic headers cannot be spoofed; in other words, if we receive a message with an authenticating header from a source "known to us" (presumably as certified by some public key infrastructure, and certainly assumed to include those devices shown as nodes in the system model), then the entity named in the source field of the header is the entity that applied the header, and the payload cannot have been changed without detection.

Second, confidentiality headers have the property that packets protected with them can be decrypted only by the intended recipient, i.e., the device named in the ESP header destination field. More formally, using a dash for fields that may take any value, we stipulate these properties for every transition.

These properties axiomatize what is relevant to our analysis in the assumption that key material

is secret. If keys are compromised then security goals dependent on them are unenforceable.

Expressing Security Goals:

We focus on authentication and confidentiality as security goals in our analysis. Concrete security goals select certain packets that should receive protection [14];

selection criteria may use source or destination addresses, protocol, and other header components

such as the ports, in case the protocol is TCP or UDP.

Authentication Goals:

The essence of authentication is that it allows the recipient to take a packet, so to speak, at face value. Thus, for a packet p selected for protection by an authentication goal: if A is the value in the source header field of p as received by B, then p actually originated at A in the past, and the payload has not been altered since.

We do not regard a packet as being (properly) received unless the cryptographic hash it contains

matches the value computed from a secret shared between the two IPsec processing devices and

the packet contents. It will not be delivered up the stack otherwise, nor forwarded to another 

system after IPsec processing.

Confidentiality Goals:

We assume that confidentiality headers provide authentication, and add encryption. We have two

reasons for doing so. First, the IPsec specification allows both authentication and confidentiality

to be used with the ESP header; it is inadvisable to request only confidentiality when

authentication can also be had at the same time, and at modest additional processing cost.

Second, it seems hard to state precisely what data is kept confidential, if that data might change

as the packet traverses the network. When using confidentiality headers, we are therefore

attempting to achieve an authentication goal as well as a confidentiality goal.

A confidentiality goal for a packet with source field A, requiring protection from disclosure in

some network location C, stipulates: If a packet originates at A, and later reaches the location C,

then while it is at C it has a header providing confidentiality.

The cryptographic protection may refer to the ESP header more specifically, stipulating certain parameters (key length, algorithm, etc.). The proviso that the packet was once at A is necessary, because in most cases we cannot prevent someone at C from creating a spoofed packet with the given header fields.

However, a spoofed packet cannot compromise the confidentiality of A’s data if it has no causal

connection to A.

Example Goals:

Consider the network in Figure 3. Given this example network, a potential authentication goal

could be that packets traveling from EngA to EngB should be authenticated, meaning that any

 packet with source field claiming to be from EngA that reaches EngB should in fact have

originated in EngA.

An example confidentiality goal is that packets traveling from FinA to FinB should be encrypted

whenever outside those areas. This means that if a packet has source field in FinA, and actually

originated there, then if it reaches any other area R, it has an ESP header providing encryption

while at R. One advantage to this form of expression is that it is semantically precise.

Another is that policies expressed in this form appear to be intrinsically composable, in the sense

that separate goals can always be satisfied together. Moreover, this form of expression often

suggests placement of trust sets, in a sense we will now introduce.

Trust Sets:

Once a packet enters an appropriate cryptographic tunnel, achieving a security goal does not

depend on what happens until it exits. Thus, the set of locations in the network topology that are

accessible to the packet from the source (before entering the tunnel) or accessible from the exit

of the tunnel (before reaching the destination) are the only ones of real importance.

We will call these locations a trust set for a particular security goal. A trust set is goal-specific;

different goals may have different trust sets. For instance, an engineering group working on a

sensitive project could easily have much more restrictive security goals than its parent

corporation (in terms of trust).

Typically, a trust set is not a connected portion of the network; instead, it often consists of two large portions (each a connected subgraph), with a large, less trusted network between them,

such as the public Internet. In some cases the trust set may consist of several islands, and the

tunnels may not connect all of them directly.

In this case, a packet may need to traverse several tunnels successively in order to get from one

island of the trust set to a distant one. The choice of trust set for a particular security goal is a

matter of balance. Clearly, the source must belong to the same island of the trust set as the tunnel

entrance, and the tunnel exit must belong to the same island as the destination (or the entrance to

the next tunnel).

This encourages creating trust sets as large as possible, since then a few tunnels may serve for 

many endpoints. However, the scope of a trust set must generally be limited to a set of areas on

which it is possible to monitor traffic and check configurations.

This encourages making the trust sets as small as possible. The art of using IPsec effectively

consists partly in balancing these two contrasting tendencies.

Boundaries:

Of special importance are those systems inside a trust set with a direct connection to systems outside the trust set. We term these systems the boundary of the trust set. We assume that every device on the boundary of a trust set is capable of filtering packets. This may be a portion of its IPsec functionality [14]. Alternatively, the device may not be IPsec-enabled, but instead be a filtering router or packet-filtering firewall. We regard such devices as a degenerate case of an IPsec-enabled device, one which happens never to be configured to apply any cryptographic operations.

Definition 3. A trust set S for G = (V, E) consists of a set R ⊆ V of areas, together with all devices d adjacent to areas in R and all their interfaces id[·] and od[·]. The inbound boundary of S, written ∂in S, is the set of all interfaces id[a] or id[d′] where d ∈ S and a, d′ ∉ S. The outbound boundary of S, written ∂out S, is the set of all interfaces od[a] or od[d′] where d ∈ S and a, d′ ∉ S.
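As a rough sketch of Definition 3 (the adjacency encoding and interface naming are ours, and the membership rule is simplified):

```python
def boundaries(trust_areas, adjacency):
    """Compute (inbound, outbound) boundary interfaces of a trust set.

    adjacency: iterable of (device, node) pairs, one per interface, where
    node is the area (or device) that interface faces. A device is taken to
    belong to the trust set if it is adjacent to any area in trust_areas.
    """
    devices = {d for d, n in adjacency if n in trust_areas}
    inbound = {f"i{d}[{n}]" for d, n in adjacency
               if d in devices and n not in trust_areas}
    outbound = {f"o{d}[{n}]" for d, n in adjacency
                if d in devices and n not in trust_areas}
    return inbound, outbound
```

For a gateway SG1 sitting between a trusted area EngA and the Internet, the boundary consists of SG1's two Internet-facing directed interfaces; these are exactly the enforcement points where filtering must occur.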

Suppose that a security goal states that packets traveling between Ai and Bj (i, j ∈ {1, 2}) must be protected with a confidentiality header whenever in the Internet area. A reasonable trust set S for this goal would include all areas except the Internet, together with their attached devices and interfaces. This trust set S is not a connected set.

Deriving Algorithms:

Given an environment in which one can rigorously reason about packet states, and precise

specifications of security goals, how does one ensure the goals are enforced? This section

focuses on answering that question, by detailing formal behavior requirements for systems and

then proving they guarantee enforceability.

SG5 testconfigs1/CRCF.encr-SG5.txt ios

...end routers

The first field in a router specification is a unique name given to the router (for example, SG1 in

Figure 3). The second field identifies the router's configuration file (this information can be obtained by running the command show running-config on a Cisco router).

The third field indicates the type of router configuration; currently, the tool only supports Cisco

routers running IOS. Expansion of the tool to support other types of router is ongoing, however.

CAIC can be used in an iterative manner, to see how changing configurations affects goal achievement. This file format allows all router configuration files to be located in some central location and referenced by path.
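A parser for the router lines shown above might look like this (a guessed reading of the CAIC input format; the real file's sentinels and annotations may differ):

```python
def parse_router_specs(text):
    """Parse router lines of the form '<name> <config-path> <type>'."""
    routers = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("..."):
            continue                      # skip blanks and elided lines
        name, path, rtype = line.split()
        routers.append({"name": name, "config": path, "type": rtype})
    return routers
```

Applied to the fragment above, `SG5 testconfigs1/CRCF.encr-SG5.txt ios` yields one entry naming the router SG5, its configuration file path, and its type (ios).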

After router specification, trust sets are defined. One of the advantages of our treatment of IPsec is that it is compatible with the layered structure of organizations. In particular, the two

corporations A and B in Figure 3 may have security goals they want to achieve via IPsec, while

the two engineering departments within them may have more tightly constrained goals that they

need to achieve.

In this case, they may manage their own IPsec-capable equipment and configure it as needed.

Typically such sub organizations do not have access to the topological boundary of the parent

organizations. In this case, to be sure that their IPsec configurations do not interfere with the

goals of their parents, they need only ensure that they obey two conditions, namely the Pop Rule

and the Push Rule for trust sets S in which the parent participates.

To protect their networks, organizations need to use a variety of different techniques in tandem.

Most companies use both packet-filtering firewalls and IPsec, typically as part of a Virtual

Private Network. We have presented techniques to use either of these mechanisms separately to

ensure that meaningful security goals are achieved, but their use in combination complicates matters.

The remaining question is how to ensure that dangerous packets that should have been filtered

were not protected by IPsec headers (and possibly encrypted) as they traverse the filtering point

where they should be discarded.


Later, a security gateway may remove the IPsec headers and cryptographic transformation,

unleashing packets that will damage the recipient. Thus, our core idea is that tunnel endpoints

must impose all filtering constraints that might have been missed while the packet was

encapsulated with IPsec.

Following our method for rigorous automated security management, we need to take four steps

to resolve this problem. The first, the modeling step, is unnecessary in this case, since the model

is already sufficiently expressive. In particular, packet filtering is already expressed as an aspect

of IPsec processing as codified in Definition 2. We can adopt the previous model unchanged.

Network Security Management with High-level Security Policies

High-level security policy:

Most existing tools for automatic network management adopt a policy-based approach. System

administrators decide upon a global policy specifying how the network should be configured.

The tools can verify that a given policy is correctly implemented by low-level mechanisms.

In some cases, they can also translate policies into sets of configuration directives and push

them to the corresponding network devices. In order to make sure certain security requirements

are met, the administrator only needs to examine the policy, which is easier and less error-prone

than examining every piece of the configuration.

While the separation of policy from mechanisms is an important step towards eliminating human

errors, an equally important question is how to make policy itself less error-prone. A good policy

language design should require little technical knowledge to write a "correct" policy. However,

this ideal is often hard to achieve, largely because most security problems are caused by complex

interactions among different network components.

The correct behavior of a device is not only dependent on its own configuration, but also on

other devices in the network. This kind of interaction is usually expressed explicitly in policies, either in the form of policy constraints [12, 15] or as a set of conditions associated with

a policy rule [11]. This places a greater burden on the policy maker to exercise caution in defining those constraints or policy conditions. Any mistake in this process may lead to an insecure system.


For example, a policy regarding logging in to hosts may have a rule "allow logging in to A only if the protocol is SSH". The purpose is to prevent machine A's data from being transmitted as plain text. However, one can telnet to a machine B (if that is allowed by the policy) and then ssh into A, potentially leaking information communicated with A through the telnet hop. To fix this problem one can either set a constraint on policy rules to avoid any such insecure multi-hop logins, or put an extra condition on A's login policy like "allow logging in to A from B only if the protocol is SSH and communication to B is secure".
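The multi-hop login problem above can be checked mechanically. The following sketch searches a login graph for paths to A that include a plaintext hop; the topology (telnet allowed to B, ssh from B to A) and the set of secure protocols are illustrative assumptions.

```python
from collections import defaultdict

# Allowed login edges: (source, destination, protocol). Hypothetical topology
# mirroring the telnet/ssh example in the text.
EDGES = [("outside", "B", "telnet"), ("B", "A", "ssh")]
SECURE = {"ssh"}  # protocols assumed to encrypt traffic

def plaintext_exposure_paths(edges, target):
    """Return login paths ending at `target` that contain at least one
    insecure (plaintext) hop, via a simple DFS over the login graph."""
    out = defaultdict(list)
    for src, dst, proto in edges:
        out[src].append((dst, proto))
    found = []
    def dfs(node, path, insecure):
        if node == target and path and insecure:
            found.append(list(path))
        for dst, proto in out[node]:
            if (dst, proto) in path:      # avoid revisiting the same hop
                continue
            path.append((dst, proto))
            dfs(dst, path, insecure or proto not in SECURE)
            path.pop()
    for start in {src for src, _, _ in edges}:
        dfs(start, [], False)
    return found
```

On this topology the checker reports the telnet-then-ssh path, which is exactly the leak the extra policy condition is meant to rule out.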

While it is relatively easy to express the inter-domain configuration constraints in this simple

example, in reality they can be too complex to specify and reason about. A better policy should start from a higher level, i.e. the goal of security administration. For the login example, a high-level policy could be "information communicated with machine A should not appear as plaintext in the network".

This way we abstract away the complex interactions among different components and state the ultimate effect desired. In comparison to low-level policies, a high-level policy says "what we want" instead of "what to do". Since it closely matches the intention of the policy decision-maker, a

high-level policy is easier to get right. Another problem with low-level policies is that they are

often designed to control a particular kind of resource.

For example, a packet-filtering policy only controls configuration elements such as firewalls,

routers, and network topology. Changes in other configuration elements may require

modification to the packet-filtering policy in order to achieve the ultimate security goal. Frequent

 policy update is undesirable because a policy’s correctness affects security. A high-level policy,

on the other hand, is much more stable because the goals of security administration usually do

not change very frequently.

Multiple low-level policies describe different parts of the configuration; for each low-level policy domain, there will be a tool to check that implementations conform to the low-level policy (in other words, that the low-level policy is a correct description of the actual network). The checker described in this paper will detect violations of the high-level policy, given a set of low-level configuration descriptions.

The high-level policy does not replace the low-level ones. Instead, it is used to reason about them, making sure that together they satisfy certain security criteria.


Dealing with software vulnerabilities:

Many security problems are caused by software vulnerabilities. Thus when configuring a

network, a system administrator must take into consideration the security robustness of installed

software components. While one should generally avoid running bug-ridden software, the

ubiquitous existence of vulnerabilities is a status quo that has to be dealt with for the foreseeable

future.

It is a harsh reality that some vendors of popular software do not release patches quickly, even for critical vulnerabilities. In the wake of a new bug report, an automatic tool that quickly

identifies what security risks it brings to the network would be useful.

This allows the system administrator to take preemptive action before a patch is available: modifying firewall rules, moving sensitive information out of potentially compromisable zones, or disabling the buggy software if there is no other alternative. To this end, the tool presented in this paper takes potential software vulnerabilities into consideration when checking compliance with high-level policies.

Software bugs can be quite subtle and it requires expertise to fully understand their security

impact. Since new varieties of exploits emerge every day, security administrators need to pay

attention to reports like CERT advisories or Bugtraq to stay updated on the most recent threats.

This poses a significant challenge for incorporating software vulnerabilities into automatic

security management: the tool must be able to take as input new software bugs and reason about

them. This requires a formal language to specify software vulnerabilities.

These formal descriptions could be written by security experts from an outside source like CERT

and local administrators would only need to plug them into the tool.

A Motivating Example:

We use the example network in Figure 2 to illustrate our approach. There are three zones (Internet, dmz and corp) separated by two firewalls (FW1 and FW2). The administrator manages the web Server and the file Server while the project PC is operated by corporate employees.

The company owns proprietary information so the security management needs to ensure that

their confidentiality will not be compromised by an outside attacker. To achieve this security

goal, multiple configuration elements must be set up appropriately.


First, the topology and firewall configuration allow outside packets to reach dmz zone, but not

corp zone where the confidential project Plan is stored. Second, the file sharing service running

on file Server requires authentication for access to project Plan.

Only project managers have the access rights. Third, the administrator maintains a collection of 

application binaries so that individual employees do not need to install programs on the project

PC. This scheme may look quite secure: even if an attacker can compromise web Server, it is

still hard for him to get his hands on the confidential data. Since FW2 only allows web Server to

communicate with file Server, the attacker will have to somehow compromise file Server to

 proceed.

If the file sharing service is secure, it seems that we can rest assured. However, a dedicated and

clever attacker can still cause a security breach in this configuration. Recall that web Server is managed by the administrator, so if web Server is compromised, it is likely that the administrator's credential will also be leaked to the attacker, for example through a password sniffer.

The administrator's credential does not enable the attacker to access project Plan on file Server (it can only be accessed by project managers). However, it does allow the attacker to update the

application binaries. So he can install his version of Acrobat Reader. Some day a project

manager will open a project plan in PDF format, and besides showing the file, the Trojan horse

Acrobat Reader communicates the content to the attacker.

A safer way to configure the network is to move web pages from corp zone to dmz zone and ban

any inbound access to corp zone. In our management framework, the administrator can define a

high-level policy for data confidentiality such as "project Plan can only be read by project managers". He can then run our tool to check if the policy is upheld. In this case the tool will report a violation and the attack steps described above.

A High-level Policy:

Our high-level policy language is in the form of data access list (DACL). A data access list is a

list of data access rules (DataAccRule). Each rule specifies a legal operation a subject (Principal)

is permitted to perform on an object (Data). Principal is represented by a list of symbols. Each

symbol stands for a group of people.


The list of symbols stands for the union of the groups represented by each symbol. A similar 

representation is used for Data. Op can be either read or write. A DACL policy for the example

is shown below.
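The DACL figure is not reproduced here, so the following sketch shows one plausible Python encoding of the language just described. The rule contents and group names (projectManagers, projectPlan, and so on) are illustrative assumptions, not the report's exact policy.

```python
# Each DataAccRule is ([Principal symbols], Op, [Data symbols]); a list of
# symbols stands for the union of the groups (or data sets) it names.
DACL = [
    (["projectManagers"], "read", ["projectPlan"]),
    (["projectManagers", "admin"], "write", ["webPages"]),  # hypothetical rule
]

def permitted(dacl, group, op, data):
    """True if some rule grants `op` on `data` to a principal in `group`."""
    return any(group in principals and op == rule_op and data in objects
               for principals, rule_op, objects in dacl)
```

The first rule expresses the example policy "project Plan can only be read by project managers": any read of projectPlan by a group outside the rule is a violation.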

There is a subtle difference between the semantics of DACL and that of a low-level access-

control policy. A DACL rule cannot simply be translated into some enforcement mechanism that

controls access to the data. As shown by the example, an attacker can steal confidential

information from the file Server without violating the file server’s access control mechanism.

Rather, he exploits vulnerabilities in webServer, which is allowed to access file Server by the

firewall.

The security of a piece of data is affected not only by the authentication mechanisms on the host where it is stored, but also by the "hardness" of all machines that can access the host.

The security property specified by DACL is enforced collectively by the configurations of all

those hosts. Thus, to verify that the high-level DACL policy is upheld, all configuration

 parameters in the network must be taken into account. 

Configuration Description:

To verify that a DACL policy is upheld, our checker needs descriptions of the actual network.

Part of the descriptions can be low-level policies that control certain aspects of network 

configurations. In particular, a host access control list (HACL) is a good abstraction for the

overall effects of packet-filtering devices and there are existing firewall management tools that

can be used to relate a HACL policy to the actual status of those devices [2, 6]. So we use HACL

as the description language for those devices. This section shows description languages for the

other configuration information — information about principals and hosts.

Principal binding:

A principal binding (PBnd) provides security-relevant information about people. The location field includes the hosts from which the principal can operate and the local user account he has on the hosts. As a first step we only differentiate two kinds of users: privileged and unprivileged.

The trust field indicates the principal’s trustworthiness.


A principal is trusted if his intention is always good and it is unlikely he will perform potentially

dangerous operations. An example of such operations is opening attachments in unsolicited

emails.

A principal is incompetent if his intention is always good but he may inadvertently perform those

 potentially dangerous operations. A malicious principal has bad intentions and may try to

illegally access information by launching attacks. A sample principal binding for the example is shown below. In reality one can write a program to query an LDAP database to get this information, although we have not done so.
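The sample binding figure is lost in this copy, so the following sketch reconstructs what such bindings might look like for the example network. The host names, principal names, and privilege levels are all assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PrincipalBinding:
    name: str
    trust: str  # "trusted" | "incompetent" | "malicious"
    locations: dict = field(default_factory=dict)  # host -> privilege level

bindings = [
    PrincipalBinding("sysAdmin", "trusted",
                     {"webServer": "privileged", "fileServer": "privileged"}),
    PrincipalBinding("projectManager", "incompetent",
                     {"projectPC": "unprivileged"}),
    PrincipalBinding("attacker", "malicious", {"internet": "privileged"}),
]

# Malicious principals' locations seed the attack simulation; incompetent
# principals' locations are the candidate targets for client-side attacks.
attack_sources = [h for b in bindings if b.trust == "malicious" for h in b.locations]
client_targets = [h for b in bindings if b.trust == "incompetent" for h in b.locations]
```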

The importance of principal binding information in policy verification lies in two aspects. First,

the locations of malicious principals provide an initial attack condition for the attack simulation

algorithm; second, the locations of incompetent principals provide a set of targets for client-side

attacks.

These will become clear in section 5.

Host configuration:

The language for describing the configuration of a host is as follows.

A host configuration (Host Conf) includes the network address of the host (Host), descriptions of applications running on the host, and descriptions of data accessible from the host. The latter

two are explained below. An application (App) is a piece of software running under the privilege

of Usr and with certain configuration options (Option List).

In this section we address the problem of the higher-level check: assuming a given configuration description is a truthful representation of the actual network, how can we reason about it in order to determine whether the high-level policy is upheld?

We achieve this goal in two stages. In the first stage, we model the whole network as a state-

transition system. This gives us the ability to reason about multi-stage attacks. The transition

rules are contributed by all components in the system.

For each component class, there is a general specification as to what kind of transition rules the component generates under various configuration options.

Data binding relates a conceptual notion of data to its physical representation on a host. The access field indicates the access control mechanism on the local host to protect the data.


For example, ―privileged writable‖ means privileged users can modify the data. The location

field tells if the data is locally stored on the host or resides in a remote file server. For the latter 

case, the file transfer protocol, the address of the server and relevant mount information must be

 provided.

If the same data has more than one access or location attributes, there will be multiple bindings

for the data. A sample data binding for project Plan on project PC is shown below.
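Since the data-binding figure is also lost here, the sketch below shows one plausible reconstruction of a binding for project Plan, with multiple bindings possible per data item. The protocol, server, and mount values are assumptions.

```python
# Hypothetical data binding for projectPlan as seen from projectPC: the data
# is privileged-writable and physically resides on a remote file server.
data_bindings = [
    {
        "data": "projectPlan",
        "host": "projectPC",
        "access": "privileged-writable",  # privileged users may modify it
        "location": {
            "kind": "remote",             # stored on a file server, not locally
            "protocol": "nfs",
            "server": "fileServer",
            "mount": "/export/projects",
        },
    },
]

def bindings_for(data, host):
    """All bindings of `data` reachable from `host` (one per access/location)."""
    return [b for b in data_bindings if b["data"] == data and b["host"] == host]
```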

From this specification, combined with the component description information, we can uniquely determine the set of transition rules a particular instance of the component contributes. The transition rules of the whole network system are the collection of all transition rules contributed by every component on every host.

In the second stage, we simulate attacks on the state-transition system. An initial attack condition is introduced into the system, and an efficient algorithm finds the final state under the transition rules.

Conditions in the final state are examined to see if any of them is in violation of the policy.

Section 5.1 discusses the component specification language, and section 5.2 discusses the attack 

simulation algorithm.

Component specification:

A component specification describes the transition rules a software or data module generates

under various configuration situations.

This specification can be provided by an outside source with expertise in security analysis.

Local administrators only need to supply the relevant configuration information. Thus,

component specifications should be general so that they can be reused across sites.

Following is an example specification for a buffer overflow bug in certain versions of Apache:

specDef{

SwID = apache

Version = 1.2.2 - 1.3.24


perm = Usr

net in anyHost http exploit

==> fullControl Usr}

Intuitively, the above specification says "if a software component apache with version from 1.2.2 to 1.3.24 is running under the permission of Usr, and a malicious packet from an attacker arrives through the http protocol, then he can hijack the software and do whatever Usr is allowed to do in the local system".

The attack simulation uses a few helper routines. One routine finds, in a set of global conditions, those that are relevant to a given host and converts them to local form; its counterpart performs the reverse process, finding in a set of local conditions those that affect other hosts and converting them to global form.

Another routine finds all applications of a rule set to a set of conditions. It outputs the resulting conditions and a rule set that includes both the original rules and any new rules generated by the application.

A final routine computes the final state of a host, given an initial state and a set of global conditions propagated to it. It repeatedly applies the rule set to the newly generated conditions until no more new conditions can be generated, and also returns the set of new global conditions.

Figure 7 is the algorithm for the attack simulation. The algorithm continues as long as there are still new global conditions generated by the hosts.

Let N be the number of predicates defined in the system, a the maximum arity of all predicates, and d the maximum cardinality of all argument domains. The number of different conditions that may enter the condition set is bounded by N·d^a (conjunction conditions are decomposed before entering the condition set).
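To make the simulation concrete, here is a toy monotonic fixpoint over transition rules. The rule and condition names are illustrative assumptions patterned on the apache specification above, not the tool's actual syntax.

```python
# Each rule is (set of precondition strings, consequence string).
RULES = [
    # The apache-style rule: network access plus a vulnerability yields control.
    ({"netAccess(attacker, webServer, http)", "vulnerable(webServer, apache)"},
     "fullControl(attacker, webServer)"),
    # Control of webServer lets the attacker reuse its allowed connections.
    ({"fullControl(attacker, webServer)", "netAccess(webServer, fileServer, nfs)"},
     "netAccess(attacker, fileServer, nfs)"),
]

def simulate(initial, rules):
    """Least fixed point of the rules over the initial conditions.
    Monotonic: conditions are only ever added, so no backtracking."""
    state = set(initial)
    changed = True
    while changed:
        changed = False
        for pre, post in rules:
            if pre <= state and post not in state:
                state.add(post)
                changed = True
    return state

final = simulate({"netAccess(attacker, webServer, http)",
                  "vulnerable(webServer, apache)",
                  "netAccess(webServer, fileServer, nfs)"}, RULES)
```

Each condition is added at most once, which is the intuition behind the polynomial bound discussed above and below.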

In compliance checking, we use a polynomial attack simulation algorithm to search all possible

attack scenarios. Another approach is to use a standard model checker to do the search. It is well

known that the complexity of model-checking is inherently exponential in the size of the state-

transition system.

Our algorithm achieves polynomial time due to an implicit assumption of monotonicity.


Under the monotonic assumption, an attacker does not need to relinquish the access he has gained in order to gain more access, so no backtracking is needed during the search. The monotonic assumption is generally true if the policy only involves confidentiality and integrity.

If one also wants to reason about availability, the assumption will no longer hold: a denial of service attack not only compromises the availability of a host, but also compromises the ability of the attacker to launch further attacks from that host.

In that case backtracking would be necessary, and we suspect an off-the-shelf model checker may outperform a custom-tailored search engine.

In reality trust management (TM) plays an important role in network security. The TM policies

themselves can be quite complex and the analysis of their security properties is hard.

We simplified the modeling of trust relationships by ignoring the possible relationships between

different principals’ credentials.

After the attacker obtains one principal’s credential, our model does not infer its effects on trust

relationships other than that the attacker can gain privileges of the victim principal.

In this section, we discuss several issues concerning the implementation of image compression by Huffman coding.

Compression is a technique for reducing the size of text, video, audio, images, etc. Certain principles or methods are followed for compressing such data.

Basically the steps followed for the compression of the images are as follows:

a) Block preparation.

 b) Separating into pixels/coefficients.

c) Quantization.

d) DCT scanning.

e) Entropy encoding.

f) Frame building.
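Step (e), entropy encoding, is where Huffman coding enters. As a self-contained illustration of the idea (not the table-driven JPEG encoder itself, which codes DCT coefficient categories), the following builds a Huffman code table for a symbol stream:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table: frequent symbols get shorter codes.
    Repeatedly merge the two least-frequent subtrees, prefixing their
    codes with '0' and '1'. The integer tiebreaker keeps heap entries
    comparable when counts are equal."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [(count, i, {sym: ""}) for i, (sym, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
```

For the stream "aaaabbc", the common symbol gets a one-bit code while the rare ones get longer codes, which is exactly the redundancy reduction entropy encoding provides.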


Output Testing:

No system can be useful if it does not produce the required output in the specified format. The outputs generated and displayed by the system under consideration were tested by comparing them with the format required by the user.

Here the output format is considered in two ways: one is on screen and the other is in printed format. The on-screen output format was found to be correct, as the system was designed according to the user's needs. For the hard copy as well, the output conforms to the requirements specified by the user; hence, output testing did not result in any correction to the system.

User Acceptance Testing:

User Acceptance of a system is a key factor to the success of any system. The system under 

consideration was tested for user acceptance by constantly keeping in touch with the prospective

system user at the time of developing and making changes wherever required.

This is done with regard to the following points.

  Input screen design.

  Output screen design.

  Online message to guide the user.

  Event driven system.

Black Box Testing:

Knowing the specified functions that a product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational.

Black box tests are carried out to verify that input to a function is properly accepted and output is

correctly produced. A black box test examines some aspects of a system with little regard for the

internal logical structure of the software.

Errors in the following categories were found through black box testing:

  Incorrect or missing functions

  Interface errors

  Errors in database structure or external database access

  Performance errors.


  Initialization and termination errors.

White Box Testing:

White box testing of software is predicated on a close examination of procedural detail. The status of the program may be tested at various points to determine whether the expected or asserted status corresponds to the actual status.

Using this approach, the following test cases can be derived:

  Exercise all logical conditions on their true and false side

  Execute all loops within their boundaries and their operation bounds

SAMPLE REPORT


Fig 3: Router 


In this paper we have focused on a well-known statistical redundancy of DCT blocks. To exploit

this redundancy we proposed a flexible band coding approach and have presented encoders for 

the compression.

The encoders implemented in the project are capable of compressing images using JPEG sequential mode encoding, and of using the JPEG sequential mode encoder for progressive mode transmission.

After completing the experiments, we draw the following conclusions from the results obtained:

  The JPEG table is indeed very efficient for image coding at the default quality if only one AC code table is to be used.

  BASC-I compresses the AC code by about 5.78% more than what is generated by the JPEG encoder. This is a significant reduction in light of what adaptive encoders, such as character-mode arithmetic encoders, obtain.

  Our encoders can be realized very easily by modifying the JPEG sequential mode

encoder. The JPEG decoder is upward compatible with our encoders.

  The JPEG spectral selection method is not efficient if the objective is to transmit the image code progressively and still generate the minimum code of all bands combined.

  An implementation of two-band SSM not only generates larger total AC code than ESSC, it generates even more code than the JPEG sequential mode encoder. Further, ESSC performs better than three-band and four-band SSM.

FUTURE ENHANCEMENT 

As our project concerns image compression, it is mainly used to compress and transmit images without any loss of data. Our aim is to compress the image and send it to the destination with minimum use of bandwidth.


i) The code can be developed, compressed into the form of a chip, and implemented in mobile phones.

ii) After doing that, picture messages can be sent with some advantages, such as:

a) Minimum bandwidth is used for the transfer of the compressed image.

b) No data is lost while transferring.

c) Working time is less when transferring using this mode of compression.

iii) It can be used with a digital signature, which increases the security of the information.

iv) In this project, we monitor congestion using NBP.

v) Through this result, with some enhancement, we can achieve a high-performance network.

  Today, all networks suffer from the main problem of congestion collapse from undelivered packets.

  Two new modules, the ingress and egress routers, were introduced to avoid congestion.

  Congestion was controlled at the ingress router.

  Congestion was controlled by introducing dynamic routing, and hop-by-hop routing was used.

  NBP achieves scalability and robustness and avoids packet loss.

  Packets are buffered in the routers present in the network, which causes congestion collapse from undelivered packets.

  Also, there is no security for data transmission.

  Unfair bandwidth allocation arises in the Intranet due to the presence of undelivered packets.

TERMS USED: PUBLIC KEY ENCRYPTION

Public key encryption algorithms rely on one key for encryption and a different but related key

for decryption. These algorithms have the following important characteristics:


  It is computationally infeasible to determine the decryption key given only knowledge of 

the cryptographic algorithm and the encryption key.

In addition, some algorithms, such as RSA, also exhibit the following characteristics:

  Either of the two related keys can be used for encryption, with the other used for 

decryption.

A public key encryption scheme has six ingredients:

  Plain text:

This is the readable message or data that is fed into the algorithm as input.

  Encryption algorithm:

The encryption algorithm performs various transformations on the plain text.

  Public and Private key:

This is a pair of keys that have been selected so that if one is used for encryption, the

other is used for decryption. The exact transformations performed by the encryption algorithm

depend on the public or private key that is provided as input.

  Cipher text:

This is the scrambled message produced as output. It depends on the plaintext and the key. For a

given message, two different keys will produce two different cipher texts.

  Decryption algorithm:

This algorithm accepts the cipher text and the matching key and produces the original plaintext.

The essential steps are the following:

  Each user generates a pair of keys to be used for the encryption and decryption of 

messages.

  Each user places one of the two keys in a public register or accessible file. This is the public key. The companion key is kept private. As the following figure suggests, each user maintains a collection of public keys obtained from others.

  If Bob wishes to send a confidential message to Alice, he encrypts the message using Alice's public key.


  When Alice receives the message, she decrypts it using her private key. No other 

recipient can decrypt the message because only Alice knows Alice’s private key. 

  With this approach, all participants have access to public keys, and private keys are

generated locally by each participant and therefore need never be distributed. As long as

a system controls its private key, its incoming communication is secure.

  At any time, a system can change its private key and publish the companion public key

to replace its old public key.

 The following table summarizes some of the important aspects of symmetric and public-key encryption. To discriminate between the two, we will generally refer to the key used in symmetric encryption as a secret key. The two keys used for public-key encryption are referred to as the public key and the private key.

 Invariably, the private key is kept secret, but it is referred to as a private key rather than a secret key to avoid confusion with symmetric encryption.
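The steps above can be illustrated with a toy RSA sketch. The primes here are tiny for readability and offer no real security; production systems use vetted cryptographic libraries, padding schemes, and much larger keys.

```python
# Toy RSA illustrating the key-pair steps above.
# WARNING: tiny primes, no padding; for illustration only.

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def generate_keys(p=61, q=53, e=17):
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)
    _, d, _ = egcd(e, phi)         # d is the inverse of e mod phi
    d %= phi                       # private exponent
    return (e, n), (d, n)          # (public key, private key)

def encrypt(message, public_key):
    e, n = public_key
    return pow(message, e, n)      # the transformation depends on the key given

def decrypt(ciphertext, private_key):
    d, n = private_key
    return pow(ciphertext, d, n)

# Bob encrypts with Alice's public key; only Alice's private key decrypts.
alice_public, alice_private = generate_keys()
ciphertext = encrypt(42, alice_public)
assert decrypt(ciphertext, alice_private) == 42
```

Note how either key of the pair can drive the transformation: encrypting with the public key gives confidentiality, as in the Bob-to-Alice exchange described above.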


CHAPTER 5


5.1 SUMMARY OF FINDINGS

We have proposed a policy-based approach to automatic network security management. The approach has the following characteristics.

1. A high-level policy expresses security concerns of protecting data. The policy language is

simple to understand and easy to get right.

2. Potential software vulnerabilities are taken into consideration in compliance checking. Our 

tool can incorporate knowledge of software bugs from an independent outside source.

3. The two-level framework allows for leveraging existing management tools based on low-level policies and provides good modularity. We have implemented the compliance checker in a Prolog-style logic program.
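As a rough illustration only (the hosts, facts, and policy below are hypothetical, not taken from the actual tool), the same Datalog-style flavor of compliance reasoning can be sketched in Python:

```python
# Datalog-flavored sketch of policy compliance checking.
# Hosts, edges, and the policy are illustrative placeholders.

# Facts: who can reach whom, which hosts run vulnerable software
# (e.g. imported from an outside bug database), and which hold sensitive data.
reachable = {("internet", "webserver"), ("webserver", "db")}
vulnerable = {"webserver"}
sensitive = {"db"}

def violations(reachable, vulnerable, sensitive):
    """Return sensitive hosts an attacker could reach, i.e. policy violations."""
    # Rule 1: a vulnerable host reachable from the internet can be compromised.
    compromised = {d for (s, d) in reachable
                   if s == "internet" and d in vulnerable}
    # Rule 2 (fixpoint): an attacker pivots along edges from compromised hosts.
    changed = True
    while changed:
        changed = False
        for (s, d) in reachable:
            if s in compromised and d not in compromised:
                compromised.add(d)
                changed = True
    # High-level policy: sensitive hosts must not be attacker-reachable.
    return sorted(sensitive & compromised)

print(violations(reachable, vulnerable, sensitive))  # ['db']
```

A real logic-program checker derives the same kind of conclusions from facts and rules; the point of the sketch is that vulnerability knowledge from an outside source plugs in as additional facts.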

With the explosion of the public Internet and e-commerce, private computers and computer networks, if not adequately secured, are increasingly vulnerable to damaging attacks. Hackers,

viruses, vindictive employees and even human error all represent clear and present dangers to

networks. And all computer users, from the most casual Internet surfers to large enterprises,

could be affected by network security breaches. However, security breaches can often be easily

 prevented. How? This guide provides you with a general overview of the most common network 

security threats and the steps you and your organization can take to protect yourselves from

threats and ensure that the data traveling across your networks is safe.

Network Security Testing

Activities that provide information about the integrity of an organization's networks and

associated systems through testing and verification of network-related security controls on a

regular basis. “Security Testing” or “Testing” is used throughout this document to refer to Network Security Testing.

The testing activities can include any of the types of tests described in Chapter 3, including

network mapping, vulnerability scanning, password cracking, penetration testing, war dialing,

war driving, file integrity checking, and virus scanning.


Operational Security Testing

 Network security testing conducted during the operational stage of a system’s life, that is, while

the system is operating in its operational environment.

Vulnerability

A bug, misconfiguration, or special set of circumstances that could be exploited to compromise a system.

For the purposes of this document, a vulnerability could be exploited directly by an attacker, or indirectly through automated attacks such as Distributed Denial of Service (DDoS) attacks or computer viruses.

The primary reason for testing the security of an operational system is to identify potential

vulnerabilities and subsequently repair them. The number of reported vulnerabilities is growing

daily; for example, the number of new information system vulnerabilities reported to the Bugtraq 

database has more than quintupled since the start of 1998, from an average of 20 to over 100 per 

month.

The number of computers per person in many organizations continues to rise, increasing the

demands on competent and experienced system administrators. Consequently, it is imperative

that organizations routinely test systems for vulnerabilities and misconfigurations to reduce the

likelihood of system compromise.

Roles and Responsibilities for Testing

Only designated individuals, including network administrators or individuals contracted to

 perform the network scanning as part of a larger series of tests, should conduct the tests

described in this section.

The approval for the tests may need to come from as high as the CIO depending on the extent of 

the testing. It would be customary for the testing organization to alert other security officers,

management, and users that network mapping is taking place.


Since a number of these tests mimic the signs of an attack, the appropriate managers must be notified to avoid confusion and unnecessary expense.

Wireless communications offer organizations and users many benefits such as portability and

flexibility, increased productivity, and lower installation costs. Wireless technologies cover a

 broad range of differing capabilities oriented toward different uses and needs.

Wireless local area network (WLAN) devices, for instance, allow users to move their laptops

from place to place within their offices without the need for wires and without losing network 

connectivity.

Less wiring means greater flexibility, increased efficiency, and reduced wiring costs. Ad hoc

networks, such as those enabled by Bluetooth, allow data synchronization with network systems

and application sharing between devices.

Bluetooth functionality also eliminates cables for printer and other peripheral device

connections. Handheld devices such as personal digital assistants (PDA) and cell phones allow

remote users to synchronize personal databases and provide access to network services such as

wireless e-mail, Web browsing, and Internet access.

Moreover, these technologies can offer dramatic cost savings and new capabilities to diverse

applications ranging from retail settings to manufacturing shop floors to first responders.

However, risks are inherent in any wireless technology. Some of these risks are similar to those

of wired networks; some are exacerbated by wireless connectivity; some are new.

Perhaps the most significant source of risks in wireless networks is that the technology’s

underlying communications medium, the airwaves, is open to intruders, making it the logical

equivalent of an Ethernet port in the parking lot.

The loss of confidentiality and integrity and the threat of denial of service (DoS) attacks are risks

typically associated with wireless communications.

Unauthorized users may gain access to agency systems and information, corrupt the agency’s

data, consume network bandwidth, degrade network performance, launch attacks that prevent


authorized users from accessing the network, or use agency resources to launch attacks on other 

networks.

As described in this document, the risks related to the use of wireless technologies are

considerable. Many current communications protocols and commercial products provide

inadequate protection and thus present unacceptable risks to agency operations. Agencies must

actively address such risks to protect their ability to support essential operations, before

deployment of wireless technologies.

Furthermore, many organizations poorly administer their wireless technologies. Some examples include deploying equipment with “factory default” settings, failing to control or inventory access points, not implementing the security capabilities provided, and not developing or employing a security architecture suitable to the wireless environment (e.g., one with firewalls between wired and wireless systems, blocking of unneeded services/ports, and use of strong cryptography). To a large extent, most of these risks can be mitigated.

Authority:

The National Institute of Standards and Technology (NIST) developed this document in

furtherance of its statutory responsibilities under the Computer Security Act of 1987 and the

Information Technology Management Reform Act of 1996 (specifically 15 United States Code

[U.S.C.] 278 g-3 (a)(5)).

This is not a guideline within the meaning of 15 U.S.C. 278 g-3 (a)(3). Guidelines in this

document are for federal agencies that process sensitive information. They are consistent with

the requirements of the Office of Management and Budget (OMB) Circular A-130.

This document may be used by nongovernmental organizations on a voluntary basis. It is not

subject to copyright.

 Nothing in this document should be taken to contradict standards and guidelines made

mandatory and binding upon federal agencies by the Secretary of Commerce under statutory

authority. Nor should these guidelines be interpreted as altering or superseding the existing

authorities of the Secretary of Commerce, the Director of the OMB, or any other federal official.


Document Purpose and Scope:

The purpose of this document is to provide agencies with guidance for establishing secure

wireless networks.1 Agencies are encouraged to tailor the recommended guidelines and solutions

to meet their specific security or business requirements.

The document addresses two wireless technologies that government agencies are most likely to

employ: wireless local area networks (WLAN) and ad hoc or  — more specifically — Bluetooth

networks.

The document also addresses the use of wireless handheld devices. The document does not

address technologies such as wireless radio and other WLAN standards that are not designed to

the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard.

These technologies are out of the scope of this document. Wireless technologies are changing

rapidly. New products and features are being introduced continuously. Many of these products

now offer security features designed to resolve long-standing weaknesses or address newly

discovered ones. Yet with each new capability, a new threat or vulnerability is likely to arise.

 Network scanning involves using a port scanner to identify all hosts potentially connected to an

organization's network, the network services operating on those hosts, such as the file transfer 

 protocol (FTP) and hypertext transfer protocol (HTTP), and the specific application running the

identified service, such as WU-FTPD, Internet Information Server (IIS) and Apache for the

HTTP service.
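As a minimal illustration of the first step of such a scan (a simple TCP connect probe, not a full-featured scanner such as Nmap, and to be run only against hosts you are authorized to test):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports accepting TCP connections on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Probe a few well-known service ports: FTP (21), SSH (22), HTTP (80), HTTPS (443)
print(scan_ports("127.0.0.1", [21, 22, 80, 443]))
```

Identifying the specific application behind each open port (e.g. WU-FTPD versus IIS versus Apache) requires an additional banner-grabbing or fingerprinting step, which dedicated scanners perform.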

5.2 SUGGESTION

References

1. Hong, Namsoo; Al-Khatib, Wallid; Magagna, Bill; McLoughlin, Andrea; & Coe, Brenda. Systems Theory. http://www.ed.psu.edu/insys/ESD/systems/theory/SYSTHEO2.htm

2. Berinato, Scott. “After the Storm, Reform.” CIO Magazine, Dec. 15, 2003.


http://www.cio.com/archive/121503/securityfuture.html

3. Starr, Randy; Newfrock, Jim; & Delurey, Michael. “Enterprise Resilience: Managing Risk in the Networked Economy.” Strategy & Business, Spring 2003. http://www.strategy-business.com

4. Mimoso, Michael S. “Measuring Security ROI a Tall Order.” SearchSecurity.com, April 15, 2002. http://www.searchsecurity.com

5. WordNet. Cognitive Science Laboratory, Princeton University. http://www.cogsci.princeton.edu

[1] Paul Ammann, Duminda Wijesekera, and Saket Kaushik. Scalable, graph-based network 

vulnerability analysis. In Proceedings of 9th ACM Conference on Computer and

Communications Security, Washington, DC, November 2002.

[2] Yair Bartal, Alain J. Mayer, Kobbi Nissim, and Avishai Wool. Firmato: A novel firewall

management toolkit. In IEEE Symposium on Security and Privacy, pages 17–31, 1999.

[3] S. Bhatt, A.V. Konstantinou, S. R. Rajagopalan, and Yechiam Yemini. Managing security in dynamic networks. In 13th USENIX Systems Administration Conference (LISA’99), Seattle, WA, USA, November 1999.

[4] Matt Blaze, Joan Feigenbaum, John Ioannidis, and Angelos D. Keromytis. The KeyNote

Trust-Management System, Version 2, Sept 1999. Request For Comments (RFC) 2704.

[5] Matt Blaze, Joan Feigenbaum, John Ioannidis, and Angelos D. Keromytis. The role of trust

management in distributed systems security. In Secure Internet Programming: Issues in

Distributed and Mobile Object Systems, Springer-Verlag Lecture Notes in Computer Science

State-of-the-Art series, pages 185–210, Berlin, 1999.


[6] J. Burns, A. Cheng, P. Gurung, S. Rajagopalan, et al. Automatic management of network security policy. In DARPA Information Survivability Conference and Exposition (DISCEX II’01), volume 2, Anaheim, California, June 2001.

[7] N. Damianou, N. Dulay, E. Lupu, and M. Sloman. The Ponder policy specification language. In Workshop on Policies for Distributed Systems and Networks (Policy 2001), HP Labs Bristol, Jan 2001.

[8] J. D. Guttman. Filtering postures: Local enforcement for global policies. In Proc. IEEE

Symp. on Security and Privacy, pages 120–129, Oakland, CA, 1997.

[9] Susan Hinrichs. Policy-based management: Bridging the gap. In 15th Annual Computer 

Security Applications Conference, Phoenix, Arizona, Dec 1999.

[10] Sushil Jajodia, Steven Noel, and Brian O’Berry. Topological Analysis of Network Attack Vulnerability, chapter 5. Kluwer Academic Publishers, 2003.

[11] Angelos D. Keromytis, Sotiris Ioannidis, Michael B. Greenwald, and Jonathan M. Smith.

The STRONGMAN architecture. In Proceedings of the 3rd DARPA Information Survivability

Conference and Exposition (DISCEX III), pages 178–188, Washington, DC, April 2003.

[12] Alexander V. Konstantinou, Yechiam Yemini, and Danilo Florissi. Towards self-

configuring networks. In DARPA Active Networks Conference and Exposition (DANCE), San

Francisco, CA, May 2002.

[13] Ninghui Li, William H. Winsborough, and John C. Mitchell. Beyond proof-of-compliance:

Safety and availability analysis in trust management. In 2003 IEEE Symposium on Security and

Privacy, Berkeley, California, May 2003.

[14] Steven Noel, Sushil Jajodia, Brian O’Berry, and Michael Jacobs. Efficient minimum-cost

network hardening via exploit dependency graphs. In 19th Annual Computer Security

Applications Conference (ACSAC), December 2003.

[15] C. Ribeiro, A. Zuquete, P. Ferreira, and P. Guedes. SPL: An access control language for 

security policies and complex constraints. In Network and Distributed System Security


Symposium (NDSS), Feb 2001.

5.3 CONCLUSION

It is no wonder that security is so difficult to manage in modern organizations. The practice of 

security management continues to evolve inside organizations, and therefore has yet to garner its

fair share of attention and resources.

This is partially the consequence of the organization’s inability to see the value of security

outside of technical constraints and regulatory compliance.

In addition, the industry’s affinity for technology-based solutions alienates the “business people” in the organization.

There is hope, however, that organizations and the security industry are evolving in this respect. Many organizations are adopting a risk-based approach to security. The move to a risk-based paradigm is a catalyst for moving security from a technical specialty to an organizational competency.

Applying a risk perspective to security is a logical progression — risk management is a basic

 business function, and whether it is done implicitly or explicitly, it must be performed at an

organizational level to be purposeful.

But even this paradigm for security has significant challenges to overcome, not least the many definitions of “risk” and the somewhat careless way in which risk is bandied about as the new security buzzword.

For example, the security industry offers many options and services for performing security “risk” assessments; however, at the nucleus of these offerings is usually a traditional vulnerability assessment with very little connection to risk drivers.

Organizations must also be cognizant that a risk perspective alone is not a panacea for solving all

of the issues that they face in elevating security to the level of other pervasive business problems.


In pursuit of addressing the challenges noted herein, the first obstacle that an organization must

confront is to determine what they are trying to accomplish with their security activities.

In essence, the organization must ask what benefits it gets from “doing security.” The

organizational perspective is essential to determining these benefits and for setting appropriate

targets for security.

One of the most important characteristics of cells and larger organisms is their ability to adapt to

new or changing environments. Organizations, like cells, also must continually adapt to their 

environment and emerging risks — risks that are perhaps unknown until the organization is

impacted by them.

In order to do this successfully, organizations need to view security in the context of the larger picture, one of organizational or enterprise resilience. A resilient approach transforms the basic premise of security, that of “locking down” an asset so that it is free from harm, to one that positions security as a contributor to strengthening the organization’s ability to adapt to new risk environments and accomplish its mission.

Aiming to make the organization more sensing, agile, and prepared provides a clearer purpose,

direction, and context for security management.

Looking beyond security (to resiliency) may provide the change in perspective that organizations

need to balance security and risk management with the organization’s strategic drivers. 

Series in Enterprise Security Management/Enterprise Resiliency:

This article is the first in a series exploring a new view of security in today’s complex

organizations. In future articles, we’ll discuss in more detail the evolutionary shifts that are

needed in organizations to allow them to be more effective at managing security across an ever-

changing enterprise.

We’ll also present our current research on a process centric view of security as a vehicle for 

improvement. Finally, the concepts of enterprise resiliency will be introduced, and we’ll explore

the role that enterprise security management plays in the pursuit of resilience.


We have argued by example in favor of rigorous automated network security management. This

method emphasizes modeling, which allows a class of systems to be represented in a uniform

mathematical style.

Network Intrusion Detection System:

The NIDS chosen is Snort. Snort is a high-performance, lightweight, highly customizable open-source NIDS. It supports a wide range of reporting formats, which will be very useful in our case.

Customizing the NIDS:

Snort normally ships with thousands of rules. Having everything enabled will drastically increase resource requirements on the RIS/MIS servers. Therefore, the rules will have to be tuned to include only what we actually need. Tuning the rules also helps in bringing down the number of false positives.
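For illustration only (the rule below is a generic example with placeholder SID and message, not one taken from this deployment), a tuned ruleset keeps just the signatures relevant to the protected services; an individual Snort rule has this shape:

```
# Illustrative rule only; SID and message are placeholders.
# Alert on inbound SYNs to a service we do not expose (Telnet, port 23).
alert tcp any any -> $HOME_NET 23 (msg:"TELNET connection attempt"; flags:S; sid:1000001; rev:1;)
```

Disabling whole rule categories that do not apply to the deployed services reduces both CPU load and false positives.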

Reporting format:

For standards compliance and easy integration with any upper-level NMS, the default format has to be the Intrusion Detection Message Exchange Format (IDMEF), which is an IETF draft. For real-time reporting, however, SNMP traps are much better.
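A simplified sketch of what an IDMEF alert might look like on the wire follows; the element names track the IDMEF specification, but the analyzer ID, message ID, timestamp, and addresses are placeholders:

```xml
<!-- Simplified IDMEF alert; identifiers and values are placeholders -->
<idmef:IDMEF-Message version="1.0"
                     xmlns:idmef="urn:ietf:params:xml:ns:idmef">
  <idmef:Alert messageid="a1b2c3">
    <idmef:Analyzer analyzerid="snort-node-17"/>
    <idmef:CreateTime>2019-07-27T10:01:25Z</idmef:CreateTime>
    <idmef:Source>
      <idmef:Node>
        <idmef:Address category="ipv4-addr">
          <idmef:address>192.0.2.200</idmef:address>
        </idmef:Address>
      </idmef:Node>
    </idmef:Source>
    <idmef:Classification text="Possible port scan"/>
  </idmef:Alert>
</idmef:IDMEF-Message>
```

Because it is plain XML, such a message can be parsed and aggregated by any upper-level NMS without tool-specific adapters.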

Attack Classification:

We should also look only for certain types of attacks and classify them according to severity. Then, based on this severity, we can decide whether to take any immediate automated action. Attacks are classified as: a. Denial of Service (DoS) attacks either directed

Bandwidth used:

We have only around 64 kbps available at each node and 256 kbps at the Data Center (DC). At the DC, we will have to manage hundreds of nodes. Most of the bandwidth savings are obtained by writing cfengine extensions to take decisions for automated actions at the nodes instead of sending real-time reports to the DC.

Latency:


We need to minimize the time taken for any control action to take place. It could be either automated or operator-assisted. In the case of operator-assisted actions, we need to provide an interface to make the operator’s job easier.

Load on the servers at the LSP:

The servers on the LSP are already providing various services to the customers. Any

management system is an add-on that must not consume too much of the available resources.

The way host-based intrusion detection is performed, the rules on the NIDS, the reporting mechanism, etc. all play a role in how good the developed system is.

We've shown the basic design and architecture of a Security Management System that can work 

on low bandwidth networks. Most of the challenges have been met. Performance studies are still

to be completed. The system is based completely on open standards and uses open source

components wherever possible. This helps us to drive down costs and makes it easy to customize

the various components involved.

A standardized mechanism is needed for this. We need to be able to easily use and integrate the security management system with any higher-level NMS, and to use a standard NMS to monitor and control the security management system. To monitor it, we need standard reporting formats for both statistical data and real-time updates.

IDMEF is an XML-based reporting format defined by the Internet Engineering Task Force (IETF), and it suits us best for transferring statistical information.