CHAPTER 2
LITERATURE SURVEY
2.1 WIRELESS MESH NETWORKS
As wireless networks evolve toward the next generation to provide
better services, wireless mesh networks (WMNs) have emerged as a key
technology. WMNs have many advantages such as low up-front cost, easy network
maintenance, robustness, and reliable service coverage. WMNs consist of mesh
routers and mesh clients, where mesh routers have minimal mobility and form the
backbone of the network. Each node operates not only as a host but also as a
router, forwarding packets on behalf of other nodes that may not be within direct
wireless transmission range of their destinations. A WMN is dynamically
self-organized and self-configured: the nodes in the network automatically
establish and maintain mesh connectivity among themselves and with
conventional clients.
The integration of WMNs with other networks such as the Internet,
cellular, IEEE 802.11, IEEE 802.15, IEEE 802.16, sensor networks, etc., can be
accomplished through the gateway and bridging functions in the mesh routers. Mesh
clients can be either stationary or mobile, and can form a client mesh network
among themselves and with mesh routers. WMNs are anticipated to resolve the
limitations of, and to significantly improve the performance of, wireless networks
such as Wireless Local Area Networks (WLANs), Wireless Personal Area Networks
(WPANs), and Wireless Metropolitan Area Networks (WMANs). They are
undergoing rapid progress and inspiring numerous deployments. WMNs will deliver
wireless services for a large variety of applications in personal, local, campus, and
metropolitan areas. Despite recent advances in wireless mesh networking, many
research challenges remain in all protocol layers. The authors [1] present a detailed
study of open research issues in WMNs. System architectures and applications of
WMNs are described, followed by a discussion of the critical factors influencing
protocol design. Theoretical network capacity and the state-of-the-art protocols for
WMNs are explored with the objective of pointing out a number of open research
issues. Finally, test-beds, industrial practice, and current standardization activities
related to WMNs are highlighted.
The authors [2] summarize the technologies and challenges related to
wireless mesh networks. With technologies such as 802.11i and WPA in wireless
LANs, enterprise deployments have finally begun to embrace wireless access
networks. Wireless LAN technology has often been approached cautiously in
enterprise deployments, partly due to well-known and easily exploitable attacks on
early 802.11 security technology and partly due to the lack of physical control of
the access medium. Over the past several years, early adoption of some 802.11i
security features by the Wi-Fi Alliance in the Wi-Fi Protected Access (WPA)
interoperability forums, as well as the standardization of the 802.11i security
amendment, have greatly improved authentication, encryption and integrity
capabilities. However, new challenges with wireless mesh architectures using
802.11, the pending solutions with 802.11s, and the security pitfalls of metro Wi-Fi
networks rekindle many of the original threats and technology-maturity issues of
ubiquitous wireless access networking. The security technologies discussed cover
current industry capabilities, 802.11s, and the overall security architecture.
Unlike traditional wireless networks [3], hosts in a Hybrid Wireless Mesh
Network (HWMN) may rely on each other to keep the network connected.
Operators and wireless Internet service providers are choosing HWMNs to offer
Internet connectivity, as they allow fast, easy and affordable network deployments.
One main challenge in the design of these networks is their vulnerability to security
attacks. The authors investigate the main security issues, focusing on the most
vulnerable part of the hybrid WLAN mesh infrastructure: the ad hoc network part.
Through the proposed security architecture for an operator’s hybrid WLAN mesh
network, the authors identify the new challenges and opportunities posed by this
emerging networking environment and explore approaches to secure users, data
and communications. By analyzing the strengths and weaknesses of secured
routing protocols, they design a new robust routing structure called Macrograph
(MG). The MG structure is extracted from the mesh ad hoc network for each
communication to be established between a source and a destination. In particular,
MG is a robust structure based on a node-disjoint path routing scheme and
dynamic trust management that can be adapted to applications’ security
requirements.
The authors [4] give recommendations for wireless mesh backhaul
designs and implementations. Radio links are used to provide backhaul
connectivity for base stations of mobile networks in cases where cable-based
alternatives are not available and cannot be deployed in an economic or timely
manner. While such wireless backhauls have predominantly been used in
redundant tree and ring topologies in the past, mobile network operators have
become increasingly interested in meshed topologies for carrier-grade wireless
backhauls. However, wireless mesh backhauls are potentially more susceptible to
security vulnerabilities, given that radio links are more exposed to tampering and
given their higher system complexity. This article extends prior security threat
analysis of 3rd-generation mobile network architectures to the case of wireless
mesh backhauls. It presents a description of the security model for the considered
architecture and provides a list of the basic assumptions, security objectives, assets
to be protected and actors of the analysis. On this foundation, potential security
threats are analyzed, discussed and then assessed for their corresponding risk. The
result of this risk assessment is then used to define a set of security requirements.
2.2 WIRELESS MESH NETWORKS SECURITY
It is challenging to design a key management scheme in current mission-
critical networks to fulfill the required attributes of secure communications, such as
data integrity, authentication, confidentiality, non-repudiation and service
availability. Mission-critical networks show great potential in emergency response
and/or recovery, health care, critical infrastructure monitoring, etc. Such mission-
critical applications demand security service as “anywhere”, “anytime” and
“anyhow” [5]. The authors present a self-contained public key management scheme,
called SMOCK, which achieves almost zero communication overhead for
authentication, and offers high service availability. In this scheme, a small number
of cryptographic keys is stored off-line at individual nodes before they are
deployed in the network. To provide good scalability in terms of the number of
nodes and storage space, the authors utilize a combinatorial design of
public-private key pairs, meaning that nodes combine more than one key pair to
encrypt and decrypt messages. Cryptographic key management is challenging due
to the following characteristics of wireless communications:
• Unreliable Communications and Limited Bandwidth
• Network Dynamics
• Large Scale
• Resource Constraints
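The combinatorial key-assignment idea behind SMOCK can be sketched as follows. This is an illustrative reconstruction under assumed parameters, not the authors' exact construction: the pool size, subset size and helper names are all hypothetical.

```python
# Illustrative SMOCK-style combinatorial key assignment (hypothetical sketch):
# from a pool of m public/private key pairs, each node holds a distinct
# k-subset of private keys; a sender encrypts with the corresponding k public
# keys, so only the node holding that exact subset can decrypt.
from itertools import combinations
from math import comb

def assign_key_subsets(m, k, n_nodes):
    """Give each of n_nodes a distinct k-subset drawn from a pool of m pairs."""
    if n_nodes > comb(m, k):
        raise ValueError("pool too small: only C(m, k) distinct subsets exist")
    subsets = combinations(range(m), k)
    return {node: set(next(subsets)) for node in range(n_nodes)}

# A pool of only 6 key pairs, combined 2 at a time, already serves
# C(6, 2) = 15 nodes -- far fewer stored keys than one pair per node.
table = assign_key_subsets(m=6, k=2, n_nodes=15)
assert len({frozenset(s) for s in table.values()}) == 15  # all subsets distinct
```

The point of the combinatorial design is that the number of supported identities grows as C(m, k) while each node stores only k private keys, which is what yields the scheme's scalability in storage.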
Using Wireless Mesh Networks (WMNs) to offer Internet connectivity is
becoming a popular choice for Wireless Internet Service Providers, as it allows
fast, easy and inexpensive network deployment [6]. However, security in WMNs is
still in its infancy as very little attention has been devoted so far to this topic by the
research community.
Wireless Mesh Networks (WMNs) represent a good solution to providing
wireless Internet connectivity in a sizable geographic area; this new and promising
paradigm allows for network deployment at a much lower cost than with classic
Wi-Fi networks. A large number of Wireless Hot Spots (WHSs) is needed to deploy
a Wi-Fi network; extending further the network coverage requires the deployment of
additional WHSs, which is costly and delicate. In WMNs, it is possible to cover the
same area (or even a larger one) with only one WHS and several wireless Transit
Access Points (TAPs). The TAPs are not connected to the wired infrastructure and
therefore rely on the WHS to relay their traffic. The cost of a TAP is much lower
than that of the WHS, which makes the use of WMNs a compelling economical
case; WMNs are thus suitable for areas where it is costly to install a traditional Wi-
Fi network (e.g., buildings that do not have existing data cabling for WHSs) or for
the deployment of a temporary wireless network.
In cellular networks, a given area is divided into cells and each cell is
under the control of a base station. Each base station handles a certain number of
mobile clients that are in its immediate vicinity (i.e., communication between the
mobile clients and the base station is single-hop) and it plays an important role in the
functioning of the cellular network; the entity that plays an equivalent role in WMNs
would be the WHS. WMNs represent a simple and inexpensive solution to extend
the coverage of a WHS. However, the deployment of such networks is slowed down
by the lack of security guarantees. The author analyzes the characteristics of
WMNs and deduces three fundamental network operations that need to be secured:
(i) the detection of corrupt TAPs, (ii) the definition and use of a secure routing
protocol, and (iii) the definition and enforcement of a proper fairness metric in
WMNs. The author proposes solutions to secure these operations and, finally,
describes two future WMNs (vehicular networks and multi-operator WMNs),
briefly analyzing the new security challenges they introduce.
The WMN architecture is connectionless-oriented, with mobile and
dynamic routed-packet traffic [7]. The mesh infrastructure environment easily
forms multiple chains of wireless LANs (WLANs), coupled with the simultaneous
multi-hop transmission of data packets from peripherals via mobile gateways to the
wireless cloud. A WMN operates as an access network to other communication
technologies. This exposes the WMN to numerous security challenges, not only in
the security of the mesh transmission operation but also in the overall security
against foreign attacks. The authors survey and identify the security vulnerabilities
in Internet Protocol (IP) broadband networks and the security challenges in the
routing layer of the WMN, and explore new concepts for solving security
challenges in WMNs using Traffic Engineering (TE) security resolution
mechanisms.
In this paper, the author explores the security threats in a WMN over a
broadband network. The determination and investigation of security solutions
using traffic engineering mechanisms are explored. The author tests the influence
of security mechanisms under increased traffic loads and hop counts. For a
multi-layer comparative analysis with traditional 802.11i, the author conducts a
test observing the influence of node mobility in a multihop scenario. Further
evaluation is done on the influence of network load, end-to-end traffic delays and
delivery ratio in simulated attack scenarios. Observations of the technical
advantages of a derived and adapted traffic engineering technique are carried out.
The proposed VPN-IPSec solution applied to WMN security shows enhanced
overall performance.
Severe security threats such as DDoS also call for comparatively more
effective security resolution in WMNs. The proposed management-model security
technique demonstrates that distributed security failures caused by traffic-flooding,
grey-hole and black-hole DDoS attacks on WMN security can be prevented and
resolved using VPN-IPSec. The security model also shows efficient performance
in intrusion detection and prevention mechanisms. Analysis of the investigation
shows high performance on the different metrics used. In the analysis, the author
notes that it is very hard to provide effective security for a multi-hop wireless
mesh network because of its inherent architectural weakness. However, the author
proposes the combined, cooperative use of IP communication security mechanisms
for the prevention of, and defense against, security threats and attacks in the
WMN, as shown by the IPSec and MPLS-VPN technique. VPN-IPSec, through
authentication, encryption, cryptography and tunneling, together with the IP
security configuration and operational mechanisms of MPLS-TE, lowers the
overhead and processing load of the WMN. The improved VPN-IPSec integrates
most of the security measures needed to comprehensively secure both the data
traffic and the wireless mesh network infrastructure. The authors analyze the
advantages and the comparative strengths and weaknesses of traffic engineering
based on simulation results and evaluations.
Wireless mesh networks (WMNs) have emerged recently as a technology
for next-generation wireless networking [8]. The authors propose MobiSEC, a
complete security architecture that provides both access control for mesh users and
routers as well as security and data confidentiality of all communications that occur
in the WMN. MobiSEC extends the IEEE 802.11i standard by exploiting the
routing capabilities of mesh routers: after connecting to the access network as generic
wireless clients, new mesh routers authenticate to a central server and obtain a
temporary key that is used both to prove their credentials to neighbor nodes and to
encrypt all the traffic transmitted on the wireless backbone links. A key feature in
the design of MobiSEC is its independence from the underlying wireless technology
used by network nodes to form the backbone; furthermore, MobiSEC permits
seamless mobility of both mesh clients and routers. The authors implement
MobiSEC in a real-life test-bed and measure its performance in different network
scenarios. Numerical results show that the proposed architecture considerably
increases WMN security with a negligible impact on network performance, thus
representing an effective solution for wireless mesh networking.
MobiSEC tackles the security problems of both the access and backbone
areas of WMNs, providing an effective and transparent security solution for end
users and mesh nodes. The authors implement the proposed security architecture in
MobiMESH, a complete wireless mesh network framework, and test it in
several realistic network scenarios.
A simple self-propagating worm can quickly spread across the Internet
and cause severe damage to society [9]. Facing this great security threat, we need
to build an early detection system that can detect the presence of a worm in the
Internet as quickly as possible, in order to give people accurate early-warning
information and possible reaction time for countermeasures. This paper presents an
Internet worm monitoring system. Then, based on the idea of “detecting the trend,
not the burst” of monitored illegitimate traffic, the authors present a “trend
detection” methodology to detect a worm at its early propagation stage by using
Kalman filter estimation, which is robust to background noise in the monitored
data. In addition, for uniform-scan worms such as Code Red, the system can
effectively predict the overall vulnerable population size and accurately estimate
how many computers are really infected in the global Internet based on the biased
monitored data. For monitoring a non-uniform-scan worm, especially a
sequential-scan worm such as Blaster, the authors show that it is crucial for the
address space covered by the worm monitoring system to be as distributed as
possible.
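The "trend detection" idea can be illustrated with a minimal scalar Kalman filter that smooths noisy monitor counts under an assumed early-stage exponential growth model. This is a simplified sketch of the idea, not the authors' estimator (which also estimates the infection rate online); the growth rate, noise variances and synthetic data below are made-up assumptions.

```python
# Minimal scalar Kalman filter sketch for worm "trend detection": smooth noisy
# monitor readings under an assumed early-stage exponential growth model
# x_{t+1} = (1 + r) * x_t. Illustrative only; parameters are hypothetical.
import random

def kalman_trend(observations, r, q=1.0, obs_var=400.0):
    """Return filtered estimates of the true infected population."""
    x, p = observations[0], obs_var      # initial state and variance
    estimates = []
    for z in observations:
        # predict: assumed exponential growth of the infected population
        x, p = (1 + r) * x, (1 + r) ** 2 * p + q
        # update: blend the prediction with the noisy monitor reading z
        gain = p / (p + obs_var)
        x, p = x + gain * (z - x), (1 - gain) * p
        estimates.append(x)
    return estimates

random.seed(1)
true = [100 * 1.1 ** t for t in range(40)]        # synthetic ground-truth spread
noisy = [x + random.gauss(0, 20) for x in true]   # biased/noisy monitored data
est = kalman_trend(noisy, r=0.1)
```

Because the filter tracks the assumed growth trend rather than each noisy burst, its estimates stay much closer to the true infection curve than the raw monitor readings do, which is precisely the robustness to background noise claimed in [9].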
The authors propose a monitoring and early-detection system for Internet
worms to provide an accurate triggering signal for mitigation mechanisms in the
early stage of a future worm. Such a system is needed in view of the propagation
scale and speed of past worms. It is fortunate that previous worms have not been
very malicious; the same cannot be said of future worms. The analysis and
simulation studies indicate that such a system is feasible, and the “trend detection”
methodology poses many interesting research issues. This paper should generate
interest and participation in this topic, and eventually lead to an effective Internet
worm monitoring and early detection system.
Since the days of the Morris worm, the spread of malicious code has been
the most imminent menace to the Internet. Worms use various scanning methods to
spread rapidly [10]. Worms that select scan destinations carefully can cause more
damage than worms employing random scanning. This paper analyzes various scan
techniques. The authors propose a generic worm detection architecture that
monitors malicious activities, and evaluate an algorithm to detect the spread of
worms using real-time traces and simulations. The authors find that the solution
can detect worm activities even when only 4% of the vulnerable machines are
infected. The results bring insight to the future battle against worm attacks.
When attackers are more sophisticated, probing is fundamentally not a
costly process. From the discussion above, it seems that the game would favor the
attackers when Internet links are fast enough and the size of the code is not critical
to the propagation speed. This does not imply that monitoring is of no use. In the
future, an efficient traffic monitoring infrastructure will be an important part of
global intrusion detection systems. A consequence of the worm detection method
is that attackers will have to use a limited number of IP addresses to scan the
Internet; therefore, the impact of worm scanning on Internet traffic will be
reduced.
The authors find that, as faster backbone links and higher-capacity hosts
become affordable to attackers, it will be more difficult to detect worm scanning
from Internet traffic. However, the detection methods can still be useful in that
they force the attacker to generate less traffic and to scan more slowly and
cautiously. The authors design two new scan techniques, Routable scan and
Divide-Conquer scan. Both use a routable IP address list as the destination base
from which scan targets are selected. A routable worm is easy to implement and
poses a big menace to network security; one has to keep in mind that the next
worm incident may be worse. The authors observe that the number of false alarms
increases in the case of a DDoS attack or a surge of visits to a popular website.
Future work lies in developing an integrated approach to further improve the
proposed technique and an efficient algorithm to fight worm attacks.
Security analysts must observe and analyze unusual activity on multiple
firewalls, intrusion detection systems or hosts [11]. A worm might not be
positively identified until it has already spread to most of the Internet, eliminating
many defensive options. In this paper, the authors present an automated system
that can identify active worms seconds or minutes after they first begin to spread, a
necessary precursor to halting the spread of a worm rather than simply cleaning up
afterward. The implemented system collects ICMP Unreachable messages from
instrumented network routers, identifies those patterns of unreachable messages
that indicate malicious scanning activity, and then searches for patterns of
scanning activity that indicate a propagating worm. The authors examine the
problem of active worms, describe the ICMP-based detection system, and present
simulation results.
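The core of such an ICMP-based detector can be sketched simply: count the distinct unreachable destinations reported per source, and flag sources that exceed a threshold, since a randomly scanning worm hits many non-existent or unreachable addresses. The threshold and event format here are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of an ICMP-Unreachable-based scan detector: routers report
# (source, failed_destination) pairs, and a collector flags sources with too
# many distinct unreachable targets -- the signature of random scanning.
# The threshold value is an illustrative choice, not from the paper.
from collections import defaultdict

def find_scanners(unreachable_events, threshold=10):
    """unreachable_events: iterable of (source_ip, failed_destination) pairs."""
    failed = defaultdict(set)
    for src, dst in unreachable_events:
        failed[src].add(dst)                 # distinct unreachable targets
    return {src for src, dsts in failed.items() if len(dsts) >= threshold}

events = [("10.0.0.5", f"192.168.1.{i}") for i in range(25)]   # scanning host
events += [("10.0.0.7", "192.168.1.1")] * 5                    # benign retries
assert find_scanners(events) == {"10.0.0.5"}
```

Counting distinct destinations, rather than raw message volume, is what separates a scanner from a benign host retrying one dead peer.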
2.3 OPTIMIZATION
Transmitter power control can be used to concurrently achieve several
key objectives in wireless networking, including minimizing power consumption
and prolonging the battery life of mobile nodes, mitigating interference and
increasing network capacity, and maintaining the required link Quality of Service
(QoS) by adapting to node movements, fluctuating interference, channel
impairments and so on [12]. Moreover, power control can be used as a vehicle for
implementing several basic network operations on-line, including admission
control, channel selection and switching, and handoff control. The author considers
issues associated with the design of power-sensitive wireless network
architectures, which utilize power efficiently to establish user communication at
the required QoS levels.
Besides reviewing some recent developments in power control, the author
formulates some general associated concepts which have wide applicability to
wireless network design. These concepts are synthesized into a framework for
power-sensitive network architectures, based on some key justifiable points.
Various important relevant issues are highlighted and discussed, as well as several
directions for further research in this area. Overall, a first step is taken toward the
design of power-sensitive network architectures for next-generation wireless
networks.
The author identifies a number of key concepts and issues concerning the
potential of adaptive power control in wireless networking. A synthesis of these
concepts leads to a justified framework for power-sensitive architectures based on
the Distributed Power Control (DPC)/Active Link Protection (ALP)/Voluntary
Drop-Out/Forced Drop-Out/Probing suite of algorithms. Important related issues
of quick QoS estimation, power conservation, and so on are also highlighted. This
is a first step toward designing power-sensitive wireless network architectures.
The authors [13] describe a distributed position-based network protocol
optimized for minimum energy consumption in mobile wireless networks that
support peer-to-peer communications. Given any number of randomly deployed
nodes over an area, they illustrate a simple local optimization scheme, executed at
each node, that guarantees strong connectivity of the entire network and attains the
globally minimum energy solution for stationary networks. Due to its localized
nature, the protocol proves to be self-reconfiguring and stays close to the minimum
energy solution when applied to mobile networks.
Applications where minimum-energy networking can yield significant
benefits include the digital battlefield, where soldiers are deployed over unfamiliar
terrain, and multi-sensor networks, where sensors communicate with each other
with no base station nearby. Even in the presence of base stations, such as in
cellular phone systems, minimum-energy network design can allow longer battery
life and mitigate interference. In this paper, the authors present a position-based
algorithm to set up and maintain a minimum-energy network between users that
are randomly deployed over an area and are allowed to move with random
velocities.
The authors denote these mobile users as “nodes” on the two-dimensional plane.
The network protocol reconfigures the links dynamically as nodes move around,
and its operation does not depend on the number of nodes in the system.
Simulation results are used to verify the performance of the protocol.
Span is a power-saving technique for multi-hop wireless networks that
reduces energy consumption without significantly diminishing the capacity or
connectivity of the network [14]. Span builds on the observation that when a
region of a shared-channel wireless network has a sufficient density of nodes, only
a small number of them need be on at any time to forward traffic for active
connections. It is
a distributed, randomized algorithm where nodes make local decisions on whether to
sleep, or join a forwarding backbone as a coordinator. Each node bases its decision
on an estimate of how many of its neighbors are awake and the amount of energy
available to it. It describes a randomized algorithm where coordinators rotate with
time, demonstrating how localized node decisions lead to a connected,
capacity-preserving global topology. The improvement in system lifetime due to
Span increases as the ratio of idle-to-sleep energy consumption increases, and as the density
of the network increases. Simulations show that, with a practical energy model,
the system lifetime of an 802.11 network in power-saving mode with Span is a
factor of two better than without it. Span integrates nicely with 802.11: when run
in conjunction with the 802.11 power-saving mode, Span improves communication
latency, capacity and system lifetime.
Span adaptively elects coordinators from all the nodes in the network, and
rotates them over time. Span coordinators stay awake and perform multi-hop
packet routing within the network, while other nodes remain in power-saving
mode and periodically check whether they should wake up and become a
coordinator.
With Span, each node uses a random backoff delay to decide whether to
become a coordinator. This delay is a function of the number of nodes in the
neighborhood that can be bridged via this node and the amount of energy it has
remaining. The results show that Span not only preserves latency but also provides
significant energy savings. For example, for a practical range of node densities and
a practical energy model, the simulations show that the system lifetime with Span
is more than a factor of two better than without it.
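The coordinator backoff idea can be sketched as below. The paper defines a specific announcement-delay formula; this hedged version only captures its shape, under the assumption that more remaining energy and more bridgeable neighbor pairs should yield a shorter expected delay, so the most useful nodes tend to volunteer first.

```python
# Illustrative sketch (shape only, not Span's exact formula): a node's
# coordinator-announcement delay shrinks when it has more remaining energy
# and can bridge more pairs of its neighbors.
import random

def backoff_delay(energy_remaining, energy_max, pairs_bridged, neighbors,
                  round_trip=0.01, rng=random):
    """Smaller delay => the node volunteers as coordinator sooner."""
    utility = pairs_bridged / max(1, neighbors * (neighbors - 1) / 2)
    # more energy and higher bridging utility both reduce the score
    score = (1 - energy_remaining / energy_max) + (1 - utility)
    return (score + rng.random()) * neighbors * round_trip

rng = random.Random(0)
full = [backoff_delay(1.0, 1.0, 10, 5, rng=rng) for _ in range(200)]
low = [backoff_delay(0.2, 1.0, 10, 5, rng=rng) for _ in range(200)]
# on average, the energy-rich node volunteers sooner
assert sum(full) / len(full) < sum(low) / len(low)
```

The random term breaks ties so that several equally useful neighbors do not all announce simultaneously, which is also why coordinator duty rotates as batteries drain.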
To build an enterprise network that delivers real value to the business, it is
no longer enough to simply add bandwidth: the bandwidth must be managed more
effectively than ever before. This book [15] shows how to reduce costs, delay
expenditures, and deliver new applications with precisely the service quality they
require. The book explains how to do the following:
1. Understand the technologies and business trends that are driving
service level management in the enterprise network.
2. Learn advanced techniques for differentiating between low-priority
and high-priority applications, and then delivering bandwidth in the
appropriate quantities, within appropriate latency and jitter
parameters.
3. Compare Class of Service (CoS) approaches with Quality of
Service (QoS) approaches such as ATM's QoS and Resource
Reservation Protocol for IP networks.
4. Understand how to classify customers and give them preferred
access to Internet and other network resources; handle peak loads
more effectively; delay network upgrades; and much more.
The book includes four detailed case studies representing financial
services, consulting, retail and academic organizations. The definitive guide to
policy-based IP traffic management shows how to guarantee the performance of
mission-critical Internet applications.
Operations Research (OR) is a science which deals with problem
formulation, solutions and, finally, appropriate decision making [16]. Scientists
and technocrats form teams to study problems arising out of difficult situations
and, at a later stage, to find solutions to these problems. It is research designed to
determine the most efficient way to do something new. OR is the use of
mathematical models, statistics and algorithms to aid decision-making. It is most
often used to analyse complex real-life problems, typically with the goal of
improving or optimizing performance. Some decisions can be taken by common
sense, sound judgment and experience without using mathematics; in some cases
this is not possible, and the use of other techniques is inevitable.
Information is generated at certain nodes and needs to reach a set of
designated gateway nodes [17]. Each node may adjust its power within a certain
range, which determines its set of possible one-hop neighbours. Traffic is
forwarded through multiple hops when the intended destination is not within
immediate reach. The nodes have limited initial amounts of energy, consumed at
different rates depending on the power level and the intended receiver. The
authors propose algorithms to select the routes and the corresponding power levels
such that the time until the batteries of the nodes drain out is maximized. The
algorithms are local and amenable to distributed implementation. When there is a
single power level, the problem reduces to a maximum flow problem with node
capacities, and the algorithms converge to the optimal solution. When there are
multiple power levels, the achievable lifetime is close to the optimum (as
computed by linear programming) most of the time. It turns out that, in order to
maximize the lifetime, the traffic should be routed such that energy consumption is
balanced among the nodes in proportion to their energy reserves, instead of routing
to minimize the absolute consumed power.
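The energy-balancing intuition above can be illustrated with a toy route selector that, instead of minimizing total transmit energy, maximizes the smallest residual battery along the chosen path. The graph, costs and reserves are made-up, and brute-force path enumeration stands in for the paper's distributed algorithms.

```python
# Hedged sketch of lifetime-maximizing route selection: pick the path whose
# bottleneck node keeps the most residual energy after forwarding, instead of
# the minimum-total-energy path. Topology and numbers are illustrative.
def all_paths(adj, src, dst, path=None):
    """Yield all simple src->dst paths in a small directed graph."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in adj[src]:
        if nxt not in path:
            yield from all_paths(adj, nxt, dst, path)

def max_min_lifetime_route(adj, energy, tx_cost, src, dst):
    """Choose the path that maximizes the minimum post-transmission energy."""
    def bottleneck(p):
        return min(energy[n] - tx_cost for n in p[:-1])  # gateway spends nothing
    return max(all_paths(adj, src, dst), key=bottleneck)

adj = {"s": ["a", "b"], "a": ["g"], "b": ["g"], "g": []}
energy = {"s": 10, "a": 2, "b": 8, "g": 5}
route = max_min_lifetime_route(adj, energy, tx_cost=1, src="s", dst="g")
assert route == ["s", "b", "g"]   # avoids the nearly drained node "a"
```

Note that the minimum-total-energy criterion would treat both routes identically here; only the max-min criterion spares the almost-depleted node, which is the balancing behavior the paper argues maximizes network lifetime.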
The general inverse maximum flow problem is considered in [18], where
lower and upper bounds on the flow are changed so that a given feasible flow
becomes a maximum flow and the modification cost, measured by a sum-type
weighted Hamming distance, is minimized. The authors present a combinatorial
algorithm for solving the problem that runs in strongly polynomial time.
The minimum cost flow problem with interval data can be solved using
two minimum cost flow problems with crisp data. In [19], this idea is extended to
solving the minimum flow problem with interval-valued lower bounds, upper
bounds and flows. This problem can be solved using two minimum flow problems
with crisp data. The result is then extended to networks with fuzzy lower bounds,
upper bounds and flows.
In this paper, a new method to solve the minimum cost flow problem with
interval data is presented. First, a minimum cost flow problem with the lower
bounds, flows and costs is solved, along with a minimum cost flow problem with
the upper bounds, flows and costs. The method then combines the two solutions
into an interval solution. The authors prove that the interval solution is optimal for
the minimum cost flow problem with interval bounds, flows and costs, and the
idea is extended to solve the minimum flow problem with interval bounds and
flows.
In this paper, the author develops the wave preflow algorithm for
minimum flow [20]. This algorithm is a special implementation of the generic
preflow algorithm, a hybrid between the FIFO preflow algorithm and the
highest-label preflow algorithm for minimum flow. It examines the active nodes in
non-decreasing order of their distance labels, and the examination of a node
terminates when either the node’s deficit becomes zero or the node is relabeled.
The wave preflow algorithm for minimum flow runs in O(n³) time.
The literature on the network flow problem is extensive. Over the past 50
years, researchers have made continuous improvements to algorithms for solving
several classes of problems, designing many of the fundamental algorithms for
network flow, including methods for the maximum flow and minimum cost flow
problems. In recent decades, many research contributions have improved the
computational complexity of network flow algorithms by using enhanced data
structures, techniques for scaling the problem data, etc.
2.4 GROUP KEY MANAGEMENT
In this paper, the author shows how to divide data D into n pieces in such
a way that D is easily reconstructible from any k pieces, but even complete
knowledge of k-1 pieces reveals absolutely no information about D [21]. This
technique enables the construction of robust key management schemes for
cryptographic systems that can function securely and reliably even when
misfortunes destroy half the pieces and security breaches expose all but one of the
remaining pieces.
The author generalises the problem to one in which the secret is some
data D (e.g., the safe combination) and in which non-mechanical solutions (which
manipulate this data) are also allowed. The goal is to divide D into n pieces D1,
. . . , Dn in such a way that: (1) knowledge of any k or more pieces Di makes D
easily computable; (2) knowledge of any k-1 or fewer pieces Di leaves D
completely undetermined (in the sense that all its possible values are equally
likely). Such a scheme is called a (k, n) threshold scheme.
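The (k, n) threshold scheme described above is Shamir's polynomial construction, and it is compact enough to sketch directly: the secret D is the constant term of a random degree-(k-1) polynomial over a prime field, each piece Di is a point on that polynomial, and any k points recover D by Lagrange interpolation at x = 0. The small prime below is chosen for illustration only.

```python
# Shamir's (k, n) threshold scheme [21] over a prime field. Any k of the n
# shares reconstruct the secret; k-1 shares leave it completely undetermined.
import random

P = 2_147_483_647  # a Mersenne prime, large enough for this demo

def split(secret, k, n, rng=random):
    """Produce n shares, any k of which reconstruct the secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[2:]) == 123456789   # any 3 of the 5 pieces work
```

This directly realizes the robustness claim in [21]: with k = 3 and n = 5, two pieces may be destroyed and the secret still recovered, while an adversary holding two pieces learns nothing.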
Group communications can benefit from IP multicast to achieve scalable
exchange of messages. However, there is a challenge in effectively controlling
access to the transmitted data [22]. IP multicast by itself does not provide any
mechanism for preventing non-group members from having access to the group
communication. Although encryption can be used to protect messages exchanged
among group members, distributing the cryptographic keys becomes an issue.
Researchers have proposed several different approaches to group key
management. These approaches can be divided into three main classes: centralized
group key management protocols, decentralized architectures and distributed key
management protocols. The three classes are described here and an insight is given
into their features and goals. The area of group key management is then surveyed,
and the proposed solutions are classified according to those characteristics.
In this article, the authors present a survey of the secure group
communication area, particularly regarding the secure distribution and
refreshment of keying material. They review several proposals, placing them
into three main classes: centralized group key management protocols, which try
to minimize the requirements placed on the KDC and the group members;
decentralized architectures, which divide a large group into smaller subgroups
in order to make the management more scalable; and finally, distributed key
management protocols, which give all members the same responsibilities. Every
class has its particularities, presenting different features, requirements and
goals.
The analysis makes it clear that there is no single solution that can achieve
all requirements. While centralized key management schemes are easy to
implement, they tend to impose an overhead on a single entity. Protocols based
on hierarchical subgrouping are relatively harder to implement and raise other
issues, such as interfering with the data path or imposing security hazards on
the group. Distributed key management, by design, is simply not scalable.
Additionally, the best solution for a particular application may not be best
for another; hence it is important to fully understand the requirements of the
application before selecting a security solution.
WiMAX is a next-generation technology that offers broadband wireless
access over long distances [23]. As WiMAX standards expand from a fixed,
line-of-sight, point-to-multipoint, high-frequency infrastructure to a
lower-frequency, non-line-of-sight mobile system, it becomes open to more
security threats than other wireless systems. This paper presents the different
security issues present in the Privacy and Key Management protocol along with
the proposed solutions.
Secure group communication has become an important issue in many
applications [24]. Both intra-group and inter-group multicast traffic must be
protected by shared secret keys. In order to communicate securely within the
same group and among different groups, the authors employ a polynomial P to
achieve efficient intra-group key refreshment and generate a polynomial H(x) to
create an inter-group key. The proposed polynomial-based key management
schemes have the following advantages: (1) group members and the group
controller can share the intra-group key without any encryption/decryption;
(2) when group membership changes, the group controller updates and
distributes the renewed group keys, and the proposed mechanism reduces the
number of re-keying messages; (3) the proposed mechanism lessens the storage
overhead of group members and the group controller by adopting a
polynomial-based key management scheme; (4) compared with previous approaches,
the group controller does not need to broadcast the heavy messages otherwise
necessary for creating an inter-group key, and hence introduces only a small
amount of broadcast traffic to the group members. An analysis of the proposed
mechanism is conducted to demonstrate the improvements.
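One well-known way to let members recover an intra-group key without any encryption or decryption is a masking polynomial whose roots are the members' secrets. The sketch below uses that construction as an illustrative assumption; it is not necessarily the exact scheme of [24]:

```python
p = 2**61 - 1  # toy prime modulus (illustrative; real schemes use larger fields)

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first) mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def build_broadcast(member_secrets, group_key):
    """Controller builds P(x) = prod(x - s_i) + K mod p and broadcasts
    only the coefficients; no per-member encryption is needed."""
    poly = [1]
    for s in member_secrets:
        poly = poly_mul(poly, [(-s) % p, 1])   # multiply by (x - s)
    poly[0] = (poly[0] + group_key) % p
    return poly

def extract_key(poly, my_secret):
    """A member evaluates P at its own secret: the product term vanishes,
    leaving exactly the group key K."""
    return sum(c * pow(my_secret, i, p) for i, c in enumerate(poly)) % p
```

On a membership change, the controller simply rebuilds the polynomial over the current secrets with a fresh K, which is why a single broadcast suffices for re-keying in this style of scheme.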
A secret sharing scheme is a method which distributes shares of a secret to a
set of participants in such a way that only specified groups of participants
can reconstruct the secret by pooling their shares [25]. Secret sharing is
related to key management and key distribution, problems common to all
cryptosystems. Secret sharing is also used in multi-party secure protocols.
Further, secret sharing schemes have natural applications in access control and
cryptographic key initialization. Key transfer protocols rely on a mutually
trusted Key Generation Center (KGC) to select session keys and transport them
secretly to all communicating entities.
Mobile ad hoc networks (MANETs) can be defined as collections of
large numbers of mobile nodes that form temporary networks without the aid of
any existing network infrastructure or central access point [26]. Due to the
nature of MANETs, designing and maintaining security in an open and
distributed communication environment is a challenging task for researchers.
This paper proposes a security architecture for a MANET grid and optimal key
management by combining a symmetric key technique with an elliptic curve
public key technique. Under the proposed architecture and optimal key
management, threats including the man-in-the-middle attack and the black hole
attack can be effectively eliminated. The proposed scheme offers strong
security, scalability, fault tolerance, accessibility, and efficiency.
As applications of secure multicast in networks continue to grow, the demand
for an efficient scheme to manage group keys for secure group communication
becomes more urgent [27]. In this paper, the authors propose a new key tree
structure for group key management. With this optimal tree structure, system
resources such as network bandwidth can be saved. The authors devise an
algorithm to generate this optimal tree and show that it can be implemented
efficiently. They also design an adaptive system for group key management
which consists of four components: a request receiver, a key tree update
controller, a delay calculator and a request predictor. This system can
maintain the optimality of the key tree dynamically. Theoretical analysis and
simulation results verify that the performance of the scheme is better than
that of schemes based on traditional tree structures.
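The advantage of tree structures can be seen from a rough re-keying message count under a simplified model; this is an illustrative assumption, not the exact accounting of [27] or [28]. In a flat scheme the controller must reach every remaining member individually, while a balanced binary key tree only replaces the keys on one leaf-to-root path:

```python
import math

def flat_rekey_messages(n):
    """Flat scheme: on a member leave, the controller sends the new group
    key individually to each of the n-1 remaining members."""
    return n - 1

def lkh_rekey_messages(n):
    """Balanced binary key tree (LKH-style model): every key on the leaving
    member's path to the root is replaced, and each new key is encrypted
    under its node's two children, giving roughly 2*log2(n) messages."""
    return 2 * math.ceil(math.log2(n))
```

For a group of 1024 members a leave costs 1023 messages in the flat model but only about 20 in the tree model, which is the bandwidth saving the optimal-tree work builds on.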
In key management schemes that realize secure multicast communications
encrypted by group keys on a public network, tree structures are often used to
update the group keys efficiently [28]. The authors have proposed an efficient
scheme which dynamically updates the tree structures based on the withdrawal
probabilities of members. In this paper, this scheme is shown to be
asymptotically optimal with respect to the cost of withdrawal. Furthermore, a
new key management scheme is proposed which takes account of the key update
costs of joining in addition to those of withdrawal. The proposed scheme is
also asymptotically optimal, and simulation shows that it attains good
performance in non-asymptotic cases.
2.5 TESLA
Wireless networks will consist of low-powered, compute-constrained
devices [29]. These devices will have limited ability to perform the expensive
computational operations associated with public key cryptography, which will
limit the usefulness of conventional authentication mechanisms based on public
key certificates in these domains. The authors introduce an alternative to
conventional public key certificates that is based upon symmetric key
cryptography and the principle of delayed key disclosure. The work formalizes
concepts presented in earlier work on a broadcast authentication protocol
known as TESLA. TESLA certificates rely upon a trade-off between computation
and authentication delay in order to achieve a certificate infrastructure that
reduces the computational complexity associated with certificate verification
when compared with traditional public key infrastructure certificates.
A wireless network has one or more base stations that talk to a large set of
wireless sensors, where each sensor's lifetime depends on a small battery
whose power is mostly consumed during communication [30]. Before such a
network can be applied, many security problems must be solved, yet traditional
security protocols usually incur a lot of communication overhead.
Reference [31] likewise addresses secure group communication in which both
intra-group and inter-group multicast traffic must be protected by shared
secret keys. To communicate securely within the same group and among different
groups, it employs a polynomial P to achieve efficient intra-group key
refreshment and generates a polynomial H(x) to create an inter-group key. The
contributions of the proposed schemes are: (1) sharing the intra-group key
between the group controller and the group members does not require any
encryption/decryption mechanism; (2) when membership changes happen, the keys
are renewed immediately, and the designed mechanism reduces the number of
re-keying messages during group membership changes; (3) the adoption of the
polynomial used for deriving an intra-group key reduces the key storage
overhead at the group members and the group controller; (4) after the
intra-group key is derived, the members self-generate the polynomial functions
necessary for creating an inter-group key, which helps to reduce the
communication overhead at the group controller.
The TESLA multicast stream authentication protocol is distinguished
from other types of cryptographic protocols in both its key management scheme and
its use of timing [32]. It takes advantage of the stream being broadcast to
periodically commit to and later reveal keys used by a receiver to verify that packets
are authentic, and it uses both inductive reasoning and time arithmetic to allow the
receiver to determine that an adversary cannot have prior knowledge of a key that
has just been revealed. While an informal argument for the correctness of TESLA
has been published, no mechanized proof appears to have previously been done for
TESLA or any other protocol of the same variety. This paper reports on a
mechanized correctness proof of the basic TESLA protocol based on establishing a
sequence of invariants for the protocol using the tool TAME. It discusses the
organization and process used in the proof, and the possibilities for reusing these
techniques in correctness proofs of similar protocols, starting with more
sophisticated versions of TESLA.
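The key-chain commitment and delayed disclosure that these analyses reason about can be sketched as follows. SHA-256, HMAC, and the chain length are illustrative assumptions, not the parameterization of any particular TESLA variant:

```python
import hashlib
import hmac

def make_chain(seed, length):
    """One-way key chain: each key is the hash of the next one, so
    K[i] = H(K[i+1]). The sender commits to chain[0] up front."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()  # chain[0] is the commitment, chain[-1] the secret seed
    return chain

def mac_packet(key, payload):
    """Sender MACs each packet with the (still undisclosed) interval key."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_disclosed_key(commitment, disclosed, steps):
    """Receiver checks a later-disclosed key against the commitment by
    hashing it forward the appropriate number of intervals."""
    k = disclosed
    for _ in range(steps):
        k = hashlib.sha256(k).digest()
    return k == commitment
```

A receiver buffers a packet of interval i, waits for the sender to disclose chain[i] some intervals later, checks it against the commitment with `verify_disclosed_key`, and only then verifies the buffered MAC; loose time synchronization guarantees the key could not have been known to an adversary when the packet arrived.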
The authors present strategies that reduce the delay associated with multicast
authentication, make more efficient use of receiver-side buffers, make delayed
key disclosure authentication more resilient to buffer-overflow denial-of-
service attacks, and allow for multiple levels of trust in authentication [33].
Throughout this paper, the main focus of discussion is the popular multicast
authentication scheme Timed Efficient Stream Loss-tolerant Authentication
(TESLA), which is based upon the delayed key disclosure principle. Like other
schemes based upon delayed key disclosure, TESLA is susceptible to
Denial-of-Service (DoS) attacks and is not well suited for delay-sensitive
applications.
The Internet of Things (IoT) is an emerging concept referring to
networked everyday objects that interconnect with each other via wireless
sensors attached to them [34]. TESLA is a source authentication protocol for
broadcast networks. The scalability of TESLA is limited by the unicast-based
distribution of its initial parameters. μTESLA is a low-energy version of
TESLA designed for wireless sensor networks (WSNs), but it cannot tolerate DoS
attacks. TESLA++ is the DoS-tolerant version designed for VANETs; it is
unsuitable for WSNs because of its higher power consumption. To realize secure
and DoS-robust broadcast authentication in a hybrid vehicle-sensor network,
the author provides a TESLA-based protocol that resists DoS attacks with lower
power consumption. Analysis results demonstrate that this protocol outperforms
both μTESLA and TESLA++.
The author summarizes the soft verification of messages protected by
symmetric cryptographic check values, i.e., Message Authentication Codes
(MACs) [35]. Soft verification is introduced as an extension of the hard, or
standard, verification that is usual in cryptographic applications today. An
algorithm for the iterative correction of messages protected by MACs is
analyzed theoretically using probability theory. The results of the analysis
are used to define the most important parameter for the correct operation of
the algorithm, the threshold value. The theoretical analysis is also compared
with simulation results for the threshold value used in the soft verification
algorithm; the close agreement confirms the theoretical analysis. At the end
of the paper, simulation results are shown demonstrating a considerable gain
in corrected messages and their MACs.
Authenticated key exchange protocols allow two participants A and B,
communicating over a public network and each holding an authentication means,
to exchange a shared secret value [36]. Methods designed to deal with this
cryptographic problem ensure A (resp. B) that no participant other than B
(resp. A) can learn any information about the agreed value, and often also
ensure A and B that their respective partner has actually computed this value.
A natural extension of this cryptographic method is to consider a pool of
participants exchanging a shared secret value and to provide a formal
treatment for it. Starting from the famous two-party Diffie-Hellman (DH) key
exchange protocol and its authenticated variants, security experts have
extended it to the multi-party setting over more than a decade and, in the
past few years, completed a formal analysis in the framework of modern
cryptography. This paper synthesizes the body of work on provably secure
authenticated group DH key exchange.
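The algebra underlying multi-party DH, namely commuting modular exponentiations, can be sketched with toy parameters. This is an illustration only: real group DH protocols such as those formalized in [36] exchange intermediate values in up-flow and down-flow rounds so that no party ever handles another's secret, and they use cryptographically large groups. Here the `partial_values` helper simply stands in for the protocol's final broadcast round:

```python
p = 2**61 - 1  # toy prime; real deployments use safe primes or elliptic curves
g = 5          # generator (illustrative)

def group_key_chain(secret_exponents):
    """Each party in turn exponentiates the running value with its secret;
    exponentiation commutes, so the result g^(x1*x2*...*xn) mod p is the
    same whatever order the parties act in."""
    k = g
    for x in secret_exponents:
        k = pow(k, x, p)
    return k

def partial_values(secret_exponents):
    """For each party i, the value g^(product of all secrets except x_i).
    In a real protocol these are the broadcast down-flow values; raising
    its own value to x_i lets party i derive the group key locally."""
    out = []
    for i in range(len(secret_exponents)):
        v = g
        for j, x in enumerate(secret_exponents):
            if j != i:
                v = pow(v, x, p)
        out.append(v)
    return out
```

Each party computes `pow(partial_values(...)[i], x_i, p)` and obtains the same group key, which is the property the formal models of group DH build their security definitions around.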
In this paper, the authors provide a formal model, security definitions, and
methods for authenticated group Diffie-Hellman key exchange. This work should
allow cryptographic experts to properly analyze the security of a group
key exchange protocol, to address in a rigorous way the security requirements a
given method aims to achieve, and to come up with provably secure protocols.
The proposed model is sufficiently generic to be adapted to many cryptographic
scenarios well suited for key exchange in a group. In addition, the authors
perform a security analysis of a protocol suite already proposed for dynamic
group Diffie-Hellman key exchange, enhance it with authentication services,
propose a modular implementation that can be used to abstract out the use of
cryptographic devices, and exhibit a formal security proof under standard
computational assumptions. This paper will enable security architects to pick
a method based not only on its efficiency but also on its (provable) security.
One of the main challenges of securing multicast communication is
source authentication: enabling receivers of multicast data to verify that the
received data originated with the claimed source and was not modified en route
[37]. The problem becomes more complex in common settings where other
receivers of the data are not trusted and where lost packets are not
retransmitted. Several source authentication schemes for multicast have been
suggested in the past, but none of these schemes is satisfactorily efficient
in all prominent parameters. The authors have recently proposed a very
efficient scheme, TESLA, which is based on initial loose time synchronization
between the sender and the receivers, followed by delayed release of keys by
the sender. This paper proposes several substantial modifications and
improvements to TESLA. One modification allows receivers to authenticate most
packets as soon as they arrive (whereas TESLA requires buffering packets at
the receiver side and provides delayed authentication only). Other
modifications improve the scalability of the scheme, reduce the space overhead
for multiple instances, and increase its resistance to denial-of-service
attacks.
The basic TESLA protocol has the following salient properties:
• Low computation overhead: on the order of one MAC function
computation per packet for both sender and receiver.