
1278 IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 9, NO. 8, AUGUST 2014

Incorporating Attack-Type Uncertainty Into Network Protection

Andrey Garnaev, Melike Baykal-Gursoy, and H. Vincent Poor, Fellow, IEEE

Abstract— Network security against possible attacks involves making decisions under uncertainty. Not only may one be ignorant of the place, the power, or the time of potential attacks, one may also be largely ignorant of the attacker's purpose. To illustrate this phenomenon, this paper proposes a simple Bayesian game-theoretic model of allocating defensive (scanning) effort among nodes of a network in which a network's defender does not know the adversary's motivation for intruding on the network, e.g., to bring the maximal damage to the network (for example, to steal credit card numbers or information on bank accounts stored there) or to infiltrate the network for other purposes (for example, to corrupt nodes for a further distributed denial of service botnet attack on servers). Due to limited defensive capabilities, the defender faces the dilemma of either: 1) focusing on increasing defense of the most valuable nodes, and in turn, increasing the chance for the adversary to sneak into the network through less valuable nodes, or 2) taking care of defense of all the nodes, and in turn, reducing the level of defense of the most valuable ones. An explicit solution to this dilemma is suggested based on the information available to the defender, and it is shown how this information allows the authorities to increase the efficiency of a network's defense. Some interesting properties of the rivals' strategies are presented. Notably, the adversary's strategy has a node-sharing structure and the adversary's payoffs have a discontinuous dependence on the probability of the attack's type. This discontinuity implies that the defender has to take into account the human factor, since some threshold values of this inclination in the adversary's behavior could make the defender's policy very sensitive to small perturbations, while in other situations it produces minimal impact.

Index Terms— Bayesian equilibrium, network protection, search, scan, computer networks, infrastructure networks.

I. INTRODUCTION

A DANGER to daily life, economic vitality and national security stems from the cyber intrusion which has increased dramatically over the last decade, disrupting critical operations, imposing high costs on the economy [1] and exposing sensitive personal and business information (say, the hacking of Bush family e-mails [2], theft of credit card numbers [3], and cracking into the Wall Street Journal [4] and NASA computers [5], etc.).

Manuscript received June 25, 2013; revised October 30, 2013; accepted May 21, 2014. Date of publication June 6, 2014; date of current version July 10, 2014. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Jeff Yan.

A. Garnaev is with the Wireless Information Network Laboratory, Rutgers University, North Brunswick, NJ 08902 USA (e-mail: [email protected]).

M. Baykal-Gursoy is with the Department of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ 08854-8018 USA (e-mail: [email protected]).

H. V. Poor is with the Department of Electrical Engineering, Princeton University, Princeton, NJ 08544 USA (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIFS.2014.2329241

The increasing number of successful cyber attacks calls for new approaches to developing security systems. An extended literature exists on the construction and modeling of different aspects of security systems for communication and network security [6]–[11], security in wireless networks [12], [13] and cyber-security [14]–[16]. In [17] the readers can find a structured and comprehensive survey of the research contributions that analyze and solve security and privacy problems in computer networks via game-theoretic approaches.

Developing more reliable security systems requires developing advanced technologies and algorithms for intrusion detection. It is interesting to draw some parallels between failing to prevent cyber attacks and the recent economic turmoil [18]. It has by now been realized that the economic models did fail to keep pace with the explosive growth in complex securities, the resulting intricate web of risk, and the dimensions of the resulting danger to the economy. However, the larger failure was human, namely, it was in how the risk models were applied, understood and managed. In this paper we show that in order for the security systems for networks to be efficient, besides improving technologically, they should also take into account the possible human factor, namely, the motivations of the adversary to attack the network.

To demonstrate that such security systems can be more efficient than ones that do not take the adversary's objectives into account, we suggest a simple Bayesian game-theoretic model between a defender and an adversary. We focus on two types of possible threats.

A. Maximizing Damage Attacks

Examples of such attacks can be the theft of credit card numbers [5], or corrupting nodes by key-logging bots, i.e., the bots listening for keyboard activity and reporting the keystrokes upstream to a bot herder. Some such bots can have built-in triggers to look for visits to particular websites where passwords or bank account information is entered.

B. Infiltration/Harassment Attacks

Examples of infiltration attacks are those in which a hacker merely wants to demonstrate his or her ability to crack into a network, such as was the case in compromising the Wall Street Journal computers [4] and the NASA computers [5], or when computers are corrupted by DDoS botnets that can be further used to wage war on other computers on the Internet by completely saturating bandwidth or other resources. Another

1556-6013 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


example is the cracking of the private Yahoo Mail account of Sarah Palin by a hacker shortly after midnight on September 16, 2008 [19], [20]. The hacker claimed he had read Palin's personal e-mails because he was looking for something that "would derail her campaign". However, he later confessed that he could not find any incriminating evidence. This attack on a Yahoo Mail account can be considered as infiltration. The hacker was so proud of his ability to crack into the network that he bragged about his achievement on the Internet (which finally allowed the authorities to find, try and sentence him for a felony). Since he found nothing incriminating in the hacked mailbox, perhaps one might say it even helped her election campaign. Not every infiltration attack can be so harmless. It can turn out to be damaging for the network's users as well as the network. Say, if the mailbox had contained something incriminating, it could have killed someone's career, and it could also have diverted a lot of users from using Yahoo e-mail servers.

There is another area in which such game-theoretic models can improve the work of security, namely, infrastructure security. The September 11, 2001 attacks (which can be considered as an example of maximizing damage attacks) introduced the term homeland security into the public consciousness around the world. In the United States, this term is defined as "a concerted national effort to prevent terrorist attacks within the United States, reduce America's vulnerability to terrorism, and minimize the damage and recover from attacks that do occur" (Homeland Security Act 2002 [21]). There is an extended literature on infrastructure security systems [22]–[24]. Note that in infrastructure security, the adversary can also plan different types of attacks; say, perhaps the recent Boston Marathon bombings can be considered as an example of a harassment attack, and law enforcement may not know the exact motivation of the adversary. Such uncertainty about the type of attack can be considered as a sub-scale for threat levels, which have been created to keep populations informed about the level of threat the public faces from terrorism at any given time [25]. This system helps police and other law enforcement agencies decide how to allocate their resources, e.g., police squads, video cams, sensors, etc. The threat level represents the likelihood of an attack in the near future. This paper shows that introducing into the threat levels a sub-scale specifying the likelihood of the type of threat can increase the efficiency of the defense's resource allocation, especially when the resources are scarce.

The organization of this paper is as follows: in Section II, we first define two auxiliary games with complete information about an attack's type. In Section III we formulate our problem with uncertainty about the attack's type and present its equilibrium strategies explicitly. In Section IV numerical illustrations are presented. Finally, in Sections V and VI, discussion and the proofs of the results are offered.

II. COMPLETE INFORMATION ABOUT ATTACK'S TYPE: TWO BASIC GAMES

In this section and its two subsections, we describe two simple matrix games on network nodes with complete information about an attack's type. As a basic model we have in mind computer or communication networks, whose nodes contain or transmit valuable data (which can be either stolen or eavesdropped). We assume that the network consists of N nodes. It is an abstract network composed of communication links and nodes that may contain data that need to be protected. In fact, some nodes may represent communication links. As such, the network does not correspond to any specific topology. The agent who wants to minimize the effects of an attack is called the defender (say, an intrusion detection system (IDS)). The agent who wants to attack the network is called the adversary. A strategy of the defender is a normalized vector x = (x_1, ..., x_N), where x_i is the search (scan, protection) resource (effort) applied at node i, i.e., ∑_{i=1}^{N} x_i = 1. A strategy of the adversary is a normalized vector y = (y_1, ..., y_N), where y_i is the attack resource applied to node i, i.e., ∑_{i=1}^{N} y_i = 1. Let v_i(x_i, y_i) be the vulnerability of node i with x_i and y_i resources invested into its protection and attack, respectively. Each node is characterized by a value C_i (say, stored data). We assume that the damage to node i is equal to the stolen data or casualties, which is proportional to the value and vulnerability of the node: R_i(x_i, y_i) = v_i(x_i, y_i) C_i. We assume that the vulnerability of node i depends linearly on the investments in defense and attack, namely, v_i(x_i, y_i) = (1 − d_i x_i) y_i, where d_i ∈ (0, 1) is the node's defense characteristic.
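As a quick illustration of this damage model, the following sketch computes the adversary's total damage for given strategy vectors. The node values, defense characteristics and strategies below are hypothetical, chosen only for illustration; they do not come from the paper.

```python
# Sketch of the node-damage model: vulnerability v_i(x_i, y_i) = (1 - d_i x_i) y_i
# and damage R_i = v_i C_i. All numeric values here are illustrative.

def total_damage(x, y, d, C):
    """Total damage sum_i (1 - d_i x_i) C_i y_i of attack y against defense x."""
    assert abs(sum(x) - 1) < 1e-9 and abs(sum(y) - 1) < 1e-9  # normalized strategies
    return sum((1 - di * xi) * Ci * yi for xi, yi, di, Ci in zip(x, y, d, C))

d = [0.5, 0.4, 0.3]   # defense characteristics, d_i in (0, 1)
C = [10.0, 6.0, 2.0]  # node values, decreasing as in (7) below
x = [1/3, 1/3, 1/3]   # defender spreads scanning effort evenly
y = [1.0, 0.0, 0.0]   # adversary concentrates on the most valuable node
print(total_damage(x, y, d, C))  # 10 * (1 - 0.5/3) = 8.333...
```

Note how the uniform defense leaves the most valuable node only lightly protected, which is exactly the dilemma described in the abstract.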

A. The Adversary Aims to Inflict the Maximal Damage

We consider the scenario, also discussed in [24], in which the adversary wants to inflict the maximal damage to the network. The payoff to the adversary is the total damage s/he can cause, so it is

u1_A(x, y) = ∑_{i=1}^{N} v_i(x_i, y_i) C_i = ∑_{i=1}^{N} (1 − d_i x_i) C_i y_i, (1)

and the payoff to the defender is u1_D(x, y) = −u1_A(x, y), i.e., here we deal with a zero-sum game. This matrix game is closely related to a search matrix game suggested in [26] and [27]. We assume that the rivals know the nodes' values C_i and defense characteristics d_i. Recall that (x*, y*) is a saddle point (Nash equilibrium) if and only if the following inequalities hold [28]:

u1_D(x, y*) ≤ u1_D(x*, y*) ≤ u1_D(x*, y) for any (x, y).
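The saddle-point inequalities are easy to check numerically: since (1) is linear in both x and y, it suffices to compare against pure-strategy deviations. The sketch below does this for a candidate pair built by the standard equalizing argument (the defender makes (1 − d_i x_i) C_i constant, the adversary plays y_i proportional to 1/(d_i C_i)); the parameters are hypothetical and chosen so that this equalizer has full support. This is a plausibility check under those assumptions, not a formula quoted from the paper.

```python
# Numerical saddle-point check for game (1). In u1_A terms the inequalities read
# u1_A(x, y*) >= u1_A(x*, y*) >= u1_A(x*, y); by linearity, pure deviations suffice.

def u1A(x, y, d, C):
    return sum((1 - di * xi) * Ci * yi for xi, yi, di, Ci in zip(x, y, d, C))

def is_saddle(x, y, d, C, tol=1e-9):
    N, v = len(d), u1A(x, y, d, C)
    e = lambda i: [1.0 if j == i else 0.0 for j in range(N)]
    best_attack = max(u1A(x, e(i), d, C) for i in range(N))    # adversary's best pure reply
    worst_defense = min(u1A(e(i), y, d, C) for i in range(N))  # defender's best pure reply
    return best_attack <= v + tol and worst_defense >= v - tol

# Candidate equalizer (assumes the resulting x is nonnegative, true for these numbers).
d, C = [0.9, 0.8, 0.7], [10.0, 6.0, 5.0]
S = sum(1 / (di * Ci) for di, Ci in zip(d, C))
mu = (sum(1 / di for di in d) - 1) / S             # equalized damage level
x = [(1 - mu / Ci) / di for di, Ci in zip(d, C)]   # makes (1 - d_i x_i) C_i = mu for all i
y = [1 / (di * Ci * S) for di, Ci in zip(d, C)]    # makes every pure defense equally good
print(is_saddle(x, y, d, C))  # True
```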

As a detailed example of the payoffs to the defender and the adversary in communication networks, we consider a scenario where N users within a secure area communicate with other users outside of this area. Each user employs a separate channel for communication, so that no signal interference occurs. The adversary intends to eavesdrop on this communication. For a particular time slot the adversary can eavesdrop on only one user. The eavesdropping capacity [13] of user i by an adversary is ln(1 + h_Ei P_i/σ_E^2), where P_i is the transmission power of user i, h_Ei is the channel gain and σ_E^2 is the background noise of the channel. The defender, on the other hand, can jam the eavesdropping devices by applying jamming power J to one of the channels. Thus, if the adversary eavesdrops on user i and the defender jams this eavesdropping, then the eavesdropping capacity reduces to ln(1 + h_Ei P_i/(σ_E^2 + g_Ei J)), where g_Ei is the fading gain. We assume that the communication is performed in the low signal-to-interference-plus-noise ratio (SINR) regime, so the eavesdropping capacities can be approximated by the SINR, namely, by h_Ei P_i/σ_E^2 and h_Ei P_i/(σ_E^2 + g_Ei J). A strategy x of the defender gives the probabilities of jamming the corresponding channels, while a strategy y of the adversary gives the probabilities of eavesdropping on the corresponding users. The adversary wants to maximize the expected eavesdropping capacity, which is given as follows:

u1_A(x, y) = ∑_{i=1}^{N} x_i ( (h_Ei P_i/(σ_E^2 + g_Ei J)) y_i + ∑_{j=1, j≠i}^{N} (h_Ej P_j/σ_E^2) y_j )
           = ∑_{i=1}^{N} (h_Ei P_i/σ_E^2) y_i (1 − (g_Ei J/(σ_E^2 + g_Ei J)) x_i)
           = ∑_{i=1}^{N} C_i y_i (1 − d_i x_i), (2)

with

d_i = g_Ei J/(σ_E^2 + g_Ei J) and C_i = h_Ei P_i/σ_E^2. (3)

Thus, C_i is the un-jammed eavesdropping capacity and d_i is the contribution of the faded jamming power to the total induced noise.
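Expressed in code, Eq. (3) maps the physical-layer parameters onto the abstract game parameters. The sketch below uses the channel gains and received powers assumed later in the numerical illustration of Section IV; the jamming power J = 2 is an arbitrary illustrative choice, not a value from the paper.

```python
# Mapping jamming/eavesdropping parameters to game parameters per Eq. (3):
#   d_i = g_Ei J / (sigma_E^2 + g_Ei J),  C_i = h_Ei P_i / sigma_E^2.

def defense_params(hP, g, J, sigma2=1.0):
    d = [gi * J / (sigma2 + gi * J) for gi in g]
    C = [hpi / sigma2 for hpi in hP]
    return d, C

hP = [1.7, 1.5, 0.8, 0.2]    # h_Ei * P_i, as assumed in Section IV
g = [0.55, 0.77, 3.33, 9.9]  # fading gains g_Ei, as assumed in Section IV
d, C = defense_params(hP, g, J=2.0)  # J = 2 is a hypothetical jamming budget
print([round(di, 3) for di in d])    # [0.524, 0.606, 0.869, 0.952]
```

As J grows, every d_i tends to 1, i.e., jamming a channel suppresses eavesdropping on it almost completely.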

B. The Adversary Targets to Infiltrate Into the Network/to Perform a Harassment Attack

In the scenario where the adversary wants to infiltrate (see [22], [29]) the network undetected, from the adversary's point of view all the nodes have the same value C. These nodes differ only in terms of their defense levels. Thus, the payoff to the adversary is

u2_A(x, y) = C ∑_{i=1}^{N} (1 − d_i x_i) y_i. (4)

Since a successful attack is also what is most damaging for the defender, we take the defender's payoff as u2_D(x, y) = −u2_A(x, y). Here we again have a zero-sum game. We assume that the rivals know the value C and the defense characteristics d_i.

Note that this game is equivalent to a diagonal matrix game [30] with {C d_i} allocated along the main diagonal. Games with diagonal matrices can be found in the literature in different contexts, for example, in describing bandwidth scanning strategies to detect illegal use of applications in a network [31]. The harassment/infiltration attack in the eavesdropping context corresponds to curious eavesdropping, when the adversary just intends to eavesdrop without paying attention to eavesdropping capacities.
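For the diagonal game (4) the equilibrium has a simple equalizing form: the defender chooses x_i so that 1 − d_i x_i is the same at every node, and the adversary attacks node i with probability proportional to 1/d_i. The sketch below checks this numerically; the d_i are illustrative, and the closed form is the standard equalizing solution for diagonal games, offered here as a plausibility check rather than quoted from the paper.

```python
# Equalizing equilibrium sketch for the infiltration game (4),
# u2_A = C * sum_i (1 - d_i x_i) y_i. With S = sum_j 1/d_j:
#   x_i = (1/d_i)/S makes 1 - d_i x_i = 1 - 1/S identical across nodes,
#   y_i = (1/d_i)/S makes every pure defense equally good; value is C (1 - 1/S).

def infiltration_equilibrium(d, C=1.0):
    S = sum(1 / di for di in d)
    x = [(1 / di) / S for di in d]  # more defensible nodes need less effort
    y = [(1 / di) / S for di in d]  # adversary favors weakly defensible nodes
    return x, y, C * (1 - 1 / S)

d = [0.5, 0.4, 0.3]
x, y, value = infiltration_equilibrium(d)
residual = [1 - di * xi for di, xi in zip(d, x)]  # equal vulnerability at every node
print(all(abs(r - residual[0]) < 1e-12 for r in residual))  # True
```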

III. UNCERTAINTY ABOUT ATTACK’S TYPE

In this section, we consider scenarios in which the defender does not know the adversary's goal: (a) to inflict the maximal damage, or (b) to infiltrate the network/to perform a harassment attack. This situation could also be interpreted as that of responding to two different types of adversaries. Let the adversary's goal be to inflict the maximal damage with probability q and to infiltrate the network with probability 1 − q. Let y1 and y2 be the corresponding strategies of the adversary.

In order to solve this problem we will apply a Bayesian approach. Note that Bayesian approaches have been widely employed in dealing with different problems in networks, for example, for intrusion detection [10], [15], scanning bandwidth [31], [32], base station resource allocation [33] and transmission under incomplete information [16], [34], [35].

Under the strategies of the defender, x, and the adversary, (y1, y2), the expected payoff to the defender is given as follows:

u_D(x, (y1, y2)) = q u1_D(x, y1) + (1 − q) u2_D(x, y2). (5)

The payoff to the adversary of type k is uk_A(x, yk). We assume that the rivals know the nodes' values C_i, the value of the infiltration attack C, the defense characteristics d_i and the probability q. We look for a Bayesian equilibrium. Recall that (x*, (y1*, y2*)) is a Bayesian equilibrium if and only if for any (x, (y1, y2)) the following inequalities hold:

u_D(x, (y1*, y2*)) ≤ u_D(x*, (y1*, y2*)),
uk_A(x*, yk) ≤ uk_A(x*, yk*), k = 1, 2. (6)

For the sake of simplicity, we assume that all the nodes have different values, i.e., C_i ≠ C_j for i ≠ j. We further assume that the nodes are arranged by their values in decreasing order:

C_1 > C_2 > · · · > C_N. (7)

The following theorem gives an explicit solution of this Bayesian game.

Theorem 1: Consider the game described above. Let k be an integer such that

ψ_k/(ψ_k + 1/C) ≤ q ≤ ψ_{k+1}/(ψ_{k+1} + 1/C), (8)

with {ψ_s}_{1≤s≤N+1} a strictly increasing sequence defined as

ψ_s = (∑_{j=1}^{s−1} 1/(d_j C_j)) / (∑_{j=s}^{N} 1/d_j), 1 ≤ s ≤ N, ψ_{N+1} = ∞. (9)

Also, let m be such that

ϕ_m ≤ 1 < ϕ_{m+1}, (10)

with {ϕ_s}_{1≤s≤N+1} a strictly increasing sequence defined as

ϕ_s = ∑_{j=1}^{s} (C_j − C_s)/(d_j C_j), 1 ≤ s ≤ N, ϕ_{N+1} = ∞. (11)
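The threshold sequences (8)–(11) are straightforward to evaluate numerically. The sketch below locates the switching indices k and m; the values of d_i, C_i, C and q are hypothetical, chosen only to exercise the definitions, and are not taken from the paper.

```python
# Locating the switching indices of Theorem 1: k from (8)-(9), m from (10)-(11).
INF = float('inf')

def switch_indices(d, C, Cinf, q):
    N = len(d)
    # psi_s = sum_{j<s} 1/(d_j C_j) / sum_{j>=s} 1/d_j;  psi_{N+1} = infinity
    psi = [sum(1 / (d[j] * C[j]) for j in range(s - 1))
           / sum(1 / d[j] for j in range(s - 1, N)) for s in range(1, N + 1)] + [INF]
    # phi_s = sum_{j<=s} (C_j - C_s)/(d_j C_j);  phi_{N+1} = infinity
    phi = [sum((C[j] - C[s - 1]) / (d[j] * C[j]) for j in range(s))
           for s in range(1, N + 1)] + [INF]
    frac = lambda p: 1.0 if p == INF else p / (p + 1 / Cinf)  # turns psi into a q-bound
    k = next(s for s in range(1, N + 1) if frac(psi[s - 1]) <= q <= frac(psi[s]))
    m = next(s for s in range(1, N + 1) if phi[s - 1] <= 1 < phi[s])
    return k, m

d, C = [0.5, 0.4, 0.3], [10.0, 6.0, 2.0]  # C decreasing, as required by (7)
print(switch_indices(d, C, Cinf=3.0, q=0.5))  # (3, 2): here k > m, so case (b) applies
```

Lowering q (less belief in a maximizing damage attack) shrinks k; with q = 0.1 the same parameters give k = 2 = m, i.e., case (a).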

(a) If

k ≤ m, (12)

then the game has a unique Bayesian equilibrium (x, (y1, y2)), where

y1_i =
  (q d_k C_k + (1 − q) d_k C) / (q Δ_k d_i C_i),                               i ≤ k − 1,
  1 − ((q d_k C_k + (1 − q) d_k C)/(q Δ_k)) ∑_{j=1}^{k−1} 1/(d_j C_j),          i = k,
  0,                                                                            i ≥ k + 1,

y2_i =
  0,                                                                            i ≤ k − 1,
  1 − ((q d_k C_k + (1 − q) d_k C)/((1 − q) Δ_k)) ∑_{j=k+1}^{N} 1/(d_j C),      i = k,
  (q d_k C_k + (1 − q) d_k C) / ((1 − q) Δ_k d_i C),                            i ≥ k + 1,
                                                                                 (13)
x_i =
  (1/d_i)(1 + (1 − ∑_{j=1}^{N} 1/d_j) d_k C_k/(Δ_k C_i)),                       i ≤ k,
  (1/d_i)(1 + (1 − ∑_{j=1}^{N} 1/d_j) d_k/Δ_k),                                 i ≥ k + 1,

with

Δ_k = 1 + ∑_{j=1}^{k−1} d_k C_k/(d_j C_j) + ∑_{j=k+1}^{N} d_k/d_j. (14)

The payoffs to the adversary and the defender under the assumption in part (a) are

u1_A = (∑_{i=1}^{N} 1/d_i − 1) / (∑_{i=1}^{k} 1/(d_i C_i) + ∑_{i=k+1}^{N} 1/(d_i C_k)),

u2_A = ((∑_{i=1}^{N} 1/d_i − 1) C/C_k) / (∑_{i=1}^{k} 1/(d_i C_i) + ∑_{i=k+1}^{N} 1/(d_i C_k)),

u_D = ((1 − ∑_{i=1}^{N} 1/d_i)(C_k q + (1 − q) C)/C_k) / (∑_{i=1}^{k} 1/(d_i C_i) + ∑_{i=k+1}^{N} 1/(d_i C_k)).
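As a sanity check on part (a), the strategies (13)–(14) can be implemented directly and verified to be probability vectors. The parameters below are hypothetical and chosen so that condition (12) holds with k = 2; the switching index k is supplied as an input here rather than recomputed from (8).

```python
# Sketch: evaluating the part (a) equilibrium strategies (13)-(14) and checking
# that x, y1, y2 are probability distributions. Illustrative parameters only.

def part_a_strategies(d, C, Cinf, q, k):
    N = len(d)
    Delta = (1 + sum(d[k-1] * C[k-1] / (d[j] * C[j]) for j in range(k - 1))
               + sum(d[k-1] / d[j] for j in range(k, N)))                     # (14)
    A = q * d[k-1] * C[k-1] + (1 - q) * d[k-1] * Cinf                         # shared numerator
    y1 = [A / (q * Delta * d[i] * C[i]) if i < k - 1 else 0.0 for i in range(N)]
    y1[k-1] = 1 - (A / (q * Delta)) * sum(1 / (d[j] * C[j]) for j in range(k - 1))
    y2 = [A / ((1 - q) * Delta * d[i] * Cinf) if i > k - 1 else 0.0 for i in range(N)]
    y2[k-1] = 1 - (A / ((1 - q) * Delta)) * sum(1 / (d[j] * Cinf) for j in range(k, N))
    shortfall = 1 - sum(1 / dj for dj in d)
    x = [(1 / d[i]) * (1 + shortfall * d[k-1] * C[k-1] / (Delta * C[i])) if i < k
         else (1 / d[i]) * (1 + shortfall * d[k-1] / Delta) for i in range(N)]
    return x, y1, y2

x, y1, y2 = part_a_strategies([0.5, 0.4, 0.3], [10.0, 6.0, 2.0], Cinf=3.0, q=0.1, k=2)
print([round(s, 6) for s in (sum(x), sum(y1), sum(y2))])  # [1.0, 1.0, 1.0]
```

The node-sharing structure of Remark 1 is visible in the output: y1 puts weight only on nodes 1, ..., k while y2 puts weight only on nodes k, ..., N.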

(b) If

1 ≤ m ≤ k − 1, (15)

then the game has a unique equilibrium strategy for the defender and a continuum of equilibrium strategies for the adversary. Namely:

(b1) Let ξ_m ≠ −C_m, with

ξ_s = (1 − ∑_{j=1}^{s} 1/d_j) / (∑_{j=1}^{s} 1/(d_j C_j)), 1 ≤ s ≤ N. (16)

Then, the defender and the first-type adversary have unique equilibrium strategies x and y1, while the second-type adversary has a continuum of equilibrium strategies y2:

x_i =
  (1/(d_i C_i ∑_{j=1}^{m} 1/(d_j C_j))) (1 − ∑_{j=1}^{m} (C_j − C_i)/(d_j C_j)),   i ≤ m,
  0,                                                                               i ≥ m + 1,
                                                                                    (17)
y1_i =
  (1/(d_i C_i)) / (∑_{j=1}^{m} 1/(d_j C_j)),   i ≤ m,
  0,                                           i ≥ m + 1, (18)

and y2 is any probability vector such that

y2_i = 0,                                                        i ≤ m,
y2_i ≤ (q/(1 − q)) (1/(d_i C)) / (∑_{j=1}^{m} 1/(d_j C_j)),      i ≥ m + 1. (19)

(b2) If ξ_m = −C_m, then the game has a continuum of equilibria. Namely, the defender has the unique equilibrium strategy

x_i = (1/d_i)(1 − C_m/C_i),   i ≤ m,
x_i = 0,                      i ≥ m + 1;

meanwhile both types of adversaries have continua of equilibrium strategies, namely, for the harassment/infiltration type given by (19) and for the maximizing damage type given as follows:

y1_i = ε/(d_i C_i),                          i ≤ m − 1,
y1_i = 1 − ε ∑_{j=1}^{m−1} 1/(d_j C_j),      i = m,
y1_i = 0,                                    i ≥ m + 1,

for any ε ∈ [1/(∑_{j=1}^{m} 1/(d_j C_j)), 1/(∑_{j=1}^{m−1} 1/(d_j C_j))].

Of course, these continua of the adversary's equilibrium strategies are equivalent to each other, since they all bring the same payoff.

The payoffs to the adversary and the defender are given as follows:

u1_A = −ξ_m, u2_A = C and u_D = q ξ_m − (1 − q) C.

Finally, note that since ψ_s and ϕ_s are increasing, by (10) the conditions (12) and (15) are equivalent, respectively, to

ϕ_k ≤ 1, (20)

and

ϕ_k > 1. (21)

Remark 1 (Part (a) of Theorem 1): It is quite interesting that the defender's payoff depends on the probability q explicitly, while his strategy depends on q only implicitly, by means of the switching point k defined through equation (8). The same phenomenon occurs in reverse order for the adversary; namely, the adversary's strategy depends on q explicitly, while his payoff depends on q only implicitly, through k. Also, the adversary's strategies have the node-sharing form, i.e., all nodes are under attack in such a way that the maximum damage attack targets the most valuable nodes, while less valuable nodes are under infiltration attack. The defender distributes defense efforts among all the nodes.

Part (b) of Theorem 1: Since ψ_s is increasing, by (8), k is decreasing in C, while, by (10), m does not depend on C. Thus, the condition (20) always holds for large enough C, while (21) does not hold. This means that case (b) arises when the loss caused by an infiltration attack (from the defender's point of view) is negligible compared to the one inflicted by the maximum damage attack. In this situation, the defender applies all defense effort as if s/he faces only a maximum damage attack, and an infiltration attack is not expected at all.


Fig. 1. (a) The payoff of the defender, (b) the payoff of the maximizing damage adversary type, and (c) the payoff of the infiltration adversary type as functions of the probability q and the jamming power J.

Fig. 2. The equilibrium strategy of the defender for each channel as functions of the probability q and the jamming power J .

That is why the defender's and the adversary's strategies depend on the probability q only implicitly, by means of the definition (8) of the switching point k and of (15).

Finally, note that the considered Bayesian game is equivalent to a zero-sum game with the same payoff to the defender as in the Bayesian game. Thus, the equilibrium strategy of the defender is a maximin strategy and gives the optimal behavior of the defender under the worst conditions.

IV. NUMERICAL ILLUSTRATION

As a numerical example we consider the jamming versus eavesdropping interpretation (2) and (3) of our model, where the defender intends to jam the eavesdropping of an adversary trying to eavesdrop on the communication of N = 4 users using separate channels with other users located outside of the secure area. There are two types of adversaries: a maximizing eavesdropping damage adversary and an infiltration eavesdropping adversary, which correspond to the maximizing damage and harassment/infiltration attacks, respectively. We assume that the background noise is σ_E^2 = 1, the channel fading gains are {g_Ei, i = 1, ..., 4} = (0.55, 0.77, 3.33, 9.9), and the faded eavesdropping powers are {h_Ei P_i/σ_E^2, i = 1, ..., 4} = (1.7, 1.5, 0.8, 0.2) and {h_Ei P_i/σ_E^2, i = 1, ..., 4} = (1, 1, 1, 1) for the maximum eavesdropping and curious eavesdropping attacks, respectively.

The payoff to the defender is continuous in the probability q and the jamming power J, while the payoff to the adversary is piecewise-constant in q and piecewise-continuous in J (Figure 1). It is quite natural that the payoffs to the defender and the adversary are increasing and decreasing in jamming power, respectively. Of course, the payoff to the maximizing eavesdropping damage adversary is decreasing in q, since increasing q means that the defender has a stronger belief that s/he faces such an adversary. The payoff to the infiltration eavesdropping adversary is increasing in q for the same reason. It is quite interesting that the payoff to the defender in general is not monotonic with respect to q and can have an interior minimum in q. So, under some conditions, lack of complete information on the real threat can cause a reduction in the defender's expected payoff.

The adversary, depending on the type of attack, applieschannel sharing strategies (Figures 3 and 4), while the defenderapplies jamming effort (Figure 2) among all the channels(if the condition (12) of Theorem 1 holds) or only amongthe channels where maximizing eavesdropping damage attackcan be inflicted (if the condition (15) of Theorem 1 holds).Figure 5(a) illustrates how the domain, where the defender hasto respond to infiltration attack, increases with increasing valueof such an attack. Figures 5(b) and (c) show the coefficient ofefficiency of the defender’s strategy tuned for the possibilitythat both types of attacks can arise compared to the defender’sstrategy tuned to only one type of attack. Here the coefficientof efficiency is the ratio of the corresponding payoffs. Sincedefender’s payoff is negative, smaller coefficient of efficiencymeans higher efficiency of the defender’s strategy. Also, if thiscoefficient equals to one, then no improvements in efficiencyoccur. Thus, in the domain where condition (15) holds such acoefficient with respect to the maximizing eavesdropping dam-age attack equals to 1 (Figure 5(b)), since under this condition


GARNAEV et al.: INCORPORATING ATTACK-TYPE UNCERTAINTY INTO NETWORK PROTECTION 1283

Fig. 3. The equilibrium strategy of the maximum damage adversary type for each channel as functions of the probability q and the jamming power J .

Fig. 4. The equilibrium strategy of the infiltration adversary type for each channel as functions of the probability q and the jamming power J .

Fig. 5. (a) Curves separating the domains where either condition (12) or (15) from Theorem 1 holds, as functions of the probability q and the jamming power J. (b) and (c) The coefficient of efficiency of the defender's strategy tuned to the expected threat compared to that tuned just to: (b) the maximum damage attack and (c) the infiltration attack.

the defender does not consider the infiltration attack essential and focuses on preventing only the maximizing damage attack. The coefficient of efficiency with respect to the infiltration attack is equal to 1 only when the possibility of such an attack becomes dominant (i.e., when q is close to 0). Figure 6 illustrates how the rivals' payoffs depend on the value of the infiltration attack, C. The expected payoff of the defender is continuous in C. It also shows that if the value of the infiltration attack is small compared to the individual capacities of the channels, then increasing the probability of the maximal damage attack, q, leads to a decrease in the defender's payoff. Alternatively, if the value of the infiltration attack is large, then increasing q leads to an increase in the defender's payoff. Finally, if this value is intermediate, then the payoff of the defender is convex in q. The payoff to the maximizing eavesdropping damage adversary is piecewise-constant in C, while that of the infiltration eavesdropping adversary is piecewise-continuous and increasing in C. It is quite natural that the former also increases, although that payoff depends on C only implicitly: increasing C implies that the defender has to pay more attention to the possibility of meeting an infiltration-type attack, which leaves the defender more vulnerable to the maximizing damage type attack.
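The reading of the coefficient of efficiency used in Figures 5(b) and (c) can be illustrated with a toy computation; the payoff values below are purely hypothetical and not taken from the paper's example:

```python
# Toy illustration of the coefficient of efficiency (ratio of payoffs).
# Both payoff values are hypothetical; in the model the defender's
# payoff is negative (it measures expected losses).
u_bayes = -2.0   # expected payoff of the strategy hedged over both attack types
u_tuned = -2.5   # expected payoff of a strategy tuned to a single attack type

coeff = u_bayes / u_tuned   # coefficient of efficiency

# Both payoffs being negative, a coefficient below 1 means the hedged
# strategy loses less, i.e., it is the more efficient one; a coefficient
# of exactly 1 means no improvement.
print(coeff)  # 0.8
```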


Fig. 6. (a) The payoff of the defender, (b) the payoff of the maximum damage adversary type, and (c) the payoff of the infiltration adversary type as functions of the probability q and the value of attack, C.

V. DISCUSSION

In this paper we have dealt with a dilemma that the defender of a network faces due to limited resources: either (a) to focus on increasing the defense of the most valuable nodes, which increases the chance for the adversary to infiltrate the network through less valuable nodes, or (b) to defend all nodes while reducing the level of defense of the most valuable ones. To solve this problem most efficiently, we have suggested a Bayesian game in which the probability of the attack's type can be considered as a sub-scale within the scale of threat levels. This sub-scale allows for finer tuning of the threat level and for more efficient allocation of the limited defense resources. We have shown that the defender has a unique equilibrium strategy, while the adversary has a unique strategy only if all nodes are under attack. If the attacks affect only a subset of the nodes, then a continuum of equilibrium strategies can arise for the adversary. The adversary's payoffs have a discontinuous dependence on the probability of the attack type. This discontinuity means that the defender has to take into account the human factor, since some threshold values of this inclination in the adversary's behavior can increase the sensitivity of the defender's strategy, while in other situations it produces only minimal impact.

Of interest for future research is an extension of this model to dynamic scenarios and the introduction of uncertainty about the attack tools employed and the number of attackers.

APPENDIX

PROOF OF THEOREM 1

Since $u_D(x, (y^1, y^2))$ is linear in $x$ and $u_A^k(x, y^k)$ is linear in $y^k$, by (6), $(x, (y^1, y^2))$ is an equilibrium if and only if the following conditions hold for some $\nu_1$, $\nu_2$ and $\nu$:

$$(d_i x_i - 1) C_i \begin{cases} = \nu_1, & y^1_i > 0, \\ \ge \nu_1, & y^1_i = 0, \end{cases} \tag{22}$$

$$C d_i x_i \begin{cases} = \nu_2, & y^2_i > 0, \\ \ge \nu_2, & y^2_i = 0, \end{cases} \tag{23}$$

$$q \Big( d_i C_i y^1_i - \sum_{j=1}^{N} C_j y^1_j \Big) + (1 - q) C d_i y^2_i \begin{cases} = \nu, & x_i > 0, \\ \le \nu, & x_i = 0. \end{cases} \tag{24}$$

Since $x$ is a probability vector and $d_i \in (0, 1)$, by (23), $0 \le \nu_2 < C$, and (22) implies that $\nu_1 < 0$.

We will consider two cases separately: (A) $x_i > 0$ for every $i$; (B) there is an $i$ such that $x_i = 0$. We will show that cases (A) and (B) correspond to (a) and (b), respectively.

(A) Let $x_i > 0$ for every $i$. Then by (23), $\nu_2 > 0$. By (24) we have that for every $i$ only three cases are possible: (i1) $y^1_i > 0$, $y^2_i > 0$; (i2) $y^1_i > 0$, $y^2_i = 0$; and (i3) $y^1_i = 0$, $y^2_i > 0$.

(i1) Assume that there exists an $i$ such that $y^1_i > 0$ and $y^2_i > 0$. Then, by (22) and (23), we have that $C_i (d_i x_i - 1) = \nu_1$ and $C d_i x_i = \nu_2$. So,

$$C_i = \frac{-\nu_1}{1 - \nu_2/C}, \quad x_i = \frac{\nu_2}{C d_i} = \frac{1}{d_i}\Big(\frac{\nu_1}{C_i} + 1\Big) \quad \text{for } y^1_i y^2_i > 0. \tag{25}$$

(i2) Let $y^1_i > 0$ and $y^2_i = 0$. Then, by (22) and (23), we have that $C_i (1 - d_i x_i) = -\nu_1$ and $C d_i x_i \ge \nu_2$. So,

$$C_i \ge \frac{-\nu_1}{1 - \nu_2/C}, \quad x_i = \frac{(\nu_1/C_i) + 1}{d_i} \quad \text{for } y^1_i > 0,\ y^2_i = 0. \tag{26}$$

(i3) Let $y^1_i = 0$ and $y^2_i > 0$. Then, by (22) and (23), we have that $C_i (1 - d_i x_i) \le -\nu_1$ and $C d_i x_i = \nu_2$. So,

$$C_i \le \frac{-\nu_1}{1 - \nu_2/C}, \quad x_i = \frac{\nu_2}{C d_i} \quad \text{for } y^1_i = 0,\ y^2_i > 0. \tag{27}$$

By assumption (7) and equations (25)–(27), clearly there exists a $k$ such that

$$C_k = -\nu_1/(1 - \nu_2/C), \tag{28}$$

$$y^1_i \begin{cases} > 0, & i \le k-1, \\ \ge 0, & i = k, \\ = 0, & i \ge k+1, \end{cases} \tag{29}$$

$$y^2_i \begin{cases} = 0, & i \le k-1, \\ \ge 0, & i = k, \\ > 0, & i \ge k+1 \end{cases} \tag{30}$$

and

$$x_i = \begin{cases} \dfrac{1}{d_i}\Big(\dfrac{\nu_1}{C_i} + 1\Big), & i \le k-1, \\[6pt] \dfrac{\nu_2}{C d_k} = \dfrac{1}{d_k}\Big(\dfrac{\nu_1}{C_k} + 1\Big), & i = k, \\[6pt] \dfrac{\nu_2}{C d_i} = \dfrac{1}{d_i}\Big(\dfrac{\nu_1}{C_k} + 1\Big), & i \ge k+1. \end{cases} \tag{31}$$


Then, by (24), (29) and (30), we have that

$$y^1_i = \begin{cases} \omega/(d_i C_i q), & i \le k-1, \\ y^1_k, & i = k, \\ 0, & i \ge k+1, \end{cases} \tag{32}$$

$$y^2_i = \begin{cases} 0, & i \le k-1, \\ y^2_k, & i = k, \\ \omega/(C(1-q)d_i), & i \ge k+1 \end{cases}$$

and

$$q d_k C_k y^1_k + (1-q) d_k C y^2_k = \omega, \tag{33}$$

with

$$\omega = \nu + q \sum_{j=1}^{N} C_j y^1_j. \tag{34}$$

Then, since $y^1$ and $y^2$ are probability vectors, (32) implies:

$$y^1_k = 1 - (\omega/q) \sum_{j=1}^{k-1} 1/(d_j C_j), \tag{35}$$

$$y^2_k = 1 - (\omega/(1-q)) \sum_{j=k+1}^{N} 1/(d_j C). \tag{36}$$

Substituting (35) and (36) into (33) allows us to find $\omega$ as a function of $k$, with $\Theta_k$ given by (14):

$$\omega = (q C_k d_k + (1-q) C d_k)/\Theta_k. \tag{37}$$

This allows us, by using (32), (35) and (36), to obtain $y^t$ in closed form as a function of $k$, as given in (13).

How can $k$ be found? It is defined by the condition that $y^t$ ($t = 1, 2$) is a strategy (a probability vector), so $\sum_{j=1}^{N} y^t_j = 1$ (which holds by (32), (35) and (36)) and $y^t_i \ge 0$ for $i \in \{1, \ldots, N\}$. Since $\omega > 0$, by (13), $y^t_i \ge 0$ for $i \in \{1, \ldots, N\} \setminus \{k\}$. So $y^t$, $t = 1, 2$, are strategies if and only if $y^1_k \ge 0$ and $y^2_k \ge 0$. These conditions, by (35)–(37), are equivalent to

$$\frac{q d_k C_k + (1-q) C d_k}{\Theta_k} \le \min\left\{ \frac{q}{\sum_{j=1}^{k-1} 1/(d_j C_j)},\ \frac{1-q}{\sum_{j=k+1}^{N} 1/(d_j C)} \right\}.$$

These inequalities are equivalent to

$$C \psi_k \le q/(1-q) \le C \psi_{k+1}, \tag{38}$$

with $\psi_s$ given by (9). The sequence $\{\psi_s\}$ is increasing since

$$\psi_{s+1} - \psi_s = \frac{\sum_{j=s+1}^{N} 1/(d_s d_j C_s) + \sum_{j=1}^{s} 1/(d_s d_j C_j)}{\sum_{j=s+1}^{N} (1/d_j) \sum_{j=s}^{N} (1/d_j)} > 0.$$

Thus, $k$ is uniquely defined by (38). Also, (38) is equivalent to (8), and $y^1$ and $y^2$ given by (13) are the unique equilibrium strategies of the adversary.
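The bracket search for the switching point implied by condition (38) can be sketched numerically. Since the explicit form of $\psi_s$ in (9) lies outside this excerpt, the formula used below, $\psi_s = \sum_{j<s} 1/(d_j C_j) \big/ \sum_{j\ge s} (1/d_j)$, is a reconstruction chosen to match the increment formula above and the ratio appearing in (50)–(51); the parameter values are purely illustrative:

```python
# Sketch of the bracket search (38) for the switching point k in case (A).
def switching_point(d, Ci, C, q):
    N = len(d)

    def psi(s):  # s runs over 1..N, 1-indexed as in the paper
        # Reconstructed psi_s = sum_{j<s} 1/(d_j C_j) / sum_{j>=s} (1/d_j)
        num = sum(1.0 / (d[j] * Ci[j]) for j in range(s - 1))
        den = sum(1.0 / d[j] for j in range(s - 1, N))
        return num / den

    r = q / (1.0 - q)
    # {psi_s} is increasing, so a single scan finds the bracket
    # C * psi_k <= q/(1-q) <= C * psi_{k+1}.
    for k in range(1, N):
        if C * psi(k) <= r <= C * psi(k + 1):
            return k
    return N  # q/(1-q) lies above every bracket

# Illustrative parameters (not taken from the paper's example):
d = [0.9, 0.8, 0.7, 0.6]      # detection probabilities d_i
Ci = [4.0, 3.0, 2.0, 1.0]     # node values C_i, decreasing as in (7)
print(switching_point(d, Ci, C=2.0, q=0.5))  # 3
```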

Now we have to find the defender's strategy. To do so, we have to choose $\nu_1$ in such a way that $x$ given by (31) is a probability vector. Summing up (31) implies that the following equation has to hold:

$$F(\nu_1) := \sum_{i=1}^{k} \frac{1}{d_i}\Big(1 + \frac{\nu_1}{C_i}\Big) + \sum_{i=k+1}^{N} \frac{1}{d_i}\Big(1 + \frac{\nu_1}{C_k}\Big) = 1. \tag{39}$$

By (7) and (31), $x_i \ge 0$ for $i \in [1, N]$ if and only if $-C_k \le \nu_1 \le 0$. Since $F(\nu_1)$ is increasing and $F(0) > 1$, there is a $\nu_1$ such that (39) holds if and only if

$$F(-C_k) = \sum_{i=1}^{k} (1/d_i)(1 - C_k/C_i) = \varphi_k \le 1. \tag{40}$$

If (40) holds, then $\nu_1$ is given uniquely as follows:

$$\nu_1 = \frac{1 - \sum_{i=1}^{N} (1/d_i)}{\sum_{i=1}^{k} 1/(d_i C_i) + \sum_{i=k+1}^{N} 1/(d_i C_k)}. \tag{41}$$

Substituting (41) into (31) produces the unique equilibrium strategy of the defender.

To obtain the payoffs to the rivals, note that by (28),

$$\nu_2 = C(1 + \nu_1/C_k). \tag{42}$$

By (22), (23), (29)–(31) and the definitions (1), (4) and (5) of the payoffs, we have that

$$u_A^1 = -\nu_1, \qquad u_A^2 = C - \nu_2 \overset{(42)}{=} -C\nu_1/C_k, \qquad u_D = -q u_A^1 - (1-q) u_A^2,$$

and (a) follows.

(B) Suppose there exists an $i$ such that $x_i = 0$. Then, by (23), $\nu_2 = 0$, and so $y^2_i = 0$ for $x_i > 0$. Following the proof of (i1)–(i3), (7) implies that there is an $m$ such that

$$x_i \begin{cases} \ge 0, & i \le m, \\ = 0, & i \ge m+1, \end{cases} \tag{43}$$

$$y^1_i \begin{cases} \ge 0, & i \le m, \\ = 0, & i \ge m+1, \end{cases} \tag{44}$$

$$y^2_i \begin{cases} = 0, & i \le m, \\ \ge 0, & i \ge m+1. \end{cases} \tag{45}$$

Also, by (22),

$$x_i d_i \begin{cases} = 1 + \nu_1/C_i, & y^1_i > 0, \\ \ge 1 + \nu_1/C_i, & y^1_i = 0. \end{cases} \tag{46}$$

Consider two cases, (B1) $\xi_m \ne -C_m$ and (B2) $\xi_m = -C_m$, separately. We will show that case (B1) corresponds to (b1), and (B2) corresponds to (b2).

(B1) Let $\xi_m \ne -C_m$. Then, by (22),

$$x_i = \begin{cases} (1/d_i)(1 + \nu_1/C_i), & i \le m, \\ 0, & i \ge m+1. \end{cases} \tag{47}$$

Since $x$ has to be a probability vector, summing up $x_i$ from (47) yields $1 = \sum_{i=1}^{m} (1/d_i)(1 + \nu_1/C_i)$. Thus, $\nu_1 = \xi_m$.


For the switching point $m$, by (7), (46) and (47), the following conditions have to hold: $-C_m \le \nu_1 < -C_{m+1}$. These conditions are equivalent to (10) with $\varphi_i$ given by (11). Since $\nu_1 = \xi_m \ne -C_m$, we have $x_m > 0$. So, by (7) and (47), $\{i : x_i > 0\} = \{i : y^1_i > 0\} = \{1, \ldots, m\}$. By (24),

$$y^1_i = \begin{cases} \omega/(q d_i C_i), & i \le m, \\ 0, & i \ge m+1, \end{cases} \tag{48}$$

$$y^2_i \begin{cases} = 0, & i \le m, \\ \le \omega/((1-q) d_i C), & i \ge m+1. \end{cases} \tag{49}$$

Since $\sum_{j=1}^{m} y^1_j = 1$, (48) implies (18) and that $\omega = q/\sum_{j=1}^{m} (1/(d_j C_j))$. Then, since $\psi_s$ is increasing, we have that

$$\sum_{i=m+1}^{N} \frac{\omega}{(1-q) d_i C} = \frac{\sum_{i=m+1}^{N} (1/(d_i C))}{\sum_{j=1}^{m} (1/(d_j C_j))} \cdot \frac{q}{1-q} \tag{50}$$

$$= \frac{q}{C(1-q)\psi_{m+1}} \ge \frac{q}{C(1-q)\psi_k} \ge 1 \quad \text{(by (38))}, \tag{51}$$

with $m \le k-1$. So, $y^2$ can be defined as a probability vector by (49), and the result follows.

(B2) Let $\xi_m = -C_m$. Then, by (43)–(45) and (24), we have that

$$y^1_i \begin{cases} > 0, & i \le m-1, \\ \ge 0, & i = m, \\ = 0, & i \ge m+1 \end{cases} \tag{52}$$

and

$$x_i \begin{cases} > 0, & i \le m-1, \\ = 0, & i \ge m. \end{cases} \tag{53}$$

So, $x$ is given by (47) with $\nu_1 = -C_m$. By (24) and (52),

$$y^1_i = \begin{cases} \dfrac{\omega}{q d_i C_i}, & i \le m-1, \\[6pt] 1 - \displaystyle\sum_{j=1}^{m-1} \frac{\omega}{q d_j C_j} \le \dfrac{\omega}{q d_m C_m}, & i = m, \\[6pt] 0, & i \ge m+1. \end{cases} \tag{54}$$

Thus, $y^1$ given by (54) is a probability vector if and only if $0 \le y^1_m \le \min\{1, \omega/(q d_m C_m)\}$, which is equivalent to $1/\sum_{j=1}^{m} (1/(d_j C_j)) \le \omega/q \le 1/\sum_{j=1}^{m-1} (1/(d_j C_j))$; putting $\varepsilon = \omega/q$ implies (b2), and the result follows.
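The closed forms derived for case (A) can be assembled into a short numerical sketch: given the switching point $k$, it computes $\omega$, the adversary strategies via (32), (35) and (36), the multiplier $\nu_1$ from (41), the defender strategy from (31), and the payoffs. The denominator `Theta` is reconstructed by substituting (35)–(36) into (33), since the published form (14) is outside this excerpt; the parameter values are illustrative, not taken from the paper's example:

```python
# Numerical sketch of the case (A) closed-form equilibrium.
def case_A_equilibrium(d, Ci, C, q, k):
    N = len(d)
    S1 = sum(1.0 / (d[j] * Ci[j]) for j in range(k - 1))   # sum over j <= k-1
    S2 = sum(1.0 / d[j] for j in range(k, N))              # sum over j >= k+1

    # Theta reconstructed from substituting (35)-(36) into (33)
    Theta = 1.0 + d[k - 1] * Ci[k - 1] * S1 + d[k - 1] * S2
    omega = (q * Ci[k - 1] * d[k - 1] + (1.0 - q) * C * d[k - 1]) / Theta  # (37)

    # adversary strategies: (32) off the switching point, (35)-(36) at it
    y1 = [omega / (d[i] * Ci[i] * q) if i < k - 1 else 0.0 for i in range(N)]
    y2 = [omega / (C * (1.0 - q) * d[i]) if i > k - 1 else 0.0 for i in range(N)]
    y1[k - 1] = 1.0 - (omega / q) * S1
    y2[k - 1] = 1.0 - (omega / (1.0 - q)) * S2 / C

    # defender strategy: nu1 from (41), x from (31)
    nu1 = (1.0 - sum(1.0 / d[i] for i in range(N))) / (
        sum(1.0 / (d[i] * Ci[i]) for i in range(k))
        + sum(1.0 / (d[i] * Ci[k - 1]) for i in range(k, N)))
    x = [(1.0 / d[i]) * (1.0 + nu1 / (Ci[i] if i < k - 1 else Ci[k - 1]))
         for i in range(N)]

    # payoffs: u1_A = -nu1, u2_A = -C*nu1/C_k, u_D = -q*u1_A - (1-q)*u2_A
    u1A = -nu1
    u2A = -C * nu1 / Ci[k - 1]
    uD = -q * u1A - (1.0 - q) * u2A
    return x, y1, y2, (uD, u1A, u2A)

x, y1, y2, payoffs = case_A_equilibrium(
    [0.9, 0.8, 0.7, 0.6], [4.0, 3.0, 2.0, 1.0], C=2.0, q=0.5, k=3)
```

A quick sanity check is that `x`, `y1` and `y2` each sum to one with nonnegative entries, and that the complementary-slackness condition (22) holds with equality on the channels where `y1` puts positive mass.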

REFERENCES

[1] Website of the Department of Homeland Security. (2012). Cybersecurity Overview [Online]. Available: http://www.dhs.gov/cybersecurity-overview

[2] D. A. Fahrenthold and S. Horwitz. (2013, Feb. 8). Secret Service investigating hack of Bush family e-mails. The Washington Post [Online]. Available: http://www.washingtonpost.com/politics/bush-family-e-mail-accounts-hacked/2013/02/08/ad2fbde0-720b-11e2-8b8d-e0b59a1b8e2a_story.html

[3] J. Callaham. (2011, Dec. 28). 50,000 Credit Card Numbers Stolen in Stratfor Cyber Attack [Online]. Available: http://www.neowin.net/news/50000-credit-card-numbers-stolen-in-stratfor-cyber-attack

[4] J. Mullen. (2013, Jan. 31). New York Times, Wall Street Journal Say Chinese Hackers Broke into Computers. CNN, Atlanta, GA, USA [Online]. Available: http://www.cnn.com/2013/01/31/tech/china-nyt-hacking

[5] (2012, Mar. 2). Hackers Had Fully Functional Control of NASA Computers [Online]. Available: http://www.cnn.com/2013/01/31/tech/china-nyt-hacking

[6] T. Alpcan and T. Basar, "A game theoretic approach to decision and analysis in network intrusion detection," in Proc. 42nd IEEE Conf. Decision Control (CDC 2003), Dec. 2003, pp. 2595–2600.

[7] K. C. Nguyen, T. Alpcan, and T. Basar, "Security games with incomplete information," in Proc. IEEE Int. Conf. Commun. (ICC 2009), Jun. 2009, pp. 1–6.

[8] S. N. Hamilton, W. L. Miller, A. Ott, and O. S. Saydjari, "Challenges in applying game theory to the domain of information warfare," in Proc. ISW, 2002, pp. 1–6.

[9] S. Roy, C. Ellis, S. Shiva, D. Dasgupta, V. Shandilya, and Q. Wu, "A survey of game theory as applied to network security," in Proc. 43rd HICSS, Jan. 2010, pp. 1–10.

[10] Y. Liu, C. Comaniciu, and H. Mani, "A Bayesian game approach for intrusion detection in wireless ad hoc networks," in Proc. Valuetools, 2006.

[11] C. Comaniciu, N. B. Mandayam, and H. V. Poor, Wireless Networks: Multiuser Detection in Cross-Layer Design. New York, NY, USA: Springer-Verlag, 2005.

[12] A. Mukherjee and A. L. Swindlehurst, "Optimal strategies for countering dual-threat jamming/eavesdropping-capable adversaries in MIMO channels," in Proc. Military Commun. Conf. (MILCOM 2010), 2010, pp. 1695–1700.

[13] Q. Zhu, W. Saad, Z. Han, H. V. Poor, and T. Basar, "Eavesdropping and jamming in next-generation wireless networks: A game-theoretic approach," in Proc. Military Commun. Conf. (MILCOM 2011), 2011, pp. 119–124.

[14] L. Kong-wei and J. Wing, "Game strategies in network security," Int. J. Inform. Security, vol. 4, nos. 1–2, pp. 71–86, 2005.

[15] A. Agah, S. K. Das, K. Basu, and M. Asadi, "Intrusion detection in sensor networks: A non-cooperative game approach," in Proc. 3rd Int. Symp. IEEE Netw. Comput. Appl. (NCA 2004), Sep. 2004, pp. 343–346.

[16] Z. Han, N. Marina, M. Debbah, and A. Hjørungnes, "Physical layer security game: Interaction between source, eavesdropper, and friendly jammer," EURASIP J. Wireless Commun. Netw., Special Issue Wireless Phys. Layer Security, vol. 2009, Jun. 2009, doi: 10.1155/2009/452907.

[17] M. H. Manshaei, Q. Zhu, T. Alpcan, T. Basar, and J.-P. Hubaux, "Game theory meets network security and privacy," ACM Comput. Surv., vol. 45, no. 3, p. 25, 2013.

[18] S. Lohr. (2008, Nov. 4). In Modeling Risk, the Human Factor Was Left Out. The New York Times, New York, NY, USA [Online]. Available: http://www.nytimes.com/2008/11/05/business/05risk.html?pagewanted=all&_r=0

[19] J. Schofield. (2008, Sep. 19). How Sarah Palin's Yahoo Mailbox Was So Easily Hacked. The Guardian, London, U.K. [Online]. Available: http://www.theguardian.com/technology/askjack/2008/sep/19/security.email

[20] Wikipedia. (2009). Sarah Palin Email Hack [Online]. Available: http://en.wikipedia.org/wiki/Sarah_Palin_email_hack

[21] Homeland Security Act, United States Congress, Washington, DC, USA, 2002.

[22] V. J. Baston and A. Y. Garnaev, "A search game with a protector," Naval Res. Logistics, vol. 47, no. 2, pp. 85–96, 2000.

[23] K. Iida, R. Hohzaki, and K. Sato, "Hide-and-search game with the risk criterion," J. Oper. Res. Soc. Jpn., vol. 37, no. 3, pp. 287–296, 1994.

[24] M. Baykal-Gursoy, Z. Duan, H. V. Poor, and A. Garnaev. (2014). Infrastructure security games. Eur. J. Oper. Res. [Online]. Available: http://dx.doi.org/10.1016/j.ejor.2014.04.033

[25] Website of Home Office, London, U.K. (2014). Current Threat Level [Online]. Available: http://www.homeoffice.gov.uk/counter-terrorism/current-threat-level/

[26] M. F. Neuts, "A multistage search game," J. Soc. Ind. Appl. Math., vol. 11, no. 2, pp. 502–507, 1963.

[27] M. Sakaguchi, "Two-sided search games," J. Oper. Res. Soc. Jpn., vol. 16, no. 4, pp. 207–225, 1973.

[28] D. Fudenberg and J. Tirole, Game Theory. Boston, MA, USA: MIT Press, 1991.

[29] A. Garnaev, G. Garnaeva, and P. Goutal, "On the infiltration game," Int. J. Game Theory, vol. 26, no. 2, pp. 215–221, 1997.

[30] A. Garnaev, "A remark on a helicopter and submarine game," Naval Res. Logistics, vol. 40, no. 5, pp. 745–753, 1993.


[31] A. Garnaev, W. Trappe, and C.-T. Kung, "Dependence of optimal monitoring strategy on the application to be protected," in Proc. 2012 IEEE Global Commun. Conf. (GLOBECOM 2012), Dec. 2012, pp. 1054–1059.

[32] A. Garnaev and W. Trappe, "Stationary equilibrium strategies for bandwidth scanning," in Multiple Access Communications (Lecture Notes in Computer Science), vol. 8310, M. Jonsson, A. Vinel, B. Bellalta, N. Marina, D. Dimitrova, and D. Fiems, Eds. Berlin, Germany: Springer-Verlag, 2013, pp. 168–183.

[33] E. Altman, K. Avrachenkov, and A. Garnaev, "Fair resource allocation in wireless networks in the presence of a jammer," Perform. Eval., vol. 67, no. 4, pp. 338–349, 2010.

[34] E. Altman, K. Avrachenkov, and A. Garnaev, "Jamming in wireless networks under uncertainty," Mobile Netw. Appl., vol. 16, no. 2, pp. 246–254, 2011.

[35] G. He, M. Debbah, and E. Altman, "K-player Bayesian waterfilling game for fading multiple access channels," in Proc. 3rd IEEE Int. Workshop Comput. Adv. Multi-Sensor Adapt. Process. (CAMSAP), Dec. 2009, pp. 17–20.

[36] A. Garnaev, Y. Hayel, and E. Altman, "A Bayesian jamming game in an OFDM wireless network," in Proc. 10th Int. Symp. Model. Optim. Mobile, Ad Hoc Wireless Netw. (WiOpt 2012), May 2012, pp. 41–48.

Andrey Garnaev received the M.Sc., Ph.D., and D.Sc. degrees from Saint Petersburg State University, Saint Petersburg, Russia, in 1982, 1987, and 1997, respectively. He is currently a Researcher with WINLAB, Rutgers University, New Brunswick, NJ, USA, and a Professor with the Department of Computer Modelling and Multiprocessor Systems, Saint Petersburg State University. His current research interests are in the applications of game theory and optimization theory in network security, wireless networks, pricing, and related fields. He has published in leading journals such as Mobile Networks and Applications, Telecommunication Systems Journal, Performance Evaluation, Naval Research Logistics, International Journal of Game Theory, and the Journal of Optimization Theory and Applications.

Melike Baykal-Gursoy received the Ph.D. degree in systems engineering from the University of Pennsylvania, Philadelphia, PA, USA, in 1988. She has been on the faculty of Rutgers University, New Brunswick, NJ, USA, since 1988, where she is an Associate Professor of Industrial and Systems Engineering. Her research interests are in the areas of stochastic modeling and optimization, Markov decision processes, stochastic games, and queueing, with applications to traffic, security, and supply chains. She has published in leading journals such as Mathematics of Operations Research, Queueing Systems: Theory and Applications, Mathematical Methods of Operations Research, and European Journal of Operational Research.

H. Vincent Poor (S’72–M’77–SM’82–F’87) received the Ph.D. degree in electrical engineering and computer science from Princeton University, Princeton, NJ, USA, in 1977. From 1977 to 1990, he was on the faculty of the University of Illinois at Urbana-Champaign, Champaign, IL, USA. Since 1990, he has been on the faculty of Princeton University, where he is the Michael Henry Strater University Professor of Electrical Engineering and the Dean of the School of Engineering and Applied Science. His research interests are in the areas of stochastic analysis, statistical signal processing, and information theory, and their applications in wireless networks and related fields such as social networks and smart grid. Among his publications in these areas are the recent books Principles of Cognitive Radio (Cambridge University Press, 2013) and Mechanisms and Games for Dynamic Spectrum Allocation (Cambridge University Press, 2014).

Dr. Poor is a member of the National Academy of Engineering and the National Academy of Sciences, and a foreign member of the Academia Europaea and the Royal Society. He is also a fellow of the American Academy of Arts and Sciences, the Royal Academy of Engineering, U.K., and the Royal Society of Edinburgh. He was a recipient of the Technical Achievement and Society Awards of the IEEE Signal Processing Society in 2007 and 2011, respectively. Recent recognition of his work includes the 2014 URSI Booker Gold Medal and honorary doctorates from Aalborg University, the Hong Kong University of Science and Technology, and the University of Edinburgh.

Dr. Poor is a member of the National Academy of Engineering and theNational Academy of Sciences, and a foreign member of the AcademiaEuropaea and the Royal Society. He is also a fellow of the AmericanAcademy of Arts and Sciences, the Royal Academy of Engineering, U.K.,and the Royal Society of Edinburgh. He was a recipient of the TechnicalAchievement and Society Awards of the IEEE Signal Processing Societyin 2007 and 2011, respectively. Recent recognition of his work includesthe 2014 URSI Booker Gold Medal, the honorary doctorates from AalborgUniversity, the Hong Kong University of Science and Technology, and theUniversity of Edinburgh.