
On the Evolution of Adversary Models (from the beginning to sensor networks)


Page 1

On the Evolution of Adversary Models (from the beginning to sensor networks)

Virgil D. Gligor
Electrical and Computer Engineering
University of Maryland, College Park, MD 20742
[email protected]

Lisbon, Portugal, July 17-18, 2007

Page 2

Overview

1. New technologies often require a new adversary definition
- a continuous state of vulnerability

2. Why is the new adversary different?
- e.g., sensor networks, mesh networks, MANETs
- countermeasures

3. Challenge: find "good enough" security countermeasures

4. Proposal: an Information Assurance Institute

Page 3

A system without an adversary definition cannot possibly be insecure; it can only be astonishing…

… astonishment is a much underrated security vice. (Principle of Least Astonishment)

Page 4

1. New Technology > Vulnerability ~> Adversary <~> Methods & Tools

Why is an adversary definition a fundamental concern?

- Sharing of user-mode programs & data; computing utility & subsystems (early to mid 1960s)
  vulnerability: confidentiality and integrity breaches; system penetration
  adversary: untrusted user-mode programs
  methods & tools: system vs. user mode ('62 ->); rings, security kernel ('65, '72); FHM theory ('75) and tool ('91)*; access policy models ('71)

- Shared stateful services (e.g., DBMS, network protocols, dynamic resource allocation) (early to mid 1970s)
  vulnerability: DoS instances
  adversary: untrusted user processes; concurrent, coordinated attacks
  methods & tools: DoS = a different problem ('83-'85)*; formal specification & verification ('88)*; DoS models ('92 ->)

- PCs, LANs; public-domain crypto (mid 1970s)
  vulnerability: read, modify, block, replay, forge messages
  adversary: "man in the middle"; active, adaptive network adversary
  methods & tools: informal: NS, DS ('78-'81); semi-formal: DY ('83); Byzantine ('82 ->); crypto attack models ('84 ->); authentication protocol analysis ('87 ->)

- Internetworking (mid to late 1980s)
  vulnerability: large-scale effects: worms, viruses, DDoS (e.g., flooding)
  adversary: geographically distributed, coordinated attacks
  methods & tools: virus scans, tracebacks, intrusion detection (mid '90s ->)

2. Technology cost -> 0, yet security concerns persist

Page 5

Continuous State of Vulnerability

New Technology > New Vulnerability (+/- O(months)) ~> New Adversary Model (+O(years)) <~> New Analysis Methods & Tools (+O(years))

Reuse of old (secure) systems & protocols:

New Technology ~> New Vulnerability vs. Old Adversary Model => mismatch

… a perennial challenge ("fighting old wars")

Page 6

New Technology Example: Sensor Networks

Claim

Sensor networks introduce:

- new, unique vulnerabilities: nodes can be captured and replicated

- a new adversary: different from both the Dolev-Yao and the traditional Byzantine adversaries

and

- require new methods and tools: emergent algorithms & properties (for imperfect but good-enough security)

Mesh networks have similar, but not identical, characteristics

Page 7

Two Extreme Examples

Low end: smart cards (< $15)
- no tamper resistance
- non-invasive physical attacks: side-channel (timing, DPA); unusual operating conditions (temperature, power, clock glitches)
- invasive physical attacks: chip removal from plastic cover; microprobes, electron beams

High end: IBM 4764 co-processor (~ $9K)
- tamper resistance, real-time response
- independent battery, secure clock
- battery-backed RAM (BBRAM)
- wrapping: several layers of a non-metallic grid of conductors in a grounded shield to reduce detectable EM emanations
- tamper-detection sensors (+ battery): temperature, humidity, pressure, voltage, clock, ionizing radiation
- response: erase BBRAM, reset device

Limited Physical Node Protection

Page 8

Observation: a single on-chip secret key is sufficient to protect (e.g., via Authenticated Encryption) many other memory-stored secrets (e.g., node keys)
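To make the observation concrete, here is a minimal sketch in Python, assuming the `cryptography` package and AES-GCM as the authenticated-encryption scheme; the helper names are hypothetical, and in practice the master key would come from the hardware (e.g., a PUF, below) rather than from `generate_key`.

```python
# Sketch: one on-chip master key protects many memory-stored node keys
# via authenticated encryption (AES-GCM). Names are hypothetical.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM.generate_key(bit_length=128)  # stand-in for the on-chip secret
aead = AESGCM(master_key)

def seal_node_key(node_id: bytes, node_key: bytes) -> bytes:
    """Encrypt and authenticate a node key; the node ID is bound as AAD."""
    nonce = os.urandom(12)
    return nonce + aead.encrypt(nonce, node_key, node_id)

def open_node_key(node_id: bytes, blob: bytes) -> bytes:
    """Recover a node key; raises InvalidTag if blob or node_id was altered."""
    nonce, ct = blob[:12], blob[12:]
    return aead.decrypt(nonce, ct, node_id)

# Sealed node keys can now live in ordinary, unprotected memory:
blob = seal_node_key(b"node-42", os.urandom(16))
node_key = open_node_key(b"node-42", blob)
```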

Problem: how do we protect that single on-chip secret key?

Potential Solution: Physically Unclonable Functions (PUFs)

- observation: each IC has a unique timing signature

- basic PUF: a Challenge extracts a unique, secret Response (i.e., a secret key) from the IC-hidden, unique timing sequence

Limited Physical Node Protection

Page 9

Figure: basic PUF circuit [Jae W. Lee et al., VLSI '04]. An LFSR-generated Challenge (e.g., 128 bits b0, b1, b2, …, b128, with a feed-forward arbiter supplying an unknown challenge bit at b62) sets each switch straight (bi = 0) or crossed (bi = 1); the Arbiter at the end of the chain outputs 0 or 1 depending on which signal path through the IC wins the race, and repeated challenges yield a Response of, e.g., 255 bits.
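Since the circuit is hard to render in text, a toy simulation of the standard additive-delay abstraction may help; the model, names, and parameters here are our simplification, not the VLSI '04 implementation.

```python
# Additive-delay abstraction of an arbiter PUF: each switch stage
# contributes a delay difference, a challenge bit of 1 crosses the two
# paths (flipping the accumulated difference), and the Arbiter outputs
# the sign of the final difference.
import random

N_STAGES = 128

def make_ic(seed):
    """Per-IC manufacturing variation: one delay difference per stage."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(N_STAGES)]

def arbiter_bit(ic, challenge):
    """challenge: N_STAGES bits; returns the Arbiter's 0/1 output."""
    delta = 0.0
    for bit, d in zip(challenge, ic):
        delta = (-delta if bit else delta) + d  # bit = 1 crosses the paths
    return 1 if delta > 0 else 0

rng = random.Random(7)
challenges = [[rng.randrange(2) for _ in range(N_STAGES)] for _ in range(255)]
ic_a, ic_b = make_ic(1), make_ic(2)
resp_a = [arbiter_bit(ic_a, c) for c in challenges]
resp_b = [arbiter_bit(ic_b, c) for c in challenges]
# Distinct ICs disagree on roughly half of a 255-bit Response:
print(sum(a != b for a, b in zip(resp_a, resp_b)))
```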

Page 10

The basic PUF counters:

- brute-force attacks (2^128 challenge-response pairs => impractical)

- duplication (different timing => different secret Response)

- invasive attacks (timing modification => different secret Response)

However,

Pr. 1: an adversary can build a timing model of the Arbiter's output => can build a clone for secret-key generation (see the sketch below)

Pr. 2: the Arbiter's output (i.e., secret-key generation) is unreliable. Reality: intra-chip timing variation (e.g., temperature, pressure, voltage) => errors in the Arbiter's output (e.g., max. error: 4-9%)

Basic PUF circuit [Jae W. Lee et al. VLSI ‘04]
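Pr. 1 can be made concrete: under the additive-delay abstraction of the previous sketch, the Arbiter's decision is a linear threshold function of parity features of the challenge, so ordinary least squares on observed challenge-response pairs fits an accurate software clone. Sizes and names below are ours; attacking a fabricated circuit requires more care.

```python
# Model-building attack sketch: learn a clone of the arbiter from
# challenge-response pairs (toy parameters, additive-delay model).
import numpy as np

N, TRAIN, TEST = 64, 4000, 1000
rng = np.random.default_rng(0)
delays = rng.normal(size=N)        # the IC's secret timing (to be cloned)

def responses(ch):
    """Arbiter bits for a (rows, N) 0/1 challenge matrix (model as above)."""
    delta = np.zeros(len(ch))
    for i in range(N):
        delta = np.where(ch[:, i] == 1, -delta, delta) + delays[i]
    return (delta > 0).astype(int)

def features(ch):
    """Parity features phi_i = prod_{j>i} s_j, with s_j = 1 - 2*c_j."""
    s = 1 - 2 * ch
    suffix = np.cumprod(s[:, ::-1], axis=1)[:, ::-1]  # prod_{j>=i} s_j
    phi = np.ones(ch.shape, dtype=float)
    phi[:, :-1] = suffix[:, 1:]                       # drop s_i itself
    return phi

C = rng.integers(0, 2, size=(TRAIN + TEST, N))
y = responses(C)
w, *_ = np.linalg.lstsq(features(C[:TRAIN]), 2.0 * y[:TRAIN] - 1.0, rcond=None)
clone = (features(C[TRAIN:]) @ w > 0).astype(int)
print("clone accuracy:", (clone == y[TRAIN:]).mean())  # typically > 0.95
```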

Page 11

Suggested PUF circuit [Ed Suh et al. ISCA ‘05]

Solution to Pr. 1: hash the Arbiter's output to provide the new Response
- the Arbiter's output cannot be discovered from known Challenges and new Responses

Solution to Pr. 2: add Error-Correcting Codes (ECCs) on the Arbiter's output

- e.g., use BCH(n, k, d): n (timing bits) = k (secret bits) + b (syndrome bits), correcting up to (d-1)/2 errors

- BCH(255, 63, 61) => up to 30 errors (more than 10% of n, above the maximum expected number) in the Arbiter's output are corrected; more than 30 errors? (probability is 2.4 x 10^-6)

- the probability of an incorrect output is smaller but not zero, so: hash the Arbiter's output and verify it against the stored Hash(Response)

Page 12

Suggested PUF circuit

Figure: as on page 9, a known LFSR Challenge (e.g., 128 bits b0 … b128, feed-forward arbiter at b62) drives the IC's Arbiter; its 255-bit output now feeds a BCH block, which emits a known Syndrome (e.g., 192 bits), and a Hash block, which emits the secret Response.

generate response: C -> R, S; retrieve response: C, S -> R

However, the Syndrome reveals some (e.g., b = 192) bits of the Arbiter's output (n = 255).

(Off-line) Verifiable-Plaintext Attack: get C, S, and hash(R); guess the remaining (e.g., 63) bits of the Arbiter's output; verify the new R; repeat verifiable guesses until the Arbiter's output is known; discover the secret key (see the toy version below).
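A toy version of this attack shows why syndrome leakage matters; the parameters are shrunk (16 unknown bits rather than 63) so the search finishes instantly, the BCH algebra is abstracted into "the first b bits leak," and SHA-256 stands in for both hash stages.

```python
# Toy off-line verifiable-plaintext attack against the suggested circuit.
import hashlib, itertools, secrets

n, k = 24, 16                    # toy sizes; b = n - k bits leak via the syndrome
b = n - k
arbiter = bytes(secrets.randbelow(2) for _ in range(n))  # IC's secret output
leaked = arbiter[:b]                                     # syndrome leakage (abstracted)
response = hashlib.sha256(arbiter).digest()              # R = Hash(arbiter output)
stored = hashlib.sha256(response).hexdigest()            # verifier keeps Hash(R)

# Enumerate the unknown k bits; every guess is verifiable off-line.
for guess in itertools.product((0, 1), repeat=k):
    candidate = leaked + bytes(guess)
    r = hashlib.sha256(candidate).digest()
    if hashlib.sha256(r).hexdigest() == stored:
        print("arbiter output recovered:", candidate == arbiter)
        break
```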

Page 13

Some Characteristics of Sensor Networks

1. Ease of network deployment and extension
- scalability => simply drop sensors at desired locations
- key connectivity via key pre-distribution => neither administrative intervention nor TTP interaction (see the sketch after this list)

2. Low-cost, commodity hardware
- low cost => physical node shielding is impractical => ease of access to internal node state
(Q: how good would physical node shielding have to be to prevent access to a sensor's internal state?)

3. Unattended node operation in hostile areas
=> the adversary can capture and replicate nodes (and node states)
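The key-connectivity bullet above refers, we assume, to random key pre-distribution in the style of Eschenauer and Gligor (ACM CCS '02): each node is pre-loaded with k keys drawn from a pool of P, and two neighbors can link directly iff their key rings intersect. A minimal sketch of the connectivity calculation:

```python
# Probability that two randomly pre-loaded key rings share a key.
from math import comb

def p_share(P, k):
    """Probability that two random k-key rings from a P-key pool intersect."""
    return 1 - comb(P - k, k) / comb(P, k)

# e.g., a 10,000-key pool and 75-key rings directly link ~43% of neighbor
# pairs; path keys then cover the rest of the neighborhood.
print(p_share(10_000, 75))
```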

Page 14

Replicated Node Insertion: How Easy?

Figure: three neighborhoods i, j, k. Inside NEIGHBORHOOD i, the captured node 3 holds keys shared inside the neighborhood (with nodes 1, 2, and 3i) plus a path key; it also holds keys shared outside the neighborhood, reaching into NEIGHBORHOOD j and NEIGHBORHOOD k.

Page 15

Attack Coordination among Replicas: How Easy?

Figure: the node captured in NEIGHBORHOOD i (node 3) is inserted as Node Replica 1 (also ID 3) in NEIGHBORHOOD j and as Node Replica 2 (also ID 3) in NEIGHBORHOOD k; the replicas collude.

Note: replica IDs are cryptographically bound to pre-distributed keys and cannot be changed

Page 16

New vs. Old Adversary

The old (Dolev-Yao) adversary can
- control network operation
- act as a man in the middle: read, replay, forge, block, modify, insert messages anywhere in the network
- send/receive any message to/from any legitimate principal (e.g., node)
- act as a legitimate principal of the network

The old (Dolev-Yao) adversary cannot
1) adaptively capture legitimate principals' nodes and discover a legitimate principal's secrets
2) adaptively modify the network and trust topology (e.g., by node replication)

Old Byzantine adversaries can do 1) but not 2); consensus problems impose fixed thresholds for captured nodes (e.g., t < n/2, t < n/3) and a fixed number of nodes, n.

Page 17

Countermeasures for Handling the New Adversary?

1. Detection and recovery
- Ex.: detection of node-replica attacks
- Cost? traditional vs. emergent protocols
- Advantage: always possible, good-enough detection
- Disadvantage: damage is possible before detection

2. Avoidance: early detection of the adversary's presence
- Ex.: periodic monitoring
- Cost vs. timely detection? false negatives/positives?
- Advantage: avoids damage done by the new adversary
- Disadvantage: not always practical in MANETs, sensor and mesh networks

3. Prevention: survive attacks by "privileged insiders"
- Ex.: subsystems that survive administrators' attacks (e.g., auth)
- Cost vs. design credibility? manifest correctness
- Advantage: prevents damage; Disadvantage: very limited use

Page 18

Example of Detection and Recovery

(IEEE S&P, May 2005)

- naïve: each node broadcasts <ID, "locator," signature>
  perfect replica detection: ID collisions with different locators
  complexity: O(n^2) messages

- realistic: each node broadcasts <ID, "locator," signature> locally; local neighbors further broadcast it to g << n random witnesses
  good-enough replica detection: ID collision with different locators at a witness
  detection probability: 70-80% is good enough
  complexity: O(n x sqrt(n)) messages (see the sketch below)
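The realistic variant's detection probability is essentially a birthday-style collision between two random witness sets; the simulation below is our abstraction (signatures and geographic forwarding elided), not the paper's protocol.

```python
# Estimate the chance that two replicas' claims reach a common witness.
import random

def witness_set(rng, n, g):
    """g distinct random witnesses chosen from the n nodes."""
    return set(rng.sample(range(n), g))

def detection_prob(n=10_000, g=100, trials=2_000, seed=1):
    """Fraction of trials where the two claims share at least one witness."""
    rng = random.Random(seed)
    hits = sum(bool(witness_set(rng, n, g) & witness_set(rng, n, g))
               for _ in range(trials))
    return hits / trials

# With g ~ sqrt(n) the overlap probability is ~ 1 - exp(-g*g/n) ~ 0.63 here;
# tuning g trades messages for the 70-80% detection the slide cites.
print(detection_prob())
```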

Page 19

A New Application: Distributed Sensing

Page 20

A New Application: Distributed Sensing

Application: a set of m sensors observe and signal an event
- each sensor broadcasts "1" whenever it senses the event; else, it does nothing
- if at least t of the m broadcasts are received, all m sensors signal the event to their neighbors; else they do nothing (see the sketch below)

New (distributed-sensing) adversary
- captures nodes; forges, replays, or suppresses (jams) broadcasts (within the same or across different sessions)
- increases the broadcast count with outsiders' false broadcasts

Operational constraints
- the absence of the event cannot be sensed (e.g., no periodic "0" broadcasts)
- broadcasts are reliable and synchronous (i.e., counted in sessions)

Adversary goals: violate integrity (i.e., if t <= m/2, issue t false broadcasts); deny service (i.e., if t > m/2, suppress m-t+1 broadcasts)
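A minimal sketch of the t-out-of-m session rule above; the class and method names are ours, and reliable, synchronous broadcast is assumed away per the operational constraints.

```python
class SensingSession:
    """One synchronous, reliable broadcast session for a group of m sensors."""

    def __init__(self, m: int, t: int):
        self.m, self.t = m, t
        self.heard: set[int] = set()     # IDs whose "1" broadcast arrived

    def broadcast_one(self, sensor_id: int) -> None:
        """A sensor broadcasts "1" only when it observes the event;
        absence of the event produces no broadcast at all."""
        self.heard.add(sensor_id)        # duplicates count once

    def signal_event(self) -> bool:
        """Every sensor signals the event iff at least t broadcasts arrived."""
        return len(self.heard) >= self.t

session = SensingSession(m=6, t=4)
for sid in (1, 3, 5, 6):                 # four of the six sensors sense the event
    session.broadcast_one(sid)
print(session.signal_event())            # True: the t = 4 threshold is met
```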

Page 21

An Example: distributed revocation decision [IEEE TDSC, Sept. 2005]

Figure: the nodes around the revocation target (1 through 10) form its keying neighborhood within a larger communication neighborhood (nodes 11-14 lie outside the keying neighborhood). With m = 6 and t = 4 votes cast in a session, the target is revoked, and the revocation decision is then propagated.

Page 22

New vs. Old Adversary

Is this a (reactive) Byzantine agreement problem? No:
- in Byzantine agreement, both a global event and its absence are ("1"/"0") broadcast by each node
- there is a strong constraint on t: no PKI => t > 2m/3; PKI => t > m/2
- group membership is fixed and known

New (distributed-sensing) adversary =/= old (Byzantine) adversary:
- the new adversary need not forge, initiate, or replay "0" broadcasts
- the new adversary's strength depends on a weaker t (e.g., t < m/2)
- the new adversary may modify membership to increase the broadcast count (> t)

Page 23

Conclusions

1. New technologies => new adversary definitions
- avoid "fighting the last war"
- security is a fundamental concern of IT

2. There is no single method of countering the new, powerful adversaries
- detection
- avoidance (current focus)
- prevention (future)

3. How effective are the countermeasures?
- they provide "good enough" security, e.g., probabilistic security properties