UC Davis Computer Security Lab
Collaborative End-host Worm Defense Experiment
Senthil Cheetancheri, Denys Ma, Allen Ting, Jeff Rowe, Karl Levitt (UC Davis)
Phil Porras, Linda Briesemeister, Ashish Tiwari (SRI)
John Mark Agosta, Denver Dash, Eve Schooler (Intel Corp.)
Overview
• Introduction
• End-host Based Defense Approach
• Our DETER Experiment
• General Testing Tools
• A Worm Defense Testing Framework
• Simulations and Analysis
Cyber Defense Testing
• Validation Using Simulations and Analysis (L. Briesemeister, SRI)
– Quickly validate proposed cyber defense strategies
– Test a large variety of conditions and configurations
• Live Deployment
– Validation using real operating conditions
– Reluctance to deploy systems without serious testing
– Testing response to live attacks is impossible
• DETER Testbed (S. Cheetancheri, UC Davis)
– Tests defense systems using real code on real systems
– Attacks can be safely launched
– Bridges the testing gap between simulation and live deployment
The Specific Problem
• Oftentimes centralized network worm defenses are unavailable:
– Mobile users
– Home offices
– Small businesses
– Network defenses have been bypassed or penetrated
• Local end-host detector/responders can form a last line of defense against large-scale distributed attacks.
• End-host detectors are “weak”:
– Without specific attack signatures, false positives are high.
– Local information isn’t sufficient to decide whether a global attack is occurring.
• Can “weak” end-host detectors be combined to produce a “strong” global detector that triggers a response?
• How can a federation of local end-host detectors be used to detect worm attacks?
Our Approach…
• Motivated by:
Sequential Hypothesis Testing — Jung, J., Paxson, V., Berger, A., Balakrishnan, H., “Fast Portscan Detection Using Sequential Hypothesis Testing”, Proceedings of the IEEE Symposium on Security and Privacy, 2004
Collaborative Intrusion Detection and Inference — Agosta, J.M., Dash, D., Schooler, E., Intel Research
• Probabilistic inference by a federation of end-host local detection points.
• Protocol for distributing alert information within the federation.
Distributed Decision Chains
[Figure: matrix of likelihood ratios of Bernoulli trials. Element { i, j } = j local alerts seen after i steps, for i = 1…n and j = 1…i.]
• Worm threshold determines elements needed for an attack decision
• False alarm threshold determines elements for a false alarm decision
[Figure: two example decision chains through the matrix — one accumulating alerts, e.g. {1,1} → {2,2} → {3,3}, crossing the worm threshold (“WORM!”); the other accumulating mostly non-alerts, e.g. {1,1} → {2,1} → {3,1}, reaching the false-alarm decision.]
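As an illustration (not the project's code), the likelihood ratio stored in each matrix cell { i, j } can be computed directly from the per-detector false-positive rate Fp and false-negative rate Fn defined below — a minimal Python sketch:

```python
def likelihood_ratio(i, j, fp, fn):
    """Likelihood ratio for observing j local alerts in i Bernoulli trials.

    fp: false-positive rate of an individual detector, P[alert | no worm]
    fn: false-negative rate, P[no alert | worm]
    """
    # P[observations | worm] / P[observations | no worm]:
    # each alert contributes (1-fn)/fp, each non-alert fn/(1-fp).
    return ((1 - fn) / fp) ** j * (fn / (1 - fp)) ** (i - j)

# Example: 3 alerts in 3 steps with weak detectors (fp=0.2, fn=0.1)
# pushes the ratio far above 1, toward the worm threshold.
print(likelihood_ratio(3, 3, 0.2, 0.1))  # 91.125
```

Note how quickly a short chain of agreeing weak detectors moves the ratio: three consecutive alerts already multiply it by 4.5 per step, while a non-alert multiplies it by only 0.125.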
Sequential Hypothesis Testing
H0 – Hypothesis that there is no worm
H1 – Hypothesis that there is a worm

Y = 0 – No alert raised
    1 – Alert raised

P[Y=1 | H0] = Fp        P[Y=0 | H0] = 1 − Fp
P[Y=0 | H1] = Fn        P[Y=1 | H1] = 1 − Fn
TRW Parameters
Given:
  Fp – false positive rate of individual detectors
  Fn – false negative rate of individual detectors
Desired:
  dD – desired rate of detection
  dF – desired rate of false positives
Decision Making
Likelihood Ratio, L:

  L = (P[Y1|H1] · P[Y2|H1] ⋯ P[Yn|H1]) / (P[Y1|H0] · P[Y2|H0] ⋯ P[Yn|H0])

Decision:
  L < T0 → No worm, where T0 = (1 − dD) / (1 − dF)
  L > T1 → Worm, where T1 = dD / dF
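The decision rule above can be sketched as a small Python function (the name sprt_decide and the return values are ours, not the project's):

```python
def sprt_decide(observations, fp, fn, d_d, d_f):
    """Sequential probability ratio test over a stream of local alerts.

    observations: iterable of 0/1 values (alert raised or not)
    fp, fn: per-detector false-positive / false-negative rates
    d_d, d_f: desired detection / false-positive rates
    Returns "worm", "no_worm", or "undecided" if the stream runs out first.
    """
    t0 = (1 - d_d) / (1 - d_f)   # lower threshold: declare no worm
    t1 = d_d / d_f               # upper threshold: declare worm
    ratio = 1.0
    for y in observations:
        if y:
            ratio *= (1 - fn) / fp      # P[Y=1|H1] / P[Y=1|H0]
        else:
            ratio *= fn / (1 - fp)      # P[Y=0|H1] / P[Y=0|H0]
        if ratio >= t1:
            return "worm"
        if ratio <= t0:
            return "no_worm"
    return "undecided"
```

With fp = 0.2, fn = 0.1, dD = 0.99, dF = 0.01, four consecutive alerts cross T1 = 99 and trigger the worm decision, while three consecutive non-alerts fall below T0 and kill the chain.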
Experiment Components
• Local Detectors
• Defense Agents
• “Vulnerable” Service
• Safe Worm Generator
• Background Traffic Generator
End-host Detector and Defense Agents
• Implement a “weak” end-host local detector
– An alert is generated for all connections to un-serviced ports
– False positive rate for local detection is high (one alert per hour per machine at UC Davis)
• Defense agents send local detector alerts to the defense agents on other end-hosts
– Recipients are chosen at random for each alert
• Local alerts are aggregated into a global alert message. Agents use probabilistic inference to decide whether this is likely a worm or a false alarm, or propagate the global alert message if no decision has been reached.
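One agent step of this aggregate-decide-or-forward protocol might look like the following sketch (the message format and function name are assumptions for illustration; thresholds t0 and t1 are the TRW thresholds above):

```python
import random

def handle_alert(msg, local_alert, peers, fp, fn, t0, t1, rng=random):
    """One step of the (assumed) decision-chain protocol at a defense agent.

    msg: running counts from upstream hosts, e.g. {"steps": 3, "alerts": 3}
    local_alert: 1 if this host's weak detector has fired, else 0
    peers: candidate hosts to forward the aggregated message to
    """
    i = msg["steps"] + 1               # one more host has weighed in
    j = msg["alerts"] + local_alert    # running count of local alerts
    ratio = ((1 - fn) / fp) ** j * (fn / (1 - fp)) ** (i - j)
    if ratio >= t1:
        return "WORM"                  # broadcast a global worm alert
    if ratio <= t0:
        return "FALSE_ALARM"           # the chain dies out here
    # Undecided: forward the updated message to a randomly chosen peer.
    return ("FORWARD", rng.choice(peers), {"steps": i, "alerts": j})
```

Choosing the next recipient at random is what the slide describes; other strategies (multicast, hierarchical aggregation) appear under Next Steps.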
Experimental Setup
• 200 Virtual Nodes on 40 Physical nodes.
• All nodes are on a single DETER LAN.
• 50 nodes are vulnerable
– Alarms aren’t generated for worm connections to these nodes
• All nodes have a local detector and defense agent
• Single node serves as the external infection source. Internal infected hosts also generate worm traffic
Detection Time
Random Scanning Worm @ 2 scans/sec
Full Saturation: 12 minutes after launch
Worm Detection: 4 minutes after launch
Infected Nodes: 5 (10%)
Results
• For random scanning worm:
– Full saturation of infections occurs at 15 minutes post launch
– Worm detection trigger at 4 minutes after launch with 10% of vulnerable machines already infected.
– Global worm alert broadcast could protect 90%
• False alarms
– At 4 false alarms per minute over all 200 machines (from the UC Davis laboratory network), no worm triggers
– Live testing is needed to evaluate false alarm performance over a longer time period
Summary
• Simulations by Intel Research show that a distributed TRW algorithm can be useful to detect worms using only “weak” end-host detectors.
• Emulated testing confirms that the algorithm and protocol work on live machines in the presence of real traffic.
• Code tested and working on real Unix machines in DETER testbed will be deployed in the UCD and Intel networks for further testing and evaluation.
Testing Tools
• NTGC - A tool for Network Traffic Generation Control and Coordination
• WormGen – Safe worm generation for cyber defense testing
• A framework for worm defense evaluation
NTGC: A tool for Network Traffic Generation Control and Coordination
• To develop a background traffic generation tool which can:
– Build a traffic model by extracting important traffic parameters from a real traffic trace (tcpdump format)
– Automatically configure the testbed nodes to generate traffic based on the traffic model extracted from real traffic
– Utilize existing traffic generators (e.g. TG, D-ITG, Harpoon) as low-level packet generation components
– Generate real TCP connections
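The first step — turning a packet trace into flow-level parameters — can be illustrated with a toy stand-in for NTGC's traffic analyzer (this is not the project's code; the record format and flow key are assumptions):

```python
from collections import defaultdict

def build_flows(packets):
    """Aggregate (time, src, dst, sport, dport, length) packet records
    into per-flow statistics, keyed by (src, sport, dst, dport)."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0,
                                 "start": None, "end": None})
    for t, src, dst, sport, dport, length in packets:
        f = flows[(src, sport, dst, dport)]
        f["packets"] += 1
        f["bytes"] += length
        # Track the flow's first and last packet timestamps.
        f["start"] = t if f["start"] is None else min(f["start"], t)
        f["end"] = t if f["end"] is None else max(f["end"], t)
    return dict(flows)
```

The real tool additionally reconstructs complete TCP connections and emits the result as XML flow data, as described under Architecture below.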
Architecture
NTGC consists of the following components:
• Traffic analyzer — takes the trace data as input and reconstructs complete TCP connections.
• Traffic filter — manipulates the traffic parameter data generated by the traffic analyzer.
• Network address mapping tool — maps the IP addresses of the packet trace into DETER experimental network IP addresses.
• Configuration file generator — takes the output from the traffic analyzer (or traffic filter) and compiles it into a TG- or TTCP-compatible configuration file; parses the flow data generated by the traffic analyzer and traffic filter, then sends the flow information to the corresponding remote hosts.
• Command and flow data dispatcher — sends commands to control the NTGC agents running on each DETER node.
• Low-level packet generators (e.g. TG, TTCP, D-ITG, Harpoon)
Modular Diagram

[Figure: raw traces 1…n feed the Traffic Analyzer (reconstructs TCP connections, merges traces, normalizes timestamps, generates flow data), which emits Connection Data and Flow Data; the Traffic Filter then applies filtering, address remapping, scale up/down, and duplicate/remove operations — guided by address remapping rules and a topology file — before passing results to the Configuration File Generator.]
NTGC Summary
• The traffic analyzer is able to generate the flow-level data in XML format.
• We are able to manipulate the traffic parameters within the XML format flow data.
• We tested the configuration file generation and dispatching features on the DETER testbed with a 40-node topology. The configuration file generator generated TG-compatible configuration files for all 40 nodes and dispatched them to all the nodes.
• We observed traffic being sent and received between all the experimental nodes, based on the traffic model derived from the WIDE trace.
WormGen – Safe Worm Generation
• On-demand distributed attack behavior is needed for the evaluation of defenses.
• Nobody wants to implement and deploy attacks that spread automatically using real vulnerabilities.
• How to produce realistic attack behavior without actually launching an attack?
• WormGen generates a propagating worm on the test network without using Malcode.
The worm simulation network consists of several networked agents and a single controller.

The controller assigns each agent a role for a given worm:
1) Vulnerable (denoted by a red x)
2) Vulnerable and initially infected (red x with XML code)
3) Not vulnerable (denoted by a green check)

The controller sends a start command to the initially infected agent(s). Each agent processes the XML instructions, or “worm”, for information about how to spread.

The agent consults the PortScanRate element to determine the speed at which it “scans”. Based on the “probability” values of the RandomScan and LocalSubnetScan elements, the agent chooses which address range to target, then probes or simply sends the worm. Only the vulnerable agents are infected.

For each infected agent, a new address range is chosen based on the probability values, and the attack cycle continues.

When the worm is stopped, the controller gathers information from each agent and processes it into a report.
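The target-selection step driven by the LocalSubnetScan and RandomScan probabilities can be sketched as follows (a hypothetical illustration; the function name and /24-subnet assumption are ours, not WormGen's):

```python
import random

def pick_target(local_subnet, local_prob, rng=random):
    """Pick the next scan target: with probability local_prob stay on the
    agent's own subnet (LocalSubnetScan), otherwise choose a random
    address anywhere in the test network (RandomScan).

    local_subnet: the agent's /24 prefix as a string, e.g. "10.1.1"
    """
    if rng.random() < local_prob:
        # LocalSubnetScan: reuse the agent's subnet prefix.
        return "%s.%d" % (local_subnet, rng.randint(1, 254))
    # RandomScan: any address in the (emulated) address space.
    return ".".join(str(rng.randint(1, 254)) for _ in range(4))
```

Because the worm only ever sends harmless probe messages to agents, the same descriptor can dial scan rate and locality up or down without any real malcode being involved.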
Motivation: Provide a framework for easy evaluation of worm defenses in the DETER test-bed environment.
[Figure: framework layers — Worm, Topology, Defense, and Evaluation — built on top of a Test-bed API.]
Towards a Framework for Worm Defense Evaluation
The Framework itself
Features:
• Test-bed programming is transparent to the experimenter.
• Hooks for users’ defense, worms and background traffic replay.
• Event Control System for executing series of experiments in batch mode.
• Standardized vulnerable servers
• Worm library
Advantages
                            Current Approach     Our Approach
Approach                    Custom tools         Standardized tools
Time to first experiment    Hours to weeks       Hours
Setup : experiment time     10:1                 1:100
Testbed details knowledge   Required             Not required
Example: Hierarchical Defense
Analysis

[Figure: simulation plots comparing runs with no defense vs. defense turned on (only 5 iterations vs. 10 iterations).]
Future Work
• Traffic analysis on n/w components
• Provide default topologies
– business networks, academic networks, defense networks, etc.
• Counter the effect of scale-down.
• Provide a formal language to describe the API for this framework.
Next Steps
• Implement and test other cooperative protocols– Multicast– Channel Biasing– Hierarchical Aggregation
• Include a variety of local end-host detectors with differing performance — e.g., the more sophisticated Bayesian network model developed by Intel Corp.
• Optimize local detector placement in the cooperative network