
1

The Data Dissemination Problem

A region requires event-monitoring (harmful gas, vehicle motion, seismic vibration, temperature, etc.)

Deploy sensors forming a distributed network

On event, sensed and/or processed information delivered to the inquiring destination

Both static and mobile cases
  Static: DD & GRAB
  Mobile: TTDD

[Figure: a sensor field. Events are detected by sensor sources; data dissemination delivers the information to the sensor sink.]

2

Design Guidelines

Application-aware paradigm to facilitate efficient aggregation and delivery of sensed data to the inquiring destination
  Data centric
  In-network data processing: aggregation, filtering, compression, etc.

Leverage the scale of the node population to overcome the limitations of individual sensor nodes
  A new way to achieve robustness

Challenges: scalability, energy efficiency, robustness / fault tolerance in outdoor areas, efficient routing (multiple source-destination pairs)

3

General Solution Approach

Gradient-based design
  A publish/subscribe approach
  Emulates how water flows from a hill to a valley
  Establishes gradients along the way

Two methods to build gradients
  Directed diffusion: an explicit vector that points from one node to another
  GRAB: field based; builds a scalar field and uses the derivatives of the field to indicate direction

4

Directed Diffusion

Typical IP-based networks
  Require unique host ID addressing
  Applications are end-to-end; routers are unaware of them

Directed diffusion uses publish/subscribe
  The inquirer expresses an interest, I, using attribute values
  Sensor sources that can service I reply with data

5

Directed Diffusion

• In-network data processing (e.g., aggregation, caching)

• Application-aware communication primitives
  – expressed in terms of named data (not in terms of the nodes generating or requesting data)

– Query/reply round

• Distributed algorithms using localized interactions and measurement based adaptation

6

Data Naming

Expressing an interest using attribute-value pairs, e.g.:

Type = Wheeled vehicle    // detect vehicle location
Interval = 20 ms          // send events every 20 ms
Duration = 10 s           // send for the next 10 s
Field = [x1, y1, x2, y2]  // from sensors in this area

Other interest-expressing schemes are possible, e.g., hierarchical (a different problem)
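The attribute-value interest above can be sketched as a plain dictionary plus a matching check. This is a hedged sketch, assuming Python; the names (`make_interest`, `matches`) and the field coordinates are illustrative, not from the paper.

```python
# Illustrative sketch of an attribute-value interest and a matching check.
# Function names and coordinates are made up for the example.

def make_interest():
    # The slide's example interest, as attribute-value pairs.
    return {
        "type": "wheeled vehicle",   # detect vehicle location
        "interval_ms": 20,           # send events every 20 ms
        "duration_s": 10,            # send for the next 10 s
        "field": (0, 0, 100, 100),   # rect [x1, y1, x2, y2], made-up coords
    }

def matches(interest, sensor_type, x, y):
    """A sensor can service the interest if it detects the named type
    and sits inside the requested field."""
    x1, y1, x2, y2 = interest["field"]
    return interest["type"] == sensor_type and x1 <= x <= x2 and y1 <= y <= y2
```

For instance, `matches(make_interest(), "wheeled vehicle", 50, 50)` holds, while a sensor outside the field or detecting a different type does not match.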

7

Basic Directed Diffusion: Setting up gradients

[Figure: the sink floods an interest through the field; gradients are set up toward the sink]

Interest = interrogation in terms of data attributes
Gradient = direction and strength

Similar to reverse link forwarding in multicast
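The gradient-setup step can be sketched as an interest flood over a neighbor graph. This is a deliberate simplification: real directed diffusion keeps a gradient (with a data rate) toward every neighbor an interest arrived from, while this sketch records only the first such neighbor, which is exactly the reverse-link idea noted above.

```python
from collections import deque

# Simplified sketch (assumption: first reception wins; real directed
# diffusion keeps a gradient per neighbor with an associated data rate).

def setup_gradients(neighbors, sink):
    """neighbors: dict mapping node -> iterable of neighbor nodes.
    Returns a dict mapping each node to its next hop toward the sink."""
    gradient = {sink: None}          # the sink has no next hop
    frontier = deque([sink])
    while frontier:
        node = frontier.popleft()
        for nbr in neighbors[node]:
            if nbr not in gradient:  # first copy of the interest wins here
                gradient[nbr] = node # data flows nbr -> node -> ... -> sink
                frontier.append(nbr)
    return gradient
```

On a line topology sink–A–B, the sketch yields `gradient["B"] == "A"` and `gradient["A"] == "sink"`, i.e., data retraces the interest's path in reverse.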

8

Basic Directed Diffusion

[Figure: the source sends low-rate events along multiple gradients; the sink reinforces the "best" path]

Sending data and reinforcing the "best" path
  Low-rate events
  Reinforcement = increased interest

9

Directed Diffusion and Dynamics

Recovering from node failure

[Figure: when a node on the reinforced (high-rate) path fails, the sink reinforces an alternate path that was carrying low-rate events]

10

Directed Diffusion and Dynamics

[Figure: after recovery, a stable high-rate path runs from source to sink; the remaining gradients stay at the low event rate]

11

More on Path Failure / Recovery

Link failure detected by reduced rate or data loss

Choose the next best link (i.e., compare links based on infrequent exploratory events)

Negatively reinforce the lossy link
  Either send the interest with the base (exploratory) data rate
  Or allow the neighbor's cache to expire over time

[Figure: source Src, sink, and intermediate nodes A, B, C, D, M. Link A-M is lossy: A reinforces B, B reinforces C, and so on; D need not be reinforced. A negatively (–) reinforces M, and M negatively (–) reinforces D.]

12

Average Dissipated Energy

In-network aggregation reduces DD's redundancy
Flooding performs poorly because of the multiple paths from source to sink

[Figure: average dissipated energy for flooding, diffusion, and multicast; flooding is worst]

13

Delay

DD finds least-delay paths, as omniscient multicast (OM) does – encouraging
Flooding incurs latency due to high MAC contention and collisions

[Figure: delay for flooding, diffusion, and multicast]

14

Event Delivery Ratio under node failures

Delivery ratio degrades with a higher percentage of node failures
Graceful degradation indicates efficient negative reinforcement

[Figure: event delivery ratio at 0%, 10%, and 20% node failures]

15

Loop Elimination

M gets the same data from both D and P, but P always delivers late due to looping
  M negatively reinforces (nr) P, P nr Q, Q nr M
  The loop {M Q P} is eliminated

Conservative nr is useful for fault resilience

[Figure: nodes A, D, M, P, Q; the M-Q-P loop is negatively reinforced away]

16

Local Behavior Choices

• For propagating interests
  – In the example: flood
  – More sophisticated behaviors possible: e.g., based on cached information, GPS

• For data transmission
  – Multi-path delivery with selective quality along different paths
  – Probabilistic forwarding
  – Single-path delivery, etc.

• For setting up gradients
  – Data-rate gradients are set up towards neighbors who send an interest
  – Others possible: probabilistic gradients, energy gradients, etc.

• For reinforcement
  – Reinforce paths, or parts thereof, based on observed delays, losses, variances, etc.
  – Other variants: inhibit certain paths because resource levels are low

17

DD Summary

Application-awareness – a beneficial tradeoff
  Data aggregation can improve energy efficiency
  Better bandwidth utilization

Network addressing is data centric
  Probably the correct approach for sensor-type applications

Notion of gradient (exploratory and reinforced)

Flexible architecture – enables configuration based on application requirements and tradeoffs

Implementation on Berkeley motes
  Network API, Filter API

18

GRAB Design

• Two protocols addressing the two problems
  – Robust data delivery: MESH (focus for this class)
    • Deliver data to the user in the face of node failures and packet losses
  – Long-lived system: PEAS
    • Extend sensing and data delivery lifetime in proportion to the total number of deployed nodes

19

Design Goal: a forwarding mesh with controllable width

• Forward each data packet along parallel paths to the sink

• These paths interleave to form a forwarding mesh

• The mesh starts at the source, ends at the sink

• The width of the mesh should be adjusted to achieve certain delivery reliability

[Figure: a forwarding mesh of interleaved paths from source to sink]

20

How to forward data along an adjustable mesh

• Build a cost field that gives each sensor the "implicit direction" towards the sink
  – By contrast, directed diffusion uses explicit directions for forwarding to the sink

• Assign each packet a certain amount of credit, which controls the width of the forwarding mesh

21

How to build a cost field?

• The sink broadcasts an ADV packet with cost 0

• Each node sets its cost to the smaller of
  – Its own cost (∞ initially)
  – The sum of the sender's cost and the link cost to the sender

• It then broadcasts its own cost
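As a sanity check on what this rule converges to: each node's final cost is its minimum total link cost to the sink. The sketch below computes that fixed point centrally with Dijkstra's algorithm; in GRAB the same values emerge from the distributed ADV broadcasts, so this is a reference computation, not the protocol itself.

```python
import heapq

# Reference computation (centralized) of the converged cost field.

def build_cost_field(links, sink):
    """links: dict mapping node -> list of (neighbor, link_cost) pairs.
    Returns each node's minimum total link cost to the sink."""
    cost = {sink: 0.0}
    heap = [(0.0, sink)]
    while heap:
        c, node = heapq.heappop(heap)
        if c > cost.get(node, float("inf")):
            continue                      # stale heap entry, skip
        for nbr, w in links[node]:
            new_c = c + w                 # sender's cost + link cost
            if new_c < cost.get(nbr, float("inf")):
                cost[nbr] = new_c         # keep the smaller, as on the slide
                heapq.heappush(heap, (new_c, nbr))
    return cost
```

On the example topology of the next slides (sink-B cost 1, sink-C cost 4, B-C cost 1.5), C converges to cost 2.5 via B rather than 4 via its direct link.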

22

Excessive messages in building the cost field

[Figure: sink (cost 0) with links: sink-B cost 1, sink-C cost 4, B-C cost 1.5]

The sink broadcasts; B sets cost 1 and C sets cost 4, and both broadcast
After hearing B, C lowers its cost to 2.5 (1 + 1.5) and broadcasts again

• The farther a node, the more it broadcasts
• An example: 1500 nodes in a 150 m x 150 m field; the farthest node broadcasts more than 150 times, and each node broadcasts 50 times on average
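The excessive-message behavior can be reproduced on the slide's three-node example. This is a rough sketch that approximates message timing with a FIFO queue: every cost improvement triggers a (re)broadcast, so nodes farther from the sink broadcast more often.

```python
from collections import deque

# Rough sketch of the naive scheme: every cost improvement schedules
# another broadcast. FIFO ordering stands in for real message timing.

def naive_flood(links, sink):
    """links: dict mapping node -> list of (neighbor, link_cost) pairs.
    Returns (final costs, per-node broadcast counts)."""
    cost = {sink: 0.0}
    broadcasts = {n: 0 for n in links}
    queue = deque([sink])             # pending broadcasts
    while queue:
        node = queue.popleft()
        broadcasts[node] += 1         # node broadcasts its current cost
        c = cost[node]
        for nbr, w in links[node]:
            if c + w < cost.get(nbr, float("inf")):
                cost[nbr] = c + w     # improvement: schedule a rebroadcast
                queue.append(nbr)
    return cost, broadcasts
```

On the slide's topology, C ends up broadcasting twice while B and the sink broadcast once each, matching the "C broadcasts again" step.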

23

A node waits for a time proportional to its cost

[Figure: same topology: sink-B cost 1, sink-C cost 4, B-C cost 1.5]

T = 0: the sink broadcasts; B and C set timers expiring after 1 and 4 seconds
T = 1: B broadcasts; C cancels its first timer and sets another one that expires 1.5 seconds later
T = 2.5: C broadcasts (cost 2.5) when its timer expires
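The timer rule above can be sketched with a priority queue of pending timers, taking (as on the slide) one second of delay per unit of cost, so a timer's fire time equals the node's current cost. A cheaper update cancels the old timer, and every node ends up broadcasting exactly once, in order of increasing cost.

```python
import heapq

# Sketch of the delayed-broadcast rule: fire time = current cost
# (assuming one second per cost unit, as in the slide's example).

def delayed_broadcasts(links, sink):
    """Returns (final costs, nodes in the order they broadcast)."""
    cost = {sink: 0.0}
    timers = [(0.0, sink)]           # (fire_time, node)
    order = []
    while timers:
        t, node = heapq.heappop(timers)
        if t > cost[node]:
            continue                 # timer was cancelled by a cheaper update
        order.append(node)           # timer expires: node broadcasts
        for nbr, w in links[node]:
            c = cost[node] + w
            if c < cost.get(nbr, float("inf")):
                cost[nbr] = c        # cancel old timer, set a new one at c
                heapq.heappush(timers, (c, nbr))
    return cost, order
```

On the slide's topology the broadcast order is sink, B (at T = 1), C (at T = 2.5), with each node broadcasting once, in contrast to the naive scheme.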

24

How to control the width of the mesh

• Each packet carries a credit

• A copy can take any path that requires a cost <= credit + Cost_source

• Different copies can take different paths, forming a mesh

[Figure: mesh between source and sink; nodes with cost <= credit + Cost_source lie inside the mesh, nodes with cost > credit + Cost_source lie outside]
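The membership rule above reduces to a one-line predicate. In this sketch, `cost_consumed` is the cost a copy has already spent since leaving the source and `cost_to_sink` is the receiving node's cost-field value; the names are illustrative.

```python
# Sketch of the mesh-width rule: a copy may pass through a node only if the
# total path cost it would incur (cost already consumed plus the node's
# remaining cost to the sink) stays within credit + Cost_source.

def within_mesh(cost_consumed, cost_to_sink, credit, cost_source):
    return cost_consumed + cost_to_sink <= credit + cost_source
```

For example, with Cost_source = 10 and credit = 2, a node at cost 7 reached after consuming 4 is inside the mesh (4 + 7 = 11 <= 12), but the same node reached after consuming 6 is outside (13 > 12). Raising the credit widens the mesh and so raises delivery reliability at the price of more transmissions.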

25

Allocate credit along different hops

• Calculate how much credit has been used:
  – alpha_used = P_consumed + C_A – C_source

• Calculate how much is remaining:
  – R_alpha = (alpha – alpha_used) / alpha

• Compare to a threshold:
  – R_thresh = (C_A / C_source)^2

[Figure: a packet leaves the source (cost Cost_source), consumes cost_consumed, and reaches node A (cost Cost_A) on its way to the sink]
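The three quantities above transcribe directly into code, with symbols kept as on the slide. The interpretation in the comments (forwarding more liberally while the remaining-credit ratio exceeds the threshold) follows this slide's description and may be simplified relative to the full GRAB paper.

```python
# Credit accounting for a packet that has reached node A.

def credit_used(p_consumed, cost_a, cost_source):
    # alpha_used = P_consumed + C_A - C_source: budget already spent
    # beyond the minimum-cost path through A
    return p_consumed + cost_a - cost_source

def credit_remaining_ratio(alpha, alpha_used):
    # R_alpha = (alpha - alpha_used) / alpha
    return (alpha - alpha_used) / alpha

def threshold(cost_a, cost_source):
    # R_thresh = (C_A / C_source)^2: shrinks as the packet nears the sink,
    # so spare credit is spent early and the mesh narrows toward the sink
    return (cost_a / cost_source) ** 2
```

For instance, with C_source = 10, C_A = 7, and P_consumed = 6, the packet has used alpha_used = 3 units of credit; with a budget alpha = 4 the remaining ratio is 0.25.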

26

Key Ideas of GRAB

Cost field to indicate direction
  Direction is indicated by "cost-decreasing" pointers

Use credit to build a "mesh of paths"
  Multiple paths increase robustness to node and link failures

Mesh is built on the fly
  • No pre-computed mesh
  • Can change on each packet if the credit included in the packet is different

27

Handling mobility

[Figure: a stimulus sensed by a source must be delivered to a sink that moves from one location to another]

28

Mobile Sink

• Excessive power consumption
• Increased wireless transmission collisions
• State maintenance overhead

29

Challenges

• Battery-powered sensor nodes
• Communication via wireless links
  – Bandwidth constraints
  – Load balancing
• Ad-hoc deployment at large scale
  – Fully distributed, without global knowledge
  – Large numbers of sources and sinks
• Unexpected sensor node failures
• Sink mobility
  – No a-priori knowledge of sink movement

30

Goal, Idea

• Efficient and scalable data dissemination from multiple sources to multiple, mobile sinks

• Two-tier forwarding model
  – The source proactively builds a grid structure
  – Localizes the impact of sink mobility on data forwarding
  – Only a small set of sensor nodes maintains forwarding state
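The grid structure can be sketched as follows, assuming (as a simplification) a rectangular field, a source anchored at a grid crossing, and a cell side of alpha. The function names are illustrative, and a sink's immediate dissemination node is taken here to be simply the nearest crossing.

```python
# Illustrative sketch of the source-anchored grid (simplified assumptions:
# rectangular field, source sits at a crossing, cell side alpha).

def grid_crossings(sx, sy, alpha, width, height):
    """All grid crossings, anchored at source (sx, sy), inside a
    width x height field with origin (0, 0)."""
    def axis(start, limit):
        v = start % alpha            # smallest non-negative crossing coord
        vals = []
        while v <= limit:
            vals.append(v)
            v += alpha
        return vals
    return [(x, y) for x in axis(sx, width) for y in axis(sy, height)]

def nearest_crossing(px, py, crossings):
    """The crossing closest to point (px, py) - used here as a stand-in
    for a sink's immediate dissemination node."""
    return min(crossings, key=lambda c: (c[0] - px) ** 2 + (c[1] - py) ** 2)
```

A sink then only needs to flood its query within one cell to reach a dissemination node, which is what keeps the impact of its mobility local.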

31

TTDD Basics

[Figure: the source builds a grid of dissemination nodes via data announcements; a sink's query travels to its immediate dissemination node and up the grid, and data flows back along the reverse path]

32

TTDD Mobile Sinks

[Figure: as the sink moves, trajectory forwarding relays data from its old immediate dissemination node to its new one; the grid of dissemination nodes itself is unchanged]

33

TTDD Multiple Mobile Sinks

[Figure: multiple sources each build their own grid; multiple mobile sinks receive data through their respective immediate dissemination nodes and trajectory forwarding]