Course “Algorithmic Foundations of Sensor Networks”
Lecture 2: Data propagation algorithms part II
Sotiris Nikoletseas, Professor
Department of Computer Engineering and Informatics, University of Patras, Greece
Spring Semester 2017-2018
1 / 85
The problem
“How can sensor p, via cooperation with the rest of the sensors in the network, propagate information about event E to the control center(s)?”
2 / 85
Lecture Overview
A. Data-centric networking – variants of DD
B. Data gathering with compression
C. Data discovery and querying
D. Detailed presentation of LEACH
E. Detailed presentation of Directed Diffusion
3 / 85
A. Data-Centric Networking
• A fundamental innovation of wireless sensor networking.
• Basic idea: routing, storage and querying can all be made more efficient if communication is based directly on application-specific data content instead of the traditional IP-style addressing, which does not take into account the content requested.
• Two great advantages of data-centric networking in terms of energy efficiency:
- Communication overhead for name binding is minimized.
- In-network processing as content moves through the network is enabled, via data aggregation and compression.
4 / 85
Pull Vs Push Diffusion (I)
• The basic version of Directed Diffusion (DD) is actually a two-phase pull mechanism:
- In phase 1, the sink pulls information from sources with relevant information by injecting interests.
- In phase 2, actual data is pulled down via reinforced path(s).
• A one-phase pull variant of DD:
- Eliminates reinforcement as a separate phase.
- The sink propagates the interest along multiple paths.
- The matching source directly picks the best of its gradient links to send data, and so on up the reverse path back to the sink.
• Critique:
- Potentially more efficient than two-phase pull DD.
- Assumes bidirectionality or sufficient knowledge of the links’ properties/qualities in each direction.
5 / 85
Pull Vs Push Diffusion (II)
• A second variant: push diffusion.
- The sink does not inject interests; instead, sources with event detections send exploratory data along multiple paths.
- The sink, if it has some relevant interests, reinforces one of these paths for data forwarding.
• Critique:
- The pull and push variants are each most appropriate for different kinds of applications.
- Pull is more efficient than push whenever there are many sources with a high data generation rate but only a few, rarely interested sinks.
- Push is more efficient when there are few, highly active sources but many, frequently interested sinks.
6 / 85
TEEN: Another Push-Based Data-Centric Protocol
• It is “threshold-sensitive energy efficient”.
• Nodes react immediately to drastic changes in the value of the sensed attribute.
• When this change exceeds a given threshold, the activated nodes communicate their changed value to a cluster-head for forwarding to the sink.
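This reactive behaviour can be sketched as a minimal, illustrative Python model. The class name is ours, and the hard/soft threshold pair follows the TEEN paper's terminology, while the slide itself mentions only a single threshold:

```python
# Sketch of TEEN-style threshold-triggered reporting (illustrative, not the
# protocol's actual message format). A node transmits to its cluster-head
# only when the sensed value exceeds a hard threshold AND has changed by
# more than a soft threshold since the last report.
class TeenNode:
    def __init__(self, hard_threshold, soft_threshold):
        self.hard = hard_threshold   # absolute value that must be exceeded
        self.soft = soft_threshold   # minimum change since the last report
        self.last_reported = None

    def sense(self, value):
        """Return True if this reading should be sent to the cluster-head."""
        if value < self.hard:
            return False
        if self.last_reported is not None and abs(value - self.last_reported) <= self.soft:
            return False
        self.last_reported = value
        return True
```

A node thus stays silent for small fluctuations, which is what makes TEEN suitable for reactive (event-driven) rather than periodic-reporting applications.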
7 / 85
B. Data Gathering With Compression
• Main idea: combine routing with in-network compression.
• Usual efficiency metric: reduce the total number of data transmissions, i.e. the cumulative number of bits carried over each hop, per round of data-gathering from all sources.
• Compression of correlated data may be more efficient than simple clustering techniques (e.g. LEACH) that are not correlation aware.
8 / 85
On the Impact of Compression
• An extreme case: the data from any number of sources can be combined into a single packet (e.g. suppression of duplicate data, when the sources generate identical data).
• If there are k sources, all located close to each other and far from the sink, then a route that combines their information close to the sources can achieve a k−fold reduction in transmissions, compared to each node sending its information separately without compression.
• In general, the optimal joint routing-compression structure for this case is a minimum Steiner tree construction problem, which is NP-hard. However, there exist polynomial solutions for special cases where the sources are close to each other.
9 / 85
Network Correlated Data Gathering (I)
• Consider the case when all nodes are sources but the level of the correlation can vary.
• When the data is completely uncorrelated, then clearly the shortest path tree (SPT) provides the best solution (minimizing the total transmission cost).
• In the general case, assume the following correlation model:
- Only nodes at the leaves of the tree (whose root is the sink) need to provide R bits.
- All interior nodes, which have side information from other nodes, need only generate r bits (r ≤ R) of additional information.
- The quantity ρ = 1 − r/R is called the correlation coefficient.
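As an illustration of this correlation model, the sketch below computes ρ and the total transmission cost of a given gathering tree, under the assumption (ours, for illustration) that each bit costs one transmission per hop on its way to the sink:

```python
# Illustrative cost model for the leaf/interior correlation model: leaves of
# the gathering tree contribute R bits, interior nodes only r <= R bits, and
# every bit costs one transmission per hop towards the sink.
# The tree is given as {node: parent}; the sink is the (implicit) root.
def correlation_coefficient(r, R):
    return 1 - r / R

def gathering_cost(tree, R, r):
    children = {}
    for node, parent in tree.items():
        children.setdefault(parent, []).append(node)

    def depth(node):
        d = 0
        while node in tree:       # walk up to the sink, counting hops
            node = tree[node]
            d += 1
        return d

    cost = 0
    for node in tree:
        bits = R if node not in children else r   # leaf vs interior node
        cost += bits * depth(node)
    return cost
```

With r = R (ρ = 0, no correlation) the cost reduces to the plain SPT cost, and the benefit of interior-node compression grows as ρ approaches 1.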
10 / 85
Network Correlated Data Gathering (II)
• As ρ increases, a travelling salesman path (TSP) provides an arbitrarily more efficient solution compared with the SPT. However, TSP is NP-hard.
• Good approximations are given by the following hybrid combinations of SPT and TSP:
- All nodes within some range from the sink (the larger ρ, the smaller this range) are connected via SPTs.
- Beyond them, an approximate TSP path is constructed by adding nearby nodes to each strand of the SPT.
11 / 85
Scale-free Aggregation (I)
• In practice, the degree of spatial correlation is a function of distance, and nearby nodes are able to provide higher compression than nodes at a greater distance.
• A certain model for spatial correlation considers a square grid of sensors, assuming that grid nodes have information about all readings of all nodes within a k−hop radius.
• Grid nodes can communicate with any of their four grid neighbours, and aggregation/compression is performed by suppressing redundant readings in the intermediate hops towards the sink.
12 / 85
Scale-free Aggregation (II)
Randomized routing of compressed data from a node (x, y) towards the sink at (0, 0).

• The sink is located at the bottom-left corner and randomized routing of compressed data is performed at a node (x, y) as follows:

Pℓ = x / (x + y): forward to the left grid node
Pb = y / (x + y): forward to the bottom grid node

• It is shown that a constant factor approximation (in expectation) to the optimal solution is achieved.
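The randomized forwarding rule can be simulated directly; a minimal sketch (the function name is ours):

```python
import random

# Sketch of the randomized routing step on the grid: at node (x, y) the
# packet moves left with probability x/(x+y) and down with probability
# y/(x+y), until it reaches the sink at (0, 0).
def route_to_sink(x, y, rng=random):
    path = [(x, y)]
    while (x, y) != (0, 0):
        if y == 0 or (x > 0 and rng.random() < x / (x + y)):
            x -= 1          # forward to the left grid node
        else:
            y -= 1          # forward to the bottom grid node
        path.append((x, y))
    return path
```

Every step decreases x + y by exactly one, so any route from (x, y) takes x + y hops; the randomization only spreads the load (and the aggregation opportunities) over many monotone paths.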
13 / 85
C. Data Discovery and Querying
• Besides end-to-end routing, data discovery and querying form an important communication primitive in sensor networks.
• The goal is to design efficient alternatives to the high-overhead naive flooding-based querying (FBQ), especially when it is not necessary to provide continuous information to the sink, but rather the sink is interested in a small part of the sensed data. In such cases, the sensed data may be stored locally and only transmitted in response to a query issued by the sink.
• Several querying techniques have been proposed, suchas:
- expanding ring search
- rumor routing
- the comb-needle technique
14 / 85
Types of Queries (I)
1 Continuous vs one-shot queries (i.e. queries for a long-duration data flow or a single piece of data).
2 Simple vs complex queries. Complex queries:
- combinations of multiple simple subqueries (e.g. what are the locations of nodes where (a) the light intensity is at least x and the humidity is at least y, or (b) the light intensity is at least z)
- aggregate queries requiring information from several sources, e.g. “find the average temperature readings from all nodes in a certain region R".
3 Queries for replicated data (available at multiple nodes) vs queries for unique data at a single node only.
4 Queries for historic vs current/future data.
15 / 85
Types of Queries (II)
Note:
• In truly continuous queries, the cost of initial querying (even via naive flooding) may be relatively insignificant.
• However, for one-shot data the cost and overhead of flooding may be prohibitively expensive.
• Also, in queries for replicated data, flooding techniques may return multiple copies of the same data when only one copy is necessary.
16 / 85
Expanding Ring Search
• It proceeds as a sequence of controlled floods, with the radius of the flood increasing at each step if the query has not been resolved at the previous step.
• The choice of the number of hops to search at each step can be optimized to minimize the expected search cost, using a dynamic programming technique.
• When there is enough replicated/cached information, the method is likely to improve performance considerably. However, in the absence of replicated/cached data, the energy saving achieved is marginal (less than 10%), while the delay increases a lot, and other methods are more relevant.
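A minimal sketch of expanding ring search, assuming a `flood(radius)` callback and a quadratic node-count cost model (both are illustrative assumptions, not part of the original scheme):

```python
# Sketch of expanding ring search: flood with an increasing radius until the
# query resolves. `flood(radius)` is an assumed callback returning the
# result if some node within `radius` hops holds the data, else None;
# `radii` is the (possibly DP-optimized) schedule of search radii.
def expanding_ring_search(flood, radii):
    cost = 0
    for radius in radii:
        cost += radius * radius     # crude cost model: nodes flooded ~ radius^2
        result = flood(radius)
        if result is not None:
            return result, cost
    return None, cost
```

The dynamic programming optimization mentioned above would choose the `radii` schedule so as to minimize the expected value of `cost` under a prior on the data's distance.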
17 / 85
Information-Driven Sensor Query Routing (IDSQ) (I)
• It is suitable for quite dynamic environments, when data is generated in a sporadic, unpredictable way.
• The query is routed in a way that maximizes the information gain, via a constrained anisotropic diffusion routing (CADR) method.
• CADR routes the query greedily, making a sequence of local decisions at intermediate steps, based on the sensor readings of neighbouring nodes.
• In each step the query is forwarded to a neighbour node optimizing some composite objective function that combines the information utility and communication costs. The local decisions can vary: choosing the neighbour with the highest objective function value, or with the steepest (local) gradient in the objective function, or the one maximizing a combination of the local gradient and the distance improvement to the (estimated) optimum location.
18 / 85
Information-Driven Sensor Query Routing (IDSQ) (II)
• Main advantages: if partial solutions are shipped back to the query-originating node, it is provided with incrementally better information as the query moves towards the global optimum. Also, the objective function can be designed to optimize the cost needed to route the query (e.g. the shortest path) or to maximize the information gain via an irregular walk with more steps.
19 / 85
Active Query Forwarding (ACQUIRE) (I)
• It treats the query as an “intelligent entity" that moves in the network towards the desired response, as a repeated sequence of three parts:
- Examine cache: Arriving at a node, the query first checks the node's existing cache to see if it is valid/fresh enough to directly resolve the query there.
- If not, the query performs a request update from nodes within a d−hop neighbourhood (via a controlled flood).
- Forward phase: If not resolved, the query is forwarded to another active node, located a sufficient number f of hops away, so that the controlled flood phases (request updates) do not overlap much.

Controlled d-hop update floods around successive active nodes, spaced f hops apart.
20 / 85
Active Query Forwarding (ACQUIRE) (II)
• The look-ahead parameter d offers a tunable trade-off: when d is small, the query is forwarded in a trajectory-based manner (like a random walk), but when d is large (in terms of the network diameter), the method resembles a flood. When d is small, the query needs to be forwarded more often but there are fewer update messages at each step, while for large d, fewer forwarding steps are needed but each one has more update messages.
• The optimal choice of d depends mostly on the sensor data dynamics, captured e.g. by the ratio of the data change rate in the network to the query generation rate. When the data dynamics is low, caches remain valid for a long time and the cost of large d can be amortized over several queries. However, when the data dynamics is very high, repeated flooding is required and a small d should be chosen.
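The d trade-off can be illustrated with a toy cost model (ours, not the paper's analysis): assume the query stops every f = 2d hops, each stop incurs a fixed forwarding cost plus an update flood over roughly d² nodes, and an `amortization` factor captures how many queries a cached update serves before the data changes:

```python
# Toy cost model for choosing ACQUIRE's look-ahead d (illustrative only).
def acquire_cost(d, diameter, forward_cost=1.0, amortization=1.0):
    steps = diameter / (2 * d)          # assume stops every f = 2d hops
    # per step: constant forwarding cost + update flood over ~d^2 nodes,
    # amortized over several queries when the data dynamics is low
    return steps * (forward_cost + d * d / amortization)

def best_lookahead(diameter, amortization, candidates=range(1, 11)):
    return min(candidates, key=lambda d: acquire_cost(d, diameter, amortization=amortization))
```

In this model the optimum grows with the amortization factor, matching the slide's intuition: low dynamics (long-lived caches) favor a large d, high dynamics favor a small d.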
21 / 85
Rumor Routing (I)
• It provides an efficient rendezvous mechanism combining push and pull approaches to get the desired information from the network.
• Sinks desiring information send queries through the network. Sources generating important events send notifications through the network.
• Both are treated as mobile agents. The event notification agents leave “trails", so that a query agent visiting a node with a trail from an event notification agent is able to find pointers to the corresponding source.
• The trajectories of both the event and the query agents can be either random walks (with built-in loop prevention) or more directed, e.g. straight lines.
• Substantial energy savings can be obtained compared with the two extremes of query flooding (pull) and event flooding (push).
22 / 85
Rumor Routing (II)
23 / 85
The Comb-Needle Technique (I)
• A similar combination of push and pull, based on intersections of queries and event notifications.
• Basic version: The queries build a horizontal comb-like routing structure, while events follow a vertical needle-like trajectory to meet the “teeth" of the comb.
• A key tunable parameter is the spacing between branches of the comb and, correspondingly, the length of the event notification trajectories:
- the spacing and length are chosen smaller when the event-to-query ratio is high (⇒ more pull, less push)
- when the event-to-query ratio is low, the spacing and length should be larger (⇒ less pull, more push)
- in adaptive versions, the spacing and length are adjusted dynamically and distributedly by the sources and sinks, based on local estimates of data and query frequencies, to address their fluctuations over time.
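One hypothetical way to pick the spacing, balancing a per-query comb cost of roughly n²/s against a per-event needle cost of roughly s on an n × n grid (a toy model of ours, not the protocol's actual rule):

```python
import math

# Toy spacing rule for the comb-needle structure: minimize
#   query_rate * n^2 / s  +  event_rate * s
# over the spacing s, which gives s* = n * sqrt(query_rate / event_rate).
# A high event-to-query ratio thus shrinks s (more pull, less push),
# matching the intuition on the slide.
def comb_spacing(n, event_rate, query_rate):
    s = n * math.sqrt(query_rate / event_rate)
    return max(1, min(n, s))     # clamp to a valid grid spacing
```

An adaptive implementation would re-evaluate this with locally estimated rates, as the slide's adaptive versions do.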
24 / 85
The Comb-Needle Technique (II)
25 / 85
The Comb-Needle Technique (III)
• This basic comb structure (queries forming a horizontal comb and events the vertical needles) is actually a global-pull, local-push model, best suited for conditions when the queries are less frequent than events.
• In the case when queries are more frequent than events and there are multiple querying nodes, a reverse-comb structure can be used (events form the vertical comb, queries form the horizontal needle trajectories), towards a global-push, local-pull structure.
26 / 85
D. Flat vs Hierarchical Routing
• Flat Routing: All nodes in the network have a similar role regarding the routing of data. No special nodes are used.
- Example: Directed Diffusion
• Hierarchical Routing: Special nodes assume greater responsibility for the routing of data than most nodes inside the network. Super nodes – cluster-heads.
- Example: LEACH
27 / 85
LEACH (Low Energy Adaptive Clustering Hierarchy)
W.R. Heinzelman, A. Chandrakasan & H. Balakrishnan
“Energy-Efficient Communication Protocol for Wireless Microsensor Networks"
33rd Hawaii International Conference on System Sciences (HICSS), 2000
28 / 85
Main features of LEACH
What is LEACH?
• A cluster-based protocol that minimizes energy dissipation in sensor networks.
• Key features:
- localized coordination and control for cluster set-up and operation.
- randomized rotation of the cluster “base stations" or “cluster-heads" and the corresponding clusters.
- local compression to reduce global communication.
29 / 85
Intuitive description of LEACH
How does LEACH work:
• The network is partitioned into clusters.
• Each cluster has one cluster-head.
• Each non cluster-head node sends its data to the head of the cluster it belongs to.
• Cluster-heads gather the sent data, compress it and send it to the base station directly.
30 / 85
Dynamic Clusters
31 / 85
Operation of LEACH
• LEACH operates in rounds.
• Each round in LEACH consists of three phases:
• Advertisement Phase.
• Cluster Set-Up Phase.
• Steady Phase.
32 / 85
Advertisement Phase (1/3)
Election: Node n decides with probability T(n) to elect itself cluster-head:

T(n) = P / (1 − P · (r mod 1/P))   if n ∈ G
T(n) = 0                           otherwise

P: the desired percentage of cluster-heads.
r: the current round.
G: the set of nodes that have not been cluster-heads in the last 1/P rounds.
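The threshold can be coded directly from the formula; a minimal sketch (function name is ours):

```python
# LEACH cluster-head election threshold, as on the slide:
#   T(n) = P / (1 - P * (r mod 1/P))  if n is in G, else 0.
def leach_threshold(P, r, in_G):
    if not in_G:
        return 0.0
    return P / (1 - P * (r % (1 / P)))
```

Note how the threshold grows over a cycle: with P = 0.05 it starts at 0.05 in round 0 and reaches 1 in round 19, forcing every node still in G to become a cluster-head before the cycle restarts.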
33 / 85
Advertisement Phase (2/3)
On every round we want, on average, the same number of cluster-heads in the network, i.e. N · P (N: number of nodes).
In the first round we have N · P cluster-heads.
In the second round we have N(1 − P) · P1 cluster-heads:

N(1 − P)P1 = N · P ⇒ P1 = P / (1 − P) = T(n)

In the third round we have N(1 − P)(1 − P1) · P2 cluster-heads:

N(1 − P)(1 − P1)P2 = N · P ⇒ P2 = P / (1 + P · P1 − (P + P1)) = P / (1 − 2P) = T(n)

Inductively, we get T(n).
34 / 85
Advertisement Phase (3/3)
Cluster-Head Advertisement:
• Each cluster-head broadcasts an advertisement message to the rest of the nodes using a CSMA MAC protocol.
• Non cluster-head nodes hear the advertisements of all cluster-head nodes.
• Each non cluster-head node chooses its cluster-head as the one whose advertisement has the strongest signal.
35 / 85
Cluster Set-Up Phase
• Each cluster-head is informed of the members of its cluster.
• The cluster-head creates a TDMA schedule.
• The cluster-head broadcasts the schedule back to the cluster members.
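A minimal sketch of this set-up step (the slot assignment order is an arbitrary choice of ours):

```python
# Sketch of the cluster set-up step: the cluster-head assigns one TDMA slot
# per member and broadcasts the schedule; each member then knows exactly
# when to transmit and can sleep in all other slots.
def make_tdma_schedule(members):
    return {node: slot for slot, node in enumerate(sorted(members))}
```

The one-slot-per-member property is what allows non cluster-head radios to be off for most of the frame, which is a main source of LEACH's energy savings.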
36 / 85
Steady Phase
• Non cluster-heads:
• Sense the environment.
• Send their data to the cluster-head during their transmission time.
• Cluster-heads:
• Receive data from the non cluster-head nodes.
• Compress the received data.
• Send the data to the base station.
The duration of the steady phase is determined “a priori”.
37 / 85
Multiple Clusters
PROBLEM
Transmissions in one cluster can degrade communications in nearby clusters.
SOLUTION
Each cluster-head chooses randomly from a list of spreading codes and informs all the members of its cluster.
38 / 85
Hierarchical Clustering Extensions
• Non cluster-heads communicate with their cluster-heads
• Cluster-heads communicate with super-cluster-heads
• And so on . . .
Advantages: Saves a lot of energy for larger networks; more realistic.
Disadvantages: More complicated implementation; latency.
39 / 85
Experimental Evaluation of LEACH
We are given a network of 100 nodes.
The base station is placed at (0, 100).
40 / 85
Experimental Evaluation of LEACH
There is a comparative study between:
• Minimum-Energy Transmission (MTE).
• Direct Transmission (DT).
• LEACH.
The evaluation measures:
• Dead nodes' distribution.
• Total energy dissipation.
• System lifetime.
41 / 85
Distribution of dead nodes (1/3)
Minimum-Energy-Transmission
• After 180 rounds, nodes closer to the sink die faster!
42 / 85
Distribution of dead nodes (2/3)
Direct-Transmission
• After 180 rounds, nodes further from the sink die faster!
43 / 85
Distribution of dead nodes (3/3)
• After 1200 rounds, nodes die in a uniform manner.
44 / 85
Total energy dissipation - network diameter
• LEACH reduces energy dissipation by 7x to 8x compared to Direct Transmission.
• LEACH reduces energy dissipation by 4x to 8x compared to Minimum-Energy Transmission.
45 / 85
System Lifetime (1/2)
• LEACH more than doubles the useful system lifetime.
• It takes 8 times longer for the first node to die in LEACH.
• It takes 3 times longer for the last node to die in LEACH.
46 / 85
System Lifetime (2/2)
47 / 85
Conclusions about LEACH
LEACH is a cluster-based routing protocol.
• Minimizes energy dissipation through compression techniques at the cluster level.
• Distributes the load to all the nodes at different times.
In the experimental evaluation:
• LEACH reduces communication energy by as much as 8x compared to DT or MTE.
• In LEACH the first node death occurs over 8 times later compared to DT or MTE.
• In LEACH the last node death occurs over 3 times later compared to DT or MTE.
48 / 85
Criticism – Variations & Improvements
• Unrealistic radio assumptions (cluster-heads must transmit directly to gateways).
• No real implementation yet.
• Not good for reactive applications (vs. proactive ones).
• TEEN, PEGASIS: examples of variations improving on latency and realism.
• Other protocols use more hierarchical levels and/or multihop routing between cluster-heads.
49 / 85
E. Directed Diffusion
C. Intanagonwiwat, R. Govindan, D. Estrin
“Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks”
6th Annual International Conference on Mobile Computing and Networking, 2000
50 / 85
Introduction
Sensor networks require coordination in order to perform distributed event sensing.
Directed diffusion is a• data-centric• application aware• energy efficient
coordination paradigm for delivering sensed data.
51 / 85
Introduction
Challenges for directed diffusion:
• Scalability
• Energy efficiency
• Robustness / Fault tolerance
• Efficient routing
52 / 85
Directed Diffusion
Directed diffusion elements:
• Interest messages
• Data messages
• Gradients
• Reinforcements of gradients
53 / 85
Interests and tasks
• An interest contains the description of a sensing task.
• Task descriptions are named, e.g. by a list of attribute-value pairs.
• The description specifies an interest for data matching the attributes.

type = wheeled vehicle          // detect vehicle location
interval = 100 ms               // send events every 100 ms
duration = 10 seconds           // for the next 10 seconds
rect = [-100, 100, 200, 400]    // from nodes within rectangle
Example of an interest
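Such an interest can be modelled as a plain dictionary; the matching rule below is a hypothetical interpretation of ours, reading rect as [x1, x2, y1, y2]:

```python
# An attribute-value interest like the one on the slide, as a dict, plus a
# hypothetical matching predicate (the rect ordering is an assumption).
interest = {
    "type": "wheeled vehicle",
    "interval_ms": 100,
    "duration_s": 10,
    "rect": [-100, 100, 200, 400],
}

def matches(interest, event_type, x, y):
    """True if an event of `event_type` at (x, y) matches the interest."""
    x1, x2, y1, y2 = interest["rect"]
    return interest["type"] == event_type and x1 <= x <= x2 and y1 <= y <= y2
```

This attribute-based matching, rather than a destination address, is precisely what makes directed diffusion data-centric.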
54 / 85
Interests
Interests are injected into the network at some (possibly arbitrary) node, the sink. The sink diffuses the interests through the sensor network.
• For each interest a task is generated.
• For each active task the sink generates an exploratory interest message:

type = wheeled vehicle
interval = 0.1s
rect = [-100, 100, 200, 400]
timestamp = 01:20:40
expiresAt = 01:30:40

Note that:
• An interest is periodically refreshed by the sink.
• Interests do not contain information about the sink.
• Interests can be aggregated, e.g. interests with identical types and completely overlapping rect attributes.
55 / 85
Interests
Every node maintains an interest cache. Upon interest reception a new entry is created in the cache, containing:
• A timestamp that stores the timestamp of the last received matching interest.
• A gradient entry, up to one per neighbor. Each gradient stores:
• Data rate.
• Duration.
Note that:
• If an interest already exists in the cache, only a new gradient is created for this interest.
• An interest is erased from the cache only when every gradient has expired.
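A minimal sketch of this cache behaviour (names are ours; the per-gradient data rate is omitted for brevity):

```python
# Sketch of the per-node interest cache: one entry per distinct interest,
# up to one gradient per neighbor. A repeated interest from a known
# neighbor refreshes its gradient; an entry is erased only once every
# gradient has expired.
class InterestCache:
    def __init__(self):
        self.entries = {}   # interest key -> {neighbor: gradient expiry time}

    def on_interest(self, key, neighbor, duration, now):
        gradients = self.entries.setdefault(key, {})
        gradients[neighbor] = now + duration    # (re)create the gradient

    def purge(self, now):
        for key in list(self.entries):
            alive = {nb: t for nb, t in self.entries[key].items() if t > now}
            if alive:
                self.entries[key] = alive
            else:
                del self.entries[key]           # every gradient expired
```

Keeping gradients per neighbor, rather than per path, is what allows the purely local interactions described on the next slide.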
56 / 85
Gradients
Gradients are formed by local interaction of neighboring nodes. Neighboring nodes establish a gradient towards each other. Gradients store a value and a direction. Gradients facilitate “pulling down” data towards the sink.
Gradient establishment when flooding an interest.
57 / 85
Data propagation
A sensor that receives an interest it can serve begins sensing. As soon as a matching event is detected:
• The node computes the highest requested event rate among its gradients.
• The node generates event samples at this rate.

type = wheeled vehicle    // type of vehicle
instance = truck          // instance of this type
location = [125, 220]     // node location
intensity = 0.6           // signal amplitude measure
confidence = 0.85         // confidence in the match
timestamp = 01:20:40      // local event generation time

• Data messages are unicast individually to the relevant neighbors (the neighbors where gradients point to).
58 / 85
Data propagation
Every node maintains a data cache. When a node receives a data message:
• If the data message doesn't have a matching interest, or the data already exists in the cache, the message is dropped.
• Otherwise, the message is added to the data cache and is forwarded.
To forward a message:
• A node determines the data rate of received events by examining its data cache.
• If the requested data rate on all gradients is greater than or equal to the rate of incoming events, the message is forwarded.
• If some gradients have a lower data rate, the data is down-converted to the appropriate rate.
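The per-gradient rate rule can be sketched as follows (illustrative names; "down-converting" here simply means capping the forwarding rate at what each gradient asked for):

```python
# Sketch of the forwarding rule: a node forwards incoming events at the
# rate each outgoing gradient requested, down-converting (dropping a
# fraction of the events) on gradients whose requested rate is lower than
# the incoming event rate.
def events_to_forward(incoming_rate, gradient_rates):
    """Map each neighbor to the event rate it should actually receive."""
    return {nb: min(incoming_rate, wanted) for nb, wanted in gradient_rates.items()}
```

This is how a single incoming event stream can simultaneously feed a reinforced high-rate gradient and several low-rate exploratory gradients.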
59 / 85
Reinforcement for Path Establishment and Truncation
1 The sink initially repeatedly diffuses an interest for a low-rate event notification; the generated messages are called exploratory messages.
2 The gradients created by exploratory messages are called exploratory gradients and have a low data rate.
3 As soon as a matching event is detected, exploratory events are generated and routed back to the sink.
4 After the sink receives those exploratory events, it reinforces one particular neighbor in order to “draw down" real data.
5 The gradients set up for receiving high-quality tracking events are called data gradients.
60 / 85
Path Establishment Using Positive Reinforcement
To reinforce a neighbor, the sink re-sends the original interest message with a smaller interval (higher rate):

type = wheeled vehicle
interval = 10ms
rect = [-100, 100, 200, 400]
timestamp = 01:22:35
expiresAt = 01:30:40

Upon reception of this message, a node updates the corresponding gradient to match the requested data rate. If the requested data rate is higher than the rate of incoming events, the node in turn reinforces one of its neighbors.
61 / 85
Path Establishment Using Positive Reinforcement
The selection of a neighbor for reinforcement is based on local criteria, e.g. the neighbor that first reported a new event is reinforced. The data cache is used to determine which criteria are fulfilled.
Gradient reinforcement.
62 / 85
Reinforcement for Path Establishment and Truncation
The above scheme is reactive to changes: whenever one path delivers an event faster than the others, it is reinforced. This scheme supports the existence of multiple sinks and multiple sources in the network.
63 / 85
Local Repair for Failed Paths
Paths may degrade over time. A node can detect path degradation, e.g. by noticing reduced event rates. Intermediate nodes on a path can apply the reinforcement rules and repair the path.
64 / 85
Negative Reinforcement
A mechanism for truncating paths is required.
• Gradients time out unless they are explicitly reinforced.
• Negative reinforcement of a path.
Negative reinforcement is achieved by sending an interest with the exploratory data rate. If all outgoing gradients of a node are exploratory, the node negatively reinforces its neighbors. Negative reinforcement is applied when certain criteria are met, e.g. a gradient doesn't deliver any new messages for a certain amount of time.
65 / 85
Loop removal
Loops don’t deliver new events.
Some loops can’t be broken.
Removable and unremovable loops.
66 / 85
Analytic Evaluation
Consider n sources and m sinks in a square grid topology of size √N × √N.
Let dn (dm) be the number of hops between two adjacent sources (sinks).
We measure the total cost of transmission and reception of one event from each source to all sinks, for:
• Flooding
• Omniscient Multicast
• Uses minimum height multicast trees
• Directed diffusion
67 / 85
Topology
Example of a square grid topology
68 / 85
Topology
Source i is placed at [Si, 1] where

Si = ⌈√N/2⌉               if i = 1
Si = ⌈√N/2⌉ − dn⌊i/2⌋     if i is even
Si = ⌈√N/2⌉ + dn⌊i/2⌋     if i is odd

Sink i is placed at [Di, √N] where

Di = ⌈√N/2⌉               if i = 1
Di = ⌈√N/2⌉ − dm⌊i/2⌋     if i is even
Di = ⌈√N/2⌉ + dm⌊i/2⌋     if i is odd
69 / 85
Flooding
Every event is broadcast to all nodes in the network.
Each node broadcasts an event only once: nN transmissions.
On each channel each message is sent twice: 2n(2√N(√N − 1) + 2(√N − 1)²) receptions.

Cf = nN + 4n(√N − 1)(2√N − 1)

The order of Cf is O(nN).
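The transmission/reception counts and the closed form above can be checked numerically; a verification sketch (assuming a perfect-square N, as in the √N × √N grid):

```python
import math

# Numeric check of the flooding cost: every one of the n events is
# broadcast once by each of the N nodes, and the slide's reception count
# collapses algebraically to the closed form
#   Cf = nN + 4n(sqrt(N) - 1)(2 sqrt(N) - 1).
def flooding_cost(n, N):
    s = math.isqrt(N)
    assert s * s == N, "N must be a perfect square (sqrt(N) x sqrt(N) grid)"
    transmissions = n * N                                   # one broadcast per node
    receptions = 2 * n * (2 * s * (s - 1) + 2 * (s - 1) ** 2)
    return transmissions + receptions

def flooding_cost_closed_form(n, N):
    s = math.isqrt(N)
    return n * N + 4 * n * (s - 1) * (2 * s - 1)
```

Expanding 2(2s(s − 1) + 2(s − 1)²) = 4(s − 1)(2s − 1) confirms the two expressions agree for every grid size.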
70 / 85
Omniscient Multicast
Each source transmits events along a shortest-path multicast tree.
Let Ti be the tree rooted at source i, and let C(Ti) be the data delivery cost for tree Ti.
Trees overlap, but each message is transmitted once and received once on each tree edge. The cost of the messages transmitted on a tree can be expressed relative to the cost of tree T1:

C(Ti) = C(T1) + C(Ti − T1) − C(T1 − Ti)

Co = Σ_{i=1..n} [ C(T1) + C(Ti − T1) − C(T1 − Ti) ]
71 / 85
Omniscient Multicast (cont.)
Also, the cost of transmissions on tree Tj can be expressed as

C(Tj) = H(Tj) + D(Tj)

where
• H(Tj) is the cost along the horizontal links
• D(Tj) is the cost along the diagonal links

Co = Σ_{i=1..n} [ D(T1) + H(Ti) + D(Ti − T1) − D(T1 − Ti) ]

It is proven that C(Ti) is in the order of O(√N) for m ≪ √N. The cost of omniscient multicast Co is in the order of O(n√N).
72 / 85
Directed diffusion
Assume that the established paths match the shortest-path multicast trees.
If all sources send identical location estimates, diffusion can perform application-level duplicate suppression.
Trees overlap, but each message is transmitted only once and received only once on each tree edge.

Cd = C(∪ T1→n)

Cd = C(T1) + Σ_{i=1..n} [ H(Ti − ∪ T1→(i−1)) + D(Ti − ∪ T1→(i−1)) ]

Similarly to Co, Cd is in the order of O(n√N) for m ≪ √N.
73 / 85
Comparison
Cf is several orders of magnitude higher than Co and Cd.
Although Co and Cd are of the same order of magnitude, Co > Cd since

D(Ti − T1) > D(Ti − ∪ T1→(i−1))
Impact of network size
74 / 85
Simulation
Flooding, Omniscient Multicast and Directed Diffusion were simulated in ns-2.
Metrics:
• Average dissipated energy.
• Average delay.
• Distinct-event delivery ratio.
75 / 85
Average dissipated energy
• Omniscient multicast dissipates significantly less energy than flooding, since events are delivered along a single path.
• Directed diffusion outperforms omniscient multicast, as in-network aggregation suppresses duplicate messages.
76 / 85
Average delay
• Flooding is remarkably slower due to collisions in the MAC layer.
• Directed diffusion performs comparably to omniscient multicast.
77 / 85
Impact of dynamics
• Node failures occur randomly in the network.• Half of node failures occur on nodes on the shortest path
trees.• Sources send different location estimates.
78 / 85
Average dissipated energy
• Average dissipated energy doesn't increase significantly.
• Instead, an improvement is observed in some cases.
• Negative reinforcement rules allow a number of high-quality paths to be maintained.
79 / 85
Average delay
• Average delay increases but not more than 20%.
80 / 85
Event delivery ratio
• The event delivery ratio decreases proportionally to the node failure percentage.
81 / 85
Impact of Data Aggregation
• Without aggregation, diffusion dissipates 3 times (for larger networks) to 5 times (for small network sizes) more energy.
• Longer (higher-latency) alternative paths form in large networks, which are pruned by negative reinforcement.
82 / 85
Impact of Negative reinforcement
• Without negative reinforcement, 2 times more energy is dissipated.
83 / 85
Impact of Radio Model
• Radios that consume high amounts of energy in the idle state affect diffusion.
• Idle time dominates the performance of all schemes.
84 / 85
Conclusions about Directed Diffusion
• Directed diffusion has the potential for significant energy efficiency.
• Diffusion mechanisms are stable under the range of network dynamics considered.
• The sensor radio MAC layer design affects directed diffusion significantly.
85 / 85