Wireless Sensor Networks for Hydrologic Measurements: Performance Evaluation and
Design Methods
Branko Kerkez, Department of Civil and Environmental Engineering, University of California,
Berkeley, USA
Steven D. Glaser, Department of Civil and Environmental Engineering, University of
California, Berkeley, USA
Roger C. Bales, Sierra Nevada Research Institute, University of California, Merced, California,
USA
Abstract
This paper presents the design and analyzes the performance of a 57-node wireless sensor network (WSN) deployed to monitor water balance across a forested, 1-km² headwater catchment in the southern Sierra Nevada of California. The network was deployed to integrate readings from over 300 sensors measuring snow depth, solar radiation, relative humidity, soil moisture and temperature, and matric potential. We show that WSN technology offers a cost-effective, reliable, and energy-efficient means by which to instrument large and remote areas in real time. A three-phase design procedure was used to optimize the WSN deployment. First, in situ statistics, collected from an initial 19-node prototype deployment, were analyzed to derive average indicators of network performance. This is essential, as reliable off-the-shelf performance of current WSN platforms, especially for large-scale and long-term deployments, cannot yet be guaranteed. The packet delivery ratio (PDR) and received signal strength indicator (RSSI) were introduced as metrics to quantify network performance. Through detailed analysis of PDR and RSSI, it was shown that for our site conditions, a conservative 50-m node-to-node spacing ensured low-power, reliable, and robust network communications. In the second phase, the results gathered through the prototype deployment were used to refine hardware specifications and to guide the layout of the full 57-node network. Further analysis of network statistics was conducted during the third, operational, phase to validate system performance. With a specific focus on network durability and battery lifetime, we derived a set of core heuristics that will ensure near-optimal WSN performance for most environmental monitoring applications. Utilizing these heuristics, deployment time can be significantly reduced, allowing more resources to be devoted to the actual scientific experiment.
Index terms: [4894] Instruments, sensors, and techniques; [1972] Sensor web; [1804] Catchment
Keywords: wireless sensor networks, environmental and hydrologic monitoring, critical zone, water information systems
1. Introduction
Spatial measurement of hydrologic processes at the catchment and larger basin scales is subject to significant constraints in energy, accessibility, sensor coverage area, and cost. Many off-the-shelf data-logging components do not lend themselves to real-time data acquisition when sensors are spatially distributed at the km scale, as wiring runs are limited by signal noise and the need to maintain acceptable levels of current. Such real-time data are often essential, especially when considering their use for operational hydrology. Transmission along wires is further complicated by heavy snow loads and the proclivity of rodents to chew on cables. Recent advances in sensing technology, particularly in the area of Wireless Sensor Networks (WSNs), enable monitoring environmental phenomena in real time, and at unprecedented spatial and temporal scales. Rice and Bales [2010] showed that WSNs can be used to effectively capture both local spatial means and landscape variability of snow depth while collecting spatially representative measurements with relatively few sensors.
In most spatially distributed sensor deployments it is often challenging, and inconvenient, to physically collect cached data, readily detect faulty hardware, and alter equipment parameters. While there are many commercial wireless systems that can transmit data between two points, such hardware generally contains relatively high-powered radios and requires substantial energy. Such point-to-point transmission systems operate independently, and can interfere with each other if multiple links are deployed within a region. WSN technology offers an alternative for cost-effective instrumentation of extended regions with limited accessibility, while permitting real-time data access to distributed sensors through a centralized communications architecture. Given the nascent nature of low-power wireless, in particular for remote field applications, off-the-shelf performance cannot be guaranteed, and much work remains to be conducted to quantify the real-world performance of WSNs for hydrologic monitoring.
The research reported in this paper aims at developing techniques for efficient, scalable, and robust WSN deployments for monitoring hydrologic phenomena, although the methods can readily be extended to most environmental monitoring applications. We address three specific questions. First, what is the performance of a large-scale, low-cost WSN built from off-the-shelf hardware when exposed to the harsh conditions of a mountain catchment in the snow-covered Sierra Nevada? Second, which metrics can be used to quantify WSN performance and to evaluate the design of the wireless monitoring system? Third, what is an efficient approach to designing and deploying WSNs for long-term environmental monitoring campaigns, while maintaining robust and reliable network performance?
2. WSN design principles
The core component of a WSN is known as a mote: a tiny, ultra-low-power radio operated by a microcontroller, featuring analog and digital interfaces to which sensors can be connected (Figure 1). A mote exchanges information wirelessly with neighboring motes in a distributed network, which can relay this information to their neighbors until it reaches a central hub. Software optimization of power control (also known as duty cycling) allows some devices to operate for several years on a pair of AA batteries [Karl and Willig, 2005; Dust, 2006]; embedded software can be optimized to permit motes to remain in an extremely low-power state the majority of the time (powering only the most necessary components, such as the clock and basic microcontroller features), waking up only periodically to transmit or receive data [Dust, 2011; IEEE, 2009]. The recent mass adoption of these devices, particularly in industry, has made them an extremely cost-effective alternative to placing wires [Emerson, 2010; Honeywell, 2010; ISA, 2009; Song et al., 2008].
Depending on the implementation and protocols of the network stack (the general hardware and software architecture), motes can aggregate into a number of network topologies. The two most common are the star and mesh topologies (Figure 2). In a star configuration, each mote exchanges information only with a central base station. In the majority of such implementations, the base station is programmed to keep its radio on continuously to listen for incoming transmissions, which results in high energy usage. The span of a star network is limited by the distance of a single link between a mote and its network manager. The network manager is thus often placed in a central location; multiple managers are required if the region being instrumented cannot be covered by a single star cluster.
In a mesh-based topology, motes communicate with multiple neighbors to create an internally redundant multi-hop network. Multiple paths between network nodes allow data to be transmitted even if certain network links collapse [Karl and Willig, 2005]. Mesh networks can also adapt their topology to reflect varying attenuation of radio signals caused by changes in the environment, or changing needs of the operator. The mesh topology thus allows the network to span much larger areas than a star layout, as out-of-range motes can exchange data by relaying it via other motes in the mesh.
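To make the multi-hop idea concrete, the following minimal Python sketch finds a route from an out-of-range mote to the manager by breadth-first search over whichever links currently exist. The link table and node names are hypothetical, and this is a generic illustration rather than the routing algorithm used by any particular WSN platform.

```python
from collections import deque

def find_route(links, source, manager):
    """Breadth-first search for a multi-hop route from `source` to the manager.
    `links` maps each node to the neighbors it can currently hear."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == manager:
            return path  # first (fewest-hop) route found
        for neighbor in links.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route; such a node would eventually drop from the mesh

# Hypothetical link table for a small mesh
links = {
    "mote_A": ["mote_B", "mote_C"],
    "mote_B": ["mote_A", "manager"],
    "mote_C": ["mote_A", "mote_B"],
    "manager": ["mote_B"],
}
print(find_route(links, "mote_C", "manager"))  # -> ['mote_C', 'mote_B', 'manager']
```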
Most data transmitted within WSNs is too large to be reliably transferred in a single transmission, and must be segmented into formatted data blocks called packets [Tanenbaum, 2003]. Transmission failures within the network then require only the retransmission of specific packets, mitigating the need to retransmit the entire data stream. A successful transmission is achieved when the transmitting mote receives an acknowledgment from the receiving mote, notifying it that the transmitted data packet has been received. This acknowledgment redundancy, while not built into all available WSN platforms, ensures that packets reach their intended destination, even if retransmission is required.
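A minimal Python sketch of this acknowledge-and-retransmit logic is given below. The function names are hypothetical stand-ins (not a vendor API), and the radio link is simulated with an assumed 80% delivery probability.

```python
import random

def send_and_wait_for_ack(packet, channel):
    """Hypothetical stand-in for one transmission plus the acknowledgment wait.
    A link with an assumed 80% packet delivery ratio is simulated here."""
    return random.random() < 0.80

def send_packet(packet, channel, max_retries=3):
    """Retransmit only this packet (not the whole data stream) until it is acknowledged."""
    for attempt in range(1 + max_retries):
        if send_and_wait_for_ack(packet, channel):
            return attempt  # number of retransmissions that were needed
    raise RuntimeError("packet not acknowledged; link PDR is likely too low")

print(send_packet(b"snow_depth=1.21", channel=15))
```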
Since remote wireless networks operate on battery power, overall power consumption becomes a critical constraint. Radio transmission consumes the largest portion of a mote's power budget, accounting for up to 95% or more of power use [Dust, 2006] (not including sensors). It is therefore important to design reliable and robust networks that minimize needless packet retransmissions. Successful design and operation of such a field-deployed WSN hinges upon a rational set of metrics that can be used to quantify performance. Extreme weather conditions, especially cold weather, can take a significant toll on hardware and batteries [Hasler et al., 2008; Mainwaring et al., 2002]. Furthermore, communication in WSNs is challenged by multipath radio propagation and narrow-band interference [Karl and Willig, 2005; Watteyne et al., 2010], where topography can result in radio signals bouncing off ambient surfaces, causing phase-shifted copies of the same signal to arrive at the receiver antenna. This leads to destructive interference, which can cancel out the original signal. It has been shown that even slight changes in node-to-node distance (on the order of centimeters) can have significant impacts on this behavior [Watteyne et al., 2009].
Advanced WSNs are equipped with the ability to operate on a number of transmission channels (narrow-band sub-frequencies) in an attempt to mitigate these communication challenges. In the 2.4 GHz band there are 16 different channels on which data can be transmitted [IEEE, 2009]. Improper channel selection can cause data to be lost during transmission, leading to the need to retransmit data and manifesting itself in much higher energy requirements on the network [Watteyne et al., 2009]. A common approach to addressing this issue is to conduct an expert survey, which entails a physical visit to the field to select the frequency channel on which the loss of transmitted packets will be minimized. This approach has major drawbacks, as research has shown that the optimal channel may vary over time [Kerkez et al., 2009; Watteyne et al., 2010]. Fluctuations in the surrounding environment can cause poor performance on channels that originally performed well, and it has been shown that propagation of radio waves is adversely affected in arboreal environments [Oestges et al., 2009]. One option to address this time-varying behavior involves equipping WSNs with the ability to channel hop. In such deployments, motes within the network randomly select one of the available channels every time a transmission occurs, rather than persistently transmitting information on a single channel. Adaptive channel hopping is an area of ongoing research [Kerkez et al., 2009; Watteyne et al., 2010].
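The random channel-hopping idea can be sketched in a few lines of Python. The channel list below corresponds to the 16 IEEE 802.15.4 channels (numbered 11-26) in the 2.4 GHz band; the transmit call is a hypothetical placeholder, not any vendor's interface.

```python
import random

CHANNELS = list(range(11, 27))  # the 16 IEEE 802.15.4 channels in the 2.4 GHz band

def transmit(packet, channel):
    """Hypothetical placeholder for a single radio transmission on one channel."""
    print(f"sending {len(packet)}-byte packet on channel {channel}")

def channel_hopping_send(packet):
    # Pick a fresh channel for every transmission instead of one fixed, surveyed channel,
    # so that time-varying narrow-band interference on any single channel is averaged out.
    transmit(packet, random.choice(CHANNELS))

for reading in [b"depth=1.2m", b"rh=0.45", b"temp=3.1C"]:
    channel_hopping_send(reading)
```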
A number of commercial WSN packages are available [Archrock, 2011; Crossbow, 2011; Dust, 2011; Ember, 2011; Libelium, 2011; Nevis, 2011; Sentilla, 2011], each offering distinct platforms with unique functionality, network stacks, and standards. The implementation of algorithms for network topology, routing, and channel hopping is a nontrivial task, but advances in WSN technology have alleviated, although not completely removed, many of these challenges. No simple off-the-shelf solution exists, and hardware performance can vary significantly depending on environmental conditions.
Deployments of WSNs have ranged from data center HVAC control [Bell and Federspiel, 2009] to structural health monitoring [Kim et al., 2007; Rice and Spencer, 2008] and military applications [Culler et al., 2001]. Notable deployments for environmental monitoring purposes include habitat monitoring [Hart and Martinez, 2006; Mainwaring et al., 2002; Ramanathan et al., 2006], permafrost detection [Hasler et al., 2008], the study of mountain ranges [Ingelrest et al., 2010], and snow depth monitoring [Rice and Bales, 2010]. A successful short-term deployment of a WSN for the monitoring of hydrologic phenomena was conducted by Trubilowicz et al. [2009], who noted that the technology lacks ease of use and reliability. Further examples of wireless deployments for ecological and environmental monitoring were conducted by Etzel and Braun [2005], Porter et al. [2005], and Szewczyk et al. [2004]. The hardware and software implementations in these studies varied significantly, making it difficult to gauge which hardware, software, and network protocol implementation will perform best for a given deployment. The current lack of out-of-the-box usability, coupled with the vast choice of available hardware and network protocols, has the potential to become a severe time drain for scientists designing an environmental monitoring system.
3. Methods
3.1. Network design
A 57-node WSN was designed and deployed to monitor water-balance variables in a remote, forested headwater catchment, along a 1.5-km transect in the 1-km² P301 headwater catchment in the Southern Sierra Critical Zone Observatory (CZO) (37°04′ N, 119°11′ W), which is co-located with the Kings River Experimental Watershed [Hunsaker et al., submitted; Bales et al., 2011]. Site elevations range from 1950 m to 2010 m, with the landscape varying from dense mixed-conifer forest to open meadows. Sensors were placed at 23 locations prior to WSN design, located so as to make spatially representative measurements of snow depth, solar radiation, and relative humidity; soil moisture, temperature, and matric potential sensors were also placed at four depths below the ground surface (10, 30, 60 and 90 cm) (Table 1). Along with two additional temperature measurements, this provided over 15 sensor readings at each measurement node (Figure 3), giving a total of over 300 sensors for the entire network. Our deployment makes a distinction between sensor nodes, which are motes interfaced with the sensors and data-logging infrastructure (Figure 3), and repeater nodes, which contain only motes and a power source, placed to ensure mesh redundancy and serve as range extenders to transfer data between locations that would otherwise be out of range.
The WSN deployment was carried out in two steps. A smaller-scale prototype deployment was conducted in September 2009, relying on manufacturer-specified transmission distances as a guide to mote placement (Figure 4). Site-specific network statistics were collected to evaluate the performance of the WSN prototype. An analysis of these statistics, utilizing a set of WSN metrics, was then used to inform a network-wide redesign (Figure 8). Further repeater nodes were added to the network, and existing repeater nodes were relocated to ensure desirable network performance.
3.2 Hardware
The fluctuating humidity, heavy storms, and extreme temperatures at the CZO called for special considerations during the design of the monitoring hardware. Reliability requirements demanded specific attention to the underlying network control algorithms, as well as the selection of appropriate transmission frequencies. The system uses wireless devices developed by Dust Networks [Dust, 2011], a company that primarily develops wireless data transfer technologies for industrial automation applications. The core software backbone of the Dust WSN relies on the time-synchronized mesh protocol architecture [Dust, 2006]. Each mote is powered by a 3 V battery and has a manufacturer-specified battery life of over two years. This low power consumption is made possible by extremely tight time synchronization (on the order of milliseconds) and an extremely efficient duty cycle (less than 1%), which ensures that the mote remains in a low-power state for the majority of its operation, consuming little current (< 25 µA) compared to the much larger amount required for data transmission (20-48 mA).
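To see roughly how these currents translate into battery lifetime, the calculation below weights the sleep and transmit currents by an assumed 0.3% duty cycle (within the stated sub-1% figure) and an assumed 2.4 Ah capacity for the 3 V battery pack; both values are illustrative rather than measured.

```python
def average_current_ma(sleep_ma, active_ma, duty_cycle):
    """Time-weighted mean current draw for a mote that is asleep most of the time."""
    return duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma

sleep_ma = 0.025      # < 25 uA low-power state (from the text)
active_ma = 30.0      # transmit/receive draw, within the stated 20-48 mA range
duty_cycle = 0.003    # assumed 0.3% radio duty cycle, well under the stated 1%

avg_ma = average_current_ma(sleep_ma, active_ma, duty_cycle)
battery_mah = 2400.0  # assumed capacity of the 3 V battery pack
lifetime_years = battery_mah / avg_ma / 24.0 / 365.0
print(f"average draw ~{avg_ma:.3f} mA, lifetime ~{lifetime_years:.1f} years")
```

With these assumptions the average draw is on the order of 0.1 mA, which is consistent with the multi-year battery life quoted by the manufacturer.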
Dust Networks hardware accounts for real-time fluctuations within the radio space by dividing the 2.4 GHz band into 16 channels, and features a randomized channel-hopping protocol that provides real-time selection of transmission frequencies [Dust, 2006]. The technology also features dynamic smart-meshing algorithms, which allow each mote to automatically join the network and adaptively communicate with multiple neighbors. The adaptive nature of this WSN also changes network paths (node-to-node routes) as field conditions change. The availability of these features, which are not standard on most other WSN platforms, removes the need for low-level programming by the user, significantly curtailing development time. The CZO WSN features one base station, or network manager, that acts as a central network controller. The network manager communicates with a low-power, embedded Linux computer via Ethernet, and the two devices exchange commands over an extensible markup language (XML) interface to actuate motes within the network, log sensor readings, and gather network statistics. The embedded computer is located at the base of a 60-m tower used for eddy correlation. A wired-in GPRS modem is positioned 25 m up the tower and provides an Internet connection for the transfer of real-time data to an off-site location.
Anomaly detection within the network is conducted by custom software written to continuously monitor network statistics and sensor readings, informing network operators of impending battery outages, sensor failures, or other events of interest. To further improve transmission distance, physically hardened, high-gain 8 dBi antennas were mounted 3 m above the ground surface at each network node.
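The anomaly-detection software itself is not described here in detail; the sketch below is a hypothetical reduction of the idea, flagging motes whose reported battery voltage or weakest-link RSSI falls below a configurable threshold. The thresholds and status fields are illustrative assumptions, not values from the deployed code.

```python
def find_anomalies(mote_status, min_voltage=2.8, min_rssi_dbm=-80.0):
    """Return warnings for motes with a low battery or a weak link.
    `mote_status` maps mote id -> dict with 'voltage' (V) and 'rssi' (dBm)."""
    warnings = []
    for mote_id, status in mote_status.items():
        if status["voltage"] < min_voltage:
            warnings.append(f"{mote_id}: battery at {status['voltage']:.2f} V, replace soon")
        if status["rssi"] < min_rssi_dbm:
            warnings.append(f"{mote_id}: weakest link at {status['rssi']:.0f} dBm, check placement")
    return warnings

# Hypothetical snapshot of statistics pulled from the network manager
snapshot = {
    "node_12": {"voltage": 2.75, "rssi": -72.0},
    "node_27": {"voltage": 2.95, "rssi": -86.0},
}
for line in find_anomalies(snapshot):
    print(line)
```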
A custom data-logging board was designed to control and power the sensor array, as well as to form the data into the RS-232 packets required for transmission by the Dust Networks mote. The board is now the EME Systems OWL2pe data logger [EME, 2011]. The OWL2pe interfaces nine analog and twenty digital inputs, allowing further sensors to be attached in the future. The board requires about 100 mW of power while actuating sensors, but spends 99% of its time in an extremely efficient low-power mode. The OWL2pe can be self-contained or controlled remotely via the WSN, and built-in flash memory allows the assembly to log data locally in case the WSN becomes unresponsive. Lack of accessibility to the physical network, especially during heavy snow days, as well as the possibility of long-lasting diminution of solar radiation, played a significant role in the design of the power-management infrastructure. Monitoring requirements called for data to be collected at ten- to fifteen-minute intervals, and calculations (proven correct in the field) indicated that a 12-volt, 7-amp-hour lead-acid battery and a 10 W solar panel could reliably power the data logger and sensor assembly. The power consumption of the components that comprise each sensing node is shown in Table 2.
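A rough check of this sizing, using the currents listed in Table 2, can be sketched as follows. The 30-second active window per 15-minute measurement cycle is an assumption for illustration, not a documented figure.

```python
# Currents (amps) from Table 2 for one sensing node; the mote runs on its own 3 V battery
LOGGER_ACTIVE_A, LOGGER_IDLE_A = 0.016, 2e-4
SENSORS_ACTIVE_A = 0.18            # midpoint of the 0.15-0.2 A range; ~0 A when idle

INTERVAL_S = 15 * 60               # one measurement cycle every 15 minutes
ACTIVE_S = 30                      # assumed seconds of sensor actuation per cycle

avg_current_a = ((LOGGER_ACTIVE_A + SENSORS_ACTIVE_A) * ACTIVE_S
                 + LOGGER_IDLE_A * (INTERVAL_S - ACTIVE_S)) / INTERVAL_S
daily_ah = avg_current_a * 24
battery_ah = 7.0                   # 12 V, 7 Ah lead-acid battery from the text
# A 10 W panel at ~12 V supplies roughly 0.8 A in full sun, easily replacing the daily draw
print(f"~{daily_ah:.2f} Ah/day; battery alone bridges ~{battery_ah / daily_ah:.0f} sunless days")
```

Under these assumptions the daily draw is well under 0.2 Ah, so the 7 Ah battery can bridge several weeks of poor solar input, consistent with the field-verified design.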
3.3 WSN Metrics
The packet delivery ratio (PDR) and received signal strength indicator (RSSI) are two metrics that provide particular insight when evaluating WSN performance [Al Basset Almamou et al., 2009; Karl and Willig, 2005]. PDR captures the overall communication efficiency of a link between two WSN nodes, and is defined as the number of successfully transmitted packets divided by the total number of transmitted packets. It can be thought of as the probability that a transmission between two nodes will succeed. Needless retransmission of a packet can take a significant toll on battery resources, so conservation of network resources ideally demands a PDR of 100%. An understanding of the interplay between spatial network coverage and deployment-specific PDR characteristics is thus required. Since a primary motivation for using WSN hardware for environmental monitoring is the ability to span large areas, optimization of a field deployment demands a balance between the need to maximize battery life (power cost) and the need to minimize the number of required motes (capital and operational costs).
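In practice, per-path PDR can be computed directly from the transmission counters reported over each logging interval. A minimal Python sketch, with hypothetical field names, is:

```python
def packet_delivery_ratio(delivered, transmitted):
    """PDR = successfully delivered packets / total transmitted packets (0 if nothing sent)."""
    return delivered / transmitted if transmitted else 0.0

# Hypothetical per-path counters gathered over a 15-minute reporting interval
paths = [
    {"src": 3, "dst": 7, "delivered": 118, "transmitted": 120},
    {"src": 7, "dst": 1, "delivered": 95, "transmitted": 112},
]
for p in paths:
    pdr = packet_delivery_ratio(p["delivered"], p["transmitted"])
    print(f"path {p['src']}->{p['dst']}: PDR = {pdr:.1%}")
```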
RSSI, conventionally measured in decibels (dB) or decibels referenced to 1 milliwatt (dBm), represents the power in a signal when it arrives at the receiving antenna. As a rule of thumb, RSSI decreases as the distance between two nodes increases. The true behavior of RSSI as a function of node-to-node distance depends upon a number of factors, such as multipath radio propagation and environmental obstructions, but can be roughly captured by an idealized Friis antenna propagation model [Friis, 1946; Friis, 1971; Kraus, 1988]. Generally, WSN hardware has a manufacturer-predefined RSSI threshold below which signals cannot be reliably decoded by the hardware. RSSI has the tendency to fluctuate over time, and the operational RSSI threshold can sometimes exceed, or fall short of, the manufacturer specification. Thus, optimizing network performance depends on understanding the RSSI behavior for any given deployment. Improper determination of these thresholds can lead to network collapse and the need for reconfiguration during the actual deployment.
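For reference, the idealized Friis free-space model used for comparison can be written in logarithmic form as received power (dBm) = transmit power (dBm) + transmit antenna gain (dBi) + receive antenna gain (dBi) + 20 log10(λ / 4πd). The sketch below evaluates it for the 2.4 GHz band and the 8 dBi antennas used at the site; the 0 dBm transmit power is an illustrative assumption.

```python
import math

def friis_rssi_dbm(distance_m, tx_power_dbm=0.0, gain_tx_dbi=8.0, gain_rx_dbi=8.0,
                   freq_hz=2.4e9):
    """Idealized Friis free-space received power in dBm (no multipath, no obstructions)."""
    wavelength_m = 3.0e8 / freq_hz
    path_loss_db = 20.0 * math.log10(4.0 * math.pi * distance_m / wavelength_m)
    return tx_power_dbm + gain_tx_dbi + gain_rx_dbi - path_loss_db

for d in (10, 50, 100, 200):
    print(f"{d:>3} m: {friis_rssi_dbm(d):6.1f} dBm")  # assumed 0 dBm transmit power
```

Because this model ignores multipath and vegetation, measured RSSI in the field can fall well below these idealized values, as discussed in Section 4.1.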
Motes have the ability to enter any number of possible sleep states [Karl and Willig, 2005; IEEE, 2009], in which significant portions of the mote are shut off to conserve energy. A particular problem resulting from an extended lack of communication occurs when motes enter a deep-sleep, or hibernation, state. In such cases, motes that have not been able to establish communications with the network manager through their neighboring motes for a set period of time power off the majority of their resources, and wake up only sporadically to rejoin the network by listening to advertising beacons from neighbors. Depending on the software implementation, field experience has shown that it can take significant amounts of time (hours to days) for motes to rejoin the network. Removing motes from the network can thus severely compromise the mesh topology and lead to outages of entire subsets of a network. This behavior underscores the need to understand path-specific PDR and RSSI characteristics for ensuring network robustness, and avoiding network outages due to drops in connectivity.
4. Results
4.1 Prototype deployment
During on-site evaluation of the motes it became apparent that the initial prototype network layout (Figure 4) did not comply with manufacturer-provided specifications. The specifications indicated an RSSI threshold of -89 dBm, with a prescribed mote-to-mote spacing of up to 200-300 meters. In practice, few reliable paths at such distances could be observed, and RSSI values between -80 and -90 dBm were experienced for much shorter network links. Network statistics were collected over a 25-day period, beginning with 10 motes on September 15, 2009. The full network prototype, seen in Figure 4, was operational on September 18. RSSI and PDR values were averaged over 15-minute intervals for each path in the network, providing a range of values over which to determine proper hardware thresholds and evaluate network performance.
Figure 5 shows the evolution of the prototype network statistics over this initial 25-day period. The figure displays the number of paths (links between nodes) in the network over time, which is an indication of how many routes were available for data transmission. Our WSN system automatically adjusts the network topology to account for fluctuations in communication reliability, and the number of network paths thus varies over time. As an example, for the 19-node prototype network in Figure 4, there were, at the time of the snapshot (late September 2009), over 30 available communication paths in the mesh. Generally, a larger number of paths is desirable since it permits greater diversity of routes on which data packets can be transmitted.
The spike in network paths seen around September 18 corresponds to the time at which the full 19-node prototype network was deployed. As more motes joined the network, the number of paths increased correspondingly. The average network RSSI dropped as a result, implying that the newly formed network paths possessed low connectivity. This correlation was observed for the remainder of the 25-day evaluation of the network prototype, and indicates that not all network paths were equally reliable. Ideal performance would have kept the average RSSI steady regardless of the number of newly created network paths, indicating that all paths reflect similar communication reliability. Further investigation into possible causes of this network behavior showed that both the average network-wide RSSI and the average relative humidity fluctuated over time, but did not appear to be positively correlated (Figure 5).
The western portion of the prototype network experienced a complete outage on September 19 due to poor communications (Figure 4, Figure 5); no prior indication of an impending network outage was evident. During the outage, the number of paths in the network dropped significantly and abruptly, leaving only a few network paths operational. This can be attributed to a bottleneck effect within the network. In a meshed topology, child nodes far from the network manager need to transmit data through neighboring parent nodes (nodes closer to the network manager), which in turn pass the data further along the mesh until it reaches the network manager. If connectivity to parent nodes is lost, child nodes that depend on these connections lose connectivity to the remainder of the network. An outage of vital parent nodes can thus cause an outage of entire subsets of the network. This behavior explains the sudden drop in network paths seen in Figure 5. Critical parent nodes in the mesh drifted into a hibernation state due to a drop in connectivity (most likely due to the low RSSI values exhibited by network paths in the center of Figure 4). Child nodes in the western portion of the network thus lost connectivity with the base station and were forced to enter a hibernation state as well. To avoid this bottleneck phenomenon, redundancy and robustness must be built into the network to ensure that no single outage can cause the remainder of the network to fail. This bottlenecking behavior further explains why the average network RSSI was inversely proportional to the number of paths: following the outage, only stable links with high RSSI values remained in the network.
The mote hibernation state was designed to conserve power during network communication outages, since transmission during such instances of low connectivity can significantly shorten battery life. Once in the hibernation state, it could take days for a mote to rejoin the network, depending on field conditions. A physical reset was required to bring motes out of hibernation mode and to re-establish the entire WSN mesh. Once the network hardware was reset a week into the prototype deployment, the number of network paths increased and stayed constant for the following two-week period.
Relative humidity did not seem to affect the overall behavior of the network, but a rainstorm around October 3, 2009 coincided with another network collapse and a significant drop of nodes from the network. To mitigate future network collapse, mote software could either be modified to lower the threshold for entering the hibernation state, or additional repeater motes could be added to ensure that required RSSI thresholds would be met. It was deemed that the low-power sleep mode was necessary for efficient network operations, and that a redesigned network layout would offer a better alternative. This was again confirmed when a significant drain on mote battery life was noticed after forced resets.
Part of the pre-deployment, or prototype, site analysis involved determining the field RSSI as a function of node-to-node distance. Figure 6 presents a logarithmic plot of RSSI vs. path distance for the CZO prototype deployment and, for comparison, the idealized Friis propagation model [Friis, 1946], which governs how the hardware-specific RSSI between two nodes should change assuming perfectly isotropic antenna conditions. The measured relationship was neither monotonic nor sharply delineated, and showed considerable uncertainty, which was most likely a function of multipath effects on radio propagation. The in situ network statistics showed a significant reduction in transmission distance compared to the manufacturer-specified 200-300 m outdoor range. As expected, the observed average RSSI decreased as the distance between nodes increased, but a significant spread existed around the mean (shown by an estimate of the 95% confidence bounds). The mean RSSI reached the manufacturer-specified threshold of -89 dBm at a transmission distance of 130-150 m, while a large portion of paths performed below this threshold. At 100-120 m separation, a significant portion of network paths had the tendency to drop below this threshold, under-performing the manufacturer specification of a 200 m range. For distances below 25 m, the network links outperformed the predictions made by the Friis propagation model, but the mean RSSI deviated significantly from the idealized Friis model beyond the 25 m mark.
Figure 7 presents another, and perhaps better, indicator of network performance: an analysis of PDR (%) as a function of RSSI (dBm). Although fluctuations of PDR around specific RSSI values existed, it is evident that for values of RSSI above -80 dBm the PDR remained around 100%, implying that few or no retransmissions were required to successfully deliver network packets to their destination. For values below this RSSI threshold, paths experienced sudden drops in PDR, falling entirely to 0% for RSSI values of -90 dBm and lower. This sudden drop is often referred to as waterfall behavior when describing the effect of RSSI on PDR.
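One simple way to turn such observations into a site-specific design threshold is to bin the observed (RSSI, PDR) pairs and report the weakest RSSI bin whose mean PDR still meets a design target. The sketch below uses synthetic data for illustration; the binning scheme is an assumption, not the analysis actually performed for Figure 7.

```python
import math

def rssi_threshold_for_target(samples, target_pdr=0.9, bin_db=5):
    """Lowest RSSI bin edge (dBm) whose mean PDR still meets the design target.
    `samples` is a list of (rssi_dbm, pdr) pairs observed on network paths."""
    bins = {}
    for rssi, pdr in samples:
        edge = bin_db * math.floor(rssi / bin_db)   # e.g. -78 dBm falls in the [-80, -75) bin
        bins.setdefault(edge, []).append(pdr)
    usable = [edge for edge, pdrs in bins.items() if sum(pdrs) / len(pdrs) >= target_pdr]
    return min(usable) if usable else None

# Synthetic observations mimicking the waterfall behavior around -80 dBm
samples = [(-55, 1.0), (-62, 0.99), (-71, 0.97), (-78, 0.92), (-84, 0.55), (-91, 0.0)]
print(rssi_threshold_for_target(samples))  # -> -80
```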
4.2 Post-deployment analysis
Following the prototype deployment, an analysis of network metrics was used to inform a re-design. More repeater nodes were added to the network, and existing repeater nodes were re-located to improve performance and prevent network outages, leading to the final network configuration shown in Figure 8. After the re-design, the network did not experience any more spontaneous outages. A histogram of PDR values for all paths in the network (Figure 9) over a two-day period, a month after network reconfiguration, shows that about 80% of all PDR values met or exceeded the desired 85-90% performance range, with more than half of all paths experiencing 100% PDR (particularly nodes spaced less than 50 m apart). Nodes communicated not only with their closest neighbors, but also with more distant nodes to maintain the mesh network topology. In the final deployment, network nodes had, at all times, a stable connection to at least two neighbors located 50 meters or closer.
Figure 10 shows a 24-hour sample of the average PDR for all paths in the final network configuration. The average network-wide PDR was within an 85-95% window, showing only slight and gradual fluctuations. The low-frequency excursions from the design threshold did not exhibit any clear correlation with collected environmental variables (such as temperature or humidity). Average network-wide PDR characteristics reflected those shown in Figure 10 for the remainder of the deployment lifetime.
Figure 11 shows a sample of PDR evolution for three randomly selected paths in the final network configuration. Though the average network PDR remained high, path A exhibited a sudden drop at the four-hour mark, only to recover within one hour. Path B exhibited fluctuations of PDR for the entire sample period; while requiring multiple retransmissions of packets, this path did not drop from the network and its PDR never dropped below 80%. The figure also shows a new network path, C, being established during this 24-hour window, which is a positive indication that the self-healing network algorithms dynamically allocated paths to take advantage of more stable connections, thus alleviating otherwise relatively less stable connections and creating further redundancy. The formation of new paths between nodes was caused by fluctuations in the radio space, and is further motivation for using adaptive, self-healing network hardware.
During network operation there were some instances of paths longer than 200 m experiencing RSSI values greater than -60 dBm, while some paths shorter than 50 m operated with RSSI values of -80 dBm. This anomalous behavior, although acceptable, was not consistent across the network, and cannot be easily attributed to topographic features of the terrain (such as elevation differences or line of sight).
5. Discussion
We define an optimal network deployment as one that maximizes PDR while minimizing RSSI (implying greater transmission distances). On average, the need to retransmit data will then be reduced, while the spatial coverage of the network will be maximized. Based on this design criterion, Figure 7 shows that this value corresponds to approximately -80 dBm, although there begins to be a significant spread of PDR for values below -70 dBm. This waterfall behavior is a further indicator that manufacturer-specified RSSI thresholds should not be employed as a proxy for reliability. In the case of this study, it is apparent that no packets can be received beyond the manufacturer-specified -89 dBm, but PDR begins to exhibit undesirable qualities well before this threshold. The manufacturer-specified RSSI threshold thus presents a worst-case performance indicator, rather than serving as a guide for reliable communications.
For our deployment, -70 dBm was chosen as a conservative but reliable RSSI threshold to account for adverse effects due to unexpected wintertime fluctuations. Given the measured -70 dBm needed for acceptable communications reliability, a design path distance was then extracted from Figure 6. For further reliability, the lower 95% confidence bound, rather than the mean RSSI, was used as the design criterion. For the CZO installation, this resulted in the derivation of a 50 m mote-to-mote spacing. Similarly, a reliable mote-to-mote spacing would be approximately 100 m for the manufacturer-specified threshold of -89 dBm.
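The same step can be expressed as a small calculation: given the lower 95% confidence bound of RSSI as a function of distance, select the largest spacing whose lower-bound RSSI still clears the chosen threshold. The distance-RSSI values below are synthetic stand-ins for Figure 6, chosen only to illustrate the procedure.

```python
def max_spacing_m(lower_bound_rssi_by_distance, rssi_threshold_dbm):
    """Largest node-to-node distance whose lower 95% bound RSSI still meets the threshold.
    Input maps distance (m) -> lower-bound RSSI (dBm) from a prototype survey."""
    ok = [d for d, rssi in lower_bound_rssi_by_distance.items() if rssi >= rssi_threshold_dbm]
    return max(ok) if ok else None

# Synthetic stand-in for the lower confidence bound of Figure 6
lower_bound = {25: -55, 50: -68, 75: -76, 100: -84, 125: -90, 150: -95}
print(max_spacing_m(lower_bound, rssi_threshold_dbm=-70))   # -> 50 m, the design spacing
print(max_spacing_m(lower_bound, rssi_threshold_dbm=-89))   # -> 100 m for the vendor threshold
```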
We propose a three-step design procedure to optimize a WSN deployment (Figure 12). A WSN deployment can be separated into pre-deployment, deployment, and post-deployment phases, which, when carried out properly, will ensure robust and reliable network communications while maximizing battery lifetime and transmission distances (and thus reducing the number of required motes).
First, in the pre-deployment phase, a prototype network (or a subset of the actual network) should be deployed a priori to gauge the actual performance of the wireless hardware under operating conditions. The pre-deployment analysis should be carried out in environments similar to those of, or preferably at, the actual deployment site, with an attempt to cover a range of terrain parameters. Network statistics, specifically RSSI and PDR, should be collected for several days, or longer, to capture possible fluctuations in performance. This will place manufacturer-specified performance thresholds into context and will permit site-specific network behavior to be evaluated prior to a full-scale deployment. A waterfall plot of the effect of RSSI on PDR (such as the one in Figure 7) should be constructed to extract the site-specific RSSI threshold. The primary design objective entails maintaining stable network links (high PDR values) and ensuring robust network performance. This will prevent network outages and collapse due to the inability of motes to communicate. While maximizing network-wide PDR will reduce re-transmissions, and thus also prolong battery life, there is a cost trade-off associated with placing many repeater nodes. A PDR of 100% is thus not desirable, and may, in fact, be physically unattainable. Calculations should be conducted to evaluate the effect of average PDR on site-specific battery resources, but in most cases a PDR value of 85-90% provides a realistic performance goal. Such an analysis will maximize node-to-node spacing, while reducing drain on battery resources, and above all will facilitate robust and reliable network links. It is critical to develop a statistically meaningful RSSI threshold from the PDR behavior to account for possible fluctuations in performance. This threshold can then be used to determine transmission distances (in this case from Figure 6), and will provide a reliable estimate of the number of nodes that will be required to cover the sensing region.
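As a final pre-deployment step, the derived spacing gives a first estimate of how many motes the sensing region will require. A sketch for a linear transect is given below; the 30% repeater allowance is an illustrative assumption rather than a rule from this deployment.

```python
import math

def estimate_node_count(transect_m, spacing_m, redundancy_factor=1.3):
    """Rough mote count for a linear transect: one node per spacing interval,
    inflated by an assumed factor for repeaters that provide mesh redundancy."""
    backbone = math.ceil(transect_m / spacing_m) + 1   # nodes at both ends of every hop
    return math.ceil(backbone * redundancy_factor)

print(estimate_node_count(1500, 50))   # -> 41 motes for a 1.5-km transect at 50-m spacing
```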
Second, the deployment phase of the final network (placement of sensing nodes, repeater nodes, or a star layout) should be carried out based on the previous analysis of network statistics, keeping in mind that these statistics provide only a statistical bound on performance behavior. Signal attenuation and multipath propagation characteristics may adversely affect specific network links. Brief field checks during the actual placement of nodes should be made to ensure that each network path meets the desired RSSI threshold. Often, simply moving nodes by a few meters will establish a more robust link.
Third, in the post-deployment phase, collection of network statistics should be continued in order to evaluate the in situ performance of the network and to capture possible long-term fluctuations in connectivity. Depending on the observed behavior, these statistics should be further analyzed, and adjustments to the network can be made to achieve the desired performance by adding, removing, or re-locating WSN nodes.
In our case, the data obtained through the analysis of the network prototype were used to gauge future behavior and provided a set of valuable thresholds upon which to re-design the network. Analysis of such results would have saved a significant amount of time if conducted prior to the deployment of the network. A network design procedure using the above steps will, in most cases, ensure that WSN performance will meet the demands of an environmental monitoring application, by maximizing path reliability, battery life, and spatial coverage.
While the outlined methods apply to a broad range of available WSN platforms, initial hardware selection should still be strongly guided by built-in network features. The choice of WSN hardware for this deployment was based on battery life, as well as the ability to set up and maintain an adaptive, self-healing, multi-hop mesh network. There is a significant cost if the user is required to design and implement low-level control of the WSN hardware, as well as the complex algorithms necessary for efficient channel hopping and mesh topologies. Transmission distances can be further improved by using high-gain antennas to replace the more traditional mote whip antennas. Close attention to details, such as maintaining sufficient battery voltage at each mote, will additionally help to avoid network breakdown in winter.
6. Conclusions
Although WSN technology is continuously improving, no off-the-shelf solution exists yet. Results gathered from the 57-node WSN mesh described in this paper partially fill that gap, by demonstrating a cost-effective means by which to instrument a large and remote area in complex terrain. Low-power and reliable performance can be achieved by proper evaluation of hardware behavior under the specific field conditions. This deployment successfully covered a 1.5-km transect with 50 m node-to-node distances, and continues to reliably transmit data from more than 300 sensors every 15 minutes.
The sheer scale of future monitoring campaigns will demand many network nodes, making it impractical to optimize every single link. In situ network behavior must be quantified to derive average indicators of performance. Gathered over a span of several days, PDR and RSSI should be sufficient metrics to estimate operational path behavior. It is expected that these network characteristics will vary based on local physiographic features. Unless obvious physical obstacles exist (e.g., large rock outcroppings or metal surfaces blocking a path), the average maximum transmission distance extracted from such an analysis should be sufficient to inform design.
An open mindset toward the current limitations of the hardware, and continuous monitoring of network statistics (for at least a year throughout the initial deployment), led to an iterative deployment approach, which can be split into three distinct phases. Following the design procedure outlined in Figure 12 will result in near-optimal and reliable node-to-node spacing. Since maintaining reliable network performance is the most important design imperative, trading transmission distance for a robust, hardened network is the proper decision. More repeater nodes can always be added to improve connectivity, but only collection and analysis of network statistics can shed light on actual causes of network failure, and lead to rational strategies for improving performance. Using this approach, the network presented in this study experienced significant boosts in performance, and no further collapses.
The implementation of the above deployment strategies should lead to successful WSN deployments of up to 100 nodes spanning 1-2 km². Environmental monitoring deployments of networks beyond this size should use multiple network managers, each responsible for a network of up to 100 motes. This will reduce the likelihood of large-scale network outages caused by bottleneck effects, while simplifying the identification and management of network behavior. Such an architecture permits a manageable approach to scaling when considering the instrumentation of large areas such as entire water basins or mountain ranges. The network described in this paper is a prototype for monitoring such large areas, and future work will investigate the feasibility and methods of scaling this deployment strategy to cover significantly larger areas. Using these heuristics, deployment time can be significantly reduced, thus allowing more resources to be devoted to the actual scientific program.
7. Acknowledgements
This research was supported by the National Science Foundation, through the Southern Sierra Critical Zone Observatory (EAR-0725097) and a Major Research Instrumentation grant (EAR-0619947). We also acknowledge the cooperation of Matthew Meadows, who was instrumental in the deployment of this network, and that of C. Hunsaker and colleagues at the U.S. Forest Service, Pacific Southwest Research Station.
8. References
Al Basset Almamou A., R. Wrede, P. Kumar, H. Labiod, and J. Schiller (2009), Performance
evaluation of routing protocols in a real-world WSN, Information Infrastructure Symposium
(GIIS), 1-5.
Archrock (2011), Wireless sensor networks, http://www.archrock.com/, January 2011.
Bales R., et al. (2011), Critical Zone Observatories. In press.
Bell G., Federspiel C. (2009), Demonstration of datacenter automation software and hardware
(dash) at the California franchise tax board, Internal report prepared for the California
Energy Commission.
Crossbow (2011), Shipment tracking technology, http://www.xbow.com, January 2011.
Culler D., Doherty L., Hill J., Holden M., Kiers C., Kumar S., Kusuma J., Morris S., Pister
K., Ramchandran K., Robbins B., Scholtz J., Scott M., Szewczyk R., Tang B., Woo A.
(2001), 29 Palms fixed/mobile experiment: Tracking vehicles with a UAV-delivered sensor
network, UC Berkeley Technical Report.
Dust (2006), Technical overview of Time Synchronized Mesh Protocol (TSMP), whitepaper,
Hayward, CA.
Dust (2011), Embedded wireless sensor networks for monitoring and control,
http://www.dustnetworks.com/, January 2011.
Ember (2011), ZigBee wireless network systems, http://www.ember.com/, January 2011.
EME (2011), Electronically monitored ecosystems, http://www.emesystems.com/, January 2011.
Emerson (2010), Emerson Wireless Process Management, http://www2.emersonprocess.com/en-
US/plantweb/wireless/, January 2011.
Etzel B., Braun H. (2005), High-Performance Wireless Internet Connection to Mount Laguna
Observatory, Bulletin of the American Astronomical Society, 32, 1428.
Friis H. (1946), A note on a simple transmission formula, Proceedings of the IRE, 34, 254-256.
Friis H. (1971), Introduction to radio and radio antennas, IEEE Spectrum, 8(4), 55-61.
Hart K., Martinez K. (2006), Environmental sensor networks: A revolution in the earth system
science?, Earth-Science Reviews, 78(3-4), 177-191.
Hasler A., Talzi I., Beutel J., Tschudin C., Gruber S. (2008), Wireless sensor networks in
permafrost research: concept, requirements, implementation and challenges, Proceedings
of the 9th International Conference on Permafrost, Fairbanks, Alaska, USA.
Honeywell (2010), Honeywell Industrial Wireless Mesh Solutions,
http://www.honeywell.com/ps/wireless, January 2011.
IEEE (2009), Institute of Electrical and Electronics Engineers Standard for Information
technology - Telecommunications and information exchange between systems - Local and
metropolitan area networks - Specific requirements Part 15.4: Wireless Medium Access
Control (MAC) and Physical Layer (PHY) Specifications for Low Rate Wireless Personal
Area Networks (LR-WPANs), Technical Standard IEEE 802.15.4-2009.
Ingelrest F., Barrenetxea G., Schaefer G., Vetterli M., Couach O., Parlange M. (2010), SensorScope:
Application-specific sensor network for environmental monitoring, ACM Transactions
on Sensor Networks, 6(2), doi:10.1145/1689239.1689247.
ISA (2009), Wireless systems for industrial automation: Process control
and related applications, International Society of Automation technical standard, ISA-
100.11a-2009.
Karl H., and Willig A. (2005), Protocols and Architectures for Wireless Sensor Networks, John
Wiley & Sons, Hoboken, N.J.
Kerkez B., Watteyne T., Magliocco M., Glaser S., Pister K. (2009), Feasibility analysis of
controller design for adaptive channel hopping, First International Workshop on
Performance Methodologies and Tools for Wireless Sensor Networks (WSNPerf), Pisa, Italy.
Kim S., Pakzad S., Culler D., Demmel J., Fenves G., Glaser S., Turon M. (2007), Health
monitoring of civil infrastructures using wireless sensor networks, Proceedings of the 6th
international conference on Information processing in sensor networks, New York, NY,
USA, 254—263.
Kraus J. (1988), Antennas, McGraw-Hill, New York, NY, USA.
Libelium (2011), Wireless sensor networks, http://www.libelium.com/, January 2011.
Mainwaring A., Culler D., Polastre J. , Szewczyk R. , Anderson J. (2002), Wireless sensor
networks for habitat monitoring, WSNA ’02: Proceedings of the 1st ACM international
workshop on Wireless sensor networks and applications, New York, NY, USA, 88—97.
Nevis (2011), Persistent LAN security and network access control
http://www.nevisnetworks.com, January 2011.
Oestges C., Villacieros M., Vanhoenacker-Janvier D. (2009), Radio channel characterization for
moderate antenna heights in forest areas, IEEE Transactions on Vehicular Technology,
58 (8), 4031-4035, doi: 10.1109/TVT.2009.2024947
Porter J., Arzberger P., Braun H.-W., Bryant P., Gage S., Hansen T., Hanson P., Lin C.-C., Lin F.-
P., Kratz T., Michener W., Shapiro S., Williams T. (2005), Wireless sensor networks for
ecology, BioScience, 55(7), 561-572.
Ramanathan N., Harmon T., Balzano L., Estrin D., Hansen D., Jay J., Kaiser W., Sukhatme
G. (2006), Designing wireless sensor networks as a shared resource for sustainable
development, Information and Communication Technologies and Development, 256-265,
doi: 10.1109/ICTD.2006.301863.
Rice J., Spencer F. (2008), Structural health monitoring sensor development for the imote2
platform, SPIE Smart Structures Conference/NDE.
Rice, R., and R. C. Bales (2010), Embedded-sensor network design for snow cover
measurements around snow pillow and snow course sites in the Sierra Nevada of California,
Water Resour. Res., 46, W03537, doi:10.1029/2008WR007318.
Sentilla (2011), Enterprise management for energy, http://www.sentilla.com/, January 2011.
Song J., Han S., Mok A., Chen D., Lucas M., Nixon M., Pratt W. (2008), WirelessHART: Applying
wireless technology in real-time industrial process control, Real-Time and Embedded
Technology and Applications Symposium 2008, 377-386.
Szewczyk R., Osterweil E., Polastre J., Hamilton M., Mainwaring A., Estrin D. (2004), Habitat
monitoring with sensor networks, Commun. ACM, 47. 34-40.
Tanenbaum A. S. (2003), Computer Networks, Prentice Hall, Upper Saddle River, NJ.
Trubilowicz, J., K. Cai, and M. Weiler (2009), Viability of motes for hydrological
measurement, Water Resour. Res., 45, W00D22, doi:10.1029/2008WR007046.
Watteyne T., Mehta A., Pister K. (2009), Reliability through frequency diversity: why channel
hopping makes sense, PE-WASUN ’09: Proceedings of the 6th ACM symposium on
Performance evaluation of wireless ad hoc, sensor, and ubiquitous networks, New York,
NY, USA, 116—123.
Watteyne T., Lanzisera S., Mehta A., Pister K. (2010), Mitigating Multipath Fading Through
Channel Hopping in Wireless Sensor Networks, IEEE International Conference on
Communications (ICC), Cape Town, South Africa.
List of Figures
Figure 1. Mote architecture.
Figure 2. WSN Topologies.
Figure 3. Sensor node architecture.
Figure 4. Snapshot of the prototype network.
Figure 5. Network behavior over a 25-day period, starting on September 15, 2009.
Figure 6. Plot of RSSI vs. path distance.
Figure 7. A plot of PDR as a function of RSSI.
Figure 8. Final wireless sensor network layout at the Kings River Experimental Watershed.
Figure 9. A histogram of PDR values for all paths in the network.
Figure 10. Average network-wide PDR over a 24-hour period.
Figure 11. The fluctuations of PDR over time for three specific paths in the network.
Figure 12. The three phases of a WSN deployment.
Table 1. Sensor characteristics.
Parameter Sensor Manufacturer Accuracy
Snow depth Ultrasonic Judd Communications ±1 cm
Volumetric water content EC-TM Decagon ±3% VWC
Matric potential MPS-1 Decagon Calibration dependent (max ±40%)
Solar radiation LI-200 LI-COR ±5%
Humidity and Temperature SHT15 Sensirion ±2%, ±0.5 °C
Table 2. Power consumption of various components in the network.
Component Volts Amps (idle mode)
Mote 3 0.02 (< 2.5×10⁻⁵)
Data logger 12 0.016 (2×10⁻⁴)
WSN network manager 12 0.06
Sensors 5-12 0.15-0.2 (0)
Cellphone modem 12 0.2
Embedded computer 5 0.4
Figure 1. Mote Architecture.
Figure 2. WSN network topologies. (a) WSN star-topology: all motes have a direct link to the network manager; the span of this network is limited by the distance between a mote and the manager. (b) WSN mesh-topology: motes exchange information with neighbors to create a redundant multi-hop network.
Figure 3. Sensor node architecture: (1) mote, (2) custom data-logger to interface the sensor array, (3) on-site storage, (4) 12V battery, (5) snow-depth sensor, (6) humidity and temperature sensor, (7) solar radiation sensor, (8) 10W solar panel, (9) external 8dBi antenna, (10) four soil moisture, temperature, and matric potential sensors at varying depths.
Figure 4. Snapshot of the prototype network. Circles indicate locations of motes. The figure lists path-specific RSSI values in dBm (referenced to 1 mW).
Figure 5. Network behavior over a 25-day period, starting on September 15, 2009. Network resets are evident as the sudden spikes in number of network paths. The average RSSI of nodes in the network dropped as more paths were created (more nodes joined the network). A rainstorm, evident around October 3, coincided with a significant drop of paths from the network.
[Figure 5 panels: number of paths, average RSSI (dBm), and relative humidity (%) plotted against date (2009).]
Figure 6. Plot of RSSI vs. path distance. The RSSI decreased with an increase in path distance, and significantly fluctuated around the mean. A significant number of in situ radio links experienced RSSI values below the manufacturer-specified -89 dBm threshold at approximately 100-120 m.
[Figure 6 axes: path distance (m) vs. RSSI (dBm); curves show the mean RSSI, the 95% confidence region, the idealized Friis model, and the manufacturer-specified RSSI threshold.]
Figure 7. A plot of PDR as a function of RSSI. Gray points reflect observed behavior, while the solid black line represents an idealized waterfall fit. Most packets are transmitted successfully for an RSSI above -80 dBm, after which there is a sudden drop in PDR. The dashed line provides a conservative, and lower bound design curve.
[Figure 7 axes: RSSI (dBm) vs. PDR (%).]
Figure 8. Final wireless sensor network layout at the Kings River Experimental Watershed. The base station, located at an eddy flux tower, houses a network manager and acts as a central data aggregation point.
Figure 9. A histogram of PDR values for all paths in the network following a reconfiguration of the network. About 80% of all network paths are performing within the desired 85-90% design value, and over 50% of all paths are at 100% PDR.
[Figure 9 axes: PDR (%) vs. number of path occurrences.]
Figure 10. Average network-wide PDR over a 24-hour period; no clear correlation exists between deviations in PDR and either temperature or humidity.
[Figure 10 axes: time (hours) vs. PDR and relative humidity (%), with temperature (°C) on a secondary axis.]
Figure 11. The fluctuations of PDR over time for three specific paths in the network over a 24-hour period.
[Figure 11 axes: time (hours) vs. PDR (%) for paths A, B, and C.]
Figure 12. The three phases of a WSN deployment.