
Tutorial Nanoseismic


Nanoseismic Monitoring: Method and First Applications

Manfred Joswig
Institut für Geophysik, Universität Stuttgart, 70174 Stuttgart, Germany

This article is dedicated to my academic teacher, Prof. Dr. Hans-Peter Harjes, Bochum, Germany, on his 65th birthday.

SUMMARY

We introduce the concept of nanoseismic monitoring, aimed at determining the sources of seismic energy radiation at (slant) distances of 10 m to 10 km and in magnitude ranges down to ML –3.0. Recording seismometers are arranged as a tripartite array of 100 m layout around a three-component center site, forming a six-channel Seismic Navigating System (SNS). The associated evaluation program, HypoLine, utilizes network and array processing tools, spectral and polarization analysis, and displays uncertainty ranges by jackknifing instead of residuals. Waveform parameters can be picked close to 0 dB SNR; initial ambiguities are resolved by constraints on slowness and azimuth. Event processing is highly interactive; it allows for determination of origin time, epicenter, depth, suited half-space vP and vS, and ML by just one SNS.

The concept was originally developed to improve the aftershock recording capabilities of OSI missions governed by the CTBT regulations for monitoring underground nuclear explosions. However, the first practical application consisted of monitoring impact events generated by the collapse of sinkholes at the Dead Sea. Additional examples of active fault mapping are presented.

Standard ML determination will fail since nanoseismic monitoring is intended to handle events of small signal amplitudes and short slant distances. To preserve the relation to microseismic event scales, we introduce an extended concept of ML calculation suited for ultra-small events.

INTRODUCTION

Nanoseismic monitoring is an application of passive seismic field investigations tuned to ultimate sensitivity. We propose the term 'nano' since our scheme describes the acquisition and processing of seismic waves generated by sources two to three magnitude units below the target of standard microseismic network surveillance (e.g., Lee & Stewart, 1981). In terms of seismic magnitude, this corresponds to ranges down to ML –3.0, for source-to-sensor (slant) distances of 10 m to 10 km. However, the ultimate sensitivity runs blind without two additional criteria: autonomy in event location and robustness against noise. Autonomy is required because the smallest events are seen only by the nearest station, and it is achieved by extending that station into a Seismic Navigating System (SNS) as described below. Robustness against noise is attained by signal processing. Additional sensitivity is gained by innovative approaches of the dedicated-purpose software 'HypoLine', lowering the processing threshold to near 0 dB SNR.

Applications of nanoseismic monitoring comprise aftershock surveys for natural and man-made seismic sources, e.g., for on-site inspections (OSI) under the CTBT, the mapping of active fault and sinkhole activity, and the monitoring of volcanic and induced seismicity. The first investigation relating its results explicitly to the approach of nanoseismic monitoring was reported for the sinkhole survey at the Dead Sea (Joswig & Wust-Bloch, 2002; Joswig et al., 2002; Wust-Bloch & Joswig, 2003, 2004a).

FIELD LAYOUT OF SNS

The requirements for the field installation are maximum sensitivity and some means of autonomy in event location. During site selection, we care – in this order of ranking – for low ambient ground noise, for source proximity, and for good coupling to ground by solid rock outcrops. We intentionally ignore any infrastructure constraints, such as power supply, communication facilities, shelter housing, or accessibility for regular maintenance. What we get then is uncompromised performance, suited, however, just for short-term campaigns of a few nights (we skip daytime records due to their high noise level anyhow). The equipment reflects the spirit of reduction to necessities: it consists of one six-channel data logger, a ruggedized notebook, three 100 m cable drums, and four lightweight, short-period seismometers (one three-component, three vertical). Its total weight is some 20 kg; with tent, sleeping bags, and food, two people with backpacks can carry it to any point in the countryside and stay overnight – a truly ultra-portable solution. Thus our field campaigns resemble the work of refraction surveys more than semi-permanent network installations.

Autonomy in event location is achieved by the station layout, plus the suited processing software 'HypoLine' as described below. The optimum layout for our equipment is a tripartite array of vertical geophones, organized as an equilateral triangle, with an additional three-component seismometer at its center point. The distances from center to outposts range from 30 to 100 m depending on the task, and allow for analog cable transmission without compromise. To keep the electrical noise low, it may be necessary to improve the equipment's factory status by additional resistive loads at both cable ends. Field constraints often require a modification of the intended, equilateral array geometry, and each installation must be metered individually. Thanks to a laser distance meter, GPS, and an electronic compass, we can close this task after 15 min, which adds to the initial 15 min for system setup; initial noise studies can be performed after 5 min. Our rule of thumb is that 30 min after arrival, the system is fully operational with on-line data access and complete in-field analysis capabilities (see Fig. 1). The rapid system setup, together with its ultra-portability, turns the SNS into a truly optimal aftershock monitoring system.



We named this system a Seismic Navigating System (SNS) to stress the difference from both single seismic stations on the one hand and full microseismic networks on the other. The SNS installation effort is comparable to that of a single site, but SNS-provided source locations come close to those of a seismic network. In the initial application for on-site inspection of aftershocks, the SNS was intended to guide or navigate (sic) the further installation of a fully featured microseismic network. The idea of the SNS is not restricted to running just one system. Any team of two trained students could operate one SNS (and communicate results wirelessly); for the small-scale field work of sinkhole monitoring, one crew even deployed three of them simultaneously. The extra effort in SNS installation pays off by new possibilities, e.g., to calibrate the magnitude scale and to correct for a possible systematic bias in event location. However, multi-SNS records could only be obtained for a few stronger events; for the majority of weak events, we are left with autonomous, single-SNS processing.

EVENT DETECTOR OR EVENT DETECTIVE

The very first problem of digital data processing is the detection of possible events of interest in an environment of frequent, sometimes even larger noise bursts. In fact, microseismic networks were, besides the large seismic arrays of the 1960s, the first application of statistical detection theory to seismology. With recent improvements by pattern recognition and rule-based systems, automated seismogram analysis is a principal field for geoscience data mining (e.g., Joswig, 1990, 1996, 2000, 2003). It reflects a situation where a small fraction of transient signals must be retrieved from large amounts of continuous noise data, and these signals must be discriminated against intermittent noise bursts. To meet this challenge, the observational situation is assumed to be known, and stable for months and years.

For nanoseismic monitoring, the situation is different in almost every aspect. The amount of data is relatively small, e.g., a few nights of eight-hour records each. Neither signals nor noise bursts are known a priori, but we expect dozens to hundreds of candidate signals per night. Thus each new installation restarts the detective task of discovering and explaining the weak fingerprints of observation targets. For this type of forensic seismology, sonograms turned out to be the most versatile diagnosis tool. As displayed in Fig. 2, they are not just spectral density matrices, but can be understood as auto-adaptive, broadband, optimum-filter discriminators on local energy spots and their connectivity in the time-frequency plane (Joswig, 1995). This powerful 2-D decomposition unveils the typical signatures of, e.g., traffic noise [mono-frequent, symmetrically swelling on and off] or sonic booms [spiky with high-frequency tail]. Sonograms clarify the overall context of weak local, regional, or teleseismic events when only short-term segments show up in the seismograms, leaving plenty of room for confusion with some kinds of noise.

Given the urgent need for human detective work, there is no comparable demand for automated detectors in nanoseismic monitoring campaigns. One could use the pattern recognition of SonoDet to screen out spikes and traffic sweeps, and to mark default onset patterns; however, the main value of sonograms remains in their support of human diagnosis.



LEAST SQUARES LOCATION OR JACKKNIFING

The procedures for network location face severe challenges when processing seismograms down to the limits of resolution. One may guide the phase picking by discriminative information in sonograms; one may escape over-interpretation by displaying quantization intervals, both in amplitude (Joswig, 1999) and time. But one cannot rule out the ambiguity of neighboring wiggles, and the sparse information of just four SNS stations does not provide any redundancy to control the selection. This is a principal problem of all sparse station configurations, but here it is overcome by introducing a new location approach. This approach identifies the contribution of each single phase to the location results, and it is realized by displaying 'hypolines' in the program 'HypoLine'. Hypolines are the well-known constraining curves in event location, i.e., a hyperbola for the ∆tP of any two stations, a circle for tS–tP at any single site, and array beams where applicable. In HypoLine, the best solution is characterized by the highest concentration of hypolines, resembling the classical textbook examples of graphical location. These ideas appeared in an early multi-terminal analysis system (Joswig, 1987) and more recently triggered further research in fuzzy location (Lin & Sanford, 2001) and updated tutorial papers (Pujol, 2004). A thorough implementation reveals that hyperbolae are exact solutions only for half-space models and zero depth. Merely increasing the depth leaves at the surface a cut through a hyperboloid, which, however, remains a hyperbola since the cut runs parallel to the symmetry axis. A more severe change comes with layer models, since exceeding the cross-over distance at one leg causes a break in slope, i.e., a discontinuity of the first derivative (see Fig. 3).
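As a numerical illustration of one such hypoline, the ∆tP hyperbola in a half space at zero depth is the locus of trial epicenters whose P travel-time difference between two stations matches the observed value. The station pair, vP, and ∆tP below are illustrative assumptions, not values from the text; a minimal sketch:

```python
import numpy as np

def ptime_diff(xy, sta_a, sta_b, v):
    """P travel-time difference t_a - t_b (s) for an epicenter xy, half space."""
    da = np.hypot(xy[0] - sta_a[0], xy[1] - sta_a[1])
    db = np.hypot(xy[0] - sta_b[0], xy[1] - sta_b[1])
    return (da - db) / v

def hypoline_points(sta_a, sta_b, v, dt, extent=500.0, n=401):
    """Grid points (m) whose time difference matches dt within ~one grid cell."""
    xs = np.linspace(-extent, extent, n)
    gx, gy = np.meshgrid(xs, xs)
    diff = ptime_diff((gx, gy), sta_a, sta_b, v)
    tol = (2.0 * extent / (n - 1)) / v       # one grid cell of path difference
    mask = np.abs(diff - dt) < tol
    return np.column_stack([gx[mask], gy[mask]])

# Two stations 100 m apart, vP = 400 m/s (soft sediments, an assumption);
# dt = 0.1 s corresponds to a 40 m path difference, so a hyperbola exists.
A, B = (-50.0, 0.0), (50.0, 0.0)
pts = hypoline_points(A, B, v=400.0, dt=0.1)
```

For dt = 0 the locus degenerates to the perpendicular bisector of the two stations, which is a convenient sanity check.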

A key feature of HypoLine is its handling and specific display of data uncertainty in event location. In the classical approach of matrix inversion, we iterate to the RMS minimum of residuals. In graphical and fuzzy location, we care about the size of an area of crossing hypolines. Here, too, we take an area as the criterion of quality, but its size is determined by the approach of jackknifing: the initial, overdetermined system of equations is broken down into the permutations of exact sets, then solved individually, and finally evaluated by the spread of solutions. This scheme is a variation of the most common, "leave one out" jackknife, where N observations are subdivided into N samples of size N–1, e.g., to train artificial neural networks on limited observation data. Here we perform a "leave out k" operation with N–k being the dimension of the statistics, i.e., matching the number of parameters to be determined.

To illustrate our procedure, let us consider the simplest case of event location: the three unknowns (x, y, t0) [with z fixed] are constrained by three P onset times, e.g., t1, t2, and t3. It turns out that the three related hyperbolae, here defined by (t1–t2), (t1–t3), and (t2–t3), will always share a common triple junction at c1. This 'coincidence' is in no way a criterion of quality, but the compelling consequence of inverting an exact set of equations. In terms of jackknifing, no permutation took place.

The situation changes if we process four onset times, as for the SNS configuration in Fig. 4. The permutation into onset triples gives the four locations c1 = f(t1, t2, t3), c2 = f(t1, t2, t4), c3 = f(t1, t3, t4), and c4 = f(t2, t3, t4). Thus jackknifing yields four solutions, and their spread is a measure of parameter consistency. Of course, matching observations will superpose c1–c4. But reflecting their ambiguity at low SNR, any single onset time may be changed (see later for dynamic features) and will affect three solutions while just one remains; e.g., changing t2 will alter c1, c2, and c4, but keep c3. The obvious advantage of jackknifing is the display of this modification in solution space, instead of its indirect, often non-linear influence on the residuals of the input parameters.
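A minimal numeric sketch of this "leave one out" scheme, under assumed values (tripartite 100 m layout, half space with vP = 400 m/s, zero depth, synthetic onsets with 2 ms picking noise): each exact onset triple is located by grid search, and the spread of the four solutions c1–c4 takes the role of the triple-junction scatter described above. Note that an exact triple can admit a second, conjugate intersection of its hyperbolae; the jackknife spread exposes exactly this kind of ambiguity.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Assumed SNS layout (m): centre site plus three outposts; half space, z = 0.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [-50.0, 86.6], [-50.0, -86.6]])
V, TRUE_EPI, T0 = 400.0, np.array([30.0, 40.0]), 1.0

dists = np.hypot(*(stations - TRUE_EPI).T)
onsets = T0 + dists / V + rng.normal(0.0, 0.002, 4)   # 2 ms picking noise

def locate(idx, extent=200.0, n=201):
    """Grid-search epicenter from one exact onset triple (x, y, t0 unknown)."""
    xs = np.linspace(-extent, extent, n)
    gx, gy = np.meshgrid(xs, xs)
    # Per-station origin-time estimates; a consistent epicenter makes them agree.
    t0 = np.array([onsets[i]
                   - np.hypot(gx - stations[i, 0], gy - stations[i, 1]) / V
                   for i in idx])
    misfit = np.ptp(t0, axis=0)            # spread of the three t0 estimates
    k = np.unravel_index(np.argmin(misfit), misfit.shape)
    return np.array([gx[k], gy[k]])

# "Leave one out": four stations -> four exact triples -> solutions c1..c4.
solutions = np.array([locate(idx) for idx in combinations(range(4), 3)])
spread = solutions.std(axis=0)             # jackknife consistency measure
```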

For any number of stations N ≥ 3 with P onsets, the maximum number of hyperbolae H is given by Eq. (1), provided all time differences stay below the ratio of station distance to velocity (otherwise no hyperbola exists). Likewise, the upper limit for the number of triple junctions T is given by Eq. (2).

These formulae imply a strong increase in the number of permutations, as displayed in Table 1 for N = 3…12, where N = 4 describes the situation of one SNS.

N   3   4   5   6   7   8   9   10   11   12
H   3   6  10  15  21  28  36   45   55   66
T   1   4  10  20  35  56  84  120  165  220

Table 1 Stations N, Hyperbolae H, and Triple Junctions T by Jackknife Analysis
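Eqs. (1) and (2) and Table 1 can be cross-checked in a few lines: both sums reduce to the binomial counts of station pairs and station triples, respectively.

```python
from math import comb

def n_hyperbolae(n):           # Eq. (1): sum_{k=1}^{N-1} k = N(N-1)/2
    return sum(range(1, n))

def n_triple_junctions(n):     # Eq. (2): sum_{k=1}^{N-2} k(N-1-k) = N(N-1)(N-2)/6
    return sum(k * (n - 1 - k) for k in range(1, n - 1))

for n in range(3, 13):         # reproduces the H and T rows of Table 1
    assert n_hyperbolae(n) == comb(n, 2)
    assert n_triple_junctions(n) == comb(n, 3)
```

For the one-SNS case N = 4 this gives H = 6 and T = 4, as in Table 1.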

With N = 8 at the latest, the multiplicity of hypolines can no longer be resolved in any area plot. One could escape to cell-hit counts, as in tomographic resolution analysis, and Fig. 5 gives an appropriate example. Alternatively, one could regard this situation as the break-even point with RMS residual analysis for finding the best location solution. But jackknife analysis can still be beneficial, since the spread of dense clouds of triple junctions is by far a more realistic measure of location accuracy than the shape of a 99% error ellipse.

For N = 4 in the one-SNS layout, the number of permutations is limited, and jackknifing best serves our demand to trace the influence of any single parameter on the joint solution. Still, HypoLine in some sense supports the residuals of input parameters: consider, again, the example of changing one P onset. Although we change just one parameter, 3 of the 6 hyperbolae and 3 of the 4 triple junctions (plus one tS–tP circle) will change. HypoLine marks the affected hypolines by a color code (red), but it is hard to see that they all depend on just one update. Independently, or guided by jackknifing, the user can place a hypothetical epicenter at will in the map of hypolines, and then observe the simulated onsets (yellow) in the seismograms. In this mode, an outlier of onset picking will show up graphically by its single large residual.

H = Σ_{k=1}^{N-1} k = N(N-1)/2 ,        (1)

T = Σ_{k=1}^{N-2} k (N-1-k) = N(N-1)(N-2)/6 .        (2)



ARRAY BEAM AND THREE-COMPONENT ANALYSIS

The SNS works with a small station layout, and most events will occur outside its aperture. Hyperbola location can be extended to this range, but the more appropriate approach is array processing once sufficient signal coherency exists. The four available traces form a sparse array featuring the same problems as sparse network processing: any single change of phase picking will alter the solution significantly, and least-squares techniques are not well suited to describe this dependency. Instead, the principle of jackknifing is applied again, this time by forming the onset triples for exact slowness determination. We get four independent beams, both for P and for S, from the four array traces (see Fig. 6). The spread of these beams is an extremely sensitive measure of phase consistency, and helps to constrain the picking options down to extreme SNR scenarios.
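The exact slowness determination from one onset triple amounts to a 2×2 linear system, t_i − t_j = s·(r_i − r_j). A sketch with an assumed layout and a noise-free synthetic plane wave, so that all four triple-beams coincide (picking noise would spread them):

```python
import numpy as np
from itertools import combinations

# Assumed tripartite SNS layout (m).
stations = np.array([[0.0, 0.0], [100.0, 0.0], [-50.0, 86.6], [-50.0, -86.6]])

def triple_slowness(onsets, idx):
    """Horizontal slowness vector (s/m) from one exact onset triple."""
    i, j, k = idx
    A = np.array([stations[j] - stations[i], stations[k] - stations[i]])
    b = np.array([onsets[j] - onsets[i], onsets[k] - onsets[i]])
    return np.linalg.solve(A, b)

# Synthetic plane wave: slowness chosen freely, |s| ~ 1/(2600 m/s).
s_true = np.array([3.2e-4, 2.1e-4])
onsets = stations @ s_true                    # exact plane-wave arrival times

beams = np.array([triple_slowness(onsets, idx)
                  for idx in combinations(range(4), 3)])
v_app = 1.0 / np.linalg.norm(beams, axis=1)   # apparent velocities (m/s)
```

The spread of the four beams, zero here, grows with picking noise and is the consistency measure described above; the apparent velocity in turn discriminates the phase type.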

The results of array processing are a key constraint for the final location estimate. The related P and S beams are transposed into the map of hypolines, where the fan width codes the inherent uncertainty of ±1 sample in phase picking (see Fig. 6). Thus the width will change if an updated phase picking on resampled traces has reduced the timing uncertainty. While the back-azimuth result supports the location estimate, slowness values are the major discriminant for phase-type identification, i.e., P, S, surface wave, or acoustic arrival.

For large-scale arrays like NORESS or GERESS, beam forming (e.g., Harjes & Henger, 1973) and f-k analysis (Capon, 1969) are most versatile tools to improve signal characteristics at weak SNR. One would like to have these tools available for sparse arrays as well, but poor SNR improvement and spatial aliasing have prevented their use until now. To overcome this obstacle, we applied the principal idea of noise muting in sonograms to the display of beam energy b(sx, sy): ratios b(sx, sy) / bm < 1.0 are muted; ratios above are color-coded logarithmically from light to dark (bm is the mean of the slowness map). One gets very light or even blank maps if the beam energy is insignificant against background noise. The beam forming is not a broadband technique; instead, it is applied in small pass bands around the dominating frequency, somewhat resembling the 3-D resolution of full f-k analysis. Fig. 7 shows the significant dependency on frequency bands, with an additional, strong increase of spatial aliasing at higher frequencies. Nonetheless, muted beam-energy maps have proven to be very valuable tools for first-guess adjustment.
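The muting idea can be sketched for a single narrow band, with the delay-and-sum beam reduced to one complex spectral sample per station; layout, frequency, and slowness are illustrative assumptions, not the HypoLine implementation:

```python
import numpy as np

stations = np.array([[0.0, 0.0], [100.0, 0.0], [-50.0, 86.6], [-50.0, -86.6]])
f0 = 15.0                                   # dominating band frequency (Hz)
s_true = np.array([2.0e-4, 1.5e-4])         # plane-wave slowness (s/m)

# Narrow-band station amplitudes: unit signal with plane-wave phase delays.
phases = np.exp(-2j * np.pi * f0 * (stations @ s_true))

s_axis = np.linspace(-6e-4, 6e-4, 121)
sx, sy = np.meshgrid(s_axis, s_axis)
# Steering undoes the trial delay at each station and stacks coherently.
steer = np.exp(2j * np.pi * f0 *
               (sx[..., None] * stations[:, 0] + sy[..., None] * stations[:, 1]))
b = np.abs((steer * phases).sum(axis=-1)) ** 2   # beam energy b(sx, sy)

ratio = b / b.mean()                        # bm = mean of the slowness map
muted = np.where(ratio < 1.0, 0.0, np.log10(np.maximum(ratio, 1.0)))
```

Ratios below the map mean are blanked; the rest would be color-coded logarithmically. An insignificant beam thus leaves a mostly blank map, as described above.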

So far we discussed the two extremes of network and array processing, where epicenters are either within or far outside the station aperture. However, in nanoseismic monitoring most events are at the intermediate, network-to-array distance: the SNS aperture is too small to cover the whole source region, and events are too weak to be recorded much outside the station coverage. Luckily, HypoLine is most powerful in this intermediate range of up to seven times the cable layout, i.e., five times the station aperture. Fig. 6 already addressed this issue by overlaying hyperbolae and beam fans. Beyond the five-times-aperture limit, HypoLine performs a pure array analysis, with the well-known restrictions on depth determination.

Compared to the sophisticated tools for network and array processing, the three-component analysis is still at an early development stage. We know from past experience that the determination of dominant patterns in particle motion gets increasingly complex for shorter source-station distances (Klumpen & Joswig, 1983). It was our choice to implement the jackknifing and beam-forming approaches first, while the particle-motion display has so far had lower priority; it is pending for future releases.

SLIDING THROUGH PARAMETER SPACE

One ruling design feature of HypoLine was to display all relevant information simultaneously on screen. In network mode, for example, we see the seismograms with onsets (picked and simulated), the filter settings, the epicenter map with hypolines, and the depth along the velocity profile. Any change of any single parameter, e.g., a change in onset time, will be updated instantaneously in all affected calculations. Thanks to current computation power, this change and the related redraw of plots take just a fraction of a second; thus rendering some ten screens per second is possible. This is movie speed, and one can now virtually 'slide' through parameter space, turning the discrete picking of a new value into a continuous transition. The feature is reminiscent of the virtual reality in computer games: shifting the avatar immediately updates its view of the world. In HypoLine, the sliding property is realized for every dimension possible: onset times can slide in time, epicenter coordinates slide in space, array beams slide in slowness space, magnitudes slide along size, and, computationally most expensive, hypocenter depth slides along the velocity profile.

What may initially look like a gimmick turns out to be one of the most valuable features of HypoLine when processing weak, ambiguous events in an unknown environment. Dozens of solutions can be checked on-line within seconds, and experienced 'sliders' can foresee the effects on the spread of triple junctions. Thus one is not blindly trapped in some local or even global minimum of residual RMS error, but can explore a selection of the most plausible solutions within a line of equal-quality choices; plausibility means consistency with geological constraints, seismological experience, and obvious features in the field. We have learned to appreciate the sliding selection for two very delicate tasks of any seismic location issue: the depth determination, and the related definition of the layer model. One may wonder why we must care about velocities at all, but experience showed that nanoseismic monitoring resolves events in such shallow layers that even the lowest velocity of regional models will not match the observed travel times. Instead, a simple half space with adapted vP, and possibly an adapted Poisson's ratio for a further lowered vS, sufficiently models travel-time effects in the upper sediments. This result may surprise, but it just reflects the simple source-to-SNS geometry with similar travel paths for all stations.

Three unknowns for the hypocenter, one for the origin time, and one for vP – how can we constrain this system by just five knowns of a single SNS, i.e., the four P onsets and one three-component S? The set of equations is exact, but any noise causes a multitude of imperfect solutions with related depth-velocity trade-offs. Luckily, the sliding approaches for the depth and vP parameters exhibit opposite tendencies for the hypolines, as seen in Fig. 8. In addition, the apparent velocities from beam forming constitute upper constraints for the velocity model. Thus one can identify the most reasonable candidate solution from just one SNS, without any prior knowledge of field properties.



MAGNITUDE AND ITS DISTANCE CORRECTION

Associating nanoseismic events with ML is an attractive option since it relates the observed energies to intuitively known scales, and one may expect results well below ML –1.0. Negative values as such are no problem for the ML calculation rules, but the event distance must conform to the defined boundaries. It is here that we detour, as most other published results for negative ML do too. Strictly speaking, one cannot report ML for events below 10 km if the chosen extension of the standard distance-correction curve is not explicitly given. Fig. 9 summarizes the most common approaches, and one recognizes the huge effect of the slope, which may result in more than a magnitude difference. While the slope of zero in the original definition of Richter (1958) is simply unreasonable, Bullen & Bolt (1985) and Lahr (1989) suggest values that extrapolate to extremely small magnitudes below 1 km. Less drastic corrections result from the slopes of Bakun & Joyner (1984) and of Di Grazia et al. (2001), who actually suggested a correction of Lahr (1989) for small distances.

HypoLine is not tied to any fixed slope; instead it may change values according to actual attenuation profiles. In the nomenclature of the last paragraph, the slope of the distance correction is just another choice in the parameter-sliding approach. Fig. 10 gives an example for data from sinkhole monitoring at the Dead Sea (Wust-Bloch & Joswig, 2004): one event is recorded by three SNS at 30, 100, and 300 m, respectively. A slope of –1.0 for –log(A0) adjusts all measurements to an ML of –1.4. The slope is in agreement with Bakun & Joyner (1984) but offset by –0.5 from their curve to approximate Lahr (1989) above 3 km.
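A sketch of such an extended correction: −log10(A0) is continued below 10 km with a unit slope per decade of distance (amplitude ∝ 1/r). The anchor value of −log10(A0) at 10 km and the synthetic amplitudes are assumptions for illustration only.

```python
import numpy as np

def ml(amp_wa_mm, dist_km, slope=1.0, anchor_km=10.0, anchor_corr=3.0):
    """ML = log10(A_WA) - log10(A0), with -log10(A0) extended below anchor_km.
    anchor_corr is an assumed value of -log10(A0) at 10 km."""
    corr = anchor_corr + slope * (np.log10(dist_km) - np.log10(anchor_km))
    return np.log10(amp_wa_mm) + corr

# One event seen by three SNS at 30, 100, and 300 m; synthetic WA amplitudes
# decaying as 1/r, so the unit slope brings all three to the same ML.
d_km = np.array([0.030, 0.100, 0.300])
amps = 2.0e-3 * 0.030 / d_km                 # mm, A(30 m) = 2e-3 mm assumed
mags = ml(amps, d_km)
```

With any other slope the three stations would disagree, which is exactly the effect visible in Fig. 9.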

Another difficulty in magnitude determination comes from the low SNR of nanoseismic events, combined with the mandatory Wood-Anderson restitution. WA as a transformation to ground displacement performs similarly to a band pass: down from Nyquist, it increasingly enhances the low frequencies, but below 0.8 s (the eigenperiod of the instrument) the signal amplitude decreases again. The 0.8 s focus is fine for local earthquakes with the related energy and SNR, but it fails for nanoseismic events. Either one stays with 0.8 s and determines noise amplitudes irrelevant to earthquake size, or one applies an additional high pass of, e.g., 10 Hz to raise event amplitudes above the noise. This is sketched in Fig. 11, where the cross marks ML. The open circle indicates a magnitude ML' determined from a 1/T-corrected amplitude, with T as the period [sec] of picking. This scheme mimics the 1/T correction for mb (e.g., Lay & Wallace, 1995). For most nanoseismic signals, the correction would raise ML by between half and one magnitude unit, but it is not at all clear whether this procedure should be applied. In our reports, we stick with the uncorrected ML but keep in mind that it was determined off the 0.8 s focus.
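The tentative correction itself is a one-liner: the picked amplitude is scaled by T_ref/T before entering the ML formula, with T_ref = 0.8 s as the WA eigenperiod. Whether it should be applied at all is, as stated, left open.

```python
import numpy as np

def ml_t_corrected(ml_uncorrected, period_s, t_ref=0.8):
    """1/T-corrected magnitude ML', mimicking the A/T measurement of mb."""
    return ml_uncorrected + np.log10(t_ref / period_s)

# Amplitude picked after a 10 Hz high pass, i.e. near T = 0.1 s:
dml = ml_t_corrected(0.0, 0.1)    # correction relative to the 0.8 s focus
```

For T = 0.1 s the correction is log10(8) ≈ 0.9, i.e., within the half-to-one magnitude-unit range quoted above.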

FIRST APPLICATIONS

Nanoseismic monitoring pushes the threshold of event processing to the limits of resolution, but it demands precious human resources for minor results. One may wonder what justifies this uncomfortable 'digging in the dust' at all. In principle, there are three situations that make nanoseismic monitoring an attractive new option: (I) there are no larger signals available, (II) larger signals do appear but their recording would demand extended observation times, and (III) larger signals do appear in due time but their recording demands fast response.

Typical examples of the first situation are the on-site inspections (OSI) of the CTBTO, the sinkhole monitoring, and some applications of induced seismicity. In fact, the OSI demands triggered the initial development of the SNS and HypoLine modules; the challenge is in detecting and locating ML –2.0 aftershocks of underground nuclear explosions within a search area of 1000 km². The SNS will be tested extensively during the next Directed Exercise of the CTBTO in autumn 2004. The results of sinkhole monitoring are described in part II (Wust-Bloch & Joswig, 2004a); it was the first full-scale application of nanoseismic monitoring.

The standard scenario for the second situation is active fault mapping. The principal intention is resolving recent faults; it motivated most microseismic studies and justified the installation of most local networks. Improved resolution could be achieved by temporarily denser networks, but logistics and funding constraints set severe limits to their operation. An escape would be possible if (i) we could lower the processing threshold to weaker events, and (ii) more weak events do happen and make up for the reduced observation time. To test these hypotheses, an initial field survey was conducted near Murcia, southern Spain (Häge & Joswig, 2004). The SNS was installed just on top of the Fortuna Basin fault system. The achieved processing threshold was ML –1 at 10 km distance (equivalent to ML –2 at 1.5 km and ML –3 at 100 m); Figs. 1, 4-8, and 11 document the results for one example with ML –2.1 at 1.4 km (slant) distance. 24 events could be located during two overnight measuring campaigns of 8 h each. Their distribution in the magnitude-frequency plot of Fig. 12 fits surprisingly well the extrapolation of regional catalog data, although we compare 16 hours with 20 years of data and extend the Gutenberg-Richter relation down by four magnitudes. Besides fault mapping, our investigations contribute some new insights to the question of whether there is a lower limit in the Gutenberg-Richter relation that is not induced by catalog incompleteness.
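An order-of-magnitude check of this comparison, with an assumed Gutenberg-Richter relation log10 N = a − bM (the a and b values below are illustrative, not the Murcia catalog): scaling a 20-year catalog rate down to a 16-hour campaign at a four-magnitudes-lower threshold still predicts tens of events.

```python
def expected_events(a, b, m_min, duration_h, ref_duration_h):
    """Expected events above m_min, rate calibrated on the reference span."""
    n_ref = 10 ** (a - b * m_min)        # cumulative count in reference span
    return n_ref * duration_h / ref_duration_h

hours_20yr = 20 * 365.25 * 24
# Assume the regional catalogue implies a = 3.0 (1000 events above M 0 in
# 20 years) and b = 1.0; threshold lowered to ML -3.0 for 16 hours:
n16 = expected_events(3.0, 1.0, -3.0, 16.0, hours_20yr)
```

The result, some 90 events, lies within an order of magnitude of the 24 located events; a and b are only assumed here, so this is a plausibility check rather than a fit.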

The third situation is not necessarily linked to nanoseismic events. Instead, we use the flexibility of the SNS for ultra-fast field response: it may take just hours to reach the field, and another half hour to have the system fully operational. Wust-Bloch & Joswig (2004b) describe an application where aftershock recording started just five hours after the 11 Feb 2004 (ML 5.0) northern Dead Sea earthquake. One may use this information to deduce details of the source process, or to guide the installation of further stations for aftershock surveying.

CONCLUSIONS

Nanoseismic monitoring is introduced as an extension of microseismic network surveys. For the latter, processing is rather straightforward: it goes from phase picking at many stations to inversion and to residual analysis. Only occasionally are results updated in a feedback manner, running the inversion loop again. For nanoseismic monitoring, however, the processing is a constant loop of trial and error based on sparse information without much of a reference, e.g., an a priori velocity model. To handle this challenge, we introduced concepts to exploit both network and array analysis, to display signal energy in time-frequency and in slowness space, to trace the influence of any single piece of information on the location result, to assess error bars by jackknifing, and to slide through hundreds of options in parameter space in a virtual-reality manner. Nonetheless, the results will not present a 'final truth' but the most plausible solution, suited merely for compilation and statistical interpretation.

There is still much room for improvement. In our world of nano-events, the eventual capture of an ML 0.0 earthquake is a clear source of reference that could be exploited for ML -2.0 events by cross correlation (e.g., Joswig & Schulte-Theis, 1993).
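A minimal sketch of such a master-event search by normalized cross correlation might look as follows; this is a generic implementation for illustration, not the dynamic waveform matching of Joswig & Schulte-Theis (1993):

```python
import numpy as np

def xcorr_scan(master, trace):
    """Slide a master-event waveform along a continuous trace and return
    the best normalized cross-correlation coefficient and its lag."""
    n = len(master)
    m = master - master.mean()
    best, best_lag = -1.0, 0
    for k in range(len(trace) - n + 1):
        w = trace[k:k + n] - trace[k:k + n].mean()
        denom = np.linalg.norm(m) * np.linalg.norm(w)
        c = float(np.dot(m, w) / denom) if denom > 0 else 0.0
        if c > best:
            best, best_lag = c, k
    return best, best_lag

# Synthetic check: the master waveform buried in noise at sample 300.
rng = np.random.default_rng(0)
master = np.sin(2.0 * np.pi * 8.0 * np.arange(100) / 200.0)
trace = 0.1 * rng.standard_normal(1000)
trace[300:400] += master
coeff, lag = xcorr_scan(master, trace)
```

The normalized coefficient makes the detector insensitive to absolute amplitude, so a strong reference event can flag much weaker copies of the same waveform.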

On the one hand, methods in seismology start to approach seismic prospecting ideas due to the increased number of stations (e.g., TLE 2003). On the other hand, we introduced field techniques for seismology that resemble short-term seismic refraction surveys, thanks to the increased number of events. This could open the path to a vast range of applications in small-scale seismic hazard assessment, structural or engineering surveying, forensic seismology, and emergency support.

ACKNOWLEDGEMENTS

This work was encouraged and partly supported by Dr. Gideon Leonard from the Israel Atomic Energy Commission, while Dr. Gilles H. Wust-Bloch pioneered many applications, tested the software, and helped to improve the manuscript. The term nanoseismology arose sometime in our frequent discussions and was coined by Dr. Gideon Leonard. Martin Häge and Georg Auernhammer assisted in the field test in Spain.

REFERENCES

Bakun, W. H. & W. B. Joyner (1984). The ML scale in Central California, Bull. Seism. Soc. Am. 74, 1827-1843.

Bullen, K. E. & B. A. Bolt (1985). An introduction to the theory of seismology, Cambridge Univ. Press, Cambridge, UK.

Capon, J. (1969). High-resolution frequency-wavenumber spectrum analysis, Proc. IEEE 57, 1408-1418.

Di Grazia, G., H. Langer, A. Ursino, L. Scarfi & S. Gresta (2001). On the estimate of earthquake magnitude at a local seismic network, Ann. Geophys. 44, 579-591.

Häge, M. & M. Joswig (2004). Mapping active faults in the Murcia region, Spain, by nanoseismic monitoring, Poster ESC 2004, Potsdam.

Harjes, H.-P. & M. Henger (1973). Array-Seismologie, Z. Geophysik 39, 865-905.


Joswig, M. (1987). Methoden zur automatischen Erfassung und Auswertung von Erdbeben in seismischen Netzen und ihre Realisierung beim Aufbau des lokalen 'BOCHUM UNIVERSITY GERMANY'-Netzes, Dissertation (PhD thesis), Wissenschaftl. Veröffentl., no. A23, 1-124, Inst. f. Geophysik, Ruhr-Univ., Bochum.

Joswig, M. (1990). Pattern recognition for earthquake detection, Bull. Seism. Soc. Am. 80, 170-186.

Joswig, M. (1995). Automated classification of local earthquake data in the BUG small array, Geophys. J. Int. 120, 262-286.

Joswig, M. (1996). Pattern recognition techniques in seismic signal processing, in 2nd workshop on Application of Artificial Intelligence Techniques in Seismology and Engineering Seismology, eds. M. Garcia-Fernandez & G. Zonno, Cahiers du Centre Europeen de Geodynamique et de Seismologie, 12, 37-56, Luxembourg.

Joswig, M. (1999). Automated processing of seismograms by SparseNet, Seism. Res. Letters 70, 705-711.

Joswig, M. (2000). Automated event location by seismic arrays and recent methods for enhancement, in Advances in seismic event location, eds. C. H. Thurber & N. Rabinowitz, Kluwer, Dordrecht, 205-230.

Joswig, M. (2003). 1st Stuttgart summer school on geoscience data mining, http://www.geophys.uni-stuttgart.de/~joswig/sss/sosomenu.html.

Joswig, M. & H. Schulte-Theis (1993). Master-event correlations of weak local earthquakes by dynamic waveform matching, Geophys. J. Int. 113, 562-574.

Joswig, M. & G. H. Wust-Bloch (2002). Monitoring material failure in sinkholes in the Dead Sea area by four-partite small array (Seismic Navigation System - SNS), Israel Geological Society, Annual Meeting, Maagen, Israel, p. 54.

Joswig, M., H. Wust-Bloch & G. Leonard (2002). Nanoseismology: Development of an integrated technique for the monitoring of nanoseismic events triggered by natural subsurface material failure, Israel Atomic Energy Commission, Report # 2791458, 26 pp.

Klumpen, E. & M. Joswig (1993). Automated reevaluation of local earthquake data by application of generic polarization patterns for P- and S-onsets, Computers & Geosciences 19, 223-231.

Lahr, J. C. (1989). Hypoellipse/version 2.0: a computer program for determining local earthquake hypocentral parameters, magnitude, and first motion patterns, U.S. Geol. Surv. Open File Rep. 89/116, 81 pp.

Lay, T. & T. C. Wallace (1995). Modern global seismology, Academic Press, San Diego, CA.

Lee, W. H. K. & S. W. Stewart (1981). Principles and applications of microearthquake networks, Academic Press, New York.

Lin, K.-W. & A. R. Sanford (2001). Improving regional earthquake locations using a modified G matrix and fuzzy logic, Bull. Seism. Soc. Am. 91, 82-93.

Pujol, J. (2004). Earthquake location tutorial: graphical approach and approximate epicentral location techniques, Seism. Res. Letters 75, 63-74.

Richter, C. F. (1958). Elementary seismology, Freeman, San Francisco, CA.


TLE (2003). Special section: Solid-earth seismology: Initiatives from IRIS, The Leading Edge 22, no. 3, 218-271.

Wust-Bloch, G. H. & M. Joswig (2003). Nanoseismic monitoring and analysis of extremely low-energy signals associated with subsurface material failures in unconsolidated layered media, ESG-EGU, Annual Meeting, Nice, France, Contribution #EAE03-A-09794, Geophysical Research Abstracts, Vol. 5, 09794, 2003.

Wust-Bloch, G. H. & M. Joswig (2004a). Pre-collapse identification of sinkholes in unconsolidated media in the Dead Sea area by nanoseismic monitoring, Geophys. J. Int. (in review).

Wust-Bloch, G. H. & M. Joswig (2004b). Nanoseismic monitoring of aftershocks from the 11 Feb 2004 (ML 5.0) northern Dead Sea earthquake, abstract ESC, Potsdam.


Fig. 1

Screen layout of HypoLine showing a candidate event at the threshold of processing capabilities. The seismograms were acquired by the four SNS stations sketched in the zoom map.

[Panel labels: seismogram window of 40 s; seismogram zoom window of 14 s; array; 3-comp; map; zoom map; depth profile; SNS]


[Figure annotations: scaled, filtered seismogram; power spectral density matrix with noise mean; sonogram, i.e., detectable signal energy, with noise variance]

Fig. 2

Seismogram with related power spectral density matrix and sonogram. The psd matrix is obtained by sliding FFT, and binned logarithmically in frequency and amplitude. The sonogram adds prewhitening and noise muting, and clearly enhances the display of short-term signal energy.
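The processing chain of Fig. 2 can be sketched as follows. This is a simplified stand-in for the HypoLine sonogram; window lengths, band count, and the pre-event noise window are illustrative assumptions, not the original parameters:

```python
import numpy as np

def sonogram(x, nfft=128, hop=32, nbands=10, noise_frames=20):
    """Sliding-FFT power, binned into logarithmic frequency bands,
    prewhitened by per-band noise statistics, and noise-muted."""
    win = np.hanning(nfft)
    frames = np.array([x[i:i + nfft] * win
                       for i in range(0, len(x) - nfft + 1, hop)])
    psd = np.abs(np.fft.rfft(frames, axis=1)) ** 2        # time x frequency
    edges = np.geomspace(1, psd.shape[1] - 1, nbands + 1).astype(int)
    bands = np.stack([psd[:, lo:hi + 1].sum(axis=1)       # log-spaced bands
                      for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
    db = 10.0 * np.log10(bands + 1e-20)
    # prewhitening: noise mean/std per band from an assumed pre-event window
    mu = db[:noise_frames].mean(axis=0)
    sd = db[:noise_frames].std(axis=0) + 1e-12
    s = (db - mu) / sd
    return np.where(s > 0.0, s, 0.0)                      # mute sub-noise cells

# Synthetic record: unit noise with a short 20 Hz burst (fs = 200 Hz).
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
x[1000:1200] += 5.0 * np.sin(2.0 * np.pi * 20.0 * np.arange(200) / 200.0)
s = sonogram(x)
```

Normalizing each band by its own noise statistics is what lets short-term signal energy stand out even when broadband SNR is near 0 dB.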


Fig. 3

a) Hyperbola in half-space model. b) Degradation of the hyperbola by a layer model. In this case the source is just below the first layer boundary at 2.1 km depth, with a change of velocity from 3.5 to 5.7 km/s. Exceeding the cross-over distance at just one leg causes a break in slope.



Fig. 4

Processing results for the candidate event of Fig. 1. The sonograms guide the phase picking for the four weak onsets; the jackknifing gives four triple junctions (red dots in the zoom map). For the adjustment of the epicenter, additional information from the tS-tP circle (dotted green circle segment) and the two array beams for the P and S onsets (yellow fans) is considered.



Fig. 5

Event location by jackknifing for a local, six-station network. The large number of hypolines degrades the visibility of maps. Instead, cell-hit counts can be color-coded and displayed in the inset. The red circles mark the automatically determined maximum concentration of hypolines.



Fig. 6

Application of jackknifing to beamforming for the candidate event of Fig. 1. The four Pc and Sc phases determine four tripartite beams for P and S. Their spread is extremely sensitive to misadjustment. The lower two traces display beam overlays according to the selected time differences; the hypoline map indicates accuracy by the width of the yellow beam fans. Note the highly reliable results despite the low SNR on single traces.

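The delay-and-sum principle behind these beams can be sketched generically. Integer-sample delays and the synthetic data below are illustrative assumptions; this is not the HypoLine beamformer, which additionally handles filtering and slowness grids:

```python
import numpy as np

def delay_and_sum(traces, delays):
    """Shift each trace by its integer-sample delay and stack: energy that
    is coherent across the array adds up, incoherent noise averages out."""
    n = min(len(t) - d for t, d in zip(traces, delays))
    return sum(t[d:d + n] for t, d in zip(traces, delays)) / len(traces)

# Synthetic check: one pulse arriving with station-dependent delays in noise.
rng = np.random.default_rng(2)
pulse = np.hanning(50)
delays = [0, 3, 7, 12]                      # hypothetical moveout, in samples
traces = []
for d in delays:
    tr = 0.2 * rng.standard_normal(600)
    tr[100 + d:150 + d] += pulse
    traces.append(tr)
beam = delay_and_sum(traces, delays)        # pulse aligned near sample 125
```

Stacking N traces with the correct delays keeps the signal amplitude while reducing incoherent noise by roughly sqrt(N), which is why beams remain reliable at low single-trace SNR.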


Fig. 7

Beamforming with different filter bands. A 0.8 s window is adjusted at the P onset of the candidate event of Fig. 1. Signal energy dominates above 6 Hz, while below that the beam steers to noise sources, e.g., the microseism from the shorelines in the SE. Spatial aliasing increases with frequency, and muting suppresses most spots above 12 Hz.

[Panels: 2-4 Hz, 4-6 Hz, 6-8 Hz, 8-10 Hz, 10-12 Hz, 12-14 Hz]


Fig. 8

Estimation of optimum depth and half-space vP by parameter sliding. Variation of depth causes opposite changes of hyperbolae and circles; the circles finally vanish if the wave front cannot reach the surface in the given travel time. Variation of vP will increase the spread of the triple junctions if the optimum solution is altered.

[Panel labels: depths 1.0, 1.3, 1.5, 2.0 km; vP 0.5, 0.6, 0.7, 0.8 km/s; optimum solution; increase depth: hyperbolae grow, circles shrink/vanish; increase vP: hyperbolae & circles grow, spread grows]
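The depth behavior in Fig. 8 follows directly from half-space geometry. A sketch with hypothetical velocities (vp = 5.7 km/s, vs = 3.3 km/s, not the values fitted in the example) is:

```python
import math

def ts_tp_circle_radius(dt, vp, vs, depth):
    """Epicentral radius of the tS-tP circle in a homogeneous half space.
    The slant distance is D = dt / (1/vs - 1/vp); on the surface the
    circle has radius sqrt(D^2 - depth^2), vanishing once depth > D."""
    d_slant = dt / (1.0 / vs - 1.0 / vp)
    if depth >= d_slant:
        return None     # wave front cannot reach the surface in time
    return math.sqrt(d_slant ** 2 - depth ** 2)

# tS - tP = 1.0 s, hypothetical half-space velocities:
r_shallow = ts_tp_circle_radius(1.0, 5.7, 3.3, 0.0)   # full slant distance
r_deep = ts_tp_circle_radius(1.0, 5.7, 3.3, 2.0)      # circle shrinks
r_gone = ts_tp_circle_radius(1.0, 5.7, 3.3, 8.0)      # circle vanishes
```

Increasing depth shrinks the circle and finally makes it vanish, which is exactly the behavior exploited when sliding depth in Fig. 8.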


Fig. 9

Magnitude-distance correction for ML by different authors. The solid part of each curve marks the valid distance range, while the dashed part is an extrapolation to smaller distances with the given slope.

[Legend: Joswig (e.g., HypoLine), ~1.6 · log(R); Bakun & Joyner (1984), e.g., PITSA; Lahr (1989), e.g., HYPOELLIPSE; Di Grazia et al. (2001); Bullen & Bolt (1985); Richter (1958); further slopes shown: ~1.0, ~1.0, ~1.27, ~2.56 · log(R)]


Fig. 10

Calibration of the distance correction. One event is recorded at different distances, and the Wood-Anderson amplitudes are plotted. The slope of the distance correction must match all data to define a common magnitude, here ML -1.4 with slope -1.0.

[Axes: maximal amplitude vs. distance; ML]
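The calibration idea of Fig. 10 can be written as a toy formula, ML = log10(A) + slope · log10(R) + offset. The amplitudes below are synthetic with an assumed 1/R decay, and slope and offset are illustrative, not the calibrated HypoLine values:

```python
import math

def ml_local(amp_mm, dist_km, slope=1.0, offset=0.0):
    """Toy local magnitude from a Wood-Anderson amplitude and a
    log-distance correction with adjustable slope (illustrative only)."""
    return math.log10(amp_mm) + slope * math.log10(dist_km) + offset

# One event observed at several distances: if the slope matches the
# amplitude decay, all observations yield the same common magnitude.
dists = [0.5, 1.0, 2.0, 4.0]                 # km (hypothetical)
amps = [0.08 / r for r in dists]             # mm, synthetic 1/R decay
mags = [ml_local(a, r) for a, r in zip(amps, dists)]
```

With slope 1.0 all four observations collapse onto a single value, mirroring how the slope in Fig. 10 is chosen so that one common ML fits all distances.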


Fig. 11

Magnitude determination for the candidate event of Fig. 1. In the standard Wood-Anderson simulation in the red box, the noise dominates and would determine the maximum amplitude. After high-pass filtering, the signal amplitudes increase and yield a result of ML -2.1 at 1.4 km (slant) distance for one event of the data set in Fig. 12.

[Panels: WA simulation; WA with 10 Hz HP]


Fig. 12

Active fault mapping by nanoseismic monitoring. Normalized to the number of events per year, the magnitude-frequency distribution of 16 h of SNS field data matches the extrapolation from 20 years of regional monitoring very well. The results indicate that there is no cut-off in the Gutenberg-Richter relation before ML -1.0 (from Häge & Joswig, 2004).