
Flight Recorder Localization Following at-Sea Plane Crashes

André Filipe Pereira Rocha

Dissertação para a obtenção de Grau de Mestre em

Engenharia Aeroespacial

Júri

Presidente: Prof. Doutor João Manuel Lage de Miranda Lemos
Orientador: Prof. Doutor João Pedro Castilho Pereira Santos Gomes
Vogal: Prof. Doutor José Manuel Bioucas Dias

Outubro 2011


Acknowledgments

I am very thankful to Professor João Pedro Gomes, who guided me throughout this work and was always available to clarify my doubts. I have to say I learned a lot from him.

I would also like to thank Marco M. Morgado and Pinar Oguz-Ekim for the assistance they gave me. Equally, I would like to express my gratitude to my lab partner Ehsan Zamanizadeh for his help and companionship.

Of course, I am grateful to all my dearest friends, who supported me and gave me strength to carry on with this thesis.

My last and most heartfelt thanks are to my entire family, especially my mother Teresa and my father António, without whom none of this would be possible. Thank you!


Resumo

Em 2009, a queda do Air France 447 sobre o Atlântico motivou a comunidade internacional de segurança na aviação a procurar novas e mais eficientes técnicas de localização de aeronaves após acidentes sobre o mar. As "caixas negras" dos aviões incluem um localizador submarino, denominado underwater locator beacon, que transmite sinais acústicos com o objectivo de ser detectado e localizado por um conjunto de hidrofones correctamente posicionados. O acidente do AF 447 evidenciou que os métodos existentes de localização de aeronaves são susceptíveis a falhas, dado que os beacons transmitiram ininterruptamente até esgotarem as suas baterias, e mesmo assim os destroços não foram encontrados.

Esta tese visa realizar um estudo de simulação, em condições razoavelmente realistas, para quantificar os ganhos de precisão na localização quando algumas das actuais especificações dos beacons são alteradas. Especificamente, pretende-se desenvolver uma ferramenta de simulação para avaliar o impacto de modificar a frequência e a forma de onda do sinal acústico, a potência do beacon, e os algoritmos de localização usados.

O trabalho é tripartido: simular a propagação do sinal no canal submarino recorrendo ao programa Bellhop, estimar as distâncias beacon-hidrofones usando as propriedades de auto-correlação do sinal transmitido como proposto pela teoria da detecção de sinais, e aplicar os algoritmos de localização TOA e TDOA.

Os resultados deste trabalho sugerem diminuir a frequência do sinal acústico e aumentar a potência do beacon. Adicionalmente, confirma-se que o método de localização TOA é o mais indicado para este problema.

Palavras-chave: Underwater Locator Beacon, Canal Acústico Submarino, Teoria da Detecção, Localização da Fonte, Time Of Arrival, Time-Difference Of Arrival


Abstract

The well-known 2009 Air France 447 accident over the North Atlantic has driven the international aviation safety community to pursue new and more efficient techniques of aircraft localization following an at-sea crash. The aircraft's "black boxes" include an underwater locating device, called an underwater locator beacon, which transmits acoustic signals with the purpose of being detected and located by a set of conveniently positioned hydrophones. The AF 447 crash showed that the existing methods of aircraft localization are very susceptible to failure: the beacons transmitted non-stop until their batteries ran out, and even so the wreckage could not be found.

This thesis carries out a simulation study, under reasonably realistic conditions, to quantify the precision gains in aircraft localization when some specifications of the currently used beacons are modified. Specifically, a simulation tool is developed to evaluate the impact of changing the acoustic signal's frequency and waveform, the beacon's power, and the source localization algorithms.

The work is threefold: simulate signal propagation in the underwater channel using the Bellhop computer program, estimate the beacon-hydrophone distances using the transmitted signal's auto-correlation properties as indicated by signal detection theory, and apply the TOA and TDOA source localization algorithms.

The results presented in this work suggest decreasing the acoustic signal's frequency and increasing the beacon's acoustic power. Moreover, it is confirmed that the TOA source localization methodology is very well suited to this problem.

Keywords: Underwater Locator Beacon, Underwater Acoustic Channel, Detection Theory, Source Localization, Time Of Arrival, Time-Difference Of Arrival


Contents

Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

Resumo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv

Nomenclature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx

1 Introduction 1

1.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.2 Underwater Locator Beacon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.2.1 State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.2.2 Alternative Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.3 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1.4 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2 Underwater Acoustic Channel 13

2.1 Sound Propagation in the Ocean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.1.1 Multipath Propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.1.2 Absorption Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.1.2.1 Seawater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.1.2.2 Sea Floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.1.3 Scattering Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

2.1.4 Spreading Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

2.1.5 Ambient Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

2.2 Bellhop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

2.3 Geoacoustic Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3 Delay Estimation 33

3.1 Transmitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.1.1 Sinusoid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3.1.2 Chirp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3.1.3 QAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41


3.2 Receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

3.2.1 Alternative Receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

3.2.2 State-of-the-art Receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4 Source Localization 55

4.1 Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

4.2 TOA Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

4.3 TDOA Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

5 Results 63

5.1 Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

5.2 Range Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

5.3 Source Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

6 Conclusions & Future Work 73

Bibliography 79


List of Tables

1.1 Specifications of the current underwater locator beacons . . . . . . . . . . . . . . . . . . . 7

2.1 Listing of the conventional sound propagation paths in the ocean . . . . . . . . . . . . . . 17

2.2 Bellhop environmental file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2.3 Bellhop bottom bathymetry file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

2.4 DECK 41 database numbers and corresponding sea bottom properties . . . . . . . . . . 32

3.1 Characteristics of each of the possibly transmitted signals: sine, chirp, QAM and state-of-the-art sine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38


List of Figures

1.1 Aircraft flight-data-flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2 De Havilland Comet I 1953 air crash over Calcutta . . . . . . . . . . . . . . . . . . . . . . 2

1.3 Typical FDR and CVR aircraft placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.4 Problem statement illustration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.5 Assembly FDR + ULB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.6 Current ULB search scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

1.7 AF 447 accident: search area, tail recovery and aircraft fragments . . . . . . . . . . . . . 8

1.8 AF 447 accident: sea-bottom-lodged FDRs and debris spreading . . . . . . . . . . . . . . 9

2.1 Schematic describing the communication problem . . . . . . . . . . . . . . . . . . . . . . 13

2.2 Ray paths in an ideal homogeneous environment . . . . . . . . . . . . . . . . . . . . . . . 14

2.3 Ray paths in a real ocean environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.4 Linearized SSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.5 Snell’s law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.6 Draft of the standard ray paths in the ocean . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.7 Aperture angle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.8 Multipath propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.9 Micro and macro-multipath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.10 Delay-spread pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.11 Evolution of sound attenuation in seawater with frequency . . . . . . . . . . . . . . . . . . 23

2.12 Spherical and cylindrical spreading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

2.13 Power spectral density of the ocean noise . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

2.14 GDA software interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.1 Discrete-time system implemented in continuous-time . . . . . . . . . . . . . . . . . . . . 34

3.2 Steps of the underwater range-estimation process . . . . . . . . . . . . . . . . . . . . . . 34

3.3 Transmitter block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.4 Periodic sequence of pulses to be transmitted . . . . . . . . . . . . . . . . . . . . . . . . . 36

3.5 Sinusoidal wave modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

3.6 Frequency spectrum shift resulting from the modulation process . . . . . . . . . . . . . . 37

3.7 Structure of the transmitted signal containing a digital message . . . . . . . . . . . . . . . 39


3.8 Waveform of a discrete-time sinusoid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.9 Waveform of a discrete-time chirp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.10 Exemplification of the digital modulation schemes ASK, PSK and FSK . . . . . . . . . . . 41

3.11 Symbol constellations of QAM-4 and QAM-16 . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.12 Waveform of the baseband QAM-4 signal . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.13 Power spectral density of the baseband QAM-4 signal . . . . . . . . . . . . . . . . . . . . 44

3.14 Waveform of the passband QAM-4 signal . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

3.15 Power spectral density of the passband QAM-4 signal . . . . . . . . . . . . . . . . . . . . 45

3.16 Receiver architecture when the alternative signals (sine, chirp, QAM) are transmitted . . . 46

3.17 Magnitude and phase responses of the alternative receiver’s bandpass filter . . . . . . . . 47

3.18 Demodulator block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.19 Sinusoid auto-correlation function, and sinusoid-noise cross-correlation function . . . . . 49

3.20 Sinusoid, chirp and QAM auto-correlation functions . . . . . . . . . . . . . . . . . . . . . . 49

3.21 Evolution of the correlation function at the matched filter exit . . . . . . . . . . . . . . . . . 51

3.22 Decomposition of the physical array into virtual surface and bottom-reflected images . . . 52

3.23 Receiver architecture when the state-of-the-art signal (sine) is transmitted . . . . . . . . . 52

3.24 Magnitude and phase responses of the state-of-the-art receiver’s bandpass filter . . . . . 53

4.1 Source localization framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

5.1 Localization scenario 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

5.2 Localization scenario 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

5.3 Localization scenario 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

5.4 Earth-centred earth-fixed, ellipsoid and source-centred frames . . . . . . . . . . . . . . 65

5.5 Range estimation error as a function of the ULB-hydrophone distance . . . . . . . . . . . 66

5.6 Range estimation error as a function of the SNR at the receiver . . . . . . . . . . . . . . . 66

5.7 TOA source localization error as a function of the dimension of the receiving convex hull . 68

5.8 TDOA source localization error as a function of the dimension of the receiving convex hull 69

5.9 TDOA source localization error as a function of the dimension of the receiving convex hull (zoomed) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

5.10 TOA source localization error as a function of the transmitter’s source level . . . . . . . . 70

5.11 TOA source localization error as a function of the transmitter’s source level (zoomed) . . . 70

5.12 TDOA source localization error as a function of the transmitter’s source level . . . . . . . 71

5.13 TDOA source localization error as a function of the transmitter’s source level (zoomed) . . 71

6.1 Block diagram of the inverse problem methodology for source localization . . . . . . . . . 75


Nomenclature

Abbreviations and Acronyms

AOA Angle-Of-Arrival.

ASASI The Australian Society of Air Safety Investigators.

ASK Amplitude Shift-Keying.

AUV Autonomous Underwater Vehicle.

BEA Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation civile.

CVR Cockpit Voice Recorder.

ECEF Earth-Centred Earth-Fixed.

ELT Emergency Locator Transmitter.

EUROCAE European Organization for Civil Aviation Equipment.

FD Finite-Difference.

FDR Flight Data Recorder.

FE Finite Element.

FFP Fast Field Program.

FIR Finite Impulse Response.

FSK Frequency Shift-Keying.

GDA GEBCO Digital Atlas.

GEBCO General Bathymetric Chart of the Oceans.

GPS Global Positioning System.

GTRS Generalized Trust Region Subproblems.

ICAO International Civil Aviation Organization.

IIR Infinite Impulse Response.


KKT Karush-Kuhn-Tucker.

LKP Last Known Position.

LLA Latitude Longitude Altitude.

LS Least Squares.

NetCDF Network Common Data Form.

NM Normal Mode.

NRZ Non-Return-to-Zero.

OOK On-Off Keying.

PE Parabolic Equation.

PSD Power Spectral Density.

PSK Phase Shift-Keying.

QAM Quadrature Amplitude Modulation.

R-LS Range-based Least Squares.

RBR Refracted-Bottom-Reflected.

RMS Root Mean Square.

ROV Remotely Operated Vehicle.

RR Refracted-Refracted.

RRC Reference-Receiver-Centred.

RSR Refracted-Surface-Reflected.

RSS Received Signal Strength.

RTT Round Trip Time.

RZ Return-to-Zero.

SA State of the Art.

SAR Search And Rescue.

SNR Signal-to-Noise Ratio.

SOFAR Sound Fixing And Ranging.

SR-LS Squared-Range-based Least Squares.

SRD-LS Squared-Range-Difference-based Least Squares.


SSP Sound Speed Profile.

TDOA Time-Difference Of Arrival.

TOA Time Of Arrival.

UAC Underwater Acoustic Channel.

ULB Underwater Locator Beacon.

WGS World Geodetic System.

WOSS World Ocean Simulation System.

Source Localization

x Coordinate vector of the source.

a′i Coordinate vector of the ith receiver in the RRC coordinate frame.

ai Coordinate vector of the ith receiver.

εi Error of the distance measured at the ith receiver.

di Distance-difference to the source at the ith receiver relative to the reference receiver.

k Number of receiver arrays.

m Number of receivers.

ri Distance from the source to the ith receiver.

x, y, z Components of the coordinate vector of the source.

xi, yi, zi Components of the coordinate vector of the ith receiver.

Signal Processing

β Chirp frequency variation rate.

δ[n] Discrete-time Dirac delta function.

ω0 Transmitted signal angular frequency.

ωc Carrier angular frequency.

φ Carrier wave phase offset.

τ Pulse length.

ϕ Initial phase of a sinusoid.

A Transmitted signal amplitude.

Bf Bandpass filter bandwidth.


bi Bandpass filter coefficients.

Bs Transmitted signal bandwidth.

f0 Transmitted signal frequency.

fc Carrier frequency.

fs Sampling frequency.

g[n] Discrete-time input delay-spread function.

I[n] QAM in-phase digital symbol stream.

N Bandpass filter order.

Nτ Pulse length in signal samples.

Nl Discrete-time time delay of the lth sound ray at the receiver.

Np Pulse repetition period in signal samples.

Npeakref Sample number at which the correlation peak is detected at the reference receiver.

Npeak Sample number at which the correlation peak is detected.

P Signal Power.

Q[n] QAM quadrature digital symbol stream.

r Transmitter-receiver distance.

rref Transmitter–reference-receiver distance.

s[n] Discrete-time signal leaving the generator.

Tp Pulse repetition period.

tpeakref Time instant at which the correlation peak is detected at the reference receiver.

tpeak Time instant at which the correlation peak is detected.

X(f) Frequency spectrum of a generic signal x.

x[n] Discrete-time transmitted signal.

y[n] Discrete-time received signal.

Pxx(fk) Periodogram spectral estimator of a generic signal x.

xc[n] Discrete-time chirp signal.

xs[n] Discrete-time sinusoid signal.

xqam[n] Discrete-time QAM signal.


ym[n] Discrete-time multipath term of the received signal.

yn[n] Discrete-time ambient noise term of the received signal.

ybb[n] Discrete-time signal at the demodulator output.

ybp[n] Discrete-time signal at the bandpass filter output.

ymf [n] Discrete-time signal at the matched filter output.

Underwater Acoustics

αp Compressional sound-wave attenuation.

αs Shear sound-wave attenuation.

αw Seawater sound attenuation.

αThorpe Seawater sound attenuation calculated using Thorpe’s formula.

δ(t) Continuous-time Dirac delta function.

ρ Density.

τl Continuous-time time delay of the lth sound ray at the receiver.

θ Ray angle relative to the horizontal.

al Amplitude of the lth sound ray at the receiver.

c Speed of sound.

cp Compressional sound-wave speed.

cs Shear sound-wave speed.

D Cylindrical spreading depth.

f Sound frequency.

g(t) Continuous-time input delay-spread function.

I Sound wave intensity.

L Number of propagating sound rays.

p Sound pressure.

R Spherical/cylindrical spreading radius.

S Water salinity.

s Shipping activity factor.

SL Source Level.


T Water temperature.

w Wind speed.

x(t) Continuous-time transmitted signal.

y(t) Continuous-time received signal.

z Water column depth.

Ns(f) Shipping noise power.

Nt(f) Turbulence noise power.

Nw(f) Waves noise power.

Nth(f) Thermal noise power.

N(f) Total noise power.

ym(t) Continuous-time multipath term of the received signal.

yn(t) Continuous-time ambient noise term of the received signal.


Chapter 1

Introduction

Contemporary aircraft are required to be equipped with a Flight Data Recorder (FDR), commonly referred to as a "black box". This device is designed to keep track of specific aircraft performance parameters over the flight (e.g. pressure altitude, normal acceleration, engine thrust command), and is at the core of accident investigations; the FDR is also valuable for improving air safety, monitoring engine performance and studying material degradation.

In fact, most modern FDRs are capable of tracking more than 1000 parameters of interest. ICAO lists the mandatory flight parameters to be recorded (see [1]). These parameters should provide enough information to accurately determine the airplane's flight path, speed, attitude, engine power, configuration of lift and drag devices, and operation, in accordance with the aircraft flight-data-flow schematic shown in Fig. 1.1.

At the dawn of the jet age, in the early 1950s, a series of well-publicized air disasters threatened the industry. No plausible cause for the crashes could be found, given that no witnesses were present and what was left of the airplane was often so damaged that no conclusions could be drawn from it; sometimes the wreckage could not even be retrieved. The de Havilland Comet I, the wonder aircraft of the time, crashed five times between October 1952 and April 1954. Fig. 1.2 is an extremely graphic illustration of the magnitude of one of these catastrophes. All over the world, people's unease grew stronger with each crash; the January 1954 accident in the Mediterranean was the last straw (see e.g. [2]). At that point, the investigations into each calamity had reached a dead end, as the investigators could not determine its cause. Then Australian fuel chemist David Warren, whose father had died in a 1934 plane crash, came up with the idea of recording the flight crew's conversation, as well as other flight data; thus, in the event of an air disaster, all relevant flight data would be available and would greatly aid the determination of the crash cause (and eventually help prevent similar ones). This milestone is well documented in [3], and marks the introduction of the Cockpit Voice Recorder (CVR) and the FDR in the aviation industry.

Thus, an FDR must be designed to survive a plane crash; that is, the FDR has to remain intact after the crash. Hence, EUROCAE specifies a series of tests that this device has to endure in order to be certified. These tests cover crash impact, penetration resistance, static crush, high- and low-temperature fires, deep-sea pressure, sea water immersion and fluid immersion. In addition, to minimize the probability of damage to the recording, the FDR should be placed as far aft as possible, according to [1]. This means that the FDR is usually mounted in the tail section, as can be seen in Fig. 1.3, so that the front of the aircraft can act as a buffer and reduce the impact on the recorder.

Figure 1.1: Aircraft flight-data-flow [4].

Figure 1.2: In May 1953, the Comet I crash over Calcutta caused 43 casualties. Picture taken from [2].


Figure 1.3: Typical FDR and CVR aircraft placement.

The next issue to be handled is finding the "black box" after the accident. Aircraft tracking may be achieved through radar and GPS data, thus providing a fairly accurate estimate of the airplane's position just before the crash; if GPS and radar data are not available, the aircraft's position prior to the accident may be derived from its flight plan. In the event of a plane crash, the last information available on the aircraft's position is the best first estimate of the wreckage's location. Whenever those positions match, i.e., whenever the plane's last known position (LKP) coincides with the location of its remains, the FDR should be relatively easy to find, since it is required to be painted in a distinctive orange or yellow color and to carry reflective material in order to facilitate its location.

Nonetheless, in many cases the aircraft still travels a long distance between its last known position and the point of impact. Consequently, its last known position is just a coarse estimate of the wreckage's position, and of the FDR's location. In these situations the investigators do not know the debris' whereabouts, and other search strategies must be employed.

An Emergency Locator Transmitter (ELT) is a tool for locating crashed aircraft. It became mandatory in all U.S. aircraft after the 1972 disaster in southeast Alaska that claimed the lives of two members of Congress (see e.g. [5]); a 39-day search effort could not locate the wreckage. This device's operating principle is very straightforward: the beacon (ELT) is automatically triggered following the crash, through an inertia switch (which senses impact forces) or a water-activated switch (which senses water contact); then, the beacon starts transmitting an electromagnetic signal that is detected by a set of satellites which, in turn, retransmit the signal to ground stations where search and rescue operations are organized. The aircraft's position is determined by trilateration. There are three types of distress signals: 121.5 MHz, 243 MHz and 406 MHz. The first two are analog signals containing no information (tones); the last is a digital message carrying information such as the aircraft's country code and its current GPS position, which further aids the rescue effort.
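The trilateration step mentioned above can be sketched as a small least-squares problem: given the positions of the receiving stations and the ranges inferred from the signal, solve for the transmitter's coordinates. The following is a minimal 2-D illustration with hypothetical coordinates (the names and numbers below are illustrative, not from this thesis), linearizing the range equations against a reference anchor:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2-D position from anchor coordinates and measured ranges.

    Subtracting the range equation of the last anchor from the others
    cancels the quadratic term ||x||^2, leaving a linear system A x = b.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    ref = anchors[-1]
    A = 2.0 * (anchors[:-1] - ref)
    b = (np.sum(anchors[:-1] ** 2, axis=1) - np.sum(ref ** 2)
         + ranges[-1] ** 2 - ranges[:-1] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical station positions (km) and noiseless ranges to (3, 4)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))  # approximately [3. 4.]
```

A similar linearization underlies squared-range least-squares formulations of the range-based localization problem; with noisy ranges and more anchors, the same overdetermined system is solved in the least-squares sense.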


Nevertheless, there are accidents in which the aircraft is flying over a sea or an ocean, and the debris falls into the water. As long as the debris merely floats, the conventional ELT is able to provide the wreckage's position. However, if the aircraft sinks and becomes submerged, the ELT loses its worth because electromagnetic waves are heavily attenuated underwater, making it impossible for them to reach the sea surface. In these cases, a special locating device needs to be used: the Underwater Locator Beacon (ULB).

1.1 Problem Statement

Consider a plane flying above an ocean. To simplify this explanation assume that the aircraft is flying in

the North Atlantic region. Somewhere in that area the airplane crashes, sinks, and becomes submerged.

GPS, radar or flight plan data provide a coarse estimate of the aircraft's position. Search and Rescue

(SAR) teams get dispatched to the wreckage’s estimated location (white circle in Fig. 1.4). However,

GPS and radar information does not include the depth at which the wreckage is lodged; this fact makes

the search even more difficult, especially in deep seas. Consequently, in order to refine and narrow the

search, another source of information about the aircraft’s position is needed. Usually an ELT would be

able to assist in the search efforts; nevertheless, if the airplane sank, chances are that the FDR and its

respective locator became submerged as well. Therefore, the conventional ELT is unable to provide any

information to the search teams, because electromagnetic waves are massively attenuated underwater.

Thus, in the case of a plane crash at sea, the underwater locator beacon is the device suited for aircraft

and FDR localization.

Figure 1.4: Problem statement illustration. The circumferences represent the evolution of the search radius.


In Fig. 1.4, there are three different search radii corresponding to three different sources of information on the crashed aircraft's position. The largest circumference denotes the search area provided by the last

GPS/radar/flight plan position of the plane. The intermediate region, bounded by the red circumfer-

ence, stands for the enhanced estimate of the wreckage’s whereabouts provided by the current ULB

technology available on the market. This work's goal is to develop a refined localization algorithm able to supply a search area such as the one defined by the green circumference.

1.2 Underwater Locator Beacon

The ULB is a device that emits acoustic signals, because sound waves are much less attenuated in (conductive) seawater than electromagnetic ones. Indeed, acoustic waves can travel several kilometers underwater,

and actually reach the sea surface. SAR ships are equipped with acoustic receivers and spread out around the aircraft's expected position. If one of the ships picks up the beacon's signal, the transmitter's (and hence the ULB's, FDR's, and aircraft's) position can be established through localization algorithms. For historical

reasons, the beacon, displayed in Fig. 1.5, is often referred to as a pinger and the sound waves as

pings.

Figure 1.5: Assembly FDR + ULB. The underwater locator beacon is the small cylindrical tube attached to the flight data recorder.

1.2.1 State of the Art

The contemporary ULBs found on most aircraft are supplied by two companies, Teledyne Benthos and

Dukane Seacom. The cutting-edge aircraft beacons are the Benthos-manufactured ELP-362A/TDD/TDM, and its competing counterpart, the DK140, produced by Dukane. Both function on a decades-old

operating principle, described in Fig. 1.6. While the ULB is emitting the distress signal, the search

teams are spread around the first estimate of the aircraft’s position, as depicted by the white circle in

Fig. 1.4. The SAR teams are equipped with directional hydrophones, which are devices sensitive to

underwater sound, capable of receiving and measuring it. Moreover, the directional feature means that the hydrophone only detects sound coming from a particular direction, which increases the device's sensitivity and precision. The hydrophone is thereby able to provide the distress signal's azimuth and strength,


corresponding to the direction and distance at which the wreckage is lodged; a weaker signal means that the aircraft is farther away, whereas a stronger signal implies that it is closer. Along with the directional hydrophone, the search gear may also include an omnidirectional receiver to confirm the transmitter's heading, by verifying that the signal's strength is greatest in the correct direction. This localization method thus combines a Received Signal Strength (RSS) algorithm with an Angle-of-Arrival (AOA) one.

Figure 1.6: Current ULB search scheme. Whereas the ULBs are omnidirectional devices, the hydrophones have a typical directivity of 30° between 3 dB limits.

Table 1.1 compiles the most significant specifications of the Dukane and Teledyne beacons. Even

though the signal’s frequency varies within a 2 kHz frequency band, it should be emphasized that these

ULBs emit signals containing no information, called tones or tonals. The frequency fluctuation is due to

the lack of precision in the transmitting devices (ULBs).
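As a rough illustration of such a tonal transmission, the following Python sketch builds a sampled ping train matching the Table 1.1 figures (37.5 kHz carrier, 10 ms pulse, one pulse per second); the sample rate is an arbitrary choice for this sketch, needed only to exceed twice the carrier frequency.

```python
import numpy as np

# Sampled ULB ping train: a 37.5 kHz tone burst of 10 ms, repeated once
# per second. The sample rate is an assumption made for illustration.
fs = 200_000                                        # Hz, sample rate
f0 = 37_500.0                                       # Hz, nominal ULB frequency
pulse = np.sin(2 * np.pi * f0 * np.arange(int(0.010 * fs)) / fs)

period = np.zeros(fs)                               # one 1 s repetition period
period[:pulse.size] = pulse                         # ping, then silence
print(pulse.size, period.size)                      # 2000 200000
```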


                                     Teledyne Benthos           Dukane Seacom
                                     ELP-362A/TDD/TDM           DK140

Length (cm)                          10.16                      7.54
Diameter (cm)                        3.30                       3.30
Weight (g)                           190                        139
Operating frequency (kHz)            37.5 (±1)                  37.5 (±1)
Acoustic output (dB re 1 µPa @ 1 m)  160.5                      160.5
Pulse length (ms)                    10                         10
Pulse repetition rate (pulses/s)     1                          0.9
Battery stand-by life                —                          6 years
Operating life                       > 30 days                  30 days
Maximum operating depth (m)          6096                       6096
Activation                           Water activated, with      Water activated
                                     activation delay options
                                     of 1, 2, 3, 4, 16 and 32
                                     days or 5, 10, 20 and
                                     40 minutes

Table 1.1: Specifications of the current underwater locator beacons, taken from [6] and [7].

1.2.2 Alternative Solutions

On 1 June 2009, Air France flight 447 made headlines worldwide. An Airbus A330-200, carrying 216

passengers and 12 crew members, disappeared over the South Atlantic during a night flight from Rio de

Janeiro, Brazil, to Paris, France. The accident is estimated to have occurred around 2h15, and between

2h10 and 2h15 the aircraft reported 24 technical problems, such as autopilot unavailability and electrical

failures (see [8]). Subsequently, Brazilian and French authorities, aided by the U.S. Navy, mounted a committed search effort to recover the ill-fated aircraft and its victims. Fig. 1.7 presents a layout of

prominent images related to this accident.

The initial search region was outlined based on the airplane’s route and last known position. This

corresponded to more than 17000 km2, a search radius of 40 NM (approximately 74 km). The initial rescue campaigns paid off: 16 days after the crash, on 17 June, 50 bodies and more than 1000 floating

pieces of the aircraft had been found. However, the search for the two FDRs proved unsuccessful. Be-

tween 10 June and 10 July, French and American search teams acoustically explored the area looking

for the ULBs, but could not find them. It is important to remember that ULBs have a 30-day operating life; thus, by 10 July the search was futile, as the ULB batteries had certainly run out. The search endeavor just described constituted phase I of the exploration. Phase II was unsuccessfully carried out between 27 July and 17 August 2009 with the aid of side-looking sonar and a remotely operated vehicle (ROV). Phase III, performed from 2 April to 24 May 2010, also employed side-scan sonar and ROVs, with the additional help of AUVs, though the remaining wreckage was not


Figure 1.7: Pictures of the AF 447 accident. The upper picture illustrates the search area set by the investigating authorities, with a search radius of 40 NM. The bottom images show the recovery of the tail section, on the left, as well as the countless aircraft fragments retrieved from the disaster site.

located. Still, notice that the search area was reduced from the coarse 17000 km2 to 2000 km2. Finally,

in 2011, phases IV and V were able to provide the wreckage's position. The two FDRs were found one day apart, the first being located on 1 May 2011. BEA, the French entity responsible for the investigation, reported that the devices were in good condition and the ULBs were present. The debris found in these search phases was located at depths ranging from 3800 m to 4000 m. Fig. 1.8 presents a series of pictures

taken from Ref. [9] evidencing the wreckage’s physical spreading as well as the state of deterioration of

the FDRs.

Figure 1.8: Pictures taken from [9]. The upper images show the submerged FDRs lodged at the sea floor. The bottom illustration depicts the spreading of the debris around the crash zone.

Once again, it should be emphasized that the ULBs did not accomplish their task; consequently, 35 M€ were spent on the five phases of the investigation. It was only thanks to the Remora 6000 ROV that most of the wreckage was discovered. This was one of the issues that led BEA to create an international working group aiming at the introduction of new technologies to safeguard flight data and/or

to facilitate the localization and recovery of on-board recorders. The report presented in [10] suggests

several measures that could be taken to improve FDR localization:

• Regular transmission of basic aircraft parameters to a ground station;

• Triggered transmission of flight data to a ground station when an upcoming catastrophic event is

detected;

• Deployable ELTs with GPS position broadcasting;

• Increased autonomy of ULBs – 90 days instead of 30 days;


• Use of a lower frequency for ULBs;

• ULBs transmitting only when interrogated, in order to improve their autonomy.

The first three procedures are meant to improve the last known position (LKP) estimate, whereas the fourth will enable

longer periods of searching activity, making it more likely to recover the ULBs. The two remaining

proposals will be crucial to this work.

1.3 Objectives

In this thesis, a simulation tool is to be designed to:

1. Allow the determination of the sound waves’ travel time. This means that it is presumed that

the transmitter (ULB) is operating as a transponder that can be interrogated, as recommended in

[10, 11, 12]. Assuming a reasonable sound speed, it is easy to obtain the transmitter-receiver distance.

The transmitter’s position is then calculated using Time Of Arrival (TOA) or Time-Difference Of

Arrival (TDOA) localization algorithms;

2. Evaluate how a frequency decrease influences both the maximum operating depth and range of

the ULB. Indeed, underwater attenuation of sound waves increases with increasing frequency.

This also impacts the Signal-to-Noise Ratio (SNR) at the receiver;

3. Compare the use of tonals with signals having richer frequency content and better auto-correlation

properties, which might considerably improve the receiver effectiveness, particularly in rejecting

ambient noise. Further, the incorporation of information in the waveforms has the potential to greatly expedite the search process, as these digital signals may convey valuable data, such as the ULB depth or the FDR records.
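To make item 1 concrete, a transponder-style interrogation converts a round-trip travel time into a range. The following is a minimal Python sketch, assuming a constant nominal sound speed and a zero transponder turnaround delay (both assumptions made for illustration only):

```python
# Transponder-style ranging: distance from a two-way travel time.
SOUND_SPEED = 1500.0  # m/s, a reasonable nominal value for seawater

def toa_range(round_trip_time_s, turnaround_s=0.0):
    """One-way transmitter-receiver distance (m) from a two-way travel time,
    after subtracting the transponder's reply (turnaround) delay."""
    return SOUND_SPEED * (round_trip_time_s - turnaround_s) / 2.0

print(toa_range(4.0))  # a 4 s round trip corresponds to 3000.0 m
```

Ranges obtained this way from several receivers are then fed to the TOA or TDOA localization algorithms of Chapter 4.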

The purpose of this thesis is to develop and use the aforementioned simulation tool to assess the improvement in ULB localization that can be achieved when the proposed changes to the current ULB localization technology are employed. The simulation tool is implemented in Matlab.

1.4 Thesis Outline

There are three principal steps to be taken in order to solve the ULB localization problem.

• Chapter 2 presents the main traits of the unique underwater acoustic environment, allowing for a

better understanding of the distorting agents the ULB transmitted signal is subject to;


• Chapter 3 exposes the signal processing techniques, at both the transmitter and the receiver,

which lead to the determination of the signal’s TOA and TDOA;

• Chapter 4 presents the TOA and TDOA range-based source localization algorithms, which typi-

cally rely on omnidirectional hydrophones. The currently used AOA method is supported by the

directionality of the hydrophones, which can be problematic if they are not correctly positioned.

The upside of using directionality is that it increases the SNR at the receiver; ergo, employing

directional receivers is indeed an interesting possibility. The proposed approach can be used with

both directional and omnidirectional receivers.

Each of these chapters begins with a succinct summary of its contents, as well as a brief explanation bridging it to the previous chapter.

In Chapter 5 the simulation results are presented and discussed, highlighting the contrasting perfor-

mances between the state-of-the-art system and the proposed one. As expected, all of the proposed

alternatives outperform the state-of-the-art concept, increasing the maximum ULB-hydrophone distance

for which signal detection is accomplished, and even providing very good estimates of the source's coordinates. This is mainly due to the signal's frequency decrease, and partly to the merit of the source localization algorithms. Concerning the latter, it is confirmed that the TOA method outperforms its TDOA counterpart in terms of localization accuracy.

Chapter 6 summarizes the core findings of this dissertation, adding some suggestions for further work

on this topic.


Chapter 2

Underwater Acoustic Channel

ULB localization is primarily a long-distance communication problem, diagrammed in Fig. 2.1:

Figure 2.1: Schematic describing the communication problem at hand.

The transmitter block is thoroughly detailed in Chapter 3. Concisely, the underwater locator beacon

is the transmitter of the acoustic signal, which undergoes the effects of the Underwater Acoustic Chan-

nel (UAC) before reaching the receiver. The UAC behaves as a distorting attenuating waveguide for the

sound waves radiated by the transmitter. In addition to the distorting and attenuating effects of the UAC,

there are other sources of sound which act as noisy interference to the ULB signal. The receiver's goal is to compensate for all these disturbances so that the transmitted signal can be recovered from the received signal. The receiver structure is also addressed in Chapter 3.

The emitted signal is shaped according to the waveform of choice (e.g. sine), and is radiated in the

form of sound waves, instead of the conventional electromagnetic waves. The UAC is indeed a quite

hostile wireless communication medium, as wave attenuation and multipath propagation make it difficult to propagate signals over great distances. The acoustic signal can still be detected, though it is heavily distorted. The situation is far worse when electromagnetic waves are employed; electromagnetic signals only propagate a few hundred meters, at best, before they completely fade. On the other hand, despite also enduring considerable attenuation, sound waves can propagate over several kilometers, thus constituting the preferred solution for underwater wireless communications.

In this chapter the underwater acoustic channel is characterized. Section 2.1 begins with a descrip-

tion of the main features of sound propagation in seawater, and its subsections list and discuss the

main sources of signal power loss at the receiver. These include multipath propagation plus absorption,

spreading and scattering losses, as well as ambient noise. Bellhop is a ray-tracing program utilized to

simulate the UAC response and is presented in Section 2.2. The oceanographic databases used in this


work are the subject of Section 2.3.

2.1 Sound Propagation in the Ocean

The ocean is an acoustic waveguide limited above by the sea surface and below by the sea floor.

Sound waves at middle and high frequencies (> 5 kHz) can be fairly modelled as propagating along

paths or rays through the ocean. In a homogeneous environment the ray paths would follow straight

lines radiating from the source and eventually reaching the receiver [13], as illustrated in Fig. 2.2. Still

referring to Fig. 2.2, it should be emphasized that range and depth are the two relevant dimensions in

the study of underwater sound propagation.

Figure 2.2: Ray paths in an ideal homogeneous environment. The ray tracks are straight lines radiating from the omnidirectional source. Range and depth are the appropriate coordinates for underwater sound-propagation analysis.

However, the ocean is not a homogeneous medium. In particular, the Sound Speed Profile (SSP) is

highly variable, both spatially and temporally, inducing a bending of the sound rays towards regions of

low sound speed. This phenomenon is known as refraction. Moreover, rays bounce off the sea surface

and the sea floor, as these act as reflecting surfaces. Fig. 2.3 presents a typical oceanic sound speed

profile on the left, and the consequent ray bending phenomenon on the right, along with sea surface and

sea floor reflection.


Figure 2.3: Typical SSP encountered in the ocean. The sound speed variability causes a bending of the

sound rays, as can be seen on the right side. Sea surface and sea floor reflections are also accounted

for.

Figure 2.4: The segment i of the linearized SSP has a matching sound speed ci.

Sound propagates along paths in accordance with Snell's law

cos(θ) / c = const . (2.1)

Recalling the spatial variability of the sound speed, the water column can be divided into multiple

layers, each one with a specific sound speed c. This corresponds to partitioning the SSP into several linear segments, as shown in Fig. 2.4 for the SSP presented in Fig. 2.3. As θ is the angle between the ray


and the horizontal (grazing angle), it is apparent that sound waves tend to curve towards the zone of

minimum sound speed, as depicted in Fig. 2.5 where c1 < c2.

Figure 2.5: Snell’s law. Medium 1 and Medium 2 have different physical properties, mainly the speed of

sound. The variable θ refers to the angle between propagating sound waves and the horizontal. In this

case θ1 > θ2, because c1 < c2, and as a result there is a curving of the acoustic rays.
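Snell's law in (2.1) can be applied directly: given the grazing angle in one layer, the angle in the next layer follows from the constancy of cos(θ)/c. A small Python sketch with illustrative sound speeds (not values from the thesis):

```python
import math

# Snell's law for grazing angles, Eq. (2.1): cos(theta)/c is constant
# along a ray. The sound speeds below are illustrative values.
def refracted_grazing_angle(theta1_deg, c1, c2):
    """Grazing angle (degrees) in layer 2, given the angle in layer 1."""
    cos_theta2 = (c2 / c1) * math.cos(math.radians(theta1_deg))
    if cos_theta2 > 1.0:
        return None  # the ray turns back: total internal reflection
    return math.degrees(math.acos(cos_theta2))

# A ray entering a faster layer flattens (smaller grazing angle),
# i.e. it bends towards the region of lower sound speed.
theta2 = refracted_grazing_angle(10.0, 1480.0, 1500.0)
print(theta2 is not None and theta2 < 10.0)  # True
```

The `None` branch corresponds to the ray turning point exploited by the ray-tracing model of Section 2.2.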

The spatial and temporal variability of the sound speed in water is an outcome of the inhomogeneity

of the physical properties of the ocean. Specifically, the speed of sound in the ocean is an increasing

function of water temperature, salinity and pressure, the latter being directly related to depth [14]. This

means that sound speed (c) is (mainly) dependent on three independent variables: temperature (T), salinity (S) and depth (z). Equation (2.2) is an empirical formula evidencing this dependence:

c(T, S, z) = 1449.2 + 4.6T − 0.055T² + 0.00029T³ + (1.34 − 0.01T)(S − 35) + 0.016z . (2.2)

In the previous equation, the sound velocity has units of meters per second, temperature is in degrees Celsius, salinity is in parts per thousand and water depth is in meters. Although (2.2) is somewhat inaccurate when propagation distances are to be derived from time-of-flight measurements [14], it is very often used.
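Equation (2.2) translates directly into code; a quick sanity check in Python with representative values (10 °C, salinity 35 ppt, 1000 m depth):

```python
def sound_speed(T, S, z):
    """Empirical sound speed (m/s) of Eq. (2.2): temperature T in degrees
    Celsius, salinity S in parts per thousand, depth z in meters."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.01 * T) * (S - 35.0) + 0.016 * z)

# 10 degrees C, salinity 35 ppt, 1000 m depth
print(round(sound_speed(10.0, 35.0, 1000.0), 2))  # -> 1505.99
```

Note how the 0.016z term alone raises the sound speed by 16 m/s per kilometer of depth, which produces the deep positive gradient visible in Fig. 2.3.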

Sound rays are divided according to the nature of their propagation paths as follows:

• Refracted-Refracted (RR) – rays propagating via refracted paths only;

• Refracted-Surface-Reflected (RSR) – rays bouncing off the sea surface;

• Refracted-Bottom-Reflected (RBR) – rays rebounding off the sea floor (sea bottom);

• Surface-Reflected Bottom-Reflected (SRBR) – rays reflecting off both the sea surface and the sea

floor.

There are five standard types of sound propagation paths which can be observed in the ocean. Fig.

2.6 shows an undetailed sketch of these characteristic acoustic rays, and Table 2.1 adds some of their

general properties.


Figure 2.6: Sketch of the usual ray paths in the ocean.

Ray path designation   Figure letter   Classification   Water depth
Surface duct           A               RSR              Deep water
Deep sound channel     B               RR               Deep water
Convergence zone       C               RSR              Deep water
Shallow water          D               RBR/SRBR         Shallow water
Arctic                 E               RSR              Deep water

Table 2.1: Listing of the conventional sound propagation paths in the ocean. Accepted nomenclature appears in column 1, whilst path classification is given in the third column. The figure letter column relates each path to Fig. 2.6. All paths but the shallow-water one refer to deep-water propagation.

Surface-duct propagation: Surface-duct propagation occurs whenever there is an isothermal mixed

layer just beneath the sea surface, acting as a natural waveguide due to the slight increase of sound

speed with depth, and causing a portion of the radiated acoustic energy to become trapped in the sur-

face duct. This ray trapping phenomenon only occurs if the sound source is placed in the mixed layer,

and only the not too steep rays are trapped; the steepest beams diffuse via deep refracted paths. Fig.

2.7 is enlightening concerning this matter, as it illustrates an omnidirectional source and several rays

diffusing from it with different aperture angles. The aperture angle θ is the angle between the sound ray

and the horizontal at the source.

The surface-duct structure implies the formation of a shadow zone, where sound does not propagate. Additionally, the duct is frequency selective, meaning that low-frequency sound is not trapped by the mixed layer. Moreover, given its RSR classification, all ray paths endure sea-surface-associated losses. Despite all of these adversities, the surface duct is generally an excellent waveguide.


Figure 2.7: Three distinct rays leaving the transmitter at different departure angles; each departure angle θ corresponds to the transmitter aperture angle with respect to that ray. The angles range from −180° to 180°, with the positive direction being that of rays pointing upwards. Ray 3 is clearly the steepest one, whereas ray 2 has a negative aperture angle. The emitter's aperture angle is the highest of the rays' departure angles (in this case, θ3).

Deep-sound-channel propagation: The deep sound channel, also known as SOFAR channel, is a

natural waveguide where sound waves propagate exclusively via refracted paths, not suffering power

losses due to sea surface or sea floor interactions. A necessary condition for the existence of low-loss

refraction paths is that the deep-sound-channel axis, the axis of minimum sound speed, is below the

sea surface. Thanks to the low transmission losses, acoustic signals in the SOFAR channel have been

detected over distances of thousands of kilometers, and even halfway around the world [14].

Convergence-zone propagation: The convergence-zone structure occurs whenever three necessary

premises are fulfilled: the sound speed at the bottom exceeds that at the source, and the source is

near the surface and in a region of decreasing sound speed with depth (see [14]). Then, the sound

transmitted from the near-surface source forms a downward directed beam which, after following a deep

refracted path, reappears near the surface to produce a zone of high sound intensity (convergence)

at a distance of tens of kilometers from the source. This phenomenon is repetitive in range, with the

distance between the high-intensity near-surface regions named convergence-zone range. This kind

of propagation allows for long-range underwater communications employing acoustic signals of high

intensity and low distortion, as there are only sea surface reflections.

Shallow-water propagation: The principal attribute of shallow-water propagation is that the SSP is

downward refracting or nearly constant over depth; this implies that long-range propagation is attained

solely through bottom interacting paths. This type of propagation is a quite complex process, as the

effects of sea surface, sea floor and water volume are all highly relevant; more importantly, these effects

are spatially and temporally varying. Once again, the cutoff frequency phenomenon is present, as the

shallow-water acoustic channel ceases to act as a waveguide for sound frequencies below a certain

minimum, causing energy radiated by the source to propagate directly into the sea floor. Hence, the

shallow water environment is quite adverse to long-distance communications.


Arctic propagation: Propagation in the Arctic Ocean is typified by an upward refracting profile over the

entire water depth causing energy to endure repeated reflections at the underside of the ice. The SSP

can often be modelled as two linear segments, with a steep gradient in the upper part creating a strong

surface duct, and a standard hydrostatic-pressure-gradient below. The rays are partly channelled be-

neath the surface duct, whereas the remaining ones follow deep refracted paths. All rays within a certain

aperture cone (commonly ±20 ◦) propagate to long ranges without sea floor interference; ergo, the pri-

mary power loss mechanism is clearly associated with the sea surface. However, low frequency sound

is not efficiently trapped in the Arctic sound channel, becoming bottom interacting and thus extremely lossy. On the other hand, high-frequency losses arise from reflections off the rough underside of the ice. This gives rise to an intermediate frequency band yielding optimum propagation.

Although this exposition may be alarming as far as effective sound propagation is concerned, it must be said that the conditions for sound propagation in our cases of interest are fairly good. The transmitting

ULB lies at the ocean bottom, while the receiving hydrophones are placed near the sea surface, and the

distances between them are usually small enough to assume a flat sea floor. As there are no physical

obstacles to the diffusing sound, some acoustic rays propagate via RR paths; naturally, other rays prop-

agate through RSR, RBR and SRBR paths, and even rays enduring multiple sea floor and sea surface

reflections are present. Most of these rays are in fact detected at the receivers.

A pertinent final remark is that in typical underwater acoustic applications it is assumed that environ-

mental variables, such as sound speed profile, water depth and bottom composition, are invariant with

range. This is, in fact, an approximation because there is always some degree of lateral variability in the

ocean. There are a few situations in which this range dependence can cause the actual acoustic field pattern to deviate strongly from the simplified range-independent solution.

2.1.1 Multipath Propagation

There is yet another very important peculiarity in underwater acoustic communications: multipath prop-

agation. In reality, a much larger number of ray paths coexists with the ones just listed. Indeed, in a typical underwater acoustic transmission the ray paths leaving the emitter may amount to several hundred, and differ widely among themselves.

Thus, there is usually a direct path, a surface-reflected one, a bottom-reflected ray, a surface-reflected bottom-reflected beam, a bottom-reflected surface-reflected path, and the list could continue indefinitely. This more realistic scenario is outlined in Fig. 2.8. Still, from the previous standard ray paths

discussion, it should be straightforward that the direct path beam may not exist, as ray refraction results

in sea floor or sea surface reflections, especially in shallow waters. In this manner, the majority of the

rays are actually sea-surface or sea-floor (or both) reflected.


Figure 2.8: Multipath propagation. This example is simplified for better understanding, so just four

distinct rays are diffusing from the source. A – direct path; B – surface-reflected path; C – bottom-

reflected path; D – bottom-reflected surface-reflected path.

Because the sound waves propagate along many dissimilar tracks, thereby travelling different dis-

tances at a nearly constant speed, a delay-spread pattern is obtained at each receiver. Consider a

transmission beginning at time instant 0; all rays start propagating at the same time but the direct path is

inevitably the shortest, hence the reception of the direct ray occurs first at time t1. Then, at time instant t2

the surface-reflected beam arrives; later, at time t3, the bottom-reflected wave reaches the receiver; this

sequence goes on until all relevant rays arrive at the receiver. This multipath delay-spread phenomenon

is illustrated in Fig. 2.10. Notice that the less lossy ray paths have the highest signal amplitudes and,

conversely, the more lossy waves present little power. Furthermore, the channel response differs from

receiver to receiver; comparing e.g. receiver 1 with receiver 2, the direct ray is more intense and arrives

earlier in the latter. Multipath delay spreads exceed 60 ms for typical medium-range channels, extending to 80 ms in longer-range transmissions [13].

The multipath feature of underwater acoustic transmissions is both spatially and temporally varying.

Therefore, the ray paths are divided into two categories:

• Macro-multipath propagation – characteristics of the beams depend on slowly varying and quasi-deterministic properties of the ocean. The scale of spatial fluctuations (of sound speed, water density, etc.) is large in comparison to the sound wavelength, and there are no rapid stochastic fluctuations in the environment;

• Micro-multipath propagation – characteristics of the rays depend on rapidly varying stochastic properties of the ocean. Small-scale fluctuations of environmental properties are important in sound diffusion.

For small-amplitude or large-scale environmental fluctuations (macro-multipath), the sound may follow a single perturbed path within a ray tube; this description matches the situation depicted in Fig. 2.8, where all rays follow a single path. As the amplitude of the fluctuations increases and their scale decreases, the single path may split into many micropaths within the ray tube (micro-multipath). The sound is then modelled as staying within a ray tube surrounding the nominal ray. This condition is illustrated in Fig. 2.9, which is a more realistic version of Fig. 2.8 since it incorporates the micro/macro-multipath distinction. Only the direct path is represented, for a clearer drawing.

Figure 2.9: Micro- and macro-multipath structures for the direct path in Fig. 2.8. Macro-multipath is the nominal beam, effective for small-amplitude or large-scale environmental fluctuations. The micro-multipath process results from rapid, small-scale variations of the ocean's properties.

The communication system input/output relationship, shown in Fig. 2.1, can be expressed in terms of the channel multipath by use of the input delay-spread function. This function relates the channel response to the time delay of each ray and its respective amplitude. The usual morphology of the input delay-spread function is presented in Fig. 2.10 for the rays depicted in Fig. 2.8. Thus, it is apparent that the input delay-spread function resembles a train of Dirac impulses.

The received signal y can be decomposed into two terms, one denoting the contribution of the multipath attenuating effects (ym) and the other expressing the influence of underwater noise (yn):

y(t) = ym(t) + yn(t) . (2.3)

The received signal fraction ym depends on the transmitted signal x and the input delay-spread function g as follows:

ym(t) = x(t) ∗ g(t) = g(t) ∗ x(t) = ∫_{−∞}^{+∞} g(τ) x(t − τ) dτ . (2.4)

(2.4) can be simplified knowing that the input delay-spread function is a train of Dirac delta functions,

g(t) = Σ_{l=1}^{L} a_l δ(t − τ_l) , (2.5)

where L is the number of incoming rays at the receiver, a_l is the amplitude of the l-th ray and τ_l is the corresponding time delay. Inserting (2.5) into (2.4):

ym(t) = ∫_{−∞}^{+∞} [ Σ_{l=1}^{L} a_l δ(τ − τ_l) ] x(t − τ) dτ = Σ_{l=1}^{L} a_l x(t − τ_l) , (2.6)

where the last equality follows from the sifting property of the Dirac delta: the channel output is a sum of scaled, delayed replicas of the transmitted signal.

Figure 2.10: Delay-spread pattern for the situation depicted in Fig. 2.8. Each peak corresponds to the arrival of a ray, with the strongest ones necessarily corresponding to the least lossy (direct) paths. The Dirac impulses indicate the macro-multipath structure of each ray, whereas the fluctuations around them mark the micro-multipath input delay-spread function.

The slowly varying macro-multipath structure primarily influences the delay and amplitude of the cluster of arrivals for each ray tube. The rapidly varying micro-multipath structure, on the other hand, shapes the detailed form of the arrivals for that ray tube and induces a temporal spreading of the arrivals for each tube.

2.1.2 Absorption Loss

The signal losses encountered by propagating sound, together with the ambient noise present in the ocean, mainly affect the signal-to-noise ratio at the receiver. Seawater and sea-floor absorption is a sound-attenuating mechanism consisting of the conversion of the acoustic energy of sound waves into another form of energy, which is then retained by the lossy agent (seawater or sea floor, in this case).

2.1.2.1 Seawater

The absorption of sound by water is a consequence of the conversion of acoustic energy into heat. This loss mechanism is highly dependent on frequency: high-frequency signals are far more attenuated than low-frequency ones. The frequency dependence of seawater sound attenuation is evident in Fig. 2.11.

Figure 2.11: Evolution of sound attenuation in seawater with frequency. Generally, the attenuation rises with increasing frequency. The seawater attenuation function can be partitioned into four different terms associated with four distinct sources of sound attenuation in seawater.

The frequency dependence of attenuation can be roughly divided into four regimes of different physical origin. Region I is related to low-frequency propagation-duct cutoff or, in other words, leakage out of the deep sound channel. The main mechanisms associated with regions II and III are the chemical relaxations of boric acid, B(OH)3, and magnesium sulfate, MgSO4, respectively. Region IV is dominated by the shear and bulk viscosity of salt water.

A coarse but frequently used expression relating seawater attenuation to sound frequency is

αw = 3.3×10⁻³ + 0.11 f²/(1 + f²) + 44 f²/(4100 + f²) + 3×10⁻⁴ f² , (2.7)

where each term is sequentially associated with regions I to IV. The units of attenuation and frequency in (2.7) are decibels per kilometer and kilohertz, respectively.
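As a numerical illustration of (2.7), the expression can be coded directly; the following sketch (in Python rather than the Matlab environment used in this work, and with names of our choosing) takes the frequency in kHz and returns the attenuation in dB/km:

```python
def seawater_attenuation_db_per_km(f_khz):
    """Approximate seawater absorption, eq. (2.7): f in kHz, result in dB/km.

    The four terms correspond to regions I to IV of the attenuation spectrum."""
    f2 = f_khz ** 2
    return (3.3e-3                     # region I: duct leakage
            + 0.11 * f2 / (1 + f2)     # region II: boric acid relaxation
            + 44 * f2 / (4100 + f2)    # region III: MgSO4 relaxation
            + 3e-4 * f2)               # region IV: viscosity of salt water

# Attenuation grows quickly with frequency: about 1.19 dB/km at 10 kHz.
```

Evaluating the function over a frequency sweep reproduces the qualitative shape of Fig. 2.11.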

2.1.2.2 Sea Floor

The sea floor is undoubtedly the most complex boundary for underwater sound propagation, as its topography and sound-reflecting characteristics vary strongly across the ocean. Sea-floor sound absorption occurs because the reflection of energy is incomplete: part of the sound penetrates into the bottom. In the deep ocean, the bottom generally consists of a thin stratification of sediments overlying the ocean crust, whereas over continental crust or continental shelves the stratification is thicker. Ocean sediments are usually modelled as fluids, whereas continental or ocean crust is an elastic medium composed of solid material.

Sound waves can be classified, according to the way they propagate, as compressional waves (p-waves) or shear waves (s-waves). Compressional waves oscillate and travel in the same direction through a series of compression and expansion movements; p-waves can be generated in both liquids and solids. In shear waves the particles oscillate perpendicularly to the direction of propagation; s-waves require a solid material for effective propagation, and therefore cannot travel in liquids or gases. Thus, sea-floor sediments only support compressional sound waves, i.e., only p-waves actually propagate into the bottom, thereby being partially absorbed; shear sound waves would be perfectly reflected at the sea floor. Since the water column is a liquid, it does not support s-waves either, so shear waves generally do not propagate in the ocean. Still, if there is no sediment overlying the continental or ocean crust, or the sediment is extremely thin, the bottom supports both compressional and shear acoustic waves. In light of these facts, it is clear that sea-floor properties strongly influence the amount of absorption that propagating sound is subject to. Refer to [14] for a more complete study of sound absorption by sea-bottom reflection.

Along these lines, a geoacoustic model of the ocean bottom is needed to perform an underwater sound propagation analysis. This model should detail the true thickness and properties of sediment and rock layers within the seabed down to a depth termed the effective acoustic penetration depth. The depth-dependent material properties are:

• Compressional wave speed cp;

• Shear wave speed cs;

• Compressional wave attenuation αp;

• Shear wave attenuation αs;

• Material density ρ.

The construction of an exhaustive geoacoustic model for a particular ocean area is a tremendous task.

Sea-floor attenuation also increases linearly with frequency. Furthermore, bottom materials are three to four orders of magnitude more lossy than seawater. For example, at a frequency of 100 Hz seawater attenuation is 0.004 dB/km, whereas it reaches 2 dB/km in basalt and 63 dB/km in silt sediments.

2.1.3 Scattering Loss

The sea surface and the sea floor are rough reflecting surfaces for sound propagating in the ocean. A signal travelling through the ocean will therefore see its strength reduced by scattering effects. Sound scattering occurs when acoustic energy is reflected at one of the ocean boundaries but the reflection is not specular, being instead scattered in a multitude of directions. This loss mechanism is well illustrated by the sea-bottom and ice reflections of Fig. 2.6 (paths B and E, respectively).

The primary cause of scattering loss is obvious: the energy scattered in a direction other than that of the receiver is effectively lost. Another source of signal power decrease is wave interference: the field scattered away from the specular direction and, in particular, the backscattered field (reverberation) acts as negative interference for the main field.

Furthermore, the seawater attenuation previously discussed is partly due to scattering agents. This seawater volume scattering also generates a reverberant acoustic field, and is thought to be caused by biological organisms. A more thorough review of volume scattering mechanisms is presented in [14]. Sea-surface, sea-floor and seawater scattering losses all increase with increasing frequency.

2.1.4 Spreading Loss

Spreading loss is simply a measure of the signal weakening as it propagates outward from the source. Sound propagation can be modelled according to two different geometries. In regions close to the source, the power radiated by the transmitter is equally distributed over the surface area of a sphere surrounding the source. The wavefront thus radiates spherically and the signal energy is attenuated by a factor of R⁻² or, in other words, the signal intensity is inversely proportional to the surface of the sphere: I ∝ 1/(4πR²), where R is the sphere radius.

The spherical spreading in the nearfield is followed by a transition region towards cylindrical spreading, which applies only at longer distances. The farfield region corresponds to propagation in a waveguide limited above by the sea surface and below by the sea bottom, and the sound intensity becomes inversely proportional to the surface of a cylinder of radius R and depth D: I ∝ 1/(2πRD). The cylindrical and spherical spreading of sound is depicted in Fig. 2.12.
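A common coarse approximation combines the two regimes into a piecewise transmission-loss law, with the transition taken at R = D. The sketch below (Python, for illustration) implements that approximation; the choice of transition range is an assumption, not a result from this chapter:

```python
import math

def transmission_loss_db(r_m, depth_m):
    """Piecewise spreading loss in dB: spherical (20 log10 R) in the
    nearfield, cylindrical (10 log10 R) beyond the transition at R = D."""
    if r_m <= depth_m:
        return 20 * math.log10(r_m)            # I proportional to 1/(4 pi R^2)
    # spherical loss accumulated up to R = D, cylindrical afterwards
    return 20 * math.log10(depth_m) + 10 * math.log10(r_m / depth_m)
```

The two branches agree at R = D, so the modelled loss is continuous across the transition.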


Figure 2.12: Spherical and cylindrical spreading. Spherical spreading occurs in the near-source zone (R ≤ D), where sound is attenuated as R⁻². The farfield region (R ≫ D) is dominated by cylindrical spreading, with signal losses proportional to R⁻¹.

2.1.5 Ambient Noise

In underwater acoustic communications, ambient noise is an issue because it masks the signal of in-

terest. The several different sources of noise in the ocean are generally grouped into four categories:

turbulence, waves, thermal and shipping noise.

Thus, one is interested in modelling the effect of noise on the received signal. As ocean noise affects the signal-to-noise ratio at the receiver, it is worthwhile to obtain a reasonable estimate of the noise power. To this end, noise sources are described by continuous Power Spectral Density (PSD) functions. The following empirical formulas were taken from [15], and give the noise power N in dB re 1 µPa Hz⁻¹ as a function of frequency f in kHz:

10 log10 Nt(f) = 17 − 30 log10 f (2.8)

10 log10 Nw(f) = 50 + 7.5 w^{1/2} + 20 log10 f − 40 log10(f + 0.4) (2.9)

10 log10 Nth(f) = −15 + 20 log10 f (2.10)

10 log10 Ns(f) = 40 + 20(s − 0.5) + 26 log10 f − 60 log10(f + 0.03) , (2.11)

where each equation sequentially refers to the noise sources listed before, w is the wind speed (in m/s) and s is the shipping activity factor, ranging from 0 (low) to 1 (high shipping activity). The total PSD of the ambient noise is the sum of all the preceding:

N(f) = Nt(f) + Nw(f) + Nth(f) + Ns(f) . (2.12)

Turbulence noise influences only the low-frequency region, f < 10 Hz. Shipping noise is felt most strongly in the 10–100 Hz band, whereas surface motion caused by wind-driven waves is the dominant noise source in the 100 Hz – 100 kHz region; most underwater acoustic communications use signals in this last interval, where wave noise dominates. Thermal noise becomes relevant at high frequencies, f > 100 kHz.
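The noise model (2.8)–(2.12) can be evaluated with a few lines of code. The sketch below (Python, for illustration; function and argument names are ours) sums the four components in linear units, as (2.12) requires, before converting back to dB:

```python
import math

def noise_psd_db(f_khz, wind_mps=0.0, shipping=0.5):
    """Total ambient-noise PSD, eqs. (2.8)-(2.12), in dB re 1 uPa Hz^-1.

    f_khz: frequency in kHz; wind_mps: wind speed w in m/s;
    shipping: activity factor s in [0, 1]."""
    lg = math.log10
    nt_db = 17 - 30 * lg(f_khz)                                  # turbulence (2.8)
    nw_db = (50 + 7.5 * math.sqrt(wind_mps) + 20 * lg(f_khz)
             - 40 * lg(f_khz + 0.4))                             # waves (2.9)
    nth_db = -15 + 20 * lg(f_khz)                                # thermal (2.10)
    ns_db = (40 + 20 * (shipping - 0.5) + 26 * lg(f_khz)
             - 60 * lg(f_khz + 0.03))                            # shipping (2.11)
    # (2.12): the component PSDs add in linear units, not in dB
    total = sum(10 ** (x / 10) for x in (nt_db, nw_db, nth_db, ns_db))
    return 10 * lg(total)
```

In the wave-dominated band the wind has a strong effect: at 1 kHz the total PSD rises by roughly 20 dB between calm water and a 10 m/s wind.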

Finally, it is important to point out that ocean ambient noise can be modelled as Gaussian noise, with a Gaussian probability density function and the power spectral density indicated in (2.12), illustrated in Fig. 2.13 for two distinct cases: no wind with a shipping activity of 0.5 (solid) and a moderate 10 m/s wind (dotted).

Figure 2.13: PSD of the ocean noise. It can be decomposed into four different terms: water turbulence, shipping activity, wave motion and thermal noise. When there is no wind affecting the ocean environment (with shipping activity s = 0.5), the noise PSD is that of the solid line; in the presence of a 10 m/s wind, the noise PSD becomes that of the dotted line. The noise spectrum is stronger at low and high frequencies; the wave-noise frequency band is the weakest.

2.2 Bellhop

Since this work is devoted to the development of a simulation tool designed to easily change the receiving hydrophone or ULB (transmitter) positions, the waveform or frequency of the transmitted signal, among other parameters, and to allow the receivers to determine the source's position, only simulated data is used; besides, real experimental data is not available. Even though from a receiver's point of view the transmitter's position is unknown, one has to establish the simulation scenario for the localization problem; specifically, the receiver and transmitter depths are given as simulation inputs, as well as the range separation between them. This section addresses the computer program used to simulate the acoustic field generated by the emitter.

Sound propagation in the ocean is mathematically described by the wave equation, whose parameters and boundary conditions reflect the ocean environment. Naturally, one is interested in the available computer solutions to the wave equation; there are essentially five types of models for sound propagation in the ocean:

• Fast Field Program (FFP);

• Normal Mode (NM);

• Ray;

• Parabolic Equation (PE);

• Finite-Difference (FD) or Finite-Element (FE).

The last three models can account for range variations in ocean properties (such as SSP, water density and bottom properties), hence being range-dependent, whereas the first two are range-independent.

Bellhop is a freely distributed ray-tracing code incorporated in the Acoustic Toolbox (see [16]). An exhaustive description of this acoustic model, including a detailed theoretical study of the wave theory behind it, is presented in [17]. Bellhop performs two-dimensional acoustic ray tracing for a given sound speed profile or sound speed field, in ocean waveguides with flat or variable absorbing boundaries [18]. Output options include ray coordinates, travel time, amplitude, eigenrays, acoustic pressure and transmission loss (coherent, incoherent or semi-coherent).

In order to accurately simulate the acoustic field generated by a source in the ocean, Bellhop requires three input files. The main environmental input file is outlined in Table 2.2. This is the file that sets up the localization scenario, i.e., establishes the receiver and source depths and the range separation between them or, in other words, their relative positions. Other important variables are defined in this file, such as the acoustic frequency or the sound speed profile. Bellhop determines the seawater attenuation via Thorpe's formula:

αThorpe = 40 f²/(4100 + f²) + 0.1 f²/(1 + f²) . (2.13)

This expression is identical to (2.7), except that it does not include the terms related to regions I and IV of the seawater attenuation spectrum, corresponding to the lower and the higher frequencies, respectively.


TITLE: Title of the simulation.
FREQ: Frequency of the emitted sound in Hertz.
NMEDIA: Number of water layers with different properties. Bellhop restricts it to one.
TOPOPT:
  TOPOPT(1): Type of interpolation to be used for the SSP. 'N' for N2-linear is usually chosen.
  TOPOPT(2): Type of top (sea surface) boundary condition. 'V' for vacuum above top is usually chosen.
  TOPOPT(3): Attenuation units. 'W' for decibels per wavelength is usually chosen.
  TOPOPT(4): Added volume attenuation. 'T' to specify that volume attenuation calculated through Thorpe's formula (2.13) is to be added.
SSP:
  SSP(1):
    NMESH: Number of mesh points used in the internal discretization. The code will automatically calculate it if zero is chosen.
    SIGMA: RMS roughness at the interface. Bellhop restricts it to zero.
    Z(NSSP): Depth at bottom of medium in meters.
  SSP(2):
    Z(i): Depth in meters of point i (1 ≤ i ≤ NSSP) of the SSP.
    CP(i): p-wave speed in meters per second at point i of the SSP.
    CS(i): s-wave speed in meters per second at point i of the SSP.
    RHO(i): Water density in grams per cubic centimeter at point i of the SSP.
    AP(i): p-wave attenuation, in TOPOPT(3) units, at point i of the SSP.
    AS(i): s-wave attenuation, in TOPOPT(3) units, at point i of the SSP.
BOTOPT:
  BOTOPT(1): Type of bottom (sea floor) boundary condition. 'F' for reflection coefficient from a file is usually chosen.
  BOTOPT(2): '*' to obtain the bottom bathymetry from a file is usually chosen.
  SIGMA: Interfacial roughness in meters.
NSD: Number of source depths.
SD(1:NSD): The source depths in meters.
NRD: Number of receiver depths.
RD(1:NRD): The receiver depths in meters.
NR: Number of receiver ranges.
R(1:NR): The receiver ranges in kilometers.
RTYPE: Run type. 'A' is chosen to generate an amplitude-delay file in ASCII.
NBEAMS: Number of beams. The program calculates it automatically if set to zero.
ALPHA(1:NBEAMS): Beam angles in degrees. Negative angles mean the rays leave towards the surface.

Table 2.2: Bellhop environmental file. The parameters are to be sequentially written in a '.env' text file.

From Table 2.2, it is clear that the remaining two input files contain the bottom bathymetry description and the bottom reflection-coefficient data. The sea-floor bathymetry is detailed in a '.bty' file having the format indicated in Table 2.3. The bathymetric profile is composed of range-depth pairs, meaning that at range Ri the sea-bottom depth is Zi. Bellhop considers the source to be at range zero, which must be kept in mind when building the bottom topographic profile.

The reflection-coefficient file is generated through a Bounce run. Bounce is a program incorporated in the same acoustic toolbox as Bellhop, and computes the reflection coefficients given an adequate input. The Bounce environmental file is very similar to Bellhop's up to the BOTOPT parameters; BOTOPT(1) is altered to 'A', whereas the option BOTOPT(2) is disregarded. The following line of the '.env' file is filled with the bottom properties, which are similar in syntax to those described under option SSP(2), although referring to the bottom sediments' characteristics instead of the water column's.

TYPE: Type of bathymetric profile evolution. 'L' for piecewise linear, or 'C' for curvilinear. The 'L' option is advised.
NPOINTS: Number of R-Z points to follow.
R(i): Range in kilometers of point i (1 ≤ i ≤ NPOINTS) of the bathymetric profile. Range must vary between zero and the maximum receiver range.
Z(i): Depth in meters of point i of the bathymetric profile.

Table 2.3: Structure of the Bellhop bottom bathymetry file. The parameters are to be sequentially written in a '.bty' text file.

In the end, Bellhop's output provides the delay profile and respective amplitudes, which ultimately emulate the effects of the underwater channel on the transmitted signal in accordance with (2.6).

2.3 Geoacoustic Database

A geoacoustic database is needed in this work to obtain the data which serves as input to Bellhop’s

(and Bounce’s) simulations. Thus, we resort to the GEBCO (General Bathymetric Chart of the Oceans)

bathymetric database of the world’s oceans, adopt the World Ocean Atlas 2009 material on SSPs, and

utilize the DECK 41 bottom sediments database. These are a result of an extensive field work to collect

enough information so that reasonably realistic models of the world’s oceans can be accessed. These

databases are included in the WOSS (World Ocean Simulation System) library, which is a network sim-

ulator that can be extended to simulate underwater sound propagation. A short technical description of

the WOSS library is presented in [19].

The GEBCO bathymetric database is gathered in a NetCDF (Network Common Data Form) file, an array-oriented, science-oriented file format, and therefore ideal for the creation of a large database. Matlab would usually be able to open and explore the information contained in the NetCDF file, but the bathymetric profile variable is too large to be stored in a Matlab variable. For that reason, a software package named GDA (GEBCO Digital Atlas) has been developed for viewing and accessing data from GEBCO's gridded data sets. Ref. [20] is a helpful introduction to this software.

The GDA software interface permits selecting a geographic area of the world by chart number, by chart area, by geographic latitude and longitude limits, or by a user-controlled zoom box. Fig. 2.14 illustrates the GDA interface representing the whole world map, i.e., before a specific area has been selected. When the oceanic area of interest is picked, the GDA software exports an easily treatable ASCII file. This bathymetric file begins with some general information, such as the maximum and minimum latitudes and longitudes of the area it covers and the spacing between consecutive points in the map (30″, or 8.333×10⁻³ °), which is representative of the precision this database offers, among other parameters. After this preamble, the data of interest is organized in three columns:


• The first and second columns respectively present the longitude and latitude values in degrees, with a decimal precision chosen upon export of the ASCII file;

• The third column matches the longitude-latitude pair of the first two columns with a sea-bottom depth (in meters); in other words, the sea-floor depth at that latitude and longitude is indicated in the last column.

The information in these three columns is then used to build the bottom topography of the localization-problem scenario.
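Parsing such an export is straightforward. The sketch below (Python, illustrative; it assumes preamble lines never consist of exactly three numbers) keeps only the three-column data records:

```python
def parse_gebco_ascii(text):
    """Parse a three-column GEBCO ASCII export: longitude, latitude, depth (m).

    Preamble lines (anything that is not exactly three numbers) are skipped.
    Returns a list of (lon, lat, depth) tuples."""
    points = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue
        try:
            lon, lat, depth = (float(p) for p in parts)
        except ValueError:
            continue  # header/preamble line with non-numeric fields
        points.append((lon, lat, depth))
    return points
```

The resulting list of triples can then be resampled along the source-receiver transect to build the '.bty' profile.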

Figure 2.14: GDA software interface exhibiting the world map.

The World Ocean Atlas 2009 SSP database contains monthly, seasonal and annual averages of sound speed profiles for several locations in the world. The available location resolution is 1° × 1°; that is, the SSP database is incorporated in a NetCDF file having 180 latitude values ranging from 89.5° to −89.5°, and 360 longitude values varying between −179.5° and 179.5°, both with a 1° spacing between consecutive values. Furthermore, the SSP file has 33 unequally spaced standard depths covering the 0 – 5500 m depth interval. Thus, for a specified latitude and longitude, the database provides the depth-dependent sound speed profile, sampled at those 33 depths, indicating the speed of sound (in meters per second) at each of them.
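Between the 33 standard depths, the profile can be evaluated by linear interpolation, consistent with the piecewise-linear SSP handling described above. A sketch (Python, names ours):

```python
def interp_ssp(depths, speeds, z):
    """Linearly interpolate a sound speed profile at depth z (m).

    depths: increasing standard depths (m); speeds: sound speed (m/s) at each.
    Values outside the tabulated range are clamped to the end points."""
    if z <= depths[0]:
        return speeds[0]
    if z >= depths[-1]:
        return speeds[-1]
    for (z0, c0), (z1, c1) in zip(zip(depths, speeds),
                                  zip(depths[1:], speeds[1:])):
        if z0 <= z <= z1:
            return c0 + (c1 - c0) * (z - z0) / (z1 - z0)
```

For example, with speeds 1500 and 1510 m/s tabulated at 0 and 10 m, the interpolated speed at 5 m is 1505 m/s.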

The sea-floor sediment database is also detailed in a NetCDF file. This database has a resolution of one minute of arc by one minute of arc (1′ × 1′), with 10801 latitude values extending from 90° to −90°, and 21601 longitude values from 180° to −180°, both with a 0.016667° spacing between consecutive values. This makes a total of 233312401 coordinates, each corresponding to a latitude-longitude pair and carrying an index number describing the estimated sea-bottom properties at those coordinates. Note that only for a few coordinates do the attributed sea-bottom properties result from actual measurements. Table 2.4 presents the decoding of each index number, which may vary between 1 and 11.

No. | Sea floor type               | cp (m/s) | cs (m/s)  | ρ (g/cm³) | αp (dB/λ) | αs (dB/λ)
1   | Gravel                       | 1800     | 180 z^0.3 | 2.0       | 0.6       | 1.5
2   | Sand                         | 1650     | 110 z^0.3 | 1.9       | 0.8       | 2.5
3   | Silt                         | 1575     | 80 z^0.3  | 1.7       | 1.0       | 1.5
4   | Clay                         | 1500     | 100       | 1.5       | 0.2       | 1.0
5   | Ooze                         | 2400     | 1000      | 2.2       | 0.2       | 0.5
6   | Mud                          | –        | –         | –         | –         | –
7   | Rocks, rock fragments        | –        | –         | –         | –         | –
8   | Organic material             | –        | –         | –         | –         | –
9   | Nodules, slab or concretions | –        | –         | –         | –         | –
10  | Rigid                        | –        | –         | –         | –         | –
11  | Not oceanic                  | –        | –         | –         | –         | –

Table 2.4: DECK 41 database numbers and corresponding sea bottom properties, taken from [14].

Regarding Table 2.4, a series of observations must be made. First, the absence of data for numbers 6 to 9 is justified by the lack of available information on sediment properties. In our computer implementation, we assign to each of these bottom types properties identical to those of the closest match among types 1 through 5. On the other hand, number 10 denotes a rigid bottom, with no overlying sediments, and number 11 stands for continental areas. For perfectly rigid bottoms Bellhop uses its own set of internal parameters. On a final note, the term z in the s-wave speed formulas of the first three numbers is the previously discussed effective acoustic penetration depth.
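For the computer implementation, the usable rows of Table 2.4 (types 1 to 5) reduce to a small lookup table. The sketch below (Python, illustrative; structure and names are ours) encodes the depth-dependent s-wave speeds as functions of the penetration depth z:

```python
# (type name, cp m/s, cs as a function of z in m, rho g/cm3,
#  alpha_p dB/lambda, alpha_s dB/lambda) -- values from Table 2.4
SEDIMENTS = {
    1: ("Gravel", 1800, lambda z: 180 * z ** 0.3, 2.0, 0.6, 1.5),
    2: ("Sand",   1650, lambda z: 110 * z ** 0.3, 1.9, 0.8, 2.5),
    3: ("Silt",   1575, lambda z:  80 * z ** 0.3, 1.7, 1.0, 1.5),
    4: ("Clay",   1500, lambda z: 100.0,          1.5, 0.2, 1.0),
    5: ("Ooze",   2400, lambda z: 1000.0,         2.2, 0.2, 0.5),
}

def shear_speed(db_number, z):
    """s-wave speed c_s (m/s) for a DECK 41 type at penetration depth z (m)."""
    return SEDIMENTS[db_number][2](z)
```

Types 6 to 9 would be remapped to their closest match among types 1 to 5 before the lookup, as described above.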


Chapter 3

Delay Estimation

Continuous-time signals are signals whose values vary continuously with time, i.e., they are functions whose domain is an uncountable set (R). They are associated with a physical quantity, typically electrical (e.g. current), in which case the signal is implemented by an electrical circuit. If a continuous-time function is sampled at a high enough rate, a matching discrete-time signal is obtained; in other words, no information is lost when converting a continuous-time signal into its discrete-time equivalent, provided the sampling frequency is sufficiently high. A discrete-time signal is a function whose domain is a countable set (N).

It is convenient to work with discrete-time rather than continuous-time signals in this signal-processing exposition, since the Matlab implementation handles only discrete-time signals. In this way, the UAC response synthesis presented in Chapter 2 has to be adjusted. Specifically, the UAC response formula (2.6), which refers to continuous-time signals, has to be rewritten for the discrete-time case:

ym[n] = x[n] ∗ g[n] = g[n] ∗ x[n] = Σ_{k=−∞}^{+∞} [ Σ_{l=1}^{L} a_l δ[k − N_l] ] x[n − k] . (3.1)

In (3.1) the discrete-time convolution is explicit, with the variables n, k and the l-th ray time delay Nl substituting for the continuous-time variables t, τ and τl. The ray amplitudes al are a measure of the absorption, scattering and spreading losses of the ocean environment.
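Synthesizing the channel output according to (3.1) then amounts to adding a scaled, delayed copy of x[n] for each ray. A minimal sketch (pure Python, for illustration; names are ours):

```python
def multipath_channel(x, amplitudes, delays):
    """Apply (3.1): y[n] = sum over l of a_l * x[n - N_l].

    amplitudes: a_l for each of the L rays;
    delays: the corresponding integer sample delays N_l (N_l >= 0)."""
    n_out = len(x) + max(delays)          # long enough to hold the last echo
    y = [0.0] * n_out
    for a, nd in zip(amplitudes, delays):
        for n, xv in enumerate(x):
            y[n + nd] += a * xv           # delayed, scaled replica of x
    return y

# A two-ray channel (direct path at delay 0, surface echo at delay 3)
# turns the pulse [1, 0, 0] into [1.0, 0.0, 0.0, 0.5, 0.0, 0.0].
```

In the simulator, the amplitudes and delays would be the ones returned by Bellhop's amplitude-delay output.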

In communications, continuous-time signals propagate through a physical medium, but through sampling and reconstruction an equivalent end-to-end discrete-time system can be assumed when designing transmitter/receiver algorithms and signalling schemes (Fig. 3.1). Thus, the continuous-time signal is sampled into discrete-time at the receiver, while the opposite process, reconstruction, occurs at the transmitter. Additionally, both the transmitter and the receiver include a transducer to convert electric signals to acoustic waves and vice versa.

The next pertinent step is to set the ULB emitted signal x[n]. There are several interesting signals which guarantee efficiency in both transmission and reception, albeit only three have been selected for this study: sine, chirp and QAM. Section 3.1 presents the most relevant properties of these three signals, along with a brief review of the characteristics of the state-of-the-art emitted signal.


Figure 3.1: Discrete-time system implemented in continuous-time.

Once signal x[n] is assigned, it is immediate to obtain y[n], the receiver input. Thereafter, signal-processing techniques are used to determine the sound waves' travel time. Section 3.2 discusses these signal-processing techniques for each kind of emitted signal, differentiating the first three types, which share the same receiver architecture, from the state-of-the-art receiver, which is slightly different as it has to account for the uncertainty in the transmitted signal's frequency.

The four main steps of this long-distance wireless communication problem are diagrammed in Fig. 3.2:

Figure 3.2: Steps of the underwater range-estimation process. Chapter 3 deals with steps 2 and 4.

3.1 Transmitter

The transmitter structure is depicted in Fig. 3.3.

Figure 3.3: Transmitter block.

The generator produces the signal in baseband, defining all the significant signal characteristics, such as its waveform, nominal frequency or bandwidth. Then, the baseband signal s is frequency-shifted by a modulating carrier of frequency fc, creating the passband signal x. The D/C converter transforms the discrete-time signal into a continuous-time one. Finally, the amplifier amplifies the continuous-time modulated signal and drives the transducer to produce sound waves.

This work aims to evaluate the digital system performance when four different types of signals are employed: the state-of-the-art sine wave with a high, uncertain frequency on the one hand, and sine, chirp and QAM signals with a lower carrier frequency on the other. This section continues with an introduction to the definitions common

to all the signals.

First, it is important to recall that in the problem at hand the transmitter is the ULB, which is attached to

the FDR, lodged at the bottom of the ocean, and emitting sound. Thus, the transmitted signal power is

typically expressed in terms of the acoustic output strength (dB re 1µPa@1m). The transmitter Source

Level (SL) is a measure of its acoustic power, and is the intensity level relative to the intensity of a plane

wave with an RMS pressure of 1 µPa, taken at the reference distance of 1 m from the source. ICAO imposes

the standard ULB source level as 160.5 dB re 1µPa@1m.

The intensity of the reference plane wave of 1 µPa is found to be I0 = 0.667 × 10⁻¹⁸ W/m², based on (3.3) and assuming an average sound speed in the ocean of c = 1500 m/s and a water density of ρ = 1000 kg/m³. The intensity of the source radiated sound is then determined via (3.2) and the known values of SL (converted to linear units) and I0. Having set the source intensity (in W/m²), the source power is obtained by multiplying the intensity by an area. The sound

spreads in the nearfield zone of the source in a spherical fashion, as seen in Subsection 2.1.4, and the

transmitter power is thereby calculated multiplying the sound intensity at 1m by the area of a sphere

with a 1m radius, (3.4).

SL = I1 / I0 (3.2)

I = p² / (ρc) (3.3)

P = 4π (1 m)² I0 SL (3.4)

Using the standard values (SL)dB = 160.5 dB re 1 µPa@1m and I0 = 0.667 × 10⁻¹⁸ W/m², the ULB acoustic power is P = 0.0940 W.
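As a numerical check of the chain (3.2)–(3.4), the computation from source level to acoustic power takes only a few lines; the sketch below uses Python for illustration, whereas the thesis's simulations are in Matlab:

```python
import math

p0 = 1e-6                     # 1 uPa reference pressure, in Pa
rho, c = 1000.0, 1500.0       # water density (kg/m^3) and sound speed (m/s)
I0 = p0 ** 2 / (rho * c)      # reference intensity, by (3.3)

SL_dB = 160.5                 # ICAO standard source level
SL = 10 ** (SL_dB / 10)       # converted to linear units, as in (3.2)
P = 4 * math.pi * 1.0 ** 2 * I0 * SL    # power over a 1 m sphere, by (3.4)
print(I0, P)                  # ~0.667e-18 W/m^2 and ~0.0940 W
```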

Moreover, the average power of a zero-mean discrete-time signal x is

P(x[n]) = E[x²[n]] = lim_{N→∞} [ 1/(2N+1) Σ_{n=−N}^{N} x²[n] ] . (3.5)

As the transmitted signal's power is the mean of the squared signal, the power clearly depends on the waveform and amplitude of the signal. Conversely, for a fixed transmitted power, the amplitude of each of the signals under study follows from that power.
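For instance, the amplitude that yields the standard 0.0940 W for a sinusoidal pulse follows directly from (3.5); a short Python sketch (in place of the thesis's Matlab):

```python
import numpy as np

P_target = 0.0940                       # standard ULB acoustic power (W)
A = np.sqrt(2 * P_target)               # sine amplitude giving power A^2/2

fs, f0, tau = 50_000, 10_000, 0.05      # alternative-sine parameters
n = np.arange(int(round(tau * fs)))
x = A * np.sin(2 * np.pi * f0 / fs * n)

P_est = np.mean(x ** 2)                 # finite-window estimate of (3.5)
print(round(A, 4), round(P_est, 4))     # 0.4336 0.094
```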

An essential aspect of the generator is that it periodically generates finite-duration signals. The duration of the signal is called the pulse length (τ), and the time spacing between transmissions is called the


pulse repetition period (Tp). Fig. 3.4 illustrates these concepts. For fixed transmitted power, shorter

signals have higher amplitudes than longer ones.

Figure 3.4: Periodic sequence of pulses to be transmitted.

The sampling frequency fs is a crucial parameter defining the number of samples per unit of time

taken from a continuous signal to make a discrete one, or vice-versa. This sampling process takes place

at the D/C converter, where discrete-time signals are converted to continuous-time, in the transmitter,

and the opposite occurs in the receiver at the C/D converter. In this work, the sampling frequency at

the receiver is equal to that at the emitter for simplicity. The Nyquist sampling theorem states that the

sampling frequency must be greater than twice the maximum frequency of the signal being sampled.

Hence, the sampling frequency is not equal for all four signals, being greater for the state-of-the-art one. Moreover, the pulse length and the pulse repetition period are converted from the continuous-time domain to the discrete-time framework through multiplication by the sampling frequency. Thus, the pulse length corresponds to Nτ samples, and two consecutive signal transmissions are separated by Np samples.

Nτ = τfs (3.6)

Np = Tpfs (3.7)
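With the alternative signals' parameters (τ = 50 ms, Tp = 1 s, fs = 50 kHz), (3.6) and (3.7) reduce to simple arithmetic; a minimal check:

```python
fs = 50_000            # sampling frequency (Hz) for the alternative signals
tau, Tp = 0.05, 1.0    # pulse length (s) and pulse repetition period (s)

N_tau = int(round(tau * fs))   # (3.6): samples per pulse
N_p = int(round(Tp * fs))      # (3.7): samples between transmissions
print(N_tau, N_p)              # 2500 50000
```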

The generator also sets the transmitted signal frequency f0. Namely, the frequency of signal s at the output of the generator must be f0 − fc to compensate for the frequency shift performed by the carrier. Hence, the frequency of the signal x leaving the modulator is (f0 − fc) + fc = f0, as intended.

The carrier is a sinusoidal wave which is modulated with an information-bearing input signal. The modulation process is shown in the scheme of Fig. 3.5, as well as in (3.8). The parameter φ is the carrier's phase offset.


Figure 3.5: Sinusoidal wave modulation. The generated signal s[n] is modulated by a cosine of frequency

fc.

x[n] = s[n] Re{e^(j(ωc·n + φ))} (3.8)

The carrier signal is in fact the real part of a complex sinusoid, yielding a cosine with carrier frequency

fc. The modulator shifts the frequency spectrum of the generated signal s although the signal bandwidth

remains unaltered, i.e., the signal frequency spread around the central frequency stays the same. Thus,

the unmodulated signal s has the intuitive designation of baseband signal, whereas the modulated signal

x is the passband signal. This frequency spectrum shift is exemplified in Fig. 3.6.

Figure 3.6: Frequency spectrum shift resulting from the modulation process. The baseband signal presents lower frequencies than the passband signal.
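The frequency shift of Fig. 3.6 is easy to verify numerically; the sketch below modulates a hypothetical 500 Hz baseband tone by the 10 kHz carrier, as in (3.8) with φ = 0 (Python used for illustration):

```python
import numpy as np

fs, fc = 50_000, 10_000
n = np.arange(5000)
s = np.cos(2 * np.pi * 500 / fs * n)          # hypothetical 500 Hz baseband tone
x = s * np.cos(2 * np.pi * fc / fs * n)       # modulation per (3.8), phi = 0

df = fs / n.size                               # 10 Hz FFT bin spacing
k_bb = np.argmax(np.abs(np.fft.rfft(s)))       # strongest baseband bin
k_pb = np.argmax(np.abs(np.fft.rfft(x)))       # strongest passband bin
print(k_bb * df, k_pb * df)                    # 500 Hz vs a component at fc +/- 500 Hz
```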

Table 3.1 gathers the values of the significant parameters just discussed for each of the signals of

interest:


Signal | Power P (W) | Pulse length τ (ms) | Pulse repetition period Tp (s) | Transmitted signal frequency f0 (kHz) | Signal bandwidth Bs | Carrier frequency fc (kHz) | Sampling frequency fs (kHz)
Sine | 0.0940 | 50 | 1.0 | 10 | 1/τ | 10 | 50
Chirp | 0.0940 | 50 | 1.0 | 10 | 2 kHz | 10 | 50
QAM | 0.0940 | 50 | 1.0 | 10 | 2 kHz | 10 | 50
SA sine | 0.0940 | 10 | 1.0 | 37.5 ± 1 | 1/τ | 37.5 | 85

Table 3.1: Characteristics of each of the possibly transmitted signals: sine, chirp, QAM and state-of-the-art sine.

There are some remarks to be made regarding the values presented in Table 3.1. First, notice that

the pulse repetition period and the signal power are chosen equal for all the signals. Nonetheless, all

the remaining parameters vary from signal to signal. Starting with the pulse length, the proposed signal alternatives have a longer pulse than the state-of-the-art one, as a longer incoming signal should be easier to detect at the receiver. Additionally, the frequency decrease relative to the

state-of-the-art signal is expected to greatly improve the receiver effectiveness: the sound waves are much less attenuated in water, and consequently the SNR at the hydrophone is higher. Furthermore, the

SA sine has a nominal frequency of 37.5 kHz, but due to the lack of precision of the generator it may vary in a 2 kHz band, from 36.5 kHz to 38.5 kHz. This frequency imprecision is simulated in Matlab by creating a sine whose frequency is a random number drawn from a uniform distribution with mean 37.5 kHz. Regarding the QAM and chirp signals, note that their frequency spectrum is spread over

a relatively large band. In contrast, the sine bandwidth, which is simply the inverse of the signal duration, is much lower (approximately 100 Hz for the SA sine and 20 Hz for the alternative sine). In terms of

the long-distance communication problem, it is known from Shannon’s work that a signal with a larger

bandwidth may contain more information. In operational terms, our waveforms with wider bandwidth have auto-correlation functions (and ambiguity functions) with narrow main peaks and hence allow

more precise determination of time delays. Finally, it is clear that the sampling frequency has to be

higher for the SA sine, as its nominal frequency is also higher. The adopted sampling frequency values

are typical for underwater communication systems.
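The SA sine's frequency imprecision described above can be emulated outside Matlab as well; a Python sketch, where the seed is arbitrary and the 0.6056 amplitude is the value quoted later for the state-of-the-art sinusoid:

```python
import numpy as np

rng = np.random.default_rng(0)                 # arbitrary seed, for reproducibility
f0 = rng.uniform(36_500, 38_500)               # Hz, uniform over the 2 kHz band

fs, tau = 85_000, 0.010                        # SA sampling rate and pulse length
n = np.arange(int(round(tau * fs)))            # 850 samples per pulse
x_sa = 0.6056 * np.sin(2 * np.pi * f0 / fs * n)
print(n.size, 36_500 <= f0 <= 38_500)          # 850 True
```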

Before proceeding to the presentation of each of the signals of interest, it is worth addressing digital communication. In fact, one of the proposed innovations to the current aircraft-search technology is to

endow the ULB with the ability to transmit relevant parameters such as its depth or the FDR logs. This

is attained thanks to digital modulation techniques, which consist of modulating the carrier wave by a

digital bit stream containing the intended information (e.g. depth). As a result, the ULB transmitted signal


would be composed of a preamble (sine, chirp or QAM) used at the receiver to detect the incoming signal

through its auto-correlation properties, followed by the digital message:

Figure 3.7: Structure of the transmitted signal containing a digital message. The preamble is either the sinusoid, the chirp or the QAM signal, whereas the digital message is predominantly conveyed through ASK, FSK, PSK or QAM modulation processes.

In effect, the sine, chirp or QAM preamble is essential for detecting the digital message because its waveform and characteristics are known by the receiver, unlike the digital message's shape, which depends, e.g., on the ULB depth and FDR records. Thus, the receiver is able to generate a replica of the preamble, but cannot replicate the digital message.

3.1.1 Sinusoid

A real-valued, discrete-time sinusoid is defined as:

xs[n] = A sin(2πf0n+ ϕ) , (3.9)

where A is the amplitude, f0 is the frequency in Hertz, and ϕ is the initial phase, or phase offset, in

radians. The average power of a discrete-time sine wave is

P(xs[n]) = A² / 2 . (3.10)

A sinusoid with amplitude 0.4336 (amounting to an average power of 0.0940), of frequency 10 kHz

and sampled at a 50 kHz rate is partially represented in Fig. 3.8. This corresponds to the output of the

transmitter when the alternative sine is employed. The state-of-the-art sinusoid has a similar waveform,

only with different signal frequency, sampling rate and amplitude (0.6056).

3.1.2 Chirp

A chirp signal is generally defined as a sinusoid having a linearly changing frequency over time:

xc[n] = A cos((ω0 + (1/2)β·n)·n + ϕ) , (3.11)

where ω0 is the nominal angular frequency, and β is the frequency variation rate. The average power

of a discrete-time chirp is equally


Figure 3.8: Portion of a discrete-time sine wave of average power 0.0940 (amplitude 0.4336), frequency 10 kHz, initial phase π/2, sampled at 50 kHz.

P(xc[n]) = A² / 2 . (3.12)

Fig. 3.9 depicts a modulated chirp signal with amplitude 0.4336 (providing an average power of 0.0940) and frequencies linearly swept from 9 kHz to 11 kHz over a time interval matching the pulse length, which corresponds to 2500 samples at a 50 kHz sampling frequency.

Figure 3.9: Modulated chirp signal with initial frequency 9 kHz and final frequency 11 kHz. The signal amplitude is 0.4336, whereas the sampling frequency is 50 kHz.
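A minimal sketch of the discrete chirp of (3.11), with β chosen so the instantaneous frequency sweeps 9–11 kHz over the 2500-sample pulse (Python for illustration):

```python
import numpy as np

fs, tau = 50_000, 0.05
N = int(round(tau * fs))                  # 2500 samples per pulse
n = np.arange(N)

f_start, f_stop = 9_000, 11_000           # sweep limits (Hz)
w0 = 2 * np.pi * f_start / fs             # nominal angular frequency
beta = 2 * np.pi * (f_stop - f_start) / fs / N   # frequency variation rate

A = np.sqrt(2 * 0.0940)                   # amplitude for 0.0940 W, by (3.12)
xc = A * np.cos((w0 + 0.5 * beta * n) * n)       # discrete chirp (3.11), phi = 0

# instantaneous frequency at the last sample approaches f_stop
f_inst_end = (w0 + beta * (N - 1)) * fs / (2 * np.pi)
print(xc.size, round(f_inst_end))          # 2500 and ~11000
```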


3.1.3 QAM

Before proceeding to the presentation of the QAM (Quadrature Amplitude Modulation) signal, it is con-

venient to introduce a few concepts about passband digital communication.

A digital baseband signal differs from a generic discrete-time signal because it can only take one of two

possible values: one value representing bit 0 and another standing for bit 1. Thus, a digital receiver

not only samples the continuous-time signal but also uses blocks of samples to decide whether a given

symbol represents bit 0 or bit 1. In this sense, digital systems offer greater immunity to noise distur-

bances than their analog rivals, since the noise-caused distortions to the digital signal are generally

small enough so that the bit sequence can be correctly identified at the receiver. On the contrary, slight

variations on the analog signal may deeply affect the way the receiver interprets it. Along these lines,

it is clear that the sinusoid and chirp signals fit the analog modulation description, whereas the QAM

signal suits the digital modulation frame. Nonetheless, we do not need to extract digital content from

signals, so any type of waveform is suitable as long as it has good auto-correlation properties for delay

estimation.

The digital baseband bit stream is represented by the commonly named line codes; the most important

line codes are the NRZ (Non-Return-to-Zero) unipolar, the NRZ polar, the RZ (Return-to-Zero) polar and

the Manchester code (see [21] and [22]). The digital baseband signal may modulate one or more of the

following parameters of a sinusoid carrier: amplitude, phase and frequency. Thus, a sequence of bits

may be transmitted by altering the characteristics of the carrier in three basic manners:

Figure 3.10: Exemplification of the digital modulation schemes ASK, PSK and FSK.

In light of Fig. 3.10, it is straightforward to conclude that:

• The ASK (Amplitude Shift-Keying) modulation changes the amplitude of the reference signal so as to reflect the transmission of bit 0 or bit 1. In the case of Fig. 3.10, where bit 0 is


represented by a null amplitude, the OOK (On-Off Keying) modulation is obtained;

• The PSK (Phase Shift-Keying) modulating process modifies the carrier phase upon the bit transi-

tions;

• The FSK (Frequency Shift-Keying) modulation changes the carrier’s frequency in consonance with

the bit transitions. Thus, in Fig. 3.10 bit 0 is represented by the signal portions with lower frequency

and, consequently, bit 1 is represented by the higher frequency parts of the signal.

The QAM digital modulation architecture conveys two digital symbol streams by changing the amplitudes of two carrier waves using the ASK/PSK scheme. The two carrier sinusoids are out of phase with each other by 90°, and are thereby called quadrature carriers or quadrature components. The modulated waves

are summed and the resulting waveform is a combination of both the PSK and the ASK modulations.

Thus, a discrete-time QAM signal is defined as

xqam[n] = I[n] cos(2πfcn) +Q[n] sin(2πfcn) , (3.13)

where I[n] and Q[n] are the two symbol streams to be transmitted. The in-phase symbol stream is

obtained as

I[n] = Σ_k ak p(n − kN) . (3.14)

In (3.14) ak is the amplitude of symbol k, p is the shape of the signalling pulses (root-raised-cosine

[21], in this case), and N is the number of samples per symbol interval. The quadrature sequence Q[n]

is generated similarly.
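A minimal sketch of (3.13)–(3.14) for QAM-4, using a rectangular pulse p[n] in place of the thesis's root-raised cosine for brevity (symbol values, sizes and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
Nsym, Nsps = 16, 25                        # symbols and samples per symbol (illustrative)

a = rng.choice([-1.0, 1.0], size=Nsym)     # in-phase amplitudes a_k (QAM-4)
b = rng.choice([-1.0, 1.0], size=Nsym)     # quadrature amplitudes

p = np.ones(Nsps)                          # rectangular pulse, not the thesis's
I = np.concatenate([ak * p for ak in a])   # root-raised cosine; (3.14) for the
Q = np.concatenate([bk * p for bk in b])   # in-phase and quadrature streams

fs, fc = 50_000, 10_000
n = np.arange(I.size)
x = I * np.cos(2 * np.pi * fc / fs * n) + Q * np.sin(2 * np.pi * fc / fs * n)   # (3.13)
print(x.size)                              # 400
```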

There is a crucial aspect not yet discussed: the definition of symbol. So far, the only presented symbols

were bit 0 and bit 1. Therefore, the modulation order of the analyzed modulation schemes is 2, as the said

schemes only allow the transmission of 2 different symbols (0 and 1). In general, the digital modulation

systems are classified according to the number of symbols M they are able to transmit as M-ary. For

example, if the number of different symbols that can be transmitted using the QAM architecture is 16, then the modulation is called QAM-16; in this case, each symbol is a combination of 4 bits (2⁴ = 16, so 4 bits encode 1 of 16 symbols). In this thesis the QAM modulation used is QAM-4, whose signal

constellation can be seen in Fig. 3.11 along with the signal constellation of QAM-16.


Figure 3.11: Symbol constellation of both the QAM-4 and QAM-16 modulations. Each symbol is repre-

sented by a matching sequence of bits.

Figure 3.12: Baseband QAM-4 signal.

Fig. 3.12 depicts the QAM-4 signal before it goes through the modulator and is frequency-shifted by the sinusoidal carrier; it is therefore the baseband signal. The bit


transitions are clearly visible as phase shifts (PSK) and amplitude shifts (ASK). The QAM baseband

signal frequency spectrum is represented in Fig. 3.13.

Figure 3.13: Power spectral density of the baseband QAM-4 signal.

A passband QAM-4 signal with amplitude 0.6618, bandwidth of 2 kHz around the 10 kHz central fre-

quency and sampled at a rate of 50 kHz is represented in Fig. 3.14. Its PSD is presented in Fig. 3.15.

Figure 3.14: Passband QAM-4 signal.


Figure 3.15: Power spectral density of the passband QAM-4 signal.

3.2 Receiver

In simulation, the signal at the receiver is determined as a sum of two different contributions. The first

one is related to the UAC response, specifically the multipath propagation, and is described by (3.1).

After applying (3.1) to the emitted signal, one obtains a time-delayed and amplitude-attenuated version

of x[n]. The other contribution does not depend on the transmitted signal, as it is the ocean ambient noise addressed in Subsection 2.1.5. In Matlab, the noise is added to ym[n] using the awgn (Additive White Gaussian Noise) function, which requires as its only input the SNR intended at its output. The SNR at the receiver is calculated as the quotient between the power of the UAC-attenuated ym[n] and the noise power derived from (2.12).
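The noise-addition step can be mimicked outside Matlab; the helper below is a sketch reproducing the behaviour of awgn with its 'measured' option (the test signal and seed are illustrative, not the thesis's exact code):

```python
import numpy as np

def add_awgn(y, snr_db, rng=np.random.default_rng(2)):
    """Add white Gaussian noise so the output has the requested SNR
    relative to the measured power of y (like Matlab's awgn(...,'measured'))."""
    p_sig = np.mean(np.abs(y) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    return y + rng.normal(scale=np.sqrt(p_noise), size=y.shape)

# attenuated sinusoidal pulse standing in for ym[n] (illustrative)
y = 0.4336 * np.sin(2 * np.pi * 0.2 * np.arange(2500))
y_noisy = add_awgn(y, snr_db=10)
```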

The receiver’s primary concern is to distinguish between the ocean ambient noise and the transmitted

signal. That is, as soon as the receiving hydrophone is operational, it will begin sensing sound waves

produced by noise sources. The receiver’s aim is to disregard these noisy signals but when the actual

ULB transmitted signal arrives, the receiver must be able to detect it. Once the ULB signal is recognized,

it is processed in order to provide the sound waves’ travel time.

This section is divided into two subsections, the first one describing the proposed alternative receiver

structure, and the second presenting the state-of-the-art receiver organization.

3.2.1 Alternative Receiver

The alternative receiver architecture is:


Figure 3.16: Receiver conformation when the alternative signals (sine, chirp, QAM) are transmitted.

The ensemble [transducer + C/D converter] converts the analog acoustic signal to a discrete-time

version of the received signal. Then, the first buffer aims to partition the discrete-time received signal

into a pre-fixed number of samples. In this way, the signal processing effort becomes lighter, since the

signal operations do not have to be performed on a very large number of samples. The size of buffer 1 equals Nτ.

The bandpass filter is essential in partially removing the disturbances of the ocean ambient noise. Its

goal is to filter the received signal so that only the transmitted signal’s frequency band remains. Hence,

the transmitted signal's characteristics have to be entirely known, which is the case here; in the state-of-the-art system, however, that is not true, as will be discussed in the following subsection.

Thus, the bandpass filter has a bandwidth of 2 kHz when the chirp and QAM signals are employed. When

the sinusoid is used, the filter bandwidth should be the inverse of the signal length, 20Hz. Nonetheless,

a filter with such a narrow bandwidth is difficult to realize in practice. Therefore, the filter bandwidth for the sine wave is chosen as a more practical 300 Hz. A bandpass filter with a bandwidth of 300 Hz around

the 10 kHz central frequency is typified in Fig. 3.17 by its magnitude and phase response diagrams. This

corresponds to the filter suitable to receive the sine signal; the chirp and QAM filters are quite similar

to this, only with a larger bandwidth. The bandpass filter was designed with the Matlab function fdesign.bandpass.

Still, there are some comments to be made about the filter construction. First, the central frequency

of the filter is 10 kHz because the received signal is the passband signal; the next step in the processing

network is, in fact, the frequency shift that leads to the baseband signal. Secondly, a linear phase FIR

filter must be chosen. In fact, when passing through a filter, each frequency component of a signal is delayed according to the derivative of the phase response (the group delay). A linear-phase FIR filter is therefore preferable over an IIR filter: the former is inherently stable and, more importantly, it ensures that all frequency components in the passband are equally delayed, thus preserving the shape of (weakly attenuated) signals. That is, the bandpass filter has to be linear phase so that there is no signal


Figure 3.17: Magnitude and phase responses of the sinusoid bandpass filter. The central frequency is 10 kHz, the 3 dB bandwidth is 300 Hz, and the 6 dB bandwidth is 600 Hz. The attenuation in the lower and upper stopbands is 50 dB.

distortion due to the time delay of frequencies relative to one another.

The bandpass filter output ybp is a function of its input y and the filter coefficients bi:

ybp[n] = Σ_{i=0}^{N} bi y[n − i] , (3.15)

where N is the filter order. In the sine case the filter order is 566, whereas for the QAM and chirp

signals the filter order is 100.
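For reference, a linear-phase FIR bandpass similar to the chirp/QAM filter (2 kHz around 10 kHz) can be sketched with scipy's firwin in place of fdesign.bandpass; the tap count and window are illustrative and do not replicate the thesis's exact design:

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 50_000
# order-100 linear-phase FIR bandpass (101 taps), Hamming window by default
b = firwin(101, [9_000, 11_000], pass_zero=False, fs=fs)

n = np.arange(5000)
tone_in = np.cos(2 * np.pi * 10_000 / fs * n)    # in the passband
tone_out = np.cos(2 * np.pi * 20_000 / fs * n)   # deep in the stopband

# steady-state output amplitudes; lfilter implements (3.15)
gain_in = np.abs(lfilter(b, 1.0, tone_in)[-1000:]).max()
gain_out = np.abs(lfilter(b, 1.0, tone_out)[-1000:]).max()
print(gain_in, gain_out)   # near 1 in-band, strongly attenuated out-of-band
```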

Additionally, one is now in a position to calculate the ocean noise power spectral density in Watts. In

fact, the noise power formulas given in Subsection 2.1.5 have units of dB re 1µPaHz−1. Assuming that

the noise power spectrum is constant over the bandpass filter bandwidth, the noise power in Watts is

determined as

P(yn[n]) = (p0² / (ρc)) N(f) Bf , (3.16)

where p0 is the reference pressure of 1µPa, ρ and c are the water density and the sound speed taken

at the receiver position (with the aid of the geoacoustic database), N(f) is the noise power from (2.12)

(in linear units) and Bf is the filter bandwidth.

The demodulator block is detailed in Fig. 3.18. The demodulator must not only frequency-shift the

received signal back to baseband, but also make up for the unknown phase delay φ between the trans-

mitter and the receiver oscillators. Accordingly, the output of the demodulator relates to the generated

signal s[n] as


ybb[n] = s[n] cos(φ) + j s[n] sin(φ) = s[n] e^(jφ) . (3.17)

Figure 3.18: Demodulator block. The output of the passband filter ybp[n] is multiplied by a sine and a cosine, the in-phase and quadrature components are summed, and the demodulator output ybb[n] is obtained.

The in-phase and quadrature signals are frequency-shifted versions of the passband signal enter-

ing the demodulator. Thus, their frequency spectrum lies around the generated signal’s s[n] nominal

frequency – they are the real and imaginary parts, respectively, of the baseband transmitted signal. In

effect, one may conceptually multiply the quadrature signal by j and sum its in-phase equivalent to ob-

tain the complex baseband transmitted signal. The phase offset φ between the transmitter’s and the

receiver’s carrier waves translates into a constant complex term ejφ that is harmless to this signal pro-

cessing. However, if the demodulation were done only by a cosine, this phase delay could seriously

jeopardize the signal processing chain, since φ = π/2 rad would yield a null signal at the demodulator’s

output.
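The role of the quadrature branch can be checked numerically: in the sketch below the demodulated complex signal keeps a constant magnitude for any φ. A 5-sample moving average stands in for a proper lowpass filter, and the sign of the recovered phase depends on the sine convention chosen (Python for illustration):

```python
import numpy as np

fs, fc, phi = 50_000, 10_000, np.pi / 3       # phi: unknown oscillator phase offset
n = np.arange(5000)
s = np.cos(2 * np.pi * 100 / fs * n)          # slowly varying baseband signal
ybp = s * np.cos(2 * np.pi * fc / fs * n + phi)

wc = 2 * np.pi * fc / fs
# in-phase and quadrature branches of Fig. 3.18 (factor 2 restores amplitude)
ybb = 2 * ybp * np.cos(wc * n) + 1j * 2 * ybp * np.sin(wc * n)

# a 5-sample moving average (one carrier period) rejects the 2*fc terms,
# leaving s[n] times a constant phasor, here exp(-1j*phi)
ybb_lp = np.convolve(ybb, np.ones(5) / 5, mode="same")
print(abs(ybb_lp[1000]))                      # ~|s[1000]| = 1, for any phi
```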

Next, the second buffer extends the number of samples of the signal being processed; its size is equal to Np − Nτ. Subsequently, the signal enters the matched filter, which correlates the received signal with an exact replica of the transmitted signal. Once again, note that the transmitted signal's features have to be known. The purpose of this cross-correlation is to detect the

transmitted signal amongst the ocean noise. As Fig. 3.19 shows, the correlation of a sinusoid with a sinusoid, called auto-correlation, yields a distinctive function with a high peak, unlike the

cross-correlation of a sinusoid with noise. Hence, one can use these correlation properties to distinguish

noise from signal. The auto-correlation functions of the sine, chirp and QAM signals can be compared

in Fig. 3.20.


Figure 3.19: Sinusoid auto-correlation function, and sinusoid-noise cross-correlation function.

Figure 3.20: Sinusoid, chirp and QAM auto-correlation functions.

The cross-correlation function is

ymf[n] = Rybb s[n] = Σ_{m=−∞}^{+∞} ybb*[m] s[n + m] , (3.18)

where the asterisk denotes the complex conjugate of the baseband received signal, and s is the exact

replica of the baseband transmitted signal. The cross-correlation closely resembles the convolution of two signals: whereas convolution consists of reversing a signal, shifting it and multiplying by another signal, the cross-correlation only involves shifting and multiplying (no reversal).
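The correlation-based delay measurement can be sketched in a few lines; here a random sequence stands in for the replica, and the delay is recovered from the lag of the correlation maximum (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(3)
s = rng.normal(size=200)                   # stand-in for the known replica
delay = 57                                  # hypothetical travel time in samples
y = np.concatenate([np.zeros(delay), s, np.zeros(100)])

# (3.18): shift-and-multiply; np.correlate performs no time reversal
r = np.correlate(y, s, mode="full")
lag = np.argmax(r) - (s.size - 1)
print(lag)                                  # 57
```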

Figs. 3.19 and 3.20 suggest two ways the signal detection may be done. One way to distinguish signal from noise is to divide the maximum of the correlation function by its mean, and decide that a signal is present if this ratio is above some pre-fixed threshold. This method works extremely well for the chirp and QAM signals, which have sharply peaked auto-correlation functions. By this measure, the sinusoid auto-correlation function performs worse.

The adopted solution is actually to decide that the signal is present if the maximum of the correlation

function exceeds a pre-defined threshold. In this manner, it is verified that the QAM and chirp auto-

correlation functions also perform better than the sinusoid one, as they possess higher peaks.
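A sketch of the adopted detection rule (correlation maximum against a pre-fixed threshold); the replica, sizes, noise level and threshold value here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
s = rng.normal(size=500)                    # stand-in for the replica waveform

def detect(y, s, threshold):
    """Decide a signal is present if the correlation maximum exceeds
    a pre-fixed threshold, as adopted in the text."""
    r = np.abs(np.correlate(y, s, mode="valid"))
    return r.max() > threshold, int(np.argmax(r))

noise_only = rng.normal(scale=0.5, size=2000)
with_signal = noise_only.copy()
with_signal[800:1300] += s                  # ULB pulse arriving at sample 800

threshold = 0.8 * np.sum(s ** 2)            # hypothetical threshold choice
print(detect(noise_only, s, threshold)[0])  # False: noise alone stays below
print(detect(with_signal, s, threshold)[0]) # True: the pulse is detected
```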

Once the peak detector measures a peak above the threshold, the ULB signal is detected, and its time

delay is the sample number at which the strongest peak occurred. Remember that the origin of times

(time 0) is the time instant when the reference receiver sends an interrogating signal to the transponder-

acting ULB. Therefore, the transmitter–reference-receiver distance can be determined as

rref = (tpeakref / 2) c = (Npeakref / (2fs)) c . (3.19)

The remaining receivers, which are not synchronized with the transmitter, determine their respective receiver-transmitter Round Trip Time (RTT) from the difference between the time instant at which they detected the correlation peak and the already determined RTT of the reference receiver, which is equal to its tpeak. Hence, the transmitter-

receiver distance is generally computed as

r = ((tpeak − tpeakref/2) / 2) c = ((Npeak − Npeakref/2) / (2fs)) c . (3.20)
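With hypothetical detection instants, (3.19) and (3.20) reduce to simple arithmetic (Python for illustration; the sample indices are invented):

```python
fs, c = 50_000, 1500.0       # sampling frequency (Hz) and sound speed (m/s)

N_peak_ref = 10_000          # hypothetical peak sample at the reference receiver
N_peak = 9_000               # hypothetical peak sample at another receiver

r_ref = N_peak_ref * c / (2 * fs)              # (3.19)
r = (N_peak - N_peak_ref / 2) * c / (2 * fs)   # (3.20)
print(r_ref, r)              # 150.0 60.0
```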

Fig. 3.21 outlines the peak detection.


Figure 3.21: Evolution of the correlation function at the matched filter exit. The correlation peaks indicate

that the ULB signal has been received.

Two observations are in order. Naturally, when another of the periodically transmitted ULB pulses is received, a new peak is detected. Nonetheless, that causes no trouble, as the system expects incoming signals every Tp − τ seconds (Np − Nτ samples); in fact, it merely confirms that the first detection was correctly made. So, only the first detected peak counts for the estimation of the sound waves'

RTT.

Furthermore, the peak detector assigns as tpeak the time instant at which the strongest incoming ray ar-

rives. That often corresponds to the direct-path ray, hence providing a good measure of the transmitter-

receiver distance, since the direct-path beam follows a roughly straight line from the emitter to the re-

ceiver in typical ULB localization scenarios, where the ULB is lodged at the ocean bottom and the

receivers are placed near the surface, as described in Section 2.1. Whenever the direct ray does not arrive, or in case it is not the strongest one, another ray's time delay is taken as the signal's arrival time. Though this results in some inaccuracy, there is no way of telling which ray path the strongest arriving beam took.

As a final remark, it should be mentioned that the ray-arrival-pattern induced by multipath propagation

could be used to virtually increase the dimension of the receiving arrays, if we could match each ray ar-

rival with a certain ray path (RSR, SRBR, etc.), which we cannot. This array mirroring method is outlined

in Fig. 3.22.


Figure 3.22: Decomposition of the physical array into virtual surface and bottom-reflected images. Each

arrival is matched to the propagation delay between the source and the associated image hydrophone.

For simplicity, the ray paths are approximated by straight lines, and only the surface and bottom-reflected

paths are treated. The other ray paths are handled similarly.

3.2.2 State-of-the-art Receiver

The signal processing chain for receiving the state-of-the-art sine wave is drawn in Fig. 3.23.

Figure 3.23: Receiver configuration when the state-of-the-art signal (sine) is transmitted.


It is clear that this receiver structure is very similar to that of the alternative receiver. The main differences

arise from the fact that the sine frequency is not entirely known. This subsection intends to highlight the

disparities between this receiver and the alternative one.

First, buffer 1 has the same function as before, only with a smaller size: as the sampling frequency is now 85 kHz and the pulse length is 10 ms, the first buffer stores 850 signal samples. The analysis is similar for the second buffer.

The bandpass filter is designed in the same way as before. Nevertheless, a remark concerning its bandwidth is in order. Recall that the sine bandwidth is the inverse of its length, which means that the filter bandwidth should be 100 Hz, or the more realistic 300 Hz. However, since the signal frequency is only known to lie within a 2 kHz band around the central frequency of 37.5 kHz, the filter bandwidth has to be 2 kHz to cover this uncertainty. Consequently, the receiver performance decreases, since the admitted noise bandwidth unnecessarily increases. The bandpass filter frequency response is represented in Fig. 3.24, and the filter order N is 144.

Figure 3.24: Magnitude and phase responses of the state-of-the-art sinusoid bandpass filter. The central frequency is 37.5 kHz, the 3 dB bandwidth is 2 kHz, and the 6 dB bandwidth is 4 kHz. The attenuation in the lower and upper stopbands is 50 dB.

The receiver's replica of the transmitted signal is also less accurate due to the frequency uncertainty. Since the receiver does not know the transmitted signal's frequency, it has to estimate it from the periodogram of the received signal. The periodogram of the signal ybb[n] is an estimate of its spectral density:

P_{y_{bb} y_{bb}}(f_k) \equiv I(f_k) = \frac{1}{N} \left| \sum_{n=0}^{N-1} y_{bb}[n]\, e^{-j 2\pi f_k n} \right|^2 = \frac{1}{N} \left| Y_{bb}(f_k) \right|^2 ,   (3.21)


where the frequencies fk belong to the frequency interval of interest (the baseband interval), N is in this case the number of signal samples (matching the size of buffer 2), and Ybb(fk) reflects the weight of frequency fk in the signal ybb[n]. Thus, the generator replicates the baseband signal with the frequency at which the periodogram peaks. There is often an error in this estimation, especially with increasing noise power, which degrades the cross-correlation effectiveness. Therefore, the inaccuracy in the transmitted signal's replica is another source of receiver underperformance.
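The peak-picking step just described can be sketched in a few lines. Below is a minimal Python/NumPy illustration (the thesis implementation is in Matlab; the function name and the test tone are illustrative):

```python
import numpy as np

def periodogram_peak_frequency(ybb, fs):
    """Estimate the dominant frequency of ybb as the location of the
    periodogram maximum, I(f_k) = |Y_bb(f_k)|^2 / N, cf. (3.21)."""
    N = len(ybb)
    I = np.abs(np.fft.fft(ybb)) ** 2 / N      # periodogram on the FFT grid
    fk = np.fft.fftfreq(N, d=1.0 / fs)        # baseband frequency grid
    return fk[np.argmax(I)]

# Example: complex baseband tone at 300 Hz, 85 kHz sampling, 10 ms pulse
fs, f0, N = 85e3, 300.0, 850
n = np.arange(N)
ybb = np.exp(1j * 2 * np.pi * f0 * n / fs)
print(periodogram_peak_frequency(ybb, fs))   # 300.0
```

With 850 samples at 85 kHz the frequency resolution is 100 Hz, so the estimate is only accurate to within one bin, which is consistent with the estimation error discussed above.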


Chapter 4

Source Localization

As soon as the signal processing steps that compute the transmitter-receiver distance estimates are completed, it is possible to address the ULB localization. Source localization problems fall into two major classes:

• If the source transmits signals with the purpose of being detected, and eventually localized, it is called a cooperative source. This constitutes active source localization;

• If the source does not transmit signals, it can only be spotted and located if it inadvertently reflects one or more signals from sources in the environment, which may then be processed to provide the relevant source's whereabouts. Passive source localization is based on these principles.

Hence, it is evident that this work tackles active source localization. The first section of this chapter

outlines the source localization problem framework. Section 4.2 reports the TOA localization method,

whereas Section 4.3 addresses source localization based on range-difference measurements.

4.1 Formulation

Given the overlap of the symbols used in this work to describe the different approached problems, it is

convenient to clarify the notation for this chapter:

• Scalar values are represented by lowercase letters (example: x);

• Vectors are represented by boldface lowercase letters (example: x);

• Matrices are represented by boldface uppercase letters (example: A);

• The identity matrix of order n is denoted by In;


• The all-zero matrix of order n× k is designated by 0n×k;

• The transpose of a matrix H is referred to as HT ;

• The inverse of a matrix H is denoted by H−1;

• Some quantities are denoted by the same or similar letters used in previous chapters for other, unrelated quantities (example: x, y and z). Despite the potential for confusion, the symbols customarily used for each problem are retained.

Let x = [x y z]^T denote the unknown source's coordinate vector. Consider a set of m receivers, and let a_i = [x_i y_i z_i]^T denote the known coordinates of the ith receiver. Additionally, assume that the receivers are grouped in k equal arrays, each composed of m/k receivers, and spread around the source. This setup is depicted in Fig. 4.1, and constitutes the general source localization configuration for this work.

Figure 4.1: Source localization framework. In this example, there are 4 arrays of 6 receivers each, thereby yielding a total of 24 receivers (m = 24, k = 4).

4.2 TOA Algorithm

The TOA source localization algorithm estimates x from the measured source-receiver distances. In this sense, let r_i represent the noisy observation of the distance between the transmitter and the ith receiver:

r_i = \|x - a_i\| + \varepsilon_i, \quad i = 1, \ldots, m .   (4.1)


In the previous equation, \varepsilon_i stands for the error between the true source-receiver distance and the one measured at the ith receiver. These distances r_i are obtained as described in Chapter 3; consequently, the TOA algorithm is inherently an active source localization method, since these distances rely on the assumption that the source and the receivers are synchronized.

The paradigm for the source localization problems addressed in this work is to find an adequate cost function involving the known variables (the receivers' positions and the transmitter-receiver distances, in the TOA case) and the unknown parameter (the source's position), which is then determined via the minimization of the cost function f:

\min_{x} \; f(x) = \sum_{i=1}^{m} \left( \|x - a_i\|^p - r_i^p \right)^q ,   (4.2)

where the parameters p and q define the nature of the cost function. In general, smaller values of these parameters are preferred, as they place less weight on outlier measurements, enabling more accurate solutions for x. On the other hand, the smaller those exponents get, the harder it is to find the minimum of the nonconvex cost function f. The least squares methodology is often adopted (q = 2), with two particular approaches: R-LS (Range-based Least Squares, p = 1, q = 2) and SR-LS (Squared-Range-based Least Squares, p = 2, q = 2). Their respective cost functions are:

\min_{x} \; \sum_{i=1}^{m} \left( r_i - \|x - a_i\| \right)^2   (4.3)

\min_{x} \; \sum_{i=1}^{m} \left( \|x - a_i\|^2 - r_i^2 \right)^2 .   (4.4)

Ref. [23] and the references therein are useful readings on source localization algorithms. Although localization with p = 2, q = 2 is somewhat sensitive to the presence of outliers, it is used here because it leads to efficient global solution methods for the optimization problems. For that reason, the solution of the SR-LS cost function is delineated next.

Although it is nonconvex, the SR-LS cost function can be shown to have a global optimal solution that can be efficiently computed. First, let us transform (4.4) into a constrained minimization problem:

\min_{x \in \mathbb{R}^n,\; \alpha \in \mathbb{R}} \left\{ \sum_{i=1}^{m} \left( \alpha - 2 a_i^T x + \|a_i\|^2 - r_i^2 \right)^2 \; : \; \|x\|^2 = \alpha \right\} .   (4.5)

In (4.5), n is the dimension of the ambient space, naturally 3 in this case. Using y = [x^T, \alpha]^T, (4.5) can be rewritten in matrix form as

\min_{y \in \mathbb{R}^{n+1}} \left\{ \|Ay - b\|^2 \; : \; y^T D y + 2 f^T y = 0 \right\} ,   (4.6)

where


A = \begin{bmatrix} -2 a_1^T & 1 \\ \vdots & \vdots \\ -2 a_m^T & 1 \end{bmatrix}   (4.7)

b = \begin{bmatrix} r_1^2 - \|a_1\|^2 \\ \vdots \\ r_m^2 - \|a_m\|^2 \end{bmatrix}   (4.8)

D = \begin{bmatrix} I_n & 0_{n \times 1} \\ 0_{1 \times n} & 0 \end{bmatrix}   (4.9)

f = \begin{bmatrix} 0_{n \times 1} \\ 0.5 \end{bmatrix} .   (4.10)

For the algorithm to perform well, the matrix A must have full column rank, which in particular implies that A^T A is nonsingular. This can be ensured by proper placement of the sensors.

The constrained optimization problem (4.6) is also nonconvex, which would suggest that only suboptimal solutions can be provided. However, (4.6) fits the Generalized Trust Region Subproblem (GTRS) framework, since it consists of minimizing a quadratic function subject to a single quadratic constraint. GTRS problems, although commonly nonconvex, possess necessary and sufficient optimality conditions from which efficient solution methods can be derived. In [23] and the references therein it is shown that y \in \mathbb{R}^{n+1} is the optimal solution of (4.6) if and only if there exists \lambda \in \mathbb{R} such that

(A^T A + \lambda D)\, y = A^T b - \lambda f   (4.11)

y^T D y + 2 f^T y = 0   (4.12)

A^T A + \lambda D \succeq 0 .   (4.13)

Accordingly, the optimal solution of the SR-LS cost function is

y(\lambda) = (A^T A + \lambda D)^{-1} (A^T b - \lambda f) .   (4.14)

Before computing (4.14) it is obviously necessary to obtain \lambda, which is done by solving

\varphi(\lambda) \equiv y(\lambda)^T D\, y(\lambda) + 2 f^T y(\lambda) = 0, \quad \lambda \in I .   (4.15)

The interval I consists of all \lambda for which A^T A + \lambda D is positive definite, which immediately implies that

I = \left( -\frac{1}{\lambda_1(D, A^T A)}, \; \infty \right) ,   (4.16)

where \lambda_1 is the first generalized eigenvalue of the matrix pair (D, A^T A). The SR-LS estimate of the source's position is given by the first n components of the solution vector y in (4.14).

In the Matlab implementation, once the matrices are all defined, the solution of (4.15) is obtained by finding the \lambda that minimizes the absolute value of the function \varphi(\lambda); the solution (4.14) then follows immediately. The bisection method is a viable and more efficient alternative for solving the equation.

4.3 TDOA Algorithm

The TDOA method estimates the source's position based on the receivers' positions and range-difference measurements. These are conceptually obtained by subtracting a common range measurement to a reference sensor (receiver 1) from the range measurements:

d_i = r_i - r_1 .   (4.17)

In practice, the TDOA algorithm is used when it is not possible to determine the time origin, i.e., when one does not know the time instant at which the original transmission occurred. The range-difference measurements then result from time-difference-of-arrival measurements at the receivers.

When applying the TDOA algorithm the reference receiver is located at the origin. Therefore, the coordinate system must be shifted to the Reference-Receiver-Centred (RRC) frame before, and shifted back after, performing the TDOA technique. Moreover, the TDOA method does not depend on a cooperative source, since the range differences can be measured if the source reflects other parasite signals. Hence, the TDOA algorithm is suitable for both active and passive localization, which gives this scheme an advantage over its TOA counterpart.
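The frame shift and the range-differencing step can be sketched as follows (a Python sketch; the original processing is in Matlab and the names here are illustrative):

```python
import numpy as np

def to_rrc(anchors, ranges):
    """Shift receiver coordinates to the Reference-Receiver-Centred
    (RRC) frame and form the range differences d_i = r_i - r_1,
    receiver 1 being the reference (it maps to the origin)."""
    a_ref = anchors[0].copy()
    a_rrc = anchors - a_ref            # a'_i in the RRC frame
    d = ranges - ranges[0]             # range-difference measurements (4.17)
    return a_rrc, d, a_ref

# After the TDOA solver returns the RRC estimate x_rrc, the position in
# the original frame is recovered as x = x_rrc + a_ref.
anchors = np.array([[1.0, 2.0, 3.0], [4.0, 6.0, 3.0]])
ranges = np.array([10.0, 12.0])
a_rrc, d, a_ref = to_rrc(anchors, ranges)
```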

Source localization from range-difference measurements is tackled in [23] and the references therein. Specifically, the SRD-LS approach is developed to show that the problem can be solved satisfactorily. This methodology is presented next.

The range-difference measurements can be written as

d_i = \|x - a_i'\| - \|x\|, \quad i = 1, \ldots, m ,   (4.18)

which, when squared, yields the following equation in the vector x:

-2 d_i \|x\| - 2 a_i'^T x = d_i^2 - \|a_i'\|^2, \quad i = 1, \ldots, m ,   (4.19)

where a_i' denotes the ith receiver position in the RRC frame. Thus, a reasonable way to estimate x is via the minimization of the LS criterion, in the same spirit as SR-LS:


\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{m} \left( -2 a_i'^T x - 2 d_i \|x\| - g_i \right)^2 .   (4.20)

The parameter g_i stands for g_i = d_i^2 - \|a_i'\|^2. Similarly to the SR-LS problem, it is convenient to reformulate (4.20) as a constrained LS problem with y = [x^T, \|x\|]^T:

\min_{y \in \mathbb{R}^{n+1}} \left\{ \|By - g\|^2 \; : \; y^T C y = 0, \; y_{n+1} \geq 0 \right\} ,   (4.21)

where

B = \begin{bmatrix} -2 a_1'^T & -2 d_1 \\ \vdots & \vdots \\ -2 a_m'^T & -2 d_m \end{bmatrix}   (4.22)

C = \begin{bmatrix} I_n & 0_{n \times 1} \\ 0_{1 \times n} & -1 \end{bmatrix} .   (4.23)

As in (4.7), matrix B should have full column rank.

The SRD-LS problem (4.21) has two constraints (the linear constraint being a special case of a general quadratic constraint), while the SR-LS problem involves only a single quadratic constraint. So, unlike for GTRS problems, there are no known necessary and sufficient optimality conditions for nonconvex quadratic optimization problems with two quadratic constraints. Still, it is proved in [23] that y \in \mathbb{R}^{n+1} is an optimal solution of (4.21) if there exists \lambda \in \mathbb{R} such that

(B^T B + \lambda C)\, y = B^T g   (4.24)

B^T B + \lambda C \succeq 0   (4.25)

y^T C y = 0, \quad y_{n+1} \geq 0 .   (4.26)

These conditions follow from the KKT (Karush-Kuhn-Tucker) conditions. Accordingly, the optimal solution of the SRD-LS cost function is

z \equiv y(\lambda) = (B^T B + \lambda C)^{-1} B^T g ,   (4.27)

where \lambda is obtained by solving

y^T C y = 0 .   (4.28)

If z satisfies the condition zn+1 ≥ 0, then z is a global optimal solution of (4.21). Otherwise, that

solution has to be found via another procedure.


Before proceeding, it is pertinent to define the intervals

I_0 = (\alpha_0, \infty)   (4.29)

I_1 = (\alpha_1, \alpha_0)   (4.30)

I_2 = (\alpha_2, \alpha_1) ,   (4.31)

where

\alpha_i = -\frac{1}{\lambda_i(C, B^T B)}, \quad i = 1, \ldots, n   (4.32)

\alpha_0 = -\frac{1}{\lambda_{n+1}(C, B^T B)} .   (4.33)

The generalized eigenvalues of the matrix pair (C, B^T B) are denoted by \lambda_i, i = 1, \ldots, n+1. Since B^T B is positive definite and C has one negative eigenvalue and n strictly positive eigenvalues, it follows that \alpha_0 is positive while \alpha_i, i = 1, \ldots, n, are negative:

\alpha_n \leq \alpha_{n-1} \leq \cdots \leq \alpha_1 < 0 < \alpha_0 .   (4.34)

It is demonstrated in [23] that I_1 is the set of all \lambda for which B^T B + \lambda C is positive definite. Furthermore, the union I_0 \cup I_2 is the set of all \lambda for which B^T B + \lambda C has exactly one negative eigenvalue and n positive eigenvalues.

Thus, a routine can be defined to robustly obtain the global optimal solution of (4.20):

1. Apply (4.27) and (4.28) to determine the vector z \in \mathbb{R}^{n+1}. If z_{n+1} \geq 0, stop; the output of the procedure is the vector made of the first n components of z;

2. Find all roots \lambda_1, \ldots, \lambda_p of

y(\lambda)^T C\, y(\lambda) = 0, \quad \lambda \in I_0 \cup I_2   (4.35)

for which the (n+1)th component of y(\lambda_i) is non-negative;

3. Let z be the vector with the smallest objective function value among the vectors 0, y(\lambda_1), \ldots, y(\lambda_p);

4. The output of the procedure is the vector made of the first n components of z.

The only implementation difficulty of the described procedure is finding all the roots of (4.35). Note that since B^T B is positive definite, B^T B and C can be simultaneously diagonalized, i.e., there exists a nonsingular (n+1) \times (n+1) matrix P such that


P^T B^T B\, P = \mathrm{diag}(\gamma_1, \ldots, \gamma_{n+1})   (4.36)

P^T C\, P = \mathrm{diag}(\delta_1, \ldots, \delta_{n+1}) .   (4.37)

Thereafter, (4.35) reads

\frac{N(\lambda)}{D(\lambda)} \equiv \sum_{j=1}^{n+1} \frac{f_j^2 \delta_j}{(\gamma_j + \lambda \delta_j)^2} = 0 ,   (4.38)

where f = P^T B^T g. It is immediate that

\sum_{j=1}^{n+1} \frac{f_j^2 / \delta_j}{(\lambda + \gamma_j / \delta_j)^2} = 0 .   (4.39)

Furthermore, the rational function in (4.39) is already expanded as a sum of simple fractions. In Matlab, this partial fraction expansion is converted to polynomial form using the residue function. The numerator N(\lambda) has order at most 2[(n+1) - 1] = 2n, and the denominator has order 2(n+1). The Matlab function roots then provides the roots of the resulting polynomial equation.

Another way of solving (4.38) is to multiply the equation by the product of all the denominators, \prod_{j=1}^{n+1} (\gamma_j + \lambda \delta_j)^2, transforming it into a polynomial equation of order 2n:

\sum_{j=1}^{n+1} f_j^2 \delta_j \prod_{k=1,\, k \neq j}^{n+1} (\gamma_k + \lambda \delta_k)^2 = 0 .   (4.40)
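This second route can be sketched directly with NumPy polynomials (the thesis uses Matlab's residue and roots functions; this Python version, with illustrative names, follows the same idea):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def secular_roots(gamma, delta, fvec):
    """Roots of the polynomial form (4.40) of the secular equation:
    sum_j f_j^2 delta_j prod_{k != j} (gamma_k + lam delta_k)^2 = 0."""
    m = len(gamma)
    total = np.zeros(1)
    for j in range(m):
        term = np.array([fvec[j] ** 2 * delta[j]])         # scalar factor
        for k in range(m):
            if k != j:
                lin = np.array([gamma[k], delta[k]])       # gamma_k + lam*delta_k
                term = P.polymul(term, P.polymul(lin, lin))
        total = P.polyadd(total, term)
    coeffs = total[::-1]                                   # descending powers
    lead = np.flatnonzero(np.abs(coeffs) > 1e-12 * np.abs(coeffs).max())
    return np.roots(coeffs[lead[0]:])                      # drop vanishing leads

# Tiny check with n + 1 = 2: (2 - lam)^2 - (1 + lam)^2 = 3 - 6 lam, root 0.5
print(secular_roots([1.0, 2.0], [1.0, -1.0], [1.0, 1.0]))
```

The leading coefficients of the assembled polynomial can cancel (the order drops from 2(n+1) to at most 2n), which is why the sketch trims vanishing high-order terms before calling the root finder.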


Chapter 5

Results

5.1 Scenarios

The simulation results were obtained for three different localization setups. These localization scenarios are deterministic rather than random or stochastic. This is inherent to the nature of the crashed-aircraft localization problem, as discussed in Chapter 1: there is a well-defined search radius and coordinated operation of search vessels around the wreckage's expected position.

Along these lines, three topologies are defined:

1. Scenario 1: 4 receiving arrays, each composed of 10 hydrophones, symmetrically spread around the source, as sketched in Fig. 5.1;

2. Scenario 2: 4 receiving arrays, each composed of 10 hydrophones, spread around the source in a non-symmetrical fashion, as laid out in Fig. 5.2;

3. Scenario 3: 4 receiving arrays, each composed of 10 hydrophones, spread inefficiently, since the source does not lie inside their convex hull, as depicted in Fig. 5.3.

All of these cases are characterized by a large depth discrepancy between the source and the receivers, located at the sea floor and the sea surface, respectively.

The source is fixed at 38°N, 33°W, sunk at a depth of 2296 m, matching the ocean depth at that geographical position. These coordinates correspond to a point in the North Atlantic. Since Bellhop does not allow range-dependent ocean properties, the bottom sediment properties and the sound speed profile used in the simulation are chosen at the source position, and are straightforwardly extracted from the geoacoustic database.

The localization scenarios are set up in the source-centred frame, with the receivers placed according to the intended topology; the receivers' depths vary between 6 m and 42 m, with a 4 m step between consecutive depths. Then, the receivers' coordinates are transformed to the ECEF coordinate system; thereafter, the arrays' LLA coordinates are determined by converting the Cartesian coordinates to ellipsoidal ones, in the WGS84 datum. Subsequently, the bottom bathymetry is easily derived from the GEBCO database. Note, however, that the localization problem is tackled in the source-centred frame; notably, the localization estimates are provided in that coordinate system. The frame changes just pointed out are performed only to define the bottom bathymetry.

Figure 5.1: Localization scenario 1. The sea-surface-placed receivers are in a symmetrical configuration around the centred ocean-bottom-lodged source.

Figure 5.2: Localization scenario 2. The sea-surface-placed receivers are in a non-symmetrical configuration around the centred ocean-bottom-lodged source.


Figure 5.3: Localization scenario 3. The sea-surface-placed receivers are in a badly-designed configu-ration not surrounding the ocean-bottom-lodged source.

Figure 5.4: Earth-centred earth-fixed, ellipsoidal and source-centred frames.

5.2 Range Estimation

The range estimation algorithms are evaluated using localization scenario 1. In this manner, it is easy to check how the transmitter-hydrophone distance affects the observations. As the transmitter-receiver distances vary, the signal-to-noise ratio at the receivers varies accordingly, so the evolution of the measured distances with the SNR at the receiver is inherently obtained.


Figure 5.5: Range estimation error as a function of the ULB-hydrophone distance. The range error increases with the arrays' range. The sinusoid outperforms all the other signals, whereas the SA sinusoid presents the worst results.

Figure 5.6: Range estimation error as a function of the SNR at the receiver. There is an increase of therange-error with decreasing signal-to-noise ratio at the receiver.

The errors presented in Figs. 5.5 and 5.6 are the mean of the resulting errors for the 4 arrays, and are given in absolute and relative values, respectively. The estimated ranges deviate from the real transmitter-receiver distances for two reasons:


1. The ray paths do not follow straight lines from the emitter to the receiver – the curvilinear ray

trajectories are longer than the rectilinear transmitter-receiver distance;

2. The receiving apparatus is not perfect, and therefore the noise introduced by the UAC will cause

the receivers to calculate the distances with a certain imprecision. In fact, it is verified that the

distance pattern measured within an array is non-coherent; an example of this non-coherence is a

deeper hydrophone measuring a higher distance than a shallower one.

In this light, the presented graphs are easy to understand. The range error increases with increasing range because the difference between the curvilinear and rectilinear paths connecting the ULB to the hydrophones increases, while the signal-to-noise ratio decreases. Furthermore, there is a threshold transmitter-receiver distance and signal-to-noise ratio up to which the ranges are reasonably estimated. Once that threshold is crossed, the error quickly grows with the ULB-hydrophone distance.

Naturally, the state-of-the-art system performs much worse than the alternatives: increasing the transmitter-receiver distance past 3 km severely deteriorates the precision of the estimated ranges for the SA sine, whereas this threshold is approximately 13 km for the remaining signals. Thus, the maximum ULB-hydrophone range at which detection is accomplished is extended by 10 km.

Based on Fig. 5.6, note that the evolution of the error with the SNR is approximately equal for all the signals. Thus, the SNR decreases much faster with the emitter-receiver range when the SA sine is employed, due to its high frequency. On the other hand, the sinusoid performs better with the ULB-hydrophone distance than the other signals because of its bandpass filter's narrow bandwidth (300 Hz), contrasting with the 2 kHz bandwidth of the chirp and QAM signals. Hence, the sinusoid's bandpass filter best limits the received noise power.

5.3 Source Localization

It was confirmed in this simulation work that, in order to provide good estimates of the source's position, the source localization algorithms need the estimated ranges within a receiving array to be coherent. However, as seen in the previous section, the receiver structure is susceptible to noise disturbances. Two corrections are applied to counter the noise-induced deviations of the measured ranges:

1. For each array, the shallowest receiver is assigned the mean of the estimated ranges in that array, and lower, coherent distances are attributed to the subsequent deeper hydrophones. It is assumed that the ULB is sunk at the mean ocean depth of the analyzed area, from which the angle of arrival of the direct path is readily obtained from the measured array-source range;

2. A correction of the measured distances is introduced to reflect the non-straightness of the ray paths. Thus, the estimated ranges are multiplied by an empirical factor to yield more correct distances; the most adequate factor was found to be 0.9925.

This data treatment is legitimate since it is certain that the ULB is lodged at the ocean bottom, and hence the deeper hydrophones are necessarily closer to it than the shallower ones. In addition, the range scaling by a factor of 0.9925 is justifiable, as one knows beforehand that the ray paths are not straight lines.
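The two corrections above can be sketched as follows. The 0.9925 factor and the array depths come from the text; the function name and the details of the geometric reconstruction are illustrative assumptions (the original post-processing is in Matlab):

```python
import numpy as np

SCALE = 0.9925  # empirical straight-line correction factor from the text

def make_coherent(ranges, depths, source_depth):
    """Replace the per-hydrophone range estimates of one array by a
    coherent, monotone set: the shallowest receiver is assigned the
    array mean, and deeper receivers get geometrically consistent
    (shorter) ranges, assuming the ULB sits at source_depth."""
    order = np.argsort(depths)                 # shallowest receiver first
    r0 = np.mean(ranges)                       # range assigned to the shallowest
    z0 = depths[order[0]]
    # horizontal offset implied by r0 and the depth difference
    horiz2 = max(r0**2 - (source_depth - z0)**2, 0.0)
    coherent = np.sqrt(horiz2 + (source_depth - depths)**2)
    return SCALE * coherent

# Example: a 10-hydrophone array at 6..42 m depth above a 2296 m deep source
depths = np.arange(6.0, 43.0, 4.0)
noisy = 5000.0 + np.random.default_rng(1).normal(0.0, 30.0, depths.size)
r = make_coherent(noisy, depths, 2296.0)
# deeper hydrophones now report strictly smaller ranges
```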

Topologies 2 and 3 were chosen for the source localization study because they are in theory more challenging than scenario 1. Therefore, if the localization algorithms behave well in these scenarios, then topology 1 should pose no problem.

Localization scenario 2 is used to compute the localization error versus the size of the convex hull, that is, the maximum array-source range in a given situation. The arrays' distances to the ULB differ by 800 m between consecutive arrays; referring to Fig. 5.2, this means that if range 1 equals e.g. 6 km, then range 2 is 5.2 km, range 3 is 4.4 km and range 4 equals 3.6 km. When the dimension of the convex hull increases, the farthest array's distance to the emitter increases, and the other arrays' ranges increase accordingly.

Topology 3 is used to evaluate the influence of the ULB source level on the localization error. In this problem, the arrays are fixed at coordinates a_i = [13500 2500 z_i]^T, i = 1, \ldots, 10 (array 1), a_i = [7500 8500 z_i]^T, i = 11, \ldots, 20 (array 2), a_i = [1500 2500 z_i]^T, i = 21, \ldots, 30 (array 3), and a_i = [7500 -3500 z_i]^T, i = 31, \ldots, 40 (array 4). Recall that the source is placed at x = [0 0 2296]^T, and the depths z_i vary between 6 m and 42 m. Array 1 is the farthest from the source, at approximately 14 km.

The localization errors in the following figures are the norm of the absolute error vector:


Figure 5.8: TDOA source localization error as a function of the dimension of the receiving convex hull

(topology 2). There is an increase of the TDOA source localization error with increasing convex hull size.

Figure 5.9: TDOA source localization error as a function of the dimension of the receiving convex hull

(topology 2), zoomed at the lower convex hull sizes for which the TDOA source localization error is very

small when the alternative signals are employed.

Foremost, it is clear that the localization error increases with increasing transmitter-receiver distance, as well as with decreasing ULB power. This is due to the increasing range estimation error, which subsequently affects the computed source position. Another straightforward observation is that the TOA method offers much greater accuracy than its TDOA equivalent. This is well illustrated by the y-axis scale in Fig. 5.8. It is therefore the recommended source localization methodology for this type of problem, if transmit schedules are perfectly known.

Figure 5.10: TOA source localization error as a function of the transmitter's source level (topology 3). The TOA source localization error decreases with increasing ULB power.

Figure 5.11: TOA source localization error as a function of the transmitter's source level (topology 3), zoomed at the source levels for which the TOA source localization error is very small when the alternative signals are employed.

Regarding Figs. 5.7 to 5.9, once the ocean noise begins to dominate the range estimation process, the localization algorithms become erratic. That is, since they are given very noisy range estimates, their output is also nearly random. Hence, if the same simulation is performed repeatedly, under identical conditions, the outputs of the TOA and TDOA algorithms will diverge. That explains the non-logical evolution of the localization errors at the largest convex hull sizes (from 3 km in the SA sine case, and from 13 km for the remaining signals).

Figure 5.12: TDOA source localization error as a function of the transmitter's source level (topology 3). The TDOA source localization error decreases with increasing ULB power.

Figure 5.13: TDOA source localization error as a function of the transmitter's source level (topology 3), zoomed at the source levels for which the TDOA source localization error is very small when the alternative signals are employed.


Still referring to Figs. 5.7 to 5.9, we observe that, for a convex hull size no bigger than approximately 14 km, the localization error is less than 500 m when the TOA algorithm and the alternative signals are employed. The SA signal performs much worse, only guaranteeing a good estimate of the source's position for convex hull dimensions under 3 km. The TDOA technique provides fairly good estimates up to a convex hull dimension of approximately 13 km for the alternative signals, and 2.5 km for the state of the art.

Concerning Figs. 5.10 to 5.13, the evolution of the localization error with ULB power is asymptotic, meaning that there is a set of transmitter acoustic powers for which the best performance is achieved. The minimum of these values is approximately 165 dB re 1 µPa @ 1 m for the alternative signals, whereas source localization in the SA case is only successful from approximately SL = 305 dB re 1 µPa @ 1 m.

Although our simulations do not cover the possibility of transmitting a digital message containing information such as the FDR depth, such a message could easily be used to confirm the z coordinate of our localization estimates.


Chapter 6

Conclusions & Future Work

This thesis tackled the localization of an aircraft following an at-sea plane crash. Chapter 1 introduced this problem, highlighting the aircraft catastrophes that motivated this work. Specifically, the well-known AF 447 accident over the North Atlantic was described, which triggered a committed effort by the international aviation safety community to improve the localization of submerged aircraft.

Chapter 2 presented the main characteristics of the ocean environment as an acoustic channel. It was clearly established that the underwater acoustic channel may be a very distorting communications channel, preventing long-range wireless communications, though in typical crashed-and-sunk-aircraft situations the UAC is not that adverse. The computer program Bellhop, with which underwater sound propagation is simulated in this work, was also addressed, as well as the geoacoustic database describing the properties of the world's oceans.

Chapter 3 listed the steps that lead to the determination of the ULB–hydrophone distances. All the signal processing techniques carried out at both the transmitter and the receiver, except for the actual range-based source localization algorithms, were detailed there, and it was pointed out that the QAM and chirp signals should reject the ocean ambient noise better than the sinusoid, because they have narrower auto-correlation functions. This prediction was not confirmed, however, because the sinusoid benefits from a narrower bandpass filter at the receiver. The shortcomings of the state-of-the-art system were also pointed out.

The source localization algorithms proposed in this work were described in Chapter 4. The TOA methodology is simpler than its TDOA equivalent, and both are expected to perform better than the currently used AOA technique, as well as to simplify field operations by avoiding receiver directionality issues.
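To make the range-based idea concrete, the snippet below sketches a plain Gauss-Newton solver for the 2-D TOA problem. It is an illustrative toy with made-up hydrophone positions and noiseless ranges, not necessarily the exact formulation implemented in Chapter 4:

```python
import numpy as np

def toa_localize(anchors, ranges, x0, n_iter=25):
    """Gauss-Newton refinement of a 2-D source position from absolute ranges.

    anchors: (m, 2) hydrophone positions; ranges: (m,) measured distances;
    x0: (2,) initial guess, ideally inside the convex hull of the anchors.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        d = np.linalg.norm(anchors - x, axis=1)    # predicted ranges
        J = (x - anchors) / d[:, None]             # Jacobian of d w.r.t. x
        r = ranges - d                             # range residuals
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        x += step
    return x

# Toy scenario: four hydrophones on a 10 km square, source inside the hull.
anchors = np.array([[0.0, 0.0], [10e3, 0.0], [0.0, 10e3], [10e3, 10e3]])
source = np.array([3e3, 4e3])
ranges = np.linalg.norm(anchors - source, axis=1)
print(toa_localize(anchors, ranges, x0=[5e3, 5e3]))  # recovers [3000, 4000]
```

With noiseless ranges and the source inside the convex hull the iteration converges in a handful of steps, which is consistent with the observation in Chapter 5 that interior sources are localized more reliably.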

The main contributions of this thesis were presented in Chapter 5. Section 5.2 showed that the state-of-the-art system is inadequate when the distance separating the ULB from the hydrophones is too large, as the ULB signal's high frequency prevents propagation to long ranges due to high path losses. Additionally, it was verified that the proposed alternatives increase by approximately 10 km the maximum range at which signal detection is accomplished. Here the sinusoid outperformed the other signals because its receiver's bandpass filter has a narrower bandwidth. Furthermore, Section 5.3 allowed us to compare the implemented TOA and TDOA algorithms, clearly demonstrating that the


range-based methodology is naturally more accurate than its range-difference-based counterpart. This is due to the TDOA algorithm's susceptibility to measurement accuracy, as it depends on very subtle delay differences between sensors. The standardized source level of 160.5 dB re 1µPa@1m permits correct source localization for a maximum convex hull dimension of approximately 3 km and 14 km for

the SA sine and the proposed alternatives, respectively, when the TOA technique is used; these values

decrease to 2.5 km and 13 km in the TDOA case. It was also seen that an increase of the ULB’s source

level is advantageous, as the localization error decreased with increasing ULB power. We also found

that the source localization algorithms work better if the source lies inside the convex hull; this finding is

corroborated by several authors who study range-based localization.
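The noise-amplification argument behind the TOA/TDOA gap can be checked with a toy Monte Carlo run: subtracting two arrival times removes the unknown emission instant but sums two independent measurement errors, so range-difference observations carry roughly twice the variance of the ranges themselves. The noise level and ranges below are arbitrary, and this is a statistical sketch rather than the full Chapter 5 simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0        # std. dev. of each individual range measurement (m)
n = 200_000        # number of Monte Carlo trials

r1 = 5000.0 + sigma * rng.standard_normal(n)  # noisy range to hydrophone 1
r2 = 7000.0 + sigma * rng.standard_normal(n)  # noisy range to hydrophone 2
diff = r1 - r2                                # TDOA-style observation

print(r1.std())    # close to sigma
print(diff.std())  # close to sqrt(2) * sigma: differencing inflates the noise
```

The sqrt(2) inflation of the input noise is one simple reason why the TDOA solver tolerates smaller convex hulls than the TOA solver before its estimates degrade.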

In conclusion, we propose decreasing the frequency of the ULB's transmitted acoustic signal. We also strongly advise employing the TOA algorithm to estimate the source's position, which implies that the ULB must function as a transponder.

Further work on this topic involves:

1. Exploring the pulse repetition feature to smooth the signal delay measurements over time, which should provide more accurate range estimates;

2. Developing and testing a potentially more precise range-based source localization algorithm, based on a cost function with p = 1, q = 1. In this way, the localization error observed when the range estimates break down would be lessened;

3. Developing a source localization algorithm constraining the z coordinate. In this manner, the depth information contained in the proposed ULB digital message would be fully exploited;

4. Expanding our simulation tool to compare a model of the environment with the real data, as seen in Fig. 6.1, and adjust the model parameters accordingly [24]. When the discrepancies between the observed data and the model data are small enough, the model is a good approximation of reality, and the source's position is determined. The most crucial unresolved aspect of this inverse-problem methodology is how to build the comparison block.
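Item 1 above could be prototyped in a few lines. The sketch below applies a first-order exponential smoother to a sequence of per-pulse range estimates; the smoothing constant, noise level, and pulse count are arbitrary choices, and a Kalman filter would be a natural refinement:

```python
import numpy as np

def smooth_ranges(raw, alpha=0.2):
    """Exponentially smooth per-pulse range estimates.

    alpha weights the newest measurement; 1 - alpha weights the running state.
    """
    out = np.empty_like(raw, dtype=float)
    out[0] = raw[0]
    for k in range(1, len(raw)):
        out[k] = alpha * raw[k] + (1 - alpha) * out[k - 1]
    return out

rng = np.random.default_rng(1)
true_range = 4000.0                                  # m, static ULB
raw = true_range + 25.0 * rng.standard_normal(100)   # one estimate per pulse
sm = smooth_ranges(raw)
print(raw[20:].std(), sm[20:].std())  # the smoothed series fluctuates far less
```

Because the ULB repeats its pulse periodically and the wreck is static, each new pulse is a fresh measurement of the same range, so even this naive filter reduces the per-pulse scatter substantially.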


Figure 6.1: Block diagram of the inverse problem methodology for source localization.


Bibliography

[1] ICAO. Annex 6 to the Convention on International Civil Aviation, 9th edition.

[2] William D. Perreault and Anthony Vandyk. Did jet age come too soon? Life Magazine, pages 51–52

and 54, January 1954.

[3] Jeremy Sear. The ARL 'black box' flight recorder – invention and memory. Bachelor of Arts thesis, Faculty of Arts, The University of Melbourne, October 2001.

[4] Neil A. H. Campbell. The evolution of flight data analysis. In ASASI Regional Seminar, 2007.

[5] Aircraft Accident Report / Pan Alaska Airways, Ltd., Cessna 310C, N1812H, missing between Anchorage and Juneau, Alaska, October 16, 1972. Technical Report NTSB-AAR-73-1, National Transportation Safety Board, January 1973.

[6] Teledyne Benthos. Product Catalog 2010, 2010.

[7] Dukane Seacom. DK140 Underwater Acoustic Beacon.

[8] Interim report on the accident on 1st June 2009 to the Airbus A330-203 registered F-GZCP operated by Air France flight AF 447 Rio de Janeiro – Paris. Technical report, BEA, 2009.

[9] Enquête de sécurité sur l'accident du vol AF 447-A330-203 / récupération des enregistreurs de vol et des pièces de l'avion. Technical report, BEA, 2011.

[10] Flight data recovery working group. Technical report, BEA, 2009.

[11] Dale Green. Recovering data and voice recorders following at-sea crashes. In Oceans 2010 IEEE

– Sydney, May 2010.

[12] Steven M. Shope. Spread spectrum underwater location beacon system. United States Patent,

August 1990.

[13] David Brady and James C. Preisig. Wireless Communications: Signal Processing Perspectives,

chapter 8. Prentice Hall, 1998.

[14] Finn B. Jensen, William A. Kuperman, Michael B. Porter, and Henrik Schmidt. Computational Ocean Acoustics, chapter 1. Springer, 1994.


[15] Milica Stojanovic. On the relationship between capacity and distance in an underwater acoustic

communication channel. In WUWNet ’06 Proceedings of the 1st ACM international workshop on

Underwater networks, September 2006.

[16] Acoustic toolbox. http://oalib.hlsresearch.com/.

[17] Michael B. Porter. The KRAKEN normal mode program. Technical report, SACLANT Undersea

Research Centre, October 1995.

[18] Orlando Camargo Rodríguez. General description of the BELLHOP ray tracing program. Technical report, Physics Department, Signal Processing Laboratory, Faculdade de Ciências e Tecnologia, Universidade do Algarve, June 2008.

[19] World Ocean System (WOSS) library. Technical report, World Ocean System, August 2010.

[20] Grid Viewing and Data Access Software for GEBCO’s gridded data sets: User’s Guide, Version

2.13.

[21] John Proakis and Masoud Salehi. Digital Communications. McGraw-Hill Science/Engineering/Math, 5th edition, November 2007.

[22] Fernando D. Nunes. Telecomunicações course notes, 2007.

[23] Amir Beck, Petre Stoica, and Jian Li. Exact and approximate solutions of source localization prob-

lems. IEEE Transactions on Signal Processing, 56(5), May 2008.

[24] Ehsan Zamanizadeh, João Gomes, and José M. Bioucas-Dias. Source localization from time-differences of arrival using high-frequency communication signals. In Oceans 2011 IEEE/MTS – Kona, September 2011.

[25] Peter Ashford. Flight data recorders: Built, tested to remain intact after a crash. Avionics News,

47(3):68–69, 2010.

[26] David Warren. A device for assisting investigation into aircraft accidents. Mechanical Engineering

Technical Memorandum 142, Aeronautical Research Laboratories, 1954.

[27] Robert J. Urick. Principles of Underwater Sound. Peninsula Pub, August 1996.

[28] Interim report no. 2 on the accident on 1st June 2009 to the Airbus A330-203 registered F-GZCP operated by Air France flight AF 447 Rio de Janeiro – Paris. Technical report, BEA, 2009.

[29] BEA – AF447 accident. http://www.bea.aero/fr/enquetes/vol.af.447/vol.af.447.php.

[30] Lawrence D. Stone, Colleen Keller, Thomas L. Kratzke, and Johan Strumpfer. Search analysis for the location of the AF447 underwater wreckage. Report to the Bureau d'Enquêtes et d'Analyses pour la Sécurité de l'Aviation Civile, METRON Scientific Solutions, January 2011.

[31] Steven M. Kay. Fundamentals of Statistical Signal Processing, volume 2: Detection Theory. Pren-

tice Hall, February 1998.


[32] The GEBCO 08 grid. Technical report, General Bathymetric Chart of the Oceans.

[33] João P. Gomes. Processamento de Sinais course slides, 2010.

[34] Alan V. Oppenheim, Ronald W. Schafer, and John R. Buck. Discrete-Time Signal Processing. Prentice Hall, January 1999.

[35] Pinar Oguz-Ekim, João P. Gomes, João Xavier, and Paulo Oliveira. Recursive localization of nodes and time-recursive tracking in sensor networks using noisy range measurements. IEEE Transactions on Signal Processing, 59(8), August 2011.
