
Research Article
The Control Packet Collision Avoidance Algorithm for the Underwater Multichannel MAC Protocols via Time-Frequency Masking

Yang Yu, Jie Shi, Ke He, and Peng Han

School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an 710072, China

Correspondence should be addressed to Yang Yu; nwpuyuy@nwpu.edu.cn

Received 24 December 2015; Revised 28 April 2016; Accepted 9 May 2016

Academic Editor: Paolo Renna

Copyright © 2016 Yang Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Establishing high-speed and reliable underwater acoustic networks among multiple unmanned underwater vehicles (UUVs) is basic to realizing cooperative and intelligent control among different UUVs. Nevertheless, unlike in terrestrial networks, the propagation speed in an underwater acoustic network is only 1500 m/s, which makes the design of underwater acoustic network MAC protocols a big challenge. In multichannel MAC protocols, data packets and control packets are transferred through different channels, which lowers the adverse effect of the acoustic channel; such protocols have gradually become a popular topic of underwater acoustic network MAC protocol research. In this paper, we propose a control packet collision avoidance algorithm utilizing time-frequency masking to deal with control packet collisions in the control channel. This algorithm is based on the sparsity of noncoherent underwater acoustic communication signals and regards collision avoidance as the separation of mixtures of communication signals from different nodes. We first measure the W-Disjoint Orthogonality of MFSK signals, and the simulation results demonstrate that there exists a time-frequency mask which can separate the source signals from the mixture of communication signals. Then we present a pairwise-hydrophone separation system based on deep networks and the location information of the nodes. Consequently, the time-frequency mask can be estimated.

1. Introduction

Underwater acoustic networks are the key technology to realize cooperative and intelligent control among multi-UUVs [1, 2]. However, compared with terrestrial wireless networks, the propagation velocity in underwater acoustic networks is only 1500 m/s, and the available bandwidth of underwater acoustic networks is very limited. Moreover, time delay, Doppler spread, and noise interference cannot be avoided either. The adverse factors mentioned above make MAC protocol design, a key technology of underwater acoustic network communication, a big challenge and restrict the improvement of underwater acoustic networks. Presently, most researchers classify underwater acoustic network MAC protocols into 3 types based on the difference in multiuser access mechanism: contention-free, contention-based, and hybrid. Figure 1 illustrates the existing underwater acoustic network MAC protocols and their categorization [3, 4].

Considering the long time delay of the underwater acoustic channel, maintaining real-time state between adjacent nodes is difficult. Concise contention-free protocols were the first used in underwater acoustic networks; they include frequency division multiple access (FDMA), time division multiple access (TDMA), and code division multiple access (CDMA). FDMA separates the available frequency band into different subbands and allocates a subband specifically to each node, which is simple and reliable. However, its low bandwidth availability ratio is a big disadvantage [3–5]. To address this problem, orthogonal frequency division multiplexing (OFDM) has been introduced into FDMA; the resulting scheme chooses a proper frequency band for communication according to different communication distances and therefore improves the bandwidth availability ratio [6, 7]. To improve channel utilization fundamentally, researchers began to study MAC protocols based on TDMA. The ST-MAC protocol, which translates the issue of

Hindawi Publishing Corporation, Discrete Dynamics in Nature and Society, Volume 2016, Article ID 2437615, 12 pages, http://dx.doi.org/10.1155/2016/2437615


Figure 1: Most researchers classify underwater acoustic network MAC protocols into 3 types: contention-free, contention-based, and hybrid.

time-slot scheduling among multiple nodes into a vertex-coloring problem, was proposed in [8]. Reference [9] puts forward the DSSS protocol, which utilizes the transmission delay of the underwater acoustic channel and arranges conflict-free transmissions concurrently. The STUMP protocol, which eases the limitation of time synchronization, was presented in [10]. Different from TDMA, CDMA protocols distinguish different users through pseudonoise codes; they have high channel utilization and a simple algorithm but still cannot avoid the inherent "near-far effect".

Compared with the above-mentioned contention-free protocols, which allocate the channel beforehand, contention-based protocols, which allocate the channel based on the needs of nodes, have higher channel utilization. The essence of contention-based protocols is channel reservation: nodes reserve channel resources through a handshake message exchange before initiating data communication [11, 12]. Multichannel MAC protocols transmit handshake messages via an independent channel, initiate information transmission among multiple node pairs concurrently, utilize the network bandwidth more fully, and reduce consumption when the network load is heavy [13–15]; they have therefore attracted the attention of researchers recently. The prospects of multichannel MAC protocols depend on solving the new problems they face, especially the collision problem in the control channel. Zhou et al. from the University of Connecticut adopt joint detection at adjacent nodes to tackle the triple hidden terminal problems typical of multichannel MAC protocols [16].

In this paper, we propose a control packet collision avoidance algorithm utilizing time-frequency masking to deal with control packet collisions in the control channel. This algorithm is based on the sparsity of noncoherent underwater acoustic communication signals and regards collision avoidance as the separation of mixtures of communication signals from different nodes. The remaining contents of the paper are organized as follows. Section 2 briefly discusses the W-Disjoint Orthogonality and the sparsity of the MFSK signal; the simulation results demonstrate that there exists a time-frequency mask which can separate the source signals

from the mixture of communication signals. Section 3 outlines the proposed separation system, discusses the low-level features we used, and gives details about the deep network, including its structure and training method. Section 4 shows the simulation results for the source separation system under different conditions, including different signal-to-noise ratios and different bandwidth ratios.

2. W-Disjoint Orthogonality of the MFSK Signals

As is well known, MFSK is a classic noncoherent communication modulation scheme that has been considered a robust modulation for the complex underwater acoustic channel. Because its bandwidth efficiency is lower than that of coherent modulations such as PSK, MFSK is not considered a good choice for the physical layer of underwater acoustic networks. However, the lower bandwidth efficiency means that the MFSK signal is sparse in the time-frequency domain. Like the speech signals discussed in [17], MFSK mixtures can be separated into several sources by using time-frequency masking. The received signal can be seen as an MFSK mixture when control packets collide, and the sparsity of the MFSK signal in the time-frequency domain offers the potential for dealing with the collision of control packets.

In this section, we focus on the W-Disjoint Orthogonality and the sparsity of MFSK signals, showing that there exists a time-frequency mask which can separate the source signals from the mixture of MFSK signals. We consider only MFSK modulation as the physical layer of the underwater acoustic networks in this paper.
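To make the sparsity claim concrete, the following sketch generates an MFSK waveform and measures how few T-F cells carry most of its energy. All parameters (4-FSK, 100 baud, carrier frequency and tone spacing, window sizes) are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

def mfsk_signal(symbols, fs=48000, baud=100, f0=8000, df=400):
    """Generate an MFSK waveform: one tone per symbol (assumed parameters)."""
    ns = fs // baud                      # samples per symbol
    t = np.arange(ns) / fs
    return np.concatenate([np.sin(2*np.pi*(f0 + s*df)*t) for s in symbols])

def stft_mag(x, nfft=512, hop=256):
    """Magnitude STFT via framed FFT with a Hann window."""
    w = np.hanning(nfft)
    frames = [x[i:i+nfft]*w for i in range(0, len(x)-nfft, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

rng = np.random.default_rng(0)
x = mfsk_signal(rng.integers(0, 4, 200))          # 200 random 4-FSK symbols
S = stft_mag(x)

# Sparsity measure: what fraction of T-F cells holds 95% of the energy?
p = np.sort((S**2).ravel())[::-1]
frac = np.searchsorted(np.cumsum(p), 0.95*p.sum()) / p.size
print(f"fraction of T-F cells carrying 95% of energy: {frac:.3f}")
```

A small fraction (a few percent) indicates that each frame's energy is concentrated in a handful of tone bins, which is exactly the sparsity the masking approach relies on.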

Analogous to the model of a speech mixture, the model of the MFSK mixture can be written as

$$x(t) = \sum_{i=1}^{N} s_i(t). \quad (1)$$


With the short-time Fourier transform, we obtain the model of the MFSK mixture in the time-frequency domain:

$$X(m,f) = \sum_{i=1}^{N} S_i(m,f), \quad (2)$$

where $m = 1, \ldots, M$ and $f = 1, \ldots, F$ are the indexes of the time frame and frequency point, respectively. Assuming $X(m,f)$ is W-Disjoint Orthogonal, at most one of the $N$ node signals will be nonzero for a given $(m,f)$. To separate a node signal from the mixture $X(m,f)$, we create a time-frequency mask for each node and apply these masks to the mixture to obtain the original node signals. For instance, defining the mask $\mathbf{M}_i$ for node $i$ as

$$\mathbf{M}_i(m,f) = \begin{cases} 1, & S_i(m,f) \neq 0, \\ 0, & \text{otherwise}, \end{cases} \quad (3)$$

which is actually an indicator function, the MFSK signal from node $i$ can be derived via

$$S_i(m,f) = \mathbf{M}_i(m,f) \times X(m,f), \quad \forall m, f. \quad (4)$$

However, for MFSK signals the W-Disjoint Orthogonality assumption is not strictly satisfied. When the sparsity of the MFSK signals in the time-frequency domain is taken into consideration, approximate W-Disjoint Orthogonality is satisfied. In order to measure the W-Disjoint Orthogonality of a T-F mask, the combined performance criteria PSR and SIR, proposed by Yilmaz and Rickard in [17], will be used:

$$\mathrm{PSR}_i = \frac{\left\| \mathbf{M}_i S_i(m,f) \right\|^2}{\left\| S_i(m,f) \right\|^2},$$

$$\mathrm{SIR}_i = \frac{\left\| \mathbf{M}_i S_i(m,f) \right\|^2}{\left\| \mathbf{M}_i \sum_{j=1, j \neq i}^{N} S_j(m,f) \right\|^2},$$

$$\mathrm{WDO}_i = \frac{\left\| \mathbf{M}_i S_i(m,f) \right\|^2 - \left\| \mathbf{M}_i \sum_{j=1, j \neq i}^{N} S_j(m,f) \right\|^2}{\left\| S_i(m,f) \right\|^2} = \mathrm{PSR}_i - \frac{\mathrm{PSR}_i}{\mathrm{SIR}_i}. \quad (5)$$

In view of the quite small probability of a collision of more than two data packets in the control channel, we have produced a series of MFSK mixtures including only two node signals and calculated the PSR, SIR, and WDO of the T-F mask by the Monte Carlo method. Therein, the T-F mask for source separation is derived as follows:

$$\mathbf{M}_i(m,f) = \begin{cases} 1, & 20 \log \dfrac{\left| S_i(m,f) \right|}{\left| X(m,f) \right|} \ge -6\,\mathrm{dB}, \\ 0, & \text{otherwise}, \end{cases} \quad \text{where } X(m,f) = \sum_{i=1}^{N} S_i(m,f),\ N = 2. \quad (6)$$

Figure 2: The PSR result (PSR versus bandwidth ratio).

According to the definition of W-Disjoint Orthogonality, the corresponding T-F mask comes closer to W-Disjoint Orthogonality as the signal becomes more sparse. The sparsity of an MFSK signal is reflected by its bandwidth ratio: the lower the bandwidth ratio of the signal, the higher its sparsity.

By the conclusion of [17], a mixed signal can be demixed through a T-F mask when the value of WDO is close to 1. In accordance with the simulation results shown in Figures 2, 3, and 4, we believe that a T-F mask exists that can separate sources from the MFSK mixture with high quality when the bandwidth ratio is less than 0.5.

3. The Source Separation System Utilizing Deep Networks

In this section, we outline the proposed separation system, discuss the low-level features we used, and give details about the deep networks, including their structure and training method.

3.1. Observation in the Time-Frequency Domain and Low-Level Features Used in the Deep Networks. We assume that there are $K$ hydrophones in the node and that the mixture includes several MFSK signals from $N$ nodes. The mixture received by hydrophone $k$ can be obtained from

$$x_k(t) = \sum_{i=1}^{N} s_i(t) * h_{ik}(t) + n_k(t), \quad (7)$$

where $h_{ik}$ is the channel impulse response between hydrophone $k$ and node $i$ and $*$ denotes convolution. Then, by using the short-time Fourier transform (STFT), we can map the mixture signal into the time-frequency domain:

$$X_k(m,f) = \sum_{i=1}^{N} S_i(m,f) H_{ki}(f) + N_k(m,f), \quad (8)$$


Figure 3: The SIR result (SIR versus bandwidth ratio).

Figure 4: The WDO result (WDO versus bandwidth ratio).

where $m = 1, \ldots, M$ and $f = 1, \ldots, F$ are the time frame and frequency bin indices, respectively, and $X_k(m,f) = \mathcal{F}(x_k)$, $S_i = \mathcal{F}(s_i)$, $N_k = \mathcal{F}(n_k)$, with $\mathcal{F}(\cdot)$ denoting the short-time Fourier transform. We use $X_{k|i}(m,f)$ to represent the component of node $i$ in the mixture received by hydrophone $k$. Thus,

$$X_{k|i}(m,f) = S_i(m,f) \cdot H_{ki}(f). \quad (9)$$

As shown in Section 2, MFSK signals are approximately W-Disjoint Orthogonal when the bandwidth ratio is less than 0.5. Then, as shown in (10), the mixture $X_k$ received by hydrophone $k$ can be demixed by using the T-F masks $\mathbf{M}_i$ corresponding to the nodes $i$, $i = 1, \ldots, N$. Consider the following:

$$X_k(m,f) = \sum_{i=1}^{N} \mathbf{M}_i \cdot X_k(m,f) + N_k(m,f). \quad (10)$$

It is a natural choice to use the orientation of the MFSK signals to estimate the corresponding T-F masks of the nodes with the pairwise hydrophones. Thus, the time-frequency mask is related to the location information of the current input signals (the channel impulse response and the array manifold). That is,

$$\mathbf{M}_i(m,f) = P\left( X(m,f) \in \text{node } i \mid S_1, \ldots, S_N, H_{1k}, \ldots, H_{Nk} \right). \quad (11)$$

Obviously, for the other hydrophone outputs, the time-frequency mask $\mathbf{M}$ corresponding to the same node remains the same. When we obtain the time-frequency mask $\mathbf{M}_i$ corresponding to node $i$, we can then separate the single-user communication signals of node $i$ through the ISTFT operation. The mixture signals received by the array include $N$ single-user signals from $N$ nodes with different locations. The probability distribution of a single T-F point belonging to a certain node $i$ can be described by a Gaussian distribution with mean $\mu_i$ and variance $\sigma_i$. Here, $\mu_i$ and $\sigma_i$ can be interpreted as the mean value and variance of the direction of arrival (DOA) of the signal coming from node $i$, respectively. By the central limit theorem, over all T-F points the probability distribution of the mixture signals including $N$ node signals can be described by a mixture of $N$ Gaussian distributions, namely, a Gaussian mixture model (GMM). Therefore, we can describe the mixture pattern of the signals through a GMM based on spatial features. The parameters of the GMM can be obtained on the basis of the existing observation datasets. Thus, the probability of each T-F point belonging to node 1, node 2, ..., and node $N$ can be estimated. Then the source signals can be recovered through the ISTFT operation.

3.2. Outline of the Source Separation System. As shown in Figure 5, the inputs to the system are the two-channel MFSK mixtures. We perform the short-time Fourier transform (STFT) on each channel and obtain the T-F representations of the input signals, $X_A(m,f)$ and $X_B(m,f)$, where $m = 1, \ldots, M$ and $f = 1, \ldots, F$ are the time frame and frequency bin indices, respectively. The low-level features, that is, the mixing vector (MV) and the interaural level and phase differences (ILD/IPD), which can be derived from (12)-(13), are then estimated at each T-F unit. Consider

$$\mathbf{z}(m,f) = \frac{\mathbf{W}(f)\,\mathbf{x}(m,f)}{\left\| \mathbf{W}(f)\,\mathbf{x}(m,f) \right\|}, \qquad \mathbf{x}(m,f) = \frac{\left[ X_A(m,f), X_B(m,f) \right]^T}{\left\| \left[ X_A(m,f), X_B(m,f) \right]^T \right\|}, \quad (12)$$

where $\mathbf{W}(f)$ is a whitening matrix with each row being one eigenvector of $E(\mathbf{x}(m,f)\mathbf{x}^H(m,f))$, the superscript $H$ is the Hermitian transpose, and $\| \cdot \|$ is the Frobenius norm. Consider

$$\alpha(m,f) = 20 \log_{10} \left( \left| \frac{X_A(m,f)}{X_B(m,f)} \right| \right), \qquad \phi(m,f) = \angle \left( \frac{X_A(m,f)}{X_B(m,f)} \right), \quad (13)$$
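The feature computation of (12)-(13) can be sketched for a single frequency bin as follows. The two-channel observations are synthetic, with an assumed fixed level and phase offset on channel B; the offsets and sizes are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-channel STFT observations at one frequency bin f:
# M frames of the stacked complex vector [X_A, X_B].
M = 1000
src = rng.normal(size=M) + 1j*rng.normal(size=M)
XA, XB = src, 0.6*src*np.exp(-1j*0.8)      # level and phase offset on channel B

x = np.stack([XA, XB])                     # shape (2, M)
x = x / np.linalg.norm(x, axis=0)          # unit-norm per T-F unit, eq (12)

# Whitening matrix W(f): rows are eigenvectors of the spatial covariance
R = (x @ x.conj().T) / M                   # estimate of E[x x^H], 2x2 Hermitian
_, V = np.linalg.eigh(R)
W = V.conj().T
z = W @ x
z = z / np.linalg.norm(z, axis=0)          # mixing vector (MV), eq (12)

# Interaural level and phase differences, eq (13)
alpha = 20*np.log10(np.abs(XA/XB))         # ILD in dB
phi = np.angle(XA/XB)                      # IPD in radians
print(alpha[0], phi[0])
```

For this single-source toy case the ILD and IPD are constant across frames (about 4.44 dB and 0.8 rad), reflecting the fixed channel offset; with several sources they would cluster around one value per source, which is what the deep networks exploit.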


Figure 5: The architecture of the proposed system using deep neural network based time-frequency masking for blind source separation.

where $| \cdot |$ takes the absolute value of its argument and $\angle(\cdot)$ finds the phase angle.

Next, we group the low-level features into $N$ blocks (only along the frequency bins $f$). Block $n$ includes $K$ frequency bins $((n-1)K+1, \ldots, nK)$, where $K = F/N$. We build $N$ deep networks, each corresponding to one block, and use them to estimate the directions of arrival (DOAs) of the sources. The low-level features that form the input to the deep networks are composed of the IPD, ILD, and MV, that is, $\mathbf{u}(m,f) = [\mathbf{z}^T(m,f), \alpha(m,f), \phi(m,f)]^T$. Through unsupervised learning with the sparse autoencoder [18] in the deep networks, high-level features (coded positional information of the sources) are extracted and used as inputs for the output layer (i.e., the softmax regression) of the networks. The output of the softmax regression is a source occupation probability (i.e., the time-frequency mask) for each block of the mixtures (through the ungroup operation, T-F units in the same block are assigned the same source occupation probability). Then the sources can be recovered by applying the T-F masks to the mixtures, followed by the inverse STFT (ISTFT). The deep networks are pretrained by using a greedy layer-wise training method.

3.3. The Deep Networks. As described at the beginning of this section, we group the low-level features into $N$ blocks and build $N$ individual deep networks, which have the same architecture, to classify the DOAs of the current input T-F point in each block. The deep network used to estimate the T-F mask is composed of a two-layer deep autoencoder and a one-layer softmax classifier.

More specifically, we split the whole space into $J$ ranges with respect to the hydrophones and separate the target and interferers based on the different orientation ranges (DOAs with respect to the receiver node) where they are located. We apply the softmax classifier to perform the classification task, and the inputs to the classifier, that is, the high-level features $\mathbf{a}^{(2)}$, are produced by the deep autoencoder. Assuming that the position of the target in the current input T-F point remains unchanged, the deep network estimates the probability $p(g_i = i \mid \mathbf{u}(m,f))$ of the orientation of the current input sample belonging to orientation index $i$. With the estimated orientation (obtained by selecting the maximum-probability index) of each input T-F point, we cluster the T-F points which have the same orientation index to get the probability mask and obtain the T-F mask from the probability mask through the ungroup operation. Note that each T-F point in the same block is assigned the same probability. The number of sources can also be estimated from the probability mask by using a predefined probability threshold, typically chosen as 0.2 in our experiments.
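The group/ungroup bookkeeping can be sketched as follows. The softmax outputs here are random stand-ins for what the per-block deep networks would produce, and the sizes F, N, J, M are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

F, N, J, M = 256, 8, 4, 100   # bins, blocks, orientation classes, frames (assumed)
K = F // N                    # frequency bins per block, K = F/N

# Stand-in for the deep networks' softmax outputs: one probability vector
# over the J orientation classes per (frame, block).
logits = rng.normal(size=(M, N, J))
prob = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Probability mask for orientation class j, then the "ungroup" operation:
# every frequency bin in a block inherits its block's probability.
j = 0
prob_mask = prob[:, :, j]                      # shape (M, N)
tf_mask = np.repeat(prob_mask, K, axis=1)      # shape (M, F)

# Source counting via a probability threshold (0.2, as in the text)
threshold = 0.2
active = [c for c in range(J) if prob[:, :, c].mean() > threshold]
print(tf_mask.shape, active)
```

Applying `tf_mask` to the mixture spectrogram and inverting with the ISTFT would recover the signal assigned to orientation class `j`, as the system outline describes.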

3.3.1. Deep Autoencoder. An autoencoder is an unsupervised learning algorithm based on backpropagation. It aims to learn an approximation $\hat{\mathbf{u}}$ of the input $\mathbf{u}$. It appears to be learning a trivial identity function, but by placing constraints on the learning process, such as limiting the number of neurons activated, it discloses interesting structure in the data. Figure 6 shows the architecture of a single-layer autoencoder. The difference between a classic neural network and an autoencoder is the training objective: a classic neural network minimizes the difference between the labels of the input training data and the output of the network, whereas the autoencoder minimizes the difference between the input training dataset and the output of the network. As shown in Figure 6, the output of the autoencoder can be defined as $\hat{\mathbf{u}} = \mathrm{sigm}(\mathbf{W}^{(2)}\mathbf{a}^{(1)} + \mathbf{b}^{(2)})$ with $\mathbf{a}^{(1)} = \mathrm{sigm}(\mathbf{W}^{(1)}\mathbf{u} + \mathbf{b}^{(1)})$, where $\mathrm{sigm}(u) = 1/(1 + \exp(-u))$ is the logistic function, $\mathbf{W}^{(1)} \in \mathbb{R}^{V \times Y}$, $\mathbf{b}^{(1)} \in \mathbb{R}^{V}$, $\mathbf{W}^{(2)} \in \mathbb{R}^{Y \times V}$, and $\mathbf{b}^{(2)} \in \mathbb{R}^{Y}$. $V$ is the number of hidden layer neurons and $Y$ is the number of input layer neurons, which is the same as


Figure 6: The single-layer autoencoder.

that of the output layer neurons. $\mathbf{W}^{(1)}$ is a matrix containing the weights of the connections between the input layer neurons and the hidden layer neurons. Similarly, $\mathbf{W}^{(2)}$ contains the weights of the connections between the hidden layer neurons and the output layer neurons. $\mathbf{b}^{(1)}$ is the vector of bias values added to the hidden layer neurons, and $\mathbf{b}^{(2)}$ is the corresponding vector for the output layer neurons. $\Theta$ refers to the parameter set composed of the weights $\mathbf{W}$ and biases $\mathbf{b}$. The neuron $v$ is "active" when its output $a_v$ is close to 1, which means that $\mathrm{sigm}(\mathbf{W}^{(1)}_v \mathbf{u} + b^{(1)}_v) \approx 1$. For "inactive" neurons, however, the output is close to 0, which means that $\mathrm{sigm}(\mathbf{W}^{(1)}_v \mathbf{u} + b^{(1)}_v) \approx 0$. Here $\mathbf{W}^{(1)}_v$ denotes the weights of the connections between hidden layer neuron $v$ and the input layer neurons, that is, the $v$th row of the matrix $\mathbf{W}^{(1)}$, and $b^{(1)}_v$ is the $v$th element of the vector $\mathbf{b}^{(1)}$, the bias value added to hidden layer neuron $v$. The superscript $i$ of $\mathbf{b}^{(i)}$, $\mathbf{W}^{(i)}$, and $\mathbf{a}^{(i)}$ denotes the $i$th layer of the deep network.

With the sparsity constraint, most of the neurons are assumed to be inactive. More specifically, $a_v = \mathrm{sigm}(\mathbf{W}^{(1)}_v \mathbf{u} + b^{(1)}_v)$ denotes the activation value of hidden layer unit $v$ in the autoencoder. The average activation $\hat{\rho}_v$ of unit $v$ over the input samples $\mathbf{u}^{(m)}$ can be defined as

$$\hat{\rho}_v = \frac{1}{M} \sum_{m=1}^{M} a_v\left(\mathbf{u}^{(m)}\right), \quad (14)$$

where $M$ is the number of training samples and $\mathbf{u}^{(m)}$ is the $m$th input training sample. Next, the sparsity constraint $\hat{\rho}_v = \rho$ is enforced, where $\rho$ is a parameter preset before training, typically small, such as $\rho = 3 \times 10^{-3}$. To achieve the sparsity constraint, we use a penalty term in the cost function of the sparse autoencoder as follows:

$$\sum_{v=1}^{V} \mathrm{KL}\left(\rho \,\|\, \hat{\rho}_v\right) = \sum_{v=1}^{V} \left[ \rho \log \frac{\rho}{\hat{\rho}_v} + (1-\rho) \log \frac{1-\rho}{1-\hat{\rho}_v} \right]. \quad (15)$$

The penalty term is essentially a Kullback-Leibler (KL) divergence. Now the cost function $J_{\text{sparse}}(\mathbf{W}, \mathbf{b})$ of the sparse autoencoder can be written as

$$J_{\text{sparse}}(\mathbf{W}, \mathbf{b}) = \frac{1}{2} \left\| \hat{\mathbf{u}} - \mathbf{u} \right\|^2 + \beta \sum_{v=1}^{V} \mathrm{KL}\left(\rho \,\|\, \hat{\rho}_v\right), \quad (16)$$

where $\beta$ controls the weight of the penalty term. In our proposed system, the cost function $J_{\text{sparse}}(\mathbf{W}, \mathbf{b})$ is minimized using the limited-memory BFGS (L-BFGS) optimization algorithm, and the single-layer sparse autoencoder is trained by using the backpropagation algorithm.
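The forward pass and cost of (14)-(16) can be sketched directly in NumPy. The weights are randomly initialized (the optimization itself is omitted), the reconstruction term is averaged over the batch for readability, and the layer sizes and β are assumed values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed toy sizes: Y input units, V hidden units, M training samples
Y, V, M = 20, 8, 500
U = rng.normal(size=(Y, M)) * 0.1            # columns are training samples u^(m)

# Randomly initialized parameters (training itself is omitted here)
W1, b1 = rng.normal(size=(V, Y))*0.1, np.zeros((V, 1))
W2, b2 = rng.normal(size=(Y, V))*0.1, np.zeros((Y, 1))

A1 = sigm(W1 @ U + b1)                       # hidden activations a^(1)
U_hat = sigm(W2 @ A1 + b2)                   # reconstruction u-hat

rho = 3e-3                                   # target sparsity, cf. text
rho_hat = A1.mean(axis=1)                    # average activations, eq (14)

# KL sparsity penalty, eq (15)
kl = np.sum(rho*np.log(rho/rho_hat) + (1-rho)*np.log((1-rho)/(1-rho_hat)))

beta = 3.0                                   # penalty weight, assumed
J = 0.5*np.mean(np.sum((U_hat - U)**2, axis=0)) + beta*kl   # cf. eq (16)
print(f"J_sparse = {J:.3f}")
```

With untrained weights the average activations sit near 0.5, far from the target ρ, so the KL term dominates the cost; minimizing J (e.g., with L-BFGS, as the text states) drives the hidden units toward sparse activation.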

After finishing the training of the single-layer sparse autoencoder, we discard the output layer neurons, the associated weights $\mathbf{W}^{(2)}$, and bias $\mathbf{b}^{(2)}$, and keep only the input-side parameters $\mathbf{W}^{(1)}$ and $\mathbf{b}^{(1)}$. The output of the hidden layer, $\mathbf{a}^{(1)}$, is used as the input samples for the next single-layer sparse autoencoder. Repeating these steps, stacking the autoencoders, we can build a deep autoencoder from two or more single-layer sparse autoencoders. In our proposed system, we use two single-layer autoencoders to build a deep autoencoder. The stacking procedure is shown in the right part of Figure 7.

Many studies on deep autoencoders show that, with a deep architecture (more than one hidden layer), a deep autoencoder can build up more complex representations from simple low-level features, capture the underlying regularities of the data, and improve recognition quality. That is why we use a deep autoencoder in our proposed system.

There are, however, several difficulties associated with obtaining the optimized weights of deep autoencoders. One challenge is the presence of local optima. In particular, training a neural network using supervised learning involves solving a highly nonconvex optimization problem, that is, finding a set of network parameters $(\mathbf{W}, \mathbf{b})$ to minimize the training error $\|\hat{\mathbf{u}} - \mathbf{u}\|^2$. In the deep autoencoder, bad local optima turn out to be rife, and training with gradient descent no longer works well. Another challenge is the "diffusion of gradients": when using backpropagation to compute the derivatives, the gradients that are propagated backwards (from the output layer to the earlier layers of the network) rapidly diminish in magnitude as the depth of the network increases. As a result, the derivative of the overall cost with respect to the weights in the earlier layers is very small, so when using gradient descent the weights of the earlier layers change slowly. However, if the initial parameters are already close to the optimized values, gradient descent works well. That is the idea of "greedy layer-wise" training, where the layers of the networks are trained one by one, as shown in the left part of Figure 7.

First, we use the backpropagation algorithm to train the first sparse autoencoder (including only one hidden layer), with the data labels being the inputs. In our proposed system, the input data is the 4K-dimensional feature vector u. As a result of the first-layer training, we get a set of network parameters Θ(1) (i.e., the parameters W(1) and b(1)) of the first-layer sparse autoencoder and a new dataset a(1) (i.e., the features I shown in Figure 9), which is the output of the hidden

Discrete Dynamics in Nature and Society 7


Figure 7: The illustration of greedy layer-wise training and stacking: (a) is the procedure of greedy layer-wise training and (b) is the procedure of stacking of sparse autoencoders.

layer neurons (the activity state of the hidden layer neurons) obtained by using this parameter set. Next, we use a(1) as the inputs to the second sparse autoencoder. After the second autoencoder is trained, we get the network parameters Θ(2) of the second sparse autoencoder and the new dataset a(2) (i.e., the features II in Figure 9) for the training of the next single-layer neural network. We then repeat the above steps until the last layer (i.e., the softmax regression in our proposed system). Finally, we obtain a pretrained deep autoencoder by stacking all the autoencoders and use Θ(1) and Θ(2) as the initial parameters for this pretrained deep autoencoder. The features II are the high-level features and can be used as the training dataset for the softmax regression discussed next.
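The pretraining and stacking procedure described above can be sketched in code. The following is a minimal illustration only: it uses tiny random data, plain batch gradient descent on the squared reconstruction error instead of the L-BFGS training used in the paper, and omits the sparsity penalty of the sparse autoencoder; all names and sizes are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=2000, seed=0):
    """Train one single-hidden-layer autoencoder on the rows of X by plain
    gradient descent on the squared reconstruction error, then return only
    the encoder parameters (W(1), b(1)) and the hidden activations a(1)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_hidden, n_in)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b2 = np.zeros(n_in)
    for _ in range(epochs):
        A = sigmoid(X @ W1.T + b1)            # hidden activations
        U = sigmoid(A @ W2.T + b2)            # reconstruction of X
        dU = (U - X) * U * (1.0 - U)          # output-layer delta
        dA = (dU @ W2) * A * (1.0 - A)        # hidden-layer delta (backprop)
        W2 -= lr * dU.T @ A / len(X); b2 -= lr * dU.mean(axis=0)
        W1 -= lr * dA.T @ X / len(X); b1 -= lr * dA.mean(axis=0)
    return W1, b1, sigmoid(X @ W1.T + b1)     # decoder weights are discarded

# Greedy layer-wise stacking: the hidden activations of the first
# autoencoder (features I) become the training inputs of the second one.
X = np.random.default_rng(1).random((64, 8))      # stand-in for the feature vectors u
W1, b1, A1 = train_autoencoder(X, n_hidden=6)     # features I
W2, b2, A2 = train_autoencoder(A1, n_hidden=4)    # features II
```

Stacking the two encoders then gives a two-hidden-layer network whose weights serve as the initialization for fine-tuning.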

3.3.2. Softmax Classifier. In our proposed system, the softmax classifier based on softmax regression is used to estimate the probability of the current input T-F point u_(n,m) belonging to the orientation index j, with the high-level features a^(2)_(n,m) extracted by the deep autoencoder as inputs.

The softmax regression generalizes the classical logistic regression (for binary classification) to multiclass classification problems. Different from the logistic regression, the data label of the softmax regression is an integer value between 1 and J, where J is the number of data classes. More specifically, in our proposed system, for J classes, a dataset of M samples is used to train the nth deep network:

\[
\left( \mathbf{a}^{(2)}_{(n,1)}, g_{(n,1)} \right), \ldots, \left( \mathbf{a}^{(2)}_{(n,m)}, g_{(n,m)} \right), \ldots, \left( \mathbf{a}^{(2)}_{(n,M)}, g_{(n,M)} \right), \tag{17}
\]

where g_(n,m) is the label of the mth sample a^(2)_(n,m) and is set to j if a^(2)_(n,m) belongs to class j. The architecture of the softmax classifier is shown in Figure 8.

Given an input a^(2)_(n,m), the output

\[
p \left( g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)} \right)
= \frac{e^{\theta_j^{T} \mathbf{a}^{(2)}_{(n,m)}}}{\sum_{j=1}^{J} e^{\theta_j^{T} \mathbf{a}^{(2)}_{(n,m)}}}
\]

of the output-layer neuron j gives the probability of the input a^(2)_(n,m) belonging to the class j. Similar to the logistic regression, the output of the softmax regression can be written as follows:

\[
h_{\Theta} \left( \mathbf{a}^{(2)}_{(n,m)} \right) =
\begin{bmatrix}
p \left( g_{(n,m)} = 1 \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta \right) \\
\vdots \\
p \left( g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta \right) \\
\vdots \\
p \left( g_{(n,m)} = J \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta \right)
\end{bmatrix}
= \frac{1}{\sum_{j=1}^{J} e^{\Theta_j^{T} \mathbf{a}^{(2)}_{(n,m)}}}
\begin{bmatrix}
e^{\Theta_1^{T} \mathbf{a}^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_j^{T} \mathbf{a}^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_J^{T} \mathbf{a}^{(2)}_{(n,m)}}
\end{bmatrix}. \tag{18}
\]
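Equation (18) is a column of normalized exponentials. A small numerical sketch (illustrative names and sizes, not the authors' code):

```python
import numpy as np

def softmax_hypothesis(Theta, a):
    """h_Theta(a) of eq. (18): Theta is J x V (one parameter row per class),
    a is a V-dimensional feature vector; returns the J class probabilities."""
    scores = Theta @ a              # Theta_j^T a for every class j
    scores -= scores.max()          # subtract the max to stabilize exp()
    e = np.exp(scores)
    return e / e.sum()              # normalized exponentials

Theta = np.array([[1.0, -1.0],      # J = 3 classes, V = 2 features
                  [0.5, 0.5],
                  [-1.0, 1.0]])
a = np.array([0.2, 0.8])
h = softmax_hypothesis(Theta, a)    # the entries of h sum to 1
```

Subtracting the maximum score before exponentiating does not change the result but avoids overflow for large parameter values.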



Figure 8: A classical architecture of the softmax classifier.

Here Θ ∈ R^(J×V) is the network parameter matrix of the softmax classifier (V is the dimension of the input a^(2)_(n,m) and J is the number of classes within the input data). Θ_j^T represents the transpose of the jth row of Θ and contains the parameters of the connections between the neuron j of the output layer and the input sample a^(2)_(n,m).

The softmax regression and the logistic regression have the same minimization objective. The cost function J_softmax(Θ) of the softmax classifier can be generalized from the logistic regression cost function and written as follows:

\[
J_{\mathrm{softmax}}(\Theta) = -\frac{1}{M} \left[ \sum_{m=1}^{M} \sum_{j=1}^{J} 1\!\left\{ g_{(n,m)} = j \right\} \cdot \log p \left( g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta \right) \right] + \frac{\lambda}{2} \sum_{j=1}^{J} \sum_{v=1}^{V} \left[ \Theta_{jv} \right]^2, \tag{19}
\]

where 1{·} is the indicator function, so that 1{a true statement} = 1 and 1{a false statement} = 0. The element Θ_jv of Θ contains the parameters of the connections between the neuron j of the output layer and the neuron v of the input layer. The term (λ/2)∑_{j=1}^{J}∑_{v=1}^{V}(Θ_jv)² is a regularization term that enforces the cost function J_softmax(Θ) to be strictly convex, where λ is a weight decay parameter predefined by the user.

In our proposed system, we extend the label g_(n,m) of the training dataset to a vector g_(n,m) ∈ R^J and set the jth element g_j of g_(n,m) to 1 and the other elements to 0 when the input sample a^(2)_(n,m) belongs to class j, for example, g_(n,m) = [0, ..., 0, 1, 0, ..., 0]^T.

The cost function J_softmax(Θ) can then be written as follows by using vectorization:

\[
J_{\mathrm{softmax}}(\Theta) = -\frac{1}{M} \left[ \sum_{m=1}^{M} \left( \mathbf{g}_{(n,m)} \right)^{T} \log h_{\Theta} \left( \mathbf{a}^{(2)}_{(n,m)} \right) \right] + \frac{\lambda}{2} \sum_{j=1}^{J} \sum_{v=1}^{V} \left( \Theta_{jv} \right)^2. \tag{20}
\]
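The vectorized cost (20) can be sketched numerically as follows. The dimensions and data here are made up for illustration, and the weight-decay sum is written over all elements of Θ:

```python
import numpy as np

def softmax_cost(Theta, A, G, lam):
    """J_softmax of eq. (20): cross-entropy of the one-hot labels G
    (M x J) against the hypothesis outputs, plus the weight-decay term."""
    scores = A @ Theta.T                        # M x J matrix of Theta_j^T a
    scores -= scores.max(axis=1, keepdims=True) # stabilize the exponentials
    E = np.exp(scores)
    H = E / E.sum(axis=1, keepdims=True)        # each row is h_Theta(a)
    M = A.shape[0]
    data_term = -np.sum(G * np.log(H)) / M      # -(1/M) sum_m g^T log h
    reg_term = 0.5 * lam * np.sum(Theta ** 2)   # (lambda/2) sum Theta_jv^2
    return data_term + reg_term

rng = np.random.default_rng(0)
A = rng.random((10, 4))                 # 10 samples of features II (V = 4)
labels = rng.integers(0, 3, 10)         # J = 3 classes
G = np.eye(3)[labels]                   # one-hot label vectors g
cost = softmax_cost(rng.normal(size=(3, 4)), A, G, lam=1e-3)
```

As a sanity check, an all-zero Θ with λ = 0 assigns probability 1/J to every class, so the cost reduces to log J.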

The softmax classifier can be trained by using the L-BFGS algorithm on a dataset in order to find an optimal parameter set Θ that minimizes the cost function J_softmax(Θ). In our proposed system, the dataset for softmax classifier training is composed of two parts. The first part is the input samples a^(2)_(n,m) (features II) calculated from the last hidden layer of the deep autoencoder. The second part is the data labels g_(n,m) ∈ R^J, where the jth element g_j of g_(n,m) is set to 1 when the input sample belongs to the source located in the range of DOAs of index j.

We stack the softmax classifier and the deep autoencoder together after the training is completed, as shown in the left part of Figure 9. Finally, we use the training dataset and the L-BFGS algorithm to fine-tune the deep network with the initial parameters W(1), b(1), W(2), b(2), W(3), and b(3) obtained from the sparse autoencoder and softmax classifier training. The training phase of the sparse autoencoders and the softmax classifier is called the pretraining phase, and the stacking and training of the overall network, that is, the deep network, is called the fine-tuning phase. In the pretraining phase, the shallow neural networks, that is, the sparse autoencoders and the softmax classifier, are trained individually, using the output of the current layer as the input for the next layer. In the fine-tuning phase, we use the L-BFGS algorithm (a gradient-based optimization method) to minimize the difference between the output of the deep network and the labels of the training dataset. The gradient-based optimization works well because the initial parameters obtained from the pretraining phase include a significant amount of "prior" information about the input data, acquired through unsupervised learning.

4. Experiments

In this section, we describe the communication system used in the simulation and show the separation results for different SNRs and data rates. It is important to note that the bit error rate (BER) is an extremely significant performance index for a pragmatic communication system; therefore, in this section we evaluate the separation quality of the pragmatic system by BER and regard the BER performance of single-user MFSK communication under the same conditions as the baseline.

There is no need to take the multipath propagation of the channels into consideration because of the relatively close distance between the nodes in multi-UUV communication. We thus adopt the additive white Gaussian noise (AWGN) channel model and the Monte Carlo method to calculate the system BER in the simulation, as in the simulation system shown in Figure 10, and we also take the BER of



Figure 9: The illustration of stacking the deep autoencoder and the softmax classifier.


Figure 10: The simulation system used in the experiment.

the single-user communication as the baseline. The sampling rate of our simulation is 16 kHz, and the modulation order is 16, without any error-correction coding. The bandwidth of the MFSK modulation/demodulation is 640 Hz, and the bandwidth ratio varies from 0.625 to 0.125, corresponding to bit rates from 400 bit/s to 80 bit/s.
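The Monte Carlo BER computation for noncoherent MFSK over an AWGN channel can be sketched as follows. This is an illustrative baseband model only, assuming orthogonal tones and envelope detection; it does not reproduce the paper's 640 Hz passband setup or the separation system, and all parameter names are ours.

```python
import numpy as np

def mfsk_ber(snr_db, n_symbols=2000, M=16, L=64, seed=0):
    """Monte Carlo BER of noncoherent M-FSK over an AWGN channel, using
    L-sample complex-baseband symbols built from orthogonal tones and
    envelope (magnitude) detection."""
    rng = np.random.default_rng(seed)
    n = np.arange(L)
    tones = np.exp(2j * np.pi * np.outer(np.arange(M), n) / L)  # orthogonal tones
    sym = rng.integers(0, M, n_symbols)
    tx = tones[sym]                                    # one tone per symbol
    noise_std = np.sqrt(10.0 ** (-snr_db / 10.0) / 2)  # per-sample SNR definition
    rx = tx + noise_std * (rng.normal(size=tx.shape) + 1j * rng.normal(size=tx.shape))
    det = np.abs(rx @ tones.conj().T).argmax(axis=1)   # largest envelope wins
    k = int(np.log2(M))                                # bits per symbol
    bit_errors = sum(bin(int(a) ^ int(b)).count("1") for a, b in zip(sym, det))
    return bit_errors / (k * n_symbols)
```

At a high per-sample SNR the BER is essentially zero, and it rises toward the random-guess level as the SNR drops, which matches the qualitative behavior of the baseline columns in Table 2.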

First, suppose we can obtain the optimal T-F mask, that is, use (6) to calculate the T-F mask; we then obtain the simulation results of the BER as a function of SNR and bandwidth ratio and compare them with the baseline.

As shown in Tables 1 and 2, the T-F mask method attains a BER similar to the baseline when the bandwidth ratio is below 0.417. Furthermore, the BER with the T-F mask can be even lower than that of the baseline under the same conditions when the SNR is very low, that is, SNR = −20 dB. The reason behind this phenomenon is that, as can be seen from (6), some frequency points are set to zero by the T-F mask, which objectively raises the SNR of the signal and yields a lower BER. In the actual system, however, it is a highly challenging task to accurately estimate the T-F mask under such low SNR, which is also verified in the later simulation.

According to Section 3, we estimate the T-F mask by using the orientation information of the nodes, that is, the DOA of the MFSK signal received by the nodes. Therefore, we introduce a time delay τ to simulate a pairwise-hydrophone receiver in the simulation and divide the horizontal space into 37 blocks, corresponding to −90° to +90° with a step size of 5°. We estimate the T-F mask by the source separation system described in Section 3 and compare its BER performance to the baseline under different SNRs and bandwidth ratios; the simulation results are shown in Table 3.


Table 1: The BER of the simulation system with the optimum T-F mask.

Bandwidth ratio  SNR = 30 dB  SNR = 20 dB  SNR = 10 dB  SNR = 5 dB  SNR = 0 dB
0.625            5.30E-02     5.30E-02     5.36E-02     5.50E-02    5.95E-02
0.500            2.96E-02     2.95E-02     3.03E-02     3.16E-02    3.56E-02
0.417            1.40E-02     1.38E-02     1.38E-02     1.39E-02    1.39E-02
0.313            9.00E-06     1.60E-05     1.30E-05     2.00E-05    4.50E-05
0.250            3.00E-06     4.00E-06     1.00E-06     4.00E-06    4.00E-06
0.208            0.00E+00     1.00E-06     1.00E-06     0.00E+00    0.00E+00
0.156            0.00E+00     0.00E+00     0.00E+00     0.00E+00    0.00E+00
0.125            0.00E+00     0.00E+00     0.00E+00     0.00E+00    0.00E+00

Bandwidth ratio  SNR = -5 dB  SNR = -10 dB  SNR = -15 dB  SNR = -20 dB
0.625            7.64E-02     1.47E-01      3.40E-01      5.62E-01
0.500            4.26E-02     6.82E-02      1.98E-01      4.24E-01
0.417            1.48E-02     2.67E-02      1.10E-01      3.09E-01
0.313            1.25E-04     3.08E-03      3.23E-02      1.57E-01
0.250            9.00E-06     1.12E-04      7.13E-03      7.54E-02
0.208            2.00E-06     2.19E-04      4.69E-03      4.62E-02
0.156            0.00E+00     2.10E-05      1.13E-03      1.46E-02
0.125            0.00E+00     0.00E+00      2.00E-05      4.02E-03

Table 2: The BER of the baseline.

Bandwidth ratio  SNR = 30 dB  SNR = 20 dB  SNR = 10 dB  SNR = 5 dB  SNR = 0 dB
0.625            0.00E+00     0.00E+00     0.00E+00     0.00E+00    0.00E+00
0.500            0.00E+00     0.00E+00     0.00E+00     0.00E+00    0.00E+00
0.417            0.00E+00     0.00E+00     0.00E+00     0.00E+00    0.00E+00
0.313            0.00E+00     0.00E+00     0.00E+00     0.00E+00    0.00E+00
0.250            0.00E+00     0.00E+00     0.00E+00     0.00E+00    0.00E+00
0.208            0.00E+00     0.00E+00     0.00E+00     0.00E+00    0.00E+00
0.156            0.00E+00     0.00E+00     0.00E+00     0.00E+00    0.00E+00
0.125            0.00E+00     0.00E+00     0.00E+00     0.00E+00    0.00E+00

Bandwidth ratio  SNR = -5 dB  SNR = -10 dB  SNR = -15 dB  SNR = -20 dB
0.625            4.43E-04     5.08E-02      3.62E-01      7.26E-01
0.500            1.00E-06     7.91E-03      2.16E-01      6.47E-01
0.417            0.00E+00     7.13E-04      1.19E-01      5.70E-01
0.313            0.00E+00     0.00E+00      3.42E-02      4.35E-01
0.250            0.00E+00     0.00E+00      9.65E-03      3.28E-01
0.208            0.00E+00     0.00E+00      3.40E-03      2.53E-01
0.156            0.00E+00     0.00E+00      2.92E-04      1.41E-01
0.125            0.00E+00     0.00E+00      1.90E-05      7.23E-02

As shown in Table 3, the BER performance of the proposed system is much the same as the baseline when SNR > 20 dB and the bandwidth ratio is below 0.417, which is consistent with the result obtained using the optimal T-F mask. When SNR < 20 dB, however, the BER performance of the proposed system begins to decline, because the lower SNR of the signals causes a large error in the T-F mask estimation, which results in system performance degradation.

5. Summary

In this paper, we address the problem of control packet collision avoidance, which exists widely in multichannel MAC protocols, and, on the basis of the sparsity of the noncoherent MFSK modulation in the time-frequency domain, separate the sources from the MFSK mixture caused by packet collisions through the use of the T-F masking method. First, we characterize the sparsity of the MFSK signal by the bandwidth ratio and demonstrate the relation between the bandwidth ratio and the PSR, SIR, and WDO by means of simulation experiments. Then, we establish the source separation system based on deep networks and the model of the MFSK communication system, taking single-user MFSK communication as the baseline to compare the BER performance of the proposed system and the baseline under different conditions of SNR and bandwidth ratio. The simulation results show that, first, the optimal T-F masking can obtain


Table 3: The BER of the proposed system.

Bandwidth ratio  SNR = 30 dB  SNR = 20 dB  SNR = 10 dB  SNR = 5 dB  SNR = 0 dB
0.625            5.80E-02     7.30E-02     6.36E-02     6.30E-02    3.59E-01
0.500            3.26E-02     3.45E-02     3.93E-02     4.16E-02    3.36E-01
0.417            1.46E-02     1.78E-02     1.48E-02     4.99E-02    2.14E-01
0.313            1.00E-05     6.89E-04     9.50E-03     3.00E-02    1.90E-01
0.250            7.00E-06     2.10E-04     6.00E-03     1.00E-02    1.60E-01
0.208            1.00E-06     9.00E-05     3.00E-03     8.00E-03    1.30E-01
0.156            0.00E+00     3.00E-05     9.80E-04     2.00E-03    1.30E-01
0.125            0.00E+00     7.00E-06     1.00E-05     9.00E-04    1.20E-01

Bandwidth ratio  SNR = -5 dB  SNR = -10 dB  SNR = -15 dB  SNR = -20 dB
0.625            4.94E-01     —             —             —
0.500            4.26E-01     —             —             —
0.417            4.48E-01     —             —             —
0.313            4.25E-01     —             —             —
0.250            4.90E-01     —             —             —
0.208            4.20E-01     —             —             —
0.156            4.00E-01     —             —             —
0.125            2.50E-01     —             —             —

the same BER performance as the baseline under lower bandwidth ratios; second, the proposed system can obtain BER performance similar to the baseline under higher SNR; third, the BER performance of the proposed system declines rapidly under lower SNR, because lower SNR leads to a greater error in the estimation of the T-F mask. In future work, we will adjust the structure of the deep networks to improve the performance of the proposed system under low SNR and under the multipath propagation present in the underwater channel. As a future research topic, it is also worth exploring the use of bioinspired computing models and algorithms for the underwater multichannel MAC protocols, such as P systems (inspired by the structure and the functioning of cells) [19, 20] and evolutionary computation (motivated by Darwin's theory of evolution) [21, 22].

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This research was supported partially by the Natural Science Basic Research Plan in Shaanxi Province of China (Program no. 2014JQ8355).

References

[1] Z. Li, C. Yang, N. Ding, S. Bogdan, and T. Ge, "Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints," International Journal of Control, Automation and Systems, vol. 10, no. 2, pp. 421–429, 2012.

[2] A. S. K. Annamalai, R. Sutton, C. Yang, P. Culverhouse, and S. Sharma, "Robust adaptive control of an uninhabited surface vehicle," Journal of Intelligent & Robotic Systems, vol. 78, no. 2, pp. 319–338, 2015.

[3] K. Chen, M. Ma, E. Cheng, F. Yuan, and W. Su, "A survey on MAC protocols for underwater wireless sensor networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1433–1447, 2014.

[4] R. Otnes, A. Asterjadhi, P. Casari et al., Underwater Acoustic Networking Techniques, Springer, Berlin, Germany, 2012.

[5] S. Climent, A. Sanchez, J. V. Capella, N. Meratnia, and J. J. Serrano, "Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers," Sensors, vol. 14, no. 1, pp. 795–833, 2014.

[6] I. M. Khalil, Y. Gadallah, M. Hayajneh, and A. Khreishah, "An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks," Sensors, vol. 12, no. 7, pp. 8782–8805, 2012.

[7] F. Bouabdallah and R. Boutaba, "A distributed OFDMA medium access control for underwater acoustic sensors networks," in Proceedings of the IEEE International Conference on Communications (ICC '11), June 2011.

[8] C.-C. Hsu, K.-F. Lai, C.-F. Chou, and K. C.-J. Lin, "ST-MAC: spatial-temporal MAC scheduling for underwater sensor networks," in Proceedings of the 28th Conference on Computer Communications (INFOCOM '09), pp. 1827–1835, IEEE, Rio de Janeiro, Brazil, April 2009.

[9] Y.-D. Chen, C.-Y. Lien, S.-W. Chuang, and K.-P. Shih, "DSSS: a TDMA-based MAC protocol with dynamic slot scheduling strategy for underwater acoustic sensor networks," in Proceedings of the IEEE OCEANS, pp. 1–6, Santander, Spain, June 2011.

[10] K. Kredo II, P. Djukic, and P. Mohapatra, "STUMP: exploiting position diversity in the staggered TDMA underwater MAC protocol," in Proceedings of the IEEE 28th Conference on Computer Communications (INFOCOM '09), pp. 2961–2965, April 2009.

[11] M. Molins and M. Stojanovic, "Slotted FAMA: a MAC protocol for underwater acoustic networks," in Proceedings of the IEEE OCEANS 2006, Boston, Mass, USA, September 2006.

[12] N. Chirdchoo, W.-S. Soh, and K. C. Chua, "RIPT: a receiver-initiated reservation-based protocol for underwater acoustic networks," IEEE Journal on Selected Areas in Communications, vol. 26, no. 9, pp. 1744–1753, 2008.

[13] Z. Zhou, Z. Peng, P. Xie, J.-H. Cui, and Z. Jiang, "Exploring random access and handshaking techniques in underwater wireless acoustic networks," EURASIP Journal on Wireless Communications and Networking, vol. 2013, article 95, 2013.

[14] C.-M. Chao et al., "A multiple rendezvous multichannel MAC protocol for underwater sensor networks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 1, pp. 128–138, 2013.

[15] H. Ramezani and G. Leus, "DMC-MAC: dynamic multichannel MAC in underwater acoustic networks," in Proceedings of the 21st European Signal Processing Conference (EUSIPCO '13), pp. 1–5, Marrakech, Morocco, September 2013.

[16] Z. Zhou, Z. Peng, J.-H. Cui, and Z. Jiang, "Handling triple hidden terminal problems for multichannel MAC in long-delay underwater sensor networks," IEEE Transactions on Mobile Computing, vol. 11, no. 1, pp. 139–154, 2012.

[17] O. Yilmaz and S. Rickard, "Blind separation of speech mixtures via time-frequency masking," IEEE Transactions on Signal Processing, vol. 52, no. 7, pp. 1830–1847, 2004.

[18] T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Auto-encoder bottleneck features using deep belief networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '12), pp. 4153–4156, Kyoto, Japan, March 2012.

[19] X. Zhang, L. Pan, and A. Păun, "On the universality of axon P systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816–2829, 2015.

[20] X. Zhang, Y. Liu, B. Luo, and L. Pan, "Computational power of tissue P systems for generating control languages," Information Sciences, vol. 278, pp. 285–297, 2014.

[21] X. Zhang, Y. Tian, and Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761–776, 2015.

[22] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, "An efficient approach to nondominated sorting for evolutionary multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 201–213, 2015.




Figure 1: Most researchers classify underwater acoustic network MAC protocols into 3 types: contention-free, contention-based, and hybrid.

multiple joint time slots to the problem of vertex coloring was proposed in [8]. Reference [9] puts forward the DSSS protocol, which utilizes the transmission delay of the underwater acoustic channel and arranges conflict-free concurrent transmissions. The STUMP protocol, which eases the limitation of time synchronization, was presented in [10]. Different from TDMA, CDMA protocols distinguish different users through pseudonoise codes; they offer high channel utilization and a simple algorithm but still cannot avoid the inherent "near-far effect".

Compared with the above-mentioned contention-free protocols, which allocate the channel beforehand, contention-based protocols, which allocate the channel based on the needs of the nodes, have higher channel utilization. The essence of a contention-based protocol is channel reservation: nodes reserve channel resources by exchanging handshaking messages before initiating data communication [11, 12]. The multichannel MAC protocols transmit handshaking messages via independent channels, initiate information transmission among multiple node pairs concurrently, utilize the network bandwidth, and reduce the consumption when the network load is heavy [13–15], and they have attracted the attention of researchers recently. The prospect of the multichannel MAC protocols lies in solving the new problems faced by multichannel protocols, especially the collision problem in the control channel. Zhou et al. from the University of Connecticut adopt joint detection of adjacent nodes to tackle the triple hidden terminal problems typical in multichannel MAC protocols [16].

In this paper, we propose a control packet collision avoidance algorithm utilizing time-frequency masking to deal with control packet collisions in the control channel. This algorithm is based on the sparsity of noncoherent underwater acoustic communication signals and regards collision avoidance as the separation of the mixture of communication signals from different nodes. The remaining contents of the paper are organized as follows. Section 2 briefly discusses the W-Disjoint Orthogonality and the sparsity of the MFSK signal; the simulation results demonstrate that there exists a time-frequency mask which can separate the source signals from the mixture of the communication signals. Section 3 outlines the proposed separation system, discusses the low-level features we used, and gives details about the deep network, including its structure as well as its training method. Section 4 shows the simulation results for the source separation system under different conditions, including different signal-to-noise ratios and different bandwidth ratios.

2. W-Disjoint Orthogonality of the MFSK Signals

As is well known, MFSK is a classic noncoherent communication modulation scheme that has been considered a robust modulation for the complex underwater acoustic channel. Because of its lower bandwidth efficiency compared with coherent modulations such as PSK, MFSK is not considered a good choice for the physical layer of underwater acoustic networks. However, the lower bandwidth efficiency means that the MFSK signal is sparse in the time-frequency domain. In the same way as the speech signals discussed in [17], MFSK mixtures can be separated into several sources by using time-frequency masking. The received signal can be seen as an MFSK mixture when the control packets collide, and the sparsity of the MFSK signal in the time-frequency domain offers the potential for dealing with the collision of the control packets.

In this section, we focus on the W-Disjoint Orthogonality and the sparsity of the MFSK signals, showing that there exists a time-frequency mask which can separate the source signals from the mixture of MFSK signals. We only consider MFSK modulation as the physical layer of the underwater acoustic networks in this paper.

Same as the model of a speech mixture, the model of the MFSK mixture can be written as follows:

\[
x(t) = \sum_{i=1}^{N} s_i(t). \tag{1}
\]


With the short-time Fourier transform, we obtain the model of the MFSK mixture in the time-frequency domain through

\[
X(m, f) = \sum_{i=1}^{N} S_i(m, f), \tag{2}
\]

where m = 1, ..., M and f = 1, ..., F are the indexes of the time frame and the frequency point, respectively. Assuming X(m, f) is W-Disjoint Orthogonal, at most one of the N node signals will be nonzero for a given (m, f). To separate a node signal from the mixture X(m, f), we create a time-frequency mask for each node and apply these masks to the mixture to obtain the original node signals. For instance, defining the mask M_i for node i as

\[
\mathcal{M}_i(m, f) =
\begin{cases}
1, & S_i(m, f) \neq 0, \\
0, & \text{otherwise},
\end{cases} \tag{3}
\]

which is actually an indicator function, the MFSK signal from node i can be derived via

\[
S_i(m, f) = \mathcal{M}_i(m, f) \times X(m, f), \quad \forall m, f. \tag{4}
\]

However, for the MFSK signal, the assumption of W-Disjoint Orthogonality is not strictly satisfied. When the sparsity of the MFSK signals in the time-frequency domain is taken into consideration, approximate W-Disjoint Orthogonality is satisfied. In order to measure the W-Disjoint Orthogonality of the T-F mask, the combined performance criteria PSR and SIR, proposed by Yilmaz and Rickard in [17], will be used:

\[
\mathrm{PSR}_i = \frac{\left\| \mathcal{M}_i S_i(m, f) \right\|^2}{\left\| S_i(m, f) \right\|^2},
\qquad
\mathrm{SIR}_i = \frac{\left\| \mathcal{M}_i S_i(m, f) \right\|^2}{\left\| \mathcal{M}_i \sum_{k=1, k \neq i}^{N} S_k(m, f) \right\|^2},
\]
\[
\mathrm{WDO}_i = \frac{\left\| \mathcal{M}_i S_i(m, f) \right\|^2 - \left\| \mathcal{M}_i \sum_{k=1, k \neq i}^{N} S_k(m, f) \right\|^2}{\left\| S_i(m, f) \right\|^2}
= \mathrm{PSR}_i - \frac{\mathrm{PSR}_i}{\mathrm{SIR}_i}. \tag{5}
\]
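The criteria of (5) are straightforward to compute from the target and interference spectrograms; a small sketch with toy values (names and data are ours):

```python
import numpy as np

def wdo_metrics(mask, S_target, S_interf):
    """PSR, SIR, and WDO of eq. (5) for one source, where S_target is the
    target spectrogram and S_interf the sum of the interfering sources."""
    energy = lambda Z: np.sum(np.abs(Z) ** 2)   # the ||.||^2 energy
    psr = energy(mask * S_target) / energy(S_target)
    sir = energy(mask * S_target) / energy(mask * S_interf)
    wdo = (energy(mask * S_target) - energy(mask * S_interf)) / energy(S_target)
    return psr, sir, wdo

# Toy spectrograms: the mask keeps every target point but lets a little
# interference energy through, so PSR = 1 and SIR is finite.
S_i = np.array([[1.0, 0.0], [0.0, 2.0]])
S_o = np.array([[0.0, 3.0], [0.0, 0.5]])
mask = (np.abs(S_i) > 0).astype(float)
psr, sir, wdo = wdo_metrics(mask, S_i, S_o)
```

The identity WDO = PSR − PSR/SIR from (5) holds exactly for these values, which is a convenient consistency check on any implementation.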

In view of the quite small probability of a collision of more than two data packets in the control channel, we generated a series of MFSK mixtures including only two node signals and calculated the PSR, SIR, and WDO of the T-F mask by means of the Monte Carlo method. Thereinto, the T-F mask for source separation is derived as follows:

\[
\mathcal{M}_i(m, f) =
\begin{cases}
1, & 20 \log_{10} \dfrac{\left| S_i(m, f) \right|}{\left| X(m, f) \right|} \ge -6\ \mathrm{dB}, \\
0, & \text{otherwise},
\end{cases}
\qquad \text{where } X(m, f) = \sum_{i=1}^{N} S_i(m, f),\ N = 2. \tag{6}
\]


Figure 2 The PSR result

According to the definition of W-Disjoint Orthogonality, the corresponding T-F mask comes closer to W-Disjoint Orthogonality as the signal becomes more sparse. The sparsity of an MFSK signal is reflected by its bandwidth ratio: the lower the bandwidth ratio of the signal, the higher its sparsity.

From the conclusion of [17], a mixed signal can be demixed through a T-F mask when the value of WDO is close to 1. According to the simulation results shown in Figures 2, 3, and 4, we believe that a T-F mask exists which can separate the sources from the MFSK mixture with high quality when the bandwidth ratio is less than 0.5.

3. The Source Separation System Utilizing Deep Networks

In this section, we will outline the proposed separation system, discuss the low-level features we used, and give details about the deep networks, including their structure and training method.

3.1. Observation in the Time and Frequency Domain and Low-Level Features Used in the Deep Networks. We assume that there are K hydrophones in the node and that the mixture includes several MFSK signals from N nodes. The mixture received by hydrophone k is given by

x_k(t) = \sum_{i=1}^{N} s_i * h_{ik} + n_k(t), \quad (7)

where h_{ik} is the channel impulse response between hydrophone k and node i, and * denotes convolution. Then, by using the short-time Fourier transform (STFT), we can map the mixture signal into the time-frequency domain:

X_k(m, f) = \sum_{i=1}^{N} S_i(m, f)\, H_{ki}(f) + N_k(m, f), \quad (8)

4 Discrete Dynamics in Nature and Society

Figure 3: The SIR result (SIR versus bandwidth ratio).

Figure 4: The WDO result (WDO versus bandwidth ratio).

where m = 1, ..., M and f = 1, ..., F are the time frame and frequency bin indices, respectively, and X_k(m, f) = \mathcal{F}(x_k), S_i = \mathcal{F}(s_i), N_k = \mathcal{F}(n_k), with \mathcal{F}(\cdot) denoting the short-time Fourier transform. We use X_{k|i}(m, f) to represent the component of node i in the mixture received by hydrophone k. Thus,

X_{k|i}(m, f) = S_i(m, f) \cdot H_{ki}(f). \quad (9)

As shown in Section 2, the MFSK signal is approximately W-Disjoint Orthogonal when the bandwidth ratio is less than 0.5. Then, as shown in (10), the mixture X_k received by hydrophone k can be demixed by using the T-F masks \mathbf{M}_i corresponding to node i, i = 1, ..., N. Consider the following:

X_k(m, f) = \sum_{i=1}^{N} \mathbf{M}_i \cdot X_k(m, f) + N_k(m, f). \quad (10)
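The demixing step in (10) — mask the mixture spectrogram, then invert — can be sketched in a few lines. The example below is a simplified illustration, not the paper's full system: the two "packets" are hypothetical single tones whose frequencies fall on exact FFT bins, so a non-overlapping rectangular-window STFT is perfectly invertible, and the masks are oracle masks in the spirit of (6).

```python
import numpy as np

def stft(x, L=64):
    # Non-overlapping rectangular-window STFT: rows = time frames, cols = bins.
    return np.fft.fft(x.reshape(-1, L), axis=1)

def istft(X):
    # Exact inverse of the framing above.
    return np.real(np.fft.ifft(X, axis=1)).reshape(-1)

fs = 1600.0
t = np.arange(4096) / fs
s1 = np.cos(2 * np.pi * 100.0 * t)   # node 1 "packet": a hypothetical tone
s2 = np.cos(2 * np.pi * 400.0 * t)   # node 2 "packet"
x = s1 + s2                          # collided control packets at one hydrophone

X = stft(x)
S1, S2 = stft(s1), stft(s2)

# Oracle binary masks: give each T-F point to the node that dominates it.
M1 = (np.abs(S1) >= np.abs(S2)).astype(float)
M2 = 1.0 - M1

# Demix as in (10): apply each node's mask to the mixture, then invert.
s1_hat = istft(M1 * X)
s2_hat = istft(M2 * X)

err1 = np.max(np.abs(s1_hat - s1))
print(err1)
```

With disjoint tones the masked reconstruction recovers each source essentially exactly; with real MFSK mixtures the recovery quality is governed by the WDO of Section 2.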

It is a natural choice to use the orientation of the MFSK signals to estimate the corresponding T-F mask of the nodes with the pairwise hydrophones. Thus, the time-frequency mask is related to the location information of the current input signals (the channel impulse response and the array manifold). That is,

\mathbf{M}_i(m, f) = P\bigl(X_i(m, f) \in \text{node } i \mid S_1, \ldots, S_N,\ H_{1k}, \ldots, H_{Nk}\bigr). \quad (11)

Obviously, for the other hydrophone outputs, the time-frequency mask \mathbf{M} corresponding to the same node remains the same. Once we obtain the time-frequency mask \mathbf{M}_i corresponding to node i, we can separate the single-user communication signals of node i through the ISTFT operation. The mixture signals received by the array include N single-user signals from N nodes at different locations. The probability distribution of a single T-F point belonging to a certain node i can be described by a Gaussian distribution with mean \mu_i and variance \sigma_i. Here, \mu_i and \sigma_i can be interpreted as the mean and variance of the direction of arrival (DOA) of the signal coming from node i, respectively. According to the central limit theorem, over all T-F points, the probability distribution of the mixture signals including N node signals can be described by a mixture of N Gaussian distributions, namely, a Gaussian mixture model (GMM). Therefore, we can describe the mixture pattern of the signals through a GMM based on spatial features. The parameters of the GMM can be obtained from the existing observation datasets. Thus, the probability of each T-F point belonging to node 1, node 2, ..., node N can be estimated, and the source signals can then be recovered through the ISTFT operation.
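The GMM over DOA features described above can be fitted with a few expectation-maximization (EM) steps. The sketch below is illustrative: the per-T-F-point DOA estimates are synthetic draws around two hypothetical node bearings (-30 deg and +20 deg), and the E-step responsibilities are exactly the per-point membership probabilities used to build the probability mask.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-T-F-point DOA estimates: node 1 near -30 deg, node 2 near +20 deg.
doa = np.concatenate([rng.normal(-30.0, 4.0, 500), rng.normal(20.0, 4.0, 500)])

# Fit a 2-component 1-D Gaussian mixture by EM.
mu = np.array([-60.0, 60.0])   # crude initial means
var = np.array([100.0, 100.0])
w = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibility of each component for each DOA sample; these
    # posteriors play the role of the probability mask for each node.
    pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-(doa[:, None] - mu) ** 2 / (2 * var))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update mixture weights, means, and variances.
    Nk = r.sum(axis=0)
    w = Nk / len(doa)
    mu = (r * doa[:, None]).sum(axis=0) / Nk
    var = (r * (doa[:, None] - mu) ** 2).sum(axis=0) / Nk

print(sorted(mu))
```

After a few iterations the component means settle near the two node bearings, and each T-F point's responsibility vector gives its probability of belonging to each node.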

3.2. Outline of the Source Separation System. As shown in Figure 5, the inputs to the system are the two-channel MFSK mixtures. We perform the short-time Fourier transform (STFT) on each channel and obtain the T-F representations of the input signals, X_A(m, f) and X_B(m, f), where m = 1, ..., M and f = 1, ..., F are the time frame and frequency bin indices, respectively. The low-level features, that is, the mixing vector (MV) and the interaural level and phase differences (ILD/IPD), which can be derived from (12)-(13), are then estimated at each T-F unit. Consider

\mathbf{z}(m, f) = \frac{\mathbf{W}(f)\,\mathbf{x}(m, f)}{\|\mathbf{W}(f)\,\mathbf{x}(m, f)\|}, \qquad \mathbf{x}(m, f) = \frac{[X_A(m, f),\, X_B(m, f)]^T}{\|[X_A(m, f),\, X_B(m, f)]^T\|}, \quad (12)

where \mathbf{W}(f) is a whitening matrix with each row being one eigenvector of E(\mathbf{x}(m, f)\,\mathbf{x}^H(m, f)), the superscript H denotes the Hermitian transpose, and \|\cdot\| is the Frobenius norm. Consider

\alpha(m, f) = 20 \log_{10} \left| \frac{X_A(m, f)}{X_B(m, f)} \right|, \qquad \phi(m, f) = \angle\left( \frac{X_A(m, f)}{X_B(m, f)} \right). \quad (13)
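The feature extraction in (12)-(13) can be sketched for a single frequency bin. The two-channel data below are synthetic and hypothetical: channel B is an attenuated (factor 0.5) and phase-shifted (0.3 rad) copy of channel A, so the expected ILD is 20 log10(2) ~ 6.02 dB and the expected IPD is 0.3 rad.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-channel T-F observations for one frequency bin f.
M = 256
XA = rng.normal(size=M) + 1j * rng.normal(size=M)
XB = 0.5 * XA * np.exp(-1j * 0.3)

x = np.stack([XA, XB])                             # x(m, f), shape (2, M)
x = x / np.linalg.norm(x, axis=0, keepdims=True)   # normalisation as in (12)

# Whitening matrix W(f): rows are eigenvectors of the spatial covariance E(x x^H).
R = x @ x.conj().T / M
_, vecs = np.linalg.eigh(R)
W = vecs.conj().T

# Mixing-vector (MV) feature z(m, f) from (12).
z = W @ x
z = z / np.linalg.norm(z, axis=0, keepdims=True)

# ILD alpha(m, f) and IPD phi(m, f) from (13).
alpha = 20 * np.log10(np.abs(XA / XB))
phi = np.angle(XA / XB)
print(alpha[0], phi[0])
```

The ILD/IPD values come out constant across T-F points here because the inter-channel transfer is a fixed gain and phase; with a real array they vary with the source DOA, which is exactly what the deep networks exploit.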


Figure 5: The architecture of the proposed system using deep neural network based time-frequency masking for blind source separation (two-channel STFT, MV/ILD/IPD low-level features, N deep networks over frequency blocks, probability masks, and ISTFT reconstruction).

where |\cdot| takes the absolute value of its argument and \angle(\cdot) gives the phase angle.

Next, we group the low-level features into N blocks (only along the frequency bins f). Block n includes K frequency bins ((n-1)K + 1, ..., nK), where K = F/N. We build N deep networks, each corresponding to one block, and use them to estimate the directions of arrival (DOAs) of the sources. The low-level features used as the input of the deep networks are composed of the IPD, ILD, and MV, that is, \mathbf{u}(m, f) = [\mathbf{z}^T(m, f), \alpha(m, f), \phi(m, f)]^T. Through unsupervised learning with the sparse autoencoder [18] in the deep networks, high-level features (coded positional information of the sources) are extracted and used as inputs for the output layer (i.e., the softmax regression) of the networks. The output of the softmax regression is a source occupation probability (i.e., the time-frequency mask) for each block of the mixtures; through the ungroup operation, T-F units in the same block are assigned the same source occupation probability. Then, the sources can be recovered by applying the T-F masks to the mixtures, followed by the inverse STFT (ISTFT). The deep networks are pretrained using a greedy layer-wise training method.

3.3. The Deep Networks. As described at the beginning of this section, we group the low-level features into N blocks and build N individual deep networks with the same architecture to classify the DOAs of the current input T-F point in each block. The deep network used to estimate the T-F mask is composed of a two-layer deep autoencoder and a one-layer softmax classifier.

More specifically, we split the whole space into J ranges with respect to the hydrophones and separate the target and interferers based on the different orientation ranges (DOAs with respect to the receiver node) in which they are located. We apply the softmax classifier to perform the classification task, and the inputs to the classifier, that is, the high-level features a^{(2)}, are produced by the deep autoencoder. Assuming that the position of the target in the current input T-F point remains unchanged, the deep network estimates the probability p(g_i = i \mid \mathbf{u}(m, f)) that the orientation of the current input sample belongs to orientation index i. With the estimated orientation of each input T-F point (obtained by selecting the index of maximum probability), we cluster the T-F points which have the same orientation index to get the probability mask and obtain the T-F mask from the probability mask through the ungroup operation. Note that each T-F point in the same block is assigned the same probability. The number of sources can also be estimated from the probability mask by using a predefined probability threshold, typically chosen as 0.2 in our experiments.
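The source-counting rule at the end of the paragraph amounts to thresholding the probability mask. The mask values below are hypothetical (fractions of T-F points assigned to J = 5 orientation ranges); the 0.2 threshold is the one quoted above.

```python
import numpy as np

# Hypothetical probability mask: fraction of T-F points assigned to each of
# J = 5 orientation ranges; two ranges clearly dominate.
prob_mask = np.array([0.45, 0.02, 0.38, 0.05, 0.10])

threshold = 0.2  # predefined threshold from the experiments
num_sources = int(np.sum(prob_mask > threshold))
print(num_sources)  # → 2
```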

3.3.1. Deep Autoencoder. An autoencoder is an unsupervised learning algorithm based on backpropagation. It aims to learn an approximation \hat{u} of the input u. It appears to be learning a trivial identity function, but by placing constraints on the learning process, such as limiting the number of activated neurons, it discloses interesting structure in the data. Figure 6 shows the architecture of a single-layer autoencoder. The difference between a classic neural network and an autoencoder is the training objective: the classic neural network minimizes the difference between the label of the input training data and the output of the network, whereas the autoencoder minimizes the difference between the input training dataset itself and the output of the network. As shown in Figure 6, the output of the autoencoder can be defined as \hat{u} = \mathrm{sigm}(W^{(2)} a^{(1)} + b^{(2)}), with a^{(1)} = \mathrm{sigm}(W^{(1)} u + b^{(1)}), where \mathrm{sigm}(u) = 1/(1 + \exp(-u)) is the logistic function, W^{(1)} \in R^{V \times Y}, b^{(1)} \in R^{V}, W^{(2)} \in R^{Y \times V}, and b^{(2)} \in R^{Y}. V is the number of hidden layer neurons and Y is the number of input layer neurons, which is the same as


Figure 6: The single layer autoencoder.

that of the output layer neurons. W^{(1)} is a matrix containing the weights of the connections between the input layer neurons and the hidden layer neurons; similarly, W^{(2)} contains the weights of the connections between the hidden layer neurons and the output layer neurons. b^{(1)} is the vector of bias values added to the hidden layer neurons, and b^{(2)} is the corresponding vector for the output layer neurons. \Theta refers to the parameter set composed of the weights W and biases b. The neuron v is "active" when its output a_v is close to 1, which means that \mathrm{sigm}(W^{(1)}_v u + b^{(1)}_v) \approx 1; for "inactive" neurons, the output is close to 0, which means that \mathrm{sigm}(W^{(1)}_v u + b^{(1)}_v) \approx 0. Here, W^{(1)}_v denotes the weights of the connections between hidden layer neuron v and the input layer neurons, that is, the vth row of the matrix W^{(1)}, and b^{(1)}_v is the vth element of the vector b^{(1)}, that is, the bias value added to hidden layer neuron v. The superscript i of b^{(i)}, W^{(i)}, and a^{(i)} denotes the ith layer of the deep network.

With the sparsity constraint, most of the neurons are assumed to be inactive. More specifically, a_v = \mathrm{sigm}(W^{(1)}_v u + b^{(1)}_v) denotes the activation value of hidden layer unit v in the autoencoder. The average activation \hat{\rho}_v of unit v over the input samples u^{(m)} can be defined as follows:

\hat{\rho}_v = \frac{1}{M} \sum_{m=1}^{M} a_v\bigl(u^{(m)}\bigr), \quad (14)

where M is the number of training samples and u^{(m)} is the mth input training sample. Next, the sparsity constraint \hat{\rho}_v = \rho is enforced, where \rho is a parameter preset before training, typically small, such as \rho = 3 \times 10^{-3}. To achieve the sparsity constraint, we use the following penalty term in the cost function of the sparse autoencoder:

\sum_{v=1}^{V} \mathrm{KL}(\rho \,\|\, \hat{\rho}_v) = \sum_{v=1}^{V} \left[ \rho \log \frac{\rho}{\hat{\rho}_v} + (1 - \rho) \log \frac{1 - \rho}{1 - \hat{\rho}_v} \right]. \quad (15)

The penalty term is essentially a Kullback-Leibler (KL) divergence. Now the cost function J_{\mathrm{sparse}}(W, b) of the sparse autoencoder can be written as follows:

J_{\mathrm{sparse}}(W, b) = \frac{1}{2} \|\hat{u} - u\|^2 + \beta \sum_{v=1}^{V} \mathrm{KL}(\rho \,\|\, \hat{\rho}_v), \quad (16)

where \beta controls the weight of the penalty term. In our proposed system, the cost function J_{\mathrm{sparse}}(W, b) is minimized using the limited-memory BFGS (L-BFGS) optimization algorithm, and the single-layer sparse autoencoder is trained using the backpropagation algorithm.
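Equations (14)-(16) fit in a short numpy sketch. All sizes, weights, and hyperparameters below are hypothetical; the forward pass, the average activations, the KL penalty, and the total cost follow the definitions above (the reconstruction error here is averaged over the M samples).

```python
import numpy as np

rng = np.random.default_rng(3)

def sigm(a):
    # Logistic function sigm(a) = 1 / (1 + exp(-a)).
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical sizes: Y input units, V hidden units, M training samples.
Y, V, M = 12, 6, 200
U = rng.normal(size=(Y, M))                    # columns are input samples u^(m)
W1, b1 = 0.1 * rng.normal(size=(V, Y)), np.zeros((V, 1))
W2, b2 = 0.1 * rng.normal(size=(Y, V)), np.zeros((Y, 1))

A1 = sigm(W1 @ U + b1)                         # hidden activations a^(1)
U_hat = sigm(W2 @ A1 + b2)                     # reconstruction of u

rho = 0.05                                     # target sparsity rho
rho_hat = A1.mean(axis=1)                      # average activation, eq (14)

# KL sparsity penalty, eq (15).
kl = np.sum(rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# Total cost, eq (16), with reconstruction error averaged over the samples.
beta = 3.0
J_sparse = 0.5 * np.mean(np.sum((U_hat - U) ** 2, axis=0)) + beta * kl
print(J_sparse)
```

With untrained random weights the average activations sit near 0.5, far from the target rho, so the KL term dominates; training with L-BFGS drives both terms down.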

After finishing the training of the single-layer sparse autoencoder, we discard the output layer neurons with the related weights W^{(2)} and biases b^{(2)} and only save the input layer parameters W^{(1)} and b^{(1)}. The output of the hidden layer, a^{(1)}, is used as the input samples for the next single-layer sparse autoencoder. Repeating these steps, that is, stacking the autoencoders, we can build a deep autoencoder from two or more single-layer sparse autoencoders. In our proposed system, we use two single-layer autoencoders to build the deep autoencoder. The stacking procedure is shown in the right part of Figure 7.

Many studies on deep autoencoders show that, with a deep architecture (more than one hidden layer), a deep autoencoder can build up more complex representations from the same low-level features, capture the underlying regularities of the data, and improve recognition quality. That is why we use a deep autoencoder in our proposed system.

There are, however, several difficulties associated with obtaining the optimized weights of deep autoencoders. One challenge is the presence of local optima. In particular, training a neural network using supervised learning involves solving a highly nonconvex optimization problem, that is, finding a set of network parameters (W, b) that minimizes the training error \|\hat{u} - u\|^2. In the deep autoencoder, the optimization problem turns out to be rife with bad local optima, and training with gradient descent no longer works well. Another challenge is the "diffusion of gradients": when using backpropagation to compute the derivatives, the gradients that are propagated backwards (from the output layer to the earlier layers of the network) rapidly diminish in magnitude as the depth of the network increases. As a result, the derivative of the overall cost with respect to the weights in the earlier layers is very small, so when using gradient descent, the weights of the earlier layers change slowly. However, if the initial parameters are already close to the optimized values, gradient descent works well. That is the idea of "greedy layer-wise" training, where the layers of the network are trained one by one, as shown in the left part of Figure 7.

First, we use the backpropagation algorithm to train the first sparse autoencoder (including only one hidden layer), with the data themselves serving as the labels. In our proposed system, the input data is the 4K-dimensional feature vector u. As a result of the first-layer training, we get a set of network parameters \Theta^{(1)} (i.e., the parameters W^{(1)} and b^{(1)}) of the first-layer sparse autoencoder and a new dataset a^{(1)} (i.e., the features I shown in Figure 9), which is the output of the hidden


Figure 7: The illustration of greedy layer-wise training and stacking: (a) the procedure of greedy layer-wise training; (b) the procedure of stacking the sparse autoencoders.

layer neurons (the activity state of the hidden layer neurons) obtained with this parameter set. Next, we use a^{(1)} as the inputs to the second sparse autoencoder. After the second autoencoder is trained, we get the network parameters \Theta^{(2)} of the second sparse autoencoder and the new dataset a^{(2)} (i.e., the features II in Figure 9) for the training of the next single-layer neural network. We then repeat the above steps until the last layer (i.e., the softmax regression in our proposed system). Finally, we obtain a pretrained deep autoencoder by stacking all the autoencoders and use \Theta^{(1)} and \Theta^{(2)} as the initial parameters of this pretrained deep autoencoder. The features II are the high-level features and can be used as the training dataset for the softmax regression, discussed next.

3.3.2. Softmax Classifier. In our proposed system, the softmax classifier, based on softmax regression, is used to estimate the probabilities of the current input T-F point u_{(n,m)} belonging to orientation index j, with the high-level features a^{(2)}_{(n,m)} extracted by the deep autoencoder as inputs.

The softmax regression generalizes classical logistic regression (for binary classification) to multiclass classification problems. Different from logistic regression, the data label of the softmax regression is an integer value between 1 and J, where J is the number of data classes. More specifically, in our proposed system, for J classes, a dataset of M samples was used to train the nth deep network:

\bigl(a^{(2)}_{(n,1)}, g_{(n,1)}\bigr), \ldots, \bigl(a^{(2)}_{(n,m)}, g_{(n,m)}\bigr), \ldots, \bigl(a^{(2)}_{(n,M)}, g_{(n,M)}\bigr), \quad (17)

where g_{(n,m)} is the label of the mth sample a^{(2)}_{(n,m)} and is set to j if a^{(2)}_{(n,m)} belongs to class j. The architecture of the softmax classifier is shown in Figure 8.

Given an input a^{(2)}_{(n,m)}, the output p(g_{(n,m)} = j \mid a^{(2)}_{(n,m)}) = e^{\theta_j^T a^{(2)}_{(n,m)}} / \sum_{l=1}^{J} e^{\theta_l^T a^{(2)}_{(n,m)}} of output layer neuron j gives the probability of the input a^{(2)}_{(n,m)} belonging to class j. Similar to logistic regression, the output of the softmax regression can be written as follows:

h_\Theta\bigl(a^{(2)}_{(n,m)}\bigr) =
\begin{bmatrix}
p(g_{(n,m)} = 1 \mid a^{(2)}_{(n,m)}; \Theta) \\
\vdots \\
p(g_{(n,m)} = j \mid a^{(2)}_{(n,m)}; \Theta) \\
\vdots \\
p(g_{(n,m)} = J \mid a^{(2)}_{(n,m)}; \Theta)
\end{bmatrix}
= \frac{1}{\sum_{l=1}^{J} e^{\Theta_l^T a^{(2)}_{(n,m)}}}
\begin{bmatrix}
e^{\Theta_1^T a^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_j^T a^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_J^T a^{(2)}_{(n,m)}}
\end{bmatrix}. \quad (18)


Figure 8: A classical architecture of the softmax classifier.

Here, \Theta \in R^{J \times V} is the network parameter matrix of the softmax classifier (V is the dimension of the input a^{(2)}_{(n,m)} and J is the number of classes within the input data). \Theta_j^T represents the transpose of the jth row of \Theta and contains the parameters of the connections between neuron j of the output layer and the input sample a^{(2)}_{(n,m)}.

The softmax regression and logistic regression have the same minimization objective. The cost function J_{\mathrm{softmax}}(\Theta) of the softmax classifier can be generalized from the logistic regression cost function and written as follows:

J_{\mathrm{softmax}}(\Theta) = -\frac{1}{M} \left[ \sum_{m=1}^{M} \sum_{j=1}^{J} 1\{g_{(n,m)} = j\} \cdot \log p\bigl(g_{(n,m)} = j \mid a^{(2)}_{(n,m)}; \Theta\bigr) \right] + \frac{\lambda}{2} \sum_{j=1}^{J} \sum_{v=1}^{V} \bigl[\Theta_{jv}\bigr]^2, \quad (19)

where 1\{\cdot\} is the indicator function, so that 1\{\text{a true statement}\} = 1 and 1\{\text{a false statement}\} = 0. The element \Theta_{jv} of \Theta contains the parameter of the connection between neuron j of the output layer and neuron v of the input layer. The term (\lambda/2) \sum_{j=1}^{J} \sum_{v=1}^{V} (\Theta_{jv})^2 is a regularization term that forces the cost function J_{\mathrm{softmax}}(\Theta) to be strictly convex, where \lambda is a weight decay parameter predefined by the user.

In our proposed system, we extend the label g_{(n,m)} of the training dataset to a vector \mathbf{g}_{(n,m)} \in R^J, setting the jth element g_j of \mathbf{g}_{(n,m)} to 1 and the other elements to 0 when the input sample a^{(2)}_{(n,m)} belongs to class j, for example, \mathbf{g}_{(n,m)} = [0, \ldots, 0, 1, 0, \ldots, 0]^T.

The cost function J_{\mathrm{softmax}}(\Theta) can be written as follows by using vectorization:

J_{\mathrm{softmax}}(\Theta) = -\frac{1}{M} \left[ \sum_{m=1}^{M} \bigl(\mathbf{g}_{(n,m)}\bigr)^T \log h_\Theta\bigl(a^{(2)}_{(n,m)}\bigr) \right] + \frac{\lambda}{2} \sum_{j=1}^{J} \sum_{v=1}^{V} \bigl(\Theta_{jv}\bigr)^2, \quad (20)

where the logarithm is applied elementwise.
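The softmax output (18) and the vectorized cost (20), with the weight-decay term of (19), can be checked on random data. Sizes, features, and labels below are hypothetical; the max-subtraction inside `h` is a standard numerical-stability trick, not part of the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sizes: V-dimensional high-level features, J orientation classes,
# M training samples for the n-th deep network.
V, J, M = 8, 5, 300
A2 = rng.normal(size=(V, M))            # columns play the role of a^(2)_(n,m)
labels = rng.integers(0, J, M)          # class index per sample
G = np.eye(J)[labels].T                 # one-hot label vectors g_(n,m)

Theta = 0.01 * rng.normal(size=(J, V))  # softmax parameters, J x V
lam = 1e-4                              # weight decay lambda

def h(Theta, A):
    # Softmax output, eq (18); subtracting the column max does not change
    # the probabilities but avoids overflow in exp.
    logits = Theta @ A
    E = np.exp(logits - logits.max(axis=0, keepdims=True))
    return E / E.sum(axis=0, keepdims=True)

def cost(Theta, A, G, lam):
    # Vectorised cost, eq (20), with the weight-decay term of eq (19);
    # the logarithm is applied elementwise.
    P = h(Theta, A)
    return -np.sum(G * np.log(P)) / A.shape[1] + 0.5 * lam * np.sum(Theta ** 2)

P = h(Theta, A2)
print(P[:, 0].sum(), cost(Theta, A2, G, lam))
```

With near-zero parameters the predicted distribution is close to uniform, so the cross-entropy part of the cost sits near log J, which is a convenient sanity check before handing the objective to L-BFGS.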

The softmax classifier can be trained using the L-BFGS algorithm on a dataset in order to find an optimal parameter set \Theta that minimizes the cost function J_{\mathrm{softmax}}(\Theta). In our proposed system, the dataset for softmax classifier training is composed of two parts. The first part is the input sample a^{(2)}_{(n,m)} (features II) calculated from the last hidden layer of the deep autoencoder. The second part is the data label \mathbf{g}_{(n,m)} \in R^J, where the jth element g_j of \mathbf{g}_{(n,m)} is set to 1 when the input sample belongs to the source located in the range of DOAs of index j.

We stack the softmax classifier and the deep autoencoder together after the training is completed, as shown in the left part of Figure 9. Finally, we use the training dataset and the L-BFGS algorithm to fine-tune the deep network with the initial parameters W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}, W^{(3)}, and b^{(3)} obtained from the sparse autoencoder and softmax classifier training. The training phase of the sparse autoencoders and the softmax classifier is called the pretraining phase, and the stacking/training of the overall network, that is, the deep network, is called the fine-tuning phase. In the pretraining phase, the shallow neural networks, that is, the sparse autoencoders and the softmax classifier, are trained individually, using the output of the current layer as the input of the next layer. In the fine-tuning phase, we use the L-BFGS algorithm (a gradient-based method) to minimize the difference between the output of the deep network and the labels of the training dataset. The gradient-based optimization works well because the initial parameters obtained from the pretraining phase already encode a significant amount of "prior" information about the input data through unsupervised learning.

4. Experiments

In this section, we describe the communication system used in the simulation and show the separation results for different SNRs and data rates. It is important to note that the bit error rate (BER) is an extremely significant performance index for a pragmatic communication system; therefore, we evaluate the separation quality of the pragmatic system by BER and regard the BER performance of single-user MFSK communication under the same conditions as the baseline.

In view of the communication among multiple UUVs, there is no need to take the multipath propagation of the channels into consideration because of the relatively close distance between the nodes. We thus adopt the additive white Gaussian noise (AWGN) channel model and the Monte Carlo method to calculate the system BER in the simulation, as in the simulation system shown in Figure 10, and we also take the BER of


Figure 9: The illustration of stacking the deep autoencoder and softmax classifier.

Figure 10: The simulation system used in the experiment (two MFSK sources with a relative time delay pass through AWGN channels; the BER of the proposed source separation system after demodulation is compared with the single-user baseline).

the single-user communication as the baseline. The sampling rate of our simulation is 16 kHz and the modulation order is 16, without any error-correction coding. The bandwidth of the MFSK modulation/demodulation is 640 Hz, and the bandwidth ratio varies from 0.625 to 0.125, corresponding to bit rates from 400 bits/s to 80 bits/s.
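The baseline BER measurement can be sketched with a Monte Carlo loop. The sketch below simplifies to noncoherent binary FSK (the paper's baseline is 16-FSK at a 16 kHz sampling rate); the tone frequencies, symbol length, and symbol count are illustrative choices that keep the two tones orthogonal over a symbol.

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo BER of noncoherent binary FSK over AWGN: an integer number of
# cycles per symbol for both tones makes the matched filters orthogonal.
fs, f0, f1, L = 16000, 1000.0, 2000.0, 160     # 160 samples per symbol
t = np.arange(L) / fs
tones = np.stack([np.exp(-2j * np.pi * f0 * t), np.exp(-2j * np.pi * f1 * t)])

def ber(snr_db, n_sym=20000):
    bits = rng.integers(0, 2, n_sym)
    freqs = np.where(bits == 0, f0, f1)
    s = np.cos(2 * np.pi * freqs[:, None] * t)      # one symbol per row
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10.0))    # signal power of cos is 0.5
    r = s + sigma * rng.normal(size=s.shape)        # AWGN channel
    e = np.abs(r @ tones.conj().T)                  # noncoherent energy detector
    return np.mean(np.argmax(e, axis=1) != bits)

print(ber(0), ber(-20))
```

At 0 dB per-sample SNR the 160-sample integration leaves essentially no errors, while at -20 dB the error rate becomes substantial, mirroring the qualitative trend of the baseline tables below.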

First, suppose we can obtain the optimal T-F mask, that is, use (6) to calculate the T-F mask; we then obtain the simulated BER as a function of SNR and bandwidth ratio and compare it with the baseline.

As shown in Tables 1 and 2, it can be seen from the simulation results that a BER similar to the baseline can be attained by the T-F mask method when the bandwidth ratio is less than 0.417. Furthermore, the BER with the T-F mask is even lower than that of the baseline under the same conditions when the SNR is very low, that is, SNR = -20 dB. The reason behind this phenomenon is that, as can be seen from (6), some frequency points are set to zero by the T-F mask, which objectively raises the SNR of the signal and yields a lower BER. In the actual system, however, it is a highly challenging task to accurately estimate the T-F mask at such low SNR, which is also verified in the later simulation.

According to Section 3, we estimate the T-F mask by using the orientation information of the nodes, that is, the DOA of the MFSK signal received by the nodes. Therefore, we introduce a time delay \tau to simulate the pairwise hydrophone receiver in the simulation and divide the horizontal space into 37 blocks, covering -90 to +90 degrees with a step size of 5 degrees. We estimate the T-F mask with the source separation system described in Section 3 and compare its BER performance to the baseline under different SNRs and bandwidth ratios; the simulation results are shown in Table 3.


Table 1: The BER of the simulation system with the optimum T-F mask.

Bandwidth ratio   SNR = 30 dB   SNR = 20 dB   SNR = 10 dB   SNR = 5 dB   SNR = 0 dB
0.625             5.30E-02      5.30E-02      5.36E-02      5.50E-02     5.95E-02
0.500             2.96E-02      2.95E-02      3.03E-02      3.16E-02     3.56E-02
0.417             1.40E-02      1.38E-02      1.38E-02      1.39E-02     1.39E-02
0.313             9.00E-06      1.60E-05      1.30E-05      2.00E-05     4.50E-05
0.250             3.00E-06      4.00E-06      1.00E-06      4.00E-06     4.00E-06
0.208             0.00E+00      1.00E-06      1.00E-06      0.00E+00     0.00E+00
0.156             0.00E+00      0.00E+00      0.00E+00      0.00E+00     0.00E+00
0.125             0.00E+00      0.00E+00      0.00E+00      0.00E+00     0.00E+00

Bandwidth ratio   SNR = -5 dB   SNR = -10 dB   SNR = -15 dB   SNR = -20 dB
0.625             7.64E-02      1.47E-01       3.40E-01       5.62E-01
0.500             4.26E-02      6.82E-02       1.98E-01       4.24E-01
0.417             1.48E-02      2.67E-02       1.10E-01       3.09E-01
0.313             1.25E-04      3.08E-03       3.23E-02       1.57E-01
0.250             9.00E-06      1.12E-04       7.13E-03       7.54E-02
0.208             2.00E-06      2.19E-04       4.69E-03       4.62E-02
0.156             0.00E+00      2.10E-05       1.13E-03       1.46E-02
0.125             0.00E+00      0.00E+00       2.00E-05       4.02E-03

Table 2: The BER of the baseline.

Bandwidth ratio   SNR = 30 dB   SNR = 20 dB   SNR = 10 dB   SNR = 5 dB   SNR = 0 dB
0.625             0.00E+00      0.00E+00      0.00E+00      0.00E+00     0.00E+00
0.500             0.00E+00      0.00E+00      0.00E+00      0.00E+00     0.00E+00
0.417             0.00E+00      0.00E+00      0.00E+00      0.00E+00     0.00E+00
0.313             0.00E+00      0.00E+00      0.00E+00      0.00E+00     0.00E+00
0.250             0.00E+00      0.00E+00      0.00E+00      0.00E+00     0.00E+00
0.208             0.00E+00      0.00E+00      0.00E+00      0.00E+00     0.00E+00
0.156             0.00E+00      0.00E+00      0.00E+00      0.00E+00     0.00E+00
0.125             0.00E+00      0.00E+00      0.00E+00      0.00E+00     0.00E+00

Bandwidth ratio   SNR = -5 dB   SNR = -10 dB   SNR = -15 dB   SNR = -20 dB
0.625             4.43E-04      5.08E-02       3.62E-01       7.26E-01
0.500             1.00E-06      7.91E-03       2.16E-01       6.47E-01
0.417             0.00E+00      7.13E-04       1.19E-01       5.70E-01
0.313             0.00E+00      0.00E+00       3.42E-02       4.35E-01
0.250             0.00E+00      0.00E+00       9.65E-03       3.28E-01
0.208             0.00E+00      0.00E+00       3.40E-03       2.53E-01
0.156             0.00E+00      0.00E+00       2.92E-04       1.41E-01
0.125             0.00E+00      0.00E+00       1.90E-05       7.23E-02

As shown in Table 3, the BER performance of the proposed system is much the same as that of the baseline when SNR > 20 dB and the bandwidth ratio is less than 0.417, which is consistent with the result obtained using the optimal T-F mask. When SNR < 20 dB, however, the BER performance of the proposed system begins to degrade, since the lower SNR of the signals causes a large error in the T-F mask estimation, which results in system performance degradation.

5. Summary

In this paper, we address the problem of control packet collision avoidance, which exists widely in multichannel MAC protocols, and, on the basis of the sparsity of the noncoherent modulation MFSK in the time-frequency domain, separate the sources from the MFSK mixture caused by packet collisions through the use of the T-F masking method. First, we characterize the sparsity of the MFSK signal with the bandwidth ratio and demonstrate the relation between the bandwidth ratio and the PSR, SIR, and WDO by means of simulation experiments. Then, we establish the source separation system based on deep networks and the model of the MFSK communication system, taking single-user MFSK communication as the baseline against which to compare the BER performance of the proposed system under different conditions of SNR and bandwidth ratio. The simulation results show that, first, the optimal T-F masking can obtain


Table 3 The BER of the proposed system

Bandwidth ratio SNR = 30 dB SNR = 20 dB SNR = 10 dB SNR = 5 dB SNR = 0 dB0625 580E minus 02 730E minus 02 636E minus 02 630E minus 02 359E minus 010500 326E minus 02 345E minus 02 393E minus 02 416E minus 02 336E minus 010417 146E minus 02 178E minus 02 148E minus 02 499E minus 02 214E minus 010313 100E minus 05 689E minus 04 950E minus 03 300E minus 02 190E minus 010250 700E minus 06 210E minus 04 600E minus 03 100E minus 02 160E minus 010208 100E minus 06 900E minus 05 300E minus 03 800E minus 03 130E minus 010156 000E + 00 300E minus 05 980E minus 04 200E minus 03 130E minus 010125 000E + 00 700E minus 06 100E minus 05 900E minus 04 120E minus 01Bandwidth ratio SNR = minus5 dB SNR = minus10 dB SNR = minus15 dB SNR = minus20 dB 0625 494E minus 01 0500 426E minus 01 0417 448E minus 01 0313 425E minus 01 0250 490E minus 01 0208 420E minus 01 0156 400E minus 01 0125 250E minus 01

the same BER performance as the baseline under lowerbandwidth ratio second the proposed system could obtainthe similar BER performance as the baseline under higherSNR third the BER performance of the proposed systemdeclines rapidly under the condition of lower SNR for lowerSNR leads to a greater error in the estimation of T-F mask Inthe future work we will adjust the structure of deep networksin subsequent research work to promote the performanceof proposed system under the condition of low SNR andmultipath propagation presenting in the underwater channelAs a future research topic it also deserves the possibility thatthe bioinspired computing models and algorithms are usedfor the underwater multichannel MAC protocols such as theP systems (inspired from the structure and the functioningof cells) [19 20] and evolutionary computation (motivatedby evolution theory of Darwin) [21 22]

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This research was supported partially by the Natural Science Basic Research Plan in Shaanxi Province of China (Program no. 2014JQ8355).

References

[1] Z. Li, C. Yang, N. Ding, S. Bogdan, and T. Ge, "Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints," International Journal of Control, Automation and Systems, vol. 10, no. 2, pp. 421–429, 2012.

[2] A. S. K. Annamalai, R. Sutton, C. Yang, P. Culverhouse, and S. Sharma, "Robust adaptive control of an uninhabited surface vehicle," Journal of Intelligent & Robotic Systems, vol. 78, no. 2, pp. 319–338, 2015.

[3] K. Chen, M. Ma, E. Cheng, F. Yuan, and W. Su, "A survey on MAC protocols for underwater wireless sensor networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1433–1447, 2014.

[4] R. Otnes, A. Asterjadhi, P. Casari et al., Underwater Acoustic Networking Techniques, Springer, Berlin, Germany, 2012.

[5] S. Climent, A. Sanchez, J. V. Capella, N. Meratnia, and J. J. Serrano, "Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers," Sensors, vol. 14, no. 1, pp. 795–833, 2014.

[6] I. M. Khalil, Y. Gadallah, M. Hayajneh, and A. Khreishah, "An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks," Sensors, vol. 12, no. 7, pp. 8782–8805, 2012.

[7] F. Bouabdallah and R. Boutaba, "A distributed OFDMA medium access control for underwater acoustic sensors networks," in Proceedings of the IEEE International Conference on Communications (ICC '11), June 2011.

[8] C. C. Hsu, K. F. Lai, C. F. Chou, and K. C. J. Lin, "ST-MAC: spatial-temporal MAC scheduling for underwater sensor networks," in Proceedings of the 28th Conference on Computer Communications (INFOCOM '09), pp. 1827–1835, IEEE, Rio de Janeiro, Brazil, April 2009.

[9] Y.-D. Chen, C.-Y. Lien, S.-W. Chuang, and K.-P. Shih, "DSSS: a TDMA-based MAC protocol with Dynamic Slot Scheduling Strategy for underwater acoustic sensor networks," in Proceedings of the IEEE OCEANS, pp. 1–6, Santander, Spain, June 2011.

[10] K. Kredo II, P. Djukic, and P. Mohapatra, "STUMP: exploiting position diversity in the staggered TDMA underwater MAC protocol," in Proceedings of the IEEE 28th Conference on Computer Communications (INFOCOM '09), pp. 2961–2965, April 2009.


[11] M. Molins and M. Stojanovic, "Slotted FAMA: a MAC protocol for underwater acoustic networks," in Proceedings of the IEEE OCEANS 2006, Boston, Mass, USA, September 2006.

[12] N. Chirdchoo, W.-S. Soh, and K. C. Chua, "RIPT: a receiver-initiated reservation-based protocol for underwater acoustic networks," IEEE Journal on Selected Areas in Communications, vol. 26, no. 9, pp. 1744–1753, 2008.

[13] Z. Zhou, Z. Peng, P. Xie, J.-H. Cui, and Z. Jiang, "Exploring random access and handshaking techniques in underwater wireless acoustic networks," EURASIP Journal on Wireless Communications and Networking, vol. 2013, article 95, 2013.

[14] C.-M. Chao et al., "A multiple rendezvous multichannel MAC protocol for underwater sensor networks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 1, pp. 128–138, 2013.

[15] H. Ramezani and G. Leus, "DMC-MAC: dynamic multichannel MAC in underwater acoustic networks," in Proceedings of the 21st European Signal Processing Conference (EUSIPCO '13), pp. 1–5, Marrakech, Morocco, September 2013.

[16] Z. Zhou, Z. Peng, J.-H. Cui, and Z. Jiang, "Handling triple hidden terminal problems for multichannel MAC in long-delay underwater sensor networks," IEEE Transactions on Mobile Computing, vol. 11, no. 1, pp. 139–154, 2012.

[17] O. Yilmaz and S. Rickard, "Blind separation of speech mixtures via time-frequency masking," IEEE Transactions on Signal Processing, vol. 52, no. 7, pp. 1830–1847, 2004.

[18] T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Auto-encoder bottleneck features using deep belief networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '12), pp. 4153–4156, Kyoto, Japan, March 2012.

[19] X. Zhang, L. Pan, and A. Paun, "On the universality of axon P systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816–2829, 2015.

[20] X. Zhang, Y. Liu, B. Luo, and L. Pan, "Computational power of tissue P systems for generating control languages," Information Sciences, vol. 278, pp. 285–297, 2014.

[21] X. Zhang, Y. Tian, and Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761–776, 2015.

[22] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, "An efficient approach to nondominated sorting for evolutionary multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 201–213, 2015.


With the short-time Fourier transform, we obtain the model of the MFSK mixture in the time-frequency domain through

\[
X(m,f) = \sum_{i=1}^{N} S_i(m,f), \tag{2}
\]

where m = 1, ..., M and f = 1, ..., F are the indexes of the time frame and the frequency point, respectively. Assuming X(m,f) is W-Disjoint Orthogonal, at most one of the N node signals will be nonzero for a given (m,f). To separate a node signal from the mixture X(m,f), we create a time-frequency mask for each node and apply these masks to the mixture to obtain the original node signals. For instance, defining the mask \(\mathcal{M}_i\) for node i as

\[
\mathcal{M}_i =
\begin{cases}
1, & S_i(m,f) \neq 0 \\
0, & \text{otherwise,}
\end{cases} \tag{3}
\]

which actually is an indicator function, the MFSK signal from node i will be derived via

\[
\hat{S}_i(m,f) = \mathcal{M}_i \times X(m,f), \quad \forall m, f. \tag{4}
\]

However, for the MFSK signal, the assumption of W-Disjoint Orthogonality is not strictly satisfied. When the sparsity of the MFSK signals in the time-frequency domain is taken into consideration, approximate W-Disjoint Orthogonality is satisfied. In order to measure the W-Disjoint Orthogonality of the T-F mask, the combined performance criteria PSR and SIR, which were proposed by Yilmaz and Rickard in [17], will be used:

\[
\mathrm{PSR}_i = \frac{\left\|\mathcal{M}_i S_i(m,f)\right\|^2}{\left\|S_i(m,f)\right\|^2},
\qquad
\mathrm{SIR}_i = \frac{\left\|\mathcal{M}_i S_i(m,f)\right\|^2}{\left\|\mathcal{M}_i \sum_{j=1, j \neq i}^{N} S_j(m,f)\right\|^2},
\]
\[
\mathrm{WDO}_i = \frac{\left\|\mathcal{M}_i S_i(m,f)\right\|^2 - \left\|\mathcal{M}_i \sum_{j=1, j \neq i}^{N} S_j(m,f)\right\|^2}{\left\|S_i(m,f)\right\|^2}
= \mathrm{PSR}_i - \frac{\mathrm{PSR}_i}{\mathrm{SIR}_i}. \tag{5}
\]

In view of the quite small probability of a collision of more than two data packets in the control channel, we produced a series of MFSK mixtures including only two node signals and calculated the PSR, SIR, and WDO of the T-F mask by the Monte Carlo method. Therein, the T-F mask for source separation is derived as follows:

\[
\mathcal{M}_i =
\begin{cases}
1, & 20 \log \dfrac{\left|S_i(m,f)\right|}{\left|X(m,f)\right|} \ge -6\,\mathrm{dB} \\
0, & \text{otherwise,}
\end{cases}
\qquad \text{where } X(m,f) = \sum_{i=1}^{N} S_i(m,f),\ N = 2. \tag{6}
\]
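As an illustration, the mask rule of (6) and the criteria of (5) can be sketched for two synthetic sparse spectrograms. This is a minimal numpy sketch with hypothetical random binary T-F maps standing in for the MFSK spectrograms; it is not the paper's simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic sparse T-F magnitude maps standing in for the MFSK
# spectrograms S_1 and S_2 (hypothetical data, not the paper's signals).
M, F = 64, 128
S1 = np.zeros((M, F)); S2 = np.zeros((M, F))
S1[rng.random((M, F)) < 0.05] = 1.0   # ~5% of bins occupied
S2[rng.random((M, F)) < 0.05] = 1.0
X = S1 + S2                            # mixture, eq. (2)

def tf_mask(Si, X, thresh_db=-6.0):
    """Eq. (6): keep bins where source i dominates the mixture."""
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio_db = 20.0 * np.log10(np.abs(Si) / np.maximum(np.abs(X), 1e-12))
    return (ratio_db >= thresh_db).astype(float)

def psr_sir_wdo(Si, Sj, mask):
    """Eq. (5) for source i with interferer j."""
    psr = np.sum((mask * Si) ** 2) / np.sum(Si ** 2)
    interference = np.sum((mask * Sj) ** 2)
    sir = np.sum((mask * Si) ** 2) / max(interference, 1e-12)
    wdo = psr - psr / sir
    return psr, sir, wdo

M1 = tf_mask(S1, X)
psr, sir, wdo = psr_sir_wdo(S1, S2, M1)
print(f"PSR={psr:.3f}  WDO={wdo:.3f}")
```

With such sparse maps, overlapping bins fall exactly below the -6 dB threshold and are excluded, so the WDO stays close to the PSR, mirroring the high-sparsity regime discussed above.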

Figure 2: The PSR result (PSR versus bandwidth ratio).

According to the definition of W-Disjoint Orthogonality, the corresponding T-F mask becomes closer to W-Disjoint Orthogonality as the signal becomes more sparse. The sparsity of the MFSK signal is reflected by the bandwidth ratio: the lower the bandwidth ratio of the signal, the higher its sparsity.

By the conclusion of [17], a mixed signal can be demixed through a T-F mask when the value of WDO is close to 1. In accordance with the simulation results shown in Figures 2, 3, and 4, we believe that there exists a T-F mask which can separate the sources from the MFSK mixture with high quality when the bandwidth ratio is less than 0.5.

3. The Source Separation System Utilizing Deep Networks

In this section, we outline the proposed separation system, discuss the low-level features we use, and give details about the deep networks, including their structure and training method.

3.1. Observation in the Time-Frequency Domain and Low-Level Features Used in the Deep Networks. We assume that there are K hydrophones in the node and that the mixture includes several MFSK signals from N nodes. The mixture received by hydrophone k can be obtained from

\[
x_k(t) = \sum_{i=1}^{N} s_i * h_{ik} + n_k(t), \tag{7}
\]

where \(h_{ik}\) is the channel impulse response between hydrophone k and node i and \(*\) denotes convolution. Then, by using the short-time Fourier transform (STFT), we can obtain the mixture signal mapped into the time-frequency domain from

\[
X_k(m,f) = \sum_{i=1}^{N} S_i(m,f) H_{ki}(f) + N_k(m,f), \tag{8}
\]
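The mixture model of (7) and its STFT representation in (8) can be sketched as follows. The sample rate, the two tone "sources", and the two-tap impulse responses are illustrative assumptions, not the paper's experimental settings, and the minimal STFT is our own helper.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, T = 8000, 1.0                      # illustrative sample rate and duration
t = np.arange(int(fs * T)) / fs

# Two toy narrowband sources (placeholders for the node signals s_i).
s = [np.sin(2 * np.pi * 600 * t), np.sin(2 * np.pi * 1500 * t)]

# Hypothetical short channel impulse responses h_ik (direct path + echo).
h = [np.array([1.0, 0.0, 0.3]), np.array([0.8, 0.5, 0.0])]

# Eq. (7): x_k(t) = sum_i s_i * h_ik + n_k(t), for one hydrophone k.
x = sum(np.convolve(si, hi)[: len(t)] for si, hi in zip(s, h))
x += 0.01 * rng.standard_normal(len(t))  # additive noise n_k(t)

def stft(sig, nfft=256, hop=128):
    """Minimal STFT: Hann-windowed frames; rows = time, cols = frequency."""
    win = np.hanning(nfft)
    frames = [sig[i:i + nfft] * win
              for i in range(0, len(sig) - nfft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

Xk = stft(x)               # X_k(m, f) of eq. (8)
print(Xk.shape)
```

The two source tones show up as dominant frequency bins of \(X_k(m,f)\), which is exactly the structure the T-F masks of Section 2 exploit.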


Figure 3: The SIR result (SIR versus bandwidth ratio).

Figure 4: The WDO result (WDO versus bandwidth ratio).

where m = 1, ..., M and f = 1, ..., F are the time frame and frequency bin indices, respectively, and \(X_k(m,f) = \mathcal{F}(x_k)\), \(S_i = \mathcal{F}(s_i)\), and \(N_k = \mathcal{F}(n_k)\), where \(\mathcal{F}(\cdot)\) denotes the short-time Fourier transform. We use \(X_{k|i}(m,f)\) to represent the component of node i in the mixture received by hydrophone k. Thus

\[
X_{k|i}(m,f) = S_i(m,f) \cdot H_{ki}(f). \tag{9}
\]

As shown in Section 2, the MFSK signal is approximately W-Disjoint Orthogonal when the bandwidth ratio is less than 0.5. Then, as shown in (10), the mixture \(X_k\) received by hydrophone k can be demixed by using the T-F masks \(\mathcal{M}_i\) corresponding to the nodes i, i = 1, ..., N. Consider the following:

\[
X_k(m,f) = \sum_{i=1}^{N} \mathcal{M}_i \cdot X_k(m,f) + N_k(m,f). \tag{10}
\]

It is a natural choice to use the orientation of the MFSK signals to estimate the corresponding T-F masks of the nodes with the pairwise hydrophones. Thus the time-frequency mask is related to the location information of the current input signals (the channel impulse response and the array manifold). That is,

\[
\mathcal{M}_i(m,f) = P\left(X_i(m,f) \in \text{node } i \mid S_1, \ldots, S_N, H_{1k}, \ldots, H_{Nk}\right). \tag{11}
\]

Obviously, for the other hydrophone outputs, the time-frequency mask \(\mathcal{M}\) corresponding to the same node remains the same. When we obtain the time-frequency mask \(\mathcal{M}_i\) corresponding to node i, we can then separate the single-user communication signal of node i through the ISTFT operation. The mixture signals received by the array include N single-user signals from N nodes with different locations. The probability distribution of a single T-F point belonging to a certain node i can be described by a Gaussian distribution with mean \(\mu_i\) and variance \(\sigma_i\). Here \(\mu_i\) and \(\sigma_i\) can be interpreted as the mean value and variance of the direction of arrival (DOA) of the signal coming from node i, respectively. According to the central limit theorem, over all T-F points the probability distribution of the mixture signals including N node signals can be described by a mixture of N Gaussian distributions, namely, a Gaussian mixture model (GMM). Therefore, we can describe the mixture pattern of the signals through a GMM based on spatial features. The parameters of the GMM can be obtained on the basis of the existing observation datasets. Thus the probability of each T-F point belonging to node 1, node 2, ..., and node N can be estimated. Then the source signals can be recovered through the ISTFT operation.
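The GMM view described above can be illustrated with a toy experiment: per-T-F-point DOA estimates drawn from two nodes are fitted with a two-component Gaussian mixture via plain EM. All numbers (node DOAs, spreads, sample counts) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-T-F-point DOA estimates (degrees): two nodes at
# roughly 30 and 70 degrees, mimicking the GMM description above.
doa = np.concatenate([rng.normal(30, 3, 500), rng.normal(70, 3, 500)])

# Plain two-component 1-D Gaussian mixture fitted with EM.
mu, var, pi = np.array([20.0, 80.0]), np.array([25.0, 25.0]), np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each T-F point
    pdf = pi / np.sqrt(2 * np.pi * var) * np.exp(-(doa[:, None] - mu) ** 2 / (2 * var))
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and variances
    nk = resp.sum(axis=0)
    mu = (resp * doa[:, None]).sum(axis=0) / nk
    var = (resp * (doa[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(doa)

print(np.sort(mu))  # component means, one per node DOA
```

The responsibilities computed in the E-step play the role of the per-node occupation probabilities, that is, a soft T-F mask over the DOA feature.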

3.2. Outline of the Source Separation System. As shown in Figure 5, the inputs to the system are the two-channel MFSK mixtures. We perform the short-time Fourier transform (STFT) on each channel and obtain the T-F representations of the input signals, \(X_A(m,f)\) and \(X_B(m,f)\), where m = 1, ..., M and f = 1, ..., F are the time frame and frequency bin indices, respectively. The low-level features, that is, the mixing vector (MV) and the interaural phase and level differences (IPD, ILD), which can be derived from (12)-(13), are then estimated at each T-F unit. Consider

\[
\mathbf{z}(m,f) = \frac{\mathbf{W}(f)\,\mathbf{x}(m,f)}{\left\|\mathbf{W}(f)\,\mathbf{x}(m,f)\right\|},
\qquad
\mathbf{x}(m,f) = \frac{\left[X_A(m,f),\, X_B(m,f)\right]^T}{\left\|\left[X_A(m,f),\, X_B(m,f)\right]^T\right\|}, \tag{12}
\]

where \(\mathbf{W}(f)\) is a whitening matrix with each row being one eigenvector of \(E(\mathbf{x}(m,f)\mathbf{x}^H(m,f))\), the superscript H denotes the Hermitian transpose, and \(\|\cdot\|\) is the Frobenius norm.

\[
\alpha(m,f) = 20 \log_{10}\left(\left|\frac{X_A(m,f)}{X_B(m,f)}\right|\right),
\qquad
\phi(m,f) = \angle\left(\frac{X_A(m,f)}{X_B(m,f)}\right), \tag{13}
\]
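A minimal sketch of the low-level feature computation in (12)-(13), assuming random complex T-F data in place of real hydrophone STFTs; the helper name `mixing_vector` is ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
M, F = 8, 16
# Hypothetical two-channel T-F representations X_A, X_B.
XA = rng.standard_normal((M, F)) + 1j * rng.standard_normal((M, F))
XB = rng.standard_normal((M, F)) + 1j * rng.standard_normal((M, F))

# ILD (eq. 13): level difference in dB; IPD: phase difference in radians.
ild = 20 * np.log10(np.abs(XA / XB))
ipd = np.angle(XA / XB)

# Mixing vector (eq. 12) at one frequency bin, with a whitening matrix
# W(f) built from the eigenvectors of the spatial covariance at that bin.
def mixing_vector(xa_f, xb_f):
    x = np.stack([xa_f, xb_f])                  # 2 x M samples at this f
    cov = x @ x.conj().T / x.shape[1]           # estimate of E{x x^H}
    _, vecs = np.linalg.eigh(cov)
    W = vecs.conj().T                           # rows are eigenvectors
    z = W @ (x / np.linalg.norm(x, axis=0))     # whiten the unit-norm x
    return z / np.linalg.norm(z, axis=0)        # unit-norm MV per frame

z = mixing_vector(XA[:, 0], XB[:, 0])
print(ild.shape, ipd.shape, z.shape)
```

Each T-F unit thus yields the feature vector \(\mathbf{u}(m,f) = [\mathbf{z}^T, \alpha, \phi]^T\) that is later fed to the deep networks.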


Figure 5: The architecture of the proposed system using deep neural network based time-frequency masking for blind source separation.

where \(|\cdot|\) takes the absolute value of its argument and \(\angle(\cdot)\) finds the phase angle.

Next we group the low-level features into N blocks (only along the frequency bins f). Block n includes K frequency bins ((n-1)K+1, ..., nK), where K = F/N. We build N deep networks, each corresponding to one block, and use them to estimate the directions of arrival (DOAs) of the sources. The low-level features fed into the deep networks are composed of the IPD, ILD, and MV, that is, \(\mathbf{u}(m,f) = [\mathbf{z}^T(m,f), \alpha(m,f), \phi(m,f)]^T\). Through unsupervised learning with the sparse autoencoder [18] in the deep networks, high-level features (coded positional information of the sources) are extracted and used as inputs for the output layer (i.e., the softmax regression) of the networks. The output of the softmax regression is a source occupation probability (i.e., the time-frequency mask) for each block of the mixtures (through the ungroup operation, T-F units in the same block are assigned the same source occupation probability). Then the sources can be recovered by applying the T-F masks to the mixtures, followed by the inverse STFT (ISTFT). The deep networks are pretrained by using a greedy layer-wise training method.
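The group/ungroup step just described can be sketched as follows, assuming F is divisible by N and using a random placeholder for the per-block probability that a softmax output would provide.

```python
import numpy as np

F, N = 128, 8          # frequency bins and number of blocks/deep networks
K = F // N             # bins per block, K = F / N

# Hypothetical per-block source-occupation probabilities for one time
# frame, as the softmax outputs would give them (random placeholder).
rng = np.random.default_rng(4)
block_prob = rng.random(N)

# "Ungroup": every T-F unit in block n receives block n's probability.
mask_frame = np.repeat(block_prob, K)        # length-F T-F mask row

# Applying the T-F mask to one frame of the mixture spectrum; the masked
# frames would then go through the ISTFT to recover the source.
X_frame = rng.standard_normal(F) + 1j * rng.standard_normal(F)
S_hat = mask_frame * X_frame

print(mask_frame.shape, S_hat.shape)
```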

3.3. The Deep Networks. As described at the beginning of this section, we group the low-level features into N blocks and build N individual deep networks, which have the same architecture, to classify the DOAs of the current input T-F point in each block. The deep network used to estimate the T-F mask is composed of a two-layer deep autoencoder and a one-layer softmax classifier.

More specifically, we split the whole space into J ranges with respect to the hydrophones and separate the target and interferers based on the different orientation ranges (DOAs with respect to the receiver node) in which they are located. We apply the softmax classifier to perform the classification task, and the inputs to the classifier, that is, the high-level features \(\mathbf{a}^{(2)}\), are produced by the deep autoencoder. Assuming that the position of the target in the current input T-F point remains unchanged, the deep network estimates the probability \(p(g_i = i \mid \mathbf{u}(m,f))\) that the orientation of the current input sample belongs to orientation index i. With the estimated orientation (obtained by selecting the index with maximum probability) of each input T-F point, we cluster the T-F points having the same orientation index to get the probability mask and obtain the T-F mask from the probability mask through the ungroup operation. Note that each T-F point in the same block is assigned the same probability. The number of sources can also be estimated from the probability mask by using a predefined probability threshold, typically chosen as 0.2 in our experiments.
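A small sketch of the source-counting rule with the 0.2 threshold described above, using a hypothetical probability mask over J = 5 orientation ranges.

```python
import numpy as np

# Hypothetical probability mask: the average occupation probability the
# classifier assigns to each of J = 5 orientation ranges.
prob_mask = np.array([0.55, 0.02, 0.31, 0.04, 0.08])

THRESH = 0.2                                   # predefined threshold from the text
num_sources = int(np.sum(prob_mask > THRESH))  # orientations counted as sources
orientation = int(np.argmax(prob_mask))        # most likely orientation index
print(num_sources, orientation)
```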

3.3.1. Deep Autoencoder. An autoencoder is an unsupervised learning algorithm based on backpropagation. It aims to learn an approximation \(\hat{\mathbf{u}}\) of the input \(\mathbf{u}\). It appears to be learning a trivial identity function, but by placing constraints on the learning process, such as limiting the number of activated neurons, it discloses some interesting structure in the data. Figure 6 shows the architecture of a single-layer autoencoder. The difference between a classic neural network and an autoencoder is the training objective: the objective of the classic neural network is to minimize the difference between the label of the input training data and the output of the network, whereas the objective of the autoencoder is to minimize the difference between the input training dataset and the output of the network. As shown in Figure 6, the output of the autoencoder can be defined as \(\hat{\mathbf{u}} = \mathrm{sigm}(\mathbf{W}^{(2)}\mathbf{a}^{(1)} + \mathbf{b}^{(2)})\), with \(\mathbf{a}^{(1)} = \mathrm{sigm}(\mathbf{W}^{(1)}\mathbf{u} + \mathbf{b}^{(1)})\), where the function \(\mathrm{sigm}(u) = 1/(1 + \exp(-u))\) is the logistic function, \(\mathbf{W}^{(1)} \in \mathbb{R}^{V \times Y}\), \(\mathbf{b}^{(1)} \in \mathbb{R}^{V}\), \(\mathbf{W}^{(2)} \in \mathbb{R}^{Y \times V}\), and \(\mathbf{b}^{(2)} \in \mathbb{R}^{Y}\); V is the number of hidden layer neurons, and Y is the number of input layer neurons, which is the same as that of the output layer neurons.

Figure 6: The single-layer autoencoder.

\(\mathbf{W}^{(1)}\) is a matrix containing the weights of the connections between the input layer neurons and the hidden layer neurons. Similarly, \(\mathbf{W}^{(2)}\) contains the weights of the connections between the hidden layer neurons and the output layer neurons. \(\mathbf{b}^{(1)}\) is the vector of bias values added to the hidden layer neurons, and \(\mathbf{b}^{(2)}\) is the corresponding vector for the output layer neurons. \(\Theta\) refers to the parameter set composed of the weights \(\mathbf{W}\) and the biases \(\mathbf{b}\). The neuron v is "active" when its output \(a_v\) is close to 1, which means that \(\mathrm{sigm}(\mathbf{W}_v^{(1)}\mathbf{u} + b_v^{(1)}) \approx 1\); for "inactive" neurons, the output is close to 0, that is, \(\mathrm{sigm}(\mathbf{W}_v^{(1)}\mathbf{u} + b_v^{(1)}) \approx 0\). Here \(\mathbf{W}_v^{(1)}\) denotes the weights of the connections between hidden layer neuron v and the input layer neurons, which is the v-th row of the matrix \(\mathbf{W}^{(1)}\), and \(b_v^{(1)}\) is the v-th element of the vector \(\mathbf{b}^{(1)}\), the bias value added to hidden layer neuron v. The superscript i of \(\mathbf{b}^{(i)}\), \(\mathbf{W}^{(i)}\), and \(\mathbf{a}^{(i)}\) denotes the i-th layer of the deep network.

With the sparsity constraint, most of the neurons are assumed to be inactive. More specifically, \(a_v = \mathrm{sigm}(\mathbf{W}_v^{(1)}\mathbf{u} + b_v^{(1)})\) denotes the activation value of hidden layer unit v in the autoencoder. Generalizing this over the training set, the average activation \(\hat{\rho}_v\) of unit v given the input samples \(\mathbf{u}^{(m)}\) can be defined as follows:

\[
\hat{\rho}_v = \frac{1}{M} \sum_{m=1}^{M} a_v\left(\mathbf{u}^{(m)}\right), \tag{14}
\]

where M is the number of training samples and \(\mathbf{u}^{(m)}\) is the m-th input training sample. Next, the sparsity constraint \(\hat{\rho}_v = \rho\) is enforced, where \(\rho\) is a parameter preset before training, typically small, such as \(\rho = 3 \times 10^{-3}\). To achieve the sparsity constraint, we use a penalty term in the cost function of the sparse autoencoder as follows:

\[
\sum_{v=1}^{V} \mathrm{KL}\left(\rho \,\|\, \hat{\rho}_v\right) = \sum_{v=1}^{V} \left[\rho \log \frac{\rho}{\hat{\rho}_v} + (1-\rho) \log \frac{1-\rho}{1-\hat{\rho}_v}\right]. \tag{15}
\]

The penalty term is essentially a Kullback-Leibler (KL) divergence. Now the cost function \(J_{\mathrm{sparse}}(\mathbf{W}, \mathbf{b})\) of the sparse autoencoder can be written as follows:

\[
J_{\mathrm{sparse}}(\mathbf{W}, \mathbf{b}) = \frac{1}{2}\left\|\mathbf{u} - \hat{\mathbf{u}}\right\|^2 + \beta \sum_{v=1}^{V} \mathrm{KL}\left(\rho \,\|\, \hat{\rho}_v\right), \tag{16}
\]

where \(\beta\) controls the weight of the penalty term. In our proposed system, the cost function \(J_{\mathrm{sparse}}(\mathbf{W}, \mathbf{b})\) is minimized using the limited-memory BFGS (L-BFGS) optimization algorithm, and the single-layer sparse autoencoder is trained by using the backpropagation algorithm.
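The cost of (16), with the average activations of (14) and the KL penalty of (15), can be evaluated for randomly initialized (untrained) parameters as a sketch; the dimensions and the value of beta are illustrative assumptions, and the reconstruction error is averaged over samples for readability.

```python
import numpy as np

rng = np.random.default_rng(5)
sigm = lambda x: 1.0 / (1.0 + np.exp(-x))

Y, V, M = 20, 10, 100          # input dim, hidden units, training samples
U = rng.random((Y, M))         # columns are training samples u^(m)

# Randomly initialized parameters (illustrative, untrained).
W1, b1 = 0.1 * rng.standard_normal((V, Y)), np.zeros((V, 1))
W2, b2 = 0.1 * rng.standard_normal((Y, V)), np.zeros((Y, 1))

A1 = sigm(W1 @ U + b1)         # hidden activations a^(1)
U_hat = sigm(W2 @ A1 + b2)     # reconstruction u_hat

rho = 3e-3                     # target sparsity from the text
rho_hat = A1.mean(axis=1)      # average activation per hidden unit, eq. (14)

# KL sparsity penalty, eq. (15), and total cost in the spirit of eq. (16).
kl = np.sum(rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
beta = 3.0                     # illustrative penalty weight
J_sparse = 0.5 * np.mean(np.sum((U_hat - U) ** 2, axis=0)) + beta * kl

print(float(J_sparse))
```

With near-zero weights the average activations sit near 0.5, so the KL term dominates; training would drive them toward the small target \(\rho\).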

After finishing the training of the single-layer sparse autoencoder, we discard the output layer neurons, the associated weights \(\mathbf{W}^{(2)}\), and bias \(\mathbf{b}^{(2)}\), and only save the input layer parameters \(\mathbf{W}^{(1)}\) and \(\mathbf{b}^{(1)}\). The output of the hidden layer \(\mathbf{a}^{(1)}\) is used as the input samples of the next single-layer sparse autoencoder. Repeating these steps, that is, stacking the autoencoders, we can build a deep autoencoder from two or more single-layer sparse autoencoders. In our proposed system, we use two single-layer autoencoders to build a deep autoencoder. The stacking procedure is shown in the right part of Figure 7.
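The greedy layer-wise stacking procedure can be outlined as below; `train_sparse_autoencoder` is a hypothetical stand-in that returns random encoder weights, since the real training (L-BFGS on the sparse cost) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(6)
sigm = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_sparse_autoencoder(data, n_hidden):
    """Stand-in for single-layer sparse-autoencoder training (the real
    system minimizes J_sparse with L-BFGS); returns encoder (W1, b1)."""
    n_in = data.shape[0]
    W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
    b1 = np.zeros((n_hidden, 1))
    return W1, b1

# Greedy layer-wise pretraining: train layer 1 on the raw features, feed
# its hidden activations to layer 2, and keep only the encoder weights.
U = rng.random((40, 200))                  # low-level feature vectors u
W1, b1 = train_sparse_autoencoder(U, 25)   # first sparse autoencoder
A1 = sigm(W1 @ U + b1)                     # features I
W2, b2 = train_sparse_autoencoder(A1, 15)  # second, trained on features I
A2 = sigm(W2 @ A1 + b2)                    # features II -> softmax input

print(A1.shape, A2.shape)
```

Stacking the two encoders and initializing the deep autoencoder with their parameters is exactly the pretraining step described in the text.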

Many studies on deep autoencoders show that, with a deep architecture (more than one hidden layer), a deep autoencoder can build up more complex representations from the same low-level features, capture the underlying regularities of the data, and improve the quality of recognition. That is why we use a deep autoencoder in our proposed system.

There are, however, several difficulties associated with obtaining the optimized weights of deep autoencoders. One challenge is the presence of local optima. In particular, training a neural network using supervised learning involves solving a highly nonconvex optimization problem, that is, finding a set of network parameters \((\mathbf{W}, \mathbf{b})\) to minimize the training error \(\|\mathbf{u} - \hat{\mathbf{u}}\|^2\). In the deep autoencoder, the optimization problem turns out to be rife with bad local optima, and training with gradient descent no longer works well. Another challenge is the "diffusion of gradients": when using backpropagation to compute the derivatives, the gradients that are propagated backwards (from the output layer to the earlier layers of the network) rapidly diminish in magnitude as the depth of the network increases. As a result, the derivative of the overall cost with respect to the weights in the earlier layers is very small, so, when using gradient descent, the weights of the earlier layers change slowly. However, if the initial parameters are already close to the optimized values, gradient descent works well. That is the idea of "greedy layer-wise" training, where the layers of the network are trained one by one, as shown in the left part of Figure 7.

First, we use the backpropagation algorithm to train the first sparse autoencoder (including only one hidden layer) with the input data itself serving as the target. In our proposed system, the input data is the 4K-dimensional feature vector \(\mathbf{u}\). As a result of the first-layer training, we get a set of network parameters \(\Theta^{(1)}\) (i.e., the parameters \(\mathbf{W}^{(1)}\) and \(\mathbf{b}^{(1)}\)) of the first-layer sparse autoencoder and a new dataset \(\mathbf{a}^{(1)}\) (i.e., the features I shown in Figure 9), which is the output of the hidden

Figure 7: The illustration of greedy layer-wise training and stacking: (a) the procedure of greedy layer-wise training; (b) the procedure of stacking sparse autoencoders.

layer neurons (the activity state of the hidden layer neurons) obtained by using this parameter set. Next, we use \(\mathbf{a}^{(1)}\) as the inputs to the second sparse autoencoder. After the second autoencoder is trained, we get the network parameters \(\Theta^{(2)}\) of the second sparse autoencoder and the new dataset \(\mathbf{a}^{(2)}\) (i.e., the features II in Figure 9) for the training of the next single-layer neural network. We then repeat the above steps until the last layer (i.e., the softmax regression in our proposed system). Finally, we obtain a pretrained deep autoencoder by stacking all the autoencoders and use \(\Theta^{(1)}\) and \(\Theta^{(2)}\) as the initial parameters for this pretrained deep autoencoder. The features II are the high-level features and can be used as the training dataset for the softmax regression discussed next.

3.3.2. Softmax Classifier. In our proposed system, the softmax classifier, based on softmax regression, is used to estimate the probabilities of the current input T-F point \(\mathbf{u}_{(n,m)}\) belonging to orientation index j, with the high-level features \(\mathbf{a}^{(2)}_{(n,m)}\) extracted by the deep autoencoder as inputs.

The softmax regression generalizes the classical logistic regression (for binary classification) to multiclass classification problems. Different from the logistic regression, the data label of the softmax regression is an integer value between 1 and J; here J is the number of data classes. More specifically, in our proposed system, for J classes, an M-sample dataset was used to train the n-th deep network:

\[
\left\{\left(\mathbf{a}^{(2)}_{(n,1)}, g_{(n,1)}\right), \ldots, \left(\mathbf{a}^{(2)}_{(n,m)}, g_{(n,m)}\right), \ldots, \left(\mathbf{a}^{(2)}_{(n,M)}, g_{(n,M)}\right)\right\}, \tag{17}
\]

where \(g_{(n,m)}\) is the label of the m-th sample \(\mathbf{a}^{(2)}_{(n,m)}\) and is set to j if \(\mathbf{a}^{(2)}_{(n,m)}\) belongs to class j. The architecture of the softmax classifier is shown in Figure 8.

Given an input \(\mathbf{a}^{(2)}_{(n,m)}\), the output \(p(g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)}) = e^{\theta_j^T \mathbf{a}^{(2)}_{(n,m)}} / \sum_{l=1}^{J} e^{\theta_l^T \mathbf{a}^{(2)}_{(n,m)}}\) of the output layer neuron j gives the probability of the input \(\mathbf{a}^{(2)}_{(n,m)}\) belonging to class j. Similar to the logistic regression, the output of the softmax regression can be written as follows:

\[
\mathbf{h}_{\Theta}\left(\mathbf{a}^{(2)}_{(n,m)}\right) =
\begin{bmatrix}
p\left(g_{(n,m)} = 1 \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta\right) \\
\vdots \\
p\left(g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta\right) \\
\vdots \\
p\left(g_{(n,m)} = J \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta\right)
\end{bmatrix}
= \frac{1}{\sum_{j=1}^{J} e^{\Theta_j^T \mathbf{a}^{(2)}_{(n,m)}}}
\begin{bmatrix}
e^{\Theta_1^T \mathbf{a}^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_j^T \mathbf{a}^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_J^T \mathbf{a}^{(2)}_{(n,m)}}
\end{bmatrix}. \tag{18}
\]

8 Discrete Dynamics in Nature and Society

Figure 8: A classical architecture of the softmax classifier (input a^(2)_(n,m) = [a^(2)_1, …, a^(2)_V]^T, bias term +1 with b^(3), weights W^(3), and outputs p(g_(n,m) = j | a^(2)_(n,m)) for j = 1, …, J).

Here Θ ∈ R^(J×V) is the network parameter of the softmax classifier (V is the dimension of the input a^(2)_(n,m) and J is the number of classes within the input data). Θ_j^T represents the transpose of the j-th row of Θ and contains the parameters of the connections between neuron j of the output layer and the input sample a^(2)_(n,m).

The softmax regression and logistic regression have the same minimization objective. The cost function J_softmax(Θ) of the softmax classifier can be generalized from the logistic regression cost function and written as follows:

J_softmax(Θ) = −(1/M) [Σ_{m=1}^{M} Σ_{j=1}^{J} 1{g_(n,m) = j} · log p(g_(n,m) = j | a^(2)_(n,m); Θ)] + (λ/2) Σ_{j=1}^{J} Σ_{v=1}^{V} (Θ_jv)²,   (19)

where 1{·} is the indicator function, so that 1{a true statement} = 1 and 1{a false statement} = 0. The element Θ_jv of Θ contains the parameters of the connections between neuron j of the output layer and neuron v of the input layer. The term (λ/2) Σ_{j=1}^{J} Σ_{v=1}^{V} (Θ_jv)² is a regularization term that makes the cost function J_softmax(Θ) strictly convex, where λ is a weight decay parameter predefined by the user.
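As a concrete illustration of (18) and (19), the class probabilities and the regularized cost can be computed in a few lines of NumPy. This is a sketch rather than the authors' implementation; it uses 0-based class labels and subtracts the maximum score before exponentiating for numerical stability.

```python
import numpy as np

def softmax_probs(Theta, a):
    """Class probabilities p(g = j | a; Theta) for one input a, as in Eq. (18)."""
    z = Theta @ a                 # scores Theta_j^T a, shape (J,)
    z = z - z.max()               # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def softmax_cost(Theta, A, labels, lam):
    """Regularized cost J_softmax(Theta) of Eq. (19).

    A      : (M, V) matrix of input samples a^(2)
    labels : (M,) integer class labels in 0..J-1
    lam    : weight decay parameter lambda
    """
    M = A.shape[0]
    loss = 0.0
    for a, g in zip(A, labels):
        loss -= np.log(softmax_probs(Theta, a)[g])   # cross-entropy term
    return loss / M + 0.5 * lam * np.sum(Theta ** 2)
```

With Θ = 0, every class is equally likely and the cost reduces to log J, which is a quick sanity check on the implementation.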

In our proposed system, we extend the label g_(n,m) of the training dataset to a vector g_(n,m) ∈ R^J and set the j-th element g_j of g_(n,m) to 1 and all other elements to 0 when the input sample a^(2)_(n,m) belongs to class j, for example, g_(n,m) = [0, …, 0, 1, 0, …, 0]^T. By vectorization, the cost function J_softmax(Θ) can be written as follows:

J_softmax(Θ) = −(1/M) [Σ_{m=1}^{M} (g_(n,m))^T log h_Θ(a^(2)_(n,m))] + (λ/2) Σ_{j=1}^{J} Σ_{v=1}^{V} (Θ_jv)².   (20)

The softmax classifier can be trained with the L-BFGS algorithm on a dataset in order to find an optimal parameter set Θ minimizing the cost function J_softmax(Θ). In our proposed system, the dataset for softmax classifier training is composed of two parts. The first part is the input sample a^(2)_(n,m) (feature II), calculated from the last hidden layer of the deep autoencoder. The second part is the data label g_(n,m) ∈ R^J, where the j-th element g_j of g_(n,m) is set to 1 when the input sample belongs to the source located in the range of DOAs of index j.

We stack the softmax classifier and the deep autoencoder together after the training is completed, as shown on the left part of Figure 9. Finally, we use the training dataset and the L-BFGS algorithm to fine-tune the deep network with the initialized parameters W^(1), b^(1), W^(2), b^(2), W^(3), and b^(3) obtained from the sparse autoencoder and softmax classifier training. The training phase of the sparse autoencoders and the softmax classifier is called the pretraining phase, and the training of the stacked overall network, that is, the deep network, is called the fine-tuning phase. In the pretraining phase, the shallow neural networks, that is, the sparse autoencoders and the softmax classifier, are trained individually, using the output of the current layer as the input for the next layer. In the fine-tuning phase, we use the L-BFGS algorithm (a gradient-based method) to minimize the difference between the output of the deep network and the labels of the training dataset. The gradient descent works well because the initialized parameters obtained from the pretraining phase already encode a significant amount of "prior" information about the input data through unsupervised learning.
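The greedy layer-wise pretraining described above can be sketched as follows. This toy example substitutes plain full-batch gradient descent for the L-BFGS optimizer used in the paper, omits the sparsity penalty, and runs on random data; the layer sizes, learning rate, and epoch count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sigm = lambda x: 1.0 / (1.0 + np.exp(-x))   # logistic activation

def train_autoencoder(X, hidden, epochs=200, lr=0.5):
    """Train one single-layer autoencoder on the rows of X and return the
    encoder parameters (W1, b1). Gradient descent stands in for L-BFGS."""
    V, Y = hidden, X.shape[1]
    W1 = rng.normal(0, 0.1, (V, Y)); b1 = np.zeros(V)
    W2 = rng.normal(0, 0.1, (Y, V)); b2 = np.zeros(Y)
    for _ in range(epochs):
        A = sigm(X @ W1.T + b1)              # hidden activations a^(1)
        U = sigm(A @ W2.T + b2)              # reconstruction of the input
        dU = (U - X) * U * (1 - U)           # output-layer delta
        dA = (dU @ W2) * A * (1 - A)         # hidden-layer delta
        W2 -= lr * dU.T @ A / len(X); b2 -= lr * dU.mean(0)
        W1 -= lr * dA.T @ X / len(X); b1 -= lr * dA.mean(0)
    return W1, b1

# Greedy layer-wise pretraining: each layer is trained on the previous
# layer's hidden activations (feature I, then feature II).
X = rng.random((64, 8))                      # toy stand-in for low-level features u
W1, b1 = train_autoencoder(X, hidden=6)
A1 = sigm(X @ W1.T + b1)                     # feature I
W2, b2 = train_autoencoder(A1, hidden=4)
A2 = sigm(A1 @ W2.T + b2)                    # feature II, input to the softmax layer
```

After this pretraining, the stacked network (both encoders plus a softmax layer) would be fine-tuned end to end, which is the step the paper performs with L-BFGS.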

4. Experiments

In this section, we describe the communication system used in the simulation and show the separation results under different SNRs and data rates. The bit error rate (BER) is an extremely significant performance index for a pragmatic communication system; therefore, we evaluate the separation quality of the pragmatic system by BER and regard the BER performance of single-user MFSK communication under the same conditions as the baseline.

There is no need to take the multipath propagation of the channel into consideration, because of the relatively close distance between nodes in multi-UUV communication. We therefore adopt the additive white Gaussian noise (AWGN) channel model and the Monte Carlo method to calculate the system BER in the simulation, as in the simulation system shown in Figure 10, and we take the BER of the single-user communication as the baseline.

Figure 9: The illustration of stacking the deep autoencoder and the softmax classifier (the pretrained sparse autoencoders and the softmax classifier are copied into one deep network with parameters W^(1), b^(1), W^(2), b^(2), W^(3), and b^(3)).

Figure 10: The simulation system used in the experiment (two MFSK sources pass through AWGN channels, are mixed with a relative time delay, separated by the source separation system, and demodulated; the baseline BER is measured from the unmixed single-user path).

The sampling rate of our simulation is 16 kHz and the modulation order is 16, without any error-correction coding. The bandwidth of the MFSK modulation/demodulation is 640 Hz, and the bandwidth ratio varies from 0.625 to 0.125, corresponding to bit rates from 400 bit/s to 80 bit/s.
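For reference, the quoted figures are consistent with the bandwidth ratio being the bit rate divided by the bandwidth (400/640 = 0.625 and 80/640 = 0.125), although the definition itself is given earlier in the paper. A minimal Monte Carlo BER estimate for noncoherent MFSK over AWGN, in the spirit of the baseline simulation, can be sketched as follows; the orthogonal-tone design, symbol-synchronous complex baseband model, and Es/N0 SNR convention are our assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def mfsk_ber(snr_db, n_sym=2000, M=16, N=64):
    """Monte Carlo BER of noncoherent M-FSK over an AWGN channel.

    Each symbol is one of M orthogonal complex tones of N samples;
    detection correlates the received block with every tone and picks
    the largest magnitude (no pulse shaping, no coding)."""
    k = int(np.log2(M))                         # bits per symbol
    n = np.arange(N)
    tones = np.exp(2j * np.pi * np.outer(np.arange(M), n) / N)
    syms = rng.integers(0, M, n_sym)
    tx = tones[syms]                            # (n_sym, N)
    es = float(N)                               # energy per unit-amplitude tone
    n0 = es / 10 ** (snr_db / 10)               # noise PSD for the requested Es/N0
    noise = rng.normal(0, np.sqrt(n0 / 2), (n_sym, N)) \
        + 1j * rng.normal(0, np.sqrt(n0 / 2), (n_sym, N))
    det = np.argmax(np.abs((tx + noise) @ tones.conj().T), axis=1)
    bit_errors = sum(bin(int(d) ^ int(s)).count("1") for d, s in zip(det, syms))
    return bit_errors / (n_sym * k)
```

At high SNR the estimate drops to zero, and at very low SNR it approaches the random-guess level, mirroring the trend of the baseline columns in Table 2.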

First, suppose that we can obtain the optimal T-F mask, that is, use (6) to calculate the T-F mask; we then obtain the simulated BER as a function of SNR and bandwidth ratio and compare it with the baseline.

As shown in Tables 1 and 2, the BER attained by the T-F mask method is similar to the baseline when the bandwidth ratio < 0.417. Furthermore, the BER with the T-F mask is even lower than that of the baseline under the same conditions when the SNR is very low, that is, SNR = −20 dB. The reason is that, as can be seen from (6), some frequency points are set to zero by the T-F mask, which objectively raises the SNR of the signal and yields a lower BER. In the actual system, however, accurately estimating the T-F mask under such low SNR is highly challenging, which is also verified in the later simulation.

According to Section 3, we estimate the T-F mask by using the orientation information of the nodes, that is, the DOA of the MFSK signal received by the nodes. We therefore introduce a time delay τ to simulate a pairwise hydrophone receiver in the simulation and divide the horizontal space into 37 blocks, corresponding to −90° to +90° with a step size of 5°. We estimate the T-F mask with the source separation system described in Section 3 and compare its BER performance to the baseline under different SNRs and bandwidth ratios; the simulation result is shown in Table 3.
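The 37-block DOA grid and the far-field mapping from inter-hydrophone time delay to a block index can be sketched as follows; the hydrophone spacing D is an assumed example value, since it is not given in this section.

```python
import numpy as np

C = 1500.0   # underwater sound speed (m/s), as quoted in the paper
D = 0.5      # hydrophone spacing (m): an assumed value for illustration

# 37 orientation blocks covering -90 to +90 degrees in 5-degree steps
doa_grid = np.arange(-90, 91, 5)

def doa_block(tau):
    """Map an inter-hydrophone time delay tau (seconds) to the nearest
    DOA block index, using the far-field delay model tau = D sin(theta)/C."""
    s = np.clip(tau * C / D, -1.0, 1.0)     # sin(theta), clipped to valid range
    theta = np.degrees(np.arcsin(s))
    return int(np.argmin(np.abs(doa_grid - theta)))
```

Broadside arrivals (zero delay) fall in the middle block, while delays of ±D/C map to the endfire blocks at ±90°.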


Table 1: The BER of the simulation system with the optimum T-F mask.

Bandwidth ratio | SNR = 30 dB | 20 dB | 10 dB | 5 dB | 0 dB | −5 dB | −10 dB | −15 dB | −20 dB
0.625 | 5.30E−02 | 5.30E−02 | 5.36E−02 | 5.50E−02 | 5.95E−02 | 7.64E−02 | 1.47E−01 | 3.40E−01 | 5.62E−01
0.500 | 2.96E−02 | 2.95E−02 | 3.03E−02 | 3.16E−02 | 3.56E−02 | 4.26E−02 | 6.82E−02 | 1.98E−01 | 4.24E−01
0.417 | 1.40E−02 | 1.38E−02 | 1.38E−02 | 1.39E−02 | 1.39E−02 | 1.48E−02 | 2.67E−02 | 1.10E−01 | 3.09E−01
0.313 | 9.00E−06 | 1.60E−05 | 1.30E−05 | 2.00E−05 | 4.50E−05 | 1.25E−04 | 3.08E−03 | 3.23E−02 | 1.57E−01
0.250 | 3.00E−06 | 4.00E−06 | 1.00E−06 | 4.00E−06 | 4.00E−06 | 9.00E−06 | 1.12E−04 | 7.13E−03 | 7.54E−02
0.208 | 0.00E+00 | 1.00E−06 | 1.00E−06 | 0.00E+00 | 0.00E+00 | 2.00E−06 | 2.19E−04 | 4.69E−03 | 4.62E−02
0.156 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.10E−05 | 1.13E−03 | 1.46E−02
0.125 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.00E−05 | 4.02E−03

Table 2: The BER of the baseline.

Bandwidth ratio | SNR = 30 dB | 20 dB | 10 dB | 5 dB | 0 dB | −5 dB | −10 dB | −15 dB | −20 dB
0.625 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.43E−04 | 5.08E−02 | 3.62E−01 | 7.26E−01
0.500 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.00E−06 | 7.91E−03 | 2.16E−01 | 6.47E−01
0.417 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 7.13E−04 | 1.19E−01 | 5.70E−01
0.313 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 3.42E−02 | 4.35E−01
0.250 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 9.65E−03 | 3.28E−01
0.208 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 3.40E−03 | 2.53E−01
0.156 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.92E−04 | 1.41E−01
0.125 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.90E−05 | 7.23E−02

As shown in Table 3, the BER performance of the proposed system is much the same as the baseline when SNR > 20 dB and the bandwidth ratio < 0.417, which is consistent with the result obtained with the optimal T-F mask. When SNR < 20 dB, however, the BER performance of the proposed system begins to decline, because the low SNR of the signals causes a large error in the T-F mask estimation, which degrades the system performance.

Table 3: The BER of the proposed system.

Bandwidth ratio | SNR = 30 dB | 20 dB | 10 dB | 5 dB | 0 dB | −5 dB | −10 dB | −15 dB | −20 dB
0.625 | 5.80E−02 | 7.30E−02 | 6.36E−02 | 6.30E−02 | 3.59E−01 | 4.94E−01 | - | - | -
0.500 | 3.26E−02 | 3.45E−02 | 3.93E−02 | 4.16E−02 | 3.36E−01 | 4.26E−01 | - | - | -
0.417 | 1.46E−02 | 1.78E−02 | 1.48E−02 | 4.99E−02 | 2.14E−01 | 4.48E−01 | - | - | -
0.313 | 1.00E−05 | 6.89E−04 | 9.50E−03 | 3.00E−02 | 1.90E−01 | 4.25E−01 | - | - | -
0.250 | 7.00E−06 | 2.10E−04 | 6.00E−03 | 1.00E−02 | 1.60E−01 | 4.90E−01 | - | - | -
0.208 | 1.00E−06 | 9.00E−05 | 3.00E−03 | 8.00E−03 | 1.30E−01 | 4.20E−01 | - | - | -
0.156 | 0.00E+00 | 3.00E−05 | 9.80E−04 | 2.00E−03 | 1.30E−01 | 4.00E−01 | - | - | -
0.125 | 0.00E+00 | 7.00E−06 | 1.00E−05 | 9.00E−04 | 1.20E−01 | 2.50E−01 | - | - | -

5. Summary

In this paper, we address the problem of control packet collision avoidance, which exists widely in multichannel MAC protocols, and, based on the sparsity of the noncoherent modulation MFSK in the time-frequency domain, we separate the sources from the MFSK mixture caused by packet collisions by means of the T-F masking method. First, we characterize the sparsity of the MFSK signal in terms of the bandwidth ratio and demonstrate the relation between the bandwidth ratio and the PSR, SIR, and WDO by simulation. Then we establish the source separation system based on deep networks and the model of the MFSK communication system, taking single-user MFSK communication as the baseline to compare the BER performance of the proposed system under different SNRs and bandwidth ratios. The simulation results show that, first, the optimal T-F masking obtains the same BER performance as the baseline at lower bandwidth ratios; second, the proposed system obtains similar BER performance to the baseline at higher SNR; third, the BER performance of the proposed system declines rapidly at low SNR, because low SNR leads to a greater error in the estimation of the T-F mask. In future work, we will adjust the structure of the deep networks to improve the performance of the proposed system under low SNR and under the multipath propagation present in the underwater channel. As a future research topic, it is also worth exploring whether bio-inspired computing models and algorithms can be applied to underwater multichannel MAC protocols, such as P systems (inspired by the structure and functioning of cells) [19, 20] and evolutionary computation (motivated by Darwin's theory of evolution) [21, 22].

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This research was supported partially by the Natural Science Basic Research Plan in Shaanxi Province of China (Program no. 2014JQ8355).

References

[1] Z. Li, C. Yang, N. Ding, S. Bogdan, and T. Ge, "Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints," International Journal of Control, Automation and Systems, vol. 10, no. 2, pp. 421-429, 2012.
[2] A. S. K. Annamalai, R. Sutton, C. Yang, P. Culverhouse, and S. Sharma, "Robust adaptive control of an uninhabited surface vehicle," Journal of Intelligent & Robotic Systems, vol. 78, no. 2, pp. 319-338, 2015.
[3] K. Chen, M. Ma, E. Cheng, F. Yuan, and W. Su, "A survey on MAC protocols for underwater wireless sensor networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1433-1447, 2014.
[4] R. Otnes, A. Asterjadhi, P. Casari et al., Underwater Acoustic Networking Techniques, Springer, Berlin, Germany, 2012.
[5] S. Climent, A. Sanchez, J. V. Capella, N. Meratnia, and J. J. Serrano, "Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers," Sensors, vol. 14, no. 1, pp. 795-833, 2014.
[6] I. M. Khalil, Y. Gadallah, M. Hayajneh, and A. Khreishah, "An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks," Sensors, vol. 12, no. 7, pp. 8782-8805, 2012.
[7] F. Bouabdallah and R. Boutaba, "A distributed OFDMA medium access control for underwater acoustic sensor networks," in Proceedings of the IEEE International Conference on Communications (ICC '11), June 2011.
[8] C. C. Hsu, K. F. Lai, C. F. Chou, and K. C. J. Lin, "ST-MAC: spatial-temporal MAC scheduling for underwater sensor networks," in Proceedings of the 28th Conference on Computer Communications (INFOCOM '09), pp. 1827-1835, IEEE, Rio de Janeiro, Brazil, April 2009.
[9] Y.-D. Chen, C.-Y. Lien, S.-W. Chuang, and K.-P. Shih, "DSSS: a TDMA-based MAC protocol with dynamic slot scheduling strategy for underwater acoustic sensor networks," in Proceedings of the IEEE OCEANS, pp. 1-6, Santander, Spain, June 2011.
[10] K. Kredo II, P. Djukic, and P. Mohapatra, "STUMP: exploiting position diversity in the staggered TDMA underwater MAC protocol," in Proceedings of the IEEE 28th Conference on Computer Communications (INFOCOM '09), pp. 2961-2965, April 2009.
[11] M. Molins and M. Stojanovic, "Slotted FAMA: a MAC protocol for underwater acoustic networks," in Proceedings of the IEEE OCEANS 2006, Boston, Mass, USA, September 2006.
[12] N. Chirdchoo, W.-S. Soh, and K. C. Chua, "RIPT: a receiver-initiated reservation-based protocol for underwater acoustic networks," IEEE Journal on Selected Areas in Communications, vol. 26, no. 9, pp. 1744-1753, 2008.
[13] Z. Zhou, Z. Peng, P. Xie, J.-H. Cui, and Z. Jiang, "Exploring random access and handshaking techniques in underwater wireless acoustic networks," EURASIP Journal on Wireless Communications and Networking, vol. 2013, article 95, 2013.
[14] C.-M. Chao et al., "A multiple rendezvous multichannel MAC protocol for underwater sensor networks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 1, pp. 128-138, 2013.
[15] H. Ramezani and G. Leus, "DMC-MAC: dynamic multichannel MAC in underwater acoustic networks," in Proceedings of the 21st European Signal Processing Conference (EUSIPCO '13), pp. 1-5, Marrakech, Morocco, September 2013.
[16] Z. Zhou, Z. Peng, J.-H. Cui, and Z. Jiang, "Handling triple hidden terminal problems for multichannel MAC in long-delay underwater sensor networks," IEEE Transactions on Mobile Computing, vol. 11, no. 1, pp. 139-154, 2012.
[17] O. Yilmaz and S. Rickard, "Blind separation of speech mixtures via time-frequency masking," IEEE Transactions on Signal Processing, vol. 52, no. 7, pp. 1830-1847, 2004.
[18] T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Auto-encoder bottleneck features using deep belief networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '12), pp. 4153-4156, Kyoto, Japan, March 2012.
[19] X. Zhang, L. Pan, and A. Paun, "On the universality of axon P systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816-2829, 2015.
[20] X. Zhang, Y. Liu, B. Luo, and L. Pan, "Computational power of tissue P systems for generating control languages," Information Sciences, vol. 278, pp. 285-297, 2014.
[21] X. Zhang, Y. Tian, and Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761-776, 2015.
[22] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, "An efficient approach to nondominated sorting for evolutionary multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 201-213, 2015.




Figure 3: The SIR result (SIR versus bandwidth ratio).

Figure 4: The WDO result (WDO versus bandwidth ratio).

where m = 1, …, M and f = 1, …, F are the time frame and frequency bin indices, respectively, and X_k(m, f) = F(x_k), S_i = F(s_i), N_k = F(n_k), with F(·) denoting the short-time Fourier transform. We use X_{k|i}(m, f) to represent the component of node i contained in the mixture received by hydrophone k. Thus

X_{k|i}(m, f) = S_i(m, f) · H_{ki}(f).   (9)

As shown in Section 2, the MFSK signal is approximately W-Disjoint Orthogonal when the bandwidth ratio is less than 0.5. Then, as shown in (10), the mixture X_k received by hydrophone k can be demixed by using the T-F masks M_i corresponding to node i, i = 1, …, N. Consider the following:

X_k(m, f) = Σ_{i=1}^{N} M_i · X_k(m, f) + N_k(m, f).   (10)

It is a natural choice to use the orientation of the MFSK signals to estimate the corresponding T-F masks of the nodes with the pairwise hydrophones. Thus the time-frequency mask is related to the location information of the current input signals (the channel impulse response and the array manifold). That is,

M_i(m, f) = P(X_i(m, f) ∈ node i | S_1, …, S_N, H_{1k}, …, H_{Nk}).   (11)

Obviously, for the other hydrophone outputs, the time-frequency mask M corresponding to the same node remains the same. Once we obtain the time-frequency mask M_i corresponding to node i, we can separate the single-user communication signal of node i through the ISTFT operation. The mixture signals received by the array include N single-user signals from N nodes at different locations. The probability distribution of a single T-F point belonging to a certain node i can be described by a Gaussian distribution with mean μ_i and variance σ_i, where μ_i and σ_i can be interpreted as the mean and variance of the direction of arrival (DOA) of the signal coming from node i, respectively. According to the central limit theorem, over all T-F points, the probability distribution of the mixture signals including N node signals can be described by a mixture of N Gaussian distributions, namely, a Gaussian mixture model (GMM). Therefore, we can describe the mixture pattern of the signals through a GMM based on spatial features. The parameters of the GMM can be obtained from the existing observation datasets. Thus the probability of each T-F point belonging to node 1, node 2, …, node N can be estimated, and the source signals can then be recovered through the ISTFT operation.
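The GMM-based mask estimation described above can be sketched as follows: given a DOA estimate for every T-F point and one Gaussian per node, the posterior responsibilities serve as soft T-F masks. This is a sketch with known GMM parameters; in practice they would be fit to the observations, for example, by EM.

```python
import numpy as np

def gmm_masks(doa, means, sigmas, weights=None):
    """Soft T-F masks as GMM posterior responsibilities.

    doa    : (T,) DOA estimate of every T-F point (degrees)
    means  : (N,) per-node DOA means mu_i
    sigmas : (N,) per-node DOA standard deviations sigma_i
    Returns an (N, T) array whose column t gives the probability of
    T-F point t belonging to each of the N nodes (columns sum to 1)."""
    doa = np.asarray(doa, float)
    mu = np.asarray(means, float)[:, None]
    sg = np.asarray(sigmas, float)[:, None]
    w = np.ones(mu.shape[0]) / mu.shape[0] if weights is None \
        else np.asarray(weights, float)
    # unnormalized Gaussian likelihood of every point under every node
    lik = w[:, None] * np.exp(-0.5 * ((doa[None, :] - mu) / sg) ** 2) / sg
    return lik / lik.sum(axis=0)          # normalize into responsibilities
```

Points near a node's mean DOA receive a mask value close to 1 for that node, while ambiguous points are split between nodes, which is exactly the soft-masking behavior wanted here.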

3.2. Outline of the Source Separation System. As shown in Figure 5, the inputs to the system are the two-channel MFSK mixtures. We perform the short-time Fourier transform (STFT) on each channel and obtain the T-F representations of the input signals, X_A(m, f) and X_B(m, f), where m = 1, …, M and f = 1, …, F are the time frame and frequency bin indices, respectively. The low-level features, that is, the mixing vector (MV) and the interaural level and phase differences (ILD/IPD), which can be derived from (12)-(13), are then estimated at each T-F unit. Consider

z(m, f) = W(f) x(m, f) / ‖W(f) x(m, f)‖,

x(m, f) = [X_A(m, f), X_B(m, f)]^T / ‖[X_A(m, f), X_B(m, f)]^T‖,   (12)

where W(f) is a whitening matrix with each row being one eigenvector of E(x(m, f) x^H(m, f)), the superscript H denotes the Hermitian transpose, and ‖·‖ is the Frobenius norm.

α(m, f) = 20 log10 |X_A(m, f) / X_B(m, f)|,

φ(m, f) = ∠(X_A(m, f) / X_B(m, f)),   (13)


Figure 5: The architecture of the proposed system using deep neural network based time-frequency masking for blind source separation (the two-channel mixture is transformed by STFT, MV/ILD/IPD features are extracted and grouped into N blocks of K frequency bins, each block is fed to one of N deep networks producing probability masks, and the sources are recovered by ungrouping the masks and applying the ISTFT).

where |·| takes the absolute value of its argument and ∠(·) finds the phase angle.
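The feature extraction of (12)-(13) can be sketched as follows; the expectation in the whitening matrix is approximated by a sample average over time frames, which is our assumption about how W(f) would be estimated in practice.

```python
import numpy as np

def low_level_features(XA, XB, eps=1e-12):
    """MV, ILD, and IPD features of (12)-(13) at every T-F unit.

    XA, XB : (M, F) complex STFTs of the two hydrophone channels.
    Returns z (2, M, F) whitened mixing vectors, ild (M, F) in dB,
    and ipd (M, F) in radians. Illustrative sketch of the feature step."""
    M, F = XA.shape
    x = np.stack([XA, XB])                         # (2, M, F)
    x = x / (np.linalg.norm(x, axis=0) + eps)      # unit-norm mixing vector x(m, f)
    z = np.empty_like(x)
    for f in range(F):
        # whitening matrix W(f): rows are eigenvectors of E[x x^H] at this bin,
        # with the expectation replaced by an average over the M time frames
        R = x[:, :, f] @ x[:, :, f].conj().T / M
        _, V = np.linalg.eigh(R)
        z[:, :, f] = V.conj().T @ x[:, :, f]       # z = W(f) x(m, f)
    ild = 20.0 * (np.log10(np.abs(XA) + eps) - np.log10(np.abs(XB) + eps))
    ipd = np.angle(XA * np.conj(XB))
    return z, ild, ipd
```

For identical channels the ILD and IPD are zero everywhere, and the whitened mixing vectors keep unit norm, which are easy invariants to check.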

Next, we group the low-level features into N blocks (only along the frequency bins f). Block n includes K frequency bins ((n − 1)K + 1, …, nK), where K = F/N. We build N deep networks, each corresponding to one block, and use them to estimate the directions of arrival (DOAs) of the sources. The low-level features forming the input of the deep networks are composed of the IPD, ILD, and MV, that is, u(m, f) = [z^T(m, f), α(m, f), φ(m, f)]^T. Through unsupervised learning and the sparse autoencoder [18] in the deep networks, high-level features (encoding positional information of the sources) are extracted and used as inputs for the output layer (i.e., the softmax regression) of the networks. The output of the softmax regression is a source occupation probability (i.e., the time-frequency mask) for each block of the mixtures (through the ungroup operation, T-F units in the same block are assigned the same source occupation probability). The sources can then be recovered by applying the T-F mask to the mixtures, followed by the inverse STFT (ISTFT). The deep networks are pretrained by using a greedy layer-wise training method.

3.3. The Deep Networks. As described at the beginning of this section, we group the low-level features into N blocks and build N individual deep networks, which have the same architecture, to classify the DOAs of the current input T-F point in each block. The deep network used to estimate the T-F mask is composed of a two-layer deep autoencoder and a one-layer softmax classifier.

More specifically, we split the whole space into J ranges with respect to the hydrophones and separate the target and interferers based on the different orientation ranges (DOAs with respect to the receiver node) in which they are located. We apply the softmax classifier to perform the classification task, and the inputs to the classifier, that is, the high-level features a^(2), are produced by the deep autoencoder. Assuming that the position of the target in the current input T-F point remains unchanged, the deep network estimates the probability p(g_i = i | u(m, f)) of the orientation of the current input sample belonging to orientation index i. With the estimated orientation (obtained by selecting the maximum probability index) of each input T-F point, we cluster the T-F points which have the same orientation index to get the probability mask and obtain the T-F mask from the probability mask through the ungroup operation. Note that each T-F point in the same block is assigned the same probability. The number of sources can also be estimated from the probability mask by using a predefined probability threshold, typically chosen as 0.2 in our experiments.
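The ungroup operation and the threshold-based source counting can be sketched as follows; the 0.2 threshold matches the value quoted above, and the array layout (sources along the first axis) is our choice for illustration.

```python
import numpy as np

def ungroup(block_probs, K):
    """Expand per-block probabilities to per-bin mask values: every one of
    the K frequency bins in block n receives block n's probability."""
    return np.repeat(np.asarray(block_probs, float), K, axis=-1)

def count_sources(prob_mask, threshold=0.2):
    """Declare a source present if its probability mask exceeds the
    threshold anywhere (first axis of prob_mask indexes the sources)."""
    prob_mask = np.asarray(prob_mask, float)
    peak = prob_mask.reshape(prob_mask.shape[0], -1).max(axis=1)
    return int(np.sum(peak > threshold))
```

So a two-block mask [0.1, 0.9] with K = 3 expands to six bins, and sources whose mask never rises above 0.2 are not counted.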

3.3.1. Deep Autoencoder. An autoencoder is an unsupervised learning algorithm based on backpropagation. It aims to learn an approximation ũ of the input u. It appears to be learning a trivial identity function, but by placing constraints on the learning process, such as limiting the number of activated neurons, it discloses interesting structure in the data. Figure 6 shows the architecture of a single-layer autoencoder.

Figure 6: The single-layer autoencoder.

The difference between a classic neural network and an autoencoder is the training objective: the classic neural network minimizes the difference between the labels of the input training data and the output of the network, whereas the autoencoder minimizes the difference between the input training dataset itself and the output of the network. As shown in Figure 6, the output of the autoencoder can be defined as ũ = sigm(W^(2) a^(1) + b^(2)), with a^(1) = sigm(W^(1) u + b^(1)), where sigm(u) = 1/(1 + exp(−u)) is the logistic function, W^(1) ∈ R^(V×Y), b^(1) ∈ R^V, W^(2) ∈ R^(Y×V), and b^(2) ∈ R^Y. V is the number of hidden layer neurons, and Y is the number of input layer neurons, which equals that of the output layer neurons. W^(1) is a matrix containing the weights of the connections between the input layer neurons and the hidden layer neurons; similarly, W^(2) contains the weights of the connections between the hidden layer neurons and the output layer neurons. b^(1) is the vector of bias values added to the hidden layer neurons, and b^(2) is the corresponding vector for the output layer neurons. Θ refers to the parameter set composed of the weights W and biases b. Neuron v is "active" when its output a_v is close to 1, which means that sigm(W_v^(1) u + b_v^(1)) ≈ 1; for "inactive" neurons the output is close to 0, which means that sigm(W_v^(1) u + b_v^(1)) ≈ 0. Here W_v^(1) denotes the weights of the connections between hidden layer neuron v and the input layer neurons, that is, the v-th row of the matrix W^(1), and b_v^(1) is the v-th element of the vector b^(1), the bias value added to hidden layer neuron v. The superscript i of b^(i), W^(i), and a^(i) denotes the i-th layer of the deep network.

With the sparsity constraint, most of the neurons are assumed to be inactive. More specifically, a_v = sigm(W_v^(1) u + b_v^(1)) denotes the activation value of hidden layer unit v in the autoencoder. The average activation ρ̂_v of unit v over the input samples u^(m) can be defined as follows:

ρ̂_v = (1/M) Σ_{m=1}^{M} a_v(u^(m)),   (14)

where M is the number of training samples and u^(m) is the m-th input training sample. Next, the sparsity constraint ρ̂_v = ρ is enforced, where ρ is a parameter preset before training, typically small, such as ρ = 3 × 10^−3. To achieve the sparsity constraint, we use a penalty term in the cost function of the sparse autoencoder as follows:

\[
\sum_{v=1}^{V} \mathrm{KL}\left(\rho \,\|\, \hat{\rho}_v\right) = \sum_{v=1}^{V} \left[ \rho \log \frac{\rho}{\hat{\rho}_v} + (1-\rho) \log \frac{1-\rho}{1-\hat{\rho}_v} \right]. \tag{15}
\]

The penalty term is essentially a Kullback-Leibler (KL) divergence. Now the cost function J_sparse(W, b) of the sparse autoencoder can be written as follows:

\[
J_{\mathrm{sparse}}(\mathbf{W}, \mathbf{b}) = \frac{1}{2} \left\| \hat{u} - u \right\|^2 + \beta \sum_{v=1}^{V} \mathrm{KL}\left(\rho \,\|\, \hat{\rho}_v\right), \tag{16}
\]

where β controls the weight of the penalty term. In our proposed system, the cost function J_sparse(W, b) is minimized using the limited-memory BFGS (L-BFGS) optimization algorithm, and the single-layer sparse autoencoder is trained by using the backpropagation algorithm.
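The cost of (16), built from the average activation of (14) and the KL penalty of (15), can be sketched as follows; averaging the reconstruction term over the batch and the default values of ρ and β are our assumptions, not values stated by the authors:

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def sparse_cost(U, W1, b1, W2, b2, rho=3e-3, beta=3.0):
    """Sparse-autoencoder cost of (16): reconstruction error plus a
    beta-weighted KL sparsity penalty of (15). U holds one training
    sample per column (Y x M)."""
    A1 = sigm(W1 @ U + b1[:, None])        # hidden activations, V x M
    U_hat = sigm(W2 @ A1 + b2[:, None])    # reconstructions, Y x M
    rho_hat = A1.mean(axis=1)              # average activation, eq. (14)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    recon = 0.5 * np.sum((U_hat - U) ** 2) / U.shape[1]
    return recon + beta * kl               # eq. (16)
```

In practice this scalar (together with its gradient, omitted here) is what an optimizer such as L-BFGS would drive down during training.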

After finishing the training of the single-layer sparse autoencoder, we discard the output-layer neurons and the associated weights W^(2) and bias b^(2), keeping only W^(1) and b^(1). The output a^(1) of the hidden layer is used as the input samples for the next single-layer sparse autoencoder. Repeating these steps, that is, stacking the autoencoders, we can build a deep autoencoder from two or more single-layer sparse autoencoders. In our proposed system, we use two single-layer autoencoders to build the deep autoencoder. The stacking procedure is shown in the right part of Figure 7.

Many studies on deep autoencoders show that, with a deep architecture (more than one hidden layer), a deep autoencoder can build up more complex representations from simple low-level features, capture the underlying regularities of the data, and improve recognition quality. That is why we use a deep autoencoder in our proposed system.

There are, however, several difficulties associated with obtaining the optimized weights of deep autoencoders. One challenge is the presence of local optima. In particular, training a neural network using supervised learning involves solving a highly nonconvex optimization problem, that is, finding a set of network parameters (W, b) to minimize the training error ‖û − u‖². In the deep autoencoder, bad local optima turn out to be rife, and training with gradient descent no longer works well. Another challenge is the "diffusion of gradients": when using backpropagation to compute the derivatives, the gradients that are propagated backwards (from the output layer to the earlier layers of the network) rapidly diminish in magnitude as the depth of the network increases. As a result, the derivative of the overall cost with respect to the weights in the earlier layers is very small, so, when using gradient descent, the weights of the earlier layers change slowly. However, if the initial parameters are already close to the optimal values, gradient descent works well. That is the idea of "greedy layer-wise" training, where the layers of the network are trained one by one, as shown in the left part of Figure 7.

First, we use the backpropagation algorithm to train the first sparse autoencoder (containing only one hidden layer), with the data labels being the inputs themselves. In our proposed system, the input data is the 4K-dimensional feature vector u. As a result of the first-layer training, we obtain a set of network parameters Θ^(1) (i.e., the parameters W^(1) and b^(1)) of the first-layer sparse autoencoder and a new dataset a^(1) (i.e., the features I shown in Figure 9), which is the output of the hidden

Figure 7: The illustration of greedy layer-wise training and stacking: (a) the procedure of greedy layer-wise training; (b) the procedure of stacking of sparse autoencoders.

layer neurons (the activity state of the hidden-layer neurons) computed with this parameter set. Next, we use a^(1) as the input to the second sparse autoencoder. After the second autoencoder is trained, we obtain the network parameters Θ^(2) of the second sparse autoencoder and the new dataset a^(2) (i.e., the features II in Figure 9) for the training of the next single-layer neural network. We then repeat the above steps until the last layer (i.e., the softmax regression in our proposed system). Finally, we obtain a pretrained deep autoencoder by stacking all the autoencoders, using Θ^(1) and Θ^(2) as its initial parameters. The features II are the high-level features and can be used as the training dataset for the softmax regression discussed next.

3.3.2. Softmax Classifier. In our proposed system, the softmax classifier, based on softmax regression, is used to estimate the probability that the current input T-F point u_(n,m) belongs to the orientation index j, with the high-level features a^(2)_(n,m) extracted by the deep autoencoder as inputs.

The softmax regression generalizes the classical logistic regression (for binary classification) to multiclass classification problems. Different from the logistic regression, the data label of the softmax regression is an integer value between 1 and J, where J is the number of data classes. More specifically, in our proposed system, for J classes, an M-sample dataset was used to train the n-th deep network:

\[
\left\{ \left(a^{(2)}_{(n,1)}, g_{(n,1)}\right), \ldots, \left(a^{(2)}_{(n,m)}, g_{(n,m)}\right), \ldots, \left(a^{(2)}_{(N,M)}, g_{(N,M)}\right) \right\}, \tag{17}
\]

where g_(n,m) is the label of the m-th sample a^(2)_(n,m) and is set to j if a^(2)_(n,m) belongs to class j. The architecture of the softmax classifier is shown in Figure 8.

Given an input a^(2)_(n,m), the output p(g_(n,m) = j | a^(2)_(n,m)) = e^{θ_j^T a^(2)_(n,m)} / Σ_{j'=1}^{J} e^{θ_{j'}^T a^(2)_(n,m)} of the output-layer neuron j gives the probability of the input a^(2)_(n,m) belonging to the class j. Similar to the logistic regression, the output of the softmax regression can be written as follows:

\[
h_\Theta\left(a^{(2)}_{(n,m)}\right) =
\begin{bmatrix}
p\left(g_{(n,m)} = 1 \mid a^{(2)}_{(n,m)}; \Theta\right) \\
\vdots \\
p\left(g_{(n,m)} = j \mid a^{(2)}_{(n,m)}; \Theta\right) \\
\vdots \\
p\left(g_{(n,m)} = J \mid a^{(2)}_{(n,m)}; \Theta\right)
\end{bmatrix}
= \frac{1}{\sum_{j=1}^{J} e^{\Theta_j^T a^{(2)}_{(n,m)}}}
\begin{bmatrix}
e^{\Theta_1^T a^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_j^T a^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_J^T a^{(2)}_{(n,m)}}
\end{bmatrix}. \tag{18}
\]
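A minimal NumPy sketch of (18) follows; the max-subtraction is a standard numerical-stability trick in softmax implementations, not part of the paper's formula:

```python
import numpy as np

def softmax_probs(Theta, a):
    """Class probabilities of (18):
    p(g = j | a) = exp(Theta_j^T a) / sum_j' exp(Theta_j'^T a).
    Theta is J x V; a is the V-dimensional high-level feature vector."""
    scores = Theta @ a
    scores = scores - scores.max()   # stabilize before exponentiating
    e = np.exp(scores)
    return e / e.sum()
```

By construction the entries are nonnegative and sum to 1, so the output of the layer is a proper probability distribution over the J orientation classes.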

Figure 8: A classical architecture of the softmax classifier.

Here Θ ∈ R^{J×V} is the network parameter matrix of the softmax classifier (V is the dimension of the input a^(2)_(n,m) and J is the number of classes within the input data). Θ_j^T represents the transpose of the j-th row of Θ and contains the parameters of the connections between the neuron j of the output layer and the input sample a^(2)_(n,m).

The softmax regression and the logistic regression have the same minimization objective. The cost function J_softmax(Θ) of the softmax classifier can be generalized from the logistic regression cost function and written as follows:

\[
J_{\mathrm{softmax}}(\Theta) = -\frac{1}{M} \left[ \sum_{m=1}^{M} \sum_{j=1}^{J} 1\!\left\{g_{(n,m)} = j\right\} \log p\left(g_{(n,m)} = j \mid a^{(2)}_{(n,m)}; \Theta\right) \right] + \frac{\lambda}{2} \sum_{j=1}^{J} \sum_{v=1}^{V} \left[\Theta_{jv}\right]^2, \tag{19}
\]

where 1{·} is the indicator function, so that 1{a true statement} = 1 and 1{a false statement} = 0. The element Θ_jv of Θ contains the parameter of the connection between the neuron j of the output layer and the neuron v of the input layer. (λ/2) Σ_{j=1}^{J} Σ_{v=1}^{V} (Θ_jv)^2 is a regularization term that enforces the cost function J_softmax(Θ) to be strictly convex, where λ is a weight-decay parameter predefined by the user.

In our proposed system, we extend the label g_(n,m) of the training dataset to a vector g_(n,m) ∈ R^J and set the j-th element g_j of g_(n,m) to 1 and the other elements to 0 when the input sample a^(2)_(n,m) belongs to class j, for example, g_(n,m) = [0, …, 0, 1, 0, …, 0]^T. The cost function J_softmax(Θ) can then be written in vectorized form as follows:

\[
J_{\mathrm{softmax}}(\Theta) = -\frac{1}{M} \left[ \sum_{m=1}^{M} \left(\mathbf{g}_{(n,m)}\right)^T \log h_\Theta\left(a^{(2)}_{(n,m)}\right) \right] + \frac{\lambda}{2} \sum_{j=1}^{J} \sum_{v=1}^{V} \left(\Theta_{jv}\right)^2. \tag{20}
\]

The softmax classifier can be trained by using the L-BFGS algorithm on a dataset in order to find an optimal parameter set Θ that minimizes the cost function J_softmax(Θ). In our proposed system, the dataset for softmax classifier training is composed of two parts. The first part is the input sample a^(2)_(n,m) (features II) calculated from the last hidden layer of the deep autoencoder. The second part is the data label g_(n,m) ∈ R^J, where the j-th element g_j of g_(n,m) is set to 1 when the input sample belongs to the source located in the range of DOAs of index j.

We stack the softmax classifier and the deep autoencoder together after the training is completed, as shown in the left part of Figure 9. Finally, we use the training dataset and the L-BFGS algorithm to fine-tune the deep network with the initial parameters W^(1), b^(1), W^(2), b^(2), W^(3), and b^(3) obtained from the sparse autoencoder and softmax classifier training. The training phase of the sparse autoencoders and the softmax classifier is called the pretraining phase, and the training of the stacked overall network, that is, the deep network, is called the fine-tuning phase. In the pretraining phase, the shallow neural networks, that is, the sparse autoencoders and the softmax classifier, are trained individually, using the output of the current layer as the input for the next layer. In the fine-tuning phase, we use the L-BFGS algorithm (a gradient-based optimization method) to minimize the difference between the output of the deep network and the labels of the training dataset. Gradient-based optimization works well here because the initial parameters obtained from the pretraining phase already encode a significant amount of "prior" information about the input data through unsupervised learning.

4. Experiments

In this section, we describe the communication system used in the simulation and show the separation results for different SNRs and data rates. It is important to note that the bit error rate (BER) is an extremely significant performance index for a practical communication system; therefore, in this section we evaluate the separation quality by BER and regard the BER performance of single-user MFSK communication, under the same conditions, as the baseline.

There is no need to take the multipath propagation of the channels into consideration because of the relatively close distance between the nodes in multi-UUV communication. We thus adopt the additive white Gaussian noise (AWGN) channel model and the Monte Carlo method to calculate the system BER in the simulation, as in the simulation system shown in Figure 10, and we also take the BER of

Figure 9: The illustration of stacking the deep autoencoder and the softmax classifier.

Figure 10: The simulation system used in the experiment.

the single-user communication as the baseline. The sampling rate of our simulation is 16 kHz and the order of the modulation is 16, without any error-correction coding. The bandwidth of the MFSK modulation/demodulation is 640 Hz, and the bandwidth ratio varies from 0.625 to 0.125, corresponding to bit rates from 400 bit/s to 80 bit/s.
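For reference, the quoted bandwidth ratios are consistent with reading the ratio as bit rate divided by bandwidth; this reading is our inference from the endpoint values, not an explicit formula in the paper:

```python
bandwidth_hz = 640.0
ratios = (0.625, 0.500, 0.417, 0.313, 0.250, 0.208, 0.156, 0.125)
# bit rate implied by each (rounded) table ratio
bit_rates = [ratio * bandwidth_hz for ratio in ratios]
# endpoints: 0.625 -> 400.0 bit/s and 0.125 -> 80.0 bit/s, matching the text
```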

First, suppose we can obtain the optimal T-F mask, that is, use (6) to calculate the T-F mask; we then obtain the simulated BER as a function of SNR and bandwidth ratio and compare it with the baseline.

As shown in Tables 1 and 2, the simulation results show that a BER similar to the baseline can be attained by the T-F mask method when the bandwidth ratio is < 0.417. Furthermore, the BER with the T-F mask is even lower than that of the baseline under the same conditions when the SNR is very low, that is, SNR = −20 dB. The reason behind this phenomenon is that, as can be seen from (6), some frequency points are set to zero by the T-F mask, which objectively raises the SNR of the signal and yields a lower BER. In the actual system, however, it is a highly challenging task to accurately estimate the T-F mask under such low SNR, which is also verified in the later simulation.

According to Section 3, we estimate the T-F mask by using the orientation information of the nodes, that is, the DOA of the MFSK signal received by the nodes. Therefore, we introduce a time delay τ to simulate a pairwise-hydrophone receiver in the simulation and divide the horizontal space into 37 blocks, covering −90° to +90° with a step size of 5°. We estimate the T-F mask with the source separation system described in Section 3 and compare its BER performance with the baseline under different SNRs and bandwidth ratios; the simulation results are shown in Table 3.
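The 37-direction grid, together with the standard plane-wave relation between DOA and the pairwise time delay (τ = d·sin θ / c, a common two-sensor far-field model; the hydrophone spacing d below is our assumption, as the paper does not state it), can be sketched as:

```python
import numpy as np

angles_deg = np.arange(-90, 91, 5)   # 37 candidate DOAs, -90..+90 in 5-degree steps

c = 1500.0                           # underwater sound speed, m/s (from the paper)
d = 0.5                              # assumed hydrophone spacing, m
tau = d * np.sin(np.deg2rad(angles_deg)) / c   # pairwise delay per candidate DOA
```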


Table 1: The BER of the simulation system with the optimum T-F mask.

Bandwidth ratio | SNR = 30 dB | 20 dB | 10 dB | 5 dB | 0 dB
0.625 | 5.30E−02 | 5.30E−02 | 5.36E−02 | 5.50E−02 | 5.95E−02
0.500 | 2.96E−02 | 2.95E−02 | 3.03E−02 | 3.16E−02 | 3.56E−02
0.417 | 1.40E−02 | 1.38E−02 | 1.38E−02 | 1.39E−02 | 1.39E−02
0.313 | 9.00E−06 | 1.60E−05 | 1.30E−05 | 2.00E−05 | 4.50E−05
0.250 | 3.00E−06 | 4.00E−06 | 1.00E−06 | 4.00E−06 | 4.00E−06
0.208 | 0.00E+00 | 1.00E−06 | 1.00E−06 | 0.00E+00 | 0.00E+00
0.156 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.125 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00

Bandwidth ratio | SNR = −5 dB | −10 dB | −15 dB | −20 dB
0.625 | 7.64E−02 | 1.47E−01 | 3.40E−01 | 5.62E−01
0.500 | 4.26E−02 | 6.82E−02 | 1.98E−01 | 4.24E−01
0.417 | 1.48E−02 | 2.67E−02 | 1.10E−01 | 3.09E−01
0.313 | 1.25E−04 | 3.08E−03 | 3.23E−02 | 1.57E−01
0.250 | 9.00E−06 | 1.12E−04 | 7.13E−03 | 7.54E−02
0.208 | 2.00E−06 | 2.19E−04 | 4.69E−03 | 4.62E−02
0.156 | 0.00E+00 | 2.10E−05 | 1.13E−03 | 1.46E−02
0.125 | 0.00E+00 | 0.00E+00 | 2.00E−05 | 4.02E−03

Table 2: The BER of the baseline.

Bandwidth ratio | SNR = 30 dB | 20 dB | 10 dB | 5 dB | 0 dB
0.625 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.500 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.417 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.313 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.250 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.208 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.156 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.125 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00

Bandwidth ratio | SNR = −5 dB | −10 dB | −15 dB | −20 dB
0.625 | 4.43E−04 | 5.08E−02 | 3.62E−01 | 7.26E−01
0.500 | 1.00E−06 | 7.91E−03 | 2.16E−01 | 6.47E−01
0.417 | 0.00E+00 | 7.13E−04 | 1.19E−01 | 5.70E−01
0.313 | 0.00E+00 | 0.00E+00 | 3.42E−02 | 4.35E−01
0.250 | 0.00E+00 | 0.00E+00 | 9.65E−03 | 3.28E−01
0.208 | 0.00E+00 | 0.00E+00 | 3.40E−03 | 2.53E−01
0.156 | 0.00E+00 | 0.00E+00 | 2.92E−04 | 1.41E−01
0.125 | 0.00E+00 | 0.00E+00 | 1.90E−05 | 7.23E−02

As shown in Table 3, the BER performance of the proposed system is much the same as that of the baseline when SNR > 20 dB and the bandwidth ratio is < 0.417, which is consistent with the result obtained with the optimal T-F mask. When SNR < 20 dB, however, the BER performance of the proposed system begins to decline, because the low SNR of the signals causes a large error in the T-F mask estimation, which results in system performance degradation.

Table 3: The BER of the proposed system.

Bandwidth ratio | SNR = 30 dB | 20 dB | 10 dB | 5 dB | 0 dB
0.625 | 5.80E−02 | 7.30E−02 | 6.36E−02 | 6.30E−02 | 3.59E−01
0.500 | 3.26E−02 | 3.45E−02 | 3.93E−02 | 4.16E−02 | 3.36E−01
0.417 | 1.46E−02 | 1.78E−02 | 1.48E−02 | 4.99E−02 | 2.14E−01
0.313 | 1.00E−05 | 6.89E−04 | 9.50E−03 | 3.00E−02 | 1.90E−01
0.250 | 7.00E−06 | 2.10E−04 | 6.00E−03 | 1.00E−02 | 1.60E−01
0.208 | 1.00E−06 | 9.00E−05 | 3.00E−03 | 8.00E−03 | 1.30E−01
0.156 | 0.00E+00 | 3.00E−05 | 9.80E−04 | 2.00E−03 | 1.30E−01
0.125 | 0.00E+00 | 7.00E−06 | 1.00E−05 | 9.00E−04 | 1.20E−01

Bandwidth ratio | SNR = −5 dB | −10 dB | −15 dB | −20 dB
0.625 | 4.94E−01 | — | — | —
0.500 | 4.26E−01 | — | — | —
0.417 | 4.48E−01 | — | — | —
0.313 | 4.25E−01 | — | — | —
0.250 | 4.90E−01 | — | — | —
0.208 | 4.20E−01 | — | — | —
0.156 | 4.00E−01 | — | — | —
0.125 | 2.50E−01 | — | — | —

5. Summary

In this paper, we address the problem of control packet collision avoidance, which exists widely in multichannel MAC protocols, and, on the basis of the sparsity of the noncoherent MFSK modulation in the time-frequency domain, we separate the sources from the MFSK mixture caused by packet collisions through the use of the T-F masking method. First, we characterize the sparsity of the MFSK signal in terms of the bandwidth ratio and demonstrate the relation between the bandwidth ratio and the PSR, SIR, and WDO by means of simulation experiments. Then we establish the source separation system based on deep networks and the model of the MFSK communication system, taking single-user MFSK communication as the baseline to compare the BER performance of the proposed system and the baseline under different conditions of SNR and bandwidth ratio. The simulation results show that, first, the optimal T-F masking can obtain

the same BER performance as the baseline under lower bandwidth ratios; second, the proposed system can obtain a BER performance similar to the baseline under higher SNR; third, the BER performance of the proposed system declines rapidly under low SNR, because low SNR leads to a greater error in the estimation of the T-F mask. In future work, we will adjust the structure of the deep networks to improve the performance of the proposed system under low SNR and under the multipath propagation present in the underwater channel. As a future research topic, it is also worth exploring the possibility that bioinspired computing models and algorithms be used for the underwater multichannel MAC protocols, such as P systems (inspired by the structure and functioning of cells) [19, 20] and evolutionary computation (motivated by Darwin's theory of evolution) [21, 22].

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This research was supported partially by the Natural Science Basic Research Plan in Shaanxi Province of China (Program no. 2014JQ8355).

References

[1] Z. Li, C. Yang, N. Ding, S. Bogdan, and T. Ge, "Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints," International Journal of Control, Automation and Systems, vol. 10, no. 2, pp. 421–429, 2012.

[2] A. S. K. Annamalai, R. Sutton, C. Yang, P. Culverhouse, and S. Sharma, "Robust adaptive control of an uninhabited surface vehicle," Journal of Intelligent & Robotic Systems, vol. 78, no. 2, pp. 319–338, 2015.

[3] K. Chen, M. Ma, E. Cheng, F. Yuan, and W. Su, "A survey on MAC protocols for underwater wireless sensor networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1433–1447, 2014.

[4] R. Otnes, A. Asterjadhi, P. Casari et al., Underwater Acoustic Networking Techniques, Springer, Berlin, Germany, 2012.

[5] S. Climent, A. Sanchez, J. V. Capella, N. Meratnia, and J. J. Serrano, "Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers," Sensors, vol. 14, no. 1, pp. 795–833, 2014.

[6] I. M. Khalil, Y. Gadallah, M. Hayajneh, and A. Khreishah, "An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks," Sensors, vol. 12, no. 7, pp. 8782–8805, 2012.

[7] F. Bouabdallah and R. Boutaba, "A distributed OFDMA medium access control for underwater acoustic sensors networks," in Proceedings of the IEEE International Conference on Communications (ICC '11), June 2011.

[8] C.-C. Hsu, K.-F. Lai, C.-F. Chou, and K. C.-J. Lin, "ST-MAC: spatial-temporal MAC scheduling for underwater sensor networks," in Proceedings of the 28th Conference on Computer Communications (INFOCOM '09), pp. 1827–1835, IEEE, Rio de Janeiro, Brazil, April 2009.

[9] Y.-D. Chen, C.-Y. Lien, S.-W. Chuang, and K.-P. Shih, "DSSS: a TDMA-based MAC protocol with dynamic slot scheduling strategy for underwater acoustic sensor networks," in Proceedings of the IEEE OCEANS, pp. 1–6, Santander, Spain, June 2011.

[10] K. Kredo II, P. Djukic, and P. Mohapatra, "STUMP: exploiting position diversity in the staggered TDMA underwater MAC protocol," in Proceedings of the IEEE 28th Conference on Computer Communications (INFOCOM '09), pp. 2961–2965, April 2009.

[11] M. Molins and M. Stojanovic, "Slotted FAMA: a MAC protocol for underwater acoustic networks," in Proceedings of the IEEE OCEANS 2006, Boston, Mass, USA, September 2006.

[12] N. Chirdchoo, W.-S. Soh, and K. C. Chua, "RIPT: a receiver-initiated reservation-based protocol for underwater acoustic networks," IEEE Journal on Selected Areas in Communications, vol. 26, no. 9, pp. 1744–1753, 2008.

[13] Z. Zhou, Z. Peng, P. Xie, J.-H. Cui, and Z. Jiang, "Exploring random access and handshaking techniques in underwater wireless acoustic networks," EURASIP Journal on Wireless Communications and Networking, vol. 2013, article 95, 2013.

[14] C.-M. Chao et al., "A multiple rendezvous multichannel MAC protocol for underwater sensor networks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 1, pp. 128–138, 2013.

[15] H. Ramezani and G. Leus, "DMC-MAC: dynamic multichannel MAC in underwater acoustic networks," in Proceedings of the 21st European Signal Processing Conference (EUSIPCO '13), pp. 1–5, Marrakech, Morocco, September 2013.

[16] Z. Zhou, Z. Peng, J.-H. Cui, and Z. Jiang, "Handling triple hidden terminal problems for multichannel MAC in long-delay underwater sensor networks," IEEE Transactions on Mobile Computing, vol. 11, no. 1, pp. 139–154, 2012.

[17] O. Yilmaz and S. Rickard, "Blind separation of speech mixtures via time-frequency masking," IEEE Transactions on Signal Processing, vol. 52, no. 7, pp. 1830–1847, 2004.

[18] T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Auto-encoder bottleneck features using deep belief networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '12), pp. 4153–4156, Kyoto, Japan, March 2012.

[19] X. Zhang, L. Pan, and A. Paun, "On the universality of axon P systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816–2829, 2015.

[20] X. Zhang, Y. Liu, B. Luo, and L. Pan, "Computational power of tissue P systems for generating control languages," Information Sciences, vol. 278, pp. 285–297, 2014.

[21] X. Zhang, Y. Tian, and Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761–776, 2015.

[22] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, "An efficient approach to nondominated sorting for evolutionary multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 201–213, 2015.


Figure 5: The architecture of the proposed system using deep neural network based time-frequency masking for blind source separation.

where | · | takes the absolute value of its argument and ang(·) finds the phase angle.

Next, we group the low-level features into N blocks (only along the frequency bins f). Block n includes K frequency bins ((n − 1)K + 1, …, nK), where K = F/N. We build N deep networks, each corresponding to one block, and use them to estimate the directions of arrival (DOAs) of the sources. The low-level features used as the input of the deep networks are composed of the IPD, ILD, and MV, that is, u(m, f) = [z^T(m, f), α(m, f), φ(m, f)]^T. Through unsupervised learning and the sparse autoencoder [18] in the deep networks, high-level features (encoding positional information of the sources) are extracted and used as inputs for the output layer (i.e., the softmax regression) of the networks. The output of the softmax regression is a source occupation probability (i.e., the time-frequency mask) for each block of the mixtures (through the ungroup operation, T-F units in the same block are assigned the same source occupation probability). Then the sources can be recovered by applying the T-F mask to the mixtures, followed by the inverse STFT (ISTFT). The deep networks are pretrained by using a greedy layer-wise training method.
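The block grouping described above can be sketched as follows; the total number of bins and the per-bin feature dimension below are illustrative assumptions:

```python
import numpy as np

def group_into_blocks(features, N):
    """Split the F frequency bins of a low-level feature array (F x D)
    into N contiguous blocks of K = F // N bins each, one block per
    deep network."""
    F = features.shape[0]
    K = F // N
    return [features[(n - 1) * K : n * K] for n in range(1, N + 1)]

blocks = group_into_blocks(np.zeros((512, 4)), N=8)
```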

3.3. The Deep Networks. As described at the beginning of this section, we group the low-level features into N blocks and build N individual deep networks, which have the same architecture, to classify the DOAs of the current input T-F point in each block. The deep network used to estimate the T-F mask is composed of a two-layer deep autoencoder and a one-layer softmax classifier.

More specifically, we split the whole space into J ranges with respect to the hydrophones and separate the target and interferers based on the different orientation ranges (DOAs with respect to the receiver node) in which they are located. We apply the softmax classifier to perform the classification task, and the inputs to the classifier, that is, the high-level features a^(2), are produced by the deep autoencoder. Assuming that the position of the target in the current input T-F point remains unchanged, the deep network estimates the probability p(g_i = i | u(m, f)) that the orientation of the current input sample belongs to the orientation index i. With the estimated orientation (obtained by selecting the maximum-probability index) of each input T-F point, we cluster the T-F points that have the same orientation index to get the probability mask, and obtain the T-F mask from the probability mask through the ungroup operation. Note that each T-F point in the same block is assigned the same probability. The number of sources can also be estimated from the probability mask by using a predefined probability threshold, typically chosen as 0.2 in our experiments.

3.3.1. Deep Autoencoder. An autoencoder is an unsupervised learning algorithm based on backpropagation. It aims to learn an approximation $\hat{\mathbf{u}}$ of the input $\mathbf{u}$. This appears to be learning a trivial identity function, but by imposing constraints on the learning process, such as limiting the number of activated neurons, the autoencoder discloses interesting structure in the data. Figure 6 shows the architecture of a single-layer autoencoder. The difference between a classic neural network and an autoencoder is the training objective: a classic neural network minimizes the difference between the labels of the input training data and the output of the network, whereas an autoencoder minimizes the difference between the input training dataset itself and the output of the network. As shown in Figure 6, the output of the autoencoder can be defined as $\hat{\mathbf{u}} = \mathrm{sigm}(\mathbf{W}^{(2)}\mathbf{a}^{(1)} + \mathbf{b}^{(2)})$ with $\mathbf{a}^{(1)} = \mathrm{sigm}(\mathbf{W}^{(1)}\mathbf{u} + \mathbf{b}^{(1)})$, where $\mathrm{sigm}(u) = 1/(1 + \exp(-u))$ is the logistic function, $\mathbf{W}^{(1)} \in \mathbb{R}^{V \times Y}$, $\mathbf{b}^{(1)} \in \mathbb{R}^{V}$, $\mathbf{W}^{(2)} \in \mathbb{R}^{Y \times V}$, and $\mathbf{b}^{(2)} \in \mathbb{R}^{Y}$. Here $V$ is the number of hidden layer neurons and $Y$ is the number of input layer neurons, which is the same as that of the output layer neurons.

Discrete Dynamics in Nature and Society

Figure 6: The single-layer autoencoder.

$\mathbf{W}^{(1)}$ is a matrix containing the weights of the connections between the input layer neurons and the hidden layer neurons. Similarly, $\mathbf{W}^{(2)}$ contains the weights of the connections between the hidden layer neurons and the output layer neurons. $\mathbf{b}^{(1)}$ is the vector of bias values added to the hidden layer neurons, and $\mathbf{b}^{(2)}$ is the corresponding vector for the output layer neurons. $\Theta$ refers to the parameter set composed of the weights $\mathbf{W}$ and biases $\mathbf{b}$. The neuron $v$ is "active" when its output $a_v$ is close to 1, which means that $\mathrm{sigm}(\mathbf{W}^{(1)}_v \mathbf{u} + b^{(1)}_v) \approx 1$. For "inactive" neurons, however, the output is close to 0, which means that $\mathrm{sigm}(\mathbf{W}^{(1)}_v \mathbf{u} + b^{(1)}_v) \approx 0$. Here $\mathbf{W}^{(1)}_v$ denotes the weights of the connections between hidden layer neuron $v$ and the input layer neurons, which is the $v$th row of the matrix $\mathbf{W}^{(1)}$, and $b^{(1)}_v$ is the $v$th element of the vector $\mathbf{b}^{(1)}$, that is, the bias value added to hidden layer neuron $v$. The superscript $i$ of $\mathbf{b}^{(i)}$, $\mathbf{W}^{(i)}$, and $\mathbf{a}^{(i)}$ denotes the $i$th layer of the deep network.

With the sparsity constraint, most of the neurons are assumed to be inactive. More specifically, $a_v = \mathrm{sigm}(\mathbf{W}^{(1)}_v \mathbf{u} + b^{(1)}_v)$ denotes the activation value of hidden layer unit $v$ in the autoencoder. Generalizing this over the training set, the average activation $\hat{\rho}_v$ of unit $v$ with the input samples $\mathbf{u}^{(m)}$ can be defined as follows:

$$\hat{\rho}_v = \frac{1}{M}\sum_{m=1}^{M} a_v(\mathbf{u}^{(m)}), \qquad (14)$$

where $M$ is the number of training samples and $\mathbf{u}^{(m)}$ is the $m$th input training sample. Next, the sparsity constraint $\hat{\rho}_v = \rho$ is enforced, where $\rho$ is a parameter preset before training, typically small, such as $\rho = 3 \times 10^{-3}$. To achieve the sparsity constraint, we use the following penalty term in the cost function of the sparse autoencoder:

$$\sum_{v=1}^{V}\mathrm{KL}(\rho \,\|\, \hat{\rho}_v) = \sum_{v=1}^{V}\left[\rho\log\frac{\rho}{\hat{\rho}_v} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_v}\right]. \qquad (15)$$

The penalty term is essentially a Kullback-Leibler (KL) divergence. Now the cost function $J_{\mathrm{sparse}}(\mathbf{W},\mathbf{b})$ of the sparse autoencoder can be written as follows:

$$J_{\mathrm{sparse}}(\mathbf{W},\mathbf{b}) = \frac{1}{2}\left\|\mathbf{u} - \hat{\mathbf{u}}\right\|^2 + \beta\sum_{v=1}^{V}\mathrm{KL}(\rho \,\|\, \hat{\rho}_v), \qquad (16)$$

where $\beta$ controls the weight of the penalty term. In our proposed system, the cost function $J_{\mathrm{sparse}}(\mathbf{W},\mathbf{b})$ is minimized using the limited-memory BFGS (L-BFGS) optimization algorithm, and the single-layer sparse autoencoder is trained by using the backpropagation algorithm.
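To make the objective concrete, here is a minimal NumPy sketch of the sparse-autoencoder cost in (14)–(16), minimized with SciPy's L-BFGS. It is an illustration, not the authors' code: the layer sizes and data are made up, and the gradient is approximated numerically for brevity, whereas the paper computes it with backpropagation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def sparse_ae_cost(theta, X, n_hid, rho=3e-3, beta=3.0):
    """J_sparse(W, b): reconstruction error plus the KL sparsity penalty."""
    n_in = X.shape[1]
    k = n_hid * n_in
    W1 = theta[:k].reshape(n_hid, n_in)          # input -> hidden weights
    W2 = theta[k:2 * k].reshape(n_in, n_hid)     # hidden -> output weights
    b1 = theta[2 * k:2 * k + n_hid]
    b2 = theta[2 * k + n_hid:]
    A1 = sigm(X @ W1.T + b1)                     # hidden activations a(1)
    U = sigm(A1 @ W2.T + b2)                     # reconstruction u-hat
    rho_hat = np.clip(A1.mean(axis=0), 1e-8, 1 - 1e-8)  # average activations
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return 0.5 * np.mean(np.sum((U - X) ** 2, axis=1)) + beta * kl

n_in, n_hid = 8, 4
X = rng.random((200, n_in))
theta0 = rng.normal(0.0, 0.01, 2 * n_in * n_hid + n_hid + n_in)
res = minimize(sparse_ae_cost, theta0, args=(X, n_hid),
               method="L-BFGS-B", options={"maxiter": 25})
```

After optimization, `res.x` holds the flattened parameters $(\mathbf{W},\mathbf{b})$ and `res.fun` the final cost.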

After finishing the training of the single-layer sparse autoencoder, we discard the output layer neurons with the related weights $\mathbf{W}^{(2)}$ and bias $\mathbf{b}^{(2)}$ and keep only $\mathbf{W}^{(1)}$ and $\mathbf{b}^{(1)}$. The output of the hidden layer, $\mathbf{a}^{(1)}$, is used as the input samples for the next single-layer sparse autoencoder. Repeating these steps, that is, stacking the autoencoders, we can build a deep autoencoder from two or more single-layer sparse autoencoders. In our proposed system we use two single-layer autoencoders to build the deep autoencoder. The stacking procedure is shown in the right part of Figure 7.

Many studies on deep autoencoders show that, with a deep architecture (more than one hidden layer), a deep autoencoder can build up more complex representations from simple low-level features, capture the underlying regularities of the data, and improve recognition quality. That is why we use a deep autoencoder in our proposed system.

There are, however, several difficulties associated with obtaining the optimized weights of deep autoencoders. One challenge is the presence of local optima. In particular, training a neural network using supervised learning involves solving a highly nonconvex optimization problem, that is, finding a set of network parameters $(\mathbf{W},\mathbf{b})$ that minimizes the training error $\|\mathbf{u} - \hat{\mathbf{u}}\|^2$. In the deep autoencoder this optimization problem is rife with bad local optima, and training with gradient descent no longer works well. Another challenge is the "diffusion of gradients": when using backpropagation to compute the derivatives, the gradients that are propagated backwards (from the output layer to the earlier layers of the network) rapidly diminish in magnitude as the depth of the network increases. As a result, the derivative of the overall cost with respect to the weights in the earlier layers is very small; thus, when using gradient descent, the weights of the earlier layers change slowly. However, if the initial parameters are already close to the optimized values, gradient descent works well. That is the idea of "greedy layer-wise" training, where the layers of the network are trained one by one, as shown in the left part of Figure 7.

First, we use the backpropagation algorithm to train the first sparse autoencoder (containing only one hidden layer), with the data labels being the inputs themselves. In our proposed system the input data is the $4K$-dimensional feature vector $\mathbf{u}$. As a result of the first-layer training, we get a set of network parameters $\Theta^{(1)}$ (i.e., the parameters $\mathbf{W}^{(1)}$ and $\mathbf{b}^{(1)}$) of the first-layer sparse autoencoder and a new dataset $\mathbf{a}^{(1)}$ (i.e., the features I shown in Figure 9), which is the output of the hidden layer neurons (the activity state of the hidden layer neurons) computed with this parameter set. Next, we use $\mathbf{a}^{(1)}$ as the inputs to the second sparse autoencoder. After the second autoencoder is trained, we get the network parameters $\Theta^{(2)}$ of the second sparse autoencoder and the new dataset $\mathbf{a}^{(2)}$ (i.e., the features II in Figure 9) for the training of the next single-layer neural network. We then repeat the above steps until the last layer (i.e., the softmax regression in our proposed system). Finally, we obtain a pretrained deep autoencoder by stacking all the autoencoders and use $\Theta^{(1)}$ and $\Theta^{(2)}$ as the initialized parameters for this pretrained deep autoencoder. The features II are the high-level features and can be used as the training dataset for the softmax regression discussed next.

Figure 7: The illustration of greedy layer-wise training and stacking; (a) is the procedure of greedy layer-wise training and (b) is the procedure of stacking of sparse autoencoders.
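The greedy layer-wise procedure can be sketched as follows. This is an illustrative toy, not the authors' implementation: the data are random, the sparsity penalty is omitted, and plain gradient descent replaces L-BFGS, but the flow (train layer 1, feed its hidden activations to layer 2) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=200):
    """Train one single-layer autoencoder by backpropagation (no sparsity term)."""
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_in, n_hidden))
    b2 = np.zeros(n_in)
    for _ in range(epochs):
        A1 = sigm(X @ W1.T + b1)            # hidden activations a(1)
        U = sigm(A1 @ W2.T + b2)            # reconstruction u-hat
        dU = (U - X) * U * (1 - U)          # output-layer delta
        dA = (dU @ W2) * A1 * (1 - A1)      # hidden-layer delta
        W2 -= lr * dU.T @ A1 / len(X)
        b2 -= lr * dU.mean(axis=0)
        W1 -= lr * dA.T @ X / len(X)
        b1 -= lr * dA.mean(axis=0)
    return W1, b1                           # decoder half is discarded

X = rng.random((100, 8))
W1a, b1a = train_autoencoder(X, n_hidden=4)
F1 = sigm(X @ W1a.T + b1a)                  # features I
W1b, b1b = train_autoencoder(F1, n_hidden=3)
F2 = sigm(F1 @ W1b.T + b1b)                 # features II
```

`F2` plays the role of the high-level features handed to the softmax classifier; the saved weight pairs initialize the stacked deep network before fine-tuning.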

3.3.2. Softmax Classifier. In our proposed system, the softmax classifier, based on softmax regression, is used to estimate the probabilities of the current input T-F point $\mathbf{u}_{(n,m)}$ belonging to the orientation index $j$, with the high-level features $\mathbf{a}^{(2)}_{(n,m)}$ extracted by the deep autoencoder as inputs.

The softmax regression generalizes classical logistic regression (for binary classification) to multiclass classification problems. Different from logistic regression, the data label of the softmax regression is an integer value between 1 and $J$, where $J$ is the number of data classes. More specifically, in our proposed system, for $J$ classes, an $M$-sample dataset is used to train the $n$th deep network:

$$\left\{(\mathbf{a}^{(2)}_{(n,1)}, g_{(n,1)}), \ldots, (\mathbf{a}^{(2)}_{(n,m)}, g_{(n,m)}), \ldots, (\mathbf{a}^{(2)}_{(N,M)}, g_{(N,M)})\right\}, \qquad (17)$$

where $g_{(n,m)}$ is the label of the $m$th sample $\mathbf{a}^{(2)}_{(n,m)}$ and is set to $j$ if $\mathbf{a}^{(2)}_{(n,m)}$ belongs to class $j$. The architecture of the softmax classifier is shown in Figure 8.

Given an input $\mathbf{a}^{(2)}_{(n,m)}$, the output $p(g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)}) = e^{\Theta_j^T \mathbf{a}^{(2)}_{(n,m)}} / \sum_{j'=1}^{J} e^{\Theta_{j'}^T \mathbf{a}^{(2)}_{(n,m)}}$ of output layer neuron $j$ gives the probability of the input $\mathbf{a}^{(2)}_{(n,m)}$ belonging to class $j$. Similar to logistic regression, the output of the softmax regression can be written as follows:

$$\mathbf{h}_\Theta\left(\mathbf{a}^{(2)}_{(n,m)}\right) =
\begin{bmatrix}
p(g_{(n,m)} = 1 \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta) \\
\vdots \\
p(g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta) \\
\vdots \\
p(g_{(n,m)} = J \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta)
\end{bmatrix}
= \frac{1}{\sum_{j=1}^{J} e^{\Theta_j^T \mathbf{a}^{(2)}_{(n,m)}}}
\begin{bmatrix}
e^{\Theta_1^T \mathbf{a}^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_j^T \mathbf{a}^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_J^T \mathbf{a}^{(2)}_{(n,m)}}
\end{bmatrix}. \qquad (18)$$

Figure 8: A classical architecture of the softmax classifier.

Here $\Theta \in \mathbb{R}^{J \times V}$ is the network parameter matrix of the softmax classifier ($V$ is the dimension of the input $\mathbf{a}^{(2)}_{(n,m)}$ and $J$ is the number of classes within the input data). $\Theta_j^T$ represents the transpose of the $j$th row of $\Theta$ and contains the parameters of the connections between neuron $j$ of the output layer and the input sample $\mathbf{a}^{(2)}_{(n,m)}$.

The softmax regression and the logistic regression have the same minimization objective. The cost function $J_{\mathrm{softmax}}(\Theta)$ of the softmax classifier can be generalized from the logistic regression cost function and written as follows:

$$J_{\mathrm{softmax}}(\Theta) = -\frac{1}{M}\left[\sum_{m=1}^{M}\sum_{j=1}^{J} 1\{g_{(n,m)} = j\} \log p\left(g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta\right)\right] + \frac{\lambda}{2}\sum_{j=1}^{J}\sum_{v=1}^{V}\left[\Theta_{jv}\right]^2, \qquad (19)$$

where $1\{\cdot\}$ is the indicator function, so that $1\{\text{a true statement}\} = 1$ and $1\{\text{a false statement}\} = 0$. The element $\Theta_{jv}$ of $\Theta$ contains the parameter of the connection between neuron $j$ of the output layer and neuron $v$ of the input layer. $(\lambda/2)\sum_{j=1}^{J}\sum_{v=1}^{V}(\Theta_{jv})^2$ is a regularization term that makes the cost function $J_{\mathrm{softmax}}(\Theta)$ strictly convex, where $\lambda$ is a weight decay parameter predefined by the user.

In our proposed system, we extend the label $g_{(n,m)}$ of the training dataset to a vector $\mathbf{g}_{(n,m)} \in \mathbb{R}^{J}$ and set the $j$th element $g_j$ of $\mathbf{g}_{(n,m)}$ to 1 and the other elements to 0 when the input sample $\mathbf{a}^{(2)}_{(n,m)}$ belongs to class $j$, for example, $\mathbf{g}_{(n,m)} = [0, \ldots, 0, 1, 0, \ldots, 0]^T$. The cost function $J_{\mathrm{softmax}}(\Theta)$ can then be written in vectorized form as follows:

$$J_{\mathrm{softmax}}(\Theta) = -\frac{1}{M}\left[\sum_{m=1}^{M} \left(\mathbf{g}_{(n,m)}\right)^T \log \mathbf{h}_\Theta\left(\mathbf{a}^{(2)}_{(n,m)}\right)\right] + \frac{\lambda}{2}\sum_{j=1}^{J}\sum_{v=1}^{V}\left(\Theta_{jv}\right)^2. \qquad (20)$$

The softmax classifier can be trained by using the L-BFGS algorithm on a dataset in order to find an optimal parameter set $\Theta$ minimizing the cost function $J_{\mathrm{softmax}}(\Theta)$. In our proposed system, the dataset for softmax classifier training is composed of two parts. The first part is the input sample $\mathbf{a}^{(2)}_{(n,m)}$ (features II) calculated from the last hidden layer of the deep autoencoder. The second part is the data label $\mathbf{g}_{(n,m)} \in \mathbb{R}^{J}$, where the $j$th element $g_j$ of $\mathbf{g}_{(n,m)}$ is set to 1 when the input sample belongs to the source located in the range of DOAs of index $j$.
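The vectorized cost of (20) with one-hot labels can be sketched as follows (an illustration with made-up shapes and data; the L-BFGS minimization itself is not shown, and the class indices are zero-based here, in contrast to the 1-to-$J$ indexing of the paper):

```python
import numpy as np

def softmax(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def j_softmax(Theta, A, g, lam):
    """Vectorized cost of (20): cross-entropy against one-hot labels plus
    the weight decay term (lambda / 2) * sum of Theta_jv squared."""
    J = Theta.shape[0]
    G = np.eye(J)[g]                          # one-hot label vectors g_(n,m)
    P = softmax(A @ Theta.T)                  # h_Theta(a) for every sample
    ce = -np.mean(np.sum(G * np.log(P + 1e-12), axis=1))
    return ce + 0.5 * lam * np.sum(Theta ** 2)

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 8))                  # high-level features (features II)
g = rng.integers(0, 3, size=20)               # class indices in {0, 1, 2}
Theta = rng.normal(scale=0.01, size=(3, 8))
cost = j_softmax(Theta, A, g, lam=1e-4)       # scalar handed to the optimizer
```

The small constant inside the logarithm guards against `log(0)` for near-zero probabilities.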

We stack the softmax classifier and the deep autoencoder together after the training is completed, as shown in the left part of Figure 9. Finally, we use the training dataset and the L-BFGS algorithm to fine-tune the deep network with the initialized parameters $\mathbf{W}^{(1)}$, $\mathbf{b}^{(1)}$, $\mathbf{W}^{(2)}$, $\mathbf{b}^{(2)}$, $\mathbf{W}^{(3)}$, and $\mathbf{b}^{(3)}$ obtained from the sparse autoencoder and softmax classifier training. The training phase of the sparse autoencoders and the softmax classifier is called the pretraining phase, and the stacking/training of the overall network, that is, the deep network, is called the fine-tuning phase. In the pretraining phase, the shallow neural networks, that is, the sparse autoencoders and the softmax classifier, are trained individually, using the output of the current layer as the input for the next layer. In the fine-tuning phase, we use the L-BFGS algorithm (i.e., a gradient descent method) to minimize the difference between the output of the deep network and the labels of the training dataset. The gradient descent works well because the initialized parameters obtained from the pretraining phase already encode a significant amount of "prior" information about the input data through unsupervised learning.

4. Experiments

In this section, we describe the communication system used in the simulation and show the separation results at different SNRs and data rates. The bit error rate (BER) is an extremely significant performance index for a pragmatic communication system; therefore, in this section we evaluate the separation quality by BER and regard the BER performance of single-user MFSK communication under the same conditions as the baseline.

There is no need to take the multipath propagation of the channels into consideration, because the nodes are relatively close to one another in multi-UUV communication. We thus adopt an additive white Gaussian noise (AWGN) channel model and the Monte Carlo method to calculate the system BER in the simulation, using the simulation system shown in Figure 10, and we also take the BER of the single-user communication as the baseline.

Figure 9: The illustration of stacking the deep autoencoder and the softmax classifier.

Figure 10: The simulation system used in the experiment.

The sampling rate of our simulation is 16 kHz and the order of the modulation is 16, without any error-correction coding. The bandwidth of the MFSK modulation/demodulation is 640 Hz, and the bandwidth ratio varies from 0.625 to 0.125, corresponding to bit rates from 400 bits/s to 80 bits/s.
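The baseline BER measurement can be sketched with a Monte Carlo 16-FSK simulation over an AWGN channel. This is an illustrative stand-in for the simulation system of Figure 10, not the authors' code: the tone grid (16 tones spaced 40 Hz apart, i.e., a 640 Hz band) and the 25 ms symbol length are assumptions chosen so that the tones are orthogonal at the paper's 16 kHz sampling rate.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 16000                             # sampling rate from the paper (16 kHz)
M = 16                                 # modulation order (16-FSK)
N = 400                                # samples per symbol (25 ms, assumed)
tones = 1000.0 + 40.0 * np.arange(M)   # assumed tone grid: 16 tones, 640 Hz band
t = np.arange(N) / fs

def mod(symbols):
    """Map each symbol to one sinusoidal tone (one row per symbol)."""
    return np.cos(2 * np.pi * tones[symbols][:, None] * t)

def demod(x):
    """Noncoherent detection: pick the tone FFT bin with the largest magnitude."""
    bins = np.round(tones * N / fs).astype(int)
    return np.abs(np.fft.rfft(x, axis=1))[:, bins].argmax(axis=1)

n_sym = 2000
sym = rng.integers(0, M, n_sym)
s = mod(sym)
snr_db = 10
noise_var = np.mean(s ** 2) / 10 ** (snr_db / 10)
r = s + rng.normal(0.0, np.sqrt(noise_var), s.shape)   # AWGN channel
ser = np.mean(demod(r) != sym)                         # Monte Carlo error rate
```

Sweeping `snr_db` (and, for the proposed system, demodulating the masked mixture instead of `r`) reproduces the kind of BER-versus-SNR curves tabulated below.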

First, suppose we can obtain the optimal T-F mask, that is, calculate the T-F mask by using (6); we then obtain the simulated BER as a function of SNR and bandwidth ratio and compare it with the baseline.

As shown in Tables 1 and 2, the simulation results show that a BER similar to the baseline can be attained by the T-F mask method when the bandwidth ratio is < 0.417. Furthermore, the BER with the T-F mask can be even lower than that of the baseline under the same conditions when the SNR is very low, that is, SNR = -20 dB. The reason for this phenomenon is that, as can be seen from (6), some frequency points are set to zero by the T-F mask, which objectively raises the SNR of the signal and yields a lower BER. In a real system, however, accurately estimating the T-F mask under such low SNR is a highly challenging task, which is also verified in the later simulation.

According to Section 3, we estimate the T-F mask by using the orientation information of the nodes, that is, the DOA of the MFSK signal received by the nodes. Therefore, we introduce a time delay $\tau$ to simulate a pairwise-hydrophone receiver in the simulation and divide the space into 37 blocks along the horizontal direction, corresponding to DOAs from -90° to +90° with a step size of 5°. We estimate the T-F mask with the source separation system described in Section 3 and compare its BER performance to the baseline under different SNRs and bandwidth ratios; the simulation results are shown in Table 3.
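The mapping between a DOA and the 37 orientation blocks, and the corresponding pairwise-hydrophone delay, can be sketched as follows (the hydrophone spacing `D` is a made-up illustrative value; the 1500 m/s sound speed and the 5° grid are from the paper):

```python
import numpy as np

C = 1500.0   # underwater sound speed (m/s), as used in the paper
D = 0.05     # pairwise hydrophone spacing in metres (illustrative value)

def doa_to_delay(theta_deg):
    """Inter-hydrophone time delay tau of a plane wave arriving from DOA theta."""
    return D * np.sin(np.deg2rad(theta_deg)) / C

def doa_to_block(theta_deg):
    """Map a DOA in [-90, +90] degrees onto the 37-block grid (5-degree steps)."""
    return int(round((theta_deg + 90.0) / 5.0))

tau = doa_to_delay(30.0)    # delay used to simulate the second hydrophone
block = doa_to_block(30.0)  # orientation index of this source
```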


Table 1: The BER of the simulation system with the optimum T-F mask.

Bandwidth ratio | SNR = 30 dB | SNR = 20 dB | SNR = 10 dB | SNR = 5 dB | SNR = 0 dB
0.625 | 5.30E-02 | 5.30E-02 | 5.36E-02 | 5.50E-02 | 5.95E-02
0.500 | 2.96E-02 | 2.95E-02 | 3.03E-02 | 3.16E-02 | 3.56E-02
0.417 | 1.40E-02 | 1.38E-02 | 1.38E-02 | 1.39E-02 | 1.39E-02
0.313 | 9.00E-06 | 1.60E-05 | 1.30E-05 | 2.00E-05 | 4.50E-05
0.250 | 3.00E-06 | 4.00E-06 | 1.00E-06 | 4.00E-06 | 4.00E-06
0.208 | 0.00E+00 | 1.00E-06 | 1.00E-06 | 0.00E+00 | 0.00E+00
0.156 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.125 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00

Bandwidth ratio | SNR = -5 dB | SNR = -10 dB | SNR = -15 dB | SNR = -20 dB
0.625 | 7.64E-02 | 1.47E-01 | 3.40E-01 | 5.62E-01
0.500 | 4.26E-02 | 6.82E-02 | 1.98E-01 | 4.24E-01
0.417 | 1.48E-02 | 2.67E-02 | 1.10E-01 | 3.09E-01
0.313 | 1.25E-04 | 3.08E-03 | 3.23E-02 | 1.57E-01
0.250 | 9.00E-06 | 1.12E-04 | 7.13E-03 | 7.54E-02
0.208 | 2.00E-06 | 2.19E-04 | 4.69E-03 | 4.62E-02
0.156 | 0.00E+00 | 2.10E-05 | 1.13E-03 | 1.46E-02
0.125 | 0.00E+00 | 0.00E+00 | 2.00E-05 | 4.02E-03

Table 2: The BER of the baseline.

Bandwidth ratio | SNR = 30 dB | SNR = 20 dB | SNR = 10 dB | SNR = 5 dB | SNR = 0 dB
0.625 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.500 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.417 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.313 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.250 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.208 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.156 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.125 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00

Bandwidth ratio | SNR = -5 dB | SNR = -10 dB | SNR = -15 dB | SNR = -20 dB
0.625 | 4.43E-04 | 5.08E-02 | 3.62E-01 | 7.26E-01
0.500 | 1.00E-06 | 7.91E-03 | 2.16E-01 | 6.47E-01
0.417 | 0.00E+00 | 7.13E-04 | 1.19E-01 | 5.70E-01
0.313 | 0.00E+00 | 0.00E+00 | 3.42E-02 | 4.35E-01
0.250 | 0.00E+00 | 0.00E+00 | 9.65E-03 | 3.28E-01
0.208 | 0.00E+00 | 0.00E+00 | 3.40E-03 | 2.53E-01
0.156 | 0.00E+00 | 0.00E+00 | 2.92E-04 | 1.41E-01
0.125 | 0.00E+00 | 0.00E+00 | 1.90E-05 | 7.23E-02

As shown in Table 3, the BER performance of the proposed system is much the same as the baseline when SNR > 20 dB and the bandwidth ratio is < 0.417, which is consistent with the result obtained with the optimal T-F mask. When SNR < 20 dB, however, the BER performance of the proposed system begins to decline, because the lower SNR of the signals causes a large error in the T-F mask estimation, which degrades the system performance.

Table 3: The BER of the proposed system.

Bandwidth ratio | SNR = 30 dB | SNR = 20 dB | SNR = 10 dB | SNR = 5 dB | SNR = 0 dB
0.625 | 5.80E-02 | 7.30E-02 | 6.36E-02 | 6.30E-02 | 3.59E-01
0.500 | 3.26E-02 | 3.45E-02 | 3.93E-02 | 4.16E-02 | 3.36E-01
0.417 | 1.46E-02 | 1.78E-02 | 1.48E-02 | 4.99E-02 | 2.14E-01
0.313 | 1.00E-05 | 6.89E-04 | 9.50E-03 | 3.00E-02 | 1.90E-01
0.250 | 7.00E-06 | 2.10E-04 | 6.00E-03 | 1.00E-02 | 1.60E-01
0.208 | 1.00E-06 | 9.00E-05 | 3.00E-03 | 8.00E-03 | 1.30E-01
0.156 | 0.00E+00 | 3.00E-05 | 9.80E-04 | 2.00E-03 | 1.30E-01
0.125 | 0.00E+00 | 7.00E-06 | 1.00E-05 | 9.00E-04 | 1.20E-01

Bandwidth ratio | SNR = -5 dB | SNR = -10 dB | SNR = -15 dB | SNR = -20 dB
0.625 | 4.94E-01 | n/a | n/a | n/a
0.500 | 4.26E-01 | n/a | n/a | n/a
0.417 | 4.48E-01 | n/a | n/a | n/a
0.313 | 4.25E-01 | n/a | n/a | n/a
0.250 | 4.90E-01 | n/a | n/a | n/a
0.208 | 4.20E-01 | n/a | n/a | n/a
0.156 | 4.00E-01 | n/a | n/a | n/a
0.125 | 2.50E-01 | n/a | n/a | n/a

5. Summary

In this paper, we address the problem of control packet collision avoidance, which exists widely in multichannel MAC protocols, and, on the basis of the sparsity of the noncoherent modulation MFSK in the time-frequency domain, we separate the sources from the MFSK mixture caused by packet collisions through the use of the T-F masking method. First, we characterize the sparsity of the MFSK signal with the bandwidth ratio and demonstrate the relation between the bandwidth ratio and PSR, SIR, and WDO by means of simulation experiments. Then we establish the source separation system based on deep networks and the model of the MFSK communication system, taking single-user MFSK communication as the baseline, and compare the BER performance of the proposed system and the baseline under different conditions of SNR and bandwidth ratio. The simulation results show the following: first, the optimal T-F masking can obtain the same BER performance as the baseline at lower bandwidth ratios; second, the proposed system can obtain a BER performance similar to the baseline at higher SNRs; third, the BER performance of the proposed system declines rapidly at lower SNRs, because a lower SNR leads to a greater error in the estimation of the T-F mask. In future work, we will adjust the structure of the deep networks to improve the performance of the proposed system under low SNR and with multipath propagation present in the underwater channel. As a future research topic, it is also worth exploring the use of bioinspired computing models and algorithms for underwater multichannel MAC protocols, such as P systems (inspired by the structure and functioning of cells) [19, 20] and evolutionary computation (motivated by Darwin's theory of evolution) [21, 22].

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This research was supported partially by the Natural Science Basic Research Plan in Shaanxi Province of China (Program no. 2014JQ8355).

References

[1] Z. Li, C. Yang, N. Ding, S. Bogdan, and T. Ge, "Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints," International Journal of Control, Automation and Systems, vol. 10, no. 2, pp. 421–429, 2012.

[2] A. S. K. Annamalai, R. Sutton, C. Yang, P. Culverhouse, and S. Sharma, "Robust adaptive control of an uninhabited surface vehicle," Journal of Intelligent & Robotic Systems, vol. 78, no. 2, pp. 319–338, 2015.

[3] K. Chen, M. Ma, E. Cheng, F. Yuan, and W. Su, "A survey on MAC protocols for underwater wireless sensor networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1433–1447, 2014.

[4] R. Otnes, A. Asterjadhi, P. Casari et al., Underwater Acoustic Networking Techniques, Springer, Berlin, Germany, 2012.

[5] S. Climent, A. Sanchez, J. V. Capella, N. Meratnia, and J. J. Serrano, "Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers," Sensors, vol. 14, no. 1, pp. 795–833, 2014.

[6] I. M. Khalil, Y. Gadallah, M. Hayajneh, and A. Khreishah, "An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks," Sensors, vol. 12, no. 7, pp. 8782–8805, 2012.

[7] F. Bouabdallah and R. Boutaba, "A distributed OFDMA medium access control for underwater acoustic sensors networks," in Proceedings of the IEEE International Conference on Communications (ICC '11), June 2011.

[8] C. C. Hsu, K. F. Lai, C. F. Chou, and K. C. J. Lin, "ST-MAC: spatial-temporal MAC scheduling for underwater sensor networks," in Proceedings of the 28th Conference on Computer Communications (INFOCOM '09), pp. 1827–1835, IEEE, Rio de Janeiro, Brazil, April 2009.

[9] Y.-D. Chen, C.-Y. Lien, S.-W. Chuang, and K.-P. Shih, "DSSS: a TDMA-based MAC protocol with Dynamic Slot Scheduling Strategy for underwater acoustic sensor networks," in Proceedings of the IEEE OCEANS, pp. 1–6, Santander, Spain, June 2011.

[10] K. Kredo II, P. Djukic, and P. Mohapatra, "STUMP: exploiting position diversity in the staggered TDMA underwater MAC protocol," in Proceedings of the IEEE 28th Conference on Computer Communications (INFOCOM '09), pp. 2961–2965, April 2009.

[11] M. Molins and M. Stojanovic, "Slotted FAMA: a MAC protocol for underwater acoustic networks," in Proceedings of the IEEE OCEANS 2006, Boston, Mass, USA, September 2006.

[12] N. Chirdchoo, W.-S. Soh, and K. C. Chua, "RIPT: a receiver-initiated reservation-based protocol for underwater acoustic networks," IEEE Journal on Selected Areas in Communications, vol. 26, no. 9, pp. 1744–1753, 2008.

[13] Z. Zhou, Z. Peng, P. Xie, J.-H. Cui, and Z. Jiang, "Exploring random access and handshaking techniques in underwater wireless acoustic networks," EURASIP Journal on Wireless Communications and Networking, vol. 2013, article 95, 2013.

[14] C.-M. Chao et al., "A multiple rendezvous multichannel MAC protocol for underwater sensor networks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 1, pp. 128–138, 2013.

[15] H. Ramezani and G. Leus, "DMC-MAC: dynamic multi-channel MAC in underwater acoustic networks," in Proceedings of the 21st European Signal Processing Conference (EUSIPCO '13), pp. 1–5, Marrakech, Morocco, September 2013.

[16] Z. Zhou, Z. Peng, J.-H. Cui, and Z. Jiang, "Handling triple hidden terminal problems for multichannel MAC in long-delay underwater sensor networks," IEEE Transactions on Mobile Computing, vol. 11, no. 1, pp. 139–154, 2012.

[17] O. Yilmaz and S. Rickard, "Blind separation of speech mixtures via time-frequency masking," IEEE Transactions on Signal Processing, vol. 52, no. 7, pp. 1830–1847, 2004.

[18] T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Auto-encoder bottleneck features using deep belief networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '12), pp. 4153–4156, Kyoto, Japan, March 2012.

[19] X. Zhang, L. Pan, and A. Păun, "On the universality of axon P systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816–2829, 2015.

[20] X. Zhang, Y. Liu, B. Luo, and L. Pan, "Computational power of tissue P systems for generating control languages," Information Sciences, vol. 278, pp. 285–297, 2014.

[21] X. Zhang, Y. Tian, and Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761–776, 2015.

[22] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, "An efficient approach to nondominated sorting for evolutionary multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 201–213, 2015.


Page 6: Research Article The Control Packet Collision …downloads.hindawi.com/journals/ddns/2016/2437615.pdfResearch Article The Control Packet Collision Avoidance Algorithm for the Underwater

6 Discrete Dynamics in Nature and Society

+1

+1

W(1)

a(1)

b(1)

W(2)

b(2)

u

a1

a2

a3

a4

u1

u2

u3

u

u1

u2

u3

Figure 6 The single layer autoencoder

that of the output layer neuronsW(1) is a matrix containingthe weights of connections between the input layer neuronsand hidden layer neurons Similar toW(1) W(2) contains theweights of connections between the hidden layer neurons andthe output layer neurons b(1) is a vector of the bias valuesadded to the hidden layer neurons and b(2) is the vectorfor the output layer neurons Θ refers to the parameter setcomposed of weights W and bias b The neuron V is ldquoactiverdquowhen the output 119886V of this neuron is close to 1 which meansthat the function sigm(W(1)V u + b(1)V ) asymp 1 For ldquoinactiverdquoneurons however the output is close to 0 which means thefunction sigm(W(1)V u + b(1)V ) asymp 0 where W(1)V denotes theweights of connections between the hidden layer neuron Vand the input layer neurons which is the Vth rowof thematrixW(1) b(1)V is the Vth element of the vector b(1) which is the biasvalue added to the hidden layer neuron VThe superscript 119894 ofb(119894)W(119894) and a(119894) denotes the 119894th layer of the deep network

With the sparsity constraint most of the neurons areassumed to be inactive More specifically 119886V = sigm(W(1)V u +b(1)V ) denotes the activation value of the hidden layer unit V inthe autoencoder Generalizing this for the unit 119894 in the hiddenlayer the average activation V of unit Vwith the input sampleu(119898)

can be defined as follows

V =1

119872

119872

sum

119898=1

119886V (u(119898)) (14)

where119872 is the number of training samples and u(119898)

is the119898th input training sample Next the sparsity constraint V =120588 is enforced where 120588 is the parameter preset before trainingtypically small such as 120588 = 3 times 10minus3 To achieve the sparsityconstraint we use the penalty term in the cost function ofsparse autoencoders as follows

119881

sum

V=1KL (120588 V) =

119881

sum

V=1120588 log

120588

V+ (1 minus 120588) log

1 minus 120588

1 minus V (15)

The penalty term is essentially a Kullback-Leibler (KL) divergence. Now the cost function J_sparse(W, b) of the sparse autoencoder can be written as follows:

$$J_{\text{sparse}}(\mathbf{W}, \mathbf{b}) = \frac{1}{2}\left\|\mathbf{u} - \hat{\mathbf{u}}\right\|^2 + \beta \sum_{v=1}^{V} \mathrm{KL}\left(\rho \,\|\, \hat{\rho}_v\right), \quad (16)$$
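The cost in (16), with the average activation of (14) and the penalty of (15), can be sketched in NumPy as follows; the batch shapes and the per-sample averaging of the reconstruction term are implementation assumptions, not prescribed by the paper.

```python
import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_cost(W1, b1, W2, b2, U, rho=3e-3, beta=3.0):
    """Cost of a single-layer sparse autoencoder on M column samples U.

    Reconstruction error (1/2)||u - u_hat||^2 averaged over samples,
    plus beta times the KL sparsity penalty of Eq. (15).
    """
    A1 = sigm(W1 @ U + b1[:, None])        # hidden activations, V x M
    U_hat = sigm(W2 @ A1 + b2[:, None])    # reconstruction of the inputs
    M = U.shape[1]
    recon = 0.5 * np.sum((U - U_hat) ** 2) / M
    rho_hat = A1.mean(axis=1)              # average activation, Eq. (14)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + beta * kl
```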

where β controls the weight of the penalty term. In our proposed system, the cost function J_sparse(W, b) is minimized using the limited-memory BFGS (L-BFGS) optimization algorithm, and the single-layer sparse autoencoder is trained by using the backpropagation algorithm.

After finishing the training of a single-layer sparse autoencoder, we discard the output layer neurons with the associated weights W^(2) and bias b^(2) and only keep the input layer parameters W^(1) and b^(1). The output of the hidden layer a^(1) is used as the input samples of the next single-layer sparse autoencoder. Repeating these steps, that is, stacking the autoencoders, we can build a deep autoencoder from two or more single-layer sparse autoencoders. In our proposed system, we use two single-layer autoencoders to build a deep autoencoder. The stacking procedure is shown in the right part of Figure 7.

Many studies on deep autoencoders show that, with a deep architecture (more than one hidden layer), a deep autoencoder can build up more complex representations from simple low-level features, capture the underlying regularities of the data, and improve the quality of recognition. That is why we use a deep autoencoder in our proposed system.

There are, however, several difficulties associated with obtaining the optimized weights of deep autoencoders. One challenge is the presence of local optima. In particular, training a neural network using supervised learning involves solving a highly nonconvex optimization problem, that is, finding a set of network parameters (W, b) to minimize the training error ||u - û||². In the deep autoencoder, the optimization problem turns out to be rife with bad local optima, and training with gradient descent no longer works well. Another challenge is the "diffusion of gradients": when using backpropagation to compute the derivatives, the gradients that are propagated backwards (from the output layer to the earlier layers of the network) rapidly diminish in magnitude as the depth of the network increases. As a result, the derivative of the overall cost with respect to the weights in the earlier layers is very small, so when using gradient descent the weights of the earlier layers change slowly. However, if the initial parameters are already close to the optimized values, gradient descent works well. That is the idea of "greedy layer-wise" training, where the layers of the network are trained one by one, as shown in the left part of Figure 7.

First, we use the backpropagation algorithm to train the first sparse autoencoder (including only one hidden layer), with the data labels being the inputs themselves. In our proposed system, the input data is the 4K-dimensional feature vector u. As a result of the first-layer training, we get a set of network parameters Θ^(1) (i.e., the parameters W^(1) and b^(1)) of the first-layer sparse autoencoder and a new dataset a^(1) (i.e., Features I shown in Figure 9), which is the output of the hidden

Discrete Dynamics in Nature and Society

Figure 7: The illustration of greedy layer-wise training and stacking. (a) is the procedure of greedy layer-wise training and (b) is the procedure of stacking of sparse autoencoders.

layer neurons (the activity state of the hidden layer neurons) obtained by using this parameter set. Next, we use a^(1) as the inputs to the second sparse autoencoder. After the second autoencoder is trained, we get the network parameters Θ^(2) of the second sparse autoencoder and the new dataset a^(2) (i.e., Features II in Figure 9) for the training of the next single-layer neural network. We then repeat the above steps until the last layer (i.e., the softmax regression in our proposed system). Finally, we obtain a pretrained deep autoencoder by stacking all the autoencoders and use Θ^(1) and Θ^(2) as the initial parameters for this pretrained deep autoencoder. Features II are the high-level features and can be used as the training dataset for the softmax regression discussed next.
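The greedy layer-wise pretraining and stacking described above can be sketched as follows; `train_layer` stands in for the full L-BFGS sparse autoencoder training and is supplied by the caller, so this is a structural sketch under stated assumptions rather than the paper's implementation.

```python
import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(W, b, X):
    """One encoder layer: hidden activations for column samples X."""
    return sigm(W @ X + b[:, None])

def greedy_pretrain(U, sizes, train_layer):
    """Greedy layer-wise pretraining.

    train_layer(X, n_hidden) trains one sparse autoencoder on dataset X
    and returns its encoder parameters (W, b). Each layer is trained on
    the hidden activations of the previous layer.
    """
    params, X = [], U
    for n_hidden in sizes:
        W, b = train_layer(X, n_hidden)   # train current layer on previous features
        params.append((W, b))
        X = encode(W, b, X)               # features for the next layer
    return params, X                      # final X plays the role of "Features II"
```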

3.3.2. Softmax Classifier. In our proposed system, the softmax classifier based on softmax regression is used to estimate the probability of the current input T-F point u_(n,m) belonging to the orientation index j, with the high-level features a^(2)_(n,m) extracted by the deep autoencoder as inputs.

The softmax regression generalizes the classical logistic regression (for binary classification) to multiclass classification problems. Different from the logistic regression, the data label of the softmax regression is an integer value between 1 and J, where J is the number of data classes. More specifically, in our proposed system, for J classes, an M-sample dataset was used to train the nth deep network:

$$\left\{ \left(\mathbf{a}^{(2)}_{(n,1)}, g_{(n,1)}\right), \ldots, \left(\mathbf{a}^{(2)}_{(n,m)}, g_{(n,m)}\right), \ldots, \left(\mathbf{a}^{(2)}_{(n,M)}, g_{(n,M)}\right) \right\}, \quad (17)$$

where g_(n,m) is the label of the mth sample a^(2)_(n,m) and will be set to j if a^(2)_(n,m) belongs to class j. The architecture of the softmax

classifier is shown in Figure 8. Given an input a^(2)_(n,m), the output

$$p\left(g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)}\right) = \frac{e^{\boldsymbol{\theta}_j^{T}\mathbf{a}^{(2)}_{(n,m)}}}{\sum_{j'=1}^{J} e^{\boldsymbol{\theta}_{j'}^{T}\mathbf{a}^{(2)}_{(n,m)}}}$$

of the output layer neuron j gives the probability of the input a^(2)_(n,m) belonging to the class j. Similar to the logistic regression, the output of the softmax regression can be written as follows:

$$\mathbf{h}_{\Theta}\left(\mathbf{a}^{(2)}_{(n,m)}\right) =
\begin{bmatrix}
p\left(g_{(n,m)} = 1 \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta\right) \\
\vdots \\
p\left(g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta\right) \\
\vdots \\
p\left(g_{(n,m)} = J \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta\right)
\end{bmatrix}
= \frac{1}{\sum_{j=1}^{J} e^{\Theta_j^{T}\mathbf{a}^{(2)}_{(n,m)}}}
\begin{bmatrix}
e^{\Theta_1^{T}\mathbf{a}^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_j^{T}\mathbf{a}^{(2)}_{(n,m)}} \\
\vdots \\
e^{\Theta_J^{T}\mathbf{a}^{(2)}_{(n,m)}}
\end{bmatrix}. \quad (18)$$
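Equation (18) transcribes directly into NumPy for a single input vector; the max-subtraction below is a standard numerical-stability trick that is not part of (18) but leaves the probabilities unchanged.

```python
import numpy as np

def softmax_output(Theta, a):
    """h_Theta(a) of Eq. (18): class probabilities for one input vector a.

    Theta is J x V; row j holds the parameters theta_j of output neuron j.
    """
    z = Theta @ a
    z = z - z.max()          # stability shift; cancels in the ratio of (18)
    e = np.exp(z)
    return e / e.sum()       # probabilities summing to 1
```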

Figure 8: A classical architecture of the softmax classifier.

Here Θ ∈ R^(J×V) is the network parameter of the softmax classifier (V is the dimension of the input a^(2)_(n,m) and J is the number of classes within the input data). Θ_j^T represents the transpose of the jth row of Θ and contains the parameters of the connections between the neuron j of the output layer and the input sample a^(2)_(n,m).

The softmax regression and logistic regression have the same minimization objective. The cost function J_softmax(Θ) of the softmax classifier can be generalized from the logistic regression cost function and written as follows:

$$J_{\text{softmax}}(\Theta) = -\frac{1}{M}\left[\sum_{m=1}^{M}\sum_{j=1}^{J} 1\left\{g_{(n,m)} = j\right\} \log p\left(g_{(n,m)} = j \mid \mathbf{a}^{(2)}_{(n,m)}; \Theta\right)\right] + \frac{\lambda}{2}\sum_{j=1}^{J}\sum_{v=1}^{V} \Theta_{jv}^{2}, \quad (19)$$

where 1{·} is the indicator function, so that 1{a true statement} = 1 and 1{a false statement} = 0. The element Θ_jv of Θ contains the parameter of the connection between the neuron j of the output layer and the neuron v of the input layer. The term (λ/2)∑_{j=1}^{J}∑_{v=1}^{V} Θ_jv² is a regularization term that enforces the cost function J_softmax(Θ) to be strictly convex, where λ is a weight decay parameter predefined by the user.

In our proposed system, we extend the label g_(n,m) of the training dataset to a vector g_(n,m) ∈ R^J and set the jth element g_j of g_(n,m) to 1 and the other elements to 0 when the input sample a^(2)_(n,m) belongs to class j, such as g_(n,m) = [0, ..., 0, 1, 0, ..., 0]^T.

The cost function J_softmax(Θ) can then be written as follows by using vectorization:

$$J_{\text{softmax}}(\Theta) = -\frac{1}{M}\left[\sum_{m=1}^{M} \left(\mathbf{g}_{(n,m)}\right)^{T} \log \mathbf{h}_{\Theta}\left(\mathbf{a}^{(2)}_{(n,m)}\right)\right] + \frac{\lambda}{2}\sum_{j=1}^{J}\sum_{v=1}^{V} \Theta_{jv}^{2}, \quad (20)$$

where the logarithm is applied elementwise.

The softmax classifier can be trained by using the L-BFGS algorithm on a dataset in order to find an optimal parameter set Θ minimizing the cost function J_softmax(Θ). In our proposed system, the dataset for softmax classifier training is composed of two parts. The first part is the input sample a^(2)_(n,m) (Features II) calculated from the last hidden layer of the deep autoencoder. The second part is the data label g_(n,m) ∈ R^J, where the jth element g_j of g_(n,m) is set to 1 when the input sample belongs to the source located in the range of DOAs of index j.

We stack the softmax classifier and the deep autoencoder together after the training is completed, as shown in the left part of Figure 9. Finally, we use the training dataset and the L-BFGS algorithm to fine-tune the deep network with the initial parameters W^(1), b^(1), W^(2), b^(2), W^(3), and b^(3) obtained from the sparse autoencoder and softmax classifier training. The training phase of the sparse autoencoders and softmax classifier is called the pretraining phase, and the stacking/training of the overall network, that is, the deep network, is called the fine-tuning phase. In the pretraining phase, the shallow neural networks, that is, the sparse autoencoders and the softmax classifier, are trained individually, using the output of the current layer as the input for the next layer. In the fine-tuning phase, we use the L-BFGS algorithm (i.e., a gradient-based method) to minimize the difference between the output of the deep network and the labels of the training dataset. The gradient-based optimization works well because the initial parameters obtained from the pretraining phase already encode a significant amount of "prior" information about the input data through unsupervised learning.

4. Experiments

In this section, we describe the communication system used in the simulation and show the separation results under different SNRs and data rates. Bit error rate (BER) is a highly significant performance index for a pragmatic communication system; therefore, we evaluate the separation quality of the pragmatic system by BER and regard the BER performance of single-user MFSK communication under the same conditions as the baseline.

In view of the communication among multiple UUVs, there is no need to take the multipath propagation of channels into consideration because of the relatively close distance between nodes. We thus adopt the additive white Gaussian noise (AWGN) channel model and the Monte Carlo method to calculate the system BER in the simulation, as in the simulation system shown in Figure 10, and we also take the BER of

Figure 9: The illustration of stacking the deep autoencoder and softmax classifier.

Figure 10: The simulation system used in the experiment.

the single-user communication as the baseline. The sampling rate of our simulation is 16 kHz, and the order of the modulation is 16 without any error-correction coding. The bandwidth of the MFSK modulation/demodulation is 640 Hz, and the bandwidth ratio varies from 0.625 to 0.125, corresponding to bit rates from 400 bit/s to 80 bit/s.
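Reading the quoted endpoints together (0.625 at 400 bit/s and 0.125 at 80 bit/s over the 640 Hz band), the bandwidth ratio appears to equal the bit rate divided by the MFSK bandwidth; this is an inferred relation, checked numerically below.

```python
# Inferred definition of the bandwidth ratio used in the experiments:
# bit rate divided by the 640 Hz MFSK bandwidth. The constant name and
# function are illustrative, not from the paper.
BANDWIDTH_HZ = 640.0

def bandwidth_ratio(bit_rate_bps):
    """Bandwidth ratio for a given bit rate over the 640 Hz MFSK band."""
    return bit_rate_bps / BANDWIDTH_HZ
```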

First, suppose we can obtain the optimal T-F mask, that is, use (6) to calculate the T-F mask; we then obtain the simulation result of BER as a function of SNR and bandwidth ratio and compare it with the baseline.
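The Monte Carlo BER procedure can be illustrated with a simplified orthogonal-signaling model of the 16-ary link over AWGN; the vector channel model and the bit-error conversion factor are textbook assumptions, not the paper's exact passband MFSK simulator.

```python
import numpy as np

def mfsk_mc_ber(M_order=16, snr_db=5.0, n_sym=20000, seed=0):
    """Monte Carlo symbol/bit error estimate for orthogonal M-ary signaling
    over an AWGN channel (an idealized stand-in for the 16-FSK link).

    Each symbol is a unit vector on one of M orthogonal axes; the detector
    picks the axis with the largest correlation.
    """
    rng = np.random.default_rng(seed)
    es_n0 = 10.0 ** (snr_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * es_n0))     # noise std per dimension
    tx = rng.integers(0, M_order, n_sym)     # transmitted symbol indices
    y = sigma * rng.normal(size=(n_sym, M_order))
    y[np.arange(n_sym), tx] += 1.0           # add the unit-energy tone
    rx = y.argmax(axis=1)                    # maximum-correlation detector
    ser = np.mean(rx != tx)
    # standard orthogonal-signaling conversion: Pb = Ps * (M/2) / (M - 1)
    ber = ser * (M_order / 2) / (M_order - 1)
    return ser, ber
```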

As shown in Tables 1 and 2, it can be seen from the simulation result that a BER similar to the baseline can be attained by the T-F mask method when the bandwidth ratio is below 0.417. Furthermore, the BER with the T-F mask can be even lower than that of the baseline under the same conditions when the SNR is very low, that is, SNR = -20 dB. The reason behind this phenomenon is that, as can be seen from (6), some frequency points are set to zero by the T-F mask, which objectively raises the SNR of the signal and yields a lower BER. In a practical system, however, accurately estimating the T-F mask under such low SNR is a highly challenging task, which is also verified in the later simulation.

According to Section 3, we estimate the T-F mask by using the orientation information of the nodes, that is, the DOA of the MFSK signal received by the nodes. Therefore, we introduce a time delay τ to simulate a pairwise-hydrophone receiver in the simulation and divide the space into 37 blocks along the horizontal direction, corresponding to -90° to +90° horizontally with a step size of 5°. We estimate the T-F mask with the source separation system described in Section 3 and compare it with the BER performance of the baseline under different SNRs and bandwidth ratios; the simulation result is shown in Table 3.
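The 37-block discretization of the horizontal space maps directly to the class indices of the softmax classifier; a small sketch follows, with the function name and nearest-block mapping as illustrative assumptions.

```python
import numpy as np

# Horizontal space divided into 37 DOA blocks from -90 to +90 degrees
# in 5-degree steps; each block is one softmax class index j.
DOA_GRID = np.arange(-90, 91, 5)          # 37 candidate directions

def doa_to_class(doa_deg):
    """Map a DOA estimate (degrees) to the index of the nearest block."""
    return int(np.argmin(np.abs(DOA_GRID - doa_deg)))
```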

Table 1: The BER of the simulation system with the optimum T-F mask.

Bandwidth ratio  SNR = 30 dB  SNR = 20 dB  SNR = 10 dB  SNR = 5 dB   SNR = 0 dB
0.625            5.30E-02     5.30E-02     5.36E-02     5.50E-02     5.95E-02
0.500            2.96E-02     2.95E-02     3.03E-02     3.16E-02     3.56E-02
0.417            1.40E-02     1.38E-02     1.38E-02     1.39E-02     1.39E-02
0.313            9.00E-06     1.60E-05     1.30E-05     2.00E-05     4.50E-05
0.250            3.00E-06     4.00E-06     1.00E-06     4.00E-06     4.00E-06
0.208            0.00E+00     1.00E-06     1.00E-06     0.00E+00     0.00E+00
0.156            0.00E+00     0.00E+00     0.00E+00     0.00E+00     0.00E+00
0.125            0.00E+00     0.00E+00     0.00E+00     0.00E+00     0.00E+00

Bandwidth ratio  SNR = -5 dB  SNR = -10 dB  SNR = -15 dB  SNR = -20 dB
0.625            7.64E-02     1.47E-01      3.40E-01      5.62E-01
0.500            4.26E-02     6.82E-02      1.98E-01      4.24E-01
0.417            1.48E-02     2.67E-02      1.10E-01      3.09E-01
0.313            1.25E-04     3.08E-03      3.23E-02      1.57E-01
0.250            9.00E-06     1.12E-04      7.13E-03      7.54E-02
0.208            2.00E-06     2.19E-04      4.69E-03      4.62E-02
0.156            0.00E+00     2.10E-05      1.13E-03      1.46E-02
0.125            0.00E+00     0.00E+00      2.00E-05      4.02E-03

Table 2: The BER of the baseline.

Bandwidth ratio  SNR = 30 dB  SNR = 20 dB  SNR = 10 dB  SNR = 5 dB   SNR = 0 dB
0.625            0.00E+00     0.00E+00     0.00E+00     0.00E+00     0.00E+00
0.500            0.00E+00     0.00E+00     0.00E+00     0.00E+00     0.00E+00
0.417            0.00E+00     0.00E+00     0.00E+00     0.00E+00     0.00E+00
0.313            0.00E+00     0.00E+00     0.00E+00     0.00E+00     0.00E+00
0.250            0.00E+00     0.00E+00     0.00E+00     0.00E+00     0.00E+00
0.208            0.00E+00     0.00E+00     0.00E+00     0.00E+00     0.00E+00
0.156            0.00E+00     0.00E+00     0.00E+00     0.00E+00     0.00E+00
0.125            0.00E+00     0.00E+00     0.00E+00     0.00E+00     0.00E+00

Bandwidth ratio  SNR = -5 dB  SNR = -10 dB  SNR = -15 dB  SNR = -20 dB
0.625            4.43E-04     5.08E-02      3.62E-01      7.26E-01
0.500            1.00E-06     7.91E-03      2.16E-01      6.47E-01
0.417            0.00E+00     7.13E-04      1.19E-01      5.70E-01
0.313            0.00E+00     0.00E+00      3.42E-02      4.35E-01
0.250            0.00E+00     0.00E+00      9.65E-03      3.28E-01
0.208            0.00E+00     0.00E+00      3.40E-03      2.53E-01
0.156            0.00E+00     0.00E+00      2.92E-04      1.41E-01
0.125            0.00E+00     0.00E+00      1.90E-05      7.23E-02

As shown in Table 3, the BER performance of the proposed system is much the same as the baseline when SNR > 20 dB and the bandwidth ratio is below 0.417, which is consistent with the result obtained when using the optimal T-F mask. When SNR < 20 dB, however, the BER performance of the proposed system begins to decline, because the lower SNR of the signals produces a large error in the T-F mask estimation, which degrades the system performance.

5. Summary

In this paper, we address the problem of control packet collision avoidance, which exists widely in multichannel MAC protocols, and, on the basis of the sparsity of the noncoherent modulation MFSK in the time-frequency domain, separate the sources from the MFSK mixture caused by packet collisions through the use of the T-F masking method. First, we characterize the sparsity of the MFSK signal in terms of the bandwidth ratio and demonstrate the relation between the bandwidth ratio and the PSR, SIR, and WDO by means of simulation experiments. Then, we establish the source separation system based on deep networks and the model of the MFSK communication system, taking single-user MFSK communication as the baseline to compare the BER performance of the proposed system and the baseline under different conditions of SNR and bandwidth ratio. The simulation results show that, first, the optimal T-F masking could obtain

Table 3: The BER of the proposed system.

Bandwidth ratio  SNR = 30 dB  SNR = 20 dB  SNR = 10 dB  SNR = 5 dB   SNR = 0 dB
0.625            5.80E-02     7.30E-02     6.36E-02     6.30E-02     3.59E-01
0.500            3.26E-02     3.45E-02     3.93E-02     4.16E-02     3.36E-01
0.417            1.46E-02     1.78E-02     1.48E-02     4.99E-02     2.14E-01
0.313            1.00E-05     6.89E-04     9.50E-03     3.00E-02     1.90E-01
0.250            7.00E-06     2.10E-04     6.00E-03     1.00E-02     1.60E-01
0.208            1.00E-06     9.00E-05     3.00E-03     8.00E-03     1.30E-01
0.156            0.00E+00     3.00E-05     9.80E-04     2.00E-03     1.30E-01
0.125            0.00E+00     7.00E-06     1.00E-05     9.00E-04     1.20E-01

Bandwidth ratio  SNR = -5 dB  SNR = -10 dB  SNR = -15 dB  SNR = -20 dB
0.625            4.94E-01     —             —             —
0.500            4.26E-01     —             —             —
0.417            4.48E-01     —             —             —
0.313            4.25E-01     —             —             —
0.250            4.90E-01     —             —             —
0.208            4.20E-01     —             —             —
0.156            4.00E-01     —             —             —
0.125            2.50E-01     —             —             —

the same BER performance as the baseline under a lower bandwidth ratio; second, the proposed system could obtain BER performance similar to the baseline under higher SNR; third, the BER performance of the proposed system declines rapidly under lower SNR, because the lower SNR leads to a greater error in the estimation of the T-F mask. In future work, we will adjust the structure of the deep networks to improve the performance of the proposed system under low SNR and with multipath propagation present in the underwater channel. As a future research topic, it is also worth exploring whether bioinspired computing models and algorithms can be used for underwater multichannel MAC protocols, such as P systems (inspired by the structure and functioning of cells) [19, 20] and evolutionary computation (motivated by Darwin's theory of evolution) [21, 22].

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This research was supported partially by the Natural Science Basic Research Plan in Shaanxi Province of China (Program no. 2014JQ8355).

References

[1] Z. Li, C. Yang, N. Ding, S. Bogdan, and T. Ge, "Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints," International Journal of Control, Automation and Systems, vol. 10, no. 2, pp. 421-429, 2012.

[2] A. S. K. Annamalai, R. Sutton, C. Yang, P. Culverhouse, and S. Sharma, "Robust adaptive control of an uninhabited surface vehicle," Journal of Intelligent & Robotic Systems, vol. 78, no. 2, pp. 319-338, 2015.

[3] K. Chen, M. Ma, E. Cheng, F. Yuan, and W. Su, "A survey on MAC protocols for underwater wireless sensor networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1433-1447, 2014.

[4] R. Otnes, A. Asterjadhi, P. Casari et al., Underwater Acoustic Networking Techniques, Springer, Berlin, Germany, 2012.

[5] S. Climent, A. Sanchez, J. V. Capella, N. Meratnia, and J. J. Serrano, "Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers," Sensors, vol. 14, no. 1, pp. 795-833, 2014.

[6] I. M. Khalil, Y. Gadallah, M. Hayajneh, and A. Khreishah, "An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks," Sensors, vol. 12, no. 7, pp. 8782-8805, 2012.

[7] F. Bouabdallah and R. Boutaba, "A distributed OFDMA medium access control for underwater acoustic sensors networks," in Proceedings of the IEEE International Conference on Communications (ICC '11), June 2011.

[8] C.-C. Hsu, K.-F. Lai, C.-F. Chou, and K. C.-J. Lin, "ST-MAC: spatial-temporal MAC scheduling for underwater sensor networks," in Proceedings of the 28th Conference on Computer Communications (INFOCOM '09), pp. 1827-1835, IEEE, Rio de Janeiro, Brazil, April 2009.

[9] Y.-D. Chen, C.-Y. Lien, S.-W. Chuang, and K.-P. Shih, "DSSS: a TDMA-based MAC protocol with dynamic slot scheduling strategy for underwater acoustic sensor networks," in Proceedings of the IEEE OCEANS, pp. 1-6, Santander, Spain, June 2011.

[10] K. Kredo II, P. Djukic, and P. Mohapatra, "STUMP: exploiting position diversity in the staggered TDMA underwater MAC protocol," in Proceedings of the IEEE 28th Conference on Computer Communications (INFOCOM '09), pp. 2961-2965, April 2009.

[11] M. Molins and M. Stojanovic, "Slotted FAMA: a MAC protocol for underwater acoustic networks," in Proceedings of the IEEE OCEANS 2006, Boston, Mass, USA, September 2006.

[12] N. Chirdchoo, W.-S. Soh, and K. C. Chua, "RIPT: a receiver-initiated reservation-based protocol for underwater acoustic networks," IEEE Journal on Selected Areas in Communications, vol. 26, no. 9, pp. 1744-1753, 2008.

[13] Z. Zhou, Z. Peng, P. Xie, J.-H. Cui, and Z. Jiang, "Exploring random access and handshaking techniques in underwater wireless acoustic networks," EURASIP Journal on Wireless Communications and Networking, vol. 2013, article 95, 2013.

[14] C.-M. Chao et al., "A multiple rendezvous multichannel MAC protocol for underwater sensor networks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 1, pp. 128-138, 2013.

[15] H. Ramezani and G. Leus, "DMC-MAC: dynamic multichannel MAC in underwater acoustic networks," in Proceedings of the 21st European Signal Processing Conference (EUSIPCO '13), pp. 1-5, Marrakech, Morocco, September 2013.

[16] Z. Zhou, Z. Peng, J.-H. Cui, and Z. Jiang, "Handling triple hidden terminal problems for multichannel MAC in long-delay underwater sensor networks," IEEE Transactions on Mobile Computing, vol. 11, no. 1, pp. 139-154, 2012.

[17] O. Yilmaz and S. Rickard, "Blind separation of speech mixtures via time-frequency masking," IEEE Transactions on Signal Processing, vol. 52, no. 7, pp. 1830-1847, 2004.

[18] T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Auto-encoder bottleneck features using deep belief networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '12), pp. 4153-4156, Kyoto, Japan, March 2012.

[19] X. Zhang, L. Pan, and A. Paun, "On the universality of axon P systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816-2829, 2015.

[20] X. Zhang, Y. Liu, B. Luo, and L. Pan, "Computational power of tissue P systems for generating control languages," Information Sciences, vol. 278, pp. 285-297, 2014.

[21] X. Zhang, Y. Tian, and Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761-776, 2015.

[22] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, "An efficient approach to nondominated sorting for evolutionary multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 201-213, 2015.


Discrete Dynamics in Nature and Society 7

Copy

Copy

Copy

Copy

Copy

Copy

Input

Input

Output

Output

Features I

Features II

(Features I)

autoencoderSecond sparse

autoencoderFirst sparse

autoencoderDeep sparse

middot middot middot

middot middot middot middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot middot middot middot

middot middot middot

middot middot middot

b(1)

b(2)

+1

+1a(1)Vminus1a(1)Vminus2 a(1)V

b(1)

u1u2+1

+1

u(nm) u(nm)

W(1) W(1)

u4k u4kminus1 u1 u2 u4ku4kminus1

a(1)1a(1)2a(1)3 a(1)1 a(1)2 a(1)3

a(2)Vminus1a(2)Vminus2 a(2)Va(2)1 a(2)2 a(2)3

a(2)Vminus1 a(2)Vminus2a(2)V a(2)1a(1)2a(1)3

a(1)Vminus1 a(1)Vminus2a(1)V

+1

+1

a(1)1a(1)2a(1)3a(1)Vminus1 a(1)Vminus2a(1)V

u4k u4kminus1 u2 u1 u(nm)

a(1)(nm)

a(1)(nm)

a(1)(nm)

W(2)

a(2)(nm)

W(2)

a(2)(nm)

b(2)

a(1)V a(1)Vminus1a(1)Vminus2 a(1)3 a(1)2 a(1)1

u(m nK) u(m nK)u(m (n minus 1)K + 1) u(m (n minus 1)K + 1)

(a) (b)

Figure 7The illustration of greedy layer-wise training and stacking (a) is the procedure of greedy layer-wise training and (b) is the procedureof stacking of sparse autoencoders

layer neurons (the activity state of the hidden layer neurons)by using this parameter set Next we use a(1) as the inputs tothe second sparse autoencoder After the second autoencoderis trained we can get the network parameters Θ(2) of thesecond sparse autoencoder and the new dataset a(1) (ie thefeature II in Figure 9) for the training of next single layerneural network We then repeat the above steps until the lastlayer (ie the softmax regression in our proposed system)Finally we obtain a pretrained deep autoencoder by stackingall the autoencoders and use Θ(1) and Θ(2) as the initializedparameters for this pretrained deep autoencoder The featureII is the high-level feature and can be used as the trainingdataset for softmax regression discussed next

332 SoftmaxClassifier In our proposed system the softmaxclassifier based on softmax regression was used to estimatethe probabilities of the current input T-F point u

(119899119898)belong-

ing to the orientation index 119895 by the deep autoencoder withthe extracted high-level features a(2)

(119899119898)as inputs

The softmax regression generalizes the classical logisticregression (for binary classification) to multiclass classifica-tion problems Different from the logistic regression the datalabel of the softmax regression is an integer value between 1and 119869 here 119869 is the number of data classes More specificallyin our proposed system for 119869 classes119872 samples dataset wasused to train the 119899th deep network

(a(2)(1198991) 119892(1198991)) (a(2)

(119899119898) 119892(119899119898)) (a(2)

(119873119872) 119892(119873119872)

) (17)

where119892(119899119898)

is the label of the119898th sample a(2)(119899119898)

andwill be setto 119895 if a(2)

(119899119898)belongs to class 119895The architecture of the softmax

classifier is shown in Figure 8Given an input a(2)

(119899119898) the output 119901(119892

(119899119898)= 119895 |

a(2)(119899119898)) = 119890120579119879

119895a(2)(119899119898)sum

119869

119895=1119890120579119879

119895a(2)(119899119898) of the layer neuron 119895 gives the

probability of the input a(2)(119899119898)

belonging to the class 119895 Similarto the logistic regression the output of the softmax regressioncan be written as follows

hΘ (a(2)

(119899119898)) =

[[[[[[[[[[[

[

119901 (119892(119899119898)

= 1 | a(2)(119899119898)Θ)

119901 (119892(119899119898)

= 119895 | a(2)(119899119898)Θ)

119901 (119892(119899119898)

= 119869 | a(2)(119899119898)Θ)

]]]]]]]]]]]

]

=1

sum119869

119895=1119890Θ119895a(2)(119899119898)

[[[[[[[[[[[[

[

119890Θ119879

1a(2)(119899119898)

119890Θ119879

119895a(2)(119899119898)

119890Θ119879

119869a(2)(119899119898)

]]]]]]]]]]]]

]

(18)

8 Discrete Dynamics in Nature and Society

Input

Output

a(2)Vminus1

a(2)V

b(3) +1

W(3)

a(2)1

a(2)2

a(2)(nm)

p(g(nm) = J | a(2)(nm)

)

p(g(nm) = 1 | a(2)(nm)

)

p(g(nm) = j | a(2)(nm)

)

Figure 8 A calssical architecture of the softmax classifier

Here Θ isin R119869times119881 is the network parameter of the softmax(119881 is the dimension of the input a(2)

(119899119898) and 119869 is the number of

the classes within the input data)Θ119879119895represents the transpose

of the 119895th row of Θ and contains the parameters of theconnections between the neuron 119895 of the output layer and theinput sample a(2)

(119899119898)

The softmax regression and the logistic regression have the same minimization objective. The cost function $J_{\text{softmax}}(\Theta)$ of the softmax classifier can be generalized from the logistic regression cost function and written as follows:

$$J_{\text{softmax}}(\Theta) = -\frac{1}{M}\left[\sum_{m=1}^{M}\sum_{j=1}^{J} 1\{g(n,m)=j\} \cdot \log p\bigl(g(n,m)=j \mid \mathbf{a}^{(2)}(n,m);\Theta\bigr)\right] + \frac{\lambda}{2}\sum_{j=1}^{J}\sum_{v=1}^{V}\bigl[\Theta_{jv}\bigr]^2 \quad (19)$$

where $1\{\cdot\}$ is the indicator function, so that $1\{\text{a true statement}\} = 1$ and $1\{\text{a false statement}\} = 0$. The element $\Theta_{jv}$ of $\Theta$ contains the parameter of the connection between neuron $j$ of the output layer and neuron $v$ of the input layer. The term $(\lambda/2)\sum_{j=1}^{J}\sum_{v=1}^{V}(\Theta_{jv})^2$ is a regularization term that forces the cost function $J_{\text{softmax}}(\Theta)$ to be strictly convex, where $\lambda$ is a weight decay parameter predefined by the user.

In our proposed system we extend the label $g(n,m)$ of the training dataset to a vector $\mathbf{g}(n,m) \in \mathbb{R}^{J}$ and set the $j$th element $g_j$ of $\mathbf{g}(n,m)$ to 1 and all other elements to 0 when the input sample $\mathbf{a}^{(2)}(n,m)$ belongs to class $j$, for example, $\mathbf{g}(n,m) = [0, \ldots, 0, 1, 0, \ldots, 0]^T$.

The cost function $J_{\text{softmax}}(\Theta)$ can then be written in vectorized form as follows:

$$J_{\text{softmax}}(\Theta) = -\frac{1}{M}\left[\sum_{m=1}^{M}\bigl(\mathbf{g}(n,m)\bigr)^T \log \mathbf{h}_{\Theta}\bigl(\mathbf{a}^{(2)}(n,m)\bigr)\right] + \frac{\lambda}{2}\sum_{j=1}^{J}\sum_{v=1}^{V}\bigl(\Theta_{jv}\bigr)^2 \quad (20)$$

where the logarithm is applied elementwise.
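As a sanity check on the vectorized cost, here is a hedged NumPy sketch under the paper's conventions ($M$ samples, $J$ classes, one-hot labels; the names are ours): with all parameters zero every class gets probability $1/J$, so the unregularized cost reduces to $\log J$.

```python
import numpy as np

def softmax_cost(Theta, A, G, lam):
    """Vectorized softmax cost in the spirit of (20).

    Theta : (J, V) parameters;  A : (M, V) inputs;  G : (M, J) one-hot labels.
    """
    Z = A @ Theta.T                         # (M, J) class scores
    Z = Z - Z.max(axis=1, keepdims=True)    # stabilize the exponentials
    E = np.exp(Z)
    H = E / E.sum(axis=1, keepdims=True)    # h_Theta for every sample m
    M = A.shape[0]
    data_term = -np.sum(G * np.log(H)) / M  # sum over m of g^T log h_Theta
    reg_term = 0.5 * lam * np.sum(Theta ** 2)
    return data_term + reg_term
```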

The softmax classifier can be trained with the L-BFGS algorithm on a dataset in order to find an optimal parameter set $\Theta$ that minimizes the cost function $J_{\text{softmax}}(\Theta)$. In our proposed system the dataset for softmax classifier training is composed of two parts. The first part is the input sample $\mathbf{a}^{(2)}(n,m)$ (feature II) calculated from the last hidden layer of the deep autoencoder. The second part is the data label $\mathbf{g}(n,m) \in \mathbb{R}^{J}$, where the $j$th element $g_j$ of $\mathbf{g}(n,m)$ is set to 1 when the input sample belongs to the source located in the range of DOAs of index $j$.

We stack the softmax classifier and the deep autoencoder together after the training is completed, as shown on the left part of Figure 9. Finally, we use the training dataset and the L-BFGS algorithm to fine-tune the deep network with the initialized parameters $\mathbf{W}^{(1)}$, $\mathbf{b}^{(1)}$, $\mathbf{W}^{(2)}$, $\mathbf{b}^{(2)}$, $\mathbf{W}^{(3)}$, and $\mathbf{b}^{(3)}$ obtained from the sparse autoencoder and softmax classifier training. The training phase of the sparse autoencoders and the softmax classifier is called the pretraining phase, and the training of the stacked overall network, that is, the deep network, is called the fine-tuning phase. In the pretraining phase the shallow neural networks, that is, the sparse autoencoders and the softmax classifier, are trained individually, using the output of the current layer as the input for the next layer. In the fine-tuning phase we use the L-BFGS algorithm (a gradient-based optimization method) to minimize the difference between the output of the deep network and the labels of the training dataset. The gradient-based optimization works well because the initialized parameters obtained from the pretraining phase already encode a significant amount of "prior" information about the input data through unsupervised learning.
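The fine-tuning step can be sketched with SciPy's L-BFGS implementation. This is a toy stand-in, not the paper's network: the "pretrained" initialization is just a small random vector and the features and labels are synthetic, but it shows the pattern of handing a flattened parameter vector and a cost function to the optimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))              # stand-in for autoencoder features
y = (X[:, 0] > 0).astype(int)             # two synthetic classes
G = np.eye(2)[y]                          # one-hot labels g^(n,m)

def cost(theta_flat):
    """Cross-entropy of a softmax layer; parameters passed as a flat vector."""
    Theta = theta_flat.reshape(2, 5)
    Z = X @ Theta.T
    Z = Z - Z.max(axis=1, keepdims=True)
    H = np.exp(Z)
    H = H / H.sum(axis=1, keepdims=True)
    return -np.mean(np.sum(G * np.log(H + 1e-12), axis=1))

theta0 = rng.normal(scale=0.01, size=10)  # plays the role of pretrained weights
res = minimize(cost, theta0, method="L-BFGS-B")   # the fine-tuning step
```

In the paper's setting the parameter vector would concatenate all of $\mathbf{W}^{(1)}, \mathbf{b}^{(1)}, \ldots, \mathbf{W}^{(3)}, \mathbf{b}^{(3)}$ rather than a single layer.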

4. Experiments

In this section we describe the communication system used in the simulation and show the separation results for different SNRs and data rates. It is important to note that the bit error rate (BER) is a significant performance index for a practical communication system; therefore, we evaluate the separation quality by the BER and regard the BER performance of single-user MFSK communication under the same conditions as the baseline.

There is no need to take multipath propagation into consideration because of the relatively short distances among the nodes in multi-UUV communication. We therefore adopt an additive white Gaussian noise (AWGN) channel model and the Monte Carlo method to calculate the system BER in the simulation; the simulation system is shown in Figure 10. We also take the BER of


[Figure 9: a diagram of the stacked network. The hydrophone samples $u(m,(n-1)K+1), \ldots, u(m,nK)$ together with the MV/ILD/IPD features (features I) feed two sparse autoencoder layers with parameters $\mathbf{W}^{(1)}, \mathbf{b}^{(1)}$ and $\mathbf{W}^{(2)}, \mathbf{b}^{(2)}$, producing the hidden activations $\mathbf{a}^{(1)}(n,m)$ and $\mathbf{a}^{(2)}(n,m)$ (features II); the softmax classifier with $\mathbf{W}^{(3)}, \mathbf{b}^{(3)}$ then outputs $p(g_j = j \mid \mathbf{a}^{(2)}(n,m))$ for $j = 1, \ldots, J$, giving the label vector $\mathbf{g}(n,m)$.]

Figure 9: The illustration of stacking the deep autoencoder and the softmax classifier.

[Figure 10: block diagram of the simulation. Sources A and B each pass through an MFSK baseband and passband modulator and an AWGN channel; each branch demodulated directly gives the baseline BER, while the time-delayed mixture is first processed by the source separation system and then demodulated to give the proposed-system BER.]

Figure 10: The simulation system used in the experiment.

the single-user communication as the baseline. The sampling rate of our simulation is 16 kHz and the modulation order is 16, without any error-correction coding. The bandwidth of the MFSK modulation/demodulation is 640 Hz, and the bandwidth ratio varies from 0.625 to 0.125, corresponding to bit rates from 400 bit/s down to 80 bit/s.
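The quoted endpoints are consistent with reading the bandwidth ratio as bit rate divided by the 640 Hz MFSK bandwidth (our inference; the paper does not state the formula explicitly). The eight ratios used in Tables 1–3 then map back to the bit rates as follows:

```python
BANDWIDTH_HZ = 640.0
ratios = [0.625, 0.500, 0.417, 0.313, 0.250, 0.208, 0.156, 0.125]
# Recover the corresponding bit rates; 0.625 -> 400 bit/s, 0.125 -> 80 bit/s.
bit_rates = [round(r * BANDWIDTH_HZ) for r in ratios]
```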

First, suppose we can obtain the optimal T-F mask, that is, we use (6) to calculate the T-F mask; we then obtain the simulated BER as a function of SNR and bandwidth ratio and compare it with the baseline.
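A Monte Carlo BER estimate of the kind used here can be sketched as follows. We use BPSK over AWGN as a simple stand-in for the MFSK chain (the modulation, the per-sample SNR convention, and the sample count are our assumptions): random bits are modulated, noise of the appropriate power is added, and the fraction of decision errors is counted.

```python
import numpy as np

def monte_carlo_ber(snr_db, n_bits=100_000, seed=0):
    """Monte Carlo BER of BPSK over an AWGN channel.

    SNR is defined per sample as Es / sigma^2 with unit symbol energy.
    """
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                   # map {0, 1} -> {-1, +1}
    sigma = 10.0 ** (-snr_db / 20.0)             # noise standard deviation
    received = symbols + sigma * rng.normal(size=n_bits)
    decisions = (received > 0).astype(int)       # hard threshold detector
    return np.mean(decisions != bits)
```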

As shown in Tables 1 and 2, the simulation results indicate that a BER similar to the baseline can be attained by the T-F mask method when the bandwidth ratio is < 0.417. Furthermore, the BER with the T-F mask can even be lower than that of the baseline under the same conditions when the SNR is very low, that is, SNR = −20 dB. The reason behind this phenomenon is that, as can be seen from (6), some frequency points are set to zero by the T-F mask, which objectively raises the SNR of the signal and yields a lower BER. In a real system, however, it is highly challenging to accurately estimate the T-F mask at such a low SNR, which is also verified in the later simulation.
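The "optimal" T-F mask described above is, in essence, the ideal binary mask: in each time-frequency bin, keep the mixture where the target source dominates and zero it elsewhere. Below is a sketch using SciPy's STFT (the window length and the test signals are our choices; the paper's (6) is not reproduced verbatim):

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(src_a, src_b, fs=16_000, nperseg=256):
    """Recover source A from the mixture A + B with an ideal binary T-F mask."""
    _, _, A = stft(src_a, fs=fs, nperseg=nperseg)
    _, _, B = stft(src_b, fs=fs, nperseg=nperseg)
    mask = (np.abs(A) > np.abs(B)).astype(float)   # 1 where source A dominates
    _, _, X = stft(src_a + src_b, fs=fs, nperseg=nperseg)
    _, recovered = istft(mask * X, fs=fs, nperseg=nperseg)
    return mask, recovered
```

For two tones occupying disjoint bands this recovers source A almost exactly, which is the W-disjoint-orthogonality situation the paper exploits for MFSK.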

According to Section 3, we estimate the T-F mask by using the orientation information of the nodes, that is, the DOA of the MFSK signal received by the nodes. We therefore introduce a time delay τ to simulate a pairwise hydrophone receiver in the simulation and divide the horizontal space into 37 sectors, covering −90° to +90° with a step size of 5°. We estimate the T-F mask with the source separation system described in Section 3 and compare the result to the BER performance of the baseline under different SNRs and bandwidth ratios; the simulation results are shown in Table 3.
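The 37-sector grid and the pairwise-hydrophone geometry can be made concrete as follows; the hydrophone spacing d is a hypothetical value (the paper does not state it), and τ = d·sin(θ)/c is the usual far-field delay for a two-element array:

```python
import numpy as np

SOUND_SPEED = 1500.0   # m/s, the underwater propagation speed quoted in the paper
SPACING = 0.5          # m, hypothetical hydrophone spacing (not from the paper)

# 37 DOA sectors spanning -90 deg to +90 deg in 5 deg steps.
angles_deg = np.arange(-90, 95, 5)
# Far-field inter-hydrophone delay tau for each candidate DOA.
delays = SPACING * np.sin(np.deg2rad(angles_deg)) / SOUND_SPEED
```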


Table 1: The BER of the simulation system with the optimum T-F mask.

Bandwidth ratio | SNR = 30 dB | SNR = 20 dB | SNR = 10 dB | SNR = 5 dB | SNR = 0 dB
0.625 | 5.30E-02 | 5.30E-02 | 5.36E-02 | 5.50E-02 | 5.95E-02
0.500 | 2.96E-02 | 2.95E-02 | 3.03E-02 | 3.16E-02 | 3.56E-02
0.417 | 1.40E-02 | 1.38E-02 | 1.38E-02 | 1.39E-02 | 1.39E-02
0.313 | 9.00E-06 | 1.60E-05 | 1.30E-05 | 2.00E-05 | 4.50E-05
0.250 | 3.00E-06 | 4.00E-06 | 1.00E-06 | 4.00E-06 | 4.00E-06
0.208 | 0.00E+00 | 1.00E-06 | 1.00E-06 | 0.00E+00 | 0.00E+00
0.156 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.125 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00

Bandwidth ratio | SNR = -5 dB | SNR = -10 dB | SNR = -15 dB | SNR = -20 dB
0.625 | 7.64E-02 | 1.47E-01 | 3.40E-01 | 5.62E-01
0.500 | 4.26E-02 | 6.82E-02 | 1.98E-01 | 4.24E-01
0.417 | 1.48E-02 | 2.67E-02 | 1.10E-01 | 3.09E-01
0.313 | 1.25E-04 | 3.08E-03 | 3.23E-02 | 1.57E-01
0.250 | 9.00E-06 | 1.12E-04 | 7.13E-03 | 7.54E-02
0.208 | 2.00E-06 | 2.19E-04 | 4.69E-03 | 4.62E-02
0.156 | 0.00E+00 | 2.10E-05 | 1.13E-03 | 1.46E-02
0.125 | 0.00E+00 | 0.00E+00 | 2.00E-05 | 4.02E-03

Table 2: The BER of the baseline.

Bandwidth ratio | SNR = 30 dB | SNR = 20 dB | SNR = 10 dB | SNR = 5 dB | SNR = 0 dB
0.625 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.500 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.417 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.313 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.250 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.208 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.156 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
0.125 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00

Bandwidth ratio | SNR = -5 dB | SNR = -10 dB | SNR = -15 dB | SNR = -20 dB
0.625 | 4.43E-04 | 5.08E-02 | 3.62E-01 | 7.26E-01
0.500 | 1.00E-06 | 7.91E-03 | 2.16E-01 | 6.47E-01
0.417 | 0.00E+00 | 7.13E-04 | 1.19E-01 | 5.70E-01
0.313 | 0.00E+00 | 0.00E+00 | 3.42E-02 | 4.35E-01
0.250 | 0.00E+00 | 0.00E+00 | 9.65E-03 | 3.28E-01
0.208 | 0.00E+00 | 0.00E+00 | 3.40E-03 | 2.53E-01
0.156 | 0.00E+00 | 0.00E+00 | 2.92E-04 | 1.41E-01
0.125 | 0.00E+00 | 0.00E+00 | 1.90E-05 | 7.23E-02

As shown in Table 3, the BER performance of the proposed system is much the same as the baseline when SNR > 20 dB and the bandwidth ratio is < 0.417, which is consistent with the result obtained with the optimal T-F mask. When SNR < 20 dB, however, the BER performance of the proposed system begins to decline, because the lower SNR of the signals causes a large error in the T-F mask estimation, which degrades the system performance.

5. Summary

In this paper we address the problem of control packet collision avoidance, which exists widely in multichannel MAC protocols, and, on the basis of the sparsity of the noncoherent modulation MFSK in the time-frequency domain, separate the sources from the MFSK mixture caused by packet collisions through the use of the T-F masking method. First, we characterize the sparsity of the MFSK signal in terms of the bandwidth ratio and demonstrate the relation between the bandwidth ratio and the PSR, SIR, and WDO by means of a simulation experiment. Then we establish the source separation system based on deep networks and the model of the MFSK communication system, taking single-user MFSK communication as the baseline to compare the BER performance of the proposed system and the baseline under different conditions of SNR and bandwidth ratio. The simulation results show that, first, the optimal T-F masking can obtain


Table 3: The BER of the proposed system.

Bandwidth ratio | SNR = 30 dB | SNR = 20 dB | SNR = 10 dB | SNR = 5 dB | SNR = 0 dB
0.625 | 5.80E-02 | 7.30E-02 | 6.36E-02 | 6.30E-02 | 3.59E-01
0.500 | 3.26E-02 | 3.45E-02 | 3.93E-02 | 4.16E-02 | 3.36E-01
0.417 | 1.46E-02 | 1.78E-02 | 1.48E-02 | 4.99E-02 | 2.14E-01
0.313 | 1.00E-05 | 6.89E-04 | 9.50E-03 | 3.00E-02 | 1.90E-01
0.250 | 7.00E-06 | 2.10E-04 | 6.00E-03 | 1.00E-02 | 1.60E-01
0.208 | 1.00E-06 | 9.00E-05 | 3.00E-03 | 8.00E-03 | 1.30E-01
0.156 | 0.00E+00 | 3.00E-05 | 9.80E-04 | 2.00E-03 | 1.30E-01
0.125 | 0.00E+00 | 7.00E-06 | 1.00E-05 | 9.00E-04 | 1.20E-01

Bandwidth ratio | SNR = -5 dB | SNR = -10 dB | SNR = -15 dB | SNR = -20 dB
0.625 | 4.94E-01 | — | — | —
0.500 | 4.26E-01 | — | — | —
0.417 | 4.48E-01 | — | — | —
0.313 | 4.25E-01 | — | — | —
0.250 | 4.90E-01 | — | — | —
0.208 | 4.20E-01 | — | — | —
0.156 | 4.00E-01 | — | — | —
0.125 | 2.50E-01 | — | — | —

the same BER performance as the baseline at lower bandwidth ratios; second, the proposed system obtains a BER performance similar to the baseline at higher SNR; third, the BER performance of the proposed system declines rapidly at lower SNR, because a lower SNR leads to a greater error in the estimation of the T-F mask. In future work we will adjust the structure of the deep networks to improve the performance of the proposed system under low SNR and with multipath propagation present in the underwater channel. As a future research topic, it is also worth exploring whether bioinspired computing models and algorithms can be applied to underwater multichannel MAC protocols, such as P systems (inspired by the structure and functioning of cells) [19, 20] and evolutionary computation (motivated by Darwin's theory of evolution) [21, 22].

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This research was supported partially by the Natural Science Basis Research Plan in Shaanxi Province of China (Program no. 2014JQ8355).

References

[1] Z. Li, C. Yang, N. Ding, S. Bogdan, and T. Ge, "Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints," International Journal of Control, Automation and Systems, vol. 10, no. 2, pp. 421–429, 2012.

[2] A. S. K. Annamalai, R. Sutton, C. Yang, P. Culverhouse, and S. Sharma, "Robust adaptive control of an uninhabited surface vehicle," Journal of Intelligent & Robotic Systems, vol. 78, no. 2, pp. 319–338, 2015.

[3] K. Chen, M. Ma, E. Cheng, F. Yuan, and W. Su, "A survey on MAC protocols for underwater wireless sensor networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1433–1447, 2014.

[4] R. Otnes, A. Asterjadhi, P. Casari et al., Underwater Acoustic Networking Techniques, Springer, Berlin, Germany, 2012.

[5] S. Climent, A. Sanchez, J. V. Capella, N. Meratnia, and J. J. Serrano, "Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers," Sensors, vol. 14, no. 1, pp. 795–833, 2014.

[6] I. M. Khalil, Y. Gadallah, M. Hayajneh, and A. Khreishah, "An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks," Sensors, vol. 12, no. 7, pp. 8782–8805, 2012.

[7] F. Bouabdallah and R. Boutaba, "A distributed OFDMA medium access control for underwater acoustic sensors networks," in Proceedings of the IEEE International Conference on Communications (ICC '11), June 2011.

[8] C. C. Hsu, K. F. Lai, C. F. Chou, and K. C. J. Lin, "ST-MAC: spatial-temporal MAC scheduling for underwater sensor networks," in Proceedings of the 28th Conference on Computer Communications (INFOCOM '09), pp. 1827–1835, IEEE, Rio de Janeiro, Brazil, April 2009.

[9] Y.-D. Chen, C.-Y. Lien, S.-W. Chuang, and K.-P. Shih, "DSSS: a TDMA-based MAC protocol with Dynamic Slot Scheduling Strategy for underwater acoustic sensor networks," in Proceedings of the IEEE OCEANS, pp. 1–6, Santander, Spain, June 2011.

[10] K. Kredo II, P. Djukic, and P. Mohapatra, "STUMP: exploiting position diversity in the staggered TDMA underwater MAC protocol," in Proceedings of the IEEE 28th Conference on Computer Communications (INFOCOM '09), pp. 2961–2965, April 2009.

[11] M. Molins and M. Stojanovic, "Slotted FAMA: a MAC protocol for underwater acoustic networks," in Proceedings of the IEEE OCEANS 2006, Boston, Mass, USA, September 2006.

[12] N. Chirdchoo, W.-S. Soh, and K. C. Chua, "RIPT: a receiver-initiated reservation-based protocol for underwater acoustic networks," IEEE Journal on Selected Areas in Communications, vol. 26, no. 9, pp. 1744–1753, 2008.

[13] Z. Zhou, Z. Peng, P. Xie, J.-H. Cui, and Z. Jiang, "Exploring random access and handshaking techniques in underwater wireless acoustic networks," EURASIP Journal on Wireless Communications and Networking, vol. 2013, article 95, 2013.

[14] C.-M. Chao et al., "A multiple rendezvous multichannel MAC protocol for underwater sensor networks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 1, pp. 128–138, 2010.

[15] H. Ramezani and G. Leus, "DMC-MAC: dynamic multi-channel MAC in underwater acoustic networks," in Proceedings of the 21st European Signal Processing Conference (EUSIPCO '13), pp. 1–5, Marrakech, Morocco, September 2013.

[16] Z. Zhou, Z. Peng, J.-H. Cui, and Z. Jiang, "Handling triple hidden terminal problems for multichannel MAC in long-delay underwater sensor networks," IEEE Transactions on Mobile Computing, vol. 11, no. 1, pp. 139–154, 2010.

[17] O. Yilmaz and S. Rickard, "Blind separation of speech mixtures via time-frequency masking," IEEE Transactions on Signal Processing, vol. 52, no. 7, pp. 1830–1847, 2004.

[18] T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Auto-encoder bottleneck features using deep belief networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '12), pp. 4153–4156, Kyoto, Japan, March 2012.

[19] X. Zhang, L. Pan, and A. Paun, "On the universality of axon P systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816–2829, 2015.

[20] X. Zhang, Y. Liu, B. Luo, and L. Pan, "Computational power of tissue P systems for generating control languages," Information Sciences, vol. 278, pp. 285–297, 2014.

[21] X. Zhang, Y. Tian, and Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761–776, 2015.

[22] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, "An efficient approach to nondominated sorting for evolutionary multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 201–213, 2015.


Page 8: Research Article The Control Packet Collision …downloads.hindawi.com/journals/ddns/2016/2437615.pdfResearch Article The Control Packet Collision Avoidance Algorithm for the Underwater

8 Discrete Dynamics in Nature and Society

Input

Output

a(2)Vminus1

a(2)V

b(3) +1

W(3)

a(2)1

a(2)2

a(2)(nm)

p(g(nm) = J | a(2)(nm)

)

p(g(nm) = 1 | a(2)(nm)

)

p(g(nm) = j | a(2)(nm)

)

Figure 8 A calssical architecture of the softmax classifier

Here Θ isin R119869times119881 is the network parameter of the softmax(119881 is the dimension of the input a(2)

(119899119898) and 119869 is the number of

the classes within the input data)Θ119879119895represents the transpose

of the 119895th row of Θ and contains the parameters of theconnections between the neuron 119895 of the output layer and theinput sample a(2)

(119899119898)

The softmax regression and logistic regression have thesame minimization objective The cost function 119869softmax(Θ)of the softmax classifier can be generalized from the logisticregression function and written as follows

119869softmax (Θ) = minus1

119872

[

[

119872

sum

119898=1

119869

sum

119895=1

1 119892(119899119898)

= 119895

sdot log119901 (119892(119899119898)

= 119895 | a(2)(119899119898)Θ)]

]

+120582

2

119869

sum

119895=1

119881

sum

V=1[Θ119895V]2

(19)

where 1sdot is the indicator function so that1a true statement = 1 and 1a false statement = 0The element Θ

119895V of the Θ contains the parameters of theconnections between the neuron 119895 of the output layer andthe neuron V of the input layer (1205822)sum119869

119895=1sum119868

119894=1(Θ119895V)2

is a regularization term that enforces the cost function119869softmax(Θ) to be strictly convex where 120582 is a weight decayparameter predefined by the users

In our proposed system we extend the label 119892(119899119898)

of thetraining dataset to a vector g

(119899119898)isin R119869 and set the 119895th element

119892119895of g(119899119898)

to 1 and other elements to 0 when the input samplea(2)(119899119898)

belongs to class 119895 such as g(119899119898)

= [0 0 1 0 0]119879

The cost function 119869softmax(Θ) can be written as follows byusing vectorization

119869softmax (Θ) = minus1

119872[

119872

sum

119898=1

(g(119899119898))119879 hΘ (a

(2)

(119899119898))]

+120582

2

119869

sum

119895=1

119868

sum

119894=1

(Θ119895119894)2

(20)

The softmax classifier can be trained by using the L-BFGSalgorithm based on a dataset in order to find an optimalparameter setΘ for minimizing the cost function 119869softmax(Θ)In our proposed system the dataset for softmax classifiertraining is composed by two parts The first part is the inputsample a(2)

(119899119898)(feature II) calculated from the last hidden

layer of the deep autoencoder The second part is the datalabel g

(119899119898)isin R119869 where the 119895th element119892

119895of the g

(119899119898)will be

set to 1 when the input sample belongs to the source locatedin the range of DOAs of index 119895

We stack the softmax classifier and deep autoencodertogether after the training is completed as shown on theleft part of Figure 9 Finally we use the training dataset andL-BFGS algorithm to fine-tune the deep network with theinitialized parameters W(1) b(1) W(2) b(2) W(3) and b(3)obtained from the sparse autoencoders and softmax classifiertraining The training phase of the sparse autoencoders andsoftmax classifier are called pretraining phase and the stack-ingtraining of the overall network that is deep networkis called fine-tuning phase In the pretraining phase theshallow neural networks that is sparse autoencoders andsoftmax classifier are training individually using the outputof current layer as the input for the next layer In the fine-tuning phase we use the L-BFGS algorithm (ie a gradientdescent method) to minimize the difference between theoutput of the deep network and the label of training datasetThe gradient descent works well because the initializedparameters obtained from the pretraining phase include asignificant amount of ldquopriorrdquo information about the inputdata through unsupervised learning

4 Experiments

In this section we describe the communication system usedin the simulation system and show the separation results indifferent SNR and data rates It is important to note thatbit error rate (BER) is an extremely significant performanceindex for a pragmatic communication system in this sectiontherefore we will evaluate the separation quality of pragmaticsystem by BER and regard the BER performance of singleuser MFSK communication as the baseline under the samecondition

There is no need to take the multipath propagation ofchannels into consideration because of the relatively closedistance among each node in view of the communicationof multi-UUV We thus adopt the additive white Gaussiannoise channel model and Monte Carlo method to calculatethe system BER in the simulation such as the simulationsystem shown in Figure 10 and we also take the BER of

Discrete Dynamics in Nature and Society 9

b(1)

b(2) b(3)b(3)

+1

+1+1 +1

u4k

u4kminus1

u1

u2

u(nm)W(3)W(3) W(2)

W(1)

a(2)(nm)

a(1)(nm)

a(2)1

a(2)2

a(2)3

a(2)(nm)

a(2)1

a(2)2

a(2)3

a(1)1

a(1)2

a(1)3

a(2)Vminus2

a(2)Vminus1

a(2)V

a(1)Vminus2

a(1)Vminus1

a(1)V

p(g1 = 1 | u(nm))

p(gj = j | u(nm))

p(gJ = J | u(nm))

g1

gj

gJ

g(nm)

Input

Input

Softmaxclassifier

Softmaxclassifier

p(gJ = J | a(2)(nm)

)

p(g1 = 1 | a(2)(nm)

)

p(gj = j | a(2)(nm)

)

g1

gj

gJ

g(nm)

Copy

MVILDIPD

MVILDIPD

a(2)Vminus2

a(2)Vminus1

a(2)V

Features I Features IIFeatures II

u(m nK)

u(m (n minus 1)K + 1)

Figure 9 The illustration of stacking deep autoencoder and softmax classifier

Demodulation DemodulationbaselineBER

proposedBER

Demodulation DemodulationbaselineBER

proposedBER

TimedelayMixture Source separation

system+

AWGN channel

AWGN channelMFSK passband

MFSK passbandMFSK baseband

MFSK baseband

Source A

Source B

Figure 10 The simulation system used in the experiment

the single user communication as the baseline The sam-pling rate of our simulation is 16 kHz and the order ofthe modulation is 16 without any error-correction codingThe bandwidth of the MFSK modulationdemodulation is640Hz and the bandwidth ratio varies from 0625 to 0125corresponding to the bit rate 400 bitss to 80 bitss

Frist supposewe can obtain the optimal T-Fmask that isusing (6) to calculate T-F mask and then get the simulationresult of BER being changing with SNR as well as bandwidthratio and make a comparison with baseline

As shown in Tables 1 and 2 it can be seen that fromthe simulation result the BER similar with baseline can beattained byT-Fmaskmethodwhen bandwidth ratio lt 0417Furthermore the BER with T-F mask would be even lowerthan that of baseline under the same condition when SNRis very low that is SNR = minus20 The reason behind theabove phenomenon is that it can be seen from (6) that some

frequency points are adjusted to zero by the T-F mask whichpromotes the SNR of signal objectively and obtains lowerBER In the actual system however it is a highly challengingtask to accurately estimate the T-Fmask under such low SNRwhich can also be verified in later simulation

According to Section 3 we estimate the T-F mask byusing the orientation information of the nodes that is theDOA of the MFSK signal received by nodes Therefore weintroduce time delay 120591 to simulate pairwise hydrophonereceiver in the simulation and divide the space into 37 blocksalong the horizontal direction at the same time in order tocorrespond from minus90 to +90 horizontally with the step sizeof 5 in a horizontal space We estimate the T-F mask bysource separation system described in Section 3 and compareit to the BER performance of baseline under the condition ofdifferent SNR and bandwidth ratio the simulation result isshown in Table 3

10 Discrete Dynamics in Nature and Society

Table 1 The BER of the simulation system with the optimum T-F mask

Bandwidth ratio SNR = 30 dB SNR = 20 dB SNR = 10 dB SNR = 5 dB SNR = 0 dB0625 530E minus 02 530E minus 02 536E minus 02 550E minus 02 595E minus 020500 296E minus 02 295E minus 02 303E minus 02 316E minus 02 356E minus 020417 140E minus 02 138E minus 02 138E minus 02 139E minus 02 139E minus 020313 900E minus 06 160E minus 05 130E minus 05 200E minus 05 450E minus 050250 300E minus 06 400E minus 06 100E minus 06 400E minus 06 400E minus 060208 000E + 00 100E minus 06 100E minus 06 000E + 00 000E + 000156 000E + 00 000E + 00 000E + 00 000E + 00 000E + 000125 000E + 00 000E + 00 000E + 00 000E + 00 000E + 00Bandwidth ratio SNR = minus5 dB SNR = minus10 dB SNR = minus15 dB SNR = minus20 dB 0625 764E minus 02 147E minus 01 340E minus 01 562E minus 01 0500 426E minus 02 682E minus 02 198E minus 01 424E minus 01 0417 148E minus 02 267E minus 02 110E minus 01 309E minus 01 0313 125E minus 04 308E minus 03 323E minus 02 157E minus 01 0250 900E minus 06 112E minus 04 713E minus 03 754E minus 02 0208 200E minus 06 219E minus 04 469E minus 03 462E minus 02 0156 000E + 00 210E minus 05 113E minus 03 146E minus 02 0125 000E + 00 000E + 00 200E minus 05 402E minus 03

Table 2 The BER of the baseline

Bandwidth ratio SNR = 30 dB SNR = 20 dB SNR = 10 dB SNR = 5 dB SNR = 0 dB0625 000E + 00 000E + 00 000E + 00 000E + 00 000E + 000500 000E + 00 000E + 00 000E + 00 000E + 00 000E + 000417 000E + 00 000E + 00 000E + 00 000E + 00 000E + 000313 000E + 00 000E + 00 000E + 00 000E + 00 000E + 000250 000E + 00 000E + 00 000E + 00 000E + 00 000E + 000208 000E + 00 000E + 00 000E + 00 000E + 00 000E + 000156 000E + 00 000E + 00 000E + 00 000E + 00 000E + 000125 000E + 00 000E + 00 000E + 00 000E + 00 000E + 00Bandwidth ratio SNR = minus5 dB SNR = minus10 dB SNR = minus15 dB SNR = minus20 dB 0625 443E minus 04 508E minus 02 362E minus 01 726E minus 01 0500 100E minus 06 791E minus 03 216E minus 01 647E minus 01 0417 000E + 00 713E minus 04 119E minus 01 570E minus 01 0313 000E + 00 000E + 00 342E minus 02 435E minus 01 0250 000E + 00 000E + 00 965E minus 03 328E minus 01 0208 000E + 00 000E + 00 340E minus 03 253E minus 01 0156 000E + 00 000E + 00 292E minus 04 141E minus 01 0125 000E + 00 000E + 00 190E minus 05 723E minus 02

As shown in Table 3 it is observed that the BER per-formance of the proposed system is much the same as thebaseline when SNR gt 20 dB and bandwidth ratio lt 0417which is consistent with the result when using the optimal T-F mask When SNR lt 20 dB however the BER performanceof the proposed system begins to decline for a big error of T-Fmask estimationmade by the lower SNR of the signals whichresults in the system performance degradation

5 Summary

In this paper we point at the problem of control packets colli-sion avoiding existing widely in multichannel MAC protocol

and on the basis of the sparsity of noncoherent modulation-MFSK in time-frequency domain separate the sources fromthe MFSK mixture caused by packets collision through theuse of T-F masking method First we indicate the sparsityof MFSK signal with bandwidth ratio and demonstrate therelation between bandwidth ratio and PSR SIR and WDOby means of the simulation experiment Then we establishthe source separation system based on deep networks and themodel of MFSK communication system taking single userMFSK communication as the baseline to compare the BERperformance of proposed system and baseline under differentcondition of SNR and bandwidth ratio The simulation resultshows that first the optimal T-F masking could obtain

Discrete Dynamics in Nature and Society 11

Table 3 The BER of the proposed system

Bandwidth ratio SNR = 30 dB SNR = 20 dB SNR = 10 dB SNR = 5 dB SNR = 0 dB0625 580E minus 02 730E minus 02 636E minus 02 630E minus 02 359E minus 010500 326E minus 02 345E minus 02 393E minus 02 416E minus 02 336E minus 010417 146E minus 02 178E minus 02 148E minus 02 499E minus 02 214E minus 010313 100E minus 05 689E minus 04 950E minus 03 300E minus 02 190E minus 010250 700E minus 06 210E minus 04 600E minus 03 100E minus 02 160E minus 010208 100E minus 06 900E minus 05 300E minus 03 800E minus 03 130E minus 010156 000E + 00 300E minus 05 980E minus 04 200E minus 03 130E minus 010125 000E + 00 700E minus 06 100E minus 05 900E minus 04 120E minus 01Bandwidth ratio SNR = minus5 dB SNR = minus10 dB SNR = minus15 dB SNR = minus20 dB 0625 494E minus 01 0500 426E minus 01 0417 448E minus 01 0313 425E minus 01 0250 490E minus 01 0208 420E minus 01 0156 400E minus 01 0125 250E minus 01

the same BER performance as the baseline under lowerbandwidth ratio second the proposed system could obtainthe similar BER performance as the baseline under higherSNR third the BER performance of the proposed systemdeclines rapidly under the condition of lower SNR for lowerSNR leads to a greater error in the estimation of T-F mask Inthe future work we will adjust the structure of deep networksin subsequent research work to promote the performanceof proposed system under the condition of low SNR andmultipath propagation presenting in the underwater channelAs a future research topic it also deserves the possibility thatthe bioinspired computing models and algorithms are usedfor the underwater multichannel MAC protocols such as theP systems (inspired from the structure and the functioningof cells) [19 20] and evolutionary computation (motivatedby evolution theory of Darwin) [21 22]

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This research was supported partially by the Natural Science Basic Research Plan in Shaanxi Province of China (Program no. 2014JQ8355).

References

[1] Z. Li, C. Yang, N. Ding, S. Bogdan, and T. Ge, "Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints," International Journal of Control, Automation and Systems, vol. 10, no. 2, pp. 421–429, 2012.

[2] A. S. K. Annamalai, R. Sutton, C. Yang, P. Culverhouse, and S. Sharma, "Robust adaptive control of an uninhabited surface vehicle," Journal of Intelligent & Robotic Systems, vol. 78, no. 2, pp. 319–338, 2015.

[3] K. Chen, M. Ma, E. Cheng, F. Yuan, and W. Su, "A survey on MAC protocols for underwater wireless sensor networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1433–1447, 2014.

[4] R. Otnes, A. Asterjadhi, P. Casari et al., Underwater Acoustic Networking Techniques, Springer, Berlin, Germany, 2012.

[5] S. Climent, A. Sanchez, J. V. Capella, N. Meratnia, and J. J. Serrano, "Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers," Sensors, vol. 14, no. 1, pp. 795–833, 2014.

[6] I. M. Khalil, Y. Gadallah, M. Hayajneh, and A. Khreishah, "An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks," Sensors, vol. 12, no. 7, pp. 8782–8805, 2012.

[7] F. Bouabdallah and R. Boutaba, "A distributed OFDMA medium access control for underwater acoustic sensors networks," in Proceedings of the IEEE International Conference on Communications (ICC '11), June 2011.

[8] C. C. Hsu, K. F. Lai, C. F. Chou, and K. C.-J. Lin, "ST-MAC: spatial-temporal MAC scheduling for underwater sensor networks," in Proceedings of the 28th Conference on Computer Communications (INFOCOM '09), pp. 1827–1835, IEEE, Rio de Janeiro, Brazil, April 2009.

[9] Y.-D. Chen, C.-Y. Lien, S.-W. Chuang, and K.-P. Shih, "DSSS: a TDMA-based MAC protocol with dynamic slot scheduling strategy for underwater acoustic sensor networks," in Proceedings of the IEEE OCEANS, pp. 1–6, Santander, Spain, June 2011.

[10] K. Kredo II, P. Djukic, and P. Mohapatra, "STUMP: exploiting position diversity in the staggered TDMA underwater MAC protocol," in Proceedings of the IEEE 28th Conference on Computer Communications (INFOCOM '09), pp. 2961–2965, April 2009.

[11] M. Molins and M. Stojanovic, "Slotted FAMA: a MAC protocol for underwater acoustic networks," in Proceedings of the IEEE OCEANS 2006, Boston, Mass, USA, September 2006.

[12] N. Chirdchoo, W.-S. Soh, and K. C. Chua, "RIPT: a receiver-initiated reservation-based protocol for underwater acoustic networks," IEEE Journal on Selected Areas in Communications, vol. 26, no. 9, pp. 1744–1753, 2008.

[13] Z. Zhou, Z. Peng, P. Xie, J.-H. Cui, and Z. Jiang, "Exploring random access and handshaking techniques in underwater wireless acoustic networks," EURASIP Journal on Wireless Communications and Networking, vol. 2013, article 95, 2013.

[14] C.-M. Chao, Y.-Z. Wang, and M.-W. Lu, "A multiple rendezvous multichannel MAC protocol for underwater sensor networks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 1, pp. 128–138, 2013.

[15] H. Ramezani and G. Leus, "DMC-MAC: dynamic multichannel MAC in underwater acoustic networks," in Proceedings of the 21st European Signal Processing Conference (EUSIPCO '13), pp. 1–5, Marrakech, Morocco, September 2013.

[16] Z. Zhou, Z. Peng, J.-H. Cui, and Z. Jiang, "Handling triple hidden terminal problems for multichannel MAC in long-delay underwater sensor networks," IEEE Transactions on Mobile Computing, vol. 11, no. 1, pp. 139–154, 2012.

[17] O. Yilmaz and S. Rickard, "Blind separation of speech mixtures via time-frequency masking," IEEE Transactions on Signal Processing, vol. 52, no. 7, pp. 1830–1847, 2004.

[18] T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Auto-encoder bottleneck features using deep belief networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '12), pp. 4153–4156, Kyoto, Japan, March 2012.

[19] X. Zhang, L. Pan, and A. Păun, "On the universality of axon P systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816–2829, 2015.

[20] X. Zhang, Y. Liu, B. Luo, and L. Pan, "Computational power of tissue P systems for generating control languages," Information Sciences, vol. 278, pp. 285–297, 2014.

[21] X. Zhang, Y. Tian, and Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761–776, 2015.

[22] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, "An efficient approach to nondominated sorting for evolutionary multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 201–213, 2015.




Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 11: Research Article The Control Packet Collision …downloads.hindawi.com/journals/ddns/2016/2437615.pdfResearch Article The Control Packet Collision Avoidance Algorithm for the Underwater

Discrete Dynamics in Nature and Society

Table 3: The BER of the proposed system.

| Bandwidth ratio | SNR = 30 dB | SNR = 20 dB | SNR = 10 dB | SNR = 5 dB | SNR = 0 dB |
|-----------------|-------------|-------------|-------------|------------|------------|
| 0.625 | 5.80E-02 | 7.30E-02 | 6.36E-02 | 6.30E-02 | 3.59E-01 |
| 0.500 | 3.26E-02 | 3.45E-02 | 3.93E-02 | 4.16E-02 | 3.36E-01 |
| 0.417 | 1.46E-02 | 1.78E-02 | 1.48E-02 | 4.99E-02 | 2.14E-01 |
| 0.313 | 1.00E-05 | 6.89E-04 | 9.50E-03 | 3.00E-02 | 1.90E-01 |
| 0.250 | 7.00E-06 | 2.10E-04 | 6.00E-03 | 1.00E-02 | 1.60E-01 |
| 0.208 | 1.00E-06 | 9.00E-05 | 3.00E-03 | 8.00E-03 | 1.30E-01 |
| 0.156 | 0.00E+00 | 3.00E-05 | 9.80E-04 | 2.00E-03 | 1.30E-01 |
| 0.125 | 0.00E+00 | 7.00E-06 | 1.00E-05 | 9.00E-04 | 1.20E-01 |

| Bandwidth ratio | SNR = -5 dB | SNR = -10 dB | SNR = -15 dB | SNR = -20 dB |
|-----------------|-------------|--------------|--------------|--------------|
| 0.625 | 4.94E-01 | | | |
| 0.500 | 4.26E-01 | | | |
| 0.417 | 4.48E-01 | | | |
| 0.313 | 4.25E-01 | | | |
| 0.250 | 4.90E-01 | | | |
| 0.208 | 4.20E-01 | | | |
| 0.156 | 4.00E-01 | | | |
| 0.125 | 2.50E-01 | | | |
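To make the trend in Table 3 concrete, the following sketch (illustrative only; the dictionary layout and the monotonicity check are ours, not part of the original system) encodes the measured BER values and verifies how the BER behaves as the bandwidth ratio shrinks at a fixed SNR:

```python
# Table 3 (partial): BER of the proposed system, indexed by
# bandwidth ratio and SNR in dB. Values transcribed from the table above.
BER = {
    0.625: {30: 5.80e-2, 20: 7.30e-2, 10: 6.36e-2, 5: 6.30e-2, 0: 3.59e-1},
    0.500: {30: 3.26e-2, 20: 3.45e-2, 10: 3.93e-2, 5: 4.16e-2, 0: 3.36e-1},
    0.417: {30: 1.46e-2, 20: 1.78e-2, 10: 1.48e-2, 5: 4.99e-2, 0: 2.14e-1},
    0.313: {30: 1.00e-5, 20: 6.89e-4, 10: 9.50e-3, 5: 3.00e-2, 0: 1.90e-1},
    0.250: {30: 7.00e-6, 20: 2.10e-4, 10: 6.00e-3, 5: 1.00e-2, 0: 1.60e-1},
    0.208: {30: 1.00e-6, 20: 9.00e-5, 10: 3.00e-3, 5: 8.00e-3, 0: 1.30e-1},
    0.156: {30: 0.00e+0, 20: 3.00e-5, 10: 9.80e-4, 5: 2.00e-3, 0: 1.30e-1},
    0.125: {30: 0.00e+0, 20: 7.00e-6, 10: 1.00e-5, 5: 9.00e-4, 0: 1.20e-1},
}

def ber_is_monotone_in_bandwidth(snr_db):
    """Check that the BER never increases as the bandwidth ratio shrinks."""
    ratios = sorted(BER, reverse=True)              # 0.625 -> 0.125
    bers = [BER[r][snr_db] for r in ratios]
    return all(b2 <= b1 for b1, b2 in zip(bers, bers[1:]))
```

At SNR = 30, 20, 10, and 0 dB the BER decreases monotonically with the bandwidth ratio, while at 5 dB the 0.417 row breaks the trend, which matches the noisier behavior the table shows near low SNR.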

the same BER performance as the baseline under a lower bandwidth ratio; second, the proposed system obtains a BER performance similar to the baseline under higher SNR; third, the BER performance of the proposed system declines rapidly under low SNR, because low SNR leads to a greater error in the estimation of the T-F mask. In future work, we will adjust the structure of the deep networks to improve the performance of the proposed system under low SNR and in the presence of multipath propagation in the underwater channel. As a future research topic, it is also worth investigating bioinspired computing models and algorithms for underwater multichannel MAC protocols, such as P systems (inspired by the structure and functioning of cells) [19, 20] and evolutionary computation (motivated by Darwin's theory of evolution) [21, 22].
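Since the BER degradation at low SNR is attributed to errors in the estimated T-F mask, the core masking step can be sketched as follows. This is a minimal illustration of binary time-frequency masking in the spirit of [17], not the paper's deep-network estimator: the synthetic "STFT" matrices, shapes, and variable names are our own stand-ins, and the mask here is the ideal binary mask computed from the true sources.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two approximately W-disjoint-orthogonal (sparse) source spectrograms.
S1 = rng.normal(size=(64, 32)) * (rng.random((64, 32)) > 0.5)
S2 = rng.normal(size=(64, 32)) * (rng.random((64, 32)) > 0.5)
X = S1 + S2                     # mixture observed at the hydrophone

# Ideal binary mask: 1 wherever source 1 dominates the T-F cell.
mask = (np.abs(S1) > np.abs(S2)).astype(float)
S1_hat = mask * X               # masked mixture approximates source 1
```

Because the sources rarely occupy the same T-F cell, the masked mixture is much closer to the target source than the raw mixture is; an estimation error in `mask` (as happens at low SNR) directly leaks the interfering signal into `S1_hat`.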

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This research was partially supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program no. 2014JQ8355).

References

[1] Z. Li, C. Yang, N. Ding, S. Bogdan, and T. Ge, "Robust adaptive motion control for underwater remotely operated vehicles with velocity constraints," International Journal of Control, Automation and Systems, vol. 10, no. 2, pp. 421–429, 2012.

[2] A. S. K. Annamalai, R. Sutton, C. Yang, P. Culverhouse, and S. Sharma, "Robust adaptive control of an uninhabited surface vehicle," Journal of Intelligent & Robotic Systems, vol. 78, no. 2, pp. 319–338, 2015.

[3] K. Chen, M. Ma, E. Cheng, F. Yuan, and W. Su, "A survey on MAC protocols for underwater wireless sensor networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1433–1447, 2014.

[4] R. Otnes, A. Asterjadhi, P. Casari et al., Underwater Acoustic Networking Techniques, Springer, Berlin, Germany, 2012.

[5] S. Climent, A. Sanchez, J. V. Capella, N. Meratnia, and J. J. Serrano, "Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers," Sensors, vol. 14, no. 1, pp. 795–833, 2014.

[6] I. M. Khalil, Y. Gadallah, M. Hayajneh, and A. Khreishah, "An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks," Sensors, vol. 12, no. 7, pp. 8782–8805, 2012.

[7] F. Bouabdallah and R. Boutaba, "A distributed OFDMA medium access control for underwater acoustic sensors networks," in Proceedings of the IEEE International Conference on Communications (ICC '11), June 2011.

[8] C. C. Hsu, K. F. Lai, C. F. Chou, and K. C. J. Lin, "ST-MAC: spatial-temporal MAC scheduling for underwater sensor networks," in Proceedings of the 28th Conference on Computer Communications (INFOCOM '09), pp. 1827–1835, IEEE, Rio de Janeiro, Brazil, April 2009.

[9] Y.-D. Chen, C.-Y. Lien, S.-W. Chuang, and K.-P. Shih, "DSSS: a TDMA-based MAC protocol with Dynamic Slot Scheduling Strategy for underwater acoustic sensor networks," in Proceedings of the IEEE OCEANS, pp. 1–6, Santander, Spain, June 2011.

[10] K. Kredo II, P. Djukic, and P. Mohapatra, "STUMP: exploiting position diversity in the staggered TDMA underwater MAC protocol," in Proceedings of the IEEE 28th Conference on Computer Communications (INFOCOM '09), pp. 2961–2965, April 2009.

[11] M. Molins and M. Stojanovic, "Slotted FAMA: a MAC protocol for underwater acoustic networks," in Proceedings of the IEEE OCEANS 2006, Boston, Mass, USA, September 2006.

[12] N. Chirdchoo, W.-S. Soh, and K. C. Chua, "RIPT: a receiver-initiated reservation-based protocol for underwater acoustic networks," IEEE Journal on Selected Areas in Communications, vol. 26, no. 9, pp. 1744–1753, 2008.

[13] Z. Zhou, Z. Peng, P. Xie, J.-H. Cui, and Z. Jiang, "Exploring random access and handshaking techniques in underwater wireless acoustic networks," EURASIP Journal on Wireless Communications and Networking, vol. 2013, article 95, 2013.

[14] C.-M. Chao et al., "A multiple rendezvous multichannel MAC protocol for underwater sensor networks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 1, pp. 128–138, 2013.

[15] H. Ramezani and G. Leus, "DMC-MAC: dynamic multi-channel MAC in underwater acoustic networks," in Proceedings of the 21st European Signal Processing Conference (EUSIPCO '13), pp. 1–5, Marrakech, Morocco, September 2013.

[16] Z. Zhou, Z. Peng, J.-H. Cui, and Z. Jiang, "Handling triple hidden terminal problems for multichannel MAC in long-delay underwater sensor networks," IEEE Transactions on Mobile Computing, vol. 11, no. 1, pp. 139–154, 2012.

[17] O. Yilmaz and S. Rickard, "Blind separation of speech mixtures via time-frequency masking," IEEE Transactions on Signal Processing, vol. 52, no. 7, pp. 1830–1847, 2004.

[18] T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Auto-encoder bottleneck features using deep belief networks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '12), pp. 4153–4156, Kyoto, Japan, March 2012.

[19] X. Zhang, L. Pan, and A. Păun, "On the universality of axon P systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2816–2829, 2015.

[20] X. Zhang, Y. Liu, B. Luo, and L. Pan, "Computational power of tissue P systems for generating control languages," Information Sciences, vol. 278, pp. 285–297, 2014.

[21] X. Zhang, Y. Tian, and Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761–776, 2015.

[22] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, "An efficient approach to nondominated sorting for evolutionary multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 201–213, 2015.
