
Problems of Information Transmission, Vol. 38, No. 4, 2002, pp. 268–279. Translated from Problemy Peredachi Informatsii, No. 4, 2002, pp. 24–36. Original Russian Text Copyright © 2002 by Tsybakov, Rubinov.

INFORMATION THEORY AND CODING THEORY

Some Constructions of Conflict-Avoiding Codes

B. S. Tsybakov and A. R. Rubinov

Received August 14, 2001; in final form, June 18, 2002

Abstract—Constructions of conflict-avoiding codes are presented. These codes can be used as protocol sequences for successful packet transmission over a collision channel without feedback. We give a relation between codes that avoid conflicts of different numbers of colliding users. Upper bounds on the maximum code size and three particular code constructions are presented.

1. INTRODUCTION

We consider binary conflict-avoiding codes. A length-N codeword of such a code is assigned to a user as his protocol sequence. Each user uses his protocol sequence for transmission over a collision channel without feedback. The maximum number of channel users can at most be equal to the number of codewords in the code. The channel is slotted. The users have slot synchronization. No other synchronization is assumed. A user can transmit his packets in his session with a length of N slots. It is assumed that any slot can belong to at most k sessions. During his session, each user transmits packets according to his protocol sequence. If a user begins his session at time T (T is the beginning of slot T) and his protocol sequence has symbol 1 (symbol 0) at the first position, then the user transmits (does not transmit, respectively) a packet in slot T. Similarly, if there is symbol 1 (symbol 0) at the ith position, then the user transmits (does not transmit, respectively) a packet in slot T + i − 1, 1 ≤ i ≤ N. If exactly one user sends a packet in a slot, the packet has successful transmission. If more than one user transmits packets in a slot, there is a conflict in the slot and none of the packets has successful transmission. The protocol sequences must be such that they guarantee successful transmission of at least r packets for each user within his session. The set of such protocol sequences, or words, is called a conflict-avoiding code with the following parameters: N, the word length; k, the maximum number of sessions intersecting in a slot; and r, the minimum number of successfully transmitted packets in a session.
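To make the slot-by-slot transmission rule concrete, here is a small Python sketch (an illustration only; the protocol sequences and session start times below are hypothetical and are not taken from any code in this paper). It simulates the slotted collision channel without feedback and counts the successfully transmitted packets of each session.

# Sketch: slotted collision channel without feedback.
# Each session is (protocol_sequence, start_slot); a packet sent in a slot
# is successful iff no other session transmits in that slot.

def successful_packets(sessions):
    """sessions: list of (sequence, start_slot); returns successes per session."""
    transmissions = {}                      # slot -> indices of sessions sending in it
    for idx, (seq, start) in enumerate(sessions):
        for i, bit in enumerate(seq):
            if bit == 1:
                transmissions.setdefault(start + i, []).append(idx)
    successes = [0] * len(sessions)
    for slot, senders in transmissions.items():
        if len(senders) == 1:               # exactly one packet in the slot: success
            successes[senders[0]] += 1
    return successes

# Hypothetical example: three length-9 protocol sequences with arbitrary start offsets.
sessions = [([1, 1, 1, 0, 0, 0, 0, 0, 0], 0),
            ([1, 0, 1, 1, 0, 0, 0, 0, 0], 2),
            ([1, 0, 0, 1, 1, 0, 0, 0, 0], 5)]
print(successful_packets(sessions))         # here: [2, 1, 2]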

Important results on a collision channel without feedback and on conflict-avoiding codes are presented in [1–7]. The problems considered in these papers and our present paper are illustrated in Fig. 1 for the case of k = 2. For k = 2, precise formulations of the problems considered in this paper are given below in Sections 2 and 3.

Fig. 1a shows the problem considered in [1–6]. In this problem, one session, called the “signal codeword,” can collide in the channel with two other sessions, called “noise codewords,” which go one after another without a gap. Collision of the signal codeword with each of the two noise codewords is partial; otherwise, the signal codeword collides fully with only one of these noise codewords. These two noise codewords are called the same in the sense that they are sessions of the same user. This means that the signal codeword can collide with any cyclic shift of another codeword. The problem was motivated by the traditional-for-information-theory consideration of long-information-stream transmission.

Fig. 1b shows the problem considered in [7]. In this problem, each codeword must have length N = 2N′ − 1 and at least N′ − 1 zeros at the end. This means that a signal codeword of length N′ extended with the all-zero sequence of length N′ − 1 can collide fully with any cyclic shift of another codeword extended in the same way. The problem was motivated by short-message communication, as in a wireless access channel. For the case of r = 1, [7] gives a sufficient condition for a code to be conflict-avoiding; for the case of k = 2 and r = 1, [7] presents the optimum codes and their sizes.

Fig. 1. Collision of a signal codeword with noise codewords for k = 2: (a) length-N codewords; the two noise codewords are sessions of the same user; (b) codewords of length N′ followed by all-zero sequences of length N′ − 1; (c) length-N codewords; the noise codewords are not necessarily the same.

In this paper, we consider the problems illustrated in Figs. 1b and 1c. The first problem is considered in Section 2; the second problem is considered in Section 3. The second problem looks like the problem illustrated in Fig. 1a. The only difference is that the noise codewords are not necessarily the same. Note that Fig. 1c illustrates a particular case of a more general problem which allows some empty gap between the two noise sessions.

Despite the similarity of the problems illustrated in Fig. 1, the corresponding combinatorial problems are quite different.

In Section 2, we present a precise formulation of the problem illustrated in Fig. 1b, a relation between codes with different k, upper bounds for the maximum code size, and two code constructions for the case of k = 2 and r ≥ 1. Section 3 considers the problem illustrated in Fig. 1c and gives a construction of a code with k = 2 and r = 1.

2. CODES WITH WORDS EXTENDED BY ALL-ZERO SEQUENCES

In this section, we consider the problem illustrated in Fig. 1b. First, we introduce our notation and formulate the problem.


2.1. Notations and the Problem

We denote by v = (v_1, . . . , v_{N′}) a binary word of length N′ and by v′ = v0 the extended word v of length 2N′ − 1; here and in the sequel, 0 is the all-zero word of length N′ − 1. Let w(v) denote the Hamming weight of v, i.e., the number of symbols 1 in v. In what follows, we denote by d(u, v) the Hamming distance between the binary words u = (u_1, . . . , u_{N′}) and v. We denote by α(u, v) the number of symbols 1 in the word u that are not covered by symbols 1 of v; i.e., α(u, v) is the number of positions i such that u_i = 1 and v_i = 0. We denote by β(u, v) the number of symbols 1 in u that are covered by symbols 1 of v; i.e., β(u, v) is the number of positions i such that u_i = 1 and v_i = 1.

Note that α(u, v) + β(u, v) = w(u) for any u and v. Also, d(u, v) = 2α(u, v) if w(u) = w(v). If all symbols 1 of the word u are covered by v, we say that u is a subword of v (this term is used in Section 2.3 only). The usual cyclic shift of v by i positions rightwards is denoted by v^(i), i = 0, 1, . . . , N′ − 1, where v^(0) ≡ v. Also, we will use the parallel shift of v by i positions rightwards, which is defined as the cyclic shift v′^(i) of the extended word v′, i = 0, 1, . . . , N′ − 1, where v′^(0) ≡ v′.

For any given u and v, the minimum of α(u′, v′^(i)) over all i is denoted by t(u, v). Similarly, for any given words u, v_1, . . . , v_{k−1} and their parallel shifts v′_1^(i_1), . . . , v′_{k−1}^(i_{k−1}), i_j = 0, 1, . . . , N′ − 1, j = 1, . . . , k − 1, we define α(u′, v′_1^(i_1), . . . , v′_{k−1}^(i_{k−1})) as the number of symbols 1 of u that are not covered by any of the shifts v′_1^(i_1), . . . , v′_{k−1}^(i_{k−1}). The minimum of α(u′, v′_1^(i_1), . . . , v′_{k−1}^(i_{k−1})) over all i_1, . . . , i_{k−1} is denoted by t(u, v_1, . . . , v_{k−1}).

For given integers N ≥ 3, k ≥ 2, and r ≥ 1, a code C consisting of binary words of length N = 2N′ − 1 with N′ − 1 zeros at the end is called an [N, k, r] code if, for any ℓ, 2 ≤ ℓ ≤ k, of its different words u′ = u0, v′_1 = v_1 0, . . . , v′_{ℓ−1} = v_{ℓ−1} 0, we have t(u, v_1, . . . , v_{ℓ−1}) ≥ r. Note that, according to the definition, if C is an [N, ℓ, r] code and |C| = ℓ, then C is an [N, k, r] code for any k > ℓ. This remark relieves us of the necessity to comment on the case k > |C| in the sequel.

The problem, called here the [N, k, r] problem, is to find an [N, k, r] code with the maximum number of codewords. This maximum number is denoted by M[N, k, r], whereas the number of codewords (i.e., the code size) of an arbitrary code C is denoted by |C|. Thus, an [N, k, r] code C gives the optimum solution of the [N, k, r] problem if |C| = M[N, k, r].

Also, we will focus on the [N, w, k, r] problem, whose only difference from the [N, k, r] problem is that we consider [N, k, r] codes consisting of words of constant weight w. The maximum size |C| of an [N, w, k, r] code is denoted by M[N, w, k, r].
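The defining property of an [N, k, r] code can be checked mechanically. The following Python sketch (an illustration, not part of the paper's formal development) implements α, the parallel shifts, and the brute-force check of t(u, v_1, . . . , v_{ℓ−1}) ≥ r; the example code at the end consists of all weight-3 words of length N′ = 5 with a 1 at the first position, a family whose size 6 equals the r = 1 value given later in Theorem 2.

from itertools import permutations, product

def extend(v):
    """v0: append N' - 1 zeros to a word v of length N'."""
    return v + [0] * (len(v) - 1)

def parallel_shift(v, i):
    """Cyclic shift of the extended word v' by i positions rightwards."""
    ve = extend(v)
    n = len(ve)
    return [ve[(j - i) % n] for j in range(n)]

def alpha(u_ext, others):
    """Number of 1s of u' not covered by any word in `others` (same length)."""
    return sum(1 for j, a in enumerate(u_ext) if a == 1 and all(o[j] == 0 for o in others))

def is_conflict_avoiding(code, k, r):
    """Check the [N, k, r] property for all l with 2 <= l <= k."""
    n_prime = len(code[0])
    for l in range(2, k + 1):
        for words in permutations(code, l):            # words[0] is u, the rest are noise
            u_ext = extend(words[0])
            for shifts in product(range(n_prime), repeat=l - 1):
                others = [parallel_shift(v, i) for v, i in zip(words[1:], shifts)]
                if alpha(u_ext, others) < r:
                    return False
    return True

# Example: the six weight-3 words of length N' = 5 starting with 1 (so N = 9).
code = [[1, 1, 1, 0, 0], [1, 1, 0, 1, 0], [1, 1, 0, 0, 1],
        [1, 0, 1, 1, 0], [1, 0, 1, 0, 1], [1, 0, 0, 1, 1]]
print(is_conflict_avoiding(code, k=2, r=1))            # expected: True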

2.2. Relation between [N, w, k, r] Codes with Different k

Let C be an [N, w, 2, r∗] code with r∗ > w/2. This means that β(u′, v′^(i)) < w/2 for any two different codewords u0 and v0 and any shift length i; i.e., a shift of a codeword can cover only less than a half of the symbols 1 of another word. Therefore, all symbols 1 of a codeword cannot be covered by shifts of two other codewords. Thus, the code C is an [N, w, 3, 1] code. Actually, the number of symbols 1 of a codeword covered by shifts of two other codewords can be upper bounded by 2(w − r∗). Hence, the number of noncovered symbols 1 is lower bounded by w − 2(w − r∗) = 2r∗ − w. This means that C is an [N, w, 3, r] code with r = 2r∗ − w. Moreover, this means that an optimal solution of the [N, w, 2, r∗ = ⌈(r + w)/2⌉] problem can be used as a solution of the [N, w, 3, r] problem and that M[N, w, 3, r] ≥ M[N, w, 2, ⌈(r + w)/2⌉]. Here and in what follows, ⌈x⌉ denotes the minimum integer greater than or equal to x, and ⌊x⌋ denotes the integer part of x.

Similarly, a nonoptimal solution of the [N, w, k, r] problem can be obtained from a solution of the [N, w, k∗, r∗] problem with k∗ < k and r∗ > r. To show this, let us assume that k − 1 is a multiple of k∗ − 1, i.e., k − 1 = m(k∗ − 1), where m > 1 is an integer. Note that the condition w − t(u, v_1, . . . , v_{k−1}) ≤ w − r is weaker than the condition w − t(u, v_1, . . . , v_{k∗−1}) ≤ (w − r)/m applied to each of the m groups of k∗ − 1 noise words, since the number of symbols 1 of u covered by all k − 1 shifted words does not exceed the sum of the numbers covered by the m groups. Let us take r∗ = ⌈(r + (m − 1)w)/m⌉. Then w − r∗ ≤ (w − r)/m, and we have the following theorem.

Theorem 1. Let k − 1 = m(k∗ − 1) and r∗ = ⌈(r + (m − 1)w)/m⌉, where m > 1 is an integer. Then

1. Each [N, w, k∗, r∗] code is an [N, w, k, r] code;
2. M[N, w, k, r] ≥ M[N, w, k∗, r∗].
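As a reading aid, the covering count behind Theorem 1 can be written out in one chain (splitting the k − 1 noise words into m groups of k∗ − 1 words each; this chain is implicit in the argument above):

\[
w - t(u, v_1, \dots, v_{k-1})
  \;\le\; \sum_{j=1}^{m} \Bigl( w - t\bigl(u,\, v_{(j-1)(k^*-1)+1}, \dots, v_{j(k^*-1)}\bigr) \Bigr)
  \;\le\; m\,(w - r^*)
  \;\le\; m \cdot \frac{w-r}{m} \;=\; w - r,
\]

so that t(u, v_1, . . . , v_{k−1}) ≥ r; the last inequality uses w − r∗ ≤ (w − r)/m, which holds since r∗ = ⌈(r + (m − 1)w)/m⌉ ≥ (r + (m − 1)w)/m.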

2.3. Upper Bounds for M[N, w, 2, r] and M[N, 3, r]

Let C be an [N, w, 2, r] code, let u0 and v0 be words of C, and consider subwords of u0 and of v0 of weight w − r + 1 each.

If some shift transferred a subword of v0 into a subword of u0, then the corresponding parallel shift v′^(i) would cover not less than w − r + 1 symbols 1 in the word u0, which would contradict the inequality t(u, v) ≥ r. Thus, the subwords of weight

w_0 ≜ w − r + 1

contained in different codewords of C do not coincide for any shifts. The subwords that do not coincide for any shifts are called strongly different.

For a code C, we want to give an upper bound for the maximum number of pairwise strongly different weight-w_0 subwords of codewords of C.

For this purpose, note that two subwords of a codeword are strongly different if and only if their shifts that transfer the first nonzero position (i.e., the first position that contains 1) to the first position are different. Consider the set of all different words of length N′ and weight w_0 with 1 at the first position. The number of elements of this set is \binom{N′−1}{w_0−1}, since it is equal to the number of words of weight w_0 − 1 and length N′ − 1.

Next, let us lower bound the number of strongly different weight-w_0 subwords of an arbitrary initial weight-w word. The set of different weight-w_0 subwords of a weight-w word has size \binom{w}{w_0}. Some pairs of subwords in this set can be non-strongly different. Taking this into account, we choose from the set only the subwords that have 1 at the first nonzero position of the initial weight-w word. The number of thus chosen subwords is \binom{w−1}{w_0−1}, and all of them are pairwise strongly different. Thus, the number of strongly different weight-w_0 subwords of each codeword of C is not less than \binom{w−1}{w_0−1}, these subwords are strongly different for different codewords of C, and the total number of pairwise strongly different weight-w_0 words is not greater than \binom{N′−1}{w_0−1}. Therefore, the total number of words in C is not greater than \binom{N′−1}{w_0−1} / \binom{w−1}{w_0−1}; i.e., we have the following theorem.

Theorem 2. For the maximum number of words of an [N, w, 2, r] code, we have the following upper bound:

M[N, w, 2, r] ≤ \binom{N′−1}{w−r} / \binom{w−1}{w−r} = (N′ − 1)! (r − 1)! / [(N′ − w + r − 1)! (w − 1)!],   N = 2N′ − 1.   (1)

For r = 1, (1) holds with equality, i.e.,

M[N, w, 2, 1] = \binom{N′−1}{w−1}.

Furthermore,

max_w M[N, w, 2, 1] = \binom{N′−1}{⌊(N′−1)/2⌋},

with w − 1 = ⌊(N′ − 1)/2⌋ giving the maximum.

The statement of this theorem for the case r = 1 was obtained in [7]. Also, note that the bound is close to the Johnson bound [8] for constant-weight codes, though the bounds are different.
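For orientation, the bound (1) is easy to evaluate numerically; a few lines of Python (an illustration only, with N′ = 5, i.e., N = 9, chosen as an example):

from math import comb

def bound_1(n_prime, w, r):
    """Right-hand side of (1): binom(N'-1, w-r) / binom(w-1, w-r)."""
    return comb(n_prime - 1, w - r) / comb(w - 1, w - r)

n_prime = 5                                        # N = 2*N' - 1 = 9
for w in range(2, n_prime):
    print("w =", w,
          "  M[9, w, 2, 1] =", comb(n_prime - 1, w - 1),    # the r = 1 equality case
          "  bound (1) for r = 2:", bound_1(n_prime, w, 2))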

Now, our objective is to get an upper bound for M[N, 3, r]. To achieve this goal, we introduce one more type of code.

For given integers N′ ≥ 2, k ≥ 2, and r ≥ 1, a code C consisting of binary words of length N′ is called an [N′, k, r]f code (here, f is the first letter of the word “fixed”) if, for any k of its different words u, v_1, . . . , v_{k−1}, the inequality α(u, v_1, . . . , v_{k−1}) ≥ r holds. The maximum of |C| over all [N′, k, r]f codes is denoted by Mf[N′, k, r]. As above, if we consider a code with a given constant weight w, we include w into the code notation (like [N′, w, k, r] above, we now write [N′, w, k, r]f). The difference between the codes introduced here and [N, k, r] codes is that codewords are now compared without shifts. Note that [N′, w, k, r]f codes are codes of constant weight w with Hamming distance at least 2r.

It is clear that each [N, k, r] code, after truncation of 0 at the end of its codewords, becomes an [N′, k, r]f code. Also, we have the following inequalities:

Mf[N′, w, k, r] ≥ M[N, w, k, r],   Mf[N′, k, r] ≥ M[N, k, r].   (2)

It is interesting to note that, in fact, the [N′, 3, 1]f problem was set up by Erdős as the following unsolved problem. For a given set, find the maximum family of subsets with the property that no subset of this family can be covered by the union of two other subsets of the family.

Let us find an upper bound on Mf[N′, 3, r]. To do this, for each two different words u and v of an [N′, 3, r]f code C, we define a word a(u, v). The word a(u, v) has 1 at position i if at least one of the words u and v has 1 at this position, 1 ≤ i ≤ N′, and a(u, v) has 0 at position i otherwise. The set of all such words a(u, v) for u and v from C is denoted by C̄. Note that C̄ is an [N′, 2, r]f code and |C̄| = |C|(|C| − 1)/2. This gives the following theorem.

Theorem 3. For the number of words Mf[N′, 3, r], we have the following bounds:

M[N, 3, r] ≤ Mf[N′, 3, r] ≤ ⌊√(2 Mf[N′, 2, r])⌋ + 1.   (3)

For r = 1, inequality (3) means that

M[N, 3, 1] ≤ √(2 \binom{N′}{⌊N′/2⌋}) + 1.
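The second inequality in (3) is obtained from the pair count above by solving a quadratic inequality; written out (a reading aid, with M = Mf[N′, 2, r] for brevity):

\[
\frac{|C|\,(|C|-1)}{2} \le M
  \;\Longrightarrow\;
  |C| \le \frac{1 + \sqrt{1 + 8M}}{2} \le 1 + \sqrt{2M}
  \;\Longrightarrow\;
  |C| \le \bigl\lfloor \sqrt{2M} \bigr\rfloor + 1,
\]

where the middle inequality holds because (1 + 2√(2M))^2 = 1 + 4√(2M) + 8M ≥ 1 + 8M, and the last step uses the fact that |C| is an integer.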

2.4. First Construction of an [N, 2, r] Code

The first construction of an [N, 2, r] code presented here is a natural generalization of the Tsybakov–Weber construction [7].

Let N′ = N′_1 + N′_2. Let C1 be an [N′_2, 2, r]f code, and let a word z of length N′_1 be such that α(z′, z^(i)) ≥ r for all i = 1, . . . , N′ − 1. Then the code C that consists of all words u′ = z x 0, where x ∈ C1, is an [N, 2, r] code. Indeed, the inequality α(u′, v′^(i)) ≥ r for u′, v′ ∈ C holds in the case of i = 0 because the code C1 is an [N′_2, 2, r]f code, and in the case of i = 1, . . . , N′ − 1 by the property of the word z.


In the Tsybakov–Weber construction [7], we have r = 1, N′_1 = 1, and z is the word of length one with symbol 1. If r > 1, we can take z in the form z = z_0 z_r z_{r−1} . . . z_2, where z_0 is the word consisting of r symbols 1 and z_i for i > 0 is the word of length i with only one 1, which is at position i. For example, z = 1101 for r = 2 and z = 11100101 for r = 3. Note that the length of z is (r(r + 3) − 2)/2.
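The rule for building z and the property required of it are easy to check by computer. The following Python sketch (an illustration; the both-directions sliding check below is one concrete reading of the condition α(z′, z^(i)) ≥ r) constructs z for several values of r and verifies that any nonzero relative shift of another copy of z leaves at least r of the original symbols 1 uncovered.

def build_z(r):
    """z = z_0 z_r z_{r-1} ... z_2: r ones, then blocks of length i with a single 1 at position i."""
    z = [1] * r
    for i in range(r, 1, -1):
        z += [0] * (i - 1) + [1]
    return z

def uncovered(z, offset):
    """Ones of z not covered by a copy of z slid by `offset` positions (offset may be negative)."""
    n = len(z)
    return sum(1 for j in range(n)
               if z[j] == 1 and not (0 <= j - offset < n and z[j - offset] == 1))

for r in (2, 3, 4):
    z = build_z(r)
    ok = all(uncovered(z, d) >= r for d in range(-len(z) + 1, len(z)) if d != 0)
    print(r, "".join(map(str, z)), "length", len(z), "property holds:", ok)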

However, there exist more efficient constructions. For example, it is possible to take as z a pseudorandom sequence, in particular, an m-sequence, with autocorrelation function equal to −1 for all nonzero shifts (if we change 0 and 1 for ±1) [1, 9, 10]. The main property of an (n + 2)-sequence of length L = 2^{n+2} − 1 is that d(z, z^(i)) = 2^{n+1} and α(z, z^(i)) = 2^n, i = 1, . . . , L − 1. It is clear that α(z′, z^(i)) ≥ 2^n. Taking into account that the interval [r, 2r) contains a unique number of the form 2^n, we obtain the following relations:

M̄f[N′ − 8r, 2, r] ≤ M̄[N, 2, r] ≤ M̄f[N′, 2, r],   (4)

where M̄[ · ] (as well as M̄f[ · ]) denotes the maximum size of a constant-weight code with parameters indicated in [ · ]. The second inequality in (4) is trivial. To prove the first inequality in (4), note that, for any r ≥ 1, there is an (n + 2)-sequence of length 2^{n+2} − 1, where 4r − 1 ≤ 2^{n+2} − 1 ≤ 8r − 5. Therefore, if we take an arbitrary [N′ − 8r, 2, r]f code and add to each of its codewords an (n + 2)-sequence as a prefix and some zeros at the end so that the total word length becomes N, we obtain an [N, 2, r] code and the first inequality in (4). Inequalities (4) link the [N, 2, r] problem and the standard error-correcting-code problem.

Instead of an (n + 2)-sequence, we can use a different pseudorandom sequence z of length 4m − 1 such that d(z, z^(i)) = 2m and α(z, z^(i)) = m for all i > 0 (see, for example, [11]).
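As an illustration of the m-sequence option (a sketch; the primitive polynomial x^4 + x + 1 is just one standard choice, giving n + 2 = 4 and L = 15, and is not prescribed by the text), the following Python snippet generates the sequence with a linear feedback shift register and verifies d(z, z^(i)) = 2^{n+1} = 8 and α(z, z^(i)) = 2^n = 4 for every nonzero cyclic shift.

def m_sequence(degree=4, taps=(1, 0)):
    """LFSR output of period 2^degree - 1; for x^4 + x + 1: s[t+4] = s[t+1] XOR s[t]."""
    state = [1] + [0] * (degree - 1)
    out = []
    for _ in range(2 ** degree - 1):
        out.append(state[0])
        new = state[taps[0]] ^ state[taps[1]]
        state = state[1:] + [new]
    return out

z = m_sequence()
L = len(z)
ok = True
for i in range(1, L):
    zi = z[-i:] + z[:-i]                                  # cyclic shift by i
    d = sum(x != y for x, y in zip(z, zi))                # Hamming distance
    a_i = sum(x == 1 and y == 0 for x, y in zip(z, zi))   # ones of z not covered
    ok = ok and d == 8 and a_i == 4
print("".join(map(str, z)), "all nonzero shifts check out:", ok)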

We can apply the first presented construction of [N, 2, r] codes together with Theorem 1 to construct an [N, w, k, 1] code. Theorem 1 claims that we can take an [N, w, k∗, r∗] code as an [N, w, k, r] code if k∗ < k and r∗ > r. For example, let us construct a code with k = 4 and r = 1. Let us take m = 3 (m is the integer from Theorem 1), r∗ = ⌈(1 + 2w)/3⌉, and use some [N, w, 2, ⌈(1 + 2w)/3⌉] code as an [N, w, 4, 1] code. This [N, w, 2, ⌈(1 + 2w)/3⌉] code can be constructed using an [N′_2, ⌊N′_2/2⌋, 2, r]f code, which is a classical coding-theoretic code of weight w = ⌊N′_2/2⌋, length N′_2, and Hamming distance at least 2r.

2.5. Second Construction of an [N, 2, r] Code: Constant-Weight Code

Now, we present the second construction of [N, 2, r] codes. For this, we need two more types of codes.

A cyclic code with words of constant weight w, length N′, and minimum Hamming distance d = 2r is called an [N′, w, r]c code. Let Mc[N′, w, r] denote the maximum size of an [N′, w, r]c code.

Below, we use a cyclically permutable code [12], which is a binary block code of length N′ such that each codeword has N′ distinct cyclic shifts and codewords are cyclically distinct (i.e., no codeword can be obtained by a cyclic shift of another codeword). A cyclically permutable code with words of constant weight w and minimum Hamming distance d = 2r is called an [N′, w, r]p code. Let Mp[N′, w, r] denote the maximum size of an [N′, w, r]p code.

Note that the parallel shift v′^(i) of a word v does not cover more symbols 1 of a word u than the cyclic shift v^(i) does. Hence, M[N, w, 2, r] ≥ Mp[N′, w, r].

Let C2 be an [N′, w, r]c code with r ≤ w/2. For simplicity, we assume that N′ and w are coprime. Then the N′ distinct cyclic shifts u^(i), i = 0, . . . , N′ − 1, of a codeword u of C2 are also in C2, and the code C2 is split into such classes of cyclic shifts. Without loss of generality, we assume that the first position of u contains 1. Let us choose numbers i_1 = 0, i_2, . . . , i_g, where g = ⌊w/r⌋, such that the shift u^(i_j) of u transfers the ((j − 1)r + 1)st symbol 1 of u to the first position. We include the g words u^(i_j) in the code C and do the same for all the remaining classes of cyclic shifts.

Now, we can present a statement on the above-constructed code C.

Theorem 4. A code that consists of all codewords of the code C extended at the end by 0 (the all-zero sequence 0 has length N′ − 1) is an [N, w, 2, r] code.

Proof. If u and v are words in the above-constructed code C and these words are derived from different cyclic classes, then t(u, v) ≥ r since a parallel shift of one word can cover at most the same number of symbols 1 of the other word as a cyclic shift. If u and v are derived from the same cyclic class and i is such that u ≠ v^(i), then α(u′, v′^(i)) ≥ r for the same reason. Finally, if i is such that u = v^(i), then α(u′, v′^(i)) ≥ r since, after the parallel shift by i, at least r of the symbols 1 of the word v become shifted out of the positions 1, . . . , N′, where the word u is located. △

The above-constructed code C has |C2|⌊w/r⌋/N′ words. Hence,

M[N, w, 2, r] ≥ Mc[N′, w, r] ⌊w/r⌋ / N′.   (5)
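A toy worked instance of this construction (a Python sketch; the particular cyclic code below is chosen only for illustration): take N′ = 7, w = 3, r = 1, and let C2 be the seven cyclic shifts of 1101000, a constant-weight cyclic code with minimum distance 4 ≥ 2r. One cyclic class gives g = ⌊w/r⌋ = 3 chosen shifts, and a brute-force check of the condition in Theorem 4 confirms that the resulting words, extended by N′ − 1 zeros, form an [N, w, 2, r] = [13, 3, 2, 1] code.

def cyc(v, i):
    """Cyclic shift of the list v by i positions rightwards."""
    return v[-i:] + v[:-i] if i else v[:]

def choose_representatives(u, r):
    """Pick the shifts u^(i_j) that bring the ((j-1)r+1)-st symbol 1 to the first position."""
    ones = [p for p, b in enumerate(u) if b == 1]
    g = len(ones) // r
    return [cyc(u, (len(u) - ones[(j - 1) * r]) % len(u)) for j in range(1, g + 1)]

def t_min(u, v):
    """min over parallel shifts i of the number of 1s of u' not covered by v'^(i)."""
    n = len(u)
    u_ext, v_ext = u + [0] * (n - 1), v + [0] * (n - 1)
    best = len(u)
    for i in range(n):
        vi = cyc(v_ext, i)
        best = min(best, sum(1 for j in range(len(u_ext)) if u_ext[j] == 1 and vi[j] == 0))
    return best

base = [1, 1, 0, 1, 0, 0, 0]                       # weight 3; all 7 cyclic shifts are distinct
C = choose_representatives(base, r=1)              # three codewords of length N' = 7
print(["".join(map(str, c)) for c in C])
print(all(t_min(u, v) >= 1 for u in C for v in C if u != v))    # expected: True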

3. GENERALIZATION OF THE [N, k, r] PROBLEM

Here, a generalization of the [N, k, r] problem is presented.

3.1. Problem Specification

Consider a code C with |C| codewords of length N. It is assumed that symbols of codewords can appear at times T = . . . , −1, 0, 1, . . . . A codeword of length N can occupy an interval T + 1, . . . , T + N with any T. If a codeword u occupies the interval T + 1, . . . , T + N, we say that u is transmitted in this interval. This interval is called the transmission interval of word u. We assume that each time T belongs to at most k different codewords; i.e., at time T, at most k different codewords are transmitted. The same codeword cannot cover any given time T more than once; i.e., transmission intervals of the same word are disjoint.

Here, we consider {N, k, r} codes, which are a generalization of [N, k, r] codes. A code C is called an {N, k, r} code if each transmitted codeword has at least r positions with symbol 1 which appear at times when other transmitted words have positions with symbol 0. A relation between {N, k, r} and [N, k, r] codes is the following. A code with words of length N = 2N′ − 1, each word having N′ − 1 zeros at the end, is an {N, k, r} code if and only if this code is an [N, k, r] code.

The problem called the {N, k, r} problem is finding an {N, k, r} code with the maximum number of codewords. This maximum number is denoted by M{N, k, r}. The {N, k, r} problem differs from the [N, k, r] problem by the absence of the all-zero condition at the end of codewords. In this paper, we present only one construction of an {N, 2, 1} code and show that the construction gives codes of significantly larger size than the optimal [N, 2, 1] codes. Note that the codewords in our construction of {N, 2, 1} codes also have an all-zero sequence at the end, but the length of this sequence is less than N′ − 1.

3.2. Construction of an {N, 2, 1} Code

In this section, we give a construction of an {N, k, r} code with k = 2 and r = 1. Below, an {N, k, r} code with words of constant weight w is referred to as an {N, w, k, r} code.

Now, let us describe a code C which, as will be proved, is an {N, w, 2, 1} code. Let ℓ and w be positive integers such that ℓ + w ≤ N. Let S(N, w, ℓ) be the set of all different words of weight w − 1 and length N − ℓ − 1 that do not contain the (ℓ + 1)-tuple 0 . . . 01, i.e., ℓ zeros followed by a 1. We define C as the code whose words consist of the following three parts. The first part is the symbol 1 at the first position. The third part is the all-zero ℓ-tuple at the last ℓ positions. The second part is a word of S(N, w, ℓ), each word of S(N, w, ℓ) corresponding to one word of C. The size of C equals the size of S(N, w, ℓ).

Fig. 2. The signal codeword (parts of lengths 1, N − ℓ − 1, and ℓ) and the noise codewords for the shifts considered in the proof of Theorem 5: (a) no shift; (b)–(d) the noise codeword is shifted, with another noise codeword adjoining it.

Theorem 5. The code C is an {N,w, 2, 1} code.

Proof. The proof is illustrated in Fig. 2. Figure 2a shows two codewords of C. One codeword is called the signal and the other is called the noise. The noise is not shifted with respect to the signal. Since the second parts of the words are different and have the same weight, at least one 1 in the signal is not covered.

Now, we shift the noise word leftwards. The worst case for the signal is that another noise codeword adjoins the first one. Let us consider this worst case for different kinds of shifts.

For the shift shown in Fig. 2b, the noise has less than w symbols 1 covering the first and second parts of the signal (these parts jointly have weight w).

For the shift shown in Fig. 2c, the third part of the left noise, which is the all-zero ℓ-tuple, covers an interval inside the second part of the signal. If this interval of the signal is not the all-zero sequence, then it is clear that the signal has at least one noncovered 1. If this interval of the signal is the all-zero sequence, then, according to the construction of the code C, the signal has only zeros after the interval. Therefore, similarly to the case shown in Fig. 2b, the left-hand noise, which is below the first and second parts of the signal, has less than w symbols 1, while the signal has w symbols 1.


For the shift shown in Fig. 2d, the first 1 of the signal is not covered. If we consider right shifts of noise words, we have the same cases. △

Now, let us find the size of the specified code C, which has words with different second parts. We assume that the parameters N, w, and ℓ of the code C are given. We denote the size of C by V(N, w, ℓ).
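Theorem 5 and the size V(N, w, ℓ) can be checked by brute force for small parameters. The following Python sketch (an illustration; the verification enumerates every placement of at most two noise codewords, which, as argued in the proof, is the only case that can arise for k = 2) builds C for N = 9, w = 4, ℓ = 3 and confirms both |C| = 10 and the {N, w, 2, 1} property.

from itertools import combinations

def S(N, w, l):
    """Words of length N-l-1, weight w-1, not containing l zeros followed by a 1."""
    forbidden = "0" * l + "1"
    out = []
    for ones in combinations(range(N - l - 1), w - 1):
        word = "".join("1" if i in ones else "0" for i in range(N - l - 1))
        if forbidden not in word:
            out.append(word)
    return out

def build_code(N, w, l):
    return ["1" + s + "0" * l for s in S(N, w, l)]

def covered(signal, noise, offset):
    """Positions of 1s of `signal` covered by `noise` transmitted `offset` slots later."""
    return {p for p, b in enumerate(signal)
            if b == "1" and 0 <= p - offset < len(noise) and noise[p - offset] == "1"}

def is_Nw21_code(code):
    N = len(code[0])
    for u in code:
        ones = {p for p, b in enumerate(u) if b == "1"}
        for v1 in code:
            for v2 in code:
                if v1 == u or v2 == u:
                    continue
                for s1 in range(-N + 1, N):               # first noise transmission
                    for s2 in range(s1 + N, N):           # optional second, non-overlapping
                        if ones <= covered(u, v1, s1) | covered(u, v2, s2):
                            return False
                    if ones <= covered(u, v1, s1):        # only one noise word overlaps
                        return False
    return True

C = build_code(N=9, w=4, l=3)
print(len(C), "codewords; valid {9, 4, 2, 1} code:", is_Nw21_code(C))   # expected: 10 True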

Lemma 1. The size V(N, w, ℓ) of C satisfies the recursive equation

V(N, w, ℓ) = V(N − 1, w, ℓ) + V(N − 1, w − 1, ℓ) − V(N − 1 − ℓ, w − 1, ℓ),
V(N, N − ℓ, ℓ) = 1,   V(N, w, ℓ) = 0 for ℓ + w > N.   (6)

Proof. Consider the last position of the second part of a codeword (LPSPCW), i.e., position N − ℓ of a word of C. Let us denote by C(0) (respectively, C(1)) the set of codewords of C with 0 (respectively, 1) at the LPSPCW. We have |C(0)| = V(N − 1, w, ℓ) since, when we delete the LPSPCW, we simply obtain a code C with parameters N − 1, w, ℓ. For the set C(1), we have

|C(1)| = V(N − 1, w − 1, ℓ) − V(N − 1 − ℓ, w − 1, ℓ).

The last equality holds for the following reason. When we delete the LPSPCW, we obtain words of a code C with parameters N − 1, w − 1, ℓ. However, the set C(1) does not yield all words of the code C with parameters N − 1, w − 1, ℓ: it misses exactly the words with ℓ or more zeros at the end of the second part (such an ending followed by the deleted 1 would form the forbidden tuple). If we delete the last ℓ positions of the second part from these missing words, we get exactly the code C with parameters N − 1 − ℓ, w − 1, ℓ. Note that a code C with parameters N, w, ℓ was defined for ℓ + w ≤ N. Therefore, we put V(N, w, ℓ) = 0 for ℓ + w > N. △
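Recursion (6) is immediate to implement (a Python sketch; the base cases w ≤ 0 and w = N − ℓ are spelled out as used in the proof), and it reproduces the table values given below:

from functools import lru_cache

@lru_cache(maxsize=None)
def V(N, w, l):
    """V(N, w, l) computed from recursion (6)."""
    if w <= 0 or l + w > N:
        return 0
    if w == N - l:                        # the second part consists of ones only
        return 1
    return V(N - 1, w, l) + V(N - 1, w - 1, l) - V(N - 1 - l, w - 1, l)

print(V(9, 4, 3), V(23, 11, 3), V(23, 11, 4))   # expected: 10 25048 33793 (cf. Tables 1 and 2)
print(max(V(23, w, l) for w in range(1, 23) for l in range(1, 23)))   # expected: 33793 (cf. Table 3)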

Another way of finding V(N, w, ℓ), which is also used in the sequel, is presented in the following lemma.

Lemma 2. V(N, w, ℓ) = W(N − ℓ − w, w − 1, ℓ − 1), where W(x, y, z) satisfies the recursive equation

W(x, y, z) = W(x − 0, y − 1, z) + . . . + W(x − min(x, z), y − 1, z),   y ≥ 2,
W(x, 1, z) = 1 + min(x, z).   (7)

For x ≤ z, (7) has the solution

W(x, y, z) = \binom{x + y}{y}.   (8)

Proof. Here, we need to use distance vectors [7]. A distance vector (v_1, . . . , v_{w−1}) associated with a binary sequence of weight w ≥ 2 is defined as the vector of length w − 1 in which the component v_i indicates the number of zeros between the ith and (i + 1)st symbols 1 in the sequence.

Consider the binary sequence at the positions 1, . . . , N − ℓ of codewords of a code C with parameters N, w, ℓ. Consider the distance vector (v_1, . . . , v_{w−1}) for this sequence. Let W(x, y, z) denote the number of different distance vectors (v_1, . . . , v_y) satisfying the restriction

v_1 + . . . + v_y ≤ x,   0 ≤ v_i ≤ z,   1 ≤ i ≤ y.   (9)

We get (7) if we note that W(x − i, y − 1, z) is the number of distance vectors (v_1 = i, v_2, . . . , v_y) that satisfy (9) for 0 ≤ i ≤ min(x, z).

When x ≤ z, the situation is the same as in the case of M[N, w, 2, 1] with N′ = x + y + 1 and w = y + 1. To get (8), we use the identity

\binom{u}{v} = \binom{u − 1}{v − 1} + \binom{u − 2}{v − 1} + . . . + \binom{v − 1}{v − 1}.   △
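The following Python sketch (illustrative) implements W from (7) and checks Lemma 2 and formula (8) on small parameters; V is recomputed here via recursion (6) so that the snippet is self-contained.

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def V(N, w, l):                                   # recursion (6)
    if w <= 0 or l + w > N:
        return 0
    if w == N - l:
        return 1
    return V(N - 1, w, l) + V(N - 1, w - 1, l) - V(N - 1 - l, w - 1, l)

@lru_cache(maxsize=None)
def W(x, y, z):                                   # recursion (7)
    if y == 1:
        return 1 + min(x, z)
    return sum(W(x - i, y - 1, z) for i in range(min(x, z) + 1))

# Lemma 2: V(N, w, l) = W(N - l - w, w - 1, l - 1), checked here for l = 3, 4.
print(all(V(N, w, l) == W(N - l - w, w - 1, l - 1)
          for l in (3, 4) for w in range(2, 12) for N in range(l + w, 24)))
# Formula (8): W(x, y, z) = binom(x + y, y) whenever x <= z.
print(all(W(x, y, z) == comb(x + y, y)
          for z in range(1, 6) for x in range(z + 1) for y in range(1, 6)))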


Table 1

w\N:  3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
w=1:  0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
w=2:  0 0 1 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
w=3:  0 0 0 1 3 6 8 9 9 9 9 9 9 9 9 9 9 9 9 9 9
w=4:  0 0 0 0 1 4 10 17 23 26 27 27 27 27 27 27 27 27 27 27 27
w=5:  0 0 0 0 0 1 5 15 31 50 66 76 80 81 81 81 81 81 81 81 81
w=6:  0 0 0 0 0 0 1 6 21 51 96 147 192 222 237 242 243 243 243 243 243
w=7:  0 0 0 0 0 0 0 1 7 28 78 168 294 435 561 651 701 722 728 729 729
w=8:  0 0 0 0 0 0 0 0 1 8 36 113 274 540 897 1290 1647 1913 2074 2151 2179
w=9:  0 0 0 0 0 0 0 0 0 1 9 45 157 423 929 1711 2727 3834 4850 5634 6138
w=10: 0 0 0 0 0 0 0 0 0 0 1 10 55 211 625 1507 3061 5365 8272 11411 14318
w=11: 0 0 0 0 0 0 0 0 0 0 0 1 11 66 276 891 2343 5193 9933 16698 25048
w=12: 0 0 0 0 0 0 0 0 0 0 0 0 1 12 78 353 1233 3510 8427 17469 31824
w=13: 0 0 0 0 0 0 0 0 0 0 0 0 0 1 13 91 443 1664 5096 13170 29406
w=14: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 14 105 547 2198 7203 19930
w=15: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 15 120 666 2850 9948
w=16: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 16 136 801 3636
w=17: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 17 153 953
w=18: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 18 171
w=19: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 19

Table 2

w\N:  3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
w=1:  0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
w=2:  0 0 0 1 2 3 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
w=3:  0 0 0 0 1 3 6 10 13 15 16 16 16 16 16 16 16 16 16 16 16
w=4:  0 0 0 0 0 1 4 10 20 32 44 54 60 63 64 64 64 64 64 64 64
w=5:  0 0 0 0 0 0 1 5 15 35 66 106 150 190 221 241 251 255 256 256 256
w=6:  0 0 0 0 0 0 0 1 6 21 56 121 222 357 512 667 802 903 968 1003 1018
w=7:  0 0 0 0 0 0 0 0 1 7 28 84 204 420 756 1212 1758 2338 2884 3340 3676
w=8:  0 0 0 0 0 0 0 0 0 1 8 36 120 323 736 1464 2592 4146 6064 8192 10320
w=9:  0 0 0 0 0 0 0 0 0 0 1 9 45 165 487 1215 2643 5115 8938 14266 20994
w=10: 0 0 0 0 0 0 0 0 0 0 0 1 10 55 220 706 1912 4510 9460 17911 30962
w=11: 0 0 0 0 0 0 0 0 0 0 0 0 1 11 66 286 991 2893 7348 16588 33793
w=12: 0 0 0 0 0 0 0 0 0 0 0 0 0 1 12 78 364 1354 4236 11518 27820
w=13: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 13 91 455 1808 6032 17472
w=14: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 14 105 560 2367 8386
w=15: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 15 120 680 3046
w=16: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 16 136 816
w=17: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 17 154
w=18: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 18
w=19: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1

Table 3

N     M[(N + 1)/2, 2, 1]     max_{w,ℓ} V(N, w, ℓ)
9     6                      10
11    10                     31
13    20                     96
15    35                     294
17    70                     929
19    126                    3061
21    256                    9933
23    462                    33793


Note that (9) in this proof describes, in some sense, the structure of the code C. It is precisely the set of integer points in a multidimensional cube with a bounded sum of coordinates. The volume of this truncated cube (see [13]) can be used for estimating the number of these integer points.

Additionally, we give one more recursive equation for V(N, w, ℓ), namely,

V(N, w, ℓ) = V(N − 1, w, ℓ − 1) + (w − 1) V(N − ℓ − 1, w − 1, ℓ − 1)
  + \binom{w − 1}{2} V(N − 2ℓ − 1, w − 2, ℓ − 1) + . . .
  + \binom{w − 1}{i} V(N − iℓ − 1, w − i, ℓ − 1) + . . . .   (10)
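Relation (10) expresses V for parameter ℓ through V for ℓ − 1 and can be cross-checked numerically against (6) (a Python sketch; V is again computed via (6) to keep the snippet self-contained):

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def V(N, w, l):                                   # recursion (6)
    if w <= 0 or l + w > N:
        return 0
    if w == N - l:
        return 1
    return V(N - 1, w, l) + V(N - 1, w - 1, l) - V(N - 1 - l, w - 1, l)

def rhs_10(N, w, l):
    """Right-hand side of (10): sum over i of binom(w-1, i) * V(N - i*l - 1, w - i, l - 1)."""
    return sum(comb(w - 1, i) * V(N - i * l - 1, w - i, l - 1) for i in range(w))

print(all(V(N, w, l) == rhs_10(N, w, l)
          for l in (3, 4, 5) for w in range(2, 10) for N in range(l + w, 25)))   # expected: True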

Equations for V(N, w, ℓ), for example, (6), can easily be solved for a given ℓ and different N and w. Table 1 gives the size V(N, w, ℓ) for 3 ≤ N ≤ 23, 1 ≤ w ≤ 19, and ℓ = 3. Table 2 gives the size V(N, w, ℓ) for 3 ≤ N ≤ 23, 1 ≤ w ≤ 19, and ℓ = 4. Table 3 compares the size of the optimum code from [7] for k = 2 and r = 1 with V(N, w, ℓ). The size V(N, w, ℓ) shown in the third column is the maximum of V(N, w, ℓ) over w and ℓ for a given N. The table demonstrates that max_{w,ℓ} V(N, w, ℓ) is substantially greater than the size of the optimum code from [7]. A comparison of the optimum code from [7] and codes from [1] was given in [7]. For example, for k = 2 and r = 1, [1] gives codes with lengths 55, 78, 114, and 222, which have sizes 11, 13, 19, and 37, respectively.

Finally, note that one of the sets of words considered in this section was studied by Gilbert [14] (see also [15] by Guibas and Odlyzko). However, here we require an additional constant-weight condition, which was not considered in [14, 15]. To construct {N, 2, r} codes, it is possible to use the codes introduced by Levenshtein in [11, 16, 17], which provide a given Hamming distance between any codeword and the junction of any two codewords.

4. CONCLUSION

In this paper, we gave three constructions of conflict-avoiding codes. Two of them use codewords of length N = 2N′ − 1 with N′ − 1 zero positions at the end and guarantee the successful transmission of a given number of packets for a two-user collision. We showed how to apply one of these constructions to build a conflict-avoiding code for the case of a more-than-two-user collision. The third construction is given for generalized codes, which do not require the N′ − 1 zero positions at the end of codewords. Also, we presented some bounds for the maximum code sizes.

REFERENCES

1. A, N.Q., Györfi, L., and Massey, J., Constructions of Binary Constant-Weight Cyclic Codes and Cyclically Permutable Codes, IEEE Trans. Inf. Theory, 1992, vol. 38, no. 3, pp. 940–949.

2. Bassalygo, L.A. and Pinsker, M.S., Restricted Asynchronous Multiple Access, Probl. Peredachi Inf., 1983, vol. 19, no. 4, pp. 92–96.

3. Györfi, L. and Vajda, I., Construction of Protocol Sequences for Multiple Access Collision Channel without Feedback, IEEE Trans. Inf. Theory, 1993, vol. 39, no. 5, pp. 1762–1765.

4. Massey, J.L. and Mathys, P., The Collision Channel without Feedback, IEEE Trans. Inf. Theory, 1985, vol. 31, no. 2, pp. 192–204.

5. Mathys, P., A Class of Codes for a T-Active-Users-out-of-N Multiple-Access Communication System, IEEE Trans. Inf. Theory, 1990, vol. 36, no. 6, pp. 1206–1219.


6. Tsybakov, B.S. and Likhanov, N.B., Packet Switching in a Channel without Feedback, Probl. Peredachi Inf., 1983, vol. 19, no. 2, pp. 69–84 [Probl. Inf. Trans. (Engl. Transl.), 1983, vol. 19, no. 2, pp. 147–161].

7. Tsybakov, B.S. and Weber, J.H., Conflict-Avoiding Codes, in Proc. 17th Symp. on Information Theory in the Benelux, Enschede, The Netherlands, 1996, pp. 49–55.

8. Johnson, S.M., A New Bound for Error-Correcting Codes, IRE Trans. Inf. Theory, 1962, vol. 8, no. 3, pp. 203–207.

9. MacWilliams, F.J. and Sloane, N.J.A., The Theory of Error-Correcting Codes, Amsterdam: North-Holland, 1977. Translated under the title Teoriya kodov, ispravlyayushchikh oshibki, Moscow: Svyaz’, 1979.

10. Simon, M.K., Omura, J.K., Scholtz, R.A., and Levitt, B.K., Spread Spectrum Communications, vol. 1, Rockville: Comp. Sci., 1985.

11. Levenshtein, V.I., One Method of Constructing Quasilinear Codes Providing Synchronization in the Presence of Errors, Probl. Peredachi Inf., 1971, vol. 7, no. 3, pp. 30–40 [Probl. Inf. Trans. (Engl. Transl.), 1971, vol. 7, no. 3, pp. 215–222].

12. Gilbert, E.N., Cyclically Permutable Error-Correcting Codes, IEEE Trans. Inf. Theory, 1963, vol. 9, no. 2, pp. 175–182.

13. Khachiyan, L.G., The Problem of Computing the Volume of Polytopes is NP-Hard, Uspekhi Mat. Nauk, 1989, vol. 44, no. 3, pp. 199–200 [Russian Math. Surveys (Engl. Transl.), 1989, vol. 44, no. 3, pp. 179–180].

14. Gilbert, E.N., Synchronization of Binary Messages, IRE Trans. Inf. Theory, 1960, vol. 6, no. 4, pp. 470–477.

15. Guibas, L.J. and Odlyzko, A.M., Maximal Prefix-Synchronized Codes, SIAM J. Appl. Math., 1978, vol. 35, no. 2, pp. 401–418.

16. Levenshtein, V.I., Maximum Number of Words in Codes without Overlaps, Probl. Peredachi Inf., 1970, vol. 6, no. 4, pp. 88–90.

17. Levenshtein, V.I., Bounds for Codes Providing Error Correction and Synchronization, Probl. Peredachi Inf., 1969, vol. 5, no. 2, pp. 3–13.
