92
AVERAGE CASE REDUCTIONS FOR SUBSET-SUM AND DECODING OF LINEAR CODES Geneviève Arboit A thesis submitted in conformity with the requirements for the degree of Master of Science Graduate Department of Cornputer Science UniversiS. of Toronto Copyright @ 1999 by Geneviève Arboit

AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

  • Upload
    others

  • View
    6

  • Download
    0

Embed Size (px)

Citation preview

Page 1: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

AVERAGE CASE REDUCTIONS FOR SUBSET-SUM AND DECODING OF LINEAR CODES

Geneviève Arboit

A thesis submitted in conformity with the requirements for the degree of Master of Science

Graduate Department of Cornputer Science UniversiS. of Toronto

Copyright @ 1999 by Geneviève Arboit

Page 2: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

National Library m+m of Canada Bibliothèque nationale du Canada

Acquisitions and Acquisitions et Bibliographie Services services bibliographiques

395 Wellin910(1 Strwt 395. rue Welliigtm OnawaON KlAûN4 OttawaON K 1 A W canada Canada

The author has granted a non- exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats.

The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts fiom it may be printed or otherwise reproduced without the author's permission.

L'auteur a accordé une licence non exclusive permettant à la Bibliotheque nationale du Canada de reproduire, prêter, distribuer ou vendre des copies de cette thèse sous la forme de microfiche/nlm, de reproduction sur papier ou sur format électronique.

L'auteur conserve la propriété du droit d'auteur qui protège cette thèse. Ni la thèse ni des extraits substantiels de celie-ci ne doivent être imprimes ou autrement reproduits sans son is'rs.'corisation.

Page 3: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Abstract

Average case reductions for Subset-Sum and Decoding of Linear Codes

Geneviève Arboit

Master of Science

Graduate Department of Computer Science

University of Toronto

1999

In a 1996 paper, R. Impagliazzo and M. Naor show two average case redudions for the

Subset Stim problem (SS). We use similar ideas to obtain stronger and additional such

reductions for SS, Furthemore, we use modifications of these ideas to obtain similar

reductions for the Decoding of Linear Codes problem (DLC). The theorems give further

evidence that the hardest case for Average case SS is when the number of integers is equal

to their lengt h. For Average case DLC, the theorems give evidence that the hardest case

is when the dimension of the code is equal to the channel capacity times the length of

the words.

Average case SS and DLC hardness assumptions can be used to obtain one-way

funct ions, pseudorandorn generators, and secure private-key cryptography.

Page 4: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Acknowledgments

1 dedicate this work to my parents, who have always supported me in more ways than

I can enumerate.

I am t hanlcful to my second reader, mi ch el Molloy, as weil as to Daniel Panario, who

pointed me to combinatorics references and techniques, which directly applied to the first

parts of Section 5.3, as well as, generally, to subsequently obtained results. I also thank

rny supervisor, Charles Rackoff for showing me how much more than 1 fhought there is

to probability theory.

1 thank the foUowing people for some very useful conversations: Micah Adler, Faith

Fich, Adreas Goerdt, Valentine Kabanets, Daniele Micciancio, Lucia Moura, François

Pitt, Amit Sahai, Steven Tanny.

I thank the following people for their precious morale support: Albert Camus, Hole,

Kerry Khoo, Jean Leloup, Daniel Nyborg, Patti, NataSa Przulj, Alex Vasilescu, Ning

wu.

I thank the following people for encouraging me to undertake graduate studies: Syed

Ali, Elie Cohen, Joel Hillel, Clement Lam, Daniel Nadeau, Geza Szamosi.

1 am grateful to the Natural Sciences and Engineering Research Council for their

generous assistance through a NSERC PGS A gant.

Page 5: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Contents

1 Introduction 1

1.1 Computational complexity theory . . . . . . . . . . . . . . . . . . . . . . 1

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Subset Sum problem 3

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.1 VJorst case SS 3

. . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.2 Average case SS 4

. . . . . . . . . . . . . . . . . . . . . 1.3 Decoding of Linear Codes problem 9

. . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.1 Worst case DLC 11

. . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.2 Average case DLC 12

. . . . . . . . . . . . . . . . . . 1.4 Approximation results for SS and DLC 20

. . . . . . . . . . . . . . . . . . . . . . . . . 1.4.1 Subset Sum problem 20

. . . . . . . . . . . . . . . . . 1.4.2 Decoding of Linear Codes problem 21

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.5 Notes 22

2 Reductions for the Subset-Sum Problem 23

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Introduction 23

. . . . . . . . . . . . . . . . . 2.1.1 Preliminary conventions and lemma 23

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.2 Results 24

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Four reductions 25

. . . . . . . . . . . . . . . . . . . . . . . 2.2.1 Notation and definitions 25

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.2 Reductions 27

Page 6: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Notes 39

3 Reductions for the Decoding of Linear Codes Problem 40

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Introduction 40

. . . . . . . . . . . . . . . . . 3.1.1 Preliminary conventions and lemma 40

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.2 Results 41

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Four reductions 42

. . . . . . . . . . . . . . . . . . . . . . . 3.2.1 Notation and dehitions 42

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.2 Reductions 44

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Notes 60

4 Conclusion and open problems 61

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1 Subset Sum 62

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1.1 Conclusion 61

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1.2 Improvements 63

. . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Decoding of Linear Codes 64

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.1 Conclusion 64

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.2 Irnprovements 65

5 Appendix: Lemmata 68

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Jensen's inequality 68

. . . . . . . . . . . . . . . . . . . . . . . 5.2 Bounds on the entropy function 69

. . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Bounds in Pascal's triangle 71

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4 Technicai lemmata 73

. . . . . . . . . . 5.4.1 Error vecton from an instance of DLC(n, m', E ) 73

. . . . . . . 5.4.2 Error vectors from an oracle solution to DLC(n, m, E ) 75

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5 Notes 81

Bibliography 82

Page 7: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Chapter 1

Introduction

1.1 Computational complexity theory

Comput at ional complexity t heory analyses the time efficiency and resource costs, includ-

ing space and possibly random bits, of problem solving. Ln particular, modern cryptog-

raphy is based on a gap between the efficiency of algorithms for legitimate users, and the

computational infeasibility for an adversary trying to extract information. Consequently,

a necessary condition for the existence of secure encryption schemes is that NP is not

contained in BPP, and thus that P # NP.

However, that P # NP implies only that there exist encryption schemes t h a t are

hard to break in the worst case. It may be the case that an NP-complete problem can

not be solved efficiently only on a few artificially difficult instances. In fact, any NP-

complete problem could be easy to solve almost always, and particuiar examples of this

can be constructed [Wi184]. It is in this sense that worst case complexity theory gives no

information about average instances of a problem.

Consequently, the subject matter of this thesis is related to considering average case

versions of some NP-cornpiete pmblems which are not known to be easy to solve. It is

often the case that if a problem is hard oc the average, then it can be used to obtain

Page 8: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

a "one-way function". Informally, these are functions that are efficiently computatle,

while being hard to invert on average.

Definition 1 [One-way function] A function f : {O, 1)' H (0,l)' L.r one-way if the

following two conditions hold:

1. f 2s easy to cornpute: There ezists a deterministic polynomial time algorithm, A,

so that on input x algorithm A outputs f (x), L e . A(s ) = f ( x ) .

2. f is hard to inuert: For euery probabilistic polynomial-time algorithrn, I , and for x

uniformly selected in { O , 1)"

for euery constant c and for su .c ien t ly large T Z .

The first property is intended to be used by "legitimate usersn, and the second, for the

counteraction of potential "adversaries". It can be proven that from one-way functions,

we can construct pseudorandom generators, secure private-key cryptography, and secure

digi ta1 signature schemes [Lub96, Lectures 9-10, 11-12, and 171. However, it is not known

whether one-way functions exist, since their existence implies P # NP.

There exists three types of conjectures on how one-way functions may be obtained.

The first type is number theoretic. An assumption on the intractability of factorization,

and another, on that of discrete logarithm, can be used to obtain one-way functions.

While these assumptions are most commonly used to obtain secure public-key cryptogra-

phy, the known secure private-key cryptography schemes that t hey yield are too inefficient

to be used in practice. The second way is to conjecture that given algorithrns directly

produce pseuderandom generators. A one-way function can be derived from the as-

sumption that the encryption algonthm named "Digital Encryption Standard" (DES) is

a pseuderandom generator. This derived one-way function is used in the UNIX password

scheme.

Page 9: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Finally, we try to obtain one-way functions frorn hardness assumptions on average

case versions of NP-complete problems. This thesis concentrates on this third way, and

in particular on the average case Subset Sum (SS) problem and average case Decoding of

Linear Codes (DLC) problem. The conjecture that these functions (with certain choices

of parameters) are one-way is supported by the failure of extensive research to produce

efficient adversary algorithms [G0198, Section 2.2.41.

1.2 Subset Sum problem

Lnformally, the Subset Sum (SS) problem is the following. A set of n integers, each of m

bits in length, and a sum, rnodulo 2m, of some of these integers are given. The problem

is to find a seIection of the integers that adds to the given sum, modulo 2".

An n x rn matrix A represents the n integers of m bits each. Each row A; represents

one of the m-bit integers, also called "weightsn. The given sum a is m-bit long, and

the set, consisting of the original selection of integers, is represented by its n-bit long

incidence vector x. The Subset Surn problem, parameterized by the number of integers

n and their length, m, is denoted by S S ( n , m).

Definition 2 [Scalar product for SS(n, m)] Let x E {O, 1)" be a 1 x n row vector and

A E { O , 1)"" be an n x m matriz A E {O,l)"m.

1.2.1 Worst case SS

Then the scalar product of x and A is

mod 2"

The definitions of the two worst case versions of the SS problem folIow, as weU as what

is known on the hardness of their solution.

Page 10: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Definition 3 [Decisional SS(n, m)] Giuen Ai, ..., A, E (O, 1 )* and a E {O, 1)*, de-

termine whether there is an x E { O , l)" such that a = x O A.

Definition 3 is not identical to the definition of the Subset Sum problem given in

[GJ79, NP-complete problem SP 131, but the two defmitions are equivalent (the latter is

clearly reducible to the former, while the former is also reducible to the latter, with the

help of linear time guessing of some extra carry bits). Worst case Decisional SS is one

of the original problems that was proven to be NP-complete by Karp [Kar72].

Theorem 4 [GJ79, NP-comptete problem SPI31 The Decisional SS problem is

NP -cornpiete.

Definition 5 [Functional SS(n, m)] Giuen Ai, ..., A, E {O, l)m and a E {O, IIm, find,

ÿ such ezists, an x E {O, 1)" such that a = x A.

Since Decisional SS is NP-complete, Functional SS is reducible to it. Clearly, F'unc-

tional SS is harder, so it is also NP-hard.

1.2.2 Average case SS

Average case SS is the fist of the two problems in whose complexity we are interested. We

will analyze average case reductions for SS. Our aim is to obtain average case reductions

between versions of SS with a choice of parameters, and SS with another choice of

parameters.

The definition of the average case version of the problem follows below, as well as a

conjecture on the hardness of its solution. The following notation is used when a random

variable uniforrnly samples a given space. Let x be a random variable. Thec x EU S

means that x is chosen uniforrnly within the domain S.

Definition 6 [Average case SS(n , m)] Let Ai, ..., A, Eu {O, 1)" and x Ep {O, l)",

and then let cr = x 0 A. Given Ai, ..., An and a, find x' E {O, 1)" satisfying a = x' O A.

Page 11: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

in order to conjecture that Average case SS(n,rn) is hard, the statement of the

problem needs to be dependent on only one parameter, since the conjecture should be of

the form "For every polynomial t ime probabilist ic algorithm, the probability of solving

S S ( n , rn) is small as a function of n". So let the parameter m be a parameter function

t hat depends on n. In other words, the value of m is denoted by g(n).

Conjecture 7 [Hardness for Average case SS w.r.t. to the parameterization

rn = g(n)] Let i be a probabilistic polynomial t i m e a l g o d h m of the following fonn . Let

n E N+ and let m = g(n). Algorithm I that takes as input a pair consisting of an n x rn

matriz and a n m-bit row uector, and r e t u m as output an n-bit row uector. Define the

probability p ( n ) via the following experïment.

Let Al, ..., A, be rn-bit integers, chosen uni formly and independentfy, and let x 6e an

n-bit row vector, chosen uniformly and independently. Then let a = x @ A and define

p ( n ) = Pr[l(A, a) A = a].

1 Then for euery constant c and for suf icient ly large n, it holds that p(n) < ,.

It is easy to see that this conjecture is equivalent to saying that the following function

is one-way (as d e h e d by Definition l), assurning that the s a m e parameterization m =

g(n) is used.

Definition 8 [Candidate one-way function from SS w.r.t. to the parameteri-

zation m = g(n)] Let n E N+ and let m = g ( n ) . Let A be an n x rn bit matriz and x an

n-bit row uector. Define the function fss,, as

mapping nm + n bits to nm + m bits.

Solved instances The general Average case S S ( n , rn) problem is solvable for special

dimensions, which give the problem a special structure.

Page 12: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

The problem is said to have "low density", if m > n. Idormally, if m E Q(n2), there

is an polynomial time algorithm solving Average case S S ( n , m) with high probability

[CJL+92]. It is an improvernent over the algorithms in [LOI351 and [Bri83]. Al1 such

algorithms use a reduction of SS into a shortest vector in a lattice approximation problem.

The problem is said to have "high densityn , if nl < n. For very small m, that is,

informaliy for m E O(logn), there is a dynamic prograrnming algoethm for Worst case

S S ( n , m) that runs in time 0 (n2) [TS86]. If rn < f log(n), there are even more efficient

algorithms [GM91]. For m = n, the best known attack takes time 0(2f) and space

~ ( 2 4 ) [SS79].

Past results and this thesis One of the few places in which resuits on average case

SS are found is [INSG]. Impagliazzo and Naor show that, supposing that for some c > 1

and for n E N+ suf£iciently large, g(n) 2 cn, if f s s , is one-way, then fssg is aiso a

pseudorandom number generator [Theorem 2.21; supposing that for some c < 1 and for

n E N+ sdcient ly large, g(n) < cn, if fss, is one-way, then fss, is also a famjly of

universal one-way hash functions [Theorem 3-11.

They are also interested in average case reductions between different SS(n, rn) prob-

lems. The subject matter of this thesis is such average case reductions. We will ask: if

a solution can be found with sorne probability, for given dimensions of the problem, then

with what probability can a solution be found for other dimensions? Impagliazzo and

Naor show the following [IN96, Proposition 1.21.

1. Suppose g(n) 5 g'(n) for all n, and cn 5 g ( n ) for some c > 1 and n suficiently

large. Then hardness w.r.t. to the parameterization m' = g'(n) implies hardness

w.r.t. to the parameterization m = g(n).

2. Suppose g(n) 2 g'(n) for all n, and m 2 g(n) for some c < 1 and n sufficiently

large. Then hardness w.r.t. to the parameterization m' = g'(n) implies hardness

w.r.t. to the parameterization m = g(n)

Page 13: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Rackoff stated a stronger form of these two theorems [Rac98], which foliow.

1. Suppose n 5 g(n) < g'(n) for ail n. Then hardness w.r.t. to the parameterization

rnt = gt(n) implies hardness w.r. t. to the parameterization m = g(n).

2. Suppose n 2 g(n) 3 g'(n) for ail n. Then hardness w.r.t. to the parameterization

m' = g'(n) implies hardness w.r.t. to the parameterization m = g(n).

Instead of parameterking rn as a function gfn) of n, we can consider the parameterï-

zation of n as a function h(m) of m. Just as we can conjecture the Hardness for Average

case SS w.r.t. to the parameterization m = g(n), we can conjecture Hardness for Average

case SS w.r.t. to the parameterization n = h(nz), and similarly obtain an equivalent can-

didate one-way function. The relationship between these two types of parameterization

will be discussed in Section 4.1.1.

Correspondingly, Rackoff stated t hese additional two theorems [Rac98], which follow.

1. Suppose ht(m) 5 h(m) 5 m for al1 m. Then hardness w.r.t. to the parameteriza-

tion nt = ht(rn) implies hardness w.r.t. to the parameterization n = h(rn).

2. Suppose V(m) 2 h(m) 2 m for all m. Then hardness w.r.t. to the parameteriza-

tion n' = h'(m) implies hardness w.r.t. to the parameterization n = h(m).

We wiil prove al1 four above stated theorems in Chapter 2. Each of the theorems is

proven by a reduction, which are, with abusive notation, the foilowing.

1. If n 5 m 5 mt, then S S ( n , mt) a,, S S ( n , m).

2. If n 3 m 2 mt, then SS(n,m') a,, SS(n,m).

3. If nt 5 n 2 rn, then SS(nt, m) ocav S S ( n , m).

4. If n' 2 n 2 m, then SS(nt, m) a, S S ( n , m).

Page 14: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

The meaning of this notation has to be explained, since the statements are asymptotic,

although they do not appear to be. The simplest way is to give an explanation by

example.

We mean by (1) that there is a probabilistic oracle algorithm J with the foilowing

properties. Algorithm J has for inputs n, m and rn' such that n 5 m 5 m', and a

deterministic oracle I for (average case) S S ( n , m). We defme p to be the probability that

the correct solution to SS(n, m) is returned, when the inputs are distributed "properly"

for the Average case SS(n,m) , That is, for A, an n x m bit matrix chosen unifody

and independently, and for an n-bit row vector x chosen uniformly and independently,

let CY = x O A. Then p is the probability that [ ( A , a) returns an n-bit vector such that

I (A , a) O A = a.

Algorithm J uses oracle I in attempting to solve instances of (average case) S S ( n , ml).

Algorithm J runs in time polynomial in m' = max(n,m,mt,n'). Let p' be the prcba-

bility that J solves uproperly" distributed instances of SS(n,mf). That is, for for B,

an n x mf bit matrix chosen uniformly and independentiy, and for an n-bit row vector

y chosen uniformly and independently, let = y O B. Then p' is the probability that

J ( B , P, n, m, m', 1) returns an n-bit vector such that J ( B , 8, n, rn, m', I ) @ B = 4. We

( 2

will show that, for nf = max(n, n, m', nt) sufIiciently large, p' 2 pol y m ~ ~ n , ~ , m l , n l ~ . The four theorems above imply two simpler statements, which follow.

)

1. Hardness w.r.t. to the parameterization m' = g'(n) implies hardness w.r.t. to the

parameterization m = n.

2. Hardness w.r.t. to the parameterization n' = ht(rn) impiies hardness w-r.' . to the

parameterization n = m.

Statement (1) is implied by the theorems of [IN96, Proposition 1-21. The reason why

these statements are not equivalent to one another is subtle and will be discussed in

Section 4.1.1. In particular, one might think that if g = Lh-'1 or h = lg-'J, then (1)

Page 15: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

and (2) are equivalent, but they are not.

The above statements (1) and (2) can be summarized by saying that if Average case

SS is hard for some dimensions (n, g ( n ) ) , then it is hard for (n, n); if Average case SS

is hard for some dimensions (h(m), m), then it is hard for (m, m). One might think that

this means that S S ( n , m) is hardest when n = m, i.e. when the number of integers is

equal to their length. However, what this means is not clear and is actuaUy misleading.

All we have is that certain reductions where the probability of solution is never de-

creased "too much" by using sohtions for instances where m = n. One might think

that, while SS(n,n) is hard, the larger n is, the harder it gets, but this is not clear. For

example, we do not know how to compare SS(n2, n2) aod SS(n, n). This open question

will be discussed in Section 4.1.2.

1.3 Decoding of Linear Codes problem

The purpose of error-conecting codes is to correct errors in messages transmitted on noisy

communication channels. We are concerned with binary symmetric charnels, where the

probability that the charnel makes a mistake, Le. that a bit is flipped, is a constant

E. An additional assurnption is made: the errors, for each bit, are independent '. The

t ransmitt i ng end attempts to protect messages by adding some redundancy to them.

This allows the receiving end to make some corrections.

From now on, we consider a to be a fixed constant fraction such that O < e < $. Ife >

$, then interchanching the names of the received symbols changes this to a binary channel

with E < f. If E = $, then no communication is possible [MS77, Section 1.1, Problem

(3)]. This is a ~pecial case of Shannon's bound (further discussed in Section 1.3.2) where

1 the capacity of the channel is zero, whicu occurs when the error probability E = 5 .

Inforrnally, the rate at which a message can be sent through a chamel involves the

'In concrete engineering situations, the errors often are, on the contrary, correlated.

Page 16: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

expansion of the message by a factor l/(capacity of a channel with error probability E ) .

So if the capacity is zero, then messages are not transmittabte, If E = O, then ail the

results we later present still hold, though they are not interesting because decoding can

trivialIy be done efficiently via linear algebra. The question of not requiring E to be

constant will be addressed in Section 4.2.2.

One cornmon, simple, and efficient method fgr encoding and decoding, is that of

'linear codesn [MS77], which are determined by n x m bit matrices A. We will refer to

any such A as a linear code of word length rn and dimension n. The ratio 2 is the

rate of the code. For a fixed n x m bit A, an n-bit message x will be transmitted as the

m-bit codeword xA. Some bits of the received word are flipped, which correspond to the

one entries of an m-bit row vector e. The received word is equal to XA + e.

Informally, the Decoding of Linear Codes (DLC) problem is the following. We are

given an n x m bit matrix A, and a bitwise sum of some its rows, with at most a constant

fraction E of its bits flipped. The problem is to find a selection of A's rows that bitwise

adds to the given sum, with at most a fraction e of the sum's bits flipped. The Decoding of

Linear Codes problem, parameterized by the dimensions of the matrices A's considered,

n and m, is denoted by DLC(n, m).

Definition 9 [Scalar product for DLC(n, m)] Let x E {O, 1)" be a 1 x n row uector

and A E {O, lInm be an n x m bit matriz. Then thr scalar product of x and A is

XA = ( C y A j i mod 2, ..., C z ~ A ~ ~ rnod 2 )

2To take n to be the number of rows and m to be the number of columns follows the conventions es- tablished for SS(n, m) in Section 1.2, which follow the conventions in the literature cited there. However, most error-correcting code references use k and n instead of n and m, respectively.

Page 17: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

1.3.1 Worst case DLC

The definitions of the two worst case versions of the the problem foliow, as wcil as what

is known on the hardness of their solution. The Hamrning weight fwction w(e) denotes

the number of one's in a bit vector e.

Definition 10 [Decisional DLC(n, m)] Giuen Ai, ..., An E (O, lIrn and a E ( O , l)',

determine whether there is an x E {O, L)" and an e E { O , 1)'" such that a = xA + e and 44 5 l&m]

Definition 10 is equivalent, although not identical, to the definition of the Decoding of

Linear Codes problem given in [GJi9, NP-complete problem MS71. This last definition

utilizes duality properties of Linear codes [MS77, Section 1.81. Worst case Decisional

DLC was proven to be JfP-complete in [BMvTS].

Theorern 11 [GJ?9, NP-complete problem MS7] The Decisional DLC problem is

NP-complete .

Definition 12 [finctional DLC(n, m)] Giuen Ai, ..., A, E {O, llrn and a E (O, Ilm, find, if such ezist, an x E {O, 1)" and an e E {O, 1)' such that a = x A + e and w(e) $

IN Since Decisional DLC is NP-complete, Functional DLC is reducible to it . Clearly,

Functional DLC is harder, so it is also NP-hard.

Good codes The notion of "minimum distance of a coden permits to classify codes

as good or bad. Because an error vector of lesser weight is more likely than another

with greater weight [MS77, Section 1.3, Problem (7)], when the receiving end decodes

a transmitted codeword, ili will always pick the error vector which hâs the least weight

possible 3. This is called "nearest neighbor decoding" or "maximum likeli hood decoding"

3 ~ h i s strategy is best assurning th?-t a transmission is done o d y once. Schemes using sorne error detection and retransmission are different.

Page 18: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

[MS77, Section 1.3, p.1 lf.

Definition 13 [Minimum distance of a code] Let an n x m bit matriz A be a code.

Then its minimum distance d is the minimum Hamrning distance between i ts codewo7ds.

If A has linearly dependent rows, Le. A does not have fidi rank, then for x # XI, it

is possible that xA = x'A, so that the minimum distance of A is zero. Note that in the

literature on error-correcting codes, A always has full rank. Definition 13 is equivalent

to the following [MS77, Section 1.3, Theorem 11.

Definition 14 [Equivalent minimum distance of a code] Let an n x m bit mat& A

be a code. Then its minimum distance i s the minimum Hamrning weàght of any nonzero

codeword.

if A has minimum distance d, then it is possible to decode uniquely for up to 1 9 1 errors [MS77, Section 1.3, Theorem 21. ü there are more errors, then the received word

may or may not be closer to some other codeword than to the correct one. If it is,

t h e decoder will be deceived into outputting the wrong codeword, yielding a "decoding

error". For a code to be good in practice, it is required that this rarely happens [MS77,

Section 1.3, p.111.

1.3.2 Average case DLC

Average case DLC is the second of the two problems in whose complexity we are inter-

ested. UOne of the most outstanding open problems in the area of error correcting codes

is that of presenting efficient decoding algorithms for random linear codes. Of particular

Page 19: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

interest are random !inear codes with constant information rate which can correct a con-

stant fraction of errors." [Go198, Section 2.2.41 We will analyze average case reductions

for DCL. S-a- aim is to obtain average case reductions between versions of DLC with a

choice of parameters, and DLC with another choice of parameters.

Bounds for the achievable rate

The concept of entropy was defined by Shannon in [Sha48], where two fundamental

theorerns were first published. Shannon's First Theorern is concerned with the sending

of i nformation through noiseless channels. S h a ~ o n ' s Second Theorem is concerned with

the sending of information through noisy channels. It contains two parts, which we refer

to as an upper and a lower bound. Infurmally, the upper bound guarantees that most

(not necessarily linear) codes with rate bounded above by capacity have arbitrarily smail

decoding error probability. Inforrnally, the lower bound implies that codes with rate

bounded below by capacity have a best decoding error probability that is bounded away

from O, as as the word length m increases. The birth of information theory can be dated

from Shannon's publication of this fundamental paper, which can also be found as the

second part of the book [SW64].

Rather than stating Shannon's Second Theorem, we wiLi state stronger versions of

the above mentioned upper and lower bounds as Theorems 18 and 19, respectively.

Informally, Theorem 18 guarantees that most linear codes with rate bounded above by

capacity have arbitrarily small decoding error probability. Infonnaily, Theorem 19 states

that codes with rate bounded below by capacity have a best decoding error probability

t hat approaches 1, as the word length m increases. This stronger form of the lower bound

was proven in [Wo160].

Shannon also pioueered the use of entropy in cryptography [Sha49].

Definition 15 [Entropy] The entrop y function is H (e) = -6 10&) - (1 - E ) log(1- E ) ,

where the logarithms are taken in base 2 .

Page 20: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Definition 16 [Capacity] The capacity function is C(E) = 1 - H ( E ) .

The following theorems show that random linear codes are likely to be good codes,

as defined at the end of Section 1.3.1. Nevertheless, that a code is good does not imply

the existence of an eficient decoding aigorithm.

Theorem 17 [An upper bound: Generalized Gilbert-Varsharnov Bound] Let

R < C(E). Then Yalmost all" n x m linear codes of rate 5 R have min imum distance

> cm, in the following sense- The fraction of n x m linear codes of rate 5 R that have

srnaller minimum distance tends t o O as m tends to W.

The proof of this theorern can be found in [TVSI, p. 771.

The capacity of the channel C(E) will play an important role in our results. In fact,

the statement that m is about equal to ci?;i wilI play the same role as the expression

rn = n does in the results for Average case SS, as stated at the end of Section 1.2.2.

As rnentioned at the end of the previous section, it is possible to achieve perfect

maximum likelihood decoding with a code of minimum distance 2em + 1 when no more

than Ern errors occur, which is the expected number of errors when the probability that

the channel rnakes an error is E.

Nevertheless, Theorem 18 below parantees that most linear codes of rate bounded

above by CIE), can be decoded arbitrarily close to perfectly, for a chamel with bit error

probability e. Informally, this says that most such codes act as if they had minimum

distance 3 2em + 1, on average.

Theorem 18 [An upper bound: Coding Theorem for linear codes] Let R <

C(E). Then uuLmost all" n x m linear codes of rate 5 R have a small probability 01 decoding error, in the following sense.

Consider a random linear code and a binary symmetRc channel of capacity C(E).

Assume maximum likeiihood decoding is used. Consider a random encoded message and

a random error vector. Then the pmbability (over the choice of code, message, and enor)

Page 21: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

that the wrong codeword is decoded tends to O as m tends to 00.

A statement and an explanation of Theorem 18 can be found in p S 7 7 , Section 1.61,

and a proof for a random linear code with the binary symmetric charnel in [Ga168,

Section 6-21.

Theorem 19 [A lower bound: Impossibility of diable coding] Let R > C(e) .

In every binary symmetr ic channel of capacity C(E), wheneuer a (not necessarily linear)

code with codewords of length nz kas rate 2 R, then the code tends to be totally unreliable

for a n y decoding strategy, i.e. its error probabdity tends t o 1 , wtth increasing m.

Theorem 19 was proven by Wolfowitz [Wo160].

Average case DCP

The definition of the average case version of the DCP problem follows, as weil as a

conjecture on the hardness of its solution. The notation EU is used as defined in Sec-

tion 1.2.2.

Definition 20 [Average case DLC(n, m)] Let Ai, ..., A, EU {O, l)", x EU {O, l)",

and e Eu {s E {O, lIrn & W ( S ) 5 L ~ r n ] ) , and then let a = x A + e. Giuen Al, ..., A,, and

a, Jind xt E { O , 1)" and ef E { O , Ilrn satisfying cr = x'A + ef and w(et) 5 [~rn].

As in Section 1.2.2, in order to conjecture that Average case DLC(n,m) is hard,

the statement of the problem needs to be dependent on o d y one parameter, since the

conjecture should be of the form KFor every polynomial time probabilistic algorithni, the

probability of solving DLC(n,m) is small as a function of nn. So let the value of the

parameter m be denoted by g ( n ) .

Conjecture 21 pardness for Average case DLC w.r.t. to the parameteriza-

tion m = g(n) ] Let I Se a probabilistic polynomial t ime algorithm of the following f o m .

Let n E N+ and let m = g ( n ) . Algorithm I that takes as input a pair consisting of an

n x m matriz and a n m-bit row uector, and retums crs output a pair cotlsisting of a n

Page 22: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

n-6it row uector and a n m-bit row uector. Define the probabifity p ( n ) via the following

experiment.

Let Al, ..., A, be m-bi t integers, chosen u n i f o m l y and independently, let x be a n n-bit

row vector, chosen u n i f o m l y and independently. and let e be an m-bit row uector, chosen

uniformly and independently arnong al1 such uectors with Harnming weight no more than

l ~ r n J . Then let a = XA + e and (x', el) = I(A, a). Define p ( n ) = Pr[x' O A + e' = a].

T h e n for euery constant c and for s u f i e n t l y large n, it holds that p(n) < 5. It is easy to see that this conjecture is equident to saying that the following function

is one-way (as d e h e d by Definition l ) , assuming that the same parameterization m =

g ( n ) is used.

Definition 22 [Candidate one-way function from DLC w.r.t. to the parame-

terkation m = g ( n ) ] Let n E N+ and let rn = g ( n ) . Let A be a n n x m bit matriz, x an

n-bit rou: vector, and e( i ) a n m-bit row uector such that w(e ( i ) ) < L&rnJ. The parameter

i indexes the xjzJ - ( y ) e m r uectors in a natural woy. Define the function fcodeg as

rnapping nrn + n + log (~izj (7) ) bits to nm + rn bits.

1 Solved instances Conjecture 21 seems reasonable when a = i and m is about ci;in.

It is certainly not true for some other values of m and n. For instance, the problem is

easy when m < n or, for a constant c > O, when m 2 2 n k

If m 5 n, then Average case DLC can be solved via linear algebra with constant

probability greater than 0.28. This is approximatively the probability that a random

n x n rnatrix has f d rank [BvH82, Lemma 21. This probability is greater for n x m

matrix with m < n.

If for a constant c > O, it holds that m 2 T / ~ , then Worst case DLC can be solved

in polynomial tirne. Going through all possible 2" solutions requires time at most mC.

Page 23: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

A survey of positive and negative results on the hardness of decoding can be found

in [Var97, Section 2.21, in particular for families of linear codes, that is, for linear codes

wit h additional properties that are useful in practice. From this survey, it stands out that

worst case Kpolynomial time maximum-li keli hood decoding algorithms are not known for

any specific family of useful codes". However, trellises (a type of graph) can be used

to implement worst case "randomized 'sequent ial' decoding algon t hms (for Linear codes)

[Fan63, For74], whose performance is close to maximum-likelihood and whose &ng

time is polynomial 6 t h appreciabiy hi& probability."

Results in this thesis The subject macter of this thesis also inchdes average case

reductions for DLC. As for SS, we will ask: if a solution can be found with some prob-

ability, for given dimensions of the problern, then with whot probability can a solution

be found for other dimensions? Rackoff conjectured four theorems which correspond to

the ones enumerated for SS, at the end of Section 1-2.2- We will prove them in C h a p

ter 3. The first two theorerns, corresponding to the above parameterizations in n are the

following.

1. Suppose g(n) 5 g'(n) for aU n, and g ( n ) >_ [&l for n sufEiciently large, so that

[+l 5 g(n) 5 g'(n)- Then hardness w.r.t. to the parameterization m' = g'(n)

implies hardness w.r.t. to the parameterization rn = g(n).

2. Suppose g(n) 2 gf(n) for ail n, and g(n) 5 L+J for n sufEciently large, so that

Lci";i] 2 g(n) 2 g'(n) Tnen hardness w.r.t. to the parameterization rn' = g'(n)

implies hardness w.r.t. to the parameterization m = g(n).

Instead of parameterizing m as s function g(n) of n, we can consider the parameteri-

zation of n as a function h(m) of m. Just as we conjectured the Hardness for Average case

DLC w.r.t. to the parameterization n = g(n), we conjecture Hardriess for Average case

DLC w.r.t. to the parameterization n = h(m), and similarly obtain an equivalent can-

Page 24: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

didate one-way function. The relationship between these two types of parameterization

will be discussed in Section 4.2.1.

Correspondingly, Rackoff conjectured t hese addit ional two t heorems.

1. Suppose hr(m) 5 h(m) for all m, and h(m) 5 [C(~)rn] for m sdiciently large,

so that V(m) 5 h(m) 5 [C(&)m]. Then hardness w.r.t. to the pararneterization

nt = hl(m) implies hardness w,r.t. to the parameterization n = h(m).

2. Suppose ht(m) 2 h(m) for all m, and h(m) > r C ( ~ ) m ] for m sufficiently large,

so that ht(m) 2 h(m) 3 [ C ( ~ ) m l . Then hardness w.r.t. to the parameterization

nt = ht(m) implies hardness w.r.t. to the parameterization n = h(rn).

We will prove aiI four above stated theorems in Chapter 3. Each of the theorems is

proven by a reduction, which are, with abusive notation, the following.

2. If 2 rn 2 ml then DLC(n, ml) a,, DLC(n, m).

3- If n1 5 n 4 [ C ( ~ ) m j , then DLC(nl, m) a., DLC(n, m).

The meaning of this notation has to be explained, since the statements are asymptotic,

although they do not appear to be. The simplet way is to give an explanation by

example.

We mean by (1) that there is a probabilistic oracle algorithm J with the following

properties. Algorithm J has for inputs n, m and rn' such that L*] 5 m 5 ml, and a

deterministic oracle I for average case DLC(n, m). We define p to be the probability that

the correct solution to DLC(n, m) is returned, when the inputs are distributed "properly"

for the average case DLC(n, m). That is, for A, an n x m bit matrix chosen uniformly

and independently, for an n-bit row vector x chosen uniformly and independently, and

Page 25: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

for an m-bit row vector e chosen unifonnly and independently among aL such vedors

with Hamming weight no more than LemJ, let a = xA + e. Thcn p is the probabiiity

t hat [ ( A , a) retunis a pair (x', et), consisting of an n-bit row vector xt and an m-bit row

vector et, such that x ' A + et = a.

Algorithm J uses oracle I in attempting to solve instances of (average case) DLC(n, m').

Algorithm J runs in time polynomial in m' = max(n,m,mr,n'). Let # be the proba-

bility that J solves 'properly" distributed instances of DLC(n,m'). That is, for for

B, an n x rn' bit rnatrix chosen uniformly and independently, for an n-bit row vector

y chosen uniformly and independently, and for an m-bit row vector f chosen uaiformly

and independently among all such vectors with Hamming weight no more than [~m'], let

/3 = y a B+ f. Then p' is the probabiüty that J(B,P,n,m,m: I ) returns a pair (y', f') , consisting of an n-bit row vector y' and an m'-bit row vector f', such that y ' ~ B + f' = p.

We will show that, for m' = max(n, rn, m', n') sufficiently large, p' 2 poly

The four theorems above imply two simpler statements, which foLlow.

1. Hardness w.r.t. to the parameterization m' = g'(n) impües hardness w.r.t. to the

parameterization m = L*].

2. Hardness w.r.t. to the parameterization nt = h'(rn) irnplies hardness w.r.t. to the

parameterization n = LC(&)rn].

Note that the choice of floors instead of ceilings in the two statements above is arbi-

trary. We will discussed this issue in more depth in Section 4.2.2.

As in the case of SS, the reason why these statements are not equivalent to one

another is sribtle and will be discussed in Section 4.2.1. In particular, one might think

that if g = Lh-' J or h = Lg-' J , then (1) and (2) are equivalent, but they are not.

The above statements (1) and (2) can be summarized by saying that if Average tue

DLC is hard for some dimensions (n ,g (n) ) , then it is hard for (n, L&nJ); if Average

case SS is hard for some dimensions (h(m), m), then it is hard for (LC(e)n], m). One

Page 26: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

might think that this means that DLC is hardest when n is about C(~) rn , i.e. when

the dimension of the code is about equal to the channel capacity times the length of the

words. However, what this means is not clear and is actualiy misleading.

AU we have is that certain reductions where the probability of solution is never de-

creased '900 much" by using solutions for instances where n is about C(e)rn. This

is comparable to when n = rn for SS, in Section 1.2.2. One might think that, while

DLC(LC(e)m!, m) is hard. the larger m is, the harder it gets, but this is not clear. For

example, we do not know how to compare DLC(LC(E)~'], m2) and DLC(LC(e)mJ, m).

This open question wiil be discussed in Section 4.2.1.

1.4 Approximation results for SS and DLC

-4nother indication that a problem is hard is that it is hard to approximate. There is a

sense in whicb SS is efficiently approximable. In another sense, a problem closely related

to DLC is inapproximable in any efficient way, unless the hierarchy of some complexity

classes collapses.

1.4.1 Subset Sum problem

Consider the functional version of SS as defmed in [GJ79, NP-complete problem SP131,

which is as Definition 3, but with actual addition (not mod 2m). As discussed before, the

two worst case problems are equivalent. This version of Functiunal SS is approximable

in the following sense [KPSg?].

Theorern 23 [Approximation of a subset sum] Let O < 6 < 1. For n integers

of rn bits of length, let z* be the sum of a subset of the n integers. Then there is an

- 4 5 6 . algorithm which returns a subset of the n integers that sums to z such that

This aigorithm runs in time O(min[n/b, n $ (116) log(1/6)]) and space O(n + 116).

Page 27: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

1.4.2 Decoding of Linear Codes problem

The maximum Likelihood decoding problem, as introduced at the end of Section 1.3.1,

airns to the decoding of a received word to a nearest codeword. Since its soIution is a

pa r t ida r solution to DLC, it is also NP-hard. Define the distance between two words

x and y to be the Hamming weight of their difference w ( x - y). Then the distance

between a received word and a nearest codeword is inapproximable in the following sense

[ABSS97].

Theorem 24 [Hardness of approximation of the distance to a nearest code-

word] Let 6 > O . For an n x m code A and a received rn-6it word a, let a nearest

codeword to cr be x, and their distance be d* = w ( a - x). Then there is no polynomial

Id* -dl tirne algorithm which returns solutions d such that -a;- < 2"d-"", unless NP E ZQF,

the class of ezpected quasi-polynomial time algorithm.

A problem related to the maximum likelihood decoding problern is the minimum

distance of a code problem (as defined at the end of Section 1.3.1, Definition 14). The

corresponding decision problem is NP-cornpiete [Var97], a d the functional problem is

hard to approximate in the two following ways [DMSSS].

Theorem 25 [Hardness of approximation of the minimum distance of a code]

Let 6 > O. For an n x m code A, let the minimum distance be d'. Then there is no

Idg-dl < 2~og1-' m, polynomial time algorithm which returns solutions d such that 7 -

NF 2 ZQP, the class of ezpected quasi-polynornial time algorithms.

Theorem 26 [Hardness of approximation of the minimum distance of a code]

Let b > O . For a n n x rn code A, let the minimum distance be d*. Then there is no

polynornial tirne algorithm which returns solutions d such that 9 5 6, unless N P =

Z P P , the class of expected polynomial tirne algorithms.

Page 28: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

1.5 Notes

The ideas developed at the beginning of Section 1.1 were taken îcom [LP98, Section

7.41 and [G0198, Chapter 21. Other complexity theory references are [vLeSO, Volume A],

[HU79], and [Pap94]. Anot her cryptography reference is [Lub96]. Parts of [Go1981 can

also be found as (Go1991.

Some parts of Section 1.2 borrowed ideas from [Od190]. The greater part of the

discussion of the solved instances of SS in Section 1.2.2 was taken from [INSG].

A proof of Section 1.3.2's Lemma 17 can be found in [TV91, p. 771. It can be proven

similarly to the Gilbert-Varshamov Bound [PW72, Section 4-11. An alternate discussion

can be found in [vL82, Section 5-11. More historical details about Shannon's Theorems

can be found in [Abr63, p. 149,1741. Other discussions and proofs of Shannon's Theorems

(for not necessarily linear codes) can be found in [vL82, Chapter 21, [PW72, Section 4-21,

and [HamSG, Chapter IO]. The Converse of Shannon's Theorem can be found in [SW64,

p. 711, [Ham86, Section 10.81, and [Abr63, Chapter 61. Other coding theory references

include [MS77], [Ga168], and [Sti95, Chapter 101 .

The facts concerning approximation schemes for coding theory problems, at the end

of Section 1.4, are taken from [DMS99].

Page 29: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Chapter 2

Reductions for the Subset-Sum

Problem

2.1 Introduction

In a 1996 paper, R, Lmpagliazzo and M. Naor show two average-case reductions for SS

[IN96, Proposition 1-21. We use sirnilar ideas to obtain stronger and additional reductions.

The necessary background for this chapter can be found in Chapter 1, Sections 1.1

and 1.2. Technical lemmata are proven in Chapter 5.

2.1.1 Preluninary conventions and lemma

Conventions for the figures

On the figures, S S ( n , rn) will always be represented with bold Lines. The problern

S S ( n r , n) or S S ( n , m') wiil always be the problem which is to be reduced to S S ( n , m),

with dimensions nt or m' that may be different from n or m respectively. On the figures,

S S ( n f , m) or S S ( n , m') will always be represented with thin Lines.

Page 30: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2 . REDUCTIONS FOR THE SUBSET-SUM PROBLEM

Preliminary lemma

Lemma 27 Consider SS(n, m). If z # xf, then PrA[x O A = x' a A] = &. Proof: Since z # z', assume without loss of generality that xj = 1 and x: = O. We will

prove tbat 3: 0 A is distributed uniformly, and independently of z' O A.

Fix A; for al1 i # j. This determines x' O A = Cr=, xfA;.

Next, choose Aj uniformly. Then z O A = CL, xiAi is uniformly distributed. H

In Section 2.2.2, we wili show the foilowing. In each case, we let p be the probability

that the given SS(n, m) oracle succeeds on a properly distributed input.

1. Theorem 28 If n 5 m 5 na1, then SS(n,mr) cc., SS(n,m). The probability for

SS(n, m') is > 2. See Figure 2.2.

This is a stronger version of [IN96, Proposition 1.2, Part 11.

2. Theorem 30 If n 2 m 2 mf, then SS(n,mf) a,, SS(n,rn). The probability for

S S ( n , ml) is > 5. Corollary 33 has iirnit of probability > $. See Figure 2.3.

This is a strooger version of [IN96, Proposition 1.2, Part 21.

3. Theorem 35 If n' < n 5 m, then SS(nl, rn) cc., SS(n , m). The probability for

SS(nl, rn) is > 2. See Figure 2.4.

4. Theorem 36 If n' 2 n 2 m, then SS(nl, m) a,, SS(n, m). The probability for 2

S S ( n r , rn) is > 5. See Figure 2.5.

There are no statements, in [IN96], such as items 3 and 4, that allow the parameter

n, the number of integers, to vary to n'.

Page 31: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2. REDUCTIONS FOR THE SUBSET-SUM PROBLEM

2.2 Four reductions

2.2 . 1 Notation and definitions

Inputs to the assumed algorithm corresponding to instances of SS(n , m)

Suppose that n and m are fked and I is an oracle for the SS problem.

Generating function Consider S S ( n , m). Defhe the function f mapping mrz + n bits

to rnn + rn bits as f (A, x) = (A , x A), and defme the equivalence relation

Equivalence classes Define S = {(A, x) : A E { O , l)"",x E {O, 1 ) " ) and define the

equivalence classes on S by

£&(Ay X ) = {(A', x') : ( A , X ) (A', x'))

Soived instances Denote by U the subset of S which are solved by 1, with success.

It is well-defined since I is a deterministic oracle,

U = { ( A , X ) € S : A Q I ( A , X ~ A ) = X @ A )

Note that if (A, x ) (At, x'), then

( A , x ) E U * ( A ' , x ~ ) E U

so that the elements of an equivalence class are either all in U or al1 outside U.

Generalized inputs

Let T = { (A, a) : A E {O, l)mn, a E {O, l)m), the set of dl possible inputs to 1, not

necessarily corresponding to instances of SS (n, m) .

Denote by V the subset of T of instances which are solved by 1, with success.

Page 32: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2. REDUCTIONS FOR THE SUBSET-SUM PROBLEM

Relations between inputs for SS(n , rn) and generaiized inputs

Note that ( A , x ) E U if and only if (A, x O A) E V. Also, (A, a) E V is an input to I for

S S ( n , m) if and only if a = x Q A for some x E {O, 1)". Therefore, (A, a) E V implies

that, for some x E (0, Iln, the second argument a = x O A.

There is a 1-1 correspondence between the equivalence classes that partition U and

the elements in V. See Figure 2.1. Note that the point shown in Ui is (A', x i ) , and that

the second argument of this pair, x i , is the solution returned by I ( A i , ai). Also notice

t hat f (Li) = V and f -'(VI = W .

Figure 2.1: Inputs to I

Notation Let k = IV1 and denote the elements of V by vi = ( ~ ' , a ' ) so that

- . V = {(A', aa))gl

Let xi = l ( A ' , a i ) and denote the equivalence classes that partition U by U' =

EQ(A', z'), so that

k ui U = U,,

Then LIi = EQ(Ai, x i ) c 17 corresponds to v' = (A', ni) E V via f.

Page 33: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2, REDUCTIONS FOR THE SUBSET-SUM PROBLEM

2.2.2 Reductions

Theorem 28 If n 5 m 5 m', then S S ( n , m t ) oc, S S ( n , m ) , and the probability for

S S ( n , m f ) is >

Proof: We prove that S S ( n , m') a,, S S ( n , m) by showing that the probability for

S S ( n , m') is > Gy using an oracle for S S ( n , m). We transfo- an average-case instance

of S S ( n , m') to an instance of S S ( n , m), by algorithm J , iuustrated in Figure 2.2. The

idea is simply to truncate the input to submit it to I . This process returns a good

solution to the Iarger, original probiem with a probability bounded below by 2.

Figure 2.2: The case n 5 m 5 rn'

Let algorithm J ( B , P ) , for B E (O, I ) ~ ' " and E {O, lIm', behave in the following w ay,

1. Truncate the n integers B; and B, each of rn' bits, into n integers of m bits, A; and a

for i = 1 t o n do Ai := Bi rnod 2"

and CY := ,O mod 2"

2. Output y' := I(A, a)

Page 34: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2. REDUCTIONS FOR THE SUBSET-SUM PROBLEM 28

Statistical experiment: Uniformly select an m'n-bit B and an n-bit y , and then

compute y' = J ( B , y O B).

Event of interest: The algorithm B is successful if y O B = y' O B.

Claim: Pr[y O B = y' O B] > $.

That Z is successful on ( A , a ) = (A, y A) does not imply that J is successful

on (B, y O B), because there may be more solutions to S S ( n , rn) than to S S ( n , ml).

Nevertheless, it is sufEcient for J to be succesdui that the returned y' = y. With A and

cr defined as computed in algorithm J, we bound the probabiiity that J solves SS(n, ml)

by

Pr[y O B = y' O B] Pr[l(A, a) = y]

= Pr[Z(A, y O A) = y].

As defined in algorithrn J , the argument A is uniformly distributed whenever B is,

and the a = y @ A has the distribution corresponding to SS(n, m), whenever A and y

are chosen uniformly, that is, whenever B and y are chosen unifonnly. Consequently, it

suffices to show that for a mn-bit vector A and a n-bit vector y chosen uniformiy

which is given by Lemma 29, which follows. I

Lemma 29 [Approximate one-oneness] Let n 5 m. If algorithm 1 d u e s Average

case S S ( n , m) with probability p, then, for inputs A EU {O, 1jmn end x Eu {O, 1)"

Proof: Following the definitions and notations introduced in Section 2.2.1, there are 1 VI

inputs in S for which algorithm I returns the same x we started with, which are precisely

Page 35: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2. REDUCTIONS FOR THE SUBSET-SUM PROBLEM 29

the equivalence class representatives under the algorithm A. Since the distribution of the

generators of the input to 1, n d y (A, x), is uniform, we have a uniform probability of

selecting the class representatives. Thus, we want to show that if

t hen

IV1 k P* -= - ISI ISI ' 2

We want to compute a Iower bound on the unknown k, the nurnber of equivalence

classes. So consider the probability that two elements of U are in the same equivalence

class, i.e. the probability distribution of 1 ' s inputs, as generated by the function /, that

i s

Pr[/(A, x) = f (A', x')] = Pr[(A, x) = (A', x')]

for mn-bit A, A' and n-bit x, x' uniformly chosen-

Bound from above

P A , x) ( A , x ) = Pr[(A, x 0 A) = (A', x' 0 A')]

= Pr[A = Al (Pr[x # XI Pr[x Q A = x' A 1 x # XI + Pr[x = XI Pr[x a A = X ' Q A 1 x = x'])

Page 36: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Bound from below On the first line below, the bound below is obtained via the

added restriction that (A, x) and (A', 2') are in U. To obtain the second line, note that

it suflices that (A, x) = (A', x') for them to be in the same equivaience class because I is

deterministic i.e., for equal inputs, 1 has equal outputs.

Pr[(A, x) = (A', x')] Pr[(A, z) = (A', x') E U]

2 In Lemma54 from Chapter 5, let n = kt si = and f(r) = z2. Then C;=, (w) 2 l Ul

The use of Lemma 54 from Chapter 5 corresponds to the intuition that to minimize

the probability that two elements are in the same equivaience class, all classes should be

made to have the same size, C. The bound from below

p2 Pr[(A, x) (A', x')] 1 - k

will be referred to and used in the proofs of Lemma 34 in this chapter, and Lemmata 39

and 44 in Chapter 3.

We have found that

Page 37: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER S. REDUCTIONS FOR THE SUBSET-SUM PROBLEM 31

Theorem 30 If n 2 m 2 nt', then SS(n,mt) a,, S S ( n , m) and the probability for

Proof: We transform an average-case instance of S S ( n , mt) to an instance of SS(n , m)

where the inputs are uniformly distributed, by algorithm J , iliustrated in Figure 2.3.

The most obvious strategy would be simply to append random bits to the input, and

submit it to 1. However, we have not been able to analyze it, and this question wiU be

discussed in Section 4-1 -2.

Figure 2.3: The case n 2 m 2 rn'

Informally, algorithm J works in the foilowing way. First, its input matrix and sum

are modified. One of the integers, corresponding to one of the matrix's rows, is replaced

by a new random one. According to whether we guess that this integer was used or not

in the given sum, and whether we guess that it is going to be used or not by the oracle's

answer, we modify the sum to be given to the oracle. Then random bits are appended to

t h e matrix and sum, and the resulting niatrix and sum are passed to the oracle 1. The

oracle's solution is modified, according to the guesses made.

We are given a random matrix B, and, for a random selection of its rows, in the form

of an incidence vector y, we are given the sum of these rows ,L? = y a B. It is convenient

Page 38: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2. R~~DUCTIONS FOR THE SUBSET-SUM PROBLEM 32

to assume that we are also given i = yt. More precisely, algorithm J could guess whether

the first integer (corresponding to the first row of matrix B) is or is not used in the given

sum. In algorithm J below, i = O corresponds to the guess that the first row is not used,

and vice versa for i = 1. Each of these possibilities happens with probability exactly

I. 2 Nevertheless, by repeating this procedure twice with different guesses, one of them is

certain to be true. So we analyze the case when i is the correct guess.

Then algorithrn J guesses whether or not the fkst integer (this time corresponding to

the k t row of matnx A, obtained from B) is going to be used in the oracle's solut ion. Ln

algorithm J below, j is the guess of the first bit of the oracle's answer. We will show that

oracle I can not distinguish which guesses were made. For one oracle call, the probability

that the guess was right is hence exactly equal to $.

- - - - - --

Let algorithm J(B,B,i) , for B E {O, l}m'n, P E {O, i)m', and i E (O, 1), behave in the following way.

1. Uniformly Bip a bit j EU {O, 1)

2. Modify and extend B to A

(a) B' := B with BI repiaced by Bi EU {O, 1)"'

(b) Extend B' to A with m - rn' uniformly chosen bits on the left of each A k

i. Choose B" EU {O, l)(m-m')n ii. A := B"I B'

3. Modify and extend ,6 to a

(a) Pt:= 0 - (1 - i ) B i + jBi (b) Extend to a with m - m' uniformly chosen bits on the left

i. Choose ,f?" Eu {O, 1)"-"' ii. a :=PI@'

5. Output y' := x but with y; = 1 - i

Page 39: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2- REDUCTIONS FOR THE SUBSET-SOM PROBLEM 33

Note that even though the statisticat experiment is to select an m'n-bit B and an

n-bit y uniformly, and then to compute y' = J ( B , y O B), the leftmost bits of a do not,

in generai, correspond to the sum of the rows setected by y.

Statistical experiment: Uniforrnly select an m'n-bit B and an n-bit y, and then

cornpute y' = J ( B , y O B, yl).

Event of interest: The algorithm J is successful if y Q B = Y' Q B.

Clairn: Pr[y O B = y' a B] > $-

We first show that

From line (3a) of algorithm J we have P' := p- (1 - i)Bl + j Bi. Following the notation

of algorithm J , the following two Lines are equivalent.

Neglect the leftmost bits, which correspond to B" and p". This neglect of the leftmost

bits is the reason why P r [ y a B = y' O BI 2 Pr[x O A = a & xi = j] contains a "t", and

not a "=". Then the following holds.

Overail, i t holds that

Page 40: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2. REDUCTIONS FOR THE SUBSET-SUM PROBLEM

By the multiplication rule for two events

Independence of the distribution of (A, a) from j We assume that i = y,. Then

there are two cases, corresponding to the two guesses that c m be made for j. We

now show that in each of these cases, (A, a) is uniformly distributed. In other words,

regardless of which guess is made for j, algorithm J can not extract information of the

value of that j.

On line (3a) of algorithm J

P'

Fact 31 If j = O, then ( A , a ) is unifomly distributed.

Then 13' = Bi + CL, - ykBk. Since BI is not correlated to B', it makes uniformly

distributed, and independent from BI.

Fact 32 If j = 1, then (A, a) is unifomly dbtributed.

Then ,8' = Bi + (CL2 ykBk - Bi). Since Bi is not correlated to BI, it makes 0'

uniformly distributed, and independent from BI.

Distribution of xl given that the oracle is successful We have shown that the

two distributions of (A, a), for j = 0,1, are the same, regardless of the specific guess j.

Therefore, it is not possible for oracle I to distinguish them, even given the event that it

returns a correct answer. Since j is chosen uniformly, the first bit of the oracle's correct

answer xi is exactly j with probability i.

1 Pr[xi = j 1 x A = or] = -

2

Page 41: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2. REDUCTIONS FOR THE SUBSET-SUM PROBLEM 35

FinaLly, since (A, a) is uniformly distributed, Pr[z A = a] > 2. This is precisely

given by Lemma 34, which foUows this proof.

Corollary 33 I f n 2 m 2 rn', then SS(n, mf) a., SS(n, m) and the lower bound for the

probability for S S ( n , mf ) approaches $ as the number of oracle m l k increases.

Proof: We can improve the bound of the Theorem 30 by a factor close to 2.

By running algorithm J many times on the same (B, y (3 B, YI), that is, by repeating

the guess for j and the oracle calls: the probability of a wrong guess every time approaches

O. Indeed, by rnaking N independent guesses, the probability of N wrong guesses is &. Therefore, by repeating the algorithm many times, we can make the bound below on

the probability of the reduction, in the proof of Theorem 30, to be as close as we wish

Lemma 34 [Approximate onto-ness] Let n 2 m. I/ algorithm I solves Average case

S S ( n , rn) with probability p, then, for inputs A EU ( O , lImn and a { O , lIm, and letting

x = [ ( A , cr) n

Proof:

We use the notation and definitions given in Section 2.2.1. We will show that if

Fi = p theo > 2. ISl

Page 42: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2. REDUCTIONS FOR THE SUBSET-SUM PROBLEM 36

Bound from above We consider Pr[(A, x) (A', XI)]. Compute the bound from above

as follows. The first Line below is obtained via Lemma 27, The third hne below is because

since n > m.

Bound from below We bound Pr[(A, x) (A', XI)] 2 Pr[(A, x) z (A', x') E (Il. Take

the bound from below as computed in the proof of Lemma 29, in this chapter, that is

2 * We have found that 5 c 1.e. 1 > 2. ITI

Theorem 35 If nt < n 5 m, then SS(n1, rn) oc,, S S ( n o rn) and the proba6ility for

S S ( n t , m ) is > 2. Proof: See Figure 2.4.

Figure 2.4: The case n' < n 5 m

Page 43: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER S. REDUCTIONS FOR THE SUBSET-SUM PROBLEM 37

Let algorithm J ( B , B ) , for B E (O, l)mn' and ,û E {O, L)", behave in the following w ay.

1. Let A := B with appencied rows & EU {O, Ilrn for n' + 1 5 i 5 n

2. U n i f o d y choose z Eu {O, 1)"-" indexed by n' + 1 5 i 5 n

3. Let a := ,f3 + z",,,, Airi

4. Let x' := I(A, a)

5 . Output y' := x', truncated of its positions n' + 1 5 i 5 n

Statistical experiment: Uniformly select an mn'-bit B and an n'-bit y, and then

compute y' = B ( B , y O B ) .

Event of interest: The algorit hm J is successful if y O B = y' O B.

Let x = Y~Z. Then a = P + Cy=nl+, ziAi = y O B + Cy_t+, tiA; = x O A, so that

(A, a) is a proper average-case instance of S S ( n , m).

That 1 is successfd on (A, x a A) does not imply that J is successful on (B, y B).

Nevertheless, it is sufficient for J to be succesdul that the returned y' = y. For which

it is in turn sufficient that I returns x' = x. So we bound the probability that J solves

SS(nr ,m) by

It suffices to show that for a mn-bit vector A and a n-bit vector x chosen unifordy,

which is given by Lemrna 29.

Theorem 36 If n' 2 n 2 m, then SS(n t , m) a, S S ( n , m) and the probability for

S S ( n t , m ) is > 2.

Page 44: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2. REDUCTIONS FOR THE SUSSET-SUM PROBLEM

-m+

Figure 2.5: The case n' 2 n 2 rn

Proof: See Figure 2.5.

Let algorithm J ( B , p), for B E {O, 1lrnn' and f l E {O, l)*, behave in the fouowing way.

l

1. Let A:=rows 1 5 i 5 n of B

2. Let cr := 0

3. Let x' := I(A, a)

4. Output y' := xf, padded with 0's in positions n + 1 5 i 5 n'

Statistical experiment: Uniformly select an mnf-bit B and an n'-bit y, and then

compte y' = J ( B , y a B).

Event of interest: The algorithm J is successful if y B = y' a B.

We condition the analysis on two complementary classes of events. Let E = [y; = O

for al1 n + 1 5 i 5 nt]. Consider E and its complement Ë.

1. Subject to E, algorithm J has the same success probability as 1, that is, p > $.

Page 45: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 2. REDUCTIONS FOR THE SUBSET-SUM PROBLEM 39

2. Otherwise, subject to the complementary event Ë = [y; = 1 for some n+l 5 i 5 n'],

we have that (A, y,, ..., y,, CFL+~ yi Bi) is uniformly distributed.

SO (A, y O A + x;Ln+, yi Bi) is uniformly distributed, and (A, a) is uniformly dis-

tributed.

By Lemma 34, [(A, a) returns z' such that s' a A = a with probability > $- It

suffices that zf O A = ct for y a B = y' a B. So J is successful with probability

> f.

The statements and proof outlines of Lemmata 29 and 34 and Theorems 28, 30, 35 and 36

in Section 2.2 are due to Rackoff [Rac98]. Lemma 27 is taken from [IN96, Proposition

1.11.

Page 46: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Chapter 3

Reductions for the Decoding of

Linear Codes Problem

3.1 Introduction

LVe consider that DLC is a natural continuation of Chapter 2, as the reductions in the

proofs of the theorems are similar t o the corresponding ones for SS,

The necessary background for the complexity theoretic content of this chapter can

be found in Section 1.1. The background for the information theoretic content of this

chapter can be found in Section 1.3. Technical lemmata are proven in Chapter 5.

3.1.1 Preliminary conventions and lemma

Conventions for the figures

On the figures, DLC(n, rn) will always be represented with bold lines. The prob-

lem DLC(nl,m) or DLC(n,m') will always be the problem which is to be reduced

to DLC(n, m), with dimensions n1 or m' that may be different from n or m respectively.

On the figures, DLC(nl, rn) or DLC(n, m') will always be represented with thin lines.

Page 47: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

C H A P Y ~ R 3. R~~DUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 41

Preliminary lemma

Lemma 37 Consider DLC(n, m). I f s # x', then for al1 e , e', it holds that PrA[xA+e =

z'A + e l = &. Proof: Since x # xf , assume without loss of generality that xj = 1 and x: = O. We will

prove that x A + e is distributed unifordy, and independently of s'A + et.

Fix A; for ail i # j. This determines ztA = Cy=, x:Ai- Next, choose Aj uniformly.

Then X A = CiR_, xiAi is uniformly distributed.

Consequently, Pra [xA + e = xtA + e l = PrA[xA = x r A + e' - e] = - rn

3.1.2 Results

In Section 3.2.2, we will show the following. In each case, we let p be the probability

t hat the given DLC(n, m) oracle succeeds on a properly distributed input.

1. Theorem 38 If [*l 5 m 5 rnf then DLC(n, mt) a,, DLC(n, m). The proba-

bility for DLC(n, m') is R (5). See Figure 3.2 and note that i t corresponds to

Figure 2.2 in Chapter 2.

2. Theorem 40 if L+] 2 m 2 rn' then DLC(n,mt) a., DLC(n, m). The proba-

bility for DLC(n,mt) is 0 (5). See Figure 3.3 and note that i t corresponds to

Figure 2.3 in Chapter 2.

3. Theorem 45 If n' 5 n < lC(~)rnJ, then DLC(n',m) cc,, DLC(n,m). The

probability for DLC(nJ, rn) is > 2. See Figure 3.4 and note that it corresponds to

Figure 2.4 in Chapter 2.

4. Theorem 46 If n' 2 n 2 r C ( ~ ) m l , then DLC(nf,m) ma, DLC(n,m). The

probability for DLC(nt, rn) is R (5). See Figure 3.5 and note that it corresponds

to Figure 2.5 in Chapter 2.

Page 48: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3- REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 42

3.2 Four reductions

3 -2.1 Notation and definitions

Inputs corresponding to instances of DLC(n, m )

Suppose that n and m are fixed and 1 is an oracle for the DLC problem.

Generating function Consider DLC(n, rn, ). Define the function fcode mapping

n n + n + m bits to mn + m bits as fswle(A, x, e ) = (A, + A + e), and define the equivalence

relation

(A , x, e) (A', f, et) fade (A , x, e) = fmde(Ar, x', et) -

Equivalence classes Define S = { ( A , x, e ) : A E {O, l)"", x E {O, l}", e E {O, 1)" & w(e) 5

1 ~ r n J ) and define the equivalence classes on S by

EQ(A, x, e) = {(A', x', et) : ( A , x, e ) = ( A f , x', e t ) } .

Solved instances Denote by U the subset of S of instances which are solved by 1, with

(x', e') = [ ( A , xA + e) , such that w(er) < [ ~ m J .

LI = { ( A , x, e ) E S : (A , [ ( A , xA + e ) ) G (A, x , e ) } .

Note that if (A, x, e) (A', x', et), then

( A , x, e) E U a (A', x', e') E U

so that the elements of an equivalence class are either ail in U or al1 outside W .

Generalized inputs

Let T = { ( A , a ) : A E { O , l } m n , a E {O, l)"), the set O f al1 possible inputs to I , not

necessarily corresponding to instances of DLC(n,m). Indeed, a might not be equal to

XA + e for any given x and e such that w(e) 5 lem].

Denote by V the subset of T of instances which are solved by I .

V = { (A,a) E T : if I ( A , a ) = (xt,e') then ztA+ et = a and w(ef) 5 l ~ m ] ) .

Page 49: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 43

Relations between inputs for DLC(n, m) and generalized inputs

Correspondence of solved instances Note that (A, x, e) E U if and only if (A, z A + e) E V. Also, (A, a) E V is an input to I for DLC(n, m) if and only if a = xA + e for

some x E {O, 1)" and some e E {O, l)* such that w ( e ) 5 LernJ. Therefore, if (A, a) E V,

t hen for some x E {O, 1)" and some e E {O, lIrn such that w(e ) 5 l ~ r n j , the second

argument a! is exactly X A + e.

Tbere is a 1-1 correspondence between the equivalence classes that partition U and

the elements in V. See Figure 3.1. Note that the point shown in Ui is (A', x i , e i ) , and

t hat the pair formed by the second and third arguments of this triple, (z', ei ) , is the

solution ret urned by I(A', ai). Also notice that f (U) = V and f -'(V) = U.

Figure 3.1: Inputs to I

Notation Let k = IV1 and denote the elements of V by vi = (Ai, a i ) so that

Let ( x i , ei) = [(A', a i ) and denote the equivalence classes that partition U by Ui =

£Q(A', xi, e i ) so that

k - u = u;=,ua

Then U' = EQ(Ai, xi, e') c U corresponds to vi = (A', ai) E V via f.

Page 50: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 44

3.2.2 Reductions

Theorem 38 If r*] 5 m 5 nt then DLC(n, mt) a., DLC(n, m). The probability

for DLC(n, m3 is SZ

Proof: See Figure 3.2. Note that it corresponds to Figure 2-2 in Chapter 2.

We prove that DLC(n, mf) oc., DLC(n, m) by showing that the probability for

DLC(n, mf) is R (&), using an oracle for DLC(n, m). We transform an average-case

instance of DLC(n, m') to an instance of DLC(n,m), by algorithm J, illustrated in

Figure 3.2. As in the proof of Theorem 28, the idea is simply to truncate the input to

submit i t to 1.

Let trunk(cr) denote the m rightmost bits of a mf-bit vector a. The function trunc,

corresponds to modSm, in Chapter 2.

Figure 3.2: The case n 5 [C(&)rnJ 5 LC(e)mtJ

Page 51: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

/ Let algorithm J (B,p) , for B E {O, 1}"" and p E {O, l}"', behaue in the foilowing

1. Truncate the n integers Bi and /3, each of rn' bits, into n integers of rn bits, A; and a

for i = 1 to n do

and

2. Let (x, e) := I(A, a)

3. Output (y', f') := ( x , P - 58)

Statistical experiment: Unifonnly select an m'n -bit B, an n-bit y and an m'-bit , f such that w( f ) 5 L~rn'j, and then compute (y', f') = J ( B , yB + f ). Event of interest: The algorithm J is successful if yB+ f = ytB+f' and w(f ' ) < L~rn'].

c l a h : Pr[yB + f = y'B + f' & w ( f ' ) $ [~rn']] E R (5). Let f ~ = trunc,,,( f ) , the m rightmost bits of f. We first show that

+ f = d B + /' & ~ ( f ' ) I L~m'j] 1 Pr[(z, e) = (y, f R ) ]

In other words, i t is su£Ecient for J to be successful, that the oracle's answer (x, e)

is the partiaily truncated original (y, fR). Indeed if (x, e) = (y, fR) then, from the

algorithm, y = y'. Then, since y = y', we have

Distribution on (A, a) W e need to show that, with "good" probability, the inputs

to oracle I are distributed properly. Since B is uniformly distributed, so is A. However

Page 52: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 46

a = y A + fR, and it is not necessary that fR be uaiformly distnbuted among the m-bit

vectors with weight 5 LE^]. Nevertheless, we are able to analyze this experiment by

conditioning on the event E that (f - fR), that is, the Lefimost bits of f , has weight

exactly l~rn'J - L ~ m l

The probability of event E is bounded by Lemma 62 from Chapter 5.

We wiU show that the above experiment, conditioned on event E, is equïvalent to the

foUowing new experirnent.

New statistical experiment: Uniformly select an mn -bit C , an n-bit z and an rn-bit

g such that w(g) 5 l&rnJ, and then cornpute (zf,g') = I(C, ZC + g). New event of interest: The algorithm I is successful if (z,g) = (2,g').

New clairn: Pr[l(C, zC + g ) = (z , g) ] > f .

Note that this new daim will be proven as Lemma 39.

In the original statistical experiment, we are interested in the event [(x, e ) = (y, fR) 1 E]

where ( x , e ) = [ (A , yA + fR), i.e the event [I(A, yA + fR) = (y, f') 1 El. The parame-

ters A and y, which are independent of event E, were selected accordingly to C and z,

respectively, in the new statistical experiment. It remains to show that fR, subject to

event E, is distributed as g is in the new statistical experiment.

It is clear that W ( fR) 5 1~mJ. Let 1; be the number of rn-bit strings of weight i, for

i = O, ..., LernJ. Then the number of mf-bit strings of weight i + L~rn'j - l ~ m ] is

Page 53: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. R.E~~UCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 47

Then, conditioning on event E

In words, the probability of an error of weight i is uniforrn, that is, equal to the

number of such vectors divided by the total number of vectors.

OveraU, (A, y, fR), in the original statistical experiment conditioned on E, is dis-

tributed as (C, z ,g) , in the new statistical experiment. Therefore, assuming the new

claim, which we recall wiil be proven as Lemma 39 which follow, we have

Lemma 39 [Approximate one-oneness] Let m 2 If algorithm I solves Av-

erage case DLC(n, m) with probability p, then, for inputs A eu {O, l)mn, x Eu (0,l)"

and e EU {O, lIrn such that w(e) 5 L~mj

pz Pr[I(A, xA + e) = (x, e)] > - 2

Proof:

Following the definitions and notations introduced in Section 3.2.1, there are 1 VI

inputs in S for which algorithm I returns the same (x, e) we started with, which are

precisely the equivalence class representatives under the algorithm I . Since the distribu-

tion of the generators of the input to 1, namely (A, x, e), is uniforrn, we have a uniform

probability of selecting the class representatives. Thus, we want to show that if = p

then fJ = > 2.

Page 54: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 48

W e want to compute a lower bound on the unknown k, the number of equivalence

classes. So consider the probability that two elements of U are in the same equivdence

class, i.e. the probability distribution of 17s inputs, as generated by the function fade,

that is

Bound from above

Pr[(A, x, e) (A', x', e')] = Pr[( A, z A + e) = (A', x'A + et)]

Using Lemma 37, if x # sr, we find Pr[xA + e = x'A + e l = &.

Let E denote the set of error vectors, so that IEI = C~Z' (y) . On the first line

below, use Pr[e = e l = - l On the third Line, use & < a,,+&e, , which cornes from IEI '

m 2 r&] 2 -" and then &. On the fifth Line, use ISI = 2mn+nIEI, and then, C(4

Page 55: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 49

by the upper bound from Lemma 60 from Chapter 5, use IEl = z:zJ (7) 5 2mH(c).

Overall

Bnund from below We bound Pr[(A, x, e) E (A', x', et)] 2 Pr[(A, x, e) = (A', x', et) E

U]. Take the bound from below as cornputed in the proof of Lemma 29, in Chapter 2.

Here, the set S here consists of triples instead of pairs, so the proof of this bound should

be changed accordingly, formally speaking. Nevert heless, no result is affected by t hat,

so it would be redundant to give another proof of the bound. We have Pr[(A, x, e) G

(A', zt , et) E LI] >_ 5. We have found that $ 5 Pr[f (A , z, e) = f(At, x', e')] < & Le. < $ rn

Theorem 40 If L& J 2 rn 2 m' then DLC(n, m') a,, DLC(n, m). The probability

for DLC(n, mt) is R (6). Proof: See Figure 3.3. Note that it corresponds to Figure 2.3 in Chapter 2.

We transform an average-case instance of DLC(n, m') to an instance of DLC(n, m)

where the inputs are uniforrnly distributed, by algorithm J, iiiustrated in Figure 3.3.

As in the proof of Theorem 30, the most obvious strategy would be simply to append

random bits to the input, and subrnit it to I . However, we have not been able to analyze

it, and this question will be discussed in Section 4.2.2.

Page 56: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 50

Figure 3.3: The case L&] 2 rn 2 rn'

Informally, algorithm J works in the foilowing way. First, its input matrix and word

are modified, One of matrix's rows is replaced by a new random one. According to

whether we guess that this row was used or not in the original codeword, and whether

we guess that it is going to be used or not by the oracle, we modify the word to be given

to the oracle. Then random bits are appeilded to the matrix and word, and the resulting

matrix and word are passed to the oracle I . The codeword from the oracle's solution is

modified, according to the guesses made.

We are given a random matrix B. Also, for a randorn selection of B's rows, in the

form of an incidence vector y, and for an random error vector f of weight bounded above

by ~ m ' , we are given the word ,û = yB + f . It is convenient to assume that we are also

given i = yl. More precisely, algorithm J could guess whether the first row of matrix

B is or is not used in the given sum. In algorithm J below, i = O corresponds to the

gueçs that the first row is not used, and vice versa for i = 1. Each of these possibilities

happens with probability exactly B. Nevertheless, by repeating this procedure twice with

different guesses, one of them is certain to be true- So we analyze the case when i is the

correct guess.

Then algorithm J guesses whether or not the first row of matrix A (obtained from

Page 57: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LMEAR CODES PROBLEM 51

B) is going to be used in the oracle's solution, In algorithm J below, j is the guess of

the first bit of the oracle's answer. We will show that oracle I can not distinguish which

guesses were made. For one oracle caU, the probability that the guess was right is hence

1 exact ly equal to 2.

Note that algorithm J does not append random bits to the original input, as in the

reduction €rom the proof of Theorem 30, in Chaptet 2. Instead, algorithm J "sticks inn

random bits in randomly chosen columns. These columns are denoted by C, and are the

same in the new input rnatrix and word, formed by this process.

Informally, if the random bits were always appended in the same positions, it wodd

be possible for the oracle to concentrate the errors in positions that would minimize

algorithm J's success probability. Oracle I could let its retumed error vector have as

many ones as possible in the bit positions of the word that corresponds to our original

one. For randomly "stuck in" bits, the oracle distributes the errors it returns uniformly,

at the very worst.

The operations of normal concatenation 1 and truncation (modula), as used in the

proof of Theorem 30, in Chapter 2, need to be modified for these purposes. We extend

these usual operations in order to make them dependent on the concerned colurnns C.

W e define the extended operations with respect to any C , in a way that a random choice

of C yields the intended randomized form of these extended operations.

First, we define what is meant by the operation of "sticking inn colurnns. Let Bu,

B', and A be vectors or matrices. If B" has m - m' colurnns, and B' has rn' colurnns,

let C be a vector of length m', where each distinct entry is in (1, ..., m}. It is convenient

that the entries of C be in increaçing order. Let Ic denote the action of sticking in bits,

so that A = BI' lcBf means to form A as follows. If C = {ci, c2, ..., k t ) then for each

1 5 j 5 m', the cjth column of A is the jth column of Bt. The remaining columns of A

are the columns of B", in order.

Second, we d e h e what is rneant by the operation of truncating columns. Consider

Page 58: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 52

C, B', and A as in the previous paragraph. Let truncc denote the action of removing

columns, so that B' = truncc(A) means to form B' as follows. If k is an entry in C, let

Bns kth column be A's kth column. The process is continued until all entries of C are

used.

Let algorithm J ( B o P , i), for B E {O, l ) m f n , f l E { O , I ) ~ ' , and i E {O, 11, behave in the following way.

1. Uniformly flip a bit j EU {O, 1)

Uniformly choose a 1 x rn' vector C of distinct eiements in 11, ..., m)

2. Modify and extend B to A = B"lcBf

(a) B' := B but with Bi replaced by Bi Eu {O, l Imf

(b) Extend B' to A with B" consisting of n - rn' uniformly chosen bits, for each row

i. Choose BI' EU {O, t}(m-"f)n ii. A := f3''lcBf

3. Modify and extend P to a = FlcP'

(a) :=p- (1 - i ) B l + jB; (b) Extend to a with 0" consisting of m - rn' uniformly chosen bits

i. Choose B" EU {O, 1)"-"' ii. a := P'IB'

4. Apply oracle I and traosform its output

(a) (5 , e) := [ (A , a)

(b) y' := x but with y: = 1 - i (c) f' := truncc(e)

5. Output (y', f)

Statistical experiment: Uniformly select an m'n-bit B, an n-bit y and an m'-bit .

Event of interest: The algorithm J is successful if yB + f = y'B + f' such that

w(f') 5 1 ~ 4 -

Page 59: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 53

Ciaim: P ~ [ Y B + f = y'B + ff & w ( P ) 5 lemf]] E (5). We first show that

For sirnilar rasons as in the proof of Theorem 30 in Chapter 2, it holds that xA+e = a

and xi = j imply that yB + f = yfB + y. For cornpleteness, we now prove this. From

line (3a) of algorithm J we have 0' := f l - (1 - i)Bl + jB;. Following the notation of

algori t hm J, the foüowing two lines are equivalent .

We neglect the bits not indicated by C, which correspond to B" and ,Bu.

Overall, it holds that

k=2

= ( i - l)Bl - x I B ; +SB'+/'

So we have the following, using the multiplication nile for two events, on the fourth

Line beIow.

Page 60: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 54

Independence of the distribution of (A, a) nom j and C We assume that i = y,.

Then there are two cases, correspondhg to the two guesses that can be made for j. We

now show that in each of these cases, (S' ,Pt) is uniformly distributed. In other words,

regardless of which guess is made for j, algonthm J can not extract information of the

value of that j.

On Line (3a) of algorithm J

Fact 41 If j = O , then (BI, P t ) is uniformly distrïbuted.

Then P' = Bi + (CL, yk Bk + f). Since Bi is not correlated to BI, it rnakes 0'

uniforrnly distributed, and independent from BI.

Fact 42 If j = 1 , then (B', 0') is uniformly distnbuted.

Then ,BI = Bi + (CL, ycBk - Bi + f). Since Bi is not correlated to B', it makes P1 uniformly distributed, and independent from B'.

Furthermore, since C and (B", y) are chosen uniforrnly, the fact that (B', ,@) is uni-

formly distributed implies that (A, a) = ( B"lc BI, /3"lcf11) is also uniformly distributed.

Because (A, a) is uniformly distributed, Pr[% O A = a & w(e) 5 L~rnl] E fl (5). This

is precisely given by Lemma 44, which follows this proof.

Distribution of xl given that the oracle is successful We have shown that the

two distributions of (A, a), for j = 0,1, are the same, regardless of the specific guesses

of j and C . It is not possible for oracle I to distinguish them. Therefore, oracle 1 has no

information about the values of j and C, even given the event that it returns a correct

answer. Consequently, the probabilities are the same as they would be if we chose j and

C uniformly and independently, cafter the oracle call.

Page 61: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 55

Since j is chosen uniformly, the first bit of the oracle's correct answer XI is exactly j

1 with probability i. We have Pr[xl = j 1 x A + e = a & w(e) 5 lnnJ] = 5 .

We now consider the event w ( f l ) 5 Lern'J conditioned on the event that [xA + e =

a & w(e) 5 LE^] & xi = j ] . Fix e such that w(e ) = [ ~ r n j , since, as an adversary, I

can not do anything worse than to retum an e with a maximum weight. We consider

the following experiment. Choose rn' bits from e without replacement. We want a lower

bound on the probability that no more than [~ rn 'J of these bits are one's. As we have

argued above, the experiment is eqnivalent to choosing C unifordy after the oracle cail.

By Lemma 63 (Chapter 5), this probability is bounded below as follows.

Pr[w( f') 5 L~rn'l 1 x A + e = a & w(e) = L~rn] & XI = j ] E R(1)

Therefore

1 Pr[w( f') 5 L~rn'l 1 x A + e = a & w(e) = l&mJ & xi = j ] E - n(l) = n(l)

2

Overall

Remark 43 As for Theorem 30 in Chapter 2, it is possible to improve the aboue vahe

of p' b y a factor close to 2, by running the algorithm many times.

Lemma 44 [Approximate onto-ness] Let [&! 2 m. If algorithm I solves Average

case DLC(n, rn) with probability p, then, for inputs A Eu {O, lImn and a Eu (O, 1)" ,

and letting (s, e ) = I(A, a)

Page 62: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 56

Proof:

We use the notation and definitions given in Section 3.2.1. We will show that if

U- - p, then E 0 (6). Consider Pr[(A, z, e) = (A', s', et)] .

Bound from above As in the proof of Lemma 39, let E denote the set of error m m c ) vectors, so that IEl = z!z1 (7). Since rn 5 L n J $ n, then 5 < =

C(4 C(4

Also, ITI = 2"'- and by the lower bound from Lemma 60 from Chapter 5, use IEl =

ZeJ (3 > .* 2 m H W -

Pr[(A,x, e) G (A', x', et)] =

Bound from below We bound Pr[(A, x, e) E (A', x', et)] 2 Pr[(A, x, e) r (A', x', e') E

U]. Take the bound from below as cornputed in the proof of Lemma 29, in Chapter 2.

Here, as in the proof of Lemma 39, earlier in this chapter, the set S here consists of

triples instead of pairs, so the proof of this bound shodd be changed accordingly, formally

speaking. Nevertheless, no result is affected by that, so it would be redundant to give

another proof of the bound. We have Pr[(A, z, e) = (At,s', e') E Lr] 2 $ We have found that $ < f [$fi] i.e. 6 >

Theorem 45 If nt $ n < lC(~)rnJ, then DLC(nf, rn) cc., DLC(n, m). The probability

for DLC(nf, rn) is > $. Proof: See Figure 3.4. Note that it corresponds to Figure 2.4 in Chapter 2. This proof

also is similar to the one of Theorern 35.

Page 63: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LWEAR CODES PROBLEM 57

Figure 3.4: The case nf 5 n 5 LC(~)rnl

Let algorithm J ( B , P), for B E {O, l)mn' and P E {O, l)", behave in the foLiowing way.

1. A = B with appended rows Ai EU {O, 1)" for n' + 1 5 i 5 n

2. Choose z Eu {O, l)"-n' indexed by n' + 1 5 i 5 n

3- a := p + C:=,t +, aizi

4. Apply algorithm I and transform its output

(a) (xf, e') := [ ( A , a)

(b) y' := x', truncated of its positions n' + 1 5 i 5 n (c) f' := e'

5. Output (y', j')

Statistical experiment: Uniformly select an mnf-bit B, an n'-bit y and an rn-bit j

such that w(f) 5 [~rn], and then compute (x', f') = J ( B , yB + /). Event of interest: The algorithm J is successful if x B+ f = x 'B+r and w(f ' ) 5 l ~ m J .

Claim: Pr[xB + f = x'B + f' & w(f ' ) 5 Lem]] > 2. Let x = y 1%. Then a = /3 + Cr=nt+, a;zi = y B + Cy=nt+, a;zi + f = X A + f , so that

Page 64: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3- REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 58

(A, a) is a properly distributed instance of Average case DLC(n, m).

That I is successful on (A, x A + f ) does not imply that J is succesfi on (B, y B + f). Nevertheless, it is sufficient for J to be successful that the returned (y' , f') = ( y , f ). For

which it is in turn sufficient that I returns (x', f') = (z, f). So we bound the probability

that J solves DLC(n', m) by

It suffices to show that for a mn-bit A, an n-bit y and an m-bit f such that w ( f ) 5

Leml chosen uniformly

which is given by Lemma 39.

Theorem 46 If nt 2 n 2 rC(~)rnl , then DLC(nt, rn) a., DLC(n, m). The pmbability

for DLC(nf, m) is R (5). Proof: See Figure 3.5. Note that it corresponds to Figure 2.5 in Chapter 2. This proof

also is similar to the one of Theorem 36.

m m

Figure 3.5: The case n' 2 n 2 r C ( ~ ) m ]

Page 65: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTLONS FOR THE DECODING OF LINEAR CODES PROBLEM 59

Let algorithm J(B,P) , for B E {O, l)mn' and P E (O, l)m, behave in the following way.

1. A:=rowsl _ < i < n o f B -

2. a := fl

3. Apply algorithm I and transform its output

(a ) (x', et) := I(A, a)

(b ) y' := x', padded with 0's in positions n + 1 5 i 5 nt

( c ) f' := e'

4. Output (Y', f)

Statistical experiment: Uniformly select an mn'-bit B, an n'-bit y and an rn-bit

Event of interest: The algonthm J is successfui if YB+/ = yrB+ f' and w ( j ' ) 5 l&mJ.

Condition the analysis on two classes of events; let E = [yi = O for al1 n + 1 5 i 5 nt] .

1. Subject to E, algorithm J has the same success probability as 1, that is, p E

2. Othenvise, subject to E = [y; = 1 for some n + 1 5 i 5 n'], we have

n'

uniformly distributed. So (A, yA + CY;~+, giBi + f ) is u n i f o d y distributed, and

(A, a) is uniformly distributed. By Lemma 44, I(A, a) returns ( x t , r ) such that

x r A + f' = a with probability St (5). 1t s a c e s that z ' A + ~ = a for y ~ + f =

y'B + f'. So J is successfui with probability R (5)-

Page 66: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 3. REDUCTIONS FOR THE DECODING OF LINEAR CODES PROBLEM 60

3.3 Notes

Theorems 38, 40, 45 and 46 in Section 3.2 were conjectured by Rackoff. Lemma 37 is

based on Lemma 27 from Chapter 2.

Page 67: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Chapter 4

Conclusion and open problems

4.1 Subset Sum

4.11 Conclusion

For each of the four theorems for Average case SS, we have found a reduction with,

for sorne constant c, probability of success > C. These theorems imply two hardness

t heorems.

In al1 theorems, we assume that the parameterization functions, g, g', h, and h' can

be computed easily.

Theorem 47 For any g'(n), hardness w.r. t . to the parameterization m' = b ( n ) implies

hardness w.T.~. to the parameterization m = n.

Theorem 28 and Corollary 33 together give a reduction, for n, m' positive integers,

such that S S ( n , m') M., SS(n , n) with p' > 2. Theorem 47 is stronger than the one implied by [IN96, Proposition 1.2). This propo-

sition, states the following. Suppose that for al1 constants cl > 1 and cz < 1, it is the case

that for al1 n E N+ sufüciently large g'(n) 2 g ( n ) 2 cln, or for all n E N+ sufnciently

large g'(n) 5 g(n) 5 czn. Then, hardoess w.r.t. to the parameterization rn' = g'(n)

Page 68: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 4. CONCLUSION AND OPEN PROBLEMS

implies hardness w.r.t. to the parameterization m = g(n) .

Theorem 48 For any hf(rn), hardness w.r.t. to the parameterization nt = hf(m) implies

hardness w.r.t. to the parameterkation n = m.

Theorems 35 and 36 together give a reduction, for nt, m positive integers, such that

SS(nf, rn) oc,, S S ( m , rn) with pt > 2. No such theorem is implied by the results of [IN96].

Relationship between Theorems 47 and 48 As mentioned in Chapter 1, Theo-

rems 47 and 48 appear to be related. We address the question as to whether or not they

are equivalent to one another, that is whether or not hardness with respect to any g' or

hf, implies hardness with respect to some appropriate h' or g', respectively. However this

is not the case, which we wiLl show through a counter-example.

We show that Theorem 45 does not imply Theorem 47. Suppose that i t is possible

to break the hardness assumption with respect to the parameterization m = n infinitely

often. Then it is possible to break the hardness assumption with respect to the param-

eterization n = m infinitely often. By Theorern 48, since it is possible to break the

hardness assumption with respect to the parameterizat ion n = n inhitely often, then

it is possible to break the hardness assumption with respect to the parameterization

hf(m) = Li/m] infinitely often.

Then we want to show that there is no function g'(n) such that the hardness assump-

tion with respect to g'(n) would be broken infinitely often. The only function for which

this could hold is the inverse of hf, since these are the only parameters for which we have

some knowledge. Hence, we take g'(n) = n3.

We want to show that it is not the case that this implies that we can break the

hardness assumption with respect to the parameterization g'(n) = n3 infinitely often.

This would be true if there were infinitely many m such that 1 6 1 = n, for some n,

and such that the hardness assumption fails on these par t idar m. However, we only

Page 69: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 4. CONCLUSION AND OPEN PROBLEMS 63

assumed that the hardness assumption fails on infinitely many m, and that set could

possibly exclude all perfect cubes.

We make the following observation. Suppose that for any g' or h', it would be possible

to find a h' or g', respectively, which is an exact inverse of the other, that is without floors

or ceilings. Then the two hardness theorerns would be equivalent.

Thus, the two hardness theorems state similar results, that complement each other.

4.1.2 Improvement s

We conjecture that the proof of Theorem 30 can be improved in the following way.

Without using repeated oracle calls, we were only able to get a probability bounded

below by 8. With two oracle calls, we were only able to get a probability bounded below

by <. It is only with an unbounded number of oracle caUs that we were able to bound

below the probability by $.

We conjecture that the bound below of $ is attainable without repeated oracle caiis.

To obtain this, we aIso conjecture that proof to consist in the analysis of the suggested

simpler algorithm mentioned at the b e g i ~ i n g of the proof of Theorem 30. It is the

algorithm by which the instance to be passed to the oracle is the same as the original

one, with appended random bits. It is a more "naturaln reduction aigorithm. This is how

the proof of Proposition 1.2, Part 2 is done in [IN96]. This proposition is weaker than

Theorern 30 in the sense that the later applies to more restricted values of (n, m), though

it needs only one call to the oracle for the reduction algorithm's success probability to

be bounded below by $.

Page 70: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 4. CONCLUSION AND OPEN PROBLEMS

Open problems

We do not know if the foilowing are true.

SS(n, n) a, SS(2n, 2n)

SS(n, n ) a., SS(n2, n2)

The reductions we have are of a very special nature, and we did not find a way

to analyze simultaneous variations of the two parameters. Nevert heless, Kabanets has

proven the following [Rac98].

Theorem 49 For < m < n, for some c constant, and for al1 E > O

SS(m, ml+=) oc., S S ( n log n, cn Log n)

The proof involves obtaining a lattice with a nd-unique shortest vector, using results

from [L085, FriSG], and hding this vector, using results from [Ajt96].

4.2 Decoding of Linear Codes

4.2.1 Conclusion

For each of the four theorems for Average case DLC, we have found a reduction with

probability of success > poly (5). These theorems imply two hardness theorems.

In al1 theorems, we assume that the parameterization functions, g, g', h, and hf can

be computed easily.

Theorern 50 For any g'(n), hardness w.r.t. to the parameterization mf = g'(n) implies

hardness w.r.t. to the parameterkation rn = L&J. Theorems 38 and Theorem 40 together give a reduction, for n,mf positive integers,

such that DLC(n,mf) a., DLC(n, &]) with, for some c constant, pf > 5.

Page 71: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 4- CONCLUSION AND OPEN PROBLEMS 65

Theorem 5 1 For any hf(m), hardness w.r.t. to the pararneteritation nf = hf(m) impl ies

hardness w-r-t. to the parameterizution n = [ C ( ~ ) r n ] .

Theorems 45 and 46 together give a reduction, for nt, m positive integers, such that

DLC(nf, n) oc., DLC([C(E)~J, rn) with, for some c constant, pf > 5. The relationship between Theorems 50 and 51 is the same as the one between Theo-

rems 47 and 48.

4.2.2 Improvements

Statement of the theorems In the proofs of Lemmata 39 and 44, we use Lemma 60

from Chapter 5, while Lemma 61 gives tighter bounds. Changing the rates in the Theo-

rems' statements to, for a constant c, n being about (C(E) + ( c / m ) log m)m would leave

Lemma 39's bouod unchanged. On the other hand, this change would better Lemma44's

bound by a factor of Jm, and this would better Theorems 40 and 46 by a factor of Jm as well.

However, we chose to leave the theorems statements in their simpler forms. Note that

n being about (C(E) + (c /m) log m)m is letting n be larger than in the hardness theorems

in the previous section. The expression (clm) log m)m approaches O as m increases. For

increasing m, that n is about (C(E) + (c /m) log m)m tends to be the same as n being

about C(E)m.

Improvements for the reductions As in Section 4.2.2, we conjecture that the prob-

ability of Theorem 40 can be improved by a factor 2 without repeated oracle calls.

Analyzing a more "naturai" reduction, as discussed in Section 4.1.2, should give this

resul t .

Rounding The statements of the theorems of this chapter are similar to the ones in the

previous chapter. They however contain an additional, fixed parameter E, the probability

Page 72: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 4. CONCLUSION AND OPEN PROBLEMS

of error on a bit, in turn affecting the the dimensions of the problems.

First, the dimensions of the problerns are affected by the capacity C(E) associated

with ë. On one hand, a theorem's assumption may be of the form n 5 C(~) rn , although

we do not wish to require that C(~)rn be an integer l. Since n E N, the inequality

n 5 C(&)m is the same as n 5 LC(e)m]. We thus choose to round-down this number,

yielding a new assumption of the form n 5 [C(~) rn J . Similarly a theorem's assumption

of the form n 2 C ( ~ ) r n becomes a new assumption of the form n 1 [C(r)ml.

Changing the floors and ceilings respectively for ceilings and floors in the statements

of the theorems makes them stronger, while preserving the results. This is because

a difference of 1 in either dimension of DLC wiU not affect the success probabilities

significantly. In fact, a difference of the order of the logarithm of a dimension does not

affect our results, as these are formally stated towards the end of Section 1.3.2-

Similarly, the DLC problem statement of Section 1.3.2 includes that an m-bit enor

vector may have weight < &m. FoUowing the same logic, we instead state that the weight

is 5 1 ~ r n J -

Choice of distribution of the error In the references on coding theory mentioned

in Section 1.3, the standard choice of distribution of the error is a Bernoulli distribution

of parameter E. We instead chose to uniformly select the error vector frorn all the vectors

with weight no larger than l&rnrJ. This makes it easier to generate an instance of DLC

t han wit h a Bernoulli-distributed error.

In the case of a Bernoulli-distributed error, the bits of the error vector are one's

with probability E , so there is no guarantee that a random error vector has fewer than

a fraction s of one's, although that is their expected fraction. Tberefore, an instance

generating algorithm is needed, so that instances with error vectors with two many one's

are rejected. It can be a Las Vegas algorithm, which always returns a correct answer,

' Which would be impossible if C(c) were irrational. Such a requirement would restrict our choice of E and m. Clearly, we wish to avoid such restrictions.

Page 73: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

but has probabilistic ~ n n i n g time. [MR95, Section 1.21

ALI results in Chapter 3 wouid be affected by a change to a Bernoulli-distributed error.

For instance, in the proofs of Lemmata 39 and 44, Pr[e = e l would not be the inverse

of an approximation to the partial s u m of a row of Pascal's triangle. If the number of

one's was not lirnited, we would have Pr[e = e l = (1 - 2 ~ ( 1 For a Limit of l&mJ one's, the expression of this probability is more compiicated. The binomial distribution

has mean &rn and is in some sense comparable to the distribution on errors that we

chose. Hence, it is possible that changing the distribution of the error from binomial to

Bernoulli wodd not affect the results dramaticaiiy.

Open problems

We do not know if the following are true.

As for SS, the reductions we have are of a very special nature, and we did not find a

way to analyze simultaneous variations of the two parameters.

In Section 1.3, E was fixed to be a constant between O and i. There is in fact no

reason at al1 why E should be constant, other than it allowed us to get our results.

Another natural version of DLC could hence have three parameters, (n, rn, E ) , where

E varies as a function of n or m. Note that if E is very small, then a very large m is

needed so that L E ~ J > O. Otherwise, as mentioued before for E = O, DLC would be

trivial since it would be efficiently solvable via Linear programrning.

A simple open question of t his form is the following. For two different fixed constants

E and E', can we prove the following?

DLC (n, Ln] W') ,et) a.. D L c (n, l&j ,.)

Page 74: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Chapter 5

Appendix: Lemrnat a

The definitions and results stated without proofs, in this Chapter, are taken from books

or papers. The details of their references are in Section 5.5.

5.1 Jensen's inequality

Definition 52 A function f is convex (or concave up) if its value at the midpoint of

any intenial in its domain does not ezceed its value ut the ends of the intentai. In other

U I O P ~ S , f is convex on [a, 61 if for any xi, x2 E [a,6]

Remark 53 If f has a second derivative in [a, b], then f LF conuez on [a, b] if and only

if fr ' (x) > O for al1 x E [a, b].

The foilowing lemma in used in Chapter 2, in Lemma 29. It is also involved in

Chapter 2, in Lemma 34 and in Chapter 3, Lemmata 44 and 39, although it does not

appear in them explicitly (their proofs refer to the one of Lemma 29).

Page 75: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Lemma 54 [Jensen's inequalie] For eueryfinction f (x), convez on an open interual

(a , b ) , the following inequality holds for any value of n

where x; is a point on the interual (a , b) and n is a positiue integer.

5.2 Bounds on the entropy function

The definition of the entropy function was first given in Chapter 1, Definition 15.

Definition 55 [Entropy function] For O < E < 1 , the entropy function is defined as

where the l o g a d h m are taken in buse 2 .

The following lemma is used in this chapter, in Lemmata 57, 60, 61, and 63.

Lemma 56 The entropy function H(E) is strictly increasing on O < E < !.

While the entropy fuoction is stridly increasing on O < E < $, given E' 5 E but

sufficiently close to E , we bound H(E') to be larger than H ( E ) minus a positive term that

tend to O.

The following lemma is used in this chapter, in Lemmata 60 and 62.

Lemma 57 Let O < E < $ and É = e, with rn E IV+. Then

1 H(E') > H (E) - - log (:)

m

Proof: Since E' = @, then E - $ < E' 5 E. By Lemma 56, H(E') > H(E - h). We

expand the right hand side and reduce it to the form H ( e ) minus a positive constant over

m.

Page 76: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Log (1 + (1 - l ) &)n2

l+ 1-e rn On the fourth lioe above; we ignored the te- (a - $) log (e,), 1-- since it is c m

nonnegative. Next, use (1 + i)" 5 et for all t , n E R+, with t = & and n = m.

1 - € H ( E - J - ) > H ( e ) - m (log + 1% ( e * ) )

The last &ne cornes from the facts that 1 - E < 1, and that since O < E < !, we have

that e < e k < e2. H

Notice that $ > 1. So log (5) > O is a positive constant, and 6 log

tends to O as m increases-

Page 77: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

5.3 Bounds in Pascal's triangle

The following lemma is used in this chapter in Lemmata 61, and 63.

Lemma 58 [Bounds for a binomial coefficient] Suppose Em is an integer where

0 < & < 1 . Then

The following lemma is used in this chapter, in Lemma 60-

Lemma 59 [Bounds for a sum of a row of Pascal's triangle] Let O < E < and

m E W. Suppose em is an integer . Then

The following lemma is used in Chapter 3, in Theorems 39 and 44.

Lemma 60 [Bounds for a sum of a row of Pascal's triangle (General Form)]

Let O < E < $ and rn E W .

Proof: Let e' = 5, then O 5 E' 5 E.

First consider the RHS. By Lemma 59

Since O < e' 5 E and since the entropy function is increasing between O and f by

Lemma 56, we have H(E') 5 H(E). It follows that

Page 78: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Now, consider the LHS. We have E' 5 E. Furthemore, suppose that E' > O (we wili

deal wit h the case E' = O separately), so that 1 - E' < 1. Then, by Lemma 59

B y Lemma 57, H ( d ) > H(E) - $ log (f ) . Then

It follows that

Now suppose that E' = O and consider the LHS. Then zEJ ((C = ( y ) = 1, so we

want to show that j & 2mH(c) < 1. As before, by Lemma 57, 2mH(C) < $ 2mH(c')-

The following lemma is used in this chapter, in Lemma 62.

Lemma 61 [Bounds for a sum of a row of Pascal's triangle (Lmproved)] Let

O < E < f , and let m E l?i+ be suficientiy large, i.e. m 2 [ l / ~ ] .

Proof: The LHS is the same as Lemma 60. We want to bound the RHS using the

upper bound of Lemma 58, so we want to RHS to be bounded above by (L~ , ) times a

constant.

Let r = e. Since Lem] 5 em, we have that

(k) r(k: l) rLmJ-*(,:,)

Page 79: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

The last Iine is obtained, frorn rk being a geometric series, with O < r < 1.

Let E' = e. By the upper bound of Lemma 58

Since ef 5 E , we have that & $ &- Fmm Lemma 56, we have that 2mH(C') < - 2mH(C).

Since m 2 1 / ~ , we have that E' > 0, and lmz] 2 1.

T herefore

5.4 Technical lemmata

5.4.1 Error vectors fkom an instance of DLC(n, m', E )

For m' > rn, fix some m bit positions of any m'-bit vector f . This defines an m-bit vector

e.

We are interested in m'-bit f chosen accordingly to the distribution of input error

vectors for average case DLC(n, m', E ) . in other words, f is uniformly chosen among al1

m'-bit strings with weight 5 ~ m ' .

Page 80: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

We are interested in m-bit e such that w(e) 5 ~ m , and such that au of these are

equally likely. In other words, we are interested in e distributed as are the input vectors

for average case DLC(n,m,e). For e to be properly distributed, it is sdicient that

the other m' - m bits of / has weight exactly l&rn'J - L&mJ (see proof of Theorem 38

in Chapter 3). Define such error strings to be "properly distributed input errorsn for

DLC(n, m, E ) .

The following lemma is used in Chapter 3, in Theorem 38. Part of the proof of

Lemma 63 is similar.

Lemma 62 [fiaction of properly distributed input errors for DLC(n, m, E)] Let

O < s < ! and m, m' E W. If m' is s u . e n t l y large and m' > m, then the fraction of

properly d is tnh ted input e m r s for DLC(n, m, E ) is

Proof: The proof consist in applying Lemma 61 three times- To use this lemma for the

for the first terrn, ( I m ~ ~ ~ ~ m l ), note that [~(rn' - m)l 2 lcmfJ - l&mJ 2 l&(rn' - m)J.

Denoting the fraction part of am by f ruc (~m) , this is because

The last line above holds since -1 c fruc(emr) - f r ac (~m) < 1. Similarly,

[.(rd - m) 1 2 l~m ' ] - l ~ m ] .

To use the above mentioned lernma for the first term, we need that l ~ m ' j - l&mJ 5

f (m' - m). This is true because

Page 81: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Use the above mentioned lemma's lower bound twice, and its upper bound, once.

Since l~rn'J > ~ m ' - 1, and since m' - m < m', then for rn' sufficientiy large

l&m'J > €mf- 1 - - E - l/mf E 1- m(mf - m) mm' m 2m

Overail

5.4.2 Error vectors from an oracle solution to DLC (n, m, E )

Suppose an m-bit ermr vector e that has been returned by an oracle for DLC(n, m, E ) .

Also suppose that e has maximal weight, that is, weight exactly l~rnJ.

We select m' < m bits of e randomly and without replacement. This defines an

mf-bit vector f'. We are interested in f' such that w(f ' ) 5 ~ m ' . In other words, we

are interested in f' that are as the error vectors in correct solutions to DLC(n, m', e)

(see proof of Theorem 40 in Chapter 3). Define such strings to be Yproperly distributed

output errorsn for DLC(n, m', E ) .

The f"s are distributed accordingly to the hypergeometric distribution [Fe150, Section

2-51. The desired probability is

P.[w(ff) 5 [&m']

exact ly

Page 82: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

CHAPTER 5 . APPENDIX: LEMMATA

The following lemma is used in Chapter 3, in Theorem 40.

Lemma 63 [Fkaction of properly distributed output errors for DLC(n, m', E ) ]

Let O < a < f and m, m' E W. If m > m', then the fraction of properly distributed

output errors for DLC(n, m', E ) is

Proof: We wiLl show that some nurnber of the terms of the sum are large enough, so

that their sum is bounded below by a constant.

Firstly, we show that the temi with k = l&rntJ - [&(1 - ~)rn'(rn - mf)/rn is a lower 1 bound for al1 terms with l ~ r n ' j - L&(l - ~)rn'(rn - rnr)/rnj 5 k < l&rnf]. Secondly,

we show a lower bound on the te- with k = L&rn'J - JE(^ - ~)rn'(rn - rnf)/rn] . Finally, we use the first two results to show that the sum OF the terms with l&rnfJ -

Note that the floor in LE^] can be dropped, for the following reason. Dropping the

floor is equivalent to assuming that E n z is an integer. This is equivalent to considering

strings of length larger than m, but with the same weight, since L&mJ and Ern differ by

less than one. Therefore, the probability obtain when dropping the fioor in [ ~ m ] is a

lower bound for the probability when the floor is not dropped.

The terms of the sum are increasing w.r.t. k

For k $ LE^'] - 1, we show that

Page 83: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Consider the ratio

€rnl(l - &) (m - m') < ( 1 - e)ml€(m - ml)

Lower bound on the smallest term to be summed

We want a lower bound on

Use Stirling's approximation to obtain bounds on binomial coefficients. For x E R+

and x > 1

In the case that x = em' < 1 , we have k = O, so there is only one terrn in the sum,

which is bounded as foilows.

Page 84: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

It is oot possible that Em - k = O, because it is rtot possible that E r n = [€mrJ, since

m > m' and Ern is an integer. Thecefore, in all remaining cases, x 2 1 in (2)' so we

may now apply the above bounds for the binomial coefficients.

Let j = 1 ~ 4 1 - e)mf(m - mr)/m] . We fkst approxirnate the expression with the

bound for binomial coefficients. Then, we approximate some of the floors, and then we

collect the terms we wish to cancel.

Page 85: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

We cancel some terms and obtain

2 LmlJ-i cm'-j-1 Consider the term (1 - < ( 1 - 5 ) . Use (1 fi)" 5 et with

( .+, rQ-c)ml+j (1-e)m'+j+i Consider the term 1 + (lLc)ml (l + &-&) - Use (1 + +)n 4

et with = ,,ZI a n d n = ( l - ~ } m ' + j + l .

( ) c(m-m')+j+l

Consider the term 1 + c ( ~ ~ ~ r ) < (1 + ~ ( m - m l ) j+' . Use (1 + e)" 5

a n d n = ~ ( r n - m t ) + j + l . et with L = c ( ~ _ m , )

Use (1 +:)" 5 et with $ = - (1-.z)(m-m1) j- i andn= (1-e)(m-mt) - j - 1 .

Page 86: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

The surn of the exponents of the last four terms is

- j ( € m 1 - j - 1 ) + ( j + l ) ( ( l - € ) m f + j + l ) ( j + l ) ( & ( n - m ' ) + j + I ) &rnf ( 1 - €)ml +

~ ( m - m') + - ( j - 1 ) ((1 - &)(m - m') - j - 1 )

(1 - &)(nt - mf)

1 1 j2 [J- +- + +

&rnt ( 1 - &)m' &(rn - m') ( 1 - &)(rn I l - ml)

Overall

Lower bound on the sum

First, we need a bound on the term with k = Lem']. From the proof of Lemma 62, it is

easy to see that

Page 87: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Finally, we s u m the terms we bounded.

C

= n(q

5.5 Notes

For a proof of Jensen's inequality, see [Jen06] or [Mit70, p. 164-1651. The former is the

original paper, the Lat ter shows a generalization of Jensen's inequality.

A more detailed motivation for Definition 55 can be found in [Ros98, Section 9-31.

Lemma 56 was taken from [Ham86, Section 6-31, where it is proven and illustrated.

Section 5.3's Lemmata 58 and 59 were taken from [MS77, Section 10.1 11, where their

proofs are given. A discussion of the fact that there is no closed form for the partial

sum of a row of Pascal's triangle can be found in [GKP89, p. 165, 1661 and a proof, in

[GKP89, Chapter 51. For a proof of Stirling's approximation for the factoriai, see (Mit70,

p. 181-1851.

Page 88: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

Bibliography

[Abr63] N. Abramson. Information theory and coding. McGraw-HiU, 1963.

[ABSS97] S. Arora, L. Babai, J. Stern, and 2. Sweedyk. The hardness of approximate

optima in lattices, codes and systems of Linear equations. Journal of Computer

and System Sciences, 54:3 17-331, 1997.

[Ajt96] M. Ajtai. Generating hard instancesoflatticeproblems. In Proceedings of

The Twenty-Eighth Annual ACM Symposium On The Theory Of Computing

(SToC '96), pages 99-108, New York, USA, May 1996. ACM Press.

[BMv78] E. R. Berlekainpl R. J . McELiece, and H. C. A. van Tilborg. On the in-

tractability of certain coding problems. IEEE Trans. Information Theory,

IT-24(3):384-385, 1978.

[Bri83] E. F. Brickell. Solving low density knapsacks. Proceedings Crypto '83, pages

25-37, 1983.

[BvHS2] A. Borodin, J. von zur Gathen, and J. Hopcroft. Fast parallel matrix and gcd

computations. Proceedings of the Twenty- Third Annual IEEE Symposium on

Foundations of Computer Science, pages 65-71, 1982.

[CJLf92] M. J. Coster, A. Joux, B. A. LaMacchia, A. M. Odlyzko, C. P. Schnorr, and

J. Stern. Improved low-density subset s u m algorithms. Computationaf com-

plexity, 2(2):111-128, 1992.

Page 89: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

1. Dumer, D. Micciancio, and M. Sudhan. Hardness of the minimum distance

of a Linear code. To appear in FOCS, 1999.

R. M. Fano. A heuristic discussion of probabilistic decoding. IEEE Transac-

tions on Information Theory, 9:64-73, 1963.

W . Feiler. An lntroduction fo Probability Theory and its Applications. John

Wiley & Sons, hc., New York, 1950.

G- D. Forney, Jr. Convolut ional codes iii: sequential decoding. in formation

Control, 25:267-297, 1974.

A. M. Frieze. On the lagarias-odlyzko algorithm for the subset sum problem.

SIAM Journal of Computation, 15(2):536-539, 1986.

R. G. Gallager. Information Theory and Reliable Communiation. John Wiley

and Sons, Inc., 1968.

M.R. Garey and D.S. Johnson. Cornputers and Intractabifity: A Guide to the

Theory of NP-Conpleteness. W.H. Freeman and Company, New York, 1979.

R. L. Graham, D- E. Knuth, and O. Patashnick. Concrete Mathernatics.

Addison-Wesley, Reading, Massachusetts, 2 989.

2. Galil and 0. MargaLit. An almost linear-time algorithm for the dense subset-

sum problem. In Javier Leach Albert, Burkhard Monien, and Mario Rodriguez-

Artalejo, editors, Automata, Languages and Programming, 18th lnternational

Colloqvium, volume 510 of Lecture Notes in Cornputer Science, pages 719-727,

Madrid, Spain, 8-12 J d y 1991. Springer-Verlag.

O. Goldreich. Foundations of cryptography (fragments of a book - version

2.03). manuscript, February 1998.

Page 90: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

[INSG]

[JenOb]

[Kar 721

[KPS97]

[LOS51

[L PS8]

[Lub96]

[Mit 701

O. Goldreich. Modem Cryptography, Probabilistic Proofs and Pseudorandom-

ness. Springer, 1999.

R- W. Hamming. Coding and Information Theory. Prentice-Hall, New Jersey,

1986.

J.E. Hopcroft and J.D. Ullman. Introduction to Automata Theory, Languages,

and Computation. Addison-Wesley, 1979.

R. Impagliazzo and M. Naor- Efficient cryptographie schemes provably as

secure as subset-sum. Journal of Cryptology, 9: 199-216, 1996-

J. L. W. V. Jensen- Sur les fonctions convexes et les inégalités entre les valeurs

moyennes. Acta Math., 30:175-193, 1906.

R. M. Karp. Reducibility among combinatorial problems. Compfexity of Com-

puter Computations, pages 85-103, 1972.

H. Kellerer, U. Pferschy, and M. G. Speranzak. An efficient approximation

scheme for the subset-sum problem. Lecture Notes in Cornputer Science,

I S A A C '97, 1350:394-403, 1997.

J. C- Lagrias and A. M. Odlyzko. Soiving low density subset sum problems.

Journal of the ACM, 32:229-246, 1985.

H. R. Lewis and C.H. Papadimitriou. Elernents of the Theory of Computational

Cornplezity, 2nd edition. Prentice-Hall, 1998.

M. Luby. Pseudorandomness and Cryptographie Applications. Princeton Uni-

versity Press, Princeton, N-J., 1996.

D. S. Mitrinovit. Analytic Inequalities. Springer-Verlag, Berlin, 1970.

Page 91: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

[MR95] R. Motwani and P. Raghavan. Randomized Algorithm Cambridge University

Press, 1995.

[MS77] F. J . MacWiUiams and N. J. A. Sloane. The Theory of Error Correcting Codes.

North-Holland Pub. Co., Amsterdam, New York, 1977.

[Od190] A. M. Odlyzko. The rise and faU of knapsack cryptosystems, in cryptology and

computational number t heory. A MS Proc. Symp. Appl. Math, 42:75-88, 1990.

[Pap94] C . H . Papadimitriou. Compu tational C~mple~ ty . Addison-Wesley, Reading,

Mass., 1994.

[PW72] W. W. Peterson and E. J. Weldon Jr. Error-Correcting Codes, 2nd edition.

MIT, 1972.

[Rac98] C. Rackoff. Open questions in cryptozoology. Talk in Worhhop on Interactive

Proofs, PCP's and the Fundamentais of Cryptography, at the Fields Institute,

University of Toronto, unpublished, May 1998.

[Rac99] C. Rackoff. Topics in cryptography. Course notes, Spring 1999.

[RosSS] S. Ross. A First Course in Pro babiiit y, 5th edition. Prentice-Hall, New-Jersey,

1998.

[Sha48] C. E. Shannon. A mathematical theory of communication. Bell Systems Tech-

nical Journal, 27:379-423, 623-656, 1948.

[S ha491 C. E. Shannon. Communication t heory of secrecy systems. Bell Systems Tech-

nicai Journal, 28:656-715, 1949.

[SS79] R. Schroeppel and A. Shamir. A T * S2 = O(2") time/space tradeoff for

certain NP-complete problems. In 20th Annual Symposium on Foundations

of Computer Science, pages 328-336, Long Beach, Ca., USA, October 1979.

IEEE Computer Society Press.

Page 92: AVERAGE CASE REDUCTIONS SUBSET-SUM …...Finally, we try to obtain one-way functions frorn hardness assumptions on average case versions of NP-complete problems.This thesis concentrates

[Sti95] D. R. Stinson. Cryptogruphy - Theory and Practice. CRC Press, 1995.

[SW64] C. E. Shamon and W. Weaver. The Mathematical Theory of Communication.

Illini Books, 1964.

[TS86] G. Tinhofer and H. Schreck. The bounded subset sum problem is almost every-

where randomly decidable in O( N) . Information Processing Letters, 23(l): 1 1-

17, July 1986.

[TV911 M. A. Tsfasman and S. G. WSdust. Aïgebmic Geornetry Codes. Dodrecht:

Kluwer Academic, 1991.

[Var971 A. Vardy. The intractability of computing the mininïùn distance of a code.

IEEE Trans. Infornation Theory, IT-43(6), November 1997.

[vL82] J. H. van Lint. Introduction to Coding Theory. Springer-Verlag, New York,

Heidelberg, Berlin, 1982.

[vLe9O] J . van Leeuwen editor. Handbook of Theoretical Computer Science. MIT Press,

Cambridge, Mass., 1990.

[Wi184] H. S. Wilf. Backtrack: an O(1) expected tirne algorithm for the graph coloring

problem. Information Processing Letters, 18(3):119-121, March 1984.

[Wo160] J . Wolfowitz. A note on the strong converse of the coding theorem for the

general h i t e memory channel. Information and Control, 3(1):89-93, mar 1960.

[Yam97] T. Yamakami. Average case computational cornplexity theory. Technical Re-

port 307/97, Department of Computer Science, University of Toronto, 1997.