Dov Gordon & Jonathan Katz, University of Maryland

Partial Fairness in Secure Two-Party Computation


Page 1: Partial Fairness in  Secure  Two-Party Computation

Dov Gordon & Jonathan Katz

University of Maryland

Page 2: Partial Fairness in  Secure  Two-Party Computation

What is Fairness?

Before the days of secure computation… (way back in 1980) it meant a “fair exchange”:
of two signatures
of two secret keys
of two bits
certified mail

Over time, the notion developed to include general computation: F(x,y): X × Y → Z(1) × Z(2)

Page 3: Partial Fairness in  Secure  Two-Party Computation

Exchanging Signatures [Even-Yacobi80]

(Slide illustration: the parties exchange signature pieces round by round, each asking “Does that verify?” and hearing “NO.”, until one round when the answer is “Yes!!” and the other party is left a “Sucker!”)

Impossible: if we require both players to receive the signature “at the same time”.

Impossible: later, in 1986, Cleve would show that even exchanging two bits fairly is impossible!

Page 4: Partial Fairness in  Secure  Two-Party Computation

“Gradual Release”

Reveal the output “bit by bit”! (Each released bit halves the brute-force time.)
Prove each bit is correct and not junk.
Assume that the resulting “partial problem” is still (relatively) hard.
Notion of fairness: on an early abort, both parties need almost equal time to recover the output.

[Blum83, Even81, Goldreich83, EGL83, Yao86, GHY87, D95, BN00, P03, GMPY06]
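
As a rough illustration of the idea only (a minimal sketch, not any of the cited protocols; every name below is hypothetical):

    # Hypothetical sketch of bit-by-bit gradual release.
    # Each revealed bit halves the brute-force work left for the other party.

    def remaining_work(total_bits: int, bits_released: int) -> int:
        """Size of the search space left after `bits_released` bits are known."""
        return 2 ** (total_bits - bits_released)

    def gradual_release(output_bits):
        """Yield the output one bit per round; a real protocol would also prove each bit correct."""
        for i, bit in enumerate(output_bits, start=1):
            yield i, bit, remaining_work(len(output_bits), i)

    for round_no, bit, work_left in gradual_release([1, 0, 1, 1]):
        print(f"round {round_no}: bit {bit}, brute-force space left = {work_left}")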

Page 5: Partial Fairness in  Secure  Two-Party Computation

“Gradual Convergence”

Reduce the noise, increase the confidence (the probability of correctness increases over time).
E.g., resulti = output ⊕ ci, where the noise ci → 0 with increasing i.
Removes assumptions about computational resources.
Notion of fairness: almost equal confidence in the output at the time of an early abort.

[LMR83, VV83, BG89, GL90]
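
A toy sketch of the convergence idea for a single output bit; the noise schedule below is an illustrative assumption, not taken from the cited works:

    import random

    def noisy_result(output_bit: int, i: int, r: int) -> int:
        """resulti = output XOR ci, where Pr[ci = 1] shrinks toward 0 as i grows."""
        noise_prob = 0.5 * (1 - i / r)          # illustrative schedule only
        ci = 1 if random.random() < noise_prob else 0
        return output_bit ^ ci

    output = 1
    for i in range(1, 11):                       # confidence in resulti grows with i
        print(f"round {i}: resulti = {noisy_result(output, i, 10)}")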

Page 6: Partial Fairness in  Secure  Two-Party Computation

Drawbacks (release, convergence)

Key decisions are external to the protocol:
Should a player brute-force the output? Should a player trust the output?
If the adversary knows how the decision is made, it can violate fairness.

Fairness can be violated by an adversary who is willing to:
run slightly longer than the honest parties are willing to run, or
accept slightly less confidence in the output.

No a priori bound on the honest parties’ running time.
Assumes known computational resources for each party.
If the adversary has prior knowledge, they will receive “useful output” first.

Page 7: Partial Fairness in  Secure  Two-Party Computation

Our Results

We demonstrate a new framework for partial fairness.

We place the problem in the real/ideal paradigm.

We demonstrate feasibility for a large class of functions.

We show that our feasibility result is tight.

Page 8: Partial Fairness in  Secure  Two-Party Computation

Defining Security (2 parties)

Real world: the parties, holding inputs x and y, run the protocol; each party ends up with a view and an output.

Ideal world: the parties send x and y to a trusted party, which returns F1(x, y) to the first party and F2(x, y) to the second.

Page 9: Partial Fairness in  Secure  Two-Party Computation

Defining Security (2 parties)

Require: the (view, output) pair in the real world is indistinguishable from the (view, F1(x, y)) pair in the ideal world.

This is “Security with Complete Fairness”.

Page 10: Partial Fairness in  Secure  Two-Party Computation

The Standard Relaxation

Real world: as before, the parties run the protocol on inputs x and y.

Ideal world: the adversary receives its output F1(x, y) first, and then tells the trusted party either “continue” (the honest party also receives its output) or “abort” (the honest party receives nothing).

Page 11: Partial Fairness in  Secure  Two-Party Computation

The Standard Relaxation

Again, the real-world (view, output) pair must be indistinguishable from the ideal-world one.

This is “Security with abort”. Note: no fairness at all!

Page 12: Partial Fairness in  Secure  Two-Party Computation

Our Relaxation

Stick with the real/ideal paradigm.
Relaxation: the real world and the (relaxed-)ideal world need only be ε-indistinguishable*.

*I.e., for all PPT A, |Pr[A(real)=1] – Pr[A(ideal)=1]| < ε(n) + negl. (Similar to: [GL01], [Katz07])

“Full security” offers complete fairness, but it can only be achieved for a limited set of functions. “Security with abort” can be achieved for any poly-time function, but it offers no fairness! “ε-Security” sits between the two.

Page 13: Partial Fairness in  Secure  Two-Party Computation

Protocol 1

To compute F(x,y): X × Y → Z(1) × Z(2):
A functionality ShareGen takes x and y and defines values a1, …, ar and b1, …, br.
Each value is secret-shared between the parties: Alice holds a1(1), …, ar(1) and b1(1), …, br(1); Bob holds a1(2), …, ar(2) and b1(2), …, br(2), with ai(1) ⊕ ai(2) = ai and bi(1) ⊕ bi(2) = bi.

ai: output of Alice if Bob aborts in round i+1.
bi: output of Bob if Alice aborts in round i+1.
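
A minimal sketch of the 2-out-of-2 sharing ShareGen could use, assuming the backup values are encoded as integers and shared by XOR with a random pad (the helper names are hypothetical):

    import secrets

    def xor_share(value: int, bits: int = 32) -> tuple[int, int]:
        """Split `value` into two XOR shares; either share alone is uniformly random."""
        share1 = secrets.randbits(bits)
        return share1, share1 ^ value            # share1 XOR share2 == value

    def reconstruct(share1: int, share2: int) -> int:
        return share1 ^ share2

    ai = 7                                       # e.g. Alice's backup output for some round i
    ai_1, ai_2 = xor_share(ai)                   # ai(1) goes to Alice, ai(2) goes to Bob
    assert reconstruct(ai_1, ai_2) == ai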

Page 14: Partial Fairness in  Secure  Two-Party Computation

Protocol 1 (similar to: [GHKL08], [MNS09])

(Slide diagram: starting from the ShareGen output on inputs x and y, the parties exchange shares round by round; in round i each sends the other's missing share, so Alice reconstructs ai and Bob reconstructs bi, for i = 1, …, r.)

Page 15: Partial Fairness in  Secure  Two-Party Computation

Protocol 1 (continued)

(Slide diagram, only partially recoverable: if a party aborts mid-protocol, the other party falls back on its latest reconstructed backup value; e.g., if Alice aborts in round i after learning ai, Bob outputs bi-1.)

Page 16: Partial Fairness in  Secure  Two-Party Computation

Protocol 1 (continued)

Choose a round i* uniformly at random.
For i ≥ i*: ai = F1(x,y) and bi = F2(x,y).
For i < i*: ai = F1(x, Y) where Y is uniform, and bi = F2(X, y) where X is uniform.

How do we choose r?
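
The rule above can be sketched in code (the choice of r is taken up on the next slides). This assumes F1 and F2 are ordinary Python functions over small explicit domains; all names are hypothetical:

    import random

    def make_backup_values(F1, F2, x, y, X_domain, Y_domain, r):
        """Return (i_star, a_values, b_values) for rounds 1..r, following the rule above."""
        i_star = random.randint(1, r)            # the switch-over round, hidden from both parties
        a_vals, b_vals = [], []
        for i in range(1, r + 1):
            if i >= i_star:                      # from round i* on: the true outputs
                a_vals.append(F1(x, y))
                b_vals.append(F2(x, y))
            else:                                # before i*: outputs on random counterpart inputs
                a_vals.append(F1(x, random.choice(Y_domain)))
                b_vals.append(F2(random.choice(X_domain), y))
        return i_star, a_vals, b_vals

    # Toy usage with a 1-bit AND-like function:
    F1 = F2 = lambda a, b: a & b
    i_star, a_vals, b_vals = make_backup_values(F1, F2, 1, 1, [0, 1], [0, 1], r=5)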

Page 17: Partial Fairness in  Secure  Two-Party Computation

Protocol 1: analysis

What are the odds that Alice aborts in round i*?
If she knows nothing about F1(x, y), it is at most 1/r. But this is not a reasonable assumption!
The probability that F1(x, Y) = z or F1(x, Y) = z’ may be small, so identifying F1(x, y) in round i* may be simple.

(Slide illustration: Alice, knowing “the output is z or z’”, watches the sequence a1, a2, a3, … and aborts as soon as z or z’ appears.)

Page 18: Partial Fairness in  Secure  Two-Party Computation

A Key Lemma

Consider the following game, parameterized by α ∈ (0,1] and r ≥ 1:
Fix distributions D1 and D2 s.t. for every z: Pr[D1 = z] ≥ α · Pr[D2 = z].
The challenger chooses i* uniformly from {1, …, r}.
For i < i*, choose ai according to D1; for i ≥ i*, choose ai according to D2.
For i = 1 to r, give ai to the adversary in iteration i.
The adversary wins if it stops the game in iteration i*.

Lemma: Pr[Win] ≤ 1/(αr)
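
A small simulation to sanity-check the bound (not a proof), assuming a simple concrete choice of D1 and D2 and an adversary that stops on the first value that looks like D2:

    import random

    def play_game(alpha_inv: int, r: int) -> bool:
        """One run of the game with D2 a point mass on 0 and D1 uniform on {0,...,alpha_inv-1},
        so Pr[D1 = z] >= (1/alpha_inv) * Pr[D2 = z] for every z, i.e. alpha = 1/alpha_inv."""
        i_star = random.randint(1, r)
        for i in range(1, r + 1):
            ai = random.randrange(alpha_inv) if i < i_star else 0
            if ai == 0:                    # adversary stops on the first value that "looks real"
                return i == i_star         # it wins only if that happens exactly at i*
        return False

    alpha_inv, r, trials = 8, 200, 50_000
    wins = sum(play_game(alpha_inv, r) for _ in range(trials))
    print(f"empirical Pr[Win] = {wins / trials:.4f}  vs  bound 1/(alpha*r) = {alpha_inv / r:.4f}")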

Page 19: Partial Fairness in  Secure  Two-Party Computation

Protocol 1: analysis

Take D1 = F1(x, Y) for uniform Y, and D2 = F1(x, y).
Then Pr[D1 = F1(x, y)] ≥ Pr[Y = y] = 1/|Y|, so α = 1/|Y|.
By the lemma, the probability that P1 aborts in iteration i* is at most |Y|/r.
Setting r = |Y|/ε gives ε-security.
Need |Y| to have polynomial size. Need ε to be 1/poly.
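
For concreteness, a quick numeric instance of this parameter choice (the specific sizes are assumptions for illustration):

    # Example: epsilon = 1/n with n = 100, and a domain of size |Y| = 50.
    n = 100
    epsilon = 1 / n          # epsilon = 1/poly
    Y_size = 50              # |Y| must be polynomial
    r = int(Y_size / epsilon)                      # r = |Y| / epsilon
    print(f"r = {r} rounds; Pr[abort at i*] <= |Y|/r = {Y_size / r} = epsilon")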

Page 20: Partial Fairness in  Secure  Two-Party Computation

Protocol 1: summary

Theorem: Fix a function F and ε = 1/poly. If F has a poly-size domain (for at least one player), then there is an ε-secure protocol computing F (under standard assumptions).

The protocol is private.
It is also secure-with-abort (after a small tweak).

Page 21: Partial Fairness in  Secure  Two-Party Computation

Handling large domains

With the previous approach, α = 1/|Y| becomes negligibly small; this causes r to become exponentially large.

Solution: if the range of Alice’s function, Z(1), is poly-size:
With probability 1-ε, choose ai as before: ai = F1(x, Y).
With probability ε, choose ai uniformly from Z(1).
Now Pr[ai = z] ≥ ε/|Z(1)| for every z, so α = ε/|Z(1)|, and r is polynomial again!

(Slide illustration: even an adversary who knows “the output is z or z’” can no longer pick out round i*, since dummy rounds also produce z or z’ with noticeable probability.)
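
A sketch of the modified choice of ai for the rounds before i*, assuming Alice’s range Z(1) is given as a small list; the names are hypothetical:

    import random

    def dummy_ai(F1, x, Y_domain, Z1_range, epsilon: float):
        """Choice of ai in rounds i < i*: usually F1(x, Y) for random Y, but with
        probability epsilon a uniform element of the range, so every z in Z(1)
        appears with probability at least epsilon / |Z(1)|."""
        if random.random() < epsilon:
            return random.choice(Z1_range)         # uniform over the poly-size range
        return F1(x, random.choice(Y_domain))      # as in Protocol 1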

Page 22: Partial Fairness in  Secure  Two-Party Computation

Protocol 2: summary

Theorem: Fix a function F and ε = 1/poly. If F has a poly-size range (for at least one player), then there is an ε-secure protocol computing F (under standard assumptions).

The protocol is private.
The protocol is no longer secure-with-abort.

Page 23: Partial Fairness in  Secure  Two-Party Computation

Our Results are Tight (wrt I/O size)

Theorem: There exists a function with super-polynomial size domain and range that cannot be efficiently computed with ε-security.

Theorem: There exists a function with super-polynomial size domain and poly-size range that cannot be computed with ε-security and with security-with-abort simultaneously.

Page 24: Partial Fairness in  Secure  Two-Party Computation

Summary

We suggest a clean notion of partial fairness:
Based on the real/ideal paradigm.
Parties have well-defined outputs at all times.

We show feasibility for functions with poly-size domain/range, and infeasibility for certain functions outside that class.

Open: can we find a definition of partial fairness that has the above properties, and can be achieved for all functions?

Page 25: Partial Fairness in  Secure  Two-Party Computation

Thank You!

Page 26: Partial Fairness in  Secure  Two-Party Computation

Gradual Convergence: equality

F(x,y) = 1 if x = y, 0 if x ≠ y.
In round i, Bob receives the noisy value b ⊕ ci (e.g., b ⊕ c1 = 0, b ⊕ c2 = 1, b ⊕ c3 = 1, …).

Suppose b = f(x,y) = 0 w.h.p. Then Alice can bias Bob toward outputting 1: she aborts early and hopes she is lucky. For small i, ci has a lot of entropy, so Bob’s output is (almost) random.

Accordingly, [BG89] instructs Bob to always respond to an early abort by aborting himself: he can’t trust that output, so he outputs ⊥. But what if Alice runs until the last round?
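
A toy simulation of this bias, under the assumption that Bob trusts whatever noisy value he holds when Alice aborts; the noise schedule is illustrative only:

    import random

    def run_once(r: int, b: int = 0) -> int:
        """Alice aborts the first time Bob's noisy value equals 1; Bob trusts what he holds."""
        for i in range(1, r + 1):
            noise_prob = 0.5 * (1 - i / r)         # ci has less entropy as i grows (illustrative)
            ci = 1 if random.random() < noise_prob else 0
            if b ^ ci == 1:
                return 1                           # Alice got lucky and aborts; Bob outputs 1
        return b                                   # Alice never got lucky; Bob outputs the true b

    trials = 20_000
    freq = sum(run_once(r=20) for _ in range(trials)) / trials
    print(f"Pr[Bob outputs 1] is about {freq:.2f}, although the true output is 0")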

Page 27: Partial Fairness in  Secure  Two-Party Computation

Gradual Convergence: drawbacks

If parties always trust their output, the adversary can induce a bias.

The decision of whether an honest party should trust the output is external to the protocol:
If made explicit, the adversary can abort just at that point.
If the adversary is happy with less confidence, he can receive “useful” output alone.
If the adversary has higher confidence a priori, he will receive “useful” output first.