
Thesis Proposal

Two-Party Computation and Fairness


by
Nikolaos Makriyannis
Ph.D. Advisor
Prof. Vanesa Daza
May 20, 2012
CONTENTS

1 Introduction
  1.1 Motivation & Historical Overview
  1.2 Preliminaries
    1.2.1 Notions from Complexity Theory
    1.2.2 Cryptographic Tools/Assumptions

2 State of the Art
  2.1 Two-Party Computation
    2.1.1 The Semi-Honest Case
    2.1.2 The Malicious Case
  2.2 Fairness
    2.2.1 Definition
    2.2.2 A Negative Result: Cleve (1986)
    2.2.3 Fair Computation: Gordon/Katz/Hazay/Lindell (2008)

3 Fairness in Two-Party Computation
  3.1 A generalization of Cleve's result
  3.2 A family of fair functions using the protocol of Gordon et al.

4 Future Directions

A Additional Notions
CHAPTER 1
INTRODUCTION
1.1 Motivation & Historical Overview
Bla Bla Bla
1.2 Preliminaries
1.2.1 Notions from Complexity Theory
In this section, we introduce the necessary notions from complexity theory. Namely, we define the major complexity classes which play an important role in cryptography. We assume (some) familiarity with the concept of a deterministic Turing machine. In particular, recall that a Turing machine $M$ takes as input a string of bits and after a finite amount of time (steps) outputs 1 or 0. In the former case, we say that $M$ accepts and in the latter, that it rejects.
Definition 1.1. A language $L \subseteq \{0,1\}^*$ is in $\mathcal{P}$ if there exists a deterministic Turing machine $M$ and a polynomial $p$ such that
- on input $x$, machine $M$ halts after at most $p(|x|)$ steps, and
- $M(x) = 1$ if and only if $x \in L$.
Definition 1.2. A language $L \subseteq \{0,1\}^*$ is in $\mathcal{NP}$ if there exists a Boolean relation $R_L \subseteq \{0,1\}^* \times \{0,1\}^*$ and a polynomial $p$ such that
- $R_L$ is in $\mathcal{P}$, and
- $x \in L$ if and only if there exists $y$ such that $|y| \leq p(|x|)$ and $(x,y) \in R_L$.

In particular, we say that $y$ is a witness for membership of $x$.
Randomized algorithms play an important role in cryptography. We will now tweak our definition of Turing machines to include such algorithms. A probabilistic Turing machine is a deterministic Turing machine with an additional read-only tape consisting of random bits. This way, the machine takes a unique action depending on its state, the read-symbol and the random bits. Alternatively, think of the machine as making random choices among available transitions.
Definition 1.3. A language $L \subseteq \{0,1\}^*$ is in $\mathcal{BPP}$ if there exists a probabilistic Turing machine $M$ and a polynomial $p$ such that
- on input $x$, machine $M$ halts after at most $p(|x|)$ steps, and
- for every $x \in L$, $\Pr[M(x) = 1] \geq 2/3$, and
- for every $x \notin L$, $\Pr[M(x) = 0] \geq 2/3$.
The class $\mathcal{BPP}$ captures what is commonly regarded as efficient computation. In cryptography, a minimal requirement is for a given scheme to be resilient against adversaries with that kind of computational power.
Remark 1.4. Note that from the definitions, it is clear that $\mathcal{P} \subseteq \mathcal{NP}$ and that $\mathcal{P} \subseteq \mathcal{BPP}$. The relation between $\mathcal{NP}$ and $\mathcal{BPP}$, however, is not known.

Figure 1.1: Known relations between $\mathcal{P}$, $\mathcal{NP}$ and $\mathcal{BPP}$.
Finally, we also introduce the class $\mathcal{P}/\mathrm{poly}$. This class, which is associated with non-uniform polynomial-time machines, is actually stronger than the previously defined complexity classes, but it is also quite unrealistic as a computational model. In cryptography, this model is used in a negative way, to show that even adversaries with that kind of computational power cannot cause harm.

Define a non-uniform polynomial-time machine to be a pair $(M, \bar{a})$ where $M$ is a two-input polynomial-time Turing machine and $\bar{a} = (a_1, \ldots, a_k, \ldots)$ is an infinite sequence of strings such that $a_k$ has length polynomial in $k$. For every $x$, we consider the computation of $M$ on input $(x, a_{|x|})$.
Definition 1.5. A language $L \subseteq \{0,1\}^*$ is in $\mathcal{P}/\mathrm{poly}$ if there exists a non-uniform polynomial-time machine $(M, \bar{a})$ and a polynomial $p$ such that
- on input $x$, machine $M$ halts after at most $p(|x|)$ steps, and
- $x \in L$ if and only if $M(x, a_{|x|}) = 1$.

The next theorem shows why proving security against non-uniform polynomial-time adversaries implies security against probabilistic adversaries.

Theorem 1.6. $\mathcal{BPP} \subseteq \mathcal{P}/\mathrm{poly}$.

Proof. See [4, p. 18].
It is customary to consider computational problems in $\mathcal{P}$ as easy. As we have already mentioned, we make the same consideration for problems in $\mathcal{BPP}$, i.e. problems that can be solved using randomization. Furthermore, assuming $\mathcal{P} \neq \mathcal{NP}$, there exist problems in $\mathcal{NP}$ that require super-polynomial running time in the worst case. Such problems are thus considered hard. Finally, the class $\mathcal{P}/\mathrm{poly}$ was introduced as a technical tool: it contains $\mathcal{BPP}$, and thus results regarding the former apply to the latter as a particular case. Note that non-uniformity is sometimes invoked in the literature in order to prove statements which are unknown or too complicated in the probabilistic model.

To conclude, we provide a figure with the suspected relation between the complexity classes we defined. Note that this is just conjecture. The only relations known so far are $\mathcal{NP} \supseteq \mathcal{P} \subseteq \mathcal{BPP} \subseteq \mathcal{P}/\mathrm{poly}$.

Figure 1.2: Suspected relation between complexity classes.
1.2.2 Cryptographic Tools/Assumptions
1. Commitment Schemes
2. (Strong) Zero-Knowledge Proofs
3. (Enhanced) Trapdoor Permutation & Oblivious Transfer
CHAPTER 2
STATE OF THE ART
2.1 Two-Party Computation
Definition 2.1 (Two-Party Functionality/Protocol).
- A two-party functionality $\mathcal{F} = \{f_n\}_{n \in \mathbb{N}}$ is a sequence of random processes such that each $f_n$ maps pairs of inputs to pairs of random variables (one for each party). The domain of $f_n$ is denoted $X_n \times Y_n$ and the output $(f_n^1, f_n^2)$.
- A two-party protocol $\Pi$ for computing a functionality $\mathcal{F}$ is a polynomial-time protocol such that on inputs $x \in X_n$ and $y \in Y_n$, the joint distribution of the outputs of any honest execution of $\Pi$ is statistically close to $f_n(x,y) = (f_n^1(x,y), f_n^2(x,y))$.
2.1.1 The Semi-Honest Case
Let $n$ denote the security parameter and suppose that parties $P_1, P_2$ wish to compute a functionality $\mathcal{F}$ on inputs $x$ and $y$ respectively by means of a two-party protocol $\Pi$. Let $\mathcal{A}$ denote the adversary corrupting one of the parties.

Definition 2.2. Adversary $\mathcal{A}$ is semi-honest (or passive) if she executes the protocol according to its instructions. In other words, a passive adversary attempts to gather information she is not supposed to know without disrupting the execution of a given scheme.
We now introduce the ideal model for the passive adversary. Let $\mathcal{T}$ denote an additional party, called the trusted party. We assume that $\mathcal{T}$ is honest and communicates with $P_1$ and $P_2$ privately. Consider the following (hypothetical) scheme (figure 2.1):
- Inputs: $P_1$ holds $1^n$ and $x$, $P_2$ holds $1^n$ and $y$. Adversary $\mathcal{A}$ is given an auxiliary input $z \in \{0,1\}^*$.
- Parties send inputs: $P_1$ sends $x$ to $\mathcal{T}$, followed by $P_2$ who sends $y$ to $\mathcal{T}$.
- Trusted party performs computation: Party $\mathcal{T}$ chooses a string $r$ uniformly at random and computes $f_n(x,y;r) = (f_n^1(x,y;r), f_n^2(x,y;r))$.
- Trusted party sends outputs: Party $\mathcal{T}$ first sends $f_n^1(x,y;r)$ to $P_1$ and then sends $f_n^2(x,y;r)$ to $P_2$.
- Outputs: The honest party outputs whatever $\mathcal{T}$ sent him, the corrupted party outputs nothing and the adversary outputs a probabilistic polynomial-time function of its view.
Figure 2.1: Ideal model for the passive adversary.
In the presence of a passive adversary corrupting one of the parties, the ideal model amounts to the best-case scenario in terms of security and privacy. Hence, our goal is to design protocols that emulate the ideal model. Before we formally define what "emulating the ideal model" means, we need to introduce further notation.
Definition 2.3. Given security parameter $n$, suppose that parties $P_1, P_2$ wish to compute $\mathcal{F}$ on inputs $x$ and $y$ by means of a two-party protocol $\Pi$. Let $\mathcal{A}$ denote an adversary corrupting one of the parties. The adversary is given the auxiliary input $z \in \{0,1\}^*$. Define the following variables.
- $\mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x,y,n)$ denotes the honest party's output in the ideal model.
- $\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x,y,n)$ denotes the adversary's view in the ideal model.
- $\mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n)$ denotes the honest party's output in the real model.
- $\mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n)$ denotes the adversary's view in the real model.
We can now formally define what it means for a protocol to be secure in the presence of a passive adversary.

Definition 2.4 (Security for passive adversary). Let $\Pi$ be a two-party protocol for computing $\mathcal{F}$. We say that $\Pi$ privately computes $\mathcal{F}$ if for every non-uniform probabilistic polynomial-time passive adversary $\mathcal{A}$ in the real model there exists a non-uniform probabilistic polynomial-time passive adversary $\mathcal{S}$ in the ideal model such that
$$\left\{\left(\mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n),\ \mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n)\right)\right\}_{x,y,z,n} \stackrel{c}{\equiv} \left\{\left(\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x,y,n),\ \mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x,y,n)\right)\right\}_{x,y,z,n}.$$

In other words, the above definition says that it is infeasible for any probabilistic polynomial-time algorithm to distinguish between the real and ideal models. In this respect, privately computing $\mathcal{F}$ by means of a protocol $\Pi$ essentially amounts to computing $\mathcal{F}$ in the ideal model by means of a trusted party.
Theorem 2.5. If there exists a collection of enhanced trapdoor permutations, then for any two-party functionality $\mathcal{F}$ there exists a protocol that privately computes $\mathcal{F}$.
2.1.2 The Malicious Case
A question that naturally arises is: what happens if the adversary behaves in an arbitrary way? We will now make no assumptions regarding the adversary's motives and inquire into the security of a given protocol. In the spirit of section 2.1.1, we will define a new ideal model and redefine security in terms of that model. As usual, given security parameter $n$, suppose that parties $P_1, P_2$ wish to compute a functionality $\mathcal{F}$ on inputs $x$ and $y$. Let $\mathcal{A}$ denote an adversary corrupting one of the parties. Consider now the following ideal scheme (figure 2.2):
- Inputs: $P_1$ holds $1^n$ and $x$, $P_2$ holds $1^n$ and $y$. Adversary $\mathcal{A}$ is given an auxiliary input $z \in \{0,1\}^*$.
- Parties send inputs: The honest party sends his input to $\mathcal{T}$; the corrupted party sends a value of the adversary's choice. Write $(x^*, y^*)$ for the pair of inputs received by $\mathcal{T}$.
- Trusted party performs computation: If either $x^*$ or $y^*$ is not in the appropriate domain, then $\mathcal{T}$ reassigns it to some default value in the input domain. Again, write $(x^*, y^*)$ for the resulting values. Party $\mathcal{T}$ then chooses a string $r$ uniformly at random and computes $f_n(x^*,y^*;r) = (f_n^1(x^*,y^*;r), f_n^2(x^*,y^*;r))$.
- Trusted party sends outputs: The trusted party sends $f_n^i(x^*,y^*;r)$ to the corrupted party $P_i$. After receiving its output, the adversary decides whether to abort the protocol or not. In the former case, $\mathcal{T}$ sends an abort symbol $\perp$ to the honest party; in the latter case, he sends the remaining output.
- Outputs: The honest party outputs whatever $\mathcal{T}$ sent him, the corrupted party outputs nothing and the adversary outputs a probabilistic polynomial-time function of its view.
Again, we need to introduce the random variables from definition 2.3. This time, however, our variables are defined with respect to the new ideal model. We will use the same notation for consistency, but we stress that the following definition and definition 2.3 differ in that we are considering different ideal models.
Definition 2.6. Given security parameter $n$, suppose that parties $P_1, P_2$ wish to compute $\mathcal{F}$ on inputs $x$ and $y$ by means of a two-party protocol $\Pi$. Let $\mathcal{A}$ denote an adversary corrupting one of the parties. The adversary is given the auxiliary input $z \in \{0,1\}^*$. Define the following variables.
- $\mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x,y,n)$ denotes the honest party's output in the ideal model.
- $\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x,y,n)$ denotes the adversary's view in the ideal model.
- $\mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n)$ denotes the honest party's output in the real model.
- $\mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n)$ denotes the adversary's view in the real model.
We can now formally define what it means for a protocol to be secure in the presence of an active adversary.
Figure 2.2: Ideal model with abort.
Definition 2.7 (Security for active adversary). Let $\Pi$ be a two-party protocol for computing $\mathcal{F}$. We say that $\Pi$ securely computes $\mathcal{F}$ with abort if for every non-uniform probabilistic polynomial-time adversary $\mathcal{A}$ in the real model there exists a non-uniform probabilistic polynomial-time adversary $\mathcal{S}$ in the ideal model such that
$$\left\{\left(\mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n),\ \mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n)\right)\right\}_{x,y,z,n} \stackrel{c}{\equiv} \left\{\left(\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x,y,n),\ \mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x,y,n)\right)\right\}_{x,y,z,n}.$$
The above definition implies that computing $\mathcal{F}$ by means of a secure protocol essentially amounts to computing $\mathcal{F}$ in the ideal model by means of a trusted party, where the adversary can prevent the honest party from learning his output.

Theorem 2.8. If there exists a collection of enhanced trapdoor permutations, then for any two-party functionality $\mathcal{F}$ there exists a protocol that securely computes $\mathcal{F}$ with abort.
2.2 Fairness
2.2.1 Definition
From the previous discussion, we see that for malicious adversaries, security is guaranteed only as long as we disregard fairness. What happens if we demand that, as long as one of the parties learns the desired output, both of them do? We need to redefine our security notions by introducing yet another ideal model. Using the notation from the previous section, consider the following ideal scheme (figure 2.3):
- Inputs: $P_1$ holds $1^n$ and $x$, $P_2$ holds $1^n$ and $y$. Adversary $\mathcal{A}$ is given an auxiliary input $z \in \{0,1\}^*$.
- Parties send inputs: The honest party sends his input to $\mathcal{T}$; the corrupted party sends a value of the adversary's choice. Write $(x^*, y^*)$ for the pair of inputs received by $\mathcal{T}$.
- Trusted party performs computation: If either $x^*$ or $y^*$ is not in the appropriate domain, then $\mathcal{T}$ reassigns it to some default value in the input domain. Again, write $(x^*, y^*)$ for the resulting values. Party $\mathcal{T}$ then chooses a string $r$ uniformly at random and computes $f_n(x^*,y^*;r) = (f_n^1(x^*,y^*;r), f_n^2(x^*,y^*;r))$.
- Trusted party sends outputs: The trusted party sends $f_n^1(x^*,y^*;r)$ to $P_1$ and $f_n^2(x^*,y^*;r)$ to $P_2$.
- Outputs: The honest party outputs whatever $\mathcal{T}$ sent him, the corrupted party outputs nothing and the adversary outputs a probabilistic polynomial-time function of its view.
Figure 2.3: Ideal model for complete fairness.
Definition 2.9. Given security parameter $n$, suppose that parties $P_1, P_2$ wish to compute $\mathcal{F}$ on inputs $x$ and $y$ by means of a two-party protocol $\Pi$. Let $\mathcal{A}$ denote an adversary corrupting one of the parties. The adversary is given the auxiliary input $z \in \{0,1\}^*$. Define the following variables.
- $\mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x,y,n)$ denotes the honest party's output in the ideal model.
- $\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x,y,n)$ denotes the adversary's view in the ideal model.
- $\mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n)$ denotes the honest party's output in the real model.
- $\mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n)$ denotes the adversary's view in the real model.
Remark 2.10. Note that definitions 2.3, 2.6 and 2.9 differ in that the underlying ideal model is different in each case. We essentially wrote the same definition three times to stress this point.
Definition 2.11 (Complete fairness). Let $\Pi$ be a two-party protocol for computing $\mathcal{F}$. We say that $\Pi$ securely computes $\mathcal{F}$ with complete fairness if for every non-uniform probabilistic polynomial-time adversary $\mathcal{A}$ in the real model there exists a non-uniform probabilistic polynomial-time adversary $\mathcal{S}$ in the ideal model such that
$$\left\{\left(\mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n),\ \mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x,y,n)\right)\right\}_{x,y,z,n} \stackrel{c}{\equiv} \left\{\left(\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x,y,n),\ \mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x,y,n)\right)\right\}_{x,y,z,n}.$$
2.2.2 A Negative Result: Cleve (1986)
The first result regarding fairness is a negative one. Namely, we show that for a particular class of functionalities, there is no protocol that guarantees security with respect to definition 2.11. Suppose parties $P_1, P_2$ wish to compute a common random bit by means of a protocol $\Pi$. In other words, after an execution of $\Pi$, parties $P_1$ and $P_2$ obtain output $\omega$ such that $\Pr[\omega = 0] = 1/2$. Motivated by the concept of fairness, we demand that the parties' outputs are always well defined. Thus, if some anomaly occurs, we would like the parties to output a truly random bit. Given security parameter $n$, we model $\Pi$ as an $r(n)$-round protocol, where $r$ is some polynomially-bounded function, such that:
- At the beginning, $P_2$ computes a bit $b_0$.
- At round $i$, $P_1$ computes a bit $a_i$, then sends a message to $P_2$, who in turn computes a bit $b_i$ and sends a reply to $P_1$.
- After the final round, $P_1$ computes $a_{r(n)+1}$.
- The parties output the last bit they successfully constructed. In particular, if no anomaly occurs, $P_1$ and $P_2$ output $a_{r(n)+1}$ and $b_{r(n)}$ respectively.
Definition 2.12. Let $\varepsilon > 0$. Using the notation above, we say that $\Pi$ is $\varepsilon$-consistent if
$$\Pr\left[a_{r(n)+1} = b_{r(n)}\right] \geq \frac{1}{2} + \varepsilon.$$

Definition 2.13. For a random variable $\omega$ taking values in $\{0,1\}$, define the bias of $\omega$ to be
$$\left|\Pr[\omega = 0] - \frac{1}{2}\right|.$$

Theorem 2.14 (Cleve). If for some $\varepsilon > 0$, protocol $\Pi$ is $\varepsilon$-consistent, then there exists an adversary corrupting one of the parties such that the bias of the honest party's output is at least
$$\frac{\varepsilon}{4r(n)+1}.$$
Proof. We define $4r(n)+1$ adversaries
$$\mathcal{A},\ \mathcal{A}^1_{1,0}, \ldots, \mathcal{A}^1_{r(n),0},\ \mathcal{A}^1_{1,1}, \ldots, \mathcal{A}^1_{r(n),1},\ \mathcal{A}^2_{1,0}, \ldots, \mathcal{A}^2_{r(n),0},\ \mathcal{A}^2_{1,1}, \ldots, \mathcal{A}^2_{r(n),1}$$
with the following quitting strategies:
Figure 2.4: Coin-tossing protocol.
- Adversary $\mathcal{A}$ instructs $P_1$ to quit immediately (no information is exchanged).
- Adversary $\mathcal{A}^j_{i,\beta}$ instructs $P_j$ to proceed normally until round $i-1$. At round $i$, if the corrupted party's backup bit ($a_i$ or $b_i$) is equal to $\beta$, then proceed to the next round and quit; otherwise quit at round $i$.
Next, let $\gamma$ and $\gamma^j_{i,\beta}$ denote the bias of the honest party's output under the attack of $\mathcal{A}$ and $\mathcal{A}^j_{i,\beta}$, respectively. From the definition of the bias, we deduce the following lower bounds:
$$\gamma \geq \max\{\Pr[b_0 = 0], \Pr[b_0 = 1]\} - \frac{1}{2},$$
$$\gamma^1_{i,0} \geq \Pr[a_i = 0 \wedge b_i = 0] + \Pr[a_i = 1 \wedge b_{i-1} = 0] - \frac{1}{2},$$
$$\gamma^1_{i,1} \geq \Pr[a_i = 1 \wedge b_i = 1] + \Pr[a_i = 0 \wedge b_{i-1} = 1] - \frac{1}{2},$$
$$\gamma^2_{i,0} \geq \Pr[b_i = 0 \wedge a_{i+1} = 0] + \Pr[b_i = 1 \wedge a_i = 0] - \frac{1}{2},$$
$$\gamma^2_{i,1} \geq \Pr[b_i = 1 \wedge a_{i+1} = 1] + \Pr[b_i = 0 \wedge a_i = 1] - \frac{1}{2}.$$
Let $\bar{\gamma}$ denote the average of the above values. After simplification (the sum over $i$ telescopes), we deduce that
$$(4r(n)+1)\,\bar{\gamma} \geq \Pr[b_0 \neq a_1] + \Pr[b_{r(n)} = a_{r(n)+1}] - 1. \qquad (2.1)$$
Knowing that $a_1$ and $b_0$ are independent variables (no information is exchanged), notice that
$$\Pr[b_0 \neq a_1] = 1 - \Pr[b_0 = a_1] = 1 - \Pr[a_1 = 0]\Pr[b_0 = 0] - \Pr[a_1 = 1]\Pr[b_0 = 1] \geq 1 - \max\{\Pr[b_0 = 0], \Pr[b_0 = 1]\}.$$
Plug the above expression into equation (2.1), use the $\varepsilon$-consistency of $\Pi$ and deduce that there exists $\varepsilon' > 0$ such that
$$\bar{\gamma} \geq \frac{\varepsilon'}{4r(n)+1}.$$
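Before moving on, here is a toy Monte-Carlo illustration of the quitting attack. It is not Cleve's construction: it is a hypothetical two-message coin toss in which $P_1$ sends a random bit $u$, $P_2$ replies with a random bit $v$, and both output $u \oplus v$; all function names below are ours.

```python
import random

def honest_run(rng):
    # P1 sends u, P2 replies v, both parties output u XOR v.
    u, v = rng.randint(0, 1), rng.randint(0, 1)
    return u ^ v

def attacked_run(rng):
    # Corrupted P2 wants the coin to be 0. After seeing u it already
    # knows u XOR v; if the result would be 1 it quits, and P1 must
    # fall back on a fresh random backup bit in place of v.
    u, v = rng.randint(0, 1), rng.randint(0, 1)
    if u ^ v == 1:
        return u ^ rng.randint(0, 1)  # P1 outputs its backup bit a_1
    return u ^ v

def bias(samples):
    return abs(samples.count(0) / len(samples) - 0.5)

rng = random.Random(0)
honest = [honest_run(rng) for _ in range(100_000)]
attacked = [attacked_run(rng) for _ in range(100_000)]
print(f"honest bias   = {bias(honest):.3f}")    # about 0.000
print(f"attacked bias = {bias(attacked):.3f}")  # about 0.250, i.e. Pr[0] = 3/4
```

A single well-timed abort already yields bias $1/4$ here; theorem 2.14 says that however the $r(n)$ rounds are arranged, some quitting adversary retains bias at least $\varepsilon/(4r(n)+1)$.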
Corollary 2.15. Finite two-input Boolean functions $f : X \times Y \to \{0,1\}$ satisfying
$$\forall x \in X,\ \forall y \in Y: \quad \Pr[f(x, \hat{y}) = 0] = \Pr[f(\hat{x}, y) = 0] = \frac{1}{2},$$
where $\hat{x}$ and $\hat{y}$ are uniformly distributed, are not computable with complete fairness. (The XOR function $f(x,y) = x \oplus y$ is a typical example.)

Proof. Let $f$ be as above. Any fair protocol for computing $f$ can be used as a fair coin-tossing scheme by instructing the parties to choose their inputs randomly. However, in the ideal model, the honest party's output is always truly random, whereas in the real model, by theorem 2.14, there exists an adversary that can bias the honest party's output in a non-negligible way. This contradicts our assumption that there exists a protocol that computes $f$ with complete fairness.
2.2.3 Fair Computation: Gordon/Katz/Hazay/Lindell (2008)
The $\mathcal{G}$-hybrid model

Let $\mathcal{F} = \{f_n\}_{n \in \mathbb{N}}$ and $\mathcal{G} = \{g_n\}_{n \in \mathbb{N}}$ be two-party functionalities. We say that protocol $\Pi$ computes $\mathcal{F}$ in the $\mathcal{G}$-hybrid model with abort if the following conditions are satisfied.
1. At any given round, exactly one of the following occurs:
(a) the parties exchange information in the real model, or
(b) the parties make a single call to a trusted party that computes $\mathcal{G}$ in the ideal model with abort.
2. The joint distribution of the parties' outputs after an execution of $\Pi$ is statistically close to the output of $\mathcal{F}$ on the same inputs.
Definition 2.16. Given security parameter $n$, suppose that parties $P_1, P_2$ wish to compute $\mathcal{F}$ on inputs $x$ and $y$ by means of a two-party protocol $\Pi$. Let $\mathcal{A}$ denote an adversary corrupting one of the parties. The adversary is given the auxiliary input $z \in \{0,1\}^*$. Define the following variables.
- $\mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x,y,n)$ denotes the honest party's output in the ideal model for complete fairness.
- $\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x,y,n)$ denotes the adversary's view in the ideal model for complete fairness.
- $\mathrm{OUT}^{\mathcal{G}\text{-hyb}}_{\Pi,\mathcal{A}(z)}(x,y,n)$ denotes the honest party's output in the $\mathcal{G}$-hybrid model with abort.
- $\mathrm{VIEW}^{\mathcal{G}\text{-hyb}}_{\Pi,\mathcal{A}(z)}(x,y,n)$ denotes the adversary's view in the $\mathcal{G}$-hybrid model with abort.
Definition 2.17. Let $\Pi$ be a two-party protocol for computing $\mathcal{F}$ in the $\mathcal{G}$-hybrid model with abort. We say that $\Pi$ securely computes $\mathcal{F}$ with complete fairness in the $\mathcal{G}$-hybrid model if for every non-uniform probabilistic polynomial-time adversary $\mathcal{A}$ in the $\mathcal{G}$-hybrid model there exists a non-uniform probabilistic polynomial-time adversary $\mathcal{S}$ in the ideal model such that
$$\left\{\left(\mathrm{VIEW}^{\mathcal{G}\text{-hyb}}_{\Pi,\mathcal{A}(z)}(x,y,n),\ \mathrm{OUT}^{\mathcal{G}\text{-hyb}}_{\Pi,\mathcal{A}(z)}(x,y,n)\right)\right\}_{x,y,z,n} \stackrel{c}{\equiv} \left\{\left(\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x,y,n),\ \mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x,y,n)\right)\right\}_{x,y,z,n}.$$
If $\Pi$ is a two-party protocol for computing $\mathcal{F}$ in the $\mathcal{G}$-hybrid model with abort and $\rho$ is a two-party protocol for computing $\mathcal{G}$, we write $\Pi^\rho$ for the (real) protocol where any call to the trusted party is replaced by an execution of $\rho$.

Proposition 2.18. Using the notation above, if $\Pi$ computes $\mathcal{F}$ with complete fairness in the $\mathcal{G}$-hybrid model and $\rho$ securely computes $\mathcal{G}$ with abort, then $\Pi^\rho$ securely computes $\mathcal{F}$ with complete fairness.
Figure 2.5: A round in the execution of $\Pi$ in the $\mathcal{G}$-hybrid model with abort.
The protocol

Suppose that $P_1, P_2$ wish to compute a functionality $\mathcal{F} = \{f_n\}_{n \in \mathbb{N}}$, where for all $n$, $f_n = f : X \times Y \to \{0,1\}$ is a finite function. We will now describe a generic protocol, say $\Pi$, for fair computation. The scheme consists of a preliminary phase, where the parties jointly compute the functionality ShareGen (figure 2.6), followed by $m$ iterations described in figure 2.7. The protocol is parametrized by a function $\alpha = \alpha(n)$, and the number of rounds is set to $m = \alpha^{-1}\,\omega(\log n)$. Finally, we assume the existence of an $m$-times secure message authentication code (Gen, Mac, Vrfy) (definition A.1).
ShareGen

Inputs: The security parameter is $n$; $P_1$ holds $x \in X$, $P_2$ holds $y \in Y$.

Computation: Define $a_1, \ldots, a_m$ and $b_1, \ldots, b_m$ as follows.
- Choose $i^*$ according to a geometric distribution with parameter $\alpha$.
- For $i \in \{1, \ldots, i^*-1\}$ do:
  - choose $\hat{y} \in Y$ and set $a_i = f(x, \hat{y})$;
  - choose $\hat{x} \in X$ and set $b_i = f(\hat{x}, y)$.
- For $i = i^*$ to $m$, set $a_i = b_i = f(x,y)$.
- For $1 \leq i \leq m$, choose $(a_i^{(1)}, a_i^{(2)})$ and $(b_i^{(1)}, b_i^{(2)})$ as random secret sharings of $a_i$ and $b_i$ respectively.
- Generate keys $k_a$ and $k_b$ from the $\mathrm{Gen}(1^n)$ algorithm and compute all tags $t_i^a = \mathrm{Mac}_{k_a}(i \| a_i^{(2)})$ and $t_i^b = \mathrm{Mac}_{k_b}(i \| b_i^{(1)})$.

Output:
- $P_1$ receives $a_1^{(1)}, \ldots, a_m^{(1)}$, $(b_1^{(1)}, t_1^b), \ldots, (b_m^{(1)}, t_m^b)$ and the Mac-key $k_a$.
- $P_2$ receives $b_1^{(2)}, \ldots, b_m^{(2)}$, $(a_1^{(2)}, t_1^a), \ldots, (a_m^{(2)}, t_m^a)$ and the Mac-key $k_b$.
- If the initially received inputs are not in the appropriate domain, then both parties are given output $\perp$.

Figure 2.6: The ShareGen functionality
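As a concrete illustration, the following Python sketch mimics the input/output behaviour of ShareGen for a Boolean function given as a callable. It is a plain dealer simulation under our own naming (share_gen, mac are ours, not from [5]), and HMAC-SHA256 merely stands in for the information-theoretically secure $m$-times MAC of definition A.1.

```python
import hmac, hashlib, os, random

def mac(key: bytes, i: int, share: int) -> bytes:
    # Tag the pair (i, share); HMAC-SHA256 stands in for the
    # information-theoretic m-times MAC assumed by the protocol.
    return hmac.new(key, f"{i}|{share}".encode(), hashlib.sha256).digest()

def share_gen(f, X, Y, x, y, m, alpha, rng):
    # Round i* at which the backup values switch to the true output:
    # geometric distribution with parameter alpha.
    i_star = 1
    while rng.random() >= alpha:
        i_star += 1
    # Backup values: random-input evaluations before i*, f(x, y) afterwards.
    a = [f(x, rng.choice(Y)) if i < i_star else f(x, y) for i in range(1, m + 1)]
    b = [f(rng.choice(X), y) if i < i_star else f(x, y) for i in range(1, m + 1)]
    # XOR secret sharings: a_i = a1_i ^ a2_i and b_i = b1_i ^ b2_i.
    a1 = [rng.randint(0, 1) for _ in range(m)]
    a2 = [s ^ v for s, v in zip(a1, a)]
    b2 = [rng.randint(0, 1) for _ in range(m)]
    b1 = [s ^ v for s, v in zip(b2, b)]
    k_a, k_b = os.urandom(32), os.urandom(32)
    t_a = [mac(k_a, i + 1, a2[i]) for i in range(m)]
    t_b = [mac(k_b, i + 1, b1[i]) for i in range(m)]
    p1_output = (a1, list(zip(b1, t_b)), k_a)
    p2_output = (b2, list(zip(a2, t_a)), k_b)
    return p1_output, p2_output
```

In the protocol itself, ShareGen would of course be realized by a generic secure-with-abort computation (theorem 2.8); the sketch only fixes the functionality's input/output behaviour.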
Definition 2.19. For a finite function $f : X \times Y \to \{0,1\}$, where $X = \{x_1, \ldots, x_{\ell_X}\}$ and $Y = \{y_1, \ldots, y_{\ell_Y}\}$, define
$$(M_f)_{i,j} = f(x_i, y_j),$$
$$p_x = \Pr[f(x, \hat{y}) = 1], \qquad p_y = \Pr[f(\hat{x}, y) = 1],$$
$$\alpha_f = \min_{i,j} \left\{ \frac{|1 - f(x_i,y_j) - p_{x_i}| \cdot |1 - f(x_i,y_j) - p_{y_j}|}{|1 - f(x_i,y_j) - p_{x_i}| \cdot |1 - f(x_i,y_j) - p_{y_j}| + |f(x_i,y_j) - p_{y_j}|} \right\},$$
where $\hat{x}$ and $\hat{y}$ denote uniformly distributed inputs. Finally, for all $i \in \{1, \ldots, \ell_X\}$ define the following real vectors:
$$C^{(0)}_{x_i}(j) = \begin{cases} p_{y_j} & \text{if } f(x_i, y_j) = 1,\\ \dfrac{\alpha_f\, p_{y_j}}{(1-\alpha_f)(1-p_{x_i})} + p_{y_j} & \text{otherwise;} \end{cases}$$
$$C^{(1)}_{x_i}(j) = \begin{cases} \dfrac{\alpha_f\, (p_{y_j}-1)}{(1-\alpha_f)\, p_{x_i}} + p_{y_j} & \text{if } f(x_i, y_j) = 1,\\ p_{y_j} & \text{otherwise.} \end{cases}$$
Gordon/Katz/Hazay/Lindell Protocol (2008)

Inputs: $P_1$, $P_2$ hold $x$ and $y$ respectively. The security parameter is $n$.

Protocol:
1. Preliminary phase:
(a) $P_1$ chooses $\hat{y} \in Y$ uniformly at random and sets $a_0 = f(x, \hat{y})$. $P_2$ chooses $\hat{x} \in X$ uniformly at random and sets $b_0 = f(\hat{x}, y)$.
(b) The parties run a protocol for ShareGen on their respective inputs.
(c) If $P_1$ (resp. $P_2$) receives $\perp$, then he outputs $a_0$ (resp. $b_0$). Otherwise proceed to the next step.
(d) Let $a_1^{(1)}, \ldots, a_m^{(1)}, (b_1^{(1)}, t_1^b), \ldots, (b_m^{(1)}, t_m^b)$ and $k_a$ denote the output of $P_1$. Let $b_1^{(2)}, \ldots, b_m^{(2)}, (a_1^{(2)}, t_1^a), \ldots, (a_m^{(2)}, t_m^a)$ and $k_b$ denote the output of $P_2$.
2. For $i = 1, \ldots, m$ do:
$P_2$ sends the next share to $P_1$:
(a) $P_2$ sends $(a_i^{(2)}, t_i^a)$ to $P_1$.
(b) If $\mathrm{Vrfy}_{k_a}(i \| a_i^{(2)}, t_i^a) \neq 1$ (or some other anomaly occurs), then $P_1$ outputs $a_{i-1}$ and halts.
(c) If $\mathrm{Vrfy}_{k_a}(i \| a_i^{(2)}, t_i^a) = 1$, then $P_1$ sets $a_i = a_i^{(1)} \oplus a_i^{(2)}$ and continues.
$P_1$ sends the next share to $P_2$:
(a) $P_1$ sends $(b_i^{(1)}, t_i^b)$ to $P_2$.
(b) If $\mathrm{Vrfy}_{k_b}(i \| b_i^{(1)}, t_i^b) \neq 1$ (or some other anomaly occurs), then $P_2$ outputs $b_{i-1}$ and halts.
(c) If $\mathrm{Vrfy}_{k_b}(i \| b_i^{(1)}, t_i^b) = 1$, then $P_2$ sets $b_i = b_i^{(1)} \oplus b_i^{(2)}$ and continues.

Output: If all $m$ iterations have been run, then $P_1$ outputs $a_m$ and $P_2$ outputs $b_m$.

Figure 2.7: Protocol $\Pi$ for computing $f$
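The $m$ iterations can be simulated on top of the share_gen sketch above (same hypothetical names, assumed to be in scope); the parameter p1_abort_round models a corrupted $P_1$ that quits right after reconstructing $a_i$.

```python
def run_protocol(f, X, Y, x, y, m, alpha, rng, p1_abort_round=None):
    (a1, b1_tagged, k_a), (b2, a2_tagged, k_b) = share_gen(f, X, Y, x, y, m, alpha, rng)
    a_out = f(x, rng.choice(Y))   # a_0: P1's default output
    b_out = f(rng.choice(X), y)   # b_0: P2's default output
    for i in range(1, m + 1):
        # P2 sends (a2_i, tag); P1 verifies and updates its backup output.
        a2_i, t = a2_tagged[i - 1]
        if not hmac.compare_digest(t, mac(k_a, i, a2_i)):
            return a_out, b_out   # P1 outputs a_{i-1} and halts
        a_out = a1[i - 1] ^ a2_i
        if p1_abort_round == i:   # corrupted P1 quits without answering
            return a_out, b_out   # honest P2 is left with b_{i-1}
        # P1 sends (b1_i, tag); P2 verifies symmetrically.
        b1_i, t = b1_tagged[i - 1]
        if not hmac.compare_digest(t, mac(k_b, i, b1_i)):
            return a_out, b_out   # P2 outputs b_{i-1} and halts
        b_out = b1_i ^ b2[i - 1]
    return a_out, b_out           # honest completion: both equal f(x, y)

# A toy run on a 2 x 2 function (our example, not from [5]):
f = lambda u, v: 1 - (u == v)
rng = random.Random(1)
print(run_protocol(f, [0, 1], [0, 1], 0, 1, m=20, alpha=0.2, rng=rng))
```

The point of the design shows up in the abort branch: when $P_1$ quits at a round $i < i^*$, the value $a_i$ it walks away with is $f(x, \hat{y})$ for a fresh random $\hat{y}$, so quitting early buys it nothing; this is exactly the property the simulator in the proof of theorem 2.20 exploits.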
Theorem 2.20. Let $f$ be as above. If for all $b \in \{0,1\}$ and $x \in X$ there exists a probability vector $X^{(b)}_x$ such that
$$X^{(b)}_x \cdot M_f = C^{(b)}_x,$$
then, assuming (Gen, Mac, Vrfy) is an $m$-times secure MAC, protocol $\Pi$ securely computes $f$ with complete fairness in the hybrid model.
Sketch of proof. For an adversary $\mathcal{A}$ corrupting $P_i$ in the hybrid model, we define an adversary $\mathcal{S}$ in the ideal model, with black-box access to $\mathcal{A}$, and show that the joint probability distribution of the adversary's view and the honest party's output in the hybrid and ideal models are computationally indistinguishable.
Adversary corrupts $P_2$:
1. $\mathcal{S}$ invokes $\mathcal{A}$ on input $y$, security parameter $n$ and auxiliary input $z$. Simulator $\mathcal{S}$ also chooses a value $\hat{y} \in Y$ at random.
2. $\mathcal{S}$ receives $y^*$ from $\mathcal{A}$ for the computation of ShareGen.
(a) If $y^* \notin Y$ then $\mathcal{S}$ sends $\perp$ to $\mathcal{A}$ as the output of the functionality and sends $\hat{y}$ to the trusted party. It then halts and outputs whatever $\mathcal{A}$ outputs.
(b) Otherwise, $\mathcal{S}$ chooses $b_1^{(2)}, \ldots, b_m^{(2)}, a_1^{(2)}, \ldots, a_m^{(2)}$ at random. She generates keys $k_a$ and $k_b$ from algorithm $\mathrm{Gen}(1^n)$, computes $t_i^a = \mathrm{Mac}_{k_a}(i \| a_i^{(2)})$ for every $i$ and hands $b_1^{(2)}, \ldots, b_m^{(2)}, (a_1^{(2)}, t_1^a), \ldots, (a_m^{(2)}, t_m^a)$ and $k_b$ to $\mathcal{A}$ as its output from the ShareGen functionality.
3. If $\mathcal{A}$ chooses to abort the ShareGen functionality, then $\mathcal{S}$ sends $\hat{y}$ to the trusted party, halts and outputs whatever $\mathcal{A}$ outputs. Otherwise, proceed to the next step.
4. $\mathcal{S}$ chooses $i^*$ according to a geometric distribution with parameter $\alpha$.
5. For $i = 1, \ldots, i^*-1$:
(a) $\mathcal{S}$ receives $(\hat{a}_i^{(2)}, \hat{t}_i^a)$. If $\mathrm{Vrfy}_{k_a}(i \| \hat{a}_i^{(2)}, \hat{t}_i^a) = 0$ (or some other anomaly occurs), then $\mathcal{S}$ sends $\hat{y}$ to the trusted party, halts and outputs whatever $\mathcal{A}$ outputs. Otherwise, $\mathcal{S}$ proceeds.
(b) $\mathcal{S}$ chooses $\hat{x} \in X$ at random, computes $b_i = f(\hat{x}, y^*)$, sets $b_i^{(1)} = b_i^{(2)} \oplus b_i$ and computes $t_i^b = \mathrm{Mac}_{k_b}(i \| b_i^{(1)})$. The simulator hands the message $(b_i^{(1)}, t_i^b)$ to $\mathcal{A}$.
6. For $i = i^*$:
(a) $\mathcal{S}$ receives $(\hat{a}_i^{(2)}, \hat{t}_i^a)$. If $\mathrm{Vrfy}_{k_a}(i^* \| \hat{a}_i^{(2)}, \hat{t}_i^a) = 0$ (or some other anomaly occurs), then $\mathcal{S}$ sends $\hat{y}$ to the trusted party, halts and outputs whatever $\mathcal{A}$ outputs. Otherwise, $\mathcal{S}$ sends $y^*$ to the trusted party and receives the output $\omega = f(x, y^*)$.
(b) $\mathcal{S}$ sets $b_i^{(1)} = b_i^{(2)} \oplus \omega$ and computes $t_i^b = \mathrm{Mac}_{k_b}(i^* \| b_i^{(1)})$. The simulator hands the message $(b_i^{(1)}, t_i^b)$ to $\mathcal{A}$.
7. For $i = i^*+1, \ldots, m$:
(a) $\mathcal{S}$ receives $(\hat{a}_i^{(2)}, \hat{t}_i^a)$. If $\mathrm{Vrfy}_{k_a}(i \| \hat{a}_i^{(2)}, \hat{t}_i^a) = 0$ (or some other anomaly occurs), then $\mathcal{S}$ halts and outputs whatever $\mathcal{A}$ outputs.
(b) $\mathcal{S}$ sets $b_i^{(1)} = b_i^{(2)} \oplus \omega$ and computes $t_i^b = \mathrm{Mac}_{k_b}(i \| b_i^{(1)})$. The simulator hands the message $(b_i^{(1)}, t_i^b)$ to $\mathcal{A}$.
8. If all $m$ iterations have been executed but $i^* > m$ (so $\mathcal{S}$ hasn't sent anything to the trusted party), then $\mathcal{S}$ sends $\hat{y}$ to the trusted party. In any case, $\mathcal{S}$ outputs whatever $\mathcal{A}$ outputs and halts.
Adversary corrupts $P_1$:
1. $\mathcal{S}$ invokes $\mathcal{A}$ on input $x$, security parameter $n$ and auxiliary input $z$. Simulator $\mathcal{S}$ also chooses a value $\hat{x} \in X$ at random.
2. $\mathcal{S}$ receives $x^*$ from $\mathcal{A}$ for the computation of ShareGen.
(a) If $x^* \notin X$ then $\mathcal{S}$ sends $\perp$ to $\mathcal{A}$ as the output of the functionality and sends $\hat{x}$ to the trusted party. It then halts and outputs whatever $\mathcal{A}$ outputs.
(b) Otherwise, $\mathcal{S}$ chooses $a_1^{(1)}, \ldots, a_m^{(1)}, b_1^{(1)}, \ldots, b_m^{(1)}$ at random. She generates keys $k_a$ and $k_b$ from algorithm $\mathrm{Gen}(1^n)$, computes $t_i^b = \mathrm{Mac}_{k_b}(i \| b_i^{(1)})$ for every $i$ and hands $a_1^{(1)}, \ldots, a_m^{(1)}, (b_1^{(1)}, t_1^b), \ldots, (b_m^{(1)}, t_m^b)$ and $k_a$ to $\mathcal{A}$ as its output from the ShareGen functionality.
3. If $\mathcal{A}$ chooses to abort the ShareGen functionality, then $\mathcal{S}$ sends $\hat{x}$ to the trusted party, halts and outputs whatever $\mathcal{A}$ outputs. Otherwise, proceed to the next step.
4. $\mathcal{S}$ chooses $i^*$ according to a geometric distribution with parameter $\alpha$.
5. For $i = 1, \ldots, i^*-1$:
(a) $\mathcal{S}$ chooses $\hat{y} \in Y$ at random, sets $a_i = f(x^*, \hat{y})$ and $a_i^{(2)} = a_i^{(1)} \oplus a_i$, and computes $t_i^a = \mathrm{Mac}_{k_a}(i \| a_i^{(2)})$. She sends $(a_i^{(2)}, t_i^a)$ to $\mathcal{A}$.
(b) The simulator receives $(\hat{b}_i^{(1)}, \hat{t}_i^b)$ from $\mathcal{A}$. If $\mathrm{Vrfy}_{k_b}(i \| \hat{b}_i^{(1)}, \hat{t}_i^b) = 0$ (or some other anomaly occurs), then $\mathcal{S}$ sends $\hat{x}$ to the trusted party, where $\hat{x}$ is chosen according to the probability distribution $X^{(a_i)}_{x^*}$. The simulator then halts and outputs whatever $\mathcal{A}$ outputs. Otherwise, $\mathcal{S}$ proceeds.
6. For $i = i^*, \ldots, m$:
(a) If $i = i^*$, $\mathcal{S}$ sends $x^*$ to the trusted party and receives the output $\omega = f(x^*, y)$.
(b) The simulator sets $a_i^{(2)} = a_i^{(1)} \oplus \omega$, computes $t_i^a = \mathrm{Mac}_{k_a}(i \| a_i^{(2)})$ and sends $(a_i^{(2)}, t_i^a)$ to $\mathcal{A}$.
(c) The simulator receives $(\hat{b}_i^{(1)}, \hat{t}_i^b)$ from $\mathcal{A}$. If $\mathrm{Vrfy}_{k_b}(i \| \hat{b}_i^{(1)}, \hat{t}_i^b) = 0$ (or some other anomaly occurs), then $\mathcal{S}$ halts and outputs whatever $\mathcal{A}$ outputs. Otherwise, $\mathcal{S}$ proceeds.
7. If all $m$ iterations have been executed but $i^* > m$ (so $\mathcal{S}$ hasn't sent anything to the trusted party), then $\mathcal{S}$ sends $\hat{x}$ to the trusted party. In any case, $\mathcal{S}$ outputs whatever $\mathcal{A}$ outputs and halts.
The goal is to show that $\mathcal{S}$ generates a distribution which is indistinguishable from the distribution in a hybrid execution between $\mathcal{A}$ and an honest party. In fact, using the information-theoretic security of the MAC, the distributions are identical. For the first case, notice that $\mathcal{A}$'s view in both worlds is identical and, regarding $P_1$'s output, depending on what $\mathcal{A}$ does, his output is distributed identically in both worlds. Finally, for the second case, one can show that for all $i$, if $\mathcal{A}$ aborts at iteration $i$ then the joint distribution of the honest party's output and the adversary's view in the hybrid and ideal models are identical. The proof considers all possible outcomes and uses the fact that $\Pr[i^* = i \mid i^* \geq i] = \alpha$ to show that the probabilities are equal.

Corollary 2.21. Using the notation above, if there exist a family of enhanced trapdoor permutations and an $m$-times secure MAC, then $f$ is computable with complete fairness.

Proof. By theorem 2.8, there exists a protocol $\rho$ that securely computes ShareGen with abort. By theorem 2.20, protocol $\Pi$ securely computes $f$ with complete fairness in the hybrid model. Finally, by proposition 2.18, protocol $\Pi^\rho$ securely computes $f$ with complete fairness.
CHAPTER 3
FAIRNESS IN TWO-PARTY COMPUTATION
3.1 A generalization of Cleve's result

Recall the following theorem from section 2.2.2:

Suppose that $\Pi$ is a two-party coin-tossing protocol such that outputs are always well defined. Then there exists an adversary such that the honest party's output is not truly random: its bias is non-negligible.

In other words, if two parties want to produce a bit $\omega$ such that $\Pr[\omega = 0] = 1/2$ then, provided the outputs are always well defined, there exists an adversary who can move the probability of the honest party's output away from $1/2$.

One can show that in fact the same is true for any probability distribution of the outputs, i.e. if two parties want to produce a bit $\omega$ such that $\Pr[\omega = 0] = p \in (0,1)$ then, provided the outputs are always well defined, there exists an adversary who can move the probability of the honest party's output away from $p$. Consider the protocol from figure 2.4 and, using the same notation, let us introduce the following definitions.
Definition 3.1. Let $p \in (0,1)$. Using the notation above, we say that $\Pi$ is $p$-consistent if for some $\varepsilon > 0$,
$$\Pr\left[a_{r(n)+1} = b_{r(n)}\right] \geq \max\{p, 1-p\} + \varepsilon.$$

Definition 3.2. For a random variable $\omega$ taking values in $\{0,1\}$, define the $p$-bias of $\omega$ to be
$$\left|\Pr[\omega = 0] - p\right|.$$

Theorem 3.3. If for some $p \in (0,1)$, protocol $\Pi$ is $p$-consistent, then there exists an adversary corrupting one of the parties such that the $p$-bias of the honest party's output is at least
$$\frac{\varepsilon'}{4r(n)+1},$$
for some $\varepsilon' > 0$.

Proof. The proof goes along the lines of theorem 2.14. We introduce the same $4r(n)+1$ adversaries and, by replacing bias and $\varepsilon$-consistency with $p$-bias and $p$-consistency respectively, the same averaging argument proves the theorem.
Corollary 3.4. Let $\mathcal{F} = \{f_n\}_{n \in \mathbb{N}}$ be a two-party functionality such that, for all $n$, $f_n$ outputs $(\omega, \omega)$, where $\omega$ is a random variable that yields 0 with probability $p$ and 1 with probability $1-p$. Then $\mathcal{F}$ is not computable with complete fairness.

Proof. In the ideal model, the $p$-bias of the honest party's output is always equal to 0. By theorem 3.3, there exists an adversary in the real model that has no ideal-world counterpart. Hence, fairness cannot be achieved.

Corollary 3.5. Let $f : X \times Y \to \{0,1\}$ be a finite function such that for some $p \in (0,1)$ and all $(x,y) \in X \times Y$,
$$\Pr[f(x, \hat{y}) = 0] = \Pr[f(\hat{x}, y) = 0] = p.$$
Then $f$ is not computable with complete fairness.

Proof. By instructing the parties to choose their inputs randomly, such a function can be used to produce a common bit according to the probability distribution $(p, 1-p)$. Consequently, the existence of a fair protocol that computes $f$ contradicts corollary 3.4.
3.2 A family of fair functions using the protocol of Gordon et al.

We assume $k \geq 3$. Consider the following function:
$$f_k : \{x_1, \ldots, x_k\} \times \{y_1, \ldots, y_k\} \to \{0,1\},$$
$$(x_i, y_j) \mapsto \begin{cases} 0 & \text{if } i \geq 2 \text{ and } j + i = k + 2,\\ 1 & \text{otherwise.} \end{cases}$$
Function $f_k$ has the following matrix representation:
$$M_k = \begin{pmatrix}
1 & 1 & 1 & \cdots & 1 & 1 & 1\\
1 & 1 & 1 & \cdots & 1 & 1 & 0\\
1 & 1 & 1 & \cdots & 1 & 0 & 1\\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots\\
1 & 1 & 1 & 0 & \cdots & 1 & 1\\
1 & 1 & 0 & 1 & \cdots & 1 & 1\\
1 & 0 & 1 & 1 & \cdots & 1 & 1
\end{pmatrix}$$
Proposition 3.6. As a real matrix, the determinant of $M_k$ is $1$ if $k \equiv 0, 1 \pmod 4$ and $-1$ otherwise. Its inverse is
$$M_k^{-1} = \begin{pmatrix}
2-k & 1 & 1 & \cdots & 1 & 1 & 1\\
1 & 0 & 0 & \cdots & 0 & 0 & -1\\
1 & 0 & 0 & \cdots & 0 & -1 & 0\\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots\\
1 & 0 & 0 & -1 & \cdots & 0 & 0\\
1 & 0 & -1 & 0 & \cdots & 0 & 0\\
1 & -1 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix}$$

Proof. For the determinant, use the Laplace expansion along the last column and notice that all the minor matrices are non-invertible except the first one, which is $M_{k-1}$. For the inverse, just check that the product yields the identity matrix.
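The proposition is also easy to sanity-check numerically; here is a minimal numpy sketch (function names are ours):

```python
import numpy as np

def build_Mk(k):
    # Zero at (i, j) (1-indexed) whenever i >= 2 and i + j == k + 2.
    M = np.ones((k, k), dtype=int)
    for i in range(2, k + 1):
        M[i - 1, k + 1 - i] = 0   # column k + 2 - i, 0-indexed
    return M

def claimed_inverse(k):
    # First row (2-k, 1, ..., 1); row i >= 2 has 1 in column 1 and
    # -1 in the anti-diagonal column k + 2 - i.
    N = np.zeros((k, k), dtype=int)
    N[0, 0], N[0, 1:] = 2 - k, 1
    for i in range(2, k + 1):
        N[i - 1, 0], N[i - 1, k + 1 - i] = 1, -1
    return N

for k in range(3, 10):
    M, N = build_Mk(k), claimed_inverse(k)
    assert (M @ N == np.eye(k, dtype=int)).all()
    assert round(np.linalg.det(M)) == (1 if k % 4 in (0, 1) else -1)
print("proposition 3.6 verified for k = 3, ..., 9")
```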
We want to show that the functions $\{f_k\}_{k \geq 3}$ can be computed securely with complete fairness. To this end, we will apply corollary 2.21. Hence, we will show that our functions fulfill the hypothesis of the corollary.
Definition 3.7. Fix $i, j \in \{1, \ldots, k\}$ and define
$$p_{x_i} = \Pr[f_k(x_i, \hat{y}) = 1], \qquad p_{y_j} = \Pr[f_k(\hat{x}, y_j) = 1].$$
Thus, in our case:
$$p_{x_i} = \begin{cases} 1 & \text{if } i = 1,\\ \frac{k-1}{k} & \text{otherwise,} \end{cases} \qquad p_{y_j} = \begin{cases} 1 & \text{if } j = 1,\\ \frac{k-1}{k} & \text{otherwise.} \end{cases}$$
Definition 3.8. For a given $k$, define
$$\alpha_k = \min_{i,j} \left\{ \frac{|1 - f_k(x_i,y_j) - p_{x_i}| \cdot |1 - f_k(x_i,y_j) - p_{y_j}|}{|1 - f_k(x_i,y_j) - p_{x_i}| \cdot |1 - f_k(x_i,y_j) - p_{y_j}| + |f_k(x_i,y_j) - p_{y_j}|} \right\}.$$
We distinguish 4 possible values of $\alpha_k$:
- $(i,j) = (1,j)$,
- $(i,j) = (i,1)$,
- $(i,j)$ is not one of the above and $f_k(x_i,y_j) = 1$,
- $(i,j)$ is not one of the first two and $f_k(x_i,y_j) = 0$.
We use the appropriate values in the formula and deduce the following candidate values:
$$\alpha_k \stackrel{?}{=} \frac{k-1}{k}, \qquad \alpha_k \stackrel{?}{=} 1, \qquad \alpha_k \stackrel{?}{=} \frac{(k-1)^2}{k^2-k+1}, \qquad \alpha_k \stackrel{?}{=} \frac{1}{k^2-k+1}.$$
A quick analysis shows that
$$\alpha_k = \frac{1}{k^2-k+1}.$$
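This "quick analysis" can also be checked mechanically with exact rational arithmetic. The sketch below recomputes the generic minimum of definition 3.8 over all entries of $M_k$, reusing build_Mk from the sketch after proposition 3.6 (alpha_f is our name):

```python
from fractions import Fraction

def alpha_f(M, px, py):
    # Generic alpha of definition 3.8 for a 0/1 matrix M.
    vals = []
    k = len(M)
    for i in range(k):
        for j in range(k):
            fij = int(M[i][j])
            num = abs(1 - fij - px[i]) * abs(1 - fij - py[j])
            vals.append(num / (num + abs(fij - py[j])))
    return min(vals)

for k in range(3, 12):
    M = build_Mk(k)
    px = [Fraction(int(r), k) for r in M.sum(axis=1)]
    py = [Fraction(int(c), k) for c in M.sum(axis=0)]
    assert alpha_f(M, px, py) == Fraction(1, k * k - k + 1)
print("alpha_k = 1/(k^2 - k + 1) confirmed for k = 3, ..., 11")
```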
Definition 3.9. For all $i \in \{1, \ldots, k\}$ define the following real vectors:
$$C^{(0)}_{x_i}(j) = \begin{cases} p_{y_j} & \text{if } f_k(x_i, y_j) = 1,\\ \dfrac{\alpha_k\, p_{y_j}}{(1-\alpha_k)(1-p_{x_i})} + p_{y_j} & \text{otherwise;} \end{cases}$$
$$C^{(1)}_{x_i}(j) = \begin{cases} \dfrac{\alpha_k\, (p_{y_j}-1)}{(1-\alpha_k)\, p_{x_i}} + p_{y_j} & \text{if } f_k(x_i, y_j) = 1,\\ p_{y_j} & \text{otherwise.} \end{cases}$$
Hence, for $i = 1$ we have
$$C^{(0)}_{x_1} = \left(1, \frac{k-1}{k}, \ldots, \frac{k-1}{k}\right), \qquad C^{(1)}_{x_1} = \left(1, \frac{k^3-2k^2+k-1}{k^3-k^2}, \ldots, \frac{k^3-2k^2+k-1}{k^3-k^2}\right),$$
and for $i \neq 1$,
$$C^{(0)}_{x_i} = \Big(1, \frac{k-1}{k}, \ldots, \frac{k-1}{k}, \underbrace{1}_{f_k(x_i,y_j)=0}, \frac{k-1}{k}, \ldots, \frac{k-1}{k}\Big),$$
$$C^{(1)}_{x_i} = \Big(1, \frac{(k-1)^3-1}{k(k-1)^2}, \ldots, \frac{(k-1)^3-1}{k(k-1)^2}, \underbrace{\frac{k-1}{k}}_{f_k(x_i,y_j)=0}, \frac{(k-1)^3-1}{k(k-1)^2}, \ldots, \frac{(k-1)^3-1}{k(k-1)^2}\Big).$$
Theorem 3.10. There exists a protocol for computing $f_k$ securely with complete fairness.

Proof. We show that our family of functions fulfills the hypothesis of theorem 2.20, i.e. we show that for all $b \in \{0,1\}$ and all $i \in \{1, \ldots, k\}$ there exists a probability vector $X^{(b)}_{x_i}$ such that
$$X^{(b)}_{x_i} M_k = C^{(b)}_{x_i}.$$
Now, we know that $M_k$ is invertible, thus the equation above admits a unique solution for each $C^{(b)}_{x_i}$, namely
$$X^{(b)}_{x_i} = C^{(b)}_{x_i} M_k^{-1}.$$
It remains to write everything down and verify that the $X$'s are probability vectors:
$$X^{(0)}_{x_1} = \left(\frac{1}{k}, \ldots, \frac{1}{k}\right), \qquad X^{(1)}_{x_1} = \left(\frac{k-1}{k^2}, \frac{k^2-k+1}{k^3-k^2}, \ldots, \frac{k^2-k+1}{k^3-k^2}\right),$$
$$X^{(0)}_{x_i} = \Big(\frac{2}{k}, \frac{1}{k}, \ldots, \frac{1}{k}, \underbrace{0}_{i\text{-th entry}}, \frac{1}{k}, \ldots, \frac{1}{k}\Big),$$
$$X^{(1)}_{x_i} = \Big(\frac{1}{k} - \frac{k-2}{k(k-1)^2}, 1 - \frac{(k-1)^3-1}{k(k-1)^2}, \ldots, 1 - \frac{(k-1)^3-1}{k(k-1)^2}, \underbrace{\frac{1}{k}}_{i\text{-th entry}}, 1 - \frac{(k-1)^3-1}{k(k-1)^2}, \ldots, 1 - \frac{(k-1)^3-1}{k(k-1)^2}\Big).$$
We see that all the values above are nonnegative. To see that each vector sums to one, one can either check it by hand or notice that the first column of $M_k$ is all-ones (as is the first row) and that the first entry of $C^{(b)}_{x_i}$ is 1 as well.
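Instead of checking by hand, the whole computation can be verified with exact arithmetic, reusing build_Mk, claimed_inverse and Fraction from the sketches above (C_vec is our name, built from definition 3.9):

```python
def C_vec(b, i, M, px, py, alpha):
    k = len(px)
    out = []
    for j in range(k):
        fij = int(M[i][j])
        if b == 0:
            out.append(py[j] if fij == 1
                       else alpha * py[j] / ((1 - alpha) * (1 - px[i])) + py[j])
        else:
            out.append(alpha * (py[j] - 1) / ((1 - alpha) * px[i]) + py[j]
                       if fij == 1 else py[j])
    return out

for k in range(3, 10):
    M, Minv = build_Mk(k), claimed_inverse(k)
    px = [Fraction(int(r), k) for r in M.sum(axis=1)]
    py = [Fraction(int(c), k) for c in M.sum(axis=0)]
    alpha = Fraction(1, k * k - k + 1)
    for i in range(k):
        for b in (0, 1):
            C = C_vec(b, i, M, px, py, alpha)
            X = [sum(C[l] * int(Minv[l][j]) for l in range(k)) for j in range(k)]
            assert min(X) >= 0 and sum(X) == 1          # probability vector
            assert all(sum(X[l] * int(M[l][j]) for l in range(k)) == C[j]
                       for j in range(k))               # X * M_k = C
print("all X^(b)_{x_i} are probability vectors for k = 3, ..., 9")
```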
CHAPTER 4
FUTURE DIRECTIONS

Towards a classification of finite Boolean functions

Other directions
APPENDIX A
ADDITIONAL NOTIONS

Definition A.1 (MAC). A message authentication code consists of three polynomial-time algorithms (Gen, Mac, Vrfy) such that:
- Gen takes as input the security parameter $1^n$ and outputs a key $k$;
- Mac takes as input the key $k$ and a message $m \in \{0,1\}^n$ and outputs a tag $t = \mathrm{Mac}_k(m)$;
- Vrfy takes as input $k$, $m$ and a tag $t$, and outputs a bit $b = \mathrm{Vrfy}_k(m,t)$. We regard $b = 1$ as acceptance and $b = 0$ as rejection.

For correctness, we also require that $\mathrm{Vrfy}_k(m, \mathrm{Mac}_k(m)) = 1$.
Definition A.2. A MAC (Gen, Mac, Vrfy) is an information-theoretically secure $\ell$-times message authentication code if for any sequence of messages $m_1, \ldots, m_\ell$, a computationally unbounded adversary $\mathcal{A}$ succeeds in the following game only with negligible probability:
1. Messages $m_1, \ldots, m_\ell$ are given to the adversary together with their respective tags $t_1, \ldots, t_\ell$.
2. The adversary outputs a new message $m^* \notin \{m_i\}_i$ and a tag $t^*$.
3. The adversary wins if $\mathrm{Vrfy}_k(m^*, t^*) = 1$.
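For intuition, here is a minimal sketch of the classical information-theoretically secure one-time ($\ell = 1$) MAC: a random affine function over a prime field. $\ell$-times variants exist (e.g. polynomial-based) but are not shown here; all names below are ours.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; keys, messages and tags live mod P

def gen():
    # Key = a random affine function m -> a*m + b over GF(P)
    # (a pairwise-independent hash family).
    return secrets.randbelow(P), secrets.randbelow(P)

def mac(key, m):
    a, b = key
    return (a * (m % P) + b) % P

def vrfy(key, m, t):
    return mac(key, m) == t

key = gen()
assert vrfy(key, 42, mac(key, 42))
# Given one pair (m, t), the keys consistent with it form the line
# {(a, t - a*m) : a in GF(P)}; for any m' != m, each candidate tag t'
# is produced by exactly one such key, so even an unbounded forger
# succeeds with probability exactly 1/P.
```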
BIBLIOGRAPHY

[1] J. Katz, "Bridging game theory and cryptography: recent results and future directions," in Proceedings of the 5th Conference on Theory of Cryptography, TCC'08, (Berlin, Heidelberg), pp. 251-272, Springer-Verlag, 2008.

[2] R. Cleve, "Limits on the security of coin flips when half the processors are faulty (extended abstract)," in Hartmanis [3], pp. 364-369.

[3] J. Hartmanis, ed., Proceedings of the 18th Annual ACM Symposium on Theory of Computing, May 28-30, 1986, Berkeley, California, USA, ACM, 1986.

[4] O. Goldreich, Foundations of Cryptography. II. Basic Applications. Cambridge: Cambridge University Press, 2004.

[5] S. D. Gordon, C. Hazay, J. Katz, and Y. Lindell, "Complete fairness in secure two-party computation," in Dwork [6], pp. 413-422.

[6] C. Dwork, ed., Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008, ACM, 2008.

[7] G. Asharov, R. Canetti, and C. Hazay, "Towards a game theoretic view of secure computation," IACR Cryptology ePrint Archive, vol. 2011, p. 137, 2011.

[8] R. Canetti, "Security and composition of multi-party cryptographic protocols," Journal of Cryptology, vol. 13, 2000.

[9] K. Truemper, Matroid Decomposition. Academic Press, 1992.

[10] J. G. Oxley, Matroid Theory. Oxford University Press, 1992.