A two-party functionality $\mathcal{F} = \{f_n\}_{n \in \mathbb{N}}$ is a sequence of random processes such that each $f_n$ maps pairs of inputs to pairs of random variables (one for each party). The domain of $f_n$ is denoted $X_n \times Y_n$ and the output is the pair $(f_n^1, f_n^2)$.
A two-party protocol $\Pi$ for computing a functionality $\mathcal{F}$ is a polynomial-time protocol such that, on inputs $x \in X_n$ and $y \in Y_n$, the joint distribution of the outputs of any honest execution of $\Pi$ is statistically close to $f_n(x, y) = (f_n^1(x, y), f_n^2(x, y))$.
2.1.1 The Semi-Honest Case
Let $n$ denote the security parameter and suppose that parties $P_1, P_2$ wish to compute a functionality $\mathcal{F}$ on inputs $x$ and $y$ respectively by means of a two-party protocol $\Pi$. Let $\mathcal{A}$ denote the adversary corrupting one of the parties.
Definition 2.2. Adversary $\mathcal{A}$ is semi-honest (or passive) if she executes the protocol according to its instructions. In other words, a passive adversary attempts to gather information she is not supposed to know without disrupting the execution of a given scheme.
We now introduce the ideal model for the passive adversary. Let $T$ denote an additional party, called the trusted party. We assume that $T$ is honest and communicates with $P_1$ and $P_2$ privately. Consider the following (hypothetical) scheme (figure 2.1):
Inputs: $P_1$ holds $1^n$ and $x$, $P_2$ holds $1^n$ and $y$. Adversary $\mathcal{A}$ is given an auxiliary input $z \in \{0,1\}^*$.
Parties send inputs: $P_1$ sends $x$ to $T$, followed by $P_2$ who sends $y$ to $T$.
Trusted party performs computation: Party $T$ chooses a string $r$ uniformly at random and computes $f_n(x, y; r) = (f_n^1(x, y; r), f_n^2(x, y; r))$.
Trusted party sends outputs: Party $T$ first sends $f_n^1(x, y; r)$ to $P_1$ and then sends $f_n^2(x, y; r)$ to $P_2$.
Outputs: The honest party outputs whatever $T$ sent him; the corrupted party outputs nothing and the adversary outputs a probabilistic polynomial-time function of its view.
Figure 2.1: Ideal model for passive adversary.
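As a toy illustration (not part of the formal model), the ideal computation can be phrased in a few lines of Python; `ideal_execution` and `f_xor` are names of our own invention:

```python
import secrets

def ideal_execution(f, x, y):
    """Ideal-model computation: a trusted party T samples its own randomness r
    and privately returns to each party its component of f(x, y; r)."""
    r = secrets.token_bytes(16)          # T's uniformly random string
    out1, out2 = f(x, y, r)
    return out1, out2                    # sent privately to P1 and P2

# toy functionality: both parties learn the XOR of their input bits
f_xor = lambda x, y, r: (x ^ y, x ^ y)
assert ideal_execution(f_xor, 1, 0) == (1, 1)
```

The point of the ideal model is precisely that all interaction is reduced to this single trusted call: whatever an adversary can do here, it can do in any protocol.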
In the presence of a passive adversary corrupting one of the parties, the ideal model amounts to the best-case scenario in terms of security and privacy. Hence, our goal is to design protocols that emulate the ideal model. Before we formally define what "emulate the ideal model" means, we need to introduce further notation.
Definition 2.3. Given security parameter $n$, suppose that parties $P_1, P_2$ wish to compute $\mathcal{F}$ on inputs $x$ and $y$ by means of a two-party protocol $\Pi$. Let $\mathcal{A}$ denote an adversary corrupting one of the parties. The adversary is given the auxiliary input $z \in \{0,1\}^*$. Define the following variables.
$\mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x, y, n)$ denotes the honest party's output in the ideal model.

$\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x, y, n)$ denotes the adversary's view in the ideal model.

$\mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n)$ denotes the honest party's output in the real model.

$\mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n)$ denotes the adversary's view in the real model.
We can now formally define what it means for a protocol to be secure in the presence of a passive adversary.
Definition 2.4 (Security for passive adversary). Let $\Pi$ be a two-party protocol for computing $\mathcal{F}$. We say that $\Pi$ privately computes $\mathcal{F}$ if for every non-uniform probabilistic polynomial-time passive adversary $\mathcal{A}$ in the real model there exists a non-uniform probabilistic polynomial-time passive adversary $\mathcal{S}$ in the ideal model such that
\[
\left\{ \mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n),\ \mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n) \right\}_{x,y,z,n}
\overset{c}{\equiv}
\left\{ \mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x, y, n),\ \mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x, y, n) \right\}_{x,y,z,n}.
\]
In other words, the above definition says that it is infeasible for any probabilistic polynomial-time algorithm to distinguish between the real and ideal model. In this respect, privately computing $\mathcal{F}$ by means of a protocol essentially amounts to computing $\mathcal{F}$ in the ideal model by means of a trusted party.
Theorem 2.5. If there exists a collection of enhanced trapdoor permutations, then for any two-party functionality $\mathcal{F}$ there exists a protocol that privately computes $\mathcal{F}$.
2.1.2 The Malicious Case
A question that naturally arises now is: what happens if the adversary behaves in an arbitrary way? We will now make no assumptions regarding the adversary's motives and inquire into the security of a given protocol. In the spirit of section 2.1.1, we will define a new ideal model and redefine security in terms of that model. As usual, given security parameter $n$, suppose that parties $P_1, P_2$ wish to compute a functionality $\mathcal{F}$ on inputs $x$ and $y$. Let $\mathcal{A}$ denote an adversary corrupting one of the parties. Consider now the following ideal scheme (figure 2.2):
Inputs: $P_1$ holds $1^n$ and $x$, $P_2$ holds $1^n$ and $y$. Adversary $\mathcal{A}$ is given an auxiliary input $z \in \{0,1\}^*$.
Parties send inputs: The honest party sends his input to $T$; the corrupted party sends a value of the adversary's choice. Write $(x', y')$ for the resulting values. Party $T$ then chooses a string $r$ uniformly at random and computes $f_n(x', y'; r) = (f_n^1(x', y'; r), f_n^2(x', y'; r))$.
Trusted party sends outputs: Trusted party sends $f_n^i(x', y'; r)$ to the corrupted party $P_i$. After receiving its output, the adversary decides whether or not to abort the protocol. In the former case, $T$ sends an abort symbol $\perp$ to the honest party; in the latter case he sends the remaining output.
Outputs: The honest party outputs whatever $T$ sent him; the corrupted party outputs nothing and the adversary outputs a probabilistic polynomial-time function of its view.
Again, we need to introduce the random variables from definition 2.3. This time, however, our variables are defined with respect to the new ideal model. We will use the same notation for consistency, but we stress that the following definition and definition 2.3 differ in that we are considering different ideal models.
Definition 2.6. Given security parameter $n$, suppose that parties $P_1, P_2$ wish to compute $\mathcal{F}$ on inputs $x$ and $y$ by means of a two-party protocol $\Pi$. Let $\mathcal{A}$ denote an adversary corrupting one of the parties. The adversary is given the auxiliary input $z \in \{0,1\}^*$. Define the following variables.
$\mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x, y, n)$ denotes the honest party's output in the ideal model.

$\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x, y, n)$ denotes the adversary's view in the ideal model.

$\mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n)$ denotes the honest party's output in the real model.

$\mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n)$ denotes the adversary's view in the real model.
We can now formally define what it means for a protocol to be secure in the presence of an active adversary.
Figure 2.2: Ideal model with abort.
Definition 2.7 (Security for active adversary). Let $\Pi$ be a two-party protocol for computing $\mathcal{F}$. We say that $\Pi$ securely computes $\mathcal{F}$ with abort if for every non-uniform probabilistic polynomial-time adversary $\mathcal{A}$ in the real model there exists a non-uniform probabilistic polynomial-time adversary $\mathcal{S}$ in the ideal model such that
\[
\left\{ \mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n),\ \mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n) \right\}_{x,y,z,n}
\overset{c}{\equiv}
\left\{ \mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x, y, n),\ \mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x, y, n) \right\}_{x,y,z,n}.
\]
The above definition implies that computing $\mathcal{F}$ by means of a secure protocol essentially amounts to computing $\mathcal{F}$ in the ideal model by means of a trusted party, where the adversary can prevent the honest party from learning his output.
Theorem 2.8. If there exists a collection of enhanced trapdoor permutations, then for any two-party functionality $\mathcal{F}$ there exists a protocol that securely computes $\mathcal{F}$ with abort.
2.2 Fairness
2.2.1 Definition
From the previous discussion, we see that for malicious adversaries, security is guaranteed as long as we disregard fairness. What happens if we demand that, as long as one of the parties learns the desired output, then both of them do? We need to redefine our security notions by introducing yet another ideal model. Using the notation from the previous section, consider the following ideal scheme (figure 2.3):
Inputs: $P_1$ holds $1^n$ and $x$, $P_2$ holds $1^n$ and $y$. Adversary $\mathcal{A}$ is given an auxiliary input $z \in \{0,1\}^*$.
Parties send inputs: The honest party sends his input to $T$; the corrupted party sends a value of the adversary's choice. Write $(x', y')$ for the resulting values. Party $T$ then chooses a string $r$ uniformly at random and computes $f_n(x', y'; r) = (f_n^1(x', y'; r), f_n^2(x', y'; r))$.
Trusted party sends outputs: Trusted party sends $f_n^1(x', y'; r)$ to $P_1$ and $f_n^2(x', y'; r)$ to $P_2$.
Outputs: The honest party outputs whatever $T$ sent him; the corrupted party outputs nothing and the adversary outputs a probabilistic polynomial-time function of its view.
Figure 2.3: Ideal model for complete fairness.
Definition 2.9. Given security parameter $n$, suppose that parties $P_1, P_2$ wish to compute $\mathcal{F}$ on inputs $x$ and $y$ by means of a two-party protocol $\Pi$. Let $\mathcal{A}$ denote an adversary corrupting one of the parties. The adversary is given the auxiliary input $z \in \{0,1\}^*$. Define the following variables.

$\mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x, y, n)$ denotes the honest party's output in the ideal model.

$\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x, y, n)$ denotes the adversary's view in the ideal model.

$\mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n)$ denotes the honest party's output in the real model.

$\mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n)$ denotes the adversary's view in the real model.
Remark 2.10. Note that definitions 2.3, 2.6 and 2.9 differ in that the underlying ideal model is different in each case. We essentially wrote the same definitions three times to stress this point.
Definition 2.11 (Complete fairness). Let $\Pi$ be a two-party protocol for computing $\mathcal{F}$. We say that $\Pi$ securely computes $\mathcal{F}$ with complete fairness if for every non-uniform probabilistic polynomial-time adversary $\mathcal{A}$ in the real model there exists a non-uniform probabilistic polynomial-time adversary $\mathcal{S}$ in the ideal model such that
\[
\left\{ \mathrm{VIEW}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n),\ \mathrm{OUT}^{\mathrm{real}}_{\Pi,\mathcal{A}(z)}(x, y, n) \right\}_{x,y,z,n}
\overset{c}{\equiv}
\left\{ \mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x, y, n),\ \mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x, y, n) \right\}_{x,y,z,n}.
\]
2.2.2 A Negative Result: Cleve (1986)
The first result regarding fairness is a negative one. Namely, we show that for a particular class of functionalities, there is no protocol that guarantees security with respect to definition 2.11. Suppose parties $P_1, P_2$ wish to compute a common random bit by means of a protocol $\Pi$. In other words, after an execution of $\Pi$, parties $P_1$ and $P_2$ obtain an output $\omega$ such that $\Pr[\omega = 0] = 1/2$. Motivated by the concept of fairness, we demand that the parties' output is always well defined. Thus, if some anomaly occurs, we would like the parties to output a truly random bit. Given security parameter $n$, we model $\Pi$ as an $r(n)$-round protocol, where $r$ is some polynomially-bounded function, such that:
At the beginning, $P_2$ computes a bit $b_0$.

At round $i$, $P_1$ computes a bit $a_i$ and then sends a message to $P_2$, who in turn computes a bit $b_i$ and sends a reply to $P_1$.

After the final round, $P_1$ computes $a_{r(n)+1}$.

Parties output the last bit they successfully constructed. In particular, if no anomaly occurs, $P_1$ and $P_2$ output $a_{r(n)+1}$ and $b_{r(n)}$ respectively.
Definition 2.12. Let $\varepsilon > 0$. Using the notation above, we say that $\Pi$ is $\varepsilon$-consistent if
\[
\Pr[a_{r(n)+1} = b_{r(n)}] \geq \frac{1}{2} + \varepsilon.
\]
Definition 2.13. For a random variable $\omega$ taking values in $\{0, 1\}$, define the bias of $\omega$ to be
\[
\left| \Pr[\omega = 0] - \frac{1}{2} \right|.
\]
Theorem 2.14 (Cleve). If for some $\varepsilon > 0$ protocol $\Pi$ is $\varepsilon$-consistent, then there exists an adversary corrupting one of the parties such that the bias of the honest party's output is at least
\[
\frac{\varepsilon}{4r(n) + 1}.
\]
Proof. We define $4r(n) + 1$ adversaries
\[
\mathcal{A},\ \mathcal{A}^1_{1,0}, \ldots, \mathcal{A}^1_{r(n),0},\ \mathcal{A}^1_{1,1}, \ldots, \mathcal{A}^1_{r(n),1},\ \mathcal{A}^2_{1,0}, \ldots, \mathcal{A}^2_{r(n),0},\ \mathcal{A}^2_{1,1}, \ldots, \mathcal{A}^2_{r(n),1}
\]
with the following quitting strategies:
Figure 2.4: Coin-tossing protocol.
Adversary $\mathcal{A}$ instructs $P_1$ to quit immediately (no information is exchanged).

Adversary $\mathcal{A}^j_{i,\beta}$ instructs $P_j$ to proceed normally until round $i - 1$. At round $i$, if the corrupted party's backup bit ($a_i$ or $b_i$) is equal to $\beta$, then proceed to the next round and quit. Otherwise quit at round $i$.
Next, let $\delta$ and $\delta^j_{i,\beta}$ denote the bias of the honest party's output under the attack of $\mathcal{A}$ and $\mathcal{A}^j_{i,\beta}$, respectively. From the definition of the bias, we deduce the following lower bounds:
\[
\begin{aligned}
\delta &\geq \max\{\Pr[b_0 = 0], \Pr[b_0 = 1]\} - \tfrac{1}{2},\\
\delta^1_{i,0} &\geq \Pr[a_i = 0 \wedge b_i = 0] + \Pr[a_i = 1 \wedge b_{i-1} = 0] - \tfrac{1}{2},\\
\delta^1_{i,1} &\geq \Pr[a_i = 1 \wedge b_i = 1] + \Pr[a_i = 0 \wedge b_{i-1} = 1] - \tfrac{1}{2},\\
\delta^2_{i,0} &\geq \Pr[b_i = 0 \wedge a_{i+1} = 0] + \Pr[b_i = 1 \wedge a_i = 0] - \tfrac{1}{2},\\
\delta^2_{i,1} &\geq \Pr[b_i = 1 \wedge a_{i+1} = 1] + \Pr[b_i = 0 \wedge a_i = 1] - \tfrac{1}{2}.
\end{aligned}
\]
Let $\bar{\delta}$ denote the average of the above values. After simplification, we deduce that
\[
(4r(n) + 1)\,\bar{\delta} \geq \delta + \Pr[b_0 \neq a_1] + \Pr[b_{r(n)} = a_{r(n)+1}] - 1. \tag{2.1}
\]
Knowing that $a_1$ and $b_0$ are independent variables (no information is exchanged), notice that
\[
\begin{aligned}
\Pr[b_0 \neq a_1] &= 1 - \Pr[b_0 = a_1]\\
&= 1 - \Pr[a_1 = 0]\Pr[b_0 = 0] - \Pr[a_1 = 1]\Pr[b_0 = 1]\\
&\geq 1 - \max\{\Pr[b_0 = 0], \Pr[b_0 = 1]\}.
\end{aligned}
\]
Plug the above expression into equation (2.1) and use the $\varepsilon$-consistency of $\Pi$ to deduce that
\[
\bar{\delta} \geq \frac{\varepsilon}{4r(n) + 1}.
\]
Since the largest of the biases is at least their average, one of the adversaries above achieves the claimed bias. $\Box$
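The effect of a quitting attack is easy to see on a deliberately naive one-round coin flip. The following Monte Carlo sketch is only an illustration of the phenomenon, not Cleve's adversary family: $P_1$ sends a random bit $a$; a corrupted $P_2$ then already knows the result $a \oplus b$ and aborts whenever it is $1$, forcing $P_1$ onto a fresh random backup bit.

```python
import random

def biased_run():
    """One run of a naive coin flip against an aborting P2: P1 sends a random
    bit a; the corrupted P2 computes c = a XOR b first and aborts whenever
    c = 1, so P1 falls back to an independent random backup bit."""
    a, b = random.getrandbits(1), random.getrandbits(1)
    c = a ^ b
    if c == 1:                     # adversary quits; P1 outputs its backup bit
        return random.getrandbits(1)
    return c                       # honest completion

trials = 100_000
zeros = sum(biased_run() == 0 for _ in range(trials)) / trials
print(f"Pr[output = 0] is about {zeros:.3f}")
```

An honest execution would give probability $1/2$; here $\Pr[\text{output} = 0] = \Pr[c = 0] + \Pr[c = 1]\cdot\tfrac12 = 3/4$, a bias of $1/4$ from a single abort decision.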
Corollary 2.15. Finite two-input Boolean functions $f : X \times Y \to \{0, 1\}$ satisfying
\[
\forall x \in X,\ \forall y \in Y, \quad \Pr[f(x, \cdot) = 0] = \Pr[f(\cdot, y) = 0] = \frac{1}{2}
\]
are not computable with complete fairness.
Proof. Let $f$ be as above. Any fair protocol for computing $f$ can be used as a fair coin-tossing scheme by instructing the parties to choose their inputs randomly. However, in the ideal model, the honest party's output is always truly random, whereas in the real model, by theorem 2.14, there exists an adversary that can bias the honest party's output in a non-negligible way. This contradicts our assumption that there exists a protocol that computes $f$ with complete fairness. $\Box$
2.2.3 Fair Computation: Gordon/Katz/Hazay/Lindell (2008)
The $\mathcal{G}$-hybrid model
Let $\mathcal{F} = \{f_n\}_{n\in\mathbb{N}}$ and $\mathcal{G} = \{g_n\}_{n\in\mathbb{N}}$ be two-party functionalities. We say that protocol $\Pi$ computes $\mathcal{F}$ in the $\mathcal{G}$-hybrid model with abort if the following conditions are satisfied.

1. At any given round, exactly one of the following occurs:

(a) Parties exchange information in the real model, or

(b) Parties make a single call to a trusted party that computes $\mathcal{G}$ in the ideal model with abort.

2. The joint distribution of the parties' outputs after an execution of $\Pi$ is statistically close to the output of $\mathcal{F}$ on the same inputs.
Definition 2.16. Given security parameter $n$, suppose that parties $P_1, P_2$ wish to compute $\mathcal{F}$ on inputs $x$ and $y$ by means of a two-party protocol $\Pi$. Let $\mathcal{A}$ denote an adversary corrupting one of the parties. The adversary is given the auxiliary input $z \in \{0,1\}^*$. Define the following variables.
$\mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x, y, n)$ denotes the honest party's output in the ideal model for complete fairness.

$\mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{A}(z)}(x, y, n)$ denotes the adversary's view in the ideal model for complete fairness.

$\mathrm{OUT}^{\mathcal{G}\text{-hyb}}_{\mathcal{F},\mathcal{A}(z)}(x, y, n)$ denotes the honest party's output in the $\mathcal{G}$-hybrid model with abort.

$\mathrm{VIEW}^{\mathcal{G}\text{-hyb}}_{\mathcal{F},\mathcal{A}(z)}(x, y, n)$ denotes the adversary's view in the $\mathcal{G}$-hybrid model with abort.
Definition 2.17. Let $\Pi$ be a two-party protocol for computing $\mathcal{F}$ in the $\mathcal{G}$-hybrid model with abort. We say that $\Pi$ securely computes $\mathcal{F}$ with complete fairness in the $\mathcal{G}$-hybrid model if for every non-uniform probabilistic polynomial-time adversary $\mathcal{A}$ in the $\mathcal{G}$-hybrid model there exists a non-uniform probabilistic polynomial-time adversary $\mathcal{S}$ in the ideal model such that
\[
\left\{ \mathrm{VIEW}^{\mathcal{G}\text{-hyb}}_{\Pi,\mathcal{A}(z)}(x, y, n),\ \mathrm{OUT}^{\mathcal{G}\text{-hyb}}_{\Pi,\mathcal{A}(z)}(x, y, n) \right\}_{x,y,z,n}
\overset{c}{\equiv}
\left\{ \mathrm{VIEW}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x, y, n),\ \mathrm{OUT}^{\mathrm{ideal}}_{\mathcal{F},\mathcal{S}(z)}(x, y, n) \right\}_{x,y,z,n}.
\]
If $\Pi$ is a two-party protocol for computing $\mathcal{F}$ in the $\mathcal{G}$-hybrid model with abort and $\rho$ is a two-party protocol for computing $\mathcal{G}$, we write $\Pi^\rho$ for the real-model protocol obtained by replacing each call to the trusted party for $\mathcal{G}$ with an execution of $\rho$.

Figure 2.5: A round in the execution of $\Pi$ in the $\mathcal{G}$-hybrid model with abort.
The protocol
Suppose that $P_1, P_2$ wish to compute a functionality $\mathcal{F} = \{f_n\}_{n\in\mathbb{N}}$, where for all $n$, $f_n = f : X \times Y \to \{0, 1\}$ is a finite function. We will now describe a generic protocol, say $\Pi$, for fair computation. The scheme consists of a preliminary phase, where the parties jointly compute the functionality ShareGen (figure 2.6), followed by $m$ iterations described in figure 2.7. The protocol is parametrized by a function $\alpha = \alpha(n)$ and the number of rounds is
ShareGen

Inputs: The security parameter is $n$; $P_1$ holds $x \in X$, $P_2$ holds $y \in Y$.

Computation: Define values $a_1, \ldots, a_m$ and $b_1, \ldots, b_m$ as follows.

Choose $i^*$ according to a geometric distribution with parameter $\alpha$. For $i = 1$ to $i^* - 1$ do:

Choose $\hat{y} \in Y$ uniformly at random and set $a_i = f(x, \hat{y})$.

Choose $\hat{x} \in X$ uniformly at random and set $b_i = f(\hat{x}, y)$.

For $i = i^*$ to $m$ set $a_i = b_i = f(x, y)$.

For $1 \leq i \leq m$, choose $(a_i^{(1)}, a_i^{(2)})$ and $(b_i^{(1)}, b_i^{(2)})$ as random secret sharings of $a_i$ and $b_i$ respectively.

Generate keys $k_a$ and $k_b$ from the $\mathrm{Gen}(1^n)$ algorithm and compute all tags
\[
t_i^a = \mathrm{Mac}_{k_a}(i \| a_i^{(2)}), \qquad t_i^b = \mathrm{Mac}_{k_b}(i \| b_i^{(1)}).
\]

Output:

$P_1$ receives $a_1^{(1)}, \ldots, a_m^{(1)}$, $(b_1^{(1)}, t_1^b), \ldots, (b_m^{(1)}, t_m^b)$ and the Mac-key $k_a$.

$P_2$ receives $b_1^{(2)}, \ldots, b_m^{(2)}$, $(a_1^{(2)}, t_1^a), \ldots, (a_m^{(2)}, t_m^a)$ and the Mac-key $k_b$.

If the initially received inputs are not in the appropriate domain, then both parties are given output $\perp$.

Figure 2.6: The ShareGen functionality
set to $m = \alpha^{-1}\,\omega(\log n)$. Finally, we assume the existence of an $m$-times secure message authentication code $(\mathrm{Gen}, \mathrm{Mac}, \mathrm{Vrfy})$ (definition A.1).
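The core of ShareGen is nothing more exotic than XOR secret sharing of each backup bit, plus the "switch at round $i^*$" rule. The sketch below is an illustration under our own naming (`sharegen_backups` is a made-up helper, `random.choice` stands in for uniform sampling, and the geometric draw is capped at $m$ by assumption); it only produces the backup sequences, not the shares or tags:

```python
import random

def sharegen_backups(f, x, y, X, Y, m, alpha):
    """Backup sequences of ShareGen (sketch): before the geometrically
    distributed round i*, each backup bit is computed with a freshly random
    input for the other party; from round i* on, both equal f(x, y)."""
    i_star = 1
    while random.random() >= alpha and i_star < m:   # i* ~ Geometric(alpha), capped at m
        i_star += 1
    a = [f(x, random.choice(Y)) for _ in range(i_star - 1)]
    b = [f(random.choice(X), y) for _ in range(i_star - 1)]
    a += [f(x, y)] * (m - i_star + 1)
    b += [f(x, y)] * (m - i_star + 1)
    return a, b, i_star
```

A party that is cut off before round $i^*$ therefore holds a bit computed from a random counterpart input, which is exactly what makes early aborts useless.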
Definition 2.19. For a finite function $f : X \times Y \to \{0, 1\}$, where $X = \{x_1, \ldots, x_{|X|}\}$ and $Y = \{y_1, \ldots, y_{|Y|}\}$, define
\[
M_f = \big(f(x_i, y_j)\big)_{i,j}, \qquad p_{x_i} = \Pr[f(x_i, \cdot) = 1], \qquad p_{y_j} = \Pr[f(\cdot, y_j) = 1],
\]
the probabilities taken over a uniformly random choice of the other input, and
\[
\alpha_f = \min_{i,j} \frac{\left|1 - f(x_i, y_j) - p_{x_i}\right| \cdot \left|1 - f(x_i, y_j) - p_{y_j}\right|}{\left|1 - f(x_i, y_j) - p_{x_i}\right| \cdot \left|1 - f(x_i, y_j) - p_{y_j}\right| + \left|f(x_i, y_j) - p_{y_j}\right|}.
\]
Finally, for all $i \in \{1, \ldots, |X|\}$ define the following real vectors:
\[
C^{(0)}_{x_i}(j) =
\begin{cases}
p_{y_j} & \text{if } f(x_i, y_j) = 1,\\[4pt]
\dfrac{\alpha\, p_{y_j}}{(1-\alpha)(1-p_{x_i})} + p_{y_j} & \text{otherwise,}
\end{cases}
\]
\[
C^{(1)}_{x_i}(j) =
\begin{cases}
\dfrac{\alpha\,(p_{y_j} - 1)}{(1-\alpha)\, p_{x_i}} + p_{y_j} & \text{if } f(x_i, y_j) = 1,\\[4pt]
p_{y_j} & \text{otherwise.}
\end{cases}
\]
Gordon/Katz/Hazay/Lindell Protocol (2008)

Inputs: $P_1, P_2$ hold $x$ and $y$ respectively. The security parameter is $n$.

Protocol:

1. Preliminary phase:

(a) $P_1$ chooses $\hat{y} \in Y$ uniformly at random and sets $a_0 = f(x, \hat{y})$. $P_2$ chooses $\hat{x} \in X$ uniformly at random and sets $b_0 = f(\hat{x}, y)$.

(b) Parties run a protocol for ShareGen on their respective inputs.

(c) If $P_1$ (resp. $P_2$) receives $\perp$, then he outputs $a_0$ (resp. $b_0$). Otherwise proceed to the next step.

(d) Let $a_1^{(1)}, \ldots, a_m^{(1)}$, $(b_1^{(1)}, t_1^b), \ldots, (b_m^{(1)}, t_m^b)$ and $k_a$ denote the output of $P_1$. Let $b_1^{(2)}, \ldots, b_m^{(2)}$, $(a_1^{(2)}, t_1^a), \ldots, (a_m^{(2)}, t_m^a)$ and $k_b$ denote the output of $P_2$.

2. For $i = 1, \ldots, m$ do:

$P_2$ sends the next share to $P_1$:

(a) $P_2$ sends $(a_i^{(2)}, t_i^a)$ to $P_1$.

(b) If $\mathrm{Vrfy}_{k_a}(i \| a_i^{(2)}, t_i^a) \neq 1$ (or some other anomaly occurs), then $P_1$ outputs $a_{i-1}$ and halts.

(c) If $\mathrm{Vrfy}_{k_a}(i \| a_i^{(2)}, t_i^a) = 1$, then $P_1$ sets $a_i = a_i^{(1)} \oplus a_i^{(2)}$ and continues.

$P_1$ sends the next share to $P_2$:

(a) $P_1$ sends $(b_i^{(1)}, t_i^b)$ to $P_2$.

(b) If $\mathrm{Vrfy}_{k_b}(i \| b_i^{(1)}, t_i^b) \neq 1$ (or some other anomaly occurs), then $P_2$ outputs $b_{i-1}$ and halts.

(c) If $\mathrm{Vrfy}_{k_b}(i \| b_i^{(1)}, t_i^b) = 1$, then $P_2$ sets $b_i = b_i^{(1)} \oplus b_i^{(2)}$ and continues.

Output: If all $m$ iterations have been run, then $P_1$ outputs $a_m$ and $P_2$ outputs $b_m$.

Figure 2.7: Protocol $\Pi$ for computing $f$
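A single receiving step of the iteration phase is easy to prototype. The sketch below is illustrative rather than the thesis' implementation: `receive_share` is a name of our own choosing, and HMAC stands in for the $m$-times secure MAC of definition A.1.

```python
import hmac, hashlib, secrets

def receive_share(key, i, share, t, my_share, backup):
    """One receiving step of round i: verify the tag over (round index, share);
    on success reconstruct the round-i bit as the XOR of the two shares, on
    any anomaly fall back to the previous round's backup bit and halt."""
    expected = hmac.new(key, f"{i}|{share}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(t, expected):
        return backup, True            # output a_{i-1} and halt
    return my_share ^ share, False     # a_i = a_i^(1) XOR a_i^(2), continue

key = secrets.token_bytes(32)
a1, a2 = 1, 0                          # shares of a_i = 1
t = hmac.new(key, b"3|0", hashlib.sha256).digest()
bit, halted = receive_share(key, 3, 0, t, a1, backup=0)
assert (bit, halted) == (1, False)
# a corrupted sender who mauls the share is caught; P1 outputs its backup bit
bit, halted = receive_share(key, 3, 1, t, a1, backup=0)
assert (bit, halted) == (0, True)
```

Binding the round index into the tag is what prevents a corrupted party from replaying an old share in a later round.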
Theorem 2.20. Let $f$ be as above. If for all $b \in \{0, 1\}$ and all $x \in X$ there exists a probability vector $X^{(b)}_x$ such that
\[
X^{(b)}_x\, M_f = C^{(b)}_x,
\]
then, assuming $(\mathrm{Gen}, \mathrm{Mac}, \mathrm{Vrfy})$ is an $m$-times secure MAC, protocol $\Pi$ securely computes $f$ with complete fairness in the hybrid model.
Sketch of proof. For an adversary $\mathcal{A}$ corrupting $P_i$ in the hybrid model, we define an adversary $\mathcal{S}$ in the ideal model with black-box access to $\mathcal{A}$, and show that the joint distributions of the adversary's view and the honest party's output in the hybrid and ideal models are computationally indistinguishable.
Adversary $\mathcal{A}$ corrupts $P_2$:

1. $\mathcal{S}$ invokes $\mathcal{A}$ on input $y$, security parameter $n$ and auxiliary input $z$. Simulator $\mathcal{S}$ also chooses a value $\hat{y} \in Y$ uniformly at random.

2. $\mathcal{S}$ receives the input $y'$ that $\mathcal{A}$ hands to ShareGen, chooses $i^*$ as ShareGen would, and simulates $P_2$'s output of ShareGen with random shares, tags and the Mac-key $k_b$. For $i = 1, \ldots, i^* - 1$:

(a) $\mathcal{S}$ receives $(\hat{a}_i^{(2)}, \hat{t}_i^a)$. If $\mathrm{Vrfy}_{k_a}(i \| \hat{a}_i^{(2)}, \hat{t}_i^a) = 0$ (or some other anomaly occurs), then $\mathcal{S}$ sends $\hat{y}$ to the trusted party, halts and outputs whatever $\mathcal{A}$ outputs. Otherwise, $\mathcal{S}$ proceeds.

(b) $\mathcal{S}$ chooses $\hat{x} \in X$ at random, computes $b_i = f(\hat{x}, y')$, sets $b_i^{(1)} = b_i^{(2)} \oplus b_i$ and computes $t_i^b = \mathrm{Mac}_{k_b}(i \| b_i^{(1)})$. The simulator hands the message $(b_i^{(1)}, t_i^b)$ to $\mathcal{A}$.

3. For $i = i^*$:

(a) $\mathcal{S}$ receives $(\hat{a}_{i^*}^{(2)}, \hat{t}_{i^*}^a)$. If $\mathrm{Vrfy}_{k_a}(i^* \| \hat{a}_{i^*}^{(2)}, \hat{t}_{i^*}^a) = 0$ (or some other anomaly occurs), then $\mathcal{S}$ sends $\hat{y}$ to the trusted party, halts and outputs whatever $\mathcal{A}$ outputs. Otherwise, $\mathcal{S}$ sends $y'$ to the trusted party and receives the output $b = f(x, y')$.

(b) $\mathcal{S}$ sets $b_{i^*}^{(1)} = b_{i^*}^{(2)} \oplus b$ and computes $t_{i^*}^b = \mathrm{Mac}_{k_b}(i^* \| b_{i^*}^{(1)})$. The simulator hands the message $(b_{i^*}^{(1)}, t_{i^*}^b)$ to $\mathcal{A}$.

4. For $i = i^* + 1, \ldots, m$:

(a) $\mathcal{S}$ receives $(\hat{a}_i^{(2)}, \hat{t}_i^a)$. If $\mathrm{Vrfy}_{k_a}(i \| \hat{a}_i^{(2)}, \hat{t}_i^a) = 0$ (or some other anomaly occurs), then $\mathcal{S}$ halts and outputs whatever $\mathcal{A}$ outputs.

(b) $\mathcal{S}$ sets $b_i^{(1)} = b_i^{(2)} \oplus b$ and computes $t_i^b = \mathrm{Mac}_{k_b}(i \| b_i^{(1)})$. The simulator hands the message $(b_i^{(1)}, t_i^b)$ to $\mathcal{A}$.

5. If all $m$ iterations have been executed but round $i^*$ was never reached, $\mathcal{S}$ sends $y'$ to the trusted party, halts and outputs whatever $\mathcal{A}$ outputs.

Adversary $\mathcal{A}$ corrupts $P_1$:

1. $\mathcal{S}$ invokes $\mathcal{A}$ on input $x$, security parameter $n$ and auxiliary input $z$, receives the input $x'$ that $\mathcal{A}$ hands to ShareGen, chooses $i^*$ as ShareGen would, and simulates $P_1$'s output of ShareGen with random shares, tags and the Mac-key $k_a$.

2. For $i = 1, \ldots, i^* - 1$:

(a) $\mathcal{S}$ chooses $\hat{y} \in Y$ at random, sets $a_i = f(x', \hat{y})$, $a_i^{(2)} = a_i^{(1)} \oplus a_i$ and computes $t_i^a = \mathrm{Mac}_{k_a}(i \| a_i^{(2)})$. He sends $(a_i^{(2)}, t_i^a)$ to $\mathcal{A}$.

(b) The simulator receives $(\hat{b}_i^{(1)}, \hat{t}_i^b)$ from $\mathcal{A}$. If $\mathrm{Vrfy}_{k_b}(i \| \hat{b}_i^{(1)}, \hat{t}_i^b) = 0$ (or some other anomaly occurs), then $\mathcal{S}$ sends $\hat{x}$ to the trusted party, where $\hat{x}$ is chosen according to the probability distribution $X^{(a_i)}_{x'}$. The simulator then halts and outputs whatever $\mathcal{A}$ outputs. Otherwise, $\mathcal{S}$ proceeds.

3. For $i = i^*, \ldots, m$:

(a) If $i = i^*$, $\mathcal{S}$ sends $x'$ to the trusted party and receives the output $a = f(x', y)$.

(b) The simulator sets $a_i^{(2)} = a_i^{(1)} \oplus a$, computes $t_i^a = \mathrm{Mac}_{k_a}(i \| a_i^{(2)})$ and sends $(a_i^{(2)}, t_i^a)$ to $\mathcal{A}$.

(c) The simulator receives $(\hat{b}_i^{(1)}, \hat{t}_i^b)$ from $\mathcal{A}$. If $\mathrm{Vrfy}_{k_b}(i \| \hat{b}_i^{(1)}, \hat{t}_i^b) = 0$ (or some other anomaly occurs), then $\mathcal{S}$ halts and outputs whatever $\mathcal{A}$ outputs. Otherwise, $\mathcal{S}$ proceeds.

4. If all $m$ iterations have been executed but round $i^*$ was never reached, $\mathcal{S}$ sends $x'$ to the trusted party, halts and outputs whatever $\mathcal{A}$ outputs.

In both cases one then conditions on the value of $i^*$ and compares the hybrid and ideal executions round by round to show that the corresponding probabilities are equal. $\Box$
Corollary 2.21. Using the notation above, if there exist a family of enhanced trapdoor permutations and an $m$-times secure MAC, then $f$ is computable with complete fairness.

Proof. By theorem 2.8, there exists a secure protocol $\rho$ for computing ShareGen with abort. By theorem 2.20, protocol $\Pi$ securely computes $f$ with complete fairness in the hybrid model. Finally, by theorem 2.18, protocol $\Pi^\rho$ securely computes $f$ with complete fairness in the real model. $\Box$
As in definition 2.13, for $p \in (0, 1)$ the $p$-bias of a random variable $\omega$ taking values in $\{0, 1\}$ is defined to be
\[
\left| \Pr[\omega = 0] - p \right|.
\]
Theorem 3.3. If for some $p \in (0, 1)$ protocol $\Pi$ is $p$-consistent, then there exists an adversary corrupting one of the parties such that the $p$-bias of the honest party's output is at least
\[
\frac{\varepsilon}{4r(n) + 1},
\]
for some $\varepsilon > 0$.
Proof. The proof goes along the lines of theorem 2.14. We introduce the same $4r(n) + 1$ adversaries and, by replacing bias and $\varepsilon$-consistency with $p$-bias and $p$-consistency respectively, the same averaging argument proves the theorem. $\Box$
Corollary 3.4. Let $\mathcal{F} = \{f_n\}_{n\in\mathbb{N}}$ be a two-party functionality such that, for all $n$, $f_n$ outputs $(\omega, \omega)$, where $\omega$ is a random variable that yields $0$ with probability $p$ and $1$ with probability $1 - p$. Then $\mathcal{F}$ is not computable with complete fairness.
Proof. In the ideal model, the $p$-bias of the honest party's output is always equal to $0$. By theorem 3.3, there exists an adversary in the real model that has no ideal-world counterpart. Hence, fairness cannot be achieved. $\Box$
Corollary 3.5. Let $f : X \times Y \to \{0, 1\}$ be a finite function such that for some $p \in (0, 1)$ and all $(x, y) \in X \times Y$,
\[
\Pr[f(x, \cdot) = 0] = \Pr[f(\cdot, y) = 0] = p.
\]
Then $f$ is not computable with complete fairness.
Proof. By instructing the parties to choose their inputs randomly, such a function can be used to produce a common bit distributed according to $(p, 1 - p)$. Consequently, the existence of a fair protocol that computes $f$ contradicts corollary 3.4. $\Box$
3.2 A family of fair functions using the protocol of Gordon et al.
We assume $k \geq 3$. Consider the following function:
\[
f_k : \{x_1, \ldots, x_k\} \times \{y_1, \ldots, y_k\} \to \{0, 1\},
\qquad
f_k(x_i, y_j) =
\begin{cases}
0 & \text{if } i \geq 2 \text{ and } i + j = k + 2,\\
1 & \text{otherwise.}
\end{cases}
\]
Function $f_k$ has the following matrix representation:
\[
M_k =
\begin{pmatrix}
1 & 1 & \cdots & 1 & 1 & 1\\
1 & 1 & \cdots & 1 & 1 & 0\\
1 & 1 & \cdots & 1 & 0 & 1\\
\vdots & \vdots & & \vdots & \vdots & \vdots\\
1 & 1 & \cdots & 0 & 1 & 1\\
1 & 0 & \cdots & 1 & 1 & 1
\end{pmatrix}.
\]
Proposition 3.6. As a real matrix, the determinant of $M_k$ is $1$ if $k \equiv 0, 1 \pmod 4$ and $-1$ otherwise. Its inverse is
\[
M_k^{-1} =
\begin{pmatrix}
2-k & 1 & 1 & \cdots & 1 & 1\\
1 & 0 & 0 & \cdots & 0 & -1\\
1 & 0 & 0 & \cdots & -1 & 0\\
\vdots & \vdots & \vdots & & \vdots & \vdots\\
1 & 0 & -1 & \cdots & 0 & 0\\
1 & -1 & 0 & \cdots & 0 & 0
\end{pmatrix}.
\]
Proof. For the determinant, use the Laplace formula along the last column and notice that all the minor matrices are non-invertible except the first one, which is $M_{k-1}$. For the inverse, just check that the product yields the identity matrix. $\Box$
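Proposition 3.6 is also easy to confirm mechanically with exact rational arithmetic. The sketch below (helper names are our own) rebuilds $M_k$, the claimed inverse and the determinant:

```python
from fractions import Fraction

def M(k):
    """Matrix of f_k: entry (i, j) is 0 iff i >= 2 and i + j = k + 2 (1-indexed)."""
    return [[Fraction(0) if i >= 2 and i + j == k + 2 else Fraction(1)
             for j in range(1, k + 1)] for i in range(1, k + 1)]

def M_inv(k):
    """Claimed inverse: first row (2-k, 1, ..., 1); row i >= 2 has a 1 in
    column 1 and a -1 in column k + 2 - i."""
    A = [[Fraction(0)] * k for _ in range(k)]
    A[0] = [Fraction(2 - k)] + [Fraction(1)] * (k - 1)
    for i in range(2, k + 1):
        A[i - 1][0], A[i - 1][k + 1 - i] = Fraction(1), Fraction(-1)
    return A

def matmul(A, B):
    k = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

def det(A):
    """Exact Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    k, d = len(A), Fraction(1)
    for c in range(k):
        p = next(r for r in range(c, k) if A[r][c] != 0)
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
    return d

for k in range(3, 9):
    I = [[Fraction(int(i == j)) for j in range(k)] for i in range(k)]
    assert matmul(M(k), M_inv(k)) == I
    assert det(M(k)) == (1 if k % 4 in (0, 1) else -1)
print("Proposition 3.6 verified for k = 3..8")
```

Exact fractions matter here: floating-point determinants of near-singular-looking 0/1 matrices can silently round away the sign information the proposition is about.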
We want to show that the functions $\{f_k\}_{k \geq 3}$ can be computed securely with complete fairness. To this end we will apply corollary 2.21. Hence, we will show that our functions fulfil the hypothesis of the corollary.
Definition 3.7. Fix $i, j \in \{1, \ldots, k\}$ and define
\[
p_{x_i} = \Pr[f_k(x_i, \cdot) = 1], \qquad p_{y_j} = \Pr[f_k(\cdot, y_j) = 1].
\]
Thus in our case:
\[
p_{x_i} =
\begin{cases}
1 & \text{if } i = 1,\\
\frac{k-1}{k} & \text{otherwise,}
\end{cases}
\qquad
p_{y_j} =
\begin{cases}
1 & \text{if } j = 1,\\
\frac{k-1}{k} & \text{otherwise.}
\end{cases}
\]
Definition 3.8. For a given $k$, define
\[
\alpha_k = \min_{i,j} \frac{\left|1 - f_k(x_i, y_j) - p_{x_i}\right| \cdot \left|1 - f_k(x_i, y_j) - p_{y_j}\right|}{\left|1 - f_k(x_i, y_j) - p_{x_i}\right| \cdot \left|1 - f_k(x_i, y_j) - p_{y_j}\right| + \left|f_k(x_i, y_j) - p_{y_j}\right|}.
\]
We distinguish 4 possible cases in the minimisation defining $\alpha_k$:

$(i, j) = (1, j)$,

$(i, j) = (i, 1)$,

$(i, j)$ is not one of the above and $f_k(x_i, y_j) = 1$,

$(i, j)$ is not one of the first two and $f_k(x_i, y_j) = 0$.
We use the appropriate values in the formula and deduce the following possible alphas:
\[
\alpha_k \overset{?}{=} \frac{k-1}{k}, \qquad
\alpha_k \overset{?}{=} 1, \qquad
\alpha_k \overset{?}{=} \frac{(k-1)^2}{k^2 - k + 1}, \qquad
\alpha_k \overset{?}{=} \frac{1}{k^2 - k + 1}.
\]
A quick analysis shows that
\[
\alpha_k = \frac{1}{k^2 - k + 1}.
\]
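The minimisation can also be checked by brute force over all pairs $(i, j)$ with exact arithmetic; this is an illustrative script (names are ours), not part of the proof:

```python
from fractions import Fraction as F

def alpha(k):
    """alpha_k of Definition 3.8 for f_k, minimised over all (i, j) exactly."""
    f = lambda i, j: 0 if i >= 2 and i + j == k + 2 else 1   # 1-indexed
    p_x = lambda i: F(1) if i == 1 else F(k - 1, k)
    p_y = lambda j: F(1) if j == 1 else F(k - 1, k)
    cands = []
    for i in range(1, k + 1):
        for j in range(1, k + 1):
            num = abs(1 - f(i, j) - p_x(i)) * abs(1 - f(i, j) - p_y(j))
            cands.append(num / (num + abs(f(i, j) - p_y(j))))
    return min(cands)

for k in range(3, 10):
    assert alpha(k) == F(1, k * k - k + 1)
print("alpha_k = 1/(k^2 - k + 1) confirmed for k = 3..9")
```

The denominator never vanishes: whenever the product in the numerator is zero, the term $|f_k(x_i, y_j) - p_{y_j}|$ is strictly positive.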
Definition 3.9. For all $i \in \{1, \ldots, k\}$ define the following real vectors:
\[
C^{(0)}_{x_i}(j) =
\begin{cases}
p_{y_j} & \text{if } f_k(x_i, y_j) = 1,\\[4pt]
\dfrac{\alpha_k\, p_{y_j}}{(1-\alpha_k)(1-p_{x_i})} + p_{y_j} & \text{otherwise,}
\end{cases}
\]
\[
C^{(1)}_{x_i}(j) =
\begin{cases}
\dfrac{\alpha_k\,(p_{y_j} - 1)}{(1-\alpha_k)\, p_{x_i}} + p_{y_j} & \text{if } f_k(x_i, y_j) = 1,\\[4pt]
p_{y_j} & \text{otherwise.}
\end{cases}
\]
Hence for us, if $i = 1$ we have
\[
C^{(0)}_{x_1} = \left( 1, \frac{k-1}{k}, \ldots, \frac{k-1}{k} \right),
\qquad
C^{(1)}_{x_1} = \left( 1, \frac{k^3 - 2k^2 + k - 1}{k^3 - k^2}, \ldots, \frac{k^3 - 2k^2 + k - 1}{k^3 - k^2} \right),
\]
and for $i \neq 1$, the vector $C^{(0)}_{x_i}$ has $1$ in coordinate $1$, another $1$ in the coordinate $j$ with $f_k(x_i, y_j) = 0$, and $\frac{k-1}{k}$ in every other coordinate, while $C^{(1)}_{x_i}$ has $1$ in coordinate $1$, $\frac{k-1}{k}$ in the coordinate $j$ with $f_k(x_i, y_j) = 0$, and $\frac{(k-1)^3 - 1}{k(k-1)^2}$ in every other coordinate.
Theorem 3.10. There exists a protocol for computing $f_k$ securely with complete fairness.
Proof. We show that our family of functions fulfils the hypothesis of theorem 2.20, i.e., we show that for all $b \in \{0, 1\}$ and all $i \in \{1, \ldots, k\}$ there exists a probability vector $X^{(b)}_{x_i}$ such that
\[
X^{(b)}_{x_i} M_k = C^{(b)}_{x_i}.
\]
Now, we know that $M_k$ is invertible, thus the equation above admits a unique solution for each $C^{(b)}_{x_i}$, namely
\[
X^{(b)}_{x_i} = C^{(b)}_{x_i} M_k^{-1}.
\]
It remains to write everything down and verify that the $X$'s are probability vectors:
\[
X^{(0)}_{x_1} = \left( \frac{1}{k}, \ldots, \frac{1}{k} \right),
\qquad
X^{(1)}_{x_1} = \left( \frac{k-1}{k^2}, \frac{k^2 - k + 1}{k^3 - k^2}, \ldots, \frac{k^2 - k + 1}{k^3 - k^2} \right),
\]
while for $i \neq 1$, the vector $X^{(0)}_{x_i}$ has $\frac{2}{k}$ in coordinate $1$, $0$ in coordinate $i$, and $\frac{1}{k}$ in every other coordinate, and $X^{(1)}_{x_i}$ has $\frac{1}{k} - \frac{k-2}{k(k-1)^2}$ in coordinate $1$, $\frac{1}{k}$ in coordinate $i$, and $1 - \frac{(k-1)^3 - 1}{k(k-1)^2}$ in every other coordinate.
We see that all the values above are nonnegative. To see that each vector sums to one, one can either check it by hand or notice that the first column of $M_k$ is all-ones and that the first entry of $C^{(b)}_{x_i}$ is $1$ as well. $\Box$
CHAPTER 4
FUTURE DIRECTIONS
Towards a classification of finite Boolean functions
Other directions
APPENDIX A
ADDITIONAL NOTIONS
Definition A.1 (MAC). A message authentication code consists of three polynomial-time algorithms $(\mathrm{Gen}, \mathrm{Mac}, \mathrm{Vrfy})$ such that:

$\mathrm{Gen}$ takes as input the security parameter $1^n$ and outputs a key $k$,

$\mathrm{Mac}$ takes as input the key $k$ and a message $m \in \{0,1\}^n$ and outputs a tag $t = \mathrm{Mac}_k(m)$,

$\mathrm{Vrfy}$ takes as input $k$, $m$ and a tag $t$ and outputs a bit $b = \mathrm{Vrfy}_k(m, t)$. We regard $b = 1$ as acceptance and $b = 0$ as rejection.

For correctness, we also require that $\mathrm{Vrfy}_k(m, \mathrm{Mac}_k(m)) = 1$.
Definition A.2. A MAC $(\mathrm{Gen}, \mathrm{Mac}, \mathrm{Vrfy})$ is an information-theoretically secure $\ell$-times message authentication code if for any sequence of messages $m_1, \ldots, m_\ell$, a computationally unbounded adversary $A$ succeeds in the following game with negligible probability:

1. Messages $m_1, \ldots, m_\ell$ are authenticated: the adversary is given the tags $t_i = \mathrm{Mac}_k(m_i)$.

2. Adversary outputs a new message $m^* \notin \{m_i\}_i$ and a tag $t^*$.

3. Adversary wins if $\mathrm{Vrfy}_k(m^*, t^*) = 1$.
BIBLIOGRAPHY
[1] J. Katz, "Bridging game theory and cryptography: recent results and future directions," in Proceedings of the 5th Conference on Theory of Cryptography, TCC'08, (Berlin, Heidelberg), pp. 251-272, Springer-Verlag, 2008.

[2] R. Cleve, "Limits on the security of coin flips when half the processors are faulty (extended abstract)," in Hartmanis [3], pp. 364-369.

[3] J. Hartmanis, ed., Proceedings of the 18th Annual ACM Symposium on Theory of Computing, May 28-30, 1986, Berkeley, California, USA, ACM, 1986.

[4] O. Goldreich, Foundations of Cryptography. II. Basic Applications. Cambridge: Cambridge University Press, 2004.

[5] S. D. Gordon, C. Hazay, J. Katz, and Y. Lindell, "Complete fairness in secure two-party computation," in Dwork [6], pp. 413-422.

[6] C. Dwork, ed., Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008, ACM, 2008.

[7] G. Asharov, R. Canetti, and C. Hazay, "Towards a game theoretic view of secure computation," IACR Cryptology ePrint Archive, vol. 2011, p. 137, 2011.

[8] R. Canetti, "Security and composition of multi-party cryptographic protocols," Journal of Cryptology, vol. 13, 2000.

[9] K. Truemper, Matroid Decomposition. Academic Press, 1992.

[10] J. G. Oxley, Matroid Theory. Oxford University Press, 1992.