
Ph219a/CS219a

Solutions Nov 8, 2013

Problem 1.1

(a) Since $p_a = \mathrm{Tr}(\rho E_a)$ and $p'_a = \mathrm{Tr}(\rho' E_a)$,

$$ d(p, p') = \frac{1}{2} \sum_{a=1}^{N} |p_a - p'_a| = \frac{1}{2} \sum_{a=1}^{N} \Big| \mathrm{Tr}\big( (\rho - \rho') E_a \big) \Big| \qquad (1) $$

Writing $\rho - \rho'$ in its eigenbasis, we have $\rho - \rho' = \sum_i \lambda_i |i\rangle\langle i|$, so that

$$ d(p, p') = \frac{1}{2} \sum_{a=1}^{N} \Big| \mathrm{Tr}\Big( \sum_i \lambda_i |i\rangle\langle i| \, E_a \Big) \Big| = \frac{1}{2} \sum_{a=1}^{N} \Big| \sum_i \lambda_i \, \langle i| E_a |i\rangle \Big| \le \frac{1}{2} \sum_{a=1}^{N} \sum_i |\lambda_i| \, \langle i| E_a |i\rangle \quad \text{(triangle inequality!)} $$
$$ = \frac{1}{2} \sum_i |\lambda_i| \qquad (2) $$

using the fact that the $E_a$ are orthogonal projectors satisfying $\sum_{a=1}^{N} E_a = I$.

(b) Construct one-dimensional projectors onto the basis in which $\rho - \rho'$ is diagonal, i.e. $E_i = |i\rangle\langle i|$. Notice that these are orthogonal projectors that sum to the identity, as desired. With this construction, the upper bound of equation (2) is saturated:

$$ d(p, p') = \frac{1}{2} \sum_j \Big| \sum_i \lambda_i \, \langle i| E_j |i\rangle \Big| = \frac{1}{2} \sum_j \Big| \sum_i \lambda_i \, \delta_{ij} \Big| = \frac{1}{2} \sum_j |\lambda_j| \qquad (3) $$
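As a quick numerical sanity check of parts (a) and (b), a sketch assuming NumPy and randomly generated density operators: an arbitrary orthogonal measurement obeys the bound of equation (2), while measuring in the eigenbasis of $\rho - \rho'$ saturates it.

    import numpy as np

    def random_density(dim, rng):
        # Random density operator from a Wishart-like construction.
        A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        rho = A @ A.conj().T
        return rho / np.trace(rho).real

    rng = np.random.default_rng(0)
    dim = 4
    rho, rho_p = random_density(dim, rng), random_density(dim, rng)

    lam, vecs = np.linalg.eigh(rho - rho_p)        # eigenvalues/eigenvectors of rho - rho'
    half_l1 = 0.5 * np.abs(lam).sum()              # (1/2) sum_i |lambda_i|

    # Measuring in the eigenbasis of (rho - rho') saturates the bound (part (b)).
    p = np.array([(vecs[:, i].conj() @ rho @ vecs[:, i]).real for i in range(dim)])
    q = np.array([(vecs[:, i].conj() @ rho_p @ vecs[:, i]).real for i in range(dim)])
    print(0.5 * np.abs(p - q).sum(), half_l1)      # equal

    # A random orthogonal measurement only obeys the bound (part (a)).
    Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    p = np.array([(Q[:, i].conj() @ rho @ Q[:, i]).real for i in range(dim)])
    q = np.array([(Q[:, i].conj() @ rho_p @ Q[:, i]).real for i in range(dim)])
    print(0.5 * np.abs(p - q).sum() <= half_l1 + 1e-12)   # True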

(c) Using the given definition, let us write down the $L^1$ norm of $\rho - \rho'$:

$$ \| \rho - \rho' \|_1 = \mathrm{Tr}\sqrt{(\rho - \rho')^\dagger (\rho - \rho')} = \mathrm{Tr}\sqrt{(\rho - \rho')^2} \quad \text{(since } \rho - \rho' \text{ is Hermitian)} = \sum_i |\lambda_i| \qquad (4) $$

Thus we have

$$ d(p, p') = \frac{1}{2} \, \| \rho - \rho' \|_1 \qquad (5) $$

(d) Using the given phase convention,

$$ |\varphi\rangle\langle\varphi| - |\psi\rangle\langle\psi| = \begin{pmatrix} \cos\theta & 0 \\ 0 & -\cos\theta \end{pmatrix} \qquad (6) $$

The eigenvalues are simply $\pm\cos\theta$, so that

$$ d(p, p') = \frac{1}{2} \big( |\cos\theta| + |-\cos\theta| \big) = |\cos\theta| \qquad (7) $$

(e) The Hilbert space norm of $\big( |\varphi\rangle - |\psi\rangle \big)$ is given by

$$ \big\| \, |\varphi\rangle - |\psi\rangle \, \big\|^2 = \big( \langle\varphi| - \langle\psi| \big)\big( |\varphi\rangle - |\psi\rangle \big) = \langle\varphi|\varphi\rangle + \langle\psi|\psi\rangle - \langle\varphi|\psi\rangle - \langle\psi|\varphi\rangle = 2(1 - \sin\theta) \qquad (8) $$

Comparing this with the result of part (d),

$$ d\big( |\varphi\rangle\langle\varphi| , |\psi\rangle\langle\psi| \big)^2 = \cos^2\theta = 1 - \sin^2\theta = (1 - \sin\theta)(1 + \sin\theta) \le 2(1 - \sin\theta) = \big\| \, |\varphi\rangle - |\psi\rangle \, \big\|^2 \qquad (9) $$

(since $1 + \sin\theta \le 2$), implying that $d\big( |\varphi\rangle\langle\varphi| , |\psi\rangle\langle\psi| \big) \le \big\| \, |\varphi\rangle - |\psi\rangle \, \big\|$.

(f) Consider, for example, the states $|\varphi\rangle$ and $|\psi\rangle$ for $\theta = \frac{3\pi}{2}$, so that $|\psi\rangle = -|\varphi\rangle$. Since quantum states are really rays, vectors which differ only by an overall phase factor represent the same physical state. In other words, for $\theta = \frac{3\pi}{2}$, $|\varphi\rangle$ and $|\psi\rangle$ should be indistinguishable. But computing the Hilbert space norm, we find

$$ \big\| \, |\varphi\rangle - |\psi\rangle \, \big\| = \big\| \, |\varphi\rangle + |\varphi\rangle \, \big\| = 2 \, \big\| \, |\varphi\rangle \, \big\| = 2 \qquad (10) $$

which would imply that $|\varphi\rangle$ and $|\psi\rangle$ are maximally distinguishable! The norm $\big\| \, |\varphi\rangle - |\psi\rangle \, \big\|$ is therefore not a good measure of distinguishability.

On the other hand, the distance measure $d$, also called the Kolmogorov distance between the two states, gives

$$ d\big( |\varphi\rangle\langle\varphi| , |\psi\rangle\langle\psi| \big) = \frac{1}{2} \, \big\| \, |\varphi\rangle\langle\varphi| - |\psi\rangle\langle\psi| \, \big\|_1 = \frac{1}{2} \cdot 0 = 0 \qquad (11) $$

(since $|\psi\rangle\langle\psi| = |\varphi\rangle\langle\varphi|$ when the two vectors differ only by a phase), implying that the two states are indeed indistinguishable.
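A short numerical companion to parts (d)-(f), a sketch assuming the concrete parametrization $|\varphi\rangle = (\cos\frac{\theta}{2}, \sin\frac{\theta}{2})$, $|\psi\rangle = (\sin\frac{\theta}{2}, \cos\frac{\theta}{2})$, which gives $\langle\varphi|\psi\rangle = \sin\theta$ and reproduces equation (6); the course's own phase convention may differ by a basis rotation.

    import numpy as np

    def trace_distance(rho, sigma):
        return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

    for t in (0.4, 1.0, 1.5 * np.pi):
        phi = np.array([np.cos(t / 2), np.sin(t / 2)])
        psi = np.array([np.sin(t / 2), np.cos(t / 2)])
        d = trace_distance(np.outer(phi, phi), np.outer(psi, psi))
        hs_norm = np.linalg.norm(phi - psi)
        # d equals |cos t| (part (d)); the Hilbert-space norm is sqrt(2(1 - sin t)) (part (e));
        # d <= hs_norm always, and at t = 3*pi/2 we get d = 0 while hs_norm = 2 (part (f)).
        print(round(d, 6), round(abs(np.cos(t)), 6),
              round(hs_norm, 6), round(np.sqrt(2 * (1 - np.sin(t))), 6))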

Problem 1.2

(a) The probability that Bob guesses wrong is

$$ p_{\mathrm{error}} = \mathrm{Tr}\big[ p_1 \rho_1 E_2 + p_2 \rho_2 E_1 \big] = \mathrm{Tr}\big[ p_1 \rho_1 (I - E_1) + p_2 \rho_2 E_1 \big] = p_1 + \mathrm{Tr}\big[ (p_2 \rho_2 - p_1 \rho_1) E_1 \big] $$
$$ = p_1 + \mathrm{Tr}\Big[ \Big( \sum_i \lambda_i |i\rangle\langle i| \Big) E_1 \Big] = p_1 + \sum_i \lambda_i \, \langle i| E_1 |i\rangle \qquad (12) $$

where $|i\rangle$ is the eigenstate of $(p_2 \rho_2 - p_1 \rho_1)$ corresponding to eigenvalue $\lambda_i$, as defined.

(b) Since $E_1, E_2$ are nonnegative and sum to the identity ($E_1 + E_2 = I$), their expectation values satisfy $0 \le \langle i| E_1 |i\rangle \le 1$. In order to minimize $p_{\mathrm{error}}$, the coefficient $\langle i| E_1 |i\rangle$ should be set to its minimum value ($= 0$) when $|i\rangle$ corresponds to a positive eigenvalue and to its maximum value ($= 1$) when $|i\rangle$ corresponds to a negative eigenvalue. In other words, Bob must choose $E_1$ to be the projector onto the subspace spanned by all eigenvectors of $(p_2 \rho_2 - p_1 \rho_1)$ corresponding to negative eigenvalues, to obtain the optimal error probability

$$ (p_{\mathrm{error}})_{\mathrm{optimal}} = p_1 + \sum_{\mathrm{neg}} \lambda_i \qquad (13) $$
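A minimal numerical illustration of parts (a) and (b), a sketch assuming NumPy and randomly generated states: build $E_1$ as the projector onto the negative eigenspace of $(p_2 \rho_2 - p_1 \rho_1)$ and check equation (13).

    import numpy as np

    def random_density(dim, rng):
        A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        rho = A @ A.conj().T
        return rho / np.trace(rho).real

    rng = np.random.default_rng(1)
    dim = 3
    rho1, rho2 = random_density(dim, rng), random_density(dim, rng)
    p1, p2 = 0.3, 0.7

    lam, vecs = np.linalg.eigh(p2 * rho2 - p1 * rho1)
    neg = vecs[:, lam < 0]                      # eigenvectors with negative eigenvalues
    E1 = neg @ neg.conj().T                     # projector onto the negative eigenspace
    E2 = np.eye(dim) - E1

    p_error = (p1 * np.trace(rho1 @ E2) + p2 * np.trace(rho2 @ E1)).real
    print(p_error, p1 + lam[lam < 0].sum())     # the two agree, as in equation (13)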

(c) From the definition of the $L^1$ norm, we have

$$ \| p_2 \rho_2 - p_1 \rho_1 \|_1 = \sum_{i: \lambda_i \ge 0} \lambda_i - \sum_{i: \lambda_i < 0} \lambda_i \qquad (14) $$

whereas

$$ p_2 - p_1 = \mathrm{Tr}\big[ p_2 \rho_2 - p_1 \rho_1 \big] = \sum_{i: \lambda_i \ge 0} \lambda_i + \sum_{i: \lambda_i < 0} \lambda_i \qquad (15) $$

Combining equations (14) and (15), we have

$$ \sum_{i: \lambda_i < 0} \lambda_i = -\frac{1}{2}(p_1 - p_2) - \frac{1}{2} \, \| p_2 \rho_2 - p_1 \rho_1 \|_1 = \frac{1}{2} - p_1 - \frac{1}{2} \, \| p_2 \rho_2 - p_1 \rho_1 \|_1 \qquad (16) $$

Therefore

$$ (p_{\mathrm{error}})_{\mathrm{optimal}} = p_1 + \sum_{i: \lambda_i < 0} \lambda_i = \frac{1}{2} - \frac{1}{2} \, \| p_2 \rho_2 - p_1 \rho_1 \|_1 \qquad (17) $$
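A quick check of the closed form (17), and of the $\sin^2\theta$ answer obtained below in part (d); this is a sketch, and the two-state parametrization with $\langle u|v\rangle = \sin(2\theta)$ used here is a choice made for illustration only.

    import numpy as np

    rng = np.random.default_rng(2)
    dim = 3
    A1 = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    A2 = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho1 = A1 @ A1.conj().T
    rho1 /= np.trace(rho1).real
    rho2 = A2 @ A2.conj().T
    rho2 /= np.trace(rho2).real
    p1, p2 = 0.3, 0.7

    lam = np.linalg.eigvalsh(p2 * rho2 - p1 * rho1)
    # Equation (17): p1 + (sum of negative eigenvalues) = 1/2 - (1/2)||p2*rho2 - p1*rho1||_1
    print(p1 + lam[lam < 0].sum(), 0.5 - 0.5 * np.abs(lam).sum())

    # Part (d): equiprobable qubit pure states with <u|v> = sin(2*theta).
    theta = 0.3
    u = np.array([np.cos(theta), np.sin(theta)])
    v = np.array([np.sin(theta), np.cos(theta)])
    delta = 0.5 * np.outer(v, v) - 0.5 * np.outer(u, u)
    print(0.5 - 0.5 * np.abs(np.linalg.eigvalsh(delta)).sum(), np.sin(theta) ** 2)   # equal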

When $\rho_1 = \rho_2$, equation (17) gives an optimal error probability of $\big(1 - |p_1 - p_2|\big)/2$. This makes sense, because when Bob is given identical states in both cases, his optimal strategy is to choose, based on the a priori probabilities, the state with the higher probability. Thus if $p_1 > p_2$, his probability of making an error is equal to the smaller probability $p_2$, and indeed $\big(1 - |p_1 - p_2|\big)/2 = p_2$!

When $\rho_1$ and $\rho_2$ have support on orthogonal subspaces, we expect that Bob should always be able to win by performing a POVM that distinguishes perfectly between the two states that Alice is sending. In terms of equation (17), this means that we can now diagonalize $(p_2 \rho_2 - p_1 \rho_1)$ by diagonalizing $\rho_1$ and $\rho_2$ separately, so that $\| p_2 \rho_2 - p_1 \rho_1 \|_1 = p_1 + p_2$. Therefore the optimal error probability is $\frac{1}{2}\big[ 1 - (p_1 + p_2) \big] = 0$, as expected!

(d) In this case,

$$ p_2 \rho_2 - p_1 \rho_1 = \frac{1}{2} \begin{pmatrix} -\cos(2\theta) & 0 \\ 0 & \cos(2\theta) \end{pmatrix} \qquad (18) $$

which has eigenvalues $\lambda_\pm = \pm \frac{1}{2}\cos(2\theta)$. The eigenvector corresponding to the negative eigenvalue is $|0\rangle$, so that

$$ E_1 = |0\rangle\langle 0| = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad (19) $$

which gives

$$ (p_{\mathrm{error}})_{\mathrm{optimal}} = \frac{1}{2}\big( 1 - \cos(2\theta) \big) = \sin^2\theta \qquad (20) $$

(e) Bob's error probability is now given by

$$ p_{\mathrm{error}} = \sum_i p_i \, p_{\mathrm{error}}(i) = \sum_i p_i \left[ \frac{1}{2} - \frac{1}{2} \big| p(2|i) - p(1|i) \big| \right] = \frac{1}{2} - \frac{1}{2} \sum_i \big| \, p_i \, p(2|i) - p_i \, p(1|i) \, \big| $$
$$ = \frac{1}{2} - \frac{1}{2} \sum_i \Big| \, \mathrm{Tr}\big[ (p_2 \rho_2 - p_1 \rho_1) E_i \big] \, \Big| \qquad (21) $$

using the equalities $p_i \, p(2|i) = p_2 \, p(i|2) = p_2 \, \mathrm{Tr}[\rho_2 E_i]$, etc.

(f) Expanding $(p_2 \rho_2 - p_1 \rho_1)$ in its eigenbasis as $p_2 \rho_2 - p_1 \rho_1 = \sum_j \lambda_j |j\rangle\langle j|$, we have

$$ p_{\mathrm{error}} = \frac{1}{2} - \frac{1}{2} \sum_i \Big| \sum_j \lambda_j \, \langle j| E_i |j\rangle \Big| \ge \frac{1}{2} - \frac{1}{2} \sum_i \sum_j |\lambda_j| \, \langle j| E_i |j\rangle \quad \text{(triangle inequality!)} $$
$$ = \frac{1}{2} - \frac{1}{2} \sum_j |\lambda_j| \quad \Big( \text{since } \sum_i E_i = I \Big) = \frac{1}{2} - \frac{1}{2} \, \| p_2 \rho_2 - p_1 \rho_1 \|_1 \qquad (22) $$

using the definition of the $L^1$ norm.
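A sketch of the statement proved in (e)-(f), assuming random states and a randomly chosen measurement basis: measuring an orthogonal basis $\{E_i\}$ and then guessing the likelier state never beats the optimal error probability of equation (17).

    import numpy as np

    rng = np.random.default_rng(3)
    dim = 4

    def random_density():
        A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        rho = A @ A.conj().T
        return rho / np.trace(rho).real

    rho1, rho2 = random_density(), random_density()
    p1, p2 = 0.4, 0.6
    helstrom = 0.5 - 0.5 * np.abs(np.linalg.eigvalsh(p2 * rho2 - p1 * rho1)).sum()

    Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    p_error = 0.0
    for i in range(dim):
        E_i = np.outer(Q[:, i], Q[:, i].conj())
        # p_i p(1|i) = p1 Tr[rho1 E_i] and p_i p(2|i) = p2 Tr[rho2 E_i]; guessing the
        # likelier state contributes the smaller of the two to the error probability.
        p_error += min((p1 * np.trace(rho1 @ E_i)).real, (p2 * np.trace(rho2 @ E_i)).real)
    print(p_error, helstrom, p_error >= helstrom - 1e-12)   # last entry is True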

Problem 1.3
(a) Writing out the operator $E_{DK}$ in the standard basis, we have

$$ E_{DK} = \begin{pmatrix} 1 - A & A \sin(2\theta) \\ A \sin(2\theta) & 1 - A \end{pmatrix} \qquad (23) $$

which has eigenvalues $\lambda_\pm = (1 - A) \pm A \sin(2\theta)$. For $E_{DK}$ to be positive, the smaller of the two eigenvalues must be nonnegative, i.e.

$$ \lambda_- = 1 - A - A \sin(2\theta) \ge 0 \quad \Longrightarrow \quad A \le \frac{1}{1 + \sin(2\theta)} \qquad (24) $$

Thus the largest possible value of $A$ that Bob can choose (so that $E_{DK}$ is still positive) is $A = \frac{1}{1 + \sin(2\theta)}$. The probabilities of outcome $E_{DK}$ when Alice sends $|u\rangle$ and $|v\rangle$ are, respectively,

$$ p(E_{DK}|u) = \langle u| E_{DK} |u\rangle = (1 - 2A) + A + A \, |\langle u|v\rangle|^2 = 1 - A\big(1 - \sin^2(2\theta)\big) $$
$$ p(E_{DK}|v) = \langle v| E_{DK} |v\rangle = (1 - 2A) + A \sin^2(2\theta) + A = 1 - A\big(1 - \sin^2(2\theta)\big) \qquad (25) $$

Assuming Alice chooses between these two states equiprobably, the probability of outcome $E_{DK}$ is $p(E_{DK}) = \frac{1}{2}\big( p(E_{DK}|u) + p(E_{DK}|v) \big) = 1 - A\big(1 - \sin^2(2\theta)\big)$, which takes on its minimum value $\big(p(E_{DK})\big)_{\min} = \sin(2\theta)$ when $A$ takes on its maximum value $A = \frac{1}{1 + \sin(2\theta)}$.

(b) Let us denote the basis in which Eve is measuring as $|0\rangle = \binom{1}{0}$ and $|1\rangle = \binom{0}{1}$. The probabilities of Eve's outcomes, conditioned on the state that Alice is sending, are:

$$ p(0|u) = |\langle 0|u\rangle|^2 = \cos^2\theta, \qquad p(0|v) = |\langle 0|v\rangle|^2 = \sin^2\theta $$
$$ p(1|u) = |\langle 1|u\rangle|^2 = \sin^2\theta, \qquad p(1|v) = |\langle 1|v\rangle|^2 = \cos^2\theta \qquad (26) $$

(For example, $p(0|u)$ denotes the probability of Eve obtaining outcome $|0\rangle$ when Alice sends $|u\rangle$.)

Similarly, the probabilities of Bob's outcomes, conditioned on the state he receives, are:

$$ p(E_v|v) = 0, \qquad p(E_u|v) = A\big(1 - |\langle u|v\rangle|^2\big) = A\big(1 - \sin^2(2\theta)\big) $$
$$ p(E_u|u) = 0, \qquad p(E_v|u) = A\big(1 - \sin^2(2\theta)\big) \qquad (27) $$

(We have already computed $p(E_{DK}|u)$ and $p(E_{DK}|v)$ in part (a).) There are two ways in which Bob could obtain a wrong, but conclusive, outcome. Let us compute their probabilities, assuming that Alice is making her choice equiprobably.

(i) Alice sends $|u\rangle$, Eve obtains outcome $|1\rangle$ and sends $|v\rangle$ to Bob, who obtains the outcome $E_u$. This occurs with probability

$$ p(u \to 1 \to E_u) = \frac{1}{2} \, p(1|u) \, p(E_u|v) = \frac{1}{2} (\sin^2\theta)\big( A \cos^2(2\theta) \big) = \frac{A \sin^2\theta \cos^2(2\theta)}{2} \qquad (28) $$

(ii) Alice sends $|v\rangle$, Eve obtains outcome $|0\rangle$ and sends $|u\rangle$ to Bob, who obtains the outcome $E_v$. This occurs with probability

$$ p(v \to 0 \to E_v) = \frac{1}{2} \, p(0|v) \, p(E_v|u) = \frac{A \sin^2\theta \cos^2(2\theta)}{2} \qquad (29) $$

Thus the probability of Bob obtaining an error conclusively is given by

$$ p(\text{conclusive, error}) = A \sin^2\theta \cos^2(2\theta) = (\sin^2\theta)\big(1 - \sin(2\theta)\big) \qquad (30) $$

where we have used the largest possible value of $A$ (the value that minimizes the probability of the inconclusive outcome $E_{DK}$). This tells us what fraction of Alice's messages went wrong due to Eve's interference. Now let us look at it from the point of view of being able to detect Eve's presence. Given either $|u\rangle$ or $|v\rangle$, the probability that Bob obtains a conclusive outcome is simply

$$ p(\text{conclusive}) = 1 - p(E_{DK}) = 1 - \frac{1}{2}\big( p(E_{DK}|u) + p(E_{DK}|v) \big) = 1 - \big( 1 - A \cos^2(2\theta) \big) = 1 - \sin(2\theta) \quad \text{(using the max value of } A) \qquad (31) $$

Therefore the probability that Bob obtains an error, given that his outcome is conclusive, is

$$ p(\text{error}|\text{conclusive}) = \frac{p(\text{conclusive, error})}{p(\text{conclusive})} = \frac{(\sin^2\theta)\big(1 - \sin(2\theta)\big)}{1 - \sin(2\theta)} = \sin^2\theta \qquad (32) $$

which is also the probability of being able to detect Eve's presence.
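A numerical check of Problem 1.3 as a whole, as a sketch: it assumes the parametrization $|u\rangle = (\cos\theta, \sin\theta)$, $|v\rangle = (\sin\theta, \cos\theta)$, so $\langle u|v\rangle = \sin(2\theta)$, takes the two conclusive POVM elements to be $A(I - |u\rangle\langle u|)$ and $A(I - |v\rangle\langle v|)$ (each rules out one of the two states, consistent with the probabilities quoted in equation (27)), and lets Eve resend $|u\rangle$ on outcome $|0\rangle$ and $|v\rangle$ on outcome $|1\rangle$.

    import numpy as np

    theta = 0.3
    u = np.array([np.cos(theta), np.sin(theta)])
    v = np.array([np.sin(theta), np.cos(theta)])
    A = 1.0 / (1.0 + np.sin(2 * theta))               # largest A keeping E_DK positive

    I2 = np.eye(2)
    F_not_u = A * (I2 - np.outer(u, u))               # conclusive: rules out |u>, Bob infers "v"
    F_not_v = A * (I2 - np.outer(v, v))               # conclusive: rules out |v>, Bob infers "u"
    E_DK = I2 - F_not_u - F_not_v

    print(np.linalg.eigvalsh(E_DK).min() >= -1e-12)   # True: E_DK is still positive
    print(0.5 * (u @ E_DK @ u + v @ E_DK @ v), np.sin(2 * theta))   # p(E_DK)_min = sin(2*theta)

    # Intercept-and-resend attack: Eve measures in {|0>, |1>}, then resends |u> or |v>.
    basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    resend = [u, v]
    p_err = p_conc = 0.0
    for sent, label in [(u, "u"), (v, "v")]:          # Alice's equiprobable choice
        for k in (0, 1):
            p_eve = abs(basis[k] @ sent) ** 2         # Eve's outcome probability
            received = resend[k]
            for F, guess in [(F_not_u, "v"), (F_not_v, "u")]:
                p_bob = received @ F @ received       # Bob's conclusive-outcome probability
                p_conc += 0.5 * p_eve * p_bob
                if guess != label:
                    p_err += 0.5 * p_eve * p_bob
    print(p_err / p_conc, np.sin(theta) ** 2)         # p(error | conclusive) = sin^2(theta)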

Problem 1.4
(i) First we prove the converse. From the GHJW theorem, we know that two different realizations of a density operator,

$$ \rho = \sum_i p_i \, |\varphi_i\rangle\langle\varphi_i| = \sum_\mu q_\mu \, |\psi_\mu\rangle\langle\psi_\mu| \qquad (33) $$

are related by

$$ \sqrt{q_\mu} \, |\psi_\mu\rangle = \sum_i V_{\mu i} \, \sqrt{p_i} \, |\varphi_i\rangle \qquad (34) $$

for some unitary $V$. Computing the norm on both sides of equation (34) (here $\{|\varphi_i\rangle\}$ is the orthonormal eigenbasis of $\rho$), we have

$$ q_\mu = \sum_i p_i \, |V_{\mu i}|^2 = \sum_i D_{\mu i} \, p_i \qquad (35) $$

The matrix $D$ with elements $D_{\mu i} = |V_{\mu i}|^2$ is doubly stochastic (since $V$ is unitary), thus proving that $q \prec p$.

(ii) The forward direction. Given probability vectors $q$ and $p$ such that $q \prec p$, we know from Horn's lemma that there exists a unitary $V$ such that

$$ q_\mu = \sum_i |V_{\mu i}|^2 \, p_i \qquad (36) $$

Consider the density operator expressed in its diagonal form as $\rho = \sum_i p_i |\varphi_i\rangle\langle\varphi_i|$. Using the unitary $V$, we can construct pure states $|\psi_\mu\rangle$ as follows:

$$ \sqrt{q_\mu} \, |\psi_\mu\rangle = \sum_i V_{\mu i} \, \sqrt{p_i} \, |\varphi_i\rangle \qquad (37) $$

Computing the norm on both sides, we find that the states $\{|\psi_\mu\rangle\}$ have unit norm:

$$ q_\mu \, \langle\psi_\mu|\psi_\mu\rangle = \sum_i p_i \, |V_{\mu i}|^2 = q_\mu \quad \Longrightarrow \quad \langle\psi_\mu|\psi_\mu\rangle = 1 \quad \text{(using equation (36))} \qquad (38) $$

Further,

$$ \sum_\mu q_\mu \, |\psi_\mu\rangle\langle\psi_\mu| = \sum_\mu \sum_{i,j} \sqrt{p_i p_j} \, V_{\mu i} V_{\mu j}^* \, |\varphi_i\rangle\langle\varphi_j| = \sum_i p_i \, |\varphi_i\rangle\langle\varphi_i| = \rho \qquad (39) $$

since $\sum_\mu V_{\mu i} V_{\mu j}^* = \delta_{ij}$ when $V$ is unitary. Thus we have shown that there exist pure, normalized states $\{|\psi_\mu\rangle\}$ such that $\rho = \sum_\mu q_\mu \, |\psi_\mu\rangle\langle\psi_\mu|$.
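A numerical illustration of part (i), a sketch assuming NumPy and a randomly generated unitary: the map $q_\mu = \sum_i |V_{\mu i}|^2 p_i$ is indeed doubly stochastic, and the resulting $q$ is majorized by $p$.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 5
    p = rng.random(n)
    p /= p.sum()                                      # eigenvalue vector of rho
    V, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    D = np.abs(V) ** 2                                # D_{mu i} = |V_{mu i}|^2

    # D is doubly stochastic: rows and columns each sum to one.
    print(np.allclose(D.sum(axis=0), 1.0), np.allclose(D.sum(axis=1), 1.0))

    q = D @ p
    # Majorization check: every partial sum of q (sorted in decreasing order) is bounded
    # by the corresponding partial sum of p, and the totals agree.
    p_sorted, q_sorted = np.sort(p)[::-1], np.sort(q)[::-1]
    print(np.all(np.cumsum(q_sorted) <= np.cumsum(p_sorted) + 1e-12))   # True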
