c ∈_R [1, n − 1] A random challenge in the Chaum-Pedersen ZKP
t = s + c·r mod n The response in the Chaum-Pedersen ZKP
k_enc, k_mac The DHIES session keys for encryption and for computing the MAC
E(Pub_Ci, m) Encryption of m under Pub_Ci using DHIES, E(Pub_Ci, m) := {Q', H(k_c), E^Auth_k(m)}; we also use it as the reference to the resulting ciphertext
ZKP A Zero Knowledge Proof to prove the well-formedness of a ciphertext
SLA^del_Ci A Service Level Agreement for the deletion of client instance C_i
Sig(...) A message signed using the TPM's ECDSA private key
Table I: Notations and meaning
Host → TPM : 1^k, C_i
TPM : Generate Prv_Ci := d_Ci ∈_R [1, n − 1]
TPM → Host : Pub_Ci := d_Ci·G, C_i
2) Encryption: Encrypt(C_i, m) takes as input the reference to the created user instance C_i and a message m, and returns the message encrypted under the public key Pub_Ci. For the encryption, we adopt the Diffie-Hellman Integrated Encryption Scheme (DHIES) [1]. First, the TPM generates an ephemeral public key Q' = d'·G where d' ∈_R [1, n − 1]. It then calculates two session keys: an encryption key k_enc = H(d_Ci·Q' || 0x01) and a MAC key k_mac = H(d_Ci·Q' || 0x10), together with a key confirmation key k_c = H(d_Ci·Q' || 0x11). The message is encrypted and authenticated under the session keys to produce E^Auth_k(m). The Encrypt function can be formalized as:
Host → TPM : C_i, m
TPM → Host : Q' := d'·G, H(k_c), E^Auth_k(m)
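As an illustration (not the Java Card code itself), the session-key derivation above can be sketched in a few lines of Python, modelling the ECDH shared point d_Ci·Q' as an opaque byte string and instantiating H with SHA-256 (an assumption; the paper does not fix the hash):

```python
import hashlib

def derive_session_keys(shared_point: bytes):
    """Derive k_enc, k_mac and H(k_c) from the ECDH shared point,
    using the domain-separation tags 0x01, 0x10 and 0x11 from the text."""
    k_enc = hashlib.sha256(shared_point + b"\x01").digest()[:16]  # AES-128 key
    k_mac = hashlib.sha256(shared_point + b"\x10").digest()[:16]  # MAC key
    k_c = hashlib.sha256(shared_point + b"\x11").digest()         # confirmation key
    # Only H(k_c), not k_c itself, travels with the ciphertext.
    return k_enc, k_mac, hashlib.sha256(k_c).digest()
```

The distinct single-byte tags guarantee that the three keys are independent even though they come from the same shared secret.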
3) Decryption: Decrypt(C_i, Q', H(k_c), E^Auth_k(m)) takes as input the reference to an existing user instance C_i together with the ciphertext obtained from the earlier encryption step, and returns the decrypted message if the verifications of the key confirmation string and the MAC are successful. The TPM first validates that Q' is a valid public key, computes k_c' = H(d_Ci·Q' || 0x11), and proceeds to decryption only if H(k_c') = H(k_c). The decryption procedure follows subsequently as described in DHIES [1]. Upon the successful verification of the MAC tag, the encrypted message is decrypted and the original plaintext m is returned. The Decrypt function can be formalized as:
Host → TPM : C_i, Q', H(k_c), E^Auth_k(m)
TPM → Host : m
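A minimal encrypt/decrypt round trip, showing the key-confirmation and MAC checks, can be sketched as follows. This is a model, not the card code: a toy SHA-256 keystream stands in for AES-CBC and HMAC-SHA256 for CBC-MAC, and the shared point is an opaque byte string:

```python
import hashlib
import hmac

def _keys(S: bytes):
    """k_enc, k_mac and H(k_c), as in the text (tags 0x01/0x10/0x11)."""
    h = lambda t: hashlib.sha256(S + t).digest()
    return h(b"\x01"), h(b"\x10"), hashlib.sha256(h(b"\x11")).digest()

def _stream(k: bytes, n: int) -> bytes:
    """Toy counter-mode keystream; a stand-in for AES-CBC, not AES itself."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(k + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(S: bytes, m: bytes):
    k_enc, k_mac, hkc = _keys(S)
    c = bytes(a ^ b for a, b in zip(m, _stream(k_enc, len(m))))
    tag = hmac.new(k_mac, c, hashlib.sha256).digest()
    return hkc, c, tag          # i.e. H(k_c) and E^Auth_k(m)

def decrypt(S: bytes, hkc: bytes, c: bytes, tag: bytes) -> bytes:
    k_enc, k_mac, hkc2 = _keys(S)
    if hkc2 != hkc:             # key-confirmation check, done first
        raise ValueError("key confirmation failed")
    if not hmac.compare_digest(tag, hmac.new(k_mac, c, hashlib.sha256).digest()):
        raise ValueError("MAC verification failed")
    return bytes(a ^ b for a, b in zip(c, _stream(k_enc, len(c))))
```

Decryption refuses to proceed unless both the confirmation string and the MAC verify, mirroring the order of checks in the protocol.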
4) Audit: Audit(C_i, Q', H(k_c)) takes as input the reference to an existing user instance C_i, the ephemeral public key Q' and the key confirmation string H(k_c), and allows the user to verify whether the earlier encryption operation was done correctly. The TPM first checks that H(H(d_Ci·Q' || 0x11)) = H(k_c). It then outputs d_Ci·Q', k_enc = H(d_Ci·Q' || 0x01) and k_mac = H(d_Ci·Q' || 0x10). With these symmetric keys, the host is able to fully verify whether the message was encrypted correctly under these keys. Note that this auditing only reveals the symmetric encryption and MAC keys within one DHIES session; the secrecy of the keys derived in other sessions is not affected. The Audit function can be formalized as:
Host → TPM : C_i, Q', H(k_c)
TPM → Host : d_Ci·Q', k_enc, k_mac, ZKP[log_G (d_Ci·G) = log_Q' (d_Ci·Q')]
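The host-side part of the audit can be sketched as follows, under the same modelling assumptions as before (opaque shared point, SHA-256 for H, HMAC-SHA256 standing in for CBC-MAC). Given the shared point revealed by Audit, the host re-derives the session keys exactly as the TPM would and confirms that the stored MAC tag was really computed under k_mac:

```python
import hashlib
import hmac

def audit_check(revealed_shared_point: bytes, hkc_stored: bytes,
                ciphertext: bytes, tag: bytes) -> bool:
    """Host-side audit: re-derive keys from the revealed point d_Ci*Q'
    and check both the key confirmation string and the MAC tag."""
    h = lambda t: hashlib.sha256(revealed_shared_point + t).digest()
    k_mac = h(b"\x10")
    hkc = hashlib.sha256(h(b"\x11")).digest()
    if hkc != hkc_stored:
        return False  # revealed point is inconsistent with H(k_c)
    return hmac.compare_digest(tag, hmac.new(k_mac, ciphertext,
                                             hashlib.sha256).digest())
```

An analogous re-encryption check with k_enc would complete the verification; the ZKP (discussed later) guarantees that the revealed point itself is well-formed.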
5) Delete: Delete(C_i) deletes a user instance C_i by overwriting its private key d_Ci in the TPM's protected memory, and returns the artifact SLA^del_Ci, which is a Service Level Agreement {Delete, Pub_Ci} signed by the TPM's ECDSA signing key. After the erasure of the private key, all messages encrypted under Pub_Ci can no longer be decrypted. Assume the TPM had failed to erase the private key d_Ci properly and that the key is later discovered by the user. The user can present d_Ci, together with SLA^del_Ci, as publicly verifiable evidence that the TPM had failed to provide the secure data deletion service as promised. Based on the evidence and the terms in the Service Level Agreement, the user should be entitled to compensation. We will discuss the amount of compensation and the pricing strategy in further detail in Section VI. The Delete function can be formalized as:
Host → TPM : C_i
TPM → Host : SLA^del_Ci := Sig(Delete, Pub_Ci)
V. IMPLEMENTATION
In this section, we describe a full prototype implementation of the proposed SSE system, using a standard Java card [37] as a TPM for key management, a MacBook laptop (1.7 GHz with 4 GB memory) as the host, and a standard disk drive for mass data storage. As we will show, this is a non-trivial development effort. To our knowledge, we provide the first public implementation of DHIES and the Chaum-Pedersen ZKP on a resource-constrained Java card. (The full source code for the prototype can be found at the end of the paper.)
The Java card we use has a dual interface, supporting both contact and contactless communication. We use the contactless
interface for all experiments. The chip on the card has an 80 KB EEPROM for persistent storage and an 8 KB RAM for holding
volatile data in memory. The card is compliant with Java Card Standard 2.2.2, but also supports some additional APIs from Java
Card Standard 3.0.1. In particular, it supports ALG_EC_SVDP_DHC_PLAIN under the javacard.security.KeyAgreement interface,
which allows obtaining the plain shared secret (instead of a SHA-1 hash of the secret) from the Elliptic Curve Diffie-Hellman
(ECDH) key exchange protocol. This API is essential for the prototype implementation of our system.
One obstacle we encountered is that the existing Java Card API standards do not support modular multiplication of big numbers (see [30], [37]). To the best of our knowledge, no Java card currently available on the market provides API support for this basic modular operation. Therefore, we had to implement big-number modular multiplication from scratch, using the primitive arithmetic operators and byte arrays (without involving any hardware support from the low-level native C library on the card). It takes about 150 lines of Java code to execute one modular multiplication.
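To convey the flavour of such a routine (this is a Python sketch in the spirit of the card code, not the Java source itself), one can do schoolbook multiplication over base-256 digits followed by shift-and-subtract reduction:

```python
def _ge(a, b):
    """Compare little-endian digit lists: a >= b ?"""
    for i in range(max(len(a), len(b)) - 1, -1, -1):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        if x != y:
            return x > y
    return True

def _sub(a, b):
    """In-place a -= b (little-endian digits, requires a >= b)."""
    borrow = 0
    for i in range(len(a)):
        t = a[i] - (b[i] if i < len(b) else 0) - borrow
        borrow = 1 if t < 0 else 0
        a[i] = t & 0xFF

def bytes_modmul(a: bytes, b: bytes, n: bytes) -> bytes:
    """a * b mod n over raw byte arrays, using only byte-level arithmetic."""
    A, B = list(reversed(a)), list(reversed(b))   # little-endian digits
    prod = [0] * (len(A) + len(B))
    for i, x in enumerate(A):                     # schoolbook multiply
        carry = 0
        for j, y in enumerate(B):
            t = prod[i + j] + x * y + carry
            prod[i + j] = t & 0xFF
            carry = t >> 8
        k = i + len(B)
        while carry:                              # propagate the row carry
            t = prod[k] + carry
            prod[k] = t & 0xFF
            carry = t >> 8
            k += 1
    N = list(reversed(n))
    for s in range(len(prod) - len(N), -1, -1):   # subtract n << 8s while it fits
        shifted = [0] * s + N
        while _ge(prod, shifted):
            _sub(prod, shifted)
    while len(prod) > 1 and prod[-1] == 0:
        prod.pop()
    return bytes(reversed(prod))
```

The Java Card version works the same way over `byte[]` buffers, which is why a single modular multiplication already costs on the order of 150 lines of code.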
Regarding the elliptic curve setting, we chose the standard P-256 curve as defined in the Digital Signature Standard specification [35]. When the Java card applet is first loaded onto the chip, upon initialization it generates a random ECDSA key pair over the P-256 curve. The same curve is used for the generation of all further public/private key pairs required.
In the following, we explain the implementation details and performance measurements of all the functions specified in the SSE protocol. For each function, the latency is measured in terms of the delay in the card processing and in the card communication (via the contactless interface). We repeated the experiments thirty times and summarize the average results in Figure 4 and Table II.
KeyGen. This function generates a random public/private key pair over the P-256 curve for a new user instance. The public key, along with a 16-bit unique identifier, is returned to the user. The private key (32 bytes) is stored in the TPM's EEPROM. To facilitate the encryption operation later, we also keep the public key in EEPROM. The card only supports EC public keys in the uncompressed form, so the size of the public key is 64 bytes. Given that the Java card we use has 80 KB of EEPROM in total and that the SSE program takes up 16 KB of EEPROM, we can create about 650 random EC public/private key pairs. As shown in Table II, this operation takes a constant 835 ms in total.
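The capacity estimate above can be checked with simple arithmetic, assuming a per-instance record of a 32-byte private key, a 64-byte uncompressed public key and a 2-byte (16-bit) identifier, and ignoring any allocator overhead:

```python
# Back-of-the-envelope check of the "about 650 key pairs" figure.
total_eeprom = 80 * 1024      # card EEPROM, bytes
program_size = 16 * 1024      # SSE applet footprint in EEPROM
per_instance = 32 + 64 + 2    # private key + public key + 16-bit identifier
capacity = (total_eeprom - program_size) // per_instance
print(capacity)               # 668, consistent with "about 650" after overhead
```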
Encrypt. The function receives a plaintext file, encrypts it using DHIES and returns the ciphertext. In one DHIES session, two symmetric session keys are derived to encrypt the file in an authenticated manner (see Figure 1). In theory, there should be no limit on the length of the input file that can be encrypted in one DHIES session. In practice, however, there is an upper limit due to the constrained memory size of the Java card. (The reason will become more evident later when we explain how the Decrypt operation works.) In our implementation, up to 2 KB of data can be encrypted in one DHIES session. For a plaintext file bigger than 2 KB, the host program needs to divide the file into blocks of less than 2 KB each and encrypt each block in one DHIES session.
Another constraint in the implementation is the size of the APDU buffer. The card receives and sends messages through
an APDU buffer, which can hold data up to 255 bytes at one time. Therefore, for a long message, the encryption cannot be
done in one operation, and needs to be done in four steps. First, the card receives an instance ID that identifies the public key.
Accordingly, it creates an ephemeral public key, and computes the DHIES session keys. The session keys comprise a 128-bit
AES key for encrypting data and another 128-bit AES key for computing MAC. The encryption is performed in the CBC mode.
9
512 1024 1536 2048 2560 3072 3584 4096 4608 5120 5632 6144
0
2000
4000
6000
8000
10000
12000
Latency measurements for encryption
Size of the input plaintext file (bytes)
L
a
t
e
n
c
y
(
m
s
)
Card processing
Communication delay
(a) Performance of Encrypt
512 1024 1536 2048 2560 3072 3584 4096 4608 5120 5632 6144
0
2000
4000
6000
8000
10000
12000
Latency measurements for decryption
Size of the input ciphertext file (bytes)
L
a
t
e
n
c
y
(
m
s
)
Card processing
Communication delay
(b) Performance of Decrypt
Figure 4: Performance evaluation based on a proof-of-concept implementation using a resource-constrained Java card (5 MHz
processor). The same implementation should work several hundred times faster on a high-performance Tamper Resistant Module
such as IBM Storage Manager HSM (2 GHz processor) [41].
Algorithm 1 Encryption in one DHIES session
Input: User instance reference C_i, message m, elliptic curve E with generator G of order n, secure hash function H;
Output: Ephemeral public key Q', key confirmation string H(k_c), authenticated ciphertext E^Auth_k(m);
1: Client sends C_i to card;
2: Card retrieves the instance private key d_Ci corresponding to user instance C_i;
3: Card picks a random d' ∈_R [1, . . . , n];
4: Card sets Q' = d'.G;
5: Card sets k_enc = H(d_Ci.Q' || 0x01);
6: Card sets k_mac = H(d_Ci.Q' || 0x10);
7: Card generates a random IV and returns it to the client;
8: Client divides m into segments m_i, with each segment not more than 255 bytes;
9: for segments m_i do
10: Client sends m_i to card;
11: Card generates E_k(m_i) using AES-CBC with key k_enc;
12: Card generates MAC_i for E_k(m_i) using AES-CBC with key k_mac;
13: end for
14: Card sets k_c = H(d_Ci.Q' || 0x11);
15: Card returns to the client Q', H(k_c), E^Auth_k(m);
A random IV for AES-CBC is generated and returned to the host in this step (this is to optimize the bandwidth usage, so that in the subsequent step the plaintext data can fill up the entire APDU buffer and the returned ciphertext will occupy the whole buffer as well). Second, the message is divided into segments, with each segment not more than 255 bytes. The card receives each segment in turn, performs encryption and saves the intermediate results in RAM. This step is repeated until the penultimate segment of the message. Third, the card receives the last segment of the message and finalizes the encryption. Fourth, a MAC is returned, which is computed over the entire ciphertext using CBC-MAC. Full implementation details of the DHIES encryption are summarized in Algorithm 1.
The latency measurements for the encryption operation are shown in Figure 4a. For each input file of a different size, the Encrypt operation is invoked to encrypt the file and return the ciphertext. The measured total latency includes both the card processing and the card communication delays. In order to obtain the communication delay, we conduct a separate experiment. We add a dummy API to the card, which works superficially like Encrypt in that it accepts an input file and returns an output file of the same size as what the Encrypt API would return. However, the dummy API does not perform any processing on the input data; it immediately outputs a fixed data string stored in the card memory back to the host. We measure the latency of this dummy call, which gives the communication delay.
Algorithm 2 Decryption in one DHIES session
Input: User instance reference C_i, ephemeral public key Q', key confirmation string H(k_c), authenticated ciphertext E^Auth_k(m), initialization vector IV, elliptic curve E with generator G of order n, secure hash function H;
Output: Plaintext m if all verifications succeed;
1: Client sends C_i, Q', H(k_c), IV to card;
2: Card retrieves the instance private key d_Ci corresponding to user instance C_i;
3: Card validates that Q' is a valid public key;
4: Card sets k_c' = H(d_Ci.Q' || 0x11);
5: if H(k_c') = H(k_c) then
6: Card sets k_enc = H(d_Ci.Q' || 0x01);
7: Card sets k_mac = H(d_Ci.Q' || 0x10);
8: Client divides E^Auth_k(m) into segments M_i;
9: for segments M_i do
10: Client sends M_i to card;
11: Card sets m_i to be the decryption of M_i using AES-CBC with key k_enc and IV;
12: Card generates a MAC_i for M_i using key k_mac;
13: end for
14: Card returns m if the computed MAC matches the tag in E^Auth_k(m);
15: end if
Recall that, given the audit input {C_i, d'·G, H(k_c)}, the audit function returns {d_Ci·d'·G, ZKP}. The ZKP reveals nothing more than one bit of information about the truth of the statement: the tuple {G, d_Ci·G, d'·G, d_Ci·d'·G} is a DDH tuple^4 (see [10]). We assume the audit function is called an unlimited number of times. The passive attacker records every input and output, and eventually builds up a transcript of all possible tuples, each comprising {d'·G, d_Ci·d'·G} (recall that d' is dynamic and d_Ci is static). However, he can simulate the same transcript by generating the random values d' by himself and computing d'·d_Ci·G accordingly. In conclusion, he learns nothing about d_Ci from a transcript that he can simulate all by himself.
Active attack. Second, we consider an active attacker and make the following claim with a sketch of its proof.
Claim 2. Under the assumption that the Computational Diffie-Hellman (CDH) problem in the designated group is intractable, and given that the ciphertext input, supplied by an active attacker, has passed the internal verification in the TPM, the input must have been generated with the knowledge of the ephemeral private key d'.
Proof: Assume the attacker has calculated the input to the audit function on his own, which includes {d'·G, H(k_c)}. To obtain a contradiction, we assume the attacker does not know d'. Since the input has passed the internal verification, it shows that d'·G is a valid public key in the designated group over the elliptic curve, so the discrete logarithm (i.e., the private key) with respect to the base point G must exist. In other words, the value d' exists, and the verified key confirmation string satisfies H(k_c) = H(H(d_Ci·d'·G || 0x11)). Hence, the attacker must have obtained the same ECDH shared plain secret. In summary, without knowing d_Ci or d', the attacker has computed d_Ci·d'·G from {d_Ci·G, d'·G}. This contradicts the CDH assumption as stated in the claim. In conclusion, the active attacker must have known d'. In that case, he will learn nothing from the Audit function, as he is able to compute the DDH tuple {G, d'·G, d_Ci·G, d_Ci·d'·G} all by himself.
B. Threat analysis
In the threat model defined in Section IV, we highlighted threats from three different angles. We now analyze those threats in detail.
1) Data thief: We assume the attacker has physically captured the TPM and the disk. Clearly, the attacker cannot make use of the TPM without passing the authentication mechanism. We further assume that the attacker has obtained the user's authentication credential, so he can invoke all API functions of the TPM. Obviously, if the keys have not been deleted, the attacker will be able to trivially decrypt the ciphertext stored on the disk. This is unstoppable, as the attacker is essentially no different from a legitimate user from the system's perspective. The basic design goal of the SSE system is to prevent the attacker from recovering deleted data. Hence, before the system falls into enemy hands, we assume that the user erases keys by calling the Delete function or, in the extreme case, by physically destroying the TPM chip. The latter guarantees the complete erasure of the keys, but in our analysis we will focus on non-destructive means to delete data.
If the Delete function has been implemented correctly, the key should have been erased and its location in memory overwritten with random data. This makes it extremely costly for the attacker to recover the deleted key; without the key, the attacker has to mount a ciphertext-only attack against DHIES, which has been proved infeasible [1].
In order to recover the deleted key, the attacker has to penetrate two layers of defence. First, he needs to bypass the physical tamper resistance, so that he can gain access to the protected memory in the TPM. Second, he needs to recover the overwritten bits in the memory cells where the key was stored before the deletion. Compromising both layers is not impossible, but it will incur a high cost for the attacker. This will be an arms race between defenders and attackers, but if the cost of an attack is significantly higher than the value of the target data, the thief may be deterred.
2) TPM provider: As explained above, if the TPM has i) encrypted data correctly based on the DHIES algorithm, and ii) also
erased keys properly from the protected memory, it can prove prohibitively expensive for a data thief to recover the deleted data.
However, we shall not take it for granted that the TPM provider must have implemented both operations correctly. Software bugs
are one concern. We should also be wary of the possibility that the TPM provider might be coerced by a powerful state-funded
adversary to insert a trapdoor into the products.
^4 Since the non-interactive ZKP is obtained by applying the Fiat-Shamir heuristic, a random oracle model is assumed.
Figure 5: Ciphertext 1, E_k(m), is produced by an honest TPM, while ciphertext 2, E_k(compress(m)) || E_k'(k), is produced by a dishonest TPM. k is an encryption key and k' is a trapdoor key (known to a state-funded security agency). Given that the encryption algorithm is semantically secure, users cannot distinguish the two ciphertexts.
Trust-but-verify. Instead of completely trusting the TPM, we adopt a trust-but-verify approach. More specifically, this trust-but-verify approach is reflected in the design of the SSE protocol in two aspects: verifiable encryption and verifiable deletion.
Verifiable encryption. First, the encryption should be verifiable. The SSE protocol allows the user to verify whether the encryption has been implemented correctly following the DHIES specification. This verification is critical, because if the encryption had not been done correctly in the first place, then deleting the key will not logically lead to the deletion of the data. In past work, such verification is usually done implicitly: the fact that the software program can reverse the encryption process and recover the same original plaintext gives implicit assurance that the encryption was done correctly. This kind of implicit verification is widely used in software testing to ensure that encryption and decryption are implemented correctly.
In a security-critical application, this kind of implicit assurance is insufficient, especially when the software program is encapsulated within a tamper-resistant device and its source code is totally inaccessible. We describe one possible attack in Figure 5. Since plaintext data normally contain redundancy, the TPM can compress the data first before doing the encryption. The compression creates spare space to insert a trapdoor block, which is the decryption key wrapped under a trapdoor key (known to a state-funded security agency). Given that the ciphertext length remains the same and the encryption cipher is semantically secure (i.e., the output of the encryption is indistinguishable from random), users cannot distinguish the two ciphertexts in Figure 5. During decryption, the TPM can simply ignore the trapdoor block and decrypt the data as normal. This attack may be mitigated by always requiring data compression before encryption. However, a powerful state-funded adversary may know a compression algorithm that is more efficient than the publicly known ones. A slight advantage in the compression ratio would prove sufficient to insert a few extra bytes as the trapdoor. We assume the attacker's goal is to enable mass surveillance: once the ciphertext is sent over the Internet (say, to a remote storage server), the attacker is able to trivially decrypt the data without anyone being aware of it.
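The length-preserving nature of this attack is easy to demonstrate. The sketch below (our own illustration, using zlib for compression and a toy SHA-256 keystream in place of a real semantically secure cipher; in Figure 5 the trapdoor block sits outside E_k, while here it is folded into one stream for brevity, which does not change the length argument):

```python
import hashlib
import zlib

def _stream(k: bytes, n: int) -> bytes:
    """Toy keystream; stand-in for a semantically secure cipher."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(k + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def honest_encrypt(k: bytes, m: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(m, _stream(k, len(m))))

def dishonest_encrypt(k: bytes, wrapped_key: bytes, m: bytes) -> bytes:
    """Compress m, smuggle a pre-wrapped trapdoor block into the freed
    space, and pad so the ciphertext length matches the honest one."""
    body = zlib.compress(m)                    # redundancy frees up space
    pad = len(m) - len(body) - len(wrapped_key)
    assert pad >= 0, "not enough redundancy in m"
    payload = body + wrapped_key + bytes(pad)
    return bytes(a ^ b for a, b in zip(payload, _stream(k, len(payload))))

def dishonest_decrypt(k: bytes, c: bytes) -> bytes:
    """The dishonest TPM simply ignores the trapdoor block and padding."""
    payload = bytes(a ^ b for a, b in zip(c, _stream(k, len(c))))
    d = zlib.decompressobj()
    return d.decompress(payload)  # trailing bytes land in d.unused_data
```

Both ciphertexts have exactly the length of m, and the dishonest device still decrypts correctly, which is why only an explicit audit can expose the difference.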
Our solution to the above problem is the audit function. One trivial way to allow auditing of the encryption is to reveal the user instance's private key d_Ci. But the private key d_Ci may have been used in many DHIES sessions (each session is an invocation of the Encrypt function). The auditing should be limited to one specific session, but the revelation of d_Ci would affect the secrecy of all other sessions. This reveals too much information.
Another solution is to reveal the ephemeral random factor d'; however, the TPM does not store d' after the encryption session. Our solution works as follows. First of all, the TPM reveals the plain ECDH shared secret S = d_Ci·d'·G. Together with {G, C = d_Ci·G, N = d'·G}, the revealed S will make {G, C, N, S} form a valid DDH tuple, and the TPM proves in zero knowledge that it is one. This is equivalent to proving either of the following two statements: 1) log_G C = log_N S; or 2) log_G N = log_C S. The choice of the statement depends on whether the prover knows d_Ci or d', respectively. The TPM does not store d', but it knows d_Ci, hence it is able to compute the ZKP based on the Chaum-Pedersen protocol.
Verifiable deletion. As we explained earlier, the deletion operation should return a proof (an ECDSA signature) that cryptographically binds the commitment to deleting a secret key with the outcome of that operation. If the TPM has failed to erase the key correctly, the digital signature will serve as publicly verifiable evidence of the security failure. Based on the evidence and the terms in the Service Level Agreement, the user should be entitled to compensation.
Traditionally, when one (say, a researcher) wants to demonstrate a security failure (or vulnerability) of a TPM, he would need to write a technical article, post a video or do a live demo. Our protocol makes this exposure process easier and more direct: one just publishes a short string of data (an ECDSA signature and the recovered key) on the Internet. Anyone will be able to verify the digital signature and confirm the evidence of the security failure.
C. User
We consider a user who is a legitimate owner of an SSE system. However, the user might want to profit by claiming compensation. To claim compensation, the user needs to present an ECDSA signature together with the private key d_Ci that is supposed to have been deleted.
In one attack, the user can do what a data thief would do: 1) compromise the tamper resistance to gain access to the TPM's protected memory; 2) recover the overwritten key value in the protected memory of the TPM.
However, instead of penetrating two layers of defence, the user actually needs to compromise only one layer. Once she is able to gain access to the protected memory, she can extract an existing private key d_Ci from memory and call the Delete function to erase this key in order to obtain an ECDSA signature. Equivalently, she can extract the ECDSA private key and generate her own ECDSA signature. The evidence itself does not tell whether the security failure is due to the compromise of the ECDSA signing key or due to the recovery of the allegedly deleted private key. But both keys should have been kept in the secure memory of the TPM. Hence, in either case, it becomes publicly clear that the claimed tamper resistance has been compromised. Compared with a data thief, the user exploits a shortcut in the attack, as she does not need to go further and recover the overwritten bits in the memory. This needs to be considered when determining the compensation amount in the pricing strategy, which we discuss in the next section.
VII. PRICING STRATEGY
In the proposed new business model, the user pays a premium for the secure data deletion service. According to the Service Level Agreement, the solution provider is obliged to provide the promised security or to pay a penalty for any failure. In WEIS'10, Cezar, Cavusoglu and Raghunathan discussed contracting issues in the information security market [9], so the idea of imposing a contractual penalty for a security failure is not wholly new. (As a practical example, IBM's Service Level Agreement offers a Money-Back Payment: i.e., providing US$50,000 to the client firm for any security breach listed in the contract [9].) However, to our knowledge, in the field of secure data storage, we are the first to propose treating secure data deletion as a paid service. No one has discussed a contractual penalty for information leakage in data deletion before. In the following, we provide an initial investigation into this subject.
In the SSE protocol, the evidence of a security failure is designed to be publicly verifiable. Hence, if the user shows evidence to prove that the TPM has failed to provide the promised security, she should be entitled to compensation. Depending on the details of the Service Level Agreement, the compensation may be the same as the product price (i.e., money back) or a variable amount of indemnity. To keep the analysis general, we assume the amount of compensation is N.
We begin by taking C_1 to be the cost of breaking the tamper resistance of the TPM (i.e., so as to extract the ECDSA signing key) and C_2 to be the cost of recovering an overwritten key from the secure memory. Hence, the overall cost to recover a deleted key from the TPM is C = C_1 + C_2. We denote by P the selling price of that TPM, and by N the amount of compensation. As long as N ≤ P + C_1, a user will not be able to make a profit by reverse engineering a TPM. (Note that we use C_1 instead of C because of the shortcut in the attack, as explained in the previous section.)
Let us denote by T the cost of producing the TPM. Making the TPM more tamper-resistant will usually increase this cost. A TPM vendor may choose to cut corners and produce TPMs that are inexpensive but not very tamper-resistant. When P − T > N, the TPM vendor will be able to unfairly profit by simply producing cheap but weak TPMs and attempting to sell a large number of them before the insecurity is discovered. Then, when the insecurity is discovered, the vendor can pay compensation to every user, still making a profit of P − T − N on each TPM. Hence, the amount of compensation should be within the following (rough) range:
P − T ≤ N ≤ P + C_1    (1)
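The range in Equation 1 is a one-line check; the sketch below encodes it with illustrative made-up numbers (a $10 card costing $4 to make and $100,000 to tamper with):

```python
def compensation_ok(N: float, P: float, T: float, C1: float) -> bool:
    """Equation 1: the vendor cannot profit from selling deliberately
    weak TPMs (N >= P - T), and the user cannot profit from reverse
    engineering one (N <= P + C1)."""
    return P - T <= N <= P + C1

print(compensation_ok(N=10, P=10, T=4, C1=100_000))  # True: N = P fits the range
```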
The key observation from the above equation is that the range for the amount of compensation largely depends on C_1, which is the cost of breaking the tamper resistance. Take the Java card as an example. A Java card typically sells at a price between $5 and $15 [39]. The manufacturing cost T is fixed. But the exact value of C_1 is difficult to estimate, as it may vary greatly, from a few hundred to a few million dollars [3]. To a large extent, it reflects the core technology in the design and manufacturing of the TPM chip. Within this broad range (see Equation 1), we expect a reasonable amount of compensation would be N = P. This falls within the range in Equation 1 and also fits the common expectation that if a product is found faulty (or below the promised quality standard), the customer should at least have the right to ask for their money back.
It is worth noting that the above analysis is an initial attempt to define a pricing strategy for a new business model. The estimated pricing range is only approximate. For example, we have not taken into account the cost to the company of losing its reputation when its security product is found flawed. In future research, we plan to work on a more refined model.
VIII. CONCLUSION
In this paper, we investigate how to make a black-box data deletion system more transparent and verifiable. Current cryptography-based solutions universally work by first using a cryptographic key to encrypt data and later deleting the data by discarding the key. However, we identify two problems in this approach. First, existing schemes commonly assume that the encryption must have been done correctly, but that assumption can be easily challenged when the encryption is done within a TPM. Second, in all schemes, only one bit is returned from the deletion operation, without any proof of liability (in case anything goes wrong); thus the user is expected to just trust the one-bit outcome. The first problem is particularly important: if data is not encrypted correctly in the first place (say, left with a covert trapdoor in the ciphertext), then discarding the key is meaningless in deleting the data.
We aim to address the above two problems in a systematic approach. Our work includes both theoretical and practical contributions, which are summarised below.
We introduced a trust-but-verify paradigm, based on which we designed a Secure Storage and Erasure (SSE) protocol. To this end, we modified the DHIES scheme by adding explicit key confirmation. Our modification improves the message size complexity in the auditing process from O(n) to O(1). Furthermore, we enhanced the security of DHIES by adding an audit function based on the Chaum-Pedersen ZKP. The result is a verifiable public key encryption primitive, which may prove generally useful. Finally, we applied a digital signature scheme to cryptographically bind the TPM's commitment to delete a key to the outcome of this operation. This serves to provide publicly verifiable evidence in case the TPM fails to delete the key properly.
We proposed a new business model that recognizes secure data deletion as a paid service. Furthermore, we provided an initial pricing strategy for the new business model.
We provided the first public implementation of DHIES and the Chaum-Pedersen ZKP on a Java card, after overcoming several obstacles due to the extremely constrained resources on the card. The Java card and host programs for the SSE prototype are released as open source code. With a physical Java card ($5-15), a user can build her own SSE system without any extra cost. The user may carry the card with her and use it to encrypt/decrypt data on the fly (e.g., encrypting all data before sending them to the cloud).
Future work includes extending the trust-but-verify paradigm to other crypto primitives, in particular the secure random number generator. The problem of permitting end users to audit whether a random number has been generated correctly in a TPM as part of the encryption process (or a cryptographic protocol) is still largely unsolved and deserves further research.
SOURCE CODE
The source code for the Java card and host programs is publicly available at https://github.com/SecurityResearcher/SSE. Java cards can be purchased from various sources, e.g., [39], [40].
REFERENCES
[1] M. Abdalla, M. Bellare and P. Rogaway, The Oracle Diffie-Hellman Assumptions and an Analysis of DHIES, Topics in Cryptology - CT-RSA'01, LNCS
Vol. 2020, 2001.
[2] B. Adida, Helios: Web-Based Open-Audit Voting, Proceedings of the 17th USENIX Security Symposium, pp. 335-348, 2008.
[3] R.J. Anderson, Security Engineering : A Guide to Building Dependable Distributed Systems, 2nd edition, New York, Wiley 2008.
[4] A. Antipa, D. Brown, A. Menezes, R. Struik, S. Vanstone, Validation of Elliptic Curve Public Keys, Proceedings of the 6th International Workshop on
Practice and Theory in Public Key Cryptography (PKC'03), LNCS 2567, pp. 211-223, 2003.
[5] S. Bauer, N.B. Priyantha, Secure Data Deletion for Linux File Systems, Proceedings of the 10th USENIX Security, 2001.
[6] C. Burton, C. Culnane, J.A. Heather, P.Y.A. Ryan, S. Schneider, T. Srinivasan, V. Teague, R. Wen, Z. Xia, Using Prêt à Voter in Victorian State Elections,
Proceedings of the 2012 Electronic Voting Technology Workshop/Workshop on Trustworthy Elections (EVT/WOTE'12), 2012.
[7] D. Boneh, R. Lipton, A Revocable Backup System, Proceedings 6th USENIX Security Conference, pp. 91-96, 1996.
[8] C. Cachin, K. Haralambiev, H.C. Hsiao, A. Sorniotti, Policy-Based Secure Deletion, Proceedings of the 2013 ACM Conference on Computer and
Communications Security (CCS13), pp. 259-270, 2013.
[9] A. Cezar, H. Cavusoglu, S. Raghunathan, Outsourcing Information Security: Contracting Issues and Security Implications, Proceedings of the ninth
Workshop on the Economics of Information Security (WEIS10), 2010.
[10] D. Chaum and T.P. Pedersen, Transferred Cash Grows in Size, Proceedings of EUROCRYPT, pp. 390-407, 1993.
[11] D. Wheeler, Protocols Using Keys from Faulty Data (Transcript of Discussion), Proceedings of the 9th Security Protocols Workshop (SPW01), LNCS
No. 2467, pp. 180-187, 2002.
[12] A. Fiat, A. Shamir, How to Prove Yourself: Practical Solutions to Identification and Signature Problems, Proceedings of CRYPTO, pp. 186-189, 1987.
[13] S. Garfinkel, A. Shelat, Remembrance of Data Passed: A Study of Disk Sanitization Practices, IEEE Security & Privacy, Vol. 1, No. 1, pp. 17-27, 2003.
[14] P. Gutmann, Secure Deletion of Data from Magnetic and Solid-State Memory, Proceedings of the Sixth USENIX Security Symposium, pp. 22-25, 1996.
[15] P. Gutmann, Data Remanence in Semiconductor Devices, Proceedings of the 10th conference on USENIX Security Symposium, 2001.
[16] N. Joukov, H. Papaxenopoulos, E. Zadok, Secure Deletion Myths, Issues, and Solutions, Proceedings of the second ACM workshop on Storage Security
and Survivability (StorageSS), pp. 61-66, 2006.
[17] M. Kallahalla, E. Riedel, R. Swaminathan, Q. Wang, K. Fu, Plutus: Scalable Secure File Sharing on Untrusted Storage, Proceedings of the 2nd USENIX
Conference on File and Storage Technologies (FAST03), pp. 29-41, 2003.
[18] R. Kissel, M. Scholl, S. Skolochenko, X. Li, Guidelines for Media Sanitization, NIST Special Publication 800-88, 2006.
[19] T. Kohno, A. Stubblefield, A.D. Rubin, and D.S. Wallach, Analysis of an Electronic Voting System, Proceedings of the 25th IEEE Symposium on
Security and Privacy, May, 2004.
[20] J. Lee, S. Yi, J.Y. Heo, H. Park, S.Y. Shin and Y.K. Cho, An Efficient Secure Deletion Scheme for Flash File Systems, Journal of Information Science
and Engineering, Vol. 26, pp. 27-38, 2010.
[21] B. Lee, K. Son, D. Won, S. Kim, Secure Data Deletion for USB Flash Memory, Journal of Information Science and Engineering, Vol. 27, pp. 933-952,
2011.
[22] M. Paul, A. Saxena, Proof Of Erasability for Ensuring Comprehensive Data Deletion in Cloud Computing, Communications in Computer and Information
Science, Vol. 89, Part 2, pp. 340-348, 2010.
[23] D. Perito, G. Tsudik, Secure Code Update for Embedded Devices via Proofs of Secure Erasure, Proceedings of the 15th European Conference on
Research in Computer Security (ESORICS), pp. 643-662, 2010.
[24] R. Perlman, File System Design with Assured Delete, Proceedings of the Third IEEE International Security in Storage Workshop (SISW), pp. 83-88,
2005.
[25] Z.N.J. Peterson, R. Burns, J. Herring, A. Stubblefield, A.D. Rubin, Secure Deletion for a Versioning File System, Proceedings of the 4th USENIX
Conference on File and Storage Technologies (FAST), Vol. 4, pp. 143-154, 2005.
[26] J. Reardon, S. Capkun, D. Basin, Data Node Encrypted File System: Efficient Secure Deletion for Flash Memory, Proceedings of the 21st USENIX
Security Symposium, 2012.
[27] J. Reardon, D. Basin, S. Capkun, SoK: Secure Data Deletion, Proceedings of the 2013 IEEE Symposium on Security and Privacy, pp. 301-315, 2013.
[28] J. Reardon, H. Ritzdorf, D. Basin and S. Capkun, Secure Data Deletion from Persistent Media, Proceedings of the ACM Conference on Computer and
Communications Security, 2013.
[29] D. Stinson, Cryptography: Theory and Practice, Third Edition, Chapman & Hall/CRC, 2006.
[30] H. Tews, B. Jacobs, Performance Issues of Selective Disclosure and Blinded Issuing Protocols on Java Card, Proceedings of the 3rd IFIP WG 11.2
International Workshop on Information Security Theory and Practice (WISTP09), 2009.
[31] M. Wei, L.M. Grupp, F.E. Spada, S. Swanson, Reliably Erasing Data From Flash-Based Solid State Drives, Proceedings of the 9th USENIX Conference
on File and Storage Technologies (FAST), 2011.
[32] M. Wei, S. Swanson, SAFE: Fast, Verifiable Sanitization for SSDs, Technical Report CS2011-0963, University of California, San Diego, 2011.
[33] C.P. Wright, M.C. Martino, E. Zadok, NCryptfs: A Secure and Convenient Cryptographic File System, Proceedings of the 2003 USENIX Annual
Technical Conference, pp. 197-210, 2003.
[34] C. Wright, D. Kleiman, S. Sundhar, Overwriting Hard Drive Data: The Great Wiping Controversy, Proceedings of the 4th International Conference on
Information Systems Security, pp. 243-257, 2008.
[35] Digital Signature Standard (DSS), Federal Information Processing Standards Publication, NIST FIPS Pub 186-4, July 2013.
[36] Specification of High Capacity Smart Cards, http://www.ecebs.com/high-capacity-smart-cards-a212 (accessed in February, 2014)
[37] Java Card Platform Specification 2.2.2, http://www.oracle.com/technetwork/java/javacard/specs-138637.html (accessed in February, 2014)
[38] The Guardian news on the Snowden documents, September, 2013, http://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryption-codes-security
(accessed in February, 2014)
[39] Smart Card Solution Provider, http://www.javacardsdk.com/ (accessed in February, 2014)
[40] Mobile Technologies, http://www.motechno.com/ (accessed in February, 2014)
[41] IBM Tivoli Storage Manager HSM, http://www-01.ibm.com/support/docview.wss?uid=swg21319299 (accessed in May, 2014)