CHAPTER 1
INTRODUCTION
From a user's perspective, data outsourcing raises security and privacy concerns.
One must trust third-party cloud providers to properly enforce confidentiality, integrity
checking, and access control mechanisms against any insider and outsider attacks.
However, deduplication, while improving storage and bandwidth efficiency, is
incompatible with traditional encryption. Specifically, traditional encryption requires
different users to encrypt their data with their own keys. Thus, identical data copies
belonging to different users lead to different ciphertexts, making deduplication impossible.
1.1.1.1 Characteristics
Device and location independence enable users to access systems using a web
browser regardless of their location or the device they are using (e.g., PC,
mobile phone). As the infrastructure is off-site (typically provided by a third party)
and accessed via the Internet, users can connect from anywhere.
o Peak-load capacity increases: users need not engineer for the highest possible
load levels.
o Utilization and efficiency improve for systems that are often only
10-20% utilized.
Reliability is improved if multiple redundant sites are used, which makes well-
designed cloud computing suitable for business continuity and disaster recovery.
In this most basic cloud service model, cloud providers offer computers (as
physical or, more often, virtual machines), raw block storage, firewalls, load balancers,
and networks. IaaS providers supply these resources on demand from their large pools
installed in data centers. Local area networks, including IP addresses, are part of the
offer. For wide-area connectivity, the Internet can be used or, in carrier clouds,
dedicated virtual private networks can be configured.
To deploy their applications, cloud users then install operating system images on
the machines as well as their application software. In this model, it is the cloud user who
is responsible for patching and maintaining the operating systems and application
software. Cloud providers typically bill IaaS services on a utility computing basis, that
is, cost will reflect the amount of resources allocated and consumed.
In the PaaS model, cloud providers deliver a computing platform and/or solution
stack typically including operating system, programming language execution
environment, database, and web server. Application developers can develop and run
their software solutions on a cloud platform without the cost and complexity of buying
and managing the underlying hardware and software layers. With some PaaS offers, the
underlying compute and storage resources scale automatically to match application
demand such that the cloud user does not have to allocate resources manually.
In this model, cloud providers install and operate application software in the
cloud and cloud users access the software from cloud clients. The cloud users do not
manage the cloud infrastructure and platform on which the application is running. This
eliminates the need to install and run the application on the cloud user's own computers,
simplifying maintenance and support. What makes a cloud application different from
other applications is its elasticity. This can be achieved by cloning tasks onto multiple
virtual machines at run-time to meet the changing work demand.
Load balancers distribute the work over the set of virtual machines. This
process is transparent to the cloud user who sees only a single access point. To
accommodate a large number of cloud users, cloud applications can be multitenant, that
is, any machine serves more than one cloud-user organization. It is common to refer to
special types of cloud-based application software with a similar naming convention:
desktop as a service, business process as a service, test environment as a service, and
communication as a service.
Users access cloud computing using networked client devices, such as desktop
computers, laptops, tablets and smart phones. Some of these devices - cloud clients -
rely on cloud computing for all or a majority of their applications so as to be essentially
useless without it. Examples are thin clients and the browser-based Chromebook. Many
cloud applications do not require specific software on the client and instead use a web
browser to interact with the cloud application. With AJAX and HTML5 these Web user
interfaces can achieve a look and feel similar to, or even better than, native applications.
Some cloud applications, however, support specific client software dedicated to these
applications (e.g., virtual desktop clients and most email clients). Some legacy
applications (line-of-business applications that until now have been prevalent in thin-client
Windows computing) are delivered via a screen-sharing technology.
A public cloud is one based on the standard cloud computing model, in which a
service provider makes resources, such as applications and storage, available to the
general public over the Internet. Public cloud services may be free or offered on a pay-
per-usage model.
1.1.1.5 Architecture
1.1.2 Deduplication
As the analysis continues, other chunks are compared to the stored copy and
whenever a match occurs, the redundant chunk is replaced with a small reference that
points to the stored chunk. Given that the same byte pattern may occur dozens,
hundreds, or even thousands of times (the match frequency is dependent on the chunk
size), the amount of data that must be stored or transferred can be greatly reduced.
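To make the chunk-matching step concrete, here is a minimal Java sketch, assuming fixed-size chunks, SHA-256 fingerprints, and an in-memory chunk store (all names are illustrative, not from any particular product):

import java.security.MessageDigest;
import java.util.*;

public class ChunkDedup {
    static final int CHUNK_SIZE = 4096;  // illustrative chunk size

    // Splits data into fixed-size chunks, stores each unique chunk once,
    // and returns the file as a list of chunk references (fingerprints).
    public static List<String> store(byte[] data, Map<String, byte[]> chunkStore) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        List<String> references = new ArrayList<>();
        for (int off = 0; off < data.length; off += CHUNK_SIZE) {
            byte[] chunk = Arrays.copyOfRange(data, off, Math.min(off + CHUNK_SIZE, data.length));
            String fp = Base64.getEncoder().encodeToString(sha.digest(chunk));
            chunkStore.putIfAbsent(fp, chunk);  // a redundant chunk is not stored again
            references.add(fp);                 // a small reference replaces the chunk
        }
        return references;
    }
}

A repeated byte pattern thus costs one stored chunk plus one short reference per occurrence, which is where the storage and bandwidth savings come from.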
1.1.2.1 Benefits
Network data deduplication is used to reduce the number of bytes that must be
transferred between endpoints, which can reduce the amount of bandwidth
required. See WAN optimization for more information.
If each user employed a conventional cryptosystem to encrypt his files, then two identical
files encrypted with different users' keys would have different encrypted representations,
and the DFC subsystem could neither recognize that the files are identical nor coalesce
the encrypted files into the space of a single file, unless it had access to the users'
private keys, which would be a significant security violation.
Data management through the cloud is viewed as a technique that can save costs
for data sharing and management. A key concept for remote data storage is client-side
deduplication, in which the server stores only a single copy of each file, regardless
of how many clients need to store that file. That is, only the first client needs to upload
the file to the server. This design saves both communication bandwidth and storage
capacity. Data deduplication is a technique for eliminating duplicate copies of data,
and has been widely used in cloud storage to reduce storage space and upload bandwidth.
In Dekey, users do not need to manage any keys on their own but instead
securely distribute the convergent key shares across multiple servers. Dekey preserves
the semantic security of convergent keys and the confidentiality of outsourced data.
In [1], Jin Li, Xiaofeng Chen, Mingqiang Li, Jingwei Li, Patrick P.C. Lee, and
Wenjing Lou propose data deduplication, a technique for eliminating duplicate copies
of data that has been widely used in cloud storage to reduce storage space and upload
bandwidth. Promising as it is, an arising challenge is to perform secure deduplication in
cloud storage. Although convergent encryption has been extensively adopted for secure
deduplication, a critical issue in making convergent encryption practical is to efficiently
and reliably manage a huge number of convergent keys. This work makes the first attempt
to formally address the problem of achieving efficient and reliable key management in
secure deduplication.
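The core idea of convergent encryption can be shown in a short Java sketch: the key is derived from the plaintext itself, so identical files produce identical ciphertexts that the server can deduplicate. This is a minimal sketch, assuming AES-128 keyed by the first 16 bytes of SHA-256(F) and a fixed IV (tolerable here only because the key is unique per plaintext); it is not the exact construction of [1]:

import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class ConvergentEncryption {
    // Encrypts the plaintext under a key derived from its own hash, so any
    // two users holding the same file produce the same ciphertext.
    public static byte[] encrypt(byte[] plaintext) throws Exception {
        byte[] key = MessageDigest.getInstance("SHA-256").digest(plaintext);
        SecretKeySpec aesKey = new SecretKeySpec(key, 0, 16, "AES");
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey, new IvParameterSpec(new byte[16]));
        return cipher.doFinal(plaintext);
    }
}

The key management problem the paper addresses follows directly: every file now has its own convergent key, and users must store these keys reliably.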
Active Disks present a promising architectural direction for two reasons. First,
since the number of processors scales with the number of disks, active-disk
architectures are better equipped to keep up with the processing requirements of rapidly
growing datasets. Second, since the processing components are integrated with the
drives, the processing capacity will evolve as the disk drives evolve. This is similar to
the evolution of disk caches: as the drives get faster, the disk cache becomes larger.
The introduction of Active Disks raises several questions. First, how are they
programmed? What is disk-resident code (i.e., a disklet) allowed to do? How does it
communicate with the host-resident component? Second, how does one protect against
buggy or malicious programs? Third, is it feasible to utilize Active Disks for the classes
of datasets that are expected to grow rapidly, i.e., commercial data warehouses, image
databases, and satellite data repositories? To be able to take advantage of processing
power that scales with dataset size, it should be possible to partition the algorithms that
process these datasets such that most of the processing can be offloaded to the disk-resident
processors. Finally, how much benefit can be expected with current technology
and in the foreseeable future?
Each peer is responsible for a subset of the network. This organization improves
the efficiency of P2P networks, ensuring routing in O(log n) hops, but can also lead to
security issues like the Sybil Attack.
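For illustration, Kademlia-style DHTs such as KAD measure closeness between identifiers with the XOR metric; each routing step roughly halves the remaining distance to the target, which is what yields the O(log n) bound. A minimal sketch (names illustrative):

import java.math.BigInteger;

public class XorMetric {
    // Smaller XOR value means "closer" in the keyspace; a lookup always
    // forwards to a known peer whose ID is closer to the target.
    static BigInteger distance(BigInteger idA, BigInteger idB) {
        return idA.xor(idB);
    }
}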
The Sybil Attack consists in creating a large number of fake peers, called the
"Sybils", and placing them in a strategic way in the DHT to take control over a part of it.
Douceur proved that the Sybil Attack cannot be totally avoided as long as the malicious
entity has enough resources to create the Sybils. This problem was not considered when
designing most of the major structured P2P networks. In this context, the goal of the
defense strategies described in the literature is to limit the Sybil Attack, as completely
stopping it is impossible.
The latest versions of the major KAD clients have introduced new protection
mechanisms to limit the Sybil attack, making the previous experiments concerning the
security issues of KAD obsolete. These newly implemented protection
mechanisms have neither been described nor evaluated and assessed. The purpose
of this study is to evaluate the implemented security mechanisms against real attacks.
We will then be able to have an updated view of KAD's vulnerabilities: is the network
still vulnerable to the attacks previously proposed?
This work is a first and necessary step toward designing improved defense mechanisms
in future work. As far as we know, this paper is also the first attempt to experiment with
and assess practical protections set up by the P2P community to protect a real network.
Even if the security mechanisms are a step forward, they are not yet sufficient. We have
shown that a distributed eclipse attack focused on a particular ID still remains possible
at a moderate cost. This result shows that the main weakness of KAD has shifted from
the possibility to reference many KADIDs with the same IP address to the possibility to
freely choose one's KADID. Moreover, if we consider an attacker with many resources,
particularly in terms of the number of IP addresses, the overall protection can be
threatened due to the specific design using local rules.
In other applications the tradeoff is not between secrecy and reliability, but
between safety and convenience of use. Consider, for example, a company that digitally
signs all its checks. If each executive is given a copy of the company's secret signature
key, the system is convenient but easy to misuse. If the cooperation of all the company's
executives is necessary in order to sign each check, the system is safe but inconvenient.
The standard solution requires at least three signatures per check, and it is easy to
implement with a (3, n) threshold scheme.
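A (k, n) threshold scheme of this kind can be realized with Shamir's construction: the secret is the constant term of a random degree-(k-1) polynomial over a prime field, each share is a point on that polynomial, and any k shares recover the secret by Lagrange interpolation at x = 0. Below is a minimal sketch, assuming BigInteger arithmetic, an illustrative 127-bit prime, and a secret smaller than the prime:

import java.math.BigInteger;
import java.security.SecureRandom;

public class Shamir {
    // Illustrative field prime; the secret must be smaller than P.
    static final BigInteger P = BigInteger.probablePrime(127, new SecureRandom());

    // Split `secret` into n shares (x, f(x)); any k of them reconstruct it.
    static BigInteger[][] split(BigInteger secret, int k, int n) {
        SecureRandom rnd = new SecureRandom();
        BigInteger[] coeff = new BigInteger[k];
        coeff[0] = secret;  // f(0) = secret
        for (int i = 1; i < k; i++) coeff[i] = new BigInteger(P.bitLength() - 1, rnd);
        BigInteger[][] shares = new BigInteger[n][];
        for (int x = 1; x <= n; x++) {
            BigInteger y = BigInteger.ZERO;
            for (int i = k - 1; i >= 0; i--)  // Horner evaluation of f(x) mod P
                y = y.multiply(BigInteger.valueOf(x)).add(coeff[i]).mod(P);
            shares[x - 1] = new BigInteger[] { BigInteger.valueOf(x), y };
        }
        return shares;
    }

    // Reconstruct f(0) from any k distinct shares via Lagrange interpolation.
    static BigInteger combine(BigInteger[][] shares) {
        BigInteger secret = BigInteger.ZERO;
        for (int i = 0; i < shares.length; i++) {
            BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
            for (int j = 0; j < shares.length; j++) {
                if (i == j) continue;
                num = num.multiply(shares[j][0].negate()).mod(P);
                den = den.multiply(shares[i][0].subtract(shares[j][0])).mod(P);
            }
            secret = secret.add(shares[i][1].multiply(num).multiply(den.modInverse(P))).mod(P);
        }
        return secret;
    }
}

For the check-signing example, split(signingKey, 3, n) gives each executive one share, and any three of them can jointly reconstruct the key.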
Given that mobile devices have limited storage space in general, individuals can
move audio/video files to the cloud and make effective use of space in their mobile
devices. However, privacy and integrity concerns become relevant as we now count on
third parties to host possibly sensitive data. To protect outsourced data, a
straightforward approach is to apply cryptographic encryption onto sensitive data with a
set of encryption keys, yet maintaining and protecting such encryption keys will create
another security issue.
FADE generalizes time-based file assured deletion (i.e., files are assuredly
deleted upon time expiration) into a more fine-grained approach called policy-based file
assured deletion, in which files are associated with more flexible file access policies
(e.g., time expiration, read/write permissions of authorized users) and are assuredly
deleted when the associated file access policies are revoked and become obsolete.
This paper makes the following contributions:
Proposes a new policy-based file assured deletion scheme that reliably deletes
files with regard to revoked file access policies. In this context, the authors design
key management schemes for various file manipulation operations.
Empirically evaluates the performance overhead of FADE atop Amazon S3 and,
using realistic experiments, shows its feasibility.
Representation of Metadata
For each file protected by FADE, we include metadata that describes the
policies associated with the file as well as a set of encrypted keys. In FADE, there are
two types of metadata: file metadata and policy metadata.
File metadata. The file metadata mainly contains two pieces of information: the file
size and an HMAC. We hash the encrypted file with HMAC-SHA1 for integrity
checking. The file metadata is of fixed size, with 8 bytes for the file size and 20 bytes
for the HMAC, and is attached at the beginning of the encrypted file. Both the file
metadata and the encrypted data file will then be treated as a single file to be uploaded
to the storage cloud.
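The layout just described can be sketched in a few lines of Java; the 8-byte size plus 20-byte HMAC-SHA1 header follows the text above, while the big-endian encoding of the size field and the method names are assumptions:

import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class FileMetadata {
    // Prepends the fixed-size file metadata (8-byte size, 20-byte HMAC-SHA1
    // of the ciphertext) so header and ciphertext upload as one object.
    public static byte[] attach(byte[] encryptedFile, byte[] macKey) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(macKey, "HmacSHA1"));
        byte[] tag = hmac.doFinal(encryptedFile);        // 20-byte integrity tag
        ByteBuffer out = ByteBuffer.allocate(8 + 20 + encryptedFile.length);
        out.putLong(encryptedFile.length);               // 8-byte file size
        out.put(tag);
        out.put(encryptedFile);
        return out.array();
    }
}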
In [6], C. Wang, Q. Wang, K. Ren, and W. Lou define how public auditing
preserves storage security in cloud computing. Cloud Computing has been envisioned as
the next-generation information technology architecture for enterprises, due to its long
list of unprecedented advantages in IT history: on-demand self-service, ubiquitous
network access, location-independent resource pooling, rapid resource elasticity, usage-based
pricing, and transference of risk. As a disruptive technology with profound
implications, Cloud Computing is transforming the very nature of how businesses use
information technology. One fundamental aspect of this paradigm shift is that data is
being centralized or outsourced to the cloud.
This problem, if not properly addressed, may impede the successful deployment
of the cloud architecture. As users no longer physically possess the storage of their data,
traditional cryptographic primitives for the purpose of data security protection cannot be
directly adopted. In particular, simply downloading all the data for integrity
verification is not a practical solution due to the expense of I/O and transmission
across the network. Besides, it is often insufficient to detect data corruption
only when accessing the data, as this gives users no correctness assurance for
unaccessed data, and it might be too late to recover from data loss or damage.
This work is among the first few to support privacy-preserving
public auditing in Cloud Computing, with a focus on data storage. Besides, with the
prevalence of Cloud Computing, a foreseeable increase in auditing tasks from different
users may be delegated to the TPA.
To address these problems, our work utilizes the technique of a public-key-based
homomorphic linear authenticator (HLA for short), which enables the TPA to perform the
auditing without demanding a local copy of the data and thus drastically reduces the
communication and computation overhead compared to straightforward data
auditing approaches. By integrating the HLA with random masking, our protocol
guarantees that the TPA cannot learn any knowledge about the data content stored in
the cloud server during the auditing process.
The aggregation and algebraic properties of the authenticator further benefit our
design for the batch auditing. Specifically, our contribution can be summarized as the
following three aspects:
To the best of our knowledge, our scheme is the first to support scalable and
efficient public auditing in Cloud Computing. Specifically, our scheme
achieves batch auditing, where multiple delegated auditing tasks from different
users can be performed simultaneously by the TPA.
We prove the security and justify the performance of our proposed schemes.
The architecture is based on the assumption that systems at the petabyte scale
are inherently dynamic: large systems are inevitably built incrementally, node failures
are the norm rather than the exception, and the quality and character of workloads are
constantly shifting over time. Ceph decouples data and metadata operations by
eliminating file allocation tables and replacing them with generating functions. This
allows Ceph to leverage the intelligence present in OSDs to distribute the complexity
surrounding data access, update serialization, replication and reliability, failure
detection, and recovery. Ceph utilizes a highly adaptive distributed metadata cluster
architecture that dramatically improves the scalability of metadata access, and with it,
the scalability of the entire system.
The unique aspects of the Panasas system are its use of per-file, client-driven
RAID, its parallel RAID rebuild, its treatment of different classes of metadata (block,
file, system), and its commodity-parts-based blade hardware with an integrated UPS. Of
course, the system has many other features, such as object storage, fault tolerance,
caching and cache consistency, and a simplified management model, that are not unique
but are necessary for a scalable system implementation.
An object is a container for data and attributes; it is analogous to the inode inside
a traditional UNIX file system implementation. Specialized storage nodes called OSDs
store objects in a local OSDFS file system. The object interface addresses objects in a
two-level (partition ID/object ID) namespace. The OSD wire protocol provides byte-oriented
access to the data, attribute manipulation, creation and deletion of objects, and
several other specialized operations [OSD04]. We use an iSCSI transport to carry OSD
commands that are very similar to the OSDv2 standard currently in progress within
SNIA and ANSI-T10 [SNIA].
The Panasas file system is layered over the object storage. Each file is striped
over two or more objects to provide redundancy and high bandwidth access. The file
system semantics are implemented by metadata managers that mediate access to objects
from clients of the file system. The clients access the object storage using the
iSCSI/OSD protocol for Read and Write operations. The I/O operations proceed directly
and in parallel to the storage nodes, bypassing the metadata managers.
ASUs allow processing capacity to scale naturally with the size of storage. They
also have the potential to reduce data movement across the interconnect if searching,
filtering, or read/modify/write steps execute directly on ASUs. This allows aggregation
of larger numbers of drives behind each network port, and it can improve host
processing performance since data movement in host memory is often a leading drain
on host CPU resources. However, ASUs introduce new distributed computing
challenges relating to controlling the mapping of application functions to ASUs,
coordinating functions across ASUs and hosts, and sharing of ASU resources.
A user in possession of a VDO can retrieve the plaintext prior to the expiration
time T by simply reading the secret shares from at least k indices in the DHT and
reconstructing the decryption key. When the expiration time passes, the DHT will
expunge the stored shares, and, the Vanish authors assert, the information needed to
reconstruct the key will be permanently lost. The Vanish team released an
implementation based on the million-node Vuze DHT, which is used mainly for
BitTorrent tracking.
The paper presents two Sybil attacks against the current Vanish implementation,
which stores its encryption keys in the million-node Vuze BitTorrent DHT. These
attacks work by continuously crawling the DHT and saving each stored value before it
ages out. The security guarantees that Vanish sets out to provide would be extremely
useful, but, unfortunately, the system in its current form does not provide them in
practice. As we have shown, efficient Sybil attacks can recover the keys to almost all
Vanish data objects at low cost.
CHAPTER 2
SYSTEM DESIGN
Deduplication is not performed by an authorized party, which may lead to security
violations.
Any user can modify the data if they know the file information.
To better protect data security, this paper makes the first attempt to formally
address the problem of authorized data deduplication. Different from traditional
deduplication systems, the differential privileges of users are further considered in
duplicate check besides the data itself. We also present several new deduplication
constructions supporting authorized duplicate check in a hybrid cloud architecture.
Security analysis demonstrates that our scheme is secure in terms of the definitions
specified in the proposed security model.
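To illustrate how a duplicate-check token can be bound to a privilege, consider the following minimal sketch: the private cloud keeps one secret key per privilege and issues the token for a file as an HMAC of the file hash under that key, so users lacking the privilege cannot compute matching tokens. This is a simplified stand-in for the scheme's token generation; the key handling and names are assumptions:

import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class DuplicateCheckToken {
    // token = HMAC(k_p, SHA-256(F)): equal files under equal privileges give
    // equal tokens, enabling the duplicate check without revealing F.
    public static byte[] token(byte[] privilegeKey, byte[] file) throws Exception {
        byte[] fileHash = MessageDigest.getInstance("SHA-256").digest(file);
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(privilegeKey, "HmacSHA256"));
        return mac.doFinal(fileHash);
    }
}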
Suppose User 1 is the first user who uploads file F. He will execute algorithm E with file
F and the security parameter as input and obtain a short secret encryption key κ, a short
encoding Cκ ∈ {0, 1}*, and a long encoding CF. User 1 will send both Cκ and CF to the
cloud storage server, User 2. User 2 will compute the hash value hash(CF), put Cκ in
secure and small primary storage, and put CF in the potentially insecure but large
secondary storage. Finally, User 2 will add (key = hash(F); value = (hash(CF), Cκ))
into his lookup database. Suppose User 3 tries to upload the same file F after User
1. User 3 will send hash(F) to the cloud storage server User 2. User 2 finds that hash(F)
is already in his lookup database. Then User 2, running algorithm V with Cκ as
input, interacts with User 3, who is running algorithm P with F as input. At the end of the
interaction, User 3 will learn κ, and User 2 will compare the hash value hash(CF)
provided by User 3 with the one computed by himself. Later, User 3 is allowed to
download CF from User 2 at any time and decrypt it to obtain the file F by running
algorithm D(κ, CF).
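On the server side, this walkthrough reduces to a lookup table keyed by hash(F). A minimal Java sketch of the first-upload versus duplicate decision, with types and names assumed for illustration:

import java.util.HashMap;
import java.util.Map;

public class DedupServer {
    // hash(F) -> stored record holding hash(CF) and the short encoding Cκ
    static class Record { byte[] hashCF; byte[] cKappa; }
    private final Map<String, Record> lookup = new HashMap<>();

    // Returns true if the file is new and the client must upload CF;
    // returns false for a duplicate, where only the proof protocol (V/P)
    // and the hash comparison are run instead of a second transfer.
    public boolean firstUpload(String hashF, byte[] hashCF, byte[] cKappa) {
        if (lookup.containsKey(hashF)) return false;
        Record r = new Record();
        r.hashCF = hashCF;
        r.cKappa = cKappa;
        lookup.put(hashF, r);
        return true;
    }
}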
The user is only allowed to perform the duplicate check for files marked with the
corresponding privileges.
Reduce the storage size of the tags for integrity checking. Enhance the security
of deduplication and protect data confidentiality.
Feasibility studies aim to objectively and rationally uncover the strengths and
weaknesses of the existing business or proposed venture, opportunities and threats as
presented by the environment, the resources required to carry through, and ultimately
the prospects for success. In its simplest terms, the two criteria to judge feasibility are
cost required and value to be attained. As such, a well-designed feasibility study should
provide a historical background of the business or project, description of the product or
service, accounting statements, details of the operations and management, marketing
research and policies, financial data, legal requirements and tax obligations. Generally,
feasibility studies precede technical development and project implementation.
This study is carried out to check the economic impact that the system will have
on the organization. The amount of fund that the company can pour into the research
and development of the system is limited. The expenditures must be justified. Thus the
developed system was well within the budget, and this was achieved because most of the
technologies used are freely available. Only the customized products had to be
purchased.
A technical feasibility study is carried out to check the technical feasibility, that is,
the technical requirements of the system. Any system developed must not place a high
demand on the available technical resources, as this would lead to high demands being
placed on the client. The developed system must have modest requirements, as only
minimal or no changes are required for implementing this system.
This aspect of the study is to check the level of acceptance of the system by the user.
This includes the process of training the user to use the system efficiently. The user
must not feel threatened by the system, instead must accept it as a necessity. The level
of acceptance by the users solely depends on the methods that are employed to educate
the user about the system and to make him familiar with it. His level of confidence must
be raised so that he is also able to make some constructive criticism, which is
welcomed, as he is the final user of the system.
Java solves this problem by severely restricting what an applet can do. A
Java applet cannot write to your hard disk without your permission. It cannot write to
arbitrary addresses in memory and thereby introduce a virus into your computer. It
should not crash your system.
CHAPTER 3
SYSTEM DESCRIPTION
1. System Setup
2. User Module
3. Secure DeDuplication System
4. Security Of Duplicate Check Token
5. Send key
6. Performance Evaluation
In this module, the cloud environment setup will be done. The number of users
present in the cloud environment and the cloud service providers will be set up. The cloud
server is the entity that provides data storage access to the data owners; it provides
efficient access to the cloud servers in an effective manner. The cloud user is the one who
accesses and modifies the information present in the cloud data centers.
In Dekey, we assume that the number of KM-CSPs is n.
In this module, users are restricted by authentication and security requirements when
accessing the details presented in the system. Before accessing or searching the details,
the user must have an account; otherwise, they must register first.
Suppose User 1 is the first user who uploads a sensitive file F to the cloud
storage. She will independently choose a random AES key κ and produce two
ciphertexts. The first ciphertext CF is generated by encrypting file F with encryption key
κ using AES, and the size of CF is almost equal to the size of F; the second ciphertext
Cκ is generated by encrypting the short AES key κ with file F as the encryption key
using some custom encryption method, and the size of Cκ is in O(|κ|), which is very
small.
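This two-ciphertext construction can be sketched as follows; deriving the key-encrypting key as SHA-256(F) and using AES-CBC with a zero IV are illustrative simplifications standing in for the "custom encryption method" mentioned above:

import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class TwoCiphertexts {
    // Returns { CF, Cκ }: CF encrypts the file under a fresh random key κ,
    // and Cκ encrypts κ under a key derived from the file itself, so only
    // someone who already holds F can recover κ.
    public static byte[][] encrypt(byte[] fileF) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey kappa = kg.generateKey();
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, kappa, new IvParameterSpec(new byte[16]));
        byte[] cF = c.doFinal(fileF);                    // long encoding CF
        byte[] fileKey = MessageDigest.getInstance("SHA-256").digest(fileF);
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(fileKey, 0, 16, "AES"),
               new IvParameterSpec(new byte[16]));
        byte[] cKappa = c.doFinal(kappa.getEncoded());   // short encoding Cκ
        return new byte[][] { cF, cKappa };
    }
}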
If a user has privilege p, the scheme requires that an adversary cannot forge and output a
valid duplicate token with any other privilege p′ on any file F, where p′ does not match
p. Furthermore, it also requires that if the adversary does not request a token
with its own privilege from the private cloud server, it cannot forge and output a valid
duplicate token with p on any F that has been queried.
Once the key request is received, the sender can send the key or decline
it. With this key and the request ID that was generated at the time of sending the key
request, the receiver can decrypt the message.
The proposed method will be compared with the existing mechanism in order to
evaluate the performance of our proposed methodology.
(Flow diagram: the user's request first passes an authorization check; if the user is
authorized, a duplicate check is performed before the process ends.)
Confidentiality Level:
In the above graph, the x-axis plots the level at which confidentiality is
measured, and the y-axis plots the confidentiality level. From the graph it is clear that a
confidentiality level of 0.68 is achieved at the encoding level and 0.73 at the
decoding level.
Reliability Level
This graph shows that encoding can be done at a 0.7 reliability rate and decoding can
be performed at a 0.76 reliability rate.
CHAPTER 4
CONCLUSION
Dekey is an efficient and reliable convergent key management scheme for secure
deduplication. Dekey applies deduplication among convergent keys and distributes
convergent key shares across multiple key servers, while preserving the semantic security
of convergent keys and the confidentiality of outsourced data. In this paper, an important
security concern is addressed in cross-user client-side deduplication of encrypted files in
cloud storage: the confidentiality of users' sensitive files against both outside
adversaries and the honest-but-curious cloud storage server in the bounded leakage
model. On the technical side, we enhanced and generalized the convergent encryption
method, and the resulting encryption scheme supports client-side deduplication of
encrypted files in the bounded leakage model.
APPENDIX 1
SOURCE CODE
STORAGE SERVER
// Constructor for the storage server GUI. Swing fields (frm, pan, bar,
// firstMenu, firstItem, secondItem, jsp, tarea), the log reader brs, the
// log list logs, and the database handle db are assumed to be declared
// elsewhere in the class; the excerpt below is the constructor only.
public StorageServer()
{
    System.out.println("*********STORAGE SERVER********");
    try
    {
        // Load previously recorded log entries into memory.
        logs = new Vector();
        String lins = "";
        while ((lins = brs.readLine()) != null)
            logs.add(lins.trim());
        db = new database_conn();
        // Build the menu bar and the scrollable log view.
        pan.setLayout(null);
        firstMenu.add(firstItem);
        firstMenu.add(secondItem);
        bar.add(firstMenu);
        pan.add(bar);
        bar.setBounds(0, 0, 600, 35);
        frm.setSize(600, 650);
        frm.setLocation(350, 50);
        jsp.setBounds(5, 40, 585, 580);
        firstItem.addActionListener(this);
        secondItem.addActionListener(this);
        tarea.setBackground(new Color(210, 220, 220));
        pan.add(jsp);
        frm.add(pan);
        frm.setVisible(true);
        tarea.append(" Secure Deduplication with Efficient and Reliable Convergent Key Management\n\n");
        tarea.setEditable(false);
    }
    catch (Exception jj)
    {
        jj.printStackTrace();
    }
}
KEY SERVER
// Constructor for the key server GUI; Swing fields and the database handle
// db are assumed declared elsewhere, as in the storage server above.
public KeyServer()
{
    try
    {
        System.out.println("*********KEYSERVER********");
        db = new database_conn();
        pan.setLayout(null);
        firstMenu.add(firstItem);
        firstMenu.add(secondItem);
        firstMenu.add(thirdItem);
        bar.add(firstMenu);
        pan.add(bar);
        bar.setBounds(0, 0, 600, 35);
        frm.setSize(600, 650);
        frm.setLocation(350, 50);
        jsp.setBounds(5, 40, 585, 580);
        firstItem.addActionListener(this);
        secondItem.addActionListener(this);
        thirdItem.addActionListener(this);
        tarea.setBackground(new Color(220, 220, 210));
        pan.add(jsp);
        frm.add(pan);
        frm.setVisible(true);
        tarea.append(" Secure Deduplication with Efficient and Reliable Convergent Key Management\n\n");
        tarea.setEditable(false);
    }
    catch (Exception jj)
    {
        jj.printStackTrace();
    }
}
ObjectOutputStream out = null;
ObjectInputStream in = null;

// Handles a new key-server connection: exchanges IDs over the socket and
// registers the object streams for later use. Fields kid, ID, inputStr,
// outputStr, conId, the SQL string iqry, and db are assumed declared
// elsewhere in the class.
storageComm(Socket sc)
{
    try
    {
        out = new ObjectOutputStream(sc.getOutputStream());
        in = new ObjectInputStream(sc.getInputStream());
        kid = "KS" + ID;
        out.writeObject(kid);                    // announce this server's ID
        String rid = (String) in.readObject();   // read the peer's ID
        inputStr.add(in);
        outputStr.add(out);
        conId.add(rid);
        db.st.executeUpdate(iqry);               // record the connection
    }
    catch (Exception jj)
    {
        jj.printStackTrace();
    }
}
USER
// Constructor for the data-owner (user) GUI; Swing fields, the mail
// widgets (lbl_mail, jsp1, btn_mail, txt_mail), and db are assumed
// declared elsewhere. The mail controls start out disabled.
public user()
{
    System.out.println("*********USER********");
    try
    {
        db = new database_conn();
        pan.setLayout(null);
        firstMenu.add(firstItem);
        firstMenu.add(seventhItem);
        firstMenu.add(secondItem);
        firstMenu.add(thirdItem);
        firstMenu.add(fourthItem);
        firstMenu.add(fifthItem);
        firstMenu.add(sixthItem);
        bar.add(firstMenu);
        pan.add(bar);
        bar.setBounds(0, 0, 600, 35);
        frm.setSize(600, 650);
        frm.setLocation(350, 50);
        jsp.setBounds(5, 40, 585, 580);
        lbl_mail.setBounds(620, 80, 200, 30);
        jsp1.setBounds(620, 130, 200, 100);
        btn_mail.setBounds(620, 240, 150, 30);
        firstItem.addActionListener(this);
        secondItem.addActionListener(this);
        thirdItem.addActionListener(this);
        fourthItem.addActionListener(this);
        fifthItem.addActionListener(this);
        sixthItem.addActionListener(this);
        seventhItem.addActionListener(this);
        btn_mail.addActionListener(this);
        tarea.setBackground(new Color(210, 220, 220));
        pan.add(jsp);
        pan.add(lbl_mail);
        pan.add(jsp1);
        pan.add(btn_mail);
        frm.add(pan);
        frm.setVisible(true);
        lbl_mail.setEnabled(false);
        jsp1.setEnabled(false);
        btn_mail.setEnabled(false);
        txt_mail.setEnabled(false);
        tarea.append(" Secure Deduplication with Efficient and Reliable Convergent Key Management\n\n");
        //tarea.setEditable(false);
    }
    catch (Exception jj)
    {
        jj.printStackTrace();
    }
}
// Method signature assumed (the excerpt begins mid-method): validates the
// elliptic-curve parameters a, b over the field defined by prime. The
// standard non-singularity test is 4a^3 + 27b^2 != 0 (mod prime).
int checkCurveParams(int a, int b, int prime)
{
    int ret = 0;
    try
    {
        int res = ((4 * (a * a * a)) + (27 * (b * b))) % prime;
        // System.out.println("Result "+res);
        if (res == 0)
            ret = 0;   // singular curve: parameters rejected
        else
            ret = 1;   // valid curve parameters
    }
    catch (Exception jj)
    {
        jj.printStackTrace();
    }
    return ret;
}
CONSUMER
// Constructor for the consumer (downloader) GUI; Swing fields and the
// database handle db are assumed declared elsewhere in the class.
public Consumer()
{
try
{
System.out.println("*********CONSUMER********");
db=new database_conn();
pan.setLayout(null);
firstMenu.add(firstItem);
firstMenu.add(secondItem);
firstMenu.add(thirdItem);
//firstMenu.add(fourthItem);
firstMenu.add(seventhItem);
firstMenu.add(sixthItem);
bar.add(firstMenu);
pan.add(bar);
bar.setBounds(0,0,600,35);
frm.setSize(600,650);
frm.setLocation(350,50);
jsp.setBounds(5,40,585,580);
firstItem.addActionListener(this);
secondItem.addActionListener(this);
thirdItem.addActionListener(this);
fourthItem.addActionListener(this);
sixthItem.addActionListener(this);
seventhItem.addActionListener(this);
tarea.setBackground(new Color(210,220,220));
tarea.setFont(new java.awt.Font("Bookman Old Style", 1, 18));
pan.add(jsp);
frm.add(pan);
frm.setVisible(true);
tarea.append("\t Consumer Process\n\n");
tarea.append(" Secure Deduplication with Efficient and Reliable Convergent
Key Management\n\n");
//tarea.setEditable(false);
}
catch(Exception jj)
{
jj.printStackTrace();
}
}
APPENDIX 2
SCREEN SHOTS
USER SETUP
USER PROCESS
FILE UPLOAD
CONFIRMATION
CONSUMER SETUP
REFERENCES
[1] Jin Li, Xiaofeng Chen, Mingqiang Li, Jingwei Li, Patrick P.C. Lee, and Wenjing
Lou (2014), "Secure Deduplication with Efficient and Reliable Convergent Key
Management," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 6.
[2] Acharya, A., Uysal, M., and Saltz, J. (1998), "Active disks: Programming model,
algorithms and evaluation," in Proc. 8th Conf. Architectural Support for
Programming Languages and Operating Systems (ASPLOS), pp. 81-91.
[4] Shamir, A. (1979), "How to share a secret," Commun. ACM, vol. 22, no. 11, pp.
612-613.
[5] Tang, Y., Lee, P.P.C., Lui, J.C.S., and Perlman, R. (2010), "FADE: Secure overlay
cloud storage with file assured deletion," in Proc. SecureComm.