
Chapter 1

INTRODUCTION

1.1 CLOUD COMPUTING – INTRODUCTION

Cloud computing has evolved into a mainstream technology solution for both individual and organizational use. It offers a novel way to deliver diverse services while considerably altering the cost structure behind those services. This new technique and its pricing model change the way businesses operate. Cloud computing embraces features of traditional computing paradigms such as distributed, parallel, and grid computing. Numerous individuals and organizations consume diverse cloud services over the internet. These services fall into three categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

1.2 ORIGIN OF CLOUD COMPUTING


The cloud computing revolution began in the 1950s with the invention of mainframe computing. Mainframe systems introduced a new technology called time sharing, in which several users gained the ability to access the mainframe through dumb terminals. In the 1970s, the Virtual Machine (VM) concept was created. Virtualization software made it possible to run multiple operating systems concurrently, each in a separate environment, on a single system. Using this concept, a mainframe computing environment could be shared among distinct users, which marked an important milestone in the information and communication revolution; virtualization realized the time-sharing concept through virtual machines. Distributed time-sharing systems were introduced with the development of networking technologies in the late 1960s and early 1970s, following the arrival of the main networking technologies, the Local Area Network (LAN) and the Wide Area Network (WAN). Data rates have continued to improve steadily from the late 1980s to the present, making it possible to share multimedia applications that manage combinations of information such as voice, video, images, and ordinary data. These developments paved the way for the arrival of cloud computing technology.

Further, in the 1990s, the telecommunication industry, which had earlier offered point-to-point data links, began offering virtual private networks with lower cost and better quality of service. With the introduction of high-performance computing systems into the market, researchers and scientists began utilizing these systems through time sharing. They experimented with several algorithms to enhance the infrastructure, platforms, and applications, prioritizing CPUs and improving efficiency for end users.

Later, in the 2000s, cloud computing technology became a reality. In early 2008, the open-source OpenNebula toolkit was introduced for setting up private and hybrid clouds. In mid-2008, Gartner identified an opportunity for closer relationships among IT customers across different sectors and observed that enterprises were moving from company-owned infrastructure to per-usage service models. This evolution is still under way, and plenty of open issues remain; researchers are working toward solutions to overcome them.

1.3 SALIENT FEATURES OF CLOUD COMPUTING

1.3.1 On-demand self-service: This is a primary feature of cloud computing that enables consumers to provision cloud resources on demand, whenever they are required, without human interaction with the provider. In on-demand self-service, the user typically accesses cloud services through an online control panel.

1.3.2 Broad network access: Cloud resources are available over the network and accessed through standard internetwork technologies that promote use by heterogeneous client devices such as tablets, PCs, and smartphones. These resources are also manageable from a wide choice of locations with online network access. Enterprises that rely on broad network access to a cloud network must deal with the security concerns that arise from it. This is a debated topic because it touches the heart of the difference between private and public cloud computing: enterprises often select a private cloud service because they worry about information leaking through the openings a public cloud leaves exposed to external networks.

1.3.3 Resource pooling: The cloud service provider's computing resources are dynamically pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically allocated and reallocated according to consumer demand. Services that rely on a resource pooling approach include data storage, processing, and bandwidth. It is difficult to provision and de-provision such dynamic computing resources efficiently for customers without compromising the security and reliability of the system.

The abstraction provided by virtualization, combined with provisioning automation, creates a high degree of utilization and reuse that enables efficient use of capital infrastructure. As in other distributed systems, it also offers location transparency: the customer is generally unaware of the specific location of the allocated resources and knows them only at a higher level of abstraction.

1.3.4 Rapid elasticity: This property enables consumers to automatically request extra storage space on the cloud server or to avail themselves of other types of services. Because of the way cloud computing services are set up, provisioning can be made seamless from the consumer's side. The fact that providers still need to allocate and de-allocate resources is often irrelevant to the client or user. This is a very important feature of cloud technology: in a sense, cloud computing resources appear to be unlimited or always available.

1.3.5 Measured service: Cloud service providers control and optimize consumer resource use through metering at a level of abstraction appropriate to the type of service, such as computing, bandwidth, or storage. However, building a system capable of monitoring the utilization of resources and producing granular reports remains a challenging problem.

1.4 SERVICE DELIVERY MODELS

The IaaS model affords cloud consumers the ability to provision computing and storage resources on demand. The consumer gains the capability to deploy and run software, comprising an operating system and other applications. The consumer is responsible for managing the guest operating system, deployed applications, data storage, and selected network resources; however, they cannot control the underlying cloud infrastructure. The cloud service provider bills IaaS consumers based on the amount of resources allocated to and consumed by them. The assigned resources are virtualized resources, which need to be properly managed; vulnerabilities and risks related to virtualization techniques therefore affect the IaaS model.

The PaaS model affords an application development platform and solution stack as a service to consumers. Consumers can develop their applications without purchasing and managing the hardware and software essential for application development; the PaaS model delivers life-cycle support for building and delivering applications and services. The consumer retains control over the deployed applications and the configuration of the application-hosting environment, but cannot manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage. Authentication and authorization issues and data storage security are the most important security considerations for the PaaS model.

The SaaS model affords cloud consumers access to applications hosted by cloud service providers. This relieves cloud users of installing and maintaining the application on their own systems. The applications are mostly accessed through thin clients such as web browsers. The consumer has control only over their application configuration settings; the required cloud resources are maintained and controlled by the cloud service provider. Various business applications for accounting, invoicing, collaboration, and employee management use the SaaS delivery model, so particular attention must be paid to the access control and identity management used by enterprise applications deployed in the cloud. SaaS consumers are typically charged on a monthly or yearly usage basis, and the services can be accessed from any cloud client connected to the internet. Regardless of the service model, four deployment models exist in cloud computing: private cloud, public cloud, hybrid cloud, and community cloud. Each of these models has unique features and characteristics that meet specific cloud user requirements.

1.5 CLOUD DEPLOYMENT MODELS

In the private cloud deployment model, the cloud infrastructure is operated exclusively for a particular organization's needs. Since the organization gains a high degree of control and transparency, it is easier for corporations to comply with their own security policies, standards, and regulatory requirements. Corporations such as HP, IBM, Cisco, VMware, and EMC are significant players in the private cloud market.

In the public cloud deployment model, cloud resources such as applications, storage, and computing resources are offered by public vendors to large corporate customers and individual users. These resources are managed by a third-party cloud service provider who is in charge of the public cloud offering. Hence, the public cloud consumer gets less control over the physical and logical security aspects of the resources. Amazon Web Services, Microsoft Azure, and IBM's Blue Cloud are examples of public cloud service providers (Velte et al., 2009).
In the community cloud, cloud resources are shared by multiple organizations whose user communities have the same nature of work and common concerns, such as security policy and compliance requirements. The cloud resources are managed by one of the organizations or by a third-party vendor. It offers the benefits of the private cloud without its high investment costs.

In the hybrid cloud deployment model, the cloud resources are a combination of private, public, and community clouds. This model offers the prospect of storing sensitive information in a private cloud and non-sensitive information in a public cloud. It provides varying levels of security, control, and scalability to cloud consumers.

Figure 1.1 A cloud model representing the three cloud service models and the four deployment models

Figure 1.1 shows the cloud computing model with the three cloud delivery services and the four deployment models used in the cloud. The services offered over the different cloud models are still growing, and obstacles are steadily being overcome. Among the diverse services offered by the cloud across its deployment models, data storage stands out as one of the most fundamental services offered by numerous cloud service providers. However, such service providers cannot be trusted to ensure the confidentiality of organizations' and individual users' data.

Figure 1.2 Security concerns at the various levels of the cloud computing model

Figure 1.2 indicates security in depth at the different levels of the cloud computing model (Fortiş et al., 2015). This layered approach provides a way to increase the survivability of a cloud environment in the presence of various attacks.

1.6 SECURITY THREATS AND CHALLENGES AT NETWORK LEVEL

With respect to network-level security, the threats are greater in public clouds than in private clouds. Since private cloud resources reside within the organization's boundaries, the customer has more control over them. In public clouds, however, ensuring appropriate access control, protecting the confidentiality and integrity of consumer data in transit, and keeping internet-facing resources available are the most important concerns in safeguarding network-level security (Subashini and Kavitha, 2011).

1.7 HOST LEVEL SECURITY THREATS AND CHALLENGES

Host-level security concerns are those that affect host resources when a host joins the cloud environment. Security issues at the host level can be considered from the perspective of the different service delivery models and deployment models (Mather et al., 2009).

Threats and challenges specific to a cloud environment at the host level are closely related to virtualization vulnerabilities, such as VM escape and hypervisor threats triggered in a public cloud environment.

1.8 APPLICATION LEVEL SECURITY THREATS AND CHALLENGES

Many organizational and academic customers are keen to deploy their applications on a cloud model in order to save money and to raise the efficiency and reliability of their applications.

However, inefficient access control over networking resources and servers, audit log access, and patch management make cloud applications more vulnerable to numerous security threats. A web-based application developed and deployed in a private cloud must be secured from outside attackers by implementing appropriate access control at the network and host levels. A web-based application deployed in a public cloud must follow a secure Software Development Life Cycle (SDLC), and its APIs must be carefully verified for security.

1.9 DATA LEVEL SECURITY THREATS AND CHALLENGES

All service delivery models require security at the data level. The main aspects of data security embrace data-in-transit, data-at-rest, data in process, data lineage, data provenance, and data remanence (Mather et al., 2009).

Data transmitted over the network is considered "data-in-transit". Data in transit should be protected through proper authentication and authorization of the communicating entities, highly secure encryption methods, and proper transport-level security.

Data stored on a storage medium is considered "data-at-rest". This data can be protected using highly secure encryption methods. However, in a cloud computing environment, encrypting data at rest is often not feasible for applications hosted in the cloud, since encryption can thwart indexing and data search. Even if the data is encrypted during transmission and at rest in the cloud service provider's database, it must be decrypted before it is processed. Although algorithms such as homomorphic encryption are designed to support computation over ciphertext itself, they reduce system performance due to their computational complexity.
Data lineage is a technique of tracing the data path in order to see when and where the data is placed among the cloud service provider's locations, and it is essential for data auditing. However, finding the precise data path is not really possible in a public cloud.

Data provenance is a technique for verifying the integrity of data and certifying the accuracy of computations on it. Data provenance is difficult with the shared resources used by several users in a cloud environment.

Data remanence is the residual representation of data that remains even after efforts have been made to delete it. This residue arises when data is left behind by normal file deletion or when a duplicate copy resides on another server. It may leak sensitive data to unauthorized users (Bloomberg, 2011).
Bring Your Own Device (BYOD) is a mainstream practice for modernizing business and enabling a flexible, adaptable enterprise, and organizations have understood the importance of using it effectively yet safely. A recent finding regarding Dropbox (n.d.) and similar services under BYOD is that files deleted from mobile devices are not always truly gone. Analysts found that records, audio files, pictures, and more could be recovered even though they were thought to be permanently erased both from the device and from the cloud. Another vexing revelation was that metadata, such as user activity history, could likewise be found with a bit of digging. Beyond these findings, there have always been concerns about hackers, and about the fact that consumer data is stored on servers shared with other clients of these "safe cloud storage" companies. Encryption is their response to this charge, yet even that is not foolproof. There are simply too many opportunities for consumer data to leak when users choose one of these open-server cloud storage options.
When using a cloud storage service, consumers have no influence over where the cloud service providers store their information. The providers own the servers and will distribute the consumers' documents however it is convenient for them, which can be a major security risk.

Another issue recently found with some of the popular cloud storage companies is a flaw in encryption protection. Many organizations like to use the cloud not only for secure storage but also as a safe method of sharing and collaboration. It was found, however, that when information is shared between two or more clients in the cloud, it is vulnerable to attack by employees of the cloud storage company itself. They can use a fake key to open the information when it is sent for sharing, view it, and then re-encrypt it and send it on to the intended viewer. While no real instances of this have been found so far, the possibility exists. Secure cloud storage companies can no longer boast of a "zero knowledge environment" for consumer data; consumers must trust that the cloud service will not look into their documents. This is simply one more inherent threat of open-server cloud storage.

Figure 1.3 Data access in a cloud framework.

The cloud framework related to data access, as shown in Figure 1.3, is mainly concerned with security issues such as the confidentiality, integrity, and availability of consumer data.

To ensure the confidentiality of data, the leading edge of defense for any cloud framework is encryption. Encryption routines use complex computations to disguise cloud-protected data. However, applying encryption routines alone is not an appropriate solution for enforcing an organization's complex hierarchical structure (Pearson, 2009).

Similarly, the integrity of data outsourced to the cloud can be ensured by implementing a suitable data auditing framework using a trusted third-party entity.

Data availability can be achieved through redundant data storage across multiple servers, but this may in turn lead to data leakage issues, since backup copies may be retained even after consumers revoke their data storage services from the cloud.

The best way to deal with these security concerns is to devise a suitable data access control framework that offers a way to solve these issues. Before discussing data access control for the cloud, it is essential to recollect the general access control methods available to protect information from unauthorized users.

1.10 ACCESS CONTROL METHODS

Access control is a technique used in systems to regulate the kinds of activities a user can perform on a resource. In formal terms, objects denote the resources being secured by the system, subjects denote the users or processes performing activities on an object, and operations denote all the activities that subjects can perform on objects.

Access control methods have generally been classified into Mandatory Access Control (MAC), Discretionary Access Control (DAC), and Role-Based Access Control (RBAC).

1.10.1 Mandatory Access Control (MAC)

MAC is a kind of access control in which the operating system constrains the ability of a subject to access, or to carry out some sort of operation on, an object. A thread or process is considered a subject, while files and directories are considered objects. Each subject and each object is associated with a set of attributes, and every time a subject tries to access an object, access is granted if and only if an authorization rule exists that permits that user to access the resource.

The MAC model classifies objects and subjects into different security levels, and disclosure of an object is controlled according to the security level the subject holds. David Bell and Leonard LaPadula proposed Multi-Level Security (MLS) in 1973. The MLS model protects information from flowing downwards in the classification system.
Table 1.1 Document Classifications
Document Classification
Top secret
Secret
Confidential
Unclassified

Top Secret is the highest level of classified information; Secret information could cause "serious damage" to national security if it were publicly available, and a Confidential document would cause damage to national security if it were publicly available.

Based on Table 1.1, a Multi-Level Security (MLS) system allows a subject to access an object if the subject's classification is greater than or equal to the object's classification. For example, a subject with a Secret classification is capable of reading and writing Unclassified, Confidential, and Secret documents, but not Top Secret documents.
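This dominance check can be sketched in a few lines of Python; the following is a minimal illustration of the simplified MLS rule used in the example above (subject level greater than or equal to object level), with the level names taken from Table 1.1.

```python
# Numeric ranks for the classification levels of Table 1.1
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top secret": 3}

def mls_access_allowed(subject_level: str, object_level: str) -> bool:
    """Grant access only if the subject's classification dominates
    (is greater than or equal to) the object's classification."""
    return LEVELS[subject_level] >= LEVELS[object_level]

# A Secret-cleared subject may access Confidential documents...
assert mls_access_allowed("Secret", "Confidential")
# ...but not Top secret ones.
assert not mls_access_allowed("Secret", "Top secret")
```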

1.10.2 Discretionary Access Control

DAC is a kind of access control that grants access to objects based on the identity of the subject and/or the groups to which the subject belongs. In a DAC model, the access control policy for an object is defined by the subject that owns it. Since subjects define the access control policies for their own objects, they can grant other subjects access to those objects, which enables a subject to share its files with other subjects.

The access rights can be defined by an access control matrix, in which rows denote subjects, columns denote objects, and each cell defines the operations that the subject may perform on the object.

Table 1.2 An access control matrix for sample files in a Linux system.

Subject   /users/student/john     /users/faculty/bob      /users/admin/steve
Bob       Read, execute           Read, write, execute    Read
John      Read, write, execute    Read                    Read
Steve     Read, write, execute    Read, write, execute    Read, write, execute

As shown in Eq. 1.1, let S be the set of subjects, O the set of objects, and OP the set of operations, and let Pso ⊆ OP denote the set of permissions that subject s ∈ S holds over object o ∈ O. The access control matrix is then the family of these permission sets:

ACM = (Pso) for all s ∈ S, o ∈ O, with Pso ⊆ OP        (1.1)

Even though the access control matrix is a good conceptual model, a literal implementation requires excessive memory. Access Control Lists and the capability-based approach are the two practical realizations of the access control matrix.
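Before turning to these two realizations, a sparse matrix such as Table 1.2 can be illustrated as a nested dictionary; this is a minimal sketch using the subjects and paths of Table 1.2, not a production implementation.

```python
# Sparse access control matrix: subject -> object -> set of operations
ACM = {
    "Bob":   {"/users/student/john": {"read", "execute"},
              "/users/faculty/bob":  {"read", "write", "execute"},
              "/users/admin/steve":  {"read"}},
    "John":  {"/users/student/john": {"read", "write", "execute"},
              "/users/faculty/bob":  {"read"},
              "/users/admin/steve":  {"read"}},
    "Steve": {"/users/student/john": {"read", "write", "execute"},
              "/users/faculty/bob":  {"read", "write", "execute"},
              "/users/admin/steve":  {"read", "write", "execute"}},
}

def allowed(subject: str, obj: str, op: str) -> bool:
    """Check whether Pso (the cell for subject s and object o) contains op."""
    return op in ACM.get(subject, {}).get(obj, set())

assert allowed("Bob", "/users/faculty/bob", "write")
assert not allowed("Bob", "/users/admin/steve", "write")
```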

1.10.2.1 Access Control Lists

An access control list views the access control matrix from the column perspective. It specifies the set of operations that the different subjects are allowed to perform on a particular object; each ACL entry names a subject and the operations it may perform on that object.

For example, the set of operations for the File1 object can be denoted as shown in Eq. 1.2.

ACL(File1) = (John: read; Steve: read, write, execute; Bob: read, execute)        (1.2)

Looking at Eq. 1.2, we can easily identify the set of operations permitted to the subjects {John, Steve, Bob} over the File1 object. However, identifying the set of objects on which a particular subject, such as John or Steve, is permitted to perform operations is difficult.

ACLs are practically implemented in the UNIX file system, in the Solaris, Microsoft Windows NT, and Mac OS operating systems, in network resources such as routers, switches, and firewalls to block illegitimate access, and in SQL relational database systems. The Amazon cloud uses an ACL-based access control system.
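A minimal sketch of this column view follows, keyed by object and using the names of Eq. 1.2: answering "who may do what to File1?" is a single lookup, whereas enumerating everything a given subject may access requires scanning every object's list.

```python
# ACL: object -> subject -> operations (the column view of the matrix)
ACL = {
    "File1": {"John":  {"read"},
              "Steve": {"read", "write", "execute"},
              "Bob":   {"read", "execute"}},
}

def acl_check(obj: str, subject: str, op: str) -> bool:
    """Look up the object's list, then the subject's entry in it."""
    return op in ACL.get(obj, {}).get(subject, set())

assert acl_check("File1", "Steve", "write")
assert not acl_check("File1", "John", "execute")
```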

1.10.2.2 Capability-based approach

The capability-based approach views the access control matrix from the row perspective. A capability is an authority token given to a subject to access a set of objects. While an ACL provides the access details of a particular object, a capability list provides the access knowledge of a specific subject: the capability resides with the subject, whereas the ACL resides with the object (Geambasu et al., 2006).

Each process carries a capability list; when it attempts to access an object, the access control system checks this list to verify that the process holds the right capability. A capability is usually implemented as a privileged data structure consisting of a section that specifies access rights and a section that uniquely identifies the object to be accessed. Capabilities are usually stored by the operating system in a list, with some mechanism in place to prevent the program from directly modifying their contents.
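For contrast with the ACL sketch above, the same information can be stored from the row perspective as capability lists carried by the subjects; this is again a minimal, hypothetical illustration rather than an operating-system implementation.

```python
from typing import NamedTuple

class Capability(NamedTuple):
    obj: str              # uniquely identifies the object
    rights: frozenset     # the access-rights section of the token

# Capability lists: subject -> list of capabilities (the row view)
CAPS = {
    "John":  [Capability("File1", frozenset({"read"}))],
    "Steve": [Capability("File1", frozenset({"read", "write", "execute"}))],
}

def cap_check(subject: str, obj: str, op: str) -> bool:
    """Scan the subject's capability list for a matching token."""
    return any(c.obj == obj and op in c.rights for c in CAPS.get(subject, []))

assert cap_check("Steve", "File1", "write")
assert not cap_check("John", "File1", "write")
```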

1.10.3 Role Based Access Control (RBAC)

In the earlier approaches, MAC and DAC, subjects are frequently added to and revoked from the access control policy. In a MAC system, a new user is given a security classification that grants access to certain objects, whereas in a DAC system, depending on whether it is based on ACLs or capabilities, the new subject must either be added to (or removed from) all relevant ACLs or be issued capabilities to all relevant objects (Bammigatti, 2008).
In many cases, however, new subjects simply fill an existing role. For example, whenever new employees join an organization, they are added to a specific role according to their designation; the access permissions of the roles themselves need not be changed. If a role's permissions do change, the change affects all members of that role. For these reasons, RBAC is considered an alternative to DAC- and MAC-based systems (Na and Cheon, 2000). The Microsoft Azure cloud works on the basis of the role-based access control method.

Three primary rules are defined for RBAC:

1. Role assignment: A subject can exercise a permission only if it has selected or been assigned a role.
2. Role authorization: A subject's active role must be one that the subject is authorized to hold.
3. Permission authorization: A permission is granted to a subject only through its active role.

Role assignment in RBAC can be arranged according to the organization's needs; by default, a higher role can be granted the permissions owned by its sub-roles.

The following notations are used in the RBAC model:

 Sub = Subject = a user or an agent
 R = Role = a title that defines an authority level
 P = Permissions = an access level to a resource
 SE = Session = a mapping relating Sub, R, and/or P
 SA = Subject Assignment
 PA = Permission Assignment

A subject can have several roles, a role can have several subjects, and a role can carry several permissions. A permission can be assigned to many roles, an operation can be assigned several permissions, and a permission can be assigned to several operations.

These many-to-many relations can be denoted as follows:

SA ⊆ Sub × R and PA ⊆ P × R

A subject may have several simultaneous sessions, each with distinct permissions. A minimal sketch of these rules is given below.
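The three RBAC rules can be modelled in a few lines; this is a minimal, hypothetical Python sketch (the subject, role, and permission names are invented for illustration), not Azure's actual RBAC implementation.

```python
# Many-to-many relations: SA ⊆ Sub × R and PA ⊆ P × R
SA = {("alice", "nurse"), ("alice", "auditor"), ("bob", "clerk")}
PA = {("read_record", "nurse"), ("write_log", "auditor")}

class Session:
    """A session binds a subject to one active role (rules 1 and 2)."""
    def __init__(self, subject: str, active_role: str):
        if (subject, active_role) not in SA:   # role assignment/authorization
            raise PermissionError("role not assigned to subject")
        self.subject, self.active_role = subject, active_role

    def can(self, permission: str) -> bool:
        # Rule 3: permission is granted only through the active role
        return (permission, self.active_role) in PA

s = Session("alice", "nurse")
assert s.can("read_record")
assert not s.can("write_log")   # would need a session with the auditor role
```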

1.10.4 Attribute-Based Access Control

Apart from this, another access control model which is popular as ABAC
called as Attribute-Based Access Control. This model is most suggested access
control model for sharing information among diverse and different organizations.

ABAC is an access control method in which access permission is granted to users through policies that combine attributes. The attributes used in policies can be user-related, resource-related, environment-related, and so on. Attribute values can be atomic or set-valued: an atomic value consists of a single value, such as an employee number or employee ID, whereas a set value consists of multiple atomic values, such as blood groups, blood donors, or projects. In general, ABAC evaluates the subject attributes, the object attributes, the environment conditions, and the access control policies that define the allowable operations of subjects over objects. ABAC models are capable of expressing both Discretionary Access Control (DAC) and Mandatory Access Control (MAC) models.

The policies that can be applied in an ABAC model are limited only by the expressiveness of the computational language used to enforce them. This flexibility allows subjects to access objects without the system maintaining individual relationships between each subject and each object. For example, a subject is given a set of attributes upon joining an organization, and an object is given its attributes upon creation (e.g., the subject John Smith is a junior nurse in the Cardiology Department; the object is a folder containing heart patients' medical records). The administrator or object owner creates an access control policy to govern the set of permitted operations (e.g., all junior nurses in the Cardiology Department can access the heart patients' medical records). The attributes and their values can all be modified throughout the lifecycle of subjects, objects, and attributes without changing each subject/object relationship, a property that gives the ABAC model its more flexible access control capability. A sketch of such a policy check follows.
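The nurse example above can be expressed as a small attribute-based policy; this is a minimal sketch assuming dictionary-valued attributes, not tied to any particular ABAC engine (the on_duty environment attribute is added purely for illustration).

```python
# Attributes are assigned independently of any subject/object pairing
subject = {"name": "John Smith", "role": "junior nurse", "dept": "Cardiology"}
obj = {"type": "medical_record", "category": "heart patients"}
env = {"on_duty": True}   # hypothetical environment attribute

def policy(sub: dict, ob: dict, env: dict) -> bool:
    """All junior nurses in the Cardiology Department can access
    the heart patients' medical records (here: only while on duty)."""
    return (sub["role"] == "junior nurse"
            and sub["dept"] == "Cardiology"
            and ob["category"] == "heart patients"
            and env["on_duty"])

assert policy(subject, obj, env)
```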

1.10.5 Attribute-Based Encryption

Another access control model proposed for data outsourced to the cloud is the Attribute-Based Encryption (ABE) method. It is classified into two types: Key-Policy Attribute-Based Encryption (KP-ABE) and Ciphertext-Policy Attribute-Based Encryption (CP-ABE).

In the KP-ABE scheme, data files are associated with a set of attributes, and public/private key pairs are created for each attribute. Each key is linked to an access tree policy that determines which kinds of ciphertext the key can decrypt, while the ciphertexts are labeled with sets of descriptive attributes. KP-ABE combined with a re-encryption technique is generally used for building cloud access control methods.

In the CP-ABE scheme, a user's private key is linked with an arbitrary number of attributes, and whenever a data owner encrypts a data file, they specify an access tree policy over attributes. A user can decrypt a ciphertext only if the user's attributes satisfy the ciphertext's access tree policy. Even if the storage server is an untrusted entity, the outsourced encrypted data can thus be kept confidential in the cloud.
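The decryption condition in CP-ABE reduces to checking whether a user's attribute set satisfies the ciphertext's access tree. The cryptography itself is beyond a short sketch, but the tree-satisfaction test it embodies can be illustrated as follows (a conceptual sketch with invented attribute names; a real scheme performs this check implicitly through pairing-based operations rather than explicit code).

```python
# An access tree: leaves are attributes, internal nodes are AND/OR gates.
# Example policy: ("cardiology" AND "nurse") OR "admin"
tree = ("OR", [("AND", [("ATTR", "cardiology"), ("ATTR", "nurse")]),
               ("ATTR", "admin")])

def satisfies(node, attrs: set) -> bool:
    """Return True if the attribute set satisfies the access tree."""
    kind, payload = node
    if kind == "ATTR":
        return payload in attrs                    # leaf: attribute present?
    results = [satisfies(child, attrs) for child in payload]
    return all(results) if kind == "AND" else any(results)

assert satisfies(tree, {"cardiology", "nurse"})       # this key could decrypt
assert not satisfies(tree, {"cardiology", "doctor"})  # this one could not
```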

1.11 MOTIVATION FOR THIS RESEARCH WORK

All of these data security aspects and the risks related to them make consumers concerned about data security in the cloud. Secure data storage in the cloud, in particular, is the most important aspect that consumers need to worry about in the current trend. Cloud storage services such as ZipCloud (n.d.), Amazon S3 (n.d.), MyAsiaCloud (n.d.), and Google Drive (n.d.) offer finite storage space in a cost-effective manner for clients to store their sensitive data. However, data security challenges arise when users or enterprises outsource their sensitive data to third-party cloud servers.
As data owners move their data onto untrusted cloud servers, demand for and concern about data confidentiality grow (Di Vimercati et al., 2007a). In addition to confidentiality and privacy breaches, untrusted servers could use the data for their own financial benefit, causing enormous economic losses for the owners. In December 2010, a first major data breach happened at Microsoft, which announced that data contained within its Business Productivity Online Suite (BPOS) had been downloaded by unauthorized users. In another example, data leak protection issues at AT&T and Apple exposed the email addresses of 100,000 iPad users to the public (Deltcheva, 2010).

Various research works have been developed to provide secure access control mechanisms that protect cloud-outsourced data from unauthorized users. A direct method is to apply cryptographic schemes to highly sensitive data and to reveal the encryption keys only to authorized users. However, issuing the encryption keys and protecting them from unauthorized users creates another security issue. A number of schemes (Yu et al., 2010; Wan et al., 2012; Hota et al., 2011) have recently been proposed to achieve flexible and fine-grained access control in the cloud. Unfortunately, these schemes do not address deleting a data file when the data owner requests that a cloud consumer's access to it be revoked: a cloud storage provider may not completely expunge all backup copies of the file from its storage servers, and the data may be revealed to malicious users if the encryption keys are obtained through malicious attacks. This research work was motivated by this issue and incorporates a feature that ensures assured file deletion within highly secure, dynamic, and scalable access control schemes.

1.12 THESIS ORGANIZATION

In addition to this introductory chapter, this dissertation, "Study and Analysis of Efficient Access Control Mechanisms in Cloud Environment", is structured into seven chapters that present the research carried out in detail. The chapters are described below.

Chapter 2 summarizes the literature survey of the various access control models proposed for the cloud computing environment and discusses their security and performance analyses, along with their benefits and limitations.
Chapter 3 elaborates on a proposed access control mechanism based on the discretionary access control model, called ACAFD: Secure and Scalable Access Control with Assured File Deletion for Outsourced Data in Cloud. The security and performance analysis of this scheme is presented, and its experimental results are compared with existing access control models.
Chapter 4 deals with an access control mechanism proposed as part of this research work based on the ABAC method, called HB-PPAC: Hierarchy-Based Privacy-Preserving Access Control in public clouds. The scheme is designed to be suitable for energy-constrained devices. Its security and performance analyses and its experimental results, compared with an existing scheme, are detailed in this chapter.
Chapter 5 elaborates on a proposed access control mechanism based on the attribute-based access control method, called CB-HPAC: Cluster-Based Hierarchical Privacy-Preserving Access Control in clouds. The security and performance analysis of this scheme is presented, and its experimental analysis is compared with existing access control models. The experimental results confirm that the suggested scheme provides dynamic, efficient, and scalable access control for cloud-outsourced data.

Chapter 6 elaborates on a proposed efficient access control mechanism based on ciphertext-policy attribute-based encryption, called ERAC-MAC: Efficient Revocable Access Control for Multi-Authority Cloud storage systems. The security and performance analysis of this scheme is presented, and its experimental analysis is compared with existing access control models. The experimental results confirm that the suggested scheme provides scalable, efficient, and fine-grained access control for cloud-outsourced data.

Chapter 7 concludes by consolidating the results of all the presented schemes, showing that they are efficient in terms of lower computation and communication costs and offer a better security perspective.

SUMMARY

This chapter has presented a systematic overview of the basics of cloud computing: its service delivery models, its deployment models, its security threats at the host, network, and data levels, the various access control models, and the motivation for this research work.

