RCA-E-31 Cloud Computing

Introduction to Cloud Computing

What is cloud computing?


The cloud symbol is typically used to represent the internet. Cloud computing is now
commonly used to describe the delivery of software, infrastructure and storage services
over the internet.
Or
Cloud computing is a general term for anything that involves delivering hosted services over
the Internet
OR
Cloud computing is a pool where we can access all services under one roof through the internet. Cloud computing offers a powerful architecture for computation.
According to Evgeny Morozov, “Cloud computing is a great idea for centralization of
computer services under one server/roof.”
OR
Cloud computing is the on-demand availability of computer system resources, especially data
storage and computing power, without direct active management by the user. The term is
generally used to describe data centers available to many users over the Internet.
Cloud computing is the delivery of on-demand computing services -- from applications to storage
and processing power – typically over the internet
and on a pay-as-you-go basis or “Pay for what you need”.
• Cloud computing is a new concept for hosting and delivering services over the Internet under one roof.
• Cloud computing is online-based computing in which data, resources, software, hardware and portable devices are delivered or shared among clients on demand, using virtualization.
• Cloud computing is a combination of distributed computing, grid computing and parallel computing.
• Small and medium-sized businesses are now adopting cloud computing at a much higher rate than large companies.
• A main aim of cloud computing is to reduce processing costs for users.
• Users consume various types of online services offered by cloud computing providers, such as hardware, software, storage and application development platforms, and access many kinds of utility programs, applications and storage devices over the internet.
• The main benefits of cloud computing include cost saving, energy saving, high availability and support for green computing technology.

Characteristics of cloud computing:


Cloud computing is a new-era computing technology in which a collection of software and hardware resources is placed on a network and accessed on demand through the internet, instead of being kept locally within the enterprise. The cloud service provider holds the responsibility to control and deliver all the software and hardware, using virtualization, among the various types of clients.
The NIST (National Institute of Standards and Technology) definition lists five essential characteristics of cloud computing; multi-tenancy is often added as a sixth:
1. On-Demand Self-Service
2. Broad Network Access
3. Resource Pooling
4. Rapid Elasticity or Expansion
5. Measured Service.
6. Multi-tenancy

1. On-demand self-service: The ability for an end user to sign up and receive services without
the long delays that have characterized traditional IT services.

2. Broad network access: Ability to access the service via standard platforms (desktop, laptop,
mobile etc)

3. Resource pooling: Resources are pooled across multiple customers.

– Examples of resources include storage, processing, memory, and network bandwidth.

4. Rapid elasticity: Capability can scale to cope with demand peaks

5. Measured Service: Billing is metered and delivered as a utility service.

Users of the cloud can benefit from other organizations delivering services associated with
their data, software and other computing needs on their behalf, without the need to own or
run the usual physical hardware (such as servers) and software (such as email) themselves.

OR

1. On-demand self-service:-A user can obtain computing services such as email, applications, network or server services without requiring interaction with each service provider.

Self-service means that the consumer performs all the actions needed to acquire the service himself, instead of going through an IT department; the consumer's request is then automatically processed by the cloud infrastructure, without human intervention on the provider's side.

2. Broad Network Access:-Cloud capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous clients such as mobile phones and laptops.

3. Resource pooling:-The provider's computing resources are pooled together to serve multiple customers, with different physical and virtual resources dynamically assigned and reassigned according to customer demand.
– There is a sense of location independence, in that the customer generally has no control or knowledge over the exact location of the provided resources (e.g. country, state, or datacenter).
– Examples of resources include storage, processing, memory, and network bandwidth.

4. Rapid elasticity:-Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand.
– To the consumer, the capabilities available for provisioning often appear to be unlimited
and can be appropriated in any quantity at any time.

5. Measured service:-Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g. storage, processing, bandwidth, and active user accounts).
– Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

6. Multi-tenancy:-In a private cloud the customers, also called tenants, can be different business divisions inside the same company. In a public cloud, the customers are often entirely different organizations.

Most public cloud providers use the multi-tenancy model. Multi-tenancy allows many customers to share one server instance, which is less expensive and makes it easier to deploy updates to a large number of customers (see the sketch after this list).
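To make characteristics 5 and 6 concrete, here is a minimal Python sketch of a metering service that serves several tenants from one shared instance and bills only for recorded usage. All tenant names and unit prices are hypothetical, invented for illustration:

```python
from collections import defaultdict

# Hypothetical unit prices in dollars (illustrative only, not real rates).
RATES = {"cpu_hours": 0.05, "storage_gb_hours": 0.0001, "gb_transferred": 0.01}

class MeteringService:
    """One shared service instance that meters usage for many tenants."""

    def __init__(self):
        # tenant -> resource -> accumulated quantity (multi-tenancy).
        self.usage = defaultdict(lambda: defaultdict(float))

    def record(self, tenant, resource, amount):
        self.usage[tenant][resource] += amount

    def bill(self, tenant):
        # Measured service: the bill is derived entirely from metered usage.
        return sum(RATES[r] * qty for r, qty in self.usage[tenant].items())

meter = MeteringService()
meter.record("acme-corp", "cpu_hours", 120)
meter.record("acme-corp", "storage_gb_hours", 50_000)
meter.record("globex-inc", "gb_transferred", 300)  # a second tenant, same instance
print(f"acme-corp owes ${meter.bill('acme-corp'):.2f}")   # $11.00
print(f"globex-inc owes ${meter.bill('globex-inc'):.2f}")  # $3.00
```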

• Advantages:
• Cloud computing is low in cost and affordable because we are billed as per usage.
• The storage and maintenance of large amounts of information or data are possible.
• Cloud computing is very flexible.
• It provides high security.
• The option of data recovery is available.
• Data can be managed easily.
• It has an automatic update option.

• Disadvantages:
• Along with the advantages, it also has some disadvantages, which are as follows:
• One disadvantage of cloud computing is that it carries risk.
• It requires a continuous internet connection and has migration issues.

• Advantages Of Cloud Computing:-


• Low Cost: To run cloud technology, users don't require a high-powered computer, because the application runs in the cloud and not on the user's PC.
• Storage capacity: Cloud storage capacity is effectively unlimited; providers generally offer a huge capacity of 2000-3000 GB or more, based on requirements.
• Low cost of IT infrastructure: As discussed earlier, the investment will be less if an organization uses cloud technology; fewer IT staff and server engineers are required.
• Increased computing power: Cloud servers have a very high capacity for running tasks and processing applications.
• Reduced software costs: The cloud minimizes software costs, as users don't need to purchase software for the organization or for every computer.
• Updating: Instant software updates are possible, and users don't have to choose between obsolete software and expensive upgrades.

• Disadvantages Of Cloud Computing:-


• Internet speed: Cloud technology requires a high-speed internet connection, as web-based applications often require large amounts of bandwidth.
• Constant Internet Connection: It is impossible to use cloud infrastructure without the internet. To access any application or cloud storage, a constant internet connection is required.
• Security: Data storage might not be secure. With cloud computing, all the data is stored in the cloud, so an unauthorized user may gain access to the user's data in the cloud.
• History of Cloud Computing:-
• Before cloud computing emerged, there was client/server computing, which is basically centralized storage in which all the software applications, all the data and all the controls reside on the server side.
• If a single user wants to access specific data or run a program, he/she needs to connect to the server, gain appropriate access, and then do his/her business.
• Later, distributed computing came into the picture, where all the computers are networked together and share their resources when needed.

• On the basis of these computing models, the concepts of cloud computing emerged and were later implemented.
• Cloud computing is not the latest technology. Cloud computing has evolved through a number of phases, which include grid computing, utility computing, application service provision and Software as a Service, etc.
But the overall concept of delivering computing resources through a global network started in the 1960s.
• By 2020 the cloud computing market was forecast to exceed $241 billion. But how we got here, and where all this started, is the history of cloud computing.
• Around 1961, John McCarthy suggested in a speech at MIT that computing could be sold like a utility, just like water or electricity. It was a brilliant idea, but like all brilliant ideas it was ahead of its time: for the next few decades, despite interest in the model, the technology simply was not ready for it.
• It was a gradual evolution that started in the 1950s with mainframe computing.
• Multiple users were capable of accessing a central computer through dumb terminals,
whose only function was to provide access to the mainframe. Because of the costs to buy
and maintain mainframe computers, it was not practical for an organization to buy and
maintain one for every employee. Nor did the typical user need the large (at the time)
storage capacity and processing power that a mainframe provided. Providing shared access to a single resource was the solution that made economic sense for this sophisticated piece of technology.
• After some time, around 1970, the concept of virtual machines (VMs) was created.
• Using virtualization software like VMware, it became possible to execute one or more
operating systems simultaneously in an isolated environment. Complete computers
(virtual) could be executed inside one physical hardware which in turn can run a
completely different operating system.
• The VM operating system took the 1950s’ shared access mainframe to the next level,
permitting multiple distinct computing environments to reside on one physical
environment. Virtualization came to drive the technology, and was an important catalyst
in the communication and information evolution.
• In the 1990s, telecommunications companies started offering virtualized private
network connections.
• Historically, telecommunications companies only offered single dedicated point–to-point
data connections. The newly offered virtualized private network connections had the same
service quality as their dedicated services at a reduced cost. Instead of building out
physical infrastructure to allow for more users to have their own connections,
telecommunications companies were now able to provide users with shared access to the
same physical infrastructure.
• The following list briefly explains the evolution of cloud computing:
• Grid computing: Solving large problems with parallel computing
• Utility computing: Offering computing resources as a metered service
• SaaS: Network-based subscriptions to applications
• Cloud computing: Anytime, anywhere access to IT resources delivered dynamically as a
service.
The actual history of cloud computing is not that old; the first business and consumer cloud computing service websites (Salesforce.com and Google) were launched in 1999. Cloud computing is tied directly to the development of the Internet and business technology, since cloud computing is the solution to the problem of how the Internet can help improve business technology.
• Cloud computing is one of the most innovative technologies of our time. Following is a brief history of cloud computing.


Of course, time passed, technology caught up with that idea, and after a few years the following milestones occurred:
In 1999, Salesforce.com started delivering applications to users through a simple website. The applications were delivered to enterprises over the Internet, and in this way the dream of computing sold as a utility came true.

In 2002, Amazon started Amazon Web Services, providing services like storage, computation and even human intelligence. However, a truly commercial service open to everybody only arrived with the launch of the Elastic Compute Cloud in 2006.
In 2009, Google Apps also started to provide cloud computing enterprise applications.
Of course, all the big players are present in the cloud computing evolution, some were earlier,
and some were later.
In 2009, Microsoft launched Windows Azure, and companies like Oracle and HP have all
joined the game. This proves that today, cloud computing has become mainstream.
Or
History & Evolution of Cloud Computing:-
EARLY 1960S:-
The computer scientist John McCarthy came up with the concept of timesharing, enabling organizations to simultaneously use an expensive mainframe. This computing model is described as a significant contribution to the development of the Internet, and a pioneer of cloud computing.
IN 1969:-
The idea of an “Intergalactic Computer Network” or “Galactic Network” (a computer
networking concept similar to today’s Internet) was introduced by J.C.R. Licklider, who was
responsible for enabling the development of ARPANET (Advanced Research Projects Agency
Network). His vision was for everyone on the globe to be interconnected and being able to
access programs and data at any site, from anywhere.
IN 1970:-
Using virtualization software like VMware, it became possible to run more than one operating system simultaneously in an isolated environment: a completely different computer (virtual machine) could run inside a host operating system.
IN 1997:-The first known definition of the term “Cloud Computing” seems to be by Prof.
Ramnath Chellappa in Dallas in 1997 – “A computing paradigm where the boundaries of
computing will be determined by economic rationale rather than technical limits alone.”
IN 1999:-
The arrival of Salesforce.com in 1999 pioneered the concept of delivering enterprise applications via a simple website. The services firm paved the way for both specialist and mainstream software firms to deliver applications over the Internet.

IN 2003:-
The first public release of Xen, which provides a Virtual Machine Monitor (VMM), also known as a hypervisor: a software system that allows the execution of multiple virtual guest operating systems simultaneously on a single machine.
IN 2006:-
In 2006, Amazon expanded its cloud services. First was its Elastic Compute cloud (EC2), which
allowed people to access computers and run their own applications on them, all on the cloud.
Then they brought out Simple Storage Service (S3). This introduced the pay-as-you-go model
to both users and the industry as a whole, and it has basically become standard practice now.
IN 2013:-
The Worldwide Public Cloud Services Market totalled £78bn, up 18.5 per cent on 2012, with
IaaS (infrastructure-as-a-service) the fastest growing market service.
IN 2014:-
In 2014, global business spending for infrastructure and services related to the cloud will
reach an estimated £103.8bn.

• Types of Cloud Model :-


If we analyze cloud technology, we will see that most people separate the cloud computing model into two distinct sets:
On the basis of deployment model: This refers to the management of the cloud's infrastructure. The cloud hosting deployment model designates the exact category of the cloud environment, its size and its access mechanism. It also tells the nature and purpose of the cloud.
There are four types of deployment model –
1. Public Cloud 2. Private Cloud 3. Hybrid Cloud 4. Community Cloud.
On the basis of service model: Cloud computing is a broad term which covers an extensive range of services. It is composed of the particular types of services that a cloud computing platform allows its users to access.
There are three types of service model –
1. Infrastructure-as-a-Service (IaaS),
2. Platform-as-a-Service (PaaS) and
3. Software-as-a-Service (SaaS).

1. Public Cloud:-A public cloud allows easy accessibility of systems and services to the general public. When a cloud is available to the general public on a pay-per-use basis, that cloud is called a 'Public Cloud'. The customer has no visibility over the location of the cloud computing infrastructure. It is based on the standard cloud computing model. Examples of public clouds are Amazon EC2, the Windows Azure service platform, IBM's Blue Cloud, Microsoft, Google, Rackspace etc.


• Advantages of Public Cloud Model
1) Low Cost:-A public cloud has a lower cost than a private or hybrid cloud, because it shares the same resources among a large number of consumers.
2) Reliable:-A public cloud provides a large number of resources from different locations; if any resource fails, the public cloud can employ another one.
3) Flexible:-It is very easy to integrate a public cloud with a private cloud, which gives consumers a flexible approach.
4) Location Independent:-It ensures independence of location, because public cloud services are delivered through the Internet.
5) High Scalability:-Cloud resources are available on demand from a pool of resources, which means they can be scaled up or down according to requirements.
Disadvantages of Public Cloud Model
1) Low security:-In the public cloud model, data is stored off-site and resources are shared publicly; hence it does not ensure a high level of security.
2) Less customizable:-It is less customizable than a private cloud.
2. Private Cloud:-The private cloud allows the accessibility of systems and services within an organization. A private cloud is operated only within a particular organization, but it may be managed internally or by a third party. The internal data centers of business organizations which are not made available to the general public are termed a private cloud. As the name suggests, the private cloud is dedicated to the customer itself. Private clouds are more secure than public clouds. A private cloud uses virtualization technology and is hosted on the company's own servers. Examples of private cloud technology are Eucalyptus and VMware.


Advantages of Private Cloud Model:-
1) High security and privacy:-Private cloud resources are drawn from a distinct pool of resources and hence are highly secure.
2) More Control:-A private cloud gives more control over its resources and hardware than a public cloud, because it is accessed only within the boundary of an organization.
Disadvantages of Private Cloud Model:-
1) Restriction:-A private cloud is only accessible locally and is very difficult to deploy globally.
2) More Cost:-A private cloud costs more than a public cloud.
3) Inflexible price:-In order to fulfill demand, purchasing new hardware is very costly.
4) Less Scalability:-Private clouds can be scaled only within the capacity of internally hosted resources.
3. Hybrid Cloud:-A combination of private and public cloud is called a hybrid cloud. Companies use their own infrastructure for normal usage and hire the public cloud in the event of heavy network traffic or high data load.
Non-critical activities are performed by the public cloud while critical activities are performed by the private cloud, as sketched below.
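A minimal Python sketch of the routing rule just described (often called cloud bursting), where critical work stays on the private cloud and non-critical work bursts to the public cloud under heavy load; the capacity figure and the two dispatch functions are hypothetical placeholders:

```python
PRIVATE_CLOUD_CAPACITY = 100  # hypothetical job capacity of in-house infrastructure

def run_on_private_cloud(job):
    print(f"{job}: running on in-house (private cloud) infrastructure")

def run_on_public_cloud(job):
    print(f"{job}: bursting to hired public-cloud capacity")

def dispatch(job, current_load, critical=False):
    # Critical activities stay on the private cloud; non-critical work
    # may move to the public cloud when traffic exceeds in-house capacity.
    if critical or current_load < PRIVATE_CLOUD_CAPACITY:
        run_on_private_cloud(job)
    else:
        run_on_public_cloud(job)

dispatch("payroll-batch", current_load=150, critical=True)  # stays private
dispatch("image-resize", current_load=150)                  # bursts to public cloud
```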


• Advantages of Hybrid Cloud Model:-
1) Scalable:-It provides the scalability features of both public and private clouds.
2) Flexible and secure:-It provides secure resources because of the private cloud and scalable resources because of the public cloud.
3) Cost effective:-It has a lower cost than a private cloud.
Disadvantages of Hybrid Cloud Model:-
1) Networking issues:-Networking becomes complex because of the mix of private and public clouds.
2) Security Compliance:-It is necessary to ensure that cloud services are compliant with the security policies of the organization.
4. Community cloud :-A community cloud is a cloud service model that provides a cloud
computing solution to a limited number of individuals or organizations that is governed,
managed and secured commonly by all the participating organizations or a third party
managed service provider.
Or
The cloud service is shared among various organizations and companies which belong to the same community with common concerns. It can be managed either by a third party or internally. Example: Salesforce.com
Or
Community Cloud is another type of cloud computing in which the setup of the cloud is shared mutually among different organizations that belong to the same community or area. An example of such a community is one where organizations/firms operate alongside financial institutions/banks. It is a multi-tenant setup, developed using the cloud, among different organizations that belong to a particular community or group having similar computing concerns.

2. Cloud Service Models:-These services are broadly divided into three categories:
1. Software-as-a-Service (SaaS),
2. Infrastructure-as-a-Service (IaaS), and
3. Platform-as-a-Service (PaaS).
Software as a Service | SaaS:-SaaS is a software distribution model in which applications are hosted by a cloud service provider and made available to customers over the internet. SaaS is also known as "On-Demand Software". OR
SaaS is a method of software delivery that allows data to be accessed from any device with an
Internet connection and web browser. In this web-based model, software vendors host and
maintain the servers, databases and code that constitute an application. This is a significant
departure from the on-premise software delivery model.
In SaaS, software and associated data are centrally hosted on the cloud server. SaaS is
accessed by users using a thin client via a web browser. SaaS uses the web to deliver
applications that are managed by a third-party vendor and whose interface is accessed on the
clients’ side. Most SaaS applications can be run directly from a web browser without any
downloads or installations required, although some require plugins.
Because of the web delivery model, SaaS eliminates the need to install and run applications
on individual computers.
SaaS Examples: Google Apps, Salesforce, Workday, Concur, Citrix GoToMeeting, Cisco WebEx
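Since SaaS is delivered over plain HTTPS, a thin client can be just a few lines of Python using the requests library. The endpoint, path, and token below are hypothetical placeholders, not any real vendor's API:

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical SaaS endpoint and credential (placeholders only).
BASE_URL = "https://app.example-saas.com/api/v1"
TOKEN = "replace-with-a-real-api-token"

# The vendor hosts the servers, database, and application code; the
# client only sends authenticated HTTP requests and reads the results.
resp = requests.get(
    f"{BASE_URL}/reports",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for report in resp.json():
    print(report)
```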
Advantages of SaaS cloud computing layer:-

1) Easy to buy:-SaaS pricing is based on a monthly or annual fee, so SaaS allows organizations to access business functionality at a low cost, less than that of licensed applications.

Unlike traditional software, which is sold under a license with an up-front cost, SaaS providers generally price their applications using a subscription fee, most commonly a monthly or annual fee.

2) Less hardware required for SaaS:-The software is hosted remotely, so organizations don't
need to invest in additional hardware.

3) Low Maintenance required for SaaS:-Software as a service removes the need for installation, set-up, and often daily upkeep and maintenance for organizations. The initial set-up cost for SaaS is typically less than for enterprise software. SaaS vendors price their applications based on usage parameters, such as the number of users using the application, so SaaS is easy to monitor and updates are automatic.

4) No special software or hardware versions required:-

All users have the same version of the software and typically access it through a web browser. SaaS reduces IT support costs by outsourcing hardware and software maintenance and support to the SaaS provider.

Disadvantages of SaaS cloud computing layer:-

1) Security:-Data is stored in the cloud, so security may be an issue for some users. However, cloud computing is no more secure than an in-house deployment.

2) Latency issue:-Because the data and the application are stored in the cloud at a variable distance from the end user, there is a possibility of greater latency when interacting with the application than in a local deployment. The SaaS model is therefore not suitable for applications that demand response times in milliseconds.

3) Total Dependency on Internet:-Without an internet connection, most SaaS applications are not usable.

4) Switching between SaaS vendors is difficult:-Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the Internet and then converting and importing them into the other SaaS application.

2. Infrastructure as a Service | IaaS:-IaaS is one of the layers of the cloud computing platform wherein the customer organization outsources its IT infrastructure such as servers, networking, processing, storage, virtual machines and other resources. Customers access these resources over the internet, i.e., a cloud computing platform, on a pay-per-use model.
IaaS, earlier called Hardware as a Service (HaaS), is a cloud computing platform-based model. In traditional hosting services, IT infrastructure was rented out for specific periods of time, with a pre-determined hardware configuration. The client paid for the configuration and time, regardless of actual use. With the IaaS cloud computing platform layer, clients can dynamically scale the configuration to meet changing requirements, and are billed only for the services actually used.
The IaaS layer eliminates the need for every organization to maintain its own IT infrastructure.

IaaS is offered in three models: public, private, and hybrid cloud. Private cloud implies that the infrastructure resides at the customer's premises. In the case of public cloud, it is located at the cloud vendor's data center; and hybrid cloud is a combination of the two, with the customer choosing the best of both worlds.
IaaS Examples: Amazon Web Services (AWS), Cisco Metapod, Microsoft Azure, Google
Compute Engine (GCE), Joyent
Advantages of IaaS cloud computing layer:-

1) You can dynamically choose a CPU, memory and storage configuration as per your needs (see the sketch after this list).
2) You can easily access the vast computing power available on an IaaS cloud platform.
3) You can eliminate the need to invest in rarely used IT hardware.
4) The IT infrastructure is handled by the IaaS cloud platform vendor.
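For example, with AWS (one of the IaaS providers listed above), the boto3 SDK lets the CPU/memory configuration be chosen as a launch-time parameter, illustrating advantage 1. This is a sketch: the AMI ID is a placeholder and AWS credentials are assumed to be configured separately:

```python
import boto3  # AWS SDK for Python: pip install boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The hardware configuration is just a parameter (the instance type),
# chosen at launch time instead of being bought up front.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",          # pick a larger type for more CPU/RAM
    MinCount=1,
    MaxCount=1,
)
print("Launched instance", response["Instances"][0]["InstanceId"])
```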
Disadvantages of IaaS cloud computing layer:-

1) There is a risk of the IaaS vendor gaining access to the organization's data, but this can be avoided by opting for a private cloud.
2) The IaaS model is dependent on internet availability.
3) It is also dependent on the availability of virtualization services.
4) The IaaS model can limit user privacy and customization options.
Platform as a Service | PaaS:-PaaS is a developer programming platform created for programmers to develop, test, run and manage applications.
A developer can write an application and deploy it directly into this layer easily.
PaaS allows you to create applications using software components that are built into the PaaS (middleware). Applications using PaaS inherit cloud characteristics such as scalability, high availability, multi-tenancy, SaaS enablement, and more.
PaaS extends and abstracts the IaaS layer by removing the hassle of managing individual virtual machines.
In the PaaS model, back-end scalability is handled by the cloud service provider, so the end user does not have to worry about managing the infrastructure.
PaaS services are hosted in the cloud and accessed by users simply via their web browser. All the infrastructure needed to run the applications is provided over the internet.
Apprenda is one provider of a private cloud PaaS for .NET and Java.
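As an illustration of what a PaaS deployment unit can look like, here is a minimal Flask web application. On a typical PaaS the developer pushes only code like this, and the platform supplies the runtime, servers, and scaling around it; the port used below is an assumption for local testing only:

```python
from flask import Flask  # pip install flask

app = Flask(__name__)

@app.route("/")
def index():
    # The developer writes only application logic; the PaaS provisions
    # and manages the infrastructure that serves it.
    return "Hello from a PaaS-hosted app!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # local test run; a PaaS injects its own port
```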

Advantages of PaaS cloud computing layer:-

1) Simplified Development:-Developers can focus on development and innovation without worrying about the infrastructure.

2) Lower risk:-No up-front investment in hardware and software is required. Developers only need a PC and an internet connection to start building applications.

3) Prebuilt business functionality:-Some PaaS vendors provide predefined business functionality, so users can avoid building everything from scratch and can start their projects directly.

4) Instant community:-PaaS vendors frequently provide online communities where developers can get ideas, share experiences and seek advice from others.

5) Scalability:-Applications deployed on PaaS can scale from one to thousands of users without any changes to the applications.

Disadvantages of PaaS cloud computing layer:-

1) Vendor lock-in:-One has to write applications according to the platform provided by the PaaS vendor, so migrating an application to another PaaS vendor can be a problem.
2) Data Privacy:-Corporate data, whether critical or not, is private, so if it is not located within the walls of the company there can be a risk in terms of data privacy.
3) Integration with the rest of the system's applications:-It may happen that some applications are local and some are in the cloud, so there is a chance of increased complexity when we want to use data in the cloud together with local data.

Cloud Computing Architecture:-The cloud computing architecture consists of cloud services, middleware, software components, resources, their geographic location, and the externally visible attributes of these components and the relationships among them. In cloud computing, security mainly depends on choosing the right architecture for the right application.

There are many components in the architecture of cloud computing. These components are loosely connected with each other. Cloud architecture can be broadly classified as follows:
1. Front-end, where the client interacts.
2. Back-end, which is the cloud section.


1. Frontend:-The client or user side of the cloud computing model is called the front end. The front end consists of the clients' or users' computing devices and the different applications and interfaces needed to access cloud computing platforms.
2. Backend:-The cloud model itself is called the back end. It includes all the resources, such as computers, servers, storage devices, deployment models, services and the various security mechanisms required to offer different cloud computing services. The back end provides traffic control, built-in security and different protocols. It consists of servers running different protocols, which interface the devices with each other.
• A cloud computing system can host any type of web application program, such as data processing, video games, entertainment and software development. To administer the entire cloud computing system, a central server is established. It manages the traffic and also monitors user demand to make sure that the entire system works well without any complexity. The server follows a set of rules called protocols. It uses a special type of software, known as middleware, to communicate with the users who are connected to the cloud server.
• If the service provider has many customers, there is a great demand for vast storage space. The cloud computing system must maintain a copy of each user's data, which is known as redundancy.
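A minimal Python sketch of the redundancy idea just described: every write is copied to several replicas, so the data survives the loss of any single copy. The in-memory dictionaries stand in for real storage servers:

```python
class RedundantStore:
    """Keeps N copies of every object so a failed replica loses no data."""

    def __init__(self, num_replicas=3):
        self.replicas = [dict() for _ in range(num_replicas)]

    def put(self, key, value):
        # Redundancy: the same object is written to every replica.
        for replica in self.replicas:
            replica[key] = value

    def get(self, key):
        # Read from the first replica that still holds the object.
        for replica in self.replicas:
            if key in replica:
                return replica[key]
        raise KeyError(key)

store = RedundantStore()
store.put("user-42/profile.json", '{"name": "Alice"}')
store.replicas[0].clear()                 # simulate losing one replica
print(store.get("user-42/profile.json"))  # still readable from another copy
```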
• Advantages of cloud computing architecture:-The following are the advantages of cloud computing architecture:
• 1. Minimum administrator effort is required.
• 2. Pay as you go, that is, contract flexibility.
• 3. Elasticity and availability.
• 4. Quick application deployment.
• 5. Easy to manage.
• 6. More efficient usage.
• 7. Rapid deployment.


• Major Cloud Computing Vendors:-There are quite a number of cloud computing vendors. Major players in the marketplace include Amazon (AWS), Microsoft (Azure), Google, and IBM.

Major Issues/Challenges of Cloud Computing:-

Cloud computing is a technology which allows the user to access resources by means of front-end machines; there is no requirement to install any software.

• Cloud computing is used to enable global access to shared pools of resources such as services, apps, data, servers, and computer networks. It is done on either a third-party server located in a data center or a privately owned cloud. This makes data-accessing devices more reliable and efficient, with minimal administration effort.

• Here are five common challenges you must consider before implementing cloud computing technology:

1. Security Issues

2. Data Issues

3. Performance Issues

4. Energy Related Issues

5. Fault Tolerance

• 1. Security Issues:-Security is a big issue for any cloud computing system. There is a risk that a malicious user can pierce the cloud by imitating a legitimate user, thereby contaminating the whole cloud and affecting the many customers who share it. The security concerns are as follows:

• 1. Data Integrity

• 2. Data theft

• 3. Security at the vendor level

• 4. Security at the user level

• 5. Information security

• 2. Data Issues :-

• 1. Data loss

• 2. Data location

• 3. Data lock-in

• 4. Data confidentiality and auditability

• 5. Data deletion

• 6. Data restitution

• 7. Service level agreements

• 3. Performance Issues:-Bad application performance leads companies to lose customers, decreases employee productivity, and reduces bottom-line profits. Applications can crash due to bad performance, and if an application cannot perform effectively due to a rise in traffic, the organization may lose customers. The following are the performance issues faced in cloud computing:

• 1. Poor application performance

• 2. Slow access to application and data

• 3. Administrative and geographical scalability

• 4. Energy Related Issues:-

• Cloud computing is rapidly increasing in significance as growing numbers of enterprises, as well as individuals, move their workloads to cloud service providers. Services offered by cloud providers such as Microsoft, Google and IBM are executed on thousands of servers spread across several geographically distributed data centers.

• 5. Fault Tolerance:-Fault tolerance is one of the significant challenges of cloud computing. Fault tolerance covers all the techniques required to allow a system to tolerate software defects.

• Amazon EC2:-
• EC2, short for Amazon Elastic Compute Cloud, is a commercial web service from Amazon Web Services (AWS) that lets customers "rent" computing resources from the EC2 cloud. EC2 provides storage, processing, and web services to customers. EC2 is a virtual computing environment that enables customers to use web service interfaces to launch instances with a variety of operating systems, load them with their custom applications, manage their network's access permissions, and run their image using as many or as few systems as they need.

• OR

• Amazon Elastic Compute Cloud is a pioneering cloud infrastructure product that allows users to create powerful virtual servers on demand. Amazon EC2 is built on the server consolidation/virtualization concept, where the entire computing power of server hardware can be divided into multiple instances and offered to the end user over the Internet as computing instances.

• Because the computing instances provided are software based, each unique instance is scalable, and users can create an entire virtual data center in the cloud. Amazon EC2-created instances can be accessed through open Simple Object Access Protocol (SOAP) application programming interface (API) support, giving developers the liberty to create various types of applications, just as with an on-premises computing infrastructure. The instance provided by EC2, commonly known as a virtual machine, is created from an Amazon machine image and is hosted on the Xen hypervisor, a server virtualization software.

• Features of Amazon EC2:-

• Amazon EC2 provides the following features:

• Virtual computing environments, known as instances

• Preconfigured templates for your instances, known as Amazon Machine Images (AMIs), that package the bits you need for your server (including the operating system and additional software)

• Various configurations of CPU, memory, storage, and networking capacity for your instances, known
as instance types

• Secure login information for your instances using key pairs (AWS stores the public key, and you store the
private key in a secure place)

• Storage volumes for temporary data that's deleted when you stop or terminate your instance, known
as instance store volumes

• Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS), known
as Amazon EBS volumes

• Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known
as Regions and Availability Zones

• A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your
instances using security groups

• Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses

• Metadata, known as tags, that you can create and assign to your Amazon EC2 resources

• Virtual networks you can create that are logically isolated from the rest of the AWS cloud, and that you can optionally connect to your own network, known as virtual private clouds (VPCs)
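Two of the features above, security groups and tags, can be exercised through the boto3 SDK. This is a sketch: the group name, CIDR range, and instance ID are placeholders, and credentials are assumed to be configured separately:

```python
import boto3  # AWS SDK for Python: pip install boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Security group: a firewall rule naming protocol, ports, and source IPs.
sg = ec2.create_security_group(
    GroupName="web-servers",  # placeholder name
    Description="Allow inbound HTTP",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpProtocol="tcp",
    FromPort=80,
    ToPort=80,
    CidrIp="0.0.0.0/0",  # placeholder: open to all source IPs
)

# Tags: metadata you create and assign to EC2 resources.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[{"Key": "Environment", "Value": "teaching-demo"}],
)
```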

• Cloud Computing Tools/Software:-

There are various tools and software packages used to control, manage and operate a cloud computing environment:

• 1. Eucalyptus

• 2. Nimbus

• 3. OpenNebula

• 4. CloudSim

• 1. Eucalyptus (Software):-
Eucalyptus is paid and open-source computer software for building Amazon Web Services (AWS)-
compatible private and hybrid cloud computing environments, originally developed by the company
Eucalyptus Systems.

• Eucalyptus is an acronym for Elastic Utility Computing Architecture for Linking Your Programs To Useful
Systems. Eucalyptus allows pooling resources i.e. -compute, storage, and network resources that can be
dynamically scaled up or down as application workloads change.

• Amazon Web Services are accessible over the internet through Amazon.com, and were officially launched in 2006. Amazon Simple Storage Service (S3) is one of the fundamental services of Amazon.

• Marten Mickos was the CEO of Eucalyptus. In September 2014, Eucalyptus was acquired by Hewlett-
Packard and then maintained by DXC Technology. After DXC stopped developing the product in late
2017, AppScale Systems forked the code and started supporting Eucalyptus customers.

• History:-Development of the software began with the origin of the project, virtual grid application development, at Rice University and additional institutions from 2003 to 2008. Eucalyptus software was integrated with the Ubuntu 9.04 release in 2009.


Released versions of Eucalyptus:-In order to sustain compatibility, Eucalyptus signed a formal agreement with Amazon Web Services (AWS) in March 2012.

• The following is the release history of Eucalyptus :

• 1. Eucalyptus 1.6, released in November 2009

• 2. Eucalyptus 2.0, released in August 2010

• 3. Eucalyptus 3.0, released on 8 February 2012

• 4. Eucalyptus 3.1, released on 27 June 2012

• 5. Eucalyptus 3.2, released on 19 December 2012

• 6. Eucalyptus 3.3, released on 18 June 2013

• 7. Eucalyptus 3.4, released on 24 October 2013

• Overview:-Eucalyptus is a solution that permits the installation of a private or hybrid cloud infrastructure. It is coded in the Java, Python and C languages, with a central storage controller called Walrus and controllers on each node. The network is directed by a component called the cloud controller (CLC). Each controller is authenticated by SSH (Secure Shell) key files and authenticates its transactions. Eucalyptus scalability is restricted when compared to the huge scalability of large public clouds. The Amazon Web Services EC2 interface works on the Eucalyptus platform.

• Goals of Eucalyptus:-Eucalyptus is open-source computer software for creating private clouds that are compatible with the Amazon Web Services (AWS) API (Application Programming Interface). Its objectives are:

• 1. Promote greater understanding and uptake of cloud computing

• 2. Provide a testing vehicle prior to buying commercial services



• 3. Standardize local IT environment with public cloud

• 4. Provide a fundamental software development environment for the open source community.

• Eucalyptus Software Architecture:

• Eucalyptus commands can manage either Amazon or Eucalyptus instances.

• Users can move instances between a Eucalyptus private cloud and Amazon EC2 to build a hybrid cloud.

• Hardware virtualization isolates applications from the underlying computer hardware.

• Notes:-Virtualization is the "creation of a virtual (rather than actual) version of something, such as a
server, a desktop, a storage device, an operating system or resources".

• Eucalyptus uses the terminology:


• 1. Images – An image is a fixed collection of software modules, system software, application software, and configuration information that is started from a known baseline (immutable/fixed). When bundled and uploaded to the Eucalyptus cloud, it is known as a Eucalyptus machine image (EMI).

• 2. Instances – When an image is put to use, it is called an instance. The configuration is executed at runtime: the Cloud Controller decides where the image will run, and storage and networking are attached to meet resource needs.

• 3. IP addressing – Eucalyptus instances can have public and private IP addresses. An IP address is assigned
to an instance when the instance is created from an image. For instances that require a persistent IP
address, such as a web-server, Eucalyptus supplies elastic IP addresses. These are pre-allocated by the
Eucalyptus cloud and can be reassigned to a running instance.

• 4. Access Control – A user of Eucalyptus is assigned an identity, and identities can be grouped together for
access control.

• 5. Security – TCP/IP security groups share a common set of firewall rules. This is a mechanism to firewall
off an instance using IP address and port block/allow functionality. Instances are isolated at TCP/IP layer.
If this were not present, a user could manipulate the networking of instances and gain access to
neighboring instances violating the basic cloud tenet of instance isolation and separation.

• 6. Networking – There are three networking modes.

• 1. In Managed Mode- Eucalyptus manages a local network of instances, including security groups and IP
addresses.

• 2. In System Mode- Eucalyptus assigns a MAC address and attaches the instance's network interface to
the physical network through the Node Controller's bridge. System Mode does not offer elastic IP
addresses, security groups, or VM isolation.

• 3. In Static Mode- Eucalyptus assigns IP addresses to instances. Static Mode does not offer elastic IPs,
security groups, or VM isolation.

• Notes:-A media access control address (MAC address) of a hardware device is a unique identifier address
assigned to a network interface controller (NIC). For communications within a network segment, it is
used as a network address for most IEEE 802 network technologies, including Ethernet, Wi-Fi, and
Bluetooth.

• OR
Media Access Control (MAC) Address – A MAC address is a unique 48-bit hardware number of a computer, which is embedded into the network card (known as the Network Interface Card) at the time of manufacturing. The MAC address is also known as the physical address of a network device or node.

• Notes:-SOAP is an acronym for Simple Object Access Protocol. It is an XML-based messaging protocol for exchanging information among computers. SOAP is an application of the XML specification.

Notes:-Representational State Transfer (REST) is a software architectural style that defines a set of constraints
to be used for creating Web services. Web services that conform to the REST architectural style,
called RESTful Web services (RWS), provide interoperability between computer systems on the Internet. RESTful
Web services allow the requesting systems to access and manipulate textual representations of Web
resources by using a uniform and predefined set of stateless operations. Other kinds of Web services, such
as SOAP Web services, expose their own arbitrary sets of operations.

• Eucalyptus Components :-
• Eucalyptus contains six components:

• The Cloud Controller (CLC) is a Java program that offers EC2-compatible interfaces, as well as a web
interface to the outside world. In addition to handling incoming requests, the CLC acts as the
administrative interface for cloud management and performs high-level resource scheduling and system
accounting. The CLC accepts user API requests from command-line interfaces like euca2ools or GUI-based
tools like the Eucalyptus User Console and manages the underlying compute, storage, and network
resources. Only one CLC can exist per cloud and it handles authentication, accounting, reporting, and
quota management.

• Walrus, also written in Java, is the Eucalyptus equivalent to AWS Simple Storage Service (S3). Walrus
offers persistent storage to all of the virtual machines in the Eucalyptus cloud and can be used as a simple
HTTP put/get storage as a service solution. There are no data type restrictions for Walrus, and it can
contain images (i.e., the building blocks used to launch virtual machines), volume snapshots (i.e., point-in-
time copies), and application data. Only one Walrus can exist per cloud.

• Notes: - Euca2ools are command-line tools for interacting with Web services that export a REST/Query-
based API compatible with Amazon EC2 and S3 services.

• Walrus is also called "WS3" and is the storage service provided by Eucalyptus. The storage service provides simple storage functionality, which is exposed through RESTful and SOAP APIs. Walrus takes care of storing the virtual machine images, storing snapshots and serving files. As with all other public-facing services in Eucalyptus, these services are based on the Amazon Web Services API.

• Containers in Walrus storage are called "buckets" and their names have to be unique across accounts, just as with Amazon Web Services (AWS). Some naming restrictions are:

• Names can contain lowercase letters, numbers, periods (.), underscores (_), and dashes (-)

• Names must start with a number or letter

• The length of a name must be between 3 and 255 characters

• It is not allowed to use an IP-address-like name (e.g., 265.255.5.4)

• The maximum file size in a Walrus container is 5 terabytes, and files can be either public or private. If a container is to be deleted, it must be empty, which means that all files have to be deleted prior to deleting the container. Files are identified via unique keys represented by Uniform Resource Identifiers (URIs).
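The naming rules above are mechanical enough to check in a few lines of Python. This validator is a sketch of the rules exactly as listed, not code taken from Eucalyptus itself:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a Walrus bucket name against the rules listed above."""
    # Length must be between 3 and 255 characters.
    if not 3 <= len(name) <= 255:
        return False
    # Only lowercase letters, digits, periods, underscores, and dashes.
    if not re.fullmatch(r"[a-z0-9._-]+", name):
        return False
    # Must start with a letter or a digit.
    if not name[0].isalnum():
        return False
    # Must not look like an IP address (e.g. 265.255.5.4).
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):
        return False
    return True

assert is_valid_bucket_name("my-bucket_01")
assert not is_valid_bucket_name("ab")           # too short
assert not is_valid_bucket_name("MyBucket")     # uppercase not allowed
assert not is_valid_bucket_name("265.255.5.4")  # IP-address-like name
```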

• The Cluster Controller (CC) is written in C and acts as the front end for a cluster within a Eucalyptus cloud; it communicates with the Storage Controller and Node Controller. It manages instance (i.e., virtual machine) execution and Service Level Agreements (SLAs) per cluster.

• The Storage Controller (SC) is written in Java and is the Eucalyptus equivalent to AWS EBS. It
communicates with the Cluster Controller and Node Controller and manages Eucalyptus block volumes
and snapshots to the instances within its specific cluster. If an instance requires writing persistent data to
memory outside of the cluster, it would need to write to Walrus, which is available to any instance in any
cluster.

• The VMware Broker is an optional component that provides an AWS-compatible interface


for VMware environments and physically runs on the Cluster Controller. The VMware Broker overlays
existing ESX/ESXi hosts and transforms Eucalyptus Machine Images (EMIs) to VMware virtual disks. The
VMware Broker mediates interactions between the Cluster Controller and VMware and can connect
directly to either ESX/ESXi hosts or to vCenter Server.

• The Node Controller (NC) is written in C and hosts the virtual machine instances and manages the virtual
network endpoints. It downloads and caches images from Walrus as well as creates and caches instances.
While there is no theoretical limit to the number of Node Controllers per cluster, performance limits do
exist.

• Notes:-The VMware Service Broker provides a single point where you can request and manage catalog
items. As a cloud administrator, you create catalog items by importing released VMware Cloud Assembly
blueprints and Amazon Web Services Cloud Formation templates that your users can deploy to your cloud
vendor regions or data stores. As a user, you can request and monitor the provisioning process. After
deployment, you manage the deployed catalog items throughout the deployment lifecycle.

• OpenNebula:-
• OpenNebula is a cloud computing platform for managing heterogeneous distributed data center
infrastructures. The OpenNebula platform manages a data center's virtual infrastructure to build private,
public and hybrid implementations of infrastructure as a service.

• History:-

• The OpenNebula Project was started as a research venture in 2005 by Ignacio M. Llorente and Ruben S.
Montero. The first public release of the software occurred in 2008. The goals of the research were to
create efficient solutions for managing virtual machines on distributed infrastructures.

• It was also important that these solutions had the ability to scale at high levels.

• Open-source development and an active community of developers have since helped mature the project.

• As the project matured, it became more and more widely adopted, and in March 2010 the primary writers of the project founded C12G Labs, now known as OpenNebula Systems, which provides value-added professional services to enterprises adopting or utilizing OpenNebula.



• Features:-The OpenNebula project focuses on providing a full-featured cloud computing platform in a simplified, easy-to-use way.
• The following features are available in the platform:
• 1. Interfaces for cloud consumers and administrators.-

• A number of APIs are available for the platform, including AWS EC2, EBS, and OGF OCCI.

• A powerful, yet familiar UNIX based, command-line interface is available to administrators.

• Further ease of use is available via the SunStone Portal, a graphical-user interface for cloud consumers and
data center administrators.

• 2. Appliance Marketplace:-

• The OpenNebula Marketplace offers a wide variety of applications capable of running in OpenNebula
environments.

• A private catalogue of applications is deployable across OpenNebula instances.

• The marketplace is fully integrated with the Sunstone GUI.

• 3. Capacity and Performance Management:-

• Resource allocation is possible.

• Resource Quota Management enables users to track and limit computing, storage, and networking
resource utilization.

• Load balancing, high availability, and high-performance computing possible via the dynamic creation of
clusters which share data stores and virtual networks.

• The dynamic creation of virtual data centers allows a group of users, under control of a central admin, the
ability to create and manage computing, storage, and networking capacity.

• A powerful scheduling component allows for the management of tasks based on resource availability.

• 4. Security:-

• The platform fully integrates with user management services such as LDAP ((Lightweight Directory Access
Protocol) and Active Directory. A built-in user name and password, SSH, and X.509 are also supported.

• Login token functionality, fine-grained auditing, and the ability to isolate various levels also provide
increased security levels.

• 5. Integration with third-party tools:-

• The platform features a modular and extensible architecture allowing third-party tools to be easily
integrated.

• Custom plug-ins are available for the integration of any third-party data center service.

• A number of APIs allow for the integration of tools such as billing and self-service portals.

• Internal architecture:-
• Basic components of the OpenNebula internal architecture:

• Host: Physical machine running a supported hypervisor.

• Cluster: Pool of hosts that share data stores and virtual networks.

• Template: Virtual Machine definition.

• Image: Virtual Machine disk image.

• Virtual Machine: Instantiated Template. A Virtual Machine represents one life-cycle, and several Virtual Machines can be created from a single Template.

• Virtual Network: A group of IP leases that VMs can use to automatically obtain IP addresses. It allows the creation of Virtual Networks by mapping over the physical ones. They will be available to the VMs through the corresponding bridges on hosts. A virtual network can be defined in three different parts:

• The underlying physical network infrastructure.

• The logical address space available (IPv4, IPv6, dual stack).

• Context attributes (e.g. netmask, DNS, gateway).

• OpenNebula also comes with a Virtual Router appliance to provide networking services like DHCP, DNS, etc.
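A minimal Python sketch of the "group of IP leases" idea described above: a virtual network hands out addresses from a defined range together with context attributes (netmask, DNS, gateway). The address values are placeholders, and this is an illustration rather than OpenNebula code:

```python
import ipaddress

class VirtualNetwork:
    """Hands out IP leases from a range, plus context attributes for each VM."""

    def __init__(self, cidr, gateway, dns):
        self.network = ipaddress.ip_network(cidr)
        self.context = {"netmask": str(self.network.netmask),
                        "gateway": gateway, "dns": dns}
        # Reserve the gateway address; everything else is leasable.
        self.free = [str(h) for h in self.network.hosts() if str(h) != gateway]

    def lease(self):
        # A VM automatically obtains the next free address in the range.
        return self.free.pop(0), self.context

vnet = VirtualNetwork("192.168.100.0/24", gateway="192.168.100.1", dns="8.8.8.8")
ip, ctx = vnet.lease()
print(f"VM got {ip} with context {ctx}")
```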

• Components and Deployment Model of Open Nebula:-


• The OpenNebula Project's deployment model resembles classic cluster architecture, which utilizes:

• 1. A front-end (master node)

• 2. Hypervisor enabled hosts (worker nodes)

• 3. Data stores

• 4. A physical network

• SSH:-SSH, also known as Secure Shell or Secure Socket Shell, is a network protocol that gives users,
particularly system administrators, a secure way to access a computer over an unsecured network. OR
The SSH protocol (also referred to as Secure Shell) is a method for secure remote login from one
computer to another. It provides several alternative options for strong authentication, and it protects the
communications security and integrity with strong encryption.
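For illustration, the kind of key-based SSH session described above can be scripted with the paramiko library; the hostname, username, and key path are placeholders:

```python
import paramiko  # SSH library for Python: pip install paramiko

client = paramiko.SSHClient()
# Demo shortcut: auto-accept unknown host keys (verify them in production).
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Key-based authentication: no password travels over the network.
client.connect(
    hostname="worker-node-01.example.org",   # placeholder host
    username="oneadmin",                     # placeholder user
    key_filename="/home/oneadmin/.ssh/id_rsa",
)

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```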

• 1. Front-end machine:-

• The master node, sometimes referred to as the front-end machine, executes all the OpenNebula services.
This is the actual machine where OpenNebula is installed. OpenNebula services on the front-end machine
include the management daemon (oned), scheduler (sched), the web interface server (Sunstone server),
and other advanced components.

• These services are responsible for queuing, scheduling, and submitting jobs to other machines in the
cluster. The master node also provides the mechanisms to manage the entire system. This includes
adding virtual machines, monitoring the status of virtual machines, hosting the repository, and
transferring virtual machines when necessary. Much of this is possible due to a monitoring subsystem
which gathers information such as host status, performance, and capacity use. The system is highly
scalable and is only limited by the performance of the actual server.

• 2. Hypervisor enabled-hosts:-

• The worker nodes, or hypervisor enabled-hosts node, provide the actual computing resources needed for
processing all jobs submitted by the master node.

• OpenNebula hypervisor-enabled hosts use a virtualization hypervisor such as VMware, Xen, or KVM. The KVM hypervisor is natively supported and used by default. Virtualization hosts are the physical machines that run the virtual machines, and various platforms can be used with OpenNebula.

• A Virtualization Subsystem interacts with these hosts to take the actions needed by the master node.

• 3. Storage:-

• The datastores simply hold the base images of the Virtual Machines. The datastores must be accessible to
the front-end; this can be accomplished by using one of a variety of available technologies such as NAS,
SAN, or DAS (direct attached storage).

• Three different datastore classes are included with OpenNebula, including system datastores, image
datastores, and file datastores. System datastores hold the images used for running the virtual machines.
The images can be complete copies of an original image, deltas, or symbolic links depending on the
storage technology used. The image datastores are used to store the disk image repository. Images from
the image datastores are moved to or from the system datastore when virtual machines are deployed or
manipulated. The file datastore is used for regular files and is often used for kernels, ram disks, or context
files.

NOTES:-

NAS (Network-attached storage):-A NAS is a single storage device that operates on data files.

SAN (Storage Area Network):-A SAN is a local network of several storage devices. The differences between NAS and SAN can be seen when comparing their cabling and how they're connected to the system, as well as how other devices communicate with them.

• 4. Physical networks:-Physical networks are required to support the interconnection of storage servers
and virtual machines in remote locations. It is also essential that the front-end machine can connect to all
the worker nodes or hosts. At the very least two physical networks are required as OpenNebula requires a
service network and an instance network. The front-end machine uses the service network to access
hosts, manage and monitor hypervisors, and to move image files. The instance network allows the virtual
machines to connect across different hosts. The network subsystem of OpenNebula is easily customizable
to allow easy adaptation to existing data centers.

• NIMBUS: CLOUD COMPUTING FOR SCIENCE:-


• Nimbus is an open-source toolkit focused on providing Infrastructure-as-a-Service (IaaS) capabilities to
the scientific community. It is written in Python and Java for rapid development of custom community-
specific solutions.
• Nimbus offers a “cloudkit” that allows users to lease remote resources by allocating a number of
components based on web service technology and configuring a Virtual Workspace Service (VWS). The
design of Nimbus consists of a number of components, each of which is described below.

• OR
• Nimbus is a powerful toolkit focused on converting a computer cluster into an Infrastructure-as-a-Service
(IaaS) cloud for scientific communities. Essentially, it allows a deployment and configuration of virtual
machines (VMs) on remote resources to create an environment suitable for the users’ requirements.
Being written in Python and Java, it is totally free and open-source software, released under the Apache
License.

• Note: - The Dynamic Host Configuration Protocol (DHCP) is a network management protocol used on
UDP/IP networks whereby a DHCP server dynamically assigns an IP address and other network
configuration parameters to each device on a network so they can communicate with other IP networks.

• Nimbus consists of two basic products:

• Nimbus Infrastructure is an open source EC2/S3-compatible IaaS solution with features that benefit
scientific community interests, like support for auto-configuring clusters, proxy credentials, batch
schedulers, best-effort allocations, etc.

• Nimbus Platform is an integrated set of tools for a multi-cloud environment that automates and simplifies
the work with infrastructure clouds (deployment, scaling, and management of cloud resources) for
scientific users.

This toolkit is compatible with Amazon's network protocols via EC2-based clients and S3 REST API clients, as well as
the SOAP API and REST API that have been implemented in Nimbus. It also provides support for X.509 credentials,
fast propagation, multiple protocols, and compartmentalized dependencies. Nimbus features flexible user, group
and workspace management, request authentication and authorization, and per-client usage tracking.
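
• Because the interface is EC2-compatible, a standard EC2 client library can talk to a Nimbus cloud. The sketch below uses Python's boto3 against a hypothetical Nimbus endpoint; the endpoint URL, credentials, and image ID are placeholder assumptions, not real Nimbus values.

    import boto3

    # Point a normal EC2 client at the (assumed) Nimbus EC2 query interface.
    ec2 = boto3.client(
        "ec2",
        endpoint_url="https://nimbus.example.org:8444",  # placeholder endpoint
        aws_access_key_id="ACCESS_KEY",                  # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
        region_name="nimbus",
    )

    # Lease one VM from a stored workspace image, then list what is running.
    ec2.run_instances(ImageId="ami-workspace-01", MinCount=1, MaxCount=1)
    print(ec2.describe_instances())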

• Following is a description of each component

• 1. Workspace service: Allows clients to manage and administer VMs by providing two interfaces;

One interface is based on the web service resource framework (WSRF) and the other is based on EC2
WSDL. This service communicates with a workspace resource manager or a workspace pilot to manage
instances.

• 2. Workspace resource manager: Implements VM instance creation and management on a site.

• 3. Workspace pilot: Provides virtualization without requiring significant changes to the site configuration.

• 4. Workspace control: Implements VM instance management such as starting, stopping and pausing VMs. It also
provides image management, sets up networks and provides IP assignment.

• 5. Context broker: Allows clients to coordinate large virtual cluster launches automatically and repeatedly.

• 6. Workspace client: A complex client that provides full access to the workspace service functionality.

• 7. Cloud client: A simpler client providing access to selected functionalities in the workspace service.

• 8. Storage service: Cumulus is a web service providing users with storage capabilities to store images; it
works in conjunction with grid computing.



• CloudSim Simulation Toolkit: An Introduction


• CloudSim: A Framework For Modeling And Simulation Of Cloud Computing Infrastructures And Services

• Cloud computing is a pay-as-you-use model, which delivers infrastructure (IaaS), platform (PaaS) and
software (SaaS) as services to users as per their requirements. Cloud computing exposes data center
capabilities as networked virtual services, which may include the set of required hardware and applications, with
support for databases as well as the user interface. This allows users to deploy and access
applications across the internet, based on demand and QoS requirements.

• As cloud computing is a new concept and is still at a very early stage of its evolution, researchers and
system developers are working on improving the technology to deliver better processing, quality and
cost parameters. But most of the research is focused on improving the performance of provisioning
policies, and testing such research on real cloud environments like Amazon EC2, Microsoft Azure, or Google
App Engine for different application models under variable conditions is extremely challenging because:

• Clouds exhibit varying demands, supply patterns, system sizes, and resources (hardware, software, and
network).

• Users have heterogeneous, dynamic, and competing QoS requirements.

• Applications have varying performance, workload, and dynamic application scaling requirements.

• Thus the need for simulation tools arises; they can be a viable alternative for evaluating/benchmarking
test workloads in a controlled and fully configurable environment whose experiments can be
repeated over multiple iterations and whose results can be reproduced for analysis.

• This simulation-based approach can provide various benefits across the researcher’s community as it
allows them to:

• Test services in a repeatable and controllable environment.

• Tune system bottlenecks (performance issues) before deploying on real clouds.

• Simulate the required infrastructure (small or large scale) to evaluate different sets of workloads as well
as resource performance, which facilitates developing, testing and deploying adaptive application
provisioning techniques.

• Overview of CloudSim functionalities:


• Support for modeling and simulation of large scale Cloud computing data centers.

• Support for modeling and simulation of virtualized server hosts, with customizable policies for
provisioning host resources to virtual machines.

• Support for modeling and simulation of application containers.

• Support for modeling and simulation of energy-aware computational resources.

• Support for modeling and simulation of data center network topologies and message-passing applications.

• Support for modeling and simulation of federated clouds.



• Support for dynamic insertion of simulation elements, stop and resume of simulation.

• Support for user-defined policies for allocation of hosts to virtual machines and policies for allocation of
host resources to virtual machines.

• Architecture of CloudSim:-
• The CloudSim layer provides support for modeling and simulation of cloud environments, including
dedicated management interfaces for memory, storage, bandwidth and VMs. It also handles provisioning
of hosts to VMs, application execution management and dynamic system state monitoring. A cloud service provider
can implement customized strategies at this layer to study the efficiency of different policies in VM
provisioning.

• The user code layer exposes basic entities such as the number of machines and their specifications, as
well as applications, VMs, number of users, application types and scheduling policies.

• The main components of the CloudSim framework:-

• 1. Regions: It models geographical regions in which cloud service providers allocate resources to their
customers. In cloud analysis, there are six regions that correspond to six continents in the world.
2. Data centers: It models the infrastructure services provided by various cloud service providers. It
encapsulates a set of computing hosts or servers that are either heterogeneous or homogeneous in
nature, based on their hardware configurations.

• 3. Data centre characteristics: It models information regarding data centre resource configurations.
4. Hosts: It models physical resources (compute or storage).
5. The user base: It models a group of users considered as a single unit in the simulation, and its main
responsibility is to generate traffic for the simulation.
6. Cloudlet: It specifies the set of user requests. It contains the application ID, the name of the user base that
is the originator, to which the responses have to be routed back, as well as the size of the request
execution commands, and input and output files. It models the cloud-based application services.
CloudSim categorizes the complexity of an application in terms of its computational requirements. Each
application service has a pre-assigned instruction length and data transfer overhead that it needs to carry
out during its life cycle.
7. Service broker: The service broker decides which data centre should be selected to provide the services
to the requests from the user base.
8. VMM allocation policy: It models provisioning policies on how to allocate VMs to hosts.
9. VM scheduler: It models the time-shared or space-shared scheduling policy used to allocate processor cores to
VMs.
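
• CloudSim itself is a Java library, so the following is not CloudSim code; it is only a toy Python sketch of the idea behind these components: a broker dispatches cloudlets (modeled user requests) with given instruction lengths onto modeled VMs with given MIPS ratings, and completion times are estimated. All numbers are illustrative.

    # Modeled VMs (MIPS = millions of instructions per second) and cloudlets.
    vms = [{"id": 0, "mips": 1000}, {"id": 1, "mips": 500}]
    cloudlets = [{"id": c, "length": 20000} for c in range(4)]  # instruction lengths

    # A trivial round-robin "service broker": assign cloudlets to VMs in turn.
    for i, cl in enumerate(cloudlets):
        vm = vms[i % len(vms)]
        runtime = cl["length"] / vm["mips"]  # seconds = instructions / MIPS
        print(f"cloudlet {cl['id']} -> vm {vm['id']}: {runtime:.1f}s")

• Running such an experiment twice gives identical results, which is exactly the repeatability argument made above for simulation-based evaluation.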

(Unit-II) SERVICE MODEL OF CLOUD (IN BRIEF):-

This section describes the various service models of cloud computing. Applications in cloud computing are made up of
different layers: the software layer, the platform layer and the infrastructure layer. These layers are used to host
distributed applications. They are also used to build clients based on the services needed and the level of
service provided.

The service models are classified into the following types:-

1) Software-as-a-Service (SaaS)
2) Platform-as-a-Service (PaaS)
3) Infrastructure-as-a-Service (IaaS)

4) Hardware-as-a-Service (HaaS)

5) Function-as-a-Service (FaaS)

6) Database-as-a-Service (DBaaS)

• As per the Global Information Security Survey (PwC, 2012), most users (69%) deployed Software as a
Service (SaaS), 47% of users deployed IaaS, whereas 33% deployed the PaaS cloud model.

• 1) Infrastructure-as-a-Service (IaaS):-
IaaS is a way to deliver cloud computing infrastructure like servers, storage, network and operating
systems. Customers can access these resources over the cloud computing platform, i.e. the Internet, as an on-
demand service. In IaaS, you buy the resources as a fully outsourced service rather than purchasing servers,
software, datacenter space or network equipment outright. IaaS was earlier called Hardware as a Service (HaaS).

• It is a cloud-computing-platform-based model. The infrastructure is managed and maintained by the
service provider. One of the IaaS service providers is Rackspace, and IaaS provides the foundation for PaaS
and SaaS. The main objective of IaaS is to provide a standard, flexible, virtualized environment. The IaaS model
supports accessing, monitoring and managing remote infrastructure. Users can use IaaS services on a
pay-per-use basis.

• Sometimes IaaS services are also called HaaS. IaaS service providers include Microsoft Azure, Amazon Web
Services (AWS) and Google Compute Engine (GCE).

• Some key concepts of IaaS are as follows:-

# Cloud bursting

# Multitenant computing

The IaaS model is a very important model for business and industry.

Cloud bursting: - Cloud bursting is an application deployment model in which an application runs in a
private cloud or data center and bursts into a public cloud when the demand for computing capacity spikes.

• Or

• Cloud bursting in IaaS is made possible by supporting software, which plays an important role in the
reallocation of workloads to the IaaS cloud.
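
• As a small, hypothetical illustration of the bursting decision (the function name, capacity and threshold are invented for this sketch, not part of any real IaaS API):

    PRIVATE_CAPACITY = 100   # assumed capacity units of the private data center
    BURST_THRESHOLD = 0.85   # burst once the private cloud is 85% utilized

    def place_workload(current_load, demand):
        """Decide whether a new workload stays private or bursts to the public cloud."""
        if (current_load + demand) / PRIVATE_CAPACITY <= BURST_THRESHOLD:
            return "private"  # enough on-premises headroom
        return "public"       # demand spike: burst into the public cloud

    print(place_workload(current_load=60, demand=10))  # -> private
    print(place_workload(current_load=80, demand=10))  # -> public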

IaaS Architecture

• Characteristics of IaaS:-
• IaaS has helped improve infrastructure allocation and utilization. IaaS is generally accepted to comply
with the following main characteristics:

• Resources are provided as a service.

• Allows for dynamic scaling and elasticity.

• Has a variable-cost, usage-based pricing model (pay-as-you-go and pay-per-use).

• Has a multi-tenant architecture, including multiple users on a single piece of hardware.

• IaaS typically has enterprise grade infrastructure.



• IaaS key Benefits:-


• Usage is metered and priced on the basis of units (or instances) consumed.

• The ability to scale the infrastructure service up and down based on actual usage.

• Reduced cost of ownership

• Reduced energy and cooling costs. OR

• There are various types of Key benefits in IaaS Layer:-

• 1. Dynamic scaling:-

• 2. Service levels:-

• 3. The rental model:-

• 4. Licensing:-

• 5. Metering and costs:-

• 1. Dynamic scaling:-

• This important characteristic of IaaS is called dynamic scaling — if customers wind up needing more
resources than expected, they can get them immediately (probably up to a given limit). A provider or
creator of IaaS typically optimizes the environment so that the hardware, the operating system, and
automation can support a huge number of workloads.

• 2. Service levels:-

• Consumers acquire IaaS services in different ways. Many consumers rent capacity based on an on-
demand model with no contract. In other situations, the consumer signs a contract for a specific amount
of storage or compute capacity. A typical IaaS contract has some level of service guarantee. At the low end, a
provider might simply state that the company will do its best to provide good service. If consumers
are willing to pay a premium price, they might get a mirrored service so that there are almost no
interruptions of service.

• 3. The rental model:-

• When companies use IaaS, it’s often said that the servers, storage, or other IT infrastructure components
are rented for a fee based on the quantity of resources used and how long they’re in use. Although this is
true, there are some important differences between this rental arrangement and the traditional rental
models you may be familiar with.

• For example, when you purchase server and storage resources using IaaS services, you gain immediate
virtual access to the resources you need. You aren’t, however, renting the actual physical servers or other
infrastructure. Don’t expect a big truck to pull up to your office and deliver the servers you need to
complete your project. The physical components stay put in the infrastructure service provider’s data
center. This concept of renting is an essential element of cloud computing, and it provides the foundation
for the cost and scalability benefits of the various cloud models.

• 4. Licensing:-

• The use of public IaaS has led to innovation in licensing and payment models for software you want to run
in these cloud environments. Note that this licensing is for the software you want to run in your cloud
environment, not the license between you and the cloud provider. For example, some IaaS and software
providers have created a “bring your own license” (BYOL) plan so you have a way to use your software
license in both traditional and cloud environments.

• 5. Metering and costs:-

• Clearly, you derive a potential economic benefit by controlling the amount of resources you demand and
pay for so that you have just the right match with your requirements. To ensure that users are charged
for the resources they request and use, IaaS providers need a consistent and predictable way to measure
usage. This process is called metering.

• Ideally, the IaaS provider will have a transparent process for identifying charges incurred by the user.
With multiple users accessing resources from the same environment, the IaaS provider needs an accurate
method for measuring the physical use of resources to make sure each customer is charged the right
amount.

• IaaS providers often use the metering process to charge users based on the instance of computing
consumed. An instance is defined as the CPU power and the memory and storage space consumed in an
hour. When an instance is initiated, hourly charges begin to accumulate until the instance is terminated.
The charge for a very small instance may be as little as two cents an hour; the hourly fee could increase to
$2.60 for a large resource-intensive instance running Windows.
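
• A small worked example of that billing arithmetic, assuming (as the figures above suggest) hourly rates of $0.02 for a small instance and $2.60 for a large Windows instance, and that partial hours are rounded up; all of these are illustrative assumptions:

    import math

    RATES = {"small": 0.02, "large-windows": 2.60}  # assumed $ per instance-hour

    def charge(instance_type, hours_running):
        billed_hours = math.ceil(hours_running)  # assume partial hours round up
        return billed_hours * RATES[instance_type]

    print(charge("small", 30.5))         # 31 h * $0.02 = $0.62
    print(charge("large-windows", 4.2))  # 5 h * $2.60 = $13.00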

• Classification/types of the IaaS Layer:-


• IaaS can be classified into sub-classes. Some of these are given as follows:-

• 1. Compute as a Service

• 2. Web Hosting

• 3. Storage as a Service

• 4. Disaster Recovery & Backup as a Service

• 5. Desktop as a Service

• 6. Server as a Service

• 7. Networking as a Service

• 1. Compute as a Service:- One of today's most ubiquitous IaaS offerings is Compute as a Service.

• It provides computing capacity to the system. These services include servers, routers, firewalls, operating
system access and load balancing on demand. These services can be shared or public.

• These services can also include security management, storage management, dedicated customer support
and automated patch management.

• 2. Web Hosting:- A website is very useful for the marketing and revenue of many organizations, but any
weakness in the website, such as going down during peak hours, can affect the business
severely. Therefore many organizations use the IaaS model for hosting their websites. This ensures that the
website works properly even during peak traffic hours.

• 3. Storage as a Service:- Today, the demand for storage has increased. Providing sufficient storage
capacity and managing it is a big job. The solution for providing and managing storage is
Storage as a Service. Providers of Storage as a Service have the latest storage technologies
with virtually infinite storage capacity. With this service, administrators can select storage, transfer
data between different storage systems, and add or remove storage on demand.

• 4. Disaster Recovery & Backup as a Service:- Users of cloud computing always expect
uninterrupted access to applications and data. They do not want access to services to fail
in situations like power failure, system failure or any natural disaster. Providing this level of
service requires redundancy and automatic failover so that downtime is
reduced to zero. To keep data safe in case of system failure, data should be stored in
multiple places so that it can be recovered easily. Organizations or users can recover data and
applications in two ways. In the first approach, the application and data reside on the organization's
premises, and the data is backed up to the cloud as well as stored on their own hardware.

• In the other approach, virtual machines are used to store the data in the cloud. A virtual machine is the best
choice in case of emergency, to recover the data quickly and completely.

• 5. Desktop as a Service:- The IaaS cloud model enables Desktop as a Service (DaaS). It is used mainly for
hosting and serving virtual desktops, so that organizations or users can select a virtual desktop for
their applications for a specific time period, as per their requirements. For new users, DaaS offers ready-made
desktop environments with appropriate applications and storage. DaaS allows users to access their
workspace from anywhere.

• 6. Server as a Service:- Server as a Service offers an organization sufficient computing power as per its
needs. This computing power is also available at busy times. This model is useful for a project requiring
huge computing power for a specific period. Another advantage of this model is that organizations do not
have to take care of the IT infrastructure, administration and maintenance. But security is the major issue
with this service.

• 7. Networking as a Service:- Network as a Service offers networking resources to organizations or users

on demand, as per their requirements. This service is demanded by organizations to support their
virtual networks. It includes services like firewalls, WAN acceleration and load balancing. It can support
quality-of-service auditing and monitoring of network-based services. It supports flexibility, scalability and
security. There is no upfront cost involved for this service.

• Advantages of IaaS:- In IaaS, users can dynamically choose CPU, memory and storage configurations according
to their needs.

• Users can easily access the vast computing power available on an IaaS cloud platform.

• Disadvantages of IaaS:- The IaaS cloud computing platform model is dependent on the availability of the
Internet and virtualization services.

Advantages of IaaS:

• No hardware costs; easily controllable running costs.
• Quick to implement and provision new projects.
• High flexibility thanks to simple scalability of the required resources.
• No need to set up, maintain, or update the hardware.
• Easy to connect several company locations to the rented IaaS environment.

Disadvantages of IaaS:

• Dependency on the provider, whose sole responsibility is to make sure the service is available and secure.
• Internet access is essential (problems with the internet connection also cause problems with the IaaS environment).
• Changing providers is very complicated.
• Possible privacy issues due to the provider's server locations.

• 2) Software-as-a-Service (SaaS):-
• SaaS is known as 'On-Demand Software'.

• It is a software distribution model. In this model, the applications are hosted by a cloud service provider
and made available to customers over the internet.

• In SaaS, associated data and software are hosted centrally on the cloud server.

• User can access SaaS by using a thin client through a web browser.

• CRM, Office Suite, Email, games, etc. are the software applications which are provided as a service
through Internet.

• The companies like Google, Microsoft provide their applications as a service to the end users.

• Or

• Software as a Service (SaaS) is a software licensing and delivery model in which software is licensed on
a subscription basis and is centrally hosted. It is sometimes referred to as "on-demand software", and was
formerly referred to as "software plus services" by Microsoft. SaaS is typically accessed
by users using a thin client, e.g. via a web browser.

• SaaS has become a common delivery model for many business applications, including office
software, messaging software, payroll processing software, DBMS software, management
software, CAD software, development software, gaming software, virtualization software, accounting
software, customer relationship management (CRM), Management Information Systems (MIS),
enterprise resource planning (ERP), invoicing software, human resource management (HRM),
talent acquisition, learning management systems, Geographic Information Systems (GIS),
and service desk management.

SaaS Architecture:
With this model, a single version of the application, with a single configuration, is used for all customers. The application is
installed on multiple machines to support scalability (called horizontal scaling). In some cases, a second version of the
application is set up to offer a select group of customers access to pre-release versions of the application for testing
purposes. In the traditional model, by contrast, each version of the application is based on unique code. Although it is an
exception rather than the norm, some SaaS solutions do not use multitenancy, or use other mechanisms to cost-effectively
manage a large number of customers in its place. Whether multitenancy is a necessary component of software-as-a-service
is a topic of controversy.

• There are two main varieties of SaaS:

• Vertical SaaS:- Software that answers the needs of a specific industry (e.g., software for the
healthcare, agriculture, real estate, or finance industries).

• Horizontal SaaS:- Products that focus on a software category (marketing, sales, developer tools, HR).

• SaaS Characteristics
• A good way to understand the SaaS model is by thinking of a bank, which protects the privacy of each
customer while providing service that is reliable and secure—on a massive scale. A bank’s customers all
use the same financial systems and technology without worrying about anyone accessing their personal
information without authorization.

• A “bank” meets the key characteristics of the SaaS model:

• Multitenant Architecture

• A multitenant architecture, in which all users and applications share a single, common infrastructure and
code base that is centrally maintained. Because SaaS vendor clients are all on the same infrastructure and
code base, vendors can innovate more quickly and save the valuable development time previously spent
on maintaining numerous versions of outdated code.
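
• A minimal sketch of one common way to implement this, assuming row-level multitenancy in a shared schema (the table and column names are hypothetical):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
    db.executemany("INSERT INTO invoices VALUES (?, ?)",
                   [("acme", 100.0), ("acme", 50.0), ("globex", 999.0)])

    def invoices_for(tenant_id):
        # The WHERE clause is the isolation boundary: every tenant shares one
        # schema and code base, but only ever sees its own rows.
        return db.execute("SELECT amount FROM invoices WHERE tenant_id = ?",
                          (tenant_id,)).fetchall()

    print(invoices_for("acme"))  # only acme's rows, never globex's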

• Easy Customization

• The ability for each user to easily customize applications to fit their business processes without affecting
the common infrastructure. Because of the way SaaS is architected, these customizations are unique to
each company or user and are always preserved through upgrades. That means SaaS providers can make
upgrades more often, with less customer risk and much lower adoption cost.

• Better Access

• Improved access to data from any networked device while making it easier to manage privileges, monitor
data use, and ensure everyone sees the same information at the same time.

• Familiar to Consumer Web Services

• Anyone familiar with Amazon.com or My Yahoo! will be familiar with the Web interface of typical SaaS
applications. With the SaaS model, you can customize with point-and-click ease, making the weeks or
months it takes to update traditional business software seem hopelessly old fashioned.

• SaaS Trends

• Organizations are now developing SaaS integration platforms (or SIPs) for building additional SaaS
applications. The consulting firm Saugatuck Technology calls this the “third wave” in software adoption:
when SaaS moves beyond standalone software functionality to become a platform for mission-critical
applications.

• Security Issues In SaaS Model:-


• Security is a main aspect of technology. For SaaS, there are different security issues. Some of these are
given as follows:-

• 1. Identity management:- Cloud service providers themselves are often not well equipped to integrate the
SaaS platform with identity services that sit behind the customer's firewall.

• 2. Cloud standards are weak:- The security standards used to audit the SaaS cloud model are not
appropriate for a proper security audit of the model.

• 3. Secrecy:- Even if the security provided by the SaaS cloud model is better than what the user could
achieve alone, keeping the user's data secret within the provider's environment remains one of the issues.

• Vulnerabilities in the SaaS Model:-


• 1. Multiple users of the same infrastructure:- In the cloud SaaS model, the same infrastructure is allotted to
multiple users by the cloud service providers, so the data of one user may be seen and accessed by
other users. Among these users, there may be intruders who can access the data of other users.
Attackers may exploit loopholes that exist in the application, and they could inject
client code into the SaaS model. This is a threat to data segregation and data privacy.

• 2. The data location is unknown to the user:- In cloud SaaS, the user does not know the actual physical
location of the data, so there is a question about jurisdiction over the data and its consequences.
This acts as a restriction on the use of the SaaS model, and it poses a threat to data integrity.

• 3. Difficulty of providing security in web applications:-

• The service provider has to manage the SaaS application over the web. Maintaining security is a
challenge with web technology, and the SaaS model faces similar challenges. Traditional
security solutions such as an IDS (Intrusion Detection System) or firewall are not sufficient to provide the
requisite security for the SaaS model.

• 4. A productive, major target for attackers:-

• Cloud SaaS is used by a large number of organizations and users, so huge volumes of data from various
organizations and users are available under one roof. Therefore, attackers target the SaaS model for this huge data.

• 5. Virtualization vulnerabilities:- There are many bugs in virtual machine models. These bugs allow
bypassing certain restrictions, which lets attackers enter the SaaS model. For example, vulnerabilities
present in Microsoft Virtual PC and Microsoft Virtual Server could allow a user with a guest
operating system to execute code on the host operating system or other guest operating systems.

• Advantages and Disadvantages of SaaS.


• There are various advantages and disadvantages of SaaS.

• The advantages are:

• Easy to buy: The cost of SaaS is based on a monthly or yearly fee, allowing new organizations to access
the world of business software at a low cost, at least lower than licensed applications.

• Minimization of hardware requirements: All SaaS software is hosted remotely, so organizations have
little or no need for additional hardware.

• No special software versions: No special software versions are required, as all users use the same software
version. SaaS reduces IT costs by outsourcing hardware and software maintenance.

• Low Maintenance: SaaS removes the daily problem of installing, maintaining and updating software. The
set-up cost of SaaS is also less in comparison to enterprise software.

• Reduced time:- The software or application is already installed and configured in the Software as a
Service model. Once the user requests the service on the cloud, the application is ready
for use. This reduces the time required for the installation and configuration of the software.

• Disadvantages:-

• Disadvantages are also the points that users and vendors must keep in mind while using the SaaS:

• Latency factor: Latency arises from the variable distance of data between the cloud and the end-user,
so delays may be experienced while interacting with applications.

• Internet connection: An internet connection is a major requirement. Without an internet connection, SaaS
applications are unusable. Switching between SaaS vendors, in case any change is needed, is very difficult.

• A SaaS cloud service may be less secure than an in-house deployment.


• Monitoring as a service:-
• Monitoring as a service (MaaS) is one of many cloud computing delivery models under anything as a
service (XaaS). It is a framework that facilitates the deployment of monitoring functionalities for various
other services and applications within the cloud. The most common application for MaaS is online state
monitoring, which continuously tracks certain states of applications, networks, systems, instances or any
element that may be deployable within the cloud.

• Or

• Cloud Monitoring as a Service is referred to a type of on demand IT service that provides cloud monitoring
and management tools for monitoring cloud based platforms, websites, servers, IT Infrastructure etc.
Cloud monitoring as a service provides a fully managed cloud monitoring service for cloud and
virtualization environments in organizations.

• Example of MaaS software: the CloudMonix tool.

• MaaS offerings consist of multiple tools and applications meant to monitor a certain aspect of an
application, server, system or any other IT component. There is a need for proper data collection,
especially of the performance and real-time statistics of IT components, in order to make proper and
informed management possible.

• Typically, cloud monitoring as a service is delivered through a SaaS based cloud monitoring software that
monitors and detects performance issues across the cloud infrastructure.

• The performance statistics and issues are reported to the cloud administrators for reviewing in a central
dashboard or through email, SMS and other alerts and notifications.

• State monitoring is very powerful because notifications now come in almost every form, from emails and
text messages to various social media alerts like a tweet or a status update on Facebook.
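
• A toy sketch of online state monitoring in Python -- a real MaaS product does this at scale with dashboards and escalation, and the URL here is a placeholder:

    import time
    import urllib.request

    def is_up(url):
        """One state check: does the endpoint answer with HTTP 200?"""
        try:
            return urllib.request.urlopen(url, timeout=5).status == 200
        except Exception:
            return False

    for _ in range(3):  # a real monitor would loop forever
        if not is_up("https://app.example.com/health"):
            print("ALERT: service is down")  # a MaaS would send email/SMS instead
        time.sleep(60)  # poll once a minute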



• Popular Monitoring Features:-


• 1. Dashboards: Visualize all your systems and resources in one place. Inspect critical metrics side-by-side with
important logs. Find the root cause of many issues by navigating back in time and seeing how the overall
system behaved before, during, or after problems occurred. CloudMonix supports many types of servers,
databases, infrastructure and more.


• 2. Metrics:- Track a wide variety of simple and complex metrics. CloudMonix supports tracking and
analyzing a large amount of important information about your systems every minute of every hour. It is scalable and
can grow with your business needs.


• 3. Alert Engine:- Stay in full control of how and when you are notified of production problems. CloudMonix
provides a sophisticated alert engine that can evaluate multiple conditions simultaneously, based on
real-time or sustained conditions, using simple, complex, aggregated or unstructured metrics.


• 4. Immediate Notifications:- With immediate notifications, CloudMonix allows IT professionals to unplug
from constantly watching monitors or dashboards and concentrate on more important things, knowing
that they will be made aware of any misbehaving environment.



• Benefits of Monitoring as a Service (MaaS):-


• The following are the benefits of a monitoring as a service (MaaS) product:
• 1. Ready to Use Monitoring Tool Login: The vendor takes care of setting up the hardware infrastructure,
monitoring tool, configuration and alert settings on behalf of the customer. The customer gets a ready to
use login to the monitoring dashboard that is accessible using an internet browser. A mobile client is also
available for the MaaS dashboard for IT administrators.
• 2. Inherently Available 24x7x365: Since MaaS is deployed in the cloud, the monitoring dashboard itself is
available 24x7x365 that can be accessed anytime from anywhere. There are no downtimes associated
with the monitoring tool.
• 3. Easy Integration with Business Processes: MaaS can generate alerts based on specific business
conditions. MaaS also supports multiple levels of escalation so that different user groups can get different
levels of alerts.
• 4. Cloud Aware and Cloud Ready: Since MaaS is already in the cloud, MaaS works well with other cloud
based products such as PaaS and SaaS. MaaS can monitor Amazon and Rackspace cloud infrastructure.
MaaS can monitor any private cloud deployments that a customer might have.
• 5. Low Maintenance Overheads: As a MaaS customer, you don't need to invest in a network operations
centre. Neither do you need to invest in an in-house team of qualified IT engineers to run the monitoring
desk, since the MaaS vendor does that on behalf of the customer.
• What can be monitored using MaaS:-
• MaaS is capable of monitoring all aspects of IT infrastructure assets.
• 1. Servers and Systems Monitoring:
• 2. Database Monitoring:
• 3. Network Monitoring:
• 4. Applications Monitoring
• 5. Cloud Monitoring:
• 6.Virtual Infrastructure Monitoring:
• 7. Storage Monitoring:
• 1. Servers and Systems Monitoring: Server Monitoring provides insights into the reliability of the server
hardware such as Uptime, CPU, Memory and Storage. Server monitoring is an essential tool in
determining functional and performance failures in the infrastructure assets.
• 2. Database Monitoring: Database monitoring on a proactive basis is necessary to ensure that databases
are available for supporting business processes and functions. Database monitoring also provides
performance analysis and trends which in turn can be used for fine tuning the database architecture and
queries, thereby optimizing the database for your business requirements.
• 3. Network Monitoring: Network availability and network performance are two critical parameters that
determine the successful utilization of any network – be it a LAN, MAN or WAN network. Disruptions in
the network affect business productivity adversely and can bring regular operations to a standstill.
Network monitoring provides pro-active information about network performance bottlenecks and source
of network disruption.
• 4. Applications Monitoring: Applications Monitoring provides insight into resource usage, application availability
and critical process usage for different Windows, Linux and other open source operating systems based
applications. Applications Monitoring is essential for mission critical applications that cannot afford to have even a
few minutes of downtime. With Application Monitoring, you can prevent application failures before they occur and
ensure smooth operations.

• 5. Cloud Monitoring: Cloud Monitoring for any cloud infrastructure such as Amazon or Rackspace gives
information about resource utilization and performance in the cloud. While cloud infrastructure is
expected to have higher reliability than on-premise infrastructure, quite often resource utilization and
performance metrics are not well understood in the cloud. Cloud monitoring provides insight into exact
resource usage and performance metrics that can be used for optimizing the cloud infrastructure.

• 6. Virtual Infrastructure Monitoring: Virtual Infrastructure based on common hypervisors such as ESX,
Xen or Hyper-V provides flexibility to the infrastructure deployment and provides increased reliability
against hardware failures. Monitoring virtual machines and related infrastructure gives information
around resource usage such as memory, processor and storage.

• 7. Storage Monitoring: A reliable storage solution in your network ensures anytime availability of
business-critical data. Storage monitoring for SAN, NAS and RAID storage devices ensures that your
storage solutions are performing at the highest levels. Storage monitoring reduces the downtime of storage
devices and hence improves the availability of business data.

• Database as a Service (DBaaS):


• Database as a Service (DBaaS):-Database as a service (DBaaS) is a cloud computing service model that
provides users with some form of access to a database without the need for setting up physical hardware,
installing software or configuring for performance. All of the administrative tasks and maintenance are
taken care of by the service provider so that all the user or application owner needs to do is use the
database. Of course, if the customer opts for more control over the database, this option is available and
may vary depending on the provider.

• OR
A cloud database is a collection of informational content, either structured or unstructured, that resides
on a private, public or hybrid cloud computing infrastructure platform. From a structural and design
perspective, a cloud database is no different than one that operates on a business's own on-premises
servers. The critical difference lies in where the database resides.

• A cloud database resides on servers and storage furnished by a cloud or database-as-a-service (DBaaS)
provider, and it is accessed solely through the internet. A cloud database may be a traditional database
such as a SQL Server database or MySQL. For example: Amazon SimpleDB, Microsoft SSDS (SQL Server
Data Services).

• DBaaS consists of a database manager component, which controls all underlying database instances via
an API.
• This API is accessible to the user via a management console, usually a web application, which the user
may use to manage and configure the database and even provision or deprovision database instances.

Database as a Service model

• Name of Cloud Database

• Amazon Relational Database Service


• Amazon Aurora, MySQL based service
• Clustrix Database as a Service
• EnterpriseDB Postgres Plus Cloud Database[20]
• Google Cloud SQL
• Heroku PostgreSQL as a Service (shared and dedicated database options)
• Oracle Database Cloud Service
• Microsoft Azure SQL Database (MS SQL)
• Amazon DynamoDB
• Amazon SimpleDB
• Azure Cosmos DB
• Cloudant Data Layer (CouchDB)
• Google Cloud Bigtable
• Google Cloud Datastore
• MongoDB Database as a Service (several options)
• Oracle NoSQL Database Cloud Service
Deployment Model of Cloud Database
• Cloud databases, like their traditional ancestors, can be divided into two broad categories:

• 1. Relational Database

• 2. Non-relational Database



• 1. Relational Database :-
• A relational database is organized based on the relational model of data, as proposed by E.F. Codd in
1970. This model organizes data into one or more tables (or “relations”) of rows and columns, with a
unique key for each row.

• A relational database, typically written in structured query language (SQL), is composed of a set of
interrelated tables that are organized into rows and columns.

• The relationship between tables and columns (fields) is specified in a schema. SQL databases, by design,
rely on data that is highly consistent in its format, such as banking transactions or a telephone directory.
Popular relational databases offered in the cloud include MySQL, Oracle, IBM DB2 and Microsoft SQL Server.

• Some of these, such as MySQL, are open source.

• Or

• 1) Relational databases, which can also be called relational database management systems (RDBMS) or
SQL databases. The most popular of these are Microsoft SQL Server, Oracle Database, MySQL, and IBM
DB2. These RDBMSs are mostly used in large enterprise scenarios, with the exception of MySQL, which is
mostly used to store data for web applications, typically as part of the popular LAMP stack (Linux, Apache,
MySQL, PHP/Python/Perl).

• 2. Non-relational Database.

• Non-relational databases, sometimes called NoSQL, do not employ a table model. Instead, they store
content, regardless of its structure, as a single document. This technology is well-suited for unstructured
data, such as social media content, photos and videos.

• OR

• 2) Non-relational databases, also called NoSQL databases, the most popular being MongoDB,
DocumentDB, Cassandra, Couchbase, HBase, Redis, and Neo4j. These databases are usually grouped into
four categories: key-value stores, graph stores, column stores, and document stores. NoSQL is simply the
term used to describe a family of databases that are all non-relational. While the technologies,
data types, and use cases vary wildly among them, it is generally agreed that there are four types of
NoSQL databases:

• 1. Key-value stores – These databases pair keys to values. An analogy is a file system where the path acts
as the key and the file contents act as the value. There are usually no fields to update; instead, the entire value
other than the key must be updated if changes are to be made.

• 2. Graph stores – These excel at dealing with interconnected data. Graph databases consist of
connections, or edges, between nodes. Both nodes and their edges can store additional properties such
as key-value pairs.

• 3. Column stores – Relational databases store all the data of a particular table's rows together on disk,
making retrieval of a particular row fast. Column stores instead keep the values of each column together
on disk, which makes aggregate queries over a single column much faster.

• 4. Document stores – These databases store records as “documents” where a document can generally be
thought of as a grouping of key-value pairs (it has nothing to do with storing actual documents such as a
Word document). Keys are always strings, and values can be stored as strings, numeric, Booleans, arrays,
and other nested key-value pairs. Values can be nested to arbitrary depths.
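
• As a tiny illustration of the key-value model described above (a plain Python dict standing in for a real store such as Redis or DynamoDB), note how an update replaces the whole value rather than a single field:

    store = {}

    store["/users/42/profile"] = {"name": "Asha", "plan": "pro"}  # put
    profile = store["/users/42/profile"]                          # get by key
    store["/users/42/profile"] = {**profile, "plan": "free"}      # replace whole value

    print(store["/users/42/profile"])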

• Databases in Cloud Computing Environment:


• At present, the cloud database environment is arguably the best available answer for most of the
needs of software developers around the globe who need to store their application data in a
backend that can be accessed, and adapted to changes, from even a single client. This
Database as a Service (DBaaS) has benefits over many traditional databases hosted on local and server machines, like
MySQL, Oracle, etc. Traditional relational data services (RDS) return data in an organized
way through SQL queries; such conventional cloud databases are maintained by Amazon RDS, Google Cloud SQL
and Microsoft Azure.

• On the other side, we have cloud databases like Amazon SimpleDB and Google Datastore, which are not RDS but
are based on NoSQL database concepts; these are unstandardized and do not require traditional RDBMS
queries to work with. There are some very popular databases in cloud computing. They
are mentioned below:

• 1. StormDB, MySQL, PostgreSQL

• 2. Google Cloud SQL, MongoLab


• As we are focusing here on MySQL and Google databases on the cloud, we will mention all the
details about them.

• MySQL: MySQL is an open-source relational database management system. It is owned by Oracle

Corporation and can be used under either the GNU General Public License or a standard commercial
license obtained from Oracle. MySQL is a robust, multi-threaded, transactional DBMS. It is highly scalable
and can be deployed over numerous servers.

• Google Cloud SQL is a service one can hire on Google Cloud that uses the MySQL database

• internally. It provides all the features of MySQL, and it is just as useful as it was in a
normal client-server architecture. One can use the MySQL Google Cloud service to support the database
requirements of small to medium-sized applications.

• MySQL databases can be deployed in the cloud without fuss. Google Cloud Platform
provides powerful databases that run fast, don't run out of space and give your application the
redundancy it needs.
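
• A minimal sketch of an application talking to a Cloud SQL (MySQL) instance with the mysql-connector-python driver; the host, user, password and database below are placeholders (Cloud SQL also offers other connection methods, such as its Auth Proxy):

    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(
        host="203.0.113.10",      # assumed public IP of the Cloud SQL instance
        user="appuser",           # placeholder credentials
        password="app-password",
        database="appdb",
    )
    cur = conn.cursor()
    cur.execute("SELECT VERSION()")  # ordinary MySQL SQL, unchanged on the cloud
    print(cur.fetchone())
    conn.close()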

• Following are some of the advantages of using MySQL Database in Cloud Computing.

• 1) Availability: Most systems built on open-source technology prefer MySQL databases for
application development. Hence it is available on a large scale and
programmers are comfortable using it.

• 2) Buy the database service only: Some cloud providers offer MySQL database hosting only
through a cloud-based hosting account. More recently, providers began offering databases as a
service, permitting people to pay just for the databases and not for a hosting account that they
have no use for.

• 3) Scalability: The scalability that comes from cloud MySQL databases cannot be matched by individual
or dedicated hardware. People would prefer not to ship in a bundle of database servers for occasional needs;
cloud-based MySQL databases are ideal for such circumstances.

• GOOGLE CLOUD MYSQL DATABASE ARCHITECTURE: The following Figure 2 represents the overall
architecture of how the Google MySQL cloud service works. The databases are stored cluster-wise/region-
wise on Google Cloud. Whenever a request comes from users (clients) through the internet, it is passed to
the Google Cloud Platform. A compute engine processes this request on the cloud server,
where it extracts the SQL RDBMS query from the request, and this query is processed in Cloud
SQL. If its syntax is correct, the query is executed on the Cloud MySQL database service, and
the result is passed back over the internet to the user (client).

• What is Google Cloud Platform (GCP)?


• Google Cloud Platform (GCP), offered by Google, is a suite of cloud computing services that runs on the
same infrastructure that Google uses internally for its end-user products, such as Google
Search and YouTube. Alongside a set of management tools, it provides a series of modular cloud services
including computing, data storage, data analytics and machine learning.

• Google Cloud Platform provides infrastructure as a service, platform as a service, and serverless
computing environments.

• Or

• Google Cloud Platform is a set of computing, networking, storage, big data, machine learning and
management services provided by Google that run on the same cloud infrastructure that Google uses
internally for its end-user products, such as Google Search, Gmail, Google Photos and YouTube.

• Features of GCP:
• Now let us look at some of the features of GCP that really give it an upper hand over other vendors.

• What are Google Cloud Platform (GCP) Services?


• Google offers a wide range of Services. Following are the major Google Cloud Services:

• 1. Compute

• 2. Networking

• 3. Storage and Databases

• 4. Big Data

• 5. Machine Learning

• 6. Identity & Security

• 7. Management and Developer Tools

• 1. Compute: GCP provides a scalable range of computing options you can tailor to match your needs. It
provides highly customizable virtual machines and the option to deploy your code directly or via
containers.

• 1. Google Compute Engine


• 2. Google App Engine
• 3. Google Kubernetes Engine
• (https://medium.com/faun/google-kubernetes-engine-explain-like-im-five-1890e550c099)
Kubernetes Engine (GKE):- Kubernetes Engine (GKE) is a managed, production-ready environment for
deploying containerized applications. It brings Google's latest innovations in developer productivity, resource
efficiency, automated operations, and open-source flexibility to accelerate time to market.

• Launched in 2015, Kubernetes Engine builds on Google's experience of running services like Gmail and
YouTube in containers for over 12 years. Kubernetes Engine allows you to get up and running
with Kubernetes in no time, by completely eliminating the need to install, manage, and operate your own
Kubernetes clusters.



• 4. Google Cloud Container Registry:- Container Registry is a single place for your team to manage Docker
images, perform vulnerability analysis, and decide who can access what with fine-grained access control.

• A Docker image is a file, comprised of multiple layers, used to execute code in a Docker container. ...
When the Docker user runs an image, it becomes one or multiple instances of that container. Docker is
an open source OS-level virtualization software platform primarily designed for Linux and Windows.


• 5. Cloud Functions:- Google Cloud Functions is a serverless execution environment for building and
connecting cloud services. With Cloud Functions you write simple, single-purpose functions that are
attached to events emitted from your cloud infrastructure and services.
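
• A minimal HTTP-triggered function in the Python runtime might look like the sketch below (the function name and greeting are illustrative); it could be deployed, for example, with: gcloud functions deploy hello --runtime python39 --trigger-http

    def hello(request):
        """HTTP Cloud Function: `request` is a flask.Request object."""
        name = request.args.get("name", "world")  # read a query parameter
        return f"Hello, {name}!"                  # the string becomes the HTTP body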

• 2. Networking: The Networking domain includes services related to networking; it includes the following
services:

• 1. Google Virtual Private Cloud (VPC)


• 2. Google Cloud Load Balancing
• 3. Content Delivery Network
• 4. Google Cloud Interconnect
• 5. Google Cloud DNS
• 1. Google Virtual Private Cloud (VPC):-A virtual private cloud is an on-demand configurable pool of shared
computing resources allocated within a public cloud environment, providing a certain level of isolation
between the different organizations using the resources.
• OR

• Virtual Private Cloud (VPC) gives you the flexibility to scale and control how workloads connect regionally
and globally. When you connect your on-premises or remote resources to GCP, you'll have global access
to your VPCs without needing to replicate connectivity or administrative policies in each region.

• A VPC network, sometimes just called a “network,” is a virtual version of a physical network, like a data
center network. It provides connectivity for your Compute Engine virtual machine (VM)
instances, Kubernetes Engine clusters, App Engine Flex instances, and other resources in your project.

• 2. Google Cloud Load Balancing:- Cloud Load Balancing includes support for the latest application delivery
protocols. It supports HTTP/2 with gRPC when connecting to backends, and it also helps control traffic-
related issues.

• There are two types of load balancers in Google Cloud Platform:


• Network Load Balancer and
• HTTP(s) Load Balancer.

• Note: - gRPC is a modern open source high performance RPC (Remote Procedure call) framework that
can run in any environment. It can efficiently connect services in and across data centers.

• 3. Content Delivery Network:-A content delivery network (CDN) refers to a geographically distributed
group of servers which work together to provide fast delivery of Internet content. A CDN allows for the
quick transfer of assets needed for loading Internet content including HTML pages, javascript files,
stylesheets, images, and videos. The popularity of CDN services continues to grow, and today the majority
of web traffic is served through CDNs, including traffic from major sites like Facebook, Netflix, and
Amazon.

Why Use a Content Delivery Network?-


CDNs (Content Delivery Networks) have changed web hosting in recent years. Rather than hosting
your website on one server, the load is distributed across multiple systems. You can host static content such as
videos, images, audio clips, CSS and JavaScript files.

Not every website needs a CDN, but once you start getting more traffic, you should consider using a CDN that suits
your needs. Google's ranking factors also include website loading time. Using a CDN not only reduces user
waiting time but also increases your search engine rankings.

• 4. Google Cloud Interconnect:-Cloud Interconnect extends your on-premises network to Google's network
through a highly available, low latency connection. You can use Google Cloud Interconnect - Dedicated
(Dedicated Interconnect) to connect directly to Google or use Google Cloud Interconnect - Partner
(Partner Interconnect) to connect to Google through a supported service provider.

• 5. Google Cloud DNS:- Publish your domain names using Google's infrastructure for production-quality,
high-volume DNS services. Google's global network of anycast name servers provides reliable, low-latency
authoritative name lookups for your domains from anywhere in the world.
Notes:- Low latency describes a computer network that is optimized to process a very high volume of data
messages with minimal delay (latency). These networks are designed to support operations that require
near real-time access to rapidly changing data.

• 3. Big Data:- Big data is a term that describes large volumes of data – both structured and
unstructured. Big data can be analyzed for insights that lead to better decisions and strategic business
moves.

• Big Data is data of a huge size: a collection of data that is huge in size and yet growing exponentially
with time. In short, such data is so large and complex that none of the traditional data management
tools are able to store it or process it efficiently.

• The Big Data domain includes services related to big data; it includes the following services:

• 1. Google BigQuery:-


• Storing and querying massive datasets can be time consuming and expensive without the right hardware
and infrastructure.

• BigQuery is an enterprise data warehouse that solves this problem by enabling super-fast SQL queries
using the processing power of Google's infrastructure. Simply move your data into BigQuery and let it
handle the hard work. You can control access to both the project and your data based on your business
needs, such as giving others the ability to view or query your data.

• Or

• BigQuery is a RESTful web service that enables interactive analysis of massive datasets, working in
conjunction with Google Storage. It is a serverless Platform as a Service that may be used
complementarily with MapReduce.

• Representational State Transfer (REST) is a software architectural style that defines a set of constraints to
be used for creating Web services. Web services that conform to the REST architectural style, called
RESTful Web services (RWS), provide interoperability between computer systems on the Internet.
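
• A minimal sketch of an interactive query from Python using the google-cloud-bigquery client library (credentials and project come from the environment; the query runs against a public sample dataset):

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client()
    sql = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name ORDER BY total DESC LIMIT 5
    """
    for row in client.query(sql).result():  # the SQL executes on Google's side
        print(row["name"], row["total"])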

• 2. Google Cloud Dataproc:-Google Cloud Dataproc is a cloud-based managed Apache Spark and Hadoop
service offered on Google Cloud Platform.

• 3. Google Cloud Datalab:- Cloud Datalab is a powerful interactive tool created to explore, analyze,
transform and visualize data and build machine learning models on Google Cloud.

• 4. Google Cloud Pub/Sub:-Cloud Pub/Sub brings the flexibility and reliability of enterprise message-
oriented middleware to the cloud. At the same time, Cloud Pub/Sub is a scalable, durable event ingestion
and delivery system that serves as a foundation for modern stream analytics pipelines.
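
• A minimal sketch of publishing one event with the google-cloud-pubsub client library; the project and topic IDs are placeholders, and credentials come from the environment:

    from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "events")  # assumed IDs

    # Messages are raw bytes; extra keyword arguments become message attributes.
    future = publisher.publish(topic_path, b"order-created", order_id="42")
    print("published message", future.result())  # blocks until the server acks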

• 4. Cloud AI:- The Cloud AI domain includes services related to machine learning; it includes the following
services:

• 1. Cloud Machine Learning:-Cloud Machine Learning Engine is a managed service that lets developers
and data scientists build and run superior machine learning models in production. Cloud ML Engine offers
training and prediction services, which can be used together or individually.

• 2. Vision API:-Cloud Vision API allows developers to easily integrate vision detection features within
applications, including image labeling, face and landmark detection, optical character recognition (OCR),
and tagging of explicit content. Cloud AutoML Vision enables you to create a custom machine learning
model for image labeling.
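
• A minimal label-detection sketch with the google-cloud-vision Python client (the bucket path is a
placeholder; exact class names vary slightly between library versions):

    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    # Point the API at an image stored in Cloud Storage.
    image = vision.Image(source=vision.ImageSource(image_uri="gs://my-bucket/photo.jpg"))

    # Ask for labels and print each label with its confidence score.
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, label.score)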

• 3. Speech API:-Google Cloud Speech-to-Text enables developers to convert audio to text by applying
powerful neural network models in an easy-to-use API. The API recognizes 120 languages and variants to
support your global user base. It can process real-time streaming or prerecorded audio, using Google's
machine learning technology.


• 4. Natural Language API:-The Cloud Natural Language API provides natural language understanding
technologies to developers, including sentiment analysis, entity analysis, entity sentiment analysis,
content classification, and syntax analysis. For the list of languages supported, see the Natural
Language API documentation.
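
• For illustration, a sentiment-analysis sketch with the google-cloud-language Python client (field
spellings differ slightly between library versions):

    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()

    document = language_v1.Document(
        content="Cloud computing makes collaboration easy.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )

    # Score ranges from -1.0 (negative) to 1.0 (positive);
    # magnitude reflects the overall strength of emotion.
    sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
    print(sentiment.score, sentiment.magnitude)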

• 5. Translation API:-Google Translate is a free multilingual machine translation service developed by
Google, to translate text. It offers a website interface, mobile apps for Android and iOS, and an API that
helps developers build browser extensions and software applications.

• 6. Jobs API:-Cloud Talent Solution transforms job search and candidate matching capabilities. Talent
Solution can interpret the vagueness of any job description, job search query, or profile search query,
and it keeps improving as it learns what job seekers and employers are looking for.

• 5. Identity & Security:- This domain includes services related to identity and security; it
includes the following services-

• 1. Cloud Resource Manager:-Google Cloud Platform provides resource containers such as organizations,
folders, and projects that allow you to group and hierarchically organize other GCP resources. This
hierarchical organization lets you easily manage common aspects of your resources such as access control
and configuration settings. Resource Manager enables you to programmatically manage these resource
containers.

• 2. Cloud IAM (Identity and Access Management):-Cloud Identity and Access Management (Cloud IAM)
enables you to create and manage permissions for Google Cloud Platform resources. Cloud IAM
unifies access control for Cloud Platform services into a single system and presents a consistent set of
operations.

• A crucial part of cloud security involves managing user identities, their permissions, and resources they
have access to. This can be an extremely challenging task for organizations who may have users accessing
public cloud resources from a number of different devices and networks.

• Cloud IAM (Cloud Identity and Access Management) is a key part of an organization’s overall cyber
security strategy when it comes to securing resources in the public cloud. Cloud IAM helps organizations
manage access control by helping to define “who” has “what” access for “which” resource. The “who” is a
member, the “what” is a role, and the resource is anything we want to grant permissions on in the public
cloud.
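
• As a conceptual sketch (plain Python data, not a live API call; the member and role shown are
placeholders), a single IAM policy binding pairs the "who" with the "what":

    # One IAM binding: WHO (members) gets WHAT (a role).
    binding = {
        "role": "roles/storage.objectViewer",     # the "what": a bundle of permissions
        "members": ["user:alice@example.com"],    # the "who": users, groups, service accounts
    }
    # The "which resource" is whatever the policy containing this binding
    # is attached to, e.g. a project, a bucket, or a dataset.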

• 3. Cloud Security Scanner:-

• Cloud Security Scanner is a web security scanner for common vulnerabilities in App Engine, Compute
Engine, and Google Kubernetes Engine applications. It can automatically scan and detect four common
vulnerabilities, including cross-site-scripting (XSS), Flash injection, mixed content (HTTP in HTTPS), and
outdated/insecure libraries. It enables early identification and delivers very low false-positive rates. You
can easily set up, run, schedule, and manage security scans, and it is available at no additional charge for
Google Cloud Platform users.

• 6. Management Tools: This domain includes services related to monitoring and management; it
includes the following services

• 1. Stackdriver (Monitoring Tool):-Google Stackdriver is a monitoring service that provides IT teams with
performance data about applications and virtual machines running on the Google Cloud Platform and
Amazon Web Services public cloud. It is based on collectd, an open source daemon that collects
system and application performance metrics.

• 2. Logging: - Stackdriver Logging allows you to store, search, analyze, monitor, and alert on log data and
events from Google Cloud Platform and Amazon Web Services (AWS). Its API also allows ingestion of any
custom log data from any source. Stackdriver Logging is a fully managed service that performs at scale
and can ingest application and system log data from thousands of VMs. Even better, you can analyze all
that log data in real time.
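
• For illustration, writing a custom log entry from Python with the google-cloud-logging client might
look like this (the logger name is a placeholder):

    from google.cloud import logging

    client = logging.Client()
    logger = client.logger("my-app-log")   # a named log within the project

    # Write a simple text entry; structured (JSON) entries are
    # also supported via log_struct().
    logger.log_text("Thumbnail job finished successfully.")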

3. Error Reporting:-Error Reporting is a Beta feature for the Google App Engine flexible environment,
Google Compute Engine, and AWS EC2. You can report errors from your application by sending them directly to
Stackdriver Logging with proper formatting or by calling an Error Reporting API endpoint that sends them for
you.

4. Trace:-Stackdriver Trace is a distributed tracing system that collects latency data from your
applications and displays it in the Google Cloud Platform Console. You can track how requests propagate through
your application and receive detailed near real-time performance insights. Stackdriver Trace automatically
analyzes all of your application's traces to generate in-depth latency reports that surface performance
degradations, and can capture traces from all of your VMs, containers, or App Engine projects.

• 7. Developer Tools: This domain includes services related to development; it includes the
following services

– 1. Cloud SDK
– 2. Deployment Manager
– 3. Cloud Test Lab
• 1. Cloud SDK:-The Cloud SDK is a set of tools for Google Cloud Platform. It contains the gcloud, gsutil, and bq
command-line tools, which you can use to access Compute Engine, Cloud Storage, BigQuery, and other
products and services from the command line. You can run these tools interactively or in your automated
scripts.
2. Deployment Manager:-Deployment Manager is an infrastructure deployment service that automates the
creation and management of Google Cloud Platform (GCP) resources.

3. Cloud Test Lab:- Google Cloud Test Lab runs automated tests, in accordance with your app's
targeting criteria, on several devices. Cloud Test Lab can run instrumentation tests that you write using
Espresso or Robotium. You can also use the Cloud Test Lab Robo Test to simulate user actions and find
crashes in your app.

• Creating a cluster (part of the Compute topic in Google Cloud Platform)


• This page shows you how to create a cluster in Google Kubernetes Engine. To learn about how clusters
work, refer to Cluster Architecture.
• Before you begin
• To prepare for this task, perform the following steps:
• Ensure that you have enabled the Google Kubernetes Engine API.
• Ensure that you have installed the Cloud SDK.
• Set your default project ID:
• -> gcloud config set project [PROJECT_ID]
• If you are working with zonal clusters, set your default compute zone:
• -> gcloud config set compute/zone [COMPUTE_ZONE]
• If you are working with regional clusters, set your default compute region:
• ->gcloud config set compute/region [COMPUTE_REGION]
• Update gcloud to the latest version:

• ->gcloud components update
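• After these preparation steps, a zonal cluster can typically be created with a single command; for
example (the cluster name and zone here are only illustrative):
• ->gcloud container clusters create my-cluster --zone us-central1-a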


• Types of clusters:-
• You can create four types of clusters in GKE:

• 1. Zonal clusters:-A zonal cluster runs in one or more compute zones within a region. A multi-zone cluster
runs its nodes across two or more compute zones within a single region. Zonal clusters run a single cluster
master.

• 2. Regional cluster: A regional cluster runs three cluster masters across three compute zones, and runs
nodes in two or more compute zones.

• 3. Private cluster:- A private cluster is a zonal or regional cluster which hides its cluster master and nodes
from the public Internet by default.

• 4. Alpha cluster:-An alpha cluster is an experimental zonal or regional cluster that runs with alpha
Kubernetes features enabled. Alpha clusters expire after 30 days, cannot be upgraded, do not receive
security updates, and are not supported for production use.

• Cluster architecture:-
• In Google Kubernetes Engine, a cluster consists of at least one cluster master and multiple worker
machines called nodes. These master and node machines run the Kubernetes cluster orchestration
system.

• A cluster is the foundation of GKE: the Kubernetes objects that represent your containerized applications
all run on top of a cluster.

• Cluster master
• The cluster master runs the Kubernetes control plane processes, including the Kubernetes API server,
scheduler, and core resource controllers. The master's lifecycle is managed by GKE when
you create or delete a cluster. This includes upgrades to the Kubernetes version running on the cluster
master, which GKE performs automatically or manually at your request if you prefer to upgrade earlier
than the automatic schedule.

• Cluster master and the Kubernetes API


• The master is the unified endpoint for your cluster. All interactions with the cluster are done via
Kubernetes API calls, and the master runs the Kubernetes API Server process to handle those requests.
You can make Kubernetes API calls directly via HTTP/gRPC, or indirectly, by running commands from the
Kubernetes command-line client (kubectl) or interacting with the UI in the GCP Console.

• The cluster master's API server process is the hub for all communication for the cluster. All internal cluster
processes (such as the cluster's nodes, system components, and application controllers) act as clients of
the API server; the API server is the single "source of truth" for the entire cluster.

• Introduction to Microsoft Azure

• A Cloud Computing Service

What is Azure?
• Azure is Microsoft’s cloud platform, just as Google has its Google Cloud and Amazon has its Amazon
Web Services (AWS). Generally, it is a platform through which we can use Microsoft’s resources. For
example, setting up a huge server would require huge investment, effort, physical space and so on. In
such situations, Microsoft Azure comes to our rescue. It provides us with virtual machines, fast
processing of data, analytical and monitoring tools and so on to make our work simpler. The pricing of
Azure is also simpler and cost-effective, popularly termed “Pay As You Go”: you pay only for what
you use.

•Azure History

• Microsoft unveiled Windows Azure in early October 2008, but it went live in February 2010. Later, in
2014, Microsoft changed its name from Windows Azure to Microsoft Azure. Azure provided a service
platform for .NET services, SQL services, and many Live services. Many people were still very skeptical
about “the cloud”; as an industry, we were entering a brave new world with many possibilities. Microsoft
Azure keeps getting bigger and better, with more tools and more functionality being added. It has had
two releases so far: Microsoft Azure v1 and, later, Microsoft Azure v2. Microsoft Azure v1 was more
JSON-script driven than the new version v2, which has an interactive UI for simplification and easy
learning. Microsoft Azure v2 is still in the preview version.

• Microsoft Azure Services


• The following are some of the services Microsoft Azure offers:

• 1. Compute: Includes Virtual Machines, Virtual Machine Scale Sets, Functions for serverless computing,
Batch for containerized batch workloads, Service Fabric for microservices and container orchestration,
and Cloud Services for building cloud-based apps and APIs.

• 2. Networking: With Azure you can use a variety of networking tools, like the Virtual Network, which can
connect to on-premises data centers; Load Balancer; Application Gateway; VPN Gateway; Azure DNS for
domain hosting; Content Delivery Network; Traffic Manager; ExpressRoute dedicated private network
fiber connections; and Network Watcher monitoring and diagnostics.

• 3. Storage: Includes Blob, Queue, File and Disk Storage, as well as a Data Lake Store, Backup and Site
Recovery, among others.

• 4. Web + Mobile: Creating Web + Mobile applications is very easy as it includes several services for building and
deploying applications.

• 5. Containers: Azure includes Container Service, which supports Kubernetes, DC/OS
or Docker Swarm, and Container Registry, as well as tools for microservices.

• 6. Databases: Azure also includes several SQL-based databases and related tools.

• 7. Data + Analytics: Azure has big data tools like HDInsight for Hadoop, Spark, R Server, HBase and
Storm clusters.

• 8. AI + Cognitive Services: Azure supports developing applications with artificial intelligence capabilities,
like the Computer Vision API, Face API, Bing Web Search, Video Indexer, and Language Understanding
Intelligent Service.

• 9. Internet of Things: Includes IoT Hub and IoT Edge services that can be combined with a variety of
machine learning, analytics, and communications services.

• 10. Security + Identity: Includes Security Center, Azure Active Directory, Key Vault and Multi-Factor
Authentication Services.

• 11. Developer Tools: Includes cloud development services like Visual Studio Team Services, Azure DevTest
Labs, HockeyApp mobile app deployment and monitoring, Xamarin cross-platform mobile development
and more.
OR
• Microsoft Azure Services
• Microsoft Azure is widely considered both a Platform as a Service (PaaS) and Infrastructure as a Service
(IaaS) offering.
• Azure products and services:-
• As of July 2018, Microsoft categorizes Azure cloud services into 12 main product types:
• 1. Compute -- These services enable a user to deploy and manage virtual machines (VMs), containers and
batch processing, as well as support remote application access.
• 2. Web Technology-- These services support the development and deployment of web applications, and
also offer features for search, content delivery, application programming interface (API) management,
notification and reporting.
• 3. Data storage -- This category of services provides scalable cloud storage for structured and
unstructured data and also supports big data projects, persistent storage (for containers) and archival
storage. This category includes Database as a Service (DBaaS) offerings for SQL and NoSQL, as well as
other database instances, such as Azure Cosmos DB and Azure Database for PostgreSQL. It also includes
SQL Data Warehouse support, caching, and hybrid database integration and migration features.
• 4. Analytics Engine & IOT-- These services provide distributed analytics and storage, as well as features
for real-time analytics, big data analytics, data lakes, machine learning, business intelligence (BI), internet
of things (IoT) data streams and data warehousing.
• 5. Networking -- This group includes virtual networks, dedicated connections and gateways, as well as
services for traffic management and diagnostics, load balancing, domain name system (DNS) hosting, and
network protection against distributed denial-of-service (DDoS) attacks.
• 6. Containers -- These services help an enterprise create, register, orchestrate and manage huge volumes
of containers in the Azure cloud, using common platforms such as Docker and Kubernetes.

• 7. DevOps Or Development Platform-- These services help application developers share code, test
applications and track potential issues. Azure supports a range of application programming languages,
including JavaScript, Python, .NET and Node.js. Tools in this category also include support for Visual
Studio, software development kits (SDKs) and blockchain.
• 8. Media and content delivery network (CDN) -- These services include on-demand streaming, digital
rights protection, encoding and media playback and indexing.
• Note:-A blockchain, originally block chain, is a growing list of records, called blocks, that are linked using
cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and
transaction data. By design, a blockchain is resistant to modification of the data.
• 9. Identity and access management (IAM) -- These offerings ensure only authorized users can access
Azure services, and help protect encryption keys and other sensitive information in the cloud. Services
include support for Azure Active Directory and multifactor authentication (MFA).
• 10. Security -- These products provide capabilities to identify and respond to cloud security threats, as
well as manage encryption keys and other sensitive assets.
• 11. Artificial intelligence (AI) and machine learning -- This is a wide range of services that a developer can
use to infuse machine learning, AI and cognitive computing capabilities into applications and data sets.
• 12. Migration -- This suite of tools helps an organization estimate workload migration costs, and perform
the actual migration of workloads from local data centers to the Azure cloud.

• Introduction to Big Data


• Big Data:- Big data is a term that describes the large volume of data – both structured and
unstructured . Big data can be analyzed for insights that lead to better decisions and
strategic business moves.
• Big Data is also data but with a huge size of data. Big Data is a term used to describe a
collection of data that is huge in size and yet growing exponentially with time. In short
such data is so large and complex that none of the traditional data management tools are
able to store it or process it efficiently.
• Or
• Big data refers to voluminous, large sets of data whereas cloud computing refers to the
platform for accessing large sets of data. In other words, big data is information
while cloud computing is the means of getting information. Big Data is a terminology used
to describe huge volume of data and information.
Big Data and Cloud Computing
• Big data deals with massive structured, semi-structured or unstructured data to store and
process it for data analysis purpose
• There are five aspects of Big Data which are described through 5Vs
• Volume – the amount of data
• Variety – different types of data
• Velocity – data flow rate in the system
• Value – the value of data based on the information contained within
• Veracity – the trustworthiness and quality of the data

• Introduction to Hadoop:-
• What is Hadoop:-
• Hadoop is an open-source software framework for storing a large amount of data and
performing computation on it. The framework is based on Java programming, with some native code in C
and shell scripts.

• History of Hadoop:-

• Hadoop was developed under the Apache Software Foundation; its co-founders are Doug
Cutting and Mike Cafarella.
Co-founder Doug Cutting named it after his son’s toy elephant. In October 2003, Google published its
Google File System paper. In January 2006, MapReduce development started on Apache Nutch, which
consisted of around 6,000 lines of code for MapReduce and around 5,000 lines of code for HDFS. In April
2006, Hadoop 0.1.0 was released.

• Hadoop Distributed File System:-

• Hadoop has a distributed file system known as HDFS. HDFS splits files into blocks and sends them across
various nodes in large clusters. In case of a node failure, the system continues to operate, and data
transfer between the nodes is facilitated by HDFS.

• Advantages of HDFS:-
It is inexpensive, immutable in nature, stores data reliably, can tolerate faults, is scalable and block
structured, can process a large amount of data simultaneously, and much more.

• Disadvantages of HDFS:-
Its biggest disadvantage is that it is not suitable for small quantities of data. It also has issues related to
potential stability, and it is restrictive and rough in nature.

• Hadoop also supports a wide range of software packages such as Apache Flumes, Apache Oozie, Apache
HBase, Apache Sqoop, Apache Spark, Apache Storm, Apache Pig, Apache Hive, Apache Phoenix, Cloudera
Impala.

• Introduction to MapReduce:-


• MapReduce is a data-processing technique, often used with NoSQL and big data systems, for converting
large volumes of unstructured data into structured results.

• Or

• MapReduce is a programming model for processing large data sets with a parallel, distributed algorithm
on a cluster. MapReduce coupled with HDFS can be used to handle big data; this HDFS-MapReduce
system is commonly referred to as Hadoop.

• Or

• MapReduce is a framework with which we can write applications to process huge amounts of data, in
parallel, on large clusters of commodity hardware in a reliable manner.

• What is MapReduce?

• MapReduce is a processing technique and a programming model for distributed computing based on Java. The
MapReduce algorithm contains two important tasks, namely Map and Reduce. Map takes a set of data
and converts it into another set of data, where individual elements are broken down into tuples
(key/value pairs).

• Second, the reduce task takes the output from a map as an input and combines those data tuples
into a smaller set of tuples.

• As the sequence of the name MapReduce implies, the reduce task is always performed after the map job.

• The major advantage of MapReduce is that it is easy to scale data processing over multiple computing
nodes. Under the MapReduce model, the data processing primitives are called mappers and reducers.

• Decomposing a data processing application into mappers and reducers is sometimes nontrivial. But, once
we write an application in the MapReduce form, scaling the application to run over hundreds, thousands,
or even tens of thousands of machines in a cluster is merely a configuration change.

• This simple scalability is what has attracted many programmers to use the MapReduce model.

• The Algorithm:-
• Generally, the MapReduce paradigm is based on sending the computation to where the data resides. A
MapReduce program executes in stages, namely the map stage and the reduce task (shuffle stage and
reduce stage); the full pipeline terminology is: input phase, split phase, map phase, shuffle-and-sort
phase, reduce phase, and output.

• Map stage − The map or mapper’s job is to process the input data. Generally the input data is in
the form of a file or directory and is stored in the Hadoop file system (HDFS). The input file is passed
to the mapper function line by line. The mapper processes the data and creates several small
chunks of data.

• Reduce stage − This stage is the combination of the Shuffle stage and the Reduce stage. The
Reducer’s job is to process the data that comes from the mapper. After processing, it produces a
new set of output, which will be stored in the HDFS.
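
• To make the two stages concrete, here is a minimal pure-Python word-count sketch of the MapReduce
flow (illustrative only; a real Hadoop job distributes these phases across many nodes):

    from itertools import groupby
    from operator import itemgetter

    def mapper(line):
        # Map: emit a (key, value) pair for every word in the line.
        for word in line.split():
            yield (word.lower(), 1)

    def reducer(word, counts):
        # Reduce: combine all values that share the same key.
        return (word, sum(counts))

    lines = ["the cloud stores data", "the cloud scales"]

    # Map phase over every input line.
    pairs = [pair for line in lines for pair in mapper(line)]

    # Shuffle and sort: group the intermediate pairs by key.
    pairs.sort(key=itemgetter(0))
    results = [reducer(word, (count for _, count in group))
               for word, group in groupby(pairs, key=itemgetter(0))]

    print(results)  # [('cloud', 2), ('data', 1), ('scales', 1), ('stores', 1), ('the', 2)]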


• Example :-Let us try to understand the two tasks Map & Reduce with the help of a small diagram −


Another Example


• During a MapReduce job, Hadoop sends the Map and Reduce tasks to the appropriate servers in the
cluster.

• The framework manages all the details of data-passing such as issuing tasks, verifying task completion,
and copying data around the cluster between the nodes.

• Most of the computing takes place on nodes with data on local disks that reduces the network traffic.

• After completion of the given tasks, the cluster collects and reduces the data to form an appropriate
result, and sends it back to the Hadoop server.

• Containers:-

• A container organizes a set of blobs, similar to a directory in a file system. A storage account can include
an unlimited number of containers, and a container can store an unlimited number of blobs.

Blob storage is a feature in Microsoft Azure that lets developers store unstructured data in Microsoft's cloud
platform. This data can be accessed from anywhere in the world and can include audio, video and text. Blobs are
grouped into "containers" that are tied to user accounts.

• About Blob storage:-Blob storage is designed for:

• Serving images or documents directly to a browser.

• Storing files for distributed access.

• Streaming video and audio.

• Writing to log files.

• Storing data for backup and restore, disaster recovery, and archiving.

• Storing data for analysis by an on-premises or Azure-hosted service.

• Types of Blobs:-

• Azure Storage supports three types of blobs:

• 1. Page blobs store random access files up to 8 TB in size. Page blobs store virtual hard drive (VHD) files
and serve as disks for Azure virtual machines.

• 2. Block blobs store text and binary data, up to about 4.7 TB. Block blobs are made up of blocks of data
that can be managed individually.

• 3. Append blobs are made up of blocks like block blobs, but are optimized for append operations. Append
blobs are ideal for scenarios such as logging data from virtual machines.
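
• As a sketch, uploading a file as a block blob with the azure-storage-blob Python library (the
connection string, container and blob names are placeholders):

    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<connection-string>")
    blob = service.get_blob_client(container="photos", blob="holiday.jpg")

    # Upload the local file; overwrite any existing blob of the same name.
    with open("holiday.jpg", "rb") as data:
        blob.upload_blob(data, overwrite=True)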

• Disk storage:-

An Azure managed disk is a virtual hard disk (VHD). You can think of it like a physical disk in an on-premises
server but, virtualized. Azure managed disks are stored as page blobs, which are a random IO storage object in
Azure. We call a managed disk ‘managed’ because it is an abstraction over page blobs, blob containers, and
Azure storage accounts. With managed disks, all you have to do is provision the disk, and Azure takes care of the
rest.

• Queue storage:-

• The Azure Queue service is used to store and retrieve messages. Queue messages can be up to 64 KB in
size, and a queue can contain millions of messages. Queues are generally used to store lists of messages
to be processed asynchronously.

• For example, say you want your customers to be able to upload pictures, and you want to create
thumbnails for each picture. You could have your customer wait for you to create the thumbnails while
uploading the pictures. An alternative would be to use a queue. When the customer finishes their upload,
write a message to the queue. Then have an Azure Function retrieve the message from the queue and
create the thumbnails. Each of the parts of this processing can be scaled separately, giving you more
control when tuning it for your usage.
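
• A minimal sketch of that pattern with the azure-storage-queue Python library (the connection string
and queue name are placeholders):

    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string("<connection-string>", "thumbnail-jobs")

    # Producer: record that a newly uploaded picture needs a thumbnail.
    queue.send_message("photos/holiday.jpg")

    # Consumer (e.g. an Azure Function or worker): process and delete messages.
    for message in queue.receive_messages():
        print("creating thumbnail for", message.content)
        queue.delete_message(message)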

• Table storage:-

• Azure Table storage is now part of the Azure Cosmos DB family; see the Azure Table storage
documentation for details. There is a new Azure Cosmos DB Table API offering that provides
throughput-optimized tables, global distribution, and automatic secondary indexes.

• Azure Table storage is a service that stores structured NoSQL data in the cloud, providing a key/attribute
store with a schemaless design. Because Table storage is schemaless, it's easy to adapt your data as the
needs of your application evolve. Access to Table storage data is fast and cost-effective for many types of
applications, and is typically lower in cost than traditional SQL for similar volumes of data.

• You can use Table storage to store flexible datasets like user data for web applications, address books,
device information, or other types of metadata your service requires. You can store any number of
entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the
storage account.

• Table storage concepts:-

• Table storage contains the following components:

• 1. URL format: Azure Table Storage accounts use this format: http://<storage account>.table.core.windows.net/<table>

• Azure Cosmos DB Table API accounts use this format: http://<storage account>.table.cosmosdb.azure.com/<table>

• 2. Accounts: All access to Azure Storage is done through a storage account.

• 3. Table: A table is a collection of entities. Tables don't enforce a schema on entities, which means a
single table can contain entities that have different sets of properties.

• 4. Entity: An entity is a set of properties, similar to a database row. An entity in Azure Storage can be up
to 1MB in size. An entity in Azure Cosmos DB can be up to 2MB in size.

• 5. Properties: A property is a name-value pair. Each entity can include up to 252 properties to store data.
Each entity also has three system properties that specify a partition key, a row key, and a timestamp.
Entities with the same partition key can be queried more quickly, and inserted/updated in atomic
operations. An entity's row key is its unique identifier within a partition.

• Note: - A timestamp is temporal information regarding an event that is recorded by the computer and
then stored as a log or metadata.
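
• For illustration, inserting an entity with the azure-data-tables Python library (the connection
string, table name and key values are placeholders):

    from azure.data.tables import TableClient

    table = TableClient.from_connection_string("<connection-string>", table_name="devices")

    # PartitionKey and RowKey are the two required system properties;
    # everything else is an ordinary user-defined property.
    entity = {
        "PartitionKey": "building-1",
        "RowKey": "device-42",
        "temperature": 21.5,
        "firmware": "v2.3",
    }
    table.create_entity(entity)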

• UNIT-III

• Collaborating Using Cloud Services:-

• Cloud collaboration is the process wherein shared documents are stored in a central cloud storage.
Different people -- usually members of a team -- have access to the cloud, giving them the authorization
to upload, open, modify, and leave comments on documents. As a result of this arrangement, people can
virtually work together on the same files.

In recent years, more and more organizations have turned to the cloud to
make telecommuting convenient for their remote teams.

• OR
Cloud collaboration is a type of enterprise collaboration that allows employees to work together on
documents and other data types, which are stored off-premises and outside of the company firewall.
Employees use a cloud-based collaboration platform to share, edit and work together on projects.
Cloud collaboration enables two or more people to work on a project at once.

A cloud collaboration project begins when one user creates the file or
document and then gives access to certain individuals; for example, the project
creator may share a link to the project that allows others to view and edit it.
Users can make changes to the document at any time, including when
employees are viewing and working simultaneously. All changes are saved and
synced so every user sees the same version of the project.

• Or

• Cloud collaboration is a way of sharing and co-authoring computer files through the use of cloud
computing, whereby documents are uploaded to a central "cloud" for storage, where they can then be
accessed by others. Cloud collaboration technologies allow users to upload, comment and collaborate on
documents and even amend the document itself, evolving the document.

• Cloud collaboration :-Overview:-

• Cloud computing is a marketing term for technologies that provide software, data access, and storage
services that do not require end-user knowledge of the physical location and configuration of the system
that delivers the services. A parallel to this concept can be drawn with the electricity grid, where
end-users consume power without needing to understand the component devices or infrastructure
required to utilize the technology.

• Collaboration refers to the ability of workers to work together simultaneously on a particular task.
Document collaboration can be completed face to face. However, collaboration has become more
complex, with the need to work with people all over the world in real time on a variety of different types
of documents, using different devices. A 2003 report mapped out a number of reasons why workers are
reluctant to collaborate more.

• These are:

• People resist sharing their knowledge.

• Safety issues.

• Users are most comfortable using e-mail as their primary electronic collaboration tool.

• People do not have an incentive to change their behavior.

• Teams that want to or are selected to use the software do not have strong team leaders who push for
more collaboration.

• Senior management is not actively involved in or does not support the team collaboration initiative.

• As a result, many providers created cloud collaboration tools. These include the
integration of email alerts into collaboration software and the ability to see
who is viewing the document at any time. All the tools a team could need are
put into one piece of software so workers no longer have to rely on email.

• Collaborating on-

• 1. Email Communication over the Cloud
• 2. CRM Management
• 3. Project Management
• 4. Event Management
• 5. Task Management
• 6. Calendar
• 7. Schedules
• 8. Word Processing – Presentation
• 9. Spreadsheet
• 10. Databases
• 11. Desktop – Social Networks and Groupware

• Centralizing email Communications:-

• Cloud computing can help families: consider how a typical family can use cloud-based tools to help
improve communications between family members. The key here is to enable anywhere/anytime access
to email.

• Pre-cloud computing, your email access was via a single computer, which also
stored all your email messages. For this purpose, you probably used a program
like Microsoft Outlook or Outlook Express, installed on your home computer. If
you wanted to check your home email from work, it took a bit of juggling and
perhaps the use of your ISP’s email access web page. That web page was never
in sync with the messages on your home PC, of course, which is just the start of
the problems with trying to communicate in this fashion. A better approach is
to use a web-based email service, such as Google’s Gmail (mail.google.com),
Microsoft’s Windows Live Hotmail (mail.live.com), or Yahoo! Mail
(mail.yahoo.com). These services place your email inbox in the cloud; you can
access it from any computer connected to the Internet.

• You can check your web based email whether you’re in the office or on the
road. Just make sure you’re connected to the Internet, and then open your
web browser and log in to the Gmail or Windows Live Hotmail or Yahoo! Mail
website. Go to your inbox and you’ll find your spouse’s message; reply as
necessary and await your spouse’s response. Even if you change locations or
computers, your spouse’s message remains in your inbox, and your reply
remains in your sent messages folder.

• Collaborative CRM:-Collaborative CRM is an approach to customer relationship management (CRM) in
which the various departments of a company, such as sales, technical support, and marketing, share any
information they collect from interactions with customers. For example, customer feedback gathered
from a technical support session could inform marketing staff about products and services that might be
of interest to the customer. The purpose of collaboration is to improve the quality of customer service,
and, as a result, increase customer satisfaction and loyalty.

• Or

• Collaborative Customer Relationship Management (Collaborative CRM or CCRM) is a CRM approach in
which the customer interaction data of an organization is integrated and synchronously shared to
enhance customer satisfaction and loyalty for maximized profitability and revenue. Collaborative CRM
integrates customers, processes, strategies and insight, allowing organizations to more effectively and
efficiently serve and retain customers.

• Collaborative CRM can be broadly identified by two aspects/steps:

• Interaction Management- This management process deals with designing the communication or
interaction channels within an organization that are specific to customer interaction, and with enhancing
the extent of communication between both parties. The communication channel depends on the
customers’ preference for how they want the interaction to be handled. Some customers prefer to be
contacted via phone and email, because it is more comfortable or because face-to-face interaction is not
possible due to lack of time or resources. Some prefer live online meetings or web meetings, to reduce
travel time or because they prefer the clarity of a real-time environment where they can transact from
their desk. Some customers insist on agent-conducted service, often face-to-face interaction, as they
believe this way is more efficient and conclusive. Depending on these channels of interaction, it is very
important for the organization to fulfill these customer needs, gather information from customers, and
feed it into the CRM before interacting, to enhance the interaction power.

• Channel Management- After analyzing and implementing the interaction medium, it is important to
enhance the power of the channels through which the customers are contacted. Using the latest
technology to improve channel interaction can help to contact customers in an efficient way and to
gather information from them that helps the organization understand its customers. Hence it is
important for an organization to clearly assign channel responsibilities and duties.

• Advantages of Collaborative CRM systems:

• Cuts down customer service costs.

• Increases the value-add of your products.

• Provides for better up-selling to existing clients.

• Increases customer retention rates and loyalty.

• Helps communicate with, retain and serve more customers with fewer
resources.

• Improves channel interactions.

• What is event management software?

• Event management software, simply put, is a set of business solutions that cover the different aspects of
organizing an event, from planning to post-event stages. Some solutions are end-to-end systems that
provide tools for the entire event lifecycle, while other apps are focused on specific processes of event
organizing, which can include registration, ticketing, floor planning, schedulers, analytics and surveys.

• Here are examples of event management software:

• Eventbrite. An end-to-end event management software solution that covers the entire event lifecycle.
From planning to event to post-event, the solution features tools to streamline and automate mundane
processes like registration, badge printing and reporting. It is also easy to promote your event utilizing
the software’s integrated social marketing and Facebook promotion. Key features include group
registration, reserved seating, fundraising/crowdfunding, online payments and audience polling.

• Cvent. One of the most comprehensive event management solutions, Cvent has feature sets specific to
different users by industry or role, or for third-party planners. Among others, the solution helps you find
the best venue that fits your needs and budget, track expenses, calculate ROI and improve the overall
efficiency of your event teams. It also features tools for targeted email marketing, mainly a centralized
guest list database that can be sorted by different metrics. Likewise, Cvent makes badge printing and
on-site check-ins faster and smoother using consolidated data culled from online registration. Key
features include: event registration, payment processing, budget management, on-site tools and mobile
payments.

• XING Events. It is adaptive to different events, from small networking nights to large events. The
Munich-based solution is geared towards German-speaking territories but can be utilized for the U.S. and
other countries. The solution consists of three modules: ticketing, event promotion and event
management. It can also integrate with CRM systems to manage your post-event marketing. Among
others, this solution features an event site builder and an online ticketing booth.

• Eventzilla. Popular event management software, Eventzilla covers your registration and ticketing
processes. The app features secure payment channels integrating with key gateways like PayPal, Stripe,
Authorize.net and Braintree. Attendees can download their tickets online, greatly reducing your team’s
workload. Likewise, it facilitates post-event surveys to give you a deeper understanding of audience
feedback, insights that can help you with your next event. Features include discount codes, waiting lists,
location map display and on-site ticket sales.



• Project collaboration:-Project collaboration is a method by which teams and team leaders plan,
coordinate, control and monitor the project they are working on. This collaborative process works across
departmental, corporate and national boundaries and helps especially with projects as they grow in
complexity.

• Everybody in the project has access to the information in the project such as
tasks, messages, and documents etc. This information is updated in real-time
when changes occur. With the advent of Collaborative software more project
teams use collaboration tools in their projects.

• With the trend towards remote teams and moving data to cloud servers,
collaboration, which has always been foundational to teamwork, has become
even more of a buzzword. But what does project collaboration mean? Many
things, and we’ll discuss them and how you can apply them when leading your
projects.


UNIT-IV
Introduction to Virtualization in Cloud Computing

What is Virtualization?
Wikipedia says “Virtualization, in computing, is the creation of a virtual (rather than actual) version of something, such as a
hardware platform, operating system, a storage device or network resources”.

Virtualization is a technique that creates a virtual machine version of a device or of computer resources, such as a
storage device, network, server or operating system, in which the framework partitions the resources into one or
more execution environments. A cloud can be called a virtualization of resources that gets maintained and
managed by itself.

Virtualization allows multiple virtual machines to be used on a single computer system. Or

Virtualization can be defined as a methodology of partitioning of the computer resources into multiple environments.

Virtualization is the ability to run multiple operating systems on a single physical system and share the underlying hardware
resources. It is the process by which one computer hosts the appearance of many computers.

Virtualization is used to improve IT throughput and costs by using physical resources as a pool from which virtual resources
can be allocated.

Virtualization is a technique for separating a service from the underlying physical delivery of that service. It is
the process of creating a virtual version of something like computer hardware. It was initially developed during the
mainframe era. It involves using specialized software to create a virtual, software-created version of a computing
resource rather than the actual version of the same resource. With the help of virtualization, multiple operating
systems and applications can run on the same machine and the same hardware at the same time, increasing the
utilization and flexibility of the hardware.

• In other words, one of the main cost-effective, hardware-reducing, energy-saving techniques used by cloud
providers is virtualization. Virtualization is a technique that allows a single physical instance of a
resource or an application to be shared among multiple customers and organizations at one time.

• The idea of virtualization is not new. It was introduced by IBM in the 1960s, when mainframe computers were in
use. Mainframe computers were underutilized most of the time; hence, to amplify the resource utilization of these
mainframe computers, virtualization technology was introduced, which allows multiple OSs (operating
systems) to run simultaneously.

• The term virtualization is often synonymous with hardware virtualization, which plays a fundamental role in
efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing. Moreover, virtualization
technologies provide a virtual environment for not only executing applications but also for storage, memory, and
networking.

• virtualization is often:

• The creation of many virtual resources from one physical resource.

• The creation of one virtual resource from one or more physical resources.

• BENEFITS OF VIRTUALIZATION:

1. More flexible and efficient allocation of resources.


2. Enhance development productivity.
3. It lowers the cost of IT infrastructure.
4. Remote access and rapid scalability.
5. High availability and disaster recovery.
6. Pay per use of the IT infrastructure on demand.
7. Enables running multiple operating systems.

• Virtualization Architecture :-

• OS assumes complete control of the underlying hardware.

• Virtualization architecture provides this illusion through a hypervisor/VMM.

• Hypervisor/VMM is a software layer which:

• allows multiple Guest OSs (Virtual Machines) to run simultaneously on a single physical host.

• Provides hardware abstraction to the running Guest OSs and efficiently multiplexes underlying hardware
resources.

• Difference Between Multiprogramming & Virtualization -

• In multiprogramming, the CPU is shared among processes; in virtualization, the CPU is shared among OSs.

• In multiprogramming, memory is shared using page tables; in virtualization, memory is shared using multiple
page tables (more levels of indirection).

• In multiprogramming, a process knows it is being managed – it uses system calls; in virtualization, an OS may
or may not know that it is being managed.

• Types of virtualization
• 1. Server Virtualization

• Server virtualization allows multiple servers to be installed on one or more existing servers. This saves floor space
and money since you don’t have to purchase new servers or expand the square footage of your server room.

• The benefits of server virtualization include:

Multiple operating systems can be run on a single physical server (host).

• Many physical servers can often be consolidated into one or two physical servers, saving your small business
money that would have been spent on physical servers.

• Your small business’s electricity requirements will decrease—fewer servers run on less power and will also
generate less heat which will reduce your server room cooling bill

• Virtualizing most servers onto one or two physical servers reduces server maintenance costs.

• Additional RAM, processor power or storage space can be quickly and easily allocated to any virtual server.

• In case of virtual server error, quick restores can be done from locally stored backups.

• Virtual servers are easily moved between host servers, allowing maximum use of available processing power.

• 2. Desktop Virtualization

• Desktop virtualization removes the need for a CPU at each computer station. Each user will still have a monitor and
mouse, but will have their desktop CPU virtually stored on a local server.

• Benefits of desktop virtualization include:

• Virtual Desktops can run on multiple types of hardware such as: workstations, Thin Clients, laptops and some smart
phones.

Centralizing the virtualized “CPUs” of desktops provides increased stability through better administration of workstations
and increased security because the host system keeps all workstations up to date with patches and hot fixes.

Virtual desktops can be quickly created after an initial “original” virtual machine has been produced—anytime a new
desktop computer is needed, copy the original, give it a name and it is ready for immediate use.

• Users love it because their machine is “never down” and they always have access to their customizations on
their virtual machines. Virtual desktops also reduce the carbon footprint and lower the total cost of ownership
when compared to maintaining physical machines.

• 3. Application Virtualization

• This is a process where applications get virtualized and are delivered from a server to the end user’s device, such as
laptops, smart phones, and tablets. So instead of logging into their computers at work, users will be able to gain
access to the application virtually from anywhere, provided an Internet connection is available.

• Application virtualization separates individual software applications from the operating system allowing the user to
run almost any application on most of the operating systems.

• Other benefits of Application virtualization include:

• Application virtualization separates applications from the operating systems and can run the applications on work
stations, thin clients, laptops and some smart phones.

• Applications are run centrally so you don’t have to worry about having enough storage space on the local desktop
hard drive

• Multiple applications can run at the same time without bogging down the system or conflicting with other apps.

Virtualized applications can be installed, maintained, and patched as soon as updates are available.

• 4. Storage Virtualization

• Storage virtualization is the process of grouping the physical storage from multiple network storage devices so that
it looks like a single storage device. This concept is basically used in Storage Area Network (SAN) environment.

• Through storage virtualization, a new layer of a software and/or hardware is created in between the storage
system and the server and so the applications will no longer be in the need of knowing the information regarding
which drives or storage subsystems their data is residing on.

• The management of storage and data is becoming difficult and time consuming. Storage virtualization helps to
address this problem by facilitating easy backup, archiving and recovery tasks by consuming less time. Storage
virtualization aggregates the functions and hides the actual complexity of the Storage Area Network.

• Benefits of Storage Virtualization:-

• One of the major benefit of abstracting the host or server from the actual storage is the ability to migrate data
while maintaining concurrent I/O access.

• Availability factor gets enriched as the applications are not restricted to specific storage resources and also can be
shielded from all kinds of disruptions.

• Disaster Recovery option is offered by the management of data replication at the virtualization layer. This can be
achieved as soon as the primary source is kept as free and is managed by the common interface.

Provisioning of storage capacity can be automated, eliminating manual allocation.

• Need of Virtualization and its Reference Model:-

• There are five major needs of virtualization, which are described below:

• 1. ENHANCED PERFORMANCE

• 2. LIMITED USE OF HARDWARE AND SOFTWARE RESOURCES

• 3. SHORTAGE OF SPACE

• 4. ECO-FRIENDLY INITIATIVES

• 5. ADMINISTRATIVE COSTS

• 1. ENHANCED PERFORMANCE-
Currently, the end-user system, i.e. the PC, is sufficiently powerful to fulfill all the basic computation
requirements of the user, with various additional capabilities that are rarely used. Most such systems
have sufficient resources to host a virtual machine manager and to run a virtual machine with acceptable
performance.

• 2. LIMITED USE OF HARDWARE AND SOFTWARE RESOURCES-

The limited use of resources leads to under-utilization of hardware and software. Because users' PCs are
sufficiently capable of fulfilling their regular computational needs, much of their capacity sits idle, even
though these machines could run 24/7 without interruption. The efficiency of the IT infrastructure could
be increased by using these resources after hours for other purposes. This environment can be attained
with the help of virtualization.

• 3. SHORTAGE OF SPACE-
The regular requirement for additional capacity, whether memory, storage or compute power, causes data
centers to grow rapidly. Companies like Google, Microsoft and Amazon develop their infrastructure by
building data centers as per their needs. Most enterprises, however, are unable to afford building another
data center to accommodate additional resource capacity. This leads to the adoption of a technique known
as server consolidation.

• 4. ECO-FRIENDLY INITIATIVES-
At this time, corporations are actively seeking various methods to minimize their expenditure on the power
consumed by their systems. Data centers are major power consumers, and maintaining data center
operations needs a continuous power supply as well as a good amount of energy to keep the equipment
cool for proper functioning. Server consolidation therefore reduces the power consumed and the cooling
impact by reducing the number of servers. Virtualization can provide a sophisticated method of server
consolidation.

• 5. ADMINISTRATIVE COSTS-
Furthermore, the rise in demand for surplus capacity, which translates into more servers in a data center,
is accountable for a significant increase in administrative costs. Hardware monitoring, server setup and
updates, defective hardware replacement, server resource monitoring, and backups are common system
administration tasks. These are personnel-intensive operations, and administrative costs increase with the
number of servers. Virtualization decreases the number of servers required for a given workload, hence
reducing the cost of administrative staff.

• VIRTUALIZATION REFERENCE MODEL-

• Three major components fall under this category in a virtualized environment:

• 1. GUEST:
The guest represents the system component that interacts with the virtualization layer rather than with the host, as
would normally happen. Guests usually consist of one or more virtual disk files, and a VM definition file. Virtual
Machines are centrally managed by a host application that sees and manages each virtual machine as a different
application.

• 2. HOST:
The host represents the original environment where the guest is supposed to be managed. Each guest runs on the
host using shared resources donated to it by the host. The operating system works as the host and manages the
physical resources and device support.

• 3. VIRTUALIZATION LAYER:
The virtualization layer is responsible for recreating the same or a different environment where the guest will
operate. It is an additional abstraction layer between the network and storage hardware, the computing
resources, and the applications running on them. Without it, a machine usually runs a single operating system,
which is very inflexible compared to the usage of virtualization.

• Virtualization Architecture:- A virtualization architecture is a conceptual model specifying the arrangement and
interrelationships of the particular components involved in delivering a virtual -- rather than physical -- version of
something, such as an operating system (OS), a server, a storage device or network resources.

• The image below illustrates the difference between traditional computing architecture and
a virtualization architecture.

• There are two types of architecture-

• 1. Hosted Architecture

• 2. Bare Metal Architecture

• 1. Hosted Architecture:-In this architecture, the OS is installed on the hardware as the first step. Next, a
hypervisor, also called a Virtual Machine Monitor (VMM), is installed. This enables installing multiple guest OSs
(like Linux/Windows), i.e. VMs, on top of the host OS. Finally, applications are installed and run on the VMs just
as on a physical machine. This architecture is very useful for running legacy applications, software development
and supporting different OSs.

• 2. Bare Metal Architecture:- In this architecture, the hypervisor is installed directly on the hardware instead of
on top of an underlying OS. The VMs and their applications are installed on the hypervisor just as in the hosted
architecture. This architecture is very useful for applications that offer real-time access or data processing.

• Advantages(Pros) of Virtualization

• Following are some of the most recognized advantages of Virtualization, which are explained in detail.

• 1. Using Virtualization for Efficient Hardware Utilization

• Virtualization decreases costs by reducing the need for physical hardware systems. Virtual machines use
hardware efficiently, which lowers the quantity of hardware and the associated maintenance costs, and reduces
power and cooling demand. You can allocate memory, space and CPU in just a second, making you more
independent of hardware vendors.

• 2. Using Virtualization to Increase Availability

• Virtualization platforms offer a number of advanced features that are not found on physical servers, which increase
uptime and availability. Although the vendor feature names may be different, they usually offer capabilities such as
live migration, storage migration, fault tolerance, high availability and distributed resource scheduling. These
technologies keep virtual machines chugging along or give them the ability to recover from unplanned outages.

• The ability to move a virtual machine from one server to another is perhaps one of the greatest single benefits of
virtualization, with far-reaching uses. The technology continues to mature to the point where it can do long-
distance migrations, such as moving a virtual machine from one data center to another no matter the
network latency involved.

• 3. Disaster Recovery

• Disaster recovery is very easy when your servers are virtualized. With up-to-date snapshots of your virtual
machines, you can quickly get back up and running. An organization can more easily create an affordable
replication site. If a disaster strikes the data center or server room itself, you can always move those virtual
machines elsewhere, for example into a cloud provider. That level of flexibility makes your disaster recovery plan
much easier to enact and far more likely to succeed.

• 4. Save Energy

• Moving physical servers to virtual machines and consolidating them onto far fewer physical servers means
lowering monthly power and cooling costs in the data center. It reduces the carbon footprint and helps to clean up
the air we breathe. Consumers want to see companies reducing their output of pollution and taking responsibility.

• 5. Deploying Servers Fast

• You can quickly clone an image, master template or existing virtual machine to get a server up and running within
minutes. You do not have to fill out purchase orders, wait for shipping and receiving, and then rack, stack and cable
a physical machine, only to spend additional hours waiting for the operating system and applications to complete
their installations. With virtual backup tools like Veeam, redeploying images is so fast that your end users will
hardly notice there was an issue.

• 6. Save Space in your Server Room or Datacenter

• Imagine a simple example: you have two racks with 30 physical servers and 4 switches. Virtualizing those servers
can cut the space they use in half. The result can be two physical servers in a rack with one switch, where each
physical server holds 15 virtualized servers.

• 7. Testing and setting up Lab Environment

• If something you are testing or installing on your servers crashes, do not panic, as there is no data loss. Just revert
to a previous snapshot and you can move forward as if the mistake never happened. You can also isolate these
testing environments from end users while still keeping them online. When your work is completely done, deploy it
to production.

• 8. Shifting all your Local Infrastructure to Cloud in a Day



• If you decide to shift your entire virtualized infrastructure into a cloud provider, you can do it in a day. All the
hypervisors offer you tools to export your virtual servers.

• 9. Possibility to Divide Services

• If a single server holds several different applications, the services can interfere with one another and increase the
failure rate of the server. If you virtualize this server, you can put the applications in environments separated from
each other, as we have discussed previously.

• Disadvantages (Cons)of Virtualization

• Although virtualization does not have many disadvantages, a few prominent ones are discussed below −

• 1. Extra Costs

• You may have to invest in virtualization software, and additional hardware might be required to make
virtualization possible. This depends on your existing network. Many businesses have sufficient capacity to
accommodate virtualization without much extra spending. If your infrastructure is more than five years old, you
have to consider an initial renewal budget.

• 2. Software Licensing

• This is becoming less of a problem as more software vendors adapt to the increased adoption of virtualization.
However, it is important to check with your vendors to understand how they view software use in a virtualized
environment.

• 3. Learn the new Infrastructure

• Implementing and managing a virtualized environment will require IT staff with expertise in virtualization. On the
user side, a typical virtual environment will operate similarly to the non-virtual environment. There are some
applications that do not adapt well to the virtualized environment.

• Introduction to Hypervisor/Virtual Machine Monitor (VMM)

• What is Hypervisor?
• A hypervisor is computer software or hardware that enables you to host multiple virtual machines. Each virtual
machine is able to run its own programs. A hypervisor allows you to access several virtual machines that are all
working optimally on a single piece of computer hardware.

• A hypervisor is a hardware virtualization technique that allows multiple guest operating systems (OS) to run on a
single host system at the same time. The guest OS shares the hardware of the host computer, such that each OS
appears to have its own processor, memory and other hardware resources.

• A hypervisor is also known as a virtual machine manager (VMM) .

• The term hypervisor was first coined in 1965 by IBM to refer to software programs distributed with an IBM RPQ for
the IBM 360/65. The hypervisor program installed on the computer allowed the sharing of its memory.

• Now, hypervisors are fundamental components of any virtualization effort. You can think of it as the operating
system for virtualized systems. It can access all physical devices residing on a server. It can also access the memory
and disk. It can control all aspects and parts of a virtual machine.

• The hypervisor installed on the server hardware controls the guest operating system running on the host machine.
Its main job is to cater to the needs of the guest operating system and effectively manage it such that the instances
of multiple operating systems do not interrupt one another.

• Or

• A hypervisor is a program that allows multiple operating systems to share a single hardware host. The hypervisor
creates virtual machine (VM) environments and coordinates calls for processor, memory, hard disk, network, and
other resources through the host OS.

• One well-known hypervisor is the Kernel-based Virtual Machine (KVM), which requires a processor with hardware
virtualization extensions to run guest OSs.

• The KVM hypervisor is the virtualization layer in Kernel-based Virtual Machine (KVM), a free, open-source
virtualization architecture for Linux distributions. It was merged into the Linux kernel mainline in kernel version
2.6.20, which was released on February 5, 2007.

• How does it work?


• The server executes the hypervisor, which in turn loads the guest operating systems of the virtual machines. The
hypervisor then allocates the correct CPU resources, memory, bandwidth and disk storage space for each virtual
machine.

• A virtual machine can create requests to the hypervisor through a variety of methods, including API calls.
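• As a concrete illustration of this flow, the short sketch below uses the libvirt Python bindings to connect to a
hypervisor, list its virtual machines and start one of them. This is a minimal sketch, assuming the libvirt-python
package and a local QEMU/KVM hypervisor are available; the connection URI and the VM name "test-vm" are
example values, not part of any standard setup.

# Minimal sketch: talking to a hypervisor through the libvirt API.
# Assumes libvirt-python is installed and a QEMU/KVM hypervisor is
# running; 'qemu:///system' and 'test-vm' are example values.
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the hypervisor
if conn is None:
    raise SystemExit("Failed to connect to the hypervisor")

# List every virtual machine (libvirt calls them "domains") the
# hypervisor currently knows about, with its allocated resources.
for dom in conn.listAllDomains():
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f"{dom.name()}: {vcpus} vCPU(s), {mem // 1024} MB RAM")

# Start a defined-but-stopped VM; the hypervisor performs the actual
# CPU, memory, bandwidth and disk allocation described above.
dom = conn.lookupByName("test-vm")           # hypothetical VM name
if not dom.isActive():
    dom.create()

conn.close()

• Note that the script itself never touches the hardware; every request goes through the hypervisor, which decides
what each VM actually receives.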

• Types of Hypervisor:- There are two types of hypervisors:

• 1. Bare metal or native hypervisors

• 2. Embedded or hosted hypervisors

• 1. Bare metal or native hypervisors Or TYPE-1 Hypervisor:


The hypervisor runs directly on the underlying host system. It is also known as a "Native Hypervisor" or "Bare metal
hypervisor". It does not require any base server operating system and has direct access to hardware resources.
Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer and Microsoft Hyper-V.

• A major advantage is that a problem in one virtual machine or guest operating system does not affect the other
guest operating systems running on the hardware.



• Advantages of Type 1 hypervisor:-

• · Enhanced security

• · Allows higher hardware density

• · The hypervisor has direct access to the hardware (HW)

• 2. Embedded or hosted hypervisors or TYPE-2 Hypervisor:


A host operating system runs on the underlying host system, with the hypervisor installed on top of it; hence it is
also known as a "Hosted Hypervisor". It is basically software installed on an operating system, and the hypervisor
asks the operating system to make hardware calls on its behalf. Examples of Type 2 hypervisors include VMware
Player and Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs.

• Bare metal hypervisors are faster and more efficient, as they do not need to go through the operating system and
the other layers that usually make hosted hypervisors slower. Type 1 hypervisors are also more secure than Type 2
hypervisors. Hosted hypervisors, on the other hand, are much easier to set up than bare metal hypervisors because
you have an OS to work with. They are also compatible with a broad range of hardware.

• Advantages of Type 2 hypervisor :

• · Host OS Controls HW access

• · Ease of access

• · Allows for multiple operating systems



• Virtual Machines:-VM technology allows multiple virtual machines to run on a single physical machine.
• A virtual machine is a software program or an operating system which possesses the characteristic behavior of an
independent computer system and is capable of performing various tasks like running applications and programs
like a computer. A virtual machine is also known as a guest machine.

• The main aim of virtualization is to reduce workloads by transforming conventional computing to make it more
scalable. Virtualization has been part of the IT landscape for decades and today can be applied at a wide range of
layers, including operating-system-level virtualization, hardware-level virtualization and server-level virtualization.

• A virtual machine is a computer file, typically called an image, which behaves like an actual computer. In other
words, creating a computer within a computer. It runs in a window, much like any other programme, giving the end
user the same experience on a virtual machine as they would have on the host operating system itself. The virtual
machine is sandboxed from the rest of the system, meaning that the software inside a virtual machine cannot
escape or tamper with the computer itself. This produces an ideal environment for testing other operating systems
including beta releases, accessing virus-infected data, creating operating system backups and running software or
applications on operating systems for which they were not originally intended.

• Multiple virtual machines can run simultaneously on the same physical computer. For servers, the multiple
operating systems run side-by-side with a piece of software called a hypervisor to manage them, while desktop
computers typically employ one operating system to run the other operating systems within its programme
windows. Each virtual machine provides its own virtual hardware, including CPUs, memory, hard drives, network
interfaces and other devices. The virtual hardware is then mapped to the real hardware on the physical machine
which saves costs by reducing the need for physical hardware systems along with the associated maintenance costs
that go with it, plus reduces power and cooling demand.

• Types of Virtual Machine :

• Virtual machines are implemented by software emulation methods or hardware virtualization techniques.
Depending on their use and level of correspondence to any physical computer, virtual machines can be divided into
two categories:
1. System Virtual Machines: A system platform that supports the sharing of the host computer's physical resources
between multiple virtual machines, each running with its own copy of the operating system. The virtualization
technique is provided by a software layer known as a hypervisor, which can run either on bare hardware or on top
of an operating system.

• Or

• A system virtual machine is an environment that allows multiple instances of the operating system(VMs) to run on
a host system, sharing the physical resources.

• 2. Process Virtual Machine: Designed to provide a platform-independent programming environment that masks the
information of the underlying hardware or operating system and allows program execution to take place in the
same way on any given platform.

• OR

• A process virtual machine, also known as an application VM, is used to execute computer programs in a platform-
independent environment. It is designed to run applications in the same way irrespective of the platform.

• A VM is very useful in organizations for many reasons, from application development and testing in different
environments to data backup and storage, and it proves to be cost effective. One example would be a programmer
developing an application that needs to be launched to the entire workforce for use. The application will pull data
from other sources and the users will be able to run reports for specific information.

• Types of VMs:-This section goes through some of the different types of virtual machines:

• 1. Windows virtual machines

• 2. Android virtual machines

• 3. Mac virtual machines

• 4. iOS virtual machines

• 5. Java virtual machines

• 6. Python virtual machines

• 7. Linux virtual machines

• 8. VMware virtual machines

• 9. Ubuntu virtual machines

• 1. Windows virtual machines:

• Most hypervisors support VMs running the Windows OS as a guest. Microsoft’s Hyper-V hypervisor comes as part
of the Windows operating system. When installed, it creates a parent partition containing both itself and the
primary Windows OS, each of which gets privileged access to the hardware. Other operating systems, including
Windows guests, run in child partitions that communicate with the hardware via the parent partition.

• 2. Android virtual machines:-

• Google’s open-source Android OS is common on mobile devices and connected home devices such as home
entertainment devices. The Android OS primarily targets the ARM processor architecture common to these
devices, but enthusiasts, Android gamers, or software developers might want to run it on PCs.

• This is problematic because PCs run on an entirely different x86 processor architecture and a hardware
virtualization hypervisor only passes instructions between the VM and the CPU. It doesn’t translate them for
processors with different instruction sets. There are various projects to address this problem.

• 3. Mac virtual machines:



• Apple only allows its macOS system to run on Apple hardware, prohibiting people from running it on non-Apple
hardware as a VM or otherwise under its end user license agreement. You can use Type 2 hypervisors on Mac
hardware to create VMs with a macOS guest.

• 4. iOS virtual machines:-

• It is not possible to run iOS in a VM today because Apple strictly controls the iOS operating system and doesn’t
allow it to run on anything other than iOS devices.

• The closest thing to an iOS VM is the iPhone simulator that ships with the Xcode integrated development
environment, which simulates the entire iPhone system in software.

• Virtual Machine Monitors:-

• A Virtual Machine Monitor (VMM) is a software program that enables the creation, management and governance
of virtual machines (VM) and manages the operation of a virtualized environment on top of a physical host
machine. VMM is also known as Virtual Machine Manager and Hypervisor.

• VMM is the primary software behind virtualization environments and implementations. When installed over a host
machine, VMM facilitates the creation of VMs, each with separate operating systems (OS) and applications. VMM
manages the backend operation of these VMs by allocating the necessary computing, memory, storage and other
input/output (I/O) resources.

• VMM also provides a centralized interface for managing the entire operation, status and availability of VMs that
are installed over a single host or spread across different and interconnected hosts.

• OR -Virtual Machine Manager (VMM): Also called a “hypervisor,” this is one of many hardware virtualization
techniques that allow multiple operating systems, termed guests, to run concurrently on a host computer. It is so
named because it is conceptually one level higher than a supervisory program. The hypervisor presents to the guest
operating systems a virtual operating platform and manages the execution of the guest operating systems. Multiple
instances of a variety of operating systems may share the virtualized hardware resources.
Hypervisors are installed on server hardware whose only task is to run guest operating systems. Non-hypervisor
virtualization systems are used for similar tasks on dedicated server hardware, but also commonly on desktop,
portable, and even handheld computers. The term is often used to describe the interface provided by the specific
cloud-computing functionality infrastructure as a service (IaaS).

• Implementation Techniques of Virtualization:-

• Hardware virtualization can be implemented using the following techniques:-

• 1. Para virtualization 2. Full virtualization

• 1. Para virtualization:-Para virtualization enables several different operating systems to run on one set of hardware
by effectively using resources such as processors and memory.

• In paravirtualization, the operating system is modified to work with a virtual machine. The intention behind the
modification of the operating system is to minimize the execution time required in performing the operations that
are otherwise difficult to run in a virtual environment. Para virtualization has many
significant performance advantages and its efficiencies offer better scaling. As a result, it is used in various areas of
technology such as:

• Partitioning development environments from test systems

• Disaster recovery

• Migrating data from one system to another

• Capacity management

• Para virtualization technology was introduced by IBM and was developed as an open-source software project.

• Para virtualization provides an interface to VMs that is somewhat similar to that of the underlying hardware. Para
virtualization ensures optimization of system performance and mitigates the overhead that causes underutilization
of VMs in full virtualization.

• Para virtualization works differently from full virtualization. It doesn’t need to simulate the hardware for the
virtual machines. The hypervisor is installed on a physical server (the host) and a guest OS is installed into the
environment. Unlike in full virtualization (where the guest doesn’t know that it has been virtualized), the virtual
guest is aware that it has been virtualized, and it uses that knowledge to take advantage of hypervisor functions.
Para virtualization is typically implemented with a Type 1 hypervisor.

• In other words, the guest operating system is aware that it is being virtualized. With this advance knowledge, the
guest operating system can short-circuit its drivers to minimize the overhead of communicating with physical
devices. This removes the main drawback of full virtualization.

• 2. Full virtualization:-
• Full virtualization is a technique in which a complete installation of one machine is run on another. This
virtualization supports different operating systems, but it requires a specific hardware combination. The hypervisor
interacts directly with the physical server's CPU and disk space. In this virtualization, each virtual server is
completely unaware of the other virtual servers currently running on the physical machine.

• Full virtualization is a common and cost-effective type of virtualization, which is basically a method by which
computer service requests are separated from the physical hardware that facilitates them. With full virtualization,
operating systems and their hosted software are run on top of virtual hardware. It differs from other forms of
virtualization (like para-virtualization and hardware-assisted virtualization) in its total isolation of guest operating
systems from their hosts.

• Full virtualization can be implemented with either a Type 1 or a Type 2 hypervisor.

• OR
Full virtualization is a virtualization technique used to provide a VME (Virtual Machine Environment) that
completely simulates the underlying hardware. In this type of environment, any software capable of execution on
the physical hardware can be run in the VM, and any OS supported by the underlying hardware can be run in each
individual VM.

• Users can run multiple different guest OSes simultaneously. In full virtualization, the VM simulates enough
hardware to allow an unmodified guest OS to be run in isolation. This is particularly helpful in a number of
situations. For example, in OS development, experimental new code can be run at the same time as older versions,
each in a separate VM. The hypervisor provides each VM with all the services of the physical system, including a
virtual BIOS, virtual devices, and virtualized memory management. The guest OS is fully disengaged from the
underlying hardware by the virtualization layer.

• Full virtualization is achieved by using a combination of binary translation and direct execution. With full
virtualization hypervisors, the physical CPU executes nonsensitive instructions at native speed; OS instructions are
translated on the fly and cached for future use, and user-level instructions run unmodified at native speed. Full
virtualization offers the best isolation and security for VMs and simplifies migration and portability, as the same
guest OS instance can run on virtualized or native hardware.

• Xen supports both full virtualization and paravirtualization. Due to architectural differences, a Windows guest
cannot be para-virtualized under Xen, because paravirtualization requires modifying the guest kernel; Linux guests
can be para-virtualized in this way. VMware ESXi, by contrast, does not modify the kernel for either Linux or
Windows guests.

• Virtual Machine Applications or Virtual Machine Software: Introducing popular virtual machine software:- There
are many types of virtual machine applications available on the internet. Some of the most commonly used virtual
machine applications are:

• KVM

• VMware Workstation

• Xen

• VirtualBox

• Citrix

• 1. Kernel-Based Virtual Machine (KVM):-


A kernel-based virtual machine (KVM) is a virtualization infrastructure built for Linux OS and designed to operate
on x86-based processor architecture.

• As the name suggests, this is kernel based virtualization technology for Linux OS on hardware that supports
virtualization.

• KVM was originally created by Qumranet, which Red Hat acquired in 2008; Red Hat now develops KVM and
provides virtualization solutions and services on the Linux operating system platform. KVM is designed over the
primary Linux OS kernel.

• KVM is a type of hypervisor that enables, emulates and provides for the creation of virtual machines on operating
systems. These machines are built on top of the Linux kernel, using operating systems such as Linux, Ubuntu and
Fedora. KVM can be installed on x86 processors that support hardware virtualization, using separate instruction set
extensions for Intel (VT-x) and AMD (AMD-V) processors.

KVM supports multiple different guest operating system images including Linux Kernel, Windows, BSD and Solaris.
It also allocates separate virtualized computing resources for each virtual machine such as the processor, storage,
memory, etc.

KVM is an acronym of “Kernel based Virtual Machine”, and is a virtualization infrastructure for the Linux kernel
that turns it into a hypervisor.
It is used with QEMU to emulate some peripherals, called QEMU-KVM.

• QEMU (short for Quick EMUlator) is a free and open-source emulator that performs hardware virtualization.

• QEMU is a hosted virtual machine monitor: it emulates the machine's processor through dynamic binary
translation and provides a set of different hardware and device models for the machine, enabling it to run a variety
of guest operating systems. It also can be used with KVM to run virtual machines at near-native speed (by taking
advantage of hardware extensions such as Intel VT-x). QEMU can also do emulation for user-level processes,
allowing applications compiled for one architecture to run on another.
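• Since KVM relies on these hardware virtualization extensions, a quick way to check whether a Linux host can run
KVM guests is to look for the Intel VT-x (vmx) or AMD-V (svm) CPU flags and for the /dev/kvm device node. The
following is a minimal sketch of that check, written for illustration; it is not part of KVM or QEMU themselves.

# Minimal sketch: check whether this Linux host appears KVM-capable.
# Looks for the vmx (Intel VT-x) / svm (AMD-V) CPU flags and for the
# /dev/kvm device node that the loaded kvm kernel module exposes.
import os
import re

def cpu_virt_extension(cpuinfo_path="/proc/cpuinfo"):
    """Return 'vmx', 'svm', or None, depending on the CPU flags."""
    with open(cpuinfo_path) as f:
        match = re.search(r"\b(vmx|svm)\b", f.read())
    return match.group(1) if match else None

ext = cpu_virt_extension()
if ext:
    print(f"Hardware virtualization extension present: {ext}")
else:
    print("No vmx/svm flag found; KVM guests cannot run here.")

# /dev/kvm only appears once the kvm kernel module is loaded.
print("KVM device node available:", os.path.exists("/dev/kvm"))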

• 2. Virtual Box:
• VirtualBox is a free, open-source, cross-platform application for creating, managing and running virtual machines
(VMs) – computers whose hardware components are emulated by the host computer, the computer that runs the
program. VirtualBox can run on Windows, Mac OS X, Linux and Solaris.

• VirtualBox is a cross-platform virtualization application. It installs on your existing computer (and operating system)
and extends the capabilities of your existing computer so that it can run multiple operating systems (inside multiple
virtual machines) at the same time.

• Oracle VM VirtualBox is a free and open-source hosted hypervisor for x86 virtualization, developed by Oracle
Corporation. Created by Innotek GmbH, it was acquired by Sun Microsystems in 2008, which was, in turn, acquired
by Oracle in 2010.

• VirtualBox may be installed on Windows, macOS, Linux, Solaris and OpenSolaris. There are also ports to FreeBSD.
It supports the creation and management of guest virtual machines running Windows, Linux, BSD, OS/2, Solaris,
Haiku, and OSx86, as well as limited virtualization of macOS guests on Apple hardware.

• For some guest operating systems, a "Guest Additions" package of device drivers and system applications is
available, which typically improves performance, especially that of graphics.

• This virtual machine application is made by Oracle and it is free to use. You can use this application on Windows,
Mac and Linux OS. VirtualBox is an easy, user-friendly VM application. It has a large number of features that make
sustaining multiple virtual machines simple. VirtualBox allows file, drive and peripheral sharing with the host
machine.

• Virtual Box Terminology.:-When dealing with virtualization, it helps to acquaint oneself with a bit of crucial
terminology, especially the following terms:

• Host Operating System (Host OS):The operating system of the physical computer on which Virtual Box was
installed. There are versions of Virtual Box for Windows, Mac OS X, Linux and Solaris hosts.

• Guest Operating System (Guest OS):The operating system that is running inside the virtual machine.

• Virtual Machine (VM):We’ve used this term often already. It is the special environment that Virtual Box creates for
your guest operating system while it is running. In other words, you run your guest operating system “in” a VM.
Normally, a VM will be shown as a window on your computer’s desktop, but depending on which of the various
frontends of Virtual Box you use, it can be displayed in full screen mode or remotely on another computer.
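• VirtualBox also ships with a command-line front end, VBoxManage, which exposes the same operations as the
graphical interface. The sketch below drives it from a host-side Python script; it assumes VirtualBox is installed with
VBoxManage on the PATH, and the VM name, OS type and resource settings are example values only.

# Minimal sketch: driving VirtualBox from the host OS via VBoxManage.
# Assumes VirtualBox is installed and VBoxManage is on the PATH;
# "demo-vm" and its settings are example values.
import subprocess

def vbox(*args):
    """Run one VBoxManage subcommand and return its output."""
    result = subprocess.run(
        ["VBoxManage", *args], capture_output=True, text=True, check=True
    )
    return result.stdout

# Create and register a new (empty) virtual machine.
vbox("createvm", "--name", "demo-vm", "--ostype", "Ubuntu_64", "--register")

# Give the guest 2048 MB of RAM and 2 virtual CPUs.
vbox("modifyvm", "demo-vm", "--memory", "2048", "--cpus", "2")

# List all VMs known to this host, then start ours without a window.
print(vbox("list", "vms"))
vbox("startvm", "demo-vm", "--type", "headless")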

• 3. VMware or VMware Workstation:

• This virtual machine application is made by VMware, a subsidiary of Dell Technologies, and it can run on Windows
and Linux operating systems. VMware has two versions, VMware Player and VMware Workstation. The free and
basic version is VMware Player; this application is for people who just want a simple application to create and run
virtual machines. VMware Workstation is a paid application and is best used in enterprise settings. The Workstation
application provides all the benefits of VMware Player and also includes the ability to clone machines, take multiple
snapshots of the guest OS, replay changes made to the guest OS for testing, and record the performance of
software.

• VMware Workstation is a hosted hypervisor that runs on x64 versions of Windows and Linux operating systems (an
x86 version of earlier releases was available); it enables users to set up virtual machines (VMs) on a single physical
machine, and use them simultaneously along with the actual machine.

• Benefits:-

• Streamline software development and testing

• Enhance productivity of IT professionals

• Facilitate computer-based training and software demos

• Run multiple secure PC environments on a single PC.

• VMware Workstation runs multiple operating systems – Windows, Linux, NetWare – and their applications
simultaneously on a single physical PC in fully networked, portable virtual machines.

• Key Features:-

• The only desktop virtual machine software that runs on both Windows and Linux host operating systems, allows
users to create two-processor virtual machines, and supports certain 64-bit host and guest operating systems and
64-bit extended processors

• Broader device support, better performance and more powerful functionality than any other desktop virtual
machine software

• Powerful virtual networking options with NAT devices, a DHCP server, and multiple network switches let you
connect virtual machines to each other, the host machine, and public networks

• Shared folders, drag-and-drop operations, and copying and pasting between guest and host

• Get the full functionality of native program debugging in a virtual machine with support for both user- and
kernel-level debuggers

• Easily switch between virtual machines and suspend/resume them

• Each virtual machine has configurable memory size, disks, and I/O devices, and also supports CD, floppy, USB,
DVD, and CD-ROM devices

• Virtual machines are isolated from each other, ensuring that if one crashes, the other virtual machines and the
host machine are unaffected

• A virtual machine is a set of portable, hardware-independent files that can easily be shared.

• VMware ESXi:-

• VMware ESXi is an operating system-independent hypervisor based on the VMkernel operating system that
interfaces with agents that run on top of it. ESXi stands for Elastic Sky X Integrated.

• ESXi is a type-1 hypervisor, meaning it runs directly on system hardware without the need for an operating system
(OS). Type-1 hypervisors are also referred to as bare-metal hypervisors because they run directly on hardware.

• ESXi is targeted at enterprise organizations. VMware describes an ESXi system as similar to a stateless compute
node. Virtualization administrators can upload state information from a saved configuration file.

• OR-

• What is VMware ESXi?

• The core of the vSphere product suite is the hypervisor called ESXi. A hypervisor is a piece of software that creates
and runs virtual machines. Hypervisors are divided into two groups:

• Type 1 hypervisors – also called bare metal hypervisors, Type 1 hypervisors run directly on the system hardware. A
guest operating-system runs on another level above the hypervisor. VMware ESXi is a Type 1 hypervisor that runs
on the host server hardware without an underlying operating system.

• Type 2 hypervisors – hypervisors that run within a conventional operating-system environment, and the host
operating system provides I/O device support and memory management. Examples of Type 2 hypervisors are
VMware Workstation and Oracle VirtualBox .

• ESXi provides a virtualization layer that abstracts the CPU, storage, memory and networking resources of the
physical host into multiple virtual machines. That means that applications running in virtual machines can access
these resources without direct access to the underlying hardware. VMware refers to the hypervisor used by
VMware ESXi as vmkernel. vmkernel receives requests from virtual machines for resources and presents the
requests to the physical hardware.

• ESXi is supported on Intel processors (Xeon and above) and AMD Opteron processors. ESXi includes a 64-bit
VMkernel, and hosts with 32-bit-only processors are not supported. However, both 32-bit and 64-bit guest
operating systems are supported. ESXi supports up to 4,096 virtual processors per host, 320 logical CPUs per host,
512 virtual machines per host and up to 4 TB of RAM per host.

• ESXi can be installed on a hard disk, USB device, or SD card. It has an ultralight footprint of approximately 144 MB
for increased security and reliability.

• 4. Hyper-V Technology Overview


Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software version of a computer,
called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and
programs. When you need computing resources, virtual machines give you more flexibility, help save time and
money, and are a more efficient way to use hardware than just running one operating system on physical
hardware.

• Hyper-V runs each virtual machine in its own isolated space, which means you can run more than one virtual
machine on the same hardware at the same time. You might want to do this to avoid problems such as a crash
affecting the other workloads, or to give different people, groups or services access to different systems.

• Some ways Hyper-V can help you

• Hyper-V can help you:



• Establish or expand a private cloud environment. Provide more flexible, on-demand IT services by moving to or
expanding your use of shared resources, and adjust utilization as demand changes.

• Use your hardware more effectively. Consolidate servers and workloads onto fewer, more powerful physical
computers to use less power and physical space.

• Improve business continuity. Minimize the impact of both scheduled and unscheduled downtime of your
workloads.

• Establish or expand a virtual desktop infrastructure (VDI). Using a centralized desktop strategy with VDI can help
you increase business agility and data security, as well as simplify regulatory compliance and the management of
desktop operating systems and applications. Deploy Hyper-V and Remote Desktop Virtualization Host (RD
Virtualization Host) on the same server to make personal virtual desktops or virtual desktop pools available to your
users.

• Make development and test more efficient. Reproduce different computing environments without having to buy
or maintain all the hardware you'd need if you only used physical systems.

• 5. Xen Hypervisor:
Xen is a hypervisor that enables the simultaneous creation, execution and management of multiple virtual
machines on one physical computer.

• Xen was developed by XenSource, which was purchased by Citrix Systems in 2007. Xen was first released in 2003. It
is an open source hypervisor. It also comes in an enterprise version.

• Xen is primarily a bare-metal, type-1 hypervisor that can be directly installed on computer hardware without the
need for a host operating system. Because it's a type-1 hypervisor, Xen controls, monitors and manages the
hardware, peripheral and I/O resources directly. Guest virtual machines request Xen to provision any resource and
must install Xen virtual device drivers to access hardware components. Xen supports multiple instances of the same
or different operating systems with native support for most operating systems, including Windows and Linux.
Moreover, Xen can be used on x86, IA-32 and ARM processor architecture.

• Difference Between VMware ESX And ESXi

• VMware ESX:
ESX (Elastic Sky X) is VMware’s enterprise server virtualization platform. In ESX, the VMkernel is the virtualization
kernel, and it is managed by a console operating system, also called the service console.

• The service console is Linux based, and its main purpose is to provide a management interface for the host; many
management agents and other third-party software agents are installed on the service console to provide
functionality such as hardware management and monitoring of the ESX hypervisor.

• VMware ESXi:-
ESXi (Elastic Sky X Integrated) is also VMware’s enterprise server virtualization platform. In ESXi, the service console
is removed. All the VMware-related agents and third-party agents, such as management and monitoring agents,
run directly on the VMkernel.

• ESXi has an ultra-thin architecture that is highly reliable, and its small code base allows it to be more secure, with
less code to patch. ESXi uses the Direct Console User Interface (DCUI) instead of a service console for management
of the ESXi server. ESXi installation also happens very quickly compared to ESX installation.

• vSphere

• vSphere is an umbrella term for VMware’s virtualization platform. The term vSphere encompasses several distinct
products and technologies that work together to provide a complete infrastructure for virtualization. These
products and technologies include the following:

• ESXi: ESXi is the core of vSphere; it is a Type-1 hypervisor that runs on host computers to manage the execution of
virtual machines, allocating resources to the virtual machines as needed. ESXi comes in two basic flavors:

• Installable: The Installable version of software can be installed onto the hard drive on a host computer,
much as any other operating system can be installed.

• Embedded: The Embedded version runs as firmware that is actually built into the host computer. It’s
preinstalled into read-only memory by the manufacturer of the host computer.

• vCenter Server: vCenter Server is a server application that runs on Windows Server, which may itself be installed
in a virtual machine. vCenter is the central point for creating new virtual machines, starting and stopping virtual
machines, and performing other management tasks in a vSphere environment.

• vCenter Client: vCenter Client is a Windows application that you use to access the features of a vCenter Server
remotely. vCenter Client is the tool you’ll work with most when you manage a vSphere environment.

• Citrix Virtual Apps or software tool:-

• Citrix Systems, Inc. is an American multinational software company that provides server, application and desktop
virtualization, networking, software as a service (SaaS), and cloud computing technologies. Citrix solutions are
claimed to be in use by over 400,000 clients worldwide, including 99% of the Fortune 100, and 98% of the Fortune
500.

• Like VMware and Parallels, Citrix offers virtualization solutions that enable telecommuting and teleworking. Citrix
Virtual Apps and Desktops works with Microsoft Remote Desktop Services (RDS), utilizing their HDX protocol for
data transferring. Citrix software publishes applications and resources from datacenters and provides access to
remote devices. Utilizing virtualization technology, Windows applications and desktops are made available to non-
Windows OS devices. Moreover, resources can be accessed from anywhere, anytime, from any device.

• Citrix Virtual Apps is a virtual application delivery tool that isolates applications from the underlying OS to provide
access to remote users from any device.

• Citrix Virtual Apps (formerly Citrix XenApp) publishes and streams applications from a centralized location into an
isolated environment where they are executed on target devices.

• When you use session virtualization from RDSH, the hosting servers publish the applications and desktops. While
the server receives mouse clicks, keyboard strokes and any user input, it delivers screen updates to the end-user
device—creating a seamless end-user experience. All the processing takes place on the server, leveraging the
available resources.

• When you use application streaming, the application configuration, settings and files are copied to the client device
and data is synced with the server.

• Citrix Virtual Apps enables Windows, Mac®, iOS and Android devices to run Windows applications from Microsoft
Windows Servers (RDSH) running Microsoft RDS utilizing Citrix HDX protocol.

UNIT-V

• Security & Challenges in Cloud Computing :


• Cloud computing security:-Cloud computing security refers to a broad set of policies, technologies,
applications, and controls utilized to protect virtualized IP, data, applications, services, and the associated
infrastructure of cloud computing. It is a sub-domain of computer security, network security, and, more
broadly, information security.

• Or

• Cloud Security:-Cloud security, also known as cloud computing security, consists of a set of policies,
controls, procedures and technologies that work together to protect cloud-based systems, data and
infrastructure.

• These security measures are configured to protect data, support regulatory compliance and protect
customers' privacy as well as setting authentication rules for individual users and devices. From
authenticating access to filtering traffic, cloud security can be configured to the exact needs of the
business.

Cloud security is the protection of data stored online from theft, leakage, and deletion. Methods of providing
cloud security include firewalls, penetration testing, tokenization, virtual private networks (VPN), and avoiding
public internet connections.

Major threats to cloud security include data breaches, data loss, account hijacking, service traffic hijacking,
insecure application program interfaces (APIs), poor choice of cloud storage providers, and shared technology
that can compromise cloud security. Distributed denial of service (DDoS) attacks are another threat to cloud
security. These attacks shut down a service by overwhelming it with data so that users cannot access their
accounts, such as bank accounts or email accounts.

Cloud computing security refers to the set of procedures, processes and standards designed to provide
information security assurance in a cloud computing environment.

Cloud computing security addresses both physical and logical security issues across all the different service
models of software, platform and infrastructure. It also addresses how these services are delivered (public,
private or hybrid delivery model).

• Cloud security involves maintaining adequate preventative protections so you:

• Know that the data and systems are safe.

• Can see the current state of security.

• Know immediately if anything unusual happens.

• Can trace and respond to unexpected events.

• Cloud security encompasses a broad range of security constraints from the end-user's and the cloud provider's
perspectives. The end user will primarily be concerned with the provider's security policy, how and where their data
is stored, and who has access to that data. For a cloud provider, on the other hand, cloud security issues can range
from the physical security of the infrastructure and the access control mechanism of cloud assets to the execution
and maintenance of security policy.

• The Cloud Security Alliance (CSA), a nonprofit organization of industry specialists, has developed a pool of
guidelines and frameworks for implementing and enforcing security within a cloud operating environment.
Because these rules can be configured and managed in one place, administration overheads are reduced and IT
teams are empowered to focus on other areas of the business.

The way cloud security is delivered will depend on the individual cloud provider or the cloud security solutions in
place. However, implementation of cloud security processes should be a joint responsibility between the
business owner and solution provider.

Cloud Security Issues (Risks and threats):-



Today, cloud computing is a very approachable topic for both small and large enterprises alike. However, while
cloud computing affords businesses near-limitless opportunities for scale and sustainability, it also comes with
risks. Establishing successful cloud security processes is about understanding the common threats experienced by
businesses operating in the cloud. These threats originate from both inside and outside sources and vary in
severity and complexity.

• There are many security issues in clouds as they provide hardware and services over the internet.

• 1. Data breaches:- With so many organizations now operating in cloud-based environments, information
accessibility has never been higher. As enterprises expand their digital footprint, cybercriminals can locate new
access points to exploit, gaining access to private records and other sensitive data.

• 2. Malware injections: Malware injection is a common risk. Attackers upload these malicious scripts of
code to a cloud server that hosts various applications and services. Successfully deployed, these scripts
can cause any number of security issues to enterprises operating on those same servers.

• 3. Regulatory compliance: Fines and penalties for regulatory non-compliance can be steep. The cloud
shared-responsibility model for security—where the cloud provider is responsible for the security of the
cloud and the cloud customer is responsible for security in the cloud—must be properly and diligently
managed to demonstrate and maintain compliance.

• 4. Distributed Denial of Service (DDoS): DDoS attacks can prevent users or customers from accessing
mission-critical data and applications, which often causes significant or even irreparable financial damage to the
business.

• 5. Malicious insiders: Current or former employees, business partners, contractors, or anyone who has
had allowed access to systems or networks in the past could be considered an insider threat if they
intentionally abuse their access permissions.

• 6. Advanced persistent threats (APTs): APTs are a form of cyber attack where an intruder or group of
intruders successfully infiltrate a system and remain undetected for an extended period. These stealthy
attacks operate silently, leaving networks and systems intact so that the intruder can spy on business
activity and steal sensitive data while avoiding the activation of defensive countermeasures.

• 7. Insecure APIs: Cloud service providers commonly use Application Programming Interfaces (APIs) as a way for
customers to access and extract information from their cloud-based services. If not configured properly, these APIs
can leak data and open the door for intrusions and attacks from outside sources (see the sketch after this list).

• 8. Account hijacking: Stolen and compromised account login credentials are a common threat to cloud
computing. Hackers use sophisticated tools and phishing schemes to hijack cloud accounts, impersonate
authorized users, and gain access to sensitive business data.
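• To make the API point from item 7 concrete, the sketch below shows the kind of locked-down client call a cloud
customer might write: every request is sent over TLS and carries a short-lived credential. The endpoint URL and
token are hypothetical placeholders; this illustrates the principle, not any particular provider's API.

# Minimal sketch: a deliberately locked-down call to a hypothetical
# cloud REST API. The URL and token are placeholders; the point is
# that requests are authenticated and travel over TLS only.
import requests

API_BASE = "https://api.example-cloud.test/v1"   # hypothetical endpoint
TOKEN = "replace-with-a-short-lived-token"       # from your identity provider

def get_resource(path):
    resp = requests.get(
        f"{API_BASE}/{path}",
        headers={"Authorization": f"Bearer {TOKEN}"},  # never anonymous
        timeout=10,
    )
    resp.raise_for_status()   # fail loudly rather than leak partial data
    return resp.json()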

• 1. Data breaches:-

• Cloud providers are an attractive target for hackers because massive amounts of data are stored in the cloud.
How severe an attack is depends on the sensitivity of the data exposed. Exposed financial information is damaging,
but the damage is worse when the exposed information involves personal health information, trade secrets or the
intellectual property of a person or an organization. When data breaches happen, companies may be fined,
lawsuits may be filed against them, and criminal charges may follow. Breach investigations and customer
notifications can rack up significant costs.

• Indirect effects, such as brand damage and loss of business, can affect organizations for years. Cloud providers
typically deploy security controls to protect their environments, but ultimately organizations are responsible for
protecting their own data in the cloud. The CSA has recommended that organizations use multifactor
authentication and encryption to protect against data breaches.

• 2. Network security

• In SaaS, sensitive enterprise data is obtained, processed and stored by the SaaS provider. To avoid leakage of this
confidential information, all data moving over the internet must be secured, which involves strong encryption of
network traffic.

• 3. Data locality:-

• Consumers use SaaS applications in the environment provided by the SaaS provider, which also processes their
data.

• In this case, users or clients of clouds are unaware of where their data is actually stored. Data locality matters
because many countries have strict laws and policies regarding where data may be located.

• 4. Data access:-

• Data in the cloud must be accessible from anywhere, at any time, and from any system, and cloud storage has
some issues regarding access to data from any device. Data breaches and other kinds of attacks flourish in
environments with poor user authentication and weak passwords. Consider the well-known attack on Sony that
happened only a few years back.

• They are still feeling the financial and social effects of that hack, which largely succeeded because administrators
used weak passwords. The cloud is a particularly appealing target because it presents a centralized data store
containing high-value data and centralized user access. Use key management systems in your cloud environment,
and be sure that the encryption keys cannot easily be discovered online. Require strong passwords, and put teeth in
the requirement by automatically rotating passwords and using other means of user identification.

• 5. DoS attacks:-

• Denial of service attacks cannot be stopped outright; one can only mitigate their effects. DoS attacks overwhelm
the resources of a cloud service so that clients cannot access data or applications. Politically motivated attacks get
the headlines, but hackers are just as likely to launch DoS attacks for malicious purposes, including extortion.
What's more, when a DoS attack happens in a cloud computing environment, compute usage charges go through
the roof. The cloud provider ought to reverse the charges, but negotiating over what was an attack and what
wasn't will take extra time and cause irritation.

• 6. Vulnerabilities

• System vulnerabilities are exploitable program bugs in the OS that attackers intentionally use to control or
infiltrate a computer system. Fortunately, basic IT hygiene goes a long way towards shielding you from this sort of
serious attack. Since the machines sit in your cloud provider's data centers, be sure that your provider practices
regular vulnerability scanning along with timely security fixes and upgrades.

• SECURITY CHALLENGES IN CLOUD :-A few security challenges in cloud are:

• 1. Data Protection: In cloud computing the personal data of a user is placed in the hands of a third
party, so it is important to ensure the security of data. Data should be encrypted and the data
encryption keys should be managed and owned by the client himself.

• 2. Contingency Planning: Since the cloud has a centralized repository for storing all important data, there are
risks to that data, such as it being breached or compromised. If the data gets disrupted in a cloud, the people
owning the data will be liable for it. Having the security assessed by a third party will help improve the security
of the cloud.

• 3. Access Control: The cloud should have access control policies to ensure that only legitimate users gain
access.

• 4. Authentication: Data stored in the cloud by the user is exposed to unauthorized people across the internet.
To ensure the integrity of the data, the user should be able to view data access logs confirming that only
authenticated users are able to access the data. The user must ensure that the cloud provider is taking all the
security measures needed to protect the data.

• 5. Data Encryption:- Encryption techniques are used to ensure the security of the cloud and reduce the risk for
users storing their data in the cloud. The Bi-Directional DNA encryption algorithm is one technique for securing
data in the cloud. The drawback of this technique is that it uses only ASCII characters and ignores the
non-English users of the cloud.
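• A common practical expression of this challenge is client-side encryption, where the client generates and keeps
the key so that only ciphertext ever reaches the cloud. The sketch below is a minimal illustration using the widely
available Python cryptography package; it is not the Bi-Directional DNA algorithm mentioned above, just a
standard symmetric scheme standing in for it.

# Minimal sketch of client-side encryption before a cloud upload.
# Assumes the third-party 'cryptography' package is installed. The
# key stays with the client, matching the guideline above that data
# encryption keys should be managed and owned by the client.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # generated and kept by the client
cipher = Fernet(key)

plaintext = "patient record: confidential".encode("utf-8")
ciphertext = cipher.encrypt(plaintext)   # only this is uploaded

# Later, after downloading the ciphertext back from the cloud:
assert cipher.decrypt(ciphertext) == plaintext
print("Round trip OK; the cloud only ever saw ciphertext.")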

• How Do You Manage Security In The Cloud?

• Cloud service providers use a combination of methods to protect your data.

• Firewalls are a mainstay of cloud architecture. Firewalls protect the perimeter of your network security
and your end users. Firewalls also safeguard traffic between different apps stored in the cloud.

• Access controls protect data by allowing you to set access lists for different assets. For instance, you might
allow specific employees access to an application while restricting others. A general rule is to give employees
access to only the tools they need to do their job. By maintaining strict access control, you can keep critical
documents away from malicious insiders or hackers with stolen credentials.
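• To make the least-privilege rule concrete, here is a minimal sketch of an access-list check. The roles, users and
assets are invented for illustration; real cloud platforms express the same idea through their own IAM policies.

# Minimal sketch of least-privilege access control using per-asset
# access lists. Users, roles and assets are hypothetical examples.
ACCESS_LISTS = {
    "payroll-db":   {"finance"},             # finance staff only
    "crm-app":      {"sales", "support"},    # customer-facing teams
    "build-server": {"engineering"},
}

USER_ROLES = {
    "alice": {"finance"},
    "bob":   {"sales"},
    "carol": {"engineering", "support"},
}

def is_allowed(user, asset):
    """Allow access only if one of the user's roles is on the asset's list."""
    allowed_roles = ACCESS_LISTS.get(asset, set())   # unknown asset: deny
    return bool(USER_ROLES.get(user, set()) & allowed_roles)

print(is_allowed("alice", "payroll-db"))   # True  - finance role matches
print(is_allowed("bob", "payroll-db"))     # False - no matching role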

• Cloud providers take steps to protect data that’s in transit. Data Security methods include virtual private
networks, encryption, or masking. Virtual private networks (VPNs) allow remote employees to connect to
corporate networks. VPNs accommodate tablets and smart phones for remote access.

• Data masking encrypts identifiable information, such as names. This maintains data integrity by keeping
important information private. With data masking, a medical company can share data without violating
HIPAA laws, for example.
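• As a simple illustration of masking, the sketch below replaces identifying names with one-way hashes before
records are shared. The record layout and salt are invented for the example, and real HIPAA-grade
de-identification involves far more than this single step.

# Minimal sketch of data masking: replace identifying fields with a
# one-way hash so records can be shared without exposing names.
import hashlib

def mask(value, salt="per-dataset-secret"):      # salt is a made-up example
    """Deterministically pseudonymize a value with a salted hash."""
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return digest[:12]                           # short, stable pseudonym

records = [
    {"name": "Jane Roe", "diagnosis": "A12", "age": 54},
    {"name": "John Doe", "diagnosis": "B07", "age": 61},
]

masked = [{**r, "name": mask(r["name"])} for r in records]
for row in masked:
    print(row)   # diagnoses remain analyzable; names are pseudonymized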

• Threat intelligence spots security threats and ranks them in order of importance. This feature helps you
protect mission-critical assets from threats.

• Disaster recovery is key to security since it helps you recover data that’s lost or stolen.

• While not a security component per se, your cloud services provider may need to comply with data
storage regulations. Some countries require data must be stored within their country. If your country has
this requirement, you need to verify that a cloud provider has data centers in your country.

• What are the Benefits of a Cloud Security System?

• Now that you understand how cloud computing security operates, explore the ways it benefits your
business.

• Cloud-based security systems benefit your business through:

• Protecting your business from threats

• Guarding against internal threats

• Preventing data loss.

• Top threats to systems include malware, ransomware, and DDoS.

• Malware and Ransomware Breaches

• Malware poses a severe threat to businesses.

• Over 90 percent of malware comes via email. It is often so convincing that employees download malware
without realizing it. Once downloaded, the malicious software installs itself on your network, where it
may steal files or damage content.

• Ransomware is a form of malware that hijacks your data and demands a financial ransom. Companies
wind up paying the ransom because they need their data back.

• Data redundancy provided by the cloud offers an alternative to paying ransom for your data. You can get
back what was stolen with minimal service interruption.

• Many cloud data security solutions identify malware and ransomware. Firewalls, spam filters, and identity
management help with this, keeping malicious email out of employee inboxes.

• DDoS Protection:

• In a DDoS or distributed denial of service attack, your system is flooded with requests. Your website
becomes slow to load until it crashes when the number of requests is too much to handle.

• DDoS attacks come with serious side effects. Every minute your website is inaccessible, you lose money.

• Half of the companies that suffer DDoS attacks lose $10,000 to $100,000. Many businesses suffer from
reputation damage when customers lose faith in the brand. If confidential customer data is lost in a DDoS
attack, you could face legal challenges.

• Given the severity of these side effects, it’s no wonder that some companies close after DDoS attacks.
Consider that one recent DDoS attack lasted for 12 days and you sense the importance of protection.

• Cloud security services actively monitor the cloud to identify and defend against attacks. By alerting your
cloud provider of the attack in real time, they can take steps to secure your systems.

• Threat Detection

• Security for cloud computing provides advanced threat detection using endpoint scanning for threats at
the device level. Endpoint scanning increases security for devices that access your network.

• Software as a services SECURITY ISSUES:-


• In the Software as a Service (SaaS) model, the client has to depend on the service provider for proper security
measures. The service provider must ensure that multiple users don't get to see each other's private data. So, it
becomes important for the user to ensure that the right security measures are in place, and it is difficult to get
an assurance that the application will be available when needed. Cloud computing providers need to solve the
common security challenges that traditional communication systems face. At the same time, they also have to
deal with other issues inherently introduced by the cloud computing paradigm itself.

• SaaS providers handle much of the security for a cloud application. The SaaS provider is responsible for
securing the platform, network, applications, operating system, and physical infrastructure. However,
providers are not responsible for securing customer data or user access to it. Some providers offer a bare
minimum of security, while others offer a wide range of SaaS security options.

• A. Authentication and authorization: The authorization and authentication applications used in enterprise
environments need to be changed so that they can work in a safe cloud environment. Forensics tasks become
much more difficult, since it may be very hard or even impossible for investigators to access the system
hardware physically.

• B. Data confidentiality: Confidentiality refers to the prevention of unintentional or intentional unauthorized
disclosure or distribution of secured private information. Confidentiality is closely related to the areas of
encryption, intellectual property rights, traffic analysis, covert channels, and inference in cloud systems.
Whenever a business, an individual, a government agency, or any other entity wants to share information over
the cloud, questions about confidentiality and privacy need to be asked.

• C. Availability: Availability ensures reliable and timely access to cloud data or cloud computing resources by the appropriate personnel. Availability is one of the big concerns of cloud service providers, since if the cloud service is disrupted or compromised in any way, it affects a larger number of customers than in the traditional model.

• D. Information Security: In the SaaS model, enterprise data is stored outside the enterprise boundary, at the SaaS vendor's premises. Consequently, the SaaS vendor needs to adopt additional security features to ensure data security and prevent breaches caused by security vulnerabilities in the application or by malicious employees. This requires strong encryption techniques for data security and fine-grained authorization to control access to private data.
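
• As a rough illustration of client-side protection (not any specific vendor's method), the sketch below encrypts a record before it ever reaches the SaaS vendor, using the third-party Python cryptography library's Fernet recipe; the library choice and the record contents are assumptions made for the example.

    # Minimal sketch: encrypt data client-side before storing it with a SaaS vendor.
    # Assumes the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # keep this key inside the enterprise boundary
    cipher = Fernet(key)

    record = b"employee_id=1001;salary=55000"   # hypothetical sensitive record
    token = cipher.encrypt(record)              # ciphertext safe to hand to the vendor

    # Later, after fetching the ciphertext back from the SaaS application:
    assert cipher.decrypt(token) == record

• The design point: because only ciphertext leaves the enterprise, a vulnerability at the vendor exposes unreadable data unless the key is also compromised.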

• E. Data Access: The data access issue is mainly related to the security policies applied to users while they access the data. Organizations have their own security policies, based on which each employee can have access to a particular set of data. These security policies must be adhered to by the cloud to avoid intrusion of data by unauthorized users. The SaaS model must be flexible enough to incorporate the specific policies put forward by the organization.

• F. Network Security: In a SaaS deployment model, highly sensitive information is obtained from the various enterprises, processed by the SaaS application, and stored at the SaaS vendor's premises. All data flowing over the network has to be secured in order to prevent leakage of sensitive information.

• G. Data breaches: Because customer data from many organizations is concentrated at the vendor's premises, a single breach of the SaaS application can expose sensitive data belonging to many tenants at once.

• H. Identity management and sign-on process: Identity management (IdM), or ID management, is an area that deals with identifying individuals in a system and controlling access to the resources in that system by placing restrictions on the established identities. The area of IdM is considered one of the biggest challenges in information security. When a SaaS provider has to control who has access to which systems within the enterprise, this becomes a much more challenging task.

• SaaS security solutions:-

• Cloud access security brokers (CASBs)

• CASBs provide a variety of security services, including: Monitoring for unauthorized cloud services.

• Enforcing data security policies including encryption.



• Collecting details about users who access data in cloud services from any device or location.

• Restricting access to cloud services based on the user, device, and application.

• Providing compliance reporting.

• Cloud Computing Standards Organizations:-


• 1. Cloud Security Alliance:-

• The Cloud Security Alliance was formed to promote a series of best practices that provide security assurance in cloud computing. Its objectives include promoting understanding, researching best practices, and launching awareness campaigns, with the goal of creating a consensus on ways to ensure cloud security.

• 2. Distributed Management Task Force (DMTF):-

• The DMTF focuses on IaaS (Infrastructure as a Service) and on providing standards that enable IaaS to be a flexible, scalable, high-performance infrastructure.

• The DMTF is the group that developed the OVF standard that is formally known as DSP0243 Open
Virtualization Format (OVF) V1.0.0. It describes an open, secure, and portable format for packaging and
distribution of software that will be run in virtual machines.

• 3. National Institute of Standards and Technology (NIST):-

• NIST is a non-regulatory federal agency whose goal is to promote innovation and United States competitiveness by advancing standards, measurement science, and technology. It is focused on helping federal agencies understand cloud computing.

• 4. Open Cloud Consortium (OCC):-

• The OCC goal is to support the development of standards for cloud computing and frameworks for
interoperating between clouds. The OCC has a number of different working groups devoted to varying
aspects of cloud computing.

• 5. Open Grid Forum (OGF):-

• The OGF is an open community that focuses on driving the adoption and evolution of distributed
computing. This includes everything from distributed high-performance computing resources to
horizontally scaled transactional systems supporting SOA as well as the cloud.

• 6. The Object Management Group (OMG):-

• The OMG is an international group focused on developing enterprise integration standards for a wide
range of industries including government, life sciences, and healthcare. The group provides modeling
standards for software and other processes.

• 7. Storage Networking Industry Association (SNIA):-

• The SNIA is focused on developing storage solution specifications and technologies, global standards, and
storage education. This organization’s mission is “to promote acceptance, deployment, and confidence in
storage-related architectures, systems, services, and technologies, across IT and business communities”.

• 8. Cloud Computing Interoperability Forum (CCIF):

• The Cloud Computing Interoperability Forum provides discussion forums to create a cloud computing
ecosystem where organizations can work together. A major focus is on creating a framework that enables
two or more cloud platforms to exchange information in a unified way.

• Consortium tackles cloud computing standards:

• The Open Cloud Consortium is a newly formed group of universities that is trying both to improve the performance of storage and computing clouds spread across geographically disparate data centers and to promote open frameworks that will let clouds operated by different entities work seamlessly together.

• Everyone’s talking about building a cloud these days. But if the IT world is filled with computing clouds,
will each one be treated like a separate island or will open standards allow all to interoperate with each
other?

• The Open Cloud Consortium (OCC) has only been around since 2008.

• The first phase of the testbed consisted of getting it into operation, with 240 cores in four U.S. data centers located at the University of Illinois at Chicago, StarLight in Chicago, Calit2 in La Jolla, and Johns Hopkins University in Baltimore. All the racks were connected to a wide-area 10 Gb/s network. Before the end of its first year, the testbed was upgraded to 480 cores.

• In its second year of operation, the OCC conducted phase 2 of operations. In this phase, the number of
racks was increased to 9 and the number of nodes to over 250. The number of cores went to over 1,000.

• Phase 3 began last year and is currently underway. The goal is to increase some of the 10G network
connections to 100G.

• Distributed Management Task Force (DMTF):-

• The DMTF creates open manageability standards spanning diverse emerging and traditional IT
infrastructures including cloud, virtualization, network, servers and storage. Member companies and
alliance partners worldwide collaborate on standards to improve the interoperable management of
information technologies.

• With more than 4,000 active participants representing 44 countries and nearly 200 organizations, the
Distributed Management Task Force, Inc. (DMTF) is the industry organization leading the development of
management standards and the promotion of interoperability for enterprise and Internet environments.

• The DMTF board of directors is led by technology companies including: Broadcom Inc., CA
Technologies, Dell Inc., Hewlett Packard Enterprise, Hitachi, Ltd., HP Inc., Intel
Corporation, Lenovo, NetApp, Software AG, Vertiv and VMware.

• DMTF standards include:

• Cloud Infrastructure Management Interface (CIMI) – a self-service interface for infrastructure clouds, allowing users to dynamically provision, configure and administer their cloud usage with a high-level interface that greatly simplifies cloud systems management. The specification standardizes interactions between cloud environments to achieve interoperable cloud infrastructure management between service providers and their consumers and developers.

• Common Information Model (CIM) – the CIM schema is a conceptual schema that defines how the
managed elements in an IT environment (for instance computers or storage area networks) are
represented as a common set of objects and relationships between them. CIM is extensible in order to
allow product specific extensions to the common definition of these managed elements. CIM uses a
model based upon UML to define the CIM Schema. CIM is the basis for most of the other DMTF
standards.

• Common Diagnostic Model (CDM) – the CDM schema is a part of the CIM schema that defines how
system diagnostics should be incorporated into the management infrastructure.

• Web-Based Enterprise Management (WBEM) – defines protocols for the interaction between systems management infrastructure components implementing CIM; a concept of DMTF management profiles that allows defining the behavior of the elements in the CIM schema; the CIM Query Language (CQL); and other specifications needed for the interoperability of CIM infrastructure.

• Systems Management Architecture for Server Hardware (SMASH) – a DMTF Management Initiative that includes management profiles for server hardware management. SMASH 2.0 allows for either WS-Management or SM-CLP (a command-line protocol for interacting with CIM infrastructure). SM-CLP was adopted as an International Standard in August 2011 by the Joint Technical Committee 1 (JTC 1) of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).

• System Management BIOS (SMBIOS) – defines how the BIOS interface of x86 architecture systems is
represented in CIM (and DMI).

• Desktop Management Interface (DMI) – the first desktop management standard. Due to the rapid
advancement of DMTF technologies, such as CIM, the DMTF defined an "end of life" process for DMI,
which ended March 31, 2005.

• Redfish – DMTF's Redfish API is an open industry standard specification and schema designed to meet the expectations of end users for simple, modern and secure management of scalable platform hardware. Created by the Redfish Forum (formerly the Scalable Platforms Management Forum, SPMF), Redfish specifies a RESTful interface and utilizes JSON and OData to help customers integrate solutions within their existing toolchains.

• Web Services Management (WS-MAN) – The DMTF’s Web Services Management (WS-Man) provides
interoperability between management applications and managed resources, and identifies a core set of
web service specifications and usage requirements that expose a common set of operations central to all
systems management. A SOAP-based protocol for managing computer systems (e.g., personal computers,
workstations, servers, smart devices), WS-Man supports web services and helps constellations of
computer systems and network-based services collaborate seamlessly.

• Desktop and mobile Architecture for System Hardware (DASH) – a management standard based on DMTF
Web Services for Management (WS-Management), for desktop and mobile client systems. WS-
Management was adopted as an international standard by ISO/IEC in 2013.

• Configuration Management Database Federation (CMDBf) – facilitates the sharing of information between configuration management databases (CMDBs) and other management data repositories (MDRs). The CMDBf standard enables organizations to federate and access information from complex, multi-vendor infrastructures, simplifying the process of managing related configuration data stored in multiple CMDBs and MDRs.

• Cloud Auditing Data Federation (CADF) – the CADF standard defines a full event model anyone can use to fill in the essential data needed to certify, self-manage and self-audit application security in cloud environments. CADF is an open standard that addresses the need for audit data federation by enabling cross-vendor information sharing via its data format and interface definitions.

• Platform Management Components Intercommunication (PMCI) – a suite of specifications defining a common architecture for intercommunication among management subsystem components. This suite includes the MCTP, PLDM and NC-SI specifications. The Platform Management standard was adopted as a national standard by ANSI in 2013.
OR

• Distributed Management Task Force, DMTF is an industry organization for the development, adoption
and promotion of interoperable management standards and integration technology for enterprise and
Internet environments.

• DMTF technologies include the following:

• Common Information Model (CIM)

• Web-Based Enterprise Management (WBEM)

• Desktop and mobile Architecture for System Hardware (DASH) Initiative

• Systems Management Architecture for Server Hardware (SMASH) Initiative

• System Management BIOS (SMBIOS)



• Alert Standard Format (ASF)

• Common Diagnostic Model (CDM)

• Management Component Transport Protocol (MCTP)

• Standards for Application Developers:-


• Browsers (Ajax):-

• Ajax (Asynchronous JavaScript and XML) allows a web application to request only the content that needs to be updated.

• This greatly reduces networking bandwidth usage and page load times.

• Used for interactive animation on web pages.

 Data (XML, JSON)

 XML (Extensible Markup Language)

• Usually used in combination with other standards.

• Defines the content of a document separately from its presentation.

 JSON (JavaScript Object Notation)

• A lightweight computer data interchange format.

• Specified in an IETF (Internet Engineering Task Force) Request for Comments (RFC).

• A language-independent data format.
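
• To make the interchange idea concrete, here is a minimal Python sketch (field names invented for the example) showing a JSON round trip with the standard json module; any JSON-aware language could consume the same text.

    import json

    # Serialize a native data structure to a JSON string...
    payload = {"service": "storage", "replicas": 3, "regions": ["us-east", "eu-west"]}
    text = json.dumps(payload)

    # ...and parse it back; the format is independent of the producing language.
    restored = json.loads(text)
    assert restored["replicas"] == 3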



• Solution Stacks (LAMP and LAPP):-


• In computing, a solution stack or software stack is a set of software subsystems or components needed
to create a complete platform such that no additional software is needed to support applications.
Applications are said to "run on" or "run on top of" the resulting platform.

LAMP

 The acronym stands for Linux, Apache, MySQL, and PHP (or Perl or Python).

 Popular because of its open-source nature, low cost, and the wide distribution of its components.

 Used to

• Run dynamic web sites and servers.

• Development and deployment of high-performance web applications.

• Define a web server infrastructure.

• Creating a programming environment for developing software.

 LAPP

• It is regarded as a more powerful stack than LAMP for enterprise-grade applications.

• Linux (operating system)

• Apache (web server)

• PostgreSQL (database management systems)

• Perl, PHP, or Python (scripting languages)



JAMstack:-

JavaScript (programming language)

APIs (Application programming interfaces)

Markup (content)

LEAP:

Linux (operating system)

Eucalyptus (free and open-source alternative to the Amazon Elastic Compute Cloud)

AppScale (cloud computing framework and free and open-source alternative to Google App Engine)

Python (programming language)

MEAN:-

MongoDB (database)

Express.js (app controller layer)

AngularJS/Angular (web app presentation)

Node.js (web server)

 Standards for Messaging:-


Simple Mail Transfer Protocol (SMTP)

• Simple Mail Transfer Protocol (SMTP) is the standard protocol for email services on a TCP/IP
network. SMTP provides the ability to send and receive email messages. SMTP is an application-
layer protocol that enables the transmission and delivery of email over the Internet.

• SMTP is usually used for:



• Sending a message from a workstation to a mail server.

• Or communications between mail servers.

• Client must have a constant connection to the host to receive SMTP messages.
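
• A minimal sketch of the first use case, sending a message from a workstation to a mail server, with Python's standard smtplib; the server name and addresses are placeholders, not a real deployment.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"     # placeholder addresses
    msg["To"] = "bob@example.com"
    msg["Subject"] = "SMTP demo"
    msg.set_content("Delivered over SMTP on a TCP/IP network.")

    # Connect to a hypothetical mail server and hand the message over.
    with smtplib.SMTP("mail.example.com", 25) as server:
        server.send_message(msg)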


• Post Office Protocol (POP)

• Purpose is to download messages from a server.

• This allows a server to store messages until a client connects and requests them.

• Once the client connects, POP servers begin to download the messages and subsequently delete them from the server.

• Post Office Protocol (POP) is a type of computer networking and Internet standard protocol that
extracts and retrieves email from a remote mail server for access by the host machine. POP is an
application layer protocol in the OSI model that provides end users the ability to fetch and receive
email.
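
• The download-and-delete behavior can be sketched with Python's standard poplib; the server name and credentials below are placeholders.

    import poplib

    # Connect to a hypothetical POP3 server over SSL and authenticate.
    server = poplib.POP3_SSL("pop.example.com")
    server.user("alice@example.com")
    server.pass_("app-password")

    # Download every stored message, then delete it from the server.
    count, _ = server.stat()
    for i in range(1, count + 1):
        _, lines, _ = server.retr(i)    # message body as a list of byte lines
        server.dele(i)                  # mark for deletion on the server

    server.quit()                       # deletions take effect at QUIT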

• Internet Messaging Access Protocol (IMAP)

• IMAP (Internet Message Access Protocol) is a standard email protocol that stores
email messages on a mail server, but allows the end user to view and manipulate the messages as
though they were stored locally on the end user's computing device(s).

• IMAP allows messages to be kept on the server but viewed as though they were stored locally.
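
• By contrast, a minimal imaplib sketch shows the IMAP model: messages stay in the server-side mailbox and are only fetched for viewing (placeholder server and credentials).

    import imaplib

    # Connect to a hypothetical IMAP server; messages remain stored there.
    server = imaplib.IMAP4_SSL("imap.example.com")
    server.login("alice@example.com", "app-password")
    server.select("INBOX")              # open the mailbox on the server

    # Fetch message 1 for viewing; unlike POP, nothing is deleted.
    status, data = server.fetch("1", "(RFC822)")

    server.close()
    server.logout()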



• Syndication (Atom & Atom Publishing Protocol, and RSS):-

• RSS:- RSS stands for Really Simple Syndication. It refers to machine-readable XML files that automatically update information. This information is fetched by a user's RSS feed reader, which converts the files into the latest updates from websites in an easy-to-read format.

• The acronym stands for “Really Simple Syndication” or “Rich Site Summary”.

• Used to publish frequently updated works—such as news headlines

• RSS is a family of web feed formats

• Atom & Atom Publishing Protocol:-

• The Atom format was developed as an alternative to RSS

• The name Atom applies to a pair of related Web standards. The Atom Syndication Format is an XML language used for web feeds, while the Atom Publishing Protocol is a simple HTTP-based protocol for creating and updating web resources. Web feeds allow software programs to check for updates published on a website.
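
• To make the feed idea concrete, the sketch below parses a tiny hand-written RSS 2.0 fragment with Python's standard xml.etree module; the feed content is invented for the example.

    import xml.etree.ElementTree as ET

    # A tiny hand-written RSS 2.0 feed (invented content).
    feed = """<rss version="2.0"><channel>
      <title>Example News</title>
      <item><title>Cloud standards update</title><link>http://example.com/1</link></item>
      <item><title>New DDoS report</title><link>http://example.com/2</link></item>
    </channel></rss>"""

    # A feed reader walks the items to show the latest headlines.
    root = ET.fromstring(feed)
    for item in root.iter("item"):
        print(item.findtext("title"), "->", item.findtext("link"))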

• Communications (HTTP, SIMPLE, and XMPP)

• HTTP :-HTTP means HyperText Transfer Protocol. HTTP is the underlying protocol used by the
World Wide Web and this protocol defines how messages are formatted and transmitted, and
what actions Web servers and browsers should take in response to various commands.

• The acronym stands for “Hypertext Transfer Protocol”.

• HTTP is a request/response standard between a client and a server, used for distributed, collaborative, hypermedia information systems.
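
• One request/response cycle can be shown with Python's standard http.client; example.com is just a placeholder host.

    import http.client

    # One HTTP request/response exchange between a client and a server.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/")                 # the client's request
    response = conn.getresponse()            # the server's response

    print(response.status, response.reason)  # e.g. 200 OK
    body = response.read()                   # the hypermedia payload
    conn.close()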

• SIMPLE:-

• Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions

• For registering for presence information and receiving notifications.



• It is also used for sending short messages and managing a session of realtime messages between
two or more participants.

• XMPP(Extensible Messaging and Presence Protocol)

• Used for near-real-time, extensible instant messaging and presence information.

• XMPP remains the core protocol of the Jabber Instant Messaging and Presence technology

Standards for Security:-


• SAML:- Security Assertion Markup Language (SAML, pronounced SAM-el) is an open standard for
exchanging authentication and authorization data between parties, in particular, between an identity
provider and a service provider.

• Standard for communicating authentication, authorization, and attribute information among online
partners.

• It allows businesses to securely send assertions between partners.

• SAML protocol refers to what is transmitted, not how it is transmitted.

• Three types of statements are provided by SAML: authentication statements, attribute statements, and authorization decision statements.

 OAuth (Open Authorization):-

• OAuth (Open Authorization) is an open standard protocol for authorizing an application to use user information. In general, it allows a third-party application access to user-related information such as name, date of birth, or email from an application like Facebook or Google, without giving the third party the user's credentials. OAuth is a method for publishing and interacting with protected data.

• For developers, OAuth provides a way to give users access to their data while protecting their account credentials.

• OAuth allows users to grant third-party applications access to their data without sharing their passwords.

• OAuth by itself provides no privacy at all and depends on other protocols such as SSL/TLS.
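
• As a rough sketch of one step in the flow, the code below exchanges an authorization code for an access token at a hypothetical OAuth 2.0 token endpoint, using only the Python standard library; every URL and credential here is a made-up placeholder, not any real provider's endpoint.

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical values; a real provider (e.g. Google or Facebook) issues its own.
    token_url = "https://auth.example.com/oauth/token"
    params = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": "AUTH_CODE_FROM_REDIRECT",   # obtained after the user consents
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    }).encode()

    # The third-party app trades the code for a token -- never the user's password.
    with urllib.request.urlopen(token_url, data=params) as resp:
        access_token = json.load(resp)["access_token"]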

 OpenID:-
• OpenID is an open, decentralized standard for user authentication.

• And allows users to log on to many services using the same digital identity.

• It is a single-sign-on (SSO) method of access control.

• OpenID allows you to use an existing account to sign in to multiple websites, without needing to create
new passwords.

• You may choose to associate information with your OpenID that can be shared with the websites you
visit, such as a name or email address. With OpenID, you control how much of that information is shared
with the websites you visit.

• With OpenID, your password is only given to your identity provider, and that provider then confirms your
identity to the websites you visit. Other than your provider, no website ever sees your password, so you
don’t need to worry about an unscrupulous or insecure website compromising your identity.

• OpenID is rapidly gaining adoption on the web, with over one billion OpenID enabled user
accounts and over 50,000 websites accepting OpenID for logins. Several large organizations either issue
or accept OpenIDs, including Google, Facebook, Yahoo!, Microsoft, AOL, MySpace, Sears, Universal Music
Group, France Telecom, Novell, Sun, Telecom Italia, and many more.

 SSL/TLS:

• Transport Layer Security, and its now-deprecated predecessor, Secure Sockets Layer, are cryptographic
protocols designed to provide communications security over a computer network. Several versions of the
protocols find widespread use in applications such as web browsing, email, instant messaging, and voice
over IP.

• TLS, or its predecessor SSL, is used to provide security and data integrity for communications, and to prevent eavesdropping, tampering, and message forgery.
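
• A minimal sketch of wrapping a TCP connection in TLS with Python's standard ssl module; example.com stands in for any server.

    import socket
    import ssl

    context = ssl.create_default_context()   # verifies certificates by default

    # Wrap a plain TCP connection so all traffic is encrypted and authenticated.
    with socket.create_connection(("example.com", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version())              # negotiated protocol, e.g. TLSv1.3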

MOBILE CLOUD COMPUTING:-

• MCC refers to an infrastructure where both the data storage and data processing happen outside of the
mobile device.

• Mobile cloud applications move the computing power and data storage away from the mobile devices
and into powerful and centralized computing platforms located in clouds, which are then accessed over
the wireless connection based on a thin native client.

• MOBILE CLOUD COMPUTING = MOBILE COMPUTING + CLOUD COMPUTING

– Mobile devices face many resource challenges (battery life, storage, bandwidth etc.)

– Cloud computing offers advantages to users by allowing them to use infrastructure, platforms and
software by cloud providers at low cost and elastically in an on-demand fashion.

– Mobile cloud computing provides mobile users with data storage and processing services in
clouds, obviating the need to have a powerful device configuration (e.g. CPU speed, memory
capacity etc), as all resource-intensive computing can be performed in the cloud.

• PRINCIPLES OF MOBILE CLOUD COMPUTING

• Mobile cloud computing is a combination of mobile computing, cloud computing and mobile Internet. It
can be stated as availability of cloud computing facilities in the mobile environment. It integrates the
advantages of all the three technologies and can thus be called as cloud computing for mobiles. Mobile
cloud computing is a new model where the data processing and storage is moved from mobile devices to
powerful and centralized computing platforms located in clouds. These platforms can then be accessed
through wireless connections via web browsers on the mobile devices. This is similar to cloud computing, but the client side has been adapted to make it viable for mobile phones; the main concept behind it is still cloud computing.

• APPLICATIONS:-

• Mobile Commerce.

• Mobile HealthCare.

• Mobile Learning.

• Mobile Gaming.

• ADVANTAGES:-

• Extending battery lifetime

• Improving data storage capacity and processing power

• Improving reliability and availability

• Dynamic provisioning

• Scalability

• Multi-tenancy

• Ease of Integration

• Mobile communication issues:

• Low bandwidth: One of the biggest issues, because the radio resource of wireless networks is much scarcer than that of wired networks

• Service availability: Mobile users may not be able to connect to the cloud to obtain a service due
to traffic congestion, network failures, mobile signal strength problems

• Heterogeneity: Handling wireless connectivity with highly heterogeneous networks to satisfy MCC
requirements (always-on connectivity, on-demand scalability, energy efficiency) is a difficult
problem

• Cloudlet Host

The Cloudlet Host is a physical server that hosts

• 1) a discovery service that broadcasts the cloudlet IP address and port to allow mobile devices to find it (a minimal sketch of this step follows the list).

• 2) The Base VM Image that is used for VM synthesis

• 3) a Cloudlet Server that handles code offload in the form of application overlays, performs VM synthesis
and starts guest VM instances with the resulting VM images, and

• 4) a VM Manager that serves as a host for all guest VM instances that contain the computation-intensive
server component of the corresponding mobile app.
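
• As a rough illustration of the discovery step only (not the actual cloudlet implementation), the sketch below broadcasts a cloudlet's IP address and port over UDP so nearby mobile clients can find it; the port number, address, and message format are invented for the example.

    import socket
    import time

    # Hypothetical discovery broadcast: "<cloudlet-ip>:<cloudlet-port>".
    DISCOVERY_PORT = 50000                 # invented port for the example
    message = b"192.168.1.20:8021"         # placeholder cloudlet address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

    # Periodically announce the cloudlet on the local network; a mobile client
    # listening on DISCOVERY_PORT parses the address and connects for offloading.
    for _ in range(3):
        sock.sendto(message, ("255.255.255.255", DISCOVERY_PORT))
        time.sleep(1)
    sock.close()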

• Mobile Client

• The Mobile Client is a handheld or wearable device that hosts

• 1) the Cloudlet Client app that discovers cloudlets and uploads application overlays to the cloudlet and

• 2) a set of Cloudlet-Ready Apps that operate as clients of the server code running in the cloudlet. The
Mobile Client stores an application overlay for each cloudlet-ready app that a user would conceivably
want to execute and for which computation offloading is appropriate. Each application overlay is
generated from the same Base VM Image that resides in the cloudlet.
