A Load Balancing Model Based on Cloud Partitioning for the Public Cloud
INTRODUCTION:
Cloud computing is an attractive technology in the field of computer science. Gartner's
report says that the cloud will bring changes to the IT industry. The cloud is
changing our life by providing users with new types of services. Users get service from a
cloud without paying attention to the details. NIST defined cloud computing
as "a model for enabling ubiquitous, convenient, on-demand network access to a shared
pool of configurable computing resources (e.g., networks, servers, storage, applications,
and services) that can be rapidly provisioned and released with minimal management
effort or service provider interaction." More and more people are paying attention to cloud
computing. Cloud computing is efficient and scalable, but maintaining the stability of
processing so many jobs in the cloud computing environment is a very complex problem,
with load balancing receiving much attention from researchers.
Since the job arrival pattern is not predictable and the capacities of the nodes in
the cloud differ, workload control is crucial for improving system performance and
maintaining stability. Load balancing schemes can be either static or dynamic, depending
on whether the system dynamics matter. Static schemes do not use the system information
and are less complex, while dynamic schemes bring additional costs for the system but can
change as the system status changes. A dynamic scheme is used here for its flexibility.
The model has a main controller and balancers to gather and analyze the information.
Thus, the dynamic control has little influence on the other working nodes. The system
status then provides a basis for choosing the right load balancing strategy.
The load balancing model given in this article is aimed at the public cloud, which
has numerous nodes with distributed computing resources in many different geographic
locations. Thus, this model divides the public cloud into several cloud partitions. When
the environment is very large and complex, these divisions simplify the load balancing.
The cloud has a main controller that chooses the suitable partitions for arriving jobs, while
the balancer for each cloud partition chooses the best load balancing strategy.
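A minimal sketch of how such a balancer might assign a job from gathered node status information (the class names and the load metric are invented for illustration; this is not the model's actual code):

```java
import java.util.List;

// Illustrative sketch of a partition balancer routing a job to the
// least-loaded node it monitors. The names and the load metric are
// assumptions for the example, not the model's actual implementation.
public class PartitionBalancer {
    public static class NodeLoad {
        public final String node;
        public final double load;   // gathered status information, e.g. in [0, 1]

        public NodeLoad(String node, double load) {
            this.node = node;
            this.load = load;
        }
    }

    // A simple dynamic strategy: route the job to the node whose
    // currently reported load is lowest.
    public static String assignJob(List<NodeLoad> status) {
        NodeLoad best = status.get(0);
        for (NodeLoad n : status) {
            if (n.load < best.load) {
                best = n;
            }
        }
        return best.node;
    }

    public static void main(String[] args) {
        System.out.println(assignJob(List.of(
                new NodeLoad("node-1", 0.8),
                new NodeLoad("node-2", 0.3),
                new NodeLoad("node-3", 0.5))));
    }
}
```

Because the balancer only reads the status the nodes report, the dynamic control has little influence on the working nodes themselves, as noted above.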
OBJECTIVES:
Resource allocation and job scheduling are the core functions of cloud computing.
These functions are based on adequate information about available resources. Timely
acquisition of resource status information is of great importance in ensuring the overall
performance of cloud computing. This work aims at building a distributed system for
cloud resource monitoring and prediction. In this paper, we present the design and
evaluation of a system architecture for cloud resource monitoring and prediction. We
discuss the key issues for system implementation, including machine learning-based
methodologies for modelling and optimization of resource prediction models. Evaluations
are performed on a prototype system. Our experimental results indicate that the efficiency
and accuracy of our system meet the demands of an online system for cloud resource
monitoring and prediction.
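As a hedged illustration only (not this system's actual machine learning method; the class and parameter names are invented), a resource predictor can be as simple as an exponentially weighted moving average over recent utilization samples:

```java
// Illustrative sketch: a minimal resource-usage predictor using an
// exponentially weighted moving average (EWMA). The machine learning
// models mentioned above are more sophisticated; everything here is an
// assumption made for the example.
public class EwmaPredictor {
    private final double alpha;   // smoothing factor in (0, 1]
    private double estimate;      // current smoothed value
    private boolean primed = false;

    public EwmaPredictor(double alpha) {
        if (alpha <= 0.0 || alpha > 1.0) {
            throw new IllegalArgumentException("alpha must be in (0, 1]");
        }
        this.alpha = alpha;
    }

    // Feed one observed utilization sample (e.g., CPU load in [0, 1]).
    public void observe(double sample) {
        if (!primed) {
            estimate = sample;
            primed = true;
        } else {
            estimate = alpha * sample + (1.0 - alpha) * estimate;
        }
    }

    // The predicted next value is the current smoothed estimate.
    public double predict() {
        return estimate;
    }

    public static void main(String[] args) {
        EwmaPredictor p = new EwmaPredictor(0.5);
        for (double s : new double[] {0.2, 0.4, 0.6}) {
            p.observe(s);
        }
        System.out.println(p.predict());
    }
}
```

A monitoring agent would feed each node's periodic samples into such a model and report the prediction to the scheduler.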
PROJECT CATEGORY:
NETWORKING-CLOUD
TOOLS/PLATFORM:
The major tools used for the development are as follows:
JAVA EE
Java EE extends the Java Platform, Standard Edition (Java SE), providing an API for object-relational
mapping, distributed and multi-tier architectures, and web services. The
platform incorporates a design based largely on modular components running on an
application server. Software for Java EE is primarily developed in the Java programming
language.
Java EE includes several API specifications, such as RMI, e-mail, web services,
XML, etc., and defines how to coordinate them. Java EE also features some
specifications unique to Java EE for components. These include Enterprise JavaBeans,
connectors, servlets, JavaServer Pages (JSP) and several web service technologies. This
allows developers to create enterprise applications that are portable and scalable, and that
integrate with legacy technologies. A Java EE application server can handle transactions,
security, scalability, concurrency and management of the components it is deploying, in
order to enable developers to concentrate more on the business logic of the components
rather than on infrastructure and integration tasks.
Java Platform, Enterprise Edition (Java EE) is the standard in community-driven
enterprise software. Today Java EE offers a rich enterprise software platform and,
with over 20 compliant Java EE 6 implementations to choose from, low risk and plenty of
options. As the industry begins the rapid adoption of Java EE 7, work has begun on Java
EE 8. With a survey that received over 4,500 responses, the community has prioritized
the desired features for Java EE 8; in fact, the first JSRs have already been
submitted.
MYSQL
MySQL is the world's most popular open source database, enabling the cost-effective
delivery of reliable, high-performance and scalable web-based and embedded
database applications. The MySQL software delivers a very fast, multi-threaded,
multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is
intended for mission-critical, heavy-load production systems as well as for embedding
into mass-deployed software.
You set up rules governing the relationships between different data fields, such as
one-to-one, one-to-many, unique, required or optional, and pointers between different
tables. The database enforces these rules, so that with a well-designed database,
your application never sees inconsistent, duplicate, orphan, out-of-date, or
missing data.
The MySQL Database Server is very fast, reliable, scalable, and easy to use.
MySQL Server can run comfortably on a desktop or laptop, alongside your other
applications, web servers, and so on, requiring little or no attention. Its
connectivity, speed, and security make MySQL Server highly suited for
accessing databases on the Internet.
Apache Tomcat (or simply Tomcat, formerly also Jakarta Tomcat) is an open
source web server and servlet container developed by the Apache Software Foundation.
Tomcat implements the Java Servlet and JavaServer Pages (JSP) specifications from
Oracle, and provides a "pure Java" HTTP web server environment for Java code to run in.
In the simplest configuration, Tomcat runs in a single operating system process. The process
runs a Java virtual machine. Every single HTTP request from a browser to Tomcat is
processed in the Tomcat process in a separate thread. Apache Tomcat includes tools for
configuration and management, but can also be configured by editing XML configuration
files. Tomcat can be used stand-alone, but it is often used behind traditional web
servers such as Apache httpd, with the traditional server serving static pages and Tomcat
serving dynamic servlet and JSP requests. Whether we call Tomcat a Java servlet
container or a servlet and JSP engine, we mean that Tomcat provides an environment in which
servlets can run and JSP can be processed. Similarly, we can say a CGI-enabled
web server is a CGI program container or engine, since the server can
accommodate CGI programs and communicate with them according to the CGI specification.
Between Tomcat and the servlet and JSP code residing on it there is also a standard
regulating their interaction, the servlet and JSP specification, which is in turn part of Sun's
J2EE (Java 2 Enterprise Edition). But what are servlets and JSP? Why do we need them?
Let's take a look at them in the following subsections before we cover them in much
more detail in the future.
Traditionally, before Java servlets, when we mentioned web applications, we meant a
collection of static HTML pages and a few CGI scripts to generate the dynamic content
portions of the web application, which were mostly written in C/C++ or Perl. Those CGI
scripts could be written in a platform-independent way, although they didn't need to be
(and for that reason often weren't). Another approach to generating dynamic content is
web server modules. For instance, the Apache httpd web server allows dynamically
loadable modules to run on startup. These modules can answer on pre-configured HTTP
request patterns, sending dynamic content to the HTTP client/browser. Now let us take a
look at the Java side. Java brought platform independence to the server, and Sun wanted
to leverage that capability as part of the solution toward a fast and platform-independent
web application standard. The other part of this solution was Java servlets. The idea
behind servlets was to use Java's simple and powerful multithreading to answer requests
without starting new processes. You can now write a servlet-based web application, move
it from one servlet container to another or from one computer architecture to another, and
run it without any change (in fact, without even recompiling any of its code).
REQUIREMENT SPECIFICATION:

Hardware Requirements

Server:
Processor  : Intel Pentium
RAM        : 2GB
Hard Disk  : 80GB
CPU Speed  : 2.60GHz
Monitor    : VGA Color

Client:
Processor  : Pentium
RAM        : 1GB
Hard Disk  : 40GB
CPU Speed  : 1.60GHz
Monitor    : VGA Color

Software Requirements

Web Server       : Apache Tomcat
Tools            : J2EE
Front End        : JSP
Language         : Java
Back End         : MySQL
Operating System : Windows 7 / above
PROBLEM DEFINITION:
PROPOSED SYSTEM
In recent years a variety of systems to facilitate MTC (many-task computing) have been
developed. Although these systems typically share common goals (e.g., to hide issues of
parallelism or fault tolerance), they aim at different fields of application. MapReduce is
designed to run data analysis jobs on a large amount of data, which is expected to be
stored across a large set of shared-nothing commodity servers.
Once a user has fit his program into the required map and reduce pattern, the
execution framework takes care of splitting the job into subtasks, distributing and
executing them. A single MapReduce job always consists of a distinct map and reduce
program.
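Purely as an illustration of the map and reduce pattern described above, the classic word-count example can be sketched on a single machine (this is the programming model only, not the distributed framework):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Single-machine sketch of the MapReduce programming model. A real
// framework would split the input, distribute the map and reduce
// subtasks across shared-nothing servers, and shuffle the intermediate
// pairs between them; here everything runs in one process.
public class MiniMapReduce {
    // Map phase: each input line emits (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String w : line.toLowerCase().split("\\s+")) {
            if (!w.isEmpty()) {
                out.add(Map.entry(w, 1));
            }
        }
        return out;
    }

    // Reduce phase: sum the counts for each word.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return counts;
    }

    public static Map<String, Integer> wordCount(List<String> lines) {
        List<Map.Entry<String, Integer>> intermediate = new ArrayList<>();
        for (String line : lines) {
            intermediate.addAll(map(line));   // map each split independently
        }
        return reduce(intermediate);          // then reduce the collected pairs
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("the cloud", "the public cloud")));
    }
}
```

The user supplies only the map and reduce functions; splitting, distribution, and execution of the subtasks are the framework's job.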
Project schedule (recovered from the Gantt chart; June to September):
Project duration: 4 months
System study: 2 weeks
System analysis: 4 weeks
Design: 3 weeks
Implementation: 1 week
(one further 4-week phase appears in the chart without a legible label)
PURPOSE
There are several cloud computing categories, with this work focused on a public
cloud. A public cloud is based on the standard cloud computing model, with service
provided by a service provider. A large public cloud will include many nodes, and the
nodes may be in different geographical locations. Cloud partitioning is used to manage this
large cloud. A cloud partition is a subarea of the public cloud with divisions based on the
geographic locations. The architecture is shown in Fig. 1 (typical cloud partitioning). The
dotted line around the blade server/top-of-rack (ToR) switch indicates an optional layer,
depending on whether the interconnect modules replace the ToR or add a tier. The load
balancing strategy is based on the cloud partitioning concept. After creating the cloud
partitions, the load balancing starts: when a job arrives at the system, the main
controller decides which cloud partition should receive the job. The partition load
balancer then decides how to assign the job to the nodes.
When the load status of a cloud partition is normal, this partitioning can be accomplished
locally. If the cloud partition load status is not normal, the job should be transferred to
another partition, using server virtualization and client virtualization.
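The dispatch rule just described can be sketched as follows (an illustrative fragment with invented names, not the system's actual implementation; the idle/normal/overloaded statuses follow the partition model above):

```java
import java.util.List;

// Sketch of the main controller's partition choice: take an idle
// partition first, otherwise the lightest NORMAL one; a null result
// mirrors the "transfer elsewhere" case when everything is overloaded.
public class MainController {
    public enum Status { IDLE, NORMAL, OVERLOADED }

    public static class Partition {
        public final String name;
        public final Status status;
        public final double loadDegree;   // e.g., average node utilization in [0, 1]

        public Partition(String name, Status status, double loadDegree) {
            this.name = name;
            this.status = status;
            this.loadDegree = loadDegree;
        }
    }

    // Choose the partition that should receive an arriving job.
    public static Partition choosePartition(List<Partition> partitions) {
        Partition best = null;
        for (Partition p : partitions) {
            if (p.status == Status.IDLE) {
                return p;                       // idle partitions are taken first
            }
            if (p.status == Status.NORMAL
                    && (best == null || p.loadDegree < best.loadDegree)) {
                best = p;                       // else the least-loaded NORMAL one
            }
        }
        return best;                            // null => every partition overloaded
    }

    public static void main(String[] args) {
        Partition chosen = choosePartition(List.of(
                new Partition("eu-1", Status.OVERLOADED, 0.95),
                new Partition("us-1", Status.NORMAL, 0.40),
                new Partition("as-1", Status.NORMAL, 0.70)));
        System.out.println(chosen.name);   // us-1
    }
}
```

The partition's own balancer would then pick a node inside the chosen partition.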
CLIENT VIRTUALIZATION
ANALYSIS:
USECASE DIAGRAM 1:
The use case diagram (not reproduced here) shows three actors: the cloud user, the cloud
server, and the destination. The use cases are: view load status, browse & encrypt,
upload file, allocate data, block unauthorised users, update balancer, authorise data,
receive data, decrypt data, and store data.
The DFD (also known as a bubble chart) is a simple graphical formalism that can
be used to represent a system in terms of the input data into the system, the various
processes carried out on these data, and the output data generated by the system.
The main reason the DFD technique is so popular is that the DFD is a very simple
formalism: it is simple to understand and use. A DFD uses a very limited
number of primitive symbols to represent the functions performed by a system and the
data flow among the functions. A DFD model hierarchy represents the various subfunctions.
Data Flow :
A line with an arrow represents data flows. The arrow shows the direction of flow of data.
Process:
A circle is used to depict a process. Processes are numbered and given a name.
Data Store :
A data store stores data. Two parallel lines depict a data store. Processes may
store data in or retrieve data from a data store.
External Entity :
External entities are represented by rectangles and are outside the system, such as vendors
or customers with which the system interacts. The designers have no control over them.
LEVEL 1 DFD
LEVEL 2 DFD
DFD:
ER DIAGRAM:
The ER diagram (not reproduced here) shows the entities User_registration (uname,
password, email, address), User_login (uname, passwrd, key, Reg_mail), Agent_reg
(uname, password, cpswrd, name, job, cost), Agent_login (uname, paswd, secret key),
Admin (uname, pswrd), Work_submit (doc, homework, feedback, cost), and Grid (gname,
www, dept, discipline), related through Work_assign, View & verify, Work assigned to,
Work submit or delete, and Find_cloud.
DATA DICTIONARY:
Table 1: Adminlogin
Description: This table keeps the information about the admin.

Field Name    Data type   Constraint   Size   Description
Username      Varchar     Not null     13     Name of admin
Password      Varchar     Not null     13     Password of admin
Department    Varchar     Not null     20
Discipline    Varchar     Not null     20
Table 2:

Field Name    Data type   Constraint   Size   Description
username      Varchar     null         13     Name of job
Savefile      Varchar     null         60     Location
department    Varchar     null         20     Department name
year          Int         null                Year of saving
Table 3: Grid
Description: This table keeps the information about the grid.

Field Name    Data type   Constraint   Size   Description
gname         Varchar     null         20     Name of grid
www           Varchar     null         60     URL address
Svm           Varchar     null         40
Table 4:

Field Name    Data type   Constraint   Size   Description
Username      Varchar     Null         30
feedback      Varchar     Null         80
history       Varchar     Null         80     Previous data
Cost          Float       Null                Cost in rupees
File          Varchar     Null         30     File name
doc           Varchar     Null         40     File location
Xyz           Varchar     Null         40     Resource 1
win           Varchar     Null         40     Resource 2
Winn          Varchar     Null         40     Resource 3
Table 5:

Field Name       Data type   Constraint   Size   Description
Username         Varchar     Null         70
Homeworkassign   Varchar     Null         70     Job assignments
Department       Varchar     Null         20
Status           Varchar     Null         10     Delete/allocate
Table 6: Agent
Description: This table keeps the information about agent registration.

Field Name        Data type   Constraint   Size   Description
Username          Varchar     Null         20     Username of agent
Password          Varchar     Null         12     Password
Confirmpassword   Varchar     Null         12     Password retype
Firstname         Varchar     Null         13
Lastname          Varchar     Null         13     Last name
Fathername        Varchar     Null         13     Name of father
Address           Varchar     Null         40     Address for correspondence
Pincode           Int         Null                Postal code
Phonenumber       Varchar     Null         11     Contact no
Email             Varchar     Null         20     Mail id
Discipline        Varchar     Null         20     Educational qualification
Department        Varchar     Null         20     Subject/dept
Secretkey         Varchar     Null
Table 7: Agent_login
Description: This table keeps the information about agent login details.

Field Name   Data type   Constraint   Size   Description
Username     Varchar     null         20     Username
Password     Varchar     null         13     Password
Secret key   Varchar     null                Key
Table 8: User_login
Description: This table keeps the information about user login details.

Field Name   Data type   Constraint   Size   Description
Username     Varchar     Not null     20     Username
Password     Varchar     Not null     13     Password
Secret key   Varchar     Not null            Key
Table 9: User
Description: This table keeps the information about user registration.

Field Name        Data type   Constraint   Size   Description
Username          Varchar     Null         20     Username
Password          Varchar     Null         13     Password
Confirmpassword   Varchar     Null         13     Retype password
Firstname         Varchar     Null         13     Name of user
Lastname          Varchar     Null         13     Last name
Fathername        Varchar     Null         20     Name of father
Address           Varchar     Null         40     Address
Pincode           Int         Null                Postal code
Phonenumber       Varchar     Null         11     Contact no
Email             Varchar     Null         20     Mail id
Discipline        Varchar     Null         20     Education
Department        Varchar     Null         20     Department name
Secretkey         Varchar     Null                Key
1. GRP-PSO Iteration
2. Resources Allocation
3. QOS calculation
4. Evaluation for Minimum Cost Value
1. GRP-PSO Iteration
After a successful grid login, the user can choose the best available resources from the
list of resources using the GRP-PSO technique. In the initial state, all resource
particles are represented as multiple yellow dots. All resource grid network connections
are designed from a weighted graph data structure.
2. Resources Allocation
After the user books the resources, a shortest path algorithm is used to
detect the best node based on QOS parameters such as virtual machine capability, user
friendliness, latest version, high speed, improved accuracy, resource quality, good
performance, previous history, power capability, backup capacity, grid network support,
customers' current and previous feedback, resource policy, debugging tools, cooperation
facility, and cost. Based on the QOS (quality of service with cost) of a resource, GRP-PSO
evaluates all resources for the task. A suitable resource is found and connected to the
task-submitting resource after n iterations, where n = number of
available resources. If a suitable resource is not available on the user-specified date, the
user can reschedule the resources.
3. QOS calculation
The minimum of x/y is used to find the best grid node among the
available resources. Here x is the distance between the task-submitting resource node and
another available resource node, and y is the QOS rate, where the QOS rate is calculated
based on the listed attributes. High-quality attributes get a high QOS rate. After the best
grid node is found, the tasks (jobs) are transferred from the submitting node to the best
grid node through the Globus online grid network.
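The x/y selection rule above can be sketched as follows (an illustrative fragment, not the project's actual implementation; the node fields and names are assumptions):

```java
import java.util.List;

// Sketch of the QOS-based selection described above: pick the node that
// minimizes x / y, where x is the distance from the task-submitting node
// and y is the node's QOS rate (higher-quality attributes => higher y).
public class QosSelector {
    public static class Node {
        public final String name;
        public final double distance;   // x: distance from the submitting node
        public final double qosRate;    // y: aggregated QOS rate, must be > 0

        public Node(String name, double distance, double qosRate) {
            this.name = name;
            this.distance = distance;
            this.qosRate = qosRate;
        }
    }

    public static Node bestNode(List<Node> candidates) {
        Node best = null;
        double bestScore = Double.POSITIVE_INFINITY;
        for (Node n : candidates) {
            double score = n.distance / n.qosRate;   // minimize x / y
            if (score < bestScore) {
                bestScore = score;
                best = n;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Node b = bestNode(List.of(
                new Node("g1", 10.0, 2.0),    // score 5.0
                new Node("g2", 6.0, 3.0),     // score 2.0, the minimum
                new Node("g3", 4.0, 1.0)));   // score 4.0
        System.out.println(b.name);
    }
}
```

A nearby node with a high QOS rate thus beats both a distant high-quality node and a close low-quality one.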
4. Evaluation for Minimum Cost Value
The minimum cost value is achieved through iteration using the GRP-PSO
algorithm. The best node in the grid environment is found through this iterated
minimum cost value.
IMPLEMENTATION METHODOLOGY:
System implementation is the final phase, i.e., putting the utility into action.
Implementation is the stage in the project where the theoretical design is turned into a
working system. The most crucial stage is achieving a new successful system and giving
users confidence that the new system will work efficiently and effectively. The system is
implemented only after thorough checking is done and it is found to work according
to the specifications.
It involves careful planning, investigation of the current system and constraints on
implementation, and design of methods to achieve the changeover. Once checking is done
and the system is found to work according to the specification, the major tasks of
preparing for implementation are educating and training the users.
IMPLEMENTATION PROCEDURES
1. Test Plans:
The implementation of a computer-based system requires that test data be prepared and
that the system and its elements be tested in a planned, structured manner.
2. Training:
The purpose of the training is to ensure that all the personnel who are to be associated
with the computer-based business system possess the necessary knowledge and skills.
Operating, programming, and user personnel are trained using reference manuals as
training aids.
2.1 Programmer Training
2.2 Operator Training
2.3 User Training
3. Equipment Installation:
Equipment vendors can provide the specifications for equipment installation. They usually
work with the project's equipment installation team in planning for adequate space,
power, and light and a suitable environment. After a suitable site has been prepared, the
computer equipment can be installed. Although equipment normally is installed by the
manufacturer, the implementation team should advise and assist. Participation enables the
team to aid in the installation and, more importantly, to become familiar with the
equipment.
4. Conve rsion:
Conversion is the process of performing all of the operations that result directly in the
turnover of the new system to the user. Conversion has two parts:
1. The creation of a conversion plan at the start of the development phase, and
2. The creation of the system changeover plan at the end of the development phase
and the implementation of the plan at the beginning of system operation.
ARCHITECTURE:
The architecture diagram (not reproduced here) shows cloud users and the admin
connecting through the cloud user interface and the Internet to the revelation services
and the cloud revelation evaluator, which dispatch work to the cloud nodes (Node-1
through Node-N).
SECURITY MECHANISM:
The system security problem can be divided into four related issues: security,
integrity, privacy and confidentiality. They determine the file structure, data structure and
access procedures.
26
A Load Balancing Model Base d on Cloud Partitioning for the Public Cloud
System security refers to the technical innovations and procedures applied to the
hardware and operating systems to protect against deliberate or accidental damage from a
defined threat. In contrast, data security is the protection of data from loss, disclosure,
modifications and destruction.
System integrity refers to the proper functioning of programs, appropriate
physical security, and safety against external threats such as eavesdropping and
wiretapping. In comparison, data integrity makes sure that data do not differ from their
original form. Privacy defines what information users or organizations are willing to share
with others, and how the organization can be protected against unwelcome, unfair or
excessive dissemination of information about it.
The term confidentiality refers to a special status given to sensitive information in a
database to minimize possible invasion of privacy. It is an attribute of information that
characterizes its need for protection.
FUTURE ENHANCEMENT:
Find other load balancing strategies: other load balancing strategies may provide better
results, so tests are needed to compare different strategies. Many tests are needed to
guarantee system availability and efficiency.
BIBLIOGRAPHY:
Appendix A List of Useful Websites
http://www.ijiset.com/v1s5/IJISET_V1_I5_63.pdf
http://www.ijircce.com/upload/2014/february/4_Load.pdf
http://www.iosrjournals.org/iosr-jce/papers/Vol16-issue1/Version6/O016168287.pdf
http://www.ijarcsse.com/docs/papers/Volume_4/3_March2014/V4I3-0303.pdf
http://www.ijetae.com/files/Volume4Issue7/IJETAE_0714_117.pdf