
Optimizing cost and response time for data-intensive services composition based on ABC algorithm


Amel Bousrih
Higher Institute of Computer Science and Communication Technique of Hammam Sousse
Sousse University, Tunisia
RIADI-GDL Lab, Manouba University
Email: bousrihamal@gmail.com

Zaki Brahmi
Higher Institute of Computer Science and Communication Technique of Hammam Sousse
Sousse University, Tunisia
RIADI-GDL Lab, Manouba University
Email: zakibrahmi@yahoo.fr

Abstract—Various experts, large institutions (like the US Massachusetts Institute of Technology (MIT)), administrations and specialists in the field of technology consider data-intensive services one of the major IT challenges of the 2010-2020 decade and have made them a new priority for research and development. This type of application is found in several scientific fields such as biology, astronomy, cosmology, the social sciences and climate science. When data-intensive web services and the cloud are considered together, data play the dominant role in the execution of the data-intensive service composition, and the cost and response time of data sets and services influence the quality of the whole composition. Thus, we propose a novel approach towards minimizing the cost, response time and energy consumption of composite data-intensive services. It uses the artificial bee colony algorithm to select the data-intensive services with optimal cost and response time. The newly composed data-intensive service will be an outstanding contribution to research on composite data-intensive services.

Keywords- web service; data-intensive composition; Cloud computing; Bee Colony

I. INTRODUCTION

Cloud computing has become a viable, mainstream solution for data processing, storage, and distribution [9]. In recent years, huge collections of data have been created by advances in technology areas such as digital sensors, communications, computation, and storage. Every area of the global economy is feeling the effects of this data explosion. The data deluge exhibits not just volume and velocity, but also variability and diversity of structure, completeness, and domain. Thus, scientists need to combine data from various sources in order to design a workflow of data-intensive services and obtain a more complex, value-added composite service, called a data-intensive composite service. Such a service (or composition) needs to be: i) feasible, satisfying all the requirements of the user query; ii) optimal, in that the service composition must have good quality of service (QoS) in terms of response time, cost, energy consumption, reliability, and other measures. The QoS of a composite data-intensive service is evaluated by aggregating the QoS values of its components. These aggregation functions differ from one composition pattern to another and reflect the structure of the composition workflow [4] [5] [6]. The impact of these enormous new sources of data extends to many areas of society, including business, industry, government, science, sports, advertising and public health; hence the importance of data-intensive computing keeps increasing, and it has become a foremost research field in industry and in the academic community. Correspondingly, applications based on data-intensive services have become the most challenging type of application, due to the impact of these enormous sources of data on the cost and response time of data-intensive service compositions.
Yet data-intensive service composition raises many challenges. First, the large number of data sets and the increasing number of functionally equivalent services make the composition more complex. Second, the size and number of distributed data sets increase communication and storage response times, and thereafter the costs, which affects the performance of the whole composition process. Third, the cost of transferring data to and from service endpoints grows as the number of data sets increases. Finally, the dynamic nature of cloud computing and data replication requires a dynamic and adaptive mechanism to regulate the interaction between users and providers [9]. Few approaches have been proposed in the literature to optimize cost and response time for data-intensive services composition [7] [8] [9]. These approaches rely on the ant colony algorithm to select the optimal service and its data replicas. However, most of them treat the search for the service with optimal cost and response time and the search for its data replicas as two separate sub-problems. Additionally, they consider the composition of data-intensive services as already predefined and focus only on selecting the services that give the optimal cost and response time (the data-intensive service selection problem).
This paper addresses the data-intensive services composition problem, which combines the composition of data-intensive services, the selection of services, and the selection of their optimal data replicas. We propose a novel approach based on the artificial bee colony algorithm (ABC algorithm), organized in two main phases:
i) The pre-composition phase involves two steps: a data movement strategy is applied to all the data sets in order to reduce the transfer frequency between the services that require them; then the web service instances in the cloud are classified in order to organize the search areas before the selection phase and to ensure an optimal time to answer the client's request.

ii) The composition phase uses the artificial bee colony algorithm for the selection and composition of the data-intensive services as well as their data replicas, since this algorithm has shown good performance on mathematical and computational problems compared with other swarm algorithms such as particle swarm optimization (PSO) [2].
The remainder of this paper is organized as follows. In Section II we review related work and its potential limitations. Section III presents the basic concepts, the data-intensive web service composition problem and its formulation. We describe our proposed approach in Section IV, and Section V presents the evaluation of our approach. Finally, we conclude our work and point to future prospects.
II. RELATED WORK

Finding a feasible composition within a reasonable time means minimizing the composition's cost, response time and energy consumption. The optimization of data-intensive service composition has therefore attracted the attention of many researchers and has become an active research area in cloud computing and in e-science in particular. A few studies in the literature [7] [8] [9] propose data replica or service selection strategies. These approaches are mainly based on artificial intelligence, using the ant colony algorithm: the authors model the problem as a weighted directed acyclic graph (DAG) with a starting point S and a target point T, so that the problem is transformed from selecting the best service or the best data replica for the composition into selecting the optimal path in the DAG with the ant colony algorithm. In these approaches, each task in the workflow has a set of candidate concrete services, each concrete service has a set of data sets, and each data set is duplicated as data replicas on several servers in the network. The goal is then to select the optimal service, i.e. the one with the least cost, response time, and energy consumption. In [8], the authors study the effect of data intensity and of the communication cost of mass data transfer on service composition, and propose a service selection algorithm based on an enhanced ant colony system for data-intensive service provision. In that paper, the data-intensive service composition problem is modeled as an AND/OR graph, which can handle sequence relations, switch relations, and parallel relations between services. In [9], the authors propose a data replica selection optimization algorithm based on an ant colony system, whose performance is evaluated by simulation. The background application of that work is the Alpha Magnetic Spectrometer experiment, which involves large amounts of data being transferred, organized and stored. Finally, in [7], the same authors propose a cost minimization model for data-intensive services composition that is in some sense similar to our approach: it focuses on how to select appropriate data centers for accessing data replicas and, in parallel, how to select the services with the lowest associated costs.
All the approaches mentioned above assume that the data-intensive services composition is predefined and focus only on selecting the optimal service or the optimal data replica; they also assume that every service requires a single cloud instance, ignoring the case of multi-cloud service instances. Moreover, these approaches [7] [8] [9] prove ineffective in a highly dynamic environment such as the cloud, where the location, the availability and the resources used by these applications change constantly. As for service composition in the cloud in general, many approaches have been presented, most of them QoS-based composition techniques. However, these approaches are not suited to data-intensive services composition, which has distinctive characteristics and requirements, such as being economics-based and transactional and obeying the constraints of pay-as-you-go data policies. Since the traditional quality-based service composition approaches cannot solve data-intensive services composition problems, a novel technique is needed that provides data-intensive services composition solutions with minimized cost and response time.
Our approach differs from the proposed approaches in two aspects:
1) To reduce the data set transfer frequency and answer the client's request in a shorter time, we propose a data placement strategy and a strategy for classifying web services in the cloud based on their inputs and outputs.
2) We propose a distributed algorithm for the selection and composition of data-intensive services, based on the artificial bee colony algorithm, that simultaneously guarantees the selection of the optimal service, its data replicas, and the composition.
To the best of our knowledge, this is the first work that shows how to use the artificial bee colony optimization algorithm to solve the service and data replica selection problem in the cloud.
III. BASIC CONCEPTS AND PROBLEM FORMULATION

In this section, the basic definitions and concepts used in our study are explained, then the problem is described, and finally a cost-minimizing service composition model is given.
A. Basic definitions
Definition 1 (User Request):
A request R is defined by the triple (Ri, Ro, W), where Ri represents the inputs, Ro represents the outputs, and W denotes the set of user-defined quality criteria. In this paper we restrict those quality criteria to the response time and cost of the service and of its data replicas.
Definition 2 (Web Service):
A web service CSi is a tuple CSi = (Id, In, Out, Dt, C), where Id is the identifier of the service, In and Out are respectively the inputs and outputs of the web service, Dti = {Dti1, Dti2, . . . , Dtik} denotes the set of data sets of this service, with k the number of data sets of CSi, and C = {C1, C2, . . . , Cp} represents all the instances of this service in different clouds, with p the number of such instances.
Definition 3 (Class of Services):
A class is a set of elements that share common properties. In our case, the elements of a class are services that have the same input and output parameters. We define a service class as Cli = (Clii, Clio, SWim), where Clii represents the inputs of the class Cli, Clio represents its outputs, and SWim represents the set of m similar web services whose inputs and outputs are Clii and Clio.

Definition 4 (Data set):
Each concrete service CSi requires a set of data sets, denoted DTi, composed of k data sets: DTi = {dt1, dt2, . . . , dtk}. Each data set dt has a set of l data replicas, where l is the number of data servers on which the data set dti is replicated and from which it is available.
Definition 5 (Dependencyij):
Dependencyij denotes the dependence between two or more data sets: if there are tasks that use di and dj together, then the value of this dependency is the number of tasks that use both di and dj.
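The definitions above can be sketched as plain data structures; the Python names and fields below are illustrative assumptions, not taken from the paper, and only Dependencyij (Definition 5) is computed.

```python
from dataclasses import dataclass, field

@dataclass
class DataSet:
    name: str
    replicas: list              # data servers holding a copy (Definition 4)

@dataclass
class WebService:
    sid: str                    # Id
    inputs: frozenset           # In
    outputs: frozenset          # Out
    datasets: list              # Dt: data sets required by the service
    instances: list = field(default_factory=list)  # C: cloud instances

def dependency(di: str, dj: str, services: list) -> int:
    """Dependency_ij (Definition 5): the number of services (tasks)
    that use the data sets di and dj together."""
    return sum(
        1 for s in services
        if di in {d.name for d in s.datasets}
        and dj in {d.name for d in s.datasets}
    )
```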
B. Problem description
In a composition of data-intensive web services, each service necessarily processes very large amounts of data coming from different data centers (its inputs) and in turn sends large amounts of data to other centers (its outputs). Formally, we can define the data-intensive service composition problem in the cloud as DISCP = ⟨Q, CS, C, DT, Rp, A⟩, where Q is the user query, CS the set of web services, C the set of cloud instances on which every service CSi is hosted, DT the data sets, Rp the set of data replicas of each data set, and A our algorithm, which finds the optimal composition.
C. Cost and response time minimization equations for data-intensive service composition
Consider a data-intensive service CSij connected by links of different bandwidths to its data servers. For each data set dt ∈ DTi, the time to transfer it from its data server ddt to the service location y is denoted Tt(dt, ddt, y). If a local copy of the data set exists, the transfer time is assumed to be zero. More formally, it is given by (1), inspired by [7]:

Tt(dt, ddt, y) = Rtdt + size(dt) / bw(ddt, y)    (1)

where Rtdt is the time span from requesting dt to receiving its first byte, bw(ddt, y) is the transfer rate between the data server ddt and y, size(dt) is the size of dt, and size(dt)/bw(ddt, y) denotes the practical transfer time. The estimated transfer time for the service, Tt(CSi), is the maximum transfer time over all the data sets required by the service, as given by (2):

Tt(CSi) = max_{dt ∈ DTi} Tt(dt, ddt, y)    (2)

Thus, the estimated execution time for the service, Tet(CSi), is given by (3):

Tet(CSi) = Trp(CSi) + Tt(CSi)    (3)

where Trp(CSi) is the response time of the concrete data-intensive service. For each data-intensive service CSi, the access cost of a data set is given by ac(dt). The cost of the service, Cost(CSi), is described by (4):

Cost(CSi) = Cvi(CSi) + Ctr(CSi) + Csr(CSi)    (4)

where Cvi(CSi) = Σ_{dt ∈ DTi} ac(dt) is the access cost of all the data sets required by the service, Ctr(CSi) = Σ_{dt ∈ DTi} size(dt) · tcost is the transfer cost of all the data sets, with tcost the cost per unit of data transferred over a link, and Csr(CSi) is the cost of the concrete service CSi itself.
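Equations (1)-(4) can be turned into a small numeric sketch; the function and parameter names below are assumptions, and only the formulas themselves come from the text.

```python
def transfer_time(rt_dt, size_dt, bw, local=False):
    """Eq. (1): time to transfer a data set dt from its server d_dt to y:
    first-byte latency Rt_dt plus size(dt)/bw(d_dt, y); zero when a
    local copy of the data set exists."""
    return 0.0 if local else rt_dt + size_dt / bw

def service_transfer_time(transfers):
    """Eq. (2): the service waits for its slowest required data set."""
    return max(transfers) if transfers else 0.0

def execution_time(response_time, transfer):
    """Eq. (3): estimated execution time = response time + transfer time."""
    return response_time + transfer

def service_cost(access_costs, sizes, tcost, service_price):
    """Eq. (4): data access cost + data transfer cost + concrete service cost."""
    c_vi = sum(access_costs)                # sum of ac(dt) over DT_i
    c_tr = sum(s * tcost for s in sizes)    # sum of size(dt) * tcost
    return c_vi + c_tr + service_price
```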
D. Artificial bee colony algorithm
The honey bee (Apis mellifera) is a fascinating social insect that lives in a family. All the family members are engaged in complex tasks, and each bee in a colony exhibits individual and collective (social) behavior that supports communication, construction and the division of responsibilities [11]. Tereshko and Loengarov developed a minimal model of forage selection that leads to the emergence of collective intelligence; it consists of three main components, food sources, employed foragers, and unemployed foragers, and defines two modes of behavior: recruitment to a nectar source and abandonment of a source. Teodorovic suggested using bee swarm intelligence in the development of artificial systems aimed at solving complex problems in traffic and transportation [10]. In the ABC algorithm, the position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The number of employed bees, and likewise of onlooker bees, is equal to the number of solutions in the population. In the first step, the ABC generates a randomly distributed initial population (C = 0) of SN solutions (food source positions), where SN denotes the number of employed bees (equivalently, of onlooker bees). Each solution xi (1 ≤ i ≤ SN) is a D-dimensional vector, where D is the number of optimization parameters. After initialization, the population of positions (solutions) is subjected to repeated cycles (1 ≤ C ≤ MCN) of the search processes of the employed bees, the onlooker bees and the scout bees. An employed bee produces a modification of the position (solution) in her memory, depending on local (visual) information, and tests the nectar amount (fitness value) of the new source (new solution). If the nectar amount of the new source is higher than that of the previous one, the bee memorizes the new position and forgets the old one; otherwise, she keeps the previous position in her memory. When all the employed bees have completed the search process, they share the nectar information of the food sources and their position information with the onlooker bees. An onlooker bee evaluates the nectar information taken from the employed bees and chooses a food source with a probability related to its nectar amount. As the employed bee does, the onlooker bee produces a modification of the position in her memory and checks the nectar amount of the candidate source. If the fitness-based probability of the new source is higher than that of the previous one, the onlooker bee memorizes the new position and forgets the old one. The main steps of the algorithm are as follows:
1: Initialize the population
2: repeat
3:   Place the employed bees on their food sources
4:   Place the onlooker bees on the food sources depending on their nectar amounts
5:   Send the scouts to the search area to discover new food sources
6:   Memorize the best food source found so far
7: until the requirements are met
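Assuming a standard ABC formulation (greedy position updates, roulette-wheel onlooker choice, scout replacement after a trial limit), the seven steps above can be sketched for a generic minimization problem. This is an illustrative sketch, not the paper's implementation; all names are hypothetical.

```python
import random

def abc_minimize(f, dim, bounds, sn=10, max_cycle=50, limit=20, seed=0):
    """Minimal artificial bee colony sketch: SN food sources and the
    employed / onlooker / scout phases of the standard ABC scheme."""
    rng = random.Random(seed)
    lo, hi = bounds
    new_source = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    xs = [new_source() for _ in range(sn)]   # food source positions (solutions)
    trials = [0] * sn                        # abandonment counters
    fitness = lambda x: 1.0 / (1.0 + f(x))   # nectar amount of a source
    best = min(xs, key=f)

    def try_improve(i):
        # perturb one dimension toward/away from a random partner source
        k, d = rng.randrange(sn), rng.randrange(dim)
        v = list(xs[i])
        v[d] += rng.uniform(-1.0, 1.0) * (xs[i][d] - xs[k][d])
        v[d] = min(max(v[d], lo), hi)
        if f(v) < f(xs[i]):                  # greedy selection: keep the better source
            xs[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(max_cycle):
        for i in range(sn):                  # employed bee phase
            try_improve(i)
        fits = [fitness(x) for x in xs]
        total = sum(fits)
        for _ in range(sn):                  # onlooker phase: roulette wheel on fitness
            r, acc, chosen = rng.uniform(0.0, total), 0.0, 0
            for j, ft in enumerate(fits):
                acc += ft
                if r <= acc:
                    chosen = j
                    break
            try_improve(chosen)
        for i in range(sn):                  # scout phase: abandon exhausted sources
            if trials[i] > limit:
                xs[i], trials[i] = new_source(), 0
        best = min(xs + [best], key=f)       # memorize the best source so far
    return best
```

For example, minimizing the sphere function `sum(v*v for v in x)` over `[-5, 5]^2` converges toward the origin within a few dozen cycles.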

IV. APPROACH FOR OPTIMIZING COST AND TIME OF DATA-INTENSIVE WEB SERVICES COMPOSITION USING THE ABC ALGORITHM
Our approach involves two phases: a pre-composition phase and a composition and selection phase.
1- The pre-composition phase (design time) involves two steps:
Step 1: a data set movement strategy;
Step 2: an organization of the cloud web service instances based on their input and output parameters, inspired by [1].
2- The composition and selection phase: the artificial bee colony algorithm is used in this phase to manage the composition and selection of the data-intensive web services, given that the ABC algorithm has shown good performance on mathematical and computational problems compared with other algorithms such as particle swarm optimization (PSO).
1) The pre-composition phase (design time): To run a service without any data transfer over the network, a naive solution is to put all the data sets used by a service CSij in the same data center, thereby totally removing the transfer of massive data sets over the network. Nevertheless, this strategy is not practically feasible: the huge size of these data sets makes it unrealistic to place them all in the same data center, due to the cost of building such a data center as well as its energy consumption.
In this article, inspired by the work of the authors in [3], the solution is to group as many data sets as possible on the same server. For example, if we find two or more data sets dti and dtj that are mostly used together by many services, then these data sets are stored in the same data center, so that their services can access them without moving them from one server to another. This strategy reduces the frequency of data set transfers over the network; it also reduces the energy consumption and the cost of the service, as well as its response time.
In scientific data-intensive workflows, moving a data set from one data center to another costs more than scheduling the data sets to the right center in the first place [3]. The goal is therefore to put each data set in a reasonable location in a data center. We start our data set placement strategy by grouping all the data sets that are used by each web service CSij. Then we compute the dependency, i.e. the number of services in the intersection (CSi ∩ CSj), in order to find for each data set the maximal dependency, that is, the services that most often use the data sets together, written in the form di⟨CSi . . . CSj | Si⟩. Thereafter, we use the results to build a dependency matrix MD of dimension (n × n), where the headers of the rows and columns are the existing data sets and every element MD[i, j] contains the number of dependencies between the data set of column j and the data set of row i. Then we apply the Bond Energy Algorithm (BEA), which groups similar objects together in a matrix by permuting the rows and columns of the existing matrix [9]. In our work, the BEA algorithm takes the dependency matrix MD as input and generates a clustered dependency matrix MC as output, organized so that elements with similar values are grouped together (large values with other large values and small values with other small values); these values represent the dependencies between data sets. The data sets are then each moved to their appropriate partition, as modeled in the matrix MC. With this data set placement strategy, we guarantee a reduction of the data set transfer frequency and, furthermore, a reduction in energy consumption.
In the second step of the pre-composition phase, to better optimize the composition of data-intensive services in the cloud, we follow the solution proposed by the authors in [1], in which cooperative agents are used to distribute and classify the different concrete services that may be used in the composition process; these services are classified by their inputs and outputs. The classification into service classes is thus organized according to those parameters: we group the different cloud instances that have the same input and output parameters and put them in the same service class Cl. In this way we optimize the search process during the subsequent selection and composition stage based on the ABC algorithm. If a web service CSij has more than one cloud instance, the instances are considered distinct, given the fact that they are not in the same location. The difference in each instance's response time reflects the influence of network delay, which ultimately matters for satisfying the client's request in optimal time.
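A minimal sketch of the dependency-matrix construction described above. BEA itself reorders rows and columns to maximize bond energy, which is not reproduced here; the naive grouping below only illustrates the intended outcome (strongly co-used data sets landing in the same partition). All names are hypothetical.

```python
def dependency_matrix(usage):
    """Build the dependency matrix MD: usage maps each service to the
    data sets it needs; MD[i][j] = number of services that use both
    data sets i and j (Definition 5)."""
    names = sorted({d for ds in usage.values() for d in ds})
    idx = {n: k for k, n in enumerate(names)}
    n = len(names)
    md = [[0] * n for _ in range(n)]
    for ds in usage.values():
        for a in ds:
            for b in ds:
                if a != b:
                    md[idx[a]][idx[b]] += 1
    return names, md

def group_by_strongest_link(names, md):
    """Naive stand-in for the BEA clustering step: place each data set
    with the partner it depends on most, so strongly co-used data sets
    end up in the same data center partition."""
    partition = {}
    for i, name in enumerate(names):
        j = max(range(len(names)), key=lambda k: md[i][k])
        partition[name] = min(name, names[j]) if md[i][j] > 0 else name
    return partition
```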
2) The composition and selection phase: Our composition and selection algorithm based on the ABC algorithm consists of three main phases:
The initialization phase
The search phase
The selection phase

Fig. 1. Data-intensive composition workflow using the ABC algorithm

i. The initialization phase: At initialization, the bees gather around their hive (the start point). The colony consists half of employed bees and half of onlooker bees, with a total size equal to twice the number of existing concrete services. We then initialize the stopping criterion for the search, the maximum number of search cycles maxCycle, equal to the number of service classes Cl. In our model, a new food source is produced from the existing sources, i.e. the web services in each service class Cl. Every bee k associated with the hive has its own identifier and a memory where all the necessary information is saved. The two parameters to be optimized are the cost and the response time of the service.
ii. The search phase: After the initialization phase, each bee heads to her search area (a service class): each bee launches the search process in a class in order to exploit its services (the food sources) and find the service with the best parameters. Once the employed bee k reaches her service CSij, whose location is already saved in her memory, she starts comparing the services' optimization parameters, as do all the other employed bees, by applying the following utility function:

U(CSij) = (Cost(CSij) + ResponseTime(CSij)) / 2    (5)

where Cost(CSij) is the cost of the concrete service CSij and ResponseTime(CSij) is its response time. The utility function U(CSij) is the basic function from which the bees calculate the fitness of each specific service. Each bee thus computes the utility and fitness of her service from its cost and response time, then saves them in her memory in order to transfer and share the nectar information of the food sources and their position information with the onlooker bees in the dance area. Nevertheless, before returning to her hive, every bee k necessarily goes through a local search phase in her service CSij in order to select the best data replica of each data set dti of this service. To accomplish this local search, the bee k accesses each data set of her web service CSij, explores its data replicas, calculates their utility functions, and then compares the fitness values of all the replicated data. If a data set happens to be located at the service CSij itself, the bee considers its transfer time to be 0. If one of the data sets is not co-located with the service, the employed bee k invokes the scout bees located on the service. In this case, we consider the service as a new virtual hive; the scout bees then consult each data set to locate its replicas, determine the number of data replicas and their positions, and transmit them to the bee k, as shown in Fig. 2.

Having transferred the information to the bee k located in its own service CSij, the bee k in return clones several new bees, equal in number to the data replicas of each data set. Subsequently, the cloned bees leave the service endpoint and begin an exploitation process on their respective data replicas (the new food sources), calculating their utility functions as follows:

U(dt_q^v) = (Cost(dt_q^v) + ResponseTime(dt_q^v)) / 2    (6)
where Cost(dt_q^v) is the cost of the data replica dt_q^v of the data set dt_v, and ResponseTime(dt_q^v) is its response time. The cloned bees then return to the service endpoint CSij to share these values with the onlooker bees of the virtual hive (the service CSij), which decide which data replica is preferred based on the utility functions, calculating the corresponding fitness values and probabilities. They compare the replicas so that the data replica with the highest probability value is the selected one (the best data replica). The selection of each data replica of a given data set is modeled by a variable x: the variable x of each data replica is initially set to False, and once a data replica is chosen by the onlooker bees, its variable x is assigned the value True. Once the selection is done, every cloned bee dies, and only the bee k stores the identity of the best data replica of each data set, as well as the fitness value of the main web service CSij. Finally, the bee k returns to the main hive to share all this information with the onlooker bees waiting for her in the dance area of the main hive (the start point).
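The local replica search can be sketched as follows. The paper ranks replicas by a fitness-derived probability; since a lower utility yields a higher fitness, picking the minimum utility is equivalent in this sketch. Names are illustrative.

```python
def utility(cost, response_time):
    """Eqs. (5) and (6): the utility of a service or of a data replica is
    the mean of its cost and its response time (lower is better)."""
    return (cost + response_time) / 2.0

def select_best_replica(replicas):
    """Local search sketch: score every replica (name, cost, response
    time) of a data set and mark the one with the lowest utility as
    selected; the boolean plays the role of the paper's variable x."""
    scored = [(utility(c, t), name) for name, c, t in replicas]
    best = min(scored)[1]
    return {name: (name == best) for name, _, _ in replicas}
```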
iii. The selection phase: In this phase, the employed bees arrive at the dance area, where they find the onlookers. The employed bees perform different dance movements to indicate the regions of the food sources they have brought back. The onlooker bees then carry out several tasks: first, they pick the regions of web services shared by the employed bees; second, they determine the amount of nectar of each source; and finally they decide which web service is the most appropriate, chosen using the following formula:

p_i = fit_i / Σ_{n=1}^{SN} fit_n    (7)

where fit_i is the fitness value of solution i, proportional to the nectar amount of the food source at position i, and SN is the number of food sources. At the end of this first cycle, the bees change their food zone and start a new search procedure. The same steps are repeated over the service classes Cl for each bee, until either the maximum number of cycles MaxCycle, equal to the total number of service classes Cl, is reached, or the final service endpoint is found.
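Equation (7) and the fitness it relies on can be sketched as below; the paper does not spell out its fitness formula, so the common ABC mapping fit = 1/(1 + U) for a minimization objective is assumed here.

```python
def fitness(utility_value):
    """Assumed ABC fitness for a minimization objective: a smaller
    utility yields a larger nectar amount."""
    return 1.0 / (1.0 + utility_value)

def selection_probabilities(fitnesses):
    """Eq. (7): p_i = fit_i / sum_{n=1}^{SN} fit_n, the probability that
    an onlooker bee picks food source i."""
    total = sum(fitnesses)
    return [f / total for f in fitnesses]
```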
Fig. 2. ABC algorithm for the selection of a data-intensive web service and its data replicas

V. IMPLEMENTATION AND EVALUATION

The experiments were performed on an Intel Core CPU N2600 @ 1.60 GHz with 2 GB of RAM. Since it is not possible to perform benchmarking experiments in a real-world environment in a reproducible, reliable and scalable manner, as would be needed for a cloud computing environment and especially for data-intensive services, the practical alternative is simulation. We therefore developed our data-intensive web services composition approach in the Java language.

Fig. 3. The effect of the number of services per class on the running time for finding the optimal composition

Fig. 4. The effect of the number of data sets per service on the running time for finding the optimal composition

Our experiments show the evolution of the execution time of our system. To simplify the analysis of the system's behavior, we used the response time of services as a single QoS metric and set the number of cloud instances to 1 per service. The response time values were generated randomly. We set the number of service classes Cl to 10 and the number of data sets of each service to 1, and then varied the number of services per class. The results of these experiments are presented in Fig. 3, which shows the influence of the number of data-intensive services on the execution time of the composition when searching for the best service with the optimal cost and response time. The cost and response time of each candidate service, and the access cost and transfer time of each data set, were drawn randomly from uniform distributions over several intervals. The placement of the cloud instances in the network was also set randomly, as were the transfer cost, the access cost values, the data set transfer times, the link transfer cost, the storage access latency, the transfer rate, the waiting time and the bandwidth, whose values likewise lie in several intervals. Fig. 4 shows the evolution of the execution time of our approach when we fix the number of services per class to 10 and vary the number of data sets per service. According to Fig. 3 and Fig. 4, we conclude that as the number of data sets per service increases (from 1 to 10) and the number of services per class increases (from 10 to 100), the bees need more iterations to find the best service, with the best cost and response time values and with the best data replica cost and response time.

VI. CONCLUSIONS AND FUTURE WORKS

In this work, we have proposed an efficient data-intensive web service composition approach based on the ABC algorithm. Our solution rests on two main phases: i) the pre-composition phase, which involves two steps, a data set movement strategy that reduces the data set transfer frequency, and a classification of the cloud instances of each web service based on their inputs and outputs; ii) the composition and selection phase, which uses the artificial bee colony algorithm. This approach is characterized by its accuracy and reacts to the dynamicity of web services and their data replicas. Our approach is scalable while keeping the QoS minimization constraint. In future work, we will compare our cost- and response-time-optimized data-intensive web service composition approach with other works that address both the composition and the selection of data-intensive web services in the cloud.

REFERENCES
[1] Z. Brahmi and M. M. Gammoudi, "QoS-aware Automatic Web Service Composition based on Cooperative Agents," in Proc. 2013 IEEE 22nd International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), pp. 27-32, June 2013.
[2] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," April 2007.
[3] D. Yuan, Y. Yang, X. Liu, and J. Chen, "A data placement strategy in scientific cloud workflows," Faculty of Information and Communication Technologies, Swinburne University of Technology, Hawthorn, Melbourne 3122, Australia, 2010.
[4] G. Canfora, M. D. Penta, R. Esposito, and M. L. Villani, "An approach for QoS-aware service composition based on genetic algorithms," in Proc. of the Conference on Genetic and Evolutionary Computation (GECCO'05), pp. 1069-1075, 2005.
[5] J. Shen and S. Yuan, "QoS-Aware Peer Services Selection Using Ant Colony Optimisation," in W. Abramowicz and D. Flejter (Eds.): BIS 2009 Workshops, LNBIP 37, pp. 362-374, 2009.
[6] L. Zeng, B. Benatallah, A. Ngu, M. Dumas, J. Kalagnanam, and H. Chang, "QoS-Aware Middleware for Web Services Composition," IEEE Transactions on Software Engineering, 30(5):311-327, 2004.
[7] L. Wang, C. Di, Y. Li, J. Shen, and Q. Zhou, "Towards minimizing cost for composite data-intensive services," in Proc. IEEE 17th International Conference on Computer Supported Cooperative Work in Design (CSCWD), 2013.
[8] L. Wang, J. Shen, and G. Beydoun, "Enhanced ant colony algorithm for cost-aware data-intensive service provision," in Proc. IEEE Ninth World Congress on Services, 2013.
[9] L. Wang, J. Luo, J. Shen, and F. Dong, "Cost and time aware ant colony algorithm for data replica in Alpha Magnetic Spectrometer experiment," in Proc. IEEE International Congress on Big Data, 2013.
[10] P. Lucic and D. Teodorovic, "Transportation modeling: an artificial life approach," in Proc. ICTAI, pp. 216-223, 2002.
[11] S. Santhosh Kumar and M. Govindaraj, "A Detailed Study about Foraging Behavior of Artificial Bee Colony (ABC) and its Extensions," International Journal of Engineering and Technology (IJET), vol. 5, no. 2, Apr-May 2013.
