
2016 International Conference on Next Generation Intelligent Systems (ICNGIS)

A Computation Offloading Scheme for


Performance Enhancement of Smart Mobile
Devices for Mobile Cloud Computing
Queen Kaur Gill Kiranbir Kaur
Student: M. Tech., Computer Engineering and Technology Assistant Professor: Computer Engineering and Technology
Guru Nanak Dev University Guru Nanak Dev University
Amritsar, India Amritsar, India
queengill29@gmail.com kiran.dcse@gndu.ac.in

Abstract: The advancements in the world of mobile technology have enabled computation-intensive applications on the latest smartphones. Still, Smart Mobile Devices (SMDs) have low potential and are unable to run complex applications due to their limited battery life, storage capacity, processor speed and energy. These limitations of smartphones can be addressed through Mobile Cloud Computing (MCC), which allows computation-intensive applications to be offloaded from smartphones to the cloud environment. In this paper, a computation offloading scheme is proposed that distributes the workload among various Virtual Machines (VMs) on the basis of their distance from the current VM. The offloading decreases the load on the current VM and increases the performance and speed of execution. The Buffer Allocation Method (BAM) is proposed, which further enhances performance by eliminating redundancy in the data to be transmitted during offloading. The results indicate that the speed of execution increases, energy consumption decreases and the load gets balanced due to offloading.

Keywords: computation offloading; energy efficiency; mobile cloud computing; load; execution time

I. INTRODUCTION

Mobile devices have evolved from being used only for two-way communication via text or call a few years back to the Smart Mobile Devices (SMDs) of today, which have extended capabilities to run intense mobile applications using the Internet. SMDs such as smartphones, tablets and Personal Computers (PCs) process large volumes of data, and the complexity of mobile applications is increasing at a fast rate. The gap between the demand for complex mobile applications and the availability of limited resources is increasing [4]. The latest developments in the area of mobile computing have narrowed this gap, but these SMDs are still low-potential computing devices, constrained in memory capacity, battery endurance and CPU speed.

With the recent developments in mobile computing technology, SMDs offer huge internal memory capacity as well as extendable external memory, high-speed processors, bigger screens, sensors, and various complex applications such as environmental sensing, object recognition, GPS navigation, etc. [9, 12]. Wireless networks are used in most of these applications, and their bandwidths are much lower than those of wired networks. All of these together put a heavy load on the SMDs and decrease their performance [11].

Cloud computing provides a broad range of shared computing resources. It is an Internet-based, on-demand computing model which provides an environment that is scalable, flexible, secure and immediate [7]. Provisioning shared computing resources from the cloud is quick and easy, and releasing these resources after use requires very little management effort [22]. In order to enhance the performance of SMDs, the concept of cloud computing can be used in various mobile applications running on SMDs. The term Mobile Cloud Computing (MCC) emerges from the support and integration of cloud computing into various complex mobile applications.

MCC brings a wide range of computational resources and complex mobile-cloud applications to SMD users. MCC allows the storage and processing of mobile-cloud applications to occur away from the SMD, in the cloud environment [3, 6]. One of the main features of MCC is computation offloading, which involves migrating computation-intensive tasks or applications to servers in the cloud environment, executing them there, and then retrieving the results of execution from these servers. SMDs utilize the resource-rich infrastructure provided by MCC by offloading computation to the cloud in order to save energy, which increases the battery lifetime of the SMD.

Computation offloading increases the capability of an SMD to perform complex applications efficiently. It involves various parameters such as the current memory capacity and battery life of the SMD, the execution time, data traffic, network bandwidth and latency, and the run-time cost of migrating data from the SMD to the cloud [10]. All these parameters must be taken into consideration before offloading the computation. After scrutinizing them, the decision to offload the current computation is taken: if offloading will improve the performance of the SMD, it is performed; otherwise the computation runs locally on the SMD itself.

978-1-5090-0870-4/16/$31.00 2016 IEEE
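The offload-or-run-locally decision described in the introduction can be illustrated with a simple cost comparison. The following Java sketch is illustrative only: the parameter names and the linear transfer-cost model are assumptions for demonstration, not the decision rule used by the paper.

```java
// Illustrative sketch of the offload-vs-local decision: offload only when
// remote execution plus data-migration cost beats local execution.
// The parameters and the simple cost model are assumptions for
// demonstration, not the formula used by the paper.
public class OffloadDecision {

    static boolean shouldOffload(double localTimeMs, double remoteTimeMs,
                                 long dataBytes, double bandwidthBps,
                                 double latencyMs) {
        // Migration cost: time to move the data over the wireless link,
        // plus one round-trip of network latency.
        double transferMs = (dataBytes * 8.0 / bandwidthBps) * 1000.0 + latencyMs;
        return remoteTimeMs + transferMs < localTimeMs;
    }

    public static void main(String[] args) {
        // Heavy task: 5 s locally, 0.5 s remotely, 1 MB of data, 10 Mbit/s link.
        boolean offload = shouldOffload(5000, 500, 1_000_000, 10_000_000, 50);
        System.out.println(offload ? "offload" : "run locally"); // prints "offload"
    }
}
```

For a light task the same comparison keeps the computation local, which matches the rule above that offloading is performed only when it is expected to improve performance.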



The rest of the paper is arranged as follows. Section 2 presents current computation offloading schemes for MCC. Section 3 presents the proposed computation offloading scheme (COS) with the proposed algorithm. Section 4 presents the experimental setup. Section 5 presents the results and discusses the findings. Section 6 presents the conclusion and future work.

II. RELATED WORK

Over the years, many schemes have been proposed to make computation offloading realistic and feasible. All of these schemes aim to increase the overall performance of SMDs by providing more memory capacity, processing power, bandwidth, energy and battery life. In this section, a brief review of some existing computation offloading schemes is provided.

An adaptive scheme in [21] maintains an execution profile attained through program execution, and the process of computation offloading is implemented using this profile. The decision to offload the application is taken based on a specified time limit: only if the local execution of the application on the SMD does not complete within the given time limit is it made to run on the server. The time taken by local execution on the SMD is compared with the remote execution of the same application on the server, and considerable improvement is observed.

An application partitioning algorithm suggested in [15] divides the application to be executed into two parts: an un-offloaded part that executes locally on the SMD, and a second part, further consisting of N parts, that can be offloaded to the server. A multi-cost graph showing communication and execution costs is generated when these parts are dynamically modeled. To achieve effective partitioning, the weights of edges and vertices are also considered by the algorithm. Wishbone [14] is another application partitioning algorithm; it also involves profiling. The application is modeled as a data flow graph, which minimizes CPU load and improves network bandwidth usage.

MAUI (Memory Arithmetic Unit and Interface) [2] presents a code partitioning technique used for offloading to a server node dynamically. Two versions of the application under consideration are created using code portability: the SMD runs one version locally, and the other version can be executed in the cloud. At run time, three parameters, namely energy utilization, communication cost, and network bandwidth and latency, are considered in order to create a linear programming formulation. This formulation helps MAUI make optimal decisions regarding application partitioning.

The CloneCloud framework [1], like MAUI [2], also performs partitioning of the application and its reintegration, but statically and at the level of the application. The framework statically analyzes the diverse execution conditions at the SMD end and at the cloud end; in this manner, CloneCloud synchronizes the SMD and the server node.

A mobile-based framework in [16] uses mobile phone sensors to collect and sample information about the social behavior of various users. It becomes very difficult to process the very large amount of information captured by the mobile phone, so this processing is done remotely on the cloud.

The Phone2Cloud framework [20] enables a static quantitative analysis which helps decide whether or not to offload an application's computation to the cloud. The user's delay-tolerance threshold is an important parameter considered in this framework: the average execution time of the application running on the SMD is compared with the user's delay-tolerance threshold. If the user's delay tolerance is larger than the average execution time required to run the application, the computation is offloaded to the cloud; otherwise the computation is made to run on the SMD. The main purpose is to decrease the execution time and the energy utilization cost.

Another decision-making technique in [13] also decides whether or not to offload an application's computation to the cloud. If the values of parameters such as battery life, processing power and memory capacity of the SMD are successfully determined by the SMD, the application can be executed locally on the SMD itself. Otherwise, the same application can be executed remotely within a Femtocell (a low-power, low-cost and secure Home Node Base Station) under which the SMD is registered. The computation time and the deadline to execute the application are the two parameters used by the Queue Assignment algorithm in order to execute the application.

COSMOS [17] is another static framework which provides computation offloading as a service for mobile systems. Mobile users send requests for the resources required for computation offloading, and the framework allocates on-demand compute resources from a cloud service provider to the mobile users.

A Lightweight Distributed Framework [18] offloads computation to cloud resources on an on-demand basis. This framework restricts the additional resources used for the process of computational offloading from SMD to server node by monitoring the various services of the cloud. To access the services of the cloud, the SaaS model is used, whereas the Energy Efficient Computation Offloading Framework (EECOF) [19] uses the SaaS model as well as the IaaS model: the components of the application are configured on the cloud or server node through the SaaS model, and the IaaS model is used by mobile applications to configure the process of offloading. Both frameworks can run in two modes, offline and online. Execution in offline mode signifies that the required amount of resources is available locally within the SMD and the given mobile application can be executed locally on the SMD itself, whereas execution in online mode allows the computation of the mobile application to be offloaded to the remote server node.

III. THE PROPOSED COMPUTATION OFFLOADING SCHEME (COS)

Computation offloading involves energy consumption as well as run-time migration cost. Existing schemes fail to address the issue that arises from redundancy and from the lack of distance consideration, so more SMD resources are utilized unnecessarily. If the data is not kept updated at multiple places, many instances of the same data will exist on the mobile side as well as the cloud side; hence more energy will be consumed, which further decreases the performance of the SMD. There must therefore be some criterion or technique to handle the redundant data in order to maintain consistency and reduce unnecessary usage of resources.

The main objectives of the proposed scheme are:

a) Making the computation offloading more energy-efficient by reducing the execution time and the load on the current machine.
b) Removing replicated data and maintaining consistency between the mobile side and the cloud side, so that the same data does not get offloaded again and again.
c) Reducing the run-time migration cost by selecting the VM closest to the current VM.
d) Increasing throughput and decreasing the energy consumption cost on the SMD.

Fig. 1. Methodology of the proposed computation offloading scheme

The methodology of the proposed computation offloading scheme is shown in figure 1. The proposed computation offloading scheme (COS) first removes the redundancy of the data selected to be offloaded, and then considers the distance of the selected machines or devices from the local SMD. In the proposed scheme, different Virtual Machines (VMs) act as the devices to which computation is offloaded.

The Buffer Allocation Method shown in figure 2 is the packet handling mechanism which serves as the basis for determining the overhead associated with the proposed scheme. The input data is processed in the form of packet sequences, and the method ensures that the same data does not get offloaded again and again toward the destination. The number of packets to be offloaded determines the overhead cost: when the same packet is transferred multiple times, the overhead associated with the system grows accordingly. So the input data to be offloaded is initially maintained within a buffer. The sequence number of the next packet to be offloaded is compared against the sequence numbers of the packets already stored within the buffer; if a match occurs, the current packet is rejected and the next packet is considered. After removing the redundant data, the data is transmitted through a valid path from the buffer to the cloud.

Fig. 2. Buffer Allocation Method

In the proposed scheme, after redundancy has been removed from the data to be offloaded using the Buffer Allocation Method, distance is the next criterion used to reduce the effort required to perform offloading. The distance between the local device, which acts as the current VM (Virtual Machine), and the VM selected for offloading is considered. The VM with the minimum distance is selected for offloading in the first place; a VM at a larger distance is selected only if the VM with the minimum distance is already occupied. The distance is calculated using the Euclidean distance. The formula utilized is given as follows:

dist(p, q) = √(Σ_k (p_k − q_k)²)   (1)

Here p and q are the VM nodes and dist(p, q) gives the distance between the nodes. The minimum distance between nodes is selected for further operation.

The algorithm for the proposed computation offloading scheme (COS) considers the distance as well as the redundancy of the data while offloading. Offloading, when performed efficiently, reduces both the complexity and the energy consumed.

Algorithm COS (Data_i, PS_i, VM, Dist_i)  // PS is the computation to be offloaded and executed on the VM
a) Receive the data (Data_i) and store it into the buffer.
b) Buffer = Unique(Data_i)  // eliminate redundancy using the Buffer Allocation Method
c) Determine the distance of the VM: Dist_i = Dist_CVM − Dist_SVM, where CVM is the Current Virtual Machine and SVM is the Selected Virtual Machine.
d) If Dist_i < Dist_i+1 then
       Offload(VM_i, PS)
   Else
       Offload(VM_i+1, PS)
   End if
e) Stop

IV. EXPERIMENTAL SETUP

The proposed computation offloading scheme (COS) is evaluated by running the Quick sort and Knapsack problems. The experiments are performed using Java, with NetBeans 8.0 as the Integrated Development Environment (IDE). The simulation is conducted using the CloudSim 3.0.3 toolkit [5], integrated with NetBeans 8.0. CloudSim provides an efficient way to create VMs and perform cloud operations on the Java platform [5]. The COS uses only two VM nodes. The simulation environment allows the computation of the local system to be offloaded to the selected VM; the VMs are selected based on their distance from the current VM.

The computer node used for experimentation runs the Microsoft Windows 8.1 64-bit operating system with an Intel(R) Core(TM) i3 CPU at 2.40 GHz and 4.00 GB of RAM.

The computational logic performed on the COS prototype is:

a) The sorting application, which implements the logic of Quick sort for sorting a linear list of integer values.
b) The Knapsack problem [8], in which a set of items is given, each with its weight (w_i) and value (v_i). The task is to find the number (x_i) of each item to include in a collection such that the total weight remains less than or equal to a given weight limit (W) and the total value/profit becomes as large as possible. The mathematical formulation of the Knapsack problem is given as:

Max Σ_i v_i x_i  subject to  Σ_i w_i x_i ≤ W  and  x_i ≥ 0   (2)

V. RESULTS AND DISCUSSION

The proposed COS prototype is evaluated by running the quick sort and knapsack problems in local and remote execution environments. Experimentation is performed in two different scenarios:
a) execution of the applications on the local machine, and
b) execution of the applications using the COS prototype.

The execution time, the load on the machine and the energy consumption while running the applications locally as well as remotely are measured and compared.

5.1 Quick Sort

When the computational load on the local machine exceeds a certain limit, it takes more time to execute the heavy computation. In such a case, it is more efficient to offload the computation to some remote machine.

During the experiment, the Quick Sort application sorts 10,000 elements. It is observed that remote execution by the COS method balances the computational load (fig. 4) and takes less execution time (fig. 3) for running heavy computations, compared to when the heavy computation is performed without the COS method. Fig. 5 shows that the COS method is also more energy efficient: the computation is offloaded to the nearest VM, so that the run-time migration cost as well as the overall energy consumption cost is reduced.

Fig. 3. Comparison of Execution Time in milliseconds (ms)
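Algorithm COS above combines the Buffer Allocation Method (step b) with nearest-VM selection (steps c and d). A minimal Java sketch of those two steps follows; all class and method names are invented for illustration, and the paper's CloudSim-based prototype is not reproduced here.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Minimal sketch of Algorithm COS: buffer-based de-duplication of packet
// sequence numbers (Buffer Allocation Method) plus selection of the VM at
// minimum Euclidean distance from the current VM. Names are illustrative.
public class CosSketch {

    // A VM node located by coordinates, as assumed for equation (1).
    static final class Vm {
        final String id;
        final double[] pos;
        Vm(String id, double... pos) { this.id = id; this.pos = pos; }
    }

    // Step (b): keep only the first occurrence of each packet sequence number.
    static List<Integer> bufferAllocate(List<Integer> packetSeqNos) {
        return new ArrayList<>(new LinkedHashSet<>(packetSeqNos));
    }

    // Equation (1): Euclidean distance between VM nodes p and q.
    static double dist(Vm p, Vm q) {
        double sum = 0;
        for (int k = 0; k < p.pos.length; k++) {
            double d = p.pos[k] - q.pos[k];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Steps (c)-(d): offload to the candidate VM closest to the current VM.
    static Vm selectVm(Vm current, List<Vm> candidates) {
        Vm best = null;
        for (Vm vm : candidates) {
            if (best == null || dist(current, vm) < dist(current, best)) {
                best = vm;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Duplicate packets 2 and 1 are rejected by the buffer.
        System.out.println(bufferAllocate(List.of(1, 2, 2, 3, 1, 4))); // [1, 2, 3, 4]

        Vm current = new Vm("local", 0, 0);
        Vm near = new Vm("vm1", 1, 1);
        Vm far = new Vm("vm2", 5, 5);
        System.out.println(selectVm(current, List.of(far, near)).id); // vm1
    }
}
```

In the paper's setting the candidate list would hold the two CloudSim VM nodes; a VM at a larger distance is chosen only when the nearest one is occupied, a case omitted here for brevity.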

Fig. 4. Comparison of CPU workload (in %)

Fig. 5. Comparison of Energy Consumption in Joules (J)

5.2 Knapsack Problem

Ten items and a maximum weight limit of 50 have been taken in order to implement the Knapsack problem in the experiment. The results obtained by running the given problem locally and remotely using the proposed COS are shown below. It is observed that remote execution by the COS method takes less execution time (fig. 6) for running heavy computations than when the heavy computation is executed without the COS method. The proposed method also balances the computational load (fig. 7), and fig. 8 shows that the COS method is more energy efficient.

Fig. 6. Comparison of Execution Time in milliseconds (ms)

Fig. 7. Comparison of CPU workload (in %)

Fig. 8. Comparison of Energy Consumed in Joules (J)
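The knapsack workload of Section 5.2 follows the formulation in equation (2), where x_i ≥ 0 copies of each item may be selected, so a standard unbounded-knapsack dynamic program solves it. The ten item values and weights below are made up for illustration, since the paper does not list the actual items used in its experiment.

```java
// Unbounded knapsack dynamic program matching equation (2):
// maximize sum(v_i * x_i) subject to sum(w_i * x_i) <= W, x_i >= 0.
// The item data below is invented; the paper's 10 items are not listed.
public class KnapsackDemo {

    static int knapsack(int[] values, int[] weights, int capacity) {
        int[] best = new int[capacity + 1]; // best[c] = max value at weight budget c
        for (int c = 1; c <= capacity; c++) {
            for (int i = 0; i < values.length; i++) {
                if (weights[i] <= c) {
                    best[c] = Math.max(best[c], best[c - weights[i]] + values[i]);
                }
            }
        }
        return best[capacity];
    }

    public static void main(String[] args) {
        int[] values  = {60, 100, 120, 40, 30, 90, 10, 70, 20, 50}; // 10 items
        int[] weights = {10,  20,  30,  5, 15, 25,  2, 18,  8, 12};
        System.out.println(knapsack(values, weights, 50)); // best value within weight 50
    }
}
```

Running this O(n·W) table fill for 10 items and W = 50 is trivial locally; in the experiment the same logic is the workload that COS either executes locally or offloads to the selected VM.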



VI. CONCLUSION AND FUTURE WORK

This paper proposes an efficient scheme for offloading heavy computation from a resource-scarce device, such as a mobile phone, to a resource-rich environment such as the cloud. The proposed scheme uses only two VMs, and the simulation is conducted using CloudSim. The quick sort and knapsack problems are used to evaluate the proposed scheme. It is observed that the proposed computation offloading scheme improves the performance of the local device by removing, through the Buffer Allocation Method, the replicated data present within the computation to be performed. By offloading the computation to the VM nearest to the current VM, the performance of the device is boosted further, as less energy is consumed. The computational offloading takes less execution time and also balances the load on the local device, as shown by the simulation results. In the future, this scheme can be implemented on multiple VMs, and the load of each VM can also be taken into consideration for further improvement.

References

[1] B.-G. Chun, S. Ihm, P. Maniatis, M. Naik, and A. Patti, "CloneCloud: elastic execution between mobile device and cloud," in Proceedings of the Sixth Conference on Computer Systems (EuroSys '11), p. 301, 2011.
[2] E. Cuervo, A. Balasubramanian, D.-K. Cho, A. Wolman, S. Saroiu, R. Chandra, and P. Bahl, "MAUI: making smartphones last longer with code offload," in Proceedings of the 8th International Conference on Mobile Systems, Applications and Services, San Francisco, CA, pp. 49–62, ACM, 2010.
[3] H. Dinh and C. Lee, "A survey of mobile cloud computing: architecture, applications, and approaches," Wireless Communications and Mobile Computing, vol. 13, no. 18, pp. 1587–1611, 2013.
[4] M. V. J. Heikkinen, J. K. Nurminen, T. Smura, and H. Hämmäinen, "Energy efficiency of mobile handsets: Measuring user attitudes and behavior," Telematics and Informatics, vol. 29, no. 4, pp. 387–399, 2012.
[5] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. F. De Rose, and R. Buyya, "CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms," Software: Practice and Experience, vol. 41, no. 1, pp. 23–50, 2011.
[6] N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: A survey," Future Generation Computer Systems, vol. 29, no. 1, pp. 84–106, 2013.
[7] "Cloud computing," Wikipedia, 2016.
[8] "Knapsack problem," Wikipedia, 2016.
[9] K. Kumar, J. Liu, Y.-H. Lu, and B. Bhargava, "A survey of computation offloading for mobile systems," Mobile Networks and Applications, vol. 18, no. 1, pp. 129–140, 2013.
[10] K. Kumar and Y. Lu, "Cloud computing for mobile users: Can offloading computation save energy?," pp. 51–56, Apr. 2010.
[11] H. Lee, "A study on the factors affecting smart phone application acceptance," IPEDR, vol. 27, pp. 27–34, 2012.
[12] J. Liu, E. Ahmed, M. Shiraz, A. Gani, R. Buyya, and A. Qureshi, "Application partitioning algorithms in mobile cloud computing: Taxonomy, review and future directions," Journal of Network and Computer Applications, vol. 48, pp. 99–117, 2015.
[13] A. Mukherjee and D. De, "Low power offloading strategy for femto-cloud mobile network," Engineering Science and Technology, an International Journal, pp. 1–11, 2015.
[14] R. Newton, S. Toledo, and L. Girod, "Wishbone: Profile-based partitioning for sensornet applications," in Proceedings of the Sixth USENIX Symposium on Networked Systems Design and Implementation (NSDI '09), pp. 395–408, 2009.
[15] S. Ou, K. Yang, and A. Liotta, "An adaptive multi-constraint partitioning algorithm for offloading in pervasive systems," in Fourth Annual IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 10–125, 2006.
[16] K. K. Rachuri, C. Mascolo, and P. J. Rentfrow, "SociableSense: Exploring the trade-offs of adaptive sampling and computation offloading for social sensing," in Proceedings of the 17th Annual International Conference on Mobile Computing and Networking (MobiCom '11), pp. 73–84, 2011.
[17] C. Shi, K. Habak, P. Pandurangan, M. Ammar, M. Naik, and E. Zegura, "COSMOS: computation offloading as a service for mobile devices," in Proceedings of MobiHoc, pp. 287–296, 2014.
[18] M. Shiraz, A. Gani, R. W. Ahmad, S. Adeel Ali Shah, A. Karim, and Z. A. Rahman, "A lightweight distributed framework for computational offloading in mobile cloud computing," PLoS One, vol. 9, no. 8, p. e102270, 2014.
[19] M. Shiraz, A. Gani, A. Shamim, S. Khan, and R. W. Ahmad, "Energy efficient computational offloading framework for mobile cloud computing," Journal of Grid Computing, vol. 13, no. 1, pp. 1–18, 2015.
[20] F. Xia, F. W. Ding, J. Li, X. J. Kong, L. T. Yang, and J. H. Ma, "Phone2Cloud: Exploiting computation offloading for energy saving on smartphones in mobile cloud computing," Information Systems Frontiers, vol. 16, no. 1, pp. 95–111, 2014.
[21] C. Xian and Y. Lu, "Adaptive computation offloading for energy conservation on battery-powered systems," 2007.