
Abstract

Fog computing offers cloud services at the network edge to support Internet of Things (IoT) applications with low latency requirements. By matching application placement requests to fog instances on the basis of user expectation parameters, Quality of Experience (QoE) can be enhanced without degrading service quality. In this paper, application placement requests are prioritized according to user expectations under a QoE-aware placement policy, and the capacities of fog instances are calculated from their current status. User QoE is maximized with respect to utility access, service delivery, and resource consumption, and each application is mapped to a well-suited fog computing instance. Simulation results indicate higher performance compared to other proposed policies, with improvements in network congestion, packet delivery between nodes, network usage cost, and the energy consumed between sensors and the selected fog computing nodes.

Introduction
Many domains are growing rapidly through Internet of Things (IoT) applications built on modern computing and networking techniques. Real-time interaction and stringent service delivery deadlines make it difficult to host such applications in a remote cloud. The Fog computing paradigm extends cloud-based utilities to the network edge for IoT applications. In Fog, networking entities such as gateway servers, routers, and switches are considered Fog nodes and used for computational purposes; edge and cloud are combined to form the Fog computing environment. Because Fog computing processes an IoT application placement request in the presence of the data source, it decreases network load and helps guarantee in-time service delivery. Unlike cloud data centers, Fog nodes are geographically scattered and resource constrained. Application placement in Fog therefore becomes demanding, since node parameters such as network round trip time, data processing speed, and resource availability vary significantly among the fog nodes. Existing policies have largely utilized Quality of Service and resource-situation awareness among the fog nodes.

QoE is often considered a better choice for user-centric measurement across different service aspects. It captures user requirements, intentions, and perceptions regarding service quality. Since QoE reflects user expectations, QoE-aware policies can increase user fidelity and reduce the service renounce rate. Through careful resource and service provisioning, Fog computing can improve data processing time, resource consumption, and network quality. However, Fog services vary from one environment to another in real time, and the dominating factors often change.

Feedback-based approaches such as Mean Opinion Score (MOS), Standard Deviation of Opinion Scores (SOS), and Net Promoter Score (NPS) are commonly used to measure QoE. In IoT, however, human interactions are limited, so collecting explicit feedback after every interaction is not reasonable. Likewise, the accuracy of prediction-based QoE models degrades significantly as the QoE-dominating factors vary, and evaluating QoE only after placement increases complexity. It is therefore more feasible to estimate QoE prior to application placement, so that any inconsistency between user expectation and the delivered QoE does not degrade the service quality.

The user expectation metric includes parameters such as the service access rate of different applications, the required resources, and the processing time. The placement request of each application is prioritized based on these user expectation parameters. The status metric of fog instances includes proximity, resource availability, and processing speed. Finally, application placement requests are mapped to suitable computing instances.

Contributions:
 The QoE-aware application placement policy manages application placement requests and fog instances using fuzzy logic based approaches that prioritize different applications.
 An optimized mapping between application placement requests and fog instances results in a maximized rating gain.
 The policy is simulated in iFogSim. The QoE enhancement shows improvement compared to other proposed QoE policies.

List of Acronyms
CCS Capacity Class Score
CoAP Constrained Application Protocol
EEG Electroencephalogram
FCN Fog Computational Node
FGN Fog Gateway Node
IoT Internet of Things
ITU International Telecommunication Union
MCI Micro Computing Instance
MeFoRE MEdia FOg Resource Estimation
NPS Net Promoter Score
NRR Network Relaxation Ratio
PTRR Processing Time Reduction Ratio
QoE Quality of Experience
QoS Quality of Service
REST Representational State Transfer
RG Resource Gain
RoE Rating of Expectation
SCIP Solving Constraint Integer Programs
SNMP Simple Network Management Protocol
Related work

Mahmud and his colleagues proposed a context-aware application scheduling policy in mobile cloud computing to enhance QoE. Requests are monitored in a centralized cloudlet, and the device battery level and signal-to-noise ratio are considered in a rank-based method. A multi-constraint optimization problem is solved to guarantee the user's quality of experience; the method reduces latency and increases the success rate. Min-max normalization is used, and the output is evaluated through success rate, average waiting time, and QoE. The policy also guarantees user requests when access to services ends because of power failure.

Zhou and his colleagues proposed an MCC-based QoE-aware cache management policy for multimedia applications, focusing on continuous streaming in different cases. Video streaming requests are allocated resources by a rank method using the access rate at a cache server. The relation between user feedback and service response rate indicates the guaranteed QoE. To run the policy, the end device, base station, and cache server work in parallel. Service access rate is taken as input in the user expectation metric, and round trip time and resource availability in the status metric; from these, a prioritized placement is obtained. The drawback is that, since management is centralized, scalability suffers as the number of applications increases.

Peng and his colleagues proposed a QoE-aware application management framework for Mobile Edge Computing (MEC). The policy ranks MEC instances so as to guarantee the user's expectation needs, and also applies networking functionalities such as network function virtualization and software defined networking. Service access rate and resource requirement are taken as input in the user expectation metric, and round trip time and resource availability in the status metric; from these, a prioritized placement is obtained. The ecosystem of the QoE applications is directed in both upward and downward directions. The drawback, again, is that centralized management limits scalability as the number of applications increases.

Dutta and his colleagues proposed a QoE-aware transcoding policy for MEC. Video processing is accessed under centralized management, and the policy periodically checks whether the encoding is correct or further operations are required. This guarantees edge content customization based on user expectations. Processing time is taken as input in the user expectation metric, and round trip time and processing speed in the status metric; no prioritized placement output is obtained.
X. Lin and his colleagues proposed a QoE-aware bandwidth scheduling policy for wireless communication. Packets are forwarded from one node to another using the service access rate and processing time, and networking functionalities are also performed. During congestion, packet forwarding is scheduled according to the delay time, and the attained ratio is generated. Service access rate and processing time are taken as input in the user expectation metric, and round trip time in the status metric. The output is managed in a decentralized manner, so the number of applications can be increased.

Anand and his colleagues proposed a QoE-optimized scheduler for multi-class systems such as interactive web users and file downloaders. Networking functionalities are performed in wireless networks, and service requests are ranked on the basis of packet forwarding, with packets forwarded according to delay time. The proposed scheduler is an extension of the Gittins index scheduler. Processing time is taken as input in the expectation metric, and resource availability and processing speed in the status metric; the ranked requests result in a prioritized placement.

Skarlat and his colleagues proposed a QoS-aware Fog service placement policy. Service requests are ranked against the available computing resources rather than against the fog resources themselves, and every fog node responds to the requests before termination. The required resources are matched with the available resources among the fog nodes under decentralized management. Required resources and processing time are taken as input in the expectation metric, and resource availability and processing speed in the status metric. The output is a rank-based placement of the application requests.

Brogi and his colleagues proposed a QoS-aware application placement policy that deals with responding to service requests within a specific time, addressing latency and monetary issues. A Java-based tool supports the design of the application life cycle, connecting many IoT devices in a top-down approach. Resource requirement is taken as input in the expectation metric, and round trip time and processing speed in the status metric; management is decentralized.

Aazam and his colleagues proposed a QoE-based Fog resource estimation policy, MEdia FOg Resource Estimation (MeFoRE). Service requests are ranked according to the fog resources, the system responds to all service requests, and the result is an estimation of fog resources. Service Level Agreements are adjusted as the required resources increase, so that the user's guarantee can be regained. Resource requirement is taken as input in the expectation metric and resource availability in the status metric.

3. Motivation and Requirements

3.1 Scope of Quality of Experience

Compared with QoS, extending QoE to Fog is estimated to be more effective but also more difficult. According to the International Telecommunication Union, QoE concerns the system services and features as perceived by users, covering their requirements and perceptions, and it encourages the network application platform in the execution of services on fog nodes. A Service Level Agreement (SLA) between providers and customers monitors technical attributes such as cost, delivery time, packet loss ratio, jitter, and throughput; QoS is mainly focused on these objective parameters of the application and network platform. QoE, by contrast, connects both qualitative and quantitative parameters, and user expectation can help enhance the QoS attributes. For instance, in an Internet-enabled system, users expect prompt responses and visualize multimedia content regardless of how requests are ranked internally. Network service providers can allocate sufficient resources and bandwidth to the service requests to improve service quality, and the time to send packets from one node to another is also measured, so that end users receive responses without degraded service quality. Consider two users downloading the same multimedia content: one expects it within 5 minutes, the other within 3 minutes. If both downloads actually complete in 4 minutes, the same QoS is delivered, yet the QoE of the two users differs: the user who expected 5 minutes is satisfied, while the user who expected 3 minutes is not. QoE is higher for a short download than for a long one; it is higher when a file is downloaded in 1 minute than when it is downloaded in 4 minutes. Hence the same technique cannot be applied to both QoE and QoS. As a result, QoE makes the handling of service requests more demanding than QoS alone.
3.2 Application Scenario

In the real world, users connect to different Fog-enabled IoT applications. As an example, consider an Electroencephalogram (EEG)-based Tractor Beam game. Users connect their smartphones to the fog nodes, and the fog nodes are reached through the IoT devices, so users can send and share information with each other. The EEG headset senses data streams that the mobile application sends through the fog nodes; real-time users exchange EEG data signals, and concentration prediction is also conducted at the fog nodes. In this multi-user game the players stand in a circle and play by concentrating: a user pulls the objects towards them and as a result wins the game. The service access rate is high because it is a multi-user game, and the resources required to run the application are very large.

Multi-user virtual reality online games are also used as IoT applications. Similarly, a Fog-computing-based face identification system can be used to identify facial reactions. Facial images are captured using cameras and sensors and sent to the fog nodes for further operations. Faces are detected using a face detection algorithm, and image processing algorithms enhance the quality and decrease the noise ratio. The images are then used for feature-vector extraction and recognition, and the feature vectors are used for further segmentation and stored in the system database. The time to deliver all the images to the database is stringent compared to other policies. In a Fog environment, the resources required by different applications must be provisioned together; this couples user expectations with the system's affordability and the users' service requests, ensuring high throughput and a higher gain in QoE.

4. Problem Description

4.1 Exploration of expectation and status metrics

User expectation metrics differ from application to application and characterize what each application demands in order to generate its output. The user expectation parameters are service access rate, required resources, and processing time. A system can generate many service requests, all of which need responses; responsive resources must serve all the requests within the expected time. Since the resources run different applications for multiple users, meeting these expectations reduces latency issues, and by satisfying these components the system guarantees high QoE. In a large, expanding system, a higher service access rate must be sustained to attain high QoE, which results in high throughput. Applications execute on different computational platforms, and, for instance, the service access rate of different applications can be slow, normal, or fast. Besides, Fog computing is a distributed computing paradigm residing closer to the users, and the fog nodes are organized hierarchically with diverse capabilities: lower-level fog nodes are more constrained than higher-level fog nodes. Status metric parameters are therefore used to calculate a capacity index, the Capacity Class Score, which reflects the QoE gain attainable from the available resources and is calculated with respect to the user expectation parameters. The status metric parameters are round trip time, resource availability, and processing speed, considered together with further capabilities such as the application runtime environment. Hence different fog instances contribute differently to the enhancement of application QoE.

4.2 Enhancement of Quality of Experience

The main task of QoE-aware placement is the mapping of application placement requests to fog instances; once the mapping is done, maximized QoE can be obtained. To attain it, QoS attributes such as service delivery time, packet loss rate, and cost are used as constraints. Computing this mapping takes a significant amount of time.
5. System Overview

5.1 Application Model


A Fog-enabled IoT application is separated into various connected components. Components such as sensors and actuators execute at users' devices, for example set-top boxes, bedside monitors, and Android mobiles, and generate the data streams to be processed. Operations such as frequency calibration, data analysis, data filtration, and clarification are performed by connecting the sensing terminals to the processing system. In Fog computing, the fog nodes have limited capacity; if the workload grows large, it can be extended to cloud-based resources to avoid time delay issues. A Fog-enabled IoT application is divided into two module types: the Client module and the Main Application module.

The Client module simply takes the input and presents the output; no data operations are done in the Client module. It forwards the information it receives and renders the results. In the Main Application module, the input received from the user undergoes data operations for analysis and filtration: once the input is given and the two terminals are connected, the data operations are performed and produce an output. The data operations consist of data analysis, percolation of data, and eventually the processing of data. The output of the Main Application module can be considered as input for other application modules. In this paper, the Main Application module is regarded as the "application".

5.2 Organization of Fog Layer


In the Fog layer, the cloud data hub is considered the center: it is the best platform for the computation of resources and the processing of data signals. Fog is the intermediate computing paradigm between the cloud datacenters and the IoT devices, organized hierarchically and divided into two types: Fog Gateway Nodes and Fog Computational Nodes.

Fog Gateway Nodes remain nearer to the users; the applications connected between the two terminals are received, managed, and initiated through them. The upper level of fog nodes, the Fog Computational Nodes, is used for the computation of fog resources; many resources are responsible for handling the fog nodes and the data signals, and various computing standards support the generation of high-quality data signals and resources. These nodes run RESTful services (Representational State Transfer, used to build web services) in a daemon process that runs continuously for the purpose of handling service requests. The sending and receiving of service requests must be secure: fog nodes communicate securely with other fog nodes, sharing resources and services with the accessible nodes using protocols such as CoAP and SNMP.

Distant fog nodes can extend their services to the cloud to avoid latency issues. As the system changes, the capacity and the resources are monitored.

//DIAGRAM

5.3 Architecture of Fog Nodes

5.3.1 Fog Computational Nodes


A Fog Computational Node consists of three components: the Controller component, the Communication component, and the Computational component.

The Computational component holds the resources used to run many applications: CPU, which determines the processing speed and time; memory, for the storage of computational data; and bandwidth, for sending and receiving data. It is virtualized into micro computing instances, which are not the physical resources themselves and are used only for computing purposes; spare resources in a fog node can be offered to other fog nodes or shared within the node for the computation of resources. The Communication component operates independently and performs networking functionalities such as moving packets from one host to another. The Controller component stores, in a data container, the information needed to run the applications and micro computing instances along with their status metric parameters; from these it generates the Capacity Class Score, a capacity index used to prioritize instances for the computation of resources.

//DIAGRAM

5.3.2 Fog Gateway Nodes

Fog Gateway Nodes facilitate the incoming data signals towards the fog computational nodes. The output of the Client module is considered the input of the Main Application module. The gateway communicates with the upper-level computational nodes through RESTful web services running as a continuously executing daemon process that handles the service requests. Computation-related information is stored in a data container. The Rating of Expectation is calculated from the user expectation parameters for each application placement request; as a result, the mapping of application placement requests to micro computing instances gives the maximized rating gain.

//DIAGRAM

6. QoE-aware Application Placement


The initial step of the QoE-aware placement policy is to calculate a priority value, the Rating of Expectation (RoE), for each application placement request based on the user expectation parameters. Next, a capacity index called the Capacity Class Score (CCS) is identified for the micro computing instances used for computational purposes, based on the status metric parameters. Finally, a QoE-maximized placement of the applications is computed. These steps are carried out by the Expectation Rating Unit and the Application Placement Unit of the Fog Gateway Node, and the Capacity Class Scoring Unit of the Fog Computational Node.

6.1 Calculation of Rating of Expectation (RoE)


In the FGN, the IoT device user communicates the expectation metric parameters to the system through the Application Initiation Unit; they are stored in the data container of the Fog Gateway Node and processed by its Expectation Rating Unit. The range and unit of the numerical values of the expectation parameters are not the same, so each parameter is normalized. The normalized values lie in the range [-1, 1] and are used to compute the Rating of Expectation.

\hat{U}_x^{a_m} = 2\left(\frac{U_x^{a_m} - x_{\min}}{x_{\max} - x_{\min}}\right) - 1

If the expectations cannot be met within the fog, cloud computing is considered the best platform for processing the application further.

Here U_x^{a_m} denotes the value of expectation parameter x for application a_m, and x_min and x_max are the minimum and maximum values of x in the fog environment.
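As a sketch, the min-max normalization above can be expressed as follows (the parameter value and range below are hypothetical, chosen only to illustrate the scaling):

```python
def normalize(value, lo, hi):
    """Min-max scale a raw parameter value from [lo, hi] to [-1, 1]."""
    return 2 * (value - lo) / (hi - lo) - 1

# Hypothetical example: a processing-time expectation of 300 ms in a
# fog environment whose processing-time range is 100-500 ms.
print(normalize(300, 100, 500))  # -> 0.0
print(normalize(100, 100, 500))  # -> -1.0
```

Values at the range boundaries map to -1 and 1, so every expectation parameter becomes comparable regardless of its original unit.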

The Expectation Rating Unit of various application placement request is calculated by using
normalized value of the expectation metric parameter.The Fuzzy Logic include three phases

Fuzzification

FuzzyInferences

Defuzzification

In fuzzification, membership functions convert each normalized expectation parameter in the range [-1, 1] into equivalent fuzzy dimensions.

The fuzzy sets are:

Access rate (Ar) ∈ {slow, normal, fast}

Required resources (Rr) ∈ {small, regular, large}

Processing time (Pt) ∈ {stringent, moderate, flexible}
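The paper does not specify the shapes of the membership functions, so the sketch below assumes common triangular and shoulder functions over the normalized range [-1, 1]; the set boundaries are likewise assumptions:

```python
def tri(x, a, b, c):
    """Triangular membership: rises on [a, b], falls on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def left_shoulder(x, b, c):
    """Full membership below b, falling to zero at c."""
    return 1.0 if x <= b else max(0.0, (c - x) / (c - b))

def right_shoulder(x, a, b):
    """Zero membership below a, full membership above b."""
    return 1.0 if x >= b else max(0.0, (x - a) / (b - a))

# Assumed fuzzy sets for the normalized access rate Ar on [-1, 1]
ACCESS_RATE = {
    "slow":   lambda x: left_shoulder(x, -1.0, 0.0),
    "normal": lambda x: tri(x, -1.0, 0.0, 1.0),
    "fast":   lambda x: right_shoulder(x, 0.0, 1.0),
}

def fuzzify(x, fuzzy_sets):
    """Membership degree of x in every fuzzy set of one parameter."""
    return {label: mu(x) for label, mu in fuzzy_sets.items()}

print(fuzzify(0.5, ACCESS_RATE))  # -> {'slow': 0.0, 'normal': 0.5, 'fast': 0.5}
```

The same construction applies to the required-resources and processing-time parameters with their own label sets.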

During fuzzy inference, the fuzzy inputs are jointly mapped to fuzzy outputs; the fuzzy output Rating of Expectation ∈ {high, medium, low}. For different application placement requests, fuzzy rules are generated over the fuzzy input sets, for example:

IF access_rate(ω) IS fast OR required_resources(γ) IS regular OR processing_time(α) IS stringent THEN roe IS high.
//DIAGRAM
//TABLE VALUES

In the fuzzy rules, rigid expectation parameters (such as fast access rate or stringent processing time) are given higher output ratings than relaxed parameters, so that application placement requests with demanding expectations obtain higher RoE values. Even a request with one rigid expectation parameter and two relaxed ones is ensured a representative RoE. The logical OR operator is used in the fuzzy rules because the expectation metric parameters are independent of each other and only distantly coupled. With the OR operator, the membership degree of the fuzzy output is set to the maximum of the input membership degrees, which favours the determination of the maximum RoE.

\mu_r(f^{a_m}) = \max\left(\mu(\hat{U}_\omega^{a_m}),\ \mu(\hat{U}_\gamma^{a_m}),\ \mu(\hat{U}_\alpha^{a_m})\right)

where f^{a_m} is a fuzzy output set of application a_m, and ω, γ, α denote the access rate, required resources, and processing time parameters.
In fuzzy inference, j fuzzy rules are activated based on the expectation metric parameters, and the combination of their membership degrees yields the fuzzy output. The exact RoE of each application placement request is then obtained from the combined membership degrees by defuzzification, using a set of singleton values. Here the discrete center of gravity is used, which yields the final rating of the application from the fuzzy output:
k j
   fam
 r f
//FORMULA k
  am
am  k 1
k

 k
k 1  f am 
k j
r
 
6.2 Calculation of Capacity Class Score (CCS)
For the micro computing instances available for placement, whether unallocated or offered to the fog gateway node by other nodes, an associated CCS value is calculated from the status metric parameters for computational purposes. Like the expectation metric parameters, the status metric parameters are heterogeneous in numeric range and unit, so each is normalized:
\hat{V}_y^{i_n} = 2\left(\frac{V_y^{i_n} - y_{\min}}{y_{\max} - y_{\min}}\right) - 1

where V_y^{i_n} is the value of status parameter y for micro computing instance i_n, and y_min and y_max are its bounds.
For each parameter, the range is set according to the fog environment. The normalized values are associated with fuzzy sets through membership degrees:

Round trip time (Rtt) ∈ {short, typical, lengthy}

Resource availability (Ra) ∈ {poor, standard, rich}

Processing speed (Ps) ∈ {least, average, intense}

The fuzzy output is represented as {higher, medial, lower}, for example:

IF round_trip_time(Ω) IS lengthy AND resource_availability(Γ) IS rich AND processing_speed(Λ) IS intense THEN ccs IS medial.
In the fuzzy rules for CCS, favourable status parameters are given higher weight, so the CCS of a micro computing instance peaks when the instance is most convenient. Micro computing instances on lower-level fog computational nodes provide a shorter round trip delay than those on upper-level nodes; in addition, the location of a fog computational node impacts the different status metric parameters, and lower-level nodes, less endowed with processing capabilities, are estimated against the instances of upper-level nodes. The status metric parameters are combined jointly, since a placement depends on all of them being adequate at once. The logical AND operator is therefore used when comparing the different status metric parameters: the membership degree of the fuzzy output is set to the minimum of the input membership degrees.

\mu_c(f^{i_n}) = \min\left(\mu(\hat{V}_\Omega^{i_n}),\ \mu(\hat{V}_\Gamma^{i_n}),\ \mu(\hat{V}_\Lambda^{i_n})\right)
In fuzzy inference, j fuzzy rules are activated based on the status metric parameters, and the combination of their membership degrees yields the fuzzy output. The exact CCS of each micro computing instance is then obtained by defuzzification with a set of singleton values, again using the discrete center of gravity:

CCS^{i_n} = \frac{\sum_{k=1}^{j} \mu_c(f_k^{i_n}) \cdot \nu(f_k^{i_n})}{\sum_{k=1}^{j} \mu_c(f_k^{i_n})}
For application placement, the fuzzy approach is extended to every micro computing instance known to the fog gateway node, resulting in an exact capacity class score for each.
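CCS follows the same pipeline as RoE, with min as the fuzzy AND. A sketch with hypothetical membership degrees for one micro computing instance, and three assumed rules, one per output set (singleton values higher = 10, medial = 5, lower = 2 as in the illustrative example):

```python
def fire_and(*degrees):
    """Firing strength of an AND-combined fuzzy rule (minimum)."""
    return min(degrees)

def defuzzify(firing, singletons):
    """Discrete center of gravity over singleton output values."""
    num = sum(firing[label] * singletons[label] for label in firing)
    den = sum(firing.values())
    return num / den if den else 0.0

# Hypothetical membership degrees of one instance's normalized status parameters
mu_rtt   = {"short": 0.9, "typical": 0.1, "lengthy": 0.0}
mu_avail = {"poor": 0.0, "standard": 0.4, "rich": 0.6}
mu_speed = {"least": 0.0, "average": 0.5, "intense": 0.5}

# Assumed rules, e.g. IF rtt IS short AND availability IS rich
# AND speed IS intense THEN ccs IS higher
firing = {
    "higher": fire_and(mu_rtt["short"], mu_avail["rich"], mu_speed["intense"]),
    "medial": fire_and(mu_rtt["short"], mu_avail["standard"], mu_speed["average"]),
    "lower":  fire_and(mu_rtt["typical"], mu_avail["standard"], mu_speed["average"]),
}
ccs = defuzzify(firing, {"higher": 10, "medial": 5, "lower": 2})
print(round(ccs, 2))  # -> 7.2
```

The min combination means one weak status parameter (for example a poor round trip time) caps the whole rule, which is exactly why AND rather than OR suits the status side.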

6.3 Mapping of Applications to Fog Instances
The product of the Rating of Expectation of an application and the Capacity Class Score of an instance is called the Rating Gain. QoE-aware placement maps applications to computing instances so that the total Rating Gain of all applications is maximized. A high RoE indicates that the expectation metric parameters of an application are demanding in a highly combined manner; likewise, a high CCS indicates that the status metric parameters of an instance are favourable in a highly combined manner. Both RoE and CCS are generated over similar environment variables, which magnifies the similarity between the values, and the mapping ensures the best possible convergence towards the maximized rating gain. An optimization function is formulated to manage fog facilities such as service accessibility, computational resources, and application runtime without degrading service quality or user expectations. A multi-constraint objective function is introduced for the mapping of applications to computing instances by the application placement unit:

\max \sum_{a_m \in A} \sum_{n \in N} \sum_{i_n \in I_n} RoE^{a_m} \times CCS^{i_n} \times z_{i_n}^{a_m}

where z_{i_n}^{a_m} ∈ {0, 1} indicates whether application a_m is placed on micro computing instance i_n.

A one-to-one mapping is enforced between applications and instances, and the mapping also maintains QoS constraints such as service delivery time, service cost, and packet loss rate. The optimization problem is formulated as a decentralized objective function: application placement requests are submitted to the fog gateway nodes, and each gateway node solves the multi-constraint optimization problem over its local view of the fog system to generate the placement. Latency therefore remains low even when a large number of application placement requests is received, and the formulation is less prone to the intractability of the general NP-hard placement problem.
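Under the one-to-one constraint, and ignoring the QoS constraints for brevity, the rating-gain maximization can be sketched as an exhaustive assignment over a small instance pool. The RoE and CCS values below are hypothetical; the paper's policy solves a multi-constraint optimization rather than enumerating:

```python
from itertools import permutations

def best_mapping(roe, ccs):
    """Exhaustively pick the one-to-one assignment of applications to
    instances that maximizes the total rating gain sum(RoE * CCS)."""
    best, best_gain = None, float("-inf")
    for perm in permutations(range(len(ccs)), len(roe)):
        gain = sum(r * ccs[i] for r, i in zip(roe, perm))
        if gain > best_gain:
            best, best_gain = perm, gain
    return best, best_gain

roe = [9.1, 7.8, 5.0, 4.2, 2.5]            # five application requests
ccs = [9.5, 8.0, 6.1, 5.5, 4.0, 3.2, 2.0]  # seven micro computing instances
mapping, gain = best_mapping(roe, ccs)
print(mapping, round(gain, 2))  # -> (0, 1, 2, 3, 4) 212.45
```

By the rearrangement inequality, the highest-rated requests end up on the highest-capacity instances; a real placement unit would additionally check the delivery-time, cost, and loss-rate constraints before accepting an assignment.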

6.4 Rationality of the Applied Technique


The Rating of Expectation is calculated for the different application requests from the user expectation parameters, and the Capacity Class Score for the different micro computing instances; fuzzy logic is responsible for calculating both values. With multiple parameters dominating in a real-time system, fuzzy-logic-based reasoning is among the best solutions: it is simple and easy to understand, it has great potential for handling uncertain and semantic information, and it converts qualitative data into quantitative data. A fuzzy-logic-based solution can be expanded according to the circumstances of the system by adjusting the fuzzy sets and rules, and it requires only a small amount of data for future adaptation. In addition, results can be generated efficiently in a fuzzy-logic-enabled system. Before deploying the applications, the RoE and CCS of the different application requests and fog instances, respectively, must be determined. To maximize the QoE gain, a multi-constraint single-objective optimization technique is used; such a problem can be solved in a linear manner by any lightweight optimization solver within a short span of time.

Multi-objective optimization could be applied instead of fuzzy logic with single-objective optimization, but it requires high computational effort. It is time consuming and complex, which makes it less expandable and less adapted to the fog environment; solving a multi-objective optimization problem inside the fog, where the computation is done, would affect the stringent service requirements.

6.5 Illustrative Example
Suppose a fog gateway node receives five application placement requests with data signal sizes of 1000-2000 instructions. The singleton RoE values are set as high = 10, medium = 5, low = 2. The exact status metric parameters of the different micro computing instances are known to the fog gateway node, which manages different computational nodes through their micro computing instances. There are seven instances in the system: two at the lower level, two at the mid level, and three at the upper level. The normalized values of the instances' parameters are mapped through membership degrees to a capacity class score, with singleton values higher = 10, medial = 5, lower = 2. The RoE of the application requests and the CCS of the micro computing instances then yield the optimal mapping of applications to instances with the maximum rating gain. The constraints are a service delivery time of 250-750 ms, a service cost of 0.12-0.15 $ per minute, and a packet loss rate of 3%-5% of data segments; the resulting mapping satisfies almost all the parameters.

7. Performance Evaluation
The QoE-aware application placement policy is evaluated against different QoS- and QoE-aware policies: a placement policy that meets application execution deadlines, Cloud-Fog, which optimizes service coverage, response time, and network congestion, and Media Fog Resource Estimation (MeFoRE), which fortifies efficient resource estimation based on user feedback. The performance metrics are network congestion, amount of allocated resources, reduction in processing time, and percentage of QoS-satisfied data signals.
7.1 Simulation Environment
The proposed policy is simulated in a fog environment using iFogSim. iFogSim is an extension of the CloudSim framework used for modelling different computing paradigms. Different numbers of applications have been placed by varying the arrangements of the computational nodes. Since a real workload is not available at present, a synthetic workload is used in this proposed placement policy, which is acceptably close to a real one. The data operations of the workload include data filtration, data analysis, and event processing.

//TABLE 9

Future Work

To improve efficiency, resource scheduling can be employed in future work, improving cost, waiting time, makespan, resource utilization, execution time, and round trip time. To improve QoS parameters such as availability, reliability, scheduling success rate, speed, and scalability, Particle Swarm Optimization (PSO) can be used. The scheduling of applications can adopt Genetic Hybrid Particle Swarm Optimization (GHPSO), which gives better performance with minimal cost within the execution time and is responsible for minimizing the cost of both workflow computation and communication on the basis of the computational resources.
