
School of Information Technology and Engineering Programme: MCA Course Code: ITA 5006 Course Name: Distributed Operating Systems


Design an Enhanced Scheduling Technique for Distributed Operating System Environment


ABSTRACT

CPU scheduling is one part of the broader resource-allocation problem, and it is probably the most studied problem in operating systems. Proper scheduling of processes provides better hardware utilization and speeds up the system. Distributed scheduling concentrates on global scheduling, since the architecture of the underlying system operates on a global scale. The presence of multiple processing nodes in distributed systems makes scheduling processes onto processors a challenging problem. This paper recommends a metaheuristic optimization technique, Ant Colony Optimization (ACO), for scheduling the tasks of a distributed operating system in an effective way. Examples of combinatorial optimization problems include the Travelling Salesman Problem (TSP), the Quadratic Assignment Problem (QAP), and timetabling and scheduling problems. Complete algorithms are guaranteed to find an optimal solution in bounded time for every finite-size instance of a combinatorial optimization problem. Once processes are scheduled in an optimized fashion, the next problem that arises is balancing the load across the processing nodes. ACO provides proper scheduling as well as a load-balancing mechanism in a single combined, optimized scheme.
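The ACO scheme recommended above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the task execution times, node count and all parameter values (ants, iterations, evaporation rate) are assumptions for the example, with pheromone laid on (task, node) pairs and makespan as the cost.

```python
import random

def aco_schedule(task_times, n_nodes, n_ants=20, n_iters=50,
                 alpha=1.0, beta=2.0, rho=0.1, seed=1):
    """Assign tasks to nodes via Ant Colony Optimization, minimizing makespan."""
    rng = random.Random(seed)
    n_tasks = len(task_times)
    # One pheromone trail per (task, node) pair.
    tau = [[1.0] * n_nodes for _ in range(n_tasks)]
    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            loads = [0.0] * n_nodes
            assign = []
            for t in range(n_tasks):
                # Heuristic term prefers lightly loaded nodes (load balancing).
                weights = [tau[t][m] ** alpha * (1.0 / (1.0 + loads[m])) ** beta
                           for m in range(n_nodes)]
                m = rng.choices(range(n_nodes), weights=weights)[0]
                assign.append(m)
                loads[m] += task_times[t]
            cost = max(loads)  # makespan of this ant's schedule
            if cost < best_cost:
                best, best_cost = assign, cost
        # Evaporate all trails, then reinforce the best-so-far schedule.
        for t in range(n_tasks):
            for m in range(n_nodes):
                tau[t][m] *= (1 - rho)
            tau[t][best[t]] += 1.0 / best_cost
    return best, best_cost
```

Because the heuristic term favours the less-loaded node, the same pheromone matrix drives both the scheduling and the load-balancing behaviour the abstract describes.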


A process is defined as an entity that represents the basic unit of work to be implemented in the system. When a program is loaded into memory it becomes a process, which can be divided into four sections: stack, heap, text and data. The stack contains temporary data such as method/function parameters, return addresses and local variables. The heap is memory dynamically allocated to a process during its run time. The text section includes the current activity, represented by the value of the program counter and the contents of the processor's registers. The data section contains the global and static variables. As a process executes it passes through different states, and these stages may differ between operating systems. In general there are five states: Start, Ready, Running, Waiting and Terminated.
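The five general states and their usual transitions can be captured in a small table. This is only a sketch, since the exact transitions differ between operating systems:

```python
from enum import Enum, auto

class State(Enum):
    START = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Typical legal transitions between the five general process states.
TRANSITIONS = {
    State.START: {State.READY},                  # admitted by the long-term scheduler
    State.READY: {State.RUNNING},                # dispatched by the short-term scheduler
    State.RUNNING: {State.READY,                 # preempted (time slice expired)
                    State.WAITING,               # blocked on I/O or an event
                    State.TERMINATED},           # exit
    State.WAITING: {State.READY},                # I/O or event completed
    State.TERMINATED: set(),
}

def move(current, target):
    """Advance a process to `target`, rejecting illegal transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

Note that a waiting process cannot go straight back to Running; it must re-enter the ready queue first.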


Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process according to a particular strategy. Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing. The operating system maintains the following important process scheduling queues: the job queue, the ready queue and the device queues.


Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. They are of three types: the long-term scheduler, the medium-term scheduler and the short-term scheduler. A context switch is a method of storing the state of the processor or CPU so that process execution can be resumed from the same point at a later stage; it enables multiple processes to share a single CPU. A process scheduling algorithm can be either preemptive or non-preemptive. Preemptive scheduling is based on priority: the scheduler can swap a low-priority process in the running state for a high-priority one. A non-preemptive scheduler, in contrast, lets the running process keep the CPU until it terminates or blocks.
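A minimal simulation of preemptive priority scheduling, as described above, might look like this. The tick-by-tick granularity and the (arrival, priority, burst, name) encoding are assumptions of the sketch, not taken from any particular system:

```python
import heapq

def preemptive_priority(arrivals):
    """arrivals: list of (arrival_time, priority, burst, name); a lower
    priority number means higher priority. Simulates one time unit per step,
    so a newly arrived high-priority process preempts the running one.
    Returns a dict of finish times."""
    arrivals = sorted(arrivals)
    ready, finish, t, i = [], {}, 0, 0
    while i < len(arrivals) or ready:
        # Admit everything that has arrived by time t.
        while i < len(arrivals) and arrivals[i][0] <= t:
            at, prio, burst, name = arrivals[i]
            heapq.heappush(ready, (prio, at, burst, name))
            i += 1
        if not ready:               # CPU idle: jump to the next arrival
            t = arrivals[i][0]
            continue
        prio, at, burst, name = heapq.heappop(ready)
        t += 1                      # run the highest-priority process one tick
        if burst - 1 > 0:           # unfinished: back to the ready queue,
            heapq.heappush(ready, (prio, at, burst - 1, name))
        else:
            finish[name] = t
    return finish
```

Dropping the push-back of unfinished work and running each popped process to completion would turn the same loop into a non-preemptive scheduler, which is exactly the distinction drawn above.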


PAPER TITLE------- Task Scheduling in Large-scale Distributed Systems Utilizing Partial Reconfigurable Processing Elements

This paper proposes the design of a framework that simulates the performance of distributed-system processors. The framework adds partial reconfiguration functionality to the reconfigurable nodes: depending on the available reconfigurable area, each node can execute more than one task simultaneously. Furthermore, the authors present a simple task scheduling algorithm to verify the functionality of the simulation framework; the proposed algorithm supports scheduling tasks on partially reconfigurable nodes. The simulation results are based on various experiments and provide a comparison between full (one node-one task mapping) and partial (one node-multiple tasks mapping) configuration of the nodes, for the same set of parameters in each simulation run. Advances in reconfigurable computing technology over the past decade have significantly raised interest in the high-performance paradigm. Characteristics of reconfigurable hardware include configurability, functional flexibility, power efficiency, ease of use, extensibility (adding new functionality), (reasonably) high performance, hardware abstraction, and scalability (by adding more soft-cores). The work designs and implements partial reconfiguration support for the reconfigurable nodes in DReAMSim, the Dynamic Reconfigurable Autonomous Many-task Simulator, designs efficient data structures to maintain the dynamic status of the nodes, and proposes and implements a task scheduling algorithm to verify the functionality of the simulation framework.

PAPER TITLE------- Multi-criteria and satisfaction oriented scheduling for hybrid Distributed computing infrastructures

As infrastructures become hybrid, the task scheduling problem becomes more complex and challenging. In this paper, the authors present the design of a fault-tolerant and trust-aware scheduler that can execute Bag-of-Tasks applications on elastic and hybrid Distributed Computing Infrastructures (DCIs), following user-defined scheduling strategies. The approach, named the Promethee scheduler, combines a pull-based scheduler with the multi-criteria Promethee decision-making algorithm. Because multi-criteria scheduling multiplies the number of possible scheduling strategies, they propose SOFT, a methodology for finding the optimal scheduling strategy given a set of application requirements. The first challenge concerns the design of the resource-management middleware that allows hybrid DCIs to be assembled. The second challenge is to design task scheduling that can use hybrid DCIs efficiently and, in particular, that takes into account the differences between the infrastructures. The third challenge regards the design of a new scheduling approach that maximizes the satisfaction of both users and resource owners. The Promethee scheduler lets users provide their own scheduling strategies in order to meet their applications' requirements, by configuring the relative importance of each criterion.

PAPER TITLE------- A locality-aware job scheduling policy with distributed Semantic caches

This paper proposes distributed query scheduling policies that take into account the dynamic contents of a distributed caching infrastructure and employ statistical prediction methods in the scheduling policy, with the goal of maximizing overall system throughput. Although many query scheduling policies exist, such as round-robin and load-monitoring, they are not sophisticated enough to both balance the load and leverage cached results. The authors therefore employ kernel density estimation derived from recent queries, together with the well-known exponential moving average (EMA), to predict the query distribution in a multi-dimensional problem space that changes dynamically.

The problem is that many modern applications spend a large amount of execution time on I/O and on manipulating data; the fundamental challenge in improving the performance of such data-intensive applications is managing massive amounts of data and reducing data movement and I/O. To reduce I/O on large datasets, distributed data-analysis frameworks place huge demands on cluster-wide memory, but the memory in a cluster is often not big enough to hold all the datasets, making pure in-memory computing impossible. However, caching facilities scale with the number of distributed servers, and leveraging these large distributed caches plays an important role in improving overall system throughput, as many large-scale systems are built by connecting small machines. The proposed distributed query scheduling policies make scheduling decisions by interpreting queries as multidimensional points and clustering them so that similar queries land together, yielding a high cache-hit ratio.
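The EMA mentioned above can be sketched per dimension of the query space. The smoothing factor and the two-dimensional coordinates are illustrative assumptions, not values from the paper:

```python
def ema_update(prev, observation, alpha=0.3):
    """Exponential moving average, applied per dimension of the query space.
    alpha weighs the newest observation against the running average."""
    return [alpha * x + (1 - alpha) * p for p, x in zip(prev, observation)]

center = [0.0, 0.0]
for q in [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]]:   # recent query coordinates
    center = ema_update(center, q)
# `center` now tracks where recent queries cluster, so a scheduler could
# route similar queries to the server whose cache covers that region.
```

Higher alpha makes the estimate react faster to a drifting query distribution at the cost of more noise, which is the usual trade-off when predicting a dynamically changing workload.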





PAPER TITLE------- Automatic workflow scheduling tuning for distributed processing systems

A composite application is executed as a large number of smaller applications, each performing some specific task and communicating with the other parts of the composite application through signals and data transfer. The computational capacity of one computer is obviously not enough to execute such workflows, so high-performance computational systems such as Grid clusters and cloud environments are used for this purpose. When workflows are executed in distributed systems, scheduling the workflow becomes a very important issue. In this paper, a combined approach is developed for automatic parameter tuning and performance-model construction in the background of the workflow management system (WMS) lifecycle: the automated process works in the background of the Grid platform and updates the database of models and parameters, which is then used during real workflow scheduling. Symbolic regression, performed by genetic programming over statistical data about package executions, was used to construct the performance models, and a hyper-heuristic genetic algorithm was used for parameter tuning.

PAPER TITLE------- Optimal distributed task scheduling in volunteer clouds

Cloud users can transparently access virtually infinite resources with the same ease as any other utility. Alongside the cloud, the volunteer computing paradigm has gained attention in the last decade; here the spare resources on each personal machine are shared. This, however, poses complex challenges in managing such a large-scale environment, as the resources available on each node and the online presence of the nodes are not known a priori. The complexity further increases in the presence of tasks that have an associated Service Level Agreement, specified e.g. through a deadline. Distributed management solutions have therefore been advocated as the only realistically applicable approaches. Volunteer cloud computing is characterized by a large-scale, heterogeneous and dynamic environment. In this paper, a framework for allocating tasks according to different policies, each defined by a suitable optimization problem, is presented, and then a distributed optimization approach relying on the Alternating Direction Method of Multipliers (ADMM) is provided. In a real domain, a single policy could be driven by multiple goals.

PAPER TITLE------- Resource-aware hybrid scheduling algorithm in heterogeneous distributed computing.

Cloud applications generate huge amounts of data that must be gathered, processed and then aggregated in a fault-tolerant, reliable and secure heterogeneous distributed system made up of a mixture of cloud systems (public/private), mobile device networks, desktop-based clusters, etc. In this context, dynamic resource provisioning for Big Data application scheduling has become a challenge in modern systems. The authors propose a resource-aware hybrid scheduling algorithm for different types of applications: batch jobs and workflows. The proposed algorithm performs hierarchical clustering of the available resources into groups in the allocation phase. Task execution is then performed in two phases: in the first, tasks are assigned to groups of resources, and in the second, a classical scheduling algorithm is used within each group.

PAPER TITLE-------A simulator based performance analysis of Multilevel Feedback Queue Scheduling

A Multilevel Feedback Queue (MFQ) permits processes to switch between queues depending on their burst time. Different queues may have different scheduling policies (for example RR, Shortest Job First (SJF) or First Come First Serve (FCFS)). In the first queue, Round Robin scheduling is used; processes in the ready state enter this queue and shift to the next queue when their burst time is larger than the time quantum. The time quantum can be generated dynamically by the simulator or entered statically by the user. The simulated MFQ program can be used to demonstrate that using RR in the first queue and applying SJF to the rest of the queues may enhance CPU usage; a dynamic time quantum is used here. To run the simulator, a set of processes is entered with arrival and burst times. The experiment concludes that the time quantum plays a critical part in process scheduling and is more effective when determined dynamically. Combinations of scheduling policies were tested, and the SJF-RR combination is best suited to decreasing the average waiting time and turnaround time. A more efficient MFQ is produced with this simulator.
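The RR-then-SJF arrangement with a dynamic quantum might be sketched like this. Choosing the mean burst as the dynamic quantum is an assumption for illustration, not necessarily the simulator's actual rule:

```python
from collections import deque

def mlfq(processes):
    """processes: list of (name, burst), all arriving at t=0. Queue 0 runs
    Round Robin with a dynamic quantum (mean burst); jobs whose burst exceeds
    the quantum are demoted to a second queue served Shortest Job First,
    mirroring the SJF-RR combination described above.
    Returns (name, completion_time) pairs in completion order."""
    q0 = deque(processes)
    quantum = max(1, sum(b for _, b in processes) // len(processes))  # dynamic TQ
    q1, order, t = [], [], 0
    while q0:
        name, burst = q0.popleft()
        run = min(quantum, burst)
        t += run
        if burst > run:
            q1.append((burst - run, name))   # burst larger than TQ: demote
        else:
            order.append((name, t))
    for burst, name in sorted(q1):           # second level: SJF on the remainder
        t += burst
        order.append((name, t))
    return order
```

Short jobs finish inside their first quantum, while long jobs sink to the SJF level, which is how this combination keeps average waiting time down.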

PAPER TITLE------------Development of Scheduler for Real Time and Embedded System Domain

Various process scheduling algorithms for real-time and embedded systems are in use today. They are categorized into preemptive and non-preemptive, and preemptive algorithms have better efficiency. Here, therefore, preemptive algorithms are taken and compared in order to design a scheduler for a real-time Linux platform with better efficiency: the positive aspects of each preemptive algorithm are combined to create a scheduling algorithm with improved efficiency. The proposed process is implemented in C or C++. The focus is to study the performance and mechanism of each scheduling algorithm and to create a better-performing scheduling algorithm from them; all sorts of variations of the basic mechanisms are explored to arrive at a refined algorithm with effective results.

PAPER TITLE---------Design and Implementation of a Process Scheduler Simulator and an Improved Process Scheduling Algorithm for Multimedia Operating System

A simulator is designed here to evaluate the suitability and performance of different scheduling algorithms for a Multimedia Operating System (MMOS). It takes generic algorithms and places them in the current scenario to measure their characteristics effectively; many standard algorithms have been taken into account. A new algorithm is designed and executed to enhance the performance of an MMOS, i.e. under mixed task traffic. Samples of 20 distinct task traffics are taken, all the previous algorithms are run on them, and their performance measurements are computed.

Five standard algorithms, viz. Round Robin (RR), First Come First Serve (FCFS), Multilevel Feedback Scheduling (MLFS), Shortest Job First (SJF) and Earliest Deadline First (EDF), were implemented alongside the proposed modification of EDF. Parameters such as deadlines missed, total context switches, average waiting time and average turnaround time were considered. Among the standard algorithms, EDF has the best performance, as it prevents the missing of deadlines, which generally lowers performance. The experiment conducted here shows that the proposed algorithm is better than MLFS and EDF; the proposed work is thus an upgrade of EDF.
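A non-preemptive EDF sketch illustrates why EDF tends to avoid missed deadlines: at each decision point the released job with the earliest deadline runs first. The job encoding is an assumption of the sketch; the paper's implementation details are not reproduced here:

```python
import heapq

def edf(jobs):
    """jobs: list of (release, burst, deadline, name). Non-preemptive EDF:
    whenever the CPU is free, run the released job whose deadline is earliest.
    Returns (completion order, list of jobs that missed their deadline)."""
    jobs = sorted(jobs)                      # by release time
    t, i, ready, missed, order = 0, 0, [], [], []
    while i < len(jobs) or ready:
        while i < len(jobs) and jobs[i][0] <= t:
            r, burst, dl, name = jobs[i]
            heapq.heappush(ready, (dl, burst, name))   # keyed by deadline
            i += 1
        if not ready:                        # idle until the next release
            t = jobs[i][0]
            continue
        dl, burst, name = heapq.heappop(ready)
        t += burst
        order.append(name)
        if t > dl:
            missed.append(name)
    return order, missed
```

A fixed-priority policy would have no reason to run the tightest-deadline job first, which is where EDF gains its advantage on deadline misses.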

PAPER TITLE ---------- Approximate Analysis of Reader-Writer Queues

The authors examine the performance of queues that serve readers and writers: readers can be served concurrently, whereas writers require exclusive service. They investigate a first-come-first-serve (FCFS) reader-writer queue and derive formulae for waiting time and capacity under Poisson arrivals and exponential service times. Further analysis handles a one-writer queue and a queue with writer locks. The aim is to present the results as guidelines for designing real systems. The paper gives an exact model of FCFS reader-writer queue performance and shows formulae that predict shared-resource behaviour and the waiting times for read and write locks; the formulae are simple enough to be used as general guidelines by a designer. The authors also examine one-writer queues and update locks, predicting lock waiting times and analyzing the capacity of the shared resource.

PAPER TITLE-------Good Processor Management = Fast Allocation + Efficient Scheduling

A multi-user multicomputer operating system mainly comprises job scheduling algorithms and efficient processor management. One technique, called group scheduling, arranges jobs so that jobs belonging to one group do not block each other; FCFS is used to schedule the groups so that starvation is avoided, and response time is reduced by minimizing the waiting queue for jobs in the same group. The other technique uses two novel processor-management schemes that fulfill the demands of mesh-connected multicomputers, including a stack-based algorithm that works on spatial subtraction and coordinate calculations to allocate free space for a job in the mesh. Both techniques together provide better and more efficient service, since both the job scheduling and the allocation algorithm play a vital role in system performance. In the proposed paper, both techniques are used simultaneously: first, the stack-based allocation algorithm allocates free space to a job quickly; second, group scheduling categorizes jobs into groups. Results show that the stack-based allocation algorithm outperforms all other approaches as far as allocation overhead is concerned, and group scheduling works better for all group sizes, reducing response time efficiently compared to the FCFS technique. The conclusion is that the two policies together give faster and more efficient performance.

PAPER TITLE--------------- An Impact of cross over operator on the performance of Genetic algorithm under operating system process scheduling problem.

The OS scheduling issue is considered an NP-hard problem, and the genetic algorithm is regarded as a metaheuristic optimization tool, so the use of genetic algorithms for OS process scheduling is described here. The power of a genetic algorithm lies in its operators, for example mutation, crossover and inversion. The priority here is to make the genetic algorithm adaptive and flexible, so various crossover operators are used with consistent crossover and mutation rates. The convergence behaviour, flexibility and performance of the genetic algorithm depend on the crossover operator chosen. After simulation, the genetic algorithm works efficiently for the proposed problem. Under the characterized parameter settings, the crossover operators differ in how they approach the global rather than a local maximum: order-based crossover takes more iterations to reach its global maximum, and since the average waiting time should be minimal, 1164.73 is the minimum value achieved by the order-based crossover genetic algorithm, which performs well under these conditions in comparison to the other operators.
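Order-based crossover, one of the operators compared above, can be sketched for permutation-encoded schedules. The slice-then-fill variant shown here is a common textbook form, not necessarily the paper's exact implementation:

```python
import random

def order_crossover(p1, p2, rng):
    """Order crossover for permutation-encoded schedules: copy a random
    slice from parent 1, then fill the remaining positions with the missing
    genes in the order they appear in parent 2. The child is always a
    valid permutation, which is why this operator suits scheduling problems."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]                 # inherited slice from parent 1
    fill = [g for g in p2 if g not in child]     # remaining genes, parent-2 order
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill.pop(0)
    return child
```

A plain one-point crossover on a permutation would duplicate and drop genes, producing invalid schedules; order-based operators avoid that at the cost of the extra fill pass, which is consistent with the observation above that they need more iterations.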

PAPER TITLE-----------The Multi-Objective Assembly Line Worker Integration and Balancing Problem of Type-2

Traditional assembly line balancing (ALB) research focuses on the simple assembly line balancing problem (SALBP) initially defined by Baybars (1986) through several well-known simplifying hypotheses. This classical single-model problem consists of finding the best feasible assignment of tasks to stations so that precedence constraints are fulfilled. Two basic versions of this problem are called type-1, in which the cycle time, c, is given and the aim is to minimize the number of workstations needed, and type-2, used when there is a given number of workstations, m, and the goal is to minimize the cycle time (Scholl, 1999). We are particularly interested in another variant, in which heterogeneity is more pronounced, configuring the so-called assembly line worker assignment and balancing problem (ALWABP) (Miralles et al., 2007). In this problem, inspired by assembly lines in sheltered work centers for the disabled (SWDs), workers are highly heterogeneous. Indeed, not only might each worker have a specific processing time for each task, but each worker also has a set of tasks that they cannot execute, called incompatible tasks.

PAPER TITLE----------Novel Scheduling Algorithm for Uni-Processor Operating System

This work focuses on the design and development of a new scheduling algorithm for a multiprogramming operating system from the point of view of optimization. The researchers developed a tool that produces experimental results with respect to some standard scheduling algorithms, e.g. First Come First Serve, Shortest Job First, Round Robin, etc. Efficient resource utilization is achieved by sharing system resources among multiple users and system processes. Optimum resource sharing depends on the efficient scheduling of competing users and system processes for the processor, which makes process scheduling an important aspect of a multiprogramming operating system. Part of the reason for using multiprogramming is that the operating system itself is implemented as one or more processes, so there must be a way for the operating system and application processes to share the CPU. Another main reason is the need for processes to perform I/O operations in the normal course of computation. Since I/O operations ordinarily require orders of magnitude more time to complete than CPU instructions, multiprogramming systems allocate the CPU to another process whenever a process invokes an I/O operation.

RR has the problem of high average waiting time, SRTF starves longer jobs, and although Highest Response Ratio Next avoids the problems of RR and SRTF, it fails in terms of responsiveness due to its non-preemptive mode. The proposed algorithm therefore tries to minimize the average waiting time and the starvation of longer jobs, and increases responsiveness due to its preemptive nature. For the first job, the time given (TG) is 2*TQ; this is useful since many processes may arrive in that time, which helps in making effective decisions ahead, as there are then many processes to choose from.

The authors also analyze the proposed algorithm on several scheduling factors. There are many factors, such as waiting time and turnaround time (TAT), on which the performance of a scheduling algorithm can be checked; to test performance, two parameters are used here, average WT and average TAT. Average waiting time is treated in queueing theory, the mathematical study of waiting lines, or queues. The theory enables mathematical analysis of several related processes, including arriving at the (back of the) queue, waiting in the queue (essentially a storage process), and being served at the front of the queue. It permits the derivation and calculation of several performance measures, including the average waiting time in the queue or the system, the expected number waiting or receiving service, and the probability of encountering the system in certain states, such as empty, full, having an available server, or having to wait a certain time to be served.
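As a concrete instance of such a queueing measure, the average waiting time in an M/M/1 queue (Poisson arrivals at rate lam, exponential service at rate mu) follows from Little's law. This is a standard textbook result, not a formula taken from the paper:

```python
def mm1_waiting_time(lam, mu):
    """Average time spent waiting in the queue (Wq) for an M/M/1 system
    with Poisson arrivals (rate lam) and exponential service (rate mu)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                  # server utilization
    Lq = rho ** 2 / (1 - rho)       # expected number of jobs waiting
    Wq = Lq / lam                   # Little's law: Lq = lam * Wq
    return Wq
```

For example, with two arrivals and five service completions per second on average, jobs wait about 0.133 seconds in the queue; as lam approaches mu the waiting time grows without bound, which is why average WT is such a sensitive performance measure.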

PAPER TITLE ---------------- Distributed Process Scheduling Using Genetic Algorithm

Two fundamental concepts of a distributed operating system are time sharing and resource sharing. In a distributed system environment, scheduling plays a vital role in performance and throughput, yet scheduling is recognized as an NP-complete problem even under the best conditions. The model proposed here therefore aims at an ideal algorithm that requires minimum execution time and enhances processor efficiency. The issue that arises is one of combinatorial optimization, which can be solved by a genetic algorithm, so a genetic-algorithm-based system has been introduced and assessed to overcome these issues and obtain the desired results. In this system, multiple conditions are considered: minimizing peak load and cost, maximizing CPU utilization, and achieving the best possible solution.

PAPER TITLE --------------Organization Based Intelligent Process Scheduling Algorithm (OIPSA)





The main aim of a scheduling algorithm is to divide CPU time in such a manner that maximum utilization and efficiency can be extracted. One way to achieve that is to set job priorities manually, taking some basic scheduling rules into account irrespective of preferences among processes. In general, though, certain sets of tasks are performed repeatedly by companies, so rather than using pre-defined design rules, priority should be assigned according to a job's importance or activeness. For that, a novel algorithm comes into the picture: it schedules jobs/processes according to need, not pre-defined rules. OIPSA (Organization Based Intelligent Process Scheduling Algorithm) studies the processes and gives the highest priority to those used most frequently. The proposed algorithm arranges every process into a high, medium or low priority pool and then orders them by preference. As a result, response time, waiting time and turnaround time decrease visibly, enhancing the efficiency of the whole system compared to fundamental scheduling algorithms; it schedules according to the needs of the organization. At the start OIPSA's performance is merely theoretical, but once it becomes familiar with the company's preferences it yields better results. In the future, user preferences will also be considered by this algorithm, helping the system work even more efficiently.
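The high/medium/low pooling by activeness could be sketched as follows. The frequency thresholds are illustrative assumptions, since OIPSA's actual classification rule is not given here:

```python
from collections import Counter

def classify_by_activeness(history, high=0.5, medium=0.2):
    """Split processes into high/medium/low priority pools according to how
    often they appear in the recent execution history (their 'activeness').
    `high` and `medium` are frequency-share thresholds."""
    counts = Counter(history)
    total = sum(counts.values())
    pools = {"high": [], "medium": [], "low": []}
    for proc, c in counts.most_common():     # most frequent first
        share = c / total
        if share >= high:
            pools["high"].append(proc)
        elif share >= medium:
            pools["medium"].append(proc)
        else:
            pools["low"].append(proc)
    return pools
```

A dispatcher would then drain the high pool before the medium and low pools, so the priorities track observed usage rather than fixed rules, matching the idea described above.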





PAPER TITLE------- Semi-online hierarchical load balancing problem with bounded processing times



In this paper, the authors consider the online hierarchical scheduling problem on two parallel machines, with the objective of maximizing the minimum machine load. Since no competitive algorithm exists for this problem, they consider the semi-online version with bounded processing times, in which the processing times are bounded by an interval [1, α] where α ≥ 1. They show that no algorithm can have a competitive ratio below 1 + α, and present an optimal algorithm.

The hierarchical scheduling problem on m parallel machines has been widely studied. In general, the problem can be described as follows. We are given machines distinguished by different hierarchies. A sequence of jobs arrives one by one over a list, and each job has a positive processing time and a hierarchy. Jobs must be scheduled irrevocably at the time of their arrival, and each job can only be processed on a subset of the machines. The common objective is to minimize the maximum load over all machines. Here we are given two machines and a sequence of jobs arriving online, to be scheduled irrevocably at the time of their arrival. The first machine can process all of the jobs, while the second can process only some of them. The next job arrives only after the current job has been scheduled. Let σ = {J1, ..., Jn} be the set of all jobs arranged in order of arrival. Each job is denoted Ji = (pi, gi), where pi > 0 is the processing time (also called the job size) of job Ji and gi ∈ {1, 2} is the hierarchy of job Ji: gi = 1 means Ji can only be processed by the first machine, and gi = 2 means it can be processed by either of the two machines. pi and gi are not known until the arrival of job Ji.

In this paper the authors considered the semi-online version of the hierarchical scheduling problem on two parallel machines with the objective of maximizing the minimum machine load. When the processing times are bounded by an interval [1, α], they showed that the lower bound on the competitive ratio of any online algorithm is 1 + α, and presented an algorithm that is shown to be optimal. If, in addition, the sum of all jobs' processing times (i.e., the total processing time) is known in advance, they showed a lower bound of α for the case 1 ≤ α < 2, and an optimal algorithm was likewise presented. However, even though the second result improves the corresponding result in [1], this is unsurprising given the extra restriction on the processing-time interval [1, α]. In general, if there are m (m > 2) parallel machines, they believe a competitive algorithm also exists; finding the lower bound on the competitive ratio and designing an optimal online algorithm for that case is left to future research.
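For intuition, a naive greedy baseline for the two-machine hierarchical setting can be sketched. This is not the paper's optimal (1 + α)-competitive algorithm, just the obvious rule of sending flexible jobs to the less-loaded machine:

```python
def greedy_hierarchical(jobs):
    """jobs: list of (p, g) with processing time p > 0 and hierarchy g in
    {1, 2}. Hierarchy-1 jobs must run on machine 1; hierarchy-2 jobs are
    placed greedily on the currently less-loaded machine. The objective in
    the paper is to maximize the minimum machine load; this greedy rule is
    a simple baseline for that objective.
    Returns (final loads, minimum load)."""
    load = [0.0, 0.0]
    for p, g in jobs:
        if g == 1:
            load[0] += p                       # no choice: machine 1 only
        else:
            m = 0 if load[0] <= load[1] else 1  # flexible: balance the loads
            load[m] += p
    return load, min(load)
```

The weakness of this rule is exactly what the competitive analysis captures: an adversary can feed flexible jobs early so that machine 2 fills up before a stream of hierarchy-1 jobs arrives, which is why bounding processing times to [1, α] is needed to obtain any competitive guarantee.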

PAPER TITLE-------------Towards Reducing Energy Consumption using Inter Process Scheduling in Preemptive Multitasking OS.

Current proportional-share schedulers do not account for greedy multithreaded processes, which consume a lot of energy. To reduce this when multithreaded processes run continuously in the system, the authors propose a better solution: a novel proportional sharing scheduler that manages the running threads of the same process and adjusts their weights. For that, TWRS (Thread Weight Readjustment Scheduler) is used, which reduces the number of context switches. TWRS distributes CPU time among threads in proportion to their new weights and pre-allocates some CPU time to every thread. Context switches waste time, since the system does no useful work while switching, and energy can be saved by minimizing them. TWRS offers a practical approach for multitasking operating systems because it operates with existing kernels. The authors propose and explore the efficiency of TWRS, a proportional sharing scheduler for multitasking systems whose main aim is to allocate more CPU time to processes with more threads while simultaneously preventing greedy processes from consuming extra CPU time, so that deadlock can be prevented. TWRS was implemented in Linux 2.6.24-1, which features one of the most prominent scheduler designs, the Completely Fair Scheduler (CFS); the proposed scheduler is a modification of the way Linux runs multithreaded services. The work demonstrates that TWRS reduces energy consumption.

PAPER TITLE ----------- FPS: A Fair-progress Process Scheduling Policy on Shared-Memory Multiprocessors

Competition for shared memory resources on multiprocessors is a predominant reason why applications slow down and their performance suffers, so it is necessary to provide Quality of Service (QoS) on such systems. The authors propose a Fair-Progress Scheduling (FPS) policy to enhance system fairness: when slowdown occurs, equally weighted processes in the running state should suffer equally, and when an application has endured more slowdown, it is assigned more CPU time so it can progress. The policy can likewise be applied to threads with different weights. The main principle is to give each application the same amount of effective progress: if an application did less useful work than the others because it suffered a greater slowdown, it receives compensating CPU time. The challenge in computing progress at runtime is estimating the run-alone performance of each executed quantum while the application is actually running alongside others. The solution is to classify the execution quanta into phases, using the low-contention, pre-scheduled quanta as references, and then extend their performance data to other quanta belonging to a similar phase in order to determine progress. The evaluation shows that, at the cost of slightly lower throughput, system fairness improves; the throughput issue can be addressed at later stages. The performance records of processes can also be kept to guide the scheduler in the future: FPS can use such records so that fairness is maintained without the overhead of the training periods it would otherwise require, and throughput can be upgraded.

PAPER TITLE----------Analysis of CPU scheduling policies through simulation.

CPU scheduling is an area of research in which computer scientists design efficient algorithms for scheduling processes so as to obtain optimal turnaround time and average waiting time. CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. According to this paper, Terry Regner and Craig Lacey introduced the concepts and fundamentals of the structure and functionality of operating systems; the purpose of the article is to analyze different scheduling algorithms in a simulated system. Process scheduling algorithms are used to ensure that the system maximizes its utilization and completes all assigned processes within a specified period. In some applications, the SJF scheduling algorithm is more suitable than a priority scheduling (PrS) algorithm, since it provides less waiting time and less turnaround time; in real-time applications, a PrS algorithm must be used to deal with different priorities, since each task has a priority order.

Multimedia applications have unique requirements that must be met by network and operating-system components. A multimedia application may have several processes running dependently on one another, and multimedia is a real-time workload, so the CPU scheduler determines the quality of service rendered: the more CPU cycles scheduled to a process, the faster data can be produced, which results in better-quality, more reliable output. The authors designed a simulator using VB 6.0 that analyzes four CPU scheduling policies. CPU scheduling is among the most important functions and critical parts of an operating system; common process-allocation policies include FCFS, SJF, RR, and priority-based scheduling. The simulator is designed to evaluate these scheduling strategies against randomly generated reference processes.

The main objective of this research work is to analyze various CPU scheduling policies. The foremost criteria for the evaluation are the waiting time and burst time of the processes produced by each policy under the same conditions and workload; here, the workload is the amount of memory allocated to each arriving process. A simulator was designed to study the behavior of the different policies under similar conditions for a set of randomly generated request processes.
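The kind of comparison such a simulator performs can be sketched as follows, computing the average waiting time of the same workload under FCFS and non-preemptive SJF; the burst values are the classic textbook example, and all processes are assumed to arrive at time 0:

```python
# Minimal scheduling-simulation sketch: average waiting time of one workload
# under FCFS (arrival order) and non-preemptive SJF (sorted by burst length).

def avg_waiting(bursts):
    """bursts: CPU burst times in execution order; all arrive at time 0."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # a process waits until the CPU becomes free
        clock += b
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                  # arrival order
fcfs = avg_waiting(bursts)           # 17.0
sjf = avg_waiting(sorted(bursts))    # 3.0 -- shortest jobs first
```

The gap (17.0 vs 3.0) illustrates why SJF yields less waiting time than FCFS on workloads dominated by one long burst.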

PAPER TITLE--------------Optimized Round Robin CPU Scheduling Algorithm.

Scheduling is one of the fundamental functions of an operating system. Uniprocessor operating systems are in general of two types: uni-programming and multiprogramming. A uni-programming operating system executes only a single job at a time, while a multiprogramming operating system can execute multiple jobs concurrently; resource utilization is the basic aim of multiprogramming. Scheduling is at the heart of any computer system, since it decides how resources are shared among competing processes. The CPU is one of the primary computer resources, so its scheduling is essential to an operating system's design. Efficient resource utilization is achieved by sharing system resources among multiple users and system processes, and optimum resource sharing depends on efficiently scheduling the competing users and system processes for the processor. As the processor is the most important resource, process scheduling, also called CPU scheduling, becomes an important aspect of a multiprogramming operating system.
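As a minimal illustration of the plain round-robin policy discussed above (not the paper's optimized variant), the following sketch runs a fixed quantum over a ready queue, records completion times, and counts dispatches, showing how the quantum trades context-switch overhead against responsiveness:

```python
# Round Robin sketch: fixed quantum, all processes assumed to arrive at
# time 0. Returns per-process completion times and the number of dispatches.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict pid -> remaining CPU burst."""
    ready = deque(bursts.items())
    clock, finish, switches = 0, {}, 0
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        switches += 1
        if remaining > run:
            ready.append((pid, remaining - run))  # re-queue unfinished work
        else:
            finish[pid] = clock
    return finish, switches

finish, switches = round_robin({"P1": 5, "P2": 3}, quantum=2)
```

With quantum 2, P2 finishes at time 7 and P1 at time 8 after 5 dispatches; a larger quantum would cut dispatches but make P2 wait behind all of P1.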

PAPER TITLE----------Survey of load balancing techniques for Grid

A Grid is a computing and data-management infrastructure that provides the electronic underpinning for a worldwide society in business, government, research, science and entertainment. A computational Grid constitutes the software and hardware infrastructure that provides dependable, consistent, pervasive and inexpensive access to high-end computational capabilities. The Grid integrates networking, communication, computation and information to provide a virtual platform for computation and data management, in the same way that the Internet integrates resources to form a virtual platform for information. Lately, owing to rapid technological advances, Grid computing has become an important area of research and has emerged as a new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications and, in some cases, high performance. A Grid is a system of computational resources that may span numerous continents, and it serves as a comprehensive and complete framework through which organizations achieve maximum utilization of resources. Load balancing is a procedure that involves resource management and an effective distribution of load among the resources; accordingly, it is considered vital in Grid systems. The surveyed work presents a broad review of the load-balancing strategies proposed so far. These techniques are applicable to different systems depending on the needs of the computational Grid and the type of environment, resources, virtual organizations and job profile it is expected to work with. Each of these models has its own merits and drawbacks, which form the topic of the study; a detailed classification of the various load-balancing strategies based on different parameters has also been incorporated into the review.
Static load-balancing algorithms assume that the information governing load-balancing decisions, including the characteristics of the jobs, the computing nodes, and the communication networks, is known in advance. The decisions are made deterministically or probabilistically at compile time and remain constant during run time, which is the main drawback of the static approach. In contrast, dynamic load-balancing algorithms attempt to use runtime state information to make more informed load-balancing decisions.
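The static/dynamic distinction can be sketched as follows. Node names and job costs are illustrative: the static policy fixes a round-robin assignment in advance, while the dynamic policy consults runtime load and sends each job to the currently least-loaded node:

```python
# Static vs. dynamic load balancing sketch with illustrative jobs and nodes.

def static_assign(jobs, nodes):
    """Assignment fixed ahead of time (round-robin), ignoring runtime load."""
    return {job: nodes[i % len(nodes)] for i, job in enumerate(jobs)}

def dynamic_assign(jobs, load):
    """Each job goes to the node with the least load at submission time."""
    placement = {}
    for job, cost in jobs.items():
        node = min(load, key=load.get)   # runtime state information
        placement[job] = node
        load[node] += cost
    return placement

jobs = {"j1": 5, "j2": 1, "j3": 1}
static = static_assign(list(jobs), ["n1", "n2"])
dynamic = dynamic_assign(jobs, {"n1": 0, "n2": 0})
```

Here the static plan puts j3 on n1 behind the expensive j1, while the dynamic policy routes both cheap jobs to n2, keeping the nodes more evenly loaded.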


Paper: Task Scheduling in Large-scale Distributed Systems Utilizing Partial Reconfigurable Processing Elements (IEEE 2012)
Merits: Average wasted area per task is less; more than one application can execute simultaneously on a system node.
Demerits: Need of a good load-balancing manager.

Paper: Multi-criteria and satisfaction oriented scheduling for hybrid distributed computing infrastructures
Merits: Fault-tolerant and trust-aware scheduler; maximizes infrastructure utilization.
Demerits: Need of scheduling strategies (filtering methods).

Paper: Job scheduling policy with Distributed Semantic caches
Merits: Load is balanced and cached results are reused; less load on servers.

Paper: Automatic workflow scheduling tuning for distributed processing systems
Demerits: The algorithm uses a high number of iterations, which depends on the available CPU cores and RAM.

Paper: Optimal distributed task scheduling in volunteer clouds
Merits: Transparent access to virtually infinite resources; can be set up on data centers, personal devices, or both.
Demerits: Limited resource capacity on a single machine.

Paper: Resource-aware hybrid scheduling algorithm in heterogeneous distributed computing
Merits: Scalability; fast execution time.
Demerits: Need of a dynamic …

Paper: A simulator based performance analysis of Multilevel Feedback Queue Scheduling
Author: Ritesh Gupta
Merits: A process that waits too long in a lower-priority queue may be moved to a higher-priority queue; every process gets an equal share of the CPU; a newly created process is added to the end of the ready queue.
Demerits: Moving processes between queues produces more CPU overhead; larger waiting and response times; low throughput; context switching overhead.

Paper: Development of Scheduler for Real Time and Embedded System Domain
Authors: Panduranga Rao, K.C. Shet
Merits: Less starvation; throughput is high.
Demerits: Elapsed time must be recorded, which results in overhead on the processor.

Paper: Design and Implementation of a Process Scheduler Simulator and an Improved Process Scheduling Algorithm for Multimedia Operating System
Authors: Prabhat K. Saraswat, Prasoon Gupta
Merits: Minimum average waiting time; overhead on the processor is low; good response time for short processes; throughput is high; provably optimal with respect to average turnaround time.
Demerits: Cannot predict the length of the next CPU burst; requires future knowledge; can lead to starvation of longer processes; does not minimize average turnaround time.

Paper: Good Processor Management = Fast Allocation + Efficient Scheduling
Authors: Byung S. Yoo, Chita R. Das
Merits: Prevents starvation; First Come First Served; easy to implement.
Demerits: The scheduling method is non-preemptive, that is, a process runs until it finishes; because of this, processes at the back of the queue must wait for the long process at the front to finish.

Paper: … Scheduling Algorithm
Merits: Shows a comparison between the new and the existing system through an implemented system.
Demerits: The executing process may access the same resource before another preempted process has finished using it.

Paper: Analysis of CPU scheduling policies through simulation
Authors: Kumar, Ashok, Harish Rohil, and Suraj Arya
Remarks: Simulators are … scheduling policies; in future, they can be used … simulation technique.

Paper: Novel Scheduling Algorithm for Uni-Processor Operating System
Author: Suresh Varma
Merits: Helps to improve CPU efficiency in a real-time uniprocessor multiprogramming operating system. CPU scheduling is the basis of a multiprogrammed operating system; the scheduler is responsible for multiplexing processes on the CPU.

Paper: Survey of load balancing techniques for Grid
Remarks: The algorithm, research focus, contribution, features, compared model, performance metrics, improvement, gap, and future work of each load-balancing technique have been analyzed and presented.

Paper: Semi-online hierarchical load balancing problem with bounded processing times
Demerits: The communication overhead and load-balancing time depend upon the approach selected in the algorithm.
Future work: The work can be extended to develop a new algorithm with a decentralized approach to reduce the communication overhead, reduce migration time, and make it scalable; the load-balancing algorithm for clusters can be made more robust by scheduling all jobs irrespective of any constraints so as to balance the load perfectly.

Paper: New Algorithms for Load Balancing in Peer-to-Peer Systems
Merits: The protocol eliminates the need for virtual nodes while maintaining a balanced load, improving on related protocols; the scheme allows for the deletion of nodes and admits a simpler analysis, since the assignments do not depend on the history of the network.
Demerits: Complex query data structures are likely to impose some structure on how items are assigned to nodes, and this structure must be maintained by the load-balancing algorithm.



PROPOSED ALGORITHM: ANT COLONY OPTIMIZATION

Conclusions: Ant Colony Optimization has been, and continues to be, a productive paradigm for designing effective combinatorial optimization algorithms. After more than ten years of study, both its practical effectiveness and its theoretical grounding have matured, making ACO one of the most successful paradigms in the metaheuristic field. This outline proposes process scheduling and load balancing in distributed systems using ACO; that is, this paper recommends the metaheuristic Ant Colony Optimization technique for task scheduling in a distributed operating system in an effective way.
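A compact sketch of how ACO can map tasks onto processors is given below. It is a toy illustration under assumed pheromone parameters, not the authors' exact algorithm: each ant assigns tasks probabilistically using pheromone weighted by a prefer-lightly-loaded heuristic, and the best schedule found so far (minimum makespan, i.e. the load of the heaviest node) reinforces the pheromone trail after evaporation.

```python
# Toy ACO for task-to-processor scheduling; costs and parameters are assumed.
import random

def aco_schedule(cost, n_procs, ants=20, iters=50, rho=0.5, seed=1):
    rng = random.Random(seed)
    n = len(cost)
    tau = [[1.0] * n_procs for _ in range(n)]     # pheromone per (task, proc)
    best, best_span = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            load = [0.0] * n_procs
            assign = []
            for t in range(n):
                # desirability = pheromone x heuristic (prefer light nodes)
                w = [tau[t][p] / (1.0 + load[p]) for p in range(n_procs)]
                p = rng.choices(range(n_procs), weights=w)[0]
                assign.append(p)
                load[p] += cost[t]
            span = max(load)                      # makespan = heaviest node
            if span < best_span:
                best, best_span = assign, span
        # evaporate all trails, then reinforce the best-so-far schedule
        for t in range(n):
            for p in range(n_procs):
                tau[t][p] *= (1 - rho)
            tau[t][best[t]] += 1.0 / best_span
    return best, best_span

best, span = aco_schedule([4, 3, 2, 2, 1], n_procs=2)
```

For the five tasks above (total cost 12 on 2 processors), the lower bound on the makespan is 6; the pheromone feedback steers the colony toward such balanced assignments, which is exactly the combined scheduling-plus-load-balancing effect argued for in this paper.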


[1] Sanjay K. Dwivedi, Ritesh Gupta, "A simulator based performance analysis of Multilevel Feedback Queue Scheduling", 5th International Conference on Computer and Communication Technology, 2014.
[2] M.V. Panduranga Rao, K.C. Shet, R. Balakrishna, K. Roopa, "Development of Scheduler for Real Time and Embedded System Domain", 22nd International Conference on Advanced Information Networking and Applications - Workshops.
[3] Byung S. Yoo, Chita R. Das, "Good Processor Management = Fast Allocation + Efficient Scheduling", Department of Computer Science and Engineering, The Pennsylvania State University.
[4] Prabhat K. Saraswat, Prasoon Gupta, "Design and Implementation of a Process Scheduler Simulator and an Improved Process Scheduling Algorithm for Multimedia Operating Systems", Dhirubhai Ambani Institute of Information and Communication Technology.
[5] Theodore Johnson, "Approximate Analysis of Reader Writer Queues", IEEE Transactions on Software Engineering, Vol. 21, No. 3, March 1995.
[6] Rajiv Kumar, Sanjeev Gill, Ashwani Kaushik, "An Impact of Cross Over Operator on the Performance of Genetic Algorithm under Operating System Process Scheduling Problem", International Conference on Communication Systems and Network Technologies, 2011.
[7] Munam Ali Shah, Muhammad Bilal Shahid, Sijing Zhang, Safi Mustafa, Mushahid Hussain, "Organization Based Intelligent Process Scheduling Algorithm (OIPSA)", Proceedings of the 21st International Conference on Automation & Computing, University of Strathclyde, Glasgow, UK, 11-12 September 2015.
[8] Ranjeet Singh, Santosh Kumar Gupta, "Distributed Process Scheduling Using Genetic Algorithm".
[9] Chenggang Wu, Jin Li, Di Xu, Pen-Chung Yew, Jianjun Li, Zhenjiang Wang, "FPS: A Fair-progress Process Scheduling Policy on Shared-Memory Multiprocessors", IEEE Transactions.
[10] Samih M. Mostafa, Shigeru Kusakabe, "Towards Reducing Energy Consumption using Inter-Process Scheduling in Preemptive Multitasking OS".