
Comparison between CPU Scheduling in VxWorks and RTLinux

Wiktor Nytomt 850903-5518 wikny312@student.liu.se, Magnus Persson 831129-0251 magpe497@student.liu.se, University of Linköping, Sweden, 2006-11-19

This document is written as an assignment in a course about realtime systems at the University of Linköping, Sweden. The aim of the paper is to look into the CPU scheduling of two hard realtime operating systems, RTLinux and VxWorks, compare their design choices, and describe what choices the user has when it comes to configuring the CPU scheduler in the operating system. We have found that the difference between RTLinux and VxWorks when it comes to CPU scheduling lies in the way priorities are set. RTLinux ships with the EDF and RMA algorithms, which set the priorities themselves, while the native schedulers in VxWorks and RTLinux depend on the user setting the priorities.

There are several critical design choices to be made when making, or remaking, an operating system that is to be a hard realtime system. One of these design choices is how to implement the scheduling of the CPU, in other words, how to decide which thread gets hold of the CPU and executes its code. There are of course many ways to do this, but this paper will focus on the design choices taken when creating VxWorks and RTLinux. The article starts with a summary of RTLinux and the algorithms that have been implemented, followed by VxWorks and Wind River's implementation of CPU scheduling. After that, a comparison between VxWorks and RTLinux follows.

Introduction to RTLinux

RTLinux originated from a realtime project at the New Mexico Institute of Mining and Technology, where it was developed by V. Yodaiken. RTLinux exists today in two different versions: a commercial version called RTLinux Pro, developed by FSMLabs where V. Yodaiken is president, and a free version called RTLinux Free. Instead of building a new hard realtime operating system from scratch, the creators of RTLinux decided to make an extension to the standard Linux kernel that would turn Linux into a hard realtime operating system. A standard Linux kernel has scheduling algorithms that ensure good average performance of the threads in the system, but it is not able to guarantee that tasks will finish before a given deadline. The standard Linux kernel can suspend user threads and give the CPU to another process according to the CPU scheduling algorithm used. This, combined with system calls, memory management and so on, is a problem when you want to guarantee that a realtime task will finish before its deadline [1]. In RTLinux this is solved by adding a layer between the system hardware and the standard Linux kernel running on the system. The layer is designed to trick the standard Linux kernel into believing that it is the actual hardware in the system and not just another layer of software. This makes it possible for the new software layer to catch all interrupts and handle them before the standard Linux kernel notices them. So, the RTLinux kernel catches interrupts generated by the hardware in the system and passes them on to the Linux kernel as software interrupts once all realtime tasks are done [2]. No interrupts are sent to the standard Linux kernel while there are still realtime tasks ready to run. If an interrupt is of relevance to RTLinux, the appropriate interrupt service routine is executed.
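The interception scheme described above can be sketched in a few lines. The following Python fragment is only a conceptual illustration (the real mechanism is kernel C code, and the function and parameter names here are invented for the sketch): realtime interrupts are serviced immediately, while all others are held back and only delivered to the Linux kernel once no realtime task is ready.

```python
def rtlinux_dispatch(irq, rt_handlers, pending_linux_irqs, rt_ready):
    """Conceptual sketch of RTLinux-style interrupt interception.

    irq: interrupt number fired by the hardware.
    rt_handlers: dict mapping irq -> realtime interrupt service routine.
    pending_linux_irqs: list of interrupts held back from the Linux kernel.
    rt_ready: True while any realtime task is ready to run.
    Returns the interrupts delivered to Linux (as "software interrupts").
    """
    if irq in rt_handlers:
        rt_handlers[irq]()              # realtime ISR runs immediately
    else:
        pending_linux_irqs.append(irq)  # held back for the Linux kernel
    if not rt_ready:                    # no realtime work pending:
        delivered = list(pending_linux_irqs)
        pending_linux_irqs.clear()
        return delivered                # forward everything to Linux now
    return []                           # Linux has to wait
```

The key point the sketch captures is that the Linux kernel never sees a hardware interrupt directly; it only sees what the realtime layer chooses to forward, and only when no realtime task needs the CPU.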
In this way, events that could disturb the realtime performance of the system are sorted out. Basically, RTLinux adds an application that takes over control of the interrupts from the standard Linux kernel, and then uses its own scheduling algorithm and interrupt service routines to turn the standard Linux kernel into a hard realtime operating system. The standard Linux kernel continues to run its processes as before, unaware of the realtime application that now controls the interrupts. To ensure that the realtime tasks in the system will finish before their deadlines, the standard Linux kernel is assigned the lowest priority by the RTLinux kernel and is therefore always preempted by any realtime task. The standard Linux kernel cannot preempt any realtime task sharing the same CPU.

Scheduling algorithms in RTLinux

RTLinux is a module-based system, which means that a user can replace modules in the system if one module does not meet his or her requirements. For example, RTLinux lets the user change the module that handles the scheduling algorithm to one of the user's choice. There is a standard scheduling algorithm implemented in RTLinux: the native CPU scheduling algorithm is a preemptive algorithm which runs a realtime task until a task with higher priority becomes ready to run. The currently running task is then preempted and its context is stored, and the task with the highest priority in the ready-to-run queue is executed until it finishes or another task with higher priority enters the ready-to-run queue. Realtime tasks created with this scheduler have fixed priorities, which means that it is

not possible to change the priority of a task while it is running (as opposed to VxWorks, which implements dynamic priorities; the EDF scheduler in RTLinux also supports dynamic priority scheduling). The scheduler can perform several operations on tasks, such as running, suspending, waking up and deleting tasks [3]. There also exist two alternative CPU scheduling modules in RTLinux, an EDF scheduler and a rate-monotonic scheduler; it is up to the user to configure the system to use either of these alternative scheduling algorithms.

Rate-Monotonic Algorithm (RMA)

Rate-monotonic scheduling works in the way that when a task enters the system it is assigned a static priority based on the period time of the task. RMA only supports periodic tasks, which means that a task requests the CPU periodically. A shorter period time results in a higher priority and a longer period time results in a lower priority. This means that a task that needs the CPU often is assigned a high priority. RMA uses preemption, which means that if a task with higher priority than the one currently running enters the ready-to-run queue, the scheduler gives the CPU to the task with the higher priority and suspends the current task. In systems where safety is crucial, one can assume that all tasks are predictable, and therefore a policy that uses static priorities makes sense if you want to be able to guarantee that tasks finish before their deadlines [4]. If you can predict all tasks running in the system you also know the processing time of every task in the system; RMA assumes that the processing time for a periodic task is the same every time it is run. A major drawback of rate-monotonic scheduling is that as the number of tasks in the system increases and approaches infinity, the guaranteed schedulable CPU utilization converges to about 69% [7]. The utilization bound is given by the formula n(2^(1/n) − 1), where n is the number of tasks sharing a CPU.
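The utilization bound above is easy to evaluate numerically. The following Python sketch (illustration only, not part of RTLinux) computes the bound for n tasks and applies it as a sufficient schedulability test to a set of periodic tasks given as (execution time, period) pairs:

```python
def rma_utilization_bound(n):
    """Liu & Layland bound: n periodic tasks are guaranteed schedulable
    under rate-monotonic priorities if their total CPU utilization does
    not exceed n * (2**(1/n) - 1). Tends to ln 2 (about 0.693) as n grows."""
    return n * (2 ** (1 / n) - 1)

def rma_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs.
    Sufficient test only: a set that exceeds the bound is not proven
    unschedulable, but the guarantee no longer holds."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rma_utilization_bound(len(tasks))
```

For a single task the bound is 100%, for two tasks about 82.8%, and it falls toward 69% as the number of tasks grows.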
If you schedule n tasks and their total CPU utilization exceeds the bound given by the formula for that number of tasks, rate-monotonic scheduling cannot guarantee that the tasks can be scheduled so that they all finish before their deadlines. And if rate-monotonic scheduling cannot schedule the tasks, it cannot be done with any other scheduler that uses a static priority policy.

Earliest Deadline First (EDF)

EDF scheduling works in the way that it assigns priorities based on the deadline of a task [5]. A task whose deadline is close is assigned a higher priority than a task with a deadline further away. Priorities are assigned dynamically, which means that the priority of a task may be changed during runtime. EDF implements preemption, which means that when a task becomes ready to run and is assigned a priority higher than that of the task currently running, the scheduler gives the CPU to the task with the higher priority. In the implementation of EDF in RTLinux, each task has two attributes used when scheduling the tasks: a priority number and a deadline by which the task must finish. The deadline can be of two types, a relative deadline and an absolute deadline. When a task becomes ready to run it has to announce its deadline to the scheduler, so that its priority can be updated if the scheduler determines that the task might otherwise not finish before its deadline. EDF does not require the tasks to be periodic, nor does a task need to have the same CPU processing time each time it is executed. This is an advantage compared to using the RMA scheduling algorithm. Another advantage EDF has


over RMA is the CPU utilization, where EDF in theory can achieve 100% utilization of the CPU at all times and still guarantee that all tasks finish before their deadlines [7].
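The two defining properties of EDF described above can be shown in a few lines of Python (an illustration, not RTLinux code): for independent periodic tasks whose deadlines equal their periods, EDF succeeds exactly when total utilization is at most 1, and at every scheduling decision it simply dispatches the ready task with the nearest absolute deadline.

```python
def edf_feasible(tasks):
    """Exact schedulability test for independent periodic tasks with
    deadline == period: EDF can meet all deadlines if and only if the
    total CPU utilization (sum of execution_time / period) is <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

def edf_pick(ready):
    """ready: list of (absolute_deadline, task_name) pairs.
    EDF always dispatches the task whose absolute deadline is nearest."""
    return min(ready)[1]
```

Note the contrast with the RMA bound: a task set with utilization of, say, 95% fails the RMA guarantee for two or more tasks but is still feasible under EDF.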

VxWorks Introduction

At the time of writing, the current version of VxWorks is 6.3, which is worth noting since some things may change in future versions of VxWorks. VxWorks is a hard realtime operating system developed by Wind River Systems. VxWorks is commercially developed and used in a variety of applications, such as automobiles, network equipment and so on. VxWorks is built around a microkernel called Wind, which is designed with a minimum number of features. Features that are needed and included in the Wind microkernel are such things as scheduling, interrupts, processes and interprocess communication [7]. In addition to the small number of features included in the kernel, there are several libraries that provide support for TCP/IP, POSIX and more. By including different libraries the user can expand VxWorks with additional features. This design choice makes it possible to keep the size of VxWorks no larger than necessary, which is critical for many embedded systems where the amount of space available for the operating system is limited.

CPU scheduling in VxWorks

VxWorks gives the user the liberty of setting up the CPU scheduling to match the user's needs and demands. In VxWorks there is a scheduling framework that makes it possible for the user to implement his own CPU scheduling algorithm; however, VxWorks comes with two different CPU scheduling algorithms already implemented [6]. One algorithm, the VxWorks Traditional Scheduler, is a priority-based preemptive algorithm with a round-robin extension; this is the native CPU scheduling algorithm in VxWorks and will be used if the user does not configure VxWorks to use another algorithm for CPU scheduling. The other algorithm is for scheduling of POSIX threads in VxWorks.

VxWorks Traditional Scheduler

The native scheduler in VxWorks is priority based and preemptive, which means that a thread with higher priority will preempt the CPU when it is ready to run.
So the CPU will always be occupied by the thread that has the highest priority and is ready to run. As said, when a thread with higher priority than the thread currently being executed becomes ready to run, it will stop the current thread and take the CPU. The priority of a thread is set by the user when executing the thread for the first time; however, VxWorks does have support for dynamic priorities, which means that a thread's priority can be changed during runtime. Each thread is given a priority number from 0 to 255, where 0 is considered the highest priority and 255 the lowest. To avoid loss of data when switching between threads, the system saves the currently running thread's context so that it can start from where it was stopped once it is given back the CPU. So a thread that has been stopped, because a thread with higher priority became ready to run, will, if there is no other thread with higher priority in the ready-to-run queue, continue to execute where it last left off, without any loss of data. However, there is a problem that may occur with a priority-based preemptive scheduler. If there are several threads with the same priority in the system, sharing one CPU, one of the threads may always be the one getting the CPU, not letting the other threads with the same priority get any CPU time. To solve this problem, VxWorks can be configured so that the traditional scheduler uses

a round-robin extension; it is up to the user to enable this feature or not.

Round-Robin Scheduling extension

When several threads with the same priority are ready to run, a problem may occur: one of the threads may always be the one getting the CPU. Round-robin is an algorithm used to solve this problem through time-slicing. If there are several threads that have the same priority, they are seen as a group of threads; the system then only allows one of the threads in the group to execute for a certain amount of time, and when the thread has been running for the time given, the scheduler stops the thread, saves its context and starts executing the next thread in that group. To decide which of the threads in the group to run, and in what order, the round-robin algorithm uses simple first-in, first-out scheduling: the thread that was the first one to enter the system will be the first thread served by the scheduler. When all the threads in the group have gotten CPU time, the algorithm continues with the first thread in the group. This makes it impossible for one thread to be the only thread of that priority getting CPU time. This does not, however, affect the priority-based preemptive scheduling algorithm; a thread with higher priority can still preempt the CPU from the currently running thread.
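The behaviour described above can be sketched as a toy simulation. The Python below is an illustration only (real VxWorks scheduling is configured with kernel calls, not written like this); it assumes the VxWorks convention that a lower number means a higher priority, picks the highest-priority group, and hands out fixed-length time slices to its members in FIFO order:

```python
from collections import deque

def round_robin_timeline(threads, num_slices):
    """threads: dict mapping thread name -> priority (0 = highest,
    VxWorks style). Returns the order in which time slices are granted:
    only the highest-priority group runs, and its members take turns,
    one slice each, in first-in first-out order."""
    highest = min(threads.values())
    group = deque(name for name, prio in threads.items() if prio == highest)
    timeline = []
    for _ in range(num_slices):
        current = group.popleft()   # next thread in FIFO order
        timeline.append(current)    # it runs for one time slice...
        group.append(current)       # ...then rejoins the back of the group
    return timeline
```

In the sketch, two threads at priority 10 alternate slice by slice, while a priority-50 thread never runs, which mirrors the text: round-robin shares the CPU within a priority level but does not weaken priority preemption across levels.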

The aim of this paper is to compare the design choices taken when implementing CPU scheduling in RTLinux and VxWorks. Since both VxWorks and RTLinux give the user the liberty of implementing his own CPU scheduling algorithm, we can briefly say that, with the appropriate measures taken, there are no differences in the available choices when it comes to CPU scheduling in RTLinux and VxWorks. However, the native CPU scheduling algorithms in the two operating systems differ. While VxWorks has chosen a preemptive priority-based algorithm with a round-robin extension to handle the case where there are several threads with the same priority, RTLinux has chosen not to have a round-robin extension. If we were to compare only the native CPU scheduling solutions, we could say that VxWorks has taken care of one case where there can be problems while RTLinux ignores this case. However, both operating systems let the user extend the operating system with the CPU scheduling algorithm of his choice when needed, so even though VxWorks has a round-robin extension to take care of the case with several threads of the same priority, nothing stops the user from adding this feature to RTLinux. One big advantage RTLinux has over VxWorks is that it ships with the RMA and EDF schedulers already implemented, whereas in VxWorks the user would have to implement them himself if he wants to use EDF or RMA. Also, VxWorks is not guaranteed to work with such a new scheduler, while these schedulers are supported as standard in RTLinux. If one were to compare the CPU scheduler in RTLinux that uses static priority scheduling, RMA, with the native scheduler in VxWorks when it uses static priorities, one would find that RMA is optimal, which means that if you cannot schedule a set of tasks with RMA you cannot schedule them with any other scheduler that uses static priority scheduling either, including the native scheduler in VxWorks.
So the big difference between RMA and the native scheduler in VxWorks, when using static priorities, lies in the way tasks are assigned priorities. With RTLinux and RMA, the priorities are set according to period time by the RTLinux core, while in VxWorks the user sets them himself when executing a thread for the first time. The VxWorks user must therefore ensure that a task has as high a priority as necessary to finish before its deadline, whereas in RTLinux running RMA the user only has to look at the CPU utilization of his set of tasks to be sure that RMA can schedule them so that they all meet their respective deadlines. But where RMA requires the tasks to be periodic, the native scheduler in VxWorks has no such requirement; the tasks can request the CPU at any time. So if the user wants to use RTLinux with tasks that are not periodic, he will have to switch to EDF to maintain the realtime properties of the system, while in VxWorks he would not have to change the scheduler: the same native scheduler can be used whether or not the tasks are periodic. If we compare the EDF scheduler in RTLinux to the native scheduler in VxWorks when using dynamic priority scheduling, the big difference again lies in the way priorities are set. Where EDF in RTLinux uses the deadline to assign priorities, the native scheduler in VxWorks relies on the user to set the priorities in a way that allows the set of tasks to be scheduled so they all meet their deadlines. We also know that EDF is optimal, which means that if a set of tasks cannot be scheduled using EDF it cannot be scheduled with any other

scheduler using dynamic priority scheduling.
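The gap between the static and dynamic approaches can be made concrete with a small worked example in Python. The task set is hypothetical, chosen for illustration: two periodic tasks given as (execution time, period) pairs, with deadlines equal to periods. Their total utilization passes EDF's exact test but exceeds the RMA guarantee bound (and for this particular set a response-time check shows the longer-period task would in fact miss its deadline under rate-monotonic priorities):

```python
# Hypothetical task set: (execution_time, period), deadline == period.
tasks = [(2, 5), (4, 7)]

# Total CPU utilization: 2/5 + 4/7 = 34/35, about 97%.
u = sum(c / t for c, t in tasks)

# EDF: exact test, feasible iff utilization <= 1.
edf_ok = u <= 1.0

# RMA: sufficient Liu & Layland bound, n * (2**(1/n) - 1), ~82.8% for n = 2.
n = len(tasks)
rma_bound = n * (2 ** (1 / n) - 1)
rma_bound_ok = u <= rma_bound
```

Here `edf_ok` is true and `rma_bound_ok` is false: EDF can schedule a set that no static-priority scheduler, RMA or the manually prioritized native VxWorks scheduler alike, is guaranteed to handle.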

The basics of the CPU scheduling in both VxWorks and RTLinux are the same: they both use a preemptive priority-based solution. The difference lies in the way that priorities are set. EDF in RTLinux sets them according to deadline, and RMA in RTLinux sets them according to period time, while the native schedulers in RTLinux and VxWorks depend on the user to set the priorities in a way that allows the scheduler to schedule the set of tasks so that they all meet their respective deadlines.

[1] http://www.tldp.org/HOWTO/RTLinux-HOWTO.html
[2] http://www.fsmlabs.com/the-rtLinux-approach-to-real-time.html
[3] http://rtLinux.lzu.edu.cn/documents/documentation/RTLManual/RTLManual.html
[4] http://rtportal.upv.es/apps/edf-sched/rtlinux-edf-sched-1.0/doc/edfsched.pdf
[5] http://dslab.lzu.edu.cn/docs/publications/Analyzing_RTLinuxGPL_Source_Code_for_Education.pdf
[6] VxWorks Kernel Programmer's Guide, 6.3
[7] Operating System Concepts (Seventh Edition), by Silberschatz, Galvin and Gagne. John Wiley & Sons, Inc. ISBN 0-471-69466-5