
CHAPTER-5

REAL TIME OPERATING SYSTEMS


CONTEXT SWITCHING MECHANISM:
In a CPU, the term "context" refers to the contents of the registers and the program counter at a specific moment
in time. A register is a small amount of fast memory inside the CPU that holds data the current instruction is
working on. The program counter, also known as the instruction address register, holds the address of the
instruction to be executed immediately after the current one.
A context switch is the process of storing and restoring the state (more specifically, the
execution context) of a process, so that execution can be resumed from the same point at a later time. This
enables multiple processes to share a single CPU and is an essential feature of a multitasking operating
system.
(Or)
A context switch (also sometimes referred to as a process switch or a task switch) is the switching of
the CPU (central processing unit) from one process or thread to another.
A context switch can be performed entirely in hardware. Older CPUs, such as those in the x86 series,
support switching that way. However, most modern systems perform context switches by means of
software. A CPU can perform hundreds or even thousands of context switches per second. Therefore, the
user gets the impression that the computer is performing multiple tasks in a parallel fashion, when the CPU
actually alternates or rotates among the tasks at a high rate of speed.
Context Switching Mechanism:

Prepared by Hari M.tech, Dept of ECEG, ESU


Fig 1. (a) Current program context. (b) New program executes with the new context of the called function or
routine. (c) Context switching to the new routine and another switch on return from the routine. (d) Context
switching to the new routine and another switch on return or on an in-between call to another routine.
The context must be saved if a function or routine that was left earlier has to run again from the state at which it
was left. When there is a call to a function (called a routine in assembly language, a function in C and C++, a
method in Java, and also called a task, process or thread when it runs under supervision of the OS), the function
or ISR or exception-handling function executes in three main steps:
1) Save all the CPU registers, including the processor status word, the general registers and the current
address for the next instruction in the PC. Saving the PC onto a stack is required if there is no link
register to point to the PC of the instruction left in the earlier function. Saving facilitates the return from
the new function to the previous state.
2) Load the new context to switch to the new function.
3) Readjust the contents of the stack pointer and execute the new function.
These three actions are known as context switching. Fig 1(b) shows the current program's context switching to
the new context.
The last instruction (action) of any routine or function is always a return.
The following steps occur on return from the called function:
1) Before the return, retrieve the previously saved status word, registers and other context parameters.
2) Retrieve into the PC the saved PC (address) from the stack, load the other parts of the saved context
from the stack, and readjust the contents of the stack pointer.
3) Execute the remaining part of the function that called the new function.
These three actions are also known as context switching. Fig 1(c) and (d) show context switching to a new
routine and another context switch on return or on an in-between call to another routine.
SCHEDULING POLICIES
The Cooperative Scheduling of Ready Tasks List:
The scheduler's essential role is to decide which task should run next.
Cooperative means that each ready task cooperates to let the running one finish. No task blocks anywhere
between its ready and finished states. Tasks are serviced in the order in which they are initiated on interrupt and
placed in the ready list. We can say that a task's priority is set by its position in the queue.
Figure (a) shows a scheduler that inserts the ready tasks into a list for sequential execution in the
cooperative model. The program counter (PC) changes whenever the CPU starts executing another process.
Fig (b) shows how the PC changes on a switch to another context. The scheduler switches the context so that
the different tasks execute sequentially, as the scheduler calls them one by one from the list in a circular queue.


Fig (a) An OS scheduler that inserts the ready tasks into a list for sequential execution in a cooperative
model. (b) Program counter assignment (switch) at different times, when the scheduler calls the tasks one by
one in the circular queue from the list.
Round Robin Time Slicing Scheduling:
A task may not complete in its allotted time frame. Round robin means that each ready task runs in
turn, in a cyclic queue, for a limited time slice Tslice, where Tslice = Tcycle / N and N is the number of tasks.
It is a widely used model in traditional OSs. A scheduler for time-constrained tasks in the round robin mode
can be understood by a simple example.
Suppose that every 20 ms a stream of coded messages reaches port A of an embedded
system. Each message is decrypted and then retransmitted to port B after encoding. The multiple
processes consist of five tasks, C1 to C5, as follows.
1) Task C1: check for a new message at port A every 20 ms.
2) Task C2: read port A and put the message in a message queue.
3) Task C3: decrypt the new message from the message queue.
4) Task C4: encode the message from the queue.
5) Task C5: transmit the encoded message from the queue to port B.

Figure (a) shows the five tasks, C1 to C5, that are to be scheduled. Fig (b) shows the five contexts in five
time slices, between 0 and 4 ms, 4 and 8 ms, 8 and 12 ms, 12 and 16 ms, and 16 and 20 ms,
respectively. Let the OS initiate C1 to C5. Figure (b) shows the real-time schedules, process contexts and
saved contexts at the different time slices.
1) At the first instance (first row) the context is C1 and task C1 is running.
2) At the second instance (second row), after 4 ms, the OS switches the context to C2. Task C1 is
finished and C2 is running; because task C1 finished, nothing is saved on the task C1 stack.
3) At the third instance (third row), the OS switches the context to C3 on the next timer interrupt, which
occurs 8 ms after the start of task C1. Task C1 is finished, C2 is blocked and C3 is running.
Context C2 is saved on the task C2 stack because C2 is in the blocked state.


4) At the fourth instance (fourth row), the OS switches the context to C4 on a timer interrupt, which
occurs 12 ms after the start of task C1. Task C1 is finished, C2 and C3 are blocked and C4
is running. Contexts C2 and C3 are on the task C2 and C3 stacks, respectively.
5) At the fifth instance (fifth row), the OS switches the context to C5 on a timer interrupt, which
occurs 16 ms after the start of task C1. Task C1 is finished, C2, C3 and C4 are blocked and
C5 is running. Contexts C2, C3 and C4 are on the task C2, C3 and C4 stacks, respectively.
6) On a timer interrupt at the end of 20 ms, the OS switches the context back to C1. As task C5 is
finished, only the contexts C2, C3 and C4 remain on their stacks. Task C1 runs as per its schedule.

Pre-Emptive Scheduling:
Can a higher priority task preempt a lower priority one by blocking it? Yes, the preemptive
scheduler can block a running task at the end of an instruction by a message to the task and
let the one with the higher priority take control of the CPU.
Now, consider a preemptive scheduler with a simple example. Suppose a stream of coded
messages reaches port A of an embedded system. Each message is decrypted and then retransmitted
to port B after encoding. We consider five tasks, B1 to B5, in order of priority as follows:
1) Task B1: check for a message at port A.
2) Task B2: read port A.
3) Task B3: decrypt the message.
4) Task B4: encrypt the message.
5) Task B5: transmit the encoded message to port B.


Fig (a): First five tasks B1 to B5


1. At the first instance (first row) the context is B3 and task B3 is running.
2. At the second instance (second row) the context switches to B1: context B3 is saved on the
interrupt at port A, and task B1 has the highest priority. Now, task B1 is in a running state
and task B3 is in a blocked state. Context B3 is on the task B3 stack.
3. At the third instance (third row) the context switches to B2 on an interrupt, which occurs
only after task B1 finishes. Task B1 is in a finished state, B2 is in a running state and task
B3 is still in the blocked state. Context B3 is still on the task B3 stack.
4. At the fourth instance (fourth row) context B3 is retrieved and the context switches to
B3. Tasks B1 and B2, both of higher priority than B3, are finished. Task B3's state
changes from blocked to running.
5. At the fifth instance (fifth row) the context switches to B4. Tasks B1, B2 and B3, all of
higher priority than B4, are finished. B4 is now in a running state.
6. At the sixth instance (sixth row) the context switches to B5. Tasks B1, B2, B3 and B4, all
of higher priority than B5, are finished. B5 is now in a running state.

MESSAGE-PASSING vs. SHARED MEMORY COMMUNICATION



Processes often need to communicate with each other. Interprocess communication mechanisms are
provided by the operating system. In general, a process can send a communication in one of two ways:
blocking or non-blocking.
Blocking: After sending a blocking communication, the process goes into the waiting state until it receives a
response.
(Or)
The sending processing unit waits until the receiving processing unit is ready for the data transfer.
The advantage of blocking communication is that it doesn't require any storage elements.
As a disadvantage, suspending task execution may degrade system performance.
Non-Blocking: This communication allows the process to continue execution after sending the
communication.
(Or)
In this communication, the receiving and sending processing units do not have to be synchronized.
It requires some additional storage medium to store the data until they are read, but it normally results in
better system performance.
Both types of communication are useful. There are two major styles of interprocess communication: shared
memory and message passing.
Shared Memory Communication:
The figure below illustrates how shared memory communication works in a bus-based system. Two
components, such as a CPU and an I/O device, communicate through a shared memory location. The software
on the CPU has been designed to know the address of the shared location; the shared location has also been
loaded into the proper register of the I/O device. If, as in the figure, the CPU wants to send data to the device,
it writes to the shared location. The I/O device then reads the data from that location. The read and write
operations are standard and can be encapsulated in a procedural interface.

In this model, a shared storage medium is used to interchange the data. A shared medium is said to be
persistent when any data written by one task in certain positions remains intact until the same or another task
writes new data in the same positions. A memory is persistent by default. A shared medium is said to be
non-persistent when any data written by one task is lost when the same or another task reads the data.
Examples of non-persistent media are stacks and buses.

Message Passing:

In the message-passing communication model, a channel is used to interchange data. The channel may be
a FIFO, a bus, etc. In each of the communicating tasks, a send/receive mechanism has to be
defined. A communication channel can be unidirectional, if one task always sends and the other always
receives, or bidirectional, if both tasks may send and receive. Depending on the tasks using the channel, it
can be point-to-point, if only two tasks are connected through the channel, or multiway, if more than two tasks
use it. Message-passing communication through the channel is said to be blocking if the sending task
has to wait until the receiving task reaches the point at which it is prepared to receive the data.

INTERPROCESS COMMUNICATION:
1) MAILBOX

Fig: Intertask synchronization through the mailbox


Although some RTOSs allow a certain number of messages in each mailbox, a number that you can
usually choose when you create the mailbox, others allow only one message in a mailbox at a time. Once one
message is written to a mailbox under these systems, the mailbox is full; no other message can be written to
the mailbox until the first one is read.


In some RTOSs, the number of messages in each mailbox is unlimited. There is a limit to the total
number of messages that can be in all of the mailboxes in the system, but these messages will be distributed
into the individual mailboxes as they are needed.
In some RTOSs, you can prioritize mailbox messages. Higher-priority messages will be read before
lower-priority messages, regardless of the order in which they are written into the mailbox.
PIPES

Fig: Message PIPE

Fig: pipes for inter process communication.


Pipes are also much like queues. The RTOS can create them, write to them, read from them, and so on.
The details of pipes, however, like the details of mailboxes and queues, vary from RTOS to RTOS. Some
variations you might see include the following:
Some RTOSs allow you to write messages of varying lengths onto pipes (unlike mailboxes and
queues, in which the message length is typically fixed).
Pipes in some RTOSs are entirely byte-oriented: if Task A writes 11 bytes to the pipe and then Task B
writes 19 bytes to the pipe, then if Task C reads 14 bytes from the pipe, it will get the 11 that Task A wrote
plus the first 3 that Task B wrote. The other 16 that task B wrote remain in the pipe for whatever task reads
from it next.

