Fig 1. (a) Current program context (b) new program executes with the new context of the called function or
routine (c) context switching for the new routine and another switch on return from the routine (d) context
switching for the new routine and another switch on return or in between, on a call to another routine.
The context must be saved if a function or routine left earlier has to run again from the state in which it was
left. When there is a call to a function (called a routine in assembly language, a function in C and C++, a
method in Java; also called a task, process or thread when it runs under supervision of the OS), the function,
ISR or exception handling function executes in three main steps:
1) Save all the CPU registers, including the processor status word, registers and the current address for
the next instruction in the PC. Saving the address in the PC onto a stack is required if there is no link
register to point to the PC of the left instruction of the earlier function. Saving facilitates the return
from the new function to the previous state.
2) Load the new context to switch to the new function.
3) Readjust the contents of the stack pointer and execute the new function.
These three actions are known as context switching. Fig 1(b) shows a current program's context switching to
the new context.
The last instruction (action) of any routine or function is always a return.
The following steps occur during return from the called function.
1) Before the return, retrieve the previously saved status word, registers and other context parameters.
2) Retrieve into the PC the saved PC (address) from the stack, load the other parts of the saved context
from the stack and readjust the contents of the stack pointer.
3) Execute the remaining part of the function which called the new function.
These three actions are also known as context switching. Fig 1(c) and (d) show context switching for a new
routine and another context switch on return or on an in-between call to another routine.
SCHEDULING POLICIES
Cooperative Scheduling of a Ready-Tasks List:
The scheduler's essential role is to decide which task should run next.
Cooperative means that each ready task cooperates to let the running one finish. None of the tasks blocks
anywhere between its ready and finished states. Tasks are serviced in the order in which they are initiated on
interrupt and placed in the ready list. In effect, a task's priority parameter is its position in the queue.
Figure (a) shows a scheduler that inserts the ready tasks into a list for sequential execution in the cooperative
model. The program counter (PC) changes whenever the CPU starts executing another process. Fig (b) shows
how the PC changes on a switch to another context. The scheduler switches the context so that the different
tasks execute sequentially; the scheduler calls them from the list one by one in a circular queue.
Fig (a) An OS scheduler that inserts the ready tasks into a list for sequential execution in a cooperative
model. (b) Program counter assignment (switch) at different times, when the scheduler calls the tasks one by
one in the circular queue from the list.
Round Robin Time Slicing Scheduling:
A task may not complete in its allotted time frame. Round robin means that each ready task runs in
turn, in a cyclic queue, for a limited time slice Tslice, where Tslice = Tcycle / N and N = number of tasks. It is
a widely used model in traditional OSs. A scheduler for time-constrained tasks in the round robin mode can
be understood through a simple example.
Suppose that every 20 ms a stream of coded messages arrives at port A of an embedded system. Each
message is decrypted and then retransmitted to the port after encoding. The multiple processes consist of five
tasks: C1, C2, C3, C4 and C5, as follows.
1)
2)
3)
4)
5)
Figure (a) shows the five tasks, C1 to C5, that are to be scheduled. Fig (b) shows the five contexts in five
time slices, between 0 and 4 ms, 4 and 8 ms, 8 and 12 ms, 12 and 16 ms, and 16 and 20 ms, respectively. Let
the OS initiate C1 to C5. Figure (b) shows the real-time schedules, process contexts and saved contexts at the
different time slices.
1) At the first instance (first row), the context is C1 and task C1 is running.
2) At the second instance (second row), after 4 ms, the OS switches the context to C2; task C1 is
finished and C2 is running. As task C1 is finished, nothing is saved on the task C1 stack.
3) At the third instance (third row), the OS switches the context to C3 on the next timer interrupt,
which occurs 8 ms after the start of task C1. Task C1 is finished, C2 is blocked and C3 is running.
Context C2 is saved on the task C2 stack because C2 is in the blocked state.
4) At the fourth instance (fourth row), the OS switches the context to C4 on the timer interrupt that
occurs 12 ms after the start of task C1. Task C1 is finished, C2 and C3 are blocked and C4 is
running. Contexts C2 and C3 are on the task C2 and C3 stacks, respectively.
5) At the fifth instance (fifth row), the OS switches the context to C5 on the timer interrupt that
occurs 16 ms after the start of task C1. Task C1 is finished; C2, C3 and C4 are blocked and C5 is
running. Contexts C2, C3 and C4 are on the task C2, C3 and C4 stacks, respectively.
6) On the timer interrupt at the end of 20 ms, the OS switches the context back to C1. As task C5 is
finished, only the contexts C2, C3 and C4 remain on the stacks. Task C1 runs as per its schedule.
Preemptive Scheduling:
Can a higher priority task preempt a lower priority one by blocking it? Yes, a preemptive
scheduler can block a running task at the end of an instruction by a message to the task and
let the one with the higher priority take control of the CPU.
Now consider a preemptive scheduler through a simple example. Suppose a stream of coded
messages arrives at port A of an embedded system; each message is decrypted and then retransmitted to
port B after encoding. We consider five tasks, B1, B2, B3, B4 and B5. The order of their priorities is as
follows:
1) Task B1:
2) Task B2:
3) Task B3:
4) Task B4:
5) Task B5:
Processes often need to communicate with each other. Interprocess communication mechanisms are
provided by the operating system. In general, a process can send a communication in one of two ways:
blocking or non-blocking.
Blocking: After sending a blocking communication, the process goes into the waiting state until it receives a
response.
(Or)
The sending processing unit waits until the receiving processing unit is ready for the data transfer.
The advantage of blocking communication is that it does not require any storage elements.
As a disadvantage, suspending task execution may degrade system performance.
Non-Blocking: This communication allows the process to continue execution after sending the
communication.
(Or)
In this communication, the receiving and sending processing units do not have to be synchronized.
It requires some additional storage medium to store the data until it is read, but it normally results in better
system performance.
Both types of communication are useful. There are two major styles of interprocess communication: shared
memory and message passing.
Shared Memory Communication:
The figure below illustrates how shared memory communication works in a bus-based system. Two
components, such as a CPU and an I/O device, communicate through a shared memory location. The software
on the CPU has been designed to know the address of the shared location; the shared location has also been
loaded into the proper register of the I/O device. If, as in the figure, the CPU wants to send data to the device,
it writes to the shared location. The I/O device then reads the data from that location. The read and write
operations are standard and can be encapsulated in a procedural interface.
In this model, a shared storage medium is used to interchange the data. A shared medium is said to be
persistent when any data written by one task in certain positions remains intact until the same or another task
writes new data in the same positions. A memory is persistent by default. A shared medium is said to be
non-persistent when any data written by one task is lost when the same or another task reads the data.
Examples of non-persistent media are stacks and buses.
Message Passing:
Prepared by Hari M.tech, Dept of ECEG, ESU
In the message passing communication model, a channel is used to interchange data. The channel can be
a FIFO, a bus, etc. In each of the communicating tasks, send/receive mechanisms have to be defined. A
communication channel can be unidirectional, if one task always sends and the other always receives, or
bidirectional, if both tasks may send and receive. Depending on the tasks using the channel, it can be
point-to-point, if only two tasks are connected through the channel, or multiway, if more than two tasks use
it. The message passing communication through the channel is said to be blocking if the sending task has to
wait until the receiving task reaches the point at which it is prepared to receive the data.
INTERPROCESS COMMUNICATION:
1) MAILBOX
In some RTOSs, the number of messages in each mailbox is unlimited. There is a limit to the total
number of messages that can be in all of the mailboxes in the system, but these messages will be distributed
into the individual mailboxes as they are needed.
In some RTOSs, you can prioritize mailbox messages. Higher-priority messages will be read before
lower-priority messages, regardless of the order in which they are written into the mailbox.
PIPES