
Multitasking and Resource Sharing in Embedded Systems:

or Teaching a microprocessor to walk and chew gum at the same time

Prateek Agarwal(200101017) Gauravi Dubey (200101186) Ankur Pandey (200101236)

Embedded Systems and Desktop OS: Differences


- Desktops cater to multipurpose needs (the family wagon: satisfy all)
- Embedded systems take the race-car approach:
  - Narrow range of requirements to meet (application-specific architecture)
  - Optimized usage of limited resources
  - Resources: storage, power, computation capability

Designing for real time: Control Loop


- The software simply has a loop.
- The loop calls subroutines, each subroutine concerned with a part of the hardware or software.
- Interrupts generally set flags or update counters that are read by the rest of the software.
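A minimal sketch of the flag-setting pattern above, in C. The names (`timer_isr`, `control_loop_step`) are mine; on real hardware the ISR would be registered as an interrupt vector, but here it is a plain function so the sketch stays portable:

```c
#include <stdbool.h>

/* Flag set by a (hypothetical) timer ISR; 'volatile' prevents the
   compiler from caching it in a register across loop iterations. */
static volatile bool tick_flag = false;
static volatile unsigned tick_count = 0;

/* Stand-in for the timer interrupt handler. */
void timer_isr(void)
{
    tick_count++;
    tick_flag = true;
}

/* One pass of the control loop: poll the flag the ISR set.
   Returns true if a pending tick was consumed. */
bool control_loop_step(void)
{
    if (tick_flag) {
        tick_flag = false;
        /* ... call the subroutine that handles this event ... */
        return true;
    }
    return false;
}
```

The main loop would call `control_loop_step()` repeatedly; between events the CPU does nothing but re-read the flag, which is exactly the polling waste criticized later in these slides.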

Designing for real-time


Control loop approach
```c
do {
    delay_ms(100);    /* hypothetical delay routine; crude pacing */
    scan(channel1);
    scan(channel2);
} while (1);
```

[Diagram: the loop scans Channel 1 and Channel 2, and plot() displays the results]

Problems:
- Inaccurate timing
- Unpredictable response time as more components are added
- Even if we add flags to check for the occurrence of an event, most of the CPU time is wasted polling those flags

Control loop approach


- Suitable for applications with few deadlines
- Suitable for very small embedded processors with limited resources
- Not suitable for time-deterministic systems
- What next?

Solution: Multitasking
- Modular approach: divide the overall activity into independent tasks.
- Tasks can be linked directly with interrupt handlers so that events synchronize directly with the code.
- Task execution depends on event occurrence.
- Activities requiring more attention are assigned to higher-priority tasks.

Solution: Multitasking
Types of multitasking
Cooperative multitasking
- Multiple tasks execute by voluntarily ceding control to one another.
- One defines a series of tasks, and each task gets its own subroutine stack; an idle task calls an idle routine.
- Another architecture uses an event queue, removing events and calling subroutines based on their values.
- Pros and cons: same as the control loop, but a more modular approach. Because of the non-deterministic timing, it is rarely used in hard real-time embedded systems.
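The event-queue architecture mentioned above can be sketched as a ring buffer plus a dispatcher; all names here (`enqueue_event`, `dispatch_one`, `on_button`) are illustrative, not from any particular RTOS:

```c
#include <stddef.h>

#define QUEUE_SIZE 8

typedef void (*handler_t)(void);

static int    queue[QUEUE_SIZE];
static size_t head = 0, tail = 0;

/* Example handler and a counter so the sketch is observable. */
static int handled = 0;
static void on_button(void) { handled++; }

/* Add an event code; returns -1 if the queue is full. */
int enqueue_event(int ev)
{
    size_t next = (tail + 1) % QUEUE_SIZE;
    if (next == head) return -1;        /* queue full */
    queue[tail] = ev;
    tail = next;
    return 0;
}

/* Remove one event and call its subroutine; a task runs to
   completion and yields only by returning (cooperative).
   Returns the event code handled, or -1 if the queue is empty. */
int dispatch_one(handler_t handlers[], int n_handlers)
{
    if (head == tail) return -1;        /* nothing pending: idle task runs */
    int ev = queue[head];
    head = (head + 1) % QUEUE_SIZE;
    if (ev >= 0 && ev < n_handlers)
        handlers[ev]();
    return ev;
}
```

Note the cooperative weakness: if `on_button` loops forever, nothing else ever runs, which is why timing is non-deterministic.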

Multitasking types (contd.)
Pre-emptive multitasking (Let the OS Be the Boss!)

- Used in most embedded systems. Time slices are allotted to processes.
- A process is context-switched with the next process in the scheduling queue for the following reasons:
  - the process has consumed its time slice, and the system clock interrupt pre-empts it
  - the process enters a wait state (often due to planned sleeping, waiting for an I/O event, or mutual exclusion)
  - a higher-priority process becomes ready for execution, which causes a pre-emption
  - the process gives away its time slice voluntarily
  - the process terminates itself

How is multitasking implemented in embedded systems?


High-priority tasks are executed first, since embedded systems are time-constrained. If the highest-priority task is in a waiting state, it is passed over and the next-highest-priority ready task runs, and so on.
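The dispatch rule just described can be sketched as a small selection function; the task table, state names, and `pick_next` are my own illustration, not a real scheduler API:

```c
/* Among READY tasks, the highest-priority one is chosen; a WAITING
   task is skipped even if it has the top priority. */
typedef enum { READY, WAITING, DONE } state_t;

typedef struct {
    const char *name;
    int         priority;   /* larger value = higher priority */
    state_t     state;
} task_t;

/* Returns the index of the task to run next, or -1 if none is ready. */
int pick_next(task_t tasks[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (tasks[i].state != READY)
            continue;                    /* waiting tasks are skipped */
        if (best < 0 || tasks[i].priority > tasks[best].priority)
            best = i;
    }
    return best;
}
```

A real RTOS keeps per-priority ready queues instead of scanning, but the policy is the same.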

Models for multitasking


Multithreading:
- Multiple threads of operation within a single process.
- Each task shares the code and primary data space; any global variable is accessible to all tasks.
- Different stacks for different tasks are the only differentiating factor.
- Pointers to data items can be used freely in this environment.
- No need for virtual-to-physical memory translation.
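A small POSIX-threads sketch of the shared-data point above: two threads touch the same global through plain pointers, with a mutex making the result deterministic. The function names `worker` and `run_shared_counter_demo` are mine:

```c
#include <pthread.h>

/* Shared global: visible to every thread, no address translation. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                 /* same variable from both threads */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Spawns two workers on the same data and returns the final count.
   Each thread gets its own stack; everything else is shared. */
long run_shared_counter_demo(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Without the mutex the increments would race, which is exactly the resource-sharing problem the later slides address.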

Models for multitasking


Multiprocessing:
- Each task has a distinct code and data space; boundaries are enforced by hardware memory management.
- Increased overhead due to memory translation and context switching.
- Essentially this is desktop-style computing done on embedded systems, but it is necessary for applications with high-reliability requirements.

Memory Address Translation in Multitasking


Legacy Systems (Real Mode)
- 1 MB of real memory.
- A segment register and an offset are used: Address = (SegmentRegister << 4) + Offset.
- 20 bits of address give 1 MB of memory.
- Overflow handling: the carry bit (bit 20) is disregarded.
- A single segment allows a range of 64 KB (2^16 bytes).
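The formula above can be written out directly; the function name is mine, and the final mask reproduces the discarded carry bit:

```c
#include <stdint.h>

/* Real-mode address calculation:
   physical = (segment << 4) + offset, truncated to 20 bits,
   so a carry into bit 20 wraps around past the 1 MB boundary. */
uint32_t real_mode_address(uint16_t segment, uint16_t offset)
{
    uint32_t linear = ((uint32_t)segment << 4) + offset;
    return linear & 0xFFFFF;   /* keep 20 bits: the 1 MB space */
}
```

For example, segment 0x1234 with offset 0x0010 gives 0x12340 + 0x10 = 0x12350, and segment 0xFFFF with offset 0x0010 wraps to address 0.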

Problems
Compilers, linkers, and loaders:
- Provide abstraction for the application developer
- Assign proper values to segment registers

The system programmer is not so privileged:
- A task can access another task's code or data
- A task can execute privileged instructions

Protected Mode
- Segment registers are indexes into special tables.
- All tables are initialized and maintained by the operating system but interpreted by the CPU.
- A segment base address and an 8-byte descriptor are used.
- The descriptor contains the segment size and flags indicating how the segment may be used.

Protected Mode (contd.)


- Whenever a segment register is referenced, the CPU accesses the related descriptor and analyzes its control bits.
- The check involves comparing the offset used in the address calculation against the segment's limit.
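A simplified model of that check, assuming a stripped-down descriptor. The struct layout is a sketch for illustration, not the real packed 8-byte x86 descriptor encoding:

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t base;      /* segment base address */
    uint32_t limit;     /* highest valid offset */
    bool     present;   /* descriptor flag: segment usable */
} descriptor_t;

/* Returns the linear address, or -1 on a protection violation
   (roughly what the CPU would raise as a general-protection fault). */
int64_t translate(const descriptor_t *d, uint32_t offset)
{
    if (!d->present || offset > d->limit)
        return -1;                       /* offset past segment limit */
    return (int64_t)d->base + offset;
}
```

The real CPU also checks access rights (read/write/execute) and privilege level against the descriptor's control bits; this sketch keeps only the limit check described above.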

Means to achieve the End


GDT
- A table of descriptors stored in memory.
- Its address is held in a special register, the GDTR.

LDT
- An LDT usually contains a task's code and data descriptors and is built when the task is loaded into memory.
- Only one LDT is active at a time.

Means to achieve the End


Task state segments (TSS)
- A TSS is a placeholder for all the registers of a task while that task is not running.
- A jump to a TSS selector makes a complete context switch from the current task to the task referred to by the selected TSS.

Privilege levels
Using level 0 for all system software and level 3 for all application software is very common.

Protected Mode: Address Translation

Resource sharing in embedded systems


Assigning priorities to tasks and pre-empting lower-priority tasks has long been the proposed solution. Problem: bounded priority inversion.
A low-priority task L is in its critical region. A higher-priority task H pre-empts L. However, H needs the same shared resource that L holds, so H must wait for L to finish its critical section. This is known as bounded priority inversion.

Illustration of bounded priority inversion

Priority inversion
Unbounded priority inversion: consider another task M, with higher priority than L but lower than H, that does not need the shared resource. While H waits for L to complete its critical section, M pre-empts L (since M doesn't need the shared resource). M finishes its operation, and only then does L regain the processor. After L finishes, H finally gets the shared resource. Several other tasks like M may lead to indefinite blocking of H. This type of blocking is called unbounded priority inversion.
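The L/M/H scenario can be reproduced in a tiny discrete-time simulation. All the work amounts are invented for illustration; the point is that M's entire workload lands between H's pre-emption and H's first instruction:

```c
#include <stdbool.h>

/* L (low) already holds a lock H (high) needs; M (medium) needs no
   lock. Without priority inheritance, M pre-empts L and delays H. */
enum { LOW, MED, HIGH, NTASKS };

typedef struct {
    int  priority;     /* larger value = higher priority */
    int  work_left;    /* time units until the task finishes */
    bool needs_lock;   /* cannot run while the lock is held elsewhere */
} task_t;

/* Runs the system to completion; returns the time step at which the
   high-priority task first gets to execute. */
int simulate(void)
{
    task_t t[NTASKS] = {
        [LOW]  = {1, 3, false},   /* already inside its critical section */
        [MED]  = {2, 5, false},
        [HIGH] = {3, 4, true},
    };
    int lock_owner = LOW;         /* -1 means the lock is free */
    int h_start = -1;

    for (int step = 0; ; step++) {
        /* pick the highest-priority task that can run */
        int run = -1;
        for (int i = 0; i < NTASKS; i++) {
            if (t[i].work_left == 0) continue;
            if (t[i].needs_lock && lock_owner != i && lock_owner != -1)
                continue;                      /* blocked on the lock */
            if (run < 0 || t[i].priority > t[run].priority)
                run = i;
        }
        if (run < 0) return h_start;           /* everyone finished */
        if (run == HIGH && h_start < 0) h_start = step;
        if (--t[run].work_left == 0 && lock_owner == run)
            lock_owner = -1;                   /* leaving critical section */
    }
}
```

H cannot start until step 8: M's 5 units plus L's remaining 3, even though L's critical section alone would have cost only 3. Add more M-like tasks and the delay grows without bound.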

Illustration of unbounded priority inversion

Mars Pathfinder finds Priority inversion


- In 1997, the Mars Pathfinder mission nearly failed because of an undetected priority inversion: while the spacecraft was collecting meteorological data on Mars, it began experiencing system resets, losing data.
- Pathfinder contained an information bus (the shared resource).
- The bus management task was a high-priority one, while the meteorological task ran as an infrequent, low-priority task. The spacecraft also ran a medium-priority, longer-running communications task.
- The communications task pre-empted the meteorological task, thereby blocking the bus management task.
- A watchdog timer detected no activity on the bus and initiated a total system reset.

Tackling priority inversion


Priority ceiling protocol:
- Look into the content of each task; analyze each critical section and find the highest priority among the tasks that may access it. Call this value p (the resource's priority ceiling).
- When a task acquires the resource, raise its priority to p + 1.
- When the task releases the resource, restore its original priority.
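A minimal sketch of that acquire/release rule, assuming the ceiling has already been computed by static analysis; the struct and function names are illustrative:

```c
#include <stddef.h>

typedef struct {
    int priority;         /* current (possibly boosted) priority */
    int saved_priority;   /* original priority, restored on release */
} task_t;

typedef struct {
    int     ceiling;      /* highest priority of any task that uses it */
    task_t *owner;
} resource_t;

/* Acquire: boost the task above every possible contender. */
void ceiling_acquire(resource_t *r, task_t *t)
{
    r->owner = t;
    t->saved_priority = t->priority;
    t->priority = r->ceiling + 1;    /* now outranks all contenders */
}

/* Release: drop back to the original priority level. */
void ceiling_release(resource_t *r, task_t *t)
{
    t->priority = t->saved_priority;
    r->owner = NULL;
}
```

Because the holder outranks every task that could want the resource, no contender ever even starts waiting on it, which is how nested-lock deadlocks are prevented.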

Illustration of Priority Ceiling Protocol

Priority ceiling protocol (contd.)


- Static analysis of the application is required to determine the priority ceiling of each shared resource; this analysis can become difficult for a complex application.
- There is significant overhead in implementing the protocol, so average-case response time increases.
- On the other hand, the resource-holding task runs at the highest contending priority, so no other task can contend for that resource, preventing nested locks from developing.

Priority inheritance protocol


- Uses dynamic priority adjustment.
- When a low-priority task A acquires a shared resource, it continues running at its original priority level.
- If a high-priority task B tries to access that resource, the priority of A is raised to that of B.
- Once the resource is released, A drops back to its original priority level, permitting B to acquire the resource.
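The same acquire/block/release cycle, sketched in C for contrast with the ceiling version; again the names are mine and a real RTOS would fold `block_on` into its mutex-lock path:

```c
#include <stddef.h>

typedef struct {
    int priority;        /* current (possibly inherited) priority */
    int base_priority;   /* the task's own priority */
} task_t;

typedef struct {
    task_t *owner;
} resource_t;

/* Acquire: no boost yet; the holder keeps its own priority. */
void acquire(resource_t *r, task_t *t)
{
    r->owner = t;
}

/* Called when 'blocked' fails to get the resource: the holder
   inherits the blocked task's higher priority. */
void block_on(resource_t *r, task_t *blocked)
{
    if (r->owner != NULL && blocked->priority > r->owner->priority)
        r->owner->priority = blocked->priority;
}

/* Release: drop back to the base priority. */
void release(resource_t *r)
{
    r->owner->priority = r->owner->base_priority;
    r->owner = NULL;
}
```

Unlike the ceiling protocol, the boost happens only when contention actually occurs, which is why the uncontended case pays almost nothing.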

Illustration of Priority inheritance protocol

Priority inheritance protocol(contd.)


- Because the priority adjustment is dynamic, no static analysis of the application is required.
- Since the majority of resources are never contended, it has good average-case performance.
- However, as the number of tasks and resources grows, an improperly implemented protocol can still lead to priority inversion.

Complexities in priority inheritance protocol

References
- How to Use Priority Inheritance
- Priority Inheritance Protocols: An Approach to Real-Time Synchronization
- The Priority Ceiling Protocol: A Method for Minimizing the Blocking of High-Priority Ada Tasks
- What Really Happened on Mars?
- Embedded x86 Programming
- Working in the Protected Mode Environment
