
Thread Questions

1. What happens to a thread when it exits (i.e., calls thread_exit())? What about when it sleeps?

Answer: When a thread calls thread_exit(), interrupts are turned off to avoid a race condition with a context switch. The thread then detaches and destroys its address space (saving the t_vmspace pointer to a local variable and setting the field to NULL before destroying it), decrements the relevant reference counts, and switches to the zombie state, which adds it to the zombies list. Its remaining resources are not actually destroyed at this point; that is done later by thread_destroy(). When a thread calls thread_sleep(), the address it is sleeping on is recorded as the current thread's sleep address (t_sleepaddr), the machine-independent context switch routine is invoked (which calls the machine-dependent switch, which in turn calls mips_switch), and the sleep address is reset to NULL when the thread later resumes.

2. What function(s) handle(s) a context switch?

Answer: Context switching is handled by interleaved functions at several levels. From thread.c, the machine-independent entry point static void mi_switch(threadstate_t nextstate) is called; it calls the machine-dependent void md_switch(struct pcb *old, struct pcb *nu), which in turn calls mips_switch(struct pcb *old, struct pcb *nu), the routine at the lowest level that actually performs the switch. At each level, data structures such as the current thread, the stack, and the interrupt state are manipulated.

3. How many thread states are there? What are they?

Answer: There are four thread states, defined in thread.c as an enum: run, ready, sleep, and zombie. The definition is:

typedef enum {
    S_RUN,
    S_READY,
    S_SLEEP,
    S_ZOMB
} threadstate_t;

4. What does it mean to turn interrupts off? How is this accomplished? Why is it important to turn off interrupts in the thread subsystem code?

Answer: Turning interrupts off means disabling interrupt handling until we explicitly re-enable it. We turn interrupts off to ensure that the following kernel code executes atomically, without interruption, and to avoid race conditions on shared variables, thereby making the execution safe. It is accomplished with the splhigh() function, which raises the interrupt priority level so that all interrupts are masked. Turning interrupts off is especially important in the thread code so that races are avoided when updating shared data such as reference counts and the number of threads; otherwise a context switch could let another thread manipulate the same data at the same time. It also ensures atomicity of the execution and prevents a thread from sleeping while handling an interrupt.
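The pattern described above can be modeled outside the kernel. In this sketch, splhigh_model() and splx_model() are hypothetical stand-ins for OS/161's splhigh()/splx(): a global "interrupt priority level" variable replaces the real hardware interrupt mask, which only kernel code can touch.

```c
#include <assert.h>

/* Model of the splhigh()/splx() critical-section pattern.
   ipl stands in for the hardware interrupt mask. */
static int ipl = 0;          /* 0 = interrupts on, 1 = all masked */

static int splhigh_model(void) {
    int old = ipl;
    ipl = 1;                 /* mask everything */
    return old;              /* caller saves the old level */
}

static void splx_model(int old) {
    ipl = old;               /* restore whatever was in effect before */
}

static int shared_refcount = 1;

static void release_reference(void) {
    int spl = splhigh_model();   /* enter critical section */
    shared_refcount--;           /* safe: no interrupt can preempt us */
    splx_model(spl);             /* leave: previous level restored */
}
```

Note that the old level is saved and restored rather than unconditionally re-enabled, so nested critical sections compose correctly.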

5. What happens when a thread wakes up another thread? How does a sleeping thread get to run again? Answer: When a thread wakes up another thread, it passes the address that the sleeping thread is sleeping on. thread_wakeup() is invoked, which wakes that thread and changes its state to runnable (S_READY). The calling thread can then yield the CPU while staying runnable by calling void thread_yield(void), after which the scheduler can pick the newly woken thread to run.

Scheduler Questions

6. What function is responsible for choosing the next thread to run?

Answer: scheduler() is called to determine the next thread to run; it in turn calls void *q_remhead(struct queue *q), which simply returns the next thread in the run queue and updates the queue structures.

7. How does that function pick the next thread?

Answer: scheduler() calls q_remhead() on the run queue passed as its parameter, which returns the next thread waiting for the CPU by taking ret = q->data[q->nextthread]; and advancing the head with q->nextthread = (q->nextthread + 1) % q->size;

8. What role does the hardware timer play in scheduling? What hardware-independent function is called on a timer interrupt?

Answer: OS/161 implements round-robin scheduling. Whenever the assigned time quantum for the executing process is finished, the hardclock timer generates an interrupt (at a rate of hz interrupts per second) indicating quantum expiration, and a context switch happens. On a timer interrupt, the hardware-independent function static void mi_switch(threadstate_t nextstate) is invoked (via thread_yield()).
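The circular run-queue access in question 7 can be demonstrated with a small self-contained model. Field names loosely follow the OS/161 queue code; the fixed-size array and the q_addtail() helper are assumptions for the sake of a runnable example.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal circular-buffer model of the run queue: removal returns the
   head slot and advances the head index modulo the queue size. */
#define QSIZE 4

struct queue {
    void *data[QSIZE];
    int nextthread;   /* index of the head (next thread to run) */
    int nexttail;     /* index one past the last occupied slot  */
    int size;
};

static void q_addtail(struct queue *q, void *ptr) {
    q->data[q->nexttail] = ptr;
    q->nexttail = (q->nexttail + 1) % q->size;
}

static void *q_remhead(struct queue *q) {
    void *ret = q->data[q->nextthread];             /* next in line */
    q->nextthread = (q->nextthread + 1) % q->size;  /* advance head */
    return ret;
}
```

The modulo arithmetic is what makes the queue wrap around instead of running off the end of the array.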

Synchronization Questions 9. Describe how thread_sleep() and thread_wakeup() are used to implement semaphores. What is the purpose of the argument passed to thread_sleep()? Answer: In P(), interrupts are disabled and, while the semaphore's count is zero, the thread calls thread_sleep() on the semaphore's address; once woken with the count positive, it decrements the count. In V(), interrupts are disabled, the count is incremented, and thread_wakeup() is called on the semaphore's address to wake any threads sleeping on it. Because interrupts are off, the test and update of the shared count are mutually exclusive: one and only one thread can manipulate the count at a time, providing the desired functionality. The argument passed to thread_sleep() is not interpreted as the address of a thread; rather, it is the address of a synchronization primitive or data structure, which serves as the channel that thread_wakeup() later uses to find the threads sleeping on it.
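The sleep/wakeup scheme just described can be modeled in user space. Here model_thread_sleep() and model_thread_wakeup() are hypothetical stand-ins for the kernel calls: "sleeping" just records the address slept on, and P_try() reports whether it would have blocked (the real P() spins in a while-loop with interrupts off until woken).

```c
#include <assert.h>
#include <stddef.h>

/* Model of semaphores built on sleep/wakeup channels. */
static const void *last_sleep_addr;   /* channel the sleeper blocked on */
static int woken;                     /* number of wakeups issued       */

static void model_thread_sleep(const void *addr) { last_sleep_addr = addr; }
static void model_thread_wakeup(const void *addr) {
    if (last_sleep_addr == addr) {    /* wake the thread on this channel */
        last_sleep_addr = NULL;
        woken++;
    }
}

struct semaphore { int count; };

/* P: if nothing is available, sleep on the semaphore's own address. */
static int P_try(struct semaphore *sem) {
    if (sem->count == 0) {
        model_thread_sleep(sem);      /* the argument is &sem, a data
                                         structure address, not a thread */
        return 0;                     /* would have blocked */
    }
    sem->count--;
    return 1;
}

/* V: bump the count, then wake whoever sleeps on this address. */
static void V(struct semaphore *sem) {
    sem->count++;
    model_thread_wakeup(sem);
}
```

The key observation is that the semaphore's own address is the rendezvous point: P() sleeps on it and V() wakes on it, so no thread identities are needed.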

10. Why does the lock API in OS/161 provide lock_do_i_hold(), but not lock_get_holder()?

Answer: lock_do_i_hold() lets a thread check whether it itself holds the lock, which is all that correct usage requires (for example, asserting that the thread releasing a lock is actually its holder). Exposing lock_get_holder() would invite threads to make decisions based on which other thread holds the lock, which is inherently racy: the holder can change the instant after it is read. Acting on another thread's holder status would defeat the purpose of mutual exclusion, which is one of the reasons locks are implemented in the first place. The precise conditions stated in OS/161 for locks are: when the lock is created, no thread should be holding it; likewise, when the lock is destroyed, no thread should be holding it.
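The point can be made concrete with a small model. The struct layout and the curthread pointer are hypothetical simplifications of the OS/161 kernel types; the sketch shows that every correctness check the lock code needs is of the form "do I hold it?", never "who holds it?".

```c
#include <assert.h>
#include <stddef.h>

/* Model of a lock whose API only ever asks "do I hold it?". */
struct thread { int id; };
static struct thread *curthread;         /* the currently running thread */

struct lock { struct thread *holder; };  /* NULL = not held */

static int lock_do_i_hold(struct lock *lk) {
    return lk->holder == curthread;
}

static void lock_acquire(struct lock *lk) {
    /* the real kernel code would sleep while lk->holder != NULL */
    assert(lk->holder == NULL);
    lk->holder = curthread;
}

static void lock_release(struct lock *lk) {
    assert(lock_do_i_hold(lk));   /* only the holder may release */
    lk->holder = NULL;
}
```

Note that the holder field exists internally, but the API never returns it: callers can only compare it against themselves, which is race-free because a thread's own holder status cannot change behind its back.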

Process Questions

11. If a multithreaded process forks, a problem occurs if the child gets copies of all the parent's threads. Suppose that one of the original threads was waiting for keyboard input. Now two threads are waiting for keyboard input, one in each process. Does this problem ever occur in single-threaded processes?

Answer: Yes, this problem can still occur in a single-threaded process, since the child is created by the parent in the same manner as in the multithreaded case: the parent forks the child, and if the operating system's design dictates that the child inherits the parent's resources (as in the case above), then even with single-threaded processes both parent and child will wait for the same input (unless the child calls exec to load a different binary). This can be addressed by changing the operating system's design guidelines and inheritance policy.

12. In a block/wakeup mechanism, a process blocks itself to wait for an event to occur. Another process must detect that the event has occurred, and wake up the blocked process. It is possible for a process to block itself and wait for an event that will never occur. 1. Can the operating system detect that a blocked process is waiting for an event that will never occur? 2. What reasonable safeguards might be built into an operating system to prevent processes from waiting indefinitely for an event?

Answer: 1. In general the operating system (OS/161 included) cannot detect such waiting. Blocking must go through the OS (a context switch plus a change of state to sleeping/waiting); a process that simply stopped on its own without the OS's involvement would appear to hang the system. Even so, once a process is properly blocked, the OS knows that it is waiting but not whether the awaited event will ever occur. It is possible to detect cycles of waiting processes using a resource-allocation graph, but maintaining one is expensive and resource-intensive. We can identify processes waiting on a semaphore, since semaphores maintain a waiting list; otherwise we cannot. 2. Safeguards: we can attach a timeout counter to a process that is waiting for a resource or signal, initialized to a predefined value. After some desired number of clock ticks the counter is decremented; when it reaches zero, an interrupt informs the OS that the process has been waiting for the signal or resource for a very long time. If the resource can be granted at that instant, it should be; if not, the OS can kill the process, restart it, or put it back in the waiting queue. The last option is least desirable, since it can lead to starvation.
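The timeout safeguard from part 2 can be sketched as follows. The struct and function names are hypothetical; in a real kernel, watchdog_tick() would be driven from the clock interrupt for every blocked process.

```c
#include <assert.h>

/* Sketch of a per-process watchdog: a counter set when the process
   blocks, decremented on each clock tick; hitting zero flags the wait
   as having gone on too long. */
struct waiter {
    int timeout_ticks;   /* remaining ticks before we give up */
    int expired;         /* set when the watchdog fires       */
};

static void block_with_timeout(struct waiter *w, int ticks) {
    w->timeout_ticks = ticks;
    w->expired = 0;
}

/* Called once per clock tick for a blocked process. */
static void watchdog_tick(struct waiter *w) {
    if (!w->expired && --w->timeout_ticks <= 0)
        w->expired = 1;   /* OS may now kill, restart, or requeue */
}
```

The policy decision (kill, restart, or requeue) is deliberately left to the caller; only the detection mechanism is fixed.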

13. Can two threads in the same process synchronize using a kernel semaphore if the threads are implemented by the kernel? What if they are implemented in user space? Assume no threads in any other processes have access to the semaphore. Discuss your answers.

Answer: Yes, two threads in the same process can synchronize on a kernel semaphore if the threads are implemented by the kernel, since the kernel schedules them individually and can block one while the other runs. If they are user-level threads, they cannot usefully use a kernel semaphore: the kernel sees only the single process, so a P() that blocks would block the whole process, including the thread that was supposed to perform the matching V().

14. In a system with threads, is there one stack per thread or one stack per process when user-level threads are used? What about when kernel-level threads are used? Explain.

Answer: With user-level threads there is one user stack per thread, managed by the threading library within the process's address space. With kernel-level threads, each thread has both a user-mode stack and a kernel-mode stack that is used when it runs in kernel mode, for example while executing system calls. (There is also a kernel stack per process, which we initialize in our fork code.)

15. Five batch jobs, A through E, arrive at a computer center at almost the same time. They have estimated running times of 10, 6, 2, 4, and 8 minutes. Their (externally determined) priorities are 3, 5, 2, 1, and 4, respectively, with 5 being the highest priority. For each of the following scheduling algorithms, determine the mean process turnaround time. Ignore process switching overhead. 1. Round robin 2. Priority scheduling 3. First-come, first-served (run in order 10, 6, 4, 2, 8) 4. Shortest job first

Answer: Assuming all processes arrive at the same time:

1. Round robin, with a time quantum of 1 minute:

a b c d e | a b c* d e | a b d e | a b d* e | a b e | a b* e | a e | a e* | a | a*

The starred entries mark the quantum in which a job terminated because its execution completed.

Mean turnaround time = (8 + 17 + 23 + 28 + 30) / 5 = 21.2

2. Priority scheduling (highest priority first: b, e, a, c, d):

| b | e | a  | c  | d  |
0   6   14   24   26   30

Mean turnaround time = (6 + 14 + 24 + 26 + 30) / 5 = 20

3. First-come, first-served (order a, b, d, c, e):

| a | b  | d  | c  | e  |
0   10   16   20   22   30

Mean turnaround time = (10 + 16 + 20 + 22 + 30) / 5 = 19.6

4. Shortest job first (order c, d, b, e, a):

| c | d | b | e  | a  |
0   2   6   12   20   30

Mean turnaround time = (2 + 6 + 12 + 20 + 30) / 5 = 14

16. For each of the following scheduling algorithms, describe a major failing of the algorithm and if appropriate provide a pathological example that illustrates this failing. 1. First come first served 2. Shortest job first 3. Priority scheduling

Answer:

1. FCFS: Throughput can be low, since long processes can hog the CPU; turnaround time, waiting time, and response time can be high for the same reason. No prioritization occurs, so the system has trouble meeting process deadlines. The lack of prioritization means that as long as every process eventually completes, there is no starvation; in an environment where some processes might not complete, however, there can be starvation. Pathological example: a 100-minute job arrives just ahead of ninety-nine 1-minute jobs, and every short job must wait behind it.

2. SJF: Starvation is possible, especially in a busy system with many small processes being run: a long job may never be scheduled while short jobs keep arriving. Waiting time and response time increase as a process's computational requirements increase, and since turnaround time is waiting time plus processing time, longer processes are significantly affected. Overall waiting time is smaller than FCFS, however, since no process has to wait for the termination of the longest process.

3. Priority scheduling: Waiting time and response time depend on the priority of the process; higher-priority processes have smaller waiting and response times. Starvation of lower-priority processes is possible when large numbers of high-priority processes keep queuing for CPU time.

Design Consideration Questions:

1. Passing arguments from one user program, through the kernel, into another user program, is a bit of a chore. What form does this take in C? This is rather tricky, and there are many ways to be led astray. You will probably find that very detailed pictures and several walk-throughs will be most helpful.

Answer: There are several ways to do this in C: 1. Use pipes (pipe()), which pass bytes from one user program through the kernel into another user program. 2. Use indirect communication via a mailbox, an OS-managed object that relays messages between two user processes. In either case the arguments cross the user/kernel boundary as a flat sequence of bytes that the kernel must copy in from one address space and copy out into the other.

2. What primitive operations exist to support the transfer of data to and from kernel space? Do you want to implement more on top of these?

Answer: copyin() and copyout() (along with copyinstr() and copyoutstr() for strings) exist to support the transfer of data to and from kernel space. Yes, helpers built on top of these are useful, but the major concern is the amount of data transferred: since the kernel is involved in every copy, large transfers carry a performance penalty and can diminish throughput. Moreover, we have to maintain atomicity of these operations, as a race condition might occur if two or more threads call the same functionality at the same time.

3. How will you determine: (a) the stack pointer initial value; (b) the initial register contents; (c) the return value; (d) whether you can exec the program at all?

Answer: (a) The stack pointer is initialized to the top of the user address space (USERSTACK, 0x80000000 in OS/161). (b) Prior to main(), mips-crt0.S sets up the initial register contents; exception.S holds the same information for trap entry but changes fewer registers. (c) The return value travels in register v0, returned from main() back through crt0. (d) Whether you can exec the program at all is determined by the loader: load_elf() checks that the file is a valid ELF executable for this machine before the address space is set up.
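The checking that copyin() performs can be modeled outside the kernel. In this sketch, model_copyin() and its fakemem parameter are hypothetical stand-ins: the real copyin() copies directly from the user address and also recovers from page faults, which a user-space model cannot reproduce, so only the address-range check is shown faithfully.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Model of copyin(): bounds-check a "user" address range before
   copying into a kernel buffer. */
#define EFAULT 1
#define USERSPACETOP 0x80000000UL   /* OS/161 user addresses lie below this */

static int model_copyin(uintptr_t usersrc, void *dest, size_t len,
                        const void *fakemem /* what usersrc "points" to */) {
    if (usersrc >= USERSPACETOP || usersrc + len > USERSPACETOP)
        return EFAULT;              /* reject kernel addresses outright */
    memcpy(dest, fakemem, len);     /* real code copies from usersrc
                                       and survives bad user pages */
    return 0;
}
```

The essential property is that a system call handler never dereferences a user pointer directly: it goes through a primitive that validates the range and reports EFAULT instead of crashing the kernel.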

4. What new data structures will you need to manage multiple processes?

Answer: A process structure that keeps track of each process's pid, its exit code, and the pids of all its child processes.

5. What relationships do these new structures have with the rest of the system?

Answer: They allow the system to support multiprocessing and context switching among processes, as well as keeping track of them (for example, so that a parent can find a child's exit code).

6. How will you manage file accesses? When the shell invokes the cat command, and the cat command starts to read file1, what will happen if another program also tries to read file1? What would you like to happen?

Answer: We can manage file access with an access-control list. As things stand, concurrent access can lead to a race condition, since OS/161 does not yet support file access control or locking. We would like only one program to be able to open the file when an exclusive lock is held, that is, for writing; however, it is more desirable that if one program is reading, other programs should not have to wait for it to release the lock.
