
UNIT - II

Concurrent grammar.
Communication & synchronization of concurrent tasks.
Process / thread; system process migration.
Shared memory.
Concurrent LISP.
Concurrent YACC.
Concurrent Java.

Concurrent Grammar

Concurrency: multiple subtasks that can be executed at the same time.
Concurrent grammar: a grammar whose production rules can be applied at the same instant, without depending on any other rule.
Petri nets, graph models & Node Label Controlled (NLC) grammars deal with concurrency in grammars.
Multiple transitions in a Petri net correspond to parallelism in grammar derivations.

S → aSBC | aBC

CB → BC

bC → bc

aB → ab

cC → cc

bB → bb

Petri Net

First introduced by Carl Adam Petri in 1962.
A diagrammatic tool to model concurrency & synchronization in distributed, concurrent, asynchronous, parallel, nondeterministic systems.
Very similar to state transition diagrams.
Used as a visual communication aid to model system behaviour.
Used as a graphical & mathematical tool.
Useful in allocating resources to tasks.

Petri Net

An abstract formal model of information flow.
Major use: modeling systems of events in which some events can occur concurrently, but there are constraints on the occurrence, precedence, or frequency of these occurrences.

Petri Net as a Graph

A Petri net has 3 types of components:
Places (circles): represent the possible states of the system.
Transitions (bars/rectangles): cause the changes of state.
Arcs (arrows): connect places with transitions.
A Petri net also has dynamic properties that result from its execution:
Markers (tokens): tokens are moved by the firing of the transitions of the net.
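A minimal sketch of this firing rule in C (the three-place net and the single transition are illustrative assumptions, not taken from the slides): a transition is enabled when every input place holds enough tokens; firing removes a token from each input place and adds one to each output place.

#include <stdio.h>

#define PLACES 3

int marking[PLACES] = {1, 0, 0};      /* one token in p1 */

/* hypothetical transition t: 0/1 incidence vectors over the places */
int t_in[PLACES]  = {1, 0, 0};        /* t consumes a token from p1      */
int t_out[PLACES] = {0, 1, 1};        /* t produces tokens in p2 and p3  */

int enabled(const int in[]) {
    for (int p = 0; p < PLACES; p++)
        if (in[p] > marking[p]) return 0;   /* an input place lacks a token */
    return 1;
}

void fire(const int in[], const int out[]) {
    for (int p = 0; p < PLACES; p++)
        marking[p] += out[p] - in[p];       /* move the tokens */
}

int main(void) {
    if (enabled(t_in)) fire(t_in, t_out);
    for (int p = 0; p < PLACES; p++)
        printf("p%d: %d token(s)\n", p + 1, marking[p]);
    return 0;
}

Running it reproduces the marking change shown in the next figures: the token in p1 is removed and tokens appear in p2 & p3.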

Petri Net as a Graph (cont.)

(Figure) A simple graph representation of a Petri net.
(Figure) A marked Petri net.
(Figure) The marking resulting from firing transition t2: note that the token in p1 was removed & tokens were added to p2 & p3.

Petri Net as a Graph (cont.)

(Figure 4) Markings resulting from the firing of different transitions in the net of Figure 3:
(b) result of firing transition t3;
(c) result of firing transition t5.

Petri Net as a Graph (cont.)

(Figure 5) A simple model of three conditions and an event.
(Figure 6) Modeling of a simple computer system.

Petri Net as a Graph (cont.)

(Figure 8) Modeling of simultaneous events which may occur in either order.

Petri Net as a Graph (cont.)

(Figure 9) Illustration of conflicting transitions: transitions tj and tk conflict, since the firing of one will disable the other.

Example: In a Restaurant (A Petri Net)

(Diagram) Places: Waiter free, Order taken, wait, eating (a wait/eating pair for each of Customer 1 & Customer 2). Transitions: Take order (one per customer), Tell kitchen, Serve food (one per customer).

Communication & Synchronization of Concurrent Tasks

Thread vs Process
- Lightweight & heavyweight.
Threads & processes both have an id, a set of registers, a state, a priority, & adhere to a scheduling policy.
1. Threads are easier to create than processes, since they don't require a separate address space.
2. Multithreading requires careful programming, since threads share data structures that should only be modified by one thread at a time. Unlike threads, processes don't share the same address space.
3. Threads are considered lightweight because they use far fewer resources than processes.
4. Processes are independent of each other. Threads, since they share the same address space, are interdependent, so caution must be taken so that different threads don't step on each other.
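A short sketch of points 1 and 2 in C with POSIX threads (the worker function and counter are illustrative assumptions): the thread needs no new address space, so it sees the process's globals directly.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;   /* visible to every thread in the process */

void *worker(void *arg) {
    shared_counter++;     /* shared data: must be synchronized in real code */
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, (void *)1L);  /* no separate address space */
    pthread_join(t, NULL);
    printf("counter = %d\n", shared_counter);
    return 0;
}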

Communication & Synchronization of Concurrent Tasks

Issues: if the communication between dependent tasks is not properly designed, a data race condition can occur.

Concurrency models:
They dictate how & when communication occurs & the manner in which work is executed.
Through them, the proper coordination of communication & synchronization between tasks is achieved.

Dependency Relationships

When processes or threads require communication or cooperation among each other to accomplish a common goal, they have a dependency relationship.
Given any two tasks, the dependency relationships that can exist between them are:
A → B : Task A depends on Task B (unidirectional).
A ← B : Task B depends on Task A (unidirectional).
A ↔ B : Task A & Task B depend on each other (bidirectional).

Communication Dependencies

When Task A requires data from Task B in order to execute its work, there is a dependency relationship between Task A & Task B.

posix_queue (unidirectional communication).

Pipe (bidirectional communication).

1. Posix_queue

The posix_queue is an interface object that helps two processes communicate.
It resides outside the address space of all the processes.
It contains the names of files.
If process A wants to pass data to process B, process A writes the name of a file to the global posix_queue & process B simply reads that name.

2. Pipes

A pipe data structure is used as a bidirectional communication channel between two processes.
It supports both read & write operations.

Cooperation Dependencies

When Task A requires a resource that Task B owns, & Task B must release the resource before Task A can use it, this is a cooperation dependency.
For successful execution of both tasks, cooperation is required.
If multiple processes need to write the names of files where the code can be located to the posix_queue, then only one process should write at a time. Thus write access must be synchronized.

Counting Task Dependencies

When the dependencies between the threads are calculated, the overall thread structure of the process is available.
The number of possible combinations is denoted by C(n, k), where:
n = total no. of threads
k = no. of threads involved in each dependency
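A small sketch in C of counting these combinations (the choice k = 2, i.e. pairwise dependencies, is an assumption for illustration):

#include <stdio.h>

/* C(n, k) via the multiplicative formula; each partial product is
   exactly divisible by i, so integer division is safe. */
unsigned long long choose(unsigned n, unsigned k) {
    if (k > n) return 0;
    unsigned long long r = 1;
    for (unsigned i = 1; i <= k; i++)
        r = r * (n - k + i) / i;
    return r;
}

int main(void) {
    /* 4 threads can form C(4,2) = 6 possible pairwise dependencies */
    printf("C(4,2) = %llu\n", choose(4, 2));
    return 0;
}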

Interprocess Communication

One process sends its data to another process by using an API known as IPC, e.g. posix_queue.
The O.S. kernel acts as a communication channel between the processes.
IPC requires synchronization between the processes.

Persistence of IPC

Persistence means the existence of an object during or beyond the execution of a process.
An object is declared with storage class automatic, static or dynamic.
1. Automatic object => the object exists during the invocation of its block of code.
2. Static object => this type of object exists and retains its value throughout the execution of the code.
3. Dynamic object => the object is created at runtime & exists until it is explicitly deallocated.

Persistence of Object

The persistence of an object can also be classified as filesystem, kernel & process persistence.

Filesystem persistence => the IPC object exists until the object is deleted explicitly.

Kernel persistence => the IPC object remains in existence until the kernel is rebooted or the object is deleted explicitly.

Process persistence => the IPC object remains in existence until the process that created it terminates or the object is deleted explicitly.

Environment Variables & Command Line Arguments

Environment variables store system-dependent information.
This includes paths to directories which contain libraries, functions or commands used by a process.
They also send useful information from a parent to a child process.
By using the posix_spawn & exec functions, the parent process can create a child process with an exact copy of its environment variables.
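A sketch in C of passing the environment to a spawned child (the path /usr/bin/env is an assumption; env simply prints the environment it receives):

#include <spawn.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;          /* the parent's environment block */

int main(void) {
    pid_t pid;
    char *argv[] = { "env", NULL };
    /* the child starts with an exact copy of the parent's environment */
    if (posix_spawn(&pid, "/usr/bin/env", NULL, NULL, argv, environ) != 0) {
        perror("posix_spawn");
        return 1;
    }
    waitpid(pid, NULL, 0);      /* child prints the inherited variables */
    return 0;
}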

Interprocess Communication Mechanisms
1. Files
2. Pipes (named, unnamed)
3. FIFO
4. Shared memory

1. IPC - FILE

Files can be used to transfer data between any processes.
The simplest & most flexible method of communication.
The file name can be included in a posix_queue.

STEPS FOR COMMUNICATION

Specify the name of the file used for communication.
Verify file existence.
Check file access permissions.
Open the file.
Synchronize access to the file.
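A sketch of these steps in C (the path /tmp/ipc_file is hypothetical; the advisory flock() lock is one assumed way to do the final synchronization step):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/ipc_file";
    if (access(path, R_OK | W_OK) != 0) {   /* verify existence & permissions */
        perror("access");
        return 1;
    }
    int fd = open(path, O_RDWR);            /* open the file */
    flock(fd, LOCK_EX);                     /* synchronize access */
    write(fd, "hello\n", 6);                /* exchange the data */
    flock(fd, LOCK_UN);
    close(fd);
    return 0;
}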

2. IPC - PIPE

Pipes are communication channels that are used to transfer data between processes.
During data transfer the processes must be active.
Two types of pipes:
1. Anonymous pipes (unnamed pipes) =>
Used to transfer data between related processes, e.g. a parent and a child created with fork().
2. Named pipes =>
Used to transfer data between related & unrelated processes, e.g. processes created with posix_spawn().

2. IPC - PIPE

Named pipes are kernel objects & reside in kernel space with kernel persistence.
Pipes create a flow of data from one end in one process to the other end in another process.
The data flows through pipes as a stream of bytes.
Two pipes can be created for bidirectional communication between processes.
A pipe has two ends: one for input and the other for output.
Named pipes can be created using mkfifo().
mkfifo() creates the named pipe at the given pathname; the mode argument specifies the file permissions.
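A sketch in C of an anonymous pipe between related processes (an assumed example, not from the slides): the parent writes into one end, the child created by fork() reads from the other.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];                    /* fds[0] = read end, fds[1] = write end */
    pipe(fds);
    if (fork() == 0) {             /* child: a related process */
        char buf[64];
        close(fds[1]);
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        buf[n] = '\0';
        printf("child got: %s\n", buf);
        _exit(0);
    }
    close(fds[0]);                 /* parent keeps the write end only */
    const char *msg = "data through the pipe";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}

For unrelated processes, mkfifo("/tmp/myfifo", 0644) would create a named pipe that both sides open by pathname.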

3. IPC - Message Queue

A message queue is a linked list of messages.
Using this IPC, processes can write msgs to or remove msgs from the queue.
The sender of a msg assigns a priority to the msg.
A writer process can write its msgs to the queue & then terminate; other processes can still read from or write to it.
The oldest msg with the highest priority is returned first.
Each msg in the queue has the following attributes:
Priority
Length of msg
Msg or data
mq_open => the msg queue is created using the mq_open() function.
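A sketch of the POSIX message queue API in C (the queue name /demo_mq is hypothetical; on Linux, link with -lrt). It shows the priority rule: mq_receive returns the oldest, highest-priority message first.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void) {
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0644, &attr);

    mq_send(mq, "low priority", 13, 1);    /* sender assigns priority 1 */
    mq_send(mq, "high priority", 14, 9);   /* priority 9: delivered first */

    char buf[128];
    unsigned prio;
    mq_receive(mq, buf, sizeof buf, &prio);
    printf("got \"%s\" (priority %u)\n", buf, prio);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}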

4. IPC - Shared Memory

A shared memory (SM) block can be used to transfer information between two processes.
The SM block does not lie within the address space of any single process.
A process gains access to the SM block through a pointer to the block of memory.
If multiple processes are using the SM block, then all modifications are visible to the other processes.
Synchronization is required for accessing the information in the SM block.
e.g. semaphores => control access to SM.

4. IPC - POSIX Shared Memory

POSIX SM can map two kinds of entities to the shared memory region:
a file;
internal memory.
Functions: mmap() and munmap().
The mmap() arguments are (addr, length, prot, flags, fd, offset).

The prot argument controls access to the mapped data:
PROT_READ — the data can be read
PROT_WRITE — the data can be written
PROT_EXEC — the data can be executed
PROT_NONE — the data is inaccessible

The flags argument specifies the type of mapped object:
MAP_SHARED — changes are shared with other processes
MAP_PRIVATE — changes are private
MAP_FIXED — the exact interpretation of addr is made

For removing a memory mapping from the address space, the munmap() function is used.
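A sketch of the whole sequence in C (the object name /demo_shm is hypothetical; on Linux, link with -lrt): shm_open creates the object, ftruncate sizes it, mmap maps it into the address space.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0644);
    ftruncate(fd, 4096);                       /* size the shared region */

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);         /* MAP_SHARED: changes visible
                                                  to every mapping process */
    strcpy(p, "visible to every process that maps /demo_shm");
    printf("%s\n", p);

    munmap(p, 4096);                           /* remove the mapping */
    close(fd);
    shm_unlink("/demo_shm");
    return 0;
}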

Interthread Communication

Communication between threads is used to:
1. Share data: multiple threads share data for processing required by concurrent processes. The data can be modified, or new data can be created.
2. Send a message: similarly, msgs can be sent, received or modified by different threads.

When two processes want to communicate, they use a structure which is external to both processes.
When two threads communicate, they use structures that lie within the address space of their own process.
Threads cannot communicate with threads outside their process.
The cost of IPC is higher than the cost of interthread communication.
The various types of ITC mechanisms are:
1. Global data, variables & data structures.
2. Parameters.
3. File handles.

Synchronizing Concurrency

Threads & processes work together & make use of system resources.
There are two types of resources:
H/w resources: I/O devices, memory, h/w interrupts, processor time.
S/w resources: shared libraries, applications, programs, utilities.
Sharing needs proper use of IPC or ITC.
Synchronization allows serializing the execution of multiple tasks.

Types of Synchronization

To share various resources there should be synchronization between the threads & processes.
There are 3 types of synchronization:
1. Data: allows concurrent threads or processes to access a block of memory safely. This is useful to prevent race conditions.
2. H/w: necessary when several h/w devices need to perform some task. It gives absolute control over real-time performance & priority settings.
3. Task: in this synchronization, logical preconditions on the order of task execution are enforced.

Synchronizing Access to Data

Threads share the same address space of their process; processes have different address spaces.
IPC exists outside the address space of the processes.
Shared memory maps a structure to a block of memory which is accessible to several processes.
The ITC mechanisms are global variables & data structures that are used to accomplish a certain task within the process's address space.
(Figure) IPC & ITC mechanisms in the process layout.

CRITICAL SECTION

For controlling race conditions, data synchronization is required.
Data synchronization is required in a task's code when there is access to a block of memory, files or global variables. This region of code is called the critical section.

Critical Sections

Critical sections (CS) are blocks of code that access shared resources, and this access must be controlled.
A CS is marked by an entry point & an exit point.
The actual code in the CS is code in which a thread or process is either reading or writing to memory or a file.
The entry point indicates that you are entering the CS, and the exit point indicates that you are leaving the CS.

Critical Sections

3 conditions need to be followed w.r.t. the CS:
1. Progress: if no task is in the CS, then any blocked task can enter the CS. This is called progress.
2. Mutual exclusion: if a task is in a CS, then other tasks sharing the resource cannot be executing in their CS. These tasks are called blocked. This is called mutual exclusion.
3. Bounded wait: there can be a waiting period, known as bounded wait. A task may keep entering the CS and prevent other tasks from entering it. A task cannot reenter the CS if other tasks are waiting in the waiting queue.
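A sketch in C of guarding a critical section with a pthreads mutex (the shared balance variable is an illustrative assumption): the lock call marks the entry point, the unlock call marks the exit point.

#include <pthread.h>
#include <stdio.h>

long balance = 0;                              /* shared resource */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *deposit(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* entry point of the CS */
        balance++;                             /* the critical section  */
        pthread_mutex_unlock(&lock);           /* exit point of the CS  */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, deposit, NULL);
    pthread_create(&b, NULL, deposit, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("balance = %ld\n", balance);        /* always 200000 with the lock */
    return 0;
}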

PRAM MODELS

PRAM stands for Parallel Random Access Machine: an abstract model in which multiple processors operate in parallel on a single shared memory.

Semaphores

A semaphore is a special kind of variable that can be accessed only by very specific operations.
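A sketch in C of those two specific operations on a POSIX semaphore: sem_wait decrements (acquire) and sem_post increments (release).

#include <semaphore.h>
#include <stdio.h>

int main(void) {
    sem_t s;
    sem_init(&s, 0, 1);   /* 0 = shared between threads; initial value 1 */

    sem_wait(&s);         /* P operation: acquire, value 1 -> 0 */
    puts("inside the protected region");
    sem_post(&s);         /* V operation: release, value 0 -> 1 */

    sem_destroy(&s);
    return 0;
}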

Process Migration

Remote Execution
(Diagram) The client's main program transfers a function/object and its arguments to the server's dispatcher; the server executes f() remotely and sends the return value back, while control stays with the client.
Procedure code is sent together with arguments.
Server behaves like a general cycle server.
Server can evolve itself.

Code on Demand
(Diagram) The client's main program requests a remote function/object from the server's dispatcher; the function/object itself is returned and then executed locally as func().
Server behaves like a general remote object server.
A remote function/object is sent back as a return value.
The client executes the function/object locally.
Client execution control stays local, while suspended upon a request to a server.

Process Migration
(Diagram) Process P1 executes at the source site, is suspended, its state is transferred to the destination site during the freezing time, and its execution resumes there as P1.
Steps:
Selecting a process to be migrated.
Selecting the destination node.
Suspending the process.
Capturing the process state.
Sending the state to the destination.
Resuming the process.
Forwarding future messages to the destination.

Concurrent LISP

pcall evaluates its argument expressions in parallel:

(pcall + (* a b) (* c d))

(defun fibonacci (n)
  (cond
    ((= n 1) 1)
    ((= n 2) 1)
    (t (+ (fibonacci (- n 1)) (fibonacci (- n 2))))))

(time (fibonacci 50))

(time (pcall + (fibonacci 48) (fibonacci 49)))

newLISP provides spawn for launching parallel child evaluations.

Concurrent Java

The java.util.concurrent package.
Processes: ProcessBuilder.
Threads: implement Runnable, or subclass Thread.

public class HelloRunnable implements Runnable {
    public void run() {
        System.out.println("Hello from a thread!");
    }
    public static void main(String[] args) {
        (new Thread(new HelloRunnable())).start();
    }
}

Concurrent YACC

A protocol parser has to parse many connections simultaneously and, within each connection, the two flows in opposite directions, in parallel.

Examples: the binpac HTTP parser; the paraLR parser.

QUESTION BANK- UNIT-1

1. What is a computational model? Explain the various types with suitable examples.
2. Explain the features & applications of LISP.
3. Write a short note on YACC.
4. Explain the use & architecture of OpenGL.
5. Explain the term MPI & its features.

QUESTION BANK- UNIT-2

1. Write a short note on Petri nets. Draw a Petri net for a traffic light switching system.
2. What is the difference between processes & threads?
3. What are the dependency relationships between tasks?
4. What are pipes? Explain in detail.
5. What is synchronization? Explain its types.
6. Write a short note on process migration.
7. Explain the following terms:
1. Concurrent LISP
2. Concurrent YACC
3. Concurrent Java
8. Explain the concepts:
1. IPC
2. ITC

QUESTION BANK- UNIT-3

1. Explain the concept of the death of the single-core solution.
2. Draw and explain the GPU architecture.
3. Explain CPU vs GPU.
4. Write a short note on CUDA.
5. What are the different types of parallelism?
6. Explain sequential vs parallel programming.
7. Explain parallel architectures in detail.
8. Explain the different architectural schemes with examples.
