
ABSTRACT

Sequential programs are time consuming to execute. Nowadays, multi-core architectures have major advantages over single-core architectures. According to Moore's law, the number of transistors in a dense integrated circuit doubles about every two years. But because of factors like power dissipation, non-scalability and reliability, the performance of single-core architectures is declining; hence there is a need for multi-core processors. Multi-core processors have paved the way to increase the performance of any application through the benefits of parallelization. Parallelization is breaking a problem into independent parts so that each processing element can execute its part of the algorithm concurrently with the others. This paper outlines a survey of the different methods used in parallelization and analyses them.

1. INTRODUCTION

Parallel programming is not a new concept. There are several ways to parallelize a program containing input/output operations, but one of the most important challenges is to parallelize loops. The goal is to use the resources of a multi-processor system efficiently without entirely rewriting the code. This process is called parallelization. There are three ways a piece of code can be parallel, depending on the operations and data dependencies involved:

 Control parallelism
 Data parallelism
 Functional parallelism

Parallel computing is a form of computing in which many instructions are carried out at the same time. It works on the principle that large problems can almost always be divided into smaller ones, which may be completed simultaneously. Parallel processing has been utilized for a long time, chiefly in high performance computing. Parallel programming is good practice for solving computationally intensive problems in various fields. In operations research, for instance, solving maximization problems with the simplex method is an area where parallel algorithms are being developed. The primary reasons for using parallel computing are:

 Decreasing execution time
 Lower memory utilization
 Handling computations on large volumes of data
 Affording concurrency
 Taking advantage of non-local resources

Multi-core processors offer explicit support for executing multiple threads in parallel and thus reduce idle time. The factor that motivated the design of parallel algorithms for multi-core systems is performance.
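To make the data-parallel loop case concrete, the following is a minimal sketch (not taken from any of the surveyed methods): a loop whose iterations have no data dependencies is split across a pool of worker threads, with the names `square` and `parallel_map` chosen here purely for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # Each iteration is independent: no data dependencies between
    # elements, so the loop body is safe to run concurrently.
    return n * n

def parallel_map(func, values, workers=4):
    # Divide the loop across a pool of worker threads; pool.map
    # applies func to each element and preserves the input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, values))

print(parallel_map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The same decomposition applies whatever the per-element work is; the speedup depends on how much computation each independent part carries relative to the overhead of coordinating the workers.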
