
An Introduction to Parallel Programming

Subject: An Introduction to Parallel Programming


Chapter 1 Why Parallel Computing
1. Why Parallel Computing?
2. Why We Need Ever-Increasing Performance
3. Why We're Building Parallel Systems
4. Why We Need to Write Parallel Programs
5. How Do We Write Parallel Programs? (see the sketch after this list)
6. Concurrent, Parallel, Distributed
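
The "How Do We Write Parallel Programs?" entry is traditionally answered with a global-sum example: the data are split into blocks, each core adds up its own block, and the partial sums are then combined. The following C sketch shows only that decomposition; the array length, block size, and simulated "cores" are chosen here for illustration, the blocks are processed one after another, and the later chapters show how to run them truly in parallel with MPI, Pthreads, or OpenMP.

    /* Sketch of the global-sum decomposition discussed in Chapter 1.
       Each simulated "core" sums one block of the array; a single core
       then combines the partial sums. Illustrative values only. */
    #include <stdio.h>

    #define N 16        /* number of values (assumed for illustration) */
    #define CORES 4     /* number of simulated cores (assumed) */

    int main(void) {
        double x[N], partial[CORES], total = 0.0;
        int block = N / CORES;            /* values per core; assumes CORES divides N */

        for (int i = 0; i < N; i++)       /* some data to sum */
            x[i] = i + 1;

        for (int c = 0; c < CORES; c++) { /* the work each core would do */
            partial[c] = 0.0;
            for (int i = c * block; i < (c + 1) * block; i++)
                partial[c] += x[i];
        }

        for (int c = 0; c < CORES; c++)   /* combine the partial results */
            total += partial[c];

        printf("total = %.1f\n", total);  /* 1 + 2 + ... + 16 = 136 */
        return 0;
    }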

Chapter 2 Parallel Hardware and Parallel Software


7. Parallel Hardware and Parallel Software
8. Some Background: von Neumann Architecture, Processes, Multitasking, and Threads
9. Modifications to the von Neumann Model
10. Parallel Hardware
11. Parallel Software
12. Input and Output
13. Performance of Parallel Programs (a short worked example follows this list)
14. Parallel Program Design, with an Example
15. Writing and Running Parallel Programs
16. Assumptions
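
For the "Performance of Parallel Programs" entry, the two quantities used throughout the book are speedup, S = T_serial / T_parallel, and efficiency, E = S / p, where p is the number of cores. As a worked example with assumed timings: a program that runs in 64 seconds serially and in 10 seconds on p = 8 cores has speedup S = 64 / 10 = 6.4 and efficiency E = 6.4 / 8 = 0.8; linear speedup would mean S = p and E = 1.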

Chapter 3 Distributed-Memory Programming with MPI


17. Distributed-Memory Programming with MPI
18. The Trapezoidal Rule in MPI (see the sketch after this list)
19. Dealing with I/O
20. Collective Communication
21. MPI Derived Datatypes
22. Performance Evaluation of MPI Programs
23. A Parallel Sorting Algorithm
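
Entry 18, the trapezoidal rule in MPI, is the running example for distributed-memory programming. The sketch below shows the basic pattern under simplifying assumptions: the interval [0, 1], the integrand f(x) = x*x, and n = 1024 trapezoids are chosen here for illustration, and n is assumed to be evenly divisible by the number of processes. Each process integrates its own subinterval, and MPI_Reduce combines the partial results on process 0.

    /* Minimal MPI trapezoidal-rule sketch (illustrative values only). */
    #include <stdio.h>
    #include <mpi.h>

    static double f(double x) { return x * x; }    /* integrand (assumed) */

    /* Serial trapezoidal rule on [left, right] with count trapezoids of width h. */
    static double trap(double left, double right, int count, double h) {
        double sum = (f(left) + f(right)) / 2.0;
        for (int i = 1; i < count; i++)
            sum += f(left + i * h);
        return sum * h;
    }

    int main(void) {
        int rank, size;
        double a = 0.0, b = 1.0;   /* interval (assumed) */
        int n = 1024;              /* total trapezoids (assumed divisible by size) */

        MPI_Init(NULL, NULL);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double h = (b - a) / n;             /* width of every trapezoid */
        int local_n = n / size;             /* trapezoids handled by this process */
        double local_a = a + rank * local_n * h;
        double local_b = local_a + local_n * h;
        double local_int = trap(local_a, local_b, local_n, h);

        double total_int = 0.0;             /* sum the per-process integrals on rank 0 */
        MPI_Reduce(&local_int, &total_int, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Integral of x^2 on [0, 1] is approximately %.6f\n", total_int);

        MPI_Finalize();
        return 0;
    }

With a typical MPI installation this compiles with mpicc and runs with mpiexec -n 4 (exact command names vary); the book's version also covers reading the problem size and dealing with I/O, which is entry 19.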

Chapter 4 Shared-Memory Programming with Pthreads


24. Shared-Memory Programming with Pthreads
25. Processes, Threads, and Pthreads
26. Pthreads "Hello, World" Program (see the sketch after this list)
27. Matrix-Vector Multiplication
28. Critical Sections
29. Busy-Waiting
30. Mutexes
31. Producer-Consumer Synchronization and Semaphores
32. Barriers and Condition Variables
33. Read-Write Locks
34. Caches, Cache Coherence, and False Sharing
35. Thread-Safety
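
Entry 26, the Pthreads "Hello, World" program, is the chapter's starting point. The sketch below follows the usual pattern with one simplifying assumption, a hard-coded thread count: the main thread starts the workers with pthread_create, each worker prints its rank, and main waits for them with pthread_join.

    /* Minimal Pthreads "Hello, World" sketch: create threads, each prints
       its rank, then join them. The thread count is hard-coded for brevity. */
    #include <stdio.h>
    #include <pthread.h>

    #define THREAD_COUNT 4          /* assumed fixed number of threads */

    static void *Hello(void *arg) {
        long rank = (long) arg;     /* rank is passed by value inside the pointer */
        printf("Hello from thread %ld of %d\n", rank, THREAD_COUNT);
        return NULL;
    }

    int main(void) {
        pthread_t handles[THREAD_COUNT];

        for (long t = 0; t < THREAD_COUNT; t++)   /* start the worker threads */
            pthread_create(&handles[t], NULL, Hello, (void *) t);

        printf("Hello from the main thread\n");

        for (long t = 0; t < THREAD_COUNT; t++)   /* wait for them to finish */
            pthread_join(handles[t], NULL);

        return 0;
    }

Compile with the pthread flag (for example cc -pthread hello.c -o hello). The output ordering is nondeterministic, which leads directly into the critical-section, busy-waiting, and mutex entries above.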

Chapter 5 Shared-Memory Programming with OpenMP


36. Shared-Memory Programming with OpenMP
37. The Trapezoidal Rule (see the sketch after this list)
38. Scope of Variables
39. The Reduction Clause
40. The parallel for Directive
41. More About Loops in OpenMP: Sorting
42. Scheduling Loops
43. Producers and Consumers
44. Caches, Cache Coherence, and False Sharing
45. Thread-Safety
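
Entries 37-40 revisit the trapezoidal rule with OpenMP. The sketch below combines them: a parallel for directive splits the loop over trapezoids among the threads, and a reduction(+:sum) clause combines the per-thread partial sums safely. The interval, integrand, and trapezoid count are the same illustrative assumptions used in the MPI sketch earlier.

    /* OpenMP trapezoidal-rule sketch (illustrative values only). */
    #include <stdio.h>
    #include <omp.h>

    static double f(double x) { return x * x; }   /* integrand (assumed) */

    int main(void) {
        double a = 0.0, b = 1.0;   /* interval (assumed) */
        int n = 1024;              /* number of trapezoids (assumed) */
        double h = (b - a) / n;
        double sum = (f(a) + f(b)) / 2.0;

        /* Split the loop among the threads; reduction gives each thread a
           private copy of sum and adds the copies together at the end. */
    #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i < n; i++)
            sum += f(a + i * h);

        printf("Integral of x^2 on [0, 1] is approximately %.6f (max threads: %d)\n",
               sum * h, omp_get_max_threads());
        return 0;
    }

Compile with an OpenMP-enabled compiler (for example cc -fopenmp). Without the reduction clause the shared update of sum would be a race condition, which is exactly the problem the scoping and reduction entries address.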

Chapter 6 Parallel Program Development


46. Parallel Program Development
47. Two n-Body Solvers
48. Parallelizing the basic solver using OpenMP
49. Parallelizing the reduced solver using OpenMP
50. Evaluating the OpenMP codes
51. Parallelizing the solvers using Pthreads
52. Parallelizing the basic solver using MPI
53. Parallelizing the reduced solver using MPI
54. Performance of the MPI solvers
55. Tree Search
56. Recursive depth-first search (see the sketch at the end of this list)
57. Nonrecursive depth-first search
58. Data structures for the serial implementations
59. Performance of the serial implementations
60. Parallelizing tree search
61. A static parallelization of tree search using Pthreads
62. A dynamic parallelization of tree search using Pthreads
63. Evaluating the Pthreads tree-search programs
64. Parallelizing the tree-search programs using OpenMP
65. Performance of the OpenMP implementations
66. Implementation of tree search using MPI and static partitioning
67. Implementation of tree search using MPI and dynamic partitioning
68. Which API?
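
Entry 56's recursive depth-first search is the serial baseline that the later tree-search entries parallelize. Below is a minimal sketch of the recursion over a small, hard-coded tree; the node structure and the tree itself are illustrative assumptions, whereas the book's version searches the tours of a traveling-salesman instance and records the best tour found so far.

    /* Minimal recursive depth-first search over a small explicit tree.
       Visit a node, then recurse on its children from left to right. */
    #include <stdio.h>

    #define MAX_CHILDREN 3

    struct node {
        int value;                     /* payload (assumed) */
        int child_count;
        struct node *children[MAX_CHILDREN];
    };

    static void dfs(struct node *t) {
        if (t == NULL) return;
        printf("visiting %d\n", t->value);         /* "process" the node */
        for (int i = 0; i < t->child_count; i++)   /* then search each subtree */
            dfs(t->children[i]);
    }

    int main(void) {
        /* Hard-coded tree:  1 -> (2 -> (4, 5), 3 -> (6)) */
        struct node n4 = {4, 0, {NULL}}, n5 = {5, 0, {NULL}}, n6 = {6, 0, {NULL}};
        struct node n2 = {2, 2, {&n4, &n5}};
        struct node n3 = {3, 1, {&n6}};
        struct node n1 = {1, 2, {&n2, &n3}};

        dfs(&n1);    /* prints 1 2 4 5 3 6, one value per line */
        return 0;
    }

The nonrecursive version in entry 57 replaces the call stack with an explicit stack of nodes, which is what makes the static and dynamic parallelizations in the later entries possible.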
