
Syllabus

EAS 520 High Performance Scientific Computing


Course Description: A graduate-level course covering an assortment of topics in high
performance computing (HPC). Topics will be selected from the following: parallel processing,
computer arithmetic, processes and operating systems, memory hierarchies, compilers, run-time
environments, memory allocation, preprocessors, multi-core processors, clusters, and message
passing. Introduction to the design, analysis, and implementation of high-performance
computational science and engineering applications.
Prerequisites: EAS 502 or instructor approval
Purpose:
This class will introduce students to the fundamentals of parallel scientific computing. High
performance computing refers to the use of parallel supercomputers and computer clusters,
together with the software and hardware techniques used to accelerate computations. In this
course students will learn how to write faster code that is highly optimized for modern
multi-core processors and clusters, using modern software development tools, performance
profilers, specialized algorithms, parallelization strategies, and advanced parallel programming
constructs in OpenMP and MPI (a brief OpenMP illustration appears at the end of this section).
The course is meant for graduate students in computer science, engineering, mathematics, and
the sciences, especially for those who need to use high performance computing in their research.
The course will emphasize practical aspects of high performance computing on both sequential
and parallel machines, so that students will be able to use high performance computing
effectively in their research.
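As a brief, non-authoritative illustration of the OpenMP constructs mentioned above (the file name, array contents, and problem size are hypothetical, not course material), a serial loop can be parallelized with a work-sharing directive and a reduction clause:

/* Minimal OpenMP sketch: parallel sum of an array.
 * Compile with, e.g.:  gcc -fopenmp openmp_sum.c -o openmp_sum
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void) {
    const int n = 10000000;                    /* illustrative problem size */
    double *x = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++)
        x[i] = 1.0 / (double)(i + 1);          /* fill with sample data */

    double sum = 0.0;
    /* The loop iterations are shared among threads; the reduction
     * clause combines each thread's partial sum safely. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += x[i];

    printf("sum = %f (computed with up to %d threads)\n",
           sum, omp_get_max_threads());
    free(x);
    return 0;
}

This sketch only hints at the incremental, directive-based style of parallelization that OpenMP supports; course assignments go well beyond it.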
Learning Objectives:
Students will develop competencies including:
1. An understanding of existing HPC software and hardware
2. Basic software design patterns for high-performance parallel computing
3. Mapping of applications to HPC systems
4. Factors affecting performance of computational science and engineering applications
5. Utilization of techniques to automatically implement, optimize, and adapt programs to
different platforms.
Learning Outcomes:
Familiarity with a programming language (e.g., FORTRAN, C, C++, or Python) and
computer algebra packages (e.g., Maple, Mathematica, MATLAB, or Sage);
The UNIX family of operating systems;
Parallel programming with MPI and OpenMP;
OpenCL/CUDA for GPU computing; and
Use of the above in the design and optimization of performance-driven software.

Recommended Text: Parallel Programming: Techniques and Applications Using Networked
Workstations and Parallel Computers (2nd Ed.), B. Wilkinson and M. Allen, Prentice-Hall. The
text will be supplemented with other online and printed resources.
Course topics (outline):
A. Programming languages and programming-language extensions for HPC
B. Compiler options and optimizations for modern single-core and multi-core processors
C. Single-processor performance, memory hierarchy, and pipelines
D. Overview of parallel system organization
E. Parallelization strategies, task parallelism, data parallelism, and work sharing techniques
F. Introduction to message passing and MPI programming (a minimal illustrative sketch follows this outline)
G. Embarrassingly parallel problems
H. HPC numerical libraries and auto-tuning libraries
I. Using pMATLAB
J. Programming with toolkits (PETSc, Trilinos, etc.)
K. Verification and validation of results
L. Execution profiling, timing techniques, and benchmarking for modern single-core and
multi-core processors
M. Problem decomposition, graph partitioning, and load balancing
N. Introduction to shared memory and OpenMP programming
O. General-purpose computing on graphics processing units (GPGPU) using OpenCL
P. Introduction to scientific visualization tools, e.g., Visualization Toolkit (VTK), gnuplot,
GNU Octave, Scilab, MayaVi, Maxima, OpenDX.
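As a minimal sketch of the message-passing model introduced in topic F (the program name, the integer payload, and the two-process layout are hypothetical examples, not material from the text), two MPI processes can exchange data with explicit send and receive calls:

/* Minimal MPI sketch: rank 0 sends an integer to rank 1.
 * Compile with:  mpicc mpi_send.c -o mpi_send
 * Run with:      mpirun -np 2 ./mpi_send
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* which process am I? */

    if (rank == 0) {
        int payload = 42;                      /* illustrative value */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("rank 0 sent %d\n", payload);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", payload);
    }

    MPI_Finalize();
    return 0;
}

Here the parallelism is explicit: every process runs the same program, discovers its rank, and communicates only through library calls such as MPI_Send and MPI_Recv.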
Assignments:
This is a hands-on class. Students are expected to combine the lecture material, a series of
programming assignments using MATLAB, MPI, OpenMP, and GPU (OpenCL) programming, and a
final project, so that they emerge at the end of the class with parallel programming knowledge
that can be applied immediately to their research projects.
Grading:
Four programming exercises (60%)
Final Project (40%)
Computing Laboratory:
Students should be proficient in basic numerical methods and in a programming language such as
MATLAB, Python, Fortran, C, or C++. The required software packages are available on University
servers. All use of computer equipment at UMass Dartmouth must comply with the University's
acceptable computer use guidelines: http://www.umassd.edu/cits/policies/responsibleuse/
