Parallel Computing

Overview

The Parallel Computing lecture covers software- and hardware-related topics in parallel systems, algorithms, and application design. In addition to learning the theoretical foundations, students will gain hands-on experience with selected parallel computing topics.

Key information

Contact: Ramin Yahyapour
Venue:
Time:
Language: English
Module: M.Inf.1232
SWS: 4
ECTS: 6
Presence time: 56 h
Independent study: 124 h

Learning Objectives

After successfully completing the lecture, students are able to:

  • define and describe the benefit of parallel computing and identify the role of software and hardware in parallel computing
  • specify the Flynn classification of parallel computers (SISD, SIMD, MIMD)
  • analytically evaluate the performance of parallel computing approaches using scaling and performance models (a worked example of one such model follows this list)
  • understand the different architectures of parallel hardware and performance-improvement techniques (e.g., caching and cache-coherence issues, pipelining)
  • know and define interconnects and networks and their role in parallel computing
  • describe the architectures of parallel computing systems (MPP, vector, shared memory, GPU, many-core, clusters, grid, cloud)
  • design and develop parallel software using a systematic approach
  • develop parallel algorithms in common development environments (i.e., shared-memory and distributed-memory parallel programming)
  • write parallel algorithms/programs using different paradigms and environments (e.g., POSIX multi-threaded programming, OpenMP, MPI, OpenCL/CUDA, MapReduce; see the OpenMP sketch after this list)
  • understand and develop sample parallel programs using different paradigms and development environments (e.g., shared-memory and distributed models) and gain exposure to some applications of parallel computing
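
As a worked example of the scaling and performance models mentioned above, one widely taught model is Amdahl's law. The statement below is a minimal sketch; the symbols f (parallelizable fraction of the runtime) and p (processor count) are notation chosen here, not taken from the lecture material.

    % Amdahl's law (illustrative notation): bound on the speedup when a
    % fraction f of the work can be spread over p processors.
    S(p) = \frac{1}{(1 - f) + \frac{f}{p}},
    \qquad
    \lim_{p \to \infty} S(p) = \frac{1}{1 - f}

For example, with f = 0.9 and p = 8 the speedup is 1 / (0.1 + 0.9/8) ≈ 4.7, well below the processor count, which shows how the serial fraction dominates scaling behaviour.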
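
The shared-memory programming objectives above can be previewed with a small OpenMP example. The sketch below is an assumed illustration (file name, array size, and data are arbitrary), not course material; it sums an array in parallel using a reduction to avoid a data race on the shared accumulator.

    #include <stdio.h>
    #include <omp.h>

    /* Minimal shared-memory example: parallel sum of an array.
       Array size and contents are illustrative assumptions. */
    #define N 1000000
    static double a[N];

    int main(void) {
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = 1.0;                         /* example data */

        /* Each thread sums a chunk of the array; reduction(+:sum)
           merges the per-thread partial sums safely. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.1f using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }

A typical build is gcc -fopenmp sum_openmp.c -o sum_openmp (the file name is assumed); without the reduction clause the concurrent updates to sum would race.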

Prerequisites

No formal prerequisites. Recommended prior knowledge:

  • Computer architecture
  • Basic knowledge of computer networks and topologies
  • Data structures and algorithms
  • Programming in C(/C++)

Examination

Type: Written exam (90 minutes) or oral examination (20 minutes), graded

Expectations of the examinee

Profound knowledge of:

  • Parallel programming
  • Shared Memory Parallelism
  • Distributed Memory Parallelism
  • Single Instruction Multiple Data (SIMD)
  • Multiple Instruction Multiple Data (MIMD)
  • Hypercube
  • Parallel interconnects and networks
  • Pipelining
  • Cache Coherence
  • Parallel Architectures
  • Parallel Algorithms
  • OpenMP
  • MPI (see the sketch after this list)
  • Multi-Threading (pthreads)
  • Heterogeneous Parallelism (GPGPU, OpenCL/CUDA)
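
To make the distributed-memory items above concrete, here is a minimal MPI sketch (an assumed illustration, not taken from the lecture; the per-process value is arbitrary): every process contributes one integer and rank 0 collects the sum.

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal distributed-memory example: each process holds a local
       value; MPI_Reduce combines them on rank 0. */
    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes  */

        int local = rank + 1;                   /* illustrative value   */
        int total = 0;

        /* Sum the per-process values onto rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d processes = %d\n", size, total);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, e.g., mpirun -np 4 ./a.out, it prints 10, illustrating the SPMD style in which the same program runs on every process.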

Literature

  • An Introduction to Parallel Programming, Peter S. Pacheco, Morgan Kaufmann (MK), 2011, ISBN: 978-0-12-374260-5.
  • Designing and Building Parallel Programs, Ian Foster, Addison-Wesley, 1995, ISBN: 0-201-57594-9 (available online).
  • Advanced Computer Architecture: Parallelism, Scalability, Programmability, Kai Hwang, Int. Edition, McGraw-Hill, 1993, ISBN: 0-07-113342-9.
  • In addition to the textbooks above, tutorial and survey papers will be distributed in some lectures as extra reading material.