CARLA 2024

Instructor: Diego Andrés Roa Perdomo

Affiliation: PhD student in the Electrical and Computer Engineering program at the University of Delaware

Brief Program:

  • Presentation
  • Introduction
  • Parallel Programming Models for HPC
  • Shared Memory: OpenMP
  • Distributed Memory: MPI
  • Accelerators: Offloading
  • High-performance Networks Overview
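
To give a flavor of the three programming models listed in the program, the sketch below combines MPI (distributed memory), OpenMP (shared memory), and OpenMP target offloading (accelerators) in a single toy program. It is an illustrative sketch, not part of the official tutorial materials, and it assumes an MPI installation plus a compiler with OpenMP offloading support (e.g., something along the lines of mpicc -fopenmp hybrid.c -o hybrid; the file name hybrid.c is just an assumption).

    /* Illustrative sketch only (not official tutorial code): combines the three
     * models covered in the program in one toy example. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);                /* distributed memory: one process per rank */
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Shared memory: threads within a single rank */
        #pragma omp parallel
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        /* Accelerator offloading: run the loop on a device if one is available,
         * otherwise it falls back to the host */
        const int N = 1000;
        double sum = 0.0;
        #pragma omp target teams distribute parallel for reduction(+:sum) map(tofrom:sum)
        for (int i = 0; i < N; i++)
            sum += (double)i;

        printf("rank %d computed sum = %.1f\n", rank, sum);
        MPI_Finalize();
        return 0;
    }

Whether the target region actually runs on a GPU depends on the compiler and the devices available; without offload support it simply executes on the host.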

Harnessing Parallel Paradigms: A Comprehensive Guide to OpenMP, MPI, and GPU Offloading in HPC

Information

The tutorial is designed for HPC practitioners, researchers, and students who seek to deepen their understanding of parallel programming models. We will cover fundamental concepts, advanced techniques, and real-world applications, providing hands-on examples and performance optimization strategies. By the end of this tutorial, participants will be equipped with the knowledge and skills to implement and optimize parallel applications across diverse HPC architectures. Rather than focusing on raw performance, the tutorial emphasizes concepts that have a significant impact on how parallel programs are executed. The tutorial uses Chameleon Cloud to provide access to compute resources with GPUs, and the material is written in Jupyter Notebooks, lowering the bar for attendees with little command-line (CLI) experience.

Student prerequisites

Attendees are expected to be familiar with:

  • Basic C and C++ programming: the tutorial materials are in C and C++, but attendees with experience in other programming languages are welcome too
  • A basic understanding of how to use compilers
  • Basic experience with Linux systems

Other desired, but not necessary, knowledge includes:
  • Understanding the differences between threads and processes
  • Understanding pointers and how data is laid out in memory
  • Basic understanding of accelerator devices (e.g., GPUs)


Materials

  • Laptop with an internet connection
  • Chameleon Cloud access (provided)

Previous editions

Presented at Argonne National Laboratory (July 25, 2024)

References

More information: T07-Tutorial OpenMP+MPI+Accelerators