Fundamentals of Distributed Memory Parallelism

04/03/26 - 11:00 AM - 12:00 PM EDT

Location

Virtual

A one-hour introductory training session covering the basic concepts of parallel computing across multiple processes and nodes. Participants will learn how distributed-memory systems work, why message passing is needed, and the core ideas behind MPI (the Message Passing Interface), including ranks, communication, and data distribution. The session also introduces common challenges in HPC applications, such as communication overhead and load balancing.