BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal//recurring_events_ical//2.0//EN
BEGIN:VEVENT
UID:05abca9c-6e31-4e2e-a2a5-82bbd762fbb7@support.access-ci.org
DTSTAMP:20250606T122212Z
DTSTART:20251204T190000Z
DTEND:20251204T203000Z
SUMMARY:COMPLECS: Batch Computing (Part III): High-Throughput and Many-T
 ask Computing - Slurm Edition
DESCRIPTION:Summary\n\nNot all computational problems utilize the typ
 es of parallel applications traditionally designed to run on high-p
 erformance computing (HPC) systems. Today, many workloads running o
 n these systems require only a modest amount of computing resource
 s for any given job or task. For specific research workloads, howev
 er, a more important consideration is how much aggregate compute po
 wer can be consistently and reliably leveraged against a problem ov
 er time. These high-throughput computing (HTC) workloads aim to sol
 ve larger problems over extended periods by completing numerous sma
 ller computational subtasks. For example, they often involve signif
 icant parameter sweeps over simulation input parameters or regula
 r processing and analysis of data collected from specialized instru
 ments. In some cases, these problems are also composed of numerou
 s distinct computational subtasks linked together in highly structu
 red, complex workflows, which can themselves become a challenge t
 o design and manage effectively. If your research problem can lever
 age a high-throughput or many-task computing (MTC) model, then lear
 ning how to build and run these types of workflows safely and effec
 tively on HPC systems is vital.\n\nIn this third part of our serie
 s on Batch Computing, we introduce you to high-throughput and many-
 task computing using the Slurm Workload Manager. In particular, yo
 u will learn how to use Slurm job arrays and job dependencies, whic
 h can be used to create these more structured computational workflo
 ws. We will also highlight some problems you'll likely encounter wh
 en you start running HTC and/or MTC workloads on HPC systems. Thi
 s will include a discussion of job bundling strategies: what they a
 re and when to use them. Additional topics about high-throughput an
 d many-task computing workflows will be covered as time permits.\n\n
 Instructor\n\nMarty Kandes is a Computational and Data Science Rese
 arch Specialist in the High-Performance Computing User Services Gro
 up at SDSC. He currently helps manage user support for Comet, SDSC'
 s largest supercomputer. Marty obtained his Ph.D. in Computationa
 l Science in 2015 from the Computational Science Research Center a
 t San Diego State University, where his research focused on studyin
 g quantum systems in rotating frames of reference through the use o
 f numerical simulation. He also holds an M.S. in Physics from San D
 iego State University and a B.S. in Applied Mathematics and Physic
 s from the University of Michigan, Ann Arbor. His current researc
 h interests include problems in Bayesian statistics, combinatoria
 l optimization, nonlinear dynamical systems, and numerical partia
 l differential equations.\n\nSee a full list of SDSC's upcoming tra
 ining and events here.\n\n---\n\nCOMPLECS (COMPrehensive Learning f
 or end-users to Effectively utilize CyberinfraStructure) is a new S
 DSC program where training will cover non-programming skills neede
 d to effectively use supercomputers. Topics include parallel comput
 ing concepts, Linux tools and bash scripting, security, batch compu
 ting, how to get help, data management, and interactive computin
 g. Each session offers 1 hour of instruction followed by a 30-minut
 e Q&A. COMPLECS is supported by NSF award 2320934.
URL:https://support.access-ci.org/events/8049
END:VEVENT
END:VCALENDAR