BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal//recurring_events_ical//2.0//EN
BEGIN:VEVENT
UID:f1f50746-e838-4df7-8997-7bb8627c39c1@support.access-ci.org
DTSTAMP:20230126T161147Z
DTSTART:20230216T190000Z
DTEND:20230216T200000Z
SUMMARY:Expanse: Getting Started with Batch Job Scheduling: Slurm Edition
DESCRIPTION:Most high-performance computing (HPC) systems are specialized
  resources in high demand, shared simultaneously by many researchers
  across all domains of science, engineering, and beyond. To distribute
  the compute resources of an HPC system fairly among these researchers,
  whose compute demands vary over time, most computational workloads on
  these systems are executed as batch jobs: pre-scripted sets of
  commands that run on a specified set of compute resources for a given
  amount of time. Researchers submit these batch job scripts to a batch
  job scheduler, software that controls and tracks where and when each
  job submitted to the system will run. However, if this is your first
  time using an HPC system and interacting with a batch job scheduler
  like Slurm, writing your first batch job scripts and submitting them
  to the scheduler can be intimidating. Moreover, batch job schedulers
  can be configured in many different ways and often have unique
  features and options from system to system that you will need to
  consider when writing your batch jobs.\n\nIn this webinar, we will
  teach you how to write your first batch job script and submit it to a
  Slurm batch job scheduler. We will also discuss what we consider best
  practices for structuring your batch job scripts, show you how to
  leverage Slurm environment variables, and share tips on requesting
  resources from the scheduler to get your work done faster. Finally,
  we will introduce advanced features such as Slurm job arrays and job
  dependencies for more structured computational workflows.\n\nInstructor
 \n\nMarty Kandes\nComputational & Data Science Research Specialist,
  HPC User Services Group - SDSC\n\nMarty Kandes is a Computational and
  Data Science Research Specialist in the High-Performance Computing
  User Services Group at SDSC. He currently helps manage user support
  for Expanse, SDSC’s NSF-funded supercomputer, and maintains the
  Singularity containers supported on the system.
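 \n\nTo give a flavor of the material, here is a minimal sketch of a
  first Slurm batch job script. The partition name and resource
  requests below are illustrative placeholders, as actual values vary
  by system:\n\n#!/bin/bash\n# Minimal example. The partition name
  below is a placeholder.\n#SBATCH --job-name=hello-slurm\n#SBATCH
  --partition=compute\n#SBATCH --nodes=1\n#SBATCH --ntasks-per-node=1
 \n#SBATCH --time=00:05:00\n\necho "Hello from job $SLURM_JOB_ID
  running on $SLURM_JOB_NODELIST"\n\nSubmit the script with sbatch
  hello.sh and check its status with squeue -u $USER.
 \n\nJob arrays (for example, #SBATCH --array=1-10) and job
  dependencies (for example, sbatch --dependency=afterok:<jobid>
  next.sh) build on the same basic script structure.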
URL:https://support.access-ci.org/events/4325
END:VEVENT
END:VCALENDAR