BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal//recurring_events_ical//2.0//EN
BEGIN:VEVENT
UID:884038ff-5b59-4f5c-a759-ff33bdbe5c46@support.access-ci.org
DTSTAMP:20250606T113657Z
DTSTART:20250821T180000Z
DTEND:20250821T193000Z
SUMMARY:COMPLECS: Batch Computing (Part II): Getting Started with Batch Job
  Scheduling
DESCRIPTION:Summary\n\nHigh-performance computing (HPC) systems are
  specialized resources used and shared by many researchers across all
  domains of science, engineering, and beyond. To distribute these
  advanced computing resources in an efficient, fair, and organized
  way, most computational workloads on these systems are executed as
  batch jobs: pre-scripted sets of commands run on a subset of an HPC
  system’s compute resources for a given amount of time. Researchers
  submit these batch jobs as scripts to a batch job scheduler, the
  software that controls and tracks where and when the jobs submitted
  to the system will eventually run. However, if this is your first
  time using an HPC system and interacting with a batch job scheduler
  like Slurm, writing and submitting your first batch job scripts may
  be somewhat intimidating due to the inherent complexity of these
  systems. Moreover, schedulers can be configured in many different
  ways and often have unique features and options that vary from
  system to system, which you will also need to consider when writing
  and submitting your batch jobs.\n\nIn this second part of our series
  on Batch Computing, we will introduce you to the concept of a
  distributed batch job scheduler — what it is, why it exists, and how
  it works — using the Slurm Workload Manager as our reference
  implementation and testbed. You will then learn how to write your
  first job script and submit it to an HPC system running Slurm as its
  scheduler. We will also discuss best practices for structuring your
  batch job scripts, teach you how to leverage Slurm environment
  variables, and provide tips on requesting resources from the
  scheduler to get your work done faster. To complete the exercises
  covered in Part II, you will need access to an HPC system running
  the Slurm Workload Manager as its batch job
  scheduler.\n\nInstructor\n\nMarty Kandes is a Computational and Data
  Science Research Specialist in the High-Performance Computing User
  Services Group at SDSC. He currently helps manage user support for
  Comet — SDSC’s largest supercomputer. Marty obtained his Ph.D. in
  Computational Science in 2015 from the Computational Science
  Research Center at San Diego State University, where his research
  focused on studying quantum systems in rotating frames of reference
  through the use of numerical simulation. He also holds an M.S. in
  Physics from San Diego State University and a B.S. in Applied
  Mathematics and Physics from the University of Michigan, Ann Arbor.
  His current research interests include problems in Bayesian
  statistics, combinatorial optimization, nonlinear dynamical systems,
  and numerical partial differential equations.\n\nSee a full list of
  SDSC’s upcoming training and events here.\n\n--- COMPLECS
  (COMPrehensive Learning for end-users to Effectively utilize
  CyberinfraStructure) is a new SDSC program whose training covers the
  non-programming skills needed to effectively use supercomputers.
  Topics include parallel computing concepts, Linux tools and bash
  scripting, security, batch computing, how to get help, data
  management, and interactive computing. Each session offers 1 hour of
  instruction followed by a 30-minute Q&A. COMPLECS is supported by
  NSF award 2320934.
URL:https://support.access-ci.org/events/8041
END:VEVENT
END:VCALENDAR