BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal//recurring_events_ical//2.0//EN
BEGIN:VEVENT
UID:418f6400-0150-4660-99c5-ff609b48b00a@support.access-ci.org
DTSTAMP:20240222T100703Z
DTSTART:20240321T180000Z
DTEND:20240321T193000Z
SUMMARY:COMPLECS: Batch Computing: Getting Started with Batch Job Schedulin
 g - Slurm Edition
DESCRIPTION:High-performance computing (HPC) systems are specialized resour
 ces used and shared by many researchers across all domains of science, e
 ngineering, and beyond. In order to distribute these advanced computing re
 sources in an efficient, fair, and organized way, most of the computationa
 l workloads run on these systems are executed as batch jobs, which are sim
 ply scripted sets of commands that are executed on a subset of an HPC s
 ystem’s compute resources for a given amount of time. Researchers submit
  these batch jobs as scripts to a batch job scheduler, the software that c
 ontrols and tracks where and when the batch jobs submitted to the system w
 ill eventually be run. However, if this is your first time using an HPC sy
 stem and interacting with a batch job scheduler like Slurm, then writing a
 nd submitting your first batch job scripts to it may be somewhat intimid
 ating due to the inherent complexity of these systems. Moreover, the sched
 ulers can be configured in many different ways and will often have unique 
 features and options that vary from system to system, which you will also 
 need to consider when writing and submitting your batch jobs. In this
  second part of our series on Batch Computing, we will introduce you to
  the concept of a distributed batch job scheduler (what it is, why it
  exists, and how it works), using the Slurm Workload Manager as our
  reference implementation and testbed. You will then learn how to write
  your first job script and submit it to an HPC system running Slurm as
  its scheduler. We
  will also discuss the best practices for how to structure your batch job 
 scripts, teach you how to leverage Slurm environment variables, and provid
 e tips on how to request resources from the scheduler to get your work don
 e faster. To complete the exercises covered in the Part II webinar
  session, you will need access to an HPC system running the Slurm
  Workload Manager as its batch job scheduler. Visit SDSC's training and
  events page for a full list.\n----\nWhat is COMPLECS? COMPLECS
  (COMPrehensive Learning for end-users to Effectively utilize
  CyberinfraStructure) is a new SDSC program where training will cover
  non-programming skills needed to effectively use supercomputers. Topics
  include parallel computing concepts, Linux tools and bash scripting,
  security, batch computing, how to get help, data management, and
  interactive computing. Each session offers 1 hour of instruction
  followed by a 30-minute Q&A. COMPLECS is supported by NSF award
  2320934.\n---\nMarty Kandes\nComputational and Data Science Research
  Specialist, SDSC\nMarty Kandes is a Computational and Data Science
  Research Specialist in the High-Performance Computing User Services
  Group at SDSC. He currently helps manage user support for Comet, SDSC's
  largest supercomputer. Marty obtained his Ph
 .D. in Computational Science in 2015 from the Computational Science Resear
 ch Center at San Diego State University, where his research focused on stu
 dying quantum systems in rotating frames of reference through the use of n
 umerical simulation. He also holds an M.S. in Physics from San Diego State
  University and B.S. degrees in both Applied Mathematics and Physics from 
 the University of Michigan, Ann Arbor. His current research interests incl
 ude problems in Bayesian statistics, combinatorial optimization, nonlinear
  dynamical systems, and numerical partial differential equations.
URL:https://support.access-ci.org/events/7384
END:VEVENT
END:VCALENDAR