Training and Events

ByteBoost Cybertraining Webinar: Presenting ACES 03/08/24 - 02:00 PM - 03:00 PM EST

This webinar in the ByteBoost program (https://www.stonybrook.edu/ookami/ByteBoost.php) will present ACES (https://hprc.tamu.edu/aces/), an innovative composable hardware platform that helps accelerate transformative changes in multiple scientific research areas. ACES offers a rich accelerator testbed consisting of five different cutting-edge accelerators, including NVIDIA and Intel Ponte Vecchio GPUs (Graphics Processing Units) and Graphcore IPUs (Intelligence Processing Units).

COMPLECS: Code Migration 03/07/24 - 02:00 PM - 03:30 PM EST

We will cover typical approaches to moving your computations to HPC resources: using applications/software packages already available on the system through Linux environment modules; compiling code from source, with information on compilers, libraries, and optimization flags to use; setting up Python and R environments; using conda-based environments; managing workflows; and using containerized solutions via Singularity. The session covers general principles, with hands-on activities on SDSC resources.

Open OnDemand on Ookami 03/07/24 - 02:00 PM - 03:00 PM EST

Open OnDemand, an intuitive, innovative, and interactive interface to remote computing resources, is now available on Ookami. Open OnDemand helps computational researchers and students efficiently utilize remote computing resources by making them easy to access from any device. In this webinar we will show you how to use Open OnDemand on Ookami.

Open OnDemand Community Meeting 03/07/24 - 01:00 PM - 01:30 PM EST

Hosted by the community, these tips-and-tricks webinars share best practices for all things Open OnDemand. They take place on the first Thursday of every month at 1 p.m. ET.

Recordings of previous events are available.

More information and Zoom coordinates are available on the Open OnDemand Discourse forum.

ACES: AlphaFold Protein Structure Prediction 03/05/24 - 02:30 PM - 05:00 PM EST

This short course (2.5 hours) will allow users to work through a hands-on tutorial covering how to run AlphaFold utilizing NVIDIA GPUs on the ACES cluster, a composable accelerator testbed at Texas A&M High Performance Research Computing.

Read more at https://hprc.tamu.edu/training/aces_alphafold.html 

ACES: Graphcore IPU Tutorial 03/05/24 - 11:00 AM - 01:30 PM EST

This short course (2.5 hours) introduces researchers to Graphcore Intelligence Processing Units (IPUs) on the ACES cluster, a composable accelerator testbed at Texas A&M High Performance Research Computing. The instructor will demonstrate the execution of models from different frameworks on the IPU system, and participants will gain practical experience in converting TensorFlow and PyTorch code into IPU code through hands-on exercises.

RCAC Software Installation 101 03/01/24 - 02:30 PM - 04:00 PM EST

Software Installation 101 is for anyone looking to learn the fundamentals of software installation on Linux/Unix-based high-performance computing systems. Prior experience with the command line and cluster computing will be helpful. Topics covered will include the following:

  • Foundations
  • Unpacking
  • Compiling from Source
  • Compiling from Source with Dependencies
  • Q&A

Ookami Webinar 02/29/24 - 02:00 PM - 03:00 PM EST

Whether you are interested in Ookami and considering getting an account, are a new user, or are a longtime user who wants to optimize their usage, this webinar is for you! It will cover the basics of the system, how to get an account, and, for existing users, plenty of tips and tricks on how to use it efficiently for your research.

Ookami has available cycles (CPU only) and is welcoming new users.

Data Parallelism: How to Train Deep Learning Models on Multiple GPUs (NVIDIA Deep Learning Institute) 02/29/24 - 11:00 AM - 07:00 PM EST

Modern deep learning challenges leverage increasingly larger datasets and more complex models. As a result, significant computational power is required to train models effectively and efficiently. Learning to distribute data across multiple GPUs during deep learning model training makes possible an incredible wealth of new applications utilizing deep learning.
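
As a loose illustration of what data-parallel training looks like in practice (a minimal sketch, not taken from the DLI course materials), the following PyTorch DistributedDataParallel example assumes one process per GPU launched with torchrun (e.g., torchrun --nproc_per_node=4 train.py); the toy dataset and model are placeholders:

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    def main():
        dist.init_process_group(backend="nccl")        # one process per GPU
        local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
        torch.cuda.set_device(local_rank)

        # Toy dataset; each rank sees a different shard via DistributedSampler.
        data = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
        sampler = DistributedSampler(data)
        loader = DataLoader(data, batch_size=64, sampler=sampler)

        model = DDP(torch.nn.Linear(32, 1).cuda(local_rank), device_ids=[local_rank])
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = torch.nn.MSELoss()

        for epoch in range(3):
            sampler.set_epoch(epoch)                   # reshuffle shards each epoch
            for x, y in loader:
                x, y = x.cuda(local_rank), y.cuda(local_rank)
                optimizer.zero_grad()
                loss_fn(model(x), y).backward()        # gradients are all-reduced by DDP
                optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()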

NCSA Quantum Tutorial: Intro to Quantum Computing with Classiq 02/29/24 - 10:00 AM - 12:00 PM EST

This is a practical introductory workshop for using the Classiq platform to model quantum algorithms using a high-level modeling language, optimizing quantum circuits using a hardware-aware approach and smart synthesis, and running your optimized quantum circuits on various real quantum hardware and simulators. No previous quantum computing knowledge is required, and we encourage participation from everyone interested in learning more about quantum computing. After the workshop, attendees will be able to use the Classiq platform to model, optimize, and run quantum circuits on their own.

Model Parallelism: Building and Deploying Large Neural Networks (NVIDIA Deep Learning Institute) 02/28/24 - 11:00 AM - 07:00 PM EST

Large language models (LLMs) and deep neural networks (DNNs), whether applied to natural language processing (e.g., GPT-3), computer vision (e.g., huge Vision Transformers), or speech AI (e.g., Wav2Vec 2.0), have certain properties that set them apart from their smaller counterparts. As LLMs and DNNs become larger and are trained on progressively larger datasets, they can adapt to new tasks with just a handful of training examples, accelerating the route toward general artificial intelligence.

ACES: Introduction to Data Science in R 02/27/24 - 11:00 AM - 05:00 PM EST

This course is an introduction to the R programming language and covers the fundamental concepts needed to operate in the R environment with a particular focus on data science. This course assumes no prior experience with R.

Includes a 1-hour lunch break.

More information about this Short Course at https://hprc.tamu.edu/training/aces_intro_r.html

Building Transformer Based Natural Language Processing Applications (NVIDIA Deep Learning Institute) 02/22/24 - 11:00 AM - 04:00 PM EST

Applications for natural language processing (NLP) and generative AI have exploded in the past decade. With the proliferation of applications like chatbots and intelligent virtual assistants, organizations are infusing their businesses with more interactive human-machine experiences. Understanding how transformer-based large language models (LLMs) can be used to manipulate, analyze, and generate text-based data is essential.

ACES: GPU Programming 02/20/24 - 02:30 PM - 05:00 PM EST

This short course covers basic topics in CUDA programming on NVIDIA GPUs. Topics include:

  • CUDA architecture
  • basic language usage of CUDA C/C++
  • writing and executing CUDA code

More information about this Short Course at https://hprc.tamu.edu/training/intro_cuda.html
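
As a loose, language-shifted illustration of the kernel and launch concepts a course like this covers (the course itself uses CUDA C/C++; this hypothetical sketch uses Python with Numba's CUDA target and is not taken from the course materials):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        i = cuda.grid(1)              # global thread index, like blockIdx.x*blockDim.x + threadIdx.x
        if i < out.size:              # guard threads that fall past the end of the array
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)

    d_a, d_b = cuda.to_device(a), cuda.to_device(b)    # host-to-device copies
    d_out = cuda.device_array_like(d_a)

    threads_per_block = 256
    blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
    vector_add[blocks_per_grid, threads_per_block](d_a, d_b, d_out)   # launch, like <<<grid, block>>> in CUDA C

    out = d_out.copy_to_host()                         # device-to-host copy
    assert np.allclose(out, a + b)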

Learn About the PATh Facility 02/20/24 - 02:30 PM - 04:00 PM EST

Supported by the same groups that run OSG Services, the PATh Facility provides dedicated throughput computing capacity to NSF-funded researchers for longer and larger jobs than will typically run on OSG services like the OSPool. This training will describe its features and how to get started. If you have found your jobs need more resources (cores, memory, time, data) than is typically available in the OSPool, this resource might be for you!

ACES: AI/ML Techlab in Jupyter Notebooks 02/20/24 - 11:00 AM - 01:30 PM EST

Accelerating AI/ML Workflows on a Composable Cyberinfrastructure

NVIDIA GenAI/LLM Virtual Workshop Series for Higher Ed 02/16/24 - 08:00 AM - 02/29/24 - 04:00 PM EST

Join NVIDIA’s Deep Learning Institute (DLI) this February for a series of free, virtual, instructor-led workshops providing hands-on experience with GPU-accelerated servers in the cloud to complete end-to-end projects in the areas of Generative AI and Large Language Models (LLMs). Each of these workshops is led by a DLI Certified Instructor and offers an opportunity to earn an industry-recognized certificate of competency based on assessments to support your career growth.

COMPLECS: HPC Security and Getting Help 02/15/24 - 02:00 PM - 03:30 PM EST

HPC systems are shared resources; therefore, all users must be aware of the complexity of working in a shared environment and the implications associated with resource management and security. This module also addresses two essential and related sets of skills that should be a part of everyone’s toolbox but are frequently overlooked: (1) solving problems on your own by leveraging online resources, and (2) working effectively with the help desk or user support by properly collecting the information that can be used to help resolve your problem.

ACES: Using the Slurm Scheduler on Composable Resources 02/13/24 - 02:30 PM - 05:00 PM EST

This Short Course (2.5 hours) introduces researchers to the Slurm scheduler on the ACES cluster, a composable accelerator testbed at Texas A&M University. Topics covered include multiple job scheduling approaches and job management tools.

More information about this Short Course at https://hprc.tamu.edu/training/aces_slurm.html

Introduction to Composable Resources: ACES and FASTER 02/13/24 - 11:00 AM - 01:30 PM EST

Research computing on the composable ACES and FASTER clusters

This course will provide an overview of composable technology, where hardware can be reallocated between servers based on user requirements, featuring the advanced accelerators available on the composable ACES and FASTER clusters at Texas A&M University. Topics covered include hardware, access, policies, file systems, and batch processing.
