Knowledge Base Resources

These resources are contributed by researchers, facilitators, engineers, and HPC admins. Please upvote resources you find useful!
Add a Resource

Filters

Topics

  • Show all (58)
  • gpu (12) (selected filter)
  • deep-learning (4)
  • machine-learning (4)
  • training (4)
  • c (3)
  • cuda (3)
  • neural-networks (3)
  • parallelization (3)
  • ai (2)
  • big-data (2)
  • slurm (2)
  • tensorflow (2)
  • access (1)
  • access-allocations (1)
  • aces (1)
  • artificial-intelligence (1)
  • community-outreach (1)
  • compiling (1)
  • composable-systems (1)
  • containers (1)
  • data-analysis (1)
  • delta (1)
  • distributed-computing (1)
  • expanse (1)
  • hpc-cluster-architecture (1)
  • image-processing (1)


Introduction to Deep Learning in Pytorch
  • Landing Page
  • Pytorch Quickstart
  • Pytorch Basics
  • Pytorch GPU Support
  • Regression and Classification with Fully Connected Neural Networks
  • High Dimensional Data
  • Datasets and data loading
  • Building the network
  • Computer Vision and Convolutional Neural Networks
This workshop series introduces the essential concepts in deep learning and walks through the common steps of a deep learning workflow, from data loading and preprocessing to training and model evaluation. Throughout the sessions, students write and execute simple deep learning programs using PyTorch, a popular Python library for developing, training, and deploying deep learning models. A minimal training-loop sketch follows this entry.
ai, deep-learning, image-processing, machine-learning, neural-networks, pytorch, gpu
2 Likes

Type
learning
Level
Beginner, Intermediate
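
The following is a minimal, hypothetical PyTorch sketch of the workflow this workshop covers (data loading, a small fully connected network, and a training loop). The toy dataset, layer sizes, and hyperparameters are illustrative assumptions, not taken from the workshop materials.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy regression data standing in for a real dataset
X = torch.randn(1024, 10)
y = X.sum(dim=1, keepdim=True)
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

# Small fully connected network
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training loop: forward pass, loss, backward pass, parameter update
for epoch in range(5):
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")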
ACCESS HPC Workshop Series
  • ACCESS HPC Workshop Series
  • MPI Workshop
  • OpenMP Workshop
  • GPU Programming Using OpenACC
  • Summer Boot Camp
  • Big Data and Machine Learning
Monthly workshops on a variety of HPC topics, sponsored by ACCESS and organized by the Pittsburgh Supercomputing Center (PSC). Each workshop is telecast to multiple satellite sites, and workshop materials are archived.
deep-learning, machine-learning, neural-networks, big-data, tensorflow, gpu, training, openmpi, c, c++, fortran, openmp, programming, mpi, spark
1 Like

Type
learning
Level
Beginner, Intermediate
GPU Acceleration in Python
  • GPU Acceleration in Python
This tutorial explains how to use Python for GPU acceleration with libraries like CuPy, PyOpenCL, and PyCUDA. It shows how these libraries can speed up tasks such as array operations and matrix multiplication by running them on the GPU. Examples include replacing NumPy with CuPy for large datasets and using PyOpenCL or PyCUDA for finer control with custom GPU kernels. The focus is on practical steps for integrating GPU acceleration into Python programs; a short CuPy sketch follows this entry.
machine-learning, big-data, data-analysis, optimization, parallelization, gpu, cuda, python
0 Likes

Type
learning
Level
Beginner, Intermediate
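
To illustrate the NumPy-to-CuPy swap the tutorial describes, here is a small, hypothetical sketch; the array sizes are arbitrary, and it assumes CuPy and a CUDA-capable GPU are available.

import numpy as np
import cupy as cp

# CPU version with NumPy
a_cpu = np.random.rand(4000, 4000).astype(np.float32)
b_cpu = np.random.rand(4000, 4000).astype(np.float32)
c_cpu = a_cpu @ b_cpu

# Same computation on the GPU: CuPy mirrors the NumPy API
a_gpu = cp.asarray(a_cpu)            # copy host -> device
b_gpu = cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu                # matrix multiply runs on the GPU
cp.cuda.Stream.null.synchronize()    # wait for the GPU to finish
c_back = cp.asnumpy(c_gpu)           # copy device -> host

print(np.allclose(c_cpu, c_back, atol=1e-3))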
ACES: Charliecloud Containers for Scientific Workflows (Tutorial)
  • ACES: Charliecloud Containers for Scientific Workflows (Video)
  • ACES: Charliecloud Containers for Scientific Workflows (Slides)
This tutorial introduces containers using the Charliecloud software suite and gives participants the background and hands-on experience needed to use basic Charliecloud containers for HPC applications. We discuss what containers are, why they matter for HPC, and how they work. We give an overview of Charliecloud, the unprivileged container solution from Los Alamos National Laboratory's HPC Division. Students learn how to build toy containers, containerize real HPC applications, and run them on a cluster. Exercises are demonstrated on the ACES cluster, a composable accelerator testbed at Texas A&M University; students with an allocation on ACES can follow along with the cluster-specific exercises.
ACES, TAMU, scratch, lammps, tensorflow, open-ondemand, gpu, nfs, slurm, bash, training, python, containers
0 Likes

Type
learning
Level
Beginner
Introduction to Parallel Programming for GPUs with CUDA
  • Introduction to Parallel Programming for GPUs with CUDA
This tutorial provides a comprehensive introduction to CUDA programming, focusing on essential concepts such as the CUDA thread hierarchy, data-parallel programming, the host-device heterogeneous programming model, CUDA kernel syntax, the GPU memory hierarchy, and memory optimization techniques such as global memory coalescing and avoiding shared memory bank conflicts. Aimed at researchers, students, and practitioners, the tutorial equips participants with the skills needed to leverage GPU acceleration for scalable computation, particularly in the context of AI. A small sketch of the kernel-launch concepts follows this entry.
gpu, nvidia, c, c++, cuda
0 Likes

Type
learning
Level
Intermediate
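
The tutorial itself is taught in CUDA C/C++; the hypothetical sketch below illustrates the same ideas (a kernel indexed by blockIdx/blockDim/threadIdx and an explicit grid/block launch) from Python via CuPy's RawKernel, to keep this page's examples in one language. The kernel name, sizes, and launch configuration are illustrative assumptions.

import cupy as cp

# CUDA C kernel: each thread computes one element, using the
# blockIdx / blockDim / threadIdx hierarchy covered in the tutorial
saxpy = cp.RawKernel(r'''
extern "C" __global__
void saxpy(const float a, const float* x, const float* y, float* out, const int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) {                                     // guard against running past the array
        out[i] = a * x[i] + y[i];
    }
}
''', 'saxpy')

n = 1 << 20
x = cp.random.rand(n, dtype=cp.float32)
y = cp.random.rand(n, dtype=cp.float32)
out = cp.empty_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block   # enough blocks to cover n elements
saxpy((blocks,), (threads_per_block,), (cp.float32(2.0), x, y, out, cp.int32(n)))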
Introduction to GPU/Parallel Programming using OpenACC
  • Intro to OpenACC
Introduction to the basics of OpenACC.
gpu, c, c++, compiling, fortran
0 Likes

Type
presentation
Level
Beginner
Examples of Thrust code for GPU Parallelization
  • thrust_ex.txt
Some examples of Thrust code. To compile, download the CUDA compiler from NVIDIA; this code was tested with CUDA 9.2 but is likely compatible with other versions. Before compiling, rename the file from thrust_ex.txt to thrust_ex.cu. Any device (GPU) code that runs through a Thrust transform is automatically parallelized on the GPU; host (CPU) code is not. Thrust code can also be compiled to run on a CPU for practice.
parallelization, gpu, cuda
0 Likes

Type
learning
Level
Intermediate, Advanced
ACCESS KB Guide - Expanse
  • ACCESS KB Guide
Expanse at SDSC is a cluster designed by Dell and SDSC that delivers 5.16 peak petaflops and offers Composable Systems and Cloud Bursting. This documentation describes how to use the Expanse cluster, with some information specific to people with ACCESS accounts.
expanse, composable-systems, gpu
0 Likes

Type
documentation
Level
Beginner, Intermediate, Advanced
Horovod: Distributed deep learning training framework
  • Horovod
Horovod is a distributed deep learning training framework. Using Horovod, a single-GPU training script can be scaled to train across many GPUs in parallel. The library supports popular deep learning frameworks such as TensorFlow, Keras, PyTorch, and Apache MXNet. A minimal PyTorch-based sketch follows this entry.
deep-learning, distributed-computing, gpu
0 Likes

Type
tool
Level
Intermediate, Advanced
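
Below is a minimal, hypothetical sketch of the pattern the description refers to, using Horovod's PyTorch bindings; the model, data, and learning rate are placeholders. A script like this would typically be launched with something such as horovodrun -np 4 python train.py, one process per GPU.

import torch
import horovod.torch as hvd

hvd.init()                                  # one worker process per GPU
torch.cuda.set_device(hvd.local_rank())     # pin this process to its local GPU

model = torch.nn.Linear(10, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())  # scale LR by worker count

# Wrap the optimizer so gradients are averaged across workers on every step
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())

# Start all workers from identical model and optimizer state
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

for step in range(100):
    x = torch.randn(32, 10).cuda()          # placeholder batch
    y = x.sum(dim=1, keepdim=True)
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()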
Thrust resources
  • Thrust tutorial from Nvidia
  • Thrust documentation
Thrust is a CUDA C++ template library that handles GPU parallelization for you. The Thrust tutorial is great for beginners, and the documentation is helpful for anyone using Thrust.
parallelization, gpu, resources
0 Likes

Type
learning
Level
Intermediate, Advanced
DELTA Introductory Video
  • DELTA Youtube Video
An introductory video about DELTA, presented by Tim Boerner, Senior Assistant Director at NCSA.
delta, gpu, training
0 Likes

Type
video
Level
Beginner, Intermediate, Advanced