
Knowledge Base Resources

These resources are contributed by researchers, facilitators, engineers, and HPC admins. Please upvote resources you find useful!

Topics

  • mpi (4)
  • parallelization (4)
  • cuda (2)
  • performance-tuning (2)
  • training (2)
  • benchmarking (1)
  • cloud-computing (1)
  • cybersecurity (1)
  • data-analysis (1)
  • file-transfer (1)
  • finite-element-analysis (1)
  • fluid-dynamics (1)
  • github (1)
  • globus (1)
  • jetstream (1)
  • lustre (1)
  • matlab (1)
  • openmp (1)
  • openmpi (1)
  • profiling (1)
  • r (1)
  • slurm (1)
  • stampede2 (1)
  • workforce-development (1)


Cornell Virtual Workshop
  • Roadmaps in Cornell Virtual Workshop
  • Search for topics
Cornell Virtual Workshop is a comprehensive training resource for high performance computing topics. The Cornell University Center for Advanced Computing (CAC) is a leader in the development and deployment of Web-based training programs. The Cornell Virtual Workshop learning platform is designed to enhance the computational science skills of researchers, accelerate the adoption of new and emerging technologies, and broaden the participation of underrepresented groups in science and engineering. Over 350,000 unique visitors have accessed Cornell Virtual Workshop training on programming languages, parallel computing, code improvement, and data analysis. The platform supports learning communities around the world, with code examples from national systems such as Frontera, Stampede2, and Jetstream2.
jetstream, matlab, cloud-computing, data-analysis, performance-tuning, parallelization, file-transfer, globus, slurm, training, cuda, python, r, mpi
Type: learning
Level: Beginner, Intermediate, Advanced
NCSA HPC Training Moodle
  • NCSA HPC Training Moodle Site
Self-paced tutorials on high-end computing topics such as parallel computing, multi-core performance, and performance tools. Other related topics include 'Cybersecurity for End Users' and 'Developing Webinar Training.' Some of the tutorials also offer digital badges. Many of these tutorials were previously offered on CI-Tutor. A list of open-access training courses is provided below.

  • Parallel Computing on High-Performance Systems
  • Profiling Python Applications
  • Using an HPC Cluster for Scientific Applications
  • Debugging Serial and Parallel Codes
  • Introduction to MPI
  • Introduction to OpenMP
  • Introduction to Visualization
  • Introduction to Performance Tools
  • Multilevel Parallel Programming
  • Introduction to Multi-core Performance
  • Using the Lustre File System
performance-tuning, profiling, parallelization, lustre, training, workforce-development, openmp, python, mpi, cybersecurity
Type: learning
Level: Beginner, Intermediate
MPI Resources
  • Easy MPI Tutorial
  • Open MPI documentation
A workshop for beginner and intermediate MPI students that includes hands-on exercises, together with the official Open MPI documentation.
parallelization, mpi
Type: learning
Level: Beginner, Intermediate
Benchmarking with a cross-platform open-source flow solver, PyFR
  • PyFR documentation
  • PyFR source code on GitHub
  • Discourse channel for discussions and help
What is PyFR and how does it solve fluid flow problems? PyFR is an open-source computational fluid dynamics (CFD) solver, written in Python, that employs the high-order flux reconstruction technique. It targets streaming architectures, making it well suited to complex fluid dynamics simulations.

How does PyFR achieve scalability on clusters with CPUs and GPUs? PyFR uses distributed-memory parallelism via the Message Passing Interface (MPI). It posts persistent, non-blocking point-to-point (P2P) requests and orders its kernel calls so that local computations overlap with the exchange of ghost states. This design allows PyFR to operate efficiently on heterogeneous clusters that combine CPUs and GPUs.

Why is PyFR valuable for benchmarking clusters? PyFR's performance has been recognized by its selection as a finalist for the ACM Gordon Bell Prize for High-Performance Computing. It exploits low-latency inter-GPU communication to achieve strong scaling on unstructured grids, and it has been benchmarked with up to 18,000 NVIDIA K20X GPUs on Titan, demonstrating its efficiency in large-scale simulations.
finite-element-analysis, benchmarking, parallelization, github, fluid-dynamics, openmpi, c++, cuda, mpi
Type: tool
Level: Intermediate
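The persistent, non-blocking halo-exchange pattern described in the PyFR entry can be sketched in MPI C. This is a minimal illustration of the general technique, not PyFR's actual implementation: the ring topology, buffer sizes, and tags here are hypothetical, chosen only to show how persistent P2P requests let local computation overlap with the ghost-state exchange.

```c
/* Sketch: persistent, non-blocking MPI halo exchange on a ring.
 * Each rank trades a "ghost state" buffer with its left and right
 * neighbours while (in a real solver) interior work proceeds.
 * Build and run: mpicc halo.c -o halo && mpirun -np 4 ./halo
 */
#include <mpi.h>
#include <stdio.h>

#define N 1024  /* ghost-state buffer length (illustrative) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;

    double send_l[N], send_r[N], recv_l[N], recv_r[N];
    for (int i = 0; i < N; i++) send_l[i] = send_r[i] = (double)rank;

    /* Persistent requests are created once, then reused each step,
     * avoiding per-iteration setup cost.                            */
    MPI_Request reqs[4];
    MPI_Send_init(send_l, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Send_init(send_r, N, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
    MPI_Recv_init(recv_r, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[2]);
    MPI_Recv_init(recv_l, N, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[3]);

    for (int step = 0; step < 10; step++) {
        MPI_Startall(4, reqs);   /* kick off the non-blocking exchange   */
        /* ... interior (local) computations run here, overlapping
         * with the ghost-state transfer ...                             */
        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
        /* ... boundary computations needing recv_l/recv_r run here ...  */
    }

    for (int i = 0; i < 4; i++) MPI_Request_free(&reqs[i]);

    if (rank == 0) printf("halo exchange complete on %d ranks\n", size);
    MPI_Finalize();
    return 0;
}
```

The key design point, mirroring the entry's description, is that `MPI_Startall` returns immediately, so the kernel calls placed between it and `MPI_Waitall` execute while messages are in flight.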