RMACC Systems Administrator Workshop Slides
A compilation of the slides from this year's RMACC Sys Admin Workshop.
RMACC Sys Admin Workshop Schedule:
Tuesday
12:00 PM Sign-in
1:00 PM Introductions
1:30 PM Lightning Talk - HPC Survival guide
2:00 PM Node Management - Scott Serr
2:30 PM Lightning Talk - Warewulf
3:00 PM Urgent HPC - Coltran Hophan-Nichols and Alexander Salois
Wednesday
9:00 AM Breakfast
10:00 AM Round Table - Sites: BYU, INL, UMT, ASU, MSU
11:00 AM Open OnDemand setup - Dean Anderson
11:30 AM Lightning Talk - Long-term hardware support
12:00 PM Lunch
1:00 PM HPC Security - Matt Bidwell
2:00 PM Lightning Talk - Security
2:30 PM ACCESS resources - Couso
3:00 PM Easybuild tutorial - Alexander Salois
3:30 PM General Q & A
Thursday
9:00 AM Breakfast
10:00 AM Lightning Talk - Containers and Virtual Machines
11:00 AM University of Montana - Hellgate Site Tour
11:30 AM Closing Remarks
Numba: Compiler for Python
Numba is a Python compiler designed for accelerating numerical and array operations, enabling users to enhance their application's performance by writing high-performance functions in Python itself. It utilizes LLVM to transform pure Python code into optimized machine code, achieving speeds comparable to languages like C, C++, and Fortran. Noteworthy features include dynamic code generation during import or runtime, support for both CPU and GPU hardware, and seamless integration with the Python scientific software ecosystem, particularly NumPy.
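As a quick illustration of the compile-in-Python workflow described above, the sketch below uses Numba's njit decorator on a simple array reduction; the function name and array size are illustrative, not taken from the resource.

    import numpy as np
    from numba import njit

    @njit  # compile this function to machine code on its first call
    def sum_of_squares(arr):
        total = 0.0
        for x in arr:
            total += x * x
        return total

    data = np.random.rand(1_000_000)
    sum_of_squares(data)            # first call includes compilation time
    result = sum_of_squares(data)   # later calls run the cached machine code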
GPU Acceleration in Python
This tutorial explains how to use Python for GPU acceleration with libraries like CuPy, PyOpenCL, and PyCUDA. It shows how these libraries can speed up tasks like array operations and matrix multiplication by using the GPU. Examples include replacing NumPy with CuPy for large datasets and using PyOpenCL or PyCUDA for more control with custom GPU kernels. It focuses on practical steps to integrate GPU acceleration into Python programs.
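As a rough sketch of the drop-in pattern the tutorial describes (replacing NumPy with CuPy), the following assumes CuPy is installed and an NVIDIA GPU is available; the array size is illustrative.

    import cupy as cp

    x_gpu = cp.random.rand(4096, 4096)   # array lives in GPU memory
    y_gpu = x_gpu @ x_gpu                # matrix multiply runs on the GPU
    y_host = cp.asnumpy(y_gpu)           # copy the result back to a NumPy array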
RMACC Website
Rocky Mountain Advanced Computing Consortium Website
AWS Tutorial For Beginners
An AWS Tutorial for Beginners is a course that teaches the basics of Amazon Web Services (AWS), a cloud computing platform that offers a wide range of services, including compute, storage, networking, databases, analytics, machine learning, and artificial intelligence.
GDAL Multi-threading
Multi-threading guidance when using GDAL.
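As one hedged illustration of the settings such guidance covers, the sketch below enables thread-related options from GDAL's Python bindings; which operations actually use threads depends on the driver and algorithm, and the file names are placeholders.

    from osgeo import gdal

    # Let drivers and algorithms that honor GDAL_NUM_THREADS use all CPU cores
    gdal.SetConfigOption("GDAL_NUM_THREADS", "ALL_CPUS")

    # gdal.Warp also accepts a multithread flag for the warping step
    gdal.Warp("out.tif", "in.tif", multithread=True)  # placeholder file names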
Introductory Python Lecture Series
A lecture series and accompanying notes aimed at teaching introductory Python, starting with how to download and start using Python, then expanding to basic syntax for lists, arrays, loops, and methods.
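A small illustrative taste of the kind of syntax the series covers (not taken from the lecture notes themselves):

    squares = []                  # a list
    for n in range(1, 6):         # a loop
        squares.append(n * n)     # calling a list method
    print(squares)                # prints [1, 4, 9, 16, 25]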
Official Documentation for PyTorch and NumPy
The official documentation for PyTorch, a tensor-based machine learning framework, and NumPy, which provides ndarrays that are useful for constructing tensors when implementing neural networks. Both libraries can be installed with pip.
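A minimal sketch of moving data between the two libraries, assuming both are installed; the array contents are illustrative.

    import numpy as np
    import torch

    arr = np.arange(6, dtype=np.float32).reshape(2, 3)
    t = torch.from_numpy(arr)   # tensor that shares memory with the ndarray
    back = t.numpy()            # view the CPU tensor as an ndarray again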
Using Dask on HPC Systems
A tutorial on the effective use of Dask on HPC resources. The four-hour tutorial will be split into two sections, with early topics focused on novice Dask users and later topics focused on intermediate usage on HPC and associated best practices. The knowledge areas covered include (but are not limited to):
Beginner section
High-level collections including dask.array and dask.dataframe
Distributed Dask clusters using HPC job schedulers (see the sketch after this list)
Earth Science data analysis using Dask with Xarray
Using the Dask dashboard to understand your computation
Intermediate section
Optimizing the number of workers and memory allocation
Choosing appropriate chunk shapes and sizes for Dask collections
Querying resource usage and debugging errors
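As a rough sketch of the distributed-clusters-on-job-schedulers topic above, the following assumes the dask-jobqueue package and a Slurm scheduler; the resource requests are placeholders to adapt to your site.

    import dask.array as da
    from dask.distributed import Client
    from dask_jobqueue import SLURMCluster

    # Each Slurm job contributes one worker with these (placeholder) resources
    cluster = SLURMCluster(cores=4, memory="16GB", walltime="01:00:00")
    cluster.scale(jobs=4)        # submit four worker jobs to the queue
    client = Client(cluster)     # connect a Dask client to the cluster

    x = da.random.random((20000, 20000), chunks=(2000, 2000))
    print(x.mean().compute())    # computation runs on the Slurm-managed workers

The dashboard mentioned above can then be reached through the client's reported dashboard link.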
C Programming
"These notes are part of the UW Experimental College course on Introductory C Programming. They are based on notes prepared (beginning in Spring, 1995) to supplement the book The C Programming Language, by Brian Kernighan and Dennis Ritchie, or K&R as the book and its authors are affectionately known. (The second edition was published in 1988 by Prentice-Hall, ISBN 0-13-110362-8.) These notes are now (as of Winter, 1995-6) intended to be stand-alone, although the sections are still cross-referenced to those of K&R, for the reader who wants to pursue a more in-depth exposition." C is a low-level programming language that provides a deep understanding of how a computer's memory and hardware work. This knowledge can be valuable when optimizing apps for performance or when dealing with resource-constrained environments.C is often used as the foundation for creating cross-platform libraries and frameworks. Learning C can allow you to develop libraries that can be used across different platforms, including iOS, Android, and desktop environments.
MPI Resources
A workshop for beginner and intermediate MPI students that includes helpful exercises, along with the Open MPI documentation.
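For a flavor of the message-passing model the exercises cover, here is a minimal sketch using the mpi4py Python bindings (an assumption; the workshop materials themselves may use C or Fortran).

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()           # this process's ID within the communicator
    size = comm.Get_size()           # total number of processes

    if rank == 0:
        comm.send({"msg": "hello from rank 0"}, dest=1, tag=0)
    elif rank == 1:
        data = comm.recv(source=0, tag=0)
        print(f"rank 1 of {size} received: {data}")

Launched, for example, with mpirun -n 2 python hello_mpi.py (the script name is illustrative).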
RaftLib: Open-source library for concurrent data processing pipelines
RaftLib is an open-source C++ library that provides a framework for implementing parallel and concurrent data processing pipelines. It is designed to simplify the development of high-performance data processing applications by abstracting away the complexities of parallelism, concurrency, and data flow management.
It enables stream/data-flow parallel computation by linking parallel compute kernels together using simple right shift operators, similar to C++ streams for string manipulation. RaftLib eliminates the need for explicit usage of traditional threading libraries such as pthreads, std::thread, or OpenMP, which can lead to non-deterministic behavior when misused.
fast.ai
fast.ai offers many tools for people working with machine learning and artificial intelligence, including tutorials on PyTorch, its own library built on top of PyTorch, news articles, and other resources for diving into the field.
Use Windows Subsystem for Linux for HPC Command Line Access from Windows
Windows Subsystem for Linux (WSL) provides a Linux environment that lets Windows users access HPC resources quickly and efficiently.
Big Data Research at the University of Colorado Boulder
Background: Big data, defined as having high volume, complexity or velocity, have the potential to greatly accelerate research discovery. Such data can be challenging to work with and require research support and training to address technical and ethical challenges surrounding big data collection, analysis, and publication.
Methods: The present study was conducted via a series of semi-structured interviews to assess big data methodologies employed by CU Boulder researchers across a broad sample of disciplines, with the goal of illuminating how they conduct their research; identifying challenges and needs; and providing recommendations for addressing them.
Findings: Key results and conclusions from the study indicate: gaps in awareness of existing big data services provided by CU Boulder; open questions surrounding big data ethics, security and privacy issues; a need for clarity on how to attribute credit for big data research; and a preference for a variety of training options to support big data research.
The Official Documentation of Pandas
Pandas is one of the most essential Python libraries for data analysis and manipulation. It provides high-performance, easy-to-use data structures and data analysis tools for the Python programming language. The official documentation serves as an in-depth guide to using this powerful tool, including explanations and examples.
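A minimal sketch of the kind of workflow the documentation covers (column names and values are illustrative):

    import pandas as pd

    df = pd.DataFrame({
        "site": ["BYU", "UMT", "BYU", "UMT"],
        "jobs": [120, 80, 95, 110],
    })
    # Group by site and compute the mean number of jobs per site
    print(df.groupby("site")["jobs"].mean())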
Language models and using HPC resources
Documentation and research covering the latest NLP text-generation detection methods as of 2023.
Neural Networks in Julia
Making a neural network has never been easier! The following link directs users to the Flux.jl package, an easy way to program a neural network in the Julia programming language. Julia is one of the fastest-growing languages for AI/ML, and this package provides a faster alternative to Python's TensorFlow and PyTorch, written in 100% native Julia with GPU support.
Neurodesk
Neurodesk provides a containerised data analysis environment to facilitate reproducible analysis of neuroimaging data. Analysis pipelines for neuroimaging data typically rely on specific versions of packages and software, and are dependent on their native operating system. These dependencies mean that a working analysis pipeline may fail or produce different results on a new computer, or even on the same computer after a software update. Neurodesk provides a platform in which anyone, anywhere, using any computer can reproduce your original research findings given the original data and analysis code.
ACCESS Events and Training
Listing of upcoming ACCESS related events and training activities.
Workshop on LangChain and GPT
This interactive workshop introduces participants to the power of GPT and LangChain for solving domain-specific scientific challenges. Participants will learn how to use these tools to address real research problems, such as predicting molecular properties or analyzing large-scale datasets in genomics. Through guided tutorials and hands-on project development, attendees will leave with a working application tailored to their own research needs.
MATLAB bioinformatics toolbox
Bioinformatics Toolbox provides algorithms and apps for Next Generation Sequencing (NGS), microarray analysis, mass spectrometry, and gene ontology. Using toolbox functions, you can read genomic and proteomic data from standard file formats such as SAM, FASTA, CEL, and CDF, as well as from online databases such as the NCBI Gene Expression Omnibus and GenBank.
Spack Documentation
Spack is a package manager for supercomputers that can help administrators install scientific software and libraries for multiple complex software stacks.
Slurm Tutorials
Introduction to the Slurm Workload Manager for users and system administrators, plus some material for Slurm programmers.