- Samtools Documentation: Samtools is a suite of programs for interacting with high-throughput sequencing data, especially in the SAM/BAM format. It offers various utilities for processing, analyzing, and managing sequence data generated from next-generation sequencing (NGS) experiments. Samtools is widely used in bioinformatics and genomics research for tasks such as read alignment, variant calling, and data manipulation.
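Samtools itself is a command-line suite, but the same BAM-processing operations are often scripted from Python through the pysam bindings (built on htslib). Below is a minimal sketch; the file name example.bam, the region coordinates, and the quality cutoff are invented for illustration, and a coordinate-sorted, indexed BAM is assumed.

```python
import pysam  # Python bindings over htslib, commonly used alongside samtools

# Hypothetical input: a coordinate-sorted BAM with an index (example.bam + example.bam.bai)
with pysam.AlignmentFile("example.bam", "rb") as bam:
    high_quality = 0
    # Iterate over reads overlapping an arbitrary illustrative region
    for read in bam.fetch("chr1", 100_000, 101_000):
        if not read.is_unmapped and read.mapping_quality >= 30:
            high_quality += 1
    print(f"high-quality mapped reads in region: {high_quality}")
```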
- Introduction to Linux CLI for Researchers: The goal of this video is to give researchers and students who have recently received allocations on High Performance Computing resources a basic introduction to Linux commands to help them get started. These are a few of the most fundamental commands for navigating the system and getting started. If you find this video helpful or would like me to continue this series, let me know!
- Charliecloud User Group: Announcements for users and developers of Charliecloud, which provides lightweight user-defined software stacks for high-performance computing.
- Jetstream2 Docs Site: Jetstream2 makes cutting-edge high-performance computing and software easy to use for your research regardless of your project’s scale—even if you have limited experience with supercomputing systems. Cloud-based and on-demand, the 24/7 system includes discipline-specific apps. You can even create virtual machines that look and feel like your lab workstation or home machine, with thousands of times the computing power.
- QGIS Processing Executor: Running QGIS tools from the command line.
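As a rough illustration of what running QGIS tools from the command line can look like, the sketch below drives the qgis_process executable from Python. The input/output paths and buffer distance are invented, qgis_process is assumed to be on the PATH, and the exact invocation recommended by the linked resource may differ.

```python
import subprocess

# Hypothetical paths and parameters; assumes the qgis_process executable is installed and on PATH
cmd = [
    "qgis_process", "run", "native:buffer",  # run the built-in buffer algorithm
    "--INPUT=parcels.shp",                    # input vector layer (made-up file name)
    "--DISTANCE=25",                          # buffer distance in layer units
    "--OUTPUT=parcels_buffered.gpkg",         # where to write the result
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout or result.stderr)
```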
- TensorFlow for Deep Neural Networks: TensorFlow is a powerful framework for deep learning, developed by Google. This resource specifically covers its Python package, which is easy to use and can be used to train incredibly powerful models.
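To give a flavor of the Python package, here is a minimal sketch of training a small feed-forward network with the Keras API that ships with TensorFlow; the synthetic data, layer sizes, and epoch count are arbitrary choices for illustration.

```python
import numpy as np
import tensorflow as tf

# Synthetic toy data: 200 samples, 10 features, binary labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)).astype("float32")
y = (X[:, 0] > 0).astype("float32")

# A small feed-forward binary classifier
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```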
- Working with Python on HPC Clusters: This tutorial series and documentation covers topics on using Python on HPC clusters. The specific steps are based on the HOPPER cluster at George Mason University in Fairfax, VA, but they should be implementable on most HPC clusters that have the SLURM scheduler, the Environment Modules system for managing packages, and Open OnDemand for a web-based GUI to access the cluster resources.
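One common pattern in this setting (shown here as a generic sketch, not taken from the HOPPER documentation) is sizing a Python worker pool from the CPU count that SLURM grants the job; SLURM_CPUS_PER_TASK is a standard SLURM environment variable, set when the job requests --cpus-per-task.

```python
import os
from multiprocessing import Pool

# SLURM exports SLURM_CPUS_PER_TASK when the job requests --cpus-per-task;
# fall back to 1 so the script also runs outside the scheduler.
n_workers = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(processes=n_workers) as pool:
        print(pool.map(square, range(10)))
```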
- Installing Rocky Linux Operating System: Rocky Linux is an open-source enterprise operating system compatible with Red Hat Enterprise Linux (RHEL). It is a community-driven project that provides a stable and reliable platform for production workloads, and it is one of the best alternatives to CentOS, since CentOS reaches end of life (EoL) in 2024 with the shift to CentOS Stream.
- phenoACCESS-24 workshop program materials: phenoACCESS-24: Workshop on Research Computing and Plant Phenotyping. High-throughput plant phenotyping is computationally intensive, requiring data storage, data processing and analysis, research computing expertise, and mechanisms for data sharing. This workshop is aimed at research computing workforce development by addressing questions such as: what is plant phenotyping; what types of data are collected; what are the preprocessing and analytical needs; what tools and platforms exist for data capture, management, analysis, and storage; and how best to collaborate and engage with phenotyping researchers. The full-day agenda will include speakers (scientists and research compute staff), panel discussions (how to work with research computing staff and facilities; how to engage with phenotyping scientists), and networking opportunities (meet-and-greet, ice breakers, small group discussions). The videos and slide decks for the talks are included on the linked page.
- Bridges-2 Home Page: Landing page for Bridges-2 information.
- Data Imputation Methods for Climate Data and Mortality Data
- Data Imputation Methods for Climate Data and Mortality Data - Slides
- GitHub repository
- Data Imputation Methods for Climate Data and Mortality Data - Full Tutorial
These slides and videos introduce how to use the K-Nearest-Neighbors method to impute climate data and how to use Bayesian spatio-temporal models in R-INLA to impute mortality data. The demos will be added soon. (A minimal KNN-imputation sketch appears after the next entry.)
- What is fairness in ML?: This article discusses the importance of fairness in machine learning and provides insights into how Google approaches fairness in their ML models. The article covers several key topics: Introduction to fairness in ML: It provides an overview of why fairness is essential in machine learning systems, the potential biases that can arise, and the impact of biased models on different communities. Defining fairness: The article discusses various definitions of fairness, including individual fairness, group fairness, and disparate impact. It explains the challenges in achieving fairness due to trade-offs and the need for thoughtful considerations. Addressing bias in training data: It explores how biases can be present in training data and offers strategies to identify and mitigate these biases. Techniques like data preprocessing, data augmentation, and synthetic data generation are discussed. Fairness in ML algorithms: The article examines the potential biases that can arise from different machine learning algorithms, such as classification and recommendation systems. It highlights the importance of evaluating and monitoring models for fairness throughout their lifecycle. Fairness tools and resources: It showcases various tools and resources available to practitioners and developers to help measure, understand, and mitigate bias in machine learning models. Google's TensorFlow Extended (TFX) and What-If Tool are mentioned as examples. Google's approach to fairness: The article highlights Google's commitment to fairness and the steps they take to address fairness challenges in their ML models. It mentions the use of fairness indicators, ongoing research, and partnerships to advance fairness in AI. Overall, the article provides a comprehensive overview of fairness in machine learning and offers insights into Google's approach to building fair ML models.
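Referring back to the data-imputation entry above: the KNN idea can be sketched with scikit-learn's KNNImputer. The tutorial itself may use a different implementation (and the R-INLA part is not shown here); the tiny climate-style table below is invented for illustration.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Invented toy "climate" table: rows are observations, columns are variables
# (e.g., temperature, humidity, pressure), with np.nan marking missing values.
X = np.array([
    [21.0, 55.0, 1012.0],
    [22.5, np.nan, 1010.5],
    [np.nan, 60.0, 1011.0],
    [20.0, 58.0, np.nan],
])

# Each missing value is filled from its k nearest complete neighbors (k=2 here)
imputer = KNNImputer(n_neighbors=2, weights="distance")
print(imputer.fit_transform(X))
```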
- Mechanism and Implementation of Various MPI Libraries
- Tutorial for MPI Working Mechanism and Detailed Implementation
- A Simple Running Case of Open MPI on clusters
There is a detailed explanation of the communication routines and management methods of different MPI libraries, as well as several exercises designed to help users get familiar with the MPI build process. (A minimal mpi4py sketch appears after the Weka entry below.)
- Weka: Weka is a collection of machine learning algorithms for data mining tasks. It contains tools for data preparation, classification, regression, clustering, association rules mining, and visualization.
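As referenced in the MPI entry above, here is a minimal mpi4py sketch of a collective communication routine (a broadcast). mpi4py is only one of several ways to exercise an MPI library, and the launch command depends on the cluster's MPI installation.

```python
# Launch with something like: mpirun -n 4 python mpi_bcast.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Rank 0 broadcasts a Python object to every other rank
data = {"message": "hello from rank 0"} if rank == 0 else None
data = comm.bcast(data, root=0)

print(f"rank {rank} of {size} received: {data['message']}")
```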
- Thrust resources: Thrust is a CUDA library that optimizes parallelization on the GPU for you. The Thrust tutorial is great for beginners. The documentation is helpful for anyone using Thrust.
- NITRC: The Neuroimaging Tools and Resources Collaboratory (NITRC) is a neuroimaging informatics knowledge environment for MR, PET/SPECT, CT, EEG/MEG, optical imaging, clinical neuroinformatics, imaging genomics, and computational neuroscience tools and resources.
- Time-Series LSTMs Python Walkthrough: A walkthrough (with a Google Colab link) on how to implement your own LSTM to observe time-dependent behavior.
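The walkthrough itself lives in the linked Colab; as a generic sketch of the same idea, the code below windows a synthetic sine wave and fits a small Keras LSTM to predict the next value. The window length, layer size, and epoch count are arbitrary.

```python
import numpy as np
import tensorflow as tf

# Synthetic time series: a sine wave, cut into sliding windows of length 30
series = np.sin(np.linspace(0, 20 * np.pi, 2000)).astype("float32")
window = 30
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., np.newaxis]
y = series[window:]  # target: the value right after each window

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print(model.predict(X[:1], verbose=0))  # predicted next value for the first window
```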
- The Use of High-Performance Computing Services in University Settings: A Usability Case Study of the University of Cincinnati’s High-Performance Computing Cluster: This presentation gives a detailed breakdown of the outcome of my master's thesis, which focused on making HPC clusters accessible across all disciplines in a university setting; our case study was the University of Cincinnati.
- Building Anaconda Navigator applications: This tutorial explains how to create an Anaconda Navigator Application (app) for JupyterLab. It is intended for users of Windows, macOS, and Linux who want to generate an Anaconda Navigator app conda package from a given recipe. Prior knowledge of conda-build or conda recipes is recommended.
- Machine Learning in R online book: The free online book for the mlr3 machine learning framework for R. It gives a comprehensive overview of the package and ecosystem, suitable for everyone from beginners to experts. You'll learn how to build and evaluate machine learning models, build complex machine learning pipelines, tune their performance automatically, and explain how machine learning models arrive at their predictions.
- NERSC Training and Tutorials
- NERSC Training and Tutorials Main Site
- NERSC Upcoming and Recent Training Events
- NERSC Archived Training and Tutorials
A comprehensive collection of NERSC-developed training and tutorial events, offered on regular schedules. All sessions are archived, including slide decks, video recordings, and software examples where available. Some examples of past training and tutorial topics include the Deep Learning for Sciences Webinar Series, the BerkeleyGW Tutorial Workshop, VASP Trainings, the Timemory Software Monitoring Tutorial (April 2021), the HPCToolkit to Measure and Analyze GPU Applications Performance Tutorial, the Totalview Tutorial, NVidia HPCSDK - OpenMP Target Offload Training, the Parallelware Training Series, the ARM Debugging and Profiling Tools Tutorial, Roofline on NVIDIA GPUs, GPUs for Science events, a 3-part OpenACC Training Series, and a 9-part CUDA Training Series.
- Developer Stories Podcast: As developers, we get excited to think about challenging problems. When you ask us what we are working on, our eyes light up like children in a candy store. So why is it that so many of our developer and software origin stories are not told? How did we get to where we are today, and what did we learn along the way? This podcast aims to look “Behind the Scenes of Tech’s Passion Projects and People.” We want to know your developer story, what you have built, and why. We are an inclusive community - whatever kind of institution or country you hail from, if you are passionate about software and technology you are welcome!
- Introduction to Visualization on HPC Using Python: This workshop has an introduction to the concepts of visualization followed by hands-on exercises. The concepts section has Speaker Notes, and the hands-on section has an accompanying Jupyter notebook. The workshop is one in a series of Introduction to HPC workshops.
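One detail that comes up when plotting on HPC nodes (shown here as a generic sketch, not taken from the workshop notebook) is selecting a non-interactive Matplotlib backend so figures render without a display and are written straight to files:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering; compute nodes usually have no display
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("amplitude")
ax.legend()
fig.savefig("sine.png", dpi=150)  # write the figure to a file instead of opening a window
```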
- Representation Learning in Deep Learning: Representation learning is a fundamental concept in machine learning and artificial intelligence, particularly in the field of deep learning. At its core, representation learning involves the process of transforming raw data into a form that is more suitable for a specific task or learning objective. This transformation aims to extract meaningful and informative features or representations from the data, which can then be used for various tasks like classification, clustering, regression, and more.
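To make the idea concrete with the simplest possible stand-in (a linear one, not the deep, nonlinear representations the entry is about), the sketch below derives a compact representation of raw pixel data with PCA and feeds it to a downstream classifier; the dataset and the 16-dimensional size are arbitrary choices.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Raw data: 8x8 digit images flattened to 64 pixel features
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Derive a 16-dimensional representation of the raw pixels (linear, unsupervised)
pca = PCA(n_components=16).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# The compact representation then serves a downstream task: classification
clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
print("test accuracy on 16-D features:", round(clf.score(Z_test, y_test), 3))
```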
- MOPAC: MOPAC (Molecular Orbital PACkage) is a semi-empirical quantum chemistry package used to compute molecular properties and structures by using approximations of the Schrödinger equation. This tutorial explains the process of using MOPAC for different forms of calculations.