Training an LSTM Model in PyTorch
This Google Colab notebook tutorial demonstrates how to create and train an LSTM model in PyTorch to predict time series data, using an airline passenger dataset as an example.
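As a hedged illustration (not the notebook's actual code), a minimal PyTorch LSTM forecaster for a univariate series might look like the following; the class name, layer sizes, and window length are placeholders.

    import torch
    import torch.nn as nn

    class LSTMForecaster(nn.Module):
        # Illustrative model: one LSTM layer followed by a linear head
        def __init__(self, input_size=1, hidden_size=64, num_layers=1):
            super().__init__()
            self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)

        def forward(self, x):
            # x has shape (batch, sequence_length, input_size)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])  # predict the next value from the last time step

    model = LSTMForecaster()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # One illustrative training step on random stand-in data
    x = torch.randn(32, 12, 1)   # 32 windows of 12 past observations each
    y = torch.randn(32, 1)       # the value to predict for each window
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()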
Spack Documentation
Spack is a package manager for supercomputers that can help administrators install scientific software and libraries for multiple complex software stacks.
iOS CoreML + SwiftUI Image Classification Model
This tutorial teaches, step by step, how to create an image classification model using Core ML in Xcode and integrate it into an iOS app that uses the iPhone camera to scan objects and classify them with the model.
Recommended Libraries for Cyberinfrastructure Users Developing Jupyter Notebooks
This repository contains information about Jupyter Widgets and how they can be used to develop interactive workflows, data dashboards, and web applications that can be run on HPC systems and science gateways. Easy-to-build web applications are not only useful for scientists; they can also be used by software engineers and system admins who want to quickly create tools for file management and more!
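As a rough sketch of the kind of interactivity Jupyter Widgets enable (assuming the ipywidgets package; the function and slider here are illustrative, not from the repository):

    import ipywidgets as widgets

    # A slider that controls a simple computation; interact() wires the widget
    # to the function and re-runs it whenever the slider moves.
    def report(threshold=0.5):
        print(f"Current threshold: {threshold}")

    widgets.interact(report, threshold=widgets.FloatSlider(min=0.0, max=1.0, step=0.05, value=0.5))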
Fine-tuning LLMs with PEFT and LoRA
As LLMs grow larger, full fine-tuning becomes difficult to run on consumer hardware, and storing and deploying the resulting tuned models is expensive. PEFT (parameter-efficient fine-tuning) instead fine-tunes a small subset of model parameters while freezing most of the pretrained LLM, delivering performance comparable to, and sometimes better than, full fine-tuning with only a small number of trainable parameters. This source explains the approach and also walks through LoRA diagrams and a code example.
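A minimal sketch of the idea, assuming Hugging Face's transformers and peft packages; the base model name and LoRA hyperparameters below are placeholders rather than values from the source:

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Load a pretrained LLM (model name is a placeholder) and wrap it with LoRA adapters.
    base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

    lora_config = LoraConfig(
        r=8,              # rank of the low-rank update matrices
        lora_alpha=16,    # scaling factor applied to the LoRA updates
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base_model, lora_config)
    # Typically only a small fraction of the base model's parameters are trainable.
    model.print_trainable_parameters()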
ACCESS Getting Started Quick-Guide
A step-by-step guide to getting your first allocation for ACCESS computing and storage resources.
Applications of Machine Learning in Engineering and Parameter Tuning Tutorial
Slides for a tutorial on machine learning applications in engineering and parameter tuning, presented at the 2019 RMACC conference.
Fundamentals of Cloud Computing
An introduction to Cloud Computing
Slurm Tutorials
Introduction to the Slurm Workload Manager for users and system administrators, plus some material for Slurm programmers.
Set Up VSCode for Python and GitHub
VSCode is a popular IDE that runs on Windows, macOS, and Linux. This tutorial explains how to set up VSCode for coding in Python, and it also covers setting up GitHub integration within VSCode.
Hour of CI
Hour of Cyberinfrastructure (Hour of CI) is a nationwide campaign to introduce undergraduate and graduate students to cyberinfrastructure and geographic information science (GIS).
Data visualization with Matplotlib
Data visualization is a critical aspect of data analysis. It allows for a clear and concise representation of data, making it easier for users to understand and interpret complex datasets. One of the most popular libraries for data visualization in Python is Matplotlib. The included website aims to provide a brief overview of Matplotlib, its features, and examples/exercises to dive deeper into its functionalities.
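A minimal example of the kind of plot Matplotlib produces (illustrative only, not taken from the linked site):

    import numpy as np
    import matplotlib.pyplot as plt

    # Plot two related curves on one figure with labels, a legend, and a saved image.
    x = np.linspace(0, 2 * np.pi, 200)
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.plot(x, np.sin(x), label="sin(x)")
    ax.plot(x, np.cos(x), linestyle="--", label="cos(x)")
    ax.set_xlabel("x")
    ax.set_ylabel("value")
    ax.set_title("A minimal Matplotlib example")
    ax.legend()
    fig.savefig("example.png", dpi=150)
    plt.show()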
Working with Python on HPC Clusters
This tutorial series and documentation cover topics on using Python on HPC clusters. The specific steps are based on the HOPPER cluster at George Mason University in Fairfax, VA. They should be implementable on most HPC clusters that have the Slurm scheduler installed, the Environment Modules system for managing packages, and Open OnDemand for a web-based GUI to access the cluster resources.
ACES: Charliecloud Containers for Scientific Workflows (Tutorial)
This tutorial introduces the use of containers with the Charliecloud software suite. It provides participants with background and hands-on experience using basic Charliecloud containers for HPC applications. We discuss what containers are, why they matter for HPC, and how they work. We'll give an overview of Charliecloud, the unprivileged container solution from Los Alamos National Laboratory's HPC Division. Students will learn how to build toy containers and containerize real HPC applications, and then run them on a cluster. Exercises are demonstrated using the ACES cluster, a composable accelerator testbed at Texas A&M University. Students with an allocation on the ACES cluster can follow along with the ACES-specific exercises.
RaftLib: Open-source library for concurrent data processing pipelines
RaftLib is an open-source C++ library that provides a framework for implementing parallel and concurrent data processing pipelines. It is designed to simplify the development of high-performance data processing applications by abstracting away the complexities of parallelism, concurrency, and data flow management.
It enables stream/data-flow parallel computation by linking parallel compute kernels together using simple right shift operators, similar to C++ streams for string manipulation. RaftLib eliminates the need for explicit usage of traditional threading libraries such as pthreads, std::thread, or OpenMP, which can lead to non-deterministic behavior when misused.
Introduction to Linux CLI for Researchers
The goal of this video is to give researchers and students who have recently received allocations on High Performance Computing resources a basic introduction to Linux commands to help them get started. It covers a few of the most fundamental commands for navigating the system.
If you find this video helpful or would like me to continue this series, let me know!
Better Scientific Software (BSSw)
The Better Scientific Software (BSSw) project provides a community to collaborate and learn about best practices in scientific software development. Software—the foundation of discovery in computational science & engineering—faces increasing complexity in computational models and computer architectures. BSSw provides a central hub for the community to address pressing challenges in software productivity, quality, and sustainability.
InsideHPC
InsideHPC is an informational site that offers videos, research papers, articles, and other resources focused on machine learning and quantum computing, among other topics within high performance computing.
Introduction to Probabilistic Graphical Models
This website summarizes the notes of Stanford's introductory course on probabilistic graphical models.
It starts from the very basics and concludes by explaining from first principles the variational auto-encoder, an important probabilistic model that is also one of the most influential recent results in deep learning.
Open Storage Network
The Open Storage Network, a national resource available through the XSEDE resource allocation system, is a high-quality, sustainable, distributed storage cloud for the research community.
Official Documentation of VisIt
VisIt is a prominent open-source, interactive parallel visualization and graphical analysis tool predominantly used for viewing scientific data. Its GitHub repository offers detailed insight into the software's source code, documentation, and contribution guidelines, and in particular provides useful examples of how the tool can be used.
Thrust resources
Thrust is a C++ template library shipped with the CUDA Toolkit that provides high-level parallel algorithms (such as sort, reduce, and scan) and containers, handling GPU parallelization for you. The Thrust tutorial is great for beginners, and the documentation is helpful for anyone using Thrust.
AHPCC documentation
This link points to the documentation website for using AHPCC.
Bridges-2 Home Page
Landing Page for Bridges-2 information
Displaying Scientific Data with Tableau
Tableau is a popular and capable software product for creating charts that present data and dashboards that allow you to explore data. It is typically used to present business or statistical data, but can also create compelling visualizations of scientific data. However, scientific data is often generated or stored in formats that are not immediately accessible by Tableau. This seminar will explore the data formats that work best with Tableau and the available mechanisms for generating scientific data in (or converting it to) those formats so that you can apply the full power of Tableau to create the best possible visualizations of your data.
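As a hedged sketch of the conversion idea (not taken from the seminar itself), array-shaped scientific data can be flattened with pandas into the tidy, one-row-per-observation CSV layout that Tableau reads most easily; the grid and file name below are illustrative:

    import numpy as np
    import pandas as pd

    # A 2-D simulation field (values on an x/y grid) is flattened into
    # long-form rows of (x, y, value) and written to CSV for Tableau to open.
    nx, ny = 50, 40
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    field = np.random.rand(nx, ny)   # stand-in for real scientific output

    df = pd.DataFrame({
        "x": x.ravel(),
        "y": y.ravel(),
        "value": field.ravel(),
    })
    df.to_csv("field_for_tableau.csv", index=False)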