
Bayesian nonparametric ensemble air quality model: daily nationwide predictions at 1 km grid-cell resolution
I aim to run a Bayesian Nonparametric Ensemble (BNE) machine learning model implemented in MATLAB. I previously tested the model successfully on Columbia's HPC GPU cluster using SLURM. Since then, I have enabled MATLAB parallel computing and optimized my script for parallel execution.
I want to leverage ACCESS Accelerate allocations to run this model at scale.
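A minimal SLURM batch sketch for a MATLAB GPU run of this kind is shown below. The partition, module, and script names (`gpu`, `matlab`, `run_bne`) are placeholders and will vary by site; the memory request mirrors the RAM requirement listed later in this request.

```shell
#!/bin/bash
#SBATCH --job-name=bne_daily         # hypothetical job name
#SBATCH --partition=gpu              # GPU partition name varies by cluster
#SBATCH --gres=gpu:1                 # request one GPU per job
#SBATCH --cpus-per-task=8            # cores for the MATLAB parallel pool
#SBATCH --mem=725G                   # matches the stated RAM requirement
#SBATCH --time=48:00:00              # wall time; adjust per run

module load matlab                   # module name varies by cluster
matlab -batch "run_bne"              # run_bne.m is a placeholder script name
```

Daily predictions for different dates could then be submitted as a SLURM job array, one array index per day.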
The BNE framework is an innovative ensemble modeling approach designed for high-resolution air pollution exposure prediction and spatiotemporal uncertainty characterization. This work requires significant computational resources due to the complexity and scale of the task. Specifically, the model predicts daily air pollutant concentrations (PM2.5 and NO2) at a 1 km grid resolution across the United States, spanning the years 2010–2018. Each daily prediction dataset is approximately 6 GB, resulting in substantial storage and processing demands.
To ensure efficient training, validation, and execution of the ensemble models at a national scale, I need access to GPU clusters with the following resources:
- Permanent storage: ≥100 TB
- Temporary storage: ≥50 TB
- RAM: ≥725 GB
In addition to MATLAB, I also require Python and R installed on the system. I use Python notebooks to analyze output data and run R packages through a conda environment in Jupyter Notebook. These tools are essential for post-processing and visualization of model predictions, as well as for running complementary statistical analyses.
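A sketch of the post-processing environment setup, assuming conda-forge packages (`r-base`, `r-irkernel`, `jupyterlab` are standard conda-forge package names; the environment name `bne-post` is a placeholder):

```shell
# Create a conda environment with Python, R, and JupyterLab
conda create -n bne-post -c conda-forge python=3.11 r-base r-irkernel jupyterlab
conda activate bne-post

# Register the R kernel so R notebooks are available inside Jupyter
R -e 'IRkernel::installspec(name = "ir", displayname = "R")'

# Launch Jupyter for analysis and visualization of model output
jupyter lab
```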
To finalize the GPU system configuration based on my requirements and initial runs, I would appreciate guidance from an expert. Since I already have approval for the ACCESS Accelerate allocation, this support will help ensure a smooth setup and efficient utilization of the allocated resources.
Conservation Stewardship Legacy for Porcupines and Chipmunks
I am transitioning workflows from lab computers to HPC via ACCESS for the first time. While I have advanced knowledge of R and command-line operations, I have not used HPC before. I can navigate the ACCESS portals without difficulty; my challenge is getting jobs running on the machines.
The project I am focusing on uses hierarchical Bayesian models written in R to 1) evaluate the effects of weather and climate on a vulnerable species of chipmunk; and 2) assess whether the North American porcupine has experienced an otherwise unnoticed range-wide decline. I have working R code and the data structures I need to run it. Runtime and memory requirements are beyond a typical desktop's capacity, though. Since this is a Bayesian model, I expect I need just 4 cores (at a time), but each running for quite a few hours (days/weeks?), with "modest" memory requirements (100–200 GB). Data storage is likely < 1 TB.
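A job of that shape might be sketched as the following SLURM script; the module and script names (`R`, `fit_model.R`) are placeholders, one core is assumed per MCMC chain, and the memory and time limits reflect the upper end of the estimates above.

```shell
#!/bin/bash
#SBATCH --job-name=bayes_occupancy   # hypothetical job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4            # one core per MCMC chain
#SBATCH --mem=200G                   # upper end of the stated memory range
#SBATCH --time=14-00:00:00           # 14 days; very long chains may need checkpointing

module load R                        # module name varies by cluster
Rscript fit_model.R                  # placeholder for the working R code
```

For runs expected to exceed a cluster's maximum wall time, periodically saving sampler state (e.g., with `saveRDS()`) allows chains to be resumed in a follow-up job.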