About ACCESS Pegasus
Run Jobs and Workflows on ACCESS Resources from a Single Entry Point

- Get started quickly with sample workflows using a Python API
- Construct, submit, and monitor workflows from a Jupyter Notebook
- Track workflows and debug them when failures occur
- Perform simple interactions on the command line

We are continually developing ACCESS Pegasus. Currently you can run high-throughput workflows of jobs that each fit on a single compute node (single-core, multi-core, or single-node MPI jobs). For workflows with larger MPI jobs, reach out to us.
Powerful Features
Workflows
Why Use Workflows
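A workflow captures a set of jobs plus the data dependencies between them, so the system can run independent jobs in parallel and pick up cleanly after failures. As a minimal illustration (plain Python with the standard library, not the Pegasus API), the classic diamond-shaped workflow can be written as a small dependency graph and ordered for execution:

```python
from graphlib import TopologicalSorter

# The "diamond" shape: one preprocess job fans out to two parallel
# findrange jobs, whose outputs are combined by a final analyze job.
# Each key maps to the set of jobs it depends on.
diamond = {
    "findrange1": {"preprocess"},
    "findrange2": {"preprocess"},
    "analyze": {"findrange1", "findrange2"},
}

# A valid execution order: 'preprocess' first, 'analyze' last;
# the two findrange jobs in between could run in parallel.
order = list(TopologicalSorter(diamond).static_order())
print(order)
```

A workflow system does this ordering (and the data staging, retries, and monitoring around it) for you at scale.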
View Workflow Examples
We have Jupyter-based training notebooks available that walk you through creating a simple diamond workflow (and more complex ones) using the Pegasus Python API and executing them on ACCESS resources.
Get Started with ACCESS Pegasus
To get started you only need some Python/Jupyter Notebook knowledge, some experience using a terminal window, and an ACCESS allocation.
Find out about getting an ACCESS Allocation.
Setup
The first time you log on, you need to specify which allocations you have. Log on with your ACCESS ID and use Open OnDemand to get set up.

Single Sign On with your ACCESS ID
All registered users with an active allocation automatically have an ACCESS Pegasus account.

Configure resources once
Use the Open OnDemand instance at each resource provider to install SSH keys and determine your local allocation ID.
Run Workflows on ACCESS


1. Create the workflow
- Use the Pegasus API in a Jupyter Notebook, or start from our examples
- Submit your workflow for execution


2. Provision compute resources
- Use the HTCondor Annex tool to provision pilot jobs on your allocated ACCESS resources


3. Monitor the execution
- Follow the workflow execution from within the notebook or in the terminal
- Use the terminal to see which resources you have brought in
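In the terminal, the standard Pegasus and HTCondor command-line tools expose the same information. A sketch, where `$RUNDIR` stands in for the submit directory reported when your workflow was planned:

```shell
# $RUNDIR is a placeholder for your workflow's submit directory
pegasus-status -l "$RUNDIR"   # progress of the jobs in the workflow
pegasus-analyzer "$RUNDIR"    # summarize and help debug failed jobs
condor_q                      # jobs queued with the local HTCondor scheduler
condor_status                 # compute resources (pilots) currently attached
```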
Support
