TAMU Launch

Launch is a computing cluster managed by Texas A&M High Performance Research Computing (HPRC). It contains both standard compute nodes and GPU compute nodes; there are 10 GPU nodes in total, each with 2 NVIDIA A30s.

It has an OnDemand page with various built-in applications, such as:

  • Abaqus
  • Ansys
  • IGV
  • LS-PREPOST
  • MATLAB
  • Jupyter Notebook
  • Paraview
  • VNC
  • RStudio
  • Jupyter Lab
  • Browse


File Transfer

There are upload and download options for files on the OnDemand page. 
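Files can also be transferred from the command line with standard tools such as scp or rsync. A minimal sketch (the hostname and paths below are assumptions for illustration; substitute the actual Launch transfer address and your own NetID):

# upload a file from your machine to your scratch directory (hostname is an assumption)
[localhost ~]$ scp ./results.tar.gz NetID@launch.hprc.tamu.edu:/scratch/user/NetID/
# download a file from Launch back to the current directory
[localhost ~]$ scp NetID@launch.hprc.tamu.edu:/scratch/user/NetID/output.log .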

If you want to copy and paste in an interactive virtual session:

  1. Open the toolbar on the left of the screen and select "Clipboard".
  2. If you want to paste text from your host computer to the remote session, paste the text in the clipboard box. You can then use the middle-mouse button (MMB) to paste it in your terminal.
  3. If you want to copy text from the remote session to your host computer's clipboard, simply highlight the text in the terminal. It will appear in the Clipboard toolbar pop-out where you can copy it to your host clipboard.


Storage

File System

Home
  Path:   /home/USER
  Quota:  10 GB
  Purge:  All disk-resident user data, on all compute servers, will be deleted six months after account deactivation unless the user has made prior arrangements with the HPRC staff and/or the affected data are covered by special provisions.
  Backup: The HPRC staff archives all user home directories (/home/$USER) on all computational servers on a regular (nightly) basis.
  Notes:  Upon login, you will be situated in /home/$USER. This area is for small-to-modest amounts of processing: small software, scripts, compiling, editing. Its space and file count limits are not extensible.

Scratch
  Path:   /scratch/user/USER
  Quota:  1 TB
  Purge:  All non-home disk-resident areas (e.g., /scratch), on all compute clusters, are made available to meet the current needs of active users. Such areas are not meant in any way to provide long-term storage, and users are expected to continually delete or move files out of them.
  Backup: Not backed up.
  Notes:  This is high-performance storage intended to temporarily hold larger files during on-going processing. It is NOT intended as long-term storage. Please delete or move out of this area any files that are not frequently used.

Project
  Path:   /scratch/group/PROJECTID
  Quota:  5 TB
  Purge:  Not purged while the allocation is active. Data will be removed 90 days after allocation expiration.
  Notes:  This high-performance storage is shared among members of the group.
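For a quick orientation after login, the sketch below checks how much of your scratch area is in use; du is a standard Linux tool, and showquota is the helper Launch itself suggests at login (see the message below):

[username@launch ~]$ du -sh /scratch/user/$USER   # total size of your scratch area
[username@launch ~]$ showquota                    # re-display the per-filesystem quota summary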

Immediately upon logging in to Launch, you are greeted by the following message about your disk space usage (format may vary):

Your current disk quotas are:
Disk       Disk Usage      Limit    File Usage      Limit
/home           2.49G        10G           113      10000
/scratch        1.25G         1T            40      250000
Type 'showquota' to view these quotas again.

Jobs

The batch system will charge SUs from either the account specified in the job parameters, or from your default account (if this parameter is omitted). To avoid errors in SU billing, you can view your active accounts and set your default account using the myproject command.
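A sketch of that workflow (running myproject with no arguments lists your accounts; the -d flag for setting the default is an assumption here, so check the command's help output):

[username@launch ~]$ myproject            # list your active accounts and SU balances
[username@launch ~]$ myproject -d 123456  # set account 123456 as the default (flag is an assumption)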

Drona Workflow Engine, developed by HPRC, provides a 100% graphical interface for generating and submitting Generic jobs without the need to write a Slurm script yourself or even be aware of Slurm syntax and Generic internals. The Drona app is available on all HPRC Portals under the Jobs tab (Screenshot).

You will find Generic in the Environments dropdown (Screenshot). NOTE: If you don't see Generic in the Environments dropdown, you need to import it first. Click the + sign next to the Environments dropdown and select the Generic environment in the pop-up window. You only need to do this once. See the import section for more information about environments.

Once you select the Generic environment, the form will expand with Generic-specific fields (Screenshot) to guide you in providing all the needed information. To generate the Generic job files, click the Generate or Preview button. This first shows a fully editable preview screen with the generated job scripts; in the preview window, you can enter all the commands you want to execute in the batch script. To submit the job, click the Submit button, and Drona will submit the generated job on your behalf. For detailed information about Drona Workflow Engine, check out the Drona Workflow Engine Guide.

If you experience any issues or have any suggestions, please get in touch with us at help [at] hprc.tamu.edu

Once you have your job script ready, it is time to submit the job. You can submit your job to the Slurm batch scheduler using the sbatch command. For example, suppose you created a batch file named MyJob.slurm; the command to submit the job would be as follows:

[username@launch ~]$ sbatch MyJob.slurm
Submitted batch job 3606

After a job has been submitted, you may want to check on its progress or cancel it. Below is a list of the most used job monitoring and control commands for jobs on Launch.

Job Monitoring and Control Commands

Function                                     Command                  Example
Submit a job                                 sbatch [script_file]     sbatch FileName.job
Cancel/kill a job                            scancel [job_id]         scancel 101204
Check status of a single job                 squeue --job [job_id]    squeue --job 101204
Check status of all jobs for a user          squeue -u [user_name]    squeue -u User1
Check CPU and memory efficiency for a job    seff [job_id]            seff 101204
  (use only on finished jobs)
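For example, a typical monitor-and-cancel sequence (the job ID is illustrative):

[username@launch ~]$ squeue -u username        # list all of your queued and running jobs
[username@launch ~]$ squeue --job 101204       # check one job (ST column: PD = pending, R = running)
[username@launch ~]$ scancel 101204            # cancel the job if something is wrong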

Here is an example of the information that the seff command provides for a completed job:

% seff 12345678
Job ID: 12345678
Cluster: Launch
User/Group: username/groupname
State: COMPLETED (exit code 0)
Nodes: 16
Cores per node: 28
CPU Utilized: 1-17:05:54
CPU Efficiency: 94.63% of 1-19:25:52 core-walltime
Job Wall-clock time: 00:05:49
Memory Utilized: 310.96 GB (estimated maximum)
Memory Efficiency: 34.70% of 896.00 GB (56.00 GB/node)

Example Job 1: A serial job (single core, single node)

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=JobExample1       #Set the job name to "JobExample1"
#SBATCH --time=01:30:00              #Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=2560M                  #Request 2560MB (2.5GB) per node
#SBATCH --output=Example1Out.%j      #Send stdout/err to "Example1Out.[jobID]"

##OPTIONAL JOB SPECIFICATIONS
##SBATCH --account=123456             #Set billing account to 123456
##SBATCH --mail-type=ALL              #Send email on all job events
##SBATCH --mail-user=email_address    #Send all emails to email_address

#First Executable Line
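Since Launch's GPU nodes each have two NVIDIA A30s, a GPU job adds a GPU request to the same template. A minimal sketch, assuming the standard Slurm --gres syntax and a partition named gpu (the partition name is an assumption; check Launch's queue documentation):

#!/bin/bash

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=GpuExample        #Set the job name to "GpuExample"
#SBATCH --time=01:00:00              #Set the wall clock limit to 1hr
#SBATCH --ntasks=1                   #Request 1 task
#SBATCH --mem=8G                     #Request 8GB per node
#SBATCH --gres=gpu:1                 #Request 1 GPU (standard Slurm syntax)
#SBATCH --partition=gpu              #Partition name is an assumption; check Launch docs
#SBATCH --output=GpuOut.%j           #Send stdout/err to "GpuOut.[jobID]"

nvidia-smi                           #First executable line: show the allocated GPU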

Further examples can be found at Job Script / Batch System Examples.

 

Queue specifications

Name           CPUs                       GPUs            RAM
Compute Node   196 AMD EPYC Genoa 9654    -               384 GB DDR5-4800
GPU Node       196 AMD EPYC Genoa 9654    2 NVIDIA A30s   768 GB DDR5-4800
Login Node     32 AMD EPYC Genoa 9124     -               384 GB
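To see the actual partition names, limits, and node states behind these specifications, the standard Slurm sinfo command can be run from a login node:

[username@launch ~]$ sinfo        # list partitions, time limits, and node availability
[username@launch ~]$ sinfo -N -l  # long, per-node view with CPUs, memory, and state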