REPACSS (REmotely-managed Power Aware Computing Systems and Services) is a high-performance computing (HPC) data center and AI infrastructure prototype that demonstrates the feasibility of running advanced computing workloads on variable energy sources, with the goal of reducing costs and improving efficiency. REPACSS is designed to support intensive computational and data-driven research. The system consists of compute, GPU, and storage nodes interconnected with high-speed networking to ensure efficient data transfer and processing.
Login to REPACSS
At present, Multi-Factor Authentication (MFA) is not required for direct REPACSS access. However, users accessing through TTU’s GlobalProtect VPN or other university services may be subject to TTU’s institutional MFA requirements.
The REPACSS system is physically hosted within TTU's research computing infrastructure and may be accessed via the following methods:
- On Campus: Users may connect through wired Ethernet or the TTUnet Wi-Fi network.
- Off Campus: Access is available through the TTU GlobalProtect Virtual Private Network (VPN).
- Authentication: All system access requires secure login via SSH or authorized web-based interfaces.
Authentication
Secure authentication is required for all user interactions with the REPACSS system. The following credential management practices are supported:
- Secure password updates through the official TTU identity management system
- Multi-Factor Authentication (MFA) support
- SSH key registration for secure and password-less logins
If you forget your password or it is compromised, contact Texas Tech University’s IT Help Central to initiate a reset and regain account access.
SSH
To initiate a session:
ssh <your_username>@repacss.ttu.edu
During first-time access, the system may prompt you to verify the server’s RSA key fingerprint. Confirm by typing yes. You will then be required to enter your password.
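If you plan to use SSH key registration for password-less logins (listed above), a typical key setup from your local machine looks like the sketch below. The ed25519 key type and default file locations are common conventions, not REPACSS-specific requirements.

```bash
# Generate a key pair on your local machine (accept the defaults or set a passphrase)
ssh-keygen -t ed25519

# Copy the public key to REPACSS so future logins can use the key instead of a password
ssh-copy-id <your_eRaider_username>@repacss.ttu.edu
```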
For a graphical client, Windows users are encouraged to install and configure MobaXterm:
- Download the installer from the MobaXterm website
- Launch MobaXterm and create a new SSH session with the following details:
  - Remote Host: repacss.ttu.edu
  - Username: Your TTU eRaider username
- Save and initiate the connection
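On macOS and Linux, where MobaXterm is not available, a comparable saved session can be created with an entry in ~/.ssh/config; the host alias below is only an example name.

```
# ~/.ssh/config
Host repacss
    HostName repacss.ttu.edu
    User <your_eRaider_username>
```

After saving the entry, the connection can be started with simply `ssh repacss`.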
Password Reset
TTU users should contact IT Help Central as described above; ACCESS users should consult the ACCESS identity management portal for instructions.
SSH Login
$ ssh <your_eRaider_username>@repacss.ttu.edu
File Transfer
| Supported Methods | Data Transfer Node | URL |
|---|---|---|
| Globus Connect | | https://app.globus.org/dashboard |
| SCP | | https://guide.repacss.org/understanding/repacss-system/file-system/file-transfer.html |
| SFTP | | https://guide.repacss.org/understanding/repacss-system/file-system/file-transfer.html |
| RSYNC | | https://guide.repacss.org/understanding/repacss-system/file-system/file-transfer.html |
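As a quick illustration of the command-line methods above, the transfers below use the login hostname and the scratch path from the Storage section that follows; GROUPID, USERID, and the local file names are placeholders, and if REPACSS designates a dedicated data transfer node, that hostname should be used instead (see the file-transfer documentation linked above).

```bash
# Copy a single file to your scratch directory with scp
scp results.tar.gz <your_eRaider_username>@repacss.ttu.edu:/mnt/GROUPID/scratch/USERID/

# Synchronize a local directory to scratch with rsync (resumable, shows progress)
rsync -avP data/ <your_eRaider_username>@repacss.ttu.edu:/mnt/GROUPID/scratch/USERID/data/
```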
Storage
File System
| Directory | Path | Quota | Purge | Backup | Notes |
|---|---|---|---|---|---|
| Home | /mnt/GROUPID/home/USERID | 100GB | | | Persistent personal storage for user scripts and configuration files. |
| Scratch | /mnt/GROUPID/scratch/USERID | 1TB | 1x per month | | High-performance temporary storage space subject to periodic purging. |
| Work | /mnt/GROUPID/work/USERID | | | | Long-term storage for research outputs and work purposes. |
Jobs
Upon accessing REPACSS, users arrive at a login node, which is intended for job preparation activities such as file editing or code compilation. All computational jobs must be submitted to compute nodes using Slurm commands.
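As a brief illustration, job submission and monitoring use standard Slurm commands; the script name below is a placeholder, and a sample batch script follows the queue table.

```bash
sbatch my_job.sh      # submit a batch script to the scheduler
squeue -u $USER       # list your pending and running jobs
scancel <job_id>      # cancel a job by its ID
```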
Queue specifications
| Name | Purpose | CPUs | GPUs | RAM | Jobs (30 days) | Wait Time (30-day trend) | Wall Time (30-day trend) |
|---|---|---|---|---|---|---|---|
| zen4 | General-purpose parallel/serial computing, memory-intensive processing, and standard MPI workloads. | Dual AMD EPYC 9754 (256 Cores) | None | 1.5 TB | — | — | — |
| h100 | Accelerated workloads, deep learning, machine learning, and GPU-based CUDA simulations. | Dual Intel Xeon Gold 6448Y (64 Cores) | 4 x NVIDIA H100 NVL (94 GB VRAM per GPU) | 512 GB | — | — | — |
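For illustration only, a minimal batch script targeting the zen4 partition might look like the following. The job name, resource amounts, time limit, and application command are placeholders to adapt to your workload; GPU jobs on the h100 partition would additionally request GPUs, assuming a standard Slurm GRES configuration.

```bash
#!/bin/bash
#SBATCH --job-name=example_job
#SBATCH --partition=zen4       # CPU partition from the table above
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --time=01:00:00

# For the h100 partition, a GPU request is typically added, e.g.:
#   #SBATCH --partition=h100
#   #SBATCH --gres=gpu:1

srun ./my_application          # placeholder for your actual program
```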