The KyRIC cluster provides large-memory nodes that are increasingly needed by a wide range of ACCESS researchers, particularly those working with big data. Each KyRIC node has a large 6TB SSD drive suitable for big data analytics, in addition to a traditional NFS-mounted scratch file system.
The system is well suited for high-throughput genome sequencing, natural language processing of large datasets, and big data analytics on massive data graphs. Because the cluster's networking backend uses 100GbE, it accommodates single-node jobs only; multi-node jobs, such as those using MPI, are not recommended.
Login to KyRIC
KyRIC supports access through both SSH keys and a web portal. The web portal requires no preparation beyond an existing ACCESS account and an active project allocation; from it you can use the cluster through a terminal or a virtual desktop console.
If you choose to access the cluster with SSH keys, each user must generate and install their own key. To register your key with KyRIC, go to the SSH Key Management System and log in with your ACCESS credentials. Use the Add button on that page and paste in your public key, following the linked help instructions. After uploading the key, you must also submit a ticket through the Service Desk stating that you have uploaded your key, and the support team will contact you.
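If you do not already have an SSH key pair, one can be generated with standard OpenSSH tooling; the file name below is only an example, and it is the public (.pub) key that you paste into the SSH Key Management System:

$ ssh-keygen -t ed25519 -f ~/.ssh/id_kyric     # generate a new key pair (example file name)
$ cat ~/.ssh/id_kyric.pub                      # print the public key so it can be copied and pasted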
After the key is registered, you should be able to connect to the KyRIC system using an SSH client. For example, from a computer running Linux, macOS, Windows PowerShell, or Windows Subsystem for Linux, you may connect to KyRIC by opening a terminal and entering:
ssh -i path_to_private_key yourUserName@kxc.ccs.uky.edu
Third-party SSH clients that provide a GUI (e.g., Bitvise, MobaXterm, PuTTY) may also be used to connect to KyRIC.
Do not use the login nodes for computationally intensive processes. These nodes are meant for compilation, file editing, simple data analysis, and other tasks that use minimal compute resources. All computationally demanding jobs should be submitted and run through the batch queuing system.
SSH Login
$ ssh <your_username>@kxc.ccs.uky.edu
File Transfer
KyRIC supports scp, rsync, and Globus file transfers. It is recommended to transfer data through the high-speed data transfer node (DTN) rather than through the login nodes. If you are unfamiliar with Globus, follow the tutorial How To Log In and Transfer Files with Globus to get started. Example scp and rsync commands are sketched after the table below.
| Supported Methods | Data Transfer Node | URL |
|---|---|---|
| SCP | | |
| RSYNC | | |
| GLOBUS | | |
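As a minimal sketch, the commands below copy data to the cluster through the DTN. `<dtn_hostname>` and `<remote_path>` are placeholders for the data transfer node's address and your target directory, and the file names are only examples:

$ scp input.dat yourUserName@<dtn_hostname>:<remote_path>/
$ rsync -avP results/ yourUserName@<dtn_hostname>:<remote_path>/results/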
Storage
File System
| Directory | Path | Quota | Purge | Backup | Notes |
|---|---|---|---|---|---|
| Compute Node Local | | 5TB | Deleted upon job completion | | Local temporary space shared among all jobs running on a single node. |
| Home | $HOME | 10GB | No purge policy | No backups | |
| Project | $PROJECT | 500GB | No purge policy | No backups | |
| Scratch | $SCRATCH | 10TB | Deleted after 30 days | No backups | |
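As a minimal sketch of working with these areas, input data can be staged into scratch before a run and important results copied back to project space afterwards, since scratch is purged; the file names here are hypothetical:

$ cp $HOME/input.dat $SCRATCH/          # stage input data into scratch before a run
$ cp $SCRATCH/results.out $PROJECT/     # keep results in project space, since scratch is purged after 30 days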
External Storage
The five dedicated ACCESS nodes have exclusive access to approximately 300 TB of network-attached disk storage. This storage is accessible only through the compute nodes.
Jobs
KyRIC allocations are made in core-hours. The recommended method for estimating your resource needs for an allocation request is to perform benchmark runs. The core-hours used for a job are calculated by multiplying the number of processor cores used by the wall-clock duration in hours. KyRIC core-hour calculations should assume that all jobs will run in the regular queue and that they are charged for use of all 40 cores on each node.
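For example, assuming a benchmark run that occupies one node (all 40 cores) for 5 wall-clock hours, the charge would be 40 cores × 5 hours = 200 core-hours.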
The Slurm scheduler tracks and charges for usage to a granularity of a few seconds of wall clock time. The system charges only for the resources you use, not those you request. If your job finishes early and exits properly, Slurm will release the node back into the pool of available nodes. Your job will only be charged for as long as you are using the node.
To run a job, create a Slurm submission script ("jobscript") and submit it to the queue:
login$ sbatch jobscript
The following example job script is taken from the user manual.
#!/bin/bash
#SBATCH --time=00:15:00 # Max run time
#SBATCH --job-name=my_test_job # Job name
#SBATCH --ntasks=1 # Number of cores for the job. Same as SBATCH -n 1
#SBATCH --partition=normal # Specify partition/queue
#SBATCH -e slurm-%j.err # Error file for this job.
#SBATCH -o slurm-%j.out # Output file for this job.
#SBATCH -A <your project account> # Project allocation account name (REQUIRED)
./myprogram # This is the program that will be executed on the compute node. You will substitute this with your scientific application.
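Once submitted, a job can be monitored and, if necessary, cancelled with standard Slurm commands (these are generic Slurm utilities, not KyRIC-specific tooling); the job ID below is a placeholder:

login$ squeue -u $USER        # list your pending and running jobs
login$ scancel <jobid>        # cancel a job by its job ID
login$ sacct -j <jobid>       # show accounting information for a finished job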
Queue specifications
| Name | Purpose | CPUs | GPUs | RAM | Jobs (30 days) | Wait Time (30-day trend) | Wall Time (30-day trend) |
|---|---|---|---|---|---|---|---|
| Compute Node | These nodes are where jobs are actually executed after being submitted via the user-facing login nodes. | 40 (PowerEdge R930, Intel(R) Xeon(R) CPU E7-4820 v4 @ 2.00GHz) | — | 3TB | — | — | — |
| Login Node | The login node is what users directly access in order to submit jobs, which are then forwarded to and executed on the compute nodes. | 4 (PowerEdge R930, Intel(R) Xeon(R) CPU E7-4820 v4 @ 2.00GHz) | — | 16GB | — | — | — |
| Data Transfer Node | This node facilitates the transfer of data in and out of the KyRIC system. Users log in to this node with the same credentials as for the login nodes. Globus endpoints are available only on this node for parallel transfers. | 8 (PowerEdge R930, Intel(R) Xeon(R) CPU E7-4820 v4 @ 2.00GHz) | — | 32GB | — | — | — |
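The current state of the partitions and nodes can be checked from a login node with standard Slurm commands (generic Slurm, not KyRIC-specific); the `normal` partition is the one used in the example job script above:

login$ sinfo                  # list partitions, node states, and time limits
login$ sinfo -p normal -l     # detailed view of the normal partition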