====== The Mill ======

===== Request an account =====

You can request an account on the Mill by filling out the account request form at https://missouri.qualtrics.com/jfe/form/SV_e5n4I8cvzo77jr8.

===== SSH Keys =====

If you want to connect using SSH keys, either to avoid having to type your password or to connect from off campus without a VPN, you can add your SSH public key to the Mill.

For Windows users, we recommend using MobaXterm (https://mobaxterm.mobatek.net/).

==== Generating an SSH Key ====
| + | |||
| + | You can generate a new SSH key on your local machine. After you generate the key, you can add the public key to your account on Mill. | ||
| + | |||
| + | Open terminal | ||
| + | Paste the text below, replacing the email used in the example with your University email address. | ||
| + | |||
| + | ssh-keygen -t ed25519 -C " | ||
| + | |||
| + | Note: If you are using a legacy system that doesn' | ||
| + | |||
| + | ssh-keygen -t rsa -b 4096 -C " | ||
| + | |||
| + | This creates a new SSH key, using the provided email as a label. | ||
| + | |||
| + | Generating public/private ALGORITHM key pair. | ||
| + | |||
| + | When you're prompted to “Enter a file in which to save the key”, you can press Enter to accept the default file location. Please note that if you created SSH keys previously, ssh-keygen may ask you to rewrite another key, in which case we recommend creating a custom-named SSH key. To do so, type the default file location and replace id_ALGORITHM with your custom key name. | ||
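
You can also name the key up front with the -f flag. A minimal sketch, where id_mill and the email address are placeholders rather than required names:

<code bash>
# Generate an ed25519 key saved as ~/.ssh/id_mill (custom name is a placeholder)
ssh-keygen -t ed25519 -f ~/.ssh/id_mill -C "your_email@umsystem.edu"
</code>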
| + | |||
| + | # Windows | ||
| + | Enter file in which to save the key (/c/Users/ | ||
| + | # Mac | ||
| + | Enter a file in which to save the key (/ | ||
| + | # Linux | ||
| + | Enter a file in which to save the key (/ | ||
| + | |||
| + | At the prompt, type a secure passphrase. | ||
| + | |||
| + | Enter passphrase (empty for no passphrase): | ||
| + | Enter same passphrase again: [Type passphrase again] | ||
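
If you set a passphrase, you can avoid retyping it every session by loading the key into ssh-agent. A minimal sketch for an OpenSSH-style shell, assuming the default key name:

<code bash>
# Start an agent for this shell session, then add the key (enter the passphrase once)
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
</code>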
| + | |||
| + | Adding your SSH key | ||
| + | |||
| + | You may add your own SSH public key to your Mill account. You can also send the key to tdx.umsystem.edu | ||
| + | |||
| + | Copy the contents of your SSH public key, which is written to the file created in the Generating an SSH Key step. | ||
| + | |||
| + | # Windows | ||
| + | Your public key has been saved in / | ||
| + | # Mac | ||
| + | Your public key has been saved in / | ||
| + | # Linux | ||
| + | Your public key has been saved in / | ||
| + | # Windows | ||
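
One simple way to copy the key is to print it to the terminal and copy it from there. A sketch assuming the default ed25519 filename:

<code bash>
# Print the public key so you can select and copy it
cat ~/.ssh/id_ed25519.pub
</code>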
| + | |||
| + | The id_ALGORITHM.pub file contents should look similar to the ones below. | ||
| + | |||
| + | # ed25519 | ||
| + | ssh-ed25519 AAAAB3NzaC1yc2EAAAABIwAAAQEAklOUpkDHrfHY17SbrmTIpNLTGK9Tjom/ | ||
| + | |||
| + | # rsa | ||
| + | ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAklOUpkDHrfHY17SbrmTIpNLTGK9Tjom/ | ||
| + | |||
| + | Add your public key to your account by appending it to your authorized_keys file on Mill | ||
| + | |||
| + | [sso@mill-login ~]$ vim / | ||
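
If your local machine has ssh-copy-id, it can append the key for you over a password login. A minimal sketch, where the username and host name are placeholders:

<code bash>
# Appends the named public key to ~/.ssh/authorized_keys on the remote side
ssh-copy-id -i ~/.ssh/id_ed25519.pub your_sso@mill-login.mst.edu
</code>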
| + | |||
| + | OR send us your public key. | ||
| + | |||
===== System Information =====
==== DOI and Citing the Mill ====
Please ensure you use The Mill's DOI on any publication in which The Mill's resources were utilized. The DOI is included in the citation files below.
| + | |||
| + | Please also feel free to use these files with your citation manager to create formatted citations. | ||
| + | |||
| + | BibTex Citation: | ||
| + | <file bib mill_cluster_citation.bib> | ||
| + | @Article{Gao2024, | ||
| + | author | ||
| + | title = {The Mill HPC Cluster}, | ||
| + | year = {2024}, | ||
| + | doi = {10.71674/ | ||
| + | language | ||
| + | publisher = {Missouri University of Science and Technology}, | ||
| + | url = {https:// | ||
| + | } | ||
| + | </ | ||
RIS Citation:
<file ris mill_cluster_citation.ris>
TY - JOUR
AU - Stephen, Gao
AU - Jeremy, Maurer
AU - Solutions, Information Technology Research Support
DO - 10.71674/
LA - en
PU - Missouri University of Science and Technology
PY - 2024
ST - The Mill HPC Cluster
TI - The Mill HPC Cluster
UR - https://
ER -
</file>
==== Software ====
The Mill was built and is managed with Puppet. The underlying OS for the Mill is AlmaLinux 8.9. For resource management and scheduling we are using the Slurm workload manager, version 22.05.2.
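
You can confirm these versions yourself from a login node. A quick sketch; the output will change as the system is updated:

<code bash>
sinfo --version       # prints the installed Slurm version
cat /etc/os-release   # prints the underlying OS release
</code>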

| Model | CPU Cores | System Memory | Node Count |
| Dell R6525 | 128 | 512 GB | 25 |
| Dell C6525 | 64 | 256 GB | 160 |
| Dell C6420 | 40 | 192 GB | 44 |

| Model | CPU Cores | System Memory | GPU | GPU Memory | GPU Count | Node Count |
| Dell XE9680 | 112 | 1 TB | H100 SXM5 | 80 GB | 8 | 1 |
| Dell C4140 | 40 | 192 GB | V100 SXM2 | 32 GB | 4 | 6 |
| Dell R740xd | 40 | 384 GB | V100 PCIe | 32 GB | 2 | 1 |

The Mill home directory storage is available from an NFS share backed by our enterprise SAN, meaning your home directory is the same across the entire cluster. This storage provides 10 TB of raw capacity, limited to 50 GB per user. **This volume is not backed up; we do not provide any data recovery guarantee in the event of a storage system failure.** System failures where data loss occurs are rare, but they do happen. All this to say, you **should not** be storing the only copy of your critical data on this system. Please contact us if you require more storage and we can provide you with the currently available options.
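
For example, one way to keep an off-cluster copy of important results is to pull them down with rsync, run from your own machine. A sketch in which the username, host name, and paths are placeholders:

<code bash>
# Mirror a results directory from the Mill to a local backup folder
rsync -av your_sso@mill-login.mst.edu:~/important_results/ ~/mill_backup/
</code>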
==Scratch Directories==
In addition to your 50 GB home directory, you also have access to a high-performance network scratch file system. There is also local scratch space on each compute node for use during calculations.
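
A common pattern is to stage input data into node-local scratch at the start of a job and copy results back at the end. A minimal sketch; the scratch path, program, and file names are placeholders, so check with us for the actual scratch locations:

<code bash>
#!/bin/bash
#SBATCH --job-name=scratch_demo
#SBATCH --time=01:00:00

# Placeholder scratch path; substitute the real node-local mount point
SCRATCH="/local/scratch/$SLURM_JOB_ID"
mkdir -p "$SCRATCH" ~/results

cp ~/input.dat "$SCRATCH"/                    # stage input to fast local disk
cd "$SCRATCH"
"$HOME"/my_program input.dat > output.dat     # placeholder program
cp output.dat ~/results/                      # copy results back before the job ends
</code>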
==Leased Space==
If home directory and scratch space availability aren't enough for your storage needs, we also lease out quantities of cluster-attached space. If you are interested in leasing storage, please contact us.
Below is a cost model of our storage offerings:
| + | |||
| + | Vast Storage Cluster: | ||
| + | | Total Size | 250 TB | | ||
| + | | Storage Technology | Flash | | ||
| + | | Primary Purpose | High Performance Computing Storage | | ||
| + | | Cost | $160/ | ||
| + | |||
| + | |||
| + | Ceph Storage Cluster: | ||
| + | | Total Size | 800 TB | | ||
| + | | Storage Technology | Spinning Disk | | ||
| + | | Primary Purpose | HPC-attached Utility Storage | | ||
| + | | Cost | $100/ | ||

| general | 2 days | 800 MB |
| gpu | 2 days | 800 MB |
| interactive | 4 hours | 800 MB |
| rss-class | 4 hours | 2 GB |
| any priority partition | 28 days | varies by hardware |
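
As an example of staying inside these limits, a batch script for the general partition might request its full 2-day maximum. A sketch, with placeholder job details:

<code bash>
#!/bin/bash
#SBATCH --partition=general
#SBATCH --time=2-00:00:00   # 2 days, the general partition's maximum
#SBATCH --ntasks=1

./my_program                # placeholder program
</code>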
| + | |||
| + | ==== Priority Partition Leasing ==== | ||
| + | |||
| + | For the full information on our computing model please visit this page on [[ https:// | ||
| + | | The Mill Computing Model ]] which will provide more information what a priority partition is. | ||
| + | |||
| + | Below is a list of hardware which we have available for priority leases: | ||
| + | |||
| + | |||
| + | |||
| + | | | C6525 | R6525 | C4140| | ||
| + | | CPU type | AMD 7502 | AMD 7713 | Intel 6248 | | ||
| + | | CPU count | 2 | 2 | 2 | | ||
| + | | Core count | 64 | 128 | 40 | | ||
| + | | Base Clock (GHz) | 2.5 | 2.0 | 2.5 | | ||
| + | | Boost Clock (GHz) | 3.35 | 3.675 | 3.2 | | ||
| + | | GPU | N/A | N/A | Nvidia V100 | | ||
| + | | GPU Count | 0 | 0 | 4 | | ||
| + | | GPU RAM (GB) | 0 | 0 | 32x4 | | ||
| + | | RAM (GB) | 256 | 512 | 192 | | ||
| + | | Local Scratch (TB) | 2.6 SSD | 1.6 NVMe | 1.6 NVMe | | ||
| + | | Network | HDR-100 | HDR-100 | HDR-100| | ||
| + | | Internal Bandwidth | 100Gb/s | 100Gb/s | 100Gb/s | | ||
| + | | Latency | <600ns | <600ns | <600ns | | ||
| + | | Priority lease ($/year) | $3,368.30 | $4,379.80 | $7,346.06 | | ||
| + | | Current Quantity | 160 | 25 | 6 | | ||
| + | |||
| + | |||
| + | ==== Researcher Funded Nodes ==== | ||
| + | Researcher funded hardware will gain priority access for a minimum of 5 years. Hosting fees will start at $1,200 per year and will be hardware dependent. The fees will be broke down as follows: | ||
| + | |||
| + | | Fee | Cost | Annual Unit of Measure | | ||
| + | | Networking Fee | $90 | Per Network Connection | | ||
| + | | Rack Space | $260 | Per Rack U | | ||
| + | | RSS Maintenance | $850 | Per Node | | ||
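
For example, a single node occupying one rack U with one network connection works out to $90 + $260 + $850 = $1,200 per year, which is where the quoted starting fee comes from.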
===== Quick Start =====

Now you should see an a.out executable in your current working directory; this is your MPI-compiled code that we will run when we submit it as a job.

==== Parallelizing your Code ====

The following link provides basic tutorials and examples for parallel code in Python, R, Julia, Matlab, and C/C++.

[[https://]]
==== Submitting an MPI job ====

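A minimal sketch of a batch script for the a.out built above; the partition, node count, and task counts are placeholders to adjust for your own job:

<code bash>
#!/bin/bash
#SBATCH --job-name=mpi_hello
#SBATCH --partition=general
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:10:00

# srun launches one MPI rank per requested task under Slurm
srun ./a.out
</code>

Submit it with sbatch from the directory containing a.out.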
===== Priority Access =====
Information on priority access leases is coming soon.
===== Applications =====