You can request an account on the Mill by filling out the form found at https://

===== System Information =====
==== DOI and Citing the Mill ====
Please ensure you use The Mill's DOI on any publications in which The Mill's resources were utilized. The DOI is https://

Please also feel free to use these files with your citation manager to create formatted citations.

BibTeX citation:
<file bib mill_cluster_citation.bib>
@Article{Gao2024,
  author    = {Stephen Gao and Jeremy Maurer and {Information Technology Research Support Solutions}},
  title     = {The Mill HPC Cluster},
  year      = {2024},
  doi       = {10.71674/
  language  = {en},
  publisher = {Missouri University of Science and Technology},
  url       = {https://
}
</file>
RIS citation:
<file ris mill_cluster_citation.ris>
TY - JOUR
AU - Stephen, Gao
AU - Jeremy, Maurer
AU - Solutions, Information Technology Research Support
DO - 10.71674/
LA - en
PU - Missouri University of Science and Technology
PY - 2024
ST - The Mill HPC Cluster
TI - The Mill HPC Cluster
UR - https://
ER -
</file>
==== Software ====
The Mill was built and is managed with Puppet. The underlying OS for the Mill is Alma 8.9. For resource management and scheduling, we are using the Slurm Workload Manager, version 22.05.2.
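
As a minimal sketch of submitting work through Slurm on the Mill (the partition name comes from the partition limits table later on this page; the module and script names are placeholders):

<file bash example_job.sh>
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=general       # partition from the limits table on this page
#SBATCH --nodes=1                 # one node
#SBATCH --ntasks=4                # four tasks
#SBATCH --time=01:00:00           # one hour, well under the 2-day limit
#SBATCH --mem-per-cpu=800M        # memory per task

# The module and script names below are placeholders; run `module avail`
# on the Mill to see what software is actually installed.
module load python
srun python my_analysis.py
</file>

Submit the script with ''sbatch example_job.sh'' and check its status with ''squeue -u $USER''.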
| Model | CPU Cores | System Memory | Node Count |
| Dell R6525 | 128 | 512 GB | 25 |
| Dell C6525 | 64 | 256 GB | 160 |
| Dell C6420 | 40 | 192 GB | 44 |
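
To see what Slurm itself reports for these nodes, a stock ''sinfo'' query works (these format specifiers are standard Slurm, not Mill-specific):

<code bash>
# Partition, node count, CPUs per node, and memory per node (MB).
sinfo -o "%P %D %c %m"
</code>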
== Leased Space ==
If home directory and scratch space availability aren't enough for your storage needs, we also lease out quantities of cluster-attached storage. If you are interested in leasing storage, please contact us. Additional information
Below is a cost model of our storage offerings:

Vast Storage Cluster:
| Total Size | 250 TB |
| Storage Technology | Flash |
| Primary Purpose | High Performance Computing Storage |
| Cost | $160/ |

Ceph Storage Cluster:
| Total Size | 800 TB |
| Storage Technology | Spinning Disk |
| Primary Purpose | HPC-attached Utility Storage |
| Cost | $100/ |
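
As a rough worked example, assuming the truncated rates above are per terabyte per year, a 10 TB lease would come to 10 × $160 = $1,600/year on the Vast flash tier versus 10 × $100 = $1,000/year on Ceph.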
| Partition | Time Limit | Memory |
| general | 2 days | 800MB |
| gpu | 2 days | 800MB |
| interactive | 4 hours | 800MB |
| rss-class | 4 hours | 2GB |
| any priority partition | 28 days | varies by hardware |
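
For instance, a job that needs the full two-day window on the general partition can request it explicitly (standard Slurm flags; ''job.sh'' is a placeholder):

<code bash>
# 2-00:00:00 is Slurm's day-hours:minutes:seconds syntax for the 2-day cap.
sbatch --partition=general --time=2-00:00:00 job.sh
</code>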
==== Priority Partition Leasing ====

For full information on our computing model, please visit [[ https:// | The Mill Computing Model ]], which provides more detail on what a priority partition is.

Below is a list of hardware available for priority leases:

| | C6525 | R6525 | C4140 |
| CPU type | AMD 7502 | AMD 7713 | Intel 6248 |
| CPU count | 2 | 2 | 2 |
| Core count | 64 | 128 | 40 |
| Base Clock (GHz) | 2.5 | 2.0 | 2.5 |
| Boost Clock (GHz) | 3.35 | 3.675 | 3.2 |
| GPU | N/A | N/A | Nvidia V100 |
| GPU Count | 0 | 0 | 4 |
| GPU RAM (GB) | 0 | 0 | 32x4 |
| RAM (GB) | 256 | 512 | 192 |
| Local Scratch (TB) | 2.6 SSD | 1.6 NVMe | 1.6 NVMe |
| Network | HDR-100 | HDR-100 | HDR-100 |
| Internal Bandwidth | 100Gb/s | 100Gb/s | 100Gb/s |
| Latency | <600ns | <600ns | <600ns |
| Priority lease ($/year) | $3,368.30 | $4,379.80 | $7,346.06 |
| Current Quantity | 160 | 25 | 6 |
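
As a sketch, a lease holder on the C4140 nodes might request a full node with all four V100s like this (the partition name is a placeholder, since priority partitions are assigned per lease, and the ''gpu'' GRES name is an assumption):

<code bash>
# <your-priority-partition> stands in for the partition tied to your lease.
sbatch --partition=<your-priority-partition> --nodes=1 --gres=gpu:4 job.sh
</code>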
==== Researcher Funded Nodes ====
Researchers who fund hardware will receive priority access to it for a minimum of 5 years. Hosting fees start at $1,200 per year and are hardware dependent. The fees break down as follows:

| Fee | Cost | Annual Unit of Measure |
| Networking Fee | $90 | Per Network Connection |
| Rack Space | $260 | Per Rack U |
| RSS Maintenance | $850 | Per Node |
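
For example, a single node occupying one rack U with one network connection works out to $90 + $260 + $850 = $1,200 per year, which matches the stated minimum.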
===== Quick Start =====