You can request an account on the Mill by filling out the form found at https://

===== System Information =====
==== DOI ====
Please cite The Mill's DOI in any publications in which The Mill's resources were utilized. The DOI is https://
==== Software ====
The Mill was built and is managed with Puppet. The underlying OS for the Mill is Alma 8.9. For resource management and scheduling we use the Slurm Workload Manager, version 22.05.2.
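Because scheduling on the Mill goes through Slurm, a small job-submission sketch shows how that fits together in practice. The example below is only an illustration under assumptions, not site documentation: the partition name "general", the resource requests, and the use of Python's subprocess module are placeholders; only standard Slurm commands (sinfo, sbatch) are invoked.

<code python>
"""Minimal sketch: check the Slurm version and submit a trivial batch job.

Assumptions (not taken from the Mill documentation): the partition name
"general" and the resource requests below are hypothetical placeholders.
Only standard Slurm CLI tools (sinfo, sbatch) are used.
"""
import subprocess

# Report the scheduler version (the page above states 22.05.2).
subprocess.run(["sinfo", "--version"], check=True)

# Submit a one-task job that simply prints the compute node's hostname.
# --wrap lets sbatch build a minimal batch script around a single command.
result = subprocess.run(
    [
        "sbatch",
        "--job-name=hello-mill",
        "--partition=general",   # hypothetical partition name
        "--ntasks=1",
        "--time=00:05:00",
        "--wrap=hostname",
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # e.g. "Submitted batch job 123456"
</code>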
==== Hardware ====
| Model | CPU Cores | System Memory | Node Count |
| Dell R6525 | 128 | 512 GB | 25 |
| Dell C6525 | 64 | 256 GB | 160 |
| Dell C6420 | 40 | 192 GB | 44 |
== Leased Space ==
If home directory and scratch space availability aren't enough for your storage needs, we also lease out quantities of cluster-attached space. If you are interested in leasing storage, please contact us. Additional information
Below is a cost model of our storage offerings (a worked cost example follows the tables):
+ | |||
+ | Vast Storage Cluster: | ||
+ | | Total Size | 250 TB | | ||
+ | | Storage Technology | Flash | | ||
+ | | Primary Purpose | High Performance Computing Storage | | ||
+ | | Cost | $160/ | ||
+ | |||
+ | |||
+ | Ceph Storage Cluster: | ||
+ | | Total Size | 800 TB | | ||
+ | | Storage Technology | Spinning Disk | | ||
+ | | Primary Purpose | HPC-attached Utility Storage | | ||
+ | | Cost | $100/ | ||
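
As a rough illustration of how the leased-space rates add up, the sketch below estimates an annual bill for a chosen capacity on each tier. It assumes the listed prices are per TB per year, which is our assumption only because the unit is truncated in the tables above; confirm actual pricing and units with us before budgeting.

<code python>
# Rough cost estimate for leased cluster-attached storage.
# Assumption: the listed rates are per TB per year (the unit is truncated
# in the tables above), so verify the real unit before budgeting.
RATES_PER_TB_YEAR = {
    "vast": 160.00,  # flash-based HPC storage
    "ceph": 100.00,  # spinning-disk utility storage
}

def annual_storage_cost(tier: str, terabytes: float) -> float:
    """Return the estimated yearly cost for `terabytes` on the given tier."""
    return RATES_PER_TB_YEAR[tier] * terabytes

# Example: 20 TB of flash plus 100 TB of utility storage.
total = annual_storage_cost("vast", 20) + annual_storage_cost("ceph", 100)
print(f"Estimated annual cost: ${total:,.2f}")  # Estimated annual cost: $13,200.00
</code>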
==== Priority Partition Leasing ====
For full information on our computing model, please visit [[ https:// | The Mill Computing Model ]], which provides more information on what a priority partition is.

Below is a list of hardware which we have available for priority leases (a budgeting sketch follows the table):
| Priority lease ($/year) | $3,368.30 | $4,379.80 | $7,346.06 |
| Current Quantity | 160 | 25 | 6 |
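
As a quick budgeting sketch, the snippet below totals the yearly rates above for a chosen mix of nodes and lease length. The node-type labels are hypothetical placeholders, since the hardware names for each column of the table appear elsewhere on this page; only the listed $/year rates are taken from the table.

<code python>
# Budgeting sketch for priority partition leases.
# The three yearly rates come from the table above; the node-type labels
# ("type_a", "type_b", "type_c") are placeholders, not real hardware names.
LEASE_RATES_PER_NODE_YEAR = {
    "type_a": 3368.30,
    "type_b": 4379.80,
    "type_c": 7346.06,
}

def lease_cost(counts: dict, years: int = 1) -> float:
    """Total cost of leasing counts[node_type] nodes for the given number of years."""
    per_year = sum(LEASE_RATES_PER_NODE_YEAR[t] * n for t, n in counts.items())
    return per_year * years

# Example: two "type_a" nodes and one "type_c" node on a three-year lease.
print(f"${lease_cost({'type_a': 2, 'type_c': 1}, years=3):,.2f}")  # $42,247.98
</code>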
+ | |||
+ | |||
==== Researcher Funded Nodes ====
Researcher-funded hardware will gain priority access for a minimum of 5 years. Hosting fees will start at $1,200 per year and will be hardware dependent. The fees are broken down as follows, with a worked example after the table:

| Fee | Cost | Annual Unit of Measure |
| Networking Fee | $90 | Per Network Connection |
| Rack Space | $260 | Per Rack U |
| RSS Maintenance | $850 | Per Node |
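
To make the fee schedule concrete, the sketch below builds the yearly hosting fee for one node from the three line items above. The case of a 1U node with a single network connection is an assumption used for illustration; under that assumption the total comes out to the quoted $1,200/year starting fee, and nodes that take more rack space or more network connections cost correspondingly more.

<code python>
# Annual hosting fee for a researcher-funded node, assembled from the fee
# schedule above. The default of a 1U node with one network connection is
# an assumption used to show how the $1,200/year starting fee can arise.
NETWORKING_FEE = 90    # per network connection, per year
RACK_SPACE_FEE = 260   # per rack U, per year
RSS_MAINTENANCE = 850  # per node, per year

def annual_hosting_fee(rack_units: int = 1, network_connections: int = 1) -> int:
    """Yearly hosting fee for one node of the given size and connectivity."""
    return (NETWORKING_FEE * network_connections
            + RACK_SPACE_FEE * rack_units
            + RSS_MAINTENANCE)

print(annual_hosting_fee())                                      # 1200
print(annual_hosting_fee(rack_units=2, network_connections=2))   # 1550
</code>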
===== Quick Start =====