pub:hpc:hellbender [2025/10/06 17:59] (current) – nal8cf
**Hellbender** is the latest High Performance Computing (HPC) resource available to researchers and students (with sponsorship by a PI) within the UM-System.
**Hellbender** consists of 222 mixed x86-64 CPU nodes.
==== Investment Model ====
**Costs**
The cost associated with using the RDE tape archive is $8/TB for short-term data kept inside the tape library for 1-3 years, or $144 per tape (rounded up to a whole number of tapes) for tapes sent offsite for long-term retention of up to 10 years. Offsite tapes are sent to records management, where they are stored in a climate-controlled environment. Each tape of the current generation (LTO-9) holds approximately 18 TB of data. These are flat, one-time costs, and you have the option to do both a short-term in-library copy and a longer-term offsite copy, or one or the other, providing flexibility.
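As a rough sketch of how these one-time costs combine, the following uses the rates above ($8/TB in-library, $144 per LTO-9 tape offsite, ~18 TB per tape); the round-up-to-whole-tapes rule for the offsite copy and the example 40 TB dataset are illustrative assumptions, not an official calculator:

```python
import math

# Rates taken from the RDE tape archive pricing above
IN_LIBRARY_RATE_PER_TB = 8    # $ per TB, short-term (1-3 year) in-library copy
OFFSITE_RATE_PER_TAPE = 144   # $ per tape, long-term (up to 10 year) offsite copy
TAPE_CAPACITY_TB = 18         # approximate capacity of a current-generation LTO-9 tape

def archive_cost(data_tb, in_library=True, offsite=True):
    """Estimate the flat one-time cost of archiving `data_tb` terabytes."""
    cost = 0
    if in_library:
        cost += data_tb * IN_LIBRARY_RATE_PER_TB
    if offsite:
        # Offsite pricing is per tape, rounded up to whole tapes (assumption)
        cost += math.ceil(data_tb / TAPE_CAPACITY_TB) * OFFSITE_RATE_PER_TAPE
    return cost

# Hypothetical 40 TB dataset:
print(archive_cost(40, in_library=True, offsite=False))  # 320  (40 TB x $8)
print(archive_cost(40, in_library=False, offsite=True))  # 432  (3 tapes x $144)
print(archive_cost(40))                                  # 752  (both copies)
```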
**Request Process**
Dell C6420: 0.5U server containing dual 24-core Intel Xeon Gold 6252 CPUs with a base clock of 2.1 GHz. Each C6420 node contains 384 GB DDR4 system memory.

Dell R6625: 1U server containing dual 128-core AMD EPYC 9754 CPUs with a base clock of 2.25 GHz. Each R6625 node contains 1 TB DDR5 system memory.

Dell R6625: 1U server containing dual 128-core AMD EPYC 9754 CPUs with a base clock of 2.25 GHz. Each R6625 node contains 6 TB DDR5 system memory.
| **Model** | **Count** | **Cores per Node** | **System Memory** | **CPU** | **Local Scratch** | **Total Cores** | **Node Names** |
| Dell C6420 | 64 | 48 | 364 GB | Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz | 1 TB | 3072 | c146-c209 |
| Dell R6625 | 12 | 256 | 994 GB | AMD EPYC 9754 128-Core Processor |
| Dell R6625 | 2 | 256 | 6034 GB | AMD EPYC 9754 128-Core Processor |
=== GPU nodes ===
| **Model** | **Count** | **Cores per Node** | **System Memory** | **GPU** | **GPU Memory** | **GPUs per Node** | **Local Scratch** | **Total Cores** | **Node Names** |
| Dell R750xa |
| Dell XE8640 | 2 | 104 | 2002 GB | H100 | 80 GB | 4 | 3.2 TB | 208 | g018-g019 |
| Dell XE9640 | 1 | 112 | 2002 GB | H100 | 80 GB | 8 | 3.2 TB | 112 | g020 |
| Dell R740xd | 2 | 40 | 364 GB | V100 | 32 GB | 3 | 240 GB | 80 | g026-g027 |
| Dell R740xd | 1 | 44 | 364 GB | V100 | 32 GB | 3 | 240 GB | 44 | g028 |
| Dell R760xa | 6 | 64 |
| Dell R760 | 6 | 64 | 490 GB | L40S | 45 GB | 2 | 3.5 TB | 384 | g035-g040* |

* = Available Oct 14
A specially formatted sinfo command can be run on Hellbender to report live information about the nodes and the hardware/
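The site's exact format string is not reproduced here; a sketch using standard Slurm `sinfo` format specifiers (illustrative only) might look like:

```shell
# Illustrative sketch, not Hellbender's actual command. Uses standard Slurm
# format specifiers: %N = node name, %c = CPUs, %m = memory (MB),
# %G = generic resources such as GPUs, %f = available features.
sinfo -N -o "%15N %4c %8m %12G %f"
```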
Below is the process for setting up a class on the OOD portal.
- Send the class name, the list of students and TAs, and any shared storage requirements to itrss-support@umsystem.edu.
- We will add the students to the group allowing them access to OOD.
- If the student does not have a Hellbender account yet, they will be presented with a link to a form to fill out requesting a Hellbender account.