Differences

This shows you the differences between two versions of the page.

pub:hpc:hellbender [2025/03/05 18:53] – [Storage: Research Data Ecosystem ('RDE')] nal8cf
pub:hpc:hellbender [2025/04/14 20:07] (current) – [Storage: Research Data Ecosystem ('RDE')] bjmfg8
Line 119:
 | Dell C6525 | 112    | 128        | 490 GB        | 1.6 TB          | 14336  | c001-c112  |
  
-**The 2024 pricing is: $2,702 per node per year.**
+**The 2025 pricing is: $2,702 per node per year.**
  
 ==== GPU Node Lease  ====
Line 129:
 | Dell R740xa | 17     | 64         | 238 GB        | A100 | 80 GB      | 4     | 1.6 TB        | 1088   |
  
-**The 2024 pricing is: $7,692 per node per year.**
+**The 2025 pricing is: $7,692 per node per year.**
  
 ==== Storage: Research Data Ecosystem ('RDE') ====
Line 138:
   * Storage lab allocations are protected by associated security groups applied to the share, with group member access administered by the assigned PI or appointed representative.
  
-**What is the Difference between High Performance and General Performance Storage?**
+**What is the Difference between High Performance and General Performance Storage? **
  
 On Pixstor, which is used for standard HPC allocations, general storage is pinned to the SAS disk pool, while high-performance allocations are pinned to the all-flash NVMe pool. This means writes and recent reads will have lower latency on high-performance allocations.
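For a rough feel for that difference, a simple direct-I/O write test can be timed against each allocation. This is only a sketch: the mount points below are placeholders, and your lab's actual general and high-performance share paths will differ.

<code bash>
# Placeholder paths -- substitute the mount points of your own general and
# high-performance RDE/Pixstor allocations.
GENERAL=/path/to/general_share/$USER
HIGHPERF=/path/to/highperf_share/$USER

# Write a 1 GiB test file to each pool with direct I/O and compare the
# throughput that dd reports.
dd if=/dev/zero of="$GENERAL/ddtest.bin"  bs=1M count=1024 oflag=direct
dd if=/dev/zero of="$HIGHPERF/ddtest.bin" bs=1M count=1024 oflag=direct

# Remove the test files when done.
rm -f "$GENERAL/ddtest.bin" "$HIGHPERF/ddtest.bin"
</code>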
Line 161:
 **__None of the cluster attached storage available to users is backed up in any way by us__**; this means that if you delete something and don't have a copy somewhere else, it is gone. Please note that data stored on cluster attached storage is limited to Data Class 1 and 2 as defined by [[https://www.umsystem.edu/ums/is/infosec/classification-definitions| UM System DCL]]. If you need to store DCL3 or DCL4 data, please contact us so we may find a solution for you.
  
-**The 2024 pricing is: General Storage: $25/TB/Year, High Performance Storage: $95/TB/Year**
+**The 2025 pricing is: General Storage: $25/TB/Year, High Performance Storage: $95/TB/Year**
  
 To order storage please fill out our [[https://missouri.qualtrics.com/jfe/form/SV_6zkkwGYn0MGvMyO| RSS Services Order Form]]
Line 174:
 **Costs**
  
-The cost associated with using the RDE tape archive is $8/TB for short term data kept in inside the tape library for 1-3 years or $140 per tape rounded to the number of tapes for tapes sent offsite for long term retention up to 10 years. We send these tapes off to record management where they are stored in a climate-controlled environment. Each tape from the current generation LTO 9 holds approximately 18TB of data These are flat onetime costs and you have the option to do both a short term in library copy, and a longer-term offsite copy, or one or the other, providing flexibility.
+The cost associated with using the RDE tape archive is $8/TB for short-term data kept inside the tape library for 1-3 years, or $140 per tape (rounded up to the nearest whole tape) for tapes sent offsite for long-term retention of up to 10 years. We send these tapes to records management, where they are stored in a climate-controlled environment. Each tape from the current generation (LTO-9) holds approximately 18 TB of data. These are flat, one-time costs, and you have the option to do both a short-term in-library copy and a longer-term offsite copy, or one or the other, providing flexibility.
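As an illustrative estimate only, using the rates above and assuming roughly 18 TB of data per LTO-9 tape, the snippet below works out both options for a hypothetical 40 TB archive; confirm current pricing with RSS before budgeting.

<code bash>
# Illustrative only: the $8/TB and $140/tape rates and the ~18 TB tape
# capacity come from the paragraph above.
DATA_TB=40                              # hypothetical archive size in TB
TAPES=$(( (DATA_TB + 17) / 18 ))        # round up to whole LTO-9 tapes
echo "Short-term in-library copy (1-3 yr): \$$(( DATA_TB * 8 ))"
echo "Offsite copy (up to 10 yr): $TAPES tapes at \$140 = \$$(( TAPES * 140 ))"
</code>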
  
 **Request Process**
Line 180:
 To utilize the tape archive functionality that RSS has set up, the data to be archived will need to be copied to RDE storage if it does not exist there already. This requires the following steps.
   * Submit an RDE storage request if the data resides locally and an RDE share is not already available to the researcher: [[http://request.itrss.umsystem.edu|RSS Account Request Form]]
-  * Create an archive folder or folders in the relevant RDE storage share to hold the data you would like to archive. The folder(s) can be named to signify the contents, but we ask that the name includes _archive at then end. For example, something akin to: labname_projectx_archive_2024.
+  * Create an archive folder or folders in the relevant RDE storage share to hold the data you would like to archive. The folder(s) can be named to signify the contents, but we ask that the name includes _archive at the end. For example, something akin to: labname_projectx_archive_2024.
   * Copy the contents to be archived to the newly created archive folder(s) within the RDE storage share (see the sketch after this list).
   * Submit an RDE tape archive request: [[https://missouri.qualtrics.com/jfe/form/SV_5o0NoDafJNzXnRY]]
-  * Once the tape archive jobs are completed ITRSS will notify you and send you Archive job report after which you can delete the contents of the archive folder.
+  * Once the tape archive jobs are completed, ITRSS will notify you and send you an archive job report, after which you can delete the contents of the archive folder.
-  * We request that subsequent archive jobs be added to a separate folder or the initial folder renamed to something that signifies the time of archive for easier retrieval *_archive2024, *archive2025, etc.
+  * We request that subsequent archive jobs be added to a separate folder, or that the initial folder be renamed to something that signifies the time of the archive for easier retrieval: *_archive2024, *_archive2025, etc.
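A minimal sketch of the folder-and-copy steps above, assuming your RDE share is already mounted; the paths and lab/project names are placeholders, and rsync is just one convenient way to perform the copy.

<code bash>
# Placeholder paths and names -- substitute your own RDE share mount point,
# lab name, and project.
SHARE=/path/to/rde_share/labname
ARCHIVE="$SHARE/labname_projectx_archive_2024"   # includes _archive per the naming request above

# Create the archive folder inside the RDE storage share.
mkdir -p "$ARCHIVE"

# Copy the data to be archived into it; -a preserves permissions and
# timestamps, --progress shows transfer status.
rsync -a --progress /path/to/data_to_archive/ "$ARCHIVE/"
</code>

Once the copy has finished, submit the tape archive request form linked above.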
  
 **Recovery**
Line 226:
  
   * **[[https://status.missouri.edu| UM System Status Page]]**
-  * **[[https://po.missouri.edu/scripts/wa.exe?SUBED1=RSSHPC-L&A=1| RSS Announcement List: Please Sign Up]]**
+  * **[[https://LISTS.UMSYSTEM.EDU/scripts/wa-UMS.exe?SUBED1=RSSHPC-L&A=1&SUB=1| RSS Announcement List: Please Sign Up]]**
   * **[[https://missouri.qualtrics.com/jfe/form/SV_6zkkwGYn0MGvMyO|RSS Services: Order Form]]**
   * **[[https://request.itrss.umsystem.edu/|Hellbender: Account Request Form]]**
Line 258:
 | **Model**  | **Nodes** | **Cores/Node** | **System Memory** | **CPU**                                  | **Local Scratch**   | **Cores** | **Node Names** |
 | Dell C6525 | 112       | 128            | 490 GB            | AMD EPYC 7713 64-Core                    | 1.6 TB              | 14336     | c001-c112      |
-| Dell R640  | 32        | 40             | 192 GB            | Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz |                     | 1280      | c113-c145      |
-| Dell C6420 | 64        | 48             | 384 GB            | Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz | 1 TB                | 3072      | c146-c209      |
-| Dell R6620 | 12        | 256            | 1 TB              | AMD EPYC 9754 128-Core Processor         | 1.5 TB              | 3072      | c210-c221      |
+| Dell R640  | 32        | 40             | 364 GB            | Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz | 100 GB              | 1280      | c113-c145      |
+| Dell C6420 | 64        | 48             | 364 GB            | Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz | 1 TB                | 3072      | c146-c209      |
+| Dell R6620 | 12        | 256            | 994 GB            | AMD EPYC 9754 128-Core Processor         | 1.5 TB              | 3072      | c210-c221      |
 |            |                          |                                                            | Total Cores         | 21760                    |
  
Line 269:
 | Dell XE8640 | 2         | 104            | 2002 GB           | H100     | 80 GB          | 4        | 3.2 TB            | 208     | g018-g019      |
 | Dell XE9640 | 1         | 112            | 2002 GB           | H100     | 80 GB          | 8        | 3.2 TB            | 112     | g020           |
-| Dell R730   | 4         | 20             | 128 GB            | V100     | 32 GB          | 1        | 1.6 TB            | 80      | g021-g024      |
-| Dell R7525  | 1         | 48             | 512 GB            | V100S    | 32 GB          | 3        | 480 GB            | 48      | g025           |
-| Dell R740xd | 3         | 44             | 384 GB            | V100     | 32 GB          | 3        | 240 GB            | 132     | g026-g028      |
-|             |           |                |                   |          |                | Total GPU | 100              | Total Cores | 1688  |
+| Dell R730   | 4         | 20             | 113 GB            | V100     | 32 GB          | 1        | 1.6 TB            | 80      | g021-g024      |
+| Dell R7525  | 1         | 96             | 490 GB            | V100S    | 32 GB          | 3        | 480 GB            | 96      | g025           |
+| Dell R740xd | 2         | 40             | 364 GB            | V100     | 32 GB          | 3        | 240 GB            | 80      | g026-g027      |
+| Dell R740xd | 1         | 44             | 364 GB            | V100     | 32 GB          | 3        | 240 GB            | 44      | g028           |
+|             |           |                |                   |          |                | Total GPU | 100              | Total Cores | 1708  |
  
 A specially formatted sinfo command can be run on Hellbender to report live information about the nodes and the hardware/features they have.
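The exact format string used on Hellbender is not reproduced here, but as a sketch, a node-oriented sinfo query along the following lines reports each node's CPU count, memory, GRES (GPUs), and feature tags.

<code bash>
# Sketch of a formatted sinfo query; field widths are illustrative and the
# site-specific command may differ.
sinfo -N -o "%12N %5c %10m %25G %40f"
# %N = node name, %c = CPUs per node, %m = memory (MB), %G = generic
# resources (e.g. GPUs), %f = available feature tags
</code>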