pub:hpc:hellbender — last edited 2025/04/14 20:07 by bjmfg8
| Dell C6525 | 112 | 128 | 490 GB | 1.6 TB | 14336 | c001-c112 |

**The 2025 pricing is: $2,702 per node per year.**

==== GPU Node Lease ====
| Dell R740xa | 17 | 64 | 238 GB | A100 | 80 GB | 4 | 1.6 TB | 1088 |

**The 2025 pricing is: $7,692 per node per year.**

==== Storage: Research Data Ecosystem ('RDE') ====
* Storage lab allocations are protected by associated security groups applied to the share, with group member access administered by the assigned PI or appointed representative.

**What is the Difference between High Performance and General Performance Storage?**

On Pixstor, which is used for standard HPC allocations,

On VAST, which is used for non-HPC and mixed HPC / SMB workloads, the disks are all flash, but general storage allocations have a QOS policy attached that limits IOPS to prevent the share from saturating the disk pool to the point where high-performance allocations are impacted.
* Workloads that require sustained use of low latency read and write IO with multiple GB/s, generally generated from jobs utilizing multiple NFS mounts

**Snapshots**

* VAST default policy retains 7 daily and 4 weekly snapshots for each share
* Pixstor default policy is 10 daily snapshots
**__None of the cluster attached storage available to users is backed up in any way by us__**; if you delete something and don't have a copy somewhere else, it is gone. Please note the data stored on cluster attached storage is limited to Data Class 1 and 2 as defined by [[https://

**The 2025 pricing is: General Storage: $25/

To order storage please fill out our [[https://
**Costs**

The cost associated with using the RDE tape archive is $8/TB for short-term data kept inside the tape library for 1-3 years, or $140 per tape (rounded up to a whole number of tapes) for tapes sent offsite for long-term retention of up to 10 years. We send these tapes to records management, where they are stored in a climate-controlled environment. Each tape from the current generation (LTO-9) holds approximately 18 TB of data. These are flat one-time costs, and you have the option to do both a short-term in-library copy and a longer-term offsite copy, or one or the other, providing flexibility.
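The tape math above reduces to a simple calculation; a sketch (the 40 TB archive size is a hypothetical example — the $8/TB and $140/tape rates are the ones described above):

```shell
# Estimate RDE tape archive costs for a hypothetical 40 TB archive.
# Rates from above: $8/TB for the in-library copy, $140 per offsite LTO-9 tape (~18 TB each).
data_tb=40
tapes=$(( (data_tb + 17) / 18 ))   # round up to a whole number of tapes
in_library=$(( data_tb * 8 ))      # short-term in-library copy
offsite=$(( tapes * 140 ))         # long-term offsite copy
echo "tapes needed: $tapes"            # 3
echo "in-library cost: \$$in_library"  # $320
echo "offsite cost: \$$offsite"        # $420
```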
**Request Process**

To utilize the tape archive functionality that RSS has set up, the data to be archived will need to be copied to RDE storage if it does not exist there already. This would require the following steps.

* Submit a RDE storage request if the data resides locally and a RDE share is not already available to the researcher: [[http://
* Create an archive folder or folders in the relevant RDE storage share to hold the data you would like to archive. The folder(s) can be named to signify the contents, but we ask that the name includes _archive at the end. For example, something akin to: labname_projectx_archive_2024.
* Copy the contents to be archived to the newly created archive folder(s) within the RDE storage share.
* Submit a RDE tape Archive request: [[https://
* Once the tape archive jobs are completed, ITRSS will notify you and send you an Archive job report, after which you can delete the contents of the archive folder.
* We request that subsequent archive jobs be added to a separate folder, or the initial folder renamed to something that signifies the time of archive for easier retrieval *_archive2024,

**Recovery**
* **[[https://
* **[[https://
* **[[https://
* **[[https://
| **Model** |
| Dell C6525 | 112 | 128 | 490 GB | AMD EPYC 7713 64-Core |
| Dell R640 | 32 | 40
| Dell C6420 | 64 | 48
| Dell R6620 | 12 | 256 | 994 GB
| Dell XE8640 | 2 | 104 | 2002 GB | H100 | 80 GB | 4 | 3.2 TB | 208 | g018-g019 |
| Dell XE9640 | 1 | 112 | 2002 GB | H100 | 80 GB | 8 | 3.2 TB | 112 | g020 |
| Dell R730 | 4 | 20
| Dell R7525 | 1
| Dell R740xd | 2 | 40 | 364 GB | V100 | 32 GB | 3 | 240 GB | 80
| Dell R740xd | 1 | 44 | 364 GB | V100 | 32 GB | 3 | 240 GB | 44 | g028 |
A specially formatted sinfo command can be run on Hellbender to report live information about the nodes and the hardware/
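The exact formatted command is not reproduced in this excerpt; as a hedged sketch, standard sinfo format options can produce a similar per-node report (the field list below is illustrative, not necessarily Hellbender's documented command):

```shell
# Illustrative only: list each node with its hostname, CPU count, memory (MB),
# generic resources (GPUs), and current state. Run on a Hellbender login node.
sinfo -N -o "%n %c %m %G %t" | sort -u
```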
==== Open OnDemand ====

* https://
* https://
OnDemand provides an integrated, single access point for all of your HPC resources. The following apps are currently available on Hellbender'
Finally, you need to give Globus permission to use your identity to access information and perform actions (like file transfers) on your behalf.

==== Tutorial: Globus File Manager ====

After you’ve signed up and logged in to Globus, you’ll begin at the File Manager.
**note:
https://
If symlinks need to be copied, consider using rsync on the DTN with the -l flag**
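Per the note above, a hedged sketch of the rsync alternative run on the DTN (the hostname and both paths are placeholders, not documented endpoints):

```shell
# Copy a directory while reproducing symlinks as symlinks (-l), which Globus skips.
# dtn.example.edu and both paths are hypothetical placeholders.
rsync -rlptv /path/to/lab_share/project/ user@dtn.example.edu:/path/to/destination/project/
```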
The first time you use the File Manager, all fields will be blank:
**Access A Collection**

* Click in the Collection field at the top of the File Manager page and type "
* Globus will list collections with matching names. The collections Globus Tutorial Endpoint 1 and Globus Tutorial Endpoint 2 are collections administered by the Globus team for demonstration purposes and are accessible to all Globus users without further authentication.

{{:pub:hpc:collection_search.png?800|}}

* Click on Globus Tutorial
* Globus will connect to the collection and display the default directory, /~/. (It will be empty.) Click the "

{{:pub:hpc:test_collection_godata1.png?800|}}
**Request A File Transfer**

* A new collection panel will open, with a "

{{:pub:hpc:transfer_or_sync.png?

* Find the "Globus Tutorial
* The default directory, /~/ will again be empty. Your goal is to transfer the sample files here.
* Click on the left collection, Globus Tutorial

{{:pub:hpc:select_files_start.png?800|}}

* Between the two Start buttons at the bottom of the page, the Transfer & Sync Options tab provides access to several options.
* Click the Start> button to transfer the selected files to the collection in the right panel. Globus will display a green notification panel—confirming that the transfer request was submitted—and add a badge to the Activity item in the command menu on the left of the page.

{{:pub:hpc:transfer_success_submitted.png?800|}}
**Confirm Transfer Completion**

* On the Activity page, click the arrow icon on the right to view details about the transfer. You will also receive an email with the transfer details.
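The same transfer can also be submitted from the command line with the Globus CLI rather than the File Manager; a hedged sketch (the endpoint UUIDs and paths below are placeholders to replace with your own source and destination collections):

```shell
# Requires the globus-cli package and a prior `globus login`.
# SRC_EP and DST_EP are hypothetical collection UUIDs, not real endpoints.
SRC_EP=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
DST_EP=ffffffff-1111-2222-3333-444444444444
globus transfer "$SRC_EP:/share/godata/" "$DST_EP:/~/godata/" \
  --recursive --label "tutorial transfer"
# The command prints a task ID; check progress with:
# globus task show <task-id>
```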
{{:pub:hpc:transfer_complete.png?

==== Tutorial: Sharing Data - Create a Guest Collection ====