**Request an Account:**
You can request an account for access to Hellbender by filling out the form found at:
[[https://tdx.umsystem.edu/TDClient/36/DoIT/Requests/ServiceOfferingDet?ID=1041| Hellbender Account Request Form]]

==== What is Hellbender? ====
**Hellbender** is the latest High Performance Computing (HPC) resource available to researchers and students (with sponsorship by a PI) within the UM-System.

**Hellbender** consists of 222 mixed x86-64 CPU nodes providing 22,272 cores as well as 40 GPU nodes consisting of a mix of Nvidia GPUs (see the hardware section for more details). Hellbender is attached to our Research Data Ecosystem ('RDE'), which consists of 8PB of high performance and general purpose research storage. RDE can be accessed from other devices outside of Hellbender to create a single research data location across different computational environments.

==== Investment Model ====
  
^ Service                              ^ Rate        ^ Unit         ^ Support        ^
|Hellbender CPU Node | $2,702.00 | Per Node/Year | Year to Year |
|Hellbender A100 GPU Node* | $7,691.38 | Per Node/Year | Year to Year |
|Hellbender L40s GPU Node* | $4,785.00 | Per Node/Year | Year to Year |
|Hellbender H100 GPU Node* | $13,123.00 | Per Node/Year | Year to Year |
|RDE Storage: High Performance | $95.00 | Per TB/Year | Year to Year |
|RDE Storage: General Performance | $25.00 | Per TB/Year | Year to Year |

***Update 10/2025**: Additional GPU priority partitions cannot be allocated at this time, as GPU investment has exceeded the 50% threshold. If you require capacity beyond the general pool, we can plan and work with your grant submissions to add capacity to Hellbender.
  
  
  * When running on the 'General' partition - users' jobs are queued according to their fairshare score. The maximum running time is 2 days.
  * When running on the 'Requeue' partition - users' jobs are subject to pre-emption if those jobs happen to land on an investor-owned node. The maximum running time is 2 days. (See the example batch script after this list.)
  * To get started please fill out our [[https://tdx.umsystem.edu/TDClient/36/DoIT/Requests/ServiceOfferingDet?ID=1041| Hellbender Account Request Form]]
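As a quick illustration, a free-tier job submission might look like the minimal sketch below. The partition name and resource values are placeholder assumptions, not site-confirmed settings; check ''sinfo'' and your own allocation for the exact values on Hellbender.

<code bash>
#!/bin/bash
# Minimal example batch script (illustrative only -- partition and resource
# values are placeholders).
#SBATCH --job-name=example_job
#SBATCH --partition=general        # assumed name of the 'General' partition
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=2-00:00:00          # 2 days is the maximum on General/Requeue
#SBATCH --output=%x_%j.out

module load python                 # load whatever software your job needs

srun python my_script.py
</code>

Submit it with ''sbatch example_job.sh'' and check its status with ''squeue -u $USER''.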
  
- **Paid access (Investor) tier compute**:
  * All accounts are given 50GB of storage in /home/$USER as well as 500GB in /home/$USER/data at no cost.
  * MU PIs are eligible for one free 5TB group storage allocation in our RDE environment.
  * To get started please fill out our general [[https://tdx.umsystem.edu/TDClient/36/DoIT/Requests/ServiceOfferingDet?ID=1041| Hellbender Account Request Form]] for a Hellbender account and our [[https://tdx.umsystem.edu/TDClient/36/DoIT/Requests/ServiceOfferingDet?ID=1043| RDE Group Storage Request Form]] for the free 5TB group storage.
  
- **Paid access (Investor) tier storage**:
| Dell C6525 | 112    | 128        | 490 GB        | 1.6 TB          | 14336  | c001-c112  |

**The 2025 pricing is: $2,702 per node per year.**

==== GPU Node Lease ====
The investment structure for GPU nodes is the same as CPU - per node per year. If you have funds available that you would like to pay for multiple years up front, we can accommodate that. Once Hellbender has hit 50% of the total GPU nodes in the cluster being investor-owned, we will restrict additional leases until more nodes become available via either purchase or surrender by other PIs. The GPU nodes available for investment comprise the following:

| Model       | # Nodes | Cores/Node | System Memory | GPU  | GPU Memory | # GPU/Node | Local Scratch | # Cores |
| Dell R750xa | 17      | 64         | 490 GB        | A100 | 80 GB      | 4          | 1.6 TB        | 1088    |
| Dell R760xa | 6       | 64         | 490 GB        | H100 | 94 GB      | 2          | 1.8 TB        | 384     |
| Dell R760   | 6       | 64         | 490 GB        | L40S | 45 GB      | 2          | 3.5 TB        | 384     |

**The 2025 pricing is:**
  * A100 Node: $7,691.38 per node per year
  * H100 Node: $13,123.00 per node per year
  * L40S Node: $4,785.00 per node per year
  
==== Storage: Research Data Ecosystem ('RDE') ====
  * Storage lab allocations are protected by associated security groups applied to the share, with group member access administered by the assigned PI or appointed representative.

**What is the Difference between High Performance and General Performance Storage?**

On Pixstor, which is used for standard HPC allocations, general storage is pinned to the SAS disk pool while high performance allocations are pinned to the all-flash NVMe pool, meaning writes and recent reads will have lower latency with high performance allocations.

On VAST, which is used for non-HPC and mixed HPC / SMB workloads, the disks are all flash, but general storage allocations have a QOS policy attached that limits IOPS to prevent the share from saturating the disk pool to the point where high-performance allocations are impacted. High Performance allocations may also have a QOS policy that allows for much higher IO and IOPS. RSS reserves the right to move general storage allocations to lower tier storage in the future if facing capacity constraints.
  * Workloads that require sustained use of low latency read and write IO with multiple GB/s, generally generated from jobs utilizing multiple NFS mounts

**Snapshots**

  * VAST default policy retains 7 daily and 4 weekly snapshots for each share
  * Pixstor default policy is 10 daily snapshots
  
**__None of the cluster attached storage available to users is backed up in any way by us__**; this means that if you delete something and don't have a copy somewhere else, it is gone. Please note the data stored on cluster attached storage is limited to Data Class 1 and 2 as defined by [[https://www.umsystem.edu/ums/is/infosec/classification-definitions| UM System DCL]]. If you need to store things in DCL3 or DCL4, please contact us so we may find a solution for you.

**The 2025 pricing is: General Storage: $25/TB/Year, High Performance Storage: $95/TB/Year**

To order storage please fill out our [[https://missouri.qualtrics.com/jfe/form/SV_6zkkwGYn0MGvMyO| RSS Services Order Form]]
**Costs**

The cost associated with using the RDE tape archive is $8/TB for short-term data kept inside the tape library for 1-3 years, or $144 per tape (rounded up to the number of tapes used) for tapes sent offsite for long-term retention of up to 10 years. We send these tapes to records management, where they are stored in a climate-controlled environment. Each tape from the current generation LTO 9 holds approximately 18TB of data. These are flat, one-time costs, and you have the option to do both a short-term in-library copy and a longer-term offsite copy, or one or the other, providing flexibility.
  
**Request Process**

To utilize the tape archive functionality that RSS has set up, the data to be archived will need to be copied to RDE storage if it does not exist there already. This requires the following steps.
  * Submit an RDE storage request if the data resides locally and an RDE share is not already available to the researcher: [[https://tdx.umsystem.edu/TDClient/36/DoIT/Requests/ServiceOfferingDet?ID=1043|RSS Group Storage Form]]
  * Create an archive folder or folders in the relevant RDE storage share to hold the data you would like to archive. The folder(s) can be named to signify the contents, but we ask that the name includes _archive at the end. For example, something akin to: labname_projectx_archive_2024.
  * Copy the contents to be archived to the newly created archive folder(s) within the RDE storage share (see the sketch after this list).
  * Submit an RDE tape archive request: [[https://archiverequest.itrss.umsystem.edu]]
  * Once the tape archive jobs are completed, ITRSS will notify you and send you an archive job report, after which you can delete the contents of the archive folder.
  * We request that subsequent archive jobs be added to a separate folder, or that the initial folder be renamed to something that signifies the time of archive for easier retrieval: *_archive2024, *_archive2025, etc.
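As a rough sketch, creating the archive folder and staging data into it from a Hellbender login or DTN node might look like the commands below. The mount path and folder names are placeholders for illustration, not real Hellbender paths:

<code bash>
# Placeholder path -- substitute the actual mount point of your RDE storage share.
ARCHIVE_DIR=/path/to/rde_share/labname_projectx_archive_2024

# Create the archive folder; the name signals the contents and the year archived.
mkdir -p "$ARCHIVE_DIR"

# Copy the data to be archived into it. -a preserves permissions and timestamps.
rsync -a --progress /path/to/project_data/ "$ARCHIVE_DIR"/
</code>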
  
**Recovery**
  
  * **[[https://status.missouri.edu| UM System Status Page]]**
  * **[[https://LISTS.UMSYSTEM.EDU/scripts/wa-UMS.exe?SUBED1=RSSHPC-L&A=1&SUB=1| RSS Announcement List: Please Sign Up]]**
  * **[[https://missouri.qualtrics.com/jfe/form/SV_6zkkwGYn0MGvMyO|RSS Services: Order Form]]**
  * **[[https://tdx.umsystem.edu/TDClient/36/DoIT/Requests/ServiceOfferingDet?ID=1041|Hellbender: Account Request Form]]**
  * **[[https://missouri.qualtrics.com/jfe/form/SV_9LAbyCadC4hQdBY|Hellbender: Add User to Existing Account Form]]**
  * **[[https://missouri.qualtrics.com/jfe/form/SV_6FpWJ3fYAoKg5EO|Hellbender: Course Request Form]]**
==== Software ====

Hellbender was built and is managed with Puppet. The underlying OS for Hellbender is Alma 8.9. For resource management and scheduling we are using the SLURM workload manager, version 22.05.11.

==== Hardware ====
Dell C6420: 0.5 unit server containing dual 24 core Intel Xeon Gold 6252 CPUs with a base clock of 2.1 GHz. Each C6420 node contains 384 GB DDR4 system memory.

Dell R6625: 1 unit server containing dual 128 core AMD EPYC 9754 CPUs with a base clock of 2.25 GHz. Each R6625 node contains 1 TB DDR5 system memory.

Dell R6625: 1 unit server containing dual 128 core AMD EPYC 9754 CPUs with a base clock of 2.25 GHz. Each R6625 node contains 6 TB DDR5 system memory.

| **Model**  | **Nodes** | **Cores/Node** | **System Memory** | **CPU**                                  | **Local Scratch**   | **Cores** | **Node Names** |
| Dell C6525 | 112       | 128            | 490 GB            | AMD EPYC 7713 64-Core                    | 1.6 TB              | 14336     | c001-c112      |
| Dell R640  | 32        | 40             | 364 GB            | Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz | 100 GB              | 1280      | c113-c145      |
| Dell C6420 | 64        | 48             | 364 GB            | Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz | 1 TB                | 3072      | c146-c209      |
| Dell R6625 | 12        | 256            | 994 GB            | AMD EPYC 9754 128-Core Processor         | 1.5 TB              | 3072      | c210-c221      |
| Dell R6625 | 2         | 256            | 6034 GB           | AMD EPYC 9754 128-Core Processor         | 1.6 TB              | 512       | c222-c223      |
|            |           |                |                   |                                          | Total Cores         | 22272     |                |
  
=== GPU nodes ===
  
| **Model**   | **Nodes** | **Cores/Node** | **System Memory** | **GPU**  | **GPU Memory** | **GPUs/Node** | **Local Scratch** | **Cores** | **Node Names** |
| Dell R750xa | 17        | 64             | 490 GB            | A100     | 80 GB          | 4             | 1.6 TB            | 1088      | g001-g017      |
| Dell XE8640 | 2         | 104            | 2002 GB           | H100     | 80 GB          | 4             | 3.2 TB            | 208       | g018-g019      |
| Dell XE9640 | 1         | 112            | 2002 GB           | H100     | 80 GB          | 8             | 3.2 TB            | 112       | g020           |
| Dell R730   | 4         | 20             | 113 GB            | V100     | 32 GB          | 1             | 1.6 TB            | 80        | g021-g024      |
| Dell R7525  | 1         | 96             | 490 GB            | V100S    | 32 GB          | 3             | 480 GB            | 96        | g025           |
| Dell R740xd | 2         | 40             | 364 GB            | V100     | 32 GB          | 3             | 240 GB            | 80        | g026-g027      |
| Dell R740xd | 1         | 44             | 364 GB            | V100     | 32 GB          | 3             | 240 GB            | 44        | g028           |
| Dell R760xa | 6         | 64             | 490 GB            | H100     | 94 GB          | 2             | 1.8 TB            | 384       | g029-g034      |
| Dell R760   | 6         | 64             | 490 GB            | L40S     | 45 GB          | 2             | 3.5 TB            | 384       | g035-g040      |
|             |           |                |                   |          | Total GPU      | 124           | Total Cores       | 2476      |                |
  
A specially formatted sinfo command can be run on Hellbender to report live information about the nodes and the hardware/features they have.
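For instance, a node-oriented format string along these lines lists each node with its partition, CPU count, memory, GPUs (GRES), and feature tags. This is an illustrative format, not necessarily the exact command referenced above:

<code bash>
# List every node with partition, CPUs, memory (MB), generic resources (GPUs),
# and feature tags; adjust the field widths or columns as needed.
sinfo -N -o "%15N %15P %5c %8m %20G %f"
</code>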
==== Open OnDemand ====

  * https://ondemand.rnet.missouri.edu - Hellbender Open OnDemand (Researcher)
  * https://hb-classes.missouri.edu - Hellbender Classes Open OnDemand (Classes)

OnDemand provides an integrated, single access point for all of your HPC resources. The following apps are currently available on Hellbender's Open OnDemand.
  
  * Hellbender Collection Name: U MO ITRSS RDE
  * Mill Collection Name: Missouri S&T Mill

More detailed information on how to use Globus is at [[https://docs.itrss.umsystem.edu/pub/hpc/hellbender#globus1]]
Below is the process for setting up a class on the OOD portal.

  - Send the class name, the list of students and TAs, and any shared storage requirements to itrss-support@umsystem.edu. This can also be accomplished by filling out our **[[https://missouri.qualtrics.com/jfe/form/SV_6FpWJ3fYAoKg5EO|Hellbender: Course Request Form]]**.
  - We will add the students to the group allowing them access to OOD.
  - If the student does not have a Hellbender account yet, they will be presented with a link to a form to fill out requesting a Hellbender account.
  
**Documentation**: http://docs.nvidia.com/cuda/index.html

==== RStudio ====

[[https://youtu.be/WuAwXMUYE_Y]]
  
==== Visual Studio Code ====
  
Finally, you need to give Globus permission to use your identity to access information and perform actions (like file transfers) on your behalf.
{{:pub:hpc:globus_terms_6.png?600|}} {{:pub:hpc:globus_allow_or_deny_6.png?800|}}
  
==== Tutorial: Globus File Manager ====

After you’ve signed up and logged in to Globus, you’ll begin at the File Manager.

**Note: Symlinks may not be transferred via Globus (see https://docs.globus.org/faq/transfer-sharing/#how_does_globus_handle_symlinks). If symlinks need to be copied, consider using rsync on the DTN with the -l flag.**
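A transfer that preserves symlinks might look like the sketch below; the DTN hostname and paths are placeholders rather than the actual Hellbender endpoints:

<code bash>
# Illustrative only -- replace the hostname and paths with your own.
# -a preserves permissions/timestamps and already implies -l (copy symlinks
# as symlinks); -l is shown explicitly to match the note above.
rsync -avl /home/$USER/data/project/ username@dtn.example.edu:/path/to/rde_share/project/
</code>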
  
The first time you use the File Manager, all fields will be blank:
  
**Access A Collection**
  * Click in the Collection field at the top of the File Manager page and type "globus tutorial collection 1".
  * Globus will list collections with matching names. The collections Globus Tutorial Collection 1 and Globus Tutorial Collection 2 are administered by the Globus team for demonstration purposes and are accessible to all Globus users without further authentication.

{{:pub:hpc:collection_search.png?800|}}

  * Click on Globus Tutorial Collection 1.
  * Globus will connect to the collection and display the default directory, /~/. (It will be empty.) Click the "Path" field and change it to /home/share/godata/. Globus will show the files in the new path: three small text files.

{{:pub:hpc:test_collection_godata1.png?800|}}
  
**Request A File Transfer**
  * A new collection panel will open, with a "Transfer or Sync to" field at the top of the panel.

{{:pub:hpc:transfer_or_sync.png?1200|}}

  * Find the "Globus Tutorial Collection 2" collection and connect to it as you did with Globus Tutorial Collection 1 above.
  * The default directory, /~/, will again be empty. Your goal is to transfer the sample files here.
  * Click on the left collection, Globus Tutorial Collection 1, and select all three files. The Start> button at the bottom of the panel will activate.

{{:pub:hpc:select_files_start.png?800|}}

  * Between the two Start buttons at the bottom of the page, the Transfer & Sync Options tab provides access to several options.
  * Click the Start> button to transfer the selected files to the collection in the right panel. Globus will display a green notification panel confirming that the transfer request was submitted, and add a badge to the Activity item in the command menu on the left of the page.

{{:pub:hpc:transfer_success_submitted.png?800|}}
  
**Confirm Transfer Completion**
  * On the Activity page, click the arrow icon on the right to view details about the transfer. You will also receive an email with the transfer details.

{{:pub:hpc:transfer_complete.png?1200|}}
  
==== Tutorial: Sharing Data - Create a Guest Collection ====