pub:hpc:start — revised 2024/01/03 14:49 (blspcy); current revision 2024/10/15 17:41 (keelerm)
====Hellbender====
  
Hellbender is a traditional HPC research cluster built at MU in 2023 to support research efforts for UM researchers. Mizzou's Division of Research, Innovation & Impact (DRII) is the primary source of funding for the hardware and support of Hellbender. Hellbender started as 112 compute nodes, each containing 2 AMD 7713 processors and 512GB of RAM, and 17 GPU nodes, each containing 4 Nvidia A100 80GB GPUs, 2 Intel Xeon 6338 processors, and 256GB of system RAM. Since then, it has expanded thanks to researcher investments and the repurposing of the newest portions of the previous HPC cluster. See a more detailed overview of the Hellbender architecture at {{ :pub:hpc:hellbender_system_overview.pdf |}}.
  
DRII has made it clear that the mission of Hellbender is to accelerate Mizzou Forward initiatives. There are 2 access levels available to UM researchers: general and priority. General access is free and available to all UM researchers, and provides an equal share of at least 50% of the resources available to all users. Priority access provides dedicated access to some number of nodes on Hellbender and is available through investment.
  
Requesting access to Hellbender can be done through our [[https://request.itrss.umsystem.edu|request form]]. Each form entry will need a faculty sponsor listed as the principal investigator (PI) for the group, who will be the primary contact for the request and the responsible party for managing group members. The form entry can also request access to our [[pub:rde:start|Research Data Ecosystem]] (RDE) at the same time as an HPC request, or the RDE request can be made separately later if you find a need for it.
  
[[pub:hpc:foundry|Foundry Documentation]]

====Nautilus====
Researchers with workflows that involve AI, machine learning, simulation, or similar computation that can be parallelized at the job level may be interested in another available HPC resource: the NSF National Research Platform Nautilus cluster. NSF grants orchestrated by faculty at the University of Missouri and in the Great Plains Network have contributed substantially to this resource.

The Nautilus HPC system is a public cluster built on Kubernetes containerization. Its resources include 1,352 compute nodes, 32 NVIDIA A100 nodes, 26 petabytes of DDN storage, and a 200 Gbps NVIDIA Mellanox InfiniBand interconnect.

Users will need to learn how to use Kubernetes and containers, which differ substantially from the SLURM-based HPC systems we have at the University, but may find more resource availability. Nautilus also differs from other NSF programs (like ACCESS) in that access is not gated by proposals and approvals. All resources requested are expected to be used, so users need to understand what their jobs require. There is GitHub documentation to assist with these learning points.
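
For orientation, work on a Kubernetes cluster like Nautilus is described declaratively in a Job manifest rather than an sbatch script. The sketch below is a generic Kubernetes ''batch/v1'' Job, not a Nautilus-specific recipe — the job name, container image, and resource sizes are illustrative assumptions. Note that CPU and memory requests are stated explicitly, which is how the scheduler learns what a job requires:

<code yaml>
# Illustrative Kubernetes Job manifest (name, image, and resource
# values are hypothetical, not Nautilus policy values).
apiVersion: batch/v1
kind: Job
metadata:
  name: example-analysis          # hypothetical job name
spec:
  template:
    spec:
      containers:
        - name: worker
          image: python:3.11      # all work runs inside a container image
          command: ["python", "-c", "print('hello from the cluster')"]
          resources:
            requests:             # request only what the job will actually use
              cpu: "2"
              memory: 4Gi
            limits:
              cpu: "2"
              memory: 4Gi
      restartPolicy: Never        # batch jobs run to completion, not as services
  backoffLimit: 0                 # do not retry the pod on failure
</code>

A manifest like this would be submitted with ''kubectl apply -f job.yaml'' and monitored with ''kubectl get jobs'', roughly playing the roles that ''sbatch'' and ''squeue'' play on a SLURM cluster.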

Data is expected to be DCL 1 or 2; higher classifications of data are not appropriate for this cluster. Researchers should understand that the work they do on Nautilus should be considered open to the public.

If you think Nautilus may be a resource that can help your research excel, please let us know of your interest.

  
=====General Policies=====
  * Globus for advanced data movement
  * Specialized need consultation
  * Data archival
  
These resources work in conjunction with RSS services related to grant support, HPC infrastructure, and data management plan development. Capabilities that are not yet generally available but in development include:
  
  * Data backup
  * Data analytics and reporting
  
We are also available to answer questions outside of these hours via email: itrss-support@umsystem.edu
  
====Grant Proposal Assistance====

The RSS team is here to help with grants. We offer consultations and project reviews that include, but are not limited to:

  * **Security Reviews**
  * **Vendor Quotes**
    * We work with university-approved vendors to get preferred pricing and support.
  * **Letters of Support**
    * Some grants require proof of campus IT knowledge of, and support for, the proposal.
  * **Regional Partnerships**
    * We are active members in several regional network groups (Great Plains Network, CIMUSE, Campus Champions, and more) that can be assets in finding partnerships for multi-institution projects.
  * **Data Management Plans**
    * MU Libraries has good resources for DMPs for many of the most common granting agencies: [[https://libraryguides.missouri.edu/datamanagement|MU Libraries Data Management]]
  * **Facilities Description** {{ :pub:hpc:RSS Overview.docx |(available here) }}