====Hellbender====
  
Hellbender is a traditional HPC research cluster built at MU in 2023 to support research efforts for UM researchers. Mizzou's Division of Research, Innovation & Impact (DRII) is the primary source of funding for the hardware and support of Hellbender. Hellbender started as 112 compute nodes, each containing 2 AMD 7713 processors and 512GB of RAM, and 17 GPU nodes, each containing 4 Nvidia A100 80GB GPUs, 2 Intel Xeon 6338 processors, and 256GB of system RAM. Since then, it has expanded through researcher investments and by repurposing the newest portions of the previous HPC cluster. See a more detailed overview of the Hellbender architecture at {{ :pub:hpc:hellbender_system_overview.pdf |}}.
  
DRII has made it clear that the mission of Hellbender is to accelerate Mizzou Forward initiatives. There are two access levels available to UM researchers: general and priority access. General access is free and available to all UM researchers, and provides an equal share of at least 50% of the resources available to all users. Priority access provides dedicated access to some number of nodes on Hellbender and is available through investment.
  
Requesting access to Hellbender can be done through our [[https://request.itrss.umsystem.edu|request form]]. Each form entry will need a faculty sponsor listed as the principal investigator (PI) for the group, who will be the primary contact for the request and the responsible party for managing group members. The form entry can also request access to our [[pub:rde:start|Research Data Ecosystem]] (RDE) at the same time as an HPC request, or the RDE request can be made separately later if you find a need for it.
  
[[pub:hpc:hellbender|Hellbender Documentation]]

====The Mill====

The Mill is a traditional HPC research cluster built at S&T in 2023. The Mill currently consists of 229 compute nodes: 25 with 512GB of DDR4 RAM, a single 1.6TB NVMe drive for local scratch, and 128 cores; 160 with 256GB of DDR4 RAM, a single 2.6TB NVMe drive for local scratch, and 64 cores; and 44 with 192GB of DDR4 RAM, 2.6TB of local scratch, and 40 cores, totaling 15,200 compute cores across the system. There are also 8 GPU nodes: 6 with 4 Nvidia V100 GPUs each, 1 with 8 H100 GPUs, and 1 with 2 V100s, totaling 34 GPUs. The network is based on an HDR InfiniBand backbone, which provides up to 200 gigabits per second point-to-point. Each node is attached to the backbone with an HDR-100 InfiniBand connection capable of providing 100 gigabits per second of data throughput to the node. The Mill is connected to 250TB of high-performance all-flash InfiniBand storage (VAST) as well as 800TB of utility Ceph storage. Lab storage allocations are protected by associated security groups applied to the share, with the ability for the PI to manage group access.
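As a quick sanity check, the per-node-type figures above tally to the stated totals. The sketch below simply restates the inventory from the description (it does not query the cluster or its scheduler):

```python
# Node inventory for The Mill, as described above.
compute_nodes = [
    {"count": 25,  "cores": 128, "ram_gb": 512},  # 1.6TB NVMe scratch
    {"count": 160, "cores": 64,  "ram_gb": 256},  # 2.6TB NVMe scratch
    {"count": 44,  "cores": 40,  "ram_gb": 192},  # 2.6TB scratch
]
gpu_nodes = [
    {"count": 6, "gpus_per_node": 4},  # Nvidia V100 nodes
    {"count": 1, "gpus_per_node": 8},  # H100 node
    {"count": 1, "gpus_per_node": 2},  # V100 node
]

total_nodes = sum(n["count"] for n in compute_nodes)
total_cores = sum(n["count"] * n["cores"] for n in compute_nodes)
total_gpus = sum(n["count"] * n["gpus_per_node"] for n in gpu_nodes)

print(total_nodes, total_cores, total_gpus)  # 229 15200 34
```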

Priority access to dedicated hardware is available through investment in hardware.

We ask that when you cite any of the RSS clusters in a publication, you send an email to itrss-support@umsystem.edu and share a copy of the publication with us. To cite the use of The Mill in a publication, please use: "The computation for this work was performed on the high-performance computing infrastructure provided by Research Support Solutions at Missouri University of Science and Technology https://doi.org/10.71674/PH64-N397".

[[pub:hpc:mill|Mill Documentation]]
====The Foundry====
  * Globus for advanced data movement
  * Specialized need consultation
  * Data archival
  
These resources work in conjunction with RSS services related to grant support, HPC infrastructure, and data management plan development. Capabilities that are not yet generally available but in development include:
  
  * Data backup
  * Data analytics and reporting