You're not limited to choosing only one; however, due to differences in funding sources, each resource has different access rules. This section covers some of the system details of each resource and the requirements for account approval.
Hellbender is a traditional HPC research cluster built at MU in 2023 to support research efforts for UM researchers. Mizzou's Division of Research, Innovation & Impact (DRII) is the primary source of funding for the hardware and support of Hellbender. Hellbender is made up of 112 compute nodes, each containing 2 AMD EPYC 7713 processors and 512GB of RAM, and 17 GPU nodes, each containing 4 Nvidia A100 GPUs with 80GB of GPU RAM, 2 Intel Xeon 6338 processors, and 256GB of system RAM.
DRII has made it clear that the mission of Hellbender is to accelerate Mizzou Forward initiatives. There are 2 access levels available to UM researchers: general and priority access. General access is free and available to all UM researchers, and provides an equal share of at least 50% of the resources available to all users. Priority access provides dedicated access to some number of nodes on Hellbender and is available through either direct investment or DRII allocations. Direct investments are subsidized by DRII at a rate of 25% of the investment. For more specific details regarding access levels and costs for investment, please see [[PUTLINKHERE|the computing model document]] for Hellbender.
Requesting access to Hellbender can be done through our request form. Each form entry will need a faculty sponsor listed as the principal investigator (PI) for the group, who will be the primary contact for the request. The PI will be the responsible party for managing members of the group. The form entry can also request access to our Research Data Ecosystem (RDE) at the same time as the HPC request, or the RDE request can be made separately later if you find a need for it.
The Foundry is a traditional HPC research cluster built at S&T in 2019 to support regional higher education institutions. The primary funding for the Foundry came from the National Science Foundation's (NSF) Major Research Instrumentation (MRI) program; additional funding for support and infrastructure has been provided by the S&T campus. The Foundry is made up of 160 compute nodes, each with 2 AMD Epyc 7502 processors and 256GB of RAM, and 6 GPU nodes, each with 2 Intel Xeon 6248 processors, 192GB of system RAM, and 4 Nvidia V100 GPUs with 32GB of GPU RAM each.
In keeping with the MRI program's mission of making The Foundry a regional resource for higher education, general resources are freely available to any Missouri university by emailing a one-page request for resources to foundry-access@mst.edu. For UM System researchers, a request to the IT help desk is all that is necessary to gain general access. Priority access to dedicated hardware is available through investment in hardware.
In all publications or products resulting from work performed using the Foundry, the NSF grant that provided funding for the Foundry must be acknowledged. This can be done by adding a sentence to this effect to the publication: “This work was supported in part by the National Science Foundation under Grant No. OAC-1919789.”
The following are RSS policies and guidelines for different services and groups:
Software installed cluster-wide must have an open source license (https://opensource.org/licenses) or be obtained through the procurement process, even if there is no cost associated with it.
Licensed software (any software that requires a license or agreement to be accepted) must follow the procurement process to protect users, their research, and the University. To ensure this, RSS must manage the license and the license server for any licensed software that RSS installs and supports.
For widely used software, RSS can facilitate the sharing of license fees and/or may support the cost, depending on the cost and situation. Otherwise, users are responsible for funding fee-based licensed software, and RSS can handle the procurement process. We require that, if the license does not preclude it and there are no node or other resource limits, the software be made available to all users on the cluster. All licensed software installed on the cluster is to be used in accordance with the license agreement. We will do our best to install and support a wide range of scientific software as resources and circumstances dictate, but in general we only support scientific software that will run on Linux in an HPC cluster environment. RSS may not support software that is implicitly/explicitly deprecated by the community.
A majority of scientific software and software libraries can be installed in users’ accounts or in group space. We also provide limited support for Singularity (https://sylabs.io/docs/) for advanced users who require more control over their computing environment. We cannot knowingly assist users in installing software that may put them, the University, or their intellectual property at risk.
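As a minimal sketch of a Singularity workflow (the image used here is a public demo container, not something RSS provides or endorses):

<code bash>
# Pull a public demo container image into your own space
singularity pull lolcow.sif docker://godlovedc/lolcow

# Run a command inside the container
singularity exec lolcow.sif cat /etc/os-release
</code>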
To connect to Hellbender, please first make sure that you have an account. To get an account, please fill out our account request form [FORM LINK HERE].
Once you have been notified by the RSS team that your account has been created on Hellbender, open a terminal and type ssh [user_id]@hellbender-login.rnet.missouri.edu. If you are on campus or on the VPN, you will be able to log in directly to Hellbender using your UM-system password. Once connected, you will land on the login node and will see a screen similar to this: [HELLBENDER LANDING PAGE EXAMPLE].
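For example (replace the placeholder user ID with your own UM-system user ID):

<code bash>
# Connect to the Hellbender login node from on campus or over the VPN;
# replace "user_id" with your UM-system user ID
ssh user_id@hellbender-login.rnet.missouri.edu
</code>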
You are now on the login node and are ready to proceed to submit jobs and work on the cluster.
If you won't primarily be connecting to Hellbender from on campus and do not want to use the VPN, another option is to use public/private key authentication. You can set up SSH key pairs on any number of computers, and those computers will be able to access Hellbender from outside the campus network, as sketched below.
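A typical setup is sketched below; it assumes you can still log in with your password (for example, from campus or over the VPN) while copying the public key over:

<code bash>
# On your local machine, generate a key pair (accept the default file
# location and choose a passphrase when prompted)
ssh-keygen -t ed25519

# Copy the public key to your Hellbender account; replace "user_id"
# with your UM-system user ID
ssh-copy-id user_id@hellbender-login.rnet.missouri.edu

# Later logins from this machine can then authenticate with the key
ssh user_id@hellbender-login.rnet.missouri.edu
</code>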
The general partition is intended for non-investors to run multi-node, multi-day jobs.
The requeue partition is intended for non-investor jobs that have been re-queued because they landed on an investor-owned node.
The GPU partition is composed of nodes with Nvidia A100 cards (4 per node). Acceptable use includes jobs that utilize a GPU for the majority of the run.
This partition is designed for short interactive testing, interactive debugging, and general interactive jobs. Please use it, rather than the login node, for light testing.
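As a sketch of how these partitions might be used with Slurm, the examples below assume the partition names are general, gpu, and interactive, matching the descriptions above; myjob.sh and mygpujob.sh are placeholder batch scripts, and the resource requests and time limits should be adjusted to your actual job:

<code bash>
# Submit a multi-node batch job to the general partition
sbatch --partition=general --nodes=2 --ntasks-per-node=8 --time=2-00:00:00 myjob.sh

# Submit a job that uses one GPU to the gpu partition
sbatch --partition=gpu --gres=gpu:1 --time=08:00:00 mygpujob.sh

# Start a short interactive shell for testing instead of working on the login node
srun --partition=interactive --time=01:00:00 --pty bash
</code>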