====Hellbender====
Hellbender is a traditional HPC research cluster built at MU in 2023 to support research efforts for UM researchers. Mizzou'
DRII has made it clear the mission of Hellbender is to accelerate Mizzou Forward initiatives. There are 2 access levels made available for UM researchers,
Requesting access to Hellbender can be done through our [[https://
[[pub:
+ | |||
+ | ====The Mill==== | ||
+ | |||
The Mill is a traditional HPC research cluster built at S&T in 2023. The Mill currently consists of 229 compute nodes: 25 with 512GB of DDR4 RAM, a single 1.6TB NVMe drive for local scratch, and 128 cores; 160 with 256GB of DDR4 RAM, a single 2.6TB NVMe drive for local scratch, and 64 cores; and 44 with 192GB of DDR4 RAM, 2.6TB of local scratch, and 40 cores, totaling 15,200 compute cores across the system. There are also 8 GPU nodes: 6 with 4 Nvidia V100 GPUs, 1 with 8 H100 GPUs, and 1 with 2 V100s, totaling 34 GPUs. The network is based on an HDR backbone, which provides up to 200 gigabits per second point-to-point on the network. Each node is attached to the backbone with an HDR-100 InfiniBand connection capable of providing 100 gigabits per second of data throughput to each node. The Mill is connected to 250TB of high-performance all-flash InfiniBand storage (VAST) as well as 800TB of utility Ceph storage. Storage lab allocations are protected by associated security groups applied to the share, with the ability for the PI to manage group access.
+ | |||
+ | Priority access to dedicated hardware can be made through investment in hardware. | ||
+ | |||
+ | We ask that when you cite any of the RSS clusters in a publication to send an email to itrss-support@umsystem.edu as well as share a copy of the publication with us. To cite the use of The Mill in a publication please use: "The computation for this work was performed on the high-performance computing infrastructure provided by Research Support Solutions at Missouri University of Science and Technology https:// | ||
+ | |||
+ | [[pub: | ||
====The Foundry====
[[pub:

====Nautilus====
Researchers whose workflows involve AI, machine learning, simulation, or similar computation that can be parallelized at the job level may be interested in another available HPC resource: the NSF National Research Platform's Nautilus cluster. NSF grants orchestrated by faculty at the University of Missouri and in the Great Plains Network have contributed substantially to this resource.
+ | |||
+ | The Nautilus HPC System is a public cluster utilizing Kubernetes containerization. Its resources include 1,352 compute nodes, 32 NVIDIA A100 nodes, 26 petabytes of DDN storage and a 200 Gbps NVIDIA Mellanox InfiniBand interconnect. | ||
+ | |||
Users will need to learn how to use Kubernetes and containers, a very different system from the SLURM-based HPC systems we have at the University, but they may find greater resource availability.
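As a rough illustration of the difference, work on a Kubernetes cluster is described as a containerized Job manifest rather than a SLURM batch script. The sketch below is a generic Kubernetes example, not an actual Nautilus configuration; the job name, namespace, image, and resource values are hypothetical placeholders.

```yaml
# Minimal Kubernetes Job sketch (hypothetical values, not Nautilus-specific).
# A manifest like this would be submitted with: kubectl apply -f job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job            # hypothetical job name
  namespace: my-lab-namespace  # hypothetical namespace
spec:
  template:
    spec:
      containers:
      - name: worker
        image: python:3.11     # the container image carries your software stack
        command: ["python", "-c", "print('hello from the cluster')"]
        resources:
          requests:
            cpu: "4"
            memory: 16Gi
          limits:
            nvidia.com/gpu: 1  # request one GPU, roughly analogous to --gres=gpu:1 in SLURM
      restartPolicy: Never
```

The key practical difference is that where SLURM schedules a script onto nodes and relies on software installed on the cluster (e.g. via modules), Kubernetes schedules containers, so dependencies travel with the image.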
+ | |||
+ | Data is expected to be DCL 1 or 2; higher classifications of data are not appropriate for this cluster. | ||
+ | |||
+ | If you think that Nautilus may be a resource that can help your research excel, please let us know of your interest. | ||
+ | |||
+ | |||
=====General Policies=====
  * Globus for advanced data movement
  * Specialized need consultation
  * Data archival
These resources work in conjunction with RSS services related to grant support, HPC infrastructure,
  * Data backup
  * Data analytics and reporting
=====Getting Help=====
====Office Hours====

RSS office hours are now virtual. In-person library RSS office hours have been suspended until further notice. Our team will still be available to help during office hours via Zoom.


^Office Hours ^Date and Time ^Location ^
|RSS |Wed 10:00 - 12:
|Engineering/
|BioCompute |Please message RSS or join the Zoom above for BioCompute questions| |
Note that the above Zoom links are password protected; please contact us to receive the session password.

We are also available to answer questions outside of these hours via email: itrss-support@umsystem.edu
+ | |||
+ | ====Grant Proposal Assistance==== | ||
+ | |||
+ | The RSS team is here to help with grants. We offer consultations and project reviews that include but are not limited to: | ||
  * **Security Reviews**
  * **Vendor Quotes**
    * We work with university-approved vendors to get preferred pricing and support
  * **Letters of Support**
    * Some grants require proof of campus IT knowledge/
  * **Regional Partnerships**
    * We are active members in several different regional network groups (Great Plains Network, CIMUSE, Campus Champions, and more) that can be assets in finding partnerships for multi-institution projects.
  * **Data Management Plans**
    * MU Libraries has good resources for DMPs for many of the most common granting agencies [[https://
  * **Facilities Description**