Hellbender

Request an account

You can request an account on the Hellbender by filling out the form found at https://request.itrss.umsystem.edu

System Information

Maintenance

Regular maintenance is scheduled for the 2nd Tuesday of every month. Jobs will run only if they are scheduled to complete before the maintenance window begins, and queued jobs will start once maintenance is complete.

Services

Order RSS Services by filling out the form found at https://missouri.qualtrics.com/jfe/form/SV_6zkkwGYn0MGvMyO

RSS offers:

  • HPC Compute: Compute Node
  • HPC Compute: GPU Node
  • RDE Storage: General storage
  • RDE Storage: High performance storage
  • Software

Add People to Existing Account(s)

Add users to existing compute (SLURM), storage, or software groups by filling out the form found at https://missouri.qualtrics.com/jfe/form/SV_9LAbyCadC4hQdBY

Software

Hellbender was built and is managed with Puppet. The underlying OS for Hellbender is Alma Linux 8.9. For resource management and scheduling we are using the Slurm workload manager, version 22.05.11.

Hardware

Management nodes

The head nodes and login nodes are virtual, making this one of the key differences from the previous generation cluster named Lewis.

Compute nodes

Dell C6525: 0.5-rack-unit servers, each containing dual 64-core AMD EPYC Milan 7713 CPUs with a base clock of 2 GHz and a boost clock of up to 3.675 GHz. Each C6525 node contains 512 GB of DDR4 system memory.

Model        CPU Cores   System Memory   Node Count   Local Scratch   Total Core Count
Dell C6525   128         512 GB          112          1.6 TB          14336

GPU nodes

Model         CPU Cores   System Memory   GPU    GPU Memory   GPU Count   Local Scratch   Node Count
Dell XE9640   104         2048 GB         H100   80 GB        4           3.2 TB          2
Dell R740xa   64          356 GB          A100   80 GB        4           1.6 TB          17

A specially formatted sinfo command can be run on Hellbender to report live information about the nodes and the hardware/features they have.

  sinfo -o "%5D %4c %8m %28f %35G" 

Investment Model

Overview

The newest High Performance Computing (HPC) resource, Hellbender, has been provided through a partnership with the Division of Research Innovation and Impact (DRII) and is intended to work in conjunction with DRII policies and priorities. This outline defines how fairshare, general access, priority access, and researcher contributions are handled for Hellbender. HPC has been identified as a continually growing need for researchers; as such, DRII has invested in Hellbender as an institutional resource. This investment is intended to increase ease of access to these resources, provide cutting-edge technology, and grow the pool of resources available.

Fairshare

To understand how general access and priority access differ, fairshare must first be defined. Fairshare is an algorithm used by the scheduler to assign priority to jobs in a way that gives every user a fair chance at the available resources. For any given job waiting in the queue, the algorithm weighs several metrics, such as job size, wait time, current and recent usage, and individual user priority levels. Administrators can tune the fairshare algorithm to adjust how it determines which jobs run next once resources become available.
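
A quick way to see where you stand is Slurm's sshare command, which reports the raw shares, usage, and resulting fairshare factor for your associations (a minimal sketch; the exact columns shown depend on the site configuration):

sshare -U        # show only your own associations
sshare -U -l     # long format with additional usage columns

A fairshare factor closer to 1.0 means your pending jobs will generally be scheduled sooner; heavy recent usage pushes the factor toward 0.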

Resources Available to Everyone: General Access

General access will be open to any research or teaching faculty, staff, and students from any UM System campus. General access is defined as open access to all resources available to users of the cluster at an equal fairshare value. This means that all users will have the same level of access to the general resource. Research users of the general access portion of the cluster will be given the RDE Standard Allocation to operate from. Larger storage allocations are provided through RDE Advanced Allocations and are independent of HPC priority status.

Hellbender Advanced: Priority Access

When researcher needs are not being met at the general access level, researchers may request an advanced allocation on Hellbender to gain priority access. Priority access gives research groups a limited set of resources that are available to them without competition from general access users. Priority access is provided to a specific set of hardware through a priority partition that contains these resources. This partition will be created and limited to use by the user and their associated group. These resources will also be in an overlapping pool of resources available to general access users. This pool is administered such that if a priority access user submits jobs to their priority partition, any jobs running on those resources from the overlapping partition are requeued and begin execution again on another resource in that partition if one is available, or return to wait in the queue for resources. Priority access users retain general access status, and fairshare will still play a part in moderating their access to the general resource. Fairshare inside a priority partition determines which user's jobs are selected for execution next inside that partition. Jobs running inside a priority partition also affect a user's fairshare calculations for resources in the general access partition, meaning that running a large number of jobs inside a priority partition will lower a user's priority for the general resources as well.

Priority Designation

Hellbender Advanced Allocations are eligible for DRII Priority Designation. This means that DRII has determined the proposed use case (such as a core or grant-funded project) presents a strategic advantage or high priority service to the university. In this case, DRII fully subsidizes the resources used to create the Advanced Allocation.

Traditional Investment

Hellbender Advanced Allocation requests that are not approved for DRII Priority Designation may be treated as traditional investments, with the researcher paying for the resources used to create the Advanced Allocation at the defined rate. These rates are subject to change based on DRII's determination and hardware costs.

Resource Management

Information Technology Research Support Solutions (ITRSS) will procure, set up, and maintain the resource. ITRSS will work in conjunction with MU Division of Information Technology and Facility Services to provide adequate infrastructure for the resource.

Resource Growth

Priority access resources will generally be made available from existing hardware in the general access pool and the funds will be retained for a future time to allow a larger pool of funds to accumulate for expansion of the resource. This will allow the greatest return on investment over time. If the general availability resources are less than 50% of the overall resource, an expansion cycle will be initiated to ensure all users will still have access to a significant amount of resources. If a researcher or research group is contributing a large amount of funding, it may trigger an expansion cycle if that is determined to be advantageous at the time of the contribution.

Benefits of Investing

The primary benefit of investing is receiving “shares” and a priority access partition for you or your research group. Shares are used to calculate the percentage of the cluster owned by an investor. As long as investors have used less than they own, they will be able to use their shares to get higher priority in the general queue. FairShare is by far the largest factor in queue placement and wait times.

Investors will be granted Slurm accounts to use in order to charge their investment (FairShare). These accounts can contain the same members as a POSIX (storage) group, or any other set of users, at the request of the investor.

To use an investor account in an sbatch script, use:

#SBATCH --account=<investor account>
#SBATCH --partition=<investor partition> (for cpu jobs)
#SBATCH --partition=<investor partition>-gpu --gres=gpu:A100:1 (requests 1 A100 gpu for gpu jobs)
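
Putting these directives together, a minimal sbatch header for an investor GPU job might look like the following. The account and partition names (rss_lab and rss_lab-gpu) are placeholders; substitute the names you were given when your investment was set up.

#!/bin/bash
#SBATCH --account=rss_lab          # placeholder investor account
#SBATCH --partition=rss_lab-gpu    # placeholder investor GPU partition
#SBATCH --gres=gpu:A100:1          # request one A100 GPU
#SBATCH --time=04:00:00            # four hour limit

srun nvidia-smi                    # confirm which GPU was allocated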

To use a QOS in an sbatch script, use:

#SBATCH --qos=<qos>

HPC Pricing

The HPC service is available at any time at the following rates for the 2023-2024 year:

Service                    Rate        Unit            Support
Hellbender HPC Compute     $2,702.00   Per Node/Year   Year to Year
GPU Compute*               $7,691.38   Per Node/Year   Year to Year
High Performance Storage   $95.00      Per TB/Year     Year to Year
General Storage            $25.00      Per TB/Year     Year to Year

*Note: The GPU compute service is no longer offered because 50% of the cluster's GPU nodes are already under investment. If you need GPU capacity beyond the general pool, we can plan and work with your grant submissions to add additional capacity to Hellbender.

Policies

Under no circumstances should your code be running on the login node.

Software and Procurement

Open source software installed cluster-wide must have an open source license (https://opensource.org/licenses) or be obtained through the procurement process, even if there is no cost associated with it.

Licensed software (any software that requires a license or agreement to be accepted) must follow the procurement process to protect users, their research, and the University. Software must be cleared via the ITSRQ. For more information about this process please reach out to us!

For widely used software, RSS can facilitate the sharing of license fees and/or may support the cost, depending on the cost and situation. Otherwise, users are responsible for funding fee-based licensed software, and RSS can handle the procurement process. We require that, if the license does not preclude it and there are no node or other resource limits, the software be made available to all users on the cluster. All licensed software installed on the cluster is to be used following the license agreement. We will do our best to install and support a wide range of scientific software as resources and circumstances dictate, but in general we only support scientific software that will run on RHEL in an HPC cluster environment. RSS may not support software that is implicitly or explicitly deprecated by the community.

Containers, Singularity/Apptainer/Docker

A majority of scientific software and software libraries can be installed in users’ accounts or in group space. We also provide limited support for Singularity for advanced users who require more control over their computing environment. We cannot knowingly assist users to install software that may put them, the University, or their intellectual property at risk.

Storage

None of the cluster-attached storage available to users is backed up in any way by us. This means that if you delete something and do not have a copy somewhere else, it is gone. Please note that data stored on cluster-attached storage is limited to Data Class 1 and 2 as defined by the UM System Data Classifications. If you need to store DCL3 or DCL4 data, please contact us so we can find a solution for you.

Storage Type    Location                         Quota        Description
Home            /home/$USER                      5 GB         Available to all users
Pixstor         /home/$USER/data                 500 GB       Available to all users
Local Scratch   /local/scratch                   1.6-3.2 TB   Available to all users
Pixstor         /cluster/pixstor, /mnt/pixstor   Varies       For investment, cluster attached
Vast            /cluster/VAST                    Varies       For investment, cluster/instrument attached
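
A rough way to check how close you are to the quotas in the table is with the standard du tool (the paths match the table above; these figures are approximate, and quota enforcement is handled by the filesystems themselves):

du -sh /home/$USER         # usage against the 5 GB home quota
du -sh /home/$USER/data    # usage against the 500 GB Pixstor data quota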

Research Network

Research Network DNS: The domain name for the Research Network (RNet) is rnet.missouri.edu and is for research purposes only. All hosts on RNet will have a .rnet.missouri.edu domain. Subdomains and CNAMEs are not permitted. Reverse records will always point to a host in the .rnet.missouri.edu domain.

Partitions

Partition             Default Time Limit   Maximum Time Limit   Description
general               1 hour               2 days               For non-investors to run multi-node, multi-day jobs.
requeue               10 minutes           2 days               For non-investor jobs that have been requeued because they landed on an investor-owned node.
gpu                   1 hour               2 days               Acceptable use includes jobs that utilize a GPU for the majority of the run. Composed of NVIDIA A100 cards, 4 per node.
interactive           1 hour               2 days               For short interactive testing, interactive debugging, and general interactive jobs. Use this for light testing instead of the login node.
logical_cpu           1 hour               2 days               For workloads that can make use of hyperthreaded hardware.
priority partitions   1 hour               28 days              For investors.
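
These limits can be confirmed on the live system with the standard Slurm query commands (the exact fields displayed depend on the site configuration):

sinfo -p general,requeue,gpu,interactive,logical_cpu    # partitions, time limits, and node states
scontrol show partition general                          # full settings, including DefaultTime and MaxTime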

Citation

We ask that when you cite any of the RSS clusters in a publication, you send an email to muitrss@missouri.edu and share a copy of the publication with us. To cite the use of any of the RSS clusters in a publication, please use: “The computation for this work was performed on the high performance computing infrastructure operated by Research Support Solutions in the Division of IT at the University of Missouri, Columbia MO.” DOI: https://doi.org/10.32469/10355/97710

Quick Start

Open OnDemand

OnDemand provides an integrated, single access point for all of your HPC resources. The following apps are currently available on Hellbender's Open OnDemand.

  • Jupyter Notebook
  • RStudio Server
  • Virtual Desktop
  • VSCode

Teaching Cluster

Hellbender can be used by instructors, TAs, and students for instructional work via the Hellbender Classes Open OnDemand (OOD) portal.

Below is the process for setting up a class on the OOD portal.

  1. Send the class name, the list of students and TAs, and any shared storage requirements to itrss-support@umsystem.edu.
  2. We will add the students to the group allowing them access to OOD.
  3. If the student does not have a Hellbender account yet, they will be presented with a link to a form to fill out requesting a Hellbender account.
  4. We activate the student account and the student will receive an Account Request Complete email.

If desired, the professor can perform step 2 themselves. You may already be able to modify your class groups here: https://netgroups.apps.mst.edu/auth-cgi-bin/cgiwrap/netgroups/netmngt.pl

If the class size is large, we can perform steps 3 and 4.

Connecting

You can request an account on the Hellbender by filling out the form found at https://request.itrss.umsystem.edu

Once you have been notified by the RSS team that your account has been created on Hellbender, open a terminal and type ssh [SSO]@hellbender-login.rnet.missouri.edu. Using your UM System password, you will be able to log in directly to Hellbender if you are on campus or on the VPN.

Once connected, you will land on the login node and are ready to submit jobs and work on the cluster.

SSH

If you will not primarily be connecting to Hellbender from on campus and do not want to use the VPN, another option is public/private key authentication. You can add your SSH key pairs to any number of computers, and those computers will be able to access Hellbender from outside the campus network.

Generating an SSH Key on Windows

  1. To generate an SSH key on a Windows computer, you will first need to download a terminal program; we suggest MobaXterm (https://mobaxterm.mobatek.net/).
  2. Once you have MobaXterm installed, start a new session by selecting “Start Local Terminal”.
  3. Type ssh-keygen and, when prompted, press Enter to save the key in the default location /home/<username>/.ssh/id_rsa, then enter a strong passphrase (required) twice.
  4. After you generate your key, you will need to send us the public key. To see your public key, type cat ~/.ssh/id_rsa.pub. The output will be a string of characters and numbers. Copy this information and send it to RSS, and we will add the key to your account.

Generating an SSH Key on MacOS/Linux

  1. Open your terminal application of choice.
  2. Type ssh-keygen and, when prompted, press Enter to save the key in the default location /home/<username>/.ssh/id_rsa, then enter a strong passphrase (required) twice.
  3. After you generate your key, you will need to send us the public key. To see your public key, type cat ~/.ssh/id_rsa.pub. The output will be a string of characters and numbers. Copy this information and send it to RSS, and we will add the key to your account.
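
Once your key has been added, you can optionally create a host entry in ~/.ssh/config so a short alias works from any terminal. The alias hellbender below is just an example; the hostname is the one given in the Connecting section.

# ~/.ssh/config
Host hellbender
    HostName hellbender-login.rnet.missouri.edu
    User your-sso-id              # replace with your SSO
    IdentityFile ~/.ssh/id_rsa

After saving the file, ssh hellbender will connect using your key.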

Job Submission

By default, jobs submitted without a partition will land in the requeue partition. If your job lands on a node owned by an investor, that job is subject to being stopped and requeued at any point if the investor needs to run on the same node at the same time.
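
If your workload cannot tolerate being restarted from the beginning, one simple approach (a sketch, not a site requirement) is to target a non-requeue partition explicitly, or to mark the job as non-requeueable so that, depending on the cluster's preemption settings, it is cancelled rather than silently restarted:

#SBATCH --partition=general    # run on the general partition instead of requeue
#SBATCH --no-requeue           # do not automatically restart this job if it is preempted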

Slurm Overview

Slurm is used for cluster management and job scheduling, and all RSS clusters use it. This document gives an overview of how to run jobs, check job status, and make changes to submitted jobs. To learn more about specific flags or commands, please visit Slurm's website: https://slurm.schedmd.com/

All jobs must be run using srun or sbatch to prevent them from running on the Hellbender login node. Jobs found running on the login node will be terminated immediately, and a notification email will be sent to the user.

Slurm Commands and Options

Job submission

sbatch - Submit a batch script for execution in the future (non-interactive)

srun - Obtain a job allocation and run an application interactively

Option                                 Description
-A, --account=<account>                Account to be charged for resources used
-a, --array=<index>                    Job array specification (sbatch only)
-b, --begin=<time>                     Initiate job after specified time
-C, --constraint=<features>            Required node features
--cpu-bind=<type>                      Bind tasks to specific CPUs (srun only)
-c, --cpus-per-task=<count>            Number of CPUs required per task
-d, --dependency=<state:jobid>         Defer job until specified jobs reach specified state
-m, --distribution=<method[:method]>   Specify distribution methods for remote processes
-e, --error=<filename>                 File in which to store job error messages (sbatch and srun only)
-x, --exclude=<name>                   Specify host names to exclude from job allocation
--exclusive                            Reserve all CPUs and GPUs on allocated nodes
--export=<name=value>                  Export specified environment variables (e.g., all, none)
--gpus-per-task=<list>                 Number of GPUs required per task
-J, --job-name=<name>                  Job name
-l, --label                            Prepend task ID to output (srun only)
--mail-type=<type>                     E-mail notification type (e.g., begin, end, fail, requeue, all)
--mail-user=<address>                  E-mail address
--mem=<size>[units]                    Memory required per allocated node (e.g., 16GB)
--mem-per-cpu=<size>[units]            Memory required per allocated CPU (e.g., 2GB)
-w, --nodelist=<hostnames>             Specify host names to include in job allocation
-N, --nodes=<count>                    Number of nodes required for the job
-n, --ntasks=<count>                   Number of tasks to be launched
--ntasks-per-node=<count>              Number of tasks to be launched per node
-o, --output=<filename>                File in which to store job output (sbatch and srun only)
-p, --partition=<names>                Partition in which to run the job
--signal=[B:]<num>[@time]              Signal job when approaching time limit
-t, --time=<time>                      Limit for job run time

Interactive Slurm Job

Interactive jobs typically run for only a few minutes. This is a basic example of an interactive job using srun with -n to request one CPU:

srun -n 1 hostname

An example of the output from this code would be:

[bjmfg8@hellbender-login ~]$ srun -n 1 hostname
srun: Warning, you are submitting a job the to the requeue partition. There is a chance that your job will be preempted by priority partition jobs and have to start over from the beginning.
g003.mgmt.hellbender
[bjmfg8@hellbender-login ~]$

As noted - submitting with no partition specified will result in the job landing in the requeue partition.

[bjmfg8@hellbender-login ~]$ srun -p general -n 1 hostname
c006.mgmt.hellbender
[bjmfg8@hellbender-login ~]$
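
For a longer interactive session, you can request a shell on a compute node instead of running a single command. A minimal sketch (the resource values are illustrative):

srun -p interactive -c 4 --mem 8G --time 01:00:00 --pty /bin/bash

When the prompt changes from hellbender-login to a compute node name, you are on the allocated node; type exit to release the allocation.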

Batch Slurm Job

Batch jobs can run multiple tasks across multiple nodes and typically take a few hours to a few days to complete. Most of the time you will use an sbatch file to launch your jobs. This example shows how to put our Slurm options in the file saving_the_world.sh and then submit the job to the queue. To learn more about the partitions available for use on Hellbender and the specifics of each partition, please read our Partition Policy.

#!/bin/bash
 
#SBATCH -p general  # use the general partition
#SBATCH -J saving_the_world  # give the job a custom name
#SBATCH -o results-%j.out  # give the job output a custom name
#SBATCH -t 0-02:00  # two hour time limit
 
#SBATCH -N 2  # number of nodes
#SBATCH -n 2  # number of cores (AKA tasks)
 
# Commands here run only on the first core
echo "$(hostname), reporting for duty."
 
# Commands with srun will run on all cores in the allocation
srun echo "Let's save the world!"
srun hostname

Once the sbatch file is ready to go, start the job with:

sbatch saving_the_world.sh

Output is found in the file results-<job id here>.out. Example below:

[bjmfg8@hellbender-login ~]$ sbatch saving_the_world.sh
Submitted batch job 86439
[bjmfg8@hellbender-login ~]$ cat results-86439.out
c006.mgmt.hellbender, reporting for duty.
Let's save the world!
Let's save the world!
c006.mgmt.hellbender
c015.mgmt.hellbender
[bjmfg8@hellbender-login ~]$ 
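
While a batch job is waiting or running, you can monitor it with the standard Slurm status commands (the job ID 86439 below is simply the one from the example above):

squeue -u $USER                                        # show your queued and running jobs
sacct -j 86439 --format=JobID,State,Elapsed,MaxRSS     # accounting details once the job has run
scancel 86439                                          # cancel the job if needed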

Software

Anaconda

Anaconda is an open source package management system and environment management system. Conda quickly installs, runs and updates packages and their dependencies. Conda easily creates, saves, loads and switches between environments on your local computer. It was created for Python programs, but it can package and distribute software for any language.

Software URL: https://www.anaconda.org/

Documentation: https://conda.io/en/latest/

By default, Conda stores environments and packages within the folder ~/.conda.

To avoid using up all of your home folder's quota, which can easily happen when using Conda, we recommend placing the following within the file ~/.condarc. You can create the file if it is not already present. You can also choose a different path, so long as it is not in your home folder.

envs_dirs:
  - /mnt/pixstor/data/${USER}/miniconda/envs
pkgs_dirs:
  - /mnt/pixstor/data/${USER}/miniconda/pkgs
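
You can verify that Conda picked up the new locations before creating any environments; conda config --show accepts specific keys:

module load miniconda3
conda config --show envs_dirs pkgs_dirs    # should list the /mnt/pixstor paths above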

Usage

The version of Anaconda we have available on Hellbender is called “Miniconda”. Miniconda is a version of Anaconda that only provides the conda command.

First, you will want to make sure that you are running in a compute job.

srun -p interactive --mem 8G --pty bash

Then, you need to load the miniconda3 module:

module load miniconda3

After that command completes, you will have the conda command available to you. conda is what you will use to manage your Anaconda environments. To list the Anaconda environments that are installed, run the following:

conda env list

If this is your first time running Anaconda, you will probably only see the “root” environment. This environment is shared between all users of Hellbender and cannot be modified. To create an Anaconda environment that you can modify, do this:

conda create --name my_environment python=3.7

You can use any name you want instead of my_environment. You can also choose other Python versions or add any other packages. Ideally, you should create one environment per project and include all the required packages when you create the environment.

After running the above command, you should see something like this:

The following NEW packages will be INSTALLED:
 
  _libgcc_mutex      pkgs/main/linux-64::_libgcc_mutex-0.1-main
  _openmp_mutex      pkgs/main/linux-64::_openmp_mutex-5.1-1_gnu
  ca-certificates    pkgs/main/linux-64::ca-certificates-2023.08.22-h06a4308_0
  certifi            pkgs/main/linux-64::certifi-2022.12.7-py37h06a4308_0
  ld_impl_linux-64   pkgs/main/linux-64::ld_impl_linux-64-2.38-h1181459_1
  libffi             pkgs/main/linux-64::libffi-3.4.4-h6a678d5_0
  libgcc-ng          pkgs/main/linux-64::libgcc-ng-11.2.0-h1234567_1
  libgomp            pkgs/main/linux-64::libgomp-11.2.0-h1234567_1
  libstdcxx-ng       pkgs/main/linux-64::libstdcxx-ng-11.2.0-h1234567_1
  ncurses            pkgs/main/linux-64::ncurses-6.4-h6a678d5_0
  openssl            pkgs/main/linux-64::openssl-1.1.1w-h7f8727e_0
  pip                pkgs/main/linux-64::pip-22.3.1-py37h06a4308_0
  python             pkgs/main/linux-64::python-3.7.16-h7a1cb2a_0
  readline           pkgs/main/linux-64::readline-8.2-h5eee18b_0
  setuptools         pkgs/main/linux-64::setuptools-65.6.3-py37h06a4308_0
  sqlite             pkgs/main/linux-64::sqlite-3.41.2-h5eee18b_0
  tk                 pkgs/main/linux-64::tk-8.6.12-h1ccaba5_0
  wheel              pkgs/main/linux-64::wheel-0.38.4-py37h06a4308_0
  xz                 pkgs/main/linux-64::xz-5.4.2-h5eee18b_0
  zlib               pkgs/main/linux-64::zlib-1.2.13-h5eee18b_0
 
 
Proceed ([y]/n)?

Press y to continue. Your packages should be downloaded. After the packages are downloaded, the following will be printed:

#
# To activate this environment, use:
# > source activate my_environment
#
# To deactivate an active environment, use:
# > source deactivate
#

Make a note of that because those commands are how to get in and out of the environment you just created. To test it out, run:

[bjmfg8@c067 ~]$ source activate my_environment
(my_environment) [bjmfg8@c067 ~]$ python
Python 3.7.16 (default, Jan 17 2023, 22:20:44)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

You might notice that (my_environment) now appears before your prompt, and that the Python version is the one you specified above (in our example, version 3.7).

Press Ctrl-D to exit Python.

When the environment name appears before your prompt, you are able to install packages with conda. For instance, to install pandas:

(my_environment) [bjmfg8@c067 ~]$ conda install pandas

Now, pandas will be accessible from your environment:

(my_environment) [bjmfg8@c067 ~]$ python
Python 3.7.16 (default, Jan 17 2023, 22:20:44)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas
>>> pandas.__version__
'1.3.5'

Press Ctrl-D to exit Python. To see the list of installed packages in the environment, run:

conda list

To exit your environment:

(my_environment) [bjmfg8@c067 ~]$ conda deactivate
[bjmfg8@c067 ~]$

If you no longer need your environment, you can use the following to remove it (after deactivating it):

conda env remove --name my_environment

Conda Channels

Whenever we use conda create or conda install without mentioning a channel name, the Conda package manager searches its default channels to install the packages. If you are looking for specific packages that are not in the default channels, you have to specify the channels by using:

conda create --name env_name --channel channel1 --channel channel2 ... package1 package2 ...

For example, the following creates new_env and installs r-sf, shapely, and bioconductor-biobase from the r, conda-forge, and bioconda channels:

conda create --name new_env --channel r --channel conda-forge --channel bioconda r-sf shapely bioconductor-biobase
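
If you use the same channels often, you can instead record them once in your ~/.condarc rather than repeating --channel on every command. Note that conda config --add prepends, so the last channel added ends up with the highest priority:

conda config --add channels r
conda config --add channels bioconda
conda config --add channels conda-forge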

Conda Packages

To find the required packages, we can visit anaconda.org and search for packages to find their full names and the corresponding channels. Another option is the conda search command. Note that we need to search the right channel to find packages that are not in the default channels. For example:

conda search --channel bioconda biobase

CUDA

CUDA® is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are using GPU-accelerated computing for broad-ranging applications.

Software URL: https://developer.nvidia.com/cuda-zone

Documentation: http://docs.nvidia.com/cuda/index.html
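
A minimal sketch of a CUDA-based GPU batch job is below. The module name cuda and the program ./my_cuda_app are placeholders (check module avail cuda for the exact module name); the gpu partition and A100 GRES come from the partition and hardware tables above.

#!/bin/bash
#SBATCH -p gpu               # GPU partition
#SBATCH --gres=gpu:A100:1    # request one A100 GPU
#SBATCH -t 0-02:00           # two hour time limit

module load cuda             # placeholder module name
nvidia-smi                   # confirm the GPU is visible to the job
./my_cuda_app                # placeholder for your CUDA program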

Globus

Globus is a file transfer software ecosystem widely used in the science and research community. It offers the ability to transfer large amounts of data in parallel and over secure channels between various “endpoints.”

Getting Started with Globus

https://docs.globus.org/how-to/get-started/

  • Select “University of Missouri System”.
  • Log in using your University e-mail and password.
  • Follow the prompts.

Linking Identities (if you already have a Globus account)

Link Globus Identities: https://docs.globus.org/how-to/link-to-existing/

  • Please link an organizational identity to your existing Globus account.
  • Select “University of Missouri System” as the identity to link.
  • Log in using your university e-mail username and your University password.
  • Follow the prompts to complete the account linking.

Sharing Data Via Guest Collection

  • Create the desired folder structure on the target system
  • After linking your University of Missouri System identity, you can connect to the mapped collection via Globus.
  • Log in to the Globus application.
  • Select File Manager.
  • In the Collection field, enter your search target. Currently we have the Lewis cluster (“MU RCSS Lewis Home Directories”) as well as RDE (“U MO ITRSS RDE”).
  • Change the Path field to your target directory.
  • Follow prompts to Share the path and invite users.

Moving Data From Lewis to Hellbender Using Globus

Both Lewis and Hellbender have Globus endpoints, which makes it easy to transfer data between the two clusters directly from the Globus application.

To begin, log in to the Globus web client and follow the prompts to connect to your account. In the File Manager menu, search for the Lewis endpoint “MU RCSS Lewis Home Directories”.

This will land you in your home directory (the same place you land by default after logging into Lewis). From here you can select the file that you would like to transfer to Hellbender. In this case we will be moving the file “test.txt”.

Next, we need to find the Hellbender endpoint to transfer this file to. In the collection search bar on the right, search for the Hellbender/RDE endpoint “U MO ITRSS RDE”. If you are trying to transfer files from your university OneDrive, select “U MO ITRSS RDE - M365” instead. If you do not see the menu on the right, select the “Transfer or sync to” option.

After selecting “U MO ITRSS RDE”, you will land by default at the root directory of the RDE storage system. Use the Path field to navigate to the specific directory on Hellbender/RDE where you want to move the data. Note: this works the same for group storage as well as the personal /data directory. In this example, we are using the personal data directory of user bjmfg8.

Once you have the desired file selected on the Lewis side and your destination selected on the Hellbender/RDE side, you are ready to transfer the file. Select the “Start” button on the source (Lewis) side to begin.

Refresh the destination folder and you should see that the small test.txt file has been transferred successfully.

Visual Studio Code

Visual Studio Code, also commonly referred to as VS Code, is a source-code editor developed by Microsoft for Windows, Linux and macOS. Features include support for debugging, syntax highlighting, intelligent code completion, snippets, code refactoring, and embedded Git.

We require users who want to work with VS Code on Hellbender to use only our interactive application in Open OnDemand. Native connections to VS Code spawn resource-intensive processes on the login node, and your session will likely be terminated by a system administrator.

To open a VS Code session in Open OnDemand, navigate to https://ondemand.rnet.missouri.edu/ in a web browser.

From the landing page, select “Interactive Apps” and choose VS Code Server.

You will see a menu for choosing the resources for your job submission. VS Code should be fine with the defaults (these resources are only for running the VS Code editor itself, not for the actual jobs you want to run).

Your job will be submitted to the queue; after a few seconds you should see the option to launch your VS Code window.

You should land in your /data directory by default. You can now use VS Code as you wish.