pub:hpc:mill [2024/05/15 14:28] (current) jonesjosh
| Model | CPU Cores | System Memory | Node Count |
| Dell R6525 | 128 | 512 GB | 25 |
| Dell C6525 | 64 | 256 GB | 128 |
| Dell C6420 | 40 | 192 GB | |
  
  
| Dell XE9680 | 112 | 1 TB | H100 SXM5 | 80 GB | 8 | 1 |
| Dell C4140 | 40 | 192 GB | V100 SXM2 | 32 GB | 4 | 4 |
| Dell R740xd | 40 | 384 GB | V100 PCIe | 32 GB | 2 | 1 |
  
A specially formatted sinfo command can be run on the Mill to report live information about the nodes and the hardware/features they have.
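For instance, sinfo's output-format option can print one line per node with its hostname, CPU count, memory, GRES (GPUs), and feature tags. This is an illustrative invocation, not necessarily the exact command referenced here:
<code>
# one line per node: hostname, CPUs, memory (MB), GRES, feature tags
sinfo -N -o "%20n %8c %10m %15G %f"
</code>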
However, if you are attempting to use a GUI, ensure that you __do not run your session on the login node__ (Example: username@mill-login-p1). Use an interactive session to be directed to a compute node to run your software.
  
<code> salloc --time=1:00:00 --x11</code>
  
=== Putty (Windows)===
<code> sbatch array_test.sub </code>
  
=====Priority Access=====
Information on priority access leases is coming soon.
  
===== Applications =====
\\
Once inside an interactive job you need to load the Abaqus module.
    module load abaqus/2023
Now you may run abaqus.
    ABQLauncher cae -mesa
  
====Anaconda====
If you would like to install packages via conda, you may load the module for the version you prefer (anaconda, miniconda, or mamba) to get access to the conda commands. After loading the module you will need to initialize conda to work with your shell.
<code>
# miniconda and mamba are also available
module load anaconda

conda init
</code>
This will ask you what shell you are using, and after it is done it will ask you to log out and back in again to load the conda environment. After you log back in, your command prompt will look different than it did before: it should now have (base) on the far left. This is the virtual environment you are currently in. Since you do not have permissions to modify base, you will need to create and activate your own virtual environment to build your software inside of.
<code>
# to create in default location (~/.conda/envs)
conda create -n ENVNAME
conda activate ENVNAME

# to create in custom location (only do this if you have a reason to)
conda create -p /path/to/location
conda activate /path/to/location
</code>
Now instead of (base) it should say (ENVNAME). These environments are stored in your home directory, so they are unique to you. If you are working with a group, see the sections below about rebuilding or moving an environment; if you have shared storage, read the sections about creating a single environment in a different folder and about moving the default conda install directory, then choose the solution that is best for your team.
\\
Once you are inside your virtual environment you can run whatever conda installs you would like, and conda will place those packages and their dependencies inside this environment. If you would like to execute code that depends on the packages you install, be sure that you are inside your virtual environment: (ENVNAME) should be shown on your command prompt, and if it is not, activate it with `conda activate ENVNAME`.
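As a quick illustration (numpy here is just an example package):
<code>
conda activate ENVNAME

# installs numpy and its dependencies into ENVNAME only
conda install numpy

# code run while ENVNAME is active sees the env's packages
python -c "import numpy; print(numpy.__version__)"
</code>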
=== - Rebuilding or Moving Conda Environments ===
The recommended method for moving an environment to a new location is to save its configuration and rebuild it in the new location, so the instructions for both processes are the same.
  
In order to create a configuration file for a Conda environment, run the following commands:
<code>
# Activate the env you wish to export
conda activate ENVNAME

# Export the configuration
conda env export > ENVNAME.yml
</code>
If you are simply moving to a new system, copy the file over and rebuild the env.
<code>
conda env create -f ENVNAME.yml
</code>
If you need to specify a location for just this one env, you can specify a prefix.
<code>
conda env create -f ENVNAME.yml -p /path/to/install/location/
</code>
=== - Moving Envs by Moving the Default Conda Install Directory ===
If you want to permanently change the conda install directory, you need to generate a .condarc file that tells conda where it should install your environments from now on. The paths you specify should point to folders.
  
-While a very powerful tool, it should NOT be used to connect to the login node of the cluster. If you need to use VScode there are two ways to do so. The first is through the regular slurm scheduler. First you will want to connect to the Mill with X forwarding enabled. +**If you are intending for all lab members to install env's in shared storage, each member will need to generate the .condarc file and set the paths for their own Conda configuration** 
<code>
# These commands create ~/.condarc if it does not already exist

# Add new package install directory
conda config --add pkgs_dirs /full/path/to/pkgs/

# Add new env install directory
conda config --add envs_dirs /full/path/to/envs/
</code>
You can check your configuration by making sure these new paths are listed first in their respective fields when you run:
<code>
conda info
</code>
You may also need to temporarily rename your original conda folder so that your new environments can have the same names as the old ones.
<code>
## Remember to delete conda-old once you have verified
## that the new envs work properly, otherwise you are
## not saving space!

mv ~/.conda ~/conda-old
</code>
Now you can reinstall the environments as usual, but they will be stored in the new location.
<code>
conda env create -f ENVNAME.yml
</code>
Because we updated the paths earlier, you can still activate your environments like normal even though the location has changed.
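To verify, you can list your environments; ones rebuilt after the .condarc change should show paths under your new envs_dirs location (this assumes you completed the steps above):
<code>
# envs rebuilt after the change appear under the new envs_dirs path
conda env list

# activation by name still works
conda activate ENVNAME
</code>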
=== - Using Conda Envs With Jupyter in OpenOnDemand ===

There is a simple way to import your conda environment to use as a kernel in Jupyter in OpenOnDemand.

First you must connect to the cluster and activate the env you wish to import.
<code>
ssh USER@mill.mst.edu

source activate ENVNAME
</code>

Next you must install one additional package, ipykernel, into the env.
<code>
conda install -c anaconda ipykernel
</code>

Finally you must install the env as a kernel.
<code>
python -m ipykernel install --user --name=ENVNAME
</code>

Now when you open Jupyter in OpenOnDemand you should see your env in the kernel selection dropdown menu.
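If you later remove or rename an environment, the registered kernels can be managed with jupyter's kernelspec commands (run these from an env that has jupyter installed; ENVNAME is the kernel name you used above):
<code>
# list kernels registered for your user
jupyter kernelspec list

# remove a kernel you no longer need
jupyter kernelspec uninstall ENVNAME
</code>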
==== VS Code ====

While VS Code is a very powerful tool, it should NOT be used to connect to the login node of the cluster. If you need to use VS Code there are several ways to do so.

=== - X Forwarding: ===

The first way to use VS Code is by using X forwarding through the regular slurm scheduler. First you will want to connect to the Mill with X forwarding enabled.
<code>
ssh -X mill.mst.edu
</code>
<code>
salloc --x11 --time=1:00:00 --ntasks=2 --mem=2G --nodes=1
</code>
Once you get a job, you will want to load the VS Code module with:
<code>
module load vscode/1.88.1
</code>
To launch VS Code in this job you simply run the command:
<code>
code
</code>

=== - OpenOnDemand: ===

The second way is through a web browser with our OnDemand system.
If you go to mill-ondemand.mst.edu and sign in with your university account, you will have access to VS Code via the Interactive Apps tab. Choose 'Interactive Apps' -> 'Vscode' and you will come to a new page where you can choose the account, partition, job length, number of CPUs, memory amount, and more for your job. Once you have filled out the resources you would like, click Launch. If there are enough resources free, your job will start immediately, VS Code will launch on the cluster, and you will be able to click 'Connect to VS Code'. A new tab will open with VS Code running on a compute node.

=== - VSCode Remote Tunnels: ===

**This solution requires a GitHub account.**

If you are not satisfied with the above solutions, the Remote Tunnels extension may be what you are looking for.

Step 1: Install the Remote Tunnels extension on the computer you wish to use to connect to the cluster.
[[https://marketplace.visualstudio.com/items?itemName=ms-vscode.remote-server|Link to Extension]]

Step 2: If you have not already, sign in to VSCode with your GitHub account. If you don't have one, you will need to create one to use this extension.

Step 3: Connect to the login node normally using SSH and request an interactive session.
<code>
ssh USER@mill.mst.edu

salloc -n 4 --mem 8G --time 60
</code>
Remember to change the resource values to meet your needs, especially time.

Step 4: Load the VSCode module and create the tunnel by running the following commands.
<code>
module load vscode

code tunnel
</code>
This will create the tunnel. You should see an 8-digit code and a link to GitHub. Follow the link, sign in if necessary, and input the 8-digit code. You should then see a browser prompt to open VSCode. Click to accept this, and VSCode will automatically connect to the tunnel, which is running in your interactive session.

If your local VSCode does not automatically connect to the tunnel, you can tell it to connect manually. To do this, click the green box at the bottom left with the icon that resembles ><. Then, from the menu that appears, select "Connect to Tunnel..." and select the option with the name of the compute node where your interactive session is running. This should resemble "compute-11-22".

//- Do not close the window with your interactive session. Closing it will close your tunnel. -//