You can request an account on the Mill by filling out the form found at https://missouri.qualtrics.com/jfe/form/SV_e5n4I8cvzo77jr8
===== System Information =====
====DOI====
Please ensure you use The Mill's DOI on any publications in which The Mill's resources were utilized. The DOI is https://doi.org/10.71674/PH64-N397
==== Software ====
The Mill was built and is managed with Puppet. The underlying OS for the Mill is Alma 8.9. For resource management and scheduling we are using the Slurm Workload Manager, version 22.05.2.
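If you want to double-check these versions from a login node, the standard commands below should work. This is only a convenience sketch and assumes the Slurm client tools are already on your PATH.

<code>
# show the operating system release the node is running
cat /etc/os-release

# show the Slurm version reported by the client tools
sinfo --version
</code>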
| Model | CPU Cores | System Memory | Node Count |
| Dell R6525 | 128 | 512 GB | 25 |
| Dell C6525 | 64 | 256 GB | 160 |
| Dell C6420 | 40 | 192 GB | 44 |
  
  
| Model | CPU Cores | System Memory | GPU | GPU Memory | GPU Count | Node Count |
| Dell XE9680 | 112 | 1 TB | H100 SXM5 | 80 GB | 8 | 1 |
| Dell C4140 | 40 | 192 GB | V100 SXM2 | 32 GB | 4 | 6 |
| Dell R740xd | 40 | 384 GB | V100 PCIe | 32 GB | 2 | 1 |
  
==Leased Space==
  
If home directory and scratch space availability aren't enough for your storage needs, we also lease out quantities of cluster-attached storage. If you are interested in leasing storage, please contact us. Additional information on the STRIDE storage allocations can be found in the [[https://mailmissouri.sharepoint.com/:b:/r/sites/MUandSTITRSS-Ogrp/Shared%20Documents/Data%20Management/STRIDE/STRIDE%20Allocation%20Model.pdf?csf=1&web=1&e=ORWvZV|STRIDE storage model]].

Below is a cost model of our storage offerings:

Vast Storage Cluster:
| Total Size | 250 TB |
| Storage Technology | Flash |
| Primary Purpose | High Performance Computing Storage |
| Cost | $160/TB/Year |

Ceph Storage Cluster:
| Total Size | 800 TB |
| Storage Technology | Spinning Disk |
| Primary Purpose | HPC-attached Utility Storage |
| Cost | $100/TB/Year |
  
  
  
  

==== Priority Partition Leasing ====

For the full information on our computing model please visit [[ https://mailmissouri.sharepoint.com/:b:/s/MUandSTITRSS-Ogrp/EcDtEFkTU6xPr2hh9ES4hCcBfDOiGto7OZqbonsU9m6qdQ?e=owPLpd&xsdata=MDV8MDJ8fGE4ZjUwZGQzMDU0MTRlNDAzNzAxMDhkY2MyMjgyYmMyfGUzZmVmZGJlZjdlOTQwMWJhNTFhMzU1ZTAxYjA1YTg5fDB8MHw2Mzg1OTg3MjQ5NjgzOTY1NDV8VW5rbm93bnxWR1ZoYlhOVFpXTjFjbWwwZVZObGNuWnBZMlY4ZXlKV0lqb2lNQzR3TGpBd01EQWlMQ0pRSWpvaVYybHVNeklpTENKQlRpSTZJazkwYUdWeUlpd2lWMVFpT2pFeGZRPT18MXxMM1JsWVcxekx6RTVPbUUwT0dWbVkyVXlOREF6WmpRM1lUazRNbUV6WkdKaE56ZzNNakV4WkRGalFIUm9jbVZoWkM1MFlXTjJNaTlqYUdGdWJtVnNjeTh4T1RwaVl6SmlOak14T1RZMllXVTBZell3WWpCbU5qZzJObUUzTjJZeU1tVTRORUIwYUhKbFlXUXVkR0ZqZGpJdmJXVnpjMkZuWlhNdk1UY3lOREkzTlRZNU5qRXdOQT09fDFhYzdjZjQ4Mzg4YTQwODQzNzAxMDhkY2MyMjgyYmMyfDdkNTA4MmU3OGJmOTQ5YmZiZGI1ZGFhMjMyZWMzMmQx&sdata=cUJJZ2hxMjVZc1VNeVowajEyV29sNG5ZcDJVcGtSNHdIODZLY1EwZm1QRT0%3D | The Mill Computing Model ]], which provides more information about what a priority partition is.

Below is a list of the hardware we have available for priority leases:

| | C6525 | R6525 | C4140 |
| CPU type | AMD 7502 | AMD 7713 | Intel 6248 |
| CPU count | 2 | 2 | 2 |
| Core count | 64 | 128 | 40 |
| Base Clock (GHz) | 2.5 | 2.0 | 2.5 |
| Boost Clock (GHz) | 3.35 | 3.675 | 3.2 |
| GPU | N/A | N/A | Nvidia V100 |
| GPU Count | 0 | 0 | 4 |
| GPU RAM (GB) | 0 | 0 | 32x4 |
| RAM (GB) | 256 | 512 | 192 |
| Local Scratch (TB) | 2.6 SSD | 1.6 NVMe | 1.6 NVMe |
| Network | HDR-100 | HDR-100 | HDR-100 |
| Internal Bandwidth | 100Gb/s | 100Gb/s | 100Gb/s |
| Latency | <600ns | <600ns | <600ns |
| Priority lease ($/year) | $3,368.30 | $4,379.80 | $7,346.06 |
| Current Quantity | 160 | 25 | 6 |
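If you already hold a priority lease, the partition name will be provided when the lease is set up. The sketch below only illustrates how you could list the partitions your account can submit to and target one in a job; the partition name ''priority-example'' and the script ''myjob.sub'' are placeholders, not actual names on the Mill.

<code>
# list the partitions visible to your account, with availability, time limit, and node count
sinfo -o "%P %a %l %D"

# submit a batch script to a (hypothetical) priority partition
sbatch --partition=priority-example myjob.sub
</code>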


==== Researcher Funded Nodes ====
Researcher funded hardware will gain priority access for a minimum of 5 years. Hosting fees will start at $1,200 per year and will be hardware dependent. The fees are broken down as follows:

| Fee | Cost | Annual Unit of Measure |
| Networking Fee | $90 | Per Network Connection |
| Rack Space | $260 | Per Rack U |
| RSS Maintenance | $850 | Per Node |
===== Quick Start =====
  
Now you should see an a.out executable in your current working directory; this is your MPI-compiled code that we will run when we submit it as a job.
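If you want a quick sanity check before writing the submission script, the following optional commands confirm the binary was produced and is executable:

<code>
# verify the compiled MPI binary exists and is executable
ls -l a.out
file a.out
</code>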
  
==== Parallelizing your Code ====

The following link provides basic tutorials and examples for parallel code in Python, R, Julia, Matlab, and C/C++:

[[https://berkeley-scf.github.io/tutorial-parallelization/]]
  
==== Submitting an MPI job ====
<code> sbatch array_test.sub </code>
  
=====Priority Access=====
Information on priority access leases is coming soon.
  
===== Applications =====
  
====Anaconda====
If you would like to install packages via conda, you may load the module for the version you prefer (anaconda, miniconda, mamba) to get access to the conda commands. After loading the module you will need to initialize conda to work with your shell.
<code>
# miniconda and mamba are also available
module load anaconda
conda init
</code>
This will ask you what shell you are using, and after it is done it will ask you to log out and back in again to load the conda environment. After you log back in your command prompt will look different than it did before. It should now have (base) on the far left of your prompt. This is the virtual environment you are currently in. Since you do not have permissions to modify base, you will need to create and activate your own virtual environment to build your software inside of.
<code>
# to create an environment in the default location (~/.conda/envs)
conda create -n ENVNAME
conda activate ENVNAME

# to create an environment in a custom location (only do this if you have a reason to)
# use -p (prefix) instead of -n; conda will not accept both at once
conda create -p /path/to/location
conda activate /path/to/location
</code>
Now instead of (base) it should say (ENVNAME). These environments are stored in your home directory, so they are unique to you. If you are working with a group, see the sections below about rebuilding or moving an environment; or, if you have shared storage, read the sections about creating single environments in a different folder and moving the default conda install directory, then choose the solution that is best for your team.
\\
Once you are inside your virtual environment you can run whatever conda installs you would like, and conda will place the packages and their dependencies inside this environment. If you would like to execute code that depends on the packages you install, you need to be sure that you are inside your virtual environment: (ENVNAME) should be shown on your command prompt, and if it is not, activate it with `conda activate ENVNAME`.
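For example, a typical install inside an activated environment might look like the following; ''numpy'' is only an illustrative package name, and ENVNAME stands for whatever you named your environment.

<code>
# make sure the environment is active first
conda activate ENVNAME

# install a package (and its dependencies) into the active environment
conda install numpy
</code>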
=== - Moving Env's by Moving the Default Conda Install Directory ===
If you want to permanently change the conda install directory, you need to generate a .condarc file and tell conda where it needs to install your environments from now on. The paths you specify should point to folders.

**If you intend for all lab members to install env's in shared storage, each member will need to generate the .condarc file and set the paths for their own conda configuration.**
<code>
# Generate .condarc