You can request an account on the Mill by filling out the form found at https://
===== System Information =====
====DOI====
Please ensure you use The Mill's DOI in any publications in which The Mill's resources were utilized. The DOI is https://
==== Software ====
The Mill was built and is managed with Puppet. The underlying OS for the Mill is Alma 8.9. For resource management and scheduling we use the SLURM Workload Manager, version 22.05.2.
| Model | CPU Cores | System Memory | Node Count |
| Dell R6525 | 128 | 512 GB | 25 |
| Dell C6525 | 64 | 256 GB | 160 |
| Dell C6420 | 40 | 192 GB | 44 |
| Model | CPU cores | System Memory | GPU | GPU Memory | GPU Count | Node Count |
| Dell XE9680 | 112 | 1 TB | H100 SXM5 | 80 GB | 8 | 1 |
| Dell C4140 | 40 | 192 GB | V100 SXM2 | 32 GB | 4 | 6 |
| Dell R740xd | 40 | 384 GB | V100 PCIe | 32 GB | 2 | 1 |
==Leased Space==
If home directory and scratch space availability aren't enough for your storage needs, we also lease out quantities of cluster-attached space. If you are interested in leasing storage, please contact us for additional information.

Below is a cost model of our storage offerings:

Vast Storage Cluster:
| Total Size | 250 TB |
| Storage Technology | Flash |
| Primary Purpose | High Performance Computing Storage |
| Cost | $160/ |

Ceph Storage Cluster:
| Total Size | 800 TB |
| Storage Technology | Spinning Disk |
| Primary Purpose | HPC-attached Utility Storage |
| Cost | $100/ |

==== Priority Partition Leasing ====

For full information on our computing model, please visit [[ https:// | The Mill Computing Model ]], which explains what a priority partition is.

Below is a list of the hardware we have available for priority leases:

| | C6525 | R6525 | C4140 |
| CPU type | AMD 7502 | AMD 7713 | Intel 6248 |
| CPU count | 2 | 2 | 2 |
| Core count | 64 | 128 | 40 |
| Base Clock (GHz) | 2.5 | 2.0 | 2.5 |
| Boost Clock (GHz) | 3.35 | 3.675 | 3.2 |
| GPU | N/A | N/A | Nvidia V100 |
| GPU Count | 0 | 0 | 4 |
| GPU RAM (GB) | 0 | 0 | 32x4 |
| RAM (GB) | 256 | 512 | 192 |
| Local Scratch (TB) | 2.6 SSD | 1.6 NVMe | 1.6 NVMe |
| Network | HDR-100 | HDR-100 | HDR-100 |
| Internal Bandwidth | 100Gb/s | 100Gb/s | 100Gb/s |
| Latency | <600ns | <600ns | <600ns |
| Priority lease ($/year) | $3,368.30 | $4,379.80 | $7,346.06 |
| Current Quantity | 160 | 25 | 6 |
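
Once a priority lease is in place, jobs would typically reach the leased nodes through a dedicated SLURM partition. A minimal sketch of a batch script is below; the partition name is a placeholder, since the actual name is assigned when your lease is set up:
<code>
#!/bin/bash
#SBATCH --job-name=priority_test
#SBATCH --partition=lab-priority   # placeholder: use the partition name assigned to your lease
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# report which leased node the job landed on
hostname
</code>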

==== Researcher Funded Nodes ====
Researcher-funded hardware gains priority access for a minimum of 5 years. Hosting fees start at $1,200 per year and are hardware dependent. The fees are broken down as follows:

| Fee | Cost | Annual Unit of Measure |
| Networking Fee | $90 | Per Network Connection |
| Rack Space | $260 | Per Rack U |
| RSS Maintenance | $850 | Per Node |
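
For example, a single node occupying one rack U with one network connection works out to $90 + $260 + $850 = $1,200 per year, which is the minimum hosting fee quoted above.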
===== Quick Start =====
Now you should see an a.out executable in your current working directory; this is your compiled MPI code, which we will run when we submit it as a job.
==== Parallelizing your Code ====

The following link provides basic tutorials and examples for parallel code in Python, R, Julia, Matlab, and C/C++.

[[https://
==== Submitting an MPI job ====
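As a starting point, a batch script for the a.out compiled above might look like the following sketch; the MPI module name and the resource requests are assumptions, so adjust them to match your build and workload:
<code>
#!/bin/bash
#SBATCH --job-name=mpi_hello
#SBATCH --ntasks=4              # number of MPI ranks (example value)
#SBATCH --time=00:10:00

# load the same MPI stack that was used to compile a.out
# (module name is an assumption; check `module avail`)
module load openmpi

# launch one copy of a.out per task under SLURM
srun ./a.out
</code>
Submit it with `sbatch` and monitor it with `squeue -u $USER`.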
=====Priority Access=====
Information on priority access leases is coming soon.
===== Applications =====
====Anaconda====
If you would like to install software with conda, first load the module and initialize conda for your shell:
<code>
# miniconda and mamba are also available
module load anaconda
conda init
</code>
This will ask which shell you are using, and when it is done it will ask you to log out and back in again to load the conda environment. After you log back in, your command prompt will look different: it should now show (base) at the far left. This is the virtual environment you are currently in. Since you do not have permission to modify base, you will need to create and activate your own virtual environment to build your software inside of.
<code>
# to create in the default location (~/.conda/envs/)
conda create -n ENVNAME
conda activate ENVNAME

# to create in a custom location (only do this if you have a reason to)
conda create --prefix /path/to/ENVNAME
conda activate /path/to/ENVNAME
</code>
Now instead of (base) it should say (ENVNAME). These environments are stored in your home directory, so they are unique to you. If you are working together with a group, you may instead want to keep shared environments in group storage (see the section on moving envs below).
\\
Once you are inside your virtual environment you can run whatever conda installs you would like, and conda will install them and their dependencies inside this environment. If you would like to execute code that depends on the modules you install, you will need to be sure that you are inside your virtual environment: (ENVNAME) should be shown on your command prompt, and if it is not, activate it with `conda activate ENVNAME`.
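For example, a typical session inside an environment might look like this; numpy is just a stand-in for whatever package you actually need:
<code>
conda activate ENVNAME
conda install numpy
# the package should now import from inside the environment
python -c "import numpy; print(numpy.__version__)"
</code>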
=== - Moving Envs by Moving the Default Conda Install Directory ===
If you want to permanently change the conda install directory, you need to generate a .condarc file and tell conda where it should install your environments from now on. The paths you specify should point to folders.

**If you intend for all lab members to install envs in shared storage, each member will need to generate the .condarc file and set the paths in their own conda configuration.**
<code>
# Generate .condarc and register the new install locations
# (the paths below are examples; point them at your own folders)
conda config --add envs_dirs /path/to/envs
conda config --add pkgs_dirs /path/to/pkgs
</code>
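You can confirm that conda picked up the new directories by inspecting the active configuration:
<code>
conda config --show envs_dirs
conda config --show pkgs_dirs
</code>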