====== The Foundry ======
==== EOL PLAN!! ====
**THE FOUNDRY WILL BE DECOMMISSIONED IN JUNE 2024**

The Foundry will no longer have compute resources as of June 1st, 2024.

The login nodes will be shut down on June 3rd.

Scratch storage will be shut down on June 4th.

The Globus node will shut down on June 30th, 2024.

You will be able to transfer data from your home directory with Globus through June 30th, 2024.
===== System Information =====
As of 22 Jan 2024 we are no longer creating new Foundry accounts. Please look into requesting an account on the new cluster, named the Mill, at https://
==== Software ====
The Foundry was built and managed with Puppet. The underlying OS for the Foundry is Ubuntu 18.04 LTS. With the Foundry we made the conversion from CentOS to Ubuntu, and made the jump from a 2.6.x kernel to a 5.3.0 kernel build. For resource management and scheduling we use the SLURM workload manager, version 17.11.2.
==Leased Space==
If home directory and scratch space availability aren't enough for your storage needs, we also lease out quantities of cluster-attached storage. If you are interested in leasing storage, please contact us. If you are already leasing storage and need a reference guide on how to manage it, please go [[ ~:storage | here]].
=== Off Campus Logins ===
Our off campus logins use public key authentication only; password authentication is disabled for off campus users unless they are connected to the campus VPN. To learn how to connect from off campus, please see our how-to on [[ ~:publickeysetup | setting up public key authentication ]]. After setting up your public key, you may still use the host foundry.mst.edu to connect without using the VPN.
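As a minimal sketch of the client-side half of that setup (the key type, file name, and username below are assumptions; follow the linked guide for the campus-specific steps), key generation on a Linux or macOS machine looks like:

<code bash>
# Create the key directory if it does not already exist, then generate a
# key pair. ed25519 is an assumption; use another type if your client
# or the cluster requires it.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/foundry" -N '' -q

# Then, while on campus or on the VPN, install the public key on the
# cluster, for example:
#   ssh-copy-id -i ~/.ssh/foundry.pub username@foundry.mst.edu
# and afterwards connect from off campus with:
#   ssh -i ~/.ssh/foundry username@foundry.mst.edu
</code>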
==== Submitting a job ====
Missouri S&T users can mount their web volumes and S Drives with the <

You can un-mount your user directories with the <
=== Windows ===
Now you may run Thermo-Calc:
<code>Thermo-Calc.sh</code>
+ | |||
+ | ====Vasp==== | ||
+ | |||
+ | To use our site installation of Vasp you must first prove that you have a license to use it by emailing your vasp license confirmation to < | ||
+ | |||
+ | Once you have been granted access to using vasp you may load the vasp module < | ||
+ | |||
+ | and create a vasp job file, in the directory that your input files are, that will look similar to the one below. | ||
+ | |||
<file bash vasp.sub>
#!/bin/bash

#SBATCH -J Vasp
#SBATCH -o Foundry-%j.out
#SBATCH --time=1:00:00
#SBATCH --ntasks=8

module load vasp
module load libfabric

srun vasp
</file>

This example will run the standard VASP compilation on 8 CPUs for 1 hour. \\

If you need the gamma-only version of VASP use <

If you need the non-collinear version of VASP use <

It might work to launch vasp with "

There are some globally available pseudopotentials; the module sets the environment variable $POTENDIR to the global directory.
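VASP reads its potentials from a single POTCAR file in the job directory, built by concatenating one entry per species in the same order as the POSCAR. A hedged sketch of assembling one from $POTENDIR (the element subdirectory names Fe and O are hypothetical; list $POTENDIR to see what is actually installed):

<code bash>
# $POTENDIR is set by `module load vasp`; fall back to a placeholder so
# this sketch is still safe to run where the module is not loaded.
: "${POTENDIR:=/unset}"

# Concatenate one POTCAR per species, in POSCAR order.
: > POTCAR
for el in Fe O; do
  if [ -f "$POTENDIR/$el/POTCAR" ]; then
    cat "$POTENDIR/$el/POTCAR" >> POTCAR
  fi
done
</code>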