pub:hpc:foundry — last modified 2024/05/07 14:28 by blspcy (previous revision 2023/08/18 19:42 by lantzer)
====== The Foundry ======
==== EOL PLAN!! ====

**THE FOUNDRY WILL BE DECOMMISSIONED IN JUNE 2024**

The Foundry will no longer have compute resources as of June 1st, 2024.

The login nodes will be shut down on June 3rd.

Scratch storage will be shut down on June 4th.

The Globus node will be shut down on June 30th, 2024.

You will be able to transfer data from your home directory with Globus through June 30th, 2024.
===== System Information =====
As of 22 Jan 2024, we are no longer creating new Foundry accounts. Please look into requesting an account on the new cluster, named the Mill, at https://docs.itrss.umsystem.edu/pub/hpc/mill
==== Software ====
The Foundry was built and is managed with Puppet. The underlying OS for the Foundry is Ubuntu 18.04 LTS. With the Foundry we made the conversion from CentOS to Ubuntu, and made the jump from a 2.6.x kernel to a 5.3.0 kernel build. For resource management and scheduling we are using Slurm Workload Manager version 17.11.2.
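Slurm's standard client commands can be used to inspect the cluster before submitting work. A minimal sketch, assuming a standard Slurm installation (partition names and the job ID shown are hypothetical):

<code bash>
# List the available partitions and the state of their nodes
sinfo

# Show your own pending and running jobs
squeue -u $USER

# Show detailed information about a single job (job ID is hypothetical)
scontrol show job 12345
</code>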
Now you may run thermo-calc.
    Thermo-Calc.sh

==== VASP ====

To use our site installation of VASP, you must first prove that you have a license to use it by emailing your VASP license confirmation to <nic-cluster-admins@mst.edu>.

Once you have been granted access to VASP, you may load the vasp module with <code>module load vasp</code> (you may need to select the version that you are licensed for).

Then create a VASP job file in the directory containing your input files; it will look similar to the one below.
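If more than one version is installed, the module system can list them for you. A sketch, assuming an Lmod/Environment Modules setup (the version string below is hypothetical):

<code bash>
# List every vasp module version available on the cluster
module avail vasp

# Load a specific version instead of the default (version string is hypothetical)
module load vasp/5.4.4
</code>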

<file bash vasp.sub>
#!/bin/bash

#SBATCH -J Vasp
#SBATCH -o Foundry-%j.out
#SBATCH --time=1:00:00
#SBATCH --ntasks=8

module load vasp
module load libfabric

srun vasp
</file>
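Once the job file is saved, it can be submitted and monitored with the usual Slurm commands:

<code bash>
# Submit the job file to the scheduler
sbatch vasp.sub

# Watch the job in the queue
squeue -u $USER

# Output will appear in Foundry-<jobid>.out in the submission directory
</code>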

This example will run the standard VASP compilation on 8 CPUs for 1 hour. \\

If you need the gamma-point-only version of VASP, use <code> srun vasp_gam </code> in your submission file. \\

If you need the non-collinear version of VASP, use <code> srun vasp_ncl </code> in your submission file. \\

It might work to launch VASP with "mpirun vasp", but "srun vasp" automatically configures the MPI job parameters from the Slurm job settings and should run more cleanly than mpirun. \\

There are some globally available pseudopotentials; the vasp module sets the environment variable $POTENDIR to the global pseudopotential directory.
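The variable can be used directly in shell commands. A sketch only — the subdirectory layout under $POTENDIR is an assumption, so check it with ls first:

<code bash>
# See which pseudopotential sets are provided globally
ls $POTENDIR

# Concatenate per-element POTCAR files into the job's POTCAR
# (the element subdirectory names here are hypothetical)
cat $POTENDIR/Fe/POTCAR $POTENDIR/O/POTCAR > POTCAR
</code>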