Armis2 Configuration

Hardware

Standard nodes
- Number of nodes: 36
- Processors: 2x 2.5 GHz Intel Haswell (Xeon E5-2680v3)
- Cores per node: 24
- RAM: 128 GB (122.8 GB requestable)
- GPU: N/A

Large Memory nodes
- Number of nodes: 2
- Processors: 2x 3.0 GHz Intel Skylake (Xeon Gold 6154)
- Cores per node: 36
- RAM: 1.5 TB (1,542 GB requestable)
- GPU: N/A

GPU (K40m) nodes
- Number of nodes: 3
- Processors: 2x 2.2 GHz Intel Broadwell (Xeon E5-2630v4)
- Cores per node: 20
- RAM: 64 GB (58.3 GB requestable)
- GPU: 4x Nvidia Tesla K40m

GPU (V100) nodes
- Number of nodes: 5
- Processors: 2x 2.5 GHz Intel Cascade Lake (Xeon Gold 6248)
- Cores per node: 40
- RAM: 191 GB (184.3 GB requestable)
- GPU: 3x Nvidia Tesla V100
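The "requestable" RAM figures are the per-node amounts a job can actually ask for. As a minimal sketch of requesting a full Large Memory node (the partition name largemem is an assumption, not a value confirmed by this page):

    #!/bin/bash
    #SBATCH --job-name=bigmem-example
    #SBATCH --partition=largemem   # assumed partition name for the Large Memory nodes
    #SBATCH --nodes=1
    #SBATCH --mem=1542G            # stay within the requestable RAM, not the installed 1.5 TB
    #SBATCH --time=01:00:00

    hostname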

Networking

The compute nodes are all interconnected with InfiniBand networking. The InfiniBand fabric is based on the Mellanox enhanced data rate (EDR) platform in the Voltaire Grid Director 4700, which provides 100 Gbps of bandwidth and sub-5 μs latency per host. Five Grid Director 4700 switches are connected to each other with 240 Gbps of bandwidth each.

In addition to the InfiniBand fabric, a gigabit Ethernet network connects all of the nodes; it is used for node management and NFS file system access.

To discuss high-speed connections to the Armis2 cluster, please contact hpc-support@umich.edu.

Storage

The high-speed home and scratch file systems are provided by ARC-TS Turbo Research Storage. Turbo is a high-capacity, fast, reliable, and secure data storage service that allows investigators across U-M to connect their data to the computing resources necessary for their research, including our Armis2 HPC cluster. Turbo supports storage of sensitive data.

Operation

Computing jobs on Armis2 are managed entirely through the Slurm workload manager. See the Armis2 User Guide for directions on how to submit and manage jobs. For advanced information on how to use Slurm on Armis2, see the Slurm User Guide for Armis2.
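As a minimal sketch of a Slurm batch script (the account and partition names here are placeholders, not values confirmed by this page):

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --account=example_account   # replace with your Slurm account
    #SBATCH --partition=standard        # assumed partition name
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --mem=4G
    #SBATCH --time=00:10:00

    # The commands to run go here
    hostname

Submit the script (the file name is illustrative) and check on it with the standard Slurm tools:

    sbatch example.sbat
    squeue -u $USER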

Software

There are three layers of software on Armis2.

Operating Software

The Armis2 cluster runs CentOS Linux 7. We update the operating system on Armis2 as CentOS releases new versions and as our library of third-party applications adds support for them. Because we must support several types of drivers (AFS file system drivers, InfiniBand network drivers, and NVIDIA GPU drivers) and dozens of third-party applications, we are cautious in upgrading and can lag CentOS's releases by months.
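To confirm which release a given login or compute node is currently running:

    # Print the installed CentOS release
    cat /etc/centos-release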

Compilers and Parallel and Scientific Libraries

Armis2 supports the GNU Compiler Collection, the Intel compilers, and the PGI compilers for C and Fortran. The Armis2 cluster's parallel library is OpenMPI; the default versions are 1.10.7 (i686) and 3.1.2 (x86_64), with a limited selection of earlier versions also available. Armis2 provides the Intel Math Kernel Library (MKL) set of high-performance mathematical libraries. Other common scientific libraries are compiled from source, including HDF5, NetCDF, FFTW3, and Boost.
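As a minimal sketch of building an MPI program with these toolchains (the module names are assumptions; run module avail on Armis2 for the exact names):

    # Load a compiler and the OpenMPI library (module names are assumptions)
    module load gcc openmpi

    # Compile with the MPI wrapper around the loaded compiler
    mpicc -O2 hello_mpi.c -o hello_mpi

    # Inside a Slurm allocation, launch one rank per allocated task
    srun ./hello_mpi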

Software installed on Armis2 must be compatible with these compilers and libraries.

Application Software

Armis2 supports a wide range of application software. We license common engineering simulation software (e.g., Ansys, Abaqus, VASP), and we also have software for statistics, mathematics, debugging, profiling, and more. Please contact us if you wish to inquire about the current availability of a particular application.
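Application software is typically provided through environment modules. Assuming the usual Lmod commands are available (an assumption; this page does not name the module system), you can browse and load packages as follows; the package name here is purely illustrative:

    # List the software currently available
    module avail

    # Search across all versions of a package
    module spider abaqus

    # Load a package into your environment
    module load abaqus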

GPUs

Armis2 has 12 Nvidia Tesla K40m GPUs in total, four on each of three nodes, and 15 Nvidia Tesla V100 GPUs in total, three on each of five nodes. A sketch of requesting a GPU through Slurm follows the specification table below.

Nvidia Tesla K40m
- Architecture: Kepler
- Peak double precision floating point performance: 1.43 Tflops
- Peak single precision floating point performance: 4.29 Tflops
- Memory bandwidth (ECC off): 288 GB/sec
- Memory size: 12 GB GDDR5
- CUDA cores: 2880

Nvidia Tesla V100
- Architecture: Volta
- Peak double precision floating point performance: 7 Tflops
- Peak single precision floating point performance: 14 Tflops
- Memory bandwidth (ECC off): 900 GB/sec
- Memory size: 16 GB HBM2
- CUDA cores: 5120
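As a minimal sketch of requesting a GPU through Slurm's generic resource (GRES) mechanism (the partition name and GPU type string are assumptions, not values confirmed by this page):

    #!/bin/bash
    #SBATCH --job-name=gpu-example
    #SBATCH --partition=gpu       # assumed partition name
    #SBATCH --gres=gpu:1          # one GPU; a typed request such as gpu:v100:1 may also be supported
    #SBATCH --nodes=1
    #SBATCH --time=00:30:00

    # Report the GPU(s) visible to this job
    nvidia-smi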

Order Service

To request an account, please email us at hpc-support@umich.edu.

Users need to request a user login to access the cluster. All users must have been granted access to an account before a user login can be created. When requesting an account, be sure to have:

- a shortcode that you are authorized to use,
- a list of uniqnames of the users who should be able to use the account,
- a list of uniqnames of the administrators who are authorized to make changes to the account (if you had a pilot account, you can optionally use the same administrative group),
- the school or college you are a part of, and
- any limits that you want to set on the account (such as a spending limit or resource usage limits).
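Once your login is created, you can confirm which Slurm accounts you are authorized to submit under using the standard sacctmgr tool:

    # List the Slurm account associations for your login
    sacctmgr show associations user=$USER format=Account,Partition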

If you have questions about gaining access to an account (or getting a trial account), please email hpc-support@umich.edu.

Please see the Terms of Usage for more information.
