Great Lakes Configuration

Hardware

Computing

Node Type          Standard                         Large Memory                     GPU                              Visualization
Number of Nodes    380                              3                                20                               4
Processors         2x 3.0 GHz Intel Xeon Gold 6154  2x 3.0 GHz Intel Xeon Gold 6154  2x 2.4 GHz Intel Xeon Gold 6148  2x 2.4 GHz Intel Xeon Gold 6148
Cores per Node     36                               36                               40                               40
RAM                192 GB                           1.5 TB                           192 GB                           192 GB
Storage            480 GB SSD + 4 TB HDD            4 TB HDD                         4 TB HDD                         4 TB HDD
GPU                N/A                              N/A                              2x NVIDIA Tesla V100             1x NVIDIA Tesla P40

Networking

The compute nodes are all interconnected with InfiniBand networking capable of 100 Gb/s throughput. A gigabit Ethernet network also connects all of the nodes; it is used for node management and NFS file system access.

Storage

The high-speed scratch file system provides 2 petabytes of storage with approximately 80 GB/s of aggregate performance (compared to 8 GB/s on Flux).
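
You can check the capacity and current usage of the scratch file system yourself before staging large data sets. A minimal sketch, assuming scratch is mounted at /scratch and served by Lustre (both are assumptions; this page does not state the mount point):

    # Show capacity and usage of the scratch file system.
    # /scratch is an assumed mount point; check the actual path on the cluster.
    df -h /scratch

    # If scratch is Lustre (suggested by the Lustre drivers mentioned below),
    # the Lustre client tools give a per-target breakdown:
    lfs df -h /scratch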

Operation

Computing jobs on Great Lakes are managed completely through Slurm.
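
As an illustration of what a job submission looks like, here is a minimal Slurm batch script; the account and partition names below are placeholders rather than actual Great Lakes values:

    #!/bin/bash
    #SBATCH --job-name=example          # name shown in the queue
    #SBATCH --account=example_account   # placeholder; use your allocation account
    #SBATCH --partition=standard        # placeholder partition name
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=36        # one task per core on a standard node
    #SBATCH --time=01:00:00             # one-hour wall-clock limit

    # Commands below run on the allocated node.
    srun hostname

Submit the script with "sbatch job.sh" and check its status with "squeue -u $USER".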

Software

There are three layers of software on Great Lakes.

Operating Software

The Great Lakes cluster runs CentOS 7. We update the operating system as CentOS releases new versions and as our library of third-party applications adds support for them. Because we must support several types of drivers (AFS and Lustre file system drivers, InfiniBand network drivers, and NVIDIA GPU drivers) and dozens of third-party applications, we are cautious about upgrading and can lag CentOS's releases by months.

Compilers and Parallel and Scientific Libraries

Great Lakes supports the GNU Compiler Collection, the Intel compilers, and the PGI compilers for C and Fortran. The cluster's parallel library is OpenMPI; the default versions are 1.10.7 (i686) and 3.1.2 (x86_64), and a limited selection of earlier versions is also available. Great Lakes provides the Intel Math Kernel Library (MKL) set of high-performance mathematical libraries. Other common scientific libraries, including HDF5, NetCDF, FFTW3, and Boost, are compiled from source.
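
As a sketch of a typical build workflow (the module names below are illustrative guesses; the actual names depend on the installed module tree):

    # Load a compiler and MPI stack (module names are assumptions).
    module load gcc openmpi

    # Compile an MPI program with the Open MPI wrapper compiler.
    mpicc -O2 -o hello_mpi hello_mpi.c

    # Run it on four allocated cores.
    srun --ntasks=4 ./hello_mpi

    # With the Intel compilers, MKL can be linked via a compiler flag
    # (flag spelling varies by compiler version):
    # icc -O2 -mkl -o solver solver.c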

Please contact us if you have questions about the availability of, or support for, any other compilers or libraries.

Application Software

Great Lakes supports a wide range of application software. We license common engineering simulation software (e.g., Ansys, Abaqus, and VASP) and compile others for use on Great Lakes (e.g., OpenFOAM and Abinit). We also have software for statistics, mathematics, debugging, profiling, and more. Please contact us about the current availability of a particular application.
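
Before asking, you can usually check the module system yourself. A sketch, assuming an Lmod-style module environment (an assumption; this page does not name the module system) and an illustrative package name:

    # List software currently visible to your environment.
    module avail

    # Search the whole module tree for a package (Lmod-specific command).
    module spider abaqus

    # Load it once you have found the version you need.
    module load abaqus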

GPUs

Great Lakes has 40 NVIDIA Tesla V100 GPUs, two on each of the 20 GPU nodes. Four NVIDIA Tesla P40 GPUs, one on each of the four visualization nodes, are also available for visualization work. An example Slurm job script that requests a GPU follows the table below.

GPU Model                                   NVIDIA Tesla V100   NVIDIA Tesla P40
Number and Type of GPU                      one Volta GPU       one Pascal GPU
Peak double precision floating point perf.  7 TFLOPS            N/A
Peak single precision floating point perf.  14 TFLOPS           12 TFLOPS
Memory bandwidth (ECC off)                  900 GB/sec          346 GB/sec
Memory size                                 32 GB HBM2          24 GB GDDR5
CUDA cores                                  5120                3840
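
To use one of these GPUs, a job must request it from Slurm. A minimal sketch, assuming a partition named gpu and a generic gpu GRES (both placeholders; the actual partition and GRES names on Great Lakes may differ):

    #!/bin/bash
    #SBATCH --job-name=gpu_example
    #SBATCH --partition=gpu             # placeholder partition name
    #SBATCH --gres=gpu:1                # request one GPU on the node
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --time=00:30:00

    # nvidia-smi reports the GPU(s) Slurm assigned to this job.
    nvidia-smi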

If you have questions, please send email to hpc-support@umich.edu.

Order Service

Great Lakes will be available in the first half of 2019. This page will provide updates on the progress of the project.

Please contact hpc-support@umich.edu with any questions.