The Beta hardware is a subset of the hardware currently used in Flux.
The compute nodes are all interconnected with InfiniBand networking. In addition, a gigabit Ethernet network connects all of the nodes; it is used for node management and NFS file system access.
The high-speed scratch file system is based on Lustre v2.5 and is backed by a DDN SFA10000, the same hardware used by current Flux. All other group volumes also use the same storage as current Flux.
There are three layers of software on Beta: the operating system, the compilers and libraries, and the application software.
The Beta cluster runs CentOS 7. We update the operating system on Beta as CentOS releases new versions and as our library of third-party applications offers support. Because we must support several types of drivers (AFS and Lustre file system drivers, InfiniBand network drivers, and NVIDIA GPU drivers) and dozens of third-party applications, we are cautious about upgrading and can lag CentOS releases by months.
Compilers and Parallel and Scientific Libraries
Beta supports the GNU Compiler Collection, the Intel compilers, and the PGI compilers for C and Fortran. The Beta cluster's parallel library is OpenMPI; the default versions are 1.10.7 (i686) and 3.1.2 (x86_64), and a limited set of earlier versions is also available. Beta provides the Intel Math Kernel Library (MKL) set of high-performance mathematical libraries. Other common scientific libraries, including HDF5, NetCDF, FFTW3, and Boost, are compiled from source.
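As an illustration of building against OpenMPI, here is a minimal MPI hello-world in C. It is a sketch only: the `mpicc` compiler wrapper and `mpirun` launcher shown in the comments are the standard OpenMPI tools, but the exact module or path setup needed to make them available on Beta may differ.

```c
/* hello_mpi.c -- minimal MPI check; a sketch assuming OpenMPI's mpicc
 * wrapper is on your PATH (for example, after loading an OpenMPI module).
 *
 * Compile:  mpicc -O2 -o hello_mpi hello_mpi.c
 * Run:      mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
    MPI_Get_processor_name(name, &len);    /* node the rank runs on */

    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```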
Please contact us if you have questions about the availability of, or support for, any other compilers or libraries.
Beta supports a wide range of application software. We license common engineering simulation software, for example, Ansys, Abaqus, VASP, and we compile other for use on Beta, for example, OpenFOAM and Abinit. We also have software for statistics, mathematics, debugging and profiling, etc. Please contact us if you wish to inquire about the current availability of a particular application.
Beta has eight K20X GPUs on one node for testing GPU workloads under Slurm.
| GPU Model | NVIDIA K20X |
| Number and Type of GPU | one Kepler GK110 |
| Peak double-precision floating point performance | 1.31 teraflops |
| Peak single-precision floating point performance | 3.95 teraflops |
| Memory bandwidth (ECC off) | 250 GB/sec |
| Memory size (GDDR5) | 6 GB |
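To verify that a GPU is visible from within a job, a small device-query program can help. This is a minimal sketch in C using the CUDA runtime API; it assumes the CUDA toolkit (and its `nvcc` compiler) is available on the GPU node, which may require loading a module first.

```c
/* gpu_query.c -- list visible GPUs via the CUDA runtime API; a sketch
 * assuming the CUDA toolkit is available on the GPU node.
 *
 * Compile:  nvcc -o gpu_query gpu_query.c
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA-capable device detected\n");
        return 1;
    }
    for (int i = 0; i < count; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* Report name, total memory, and compute capability per device */
        printf("GPU %d: %s, %.1f GB memory, compute capability %d.%d\n",
               i, prop.name,
               prop.totalGlobalMem / 1073741824.0,
               prop.major, prop.minor);
    }
    return 0;
}
```

Under Slurm, a job typically requests a GPU with the `--gres=gpu:1` option; the exact partition and gres names to use on Beta are site-specific.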
If you have questions, please send email to email@example.com.