Flux is the shared, Linux-based high-performance computing (HPC) cluster available to all researchers at the University of Michigan.
Flux consists of approximately 27,000 cores across 1,372 compute nodes, each with multiple CPU cores and at least 4GB of RAM per core, interconnected with InfiniBand networking.
Please see the following pages for more information on Flux:
- For Grant Writers and Research Administrators — information on including Flux in grant submissions
- Flux for Instructors — information on using Flux in a class
- Flux for Undergraduates (free access to the cluster)
- Check My Allocation
For technical support, please email firstname.lastname@example.org.
Migration to the Great Lakes HPC cluster
With the Great Lakes HPC cluster coming online this summer, users should prepare to migrate their workloads by testing on Beta. Later this summer, the Great Lakes cluster will reach general availability, and all accounts, users, and workloads must be migrated from Flux to the Great Lakes cluster by November 25, 2019. No Flux jobs will run past that date. Future communications will help everyone through the migration process to the Great Lakes cluster.
Unit-specific Flux Allocations
Flux Operating Environment
The Flux Operating Environment (FOE) supports researchers whose grants require the purchase of dedicated computing hardware. FOE allows researchers to place their own hardware within the Flux cluster.
For more information, visit our FOE page.
Flux on Demand
Flux on Demand (FOD) allows users to run jobs as needed without committing to a month-long allocation. FOD may be the right choice for users with sporadic workloads that do not produce a consistent set of jobs over the course of a month. FOD jobs have access to 3,900 Standard Flux processors.
To create a Flux on Demand allocation, email email@example.com with the list of users who should have access to the account. See the ARC-TS Computing Resources Rates page for details on the costs of Flux on Demand.
Large Memory Flux
Flux has 360 cores with larger amounts of RAM: about 25GB per core, or 1TB in a 40-core node. Large Memory Flux is designed for researchers whose codes require large amounts of RAM or many cores in a single system.
Flux with GPUs
Flux has 24 K20X GPUs connected to 3 compute nodes, 24 K40 GPUs connected to 6 nodes, and 12 TITAN V GPUs connected to 3 nodes. These are available to researchers whose applications can benefit from the acceleration provided by GPU co-processors. In addition, the software library on Flux includes several programs that can take advantage of these accelerators.
Each GPU allocation comes with 2 compute cores and 8GB of CPU RAM.
| GPU Model | NVIDIA K20X | NVIDIA K40 | NVIDIA TITAN V |
|---|---|---|---|
| Number and type of GPU | one Kepler GK110 | one Kepler GK110B | one Volta GV100 |
| Peak double precision floating point perf. | 1.31 Tflops | 1.43 Tflops | 7.5 Tflops |
| Peak single precision floating point perf. | 3.95 Tflops | 4.29 Tflops | 15 Tflops |
| Tensor performance (deep learning) | n/a | n/a | 110 Tflops |
| Memory bandwidth (ECC off) | 250 GB/sec | 288 GB/sec | 652.8 GB/sec |
| Memory size | 6 GB GDDR5 | 12 GB GDDR5 | 12 GB HBM2 |
| CUDA cores | 2688 | 2880 | 5120 (single precision) |
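To give a sense of the kind of code that benefits from these accelerators, below is a minimal, illustrative CUDA sketch in which each GPU thread handles one array element. It is not part of the Flux software library, and the compiler and toolkit setup will depend on the modules loaded on the cluster.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements; thousands of threads
// run concurrently, which is where the acceleration comes from.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // 1M elements
    const size_t bytes = n * sizeof(float);

    // Managed (unified) memory is visible to both the CPU and the GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the kernel to finish

    printf("c[0] = %f\n", c[0]);        // expect 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Codes structured this way, with large arrays processed by one lightweight thread per element, are the ones that see the biggest gains from the GPUs listed above.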
If you have questions, please send email to firstname.lastname@example.org.