The Flux HPC Cluster

Flux is the shared, Linux-based high-performance computing (HPC) cluster available to all researchers at the University of Michigan.

Flux comprises approximately 27,000 cores across 1,372 compute nodes, each with multiple CPU cores and at least 4 GB of RAM per core, interconnected with InfiniBand networking.

Please see the following pages for more information on Flux:

For technical support, please email

Migration to the Great Lakes HPC cluster

With the Great Lakes HPC cluster coming online this summer, users should prepare to migrate their workloads by testing on Beta. Later this summer, Great Lakes will open for general availability; all accounts, users, and workloads must be migrated from Flux to Great Lakes by November 25, 2019, after which no Flux jobs will run. Future communications will help guide everyone through the migration process.

Unit-specific Flux Allocations

Flux Operating Environment

The Flux Operating Environment (FOE) supports researchers with grants that require the purchase of computing hardware. FOE allows researchers to place their own hardware within the Flux cluster.

For more information, visit our FOE page.

Flux On Demand

Flux on Demand (FOD) allows users to run jobs as needed without committing to a month-long allocation. FOD may be the right choice for users with sporadic workloads that don't produce a consistent set of jobs over the course of a month. FOD jobs have access to 3,900 Standard Flux processors.
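As a sketch of what an on-demand job submission might look like (Flux used the Torque/PBS scheduler; the account name below is a placeholder, and `fluxod` is assumed to be the on-demand queue name):

```shell
#!/bin/bash
# Hypothetical Flux On Demand batch script -- account name is a placeholder.
#PBS -N my_fod_job
#PBS -A example_fluxod                      # your FOD allocation (placeholder)
#PBS -q fluxod                              # assumed Flux On Demand queue
#PBS -l nodes=1:ppn=4,mem=16gb,walltime=02:00:00
#PBS -m abe                                 # email on abort, begin, and end

cd "$PBS_O_WORKDIR"                         # run from the submission directory
./my_program                                # placeholder for your executable
```

A script like this would be submitted with `qsub myjob.pbs`; since FOD is billed by usage, requesting only the cores, memory, and walltime the job needs keeps costs down.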

To create a Flux On Demand allocation, email with the list of users who should have access to the account. See the ARC-TS Computing Resources Rates page for details on the costs of Flux On Demand.

Large Memory Flux

Flux has 360 cores with larger amounts of RAM — about 25GB per core, or 1TB in a 40-core node. Large Memory Flux is designed for researchers with codes requiring large amounts of RAM or cores in a single system.
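A large-memory job is requested the same way as a standard job, but against the large-memory queue. A minimal sketch, assuming the Torque/PBS scheduler Flux used and the `fluxm` queue name (the account name is a placeholder):

```shell
#!/bin/bash
# Hypothetical Large Memory Flux batch script -- account name is a placeholder.
#PBS -N bigmem_job
#PBS -A example_fluxm                       # your Large Memory allocation (placeholder)
#PBS -q fluxm                               # assumed large-memory queue
#PBS -l nodes=1:ppn=40,mem=1000gb,walltime=24:00:00   # a full 40-core, ~1 TB node

cd "$PBS_O_WORKDIR"
./my_bigmem_program                         # placeholder for your executable
```

The `mem=` request is what routes the job to a large-memory node, so it should reflect the code's actual working-set size rather than defaulting to a full terabyte.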

For information on determining the size of a Flux allocation, please see our pages on How Flux Works, Sizing a Flux Order, and Managing a Flux Project.

GPUs

Flux has 24 K20X GPUs across 3 compute nodes (8 per node), 24 K40 GPUs across 6 nodes (4 per node), and 12 TITAN V GPUs across 3 nodes (4 per node). These are available for researchers whose applications can benefit from the acceleration provided by GPU co-processors. In addition, the software library on Flux includes several programs that can take advantage of these accelerators.

Each GPU allocation comes with 2 compute cores and 8GB of CPU RAM.
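To illustrate how a GPU job might be requested, here is a minimal sketch assuming the Torque/PBS scheduler Flux used and a `fluxg` GPU queue (the account name and program are placeholders):

```shell
#!/bin/bash
# Hypothetical FluxG batch script -- account name is a placeholder.
#PBS -N gpu_job
#PBS -A example_fluxg                       # your GPU allocation (placeholder)
#PBS -q fluxg                               # assumed GPU queue
#PBS -l nodes=1:gpus=1,walltime=08:00:00    # one GPU comes with 2 cores and 8 GB CPU RAM

cd "$PBS_O_WORKDIR"
module load cuda                            # load the CUDA toolkit, if your code needs it
./my_gpu_program                            # placeholder for your executable
```

Because each GPU carries a fixed 2-core/8 GB CPU-side allotment, requesting more GPUs is also the way to obtain more host cores and RAM for a GPU job.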

FluxG GPU Specifications

GPU model                              NVIDIA K20X     NVIDIA K40      NVIDIA TITAN V
Number and type of GPU                 1x Kepler GK110  1x Kepler GK110B  1x Volta GV100
Peak double-precision performance      1.31 Tflops     1.43 Tflops     7.5 Tflops
Peak single-precision performance      3.95 Tflops     4.29 Tflops     15 Tflops
Tensor performance (deep learning)     n/a             n/a             110 Tflops
Memory bandwidth (ECC off)             250 GB/sec      288 GB/sec      652.8 GB/sec
Memory size                            6 GB GDDR5      12 GB GDDR5     12 GB HBM2
CUDA cores                             2688            2880            5120

If you have questions, please send email to

Order Service

For information on determining the size of a Flux allocation, please see our pages on How Flux Works, Sizing a Flux Order, and Managing a Flux Project.

To order:

1. Fill out the ARC-TS HPC account request form.

2. Email with the following information:

  • the number of cores needed
  • the start date and number of months for the allocation
  • the shortcode for the funding source
  • the list of people who should have access to the allocation
  • the list of people who can change the user list and augment or end the allocation.

For information on costs, visit our Rates page.
