The language below can be used in grant submissions to government agencies or other funding entities.


Flux Description

Computing
Flux is a Linux-based high-performance computing (HPC) cluster intended to support parallel and other applications that are not suitable for departmental or individual computers. Each Flux compute node comprises multiple CPU cores with at least 4 GB of RAM per core; Flux has more than 19,000 cores in total. All compute nodes are interconnected with InfiniBand networking.

The large-memory Flux hardware comprises 10 compute nodes, each configured with 1 TB of RAM.

Flux contains five GPU nodes, with a total of 40 NVIDIA CUDA-capable GPUs.

Computing jobs on Flux are managed through a combination of the Moab Scheduler, the Terascale Open-Source Resource and QUEue Manager (Torque), and the GOLD Allocation Manager from Adaptive Computing.
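
For illustration, the Python sketch below shows how a batch job might be submitted under this stack: it writes a minimal Torque (PBS) script and submits it with qsub. The job name, allocation account, and resource requests are hypothetical placeholders, not Flux defaults.

    import subprocess
    import tempfile

    # A minimal Torque (PBS) job script. The -A directive charges the job
    # against an allocation account managed by GOLD; "example_flux" is a
    # hypothetical account name, as are the resource requests below.
    job_script = """#!/bin/bash
    #PBS -N example_job
    #PBS -A example_flux
    #PBS -l nodes=1:ppn=4
    #PBS -l walltime=02:00:00
    #PBS -l pmem=4gb

    cd "$PBS_O_WORKDIR"
    ./my_parallel_program
    """

    # Write the script to a temporary file so qsub can read it.
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(job_script)
        script_path = f.name

    # qsub prints the new job's identifier on success.
    result = subprocess.run(
        ["qsub", script_path], capture_output=True, text=True, check=True
    )
    print("Submitted job:", result.stdout.strip())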

The Flux Configuration page has a detailed description of the Flux cluster.

Storage
The system also includes high-speed scratch storage using the Lustre parallel network file system. The storage is connected with InfiniBand. This file system allows researchers to store data on a short-term basis to perform calculations; it is not for long-term data storage or archival purposes.

Intra-networking
All Flux nodes are interconnected with quad data rate (QDR) InfiniBand, delivering up to 40 Gbps of bandwidth with less than 5 μs latency.

Inter-networking
Flux is connected to the University of Michigan’s campus backbone to provide access to student and researcher desktops as well as other campus computing and storage systems. The campus backbone provides 100 Gbps connectivity to the commodity Internet and the research networks Internet2 and MiLR.

Software
The Flux cluster includes a comprehensive suite of commercial and open-source research software, including major compilers and many common research-specific applications such as Mathematica, Matlab, R, and Stata.

Data Center Facilities
Flux is housed in the Modular Data Center (MDC). The MDC uses ambient air for cooling approximately 75% of the year, significantly reducing the energy needed for cooling and contributing to U-M’s sustainability efforts.

Hardware Grants
Flux Operating Environment is a service that allows researchers to add their own compute hardware to the Flux cluster, in order to take advantage of the data center, support, networking, storage, and basic software. For more information, visit the Flux Operating Environment page.

Support
Flux computing services are provided through a collaboration of University of Michigan units: Advanced Research Computing (in the Office of the VP of Research and the Provost’s Office), and computing groups in schools and colleges at the university.


The Following Steps Will Help You Include Flux in a Grant Proposal

1. Determine the suitability of Flux for your research by considering whether a large computing resource is required. The proposed funds should provide computing cycles in a way that allows the team of researchers to allocate them as needed; the size of an allocation can be changed on a month-by-month basis to meet research needs and make the best possible use of the awarded funds. Faculty-owned or faculty-provided hardware cannot be accepted into the Flux service itself (see the Flux Operating Environment above for that option).

2. Determine if the constraints on access to Flux are suitable for your project. Access to Flux and the software library is granted to all University of Michigan faculty, staff, and graduate and undergraduate students. Contractors and collaborators from other institutions may not use Flux because of licensing limitations with third-party commercial software.

3. Determine an appropriate budget to include in the proposal; the cost per core-month is an approved rate and may be charged as a direct cost to federal grants. The Ordering Service page contains the most current Flux rate for budgeting use in your proposal, and Flux Sizing will help estimate Flux allocations for budget planning purposes; a worked example of the budgeting arithmetic follows this list. For questions or more information about estimating usage, contact hpc-support@umich.edu.

4. Use the appropriate parts of the Flux Description above in your proposal. In NSF proposals, use the category “computer service” and the phrase “cluster compute allocation”, with quantities expressed as core-months or core-years, to describe Flux time.

5. Plan for the end of the award period or the exhaustion of the funds. At that time, the allocation on Flux expires and no more jobs associated with that Flux project can run.
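
As a rough illustration of the budgeting arithmetic in step 3, the Python sketch below converts an allocation size and award period into core-months and core-years (the units NSF proposals should use, per step 4) and multiplies by a placeholder rate. The actual approved rate must be taken from the Ordering Service page.

    # Illustrative budget arithmetic for a cluster compute allocation.
    # The rate below is a placeholder, not the approved Flux rate.
    RATE_PER_CORE_MONTH = 10.00  # USD, hypothetical

    cores = 48    # requested allocation size, in cores
    months = 24   # award period, in months

    core_months = cores * months
    core_years = core_months / 12
    total_cost = core_months * RATE_PER_CORE_MONTH

    print(f"{core_months} core-months ({core_years:.1f} core-years): "
          f"${total_cost:,.2f}")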


 

Armis Description

Computing

Armis is a Linux-based high-performance computing cluster used in conjunction with Turbo Research Storage to provide a secure, scalable, and distributed computing environment that aligns with HIPAA privacy standards. Armis is intended to support parallel and other applications that are not suitable for departmental or individual computers. Each Armis compute node comprises multiple CPU cores with at least 4 GB of RAM per core. As demand warrants, Armis can be resized to include up to 14,000 cores. All compute nodes are interconnected with InfiniBand networking.

Computing jobs on Armis are managed through a combination of the Moab Scheduler, the Terascale Open-Source Resource and QUEue Manager (Torque), and the Moab Accounting Manager from Adaptive Computing.
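
For illustration, a submitted job's progress can be followed with Torque's qstat command; the minimal Python sketch below lists the current user's jobs. Allocation balances are tracked separately by the Moab Accounting Manager.

    import getpass
    import subprocess

    # List the current user's jobs; the state column shows Q (queued),
    # R (running), or C (recently completed).
    user = getpass.getuser()
    subprocess.run(["qstat", "-u", user], check=True)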

 

Storage

Armis is used in conjunction with Turbo Research Storage, which provides high-capacity, fast, reliable, and secure storage for home and scratch directories. Researchers can also purchase additional storage on Turbo that can be mounted on Armis, as well as on their own systems via NFS with Kerberos authentication, maintaining HIPAA alignment. Turbo supports storage of sensitive data.
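
As a minimal sketch of what such a mount involves (the server name and export path below are hypothetical, and an actual mount requires root privileges and a valid Kerberos ticket), an NFSv4 mount with Kerberos security might look like:

    import subprocess

    # Mount a hypothetical Turbo export over NFSv4; the sec=krb5 option
    # requests Kerberos authentication for the mount, which is what
    # maintains the HIPAA alignment described above.
    server_export = "turbo.example.umich.edu:/example-lab"  # hypothetical
    mount_point = "/mnt/turbo"

    subprocess.run(
        ["mount", "-t", "nfs4", "-o", "sec=krb5", server_export, mount_point],
        check=True,  # raises CalledProcessError if the mount fails
    )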

Networking

Armis nodes are interconnected with at least quad data rate (QDR) InfiniBand, delivering up to 40 Gbit/s of bandwidth with less than 5 μs latency; newer-generation nodes use enhanced data rate (EDR) InfiniBand, delivering up to 100 Gbit/s.

Armis is connected to the University of Michigan’s campus backbone to provide access to student and researcher desktops as well as other campus computing and storage systems. The campus backbone provides 100 Gbit/s connectivity to the research networks Internet2 and MiLR.

Software

The Armis cluster includes a comprehensive suite of commercial and open-source software. Software is available for research problems as diverse as statistical analysis (e.g., R, SAS, Stata), mathematical modelling (e.g., Matlab, Mathematica), engineering simulation (e.g., Abaqus, OpenFOAM), and molecular modelling (e.g., Gaussian), as well as a number of common genomic pipelines.

Data Center Facilities

Armis is housed in the Modular Data Center (MDC). The MDC uses ambient air for cooling approximately 75% of the year, significantly reducing the energy needed for cooling and contributing to U-M’s sustainability efforts.

Hardware Grants

Armis Operating Environment is a service that allows researchers to add their own compute hardware to the Armis cluster in order to take advantage of the data center, support, networking, storage, and basic software. If you have a grant with funds designated specifically for hardware purchases, ARC-TS will assist you in purchasing private hardware suitable for integration into Armis. Contact us for details.

Support

Armis computing services are provided through a collaboration of University of Michigan units: Advanced Research Computing (in the Office of the VP of Research and the Provost’s Office), and computing groups in schools and colleges at the university.

The Following Steps Will Help You Include Armis in a Grant Proposal

  1. Determine the suitability of Armis for your research by considering whether a large computing resource is required. The proposed funds should provide computing cycles in a way that allows the team of researchers to allocate them as needed; the size of an allocation can be changed on a month-by-month basis to meet research needs and make the best possible use of the awarded funds. Faculty-owned or faculty-provided hardware cannot be accepted into the Armis service itself (see the Armis Operating Environment above for that option).
  2. Determine if the constraints on access to Armis are suitable for your project. Access to Armis and the software library is granted to all University of Michigan faculty, staff, and graduate and undergraduate students. Contractors and some collaborators from other institutions may not use Armis because of licensing limitations with third-party commercial software.
  3. Determine an appropriate budget to include in the proposal; the cost per core-month is an approved rate and may be charged as a direct cost to federal grants. The Ordering Service page contains the most current Armis rate for budgeting use in your proposal, and the budgeting arithmetic sketched after the Flux steps above applies equally to Armis. For questions or more information about estimating usage, contact hpc-support@umich.edu.
  4. Use the appropriate parts of the Armis description above in your proposal. In NSF proposals, use the category “computer service” and the phrase “cluster compute allocation”, with quantities expressed as core-months or core-years, to describe Armis time.
  5. Plan for the end of the award period or the exhaustion of the funds. At that time, the allocation on Armis expires and no more jobs associated with that Armis project can run.