The cost of services offered by ARC-TS is shown in the descriptions below. Researchers in the Medical School, College of Engineering, College of Literature, Sciences, and the Arts, and School of Public Health pay lower rates due to cost sharing by those units.

Great Lakes

These rates represent cost recovery for Great Lakes and do not include any support your unit may choose to provide. Billing for jobs will begin on January 6th, 2020. For clarity, this means that jobs running through January 5th, 2020 will continue to be at no charge; jobs will begin to incur charges on January 6th (as soon as the cluster returns from winter maintenance).

Partition          | Rate Per Minute | Rate Per Month | CPU Unit | Memory Unit     | GPU Unit
standard/debug/viz | $0.000430556    | $18.59         | 1        | 7 gigabytes     | N/A
largemem           | $0.001374306    | $59.37         | 1        | 41.75 gigabytes | N/A
gpu                | $0.004939815    | $213.40        | 20       | 90 gigabytes    | 1

For more information and examples, see the Great Lakes rates page.
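
As a rough illustration only, the sketch below estimates the charge for a standard-partition job from the per-minute rate above. It assumes (our reading of the unit columns, not an official billing formula) that a job is billed per minute for the number of units it consumes, where one standard unit is 1 CPU or 7 gigabytes of memory, whichever yields more units; see the Great Lakes rates page for authoritative examples.

    import math

    # Per-minute rate from the table above (standard partition: 1 unit = 1 CPU
    # or 7 GB of memory; largemem and gpu have their own unit definitions).
    STANDARD_RATE_PER_MINUTE = 0.000430556

    def estimate_standard_charge(cpus, mem_gb, minutes):
        """Estimated charge, assuming billing units are the larger of the CPU
        count and the memory request divided by 7 GB (rounded up)."""
        units = max(cpus, math.ceil(mem_gb / 7))
        return units * STANDARD_RATE_PER_MINUTE * minutes

    # Example: 4 CPUs and 16 GB of memory for 2 hours -> about $0.21.
    print(f"${estimate_standard_charge(4, 16, 120):.2f}")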

Flux Standard Memory

The Flux resource best suited to most researchers is the standard-memory Flux core: one compute core with 4GB of RAM, interconnected with 40Gbps InfiniBand networking. Researchers often start with a small number of cores, between 4 and 24, to validate their workflow in the Flux environment.

About 10,000 Flux cores are available for this service.

Standard cost $11.72 per core/month
Contribution from Engineering, LSA, SPH*, or Med School $5.12 per core/month
Cost to researchers in Engineering, LSA, SPH*, and Med School $6.60 per core/month

To order service:

Email hpc-support@umich.edu with the following information:

  • number of cores needed (for help, see our Allocations page)
  • U-M shortcode for billing
  • start and end dates
  • list of users who should have access to the allocation
  • list of people who can change dates or approved users of the allocation

Flux Larger Memory

Flux has 440 cores with larger amounts of RAM — about 25GB per core, or 1TB in a 40-core node. For researchers with codes requiring large amounts of RAM or cores in a single system, the Larger Memory Flux nodes can be a good option.

Standard cost $23.82 per core/month
Contribution from Engineering, LSA, SPH*, or Med School $10.52 per core/month
Cost to researchers in Engineering, LSA, SPH*, and Med School $13.30 per core/month

To order service:

Email hpc-support@umich.edu with the following information:

  • number of cores needed (for help, see our Allocations page)
  • U-M shortcode for billing
  • start and end dates
  • list of users who should have access to the allocation
  • list of people who can change dates or approved users of the allocation

Flux GPUs

Flux has 40 NVIDIA K20x GPUs, 24 NVIDIA K40 GPUs, and 12 NVIDIA TITAN V GPUs available for researchers with applications that can benefit from the acceleration provided by GPU co-processors.

In addition, the software library on Flux has several programs that can benefit from these accelerators.

The cost per month listed below includes two cores and one GPU.

Standard cost $107.10 per month
Contribution from Engineering, LSA, SPH*, or Med School $47.10 per month
Cost to researchers in Engineering, LSA, SPH*, and Med School $60 per month

To order service:

Email hpc-support@umich.edu with the following information:

  • number of cores needed (for help, see our Allocations page)
  • U-M shortcode for billing
  • start and end dates
  • list of users who should have access to the allocation
  • list of people who can change dates or approved users of the allocation

Turbo

Turbo is a high-capacity, fast, reliable, and secure data storage service that allows investigators across U-M Ann Arbor to connect their data to the computing resources necessary for their research, including U-M’s Great Lakes HPC cluster. Turbo supports the storage of sensitive data and also serves ARC-TS’s Armis2 cluster.

Turbo can only be used for research data. It is tuned for large files (1MB or greater), but is capable of handling small files such as documents and spreadsheets. Turbo, in combination with Globus sharing, works well for sharing and hosting data for external collaborators and institutes.

The Turbo service is available in replicated and unreplicated forms. Daily snapshots are kept for up to a week, and weekly snapshots are retained for one additional week before being deleted. More information about Turbo is available on the Turbo page.

Service Option Rates
Replicated Turbo storage $230.40/TB/year
Unreplicated Turbo storage $115.20/TB/year

To order service:
To order Turbo, fill out the information for NFS volumes (including Multiprotocol volumes, which can be shared with Linux only or with both Windows and Linux) or CIFS volumes (shared with Windows only). If you have any questions, contact us at arcts-support@umich.edu.

Armis2

These rates represent cost recovery for Armis2 and do not include any support your unit may choose to provide. Billing for jobs will begin on January 6th, 2020. For clarity, this means that jobs running through January 5th, 2020 will continue to be at no charge; jobs will begin to incur charges on January 6th (as soon as the cluster returns from winter maintenance).

Partition      | Rate Per Minute | Rate Per Month | CPU Unit | Memory Unit     | GPU Unit
standard/debug | $0.0005111      | $22.08         | 1        | 7 gigabytes     | N/A
largemem       | $0.0014373      | $62.09         | 1        | 26.89 gigabytes | N/A
gpu            | $0.0051674      | $223.23        | 5        | 15 gigabytes    | 1

For more information and examples, see the Armis2 rates page.

Flux on Demand

Flux on Demand (FOD) allows users to run jobs as needed without committing to a month-long allocation. FOD may be the right choice for users with sporadic workloads that don’t result in consistent sets of jobs run over the course of a month. FOD jobs have access to 4,024 Standard Flux processors.

For purposes of rate comparison with standard Flux, we show rates for a ‘core/month’, where a month contains 30 days. Actual charges are based on core-seconds.

To create a Flux-on-Demand account, email hpc-support@umich.edu with the list of users who should have access to the account.

Flux on Demand cost $30.25 per core/month, rounded to the nearest core-second
Contribution from LSA, SPH*, or Med School $13.31 per core/month
Cost to researchers in LSA, SPH*, or Med School $16.94 per core/month
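
As a small illustrative calculation (our arithmetic only, using the 30-day month described above), the per-core-month rate can be translated into the per-core-second rate actually used for charging:

    # A 30-day month contains 30 * 24 * 3600 = 2,592,000 seconds.
    SECONDS_PER_MONTH = 30 * 24 * 3600

    # Standard Flux on Demand rate expressed per core-second.
    per_core_second = 30.25 / SECONDS_PER_MONTH   # roughly $0.0000117

    # Example: a 16-core job running for 3 hours -> about $2.02.
    print(f"${16 * 3 * 3600 * per_core_second:.2f}")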

To order service:

Email hpc-support@umich.edu with the list of users who should have access to the account.

Armis on Demand

The HIPAA-aligned Armis HPC cluster is offered only as an on-demand service. The on-demand service allows users to run jobs as needed without committing to a month-long allocation.

For purposes of rate comparison with standard Flux, we show rates for a ‘core/month’, where a month contains 30 days. Actual charges are based on core-seconds.

To request an Armis-on-Demand account, email hpc-support@umich.edu with the list of users who should have access to the account. Armis is currently offered as a release-candidate HPC service; the charges below will not apply until the service transitions to production.

Armis on Demand cost $30.25 per core/month, rounded to the nearest core-second
Contribution from LSA, SPH*, or Med School $13.31 per core/month
Cost to researchers in LSA, SPH*, or Med School $16.94 per core/month

To order service:

Email hpc-support@umich.edu with the list of users who should have access to the account.

Flux Operating Environment or Lighthouse

The Flux Operating Environment (FOE) is meant to support researchers with grants that require the purchase of computing hardware. FOE allows researchers to place their own hardware within the Flux cluster. The FOE provides the data center, staff, networking, storage and software.

Lighthouse is a new HPC cluster providing the same service as FOE, but using the Slurm resource manager and scheduler.  More information is available on the Flux Operating Environment and Lighthouse webpages.

FOE / Lighthouse Service | With Commercial Software | Without Commercial Software (+ Matlab)
Cost                     | $113 per node/month      | $97 per node/month

The commercial software option gives researchers access to the full ARC-TS software library; the no-software option gives access to open-source software, Matlab, and the Intel compilers.

For more information, visit the webpages for Flux Operating Environment or Lighthouse or email hpc-support@umich.edu.

To order service:

Email hpc-support@umich.edu

Locker Large-File Storage

Locker is a cost-optimized, high-capacity, large-file storage service for research data. Locker provides high performance for large files, and allows investigators across U-M Ann Arbor to connect their data to computing resources necessary for their research, including U-M’s HPC clusters.

Locker can only be used for research data. It is designed for data that does not require frequent access, for example, data that is at the end of its useful life cycle. It is tuned for large files (1MB or greater), and large numbers of small files should not be stored on the service without first being aggregated (for example, with tar; see the sketch below). Locker can be used in combination with the Globus data management sharing system for hosting and sharing data with external collaborators and institutes.
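
As one way to do that aggregation, the sketch below bundles a directory of small files into a single compressed archive with Python's tarfile module before the data is copied to Locker. The directory and archive paths are hypothetical, and a command-line tar achieves the same result.

    import tarfile
    from pathlib import Path

    # Hypothetical directory containing many small result files.
    source = Path("results/run_0423")
    archive_path = Path("results/run_0423.tar.gz")

    # Pack the directory into one compressed archive so Locker stores a single
    # large file rather than thousands of small ones.
    with tarfile.open(archive_path, "w:gz") as archive:
        archive.add(source, arcname=source.name)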

The service is available in replicated and unreplicated forms. Daily snapshots are kept for up to a week, and weekly snapshots are retained for one additional week before being deleted. More information about Locker is available on the Locker page.

Service Option Rates
Replicated Locker storage $80/TB/year
Unreplicated Locker storage $40/TB/year

To order service:

To order Locker, click on the respective link for NFS volumes (for Linux) or CIFS volumes (for Windows). If you have any questions, contact us at arcts-support@umich.edu.

Data Den

Data Den is a service for preserving electronic data generated from research activities. It is a low-cost, highly durable storage system, and is the largest storage system operated by ARC-TS.

Data Den is a disk-caching, tape-backed archive optimized for data that goes unaccessed for extended periods of time (weeks to years). Data Den does not replace active storage services like Turbo and Locker; it is best suited for research data that is not accessed regularly or frequently. Data Den is best accessed through the Globus data management sharing system, which moves data into and out of the tape archive.

Data Den is only available in a replicated format.  

Service Option Rates
Replicated Data Den storage No-cost during the pilot phase.

To order service:

To order Data Den during the pilot phase, contact us at arcts-support@umich.edu, and include the following info:

  • Amount of storage needed (in 1TB increments).
  • MCommunity Group name (group members will receive service-related notifications, and can request service changes)
  • Numeric group ID of the group that will have access to files at the top level directory.
  • Numeric user ID of person who will administer the top level directory and grant access to other users.

* The SPH subsidy is limited to a finite budget. SPH faculty and staff must therefore check with the Assistant Dean for Finance to verify whether, and how much, subsidy is available.