Armis2 Update: May 2020 (Increased Compute/GPU Capacity and Limits)


ARC-TS is pleased to announce the addition of compute resources in the standard, large memory, and GPU partitions, new V100 GPUs (graphics processing units), and increased Slurm root account limits for Armis2 effective May 20, 2020. 

 

Additional compute capacity added

ARC-TS will be adding 93 standard compute nodes, 4 large memory nodes, and 3 new GPU nodes (each with 4 NVIDIA K40x GPUs). These nodes are the same hardware type as the existing Armis2 nodes. We plan to migrate the new hardware on May 20, 2020.

 

New GPUs added

ARC-TS has added five nodes, each with three V100 GPUs, to the GPU partition for faster service. These are the same type of GPU nodes that are in the Great Lakes HPC cluster. Learn more about the V100 GPU.

 

What do I need to do? 

You can access the new GPUs by submitting your jobs to the Armis2 gpu partition. Refer to the Armis2 User Guide, section 1.2 "Getting started," Part 5 "Submit a job," or contact arcts-support@umich.edu if you have questions.
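For example, a minimal batch script for the gpu partition might look like the sketch below. The account name and resource amounts are placeholders; check the User Guide (or my_accounts) for the values that apply to you, and note that the exact GRES request (for example, a type-specific request such as gpu:v100:1) depends on the cluster configuration.

    #!/bin/bash
    # Minimal sketch of a GPU job script for the Armis2 gpu partition.
    # "example_account" is a placeholder Slurm account name.
    #SBATCH --job-name=gpu-test
    #SBATCH --account=example_account
    #SBATCH --partition=gpu
    #SBATCH --gres=gpu:1           # request one GPU
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16g
    #SBATCH --time=01:00:00

    # Show which GPU(s) the job was given
    nvidia-smi

Submit the script with sbatch and monitor it with squeue -u $USER.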

 

Resources

 

How do I get help? 

Contact arcts-support@umich.edu to get help or if you have questions. 

 

Slurm default resource limits increased

ARC-TS will be raising the default Slurm resource limits (set at the per-PI/project root account level) to give each researcher up to 33% of the resources in the standard partition, and 25% of the resources in the largemem and gpu partitions, to better serve your research needs. This will happen on May 20, 2020.

 

What do I need to do? 

Review, enable, or modify the limits on your Armis2 Slurm accounts. Because of the higher CPU limit, your researchers will be able to run more jobs, which could generate a larger bill. Contact arcts-support@umich.edu if you would like to modify or add any limits.
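If you want to see which limits are currently applied to one of your accounts, a query along the following lines should work; "example_account" is a placeholder, and the exact set of fields available can vary with the Slurm version.

    # Sketch: show the association-level limits on a Slurm account
    sacctmgr show association account=example_account \
        format=Account,User,Partition,GrpTRES,MaxWall

    # GrpTRES entries such as cpu=..., mem=..., gres/gpu=... are the
    # group-wide core, memory, and GPU limits on that account.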

 

What is a Slurm root account?

A per-principal investigator (PI) or per-project root account contains one or more Slurm sub-accounts, each with its own users, limits, and shortcode(s). The entire root account has limits for overall cluster and /scratch usage in addition to any limits put on the sub-accounts.
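One way to see this hierarchy is with sshare, which prints a root account, its sub-accounts, and their users as a tree; "example_root" below is a placeholder root account name.

    # Sketch: view a root account, its sub-accounts, and their users
    sshare --accounts=example_root --all --long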

 

What is the new Slurm root account limit? 

Each PI's or project's collection of Slurm accounts will be increased to 1,032 cores, 5,160 GB of memory, and 10 GPUs, effective May 20, 2020. The Slurm root account limit is currently set to 90 cores. We will document all of the updated limits, including the large memory and GPU limits, on the Armis2 website when they go into effect.
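For reference, a limit like this is typically represented in Slurm's accounting database as a GrpTRES string on the root account. The line below is an illustration only (memory in TRES strings is usually expressed in MB, so 5,160 GB is roughly 5,283,840 MB); limits on Armis2 are managed by ARC-TS, so you do not set this yourself.

    # Illustrative only: how an administrator might express the new root-account limit
    sacctmgr modify account name=example_root set GrpTRES=cpu=1032,mem=5283840,gres/gpu=10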

 

Resources

 

How do I get help?

Contact arcts-support@umich.edu to get help or if you have questions.

Armis2 is available for general access


U-M Armis2: Now available

What is Armis2?

The Armis2 service is a HIPAA-aligned HPC platform for all University of Michigan researchers and is the successor to the current Armis cluster. It is based on the same hardware as the current Armis system, but uses the Slurm resource manager rather than the Torque/Moab environment.

If your data falls under a non-HIPAA restricted use agreement, contact arcts-support@umich.edu to discuss whether you can run your jobs on Armis2.

Key features of Armis2 

  • 24 standard nodes with Intel Haswell processors, each with 24 cores; more capacity will be added in the coming weeks
  • Slurm as the resource manager and scheduler
  • A scratch storage system that provides high-performance temporary storage for compute; see the User Guide for quotas and the file purge policy
  • An EDR InfiniBand network at 100 Gb/s to each node
  • Large memory nodes with 1.5 TB of memory each
  • GPU nodes with NVIDIA K40 GPUs (4 GPUs per node)

ARC-TS will be adding more standard, large memory, and GPU nodes in the coming weeks during the transition from Armis to Armis2, as well as migrating hardware from Flux to Armis2. For more information on the technical aspects of Armis2, see the Armis2 configuration page.

When will Armis2 be available?

Armis2 is available now; you can log in at armis2.arc-ts.umich.edu.
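From a terminal, that typically means an SSH connection with your U-M credentials, for example (replace "uniqname" with your own uniqname):

    ssh uniqname@armis2.arc-ts.umich.edu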

Using Armis2

Armis2 has a simplified accounting structure. On Armis2, you can use the same account and simply request the resources you need, including standard, GPU, and large memory nodes.

Active accounts have been migrated from Armis to Armis2. To see which accounts you have access to, type my_accounts. See the User Guide for more information on accounts and partitions.
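For example, you can list your accounts and then reference one of them when submitting a job; "example_account" below stands in for whatever my_accounts reports for you.

    # List the Slurm accounts you can charge jobs to
    my_accounts

    # Submit a job against one of those accounts
    sbatch --account=example_account --partition=standard myjob.sh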

Armis2 rates

Armis2 uses a “pay only for what you use” model. We will be sharing rates shortly. Send us an email at hpc-support@umich.edu if you have any questions.

Previously, Armis was in tech-preview mode and you were not billed for the service. Use of Armis2 is currently free, but beginning on December 2, 2019, all jobs run on Armis2 will be subject to applicable rates.

View the Armis2 rates page for more information. 

How does this change impact me?

All migrations from Armis to Armis2 must be completed by November 25, 2019, as Armis will not run any jobs beyond that date. View the Armis2 HPC timeline.

The primary difference between Armis2 and Armis is the resource manager. You will have to update your job submission scripts to work with Slurm; see the Armis2 User Guide for details on how to do this. Additionally, you’ll need to migrate any data and software from Armis to Armis2.
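As a rough sketch of what that translation looks like, a typical Torque/PBS header maps onto Slurm directives along these lines; the exact options depend on your job, and "example_account" is a placeholder account name.

    # Old Torque/Moab (Armis) header:
    #PBS -N myjob
    #PBS -A example_account
    #PBS -l nodes=1:ppn=8,mem=16gb,walltime=02:00:00

    # Roughly equivalent Slurm (Armis2) header:
    #SBATCH --job-name=myjob
    #SBATCH --account=example_account
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8
    #SBATCH --mem=16g
    #SBATCH --time=02:00:00

    # On the command line: submit with "sbatch" instead of "qsub",
    # and check the queue with "squeue" instead of "qstat".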

How do I learn how to use Armis2?

To learn how to use Slurm and for a list of policies, view the documentation on the Armis2 website.

Additionally, ARC-TS and academic unit support teams will be offering training sessions around campus. As information becomes available, there will be a schedule on the ARC-TS website as well as Twitter and email.