DNA sequencing productivity increases with ARC-TS services

The Advanced Genomics Core’s Illumina NovaSeq 6000 sequencing platform, which is about the size of a large laser printer.

At the cutting edge of research at U-M is the Advanced Genomics Core’s Illumina NovaSeq 6000 sequencing platform. The AGC is one of the first academic core facilities to optimize this powerful instrument.

The Advanced Genomics Core (AGC), part of the Biomedical Research Core Facilities within the Medical School Office of Research, provides high-quality, low-cost next-generation sequencing analysis for research clients on a recharge basis.

One NovaSeq run can generate as much as 4 TB of raw data. So how is the AGC able to generate, process, analyze, and transfer so much data for researchers? It has partnered with Advanced Research Computing – Technology Services (ARC-TS) to leverage the speed and power of the Great Lakes High-Performance Computing Cluster.

With Great Lakes, the AGC can process the data and then store the output on other ARC-TS services, Turbo Research Storage and Data Den Research Archive, and share it with clients using Globus File Transfer. The three services work together: Turbo offers the capacity and speed to match the computational performance of Great Lakes, Data Den provides an archive of the raw data in case of catastrophic failure, and Globus has the performance needed to transfer big data.
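For illustration, here is a minimal sketch of how a finished run’s output might be handed off with the Globus command-line interface; the endpoint UUIDs, paths, and label are placeholders, not the AGC’s actual endpoints or directory layout.

    # Placeholder endpoint UUIDs and paths -- substitute real values for your own transfer.
    SRC_EP="<turbo-endpoint-uuid>"
    DST_EP="<client-endpoint-uuid>"

    # Recursively transfer a run's processed output from Turbo to the client's endpoint.
    globus transfer --recursive --label "AGC run delivery (example)" \
        "$SRC_EP:/example-lab/run-0421/processed/" \
        "$DST_EP:/incoming/run-0421/"

Globus runs the transfer asynchronously and verifies integrity along the way, which is what makes hand-offs at this scale practical.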

“Thanks to Great Lakes, we were able to process dozens of large projects simultaneously, instead of being limited to just a couple at a time with our in-house system,” said Olivia Koues, Ph.D., AGC managing director. 

“In calendar year 2020, the AGC delivered nearly a half petabyte of data to our research community. We rely on the speed of Turbo for storage, the robustness of Data Den for archiving, and the ease of Globus for big data file transfers. Working with ARC-TS has enabled incredible research such as making patients resilient to COVID-19. We are proudly working together to help patients.”

“Our services process more than 180,000 GB of raw data per year for the AGC. That’s the same as streaming the three original Star Wars movies and the three prequels more than 6,000 times,” said Brock Palen, ARC-TS director. “We enjoy working with the AGC to assist them in the next step of their big data journey.”

ARC-TS is a division of Information and Technology Services (ITS). The Advanced Genomics Core (AGC) is part of the Biomedical Research Core Facilities (BRCF) within the Medical School Office of Research.

Armis2 Update: May 2020 (Increased Compute/GPU Capacity and Limits)


ARC-TS is pleased to announce the addition of compute resources in the standard, large memory, and GPU partitions; new V100 GPUs (graphics processing units); and increased Slurm root account limits for Armis2, effective May 20, 2020.

 

Additional compute capacity added

ARC-TS will be adding 93 standard compute nodes, 4 large memory nodes, and 3 new GPU nodes (each with 4 NVIDIA K40x GPUs). These nodes are the same hardware type as the existing Armis2 nodes. We plan to migrate the new hardware on May 20, 2020.

 

New GPUs added

ARC-TS has added five nodes, each with three V100 GPUs, for faster service in the GPU partition. These are the same type of GPU nodes found in the Great Lakes HPC cluster. Learn more about the V100 GPU.

 

What do I need to do? 

You can access the new GPUs by submitting your jobs to the Armis2 gpu partition. Refer to the Armis2 user guide (section 1.2, “Getting started,” Part 5, “Submit a job”), or contact arcts-support@umich.edu if you have questions.
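For reference, here is a minimal sketch of a Slurm batch script that requests one GPU in the gpu partition; the account name, resource values, and application line are placeholders to adapt to your own work.

    #!/bin/bash
    #SBATCH --job-name=gpu-example      # placeholder job name
    #SBATCH --account=example_root0     # replace with one of your Slurm accounts
    #SBATCH --partition=gpu             # the Armis2 GPU partition
    #SBATCH --gres=gpu:1                # request one GPU
    #SBATCH --cpus-per-task=4           # CPU cores for the job
    #SBATCH --mem=16g                   # memory for the job
    #SBATCH --time=01:00:00             # walltime limit (hh:mm:ss)

    # Show which GPU was allocated, then run your GPU-enabled application.
    nvidia-smi
    # ./my_gpu_application

Submit the script with sbatch and monitor it with squeue -u $USER.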

 

Resources

 

How do I get help? 

Contact arcts-support@umich.edu if you need help or have questions.

 

Slurm default resource limits increased

ARC-TS will be raising the default Slurm resource limits (set at the per-PI/project root account level) to give each researcher up to 33% of the resources in the standard partition and 25% of the resources in the largemem and gpu partitions, to better serve your research needs. This change will take effect on May 20, 2020.

 

What do I need to do? 

Review, enable, or modify limits on your Armis2 Slurm accounts. Because of the higher CPU limit, your researchers will be able to run more jobs, which could generate a larger bill. Contact arcts-support@umich.edu if you would like to modify or add any limits.
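One way to review what is currently configured is with standard Slurm accounting tools; the example below is a sketch in which example_root0 stands in for your own account name.

    # Show the group limits (GrpTRES) and members for a hypothetical account "example_root0".
    sacctmgr show association where account=example_root0 \
        format=Account,User,GrpTRES,MaxJobs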

 

What is a Slurm root account?

A per-principal investigator (PI) or per-project root account contains one or more Slurm sub-accounts, each with its own users, limits, and shortcode(s). The entire root account has limits for overall cluster and /scratch usage in addition to any limits put on the sub-accounts.
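To see how a root account, its sub-accounts, and their users are organized, you can list the association hierarchy; the sketch below uses standard Slurm tooling, and the account names in the comment are hypothetical.

    # Print the account hierarchy as a tree, e.g. a root account "example_root0"
    # containing sub-accounts "example_lab0" and "example_lab1" with their users.
    sacctmgr show association tree format=Account,User,GrpTRES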

 

What is the new Slurm root account limit? 

Each PI’s or project’s collection of Slurm accounts will be increased to 1,032 cores, 5,160 GB of memory, and 10 GPUs, effective May 20, 2020. The Slurm root account limit is currently set to 90 cores. We will document all of the updated limits, including the large memory and GPU limits, on the Armis2 website when they go into effect.

 

Resources

 

How do I get help?

Contact arcts-support@umich.edu if you need help or have questions.

Armis2 is available for general access


U-M Armis2: Now available

What is Armis2?

The Armis2 service is a HIPAA-aligned HPC platform for all University of Michigan researchers and is the successor to the current Armis cluster. It is based on the same hardware as the current Armis system but uses the Slurm resource manager rather than the Torque/Moab environment.

If your data falls under a non-HIPAA restricted use agreement, contact arcts-support@umich.edu to discuss whether you can run your jobs on Armis2.

Key features of Armis2 

  • 24 standard nodes with Intel Haswell processors, each with 24 cores; more capacity will be added in the coming weeks
  • Slurm as the resource manager and scheduler
  • A scratch storage system providing high-performance temporary storage for compute; see the User Guide for quotas and the file purge policy
  • 100 Gb/s EDR InfiniBand network to each node
  • Large memory nodes with 1.5 TB of memory each
  • GPU nodes with NVIDIA K40 GPUs (4 GPUs per node)

ARC-TS will be adding more standard, large memory, and GPU nodes in the coming weeks during the transition from Armis to Armis2, as well as migrating hardware from Flux to Armis2. For more information on the technical aspects of Armis2, see the Armis2 configuration page.

When will Armis2 be available?

Armis2 is available now; you can log in at armis2.arc-ts.umich.edu.
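For example, assuming your U-M uniqname is your cluster login, connecting from a terminal looks like this:

    # Connect to the Armis2 login node (replace "uniqname" with your own).
    ssh uniqname@armis2.arc-ts.umich.edu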

Using Armis2

Armis2 has a simplified accounting structure: you can use a single account and simply request the resources you need, including standard, GPU, and large memory nodes.

Active accounts have been migrated from Armis to Armis2. To see which accounts you have access to, type my_accounts. See the User Guide for more information on accounts and partitions.
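For example, you can check your accounts and then request a short interactive session against one of them; the account name example_lab0 below is hypothetical.

    # List the Slurm accounts you are authorized to charge jobs to.
    my_accounts

    # Request a short interactive session on a standard node using one of those accounts.
    srun --account=example_lab0 --partition=standard \
        --cpus-per-task=1 --time=30:00 --pty /bin/bash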

Armis2 rates

Armis2 uses a “pay only for what you use” model. We will be sharing rates shortly. Send us an email at hpc-support@umich.edu if you have any questions.

Previously, Armis was in tech-preview mode and you were not billed for the service. Use of Armis2 is currently free, but beginning on December 2, 2019, all jobs run on Armis2 will be subject to applicable rates.

View the Armis2 rates page for more information. 

How does this change impact me?

All migrations from Armis to Armis2 must be completed by November 25, 2019, as Armis will not run any jobs beyond that date. View the Armis2 HPC timeline.

The primary difference between Armis2 and Armis is the resource manager. You will have to update your job submission scripts to work with Slurm; see the Armis2 User Guide for details on how to do this. Additionally, you’ll need to migrate any data and software from Armis to Armis2.
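As a rough illustration of what that update involves (the directive values below are generic examples, not a complete mapping), compare a Torque/PBS job header with its approximate Slurm equivalent:

    # Former Torque/PBS directives (Armis):
    #PBS -N myjob
    #PBS -l nodes=1:ppn=4,mem=8gb,walltime=02:00:00
    #PBS -A example_account
    #PBS -q standard

    # Approximate Slurm equivalents (Armis2):
    #SBATCH --job-name=myjob
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=4
    #SBATCH --mem=8g
    #SBATCH --time=02:00:00
    #SBATCH --account=example_account
    #SBATCH --partition=standard

Jobs are then submitted with sbatch instead of qsub and monitored with squeue instead of qstat.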

How do I learn how to use Armis2?

To learn how to use Slurm and for a list of policies, view the documentation on the Armis2 website.

Additionally, ARC-TS and academic unit support teams will be offering training sessions around campus. As information becomes available, a schedule will be posted on the ARC-TS website and shared via Twitter and email.

 

Modular Data Center Electrical Work


[Update 2019-05-17] The MDC electrical work was completed successfully and Flux has been returned to full production.

 

The Modular Data Center (MDC), which houses Flux, Flux Hadoop, and other HPC resources, has an electrical issue that requires us to bring power usage below 50% for some racks in order to resolve the problem. To do this, we have put reservations on some of the nodes to reduce the power draw so the issue can be fixed by ITS Data Centers. Once we hit the target power level and the issue is resolved, we will remove the reservations and return Flux and Flux Hadoop to full production.