Armis2 is available for general access

By | Armis2, HPC, Systems and Services

U-M Armis2: Now available

What is Armis2?

The Armis2 service is a HIPAA-aligned HPC platform for all University of Michigan researchers and is the successor to the current Armis cluster. It is built on the same hardware as the current Armis system but uses the Slurm resource manager rather than the Torque/Moab environment.

If your data falls under a non-HIPAA restricted use agreement, contact hpc-support@umich.edu to discuss whether you can run your jobs on Armis2.

Key features of Armis2 

  • 24 standard nodes with Intel Haswell processors, each with 24 cores; more capacity will be added in the coming weeks
  • Slurm as the resource manager and scheduler
  • A scratch storage system providing high-performance temporary storage for compute jobs; see the User Guide for quotas and the file purge policy
  • An EDR InfiniBand network providing 100 Gb/s to each node
  • Large Memory Nodes with 1.5 TB of memory each
  • GPU Nodes with NVIDIA K40 GPUs (4 GPUs per node)

ARC-TS will be adding more standard, large memory, and GPU nodes in the coming weeks during the transition from Armis to Armis2, as well as migrating hardware from Flux to Armis2. For more information on the technical aspects of Armis2, see the Armis2 configuration page.

When will Armis2 be available?

Armis2 is available now; you can log in at armis2.arc-ts.umich.edu.

Using Armis2

Armis2 has a simplified accounting structure: you can use a single account and simply request the resources you need, including standard, GPU, and large memory nodes.

Active accounts have been migrated from Armis to Armis2. To see which accounts you have access to, run the my_accounts command. See the User Guide for more information on accounts and partitions.
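
If it helps to see the pieces together, a minimal session might look like the sketch below. The account name is a placeholder; use my_accounts and the User Guide to find your actual account and partition names.

    ssh uniqname@armis2.arc-ts.umich.edu      # log in with your U-M credentials
    my_accounts                               # list the Slurm accounts you can submit against
    # request a small interactive job under a hypothetical account "example_lab"
    srun --account=example_lab --time=30:00 --ntasks=1 --pty /bin/bash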

Armis2 rates

Armis2 uses a “pay only for what you use” model. We will share the rates shortly. Send us an email at hpc-support@umich.edu if you have any questions.

Previously, Armis was in tech-preview mode and you were not billed for the service. Use of Armis2 is currently free, but beginning on December 2, 2019, all jobs run on Armis2 will be subject to applicable rates.

View the Armis2 rates page for more information. 

How does this change impact me?

All migrations from Armis to Armis2 must be completed by November 25, 2019, as Armis will not run any jobs beyond that date. View the Armis2 HPC timeline.

The primary difference between Armis2 and Armis is the resource manager. You will have to update your job submission scripts to work with Slurm; see the Armis2 User Guide for details on how to do this. Additionally, you’ll need to migrate any data and software from Armis to Armis2.
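
As a rough sketch of the kind of change involved (the account name and resource requests below are placeholders; the exact directives you need are documented in the Armis2 User Guide), a Torque script header such as:

    #PBS -N myjob
    #PBS -A example_account
    #PBS -l nodes=1:ppn=4,mem=8gb,walltime=02:00:00

would become, under Slurm, roughly:

    #!/bin/bash
    #SBATCH --job-name=myjob
    #SBATCH --account=example_account
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=4
    #SBATCH --mem=8G
    #SBATCH --time=02:00:00

and the job would be submitted with sbatch rather than qsub.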

How do I learn how to use Armis2?

To learn how to use Slurm and for a list of policies, view the documentation on the Armis2 website.

Additionally, ARC-TS and academic unit support teams will be offering training sessions around campus. As information becomes available, there will be a schedule on the ARC-TS website as well as Twitter and email.

 

Great Lakes Update: August 2019

By | General Interest, Great Lakes, Happenings, HPC, News

Great Lakes cluster is available for general access

What is the current status of the Great Lakes cluster?

Now that we have completed Early User testing, the Great Lakes cluster is available for general access to the University community. Until the migration from Flux is complete on November 25, 2019, there will be no charge for using the Great Lakes cluster.

Noteworthy Features

  • The Great Lakes cluster compute nodes use the new Intel Skylake processor. In particular, the Skylake CPUs on the standard and large memory compute nodes will provide researchers with more consistent performance, regardless of how many other jobs are on the machine.
  • The Great Lakes cluster has 20 GPU nodes, each of which contains two NVIDIA V100 GPUs, which are significantly faster than the K20 and K40 GPUs on Flux.
  • The HDR100 InfiniBand network will provide consistent 100 Gb/s performance across all nodes. On Flux, this ranged from 40 to 100 Gb/s, depending on the node your job used.
  • The high-performance GPFS /scratch system, with a capacity of approximately two petabytes, is significantly faster than /scratch on Flux.
  • The Torque-based batch job submission environment has been replaced with the Slurm resource manager. We expect this system to be significantly more responsive and quicker at starting jobs than was the case on Flux.
  • For web-based job submission, the Open OnDemand system replaces the ARC Connect environment, providing web-based file access, job submission, remote desktop, graphical Matlab, Jupyter Notebooks, and more. For more information, see the web-based access section in our user guide.

How do I get access?

Every Flux user has a login on the Great Lakes cluster; you should be able to log in via ssh to greatlakes.arc-ts.umich.edu. We have created Slurm accounts for each PI or project based on the current Flux accounts. You can see which Slurm accounts you have access to by running the command `my_accounts`.

Additionally, you can access the Great Lakes cluster via the web through our Open OnDemand portal. Here you can submit jobs, see submitted jobs, create Jupyter Notebooks and more. Please see the Great Lakes Cluster User Guide for more information.

Where do I read more about the Great Lakes cluster and how to use it?

The current documentation for the Great Lakes cluster, including configuration, user guides, and known issues can be found at https://arc-ts.umich.edu/greatlakes.

There is a schedule for upcoming training sessions on the CSCAR website, and we will communicate new sessions through Twitter and email.

Software

Almost all of the software packages available on Flux have been recompiled on the Great Lakes cluster for the improved performance anticipated from the Intel Skylake architecture. In most cases, the latest available version of each package is provided. If you need older versions or additional packages, let us know via email at hpc-support@umich.edu.

We have also reorganized the software module structure to make it easier to find the packages you want to load, and prerequisites are now loaded automatically. To search for packages, use the “module spider” command along with the name of the package or keywords. In many cases we have combined similar packages into “Collections,” such as Chemistry and BioInformatics. The command “module load Chemistry” makes the packages in the Chemistry collection discoverable via the “module available” command; after loading a collection, you must then load the individual packages within it that you want to use.
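
As an illustration (the package name used here is an example and may not match what is actually installed):

    module spider gromacs       # search all modules for anything matching "gromacs"
    module load Chemistry       # make the Chemistry collection's packages discoverable
    module available            # now also lists the packages in the Chemistry collection
    module load gromacs         # load the individual package you want to use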

What are the rates? 

We are working with ITS and U-M Finance on approved service rates. Current plans are to have proposed rates1 identified by the end of August. As soon as this information is more concrete, we will provide an update on the Great Lakes cluster website and in our email communications. We understand that this information is necessary for planning purposes and apologize for any impact this has had on your budget planning.

What can be shared at this time is the new approach to billing that will be used for the Great Lakes cluster. Unlike Flux, there are no monthly allocations with fixed fees regardless of whether they are used or not. On the Great Lakes cluster, the monthly charge for an account will be calculated based on the resources used by jobs each month. The cost calculation for each job will be based on the amount and type of resources the job reserves and how long the job runs. This should be a significantly more flexible system and won’t require updating allocations as your computing needs change over time.
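
As a purely hypothetical illustration (no rates have been approved yet), if the eventual rate were $0.01 per core-hour, a job that reserved 8 cores for 10 hours would add 8 × 10 × $0.01 = $0.80 to that account's monthly charge, and an account that ran no jobs in a given month would owe nothing for that month.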

1 Rates are not considered final until they have been formally approved by OFA.

Flux to the Great Lakes cluster transition efforts

If you have not already, you should be developing a plan to migrate your work from Flux to the Great Lakes cluster.  If you need help in developing a plan, please contact us and we can provide assistance during this migration period. 

  • ARC-TS and academic unit support teams will be offering training sessions around campus. A schedule of training sessions will be posted on the ARC-TS website, and we will also announce new sessions through Twitter and email.
  • To ease your transition, any Turbo or MiStorage NFS mounts you have on Flux will also be available on the Great Lakes cluster. If you would prefer not to have those volumes mounted on the Great Lakes cluster, email us at hpc-support@umich.edu.

Ensure that your migration from Flux to the Great Lakes cluster is completed by November 25, 2019. No jobs on Flux will run after November 25, 2019.

Additional Information

We will be adding new capabilities in the coming weeks and months and will continue to communicate these capabilities by email as they become available. If you have any questions, email us at hpc-support@umich.edu.

Great Lakes Update: March 2019

By | Flux, General Interest, Great Lakes, Happenings, HPC, News

ARC-TS previously shared much of this information through the December 2018 ARC Newsletter and on the ARC-TS website. We have added some additional details surrounding the timeline for Great Lakes as well as for users who would like to participate in Early User testing.

What is Great Lakes?

The Great Lakes service is a next generation HPC platform for University of Michigan researchers, which will provide several performance advantages compared to Flux. Great Lakes is built around the latest Intel CPU architecture called Skylake and will have standard, large memory, visualization, and GPU-accelerated nodes.  For more information on the technical aspects of Great Lakes, please see the Great Lakes configuration page.

Key Features:

  • Approximately 13,000 Intel Skylake Gold processor cores with AVX-512 capability, providing over 1.5 TFLOPS of performance per node
  • A 2 PB scratch storage system providing approximately 80 GB/s of performance (compared to 8 GB/s on Flux)
  • A new InfiniBand network with an improved architecture and 100 Gb/s to each node
  • Significantly faster I/O on each compute node via SSD-accelerated storage
  • Large Memory Nodes with 1.5 TB of memory per node
  • GPU Nodes with NVIDIA Volta V100 GPUs (2 GPUs per node)
  • Visualization Nodes with NVIDIA Tesla P40 GPUs

Great Lakes will be using Slurm as the resource manager and scheduler, which will replace Torque and Moab on Flux. This will be the most immediate difference between the two clusters and will require some work on your part to transition from Flux to Great Lakes.

Another significant change is that we are making Great Lakes easier to use through a simplified accounting structure. Unlike Flux, where you need an account for each resource, on Great Lakes you can use a single account and simply request the resources you need, from GPUs to large memory.

There will be two primary ways to get access to compute time: 1) the on-demand model, which adds up the account’s job charges (reserved resources multiplied by the time used) and is billed monthly, similar to Flux On-Demand and 2) node purchases.  In the node purchase model, you will own the hardware which will reside in Great Lakes through the life of the cluster. You will receive an equivalent credit which you can use anywhere on the cluster, including on GPU and large memory nodes. We believe this will be preferable to buying actual hardware in the FOE model, as your daily computational usage can increase and decrease as your research requires. Send us an email at hpc-support@umich.edu if you have any questions or are interested in purchasing hardware on Great Lakes.

When will Great Lakes be available?

The ARC-TS team will prepare the cluster in April 2019 for an Early User period beginning in May, which will continue for approximately four weeks to ensure sufficient time to address any issues. General availability of Great Lakes should occur in June 2019. We have a timeline for the Great Lakes project with further detail.

How does this impact me? Why Great Lakes?

After being the primary HPC cluster for the University for 8 years, Flux will be retired in September 2019.  Once Great Lakes becomes available to the University community, we will provide a few months to transition from Flux to Great Lakes.  Flux will be retired after that period due to aging hardware as well as expiring service contracts and licenses. We highly recommend preparing to migrate as early as possible so your research will not be interrupted.  Later in this email, we have suggestions for what you can do to make this migration process as easy as possible.

When Great Lakes becomes generally available to the University community, we will no longer be accepting new Flux accounts or allocations.  All new work should be focused on Great Lakes.

You can see the HPC timeline, including Great Lakes, Beta and Flux, here.

What is the current status of Great Lakes?

The Great Lakes HPC compute hardware and high-performance storage system have now been fully installed and configured. In parallel with this work, ARC-TS and unit support team members have been readying the new service with new software and modules, as well as developing training to support the transition to Great Lakes. A key feature of the new Great Lakes service is the just-released HDR InfiniBand from Mellanox. The hardware is installed, but the firmware is still in its final stages of testing with the supplier, with a target delivery of mid-April 2019. Given the delays, ARC-TS and the suppliers have discussed an adjusted plan that allows quicker access to the cluster while supporting a future update once the firmware becomes available.

We are working with ITS Finance to define rates for Great Lakes.  We will update the Great Lakes documentation when we have final rates and let everyone know in subsequent communications.

What should I do to transition to Great Lakes?

We hope the transition from Flux to Great Lakes will be relatively straightforward, but to minimize disruptions to your research, we recommend you do your testing early. In October 2018, we announced the availability of the HPC cluster Beta to help users with this migration. Primarily, it allows users to migrate their PBS/Torque job submission scripts to Slurm. You can and should also explore the new Modules environment, as it has changed from the current configuration on Flux. Beta uses the same generation of hardware as Flux, so performance will be similar to that on Flux. You should continue to use Flux for your production work; Beta is only for testing your Slurm job scripts, not for running production workloads.

Every user on Flux has an account on Beta. You can log in to Beta at beta.arc-ts.umich.edu. You will have a new home directory on Beta, so you will need to migrate any scripts and data files needed to test your workloads into this new directory. Beta should not be used for any PHI, HIPAA, export-controlled, or other sensitive data! We highly recommend that you use this time to convert your Torque scripts to Slurm and test that everything works as you expect.

To learn how to use Slurm, we have provided documentation on our Beta website.  Additionally, ARC-TS and academic unit support teams will be offering training sessions around campus. We will have a schedule on the ARC-TS website as well as communicate new sessions through Twitter and email.

If you have compiled software for use on Flux, we highly recommend that you recompile it on Great Lakes once the cluster becomes available. Great Lakes uses the latest CPUs from Intel, and by recompiling, your code may gain performance by taking advantage of the new capabilities of these CPUs.
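
As a sketch of what recompiling might look like (the compilers, flags, and file names here are illustrative, not a prescription; consult the documentation for the toolchains actually installed on Great Lakes):

    # with GCC, target the Skylake server architecture and its AVX-512 units explicitly
    gcc -O3 -march=skylake-avx512 -o my_program my_program.c
    # with the Intel compiler, optimize for the CPU of the node you compile on
    icc -O3 -xHost -o my_program my_program.c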

Questions? Need Assistance?

Contact hpc-support@umich.edu

Women in HPC launches mentoring program

By | Educational, General Interest, HPC, News

Women in High Performance Computing (WHPC) has launched a year-round mentoring program, providing a framework for women to provide or receive mentorship in high performance computing. Read more about the program at https://womeninhpc.org/2019/03/mentoring-programme-2019/

WHPC was created with the vision to encourage women to participate in the HPC community by providing fellowship, education, and support to women and the organizations that employ them. Through collaboration and networking, WHPC strives to bring together women in HPC and technical computing while encouraging women to engage in outreach activities and improve the visibility of inspirational role models.

The University of Michigan has been recognized as one of the first Chapters in the new Women in High Performance Computing (WHPC) Pilot Program. Read more about U-M’s chapter at https://arc.umich.edu/whpc/

Winter HPC maintenance completed

By | Beta, Flux, General Interest, Happenings, HPC, News

Flux, Beta, Armis, Cavium, and ConFlux, along with their storage systems (/home and /scratch), are back online after three days of maintenance. The completed updates will improve the performance and stability of ARC-TS services.

The following maintenance tasks were done:

  • Preventative maintenance at the Modular Data Center (MDC), which required a full power outage
  • InfiniBand networking updates (firmware and software)
  • Ethernet networking updates (datacenter distribution layer switches)
  • Operating system and software updates
  • Migration of Turbo networking to new switches (affects /home and /sw)
  • Consistency checks on the Lustre file systems that provide /scratch
  • Firmware and software updates for the GPFS file systems (ConFlux, starting 9 a.m., Monday, Jan. 7)
  • Consistency checks on the GPFS file systems that provide /gpfs (ConFlux, starting 9 a.m., Monday, Jan. 7)

Please contact hpc-support@umich.edu if you have any questions.

Winter HPC maintenance scheduled for Jan. 6-9

By | Beta, Flux, General Interest, Happenings, HPC, News

To accommodate updates to software, hardware, and operating systems, Flux, Beta, Armis, Cavium, and ConFlux, along with their storage systems (/home and /scratch), will be unavailable starting at 6 a.m. Sunday, January 6th, and returning to service on Wednesday, January 9th. These updates will improve the performance and stability of ARC-TS services. We try to encapsulate the required changes into two maintenance periods per year and work to complete these tasks quickly, as we understand the impact of the maintenance on your research.

During this time, the following maintenance tasks are planned:

  • Preventative maintenance at the Modular Data Center (MDC) which requires a full power outage
  • InfiniBand networking updates (firmware and software)
  • Ethernet networking updates (datacenter distribution layer switches)
  • Operating system and software updates
  • Potential updates to job scheduling software
  • Migration of Turbo networking to new switches (affects /home and /sw)
  • Consistency checks on the Lustre file systems that provide /scratch
  • Firmware and software updates for the GPFS file systems (ConFlux, starting 9 a.m., Monday, Jan. 7)
  • Consistency checks on the GPFS file systems that provide /gpfs (ConFlux, starting 9 a.m., Monday, Jan. 7)

You can use the command “maxwalltime” to discover the amount of time remaining until the beginning of the maintenance. Jobs requesting more walltime than remains before the maintenance will be queued and started after the maintenance is completed.
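
For example, if maxwalltime reports that roughly 36 hours remain before the outage, a job requesting less than that can still be started (the script name and walltime below are placeholders):

    maxwalltime                            # shows the time remaining until the maintenance window
    qsub -l walltime=24:00:00 myjob.pbs    # fits within the remaining time, so it can start before the outage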

All filesystems will be unavailable during the maintenance. We encourage you to copy any data that might be needed during that time from Flux prior to the start of the maintenance.

We will post status updates on our Twitter feed ( https://twitter.com/arcts_um ) throughout the course of the maintenance and send an email to all users when the maintenance has been completed.  Please contact hpc-support@umich.edu if you have any questions.

ARC Director Sharon Broude Geva elected Chair of the Coalition for Academic Scientific Computation

By | HPC, News

Dr. Sharon Broude Geva, Director of Advanced Research Computing at the University of Michigan, has been elected Chair of the Coalition for Academic Scientific Computation (CASC) for 2019.

Founded in 1989, CASC advocates for the use of advanced computing technology to accelerate scientific discovery for national competitiveness, global security, and economic success. The organization’s members represent 87 institutions of higher education and national labs.

The chair position is one of four elected CASC executive officers. The officers work closely as a team with the director of CASC. The Chair is responsible for arranging and presiding over general CASC meetings and acts as an official representative of CASC.

Geva served as CASC secretary in 2015 and 2016, and vice-chair in 2017 and 2018.

The other executive officers for 2019 are Neil Bright, Georgia Institute of Technology, Vice Chair; Craig Stewart, Indiana University, Secretary; Scott Yockel, Harvard University, Treasurer; and Rajendra Bose, Columbia University, Past Chair. Lisa Arafune is the CASC Director.

 

Beta cluster available for learning Slurm; new scheduler to be part of upcoming cluster updates

By | Flux, General Interest, Happenings, HPC, News

New HPC resources to replace Flux and updates to Armis are coming.  They will run a new scheduling system (Slurm). You will need to learn the commands in this system and update your batch files to successfully run jobs. Read on to learn the details and how to get training and adapt your files.

In anticipation of these changes, ARC-TS has created the test cluster “Beta,” which provides a testing environment for the transition to Slurm. Slurm will be used on Great Lakes; on the Armis HIPAA-aligned cluster; and on a new cluster called “Lighthouse,” which will succeed the Flux Operating Environment in early 2019.

Currently, Flux and Armis use the Torque (PBS) resource manager and the Moab scheduling system; when completed, Great Lakes and Lighthouse will use the Slurm scheduler and resource manager, which will enhance the performance and reliability of the new resources. Armis will transition from Torque to Slurm in early 2019.

The Beta test cluster is available to all Flux users, who can log in via ssh to ‘beta.arc-ts.umich.edu’. Beta has its own /home directory, so users will need to create or transfer any files they need via scp/sftp or Globus.
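
For example, a job script and an input directory could be copied from Flux with scp (the uniqname and file names are placeholders; Globus is the better choice for large transfers):

    # run from a Flux login node
    scp myjob.slurm uniqname@beta.arc-ts.umich.edu:~/
    scp -r input_data/ uniqname@beta.arc-ts.umich.edu:~/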

Slurm commands will be needed to submit jobs. For a comparison of Slurm and Torque commands, see our Torque to Slurm migration page. For more information, see the Beta home page.
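
A few of the most common equivalences, shown here for orientation only (the migration page is the authoritative reference):

    qsub job.pbs      ->  sbatch job.sh         # submit a batch job
    qstat -u $USER    ->  squeue -u $USER       # list your queued and running jobs
    qdel JOBID        ->  scancel JOBID         # cancel a job
    qhold JOBID       ->  scontrol hold JOBID   # hold a queued job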

Support staff from ARC-TS and individual academic units will conduct several in-person and online training sessions to help users become familiar with Slurm. We have been testing Slurm for several months, and believe the performance gains, user communications, and increased reliability will significantly improve the efficiency and effectiveness of the HPC environment at U-M.

The tentative time frame for replacing or transitioning current ARC-TS resources is:

  • Flux to Great Lakes, first half of 2019
  • Armis from Torque to Slurm, January 2019
  • Flux Operating Environment to Lighthouse, first half of 2019
  • Open OnDemand on Beta, which replaces ARC Connect for web-based job submissions, Jupyter Notebooks, Matlab, and additional software packages, fall 2018

U-M selects Dell EMC, Mellanox and DDN to Supply New “Great Lakes” Computing Cluster

By | Flux, General Interest, Happenings, HPC, News

The University of Michigan has selected Dell EMC as lead vendor to supply its new $4.8 million Great Lakes computing cluster, which will serve researchers across campus. Mellanox Technologies will provide networking solutions, and DDN will supply storage hardware.

Great Lakes will be available to the campus community in the first half of 2019, and over time will replace the Flux supercomputer, which serves more than 2,500 active users at U-M for research ranging from aerospace engineering simulations and molecular dynamics modeling to genomics and cell biology to machine learning and artificial intelligence.

Great Lakes will be the first cluster in the world to use the Mellanox HDR 200 gigabit per second InfiniBand networking solution, enabling faster data transfer speeds and increased application performance.

“High-performance research computing is a critical component of the rich computing ecosystem that supports the university’s core mission,” said Ravi Pendse, U-M’s vice president for information technology and chief information officer. “With Great Lakes, researchers in emerging fields like machine learning and precision health will have access to a higher level of computational power. We’re thrilled to be working with Dell EMC, Mellanox, and DDN; the end result will be improved performance, flexibility, and reliability for U-M researchers.”

“Dell EMC is thrilled to collaborate with the University of Michigan and our technology partners to bring this innovative and powerful system to such a strong community of researchers,” said Thierry Pellegrino, vice president, Dell EMC High Performance Computing. “This Great Lakes cluster will offer an exceptional boost in performance, throughput and response to reduce the time needed for U-M researchers to make the next big discovery in a range of disciplines from artificial intelligence to genomics and bioscience.”

The main components of the new cluster are:

  • Dell EMC PowerEdge C6420 compute nodes, PowerEdge R640 high memory nodes, and PowerEdge R740 GPU nodes
  • Mellanox HDR 200Gb/s InfiniBand ConnectX-6 adapters, Quantum switches and LinkX cables, and InfiniBand gateway platforms
  • DDN GRIDScaler® 14KX® and 100 TB of usable IME® (Infinite Memory Engine) memory

“HDR 200G InfiniBand provides the highest data speed and smart In-Network Computing acceleration engines, delivering HPC and AI applications with the best performance, scalability and efficiency,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “We are excited to collaborate with the University of Michigan, Dell EMC and DataDirect Networks, in building a leading HDR 200G InfiniBand-based supercomputer, serving the growing demands of U-M researchers.”

“DDN has a long history of working with Dell EMC and Mellanox to deliver optimized solutions for our customers. We are happy to be a part of the new Great Lakes cluster, supporting its mission of advanced research and computing. Partnering with forward-looking thought leaders as these is always enlightening and enriching,” said Dr. James Coomer, SVP Product Marketing and Benchmarks at DDN.

Great Lakes will provide a significant improvement in computing performance over Flux. For example, each compute node will have more cores, higher maximum speeds, and more memory. The cluster will also have improved internet connectivity and file system performance, as well as NVIDIA GPUs with Tensor Cores, which are very powerful for machine learning compared to prior generations of GPUs.

“Users of Great Lakes will have access to more cores, faster cores, faster memory, faster storage, and a more balanced network,” said Brock Palen, Director of Advanced Research Computing – Technology Services (ARC-TS).

The Flux cluster was created approximately 8 years ago, although many of the individual nodes have been added since then. Great Lakes represents an architectural overhaul that will result in better performance and efficiency. Based on extensive input from faculty and other stakeholders across campus, the new Great Lakes cluster will be designed to deliver similar services and capabilities as Flux, including the ability to accommodate faculty purchases of hardware, access to GPUs and large-memory nodes, and improved support for emerging uses such as machine learning and genomics.

ARC-TS will operate and maintain the cluster once it is built. Allocations of computing resources through ARC-TS include access to hundreds of software titles, as well as support and consulting from professional staff with decades of combined experience in research computing.

Updates on the progress of Great Lakes will be available at https://arc-ts.umich.edu/greatlakes/.