HPC training workshops begin Thursday, Sept. 21

A series of training workshops in high performance computing will be held Sept. 21 through Oct. 31, 2017, presented by CSCAR in conjunction with Advanced Research Computing – Technology Services (ARC-TS). All sessions are held in East Hall, 530 Church St.; room numbers are listed with each workshop below.

Introduction to the Linux command line
This course will familiarize the student with the basics of accessing and interacting with Linux computers using the GNU/Linux operating system’s Bash shell, also known as the “command line.”
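For a flavor of what the command line looks like, a few illustrative Bash commands (a sketch only, not workshop material; the file and directory names are made up):

    pwd                     # print the current working directory
    ls -l data/             # list the contents of a directory in detail
    grep "error" run.log    # search a file for lines containing a pattern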
Dates: (Please sign up for only one)
• Thursday, Sept. 21, 9 a.m. – noon (full description | registration)
• Thursday, Sept. 28, 9 a.m. – noon (full description | registration)
Location:
East Hall, Room B250, 530 Church St.

Introduction to the Flux cluster and batch computing
This workshop will provide a brief overview of the components of the Flux cluster, including the resource manager and scheduler, and will offer students hands-on experience.
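For a sense of the hands-on material, here is a minimal batch script sketch, assuming Flux's Torque/PBS environment; the allocation name example_flux and the script filename are hypothetical:

    #!/bin/bash
    #PBS -N hello_flux                        # job name
    #PBS -l nodes=1:ppn=1,walltime=00:10:00   # one core for ten minutes
    #PBS -A example_flux                      # hypothetical allocation name
    #PBS -q flux                              # submission queue

    cd $PBS_O_WORKDIR                         # start in the submission directory
    echo "Hello from $(hostname)"

A script like this would be submitted with "qsub hello.pbs".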
Dates: (Please sign up for only one)
• Thursday, Sept. 28, 1 – 4 p.m. (full description | registration)
• Monday, Oct. 2, 9 a.m. – noon (full description | registration)
Location:
East Hall, Room B254, 530 Church St.

Advanced batch computing on the Flux cluster
This course will cover advanced areas of cluster computing on the Flux cluster, including common parallel programming models, dependent and array scheduling, among other topics.
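As a hedged illustration of the scheduling topics named above (Torque-style flags; the script names and job ID are made up):

    # Array scheduling: submit ten tasks; each sees its index in $PBS_ARRAYID
    qsub -t 1-10 worker.pbs

    # Dependent scheduling: start postprocess.pbs only after job 12345 exits successfully
    qsub -W depend=afterok:12345 postprocess.pbs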
Dates: (Please sign up for only one)
• Tuesday, Oct. 10, 1 – 5 p.m. (full description | registration)
• Thursday, Oct. 12, 9 a.m. – noon (full description | registration)
Location:
East Hall, Room B254, 530 Church St.

Hadoop Workshop
Learn how to process large amounts (up to terabytes) of data using SQL and/or simple programming models available in Python, Scala, and Java.
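As a small taste of the SQL side (assuming a Hive installation on the Hadoop cluster; the table name is made up):

    # Run a Hive SQL query from the command line
    hive -e "SELECT COUNT(*) FROM my_table;"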
Date:
• Tuesday, Oct. 31, 1 – 5 p.m. (full description | registration)
Location:
East Hall, Room B254, 530 Church St.

Summer HPC maintenance

To accommodate equipment repairs and upgrades to software, hardware, and operating systems, Flux, Armis, ConFlux, Flux Hadoop, and their storage systems (/home and /scratch) will be unavailable starting at 6 a.m. Saturday, July 29, returning to service on Wednesday, August 2.

During this time, the following updates are planned:

  • Annual power maintenance at the Modular Data Center; all systems will be powered off (Flux/Armis/Flux Hadoop)
  • Campus network hardware and software updates (Flux/Armis/Flux Hadoop)
  • InfiniBand networking firmware and software updates (Flux/Armis/ConFlux)
  • Operating system and software updates (all clusters)
  • Resource manager and job scheduling software updates (Flux/Armis)
  • Migration of NFS volumes, including /home, from Value Storage to Turbo (Flux)
  • Hardware and software updates for the Lustre file systems that provide /scratch (Flux)

For Flux HPC jobs, you can use the command “maxwalltime” to discover the amount of time remaining until the beginning of the maintenance. Jobs requesting more walltime than remains before the maintenance will be queued and started after the maintenance is completed.
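As a sketch of how this fits into a typical submission (assuming Flux's Torque/PBS environment; "myjob.pbs" is a hypothetical job script):

    # Check how much walltime remains before the maintenance window begins
    maxwalltime

    # Request a walltime that fits inside the remaining window (here, 24 hours);
    # jobs requesting more time than remains will be held until after maintenance
    qsub -l walltime=24:00:00 myjob.pbs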

All Flux, Armis, ConFlux, and Flux Hadoop filesystems will be unavailable during the maintenance. We encourage you to copy any data that might be needed during that time from Flux prior to the start of the maintenance.

We will post status updates on our Twitter feed ( https://twitter.com/arcts_um ) throughout the course of the maintenance and send an email to all HPC and Hadoop users when the maintenance has been completed.

ARC-TS seeks input on next generation HPC cluster

The University of Michigan is beginning the process of building our next generation HPC platform, “Big House.” Flux, the shared HPC cluster, has reached the end of its useful life. Flux has served us well for more than five years, but as we move forward with replacement, we want to make sure we’re meeting the needs of the research community.

ARC-TS will be holding a series of town halls to take input from faculty and researchers on the next HPC platform to be built by the University. These town halls are open to anyone and will be held at:

  • College of Engineering, Johnson Room: Tuesday, June 20, 9 – 10 a.m.
  • NCRC Building 300, Room 376: Wednesday, June 21, 11 a.m. – noon
  • LSA, Room 2001: Tuesday, June 27, 10 – 11 a.m.
  • Med Sci I, Room 3114: Wednesday, June 28, 2 – 3 p.m.

Your input will help ensure that U-M stays on course in providing HPC resources, so we hope you will make time to attend one of these sessions. If you cannot attend, please email hpc-support@umich.edu with any input you want to share.

Application container software installed on Flux and Armis

Singularity, a new “application container” system, has been installed on the Flux and Armis HPC clusters. An application container is a program (a single file) that combines an application with the system software it needs to run. This enables applications to run on the clusters even when the cluster’s system software differs. For example, an older application that is needed to finish a project can continue to be used even if it is incompatible with the updated cluster, and an application that needs a different Linux distribution can be containerized to run on the cluster.

Singularity containers cannot be created on Flux or Armis, but they can be created elsewhere and brought to the clusters to run. Singularity provides tools to convert Docker containers for use on Flux and Armis. Please contact hpc-support@umich.edu if you are interested in using Singularity and would like more information about how to create and run Singularity containers, or would like a referral to unit support staff who can help.
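As a rough sketch of the workflow (assuming a Singularity 2.x-style command set; the image name and the resulting filename are illustrative assumptions):

    # On your own Linux machine, convert a Docker Hub image to a Singularity image
    singularity pull docker://ubuntu:16.04

    # Copy the resulting image file to the cluster, then run a command inside it
    # (the filename below is what a 2.x-era pull typically produces)
    singularity exec ubuntu-16.04.img cat /etc/os-release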

Information about Singularity on Flux and Armis is available at http://arc-ts.umich.edu/software/singularity, and about Singularity itself at http://singularity.lbl.gov/.

HPC Maintenance

To accommodate upgrades to software and operating systems, Flux, Armis, and their storage systems (/home and /scratch) will be unavailable starting at 9 a.m. Saturday, January 7, returning to service on Monday, January 9. Additionally, external Turbo mounts will be unavailable from 11 p.m. Saturday, January 7, until 7 a.m. Sunday, January 8.

During this time, the following updates are planned:

  • Operating system and software updates (minor updates) on Flux and Armis. This should not require any changes to user software or processes.
  • Resource manager and job scheduling software updates.
  • Operating system updates on Turbo.

For HPC jobs, you can use the command “maxwalltime” to discover the amount of time remaining until the beginning of the maintenance. Jobs that cannot complete before the maintenance begins will be held and started when the clusters are returned to service.

We will post status updates on our Twitter feed ( https://twitter.com/arcts_um ) and send an email to all HPC users when the outage has been completed.

HPC User Meetup

Users of high performance computing resources are invited to meet ARC-TS HPC operators and support staff in person.

There is no set agenda; come at any time and stay as long as you please. You can talk about your use of any sort of computational resource: Flux, Armis, Hadoop, XSEDE, Amazon, or others.

Ask any questions you may have. The ARC-TS staff will work with you on your specific projects, or just show you new things that can help you optimize your research.

This is also a good time to meet other researchers doing similar work.

This is open to anyone interested; it is not limited to Flux users.

Examples of potential topics:

  • What ARC-TS services are available, and how do I access them?
  • I want to do X; do you have software capable of it?
  • What is special about GPUs, Xeon Phi, and other accelerators?
  • Are there resources for people without budgets?
  • I want to apply for grant X, but it has certain limitations; what support can ARC-TS provide?
  • I want to learn more about compilers and debugging.
  • I want to learn more about performance tuning; can you look at my code with me?
  • Etc.