Application container software installed on Flux and Armis


Singularity, a new “application container” system, has been installed on the Flux and Armis HPC clusters. An application container is a single file that bundles an application together with the system software it needs to run. This allows applications to run on the clusters even when the clusters’ system software differs from what the application expects. For example, an older application needed to finish a project can continue to be used even if it is incompatible with the updated cluster, and an application that requires a different Linux distribution can be containerized to run on the cluster.

Singularity containers cannot be created on Flux or Armis, but containers created elsewhere can be brought to the clusters to run. Singularity also provides tools to convert Docker containers for use on Flux and Armis. Please contact hpc-support@umich.edu if you are interested in using Singularity and would like more information about creating and running Singularity containers, or would like a referral to unit support staff who can help.
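As a rough sketch of that workflow (the image name, Docker source, and application path below are placeholders, and the exact commands may vary with your Singularity version):

    # On a Linux machine where you have root access -- containers
    # cannot be built on Flux or Armis:
    singularity create myapp.img
    singularity import myapp.img docker://ubuntu:16.04
    # ...install or copy your application into the container...

    # Copy myapp.img to the cluster, then run the containerized application there:
    singularity exec myapp.img /opt/myapp/run --input data.txt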

Information about Singularity on Flux and Armis can be found at http://arc-ts.umich.edu/software/singularity; information about Singularity itself is at http://singularity.lbl.gov/

Combining simulation and experimentation yields complex nanoparticle crystal


The most complex crystal designed and built from nanoparticles has been reported by researchers at Northwestern University and the University of Michigan. The work demonstrates that some of nature’s most complicated structures can be deliberately assembled if researchers can control the shapes of the particles and the way they connect using DNA.

The U-M researcher is Sharon C. Glotzer, the John W. Cahn Distinguished University Professor of Engineering and the Stuart W. Churchill Collegiate Professor of Chemical Engineering. The work is published in the March 3 issue of Science. ARC’s computational resources supported the work.

NVIDIA, IBM info session on new technology for HPC & life science research — Jan 24


Join us for a special IBM High Performance Computing event with NVIDIA!

Dramatic shifts in the information technology industry offer new kinds of performance capabilities and throughput. Professionals in HPC, Deep Learning, Big Data Analytics and Life Sciences are cordially invited to learn more about industry trends & directions and IT solutions from NVIDIA and IBM.

PRESENTERS

  • Brad Davidson – NVIDIA Senior Solutions Architect
  • Janis Landry-Lane – IBM Worldwide Program Director for Genomic Medicine
  • Jane Yu – IBM Worldwide Team Lead, Translational Medicine Solutions

For more information, visit our event page.

Webinar: Writing a Successful XSEDE Allocation Proposal — Jan. 5


The Extreme Science and Engineering Discovery Environment (XSEDE) will introduce users to the process of writing an XSEDE allocation proposal and cover the elements that make a proposal successful. This webinar is recommended for users making the jump from a startup allocation to a research allocation and is highly recommended for new campus champions.

Registration: https://www.xsede.org/web/xup/course-calendar

Please submit any questions you may have via the Consulting section of the XSEDE User Portal: https://portal.xsede.org/help-desk

Video, slides available from U-M presentations at SC16


Several University of Michigan researchers and research IT staff made presentations at the SC16 conference in Salt Lake City Nov. 13-17. Material from many of the talks is now available for viewing online:

  • Shawn McKee (Physics) and Ben Meekhof (ARC-TS) presented a demonstration of the Open Storage Research Infrastructure (OSiRIS) project at the U-M booth. The demonstration extended the OSiRIS network from its participating institutions in Michigan to the conference center in Utah. Meekhof also presented at a “Birds of a Feather” session on Ceph in HPC environments. More information, including slides, is available on the OSiRIS website.
  • Todd Raeker (ARC-TS) made a presentation on ConFlux, U-M’s new computational physics cluster, at the NVIDIA booth. Slides and video are available.
  • Nilmini Abeyratne, a Ph.D. student in computer science, presented her project “Low Design-Risk Checkpointing Storage Solution for Exascale Supercomputers” at the Doctoral Showcase. A summary, slides, and poster can be viewed on the SC16 website.
  • Jeremy Hallum (ARC-TS) presented information on the Yottabyte Research Cloud at the U-M booth. His slides are available here.

Other U-M activity at the conference included Sharon Broude Geva, Director of Advanced Research Computing, participating in a panel titled “HPC Workforce Development: How Do We Find Them, Recruit Them, and Teach Them to Be Today’s Practitioners and Tomorrow’s Leaders?”, and Quentin Stout (EECS) and Christiane Jablonowski (CLASP) teaching the “Parallel Computing 101” tutorial.

HPC maintenance scheduled for January 7 – 9


To accommodate upgrades to software and operating systems, Flux, Armis, and their storage systems (/home and /scratch) will be unavailable starting at 9am Saturday, January 7th, returning to service on Monday, January 9th. Additionally, external Turbo mounts will be unavailable from 11pm Saturday, January 7th, until 7am Sunday, January 8th.

During this time, the following updates are planned:

  • Operating system and software updates (minor updates) on Flux and Armis. This should not require any changes to user software or processes.
  • Resource manager and job scheduling software updates.
  • Operating system updates on Turbo.

For HPC jobs, you can use the “maxwalltime” command to see how much time remains before the maintenance begins, as shown in the example below. Jobs that cannot complete prior to the start of the maintenance will be held and will start when the clusters are returned to service.
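For example (the walltime value and job script name below are placeholders):

    # Show how much walltime is available before the maintenance window begins:
    maxwalltime

    # Request a walltime that fits within that window when submitting a job:
    qsub -l walltime=24:00:00 myjob.pbs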

We will post status updates on our Twitter feed ( https://twitter.com/arcts_um ) and send an email to all HPC users when the outage has been completed.

NVIDIA accepting applications for Graduate Fellowship Program


NVIDIA has launched its 16th Annual Graduate Fellowship Program, which awards grants and technical support to graduate students who are doing outstanding GPU-based research.

This year NVIDIA is especially seeking doctoral students pushing the envelope in artificial intelligence, deep neural networks, autonomous vehicles, and related fields. The Graduate Fellowship awards are now up to $50,000 per student. These grants will be awarded in the 2017-2018 academic year.

Since its inception in 2002, the NVIDIA Graduate Fellowship Program has awarded grants to more than 130 Ph.D. students, helping to accelerate their research efforts.

The NVIDIA Graduate Fellowship Program is open to applicants worldwide. The deadline for submitting applications is Jan. 16, 2017. Eligible graduate students will have already completed their first year of Ph.D.-level studies in computer science, computer engineering, system architecture, electrical engineering, or a related area. In addition, applicants must be engaged in active research as part of their thesis work.

For more information on eligibility and how to apply, visit http://research.nvidia.com/relevant/graduate-fellowship-program or email fellowship@nvidia.com.

Blue Waters accepting proposals for allocations, fellowships, and undergrad internships


The GLCPC (Great Lakes Consortium for Petascale Computation) recently posted its call for proposals. Researchers from member institutions (including the University of Michigan) are eligible to apply for a Blue Waters allocation. The application deadline is Friday, December 2nd. More information can be found at: http://www.greatlakesconsortium.org/2016cfp.htm

Applications are also being accepted for Blue Waters Fellowships. Applications are due February 3, 2017. More information is available at: https://bluewaters.ncsa.illinois.edu/fellowships

Applications are now being accepted for Blue Waters undergraduate internships. Applications are due February 3, 2017. More information is available at: https://bluewaters.ncsa.illinois.edu/internships

Research highlights: Running climate models in the cloud



Can cloud computing systems help make climate models easier to run? Assistant research scientist Xiuhong Chen and MICDE-affiliated faculty member Xianglei Huang, of Climate and Space Sciences and Engineering (CLASP), provide some answers to this question in an upcoming issue of Computers & Geosciences (Vol. 98, Jan. 2017, online publication link: http://dx.doi.org/10.1016/j.cageo.2016.09.014).

Teaming up with co-authors Dr. Chaoyi Jiao and Prof. Mark Flanner, also of CLASP, as well as Brock Palen and Todd Raeker of U-M’s Advanced Research Computing – Technology Services (ARC-TS), they compared the reliability and efficiency of Amazon Web Services’ Elastic Compute Cloud (AWS EC2) with U-M’s Flux high performance computing (HPC) cluster in running the Community Earth System Model (CESM), a flagship U.S. climate model developed by the National Center for Atmospheric Research.

The team was able to run CESM in parallel on an AWS EC2 virtual cluster with minimal packaging and code-compilation effort, finding that AWS EC2 delivered parallelization efficiency comparable to Flux, the U-M HPC cluster, when using up to 64 cores. Beyond 64 cores, communication time between the EC2 virtual instances exceeded the distributed computing time.

Until now, climate and earth systems simulations have relied on numerical model suites that run on thousands of dedicated HPC cores for hours, days, or weeks, depending on the size and scale of the model. Although these HPC resources have the advantage of being supported and maintained by trained IT staff, making them easier to use, they are expensive and not readily available to every investigator who needs them.

Furthermore, the systems within reach are sometimes not large enough to run simulations at the desired scale. Commercial cloud systems, on the other hand, are cheaper, accessible to everyone, and have grown significantly in the last few years. One potential drawback of cloud systems is that users must install all the required software themselves and supply the IT expertise needed to run the simulation packages.

Chen and Huang’s work represents an important first step in the use of cloud computing for large-scale climate simulations. Cloud computing systems can now be considered a viable alternative to traditional HPC clusters for computational research, potentially allowing researchers to leverage the computational power offered by a cloud environment.

This study was sponsored by the Amazon Climate Initiative through a grant awarded to Prof. Huang. The local simulations at U-M were made possible by a DOE grant awarded to Prof. Huang.

Top image: http://www.cesm.ucar.edu/