NVIDIA, IBM info session on new technology for HPC & life science research — Jan 24

By | Educational, General Interest, News | No Comments

Join us for a special IBM High Performance Computing event with NVIDIA!

Dramatic shifts in the information technology industry offer new kinds of performance capabilities and throughput. Professionals in HPC, Deep Learning, Big Data Analytics and Life Sciences are cordially invited to learn more about industry trends & directions and IT solutions from NVIDIA and IBM.

PRESENTERS

  • Brad Davidson – NVIDIA Senior Solutions Architect
  • Janis Landry-Lane – IBM Worldwide Program Director for Genomic Medicine
  • Jane Yu – IBM Worldwide Team Lead, Translational Medicine Solutions

For more information, visit our event page.

Webinar: Writing a Successful XSEDE Allocation Proposal — Jan. 5

By | Educational, General Interest, News | No Comments

The Extreme Science and Engineering Discovery Environment (XSEDE) will introduce users to the process of writing an XSEDE allocation proposal and cover the elements that make a proposal successful. This webinar is recommended for users making the jump from a startup allocation to a research allocation and is highly recommended for new campus champions.

Registration: https://www.xsede.org/web/xup/course-calendar

Please submit any questions you may have via the Consulting section of the XSEDE User Portal.

https://portal.xsede.org/help-desk

Video, slides available from U-M presentations at SC16

By | Events, General Interest, News | No Comments

Several University of Michigan researchers and research IT staff made presentations at the SC16 conference in Salt Lake City Nov. 13-17. Material from many of the talks is now available for viewing online:

  • Shawn McKee (Physics) and Ben Meekhof (ARC-TS) presented a demonstration of the Open Storage Research Infrastructure (OSiRIS) project at the U-M booth. The demonstration extended the OSiRIS network from its participating institutions in Michigan to the conference center in Utah. Meekhof also presented at a “Birds of a Feather” session on Ceph in HPC environments. More information, including slides, is available on the OSiRIS website.
  • Todd Raeker (ARC-TS) made a presentation on ConFlux, U-M’s new computational physics cluster, at the NVIDIA booth. Slides and video are available.
  • Nilmini Abeyratne, a Ph.D. student in computer science, presented her project “Low Design-Risk Checkpointing Storage Solution for Exascale Supercomputers” at the Doctoral Showcase. A summary, slides, and poster can be viewed on the SC16 website.
  • Jeremy Hallum (ARC-TS) presented information on the Yottabyte Research Cloud at the U-M booth. His slides are available here.

Other U-M activity at the conference included Sharon Broude Geva, Director of Advanced Research Computing, participating in a panel titled “HPC Workforce Development: How Do We Find Them, Recruit Them, and Teach Them to Be Today’s Practitioners and Tomorrow’s Leaders?”; and Quentin Stout (EECS) and Christiane Jablonowski (CLASP) teaching the “Parallel Computing 101” tutorial.

HPC maintenance scheduled for January 7 – 9

By | Flux, General Interest, News | No Comments

To accommodate upgrades to software and operating systems, Flux, Armis, and their storage systems (/home and /scratch) will be unavailable starting at 9am Saturday, January 7th, returning to service on Monday, January 9th.  Additionally, external Turbo mounts will be unavailable 11pm Saturday, January 7th, until 7am Sunday, January 8th.

During this time, the following updates are planned:

  • Operating system and software updates (minor updates) on Flux and Armis.  This should not require any changes to user software or processes.
  • Resource manager and job scheduling software updates.
  • Operating system updates on Turbo.

For HPC jobs, you can use the command “maxwalltime” to see how much time remains before the maintenance begins. Jobs that cannot complete before the maintenance begins will be held and started once the clusters are returned to service.
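The scheduling rule above can be sketched in a few lines of Python. This is a hypothetical illustration only: the function names are invented for this sketch, and it assumes walltimes in the common HH:MM:SS convention (the actual output format of the maxwalltime command may differ).

```python
# Hypothetical sketch: a job can start now only if its requested walltime
# fits within the time remaining before the maintenance window begins.
# Walltimes are assumed to be "HH:MM:SS" strings (illustrative convention).

def to_seconds(walltime: str) -> int:
    """Convert an 'HH:MM:SS' walltime string to a number of seconds."""
    hours, minutes, seconds = (int(part) for part in walltime.split(":"))
    return hours * 3600 + minutes * 60 + seconds

def fits_before_maintenance(requested: str, remaining: str) -> bool:
    """Return True if the requested walltime fits in the time remaining."""
    return to_seconds(requested) <= to_seconds(remaining)

# Example: a 36-hour job with 48 hours until maintenance can start now;
# a 72-hour job cannot, and would be held until after the outage.
print(fits_before_maintenance("36:00:00", "48:00:00"))  # True
print(fits_before_maintenance("72:00:00", "48:00:00"))  # False
```

In practice you would compare the walltime you request at submission time against the value maxwalltime reports, and either shorten the request or expect the job to wait.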

We will post status updates on our Twitter feed ( https://twitter.com/arcts_um ) and send an email to all HPC users when the outage has been completed.

NVIDIA accepting applications for Graduate Fellowship Program

By | Educational, Funding Opportunities, General Interest, News | No Comments

NVIDIA has launched its 16th Annual Graduate Fellowship Program, which awards grants and technical support to graduate students who are doing outstanding GPU-based research.

This year NVIDIA is especially seeking doctoral students pushing the envelope in artificial intelligence, deep neural networks, autonomous vehicles, and related fields. The Graduate Fellowship awards are now up to $50,000 per student. These grants will be awarded in the 2017-2018 academic year.

Since its inception in 2002, the NVIDIA Graduate Fellowship Program has awarded grants to more than 130 Ph.D. students, helping to accelerate their research efforts.

The NVIDIA Graduate Fellowship Program is open to applicants worldwide. The deadline for submitting applications is Jan. 16, 2017. Eligible graduate students will have already completed their first year of Ph.D.-level studies in computer science, computer engineering, system architecture, electrical engineering, or a related area. In addition, applicants must be engaged in active research as part of their thesis work.

For more information on eligibility and how to apply, visit http://research.nvidia.com/relevant/graduate-fellowship-program or email fellowship@nvidia.com.

Blue Waters accepting proposals for allocations, fellowships, and undergrad internships

By | Educational, General Interest, News | No Comments

The GLCPC (Great Lakes Consortium for Petascale Computation) recently posted its call for proposals. Researchers from member institutions (including the University of Michigan) are eligible to apply for a Blue Waters allocation.  The application deadline is Friday, December 2nd.  More information can be found at: http://www.greatlakesconsortium.org/2016cfp.htm

Applications are also being accepted for Blue Waters Fellowships. Applications are due February 3, 2017. More information is available at: https://bluewaters.ncsa.illinois.edu/fellowships

Applications are now being accepted for Blue Waters undergraduate internships. Applications are due February 3, 2017.  More information is available at: https://bluewaters.ncsa.illinois.edu/internships

HPC User Meetups set for October, November and December

By | Educational, Events, General Interest | No Comments

Users of high performance computing resources are invited to meet ARC-TS HPC operators and support staff in person at an upcoming user meeting:

  • Monday, October 17, 1:10 – 5 p.m., 2001 LSA Building (500 S. State St.)
  • Wednesday, November 9, 1 – 5 p.m., 1180 Duderstadt Center (2281 Bonisteel Blvd., North Campus)
  • Monday, December 12, 1 – 5 p.m., 4515 Biomedical Science Research Building (BSRB, 109 Zina Pitcher Pl.)

There is no set agenda; come at any time and stay as long as you please. You can talk about your use of any sort of computational resource (Flux, Armis, Hadoop, XSEDE, Amazon, or others).

Ask any questions you may have. The ARC-TS staff will work with you on your specific projects, or just show you new things that can help you optimize your research.

This is also a good time to meet other researchers doing similar work.

This is open to anyone interested; it is not limited to Flux users.

Examples of potential topics:

  • What ARC-TS services are there, and how do I access them?
  • I want to do X, do you have software capable of it?
  • What is special about GPU/Xeon Phi/Accelerators?
  • Are there resources for people without budgets?
  • I want to apply for grant X, but it has certain limitations. What support can ARC-TS provide?
  • I want to learn more about compilers and debugging.
  • I want to learn more about performance tuning; can you look at my code with me?
  • Etc.

Research highlights: Running climate models in the cloud

By | General Interest, News, Research | No Comments

Xianglei Huang

Can cloud computing systems help make climate models easier to run? Assistant research scientist Xiuhong Chen and MICDE affiliated faculty Xianglei Huang, from Climate and Space Sciences and Engineering (CLASP), provide some answers to this question in an upcoming issue of Computers & Geosciences (Vol. 98, Jan. 2017, online publication link: http://dx.doi.org/10.1016/j.cageo.2016.09.014).

Teaming up with co-authors Dr. Chaoyi Jiao and Prof. Mark Flanner, also in CLASP, as well as Brock Palen and Todd Raeker from U-M’s Advanced Research Computing – Technology Services (ARC-TS), they compared the reliability and efficiency of Amazon Web Services’ Elastic Compute Cloud (AWS EC2) with U-M’s Flux high performance computing (HPC) cluster in running the Community Earth System Model (CESM), a flagship U.S. climate model developed by the National Center for Atmospheric Research.

The team was able to run the CESM in parallel on an AWS EC2 virtual cluster with minimal packaging and code-compiling effort, finding that AWS EC2 can achieve a parallelization efficiency comparable to that of Flux, the U-M HPC cluster, when using up to 64 cores. Beyond 64 cores, the communication time between virtual EC2 instances exceeded the distributed computing time.
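Parallelization (strong-scaling) efficiency, the metric behind this comparison, is conventionally defined as E(n) = T(1) / (n · T(n)), where T(n) is the runtime on n cores. A minimal sketch with invented timings (not the study’s data) shows how efficiency falls off as communication overhead grows with core count:

```python
# Strong-scaling parallel efficiency: E(n) = T(1) / (n * T(n)),
# where T(n) is the wall-clock runtime on n cores.
# The runtimes below are invented for illustration, not taken from the study.

def parallel_efficiency(t1: float, tn: float, n: int) -> float:
    """Efficiency of an n-core run relative to an ideal linear speedup."""
    return t1 / (n * tn)

runtimes = {1: 6400.0, 16: 440.0, 64: 125.0, 128: 90.0}  # seconds (hypothetical)
for n, tn in runtimes.items():
    print(f"{n:4d} cores: efficiency = {parallel_efficiency(runtimes[1], tn, n):.2f}")
```

With these made-up numbers, efficiency stays near 0.8 or better up to 64 cores and then drops, mirroring the qualitative pattern the study reports: past some core count, each added core contributes more communication overhead than computation.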

Until now, climate and earth systems simulations have relied on numerical model suites that run on thousands of dedicated HPC cores for hours, days, or weeks, depending on the size and scale of each model. Although these HPC resources have the advantage of being supported and maintained by trained IT staff, making them easier to use, they are expensive and not readily available to every investigator who needs them.

Furthermore, the systems within reach are sometimes not large enough to run simulations at the desired scales. Commercial cloud systems, on the other hand, are cheaper, accessible to everyone, and have grown significantly in the last few years. One potential drawback of cloud systems is that users must install all the required software themselves and supply the IT expertise needed to run the simulation packages.

Chen and Huang’s work represents an important first step in the use of cloud computing for large-scale climate simulations. Cloud computing systems can now be considered a viable alternative to traditional HPC clusters for computational research, potentially allowing researchers to leverage the computational power offered by a cloud environment.

This study was sponsored by the Amazon Climate Initiative through a grant awarded to Prof. Huang. The local simulations at U-M were made possible by a DoE grant awarded to Prof. Huang.

Top image: http://www.cesm.ucar.edu/