Jesse Capecelatro, assistant professor of Mechanical Engineering and MICDE affiliated faculty member, has been awarded an NSF CAREER grant for his project “Toward Understanding and Modeling Turbulent Reacting Particle-Laden Flows.”
Dr. Sharon Broude Geva, Director of Advanced Research Computing at U-M, was one of four women profiled in HPCWire’s “Celebrating Women in Science” article.
Read the piece at https://www.hpcwire.com/2019/02/11/women-science-leading-the-way-in-hpc/
U-M is seeking an XSEDE Student Champion to provide outreach on campus to help users access the advanced computing resources best suited to their research goals, provide training to users on campus, or work on special projects assigned by a mentor.
U-M is offering a new, campus-wide license for MATLAB, Simulink, and companion products. All faculty, researchers, and students are eligible to download and install these products, including toolboxes such as:
- Bioinformatics Toolbox
- Control System Toolbox
- Curve Fitting Toolbox
- Data Acquisition Toolbox
- Image Processing Toolbox
- Instrument Control Toolbox
- Optimization Toolbox
- Parallel Computing Toolbox
- Signal Processing Toolbox
- Simscape Multibody
- Simulink Control Design
- Statistics and Machine Learning Toolbox
- Symbolic Math Toolbox
Access free, self-paced training to get started in less than 2 hours: MATLAB Onramp.
Commercial use of MathWorks products is not covered by our TAH license, so if you are using a commercial license, please continue to do so.
H.V. Jagadish has been appointed director of the Michigan Institute for Data Science (MIDAS), effective February 15, 2019.
Jagadish, the Bernard A. Galler Collegiate Professor of Electrical Engineering and Computer Science at the University of Michigan, was one of the initiators of an earlier concept of a data science initiative on campus. With support from all academic units and the Institute for Social Research, the Office of the Provost and Office of the Vice President for Research, MIDAS was established in 2015 as part of the university-wide Data Science Initiative to promote interdisciplinary collaboration in data science and education.
“I have a longstanding passion for data science, and I understand its importance in addressing a variety of important societal issues,” Jagadish said. “As the focal point for data science research at Michigan, I am thrilled to help lead MIDAS into its next stage and further expand our data science efforts across disciplines.”
Jagadish replaces MIDAS co-directors Brian Athey and Alfred Hero, who completed their leadership appointments in December 2018.
“Professor Jagadish is a leader in the field of data science, and over the past two decades, he has exhibited national and international leadership in this area,” said S. Jack Hu, U-M vice president for research. “His leadership will help continue the advancement of data science methodologies and the application of data science in research in all disciplines.”
MIDAS has built a cohort of 26 active core faculty members and more than 200 affiliated faculty members who span all three U-M campuses. Institute funding has catalyzed several multidisciplinary research projects in health, transportation, learning analytics, social sciences and the arts, many of which have generated significant external funding. MIDAS also plays a key role in establishing new educational opportunities, such as the graduate certificate in data science, and provides additional support for student groups, including one team that used data science to help address the Flint water crisis.
As director, Jagadish aims to expand the institute’s research focus and strengthen its partnerships with industry.
“The number of academic fields taking advantage of data science techniques and tools has been growing dramatically,” Jagadish said. “Over the next several years, MIDAS will continue to leverage the university’s strengths in data science methodologies to advance research in a wide array of fields, including the humanities and social sciences.”
Jagadish joined U-M in 1999. He previously led the Database Research Department at AT&T Labs.
His research, which focuses on information management, has resulted in more than 200 journal articles and 37 patents. Jagadish is a fellow of the Association for Computing Machinery and the American Association for the Advancement of Science, and he served nine years on the Computing Research Association board.
Flux, Beta, Armis, Cavium, and ConFlux, along with their storage systems (/home and /scratch), are back online after three days of maintenance. The completed updates will improve the performance and stability of ARC-TS services.
The following maintenance tasks were completed:
- Preventative maintenance at the Modular Data Center (MDC), which required a full power outage
- InfiniBand networking updates (firmware and software)
- Ethernet networking updates (data center distribution layer switches)
- Operating system and software updates
- Migration of Turbo networking to new switches (affects /home and /sw)
- Consistency checks on the Lustre file systems that provide /scratch
- Firmware and software updates for the GPFS file systems (ConFlux, starting 9 a.m., Monday, Jan. 7)
- Consistency checks on the GPFS file systems that provide /gpfs (ConFlux, starting 9 a.m., Monday, Jan. 7)
Please contact firstname.lastname@example.org if you have any questions.
Motivated by Ceph usage in the OSiRIS project, the University of Michigan has joined the Ceph Foundation as an Associate Member. We join other educational, government, and research organizations engaged in the Ceph Foundation at this membership level.
From the Foundation website: The Ceph Foundation exists to enable industry members to collaborate and pool resources to support the Ceph project community. The Foundation provides an open, collaborative, and neutral home for project stakeholders to coordinate their development and community investments in the Ceph ecosystem.
What is Great Lakes?
The Great Lakes service is a next-generation HPC platform for University of Michigan researchers. Great Lakes will provide several performance advantages over Flux, primarily in the areas of storage and networking. Great Lakes is built around Intel’s latest CPU architecture, Skylake, and will have standard, large-memory, visualization, and GPU-accelerated nodes. For more information on the technical aspects of Great Lakes, please see the Great Lakes configuration page.
- Approximately 13,000 Intel Skylake Gold cores with AVX-512 capability, providing over 1.5 TFLOPS of performance per node
- 2 PB scratch storage system providing approximately 80 GB/s of performance (compared to 8 GB/s on Flux)
- New InfiniBand network with improved architecture and 100 Gb/s to each node
- Significantly faster I/O on each compute node via SSD-accelerated storage
- Large-memory nodes with 1.5 TB of memory per node
- GPU nodes with NVIDIA Volta V100 GPUs (2 GPUs per node)
- Visualization nodes with NVIDIA Tesla P40 GPUs
Great Lakes will use Slurm as its resource manager and scheduler, replacing Torque and Moab on Flux. This will be the most immediate difference between the two clusters and will require some work on your part to transition from Flux to Great Lakes.
Another significant change is that we are making Great Lakes easier to use through a simplified accounting structure. Unlike Flux, where you need an account for each resource, on Great Lakes you can use the same account and simply request the resources you need, from GPUs to large memory.
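As a concrete sketch, a Slurm batch script under this simplified accounting model might look like the following. This is only an illustration: the account name `example0`, the partition name, and the resource values are placeholders, not confirmed Great Lakes settings.

```shell
# Illustrative Slurm batch script for the Great Lakes accounting model.
# The account "example0" and partition "standard" are placeholders,
# not confirmed Great Lakes values.
cat > my_job.sbat <<'EOF'
#!/bin/bash
#SBATCH --job-name=my_analysis
#SBATCH --account=example0
#SBATCH --partition=standard
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --mem=8g
#SBATCH --time=01:00:00
srun ./my_program
EOF
# On the cluster, this would be submitted with: sbatch my_job.sbat
```

The point of the simplified structure is that the same `--account` value would be reused when requesting other resource types — for example, changing the partition for a GPU or large-memory job — rather than maintaining a separate account per resource as on Flux.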
There will be two primary ways to get access to compute time: 1) a pay-as-you-go model similar to Flux On-Demand, and 2) node purchases. Node purchases will give you computational time equivalent to four years multiplied by the number of nodes you buy. We believe this will be preferable to buying actual hardware in the FOE model, as your daily computational usage can increase and decrease as your research requires. Additionally, you will not be limited by hardware failures on your specific nodes, as your jobs can run anywhere on Great Lakes. Send us an email at email@example.com if you have any questions or are interested in purchasing hardware on Great Lakes.
When will Great Lakes be available?
The ARC-TS team will prepare the cluster in February/March 2019 for an Early User period, which will continue for several weeks to ensure sufficient time to address any issues. General availability of Great Lakes should follow in April.
How does this impact me? Why Great Lakes?
After being the primary HPC cluster for the University for 8 years, Flux will be retired in September 2019. Once Great Lakes becomes available to the University community, we will provide a few months to transition from Flux to Great Lakes. Flux will be retired after that period due to aging hardware as well as expiring service contracts and licenses. We highly recommend preparing to migrate as early as possible so your research will not be interrupted. Later in this email, we have suggestions for what you can do to make this migration process as easy as possible.
When Great Lakes becomes generally available to the University community, we will no longer be accepting new Flux accounts or allocations. All new work should be focused on Great Lakes.
What is the current status of Great Lakes?
Today, the Great Lakes HPC compute hardware is fully installed, and configuration of the high-performance storage system is in progress. In parallel with this work, ARC-TS and Unit Support team members have been readying the new service with new software and modules, as well as developing training to support the transition to Great Lakes. A key feature of the new Great Lakes service is the just-released HDR InfiniBand from Mellanox. The hardware is available today, but the firmware is still in its final stages of testing with the supplier, with a target delivery date of March 2019. Given this delay, ARC-TS and the suppliers have agreed on an adjusted plan that allows quicker access to the cluster while supporting a future update once the firmware becomes available.
What should I do to transition to Great Lakes?
We hope the transition from Flux to Great Lakes will be relatively straightforward, but to minimize disruptions to your research, we recommend you begin testing early. In October, we announced availability of the HPC cluster Beta to help users with this migration; its primary purpose is to let users migrate their PBS/Torque job submission scripts to Slurm. You can also explore the new module environments, which have changed from their current configuration on Flux. Beta uses the same generation of hardware as Flux, so performance will be similar to that on Flux. You should continue to use Flux for your production work; Beta is only for testing your Slurm job scripts, not for any production work.
Every user on Flux has an account on Beta. You can log in to Beta at beta.arc-ts.umich.edu. You will have a new home directory on Beta, so you will need to migrate any scripts and data files you need to test your workloads into this new directory. Beta should not be used for any PHI, HIPAA, export-controlled, or other sensitive data! We highly recommend that you use this time to convert your Torque scripts to Slurm and verify that everything works as you would expect.
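For most scripts, the conversion from PBS/Torque directives to Slurm directives is largely mechanical. The snippet below is a rough illustration of the correspondence, not an official migration tool; it covers only a few frequent directives and assumes GNU sed.

```shell
# Rough illustration of how common PBS/Torque directives map to Slurm.
# Not an official conversion tool; covers only a few frequent directives
# and assumes GNU sed.
cat > pbs_job.sh <<'EOF'
#PBS -N my_analysis
#PBS -l nodes=2:ppn=8
#PBS -l walltime=02:00:00
#PBS -l mem=16gb
EOF
sed -E \
  -e 's/^#PBS -N (.*)/#SBATCH --job-name=\1/' \
  -e 's/^#PBS -l nodes=([0-9]+):ppn=([0-9]+)/#SBATCH --nodes=\1\n#SBATCH --ntasks-per-node=\2/' \
  -e 's/^#PBS -l walltime=(.*)/#SBATCH --time=\1/' \
  -e 's/^#PBS -l mem=(.*)/#SBATCH --mem=\1/' \
  pbs_job.sh > slurm_job.sh
cat slurm_job.sh
```

Real job scripts also differ in environment variables (e.g. `$PBS_O_WORKDIR` vs `$SLURM_SUBMIT_DIR`) and submission commands (`qsub` vs `sbatch`), so each converted script should still be tested on Beta rather than translated blindly.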
To learn how to use Slurm, we have provided documentation on our Beta website. Additionally, ARC-TS and academic unit support teams will be offering training sessions around campus. We will post a schedule on the ARC-TS website and announce new sessions through Twitter and email.
If you have compiled software for use on Flux, we highly recommend recompiling it on Great Lakes once the cluster becomes available. Great Lakes uses the latest CPUs from Intel, and recompiled code may see performance gains by taking advantage of the new CPUs’ capabilities.
Questions? Need Assistance?
Beginning in January 2019, most of CSCAR’s workshops will be offered free of charge to U-M students, faculty, and staff.
CSCAR is able to do this thanks to funding from U-M’s Data Science Initiative. Registration for CSCAR workshops is still required, and seats are limited.
CSCAR requests that participants cancel their registration if they decide not to attend a workshop for which they have previously registered.
Note that a small number of workshops hosted by CSCAR but taught by non-CSCAR personnel will continue to have a fee, and fees will continue to apply for people who are not U-M students, faculty, or staff.
Eric Michielssen will step down from his position as Associate Vice President for Research – Advanced Research Computing on December 31, 2018, after serving in that leadership role for almost six years. Dr. Michielssen will return to his faculty role in the Department of Electrical Engineering and Computer Science in the College of Engineering.
Under his leadership, Advanced Research Computing has helped empower computational discovery through the Michigan Institute for Computational Discovery and Engineering (MICDE), the Michigan Institute for Data Science (MIDAS), Advanced Research Computing-Technology Services (ARC-TS) and Consulting for Statistics, Computing and Analytics Research (CSCAR).
In 2015, Eric helped launch the university’s $100 million Data Science initiative, which enhances opportunities for researchers across campus to tap into the enormous potential of big data. He also serves as co-director of the university’s Precision Health initiative, launched last year to harness campus-wide research aimed at finding personalized solutions to improve the health and wellness of individuals and communities.
The Office of Research will convene a group to assess the University’s current and emerging needs in the area of research computing and how best to address them.