U-M fosters thriving artificial intelligence and machine learning research


Research using machine learning and artificial intelligence — tools that allow computers to learn about and predict outcomes from massive datasets — has been booming at the University of Michigan. The potential societal benefits being explored on campus are numerous, from on-demand transportation systems to self-driving vehicles to individualized medical treatments to improved battery capabilities.

The ability of computers and machines generally to learn from their environments is having transformative effects on a host of industries — including finance, healthcare, manufacturing, and transportation — and, according to one estimate, could have a global economic impact of $15 trillion.

But as these methods become more accurate and refined, and as the datasets needed become bigger and bigger, keeping up with the latest developments and identifying and securing the necessary resources — whether that means computing power, data storage services, or software development — can be complicated and time-consuming. And that’s not to mention complying with privacy regulations when medical data is involved.

“Machine learning tools have gotten a lot better in the last 10 years,” said Matthew Johnson-Roberson, Assistant Professor of Engineering in the Department of Naval Architecture & Marine Engineering and the Department of Electrical Engineering and Computer Science. “The field is changing now at such a rapid pace compared to what it used to be. It takes a lot of time and energy to stay current.”

Diagram representing the knowledge graph of an artificial intelligence system, courtesy of Jason Mars, assistant professor, Electrical Engineering and Computer Science, U-M

Johnson-Roberson’s research is focused on getting computers and robots to better recognize and adapt to the world, whether in driverless cars or deep-sea mapping robots.

“The goal in general is to enable robots to operate in more challenging environments with high levels of reliability,” he said.

Johnson-Roberson said that U-M has many of the crucial ingredients for success in this area — a deep pool of talented researchers across many disciplines ready to collaborate, flexible and personalized support, and the availability of computing resources that can handle storing the large datasets and heavy computing load necessary for machine learning.

“The people are one of the reasons I came here,” he said. “There’s a broad and diverse set of talented researchers across the university, and I can interface with lots of other domains, whether it’s the environment, health care, transportation or energy.”

“Access to high-powered computing is critical for the computing-intensive tasks, and being able to leverage that is important,” he continued. “One of the challenges is the data. A major driver in machine learning is data, and as the datasets get more and more voluminous, so do the storage needs.”

Yuekai Sun, an assistant professor in the Statistics Department, develops algorithms and other computational tools to help researchers analyze large datasets, for example, in natural language processing. He agreed that being able to work with scientists from many different disciplines is crucial to his research.

“I certainly find the size of Michigan and the inherent diversity that comes with it very stimulating,” he said. “Having people around who are actually working in these application areas helps guide the direction and the questions that you ask.”

Sun is also working on analyzing the potential discriminatory effects of algorithms used in decisions like whether to give someone a loan or to grant prisoners parole.

“If you use machine learning, how do you hold an algorithm or the people who apply it accountable? What does it mean for an algorithm to be fair?” he said. “Can you check whether a particular notion of non-discrimination is satisfied?”

Jason Mars, an assistant professor in the Electrical Engineering and Computer Science department and co-founder of a successful spinoff called Clinc, is applying artificial intelligence to driverless car technology and a mobile banking app that has been adopted by several large financial institutions. The app, called Finie, provides a much more conversational interface between users and their financial information than other apps in the field.

“There is going to be an expansion of the number of problems solved and number of contributions that are AI-based,” Mars said. He predicted that more researchers at U-M will begin exploring AI and ML as they understand the potential.

“It’s going to require having the right partner, the right experts, the right infrastructure, and the best practices of how to use them,” he said.

He added that U-M does a “phenomenal job” in supporting researchers conducting AI and ML research.

“The level of support and service is awesome here,” he said. “Not to mention that the infrastructure is state of the art. We stay relevant to the best techniques and practices out there.”

Advanced Research Computing at U-M, in part through resources from the university-wide Data Science Initiative, provides computing infrastructure, consulting expertise, and support for interdisciplinary research projects to help scientists conducting artificial intelligence and machine learning research.

For example, Consulting for Statistics, Computing and Analytics Research (CSCAR), an ARC unit, has several consultants on staff with expertise in machine learning and predictive analysis with large, complex, and heterogeneous data. CSCAR recently expanded its capacity to support very large-scale machine learning using tools such as Google’s TensorFlow.
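
As a hedged illustration of the kind of work those consultants support, the sketch below trains a tiny TensorFlow model on synthetic data. The dataset, layer sizes, and training settings are placeholders, not a prescribed CSCAR workflow.

```python
# A minimal TensorFlow sketch of the sort of model CSCAR consultants can help
# researchers build and scale; all data and hyperparameters here are
# illustrative placeholders.
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 1,000 samples with 20 features and a binary label.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

# A small feed-forward classifier built with the Keras API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train briefly and report accuracy on a held-out validation split.
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=2)
```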

CSCAR consultants are available by appointment or on a drop-in basis, free of charge. See cscar.research.umich.edu or email cscar@umich.edu for more information.

CSCAR also provides workshops on topics in machine learning and other areas of data science, including sessions on Machine Learning in Python, and an upcoming workshop in March titled “Machine Learning, Concepts and Applications.”

The computing resources available to machine learning and artificial intelligence researchers are significant and diverse. Along with the campus-wide high performance computing cluster known as Flux, the recently announced Cavium ThunderX Big Data cluster will give researchers a powerful new platform for hosting artificial intelligence and machine learning work. Both clusters are provided by Advanced Research Computing – Technology Services (ARC-TS).

All allocations on ARC-TS clusters include access to software packages that support AI/ML research, including TensorFlow, Torch, and Spark ML, among others.

ARC-TS also operates the Yottabyte Research Cloud (YBRC), a customizable computing platform that recently gained the capacity to host and analyze data governed by the HIPAA federal privacy law.

The Michigan Institute for Data Science (MIDAS), another ARC unit, has supported several AI/ML projects through its Challenge Initiative program, which has awarded more than $10 million in research support since 2015.

For example, the Analytics for Learners as People project is using sensor-based machine learning tools to translate academic performance records, social media activity, and survey responses into attributes that will form student profiles. Those profiles will help link academic performance and mental health with the personal attributes of students, including values, beliefs, interests, behaviors, background, and emotional state.

Another example is the Reinventing Public Urban Transportation and Mobility project, which is using predictive models based on machine learning to develop on-demand, multi-modal transportation systems for urban areas.

In addition, MIDAS supports student groups involved in this type of research such as the Michigan Student Artificial Intelligence Lab (MSAIL) and the Michigan Data Science Team (MDST).

(A version of this piece appeared in the University Record.)

Yottabyte Research Cloud able to accept HIPAA-aligned data


Advanced Research Computing – Technology Services (ARC-TS) is pleased to announce that the Yottabyte Research Cloud (YBRC) computing platform is now HIPAA-compliant. This means that YBRC and its associated services can accept restricted data, enabling secure data analysis on Windows and Linux virtual desktops as well as secure hosting of databases and data ingestion.

The new capability ensures the security of restricted data through the creation of firewalled network enclaves, allowing HIPAA-aligned data to be analyzed safely and securely in YBRC’s flexible, robust and scalable environment. Within each network enclave, researchers have access to Windows and Linux virtual desktops that can contain any software required for their analysis pipeline.

This capability also extends to our database and ingestion services:

  • Structured databases: MySQL/MariaDB and PostgreSQL.
  • Unstructured databases: Cassandra, MongoDB, InfluxDB, Grafana, and ElasticSearch.
  • Data ingestion: Redis, Kafka, RabbitMQ.
  • Data processing: Apache Flink, Apache Storm, Node.js and Apache NiFi.
  • Other data services are available upon request.

YBRC is supported by U-M’s Data Science Initiative launched in 2015. YBRC was created through a partnership between Yottabyte and ARC-TS announced last fall.

These tools are offered to all researchers at the University of Michigan free of charge, provided that usage stays within certain limits. Large-scale users who outgrow the no-cost allotment may purchase additional YBRC resources. All interested parties should contact hpc-support@umich.edu.

U-M wraps up successful SC17 conference


Several University of Michigan researchers and professional IT staff attended the Supercomputing 17 (SC17) conference in Denver from Nov. 12-17, participating in a number of different ways, including demonstrations, presentations and tutorials.

U-M participation included:

  • Matt McLean, a Big Data systems administrator with ARC-TS, served as a panelist at a session titled “The ARM Software Ecosystem: Are We There Yet?” (Slides)
  • Jeff Sica, a research database administrator with ARC-TS, helped lead a Birds of a Feather session titled “Containers in HPC.” (Slides)
  • Quentin Stout (EECS) and Christiane Jablonowski (CLASP) taught the “Parallel Computing 101” tutorial.
  • Shawn McKee, U-M Department of Physics, and OSiRIS Principal Investigator, demonstrated Object Storage and Caching for Science (network topology diagrams)
  • Eric Boyd, Director of Research Networks, presented on Research Networking at the University of Michigan at the U-M exhibit booth.
  • Simon Adorf, Ph.D. Candidate, Chemical Engineering Department, U-M, presented on Simple Data and Workflow Management with Signac and GPU-Accelerated Predictive Material Design at the U-M exhibit booth.
  • ARC sponsored a career and networking reception hosted by Women in HPC. ARC Director Sharon Broude Geva spoke at the event.
  • Amy Liebowitz, a network architect at ITS, worked on SCinet, a high-capacity network created every year for the conference. Liebowitz was on the routing team, which is responsible for installing, configuring and supporting the high performance conference network. The routing team also coordinated external connectivity with commodity Internet and R&E WAN service providers.

U-M partners with Cavium on Big Data computing platform


A new partnership between the University of Michigan and Cavium Inc., a San Jose-based provider of semiconductor products, will create a powerful new Big Data computing cluster available to all U-M researchers.

The $3.5 million ThunderX computing cluster will enable U-M researchers to, for example, process massive amounts of data generated by remote sensors in distributed manufacturing environments, or by test fleets of automated and connected vehicles.

The cluster will run the Hortonworks Data Platform providing Spark, Hadoop MapReduce and other tools for large-scale data processing.
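
As a rough illustration of the kind of job this stack supports, the sketch below uses PySpark to aggregate a large collection of sensor readings. The HDFS paths and column names are hypothetical and are not part of the announced platform.

```python
# A minimal PySpark sketch of a large-scale aggregation of the kind the
# Hadoop/Spark stack supports; the HDFS paths and column names are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor-aggregation").getOrCreate()

# Read a (hypothetical) directory of CSV sensor readings from HDFS.
readings = spark.read.csv("hdfs:///data/sensor_readings/*.csv",
                          header=True, inferSchema=True)

# Compute per-sensor daily averages, distributing the work across the cluster.
daily_avg = (readings
             .groupBy("sensor_id", F.to_date("timestamp").alias("day"))
             .agg(F.avg("value").alias("mean_value")))

daily_avg.write.mode("overwrite").parquet("hdfs:///data/sensor_daily_avg")
spark.stop()
```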

“U-M scientists are conducting groundbreaking research in Big Data already, in areas like connected and automated transportation, learning analytics, precision medicine and social science. This partnership with Cavium will accelerate the pace of data-driven research and open up new avenues of inquiry,” said Eric Michielssen, U-M associate vice president for advanced research computing and the Louise Ganiard Johnson Professor of Engineering in the Department of Electrical Engineering and Computer Science.

“I know from experience that U-M researchers are capable of amazing discoveries. Cavium is honored to help break new ground in Big Data research at one of the top universities in the world,” said Cavium founder and CEO Syed Ali, who received a master of science in electrical engineering from U-M in 1981.

Cavium Inc. is a leading provider of semiconductor products that enable secure and intelligent processing for enterprise, data center, wired and wireless networking. The new U-M system will use dual socket servers powered by Cavium’s ThunderX ARMv8-A workload optimized processors.

The ThunderX product family is Cavium’s 64-bit ARMv8-A server processor for next generation Data Center and Cloud applications, and features high performance custom cores, single and dual socket configurations, high memory bandwidth and large memory capacity.

Alec Gallimore, the Robert J. Vlasic Dean of Engineering at U-M, said the Cavium partnership represents a milestone in the development of the College of Engineering and the university.

“It is clear that the ability to rapidly gain insights into vast amounts of data is key to the next wave of engineering and science breakthroughs. Without a doubt, the Cavium platform will allow our faculty and researchers to harness the power of Big Data, both in the classroom and in their research,” said Gallimore, who is also the Richard F. and Eleanor A. Towner Professor, an Arthur F. Thurnau Professor, and a professor both of aerospace engineering and of applied physics.

Along with applications in fields like manufacturing and transportation, the platform will enable researchers in the social, health and information sciences to more easily mine large, structured and unstructured datasets. This will eventually allow researchers, for example, to correlate health outcomes and disease outbreaks with information derived from socioeconomic, geospatial and environmental data streams.

U-M and Cavium chose to run the cluster on the Hortonworks Data Platform, which is based on open source Apache Hadoop. The ThunderX cluster will deliver high-performance computing services for Hadoop analytics and, ultimately, a total of three petabytes of storage space.

“Hortonworks is excited to be a part of forward-leading research at the University of Michigan exploring low-powered, high-performance computing,” said Nadeem Asghar, vice president and global head of technical alliances at Hortonworks. “We see this as a great opportunity to further expand the platform and segment enablement for Hortonworks and the ARM community.”

Potential service disruption for Value Storage maintenance — Dec. 2


The ITS Storage team will be applying an operating system patch to the MiStorage Silver environment, which provides home directories for both Flux and Flux Hadoop. The ITS maintenance window will run from 11 p.m. on December 2 to 7 a.m. on December 3 (8 hours total). This update may disrupt the stability of the nodes and the jobs running on them.

The ITS status page for this incident is here:  http://status.its.umich.edu/report.php?id=141155

For Flux users: we have created a reservation on Flux so no jobs will be running or impacted. We will remove the reservation once the ITS storage team confirms that the update was successful.

For Flux Hadoop users: the scheduler and user logins will be deactivated when the outage starts, and any user currently logged into the cluster will be logged out for the duration of the outage. We will reactivate access once the ITS storage team gives the all-clear.

Status updates will be posted on the ARC-TS Twitter feed: https://twitter.com/arcts_um. If you have any questions, please email us at hpc-support@umich.edu.

CSCAR provides walk-in support for new Flux users


CSCAR now provides walk-in support during business hours for students, faculty, and staff seeking assistance in getting started with the Flux computing environment. CSCAR consultants can walk a researcher through the steps of applying for a Flux account, installing and configuring a terminal client, connecting to Flux, using basic SSH and Unix command-line skills, and obtaining or accessing allocations.

In addition to walk-in support, CSCAR has several staff consultants with expertise in advanced and high performance computing who can work with clients on a variety of topics such as installing, optimizing, and profiling code.  

Support is also available by email at hpc-support@umich.edu.

CSCAR is located in room 3550 of the Rackham Building (915 E. Washington St.). Walk-in hours are from 9 a.m. – 5 p.m., Monday through Friday, except for noon – 1 p.m. on Tuesdays.

See the CSCAR web site (cscar.research.umich.edu) for more information.

Info session: Consulting and computing resources for data science — Nov. 8


Advanced Research Computing at U-M (ARC) will host an information session for graduate students in all disciplines who are interested in new computing and data science resources and services available to U-M researchers.

Members of ARC Technology Services (ARC-TS) will give brief presentations on computing infrastructure, and Consulting for Statistics, Computing, and Analytics Research (CSCAR) will present on statistics, data science, and computing training and consulting. A Q&A session and opportunities to interact individually with ARC and CSCAR staff will follow.

ARC and CSCAR are interested in connecting with graduate students whose research would benefit from customized or innovative computational or analytic approaches, and can provide guidance for students aiming to do this. ARC and CSCAR are also interested in developing training and documentation materials for a diverse range of application areas, and would welcome input from student researchers on opportunities to tailor our training offerings to new areas.

Speakers:

  • Kerby Shedden, Director, CSCAR
  • Brock Palen, Director, ARC-TS

Date/Time/Location:

Wednesday, Nov. 8, 2017, 2 – 4 p.m., West Conference Room, 4th Floor, Rackham Building (915 E. Washington St.)


HPC training workshops begin Thursday, Sept. 21


A series of training workshops in high performance computing will be held Sept. 21 through Oct. 31, 2017, presented by CSCAR in conjunction with Advanced Research Computing – Technology Services (ARC-TS). All sessions are held in East Hall, 530 Church St.; room numbers are listed with each workshop below.

Introduction to the Linux Command Line
This course will familiarize the student with the basics of accessing and interacting with Linux computers using the GNU/Linux operating system’s Bash shell, also known as the “command line.”
Dates: (Please sign up for only one)
• Thursday, Sept. 21, 9 a.m. – noon (full description | registration)
• Thursday, Sept. 28, 9 a.m. – noon (full description | registration)
Location:
East Hall, Room B250, 530 Church St.

Introduction to the Flux cluster and batch computing
This workshop will provide a brief overview of the components of the Flux cluster, including the resource manager and scheduler, and will offer students hands-on experience.
Dates: (Please sign up for only one)
• Thursday, Sept. 28, 1 – 4 p.m. (full description | registration)
• Monday, Oct. 2, 9 a.m. – noon (full description | registration)
Location:
East Hall, Room B254, 530 Church St.

Advanced batch computing on the Flux cluster
This course will cover advanced areas of cluster computing on the Flux cluster, including common parallel programming models and dependent and array scheduling, among other topics.
Dates: (Please sign up for only one)
• Tuesday, Oct. 10, 1 – 5 p.m. (full description | registration) Location: East Hall, Room B254, 530 Church St.
• Thursday, Oct. 12, 9 a.m. – noon (full description | registration) Location: East Hall, Room B254, 530 Church St.
• Friday, Oct. 13, 9 a.m. – noon (full description | registration) Location: East Hall, Room B250, 530 Church St.

Hadoop Workshop
Learn how to process large amounts (up to terabytes) of data using SQL and/or simple programming models available in Python, Scala, and Java.
Date:
• Tuesday, Oct. 31, 1 – 5 p.m. (full description | registration)
Location:
East Hall, Room B254, 530 Church St.

Flux HPC Blog: Querying data with SparkSQL


SparkSQL lets people use a SQL-like language to query their data with ease while taking advantage of the speed of Spark, a fast, general engine for data processing that runs over Hadoop. I wanted to test this out on a dataset I found from Walmart with their stores’ weekly sales numbers. I put the CSV into our cluster’s HDFS (in /var/walmart), making it accessible to all Flux Hadoop users.
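
A minimal sketch of that workflow appears below: it registers the CSV as a temporary view and runs an ordinary SQL query through Spark. The /var/walmart path comes from the post; the column names (Store, Weekly_Sales) are assumptions about the dataset’s schema rather than documented fields.

```python
# A sketch of the SparkSQL workflow described above: load the CSV from HDFS,
# register it as a temporary view, and query it with SQL. The column names
# Store and Weekly_Sales are assumed, not documented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("walmart-sales").getOrCreate()

sales = spark.read.csv("hdfs:///var/walmart", header=True, inferSchema=True)
sales.createOrReplaceTempView("walmart_sales")

# Total sales per store, highest first -- an ordinary SQL query run by Spark.
top_stores = spark.sql("""
    SELECT Store, SUM(Weekly_Sales) AS total_sales
    FROM walmart_sales
    GROUP BY Store
    ORDER BY total_sales DESC
""")
top_stores.show(10)
```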

New Data Science Computing Platform Available to U-M Researchers


Advanced Research Computing – Technology Services (ARC-TS) is pleased to announce an expanded data science computing platform, giving all U-M researchers new capabilities to host structured and unstructured databases, and to ingest, store, query and analyze large datasets.

The new platform features a flexible, robust and scalable database environment, and a set of data pipeline tools that can ingest and process large amounts of data from sensors, mobile devices and wearables, and other sources of streaming data. The platform leverages the advanced virtualization capabilities of ARC-TS’s Yottabyte Research Cloud (YBRC) infrastructure, and is supported by U-M’s Data Science Initiative launched in 2015. YBRC was created through a partnership between Yottabyte and ARC-TS announced last fall.
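
As a hedged sketch of how streaming data might reach one of those ingestion services, the example below pushes a few JSON records into a Kafka topic using the third-party kafka-python package. The broker address, topic name, and record fields are hypothetical.

```python
# A minimal producer sketch (kafka-python) showing how sensor readings might be
# streamed into a Kafka ingestion service; the broker, topic, and fields below
# are placeholders, not actual YBRC endpoints.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.example.umich.edu:9092",  # placeholder broker
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

# Send a few example readings to a (hypothetical) "sensor-readings" topic.
for i in range(5):
    producer.send("sensor-readings",
                  {"sensor_id": i, "value": 20.0 + i, "ts": time.time()})

producer.flush()
producer.close()
```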

The following functionalities are immediately available:

  • Structured databases: MySQL/MariaDB and PostgreSQL.
  • Unstructured databases: Cassandra, MongoDB, InfluxDB, Grafana, and ElasticSearch.
  • Data ingestion: Redis, Kafka, RabbitMQ.
  • Data processing: Apache Flink, Apache Storm, Node.js and Apache NiFi.

Other types of databases can be created upon request.

These tools are offered to all researchers at the University of Michigan free of charge, provided that usage stays within certain limits. Large-scale users who outgrow the no-cost allotment may purchase additional YBRC resources. All interested parties should contact hpc-support@umich.edu.

At this time, the YBRC platform only accepts unrestricted data. The platform is expected to accommodate restricted data within the next few months.

ARC-TS also operates a separate data science computing cluster available for researchers using the latest Hadoop components. This cluster also will be expanded in the near future.