Great Lakes Timeline (Past)


Great Lakes Summer 2020 Maintenance

August 3, 2020


Open OnDemand on Great Lakes being upgraded

May 18, 2020

1:00 PM – 5:00 PM: We are migrating Open OnDemand on Great Lakes from version 1.4 to 1.6 to fix a security issue. Users will…


Winter 2020 Maintenance

March 9, 2020

The ARC-TS 2020 winter maintenance will start on March 9, 2020. You can read more about it here: https://arc-ts.umich.edu/event/winter-2020-maintenance/. Great Lakes, Lighthouse, and Armis2…


Great Lakes Billing Begins

January 6, 2020

Billing for Great Lakes will begin on January 6, 2020 at 9am. Rates for using Great Lakes can be found here: https://arc-ts.umich.edu/greatlakes/rates. No…


Beta Retires

November 29, 2019

Beta will be retired after both Flux and Armis have been retired, as the purpose of Beta is to assist users to transition to the…


Great Lakes Migration Complete

November 25, 2019

All HPC accounts, users, and workflows must be migrated to either Great Lakes (Standard, GPU, Large-Memory, and On-demand Flux) or Lighthouse (Flux Operating Environment nodes)…


Great Lakes Open for General Availability

August 19, 2019

Assuming all initial testing is successful, we expect that Great Lakes will become available for University users after the ARC-TS summer maintenance.


Great Lakes Early User Testing Ends

August 14, 2019

The Great Lakes early user period for testing will end. Great Lakes will transition into production and users will be able to submit work as…


2019 Summer Maintenance

August 12, 2019

The ARC-TS annual summer maintenance will begin. The details, services impacted, and length of the maintenance will be determined when we get closer to the date. …


Great Lakes Open for Early User Testing

July 15, 2019

We will be looking for sets of friendly users who will be able to test different aspects of the system to submit their workloads to…


Great Lakes firmware updates complete

July 8, 2019

This date is a rough estimate. Great Lakes, as of June, is running a GA-candidate HDR-100 InfiniBand firmware. We will schedule an update to the…


Great Lakes primary installation complete and ARC-TS begins loading and configuration

May 30, 2019

The Great Lakes installation is primarily complete other than waiting for final HDR firmware testing for the new InfiniBand system.  The current InfiniBand system is…


HPC OnDemand Available for Beta

March 8, 2019

The replacement for ARC Connect, called HPC OnDemand, will be available for users.  This will allow users to submit jobs via the web rather than…


Beta Cluster testing continues

January 8, 2019

If you are a current HPC user on Flux or Armis and have not used Slurm before, we highly recommend you login and test your…


Great Lakes Beta (GLB) created

November 1, 2018

Great Lakes Beta is installed for HPC support staff to build and test software packages on the same hardware in Great Lakes.


Beta HPC cluster available

October 2, 2018

The Beta HPC cluster was introduced to enable HPC users to begin migrating their Torque job scripts to Slurm and test their workflows on a Slurm-based…


Great Lakes Installation Begins

October 1, 2018

Dell, Mellanox, and DDN will be delivering and installing the hardware for the new Great Lakes service. These teams will be working alongside the…


If you have questions, please send email to arcts-support@umich.edu.

Order Service

Billing for the Great Lakes service began on January 6, 2020. Existing, active Flux accounts and logins have been added to the Great Lakes Cluster. Complete this form to get a new Great Lakes cluster login.

If you would like to create a Great Lakes Cluster account or have any questions, contact arcts-support@umich.edu with lists of users, admins, and a shortcode. Trial accounts are also available for new PIs.

Consulting


Advanced Research Computing – Technology Services (ARC-TS), a division of ITS, is pleased to offer a pilot called Scientific Computing and Research Consulting Services to help researchers implement data analytics and workflows within their research projects. This includes navigating technical resources like high-performance computing and storage.

The ARC-TS Scientific Computing and Research Consulting Services team will be your guide to navigating the complex technical world: from implementing data-intensive projects, to teaching you how the technical systems work, to identifying the proper tools, to guiding you on how to hire a programmer.

Areas of expertise:

  • Data Science
    • Data Workflows
    • Data Analytics
    • Machine Learning
    • Programming
  • Grant Proposals
    • Compute Technologies
    • Data Storage and Management
    • Budgeting costs for computing and storage
  • Scientific Computing/Programming
    • Getting started with advanced computing
    • Code optimization
    • Parallel computing
    • GPU/Accelerator Programming
  • Additional Resources
    • Facilitating Collaborations/User Communities
    • Workshops and Training

Who can use this service?

  • All researchers and their collaborators from any of the university’s three campuses, including faculty, staff, and students
  • Units that want help including technical information when preparing grants
  • Anyone who needs HPC services and help navigating resources

How much does it cost?

  • Initial consultation, grant pre-work, and short term general guidance/feedback on methods and code are available at no cost.
  • For longer engagements, research teams will be asked to contribute to the cost of providing the service.

Partnership

The ARC-TS Scientific Computing and Research Consulting Services team works in partnership with the Consulting for Statistics, Computing, and Analytics Research team (CSCAR), Biomedical Research Core Facilities, and others. ARC-TS may refer or engage complementary groups as required by the project.

Get started

Send an email to arcts-consulting@umich.edu with the following information:

  • Research topic and goal
  • What you would like ARC-TS to help you with
  • Any current or future data types and sources
  • Current technical resources
  • Current tools (programs, software)
  • Timeline – when do you need the help or information?

Get help

If you have any questions or wish to set up a consultation, please contact us at arcts-consulting@umich.edu. Be sure to include as much information as possible from the “Get started” section above.

Data Science


Data Science Consulting Details

Data Workflows

We are available to assist researchers along the entire lifecycle of the data workflow, from the conceptual stage to ingest, preprocessing, cleansing, and storage solutions. We can advise in the following areas:

  • Establishing and troubleshooting dataflows between systems
  • Selecting the appropriate systems for short-term and long-term storage
  • Transformation of raw data into structured formats
  • Data deduplication and cleansing
  • Conversion of data between different formats to aid in analysis
  • Automation of dataflow tasks

Analytics

The data science consulting team can assist with data analytics to support research:

  • Choosing the appropriate tools and techniques for performing analysis
  • Development of data analytics in a variety of frameworks
  • Cloud-based (Hadoop) analytic development

Machine Learning

Machine learning is an application of artificial intelligence (AI) that focuses on the development of computer programs to learn information from data.

We are available to consult on the following. This includes a general overview of concepts, discussion of what tools and architectures best fit your needs, or technical support on implementation.

Languages: Python, C++, Java, Matlab
Tools/Architectures: Python data tools (scikit-learn, NumPy, etc.), TensorFlow, Jupyter notebooks
Models: Neural networks, decision trees, support vector machines

Programming

We also provide consulting on programming in a variety of programming languages (including but not limited to: C++, Java, and Python) to support your data science needs. We can assist in algorithm design and implementation, as well as optimizing and parallelizing code to efficiently utilize high performance computing (HPC) resources where possible/necessary. We can help identify available commercial and open-source software packages to simplify your data analysis.

If you have any questions or wish to set up a consultation, please contact us at arcts-consulting@umich.edu.

Great Lakes For Student Teams and Organizations


The Great Lakes HPC Cluster is the university-wide, shared computational discovery and high-performance computing (HPC) service. It is designed to support both compute- and data-intensive research. 

Great Lakes is available without charge for student teams and organizations who need HPC resources. This program aims to give students access to high-performance computing to enhance their team’s mission; it is not meant to be used for faculty-led research. Jobs submitted from a student team account will have lower priority and will run when sufficient resources are available. Currently, we are limiting the resources available to each team (see below), but in the future, we expect to increase the available resources. If your team or organization needs more resources or access to the large-memory nodes, an administrator for your team can also request a paid Slurm account, which will not have any of the restrictions mentioned below and can work in conjunction with the no-cost team account.

Access

Your student team/organization must be registered as a Sponsored Student Organization (SSO). If your team is an SSO and would like an account on Great Lakes, please have a sponsoring administrator email us at arcts-support@umich.edu with your team name, the uniqnames of the users who are allowed to use the account, and a list of the uniqnames of the account administrators who can make decisions about the account.

If you are a member of a team or organization that isn’t registered as an SSO, please email us at arcts-support@umich.edu with the details of your organization and what your HPC needs are.

Limits

Once your account is created, your team will be able to use up to 100 CPUs in the standard partition and 1 GPU in the gpu partition. Jobs will be limited to 24 hours. If members of your team do not have experience with the Linux command line, they can also use Great Lakes through their browser via Open OnDemand.
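As a rough illustration only (not an official ARC-TS template), a batch script that stays within these team limits might look like the sketch below. The account name, job name, and resource sizes are placeholders; use the Slurm account name you are given when your team account is created.

#!/bin/bash
#SBATCH --job-name=team_example
#SBATCH --account=exampleteam            # placeholder: your team's Slurm account name
#SBATCH --partition=standard
#SBATCH --nodes=1
#SBATCH --ntasks=4                       # well under the 100-CPU team limit
#SBATCH --mem-per-cpu=1g
#SBATCH --time=12:00:00                  # must be 24 hours or less for team accounts
#SBATCH --output=%x-%j.log

# Replace the line below with your team's actual workload.
echo "Running on node(s): $SLURM_JOB_NODELIST"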

If you have any questions, please send an email to arcts-support@umich.edu.


LSA funding for instructional use of the Great Lakes cluster


An LSA pilot program funds instructional use of Great Lakes by LSA classes. Any LSA faculty member can apply to receive a Great Lakes account, paid for by LSA, to use in the classroom. Each application is for a single term only; if a class will be using Great Lakes for multiple terms, the instructor must apply for the LSA-funded class account each term. Because funding is limited and installing any new software the course may need on Great Lakes can take a while, faculty are encouraged to apply as early as possible (two months or more before the start of the term is ideal, although we can also accept applications after a term has started).

To apply for an LSA-funded class Great Lakes account, the faculty member teaching the class should send the following information to arcts-support@umich.edu:

1. Course name, course number, and academic term.
2. Approximate number of students who will be enrolled in the course.
3. A two to three sentence description of how Great Lakes will be used in the course.
4. Which Great Lakes service(s) are being requested (Standard, Largemem, or GPU).
5. For each month of the course, the number of resources (cores, GPUs) requested. The resources can vary based on when students will be using Great Lakes and when assignment/project due dates are. LSA research support staff can meet with you to help determine how many resources will be needed each month, based on how many students are in the class and what number/length/type of jobs the students will be running. (Example response: “0 cores in September, 24 cores in October, and 64 cores in each of November and December.”)
6. Is there any special software that LSA research support staff should install on Great Lakes for the course, or any other special setup or resources the class will need?
7. Would you like LSA research support staff to give a guest lecture on how to use Great Lakes? If so, approximately when in the term?

Please send any questions about instructional use of Great Lakes to arcts-support@umich.edu.

Order Service

Great Lakes will be free of charge until January 6th, 2020. Existing, active Flux accounts and logins have been added to the Great Lakes cluster. Complete this form to get a new Great Lakes cluster login.

If you do not have a Flux account and would like to create a Great Lakes cluster account or have any questions, contact arcts-support@umich.edu with lists of users, admins, and a shortcode.

Great Lakes Cluster Rates


The 2019-20 rates for the Great Lakes Cluster have been approved. These rates represent cost recovery for the Great Lakes Cluster, and do not include any support your unit may choose to provide.

Partition            Rate Per Minute   Rate Per Month   CPU Unit   Memory Unit       GPU Unit
standard/debug/viz   $0.000430556      $18.59           1          7 gigabytes       N/A
largemem             $0.001374306      $59.37           1          41.75 gigabytes   N/A
gpu                  $0.004939815      $213.40          20         90 gigabytes      1

The monthly rate is based on a 30-day month; per-minute rates are included for reference. Charges are based on the percentage of the machine your job requests (in terms of the number of cores, memory, and GPUs) and its actual runtime.

ARC-TS is working on providing a command-line script to calculate job charges to assist you in cost estimation. 

The College of Engineering and the Medical School will cost-share 44% of the Great Lakes Cluster rates for sponsored research accounts for researchers in these units who are paying to use the Great Lakes Cluster on unit shortcodes.

The College of Literature, Science, & the Arts (LSA) will cost-share 44% of the Great Lakes Cluster rates for research accounts for researchers in these units who are paying for their use of the Great Lakes Cluster on unit shortcodes. NOTE: This cost-sharing is in place temporarily while a cost-sharing strategy for LSA is devised. Questions should be directed to Luke Tracy (ltracy@umich.edu), Manager of Research Computing Services, LSA. LSA researchers who do not have access to any other account may be eligible to use the accounts provided centrally by LSA. The usage policy and restrictions on these accounts are described in detail on LSA’s public Great Lakes accounts page. Questions about access or use of these accounts should be sent to arcts-support@umich.edu.

See the LSA funding page for information on funding courses at the College of Literature, Science, and the Arts.

The School of Public Health will cost-share 44% of the Great Lakes Cluster rates for sponsored research accounts for researchers in these units who are paying for their use of Great Lakes Cluster on unit shortcodes. NOTE: The SPH cost-sharing is limited to a finite budget. Therefore, SPH faculty and staff must check with the Assistant Dean for Finance to verify if/how much cost-sharing is available.

To establish a Slurm account for a class please contact us at arcts-support@umich.edu with the following information:

  • Students to be put on the account
  • Shortcode to be used for billing
  • List of individuals to administer the account
  • Any limits to be placed on either the users or the account as a whole

Please note: courses are currently free of charge through the end of winter term 2020. All students will need to have a user login to use the account and can request one via this form: https://arc-ts.umich.edu/login-request/

Example jobs and their charges¹

To help illustrate how the job charges work, here are some examples of differently-sized jobs.

Partition   Total CPUs Used   Total Memory Used   Total GPUs Used   Cost Per Minute
standard    1                 1 GB                N/A               $0.000430
standard    1                 10 GB               N/A               $0.000615
standard    36                5 GB                N/A               $0.015492
standard                      50 GB               N/A               $0.003074
largemem    1                 180 GB              N/A               $0.005925
gpu         1                 20 GB               1                 $0.004940

¹ The charges above have been rounded for readability.
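The per-minute charges in these examples are consistent with billing units being the largest of the CPU, memory, and GPU fractions of one billing unit for the partition. The short awk sketch below reproduces the second standard-partition example under that assumption; it is an illustration, not an official ARC-TS calculator.

# Sketch only: estimate the per-minute charge for a standard-partition job
# requesting 1 CPU and 10 GB of memory, assuming billing units =
# max(CPU fraction, memory fraction) of one billing unit (1 CPU, 7 GB).
awk 'BEGIN {
  cpus = 1;  mem_gb = 10            # requested resources
  cpu_unit = 1;  mem_unit = 7       # standard partition billing unit
  rate = 0.000430556                # standard partition rate per minute
  units = cpus / cpu_unit
  if (mem_gb / mem_unit > units) units = mem_gb / mem_unit
  printf "cost per minute: $%.6f\n", units * rate   # prints ~$0.000615
}'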

If you have questions, please send email to arcts-support@umich.edu.


Great Lakes FAQ


1. What is Great Lakes?

Great Lakes is an ARC-TS managed HPC cluster available to faculty (PIs) and their students/researchers. All computational work is scheduled via the Slurm resource manager and task scheduler. For detailed hardware information, see the configuration page. Great Lakes is not suitable for HIPAA or other sensitive data.

2. What forms do I need to fill out?

  1. The Principal Investigator (PI) needs to request a Slurm account, specifying the users who can access the account, the people who can administer it, and payment details.
  2. Each user given access to the account must request a user login. Please refer to the Great Lakes User Guide for additional steps and usage information.

3. How can I get a trial account on Great Lakes?

If you are a PI who hasn’t used Great Lakes before, you are eligible for a limited trial account. This account includes $150 worth of cluster time (see the rates page) and can no longer run jobs after one month. If interested, please contact arcts-support@umich.edu specifying that you’d like a trial account with lists of users and admins.

4. Will my Turbo storage be available on Great Lakes?

Since Turbo is a storage service independent of Great Lakes, users that utilized Turbo on Flux will still be able to access their data on Great Lakes.  The cost of Turbo will not change and no data needs to be transferred.  If you have trouble accessing Turbo, please contact arcts-support@umich.edu.

5. How do I submit jobs using a web interface?

Great Lakes uses Open OnDemand to provide web-based job submission; users can manage the files in their home directory, view and delete active jobs, and open a web terminal session. Users can also run Matlab, Jupyter Notebooks, and RStudio, or get a remote desktop.

You must be on campus or on the VPN to connect to Great Lakes OnDemand.  For more information, see the OnDemand section of the Great Lakes User Guide.

6. How do I view the resource usage on my account?

To view TRES (Trackable RESource) utilization by user or account, use the following commands, replacing the dates, account name, user names, and TRES type with your own values:

Shows TRES usage by all users on account during date range:
sreport cluster UserUtilizationByAccount start=mm/dd/yy end=mm/dd/yy account=test --tres type
Shows TRES usage by specified user(s) on account during date range:
sreport cluster UserUtilizationByAccount start=mm/dd/yy end=mm/dd/yy users=un1,un2 account=test --tres type
Lists users alphabetically along with TRES usage and total during date range:
sreport cluster AccountUtilizationByUser start=mm/dd/yy end=mm/dd/yy tree account=test --tres type
Possible TRES types:

cpu
mem
node
gres/gpu
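For example, to show CPU usage by all users on a hypothetical account named example during February 2020 (the account name and dates below are placeholders):

sreport cluster UserUtilizationByAccount start=02/01/20 end=02/29/20 account=example --tres=cpu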

To view disk usage and availability by user, type:

home-quota -u uniqname

For more reporting options, see the Slurm sreport documentation.

7. What is a “root (_root) account”?

Each PI or project has a collection of Slurm accounts which can be used for different purposes (e.g., different grants or areas of research) with different users. These Slurm accounts are contained within the PI/project’s root account (e.g., researcher_root). For example:

researcher_root
    researcher
        user1
        user2
    researcher1
        user2
        user3

These accounts can have different limits on them, and are also collectively limited for /scratch usage and overall cluster usage.

8. As a PI, how can I limit usage on my account?

Principal Investigators can request that CPU, GPU, memory, billing units, and walltime be limited per user or group of users on their account.  For more information, see the Great Lakes policy documentation.

Limits must be requested by emailing arcts-support@umich.edu.

9. As a PI, can I purchase my own nodes for Great Lakes?

PIs may purchase hardware for use on the Lighthouse cluster by emailing arcts-support@umich.edu to develop a hardware plan.  Lighthouse utilizes the same Slurm job scheduler and infrastructure as Great Lakes, but purchased nodes can be used exclusively by the PI’s group.

10. What does my job status mean?

When listing your submitted jobs with squeue -u uniqname, the final column titled “NODELIST(REASON)” will give you the reason that the job is not running yet. The possible statuses are:

Resources

This job is waiting for the resources (CPUs, Memory, GPUs) it requested to become available. Resources become available when currently running jobs complete. The job with Resources in the NODELIST(REASON) column is the top priority job and should be started next.

Priority

This job is not the top priority, so it must wait in the queue until it becomes the top priority job. Once it becomes the top priority job, the NODELIST(REASON) column will change to “Resources”. The priority of all pending jobs can be shown with the sprio command. A job’s priority is determined by two factors: fairshare and age. The fairshare factor in a job’s priority is influenced by the amount of resources that have been consumed by members of your Slurm account. More recent usage means a lower fairshare priority. The age factor is determined by the job’s queued time. The longer the job has been waiting in the queue, the higher the age priority.

AssocGrpCpuLimit

This job was submitted with a Slurm account that has a limit set on the number of CPUs that may be used at one time. This limit is set for all jobs by all users of the same Slurm account. Once some of the jobs running under this Slurm account complete, this reason will change to Priority or Resources unless there is some other limit or dependency. All jobs running under a given Slurm account can be viewed by running squeue --account=account_name

AssocGrpGRES

This job was submitted with a Slurm account that has a limit set on the number of GPUs that may be used at one time. This limit is set for all jobs by all users of the same Slurm account. Once some of the jobs running under this Slurm account complete, this reason will change to Priority or Resources unless there is some other limit or dependency. All jobs running under a given Slurm account can be viewed by running squeue --account=account_name

AssocGrpMem

This job was submitted with a Slurm account that has a limit set on the amount of memory that may be used at one time. This limit is set for all jobs by all users of the same Slurm account. Once some of the jobs running under this Slurm account complete, this reason will change to Priority or Resources unless there is some other limit or dependency. All jobs running under a given Slurm account can be viewed by running squeue --account=account_name

AssocGrpBillingMinutes

This job was submitted with a Slurm account that has a limit set on the amount of monetary charges that may be accrued. Jobs that are pending with this reason will not start until the limit has been raised or the monthly bill has been processed.

Dependency

This job has a dependency on another job. It will not start until that dependency is met. The most common dependency is waiting for another job to complete.

QOSMinGRES

This job was submitted to the GPU partition, but did not request a GPU. This job will never start. This job should be deleted and resubmitted to a different partition or if a GPU is needed, resubmitted to the GPU partition with a GPU request. A GPU can be requested by adding the following line to a batch script: #SBATCH --gres=gpu:1
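As a minimal sketch (the job name, account name, and resource sizes below are placeholders, not required values), a GPU batch script might look like:

#!/bin/bash
#SBATCH --job-name=gpu_example
#SBATCH --account=example_account        # placeholder: your Slurm account
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1                     # request one GPU so the job can start
#SBATCH --cpus-per-task=1
#SBATCH --mem=16g
#SBATCH --time=02:00:00

# Confirm the allocated GPU is visible to the job, then run your GPU workload.
nvidia-smi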

11. How Can I Access On-Campus Restricted Software?

From the Command Line

Log into an on-campus login node with an SSH client by connecting to gl-campus-login.arc-ts.umich.edu.
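For example, from a terminal (replace uniqname with your own uniqname):

ssh uniqname@gl-campus-login.arc-ts.umich.edu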

From Open OnDemand

Open your browser (Firefox, Edge, or Chrome in an incognito tab – recommended) and navigate to greatlakes-oncampus.arc-ts.umich.edu.

12. What are the SSH pub keys for Great Lakes?

If you wish to pre-populate your SSH client configuration with the publicly available keys for Great Lakes, they are as follows:

ECDSA:

greatlakes.arc-ts.umich.edu,greatlakes-oncampus.arc-ts.umich.edu,gl-login?.arc-ts.umich.edu,141.211.192.38,141.211.192.39,141.211.192.40,141.211.192.41 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHWel/rAXqIJYxexVzMSlgy/fICWukn8DaOGMPpAomH1E5AhCjrH2zMMTJHtXYsRA+brm/sTbn21Zw+pgpgJSYA=

 

ED25519:

greatlakes.arc-ts.umich.edu,greatlakes-oncampus.arc-ts.umich.edu,gl-login?.arc-ts.umich.edu,141.211.192.38,141.211.192.39,141.211.192.40,141.211.192.41 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICwaAq9LI48vVO4qbt35Xfz1pi+RE1Krq1iIeJQqoFEw

 

RSA:

greatlakes.arc-ts.umich.edu,greatlakes-oncampus.arc-ts.umich.edu,gl-login?.arc-ts.umich.edu,141.211.192.38,141.211.192.39,141.211.192.40,141.211.192.41 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA16eDiBWF3SgPQXEeJsH8dsxO8x3o5KkdqWMg/lK57Kpwf4QGXJNvYy0jxSAuKTRim/ob6+nDRH8zIOwnl9tlyEw+8VN3WR8nqBqxX6Km2yzTOMO8Lh7fLuMTZHOdEz0uOn6tBP8LTMtHN9h/fANjKFVl8N+jsejMXrPf0w7jGjc=

 

On a Mac or Linux machine you’ll add the keys to your known_hosts file.

On a Mac this file is: /Users/<username>/.ssh/known_hosts.
On Linux: /home/<username>/.ssh/known_hosts.

The known_hosts file should have 644 (i.e. -rw-r--r--) permissions.
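One way to do this from a terminal (a sketch; you can also paste the key lines into the file with any text editor):

mkdir -p ~/.ssh
cat >> ~/.ssh/known_hosts      # paste the key lines shown above, then press Ctrl-D
chmod 644 ~/.ssh/known_hosts   # i.e. -rw-r--r--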

If you are using an SSH client that is not part of your operating system (e.g. Windows using PuTTY), please see the client documentation on host key verification.
A good start for PuTTY users can be found here (section A.2.9, “Is there an option to turn off the annoying host key prompts?”).

 

If you have a problem not listed here, please send an email to arcts-support@umich.edu.


Armis Timeline


Dates in the future are subject to change. We use our best estimates given what we know today.

No upcoming events

If you have questions, please send email to arcts-support@umich.edu.

Order Service

Armis is currently offered as a pilot program. To request an Armis account, please fill out this form.

Please see the Terms of Usage for more information.


Flux Timeline


Dates in the future are subject to change. We use our best estimates given what we know today.

No upcoming events

If you have questions, please send email to hpc-support@umich.edu.

Order Service

For information on determining the size of a Flux allocation, please see our pages on How Flux Works, Sizing a Flux Order, and Managing a Flux Project.

To order:

1. Fill out the ARC-TS HPC account request form.

2. Email hpc-support@umich.edu with the following information:

  • the number of cores needed
  • the start date and number of months for the allocation
  • the shortcode for the funding source
  • the list of people who should have access to the allocation
  • the list of people who can change the user list and augment or end the allocations.

For information on costs, visit our Rates page.


Beta Known Issues


No current issues

If you have a problem not listed here, please send an email to arcts-support@umich.edu.

Getting Access

Beta is intended for small scale testing to convert Torque/PBS scripts to Slurm. No sensitive data of any type should be used on Beta.

To request:

1. Fill out the ARC-TS HPC account request form.

Because this is a test platform, there is no cost for using Beta.
