1. Get Duo
Duo is required to access the majority of UM services and all HPC services. If you need to set up Duo, please visit this page.
2. Get a Great Lakes user login
You must establish a user login on Great Lakes by filling out this form.
3. Get an SSH Client & Connect to Great Lakes
You must be on campus or on the VPN to connect to Great Lakes. If you are trying to log in from off campus, or using an unauthenticated wireless network such as MGuest, you have a couple of options:
- Install VPN software on your computer
- SSH to login.itd.umich.edu and continue with the Linux instructions
Mac or Linux:
Open Terminal and type the following, replacing uniqname with your own uniqname:
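ssh uniqname@greatlakes.arc-ts.umich.edu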
You will be required to enter your Kerberos level-1 password to log in. Please note that as you type your password, nothing will appear on the screen; this is completely normal. Press the Enter/Return key once you are done typing your password.
When you’re connecting for the first time, it’s not uncommon to see a message like this one:
The authenticity of host 'greatlakes.arc-ts.umich.edu (188.8.131.52)' can't be established. RSA key fingerprint is 6f:8c:67:df:43:4f:e0:fc:80:5b:49:1a:eb:81:cc:54. Are you sure you want to continue connecting (yes/no)?
This is normal. By saying “yes” you’re accepting the public SSH key for the system. This key will be stored in a local known_hosts file on your system so you won’t be prompted in the future. The keys from Great Lakes will NOT change. However, because the known_hosts file is local to each machine, if you get a new computer and SSH to Great Lakes, you’ll be prompted to accept the key again.
We encourage you to compare the fingerprint you’re presented with on first connection against the published fingerprints for Great Lakes. The exact format of the fingerprint you see may depend on the SSH client on your machine. In the example message above, the RSA key fingerprint is shown as an MD5 value.
If you’re NOT seeing one of the published fingerprints, submit a support ticket to ARC-TS and do NOT connect to the server via SSH until you have discussed it with an ARC-TS staff member and determined whether there is a security issue.
To avoid being prompted to accept the key on a new system, you may choose to pre-populate your SSH known_hosts file with the public keys from Great Lakes. The keys can be found in the FAQ.
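For example, if you have saved the published keys from the FAQ into a file (the file name here is only a placeholder), you can append them on Mac or Linux with:
cat greatlakes_host_keys.txt >> ~/.ssh/known_hosts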
Windows (using PuTTY):
Download and install PuTTY here.
Launch PuTTY and enter greatlakes.arc-ts.umich.edu as the host name, then click “Open.” If you receive a “PuTTY Security Alert” pop-up, this is completely normal; click “Yes” to tell PuTTY to trust the host the next time you connect to it. A terminal window will then open; you will be required to enter your UMICH uniqname and then your Kerberos level-1 password in order to log in. Please note that as you type your password, nothing will appear on the screen; this is completely normal. Press the Enter/Return key once you are done typing your password.
4. Get files
You can use SFTP (best for simple transfers of small files) or Globus (best for large files or a commonly used endpoint) to transfer data to your /home directory.
SFTP: Mac or Windows using FileZilla
- Open FileZilla and click the “Site Manager” button
- Create a New Site, which you can name “Great Lakes” or something similar
- Select the “SFTP (SSH File Transfer Protocol)” option
- In the Host field, type greatlakes-xfer.arc-ts.umich.edu
- Select “Interactive” for Logon Type
- In the User field, type your uniqname
- Click “Connect”
- Enter your Kerberos password
- Select your Duo method (1-3) and complete authentication
- Drag and drop files between the two systems
- Click “Disconnect” when finished
On Windows, you can also use WinSCP with similar settings, available alongside PuTTY here.
SFTP: Mac or Linux using Terminal
To copy a single file, type:
scp localfile uniqname@greatlakes-xfer.arc-ts.umich.edu:./remotefile
To copy an entire directory, type:
scp -r localdir uniqname@greatlakes-xfer.arc-ts.umich.edu:./remotedir
These commands can also be reversed in order to copy files from Great Lakes to your machine:
scp -r uniqname@greatlakes-xfer.arc-ts.umich.edu:./remotedir localdir
You will need to authenticate via Duo to complete the file transfer.
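If you prefer an interactive session over individual scp commands, the standard sftp client can also be used against the same transfer host (a minimal sketch; the file names are placeholders):
sftp uniqname@greatlakes-xfer.arc-ts.umich.edu
sftp> put localfile
sftp> get remotefile
sftp> quit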
Globus: Windows, Mac, or Linux
Globus is a reliable, high-performance parallel file transfer service provided by many HPC sites around the world. It enables easy transfer of files from one system to another, as long as both are Globus endpoints.
- The Globus endpoint for Great Lakes is “umich#greatlakes”.
How to use Globus
Globus Online is a web front end to the Globus transfer service. Globus Online accounts are free and you can create an account with your University identity.
- Set up your Globus account and learn how to transfer files using the Globus documentation. Select “University of Michigan” from the dropdown box to get started.
- Once you are ready to transfer files, enter “umich#greatlakes” as one of your endpoints.
Globus Connect Personal
Globus Online also allows for simple installation of a Globus endpoint for Windows, Mac, and Linux desktops and laptops.
- Follow the Globus instructions to download the Globus Connect Personal installer and set up an endpoint on your desktop or laptop.
Batch File Copies
A non-standard use of Globus Online is that you can use it to copy files from one location to another on the same cluster. To do this, use the same endpoint (for example, umich#greatlakes) for both the source and the destination. Set up the transfer and Globus will take care of the rest. The service will email you when the copy is finished.
Command Line Globus
Command-line tools for Globus are also available; they are intended for advanced users. If you wish to use these, contact HPC support.
5. Submit a job
This is a simple guide to get your jobs up and running. For more advanced Slurm features, see the Slurm User Guide for Great Lakes. If you are familiar with using the resource manager Torque, you may find the migrating from Torque to Slurm guide useful.
Most work will be queued to be run on Great Lakes and is described through a batch script. The sbatch command is used to submit a batch script to Slurm. To submit a batch script simply run the following from a shared file system; those include your home directory, /scratch, and any directory under /nfs that you can normally use in a job on Flux. Output will be sent to this working directory (jobName-jobID.log). Do not submit jobs from /tmp or any of its subdirectories.
$ sbatch myJob.sh
The batch job script is composed of three main components:
- The interpreter used to execute the script
- #SBATCH directives that convey submission options
- The application(s) to execute along with its input arguments and options
#!/bin/bash
# The interpreter used to execute the script

# "#SBATCH" directives that convey submission options:
#SBATCH --job-name=example_job
#SBATCH --mail-type=BEGIN,END
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem-per-cpu=1000m
#SBATCH --time=10:00
#SBATCH --account=test
#SBATCH --partition=standard

# The application(s) to execute along with its input arguments and options:
/bin/hostname
sleep 60
How many nodes and processors you request will depend on what your software is capable of. There are four common scenarios:
NOTE: If you will be using licensed software, for example, Stata, SAS, Abaqus, Ansys, etc., then you may need to request licenses. See the table of common submission options below for the syntax; in the Software section, we show the command to see which software requires you to request a license. The first example below covers the simplest scenario, a job that uses a single node and a single processor:
#!/bin/bash
#SBATCH --job-name JOBNAME
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1g
#SBATCH --time=00:15:00
#SBATCH --account=test
#SBATCH --partition=standard
#SBATCH --mail-type=NONE

srun hostname -s
This is similar to what a modern desktop or laptop is likely to have. Software that can use more than one processor may be described as multicore, multiprocessor, or multithreaded. Some examples of software that can benefit from this are MATLAB and Stata/MP. You should read the documentation for your software to see if this is one of its capabilities.
#!/bin/bash
#SBATCH --job-name JOBNAME
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=1g
#SBATCH --time=00:15:00
#SBATCH --account=test
#SBATCH --partition=standard
#SBATCH --mail-type=NONE

srun hostname -s
This is the classic MPI approach, where multiple machines are requested and one process per processor is started on each node using MPI. This is the way most MPI-enabled software is written to work.
#!/bin/bash
#SBATCH --job-name JOBNAME
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --mem-per-cpu=1g
#SBATCH --time=00:15:00
#SBATCH --account=test
#SBATCH --partition=standard
#SBATCH --mail-type=NONE

srun hostname -s
This is often referred to as the “hybrid mode” MPI approach, where multiple machines and multiple processes per machine are requested. MPI will start one or more parent processes on each node, and those in turn will be able to use more than one processor for threaded calculations.
#!/bin/bash
#SBATCH --job-name JOBNAME
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=1g
#SBATCH --time=00:15:00
#SBATCH --account=test
#SBATCH --partition=standard
#SBATCH --mail-type=NONE

srun hostname -s
Common Job Submission Options
| Description | Slurm directive (#SBATCH option) | Great Lakes Usage |
| --- | --- | --- |
| Partition | --partition=<partition> | --partition=standard |
| Wall time limit | --time=<dd-hh:mm:ss> | --time=01-02:00:00 |
| Process count per node | --ntasks-per-node=<count> | --ntasks-per-node=1 |
| Minimum memory per processor | --mem-per-cpu=<memory> | --mem-per-cpu=1000m |
| Request software license(s) | --licenses=<application>@slurmdb:<N> | --licenses=stata@slurmdb:1 (requests one license for Stata) |
| Request event notification | --mail-type=<events> | --mail-type=BEGIN,END |

Available partitions: standard (default), gpu (GPU jobs only), largemem (large memory jobs only), viz, debug, standard-oc (on-campus software only).

Note: multiple mail-type requests may be specified in a comma-separated list, for example --mail-type=BEGIN,END,FAIL.
Please note that if your job will use more than one node, your code must be MPI-enabled in order to run across those nodes. More advanced job submission options can be found in the Slurm User Guide for Great Lakes.
An interactive job is a job that returns a command line prompt (instead of running a script) when the job runs. Interactive jobs are useful when debugging or interacting with an application. The srun command is used to submit an interactive job to Slurm. When the job starts, a command line prompt will appear on one of the compute nodes assigned to the job. From here commands can be executed using the resources allocated on the local node.
[user@gl-login1 ~]$ srun --pty --account=test /bin/bash
srun: job 309 queued and waiting for resources
srun: job 309 has been allocated resources
[user@gl3160 ~]$ hostname
gl3160.arc-ts.umich.edu
[user@gl3160 ~]$
Jobs submitted with srun --pty /bin/bash will be assigned the cluster default values of 1 CPU and 1024 MB of memory. The account must also be specified; the job will not run otherwise. If additional resources are required, they can be requested as options to the srun command. The following example job is assigned 2 nodes with 4 CPUs and 4 GB of memory each:
[user@gl-login1 ~]$ srun --nodes=2 --account=test --ntasks-per-node=4 --mem-per-cpu=1GB --pty /bin/bash
srun: job 894 queued and waiting for resources
srun: job 894 has been allocated resources
[user@gl3160 ~]$ srun hostname
gl3160.arc-ts.umich.edu
gl3160.arc-ts.umich.edu
gl3161.arc-ts.umich.edu
gl3160.arc-ts.umich.edu
gl3161.arc-ts.umich.edu
gl3161.arc-ts.umich.edu
gl3160.arc-ts.umich.edu
gl3161.arc-ts.umich.edu
In the above example, srun is used from the first compute node within the job to run the command once for every task in the job on the assigned resources. srun can also be used to run on only a subset of the resources assigned to the job. See the srun man page for more details.
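For instance, continuing the two-node allocation above, a command can be run on just part of the allocation (a sketch; the option values are illustrative):
[user@gl3160 ~]$ srun --nodes=1 --ntasks=2 hostname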
Jobs can request GPUs with the job submission options --partition=gpu and a count option from the table below. All counts can be specified as gputype:number or just a number (the default type will be used). Available GPU types can be found with the command sinfo -O gres -p <partition>. GPUs can be requested in both batch and interactive jobs; an example batch request follows the table.
| Description | Slurm directive (#SBATCH or srun option) | Example |
| --- | --- | --- |
| GPUs per node | --gpus-per-node=<gputype:number> | --gpus-per-node=2 or --gpus-per-node=v100:2 |
| GPUs per job | --gpus=<gputype:number> | --gpus=2 or --gpus=v100:2 |
| GPUs per socket | --gpus-per-socket=<gputype:number> | --gpus-per-socket=2 or --gpus-per-socket=v100:2 |
| GPUs per task | --gpus-per-task=<gputype:number> | --gpus-per-task=2 or --gpus-per-task=v100:2 |
| CPUs required per GPU | --cpus-per-gpu=<number> | --cpus-per-gpu=4 |
| Memory per GPU | --mem-per-gpu=<number> | --mem-per-gpu=1000m |
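As an illustration (not part of the option table above; the account, time, and GPU counts are placeholders), a batch script requesting a single GPU might combine these options as follows:
#!/bin/bash
#SBATCH --job-name=gpu_example
#SBATCH --nodes=1
#SBATCH --partition=gpu
#SBATCH --gpus-per-node=1
#SBATCH --cpus-per-gpu=4
#SBATCH --mem-per-gpu=8000m
#SBATCH --time=00:30:00
#SBATCH --account=test
#SBATCH --mail-type=NONE

# Replace nvidia-smi with your GPU application
srun nvidia-smi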
Jobs can request nodes with large amounts of RAM with --partition=largemem.
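For example, a batch script might include the following directives (a sketch; the memory figure is illustrative, not a documented node size):
#SBATCH --partition=largemem
#SBATCH --mem=200g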
Submitting a Job in One Line
If you wish to submit a job without needing a separate script, you can use sbatch --wrap=<command string>. This will wrap the specified command in a simple “sh” shell script, which is then submitted to the Slurm controller.
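For example (the account, job name, and command here are placeholders consistent with the examples above):
$ sbatch --job-name=quick_test --account=test --time=10:00 --wrap="hostname"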
During your job, you may write to and read from two temporary locations on the node:
- /tmp: Two 7200 RPM SATA drives in RAID 0, 3.5 TB per node
- /tmpssd: Faster solid state drive, 426 GB per node
These folders are local, meaning they are only available to the processes running on that specific node and are not shared across the cluster. If you need shared space, your /scratch folder may be a better temporary work space.
Keep in mind that these are temporary folders and may be used by others during or after your job. Please try not to completely fill the space so that others can use it, and move or delete your /tmp and /tmpssd files after your work is finished.
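A minimal sketch of staging files through node-local /tmp inside a batch script (the file names are placeholders; $SLURM_JOB_ID is set by Slurm for each job):
# Stage input to node-local storage, run, copy results back, then clean up
JOBTMP=/tmp/$SLURM_JOB_ID
mkdir -p "$JOBTMP"
cp ~/input.dat "$JOBTMP"/
# ... run your application against $JOBTMP/input.dat ...
cp "$JOBTMP"/output.dat ~/
rm -rf "$JOBTMP"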
Most of a job’s specifications can be seen by invoking scontrol show job <jobID>. More details about the job can be written to a file by using scontrol write batch_script <jobID> output.txt. If no output file is specified, the script will be written to slurm<jobID>.sh.
A job’s record remains in Slurm’s memory for 30 minutes after it completes. scontrol show job will return “Invalid job id specified” for a job that completed more than 30 minutes ago. At that point, one must invoke the sacct command to retrieve the job’s record from the Slurm database.
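For example, a completed job's record can be retrieved with sacct (the job ID and field list are illustrative):
$ sacct -j 12345 --format=JobID,JobName,Account,State,Elapsed,MaxRSS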