
Data Den Research Archive


Data Den is a service for preserving electronic data generated from research activities. It is a low-cost, highly durable storage system that allows faculty to store large amounts of data without paying continual monthly rates.

Data Den is a disk-caching, tape-backed archive that is optimized for data that is not regularly accessed for long periods of time. Data Den does not replace active storage services like Turbo.

Data Den places two encrypted copies of your data in two geographically separate locations.  It also supports data sharing with external collaborators.

Data Den pilot services will be available in the second half of 2018.

Order Service

Data Den is scheduled for availability in the second half of 2018.

Contact hpc-support@umich.edu with any questions.


ARC-TS Storage


Several levels of data storage are provided with an allocation of ARC-TS HPC services, varying by capacity, I/O rate, and longevity of storage.

/tmp
Description: Local directory unique to each node. Not shared.
Best used for: High-speed reads and writes of small files (less than 10GB).

/home
Description: Shared across the entire cluster, with a quota of 80GB per user.
Best used for: Currently running jobs only.

/scratch
Description: Lustre-based parallel file system shared across all Flux nodes.
Best used for: Large reads and writes of very large data files. Checkpoint/restart files and large data sets that are frequently read from or written to are common examples, as is code that uses MPI.
Access and policy details: ARC-TS /scratch information

AFS
Description: A filesystem maintained and backed up by ITS. It is the only storage option available for Flux that is regularly backed up, and is therefore the most secure choice. It is only available on Flux login nodes and can provide up to 10GB of backed-up storage.
Best used for: Storing important files. NOT available to jobs running on compute nodes.
Access and policy details: ARC-TS AFS information

Turbo
Description: Turbo Research Storage is a high-speed storage service providing NFSv3 and NFSv4 access. It is available only for research data. Data stored in Turbo can be easily shared with collaborators when used in combination with the Globus file transfer service.
Best used for: Storing research data.
Access and policy details: ARC-TS Turbo page

Long-term storage
Description: Users who need long-term storage can purchase it from ITS MiStorage. Once established, it can be mounted on the Flux login and compute nodes.
Best used for: Long-term storage.
Access and policy details: ITS MiStorage page

Researchers in the Medical School and College of Literature, Science, and the Arts can take advantage of free or subsidized storage options through their respective academic units.

Locker Large-File Storage


Locker is a cost-optimized, high-capacity, large-file storage service for research data. Locker provides high performance for large files and allows investigators across U-M to connect their data to the computing resources necessary for their research, including U-M’s HPC clusters.

Locker can only be used for research data. It is tuned for large files (1MB or greater) but is capable of handling small files such as documents, spreadsheets, etc. Locker can be used in combination with the Globus data management sharing system for hosting and sharing data with external collaborators and institutes.

Locker is now available on a pilot basis. Potential pilot users should contact hpc-support@umich.edu.

Getting Started

Requesting or Modifying a Locker Storage Volume

Globus Server Endpoint

Locker can be made available on existing ARC-TS Globus servers to provide high-performance transfers, data sharing, and access to Locker from off campus. To access Locker via Globus, request that your Locker volume be added to Globus.

ARC-TS Compute System Support

Locker can be accessed from any ARC-TS compute service that supports the same data classifications as your export. To have your Locker export added to an ARC-TS resource, contact us with the export name and system name. Locker will be available on all login and data transfer nodes at a minimum.

Mounts will be located at
/nfs/locker/<export-name>/

Research groups may also request system group creation to control group access to Locker volumes.

Optional Features

Replication – (Recommended) Optional second copy of all data in a different geographic location.

Snapshots – (Highly Recommended) Tracking of how data in a volume changes over time, allowing users to recover deleted, modified, or otherwise damaged data.

Access snapshots at:
<mount-location>/.snapshots/<date>
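
For example, a deleted or damaged file can usually be copied back out of a dated snapshot. This is a minimal sketch; the mount location, date, and file names below are illustrative placeholders:

# list the snapshot dates available for this export
ls <mount-location>/.snapshots/
# copy a deleted file from a snapshot back into the live export
cp <mount-location>/.snapshots/<date>/data/results.csv <mount-location>/data/results.csv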

Using Locker

Mounting on Windows CIFS
Instructions provided when provisioned

Mounting on Linux NFS
Instructions provided when provisioned

Mounting on Apple OSX
Instructions provided when provisioned

Group Access Controls

Linux Set GID

Setting the SGID bit on a directory forces all files created in that directory to inherit the directory's group, even if the creating user's primary or effective group is different. Combined with the creation of a group on shared systems, this means all new files will by default be owned by, and accessible to, members of that group.

# list the groups your account belongs to
groups
# change the directory's group to the shared group
chgrp <groupname> <directory>
# set the SGID bit so new files inherit that group
chmod g+s <directory>
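
To grant the group access and confirm the bit is set (directory and group names are again placeholders), something like the following should work:

# optionally grant the group read/write/execute on the directory
chmod g+rwx <directory>
# verify: the group execute position shows 's' when SGID is set (e.g. drwxrws---)
ls -ld <directory>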

Windows AD Groups

Contact hpc-support@umich.edu

Policies

Small File Limitation

Locker’s target audience is research projects with massive data volumes stored in large files. Because of this design, each 1 TByte of Locker capacity provides only 1 million files; e.g., 10 TByte provides 10 million files. This works out to an average file size of 1 MByte.

Sensitive Data — ePHI/HIPAA/ITAR/EAR/CUI

Locker is not currently supported for ePHI or other sensitive data types. It is scheduled to be reviewed for such support at a later date.

System Abuse

Abuse of Locker, intentional or not, may result in performance or access being limited to preserve performance and access for other users. If this happens, staff will contact the affected users to engineer a solution.

Frequently Asked Questions

Q: How do I Check Locker Space and File Usage?
A: In a Linux or OSX terminal, use:

    Space: df -h <mount-path>
    Files: df -h -i <mount-path>

Q: Can Locker be Mounted on All ARC-TS Cluster Compute Nodes?
A: Currently we do not allow Locker to be mounted by very large numbers of clients. This could change in the future, so let us know if this would help you. In the meantime, we recommend using Globus to stage data between cluster scratch and Locker between runs. Globus provides a CLI, so this staging can be scripted.
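
A minimal sketch of such staging with the Globus CLI; the endpoint UUIDs, project, and export names below are placeholders for your own values:

# authenticate once per session
globus login
# copy a results directory from cluster scratch to a Locker export
globus transfer --recursive --label "stage results to Locker" \
  "<scratch-endpoint-uuid>:/scratch/<project>/results" \
  "<locker-endpoint-uuid>:/<export-name>/results"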

Q: Can I Simultaneously Access Locker from Linux and Windows?
A: Currently a Locker volume supports either NFS (Linux) or CIFS (Windows); Apple OSX supports both protocols. Simultaneous NFS and CIFS access to the same volume is known as multi-protocol access. Because Linux and Windows have different permissions schemes, this is complex to manage; we don’t currently support it on Locker, but we do support it on Turbo. To work around this, we recommend using Globus to ease data movement between Locker and systems that cannot mount it natively.

Q: Why can’t we use Locker as general purpose storage?
A: Maintaining performance, encryption, professional support, and a low cost means that Locker’s design is not well suited for general-purpose primary storage. For that, see the Turbo and MiStorage services.

Q: I deleted data but Locker still reports full?
A: Likely your export has snapshots enabled. Snapshots store changes to Locker exports over time, so deleted data is just ‘moved’ into a snapshot. Eventually snapshots age out and free space on their own, but snapshot consumption does count against the volume’s used space. To delete or disable snapshots and free space early, contact support.

Q: I have free space but Locker reports full?
A: Likely you have reached your file quota because your average file size is smaller than 1 MByte. This use case is outside Locker’s design, and the small files should be moved to another storage service.
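
One rough way to check the average file size on an export, assuming GNU find as found on most Linux systems (the mount path is a placeholder, and the scan can be slow on large exports):

# count files and report their average size in MB
find <mount-location> -type f -printf '%s\n' | awk '{ n++; s += $1 } END { if (n) printf "%d files, average %.2f MB\n", n, s / n / 1048576 }'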

Q: I don’t see my .snapshots folder?
A: Your volume might not have snapshots enabled. If it does, .snapshots is a hidden directory; in Linux and OSX terminals, use ls -a to view all files, including hidden ones. How to show hidden files in the OSX and Windows graphical interfaces varies by version and can be found in their documentation and online.

Q: My volume shows 2x the size I requested!
A: The system Locker is built on tracks all copies of data in its file system. If a volume has replication enabled (two copies of all data), the total space reported includes both the primary and the replica copy. Thus 1TB of new data will consume 2TB of Locker space.

Advanced Topics

System Configuration

Locker consists of two DDN GS14KX-E GRIDScaler clusters running IBM Spectrum Scale. The clusters are located in different data centers, with dedicated fiber for data replication between the two sites. Each GS14KX-E cluster can hold 1,680 hard drives, for a usable capacity of 10PB using 8TByte drives. Each hard drive is 7,200RPM, self-encrypting, and can be added to the system online. If the system reaches capacity, additional GS14KX-E clusters can be added for more performance and capacity.

By not including dedicated metadata or flash/NVMe storage, we are able to keep the cost of Locker lower than that of other solutions such as Turbo. As a result, Locker will not perform well with small I/O operations; it is built for capacity, which is why we offer both services. The GS14KX-E does support adding NVMe/flash for metadata and tiering at a later date, should the price of such devices become more reasonable.

Locker is directly connected to the Data Den archive via dedicated data movers, and to the ARC-TS research network by two IBM Cluster Export Services (CES) nodes, also known as protocol nodes. Each CES node has a 100Gbps network connection, and the pair works in an active-active high-availability configuration. Outside the ARC-TS network, performance is limited to 40Gbps by the campus backbone.

Citing and Grants

Order Service

Locker is now available on a pilot basis. Potential pilot users should contact hpc-support@umich.edu.

The rate for Locker will be $40.09 per terabyte per year.

Contact hpc-support@umich.edu with any questions.

 

To order Locker, the following information is required:

  • Amount of storage needed (1TB increments, 10TB minimum)
  • MCommunity Group name (group members will receive service-related notification, and can request service changes)
  • Shortcode for billing
  • NFS
    • Hostnames or IP addresses for each permitted user on the wired U-M network. (If forward and reverse records exist in DNS, please use the fully qualified hostname. If the records do not exist, provide the IP address.)
    • Numeric user ID of person who will administer the top level Locker directory and grant access to other users (see the lookup example after this list)
  • CIFS
    • UMROOT AD Group Name
  • Specify if regulated or sensitive data will be used
  • Specify if your Locker account should be accessible on the Flux HPC cluster
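
If you are unsure of a numeric user ID, it can be looked up with the id command on any Linux system where that person has an account; the uniqname below is a placeholder:

# print the numeric user ID (UID) for a uniqname
id -u <uniqname>
# print UID, primary group, and group memberships
id <uniqname>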

Fill out this form to order Locker CIFS.

Fill out this form to order Locker NFS.


OSiRIS


Open Storage Research Infrastructure (OSiRIS) is a collaboration between U-M, Wayne State University, Michigan State University, and Indiana University to build a distributed, multi-institutional storage infrastructure that allows researchers at any of the three Michigan campuses to read, write, manage, and share large amounts of data directly from their computing facility locations on each campus.

By providing a single data infrastructure that supports computational access to data “in place,” OSiRIS meets many of the data-intensive and collaboration challenges faced by our research communities and enables these communities to easily undertake research collaborations beyond the borders of their own universities.

OSiRIS will use commercial off-the-shelf hardware coupled with CEPH software to build a high-performance software-defined storage system. The system is composed of a number of building-block components: storage head-nodes with SSDs plus a 60-disk SAS shelf, a two-host RHEV cluster, Globus Connect servers, a perfSONAR network monitoring node, and reliable, OpenFlow-capable switches.

OSiRIS will deploy a software-defined storage service (i.e., commodity hardware with storage logic abstracted into a software layer) for our universities, using the CEPH Storage Cluster as the primary means of organizing the storage hardware required.

OSiRIS is funded by a grant from the National Science Foundation; the Principal Investigator is Shawn McKee, Research Scientist in the Department of Physics and the Director of the Center for Network and Storage-Enabled Collaborative Computational Science (CNSECCS). CNSECCS and OSiRIS are operated under the auspices of the Michigan Institute for Computational Discovery and Engineering (MICDE).

 


Turbo Research Storage


Turbo is a high-capacity, fast, reliable, and secure data storage service that allows investigators across U-M to connect their data to the computing resources necessary for their research, including U-M’s Flux HPC cluster. Turbo supports the storage of sensitive data and can be used with ARC-TS’s Armis cluster.

Turbo can only be used for research data. It is tuned for large files (1MB or greater) but is capable of handling small files such as documents, spreadsheets, etc. In combination with Globus sharing, Turbo works well for hosting and sharing data with external collaborators and institutes.

Turbo costs $19.20 per terabyte per month, or $230.40 per terabyte per year, for replicated data. The cost for unreplicated data is $9.60 per terabyte per month, or $115.20 per terabyte per year. A U-M shortcode is required to order.

Researchers in the Medical School and College of Literature, Science, and the Arts can take advantage of free or subsidized storage options through their respective academic units.

Order Service

To order Turbo, the following information is required:

  • Amount of storage needed (1TB increments)
  • MCommunity Group name (group members will receive service-related notification, and can request service changes)
  • Shortcode for billing
  • NFS
    • Hostnames or IP addresses for each permitted user on the wired U-M network. (If forward and reverse records exist in DNS, please use the fully qualified hostname. If the records do not exist, provide the IP address.)
    • Numeric user ID of person who will administer the top level Turbo directory and grant access to other users
  • CIFS
    • UMROOT AD Group Name
  • Specify if regulated or sensitive data will be used
  • Specify if your Turbo account should be accessible on the Flux HPC cluster

Fill out this form to order Turbo CIFS.

Fill out this form to order Turbo NFS.