SABER
Shared Analytics & Big-data Enterprise Resource
**SABER has been decommissioned.** The new HPC cluster, Lakeshore, has replaced SABER, and a webpage with more details will be available soon. There are two ways to get access to the new cluster:
- Request free compute resources through the Chicago Computes program
- Purchase into the HPC Partnership program to reserve exclusive nodes for your lab or department
To meet the 21st century's big-data challenges and advance data science research, the Shared Analytics & Big-data Enterprise Resource (SABER) was added to the ACER HPC resource offerings in May 2017. In addition to traditional HPC workloads, this NSF-funded cluster provides access to a variety of Big-Data/Analytics software packages.
SABER is UIC's first fee-for-service HPC resource: researchers can either buy service time or purchase dedicated nodes for exclusive use. With over one petabyte of research storage, SABER allows researchers to run computation and analysis on massive datasets from heterogeneous sources.
Important: SABER was decommissioned in December 2023. Researchers looking for a similar access model are invited to use our AWS HPC in the Cloud resource.
Cluster Configuration
With over 75 nodes and one petabyte of research storage, SABER allows researchers to run computation and analysis on massive data sets from heterogeneous sources and calculate results at run time. A fast interconnect supports high-speed inter-node communication; results written to scratch storage should be transferred to persistent storage as soon as possible.
Computation
The SABER cluster consists of traditional high-performance computing nodes:
- 76 x HPC Nodes
- Intel Xeon E5-2650 @ 2.20GHz
- 24 x Cores/Node with 30 MB cache
- 128GB RAM
- 1TB Local Storage
- Operating system on each node: CentOS 7.3
Storage
SABER utilises two file storage systems: NFS and Lustre.
- NFS Storage: A high-capacity system with 524 TB (roughly 0.5 PB) of raw persistent storage. All user home directories and group shares reside here.
- Lustre Storage: A 760 TB fast scratch file system in a RAID 6 configuration, communicating with the nodes over QDR InfiniBand. It should be used exclusively at run time; results should be transferred to persistent storage as soon as possible.
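The scratch-then-transfer workflow described above can be sketched as a simple job epilogue script. This is a minimal illustration, not a documented SABER procedure: the scratch and home paths are placeholders (simulated here with temporary directories so the sketch runs anywhere), and `job42` is a hypothetical job name.

```shell
#!/bin/sh
# Sketch: write results to fast Lustre scratch during the run, then copy
# them to persistent NFS storage as soon as the job finishes.
# Temp dirs stand in for the real (site-specific) mount points.
SCRATCH=$(mktemp -d)   # placeholder for a per-job Lustre scratch dir
PERSIST=$(mktemp -d)   # placeholder for the user's NFS home/group share

# Pretend a job wrote its results to scratch
mkdir -p "$SCRATCH/job42"
echo "energy=-12.7" > "$SCRATCH/job42/results.dat"

# Transfer results to persistent storage, then free the scratch space
cp -r "$SCRATCH/job42" "$PERSIST/"
rm -rf "$SCRATCH/job42"

cat "$PERSIST/job42/results.dat"
```

In practice the copy step would typically run at the end of the batch script, so scratch holds data only for the lifetime of the job.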
Interconnect: SABER uses Mellanox FDR InfiniBand, which supports inter-node communication at 56 Gb/s, to form a high-speed internal network. The fabric combines EDR spine and FDR leaf switches with a 2:1 blocking factor.