Extreme

Launched on January 29, 2014, as part of the cross-departmental HPCC initiative, Extreme has served as UIC’s flagship supercomputing resource for numerous departments and colleges. The original configuration comprised 163 compute nodes and 1 petabyte of aggregate storage. In December 2014, the cluster was augmented with an additional 40 compute nodes.

In 2019, Extreme was augmented again with a 50-node NVIDIA GPU cluster for CUDA-accelerated applications.
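
For a sense of the kind of workload these GPU nodes target, here is a minimal CUDA sketch, an illustrative vector addition; nothing in it is specific to Extreme’s software stack, and the array size is arbitrary:

```
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: element-wise vector addition.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;          // 1M elements (arbitrary example size)
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the example short; explicit cudaMemcpy also works.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);    // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```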

The Extreme HPC resource was purchased through a combination of central funds within the Provost’s office for common infrastructure and per-department purchases for groups of compute nodes. Those who are interested in purchasing resources on Extreme should visit the Resource Pricing page for more information.

  • 5,040 Cores for High Performance Computing

  • 37 TB High Performance Memory

  • 1 PB Total Local Raw Storage

Cluster Configuration

Extreme is a powerful machine with 3,500 cores, 24.5 terabytes of memory, and 1.25 petabytes of local raw storage, of which 275 terabytes is raw fast scratch storage. Extreme supports more than 150 researchers across 25 research groups from 5 colleges. The result of unprecedented collaboration among several stakeholders, Extreme is built on a partnership model in which multiple departments and colleges invested to allow their affiliated faculty to use the resource.

The following provides a brief overview of the configuration of Extreme:

The cluster comprises traditional, high-memory, and GPU compute nodes.

  • 212× HPC worker nodes
    • 160× Generation 1: Intel Xeon E5-2670 @ 2.60 GHz, 20 MB cache, 16 cores per node; 128 GB RAM and 1 TB local storage
    • 40× Generation 2: Intel Xeon E5-2670 @ 2.50 GHz, 20 MB cache, 20 cores per node; 128 GB RAM and 1 TB local storage
    • 12× Generation 4: Intel Xeon Gold 5218 @ 2.30 GHz, 22 MB cache, 32 cores per node; 192 GB RAM and 1.2 TB local storage
  • 3× Generation 1 High Memory Worker Nodes: Intel Xeon E5-4650L @ 2.60 GHz, 20 MB cache, 32 cores per node; 1 TB RAM
  • 50× Generation 2 GPU Worker Nodes: 4× Tesla P100 GPUs; Intel Xeon E5-2650 v4 @ 2.20 GHz, 30 MB cache, 24 cores per node; 128 GB RAM and 1 TB local storage (a device-query sketch follows this list)
  • 3× Login Nodes
  • 2× Admin Nodes
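
When a job lands on one of the GPU worker nodes, a short CUDA runtime query can confirm what hardware is visible; on the Generation 2 GPU nodes above it should report four Tesla P100 devices. This is a generic device-query sketch, not an Extreme-specific tool:

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);   // expect 4 on the GPU worker nodes above
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // Name, global memory, and SM count for each visible GPU.
        printf("GPU %d: %s, %.1f GB, %d SMs\n",
               d, prop.name, prop.totalGlobalMem / 1e9, prop.multiProcessorCount);
    }
    return 0;
}
```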

Extreme Cluster Storage includes both NFS and Lustre filesystems.

  • NFS Storage – High-capacity persistent storage with 1.14 PB raw capacity. All user home directories and group shares reside here.
  • Lustre Storage – 288 TB of fast scratch storage in a RAID 6 configuration, communicating with the nodes over QDR InfiniBand. Scratch should be used only at run time, and results should be transferred off as soon as possible (a staging sketch follows this list).
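
A job respecting this scratch policy writes intermediate output to Lustre and copies final results to persistent NFS storage once the run completes. A minimal host-side sketch; the /lustre/scratch and /home paths below are hypothetical placeholders, since the actual mount points are not given here:

```
#include <filesystem>
#include <fstream>
#include <iostream>

namespace fs = std::filesystem;

int main() {
    // Hypothetical mount points; substitute the cluster's real paths.
    fs::path scratch = "/lustre/scratch/username/run42/results.dat";
    fs::path home    = "/home/username/results/results.dat";

    // Write results to fast Lustre scratch during the run...
    fs::create_directories(scratch.parent_path());
    std::ofstream(scratch) << "example results\n";

    // ...then copy them to persistent NFS storage as soon as the run finishes.
    fs::create_directories(home.parent_path());
    fs::copy_file(scratch, home, fs::copy_options::overwrite_existing);
    std::cout << "staged " << scratch << " -> " << home << "\n";
    return 0;
}
```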

Interconnect: Extreme uses a QDR InfiniBand high-speed network; nodes communicate with one another through high-bandwidth InfiniBand switches.
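
Distributed jobs typically exercise this fabric through MPI. Below is a minimal ping-pong sketch, assuming an MPI library is available on the cluster (no particular implementation or module is implied); launched with one rank per node, the messages traverse the InfiniBand fabric:

```
#include <mpi.h>
#include <cstdio>

// Minimal ping-pong between ranks 0 and 1; with one rank per node,
// the message travels over the QDR InfiniBand interconnect.
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char msg[16] = "ping";
    if (rank == 0) {
        MPI_Send(msg, sizeof(msg), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(msg, sizeof(msg), MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 got reply: %s\n", msg);
    } else if (rank == 1) {
        MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send("pong", 5, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
```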

The traditional HPC and high-memory worker nodes currently run CentOS 6.9; the GPU worker nodes currently run CentOS 7.5.