Overview
This overview page will help you get started with learning about and using the various services offered at ACER.
What is ACER?
The Advanced Cyberinfrastructure for Education and Research (ACER) strives to provide researchers, and their collaborators around the globe, with a broad array of computational resources and data-related services to meet their technical needs. These services include, but are not limited to, access to high-performance computing clusters, big data analytics clusters, collaborative research data storage, secure research environments, and high-performance networking. In addition to providing resources, ACER offers consulting services to help researchers with their individual projects. Whether that means building architecture to securely transfer digital information, analyzing data from instruments, sensors, and devices, or implementing software-defined networking projects, ACER is committed to furthering UIC’s research ambitions holistically.
What is High Performance Computing?
High-performance computing (HPC) is the practice of aggregating computing power to run large-scale, computationally demanding applications with much higher performance and throughput than a single laptop or workstation can deliver. This kind of computing power allows you to efficiently solve complex problems in science, engineering, and business.
Research Complexities solved by HPC
- Shorter Processing Times: Processing times for complex tasks are dramatically reduced when the work is run on an HPC cluster, leading to faster and more efficient research output.
- Higher Throughput: Because HPC resources combine large memory and storage capacities with aggregated computing power, large volumes of data can be processed in parallel, allowing more work to be completed in the same amount of time.
- Faster Data Transfer: Computing resources at ACER offer fast, real-time data transfer using InfiniBand interconnects, which are better optimized for high-performance workloads than traditional Ethernet.
HPC Workflow
The following steps illustrate a typical job workflow in a High Performance Computing environment:
Requesting an account on the Cluster
A user can obtain access to the cluster by (a) requesting a new allocation for a project or department, (b) requesting a new project under an existing allocation, or (c) joining a project on an existing allocation.
There are three types of user accounts on the cluster: Research User, Principal Investigator, and Education User.
For more information and to request access to the cluster, please refer to the Request Access page.
Logging into the Cluster
Once you have an account set up on either Extreme or SABER, you can log in to one of the login nodes using an SSH client (on Windows) or the terminal application (on Linux/macOS).
$ ssh netid@login-1.extreme.acer.uic.edu
On Extreme, a user with an account can use the two login nodes (login-1, login-2) to access the CPU compute nodes. A third login node (login-3) is reserved for access to GPU resources.
On SABER, there are two CPU-only login nodes (login-1, login-2). GPU nodes will be available soon.
Each login session on the cluster is protected by Duo two-factor authentication (2FA). To set up a 2FA account, please follow the instructions provided here.
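If you log in frequently, you can save the connection details in your local OpenSSH configuration so that future logins only need a short alias. The sketch below assumes the Extreme hostname shown above and the standard OpenSSH client; the alias name is arbitrary.

# In ~/.ssh/config on your local machine (the alias "extreme" is arbitrary)
Host extreme
    # Replace "netid" with your UIC NetID
    HostName login-1.extreme.acer.uic.edu
    User netid

After saving this, you can connect with:
$ ssh extreme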
User Workspace and Directories
All users with an account on Extreme/SABER are assigned an individual and a group space where they can store files or run programs.
Home directory:
The home directory is used to store personal data and/or user-specific configurations. This is the default starting point for a user when they log into their account.
Storage limits for the home directory (a quick way to check your current usage is sketched below):
- Soft storage limit: 10 GB. Exceeding this limit starts a 7-day grace period during which you must clear space; if you do not, you will no longer be able to write data into the directory.
- Hard storage limit: 15 GB. Beyond this limit, neither you nor your jobs will be able to write files into the directory.
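Quota-reporting commands vary by filesystem, so as a filesystem-agnostic sketch, you can check how much space your home directory is using with standard tools:

$ du -sh ~               # total size of your home directory
$ du -sh ~/* | sort -h   # size of each top-level item, smallest to largest (hidden files are not included)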
Project Directory:
This is a larger storage space, accessible from both Extreme and SABER, that should be used for the majority of your HPC workload. Each project directory has access to 250GB of scratch space, which can be used to store large job outputs and source files. The project space allows for group collaboration: each member has their own sub-directory, and the group has a common directory for shared files. Users affiliated with multiple HPC projects will have a project directory for each, so they can easily share files within and across the appropriate groups.
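The exact mount points are not specified here, so the sketch below only illustrates the structure described above; every path and name in it is hypothetical.

/projects/example_lab/       # hypothetical project root for the group
    shared/                  # common directory for files shared by all members
    alice2/                  # per-member sub-directory (one per group member)
    bob3/
    scratch/                 # scratch space for large job outputs (its location here is an assumption)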
Requesting / Running Software on the Cluster
Once you are logged into a cluster through a login node, you can access the various software and tools installed on that cluster.
Using pre-installed software on the Cluster:
Both Extreme and SABER have a wide variety of applications and tools installed on the cluster that can be accessed through the use of environment modules. This enables easy loading/unloading of specific software according to your needs. The list of available software on the cluster can be accessed using the command
$ module avail
You can also explore a more extensive list of available software here: Software Directory
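A typical environment-modules session looks like the sketch below; the module name and version are illustrative, so check module avail for what is actually installed on the cluster.

$ module avail               # list all available software modules
$ module load python/3.9     # load a module into your environment (name/version illustrative)
$ module list                # show the modules currently loaded
$ module unload python/3.9   # remove a module when you are done with it
$ module purge               # unload all currently loaded modules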
Installing your own software:
If there is a software package specific to your needs, you may install it locally within your home or project directory. Please contact our support staff at acer@uic.edu for instructions on how to do this. You are advised not to run programs on the login nodes, as they are a shared resource and doing so may impact the performance of other users.
** Login nodes are solely meant for submitting jobs **
Users can also request that new software be installed on the cluster here.
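If you do install software yourself, one common pattern (a sketch only, assuming a Python module is available on the cluster; paths and package names are illustrative) is to use a Python virtual environment inside your home or project directory:

$ module load python                     # load a Python module first (exact name may differ)
$ python3 -m venv ~/envs/myproject       # create a virtual environment in your home directory (path illustrative)
$ source ~/envs/myproject/bin/activate   # activate the environment for this shell session
$ pip install numpy                      # packages install into the environment, not system-wide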
Submitting Jobs on the Cluster
To fully exploit the computing power of a high-performance cluster, programs are submitted as jobs and run across multiple processors and other system resources. The cluster uses the Moab Cluster Suite to schedule user-submitted jobs, allocate system resources, and process the jobs.
A job is submitted in the form of a PBS script, which lets the user specify the hardware resources to be used while processing the job. Once a job completes, an output file and an error file are generated with the results of the computation.
Please refer to the Knowledge Base for instructions on how to request job resources, create and submit job scripts, and monitor your submitted jobs.
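A minimal sketch of a PBS job script is shown below; the resource requests, module name, and script name are illustrative only, so consult the Knowledge Base for the directives, queues, and limits that apply on Extreme and SABER.

#!/bin/bash
#PBS -N my_job                 # job name
#PBS -l nodes=1:ppn=4          # request 1 node with 4 processors (values illustrative)
#PBS -l walltime=01:00:00      # maximum run time (hh:mm:ss)
#PBS -j oe                     # merge the error stream into the output file

cd $PBS_O_WORKDIR              # start in the directory the job was submitted from
module load python             # load whatever software the job needs (name illustrative)
python my_script.py            # run the actual computation

The script is then submitted to the scheduler, typically with qsub (or msub on Moab-based systems), which returns a job ID used by the monitoring commands below:
$ qsub my_job.pbs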
Monitoring a Job
The cluster provides various tools and commands to monitor your jobs (an example sequence follows the list):
- See the status of all your submitted jobs: $ showq -u
- Check the status of a particular job: $ checkjob
- Find out the approximate start time for a particular job: $ showstart
- Cancel a particular job: $ canceljob
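For example, a typical monitoring sequence might look like the following; the job ID is hypothetical (it is printed when the job is submitted), and $USER expands to your username:

$ showq -u $USER      # list your queued and running jobs
$ checkjob 123456     # detailed status of job 123456
$ showstart 123456    # estimated start time if the job is still queued
$ canceljob 123456    # cancel the job if it is no longer needed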
For more information on how to manage Jobs on Extreme, please refer to the Knowledge Base.
We're here to Help
As part of your collaboration with ACER, we will provide support in all phases of the technology life cycle, including:
- Working with you to determine what type(s) of compute resources best meet your research needs
- Negotiating competitive prices with a range of top-tier vendors through bulk purchasing
- Hosting your dedicated compute resources in a secure on-campus data center
- Active, ongoing support, updates, and maintenance by a team of professional system administrators and engineers