HPC Cluster Documentation

Welcome to the documentation for Apollo2, our High Performance Computing (HPC) cluster: a mid-sized facility comprising almost 3,000 CPU cores distributed across ~120 compute nodes.

Like many HPC systems, Apollo2 is a well-connected network of fairly inexpensive processors. Its power therefore comes not from the technical specifications of any individual node, but from the large number of cores that can work simultaneously on a given task.

You can use the cluster to accelerate your computations or analyses by making use of a distributed parallel computer. To get the most out of the cluster, though, you need to design your workload so that it can run in parallel.
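As a minimal, generic illustration of this kind of workload design (the task here is just a stand-in for real work, and this is a plain shell sketch rather than an Apollo2-specific recipe), independent tasks can be launched concurrently and their results gathered once they all finish:

```shell
# Embarrassingly parallel sketch: four independent tasks run at the
# same time, one background job each, then the results are collected.
workdir=$(mktemp -d)
for i in 1 2 3 4; do
  ( echo "task $i done" > "$workdir/result_$i.txt" ) &   # one background job per task
done
wait                                                     # block until every task finishes
cat "$workdir"/result_*.txt                              # gather the results
```

On a real cluster the per-task command would be your analysis program, and the launching would usually go through the batch system rather than plain background jobs; the key property is the same: the tasks must not depend on each other's output.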

To access the system, you connect to a login node called apollo2, which acts as a gateway to the compute resources.
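Access is typically over SSH. A hypothetical `~/.ssh/config` entry can make connecting convenient; note that `apollo2` is the short name used on this page, and your site may require a fully qualified hostname:

```
# Hypothetical ~/.ssh/config entry -- adjust HostName and User to your site.
Host apollo2
    HostName apollo2
    User your-username
```

With this in place, `ssh apollo2` opens a session on the login node.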

Users are given a workspace (the HPC home directory) to store their HPC-research-related data; this space is backed up every night. They are also given a directory under our parallel file system (Lustre), a non-backed-up scratch space intended for parallel I/O, typically many processes reading from and writing to the same large file simultaneously.
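The intended usage pattern can be sketched as follows. The directory names below are placeholders (`mktemp` stands in for the real mount points, which are not given on this page): curated data lives in the backed-up home directory, heavy parallel I/O happens on the non-backed-up Lustre scratch, and results worth keeping are copied back home.

```shell
# Placeholder directories standing in for the real mount points.
home_dir=$(mktemp -d)      # stands in for the backed-up HPC home directory
scratch_dir=$(mktemp -d)   # stands in for the non-backed-up Lustre scratch space

echo "input data" > "$home_dir/input.dat"      # keep the master copy in home (backed up)
cp "$home_dir/input.dat" "$scratch_dir/"       # stage it to scratch before a parallel run
# ... run the parallel job against $scratch_dir/input.dat ...
echo "results" > "$scratch_dir/output.dat"     # the job writes its output on scratch
cp "$scratch_dir/output.dat" "$home_dir/"      # copy results you want kept back to home
```

Since scratch is not backed up, anything left only on Lustre can be lost; treat it as fast working space, not long-term storage.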