“VU HPC Saulėtekis”


The BullSequana X1000 GPU cluster of the supercomputer “VU HPC Saulėtekis”.

  • 2 GPU compute nodes.

Each node has 64 compute cores with the AVX-512, AVX2, AVX, and SSE4.2 instruction sets, 4 Tesla V100 GPU cards, and 376 GB RAM. Expected physical availability of the compute nodes: 50%.

Nodes are connected to the other cluster nodes and to other clusters via a 100 Gbps InfiniBand (IB) network.

The entire system uses a high-performance file system: 46 TB (at 100 Gbps over the IB network) and 289 TB (at 40 Gbps over the IB network). The maximum file size for a task is 200 TB.

Task queues: SLURM
Operating system: Red Hat Enterprise Linux release 8.4

Architecture: Sequana X1000.

Possibilities of use:

OpenMP- and MPI-type computations with NVIDIA GPU technology, choosing the number of nodes, cores, and memory required for the task, in a Linux RHEL8 environment.
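
As an illustration only (not an official VU HPC example), the following is a minimal sketch in C of a hybrid MPI + OpenMP program that also probes the node's GPUs through the CUDA runtime API; the build line in the comment is an assumption and the exact compilers, modules, and library paths are site-specific:

    /* Minimal sketch of a hybrid MPI + OpenMP job that checks which
     * GPUs a node exposes via the CUDA runtime API.
     * Assumed build line (site-specific; add CUDA include/library paths):
     *   mpicc -fopenmp probe.c -lcudart */
    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int ngpus = 0;                /* up to 4 Tesla V100 per GPU node */
        if (cudaGetDeviceCount(&ngpus) != cudaSuccess)
            ngpus = 0;

        #pragma omp parallel
        {
            #pragma omp single        /* print once per MPI rank */
            printf("rank %d of %d: %d OpenMP threads, %d visible GPU(s)\n",
                   rank, size, omp_get_num_threads(), ngpus);
        }

        MPI_Finalize();
        return 0;
    }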

The BullSequana X1000 cluster of the supercomputer “VU HPC Saulėtekis”.

  • A distributed memory cluster.

A task may use at most 143 compute nodes (each node has 64 compute cores with the AVX-512, AVX2, AVX, and SSE4.2 instruction sets and 376 GB RAM; 143 × 64 = 9,152 cores in total). Expected physical availability of the compute nodes: 80%.

Nodes are connected to the other cluster nodes and to other clusters via a 100 Gbps InfiniBand (IB) network.

The entire system uses a high-performance file system: 46 TB (at 100 Gbps over the IB network) and 289 TB (at 40 Gbps over the IB network). The maximum file size for a task is 200 TB.

Task queues: SLURM

Operating system: Red Hat Enterprise Linux release 8.4

Architecture: Sequana X1000.

Possibilities of use:

OpenMP- and MPI-type computations, choosing the number of nodes, cores, and memory required for the task, in a Linux RHEL8 environment. A virtual machine with a Windows environment can be provided for special needs.
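
As a hedged sketch (not official VU HPC code), the C program below sizes its OpenMP thread pool from SLURM_CPUS_PER_TASK, a standard environment variable that the SLURM scheduler sets when cores are requested for a task; outside a SLURM job the variable is unset, so the program falls back to a single thread:

    /* Minimal sketch: size the OpenMP thread pool from the core count
     * that SLURM granted the task. SLURM_CPUS_PER_TASK is a standard
     * SLURM variable; it is unset outside a SLURM job. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const char *cpus = getenv("SLURM_CPUS_PER_TASK");
        int nthreads = cpus ? atoi(cpus) : 1;
        if (nthreads < 1)
            nthreads = 1;             /* guard against a malformed value */
        omp_set_num_threads(nthreads);

        #pragma omp parallel
        {
            #pragma omp single
            printf("running with %d OpenMP threads\n", omp_get_num_threads());
        }
        return 0;
    }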