HPC (High Performance Computing)
Created: 22 March 2022
- Unit: Faculty of Mathematics and Informatics
- Keywords: supercomputer, high performance computing, distributed computing, HPC, CPU, GPU
The HPC (High Performance Computing) supercomputer is composed of the following clusters (in the Nodes column, the first figure is the number of nodes actually available):
Cluster     | Nodes (available/total) | CPUs per node | GPUs per node | RAM per node (system / per GPU)
Main (CPU)  | 35/36                   | 48            | 0             | 384 GiB
GPU         | 3/3                     | 40            | 8             | 512 GB / 32 GB
Power (GPU) | 2/2                     | 32            | 4             | 1024 GB / 32 GB
All infrastructure requests are evaluated in terms of CPU/GPU hours. When calculations run on multiple nodes or cards, the number of hours is multiplied by the number of nodes/cards (a worked example follows this list).
- The 1-hour CPU compute package consists of 1 CPU + 8 GB RAM, with unlimited disk capacity but no data backup.
- The 1-hour GPU compute package consists of 1 V100 card with 32 GB of graphics memory + 5 CPUs and 64 GB of system memory, with unlimited disk capacity but no data backup.
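A minimal sketch of this accounting model, assuming Python and that billed hours scale linearly with the number of CPUs/cards (the function names are illustrative, not an official billing tool):

    def cpu_hours(n_cpus: int, wall_hours: float) -> float:
        # One CPU package = 1 CPU + 8 GB RAM, so a job is billed per CPU used.
        return n_cpus * wall_hours

    def gpu_hours(n_gpus: int, wall_hours: float) -> float:
        # One GPU package = 1 V100 (32 GB) + 5 CPUs + 64 GB RAM, billed per card.
        return n_gpus * wall_hours

    # Example: 10 wall-clock hours on two full Main-cluster nodes (2 x 48 CPUs).
    print(cpu_hours(2 * 48, 10))  # 960 CPU hours
    # Example: 10 wall-clock hours on four V100 cards.
    print(gpu_hours(4, 10))       # 40 GPU hours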
All computing resources are connected via a low-latency 100 Gbit/s InfiniBand network.
Use cases
- Development of data analysis models in an HPC environment. This includes initiating the data analysis process, preparing the data, developing algorithms suited to the analysis, and evaluating them experimentally: data cleaning, outlier detection, classification, clustering, association rules, and time series algorithms. Tools: R, Weka, Octave, etc. (a Python sketch of such a workflow follows this list).
- Optimisation of business process data representation and analysis, and development of analytical algorithms, in an HPC environment. This is the visual analysis of data generated by business processes, using mapping and visualisation methodologies common in data and process analytics; it covers structural, visual, and descriptive analysis of a business process in its application environment, taking into account the multidimensionality of the data and the complexity of the processes. Tools: R, Weka, Octave, Gaussian, GAMESS, etc.
- Development of algorithms for massive data management and analysis using cloud computing. This includes developing "big data" databases and evaluating them experimentally in HPC and cloud computing environments. Tools: NoSQL databases, Hadoop, BigTable, etc.
- Development of sensor and streaming data management and analysis algorithms in HPC and cloud computing environments. This includes implementing continuously changing and periodically updated data models and time series similarity algorithms, studying key temporal properties, and applying the algorithms to sensor data and continuously varying geographic data. Tools: PostgreSQL, TPR indexes, streaming databases, etc.
- Migration of business processes and data analysis to cloud computing. Customisation and experimental evaluation of business systems moving to cloud computing environments; implementation of, and consulting on, commercial or open-source cloud platforms. Tools: S3, OpenNebula, OpenStack, VMware (ESXi), Hyper-V, XenServer, KVM, OpenVZ, VirtualBox, etc.
- Image and audio processing computations and algorithm development. This includes feature extraction from conventional (photo and video) and medical images, development of algorithms for fast content-based image retrieval, and mass processing. Tools: FFTW, WT, POV-Ray, etc.
- Expert assessment and IT consultancy. Expert evaluation of, and consultancy on, IT products, services, and solutions (development, implementation, operation, etc.), applying the latest research results and involving VU MIF researchers.
- Expert assessment, consultancy, and training on cyber security. Analysis of business processes in the context of cyber security; development of simulation environments for detecting and assessing breaches, involving VU MIF researchers.
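As referenced in the first item above, here is a minimal sketch of a typical data-analysis workflow (outlier detection followed by clustering), assuming Python with scikit-learn as a stand-in for the R/Weka/Octave tooling; the synthetic data and parameter choices are illustrative only:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    data = rng.normal(size=(1000, 8))  # placeholder for real input data

    # Flag likely outliers first (fit_predict returns -1 for outliers).
    mask = IsolationForest(random_state=0).fit_predict(data) == 1
    clean = data[mask]

    # Cluster the cleaned rows into five groups.
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(clean)
    print(f"kept {mask.sum()} of {len(data)} rows; cluster sizes: {np.bincount(labels)}")

On the cluster, such a script would be run as a batch job, with its runtime accounted against the CPU-hour packages described above.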
Responsible person: , tel. +370 5 2195005