© 2026 Technical University of Košice, all rights reserved.
Assistive Technologies

The operation of the PERUN supercomputer requires a precisely engineered technical foundation that ensures reliable, secure, and energy-efficient performance.

The supporting technologies form an integral part of the Supercomputing Center’s infrastructure at the Technical University of Košice (TUKE) and include power supply, cooling, and environmental control systems within the data hall.

    1. Data Center Power Supply

    The power system of the HPC PERUN infrastructure is designed with a focus on high reliability, redundancy, and uninterrupted operation.

    Electrical energy is supplied from two independent transformers, ensuring stability even in the event of a failure on one branch.

    Backup power is provided by a Vertiv UPS system complemented by a battery array, which maintains power during short outages or during the startup of the CGM motor generator (900 kVA).

    This setup enables seamless transitions between power sources without any interruption to the operation of computing nodes, storage systems, or networking equipment.
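    The failover sequence described above can be sketched as a simple timeline model: the UPS battery bridges the gap between a mains failure and the generator coming online. The timings below are illustrative assumptions, not published TUKE specifications.

```python
# Hypothetical timings -- the actual values are not published in the text.
GENERATOR_STARTUP_S = 15.0   # assumed start-up time of the 900 kVA motor generator
UPS_AUTONOMY_S = 300.0       # assumed battery autonomy at full load

def active_source(t_after_mains_loss: float) -> str:
    """Which source carries the load t seconds after a mains failure."""
    if t_after_mains_loss < 0:
        return "mains"
    if t_after_mains_loss < GENERATOR_STARTUP_S:
        return "UPS battery"       # battery bridges the generator start-up
    return "motor generator"       # generator online, UPS recharges

# The transition is seamless as long as the battery outlasts the start-up.
assert UPS_AUTONOMY_S > GENERATOR_STARTUP_S
```

    The key design property is the inequality in the last line: as long as battery autonomy exceeds generator start-up time, the computing load never sees an interruption.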

    2. HPC Cooling System

    Given the high computational density of the infrastructure, the cooling system is designed as a water-based, redundant, and fully automated solution.

    The primary cooling medium is a water–glycol mixture, ensuring optimal heat dissipation even under demanding operating conditions.

    Three STULZ cooling units (chillers) are installed outside the Supercomputing Center, operating in automatic redundant mode.

    Cooling is divided into two circuits:

    • Cold-water circuit – cools HPE ARC in-rack cooling units, which remove heat directly from the racks.
    • Warm-water circuit – delivers cooling through heat exchangers, mixers, and distribution manifolds to both the universal computing partition and the accelerated GPU partition.

    This configuration enables precise thermal balance control and maximizes the energy efficiency of the cooling process.
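    The sizing of such a liquid-cooling circuit follows the basic heat balance Q = ṁ · c_p · ΔT. The following back-of-the-envelope sketch uses illustrative numbers (per-rack load, temperature rise, and glycol concentration are assumptions, not figures from the center).

```python
# Back-of-the-envelope heat balance for a liquid-cooled rack: Q = m_dot * c_p * dT.
CP_WATER_GLYCOL = 3.6e3   # J/(kg*K), typical for a ~30 % glycol mixture (assumed)

def required_flow_kg_s(heat_load_w: float, delta_t_k: float) -> float:
    """Coolant mass flow needed to remove heat_load_w at a given temperature rise."""
    return heat_load_w / (CP_WATER_GLYCOL * delta_t_k)

# Hypothetical 100 kW rack cooled with a 10 K coolant temperature rise:
flow = required_flow_kg_s(heat_load_w=100e3, delta_t_k=10.0)
print(f"{flow:.2f} kg/s")  # -> 2.78 kg/s
```

    The same relation explains the two-circuit split: a warmer circuit with a larger ΔT needs less flow for the same heat load, which is one reason warm-water cooling improves energy efficiency.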

    3. Management of Data Center Environment

    The HVAC system of the data center ensures air exchange, constant temperature, and humidity control within the data hall.

    It complements the water-based cooling and maintains stable microclimatic conditions required for long-term, fault-free operation of computing technologies.

    An automated cooling-management system continuously monitors the environmental parameters and dynamically adjusts cooling output to match the system's current load.
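    A control loop of the kind described above can be sketched as follows: sample temperature and humidity, compare them against a target band, and request an adjustment. The thresholds and action names are hypothetical; the center's actual building-management system is not described in the text.

```python
# Minimal sketch of an environment-control decision step (assumed bands).
TEMP_BAND_C = (18.0, 27.0)        # assumed allowed hall temperature band
HUMIDITY_BAND_PCT = (40.0, 60.0)  # assumed allowed relative-humidity band

def control_action(temp_c: float, humidity_pct: float) -> list[str]:
    """Return the adjustments a controller would request for one sample."""
    actions = []
    if temp_c > TEMP_BAND_C[1]:
        actions.append("increase cooling output")
    elif temp_c < TEMP_BAND_C[0]:
        actions.append("decrease cooling output")
    if humidity_pct > HUMIDITY_BAND_PCT[1]:
        actions.append("dehumidify")
    elif humidity_pct < HUMIDITY_BAND_PCT[0]:
        actions.append("humidify")
    return actions or ["hold"]
```

    In a real deployment this decision step would run periodically against live sensor data, with hysteresis to avoid oscillating between actions.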

    Comprehensive Infrastructure for Reliable Operation

    The supporting technologies of the PERUN supercomputer form a stable and secure foundation for 24/7 HPC operation.

    Thanks to the advanced power architecture, redundant cooling, and environmental management systems, the Supercomputing Center at TUKE maintains optimal operating conditions for computing nodes, storage arrays, and network systems — even under maximum performance loads.