Parallel Computing
research group

The Parallel Computing research group (alpha) at the University of Torino is interested in parallel programming models, languages, and tools. This topic has undergone impressive change over recent years. New architectures and applications, such as AI, have rapidly become the central focus of the discipline.
These changes are often the result of cross-fertilisation of parallel and distributed technologies with other rapidly evolving technologies. In the storm of such rapid evolution, abstraction provides a cornerstone on which to build.
The research group also pursues quality research by offering a relaxed, creative, and international working environment to enhance skills at every experience level. The working language is English.

Research Topics

  • Programming models and run-time systems for parallel computing
    • Parallel programming models for HPC, multicores and accelerators
    • RISC-V software ecosystem: vectorization, memory performance, energy efficiency
    • System software for HPC: Workflows for hybrid HPC-cloud, storage
    • Foundational aspects of parallel processing
  • Cloud engineering, virtualization, containerization (OpenStack, Kubernetes, etc.), Web 3.0
  • Distributed AI and federated learning, foundation models for AI at scale, AI for Science, AI benchmarking

We are open to supervising research MSc theses and to hosting international researchers
(professors, postdocs, PhD & MSc students).

We proudly run HPC4AI


Our research activity is supported by the HPC4AI data centre. We imagined HPC4AI before transformers arrived to change computing forever, expecting parallel computing to be a major driver of innovation in AI.
Thanks to HPC4AI, our researchers can use up to 10,000 cores and 150 GPUs, including NVIDIA H100 and Grace Hopper chips, without worrying about finding resources or dealing with access limitations. Our resources are designed to support system software research; we simply do not bother with production constraints. HPC4AI also hosts several prototypes we developed, such as the world's first AI server with a two-phase cooling system (40x more energy efficient than air cooling). Others are provided by vendors because we are developing state-of-the-art software, such as the world's first version of PyTorch for RISC-V. We are an NVIDIA Lighthouse University.
HPC4AI hosts three systems: an OpenStack cloud, a heterogeneous HPC cluster, and a system development cluster (with Arm and RISC-V processors and accelerators).
Proudly, the 250 kW Tier-3 HPC4AI data centre is among the greenest worldwide (PUE < 1.1; see the PUE online monitor), and even more proudly it is designed for researchers and students.
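For reference, the PUE (Power Usage Effectiveness) figure cited above is the standard data-centre efficiency metric: the ratio of the total energy drawn by the facility to the energy consumed by the IT equipment alone:

```latex
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}
```

A PUE below 1.1 therefore means that cooling, power distribution, and other overheads add less than 10% on top of the energy actually spent on computation; an ideal data centre would have PUE = 1.0.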

News