Parallel Computing Group

The Parallel Computing research group is interested in parallel programming models, languages, and tools. This topic has undergone impressive change in recent years: new architectures and applications have rapidly become the central focus of the discipline.
These changes often result from the cross-fertilization of parallel and distributed technologies with other rapidly evolving technologies. In the storm of such rapid evolution, we believe, abstraction provides a cornerstone to build on.
The shift toward multicore and many-core technologies has many drivers that are likely to sustain this trend for several years to come.

Software Technology Is Consequently Changing

In the long term, writing parallel programs that are efficient, portable, and correct must be no more onerous than writing sequential programs.
To date, however, parallel programming has not embraced much more than low-level libraries, which often require re-architecting the application.
In the hierarchy of abstractions, this is only slightly above toggling binary code on the machine's front panel. Such an approach cannot effectively scale to the mainstream of software development, where human productivity, total cost, and time to solution are equally important aspects, if not more so.
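The abstraction gap can be illustrated with a toy example. The sketch below (in Python, purely illustrative and not the group's own code) sums an array twice: once in the low-level style, where the programmer manually partitions data, spawns threads, and merges partial results, and once with a high-level parallel map, where the decomposition is expressed once and the runtime handles scheduling.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def low_level_sum(data, n_threads=4):
    """Low-level style: explicit partitioning, thread management, and merging."""
    chunk = len(data) // n_threads
    partials = [0] * n_threads

    def worker(i):
        start = i * chunk
        end = len(data) if i == n_threads - 1 else start + chunk
        partials[i] = sum(data[start:end])

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)

def high_level_sum(data, n_threads=4):
    """High-level style: a parallel map preserves the sequential structure."""
    chunk = len(data) // n_threads + 1
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return sum(pool.map(sum, chunks))

data = list(range(1_000_000))
assert low_level_sum(data) == high_level_sum(data) == sum(data)
```

Both versions compute the same result, but the low-level one hard-wires the parallel structure into the program: changing the decomposition means rewriting the code, which is exactly the kind of architectural coupling the paragraph above criticizes.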

WE ARE HIRING!

Research Topics

  • Programming models and run-time systems for parallel computing
    • Parallel programming models for HPC, multicores and accelerators
    • RISC-V software ecosystem: vectorization, memory performance, energy efficiency
    • System software for HPC: Workflows for hybrid HPC-cloud, storage
    • Foundational aspects of parallel processing
  • Cloud Engineering, virtualization, containerization (OpenStack, Kubernetes, etc.), web3.0
  • Distributed and federated learning, foundational models for AI at scale, AI for Science

Parallel Programming Models

What are they?

Let us start from what a parallel programming model is not:

  • A parallel programming language is not a parallel programming model
  • A C++ or Java synchronisation or messaging library is not a programming model: a library is not a programming model
  • A programming framework is not a programming model
  • The shared-memory and message-passing paradigms are parallel programming models, even if they are very low-level …
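The two low-level paradigms from the last bullet can be contrasted in a minimal sketch (Python, purely illustrative, not the group's material): in shared memory, workers coordinate through a common variable protected by a lock; in message passing, workers share nothing and instead send values over an explicit channel.

```python
import threading
import queue

# Shared-memory paradigm: workers update a common counter;
# correctness depends on explicit synchronisation (the lock).
counter = 0
lock = threading.Lock()

def shared_memory_worker(n):
    global counter
    for _ in range(n):
        with lock:
            counter += 1

# Message-passing paradigm: workers share no state; each sends
# its result over a channel, and a receiver aggregates them.
channel = queue.Queue()

def message_passing_worker(n):
    channel.put(n)  # communicate by sending, not by sharing

def run(worker, n_workers=4, n=1000):
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

run(shared_memory_worker)
assert counter == 4000  # 4 workers x 1000 increments

run(message_passing_worker)
total = sum(channel.get() for _ in range(4))
assert total == 4000
```

Both sketches count to the same total, yet the programmer reasons about them very differently: races and locks in the first, message ordering and delivery in the second. That these details surface at all is why we call both paradigms very low-level.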

We Proudly Run HPC4AI

Most of the research group’s recent activities revolve around the HPC4AI initiative, Turin’s centre on High-Performance Computing for Artificial Intelligence, and C3S, the University of Torino’s Competence Center on Scientific Computing.
HPC4AI hosts three systems: an OpenStack cloud, a heterogeneous HPC cluster, and a system development cluster (with Arm and RISC-V processors and accelerators). Overall, HPC4AI hosts over 11k cores, 120 GPUs, and 6 storage systems (Ceph, EMC2, Lustre, BeeGFS).
Proudly, the HPC4AI 250 kW Tier-3 datacenter is among the greenest worldwide (PUE < 1.1 – see the PUE online monitor), and, even more proudly, it is designed for researchers and students: it is located in the middle of the Computer Science Department offices in such a way that students can work with it as in a living lab, and all the datacenter rooms, as well as the nearby offices, are constantly monitored for over 15 pollutants (PM1/2.5/10, VOCt, CO2, NO2, NH3, etc. – see the pollution online monitor).
The OpenStack cloud, directly managed by the research group, has been running with zero incidents for over 3 years.