Parallel Computing group


The Parallel Computing research group is interested in parallel programming models, languages, and tools. This field has undergone impressive change over recent years: new architectures and applications have rapidly become the central focus of the discipline, often as a result of cross-fertilization between parallel and distributed technologies and other rapidly evolving fields. In the storm of such rapid evolution, we believe, abstraction provides a cornerstone to build on.
The shift toward multicore and many-core technologies has many drivers that are likely to sustain this trend for several years to come.
Software technology is changing accordingly: in the long term, writing parallel programs that are efficient, portable, and correct must be no more onerous than writing sequential programs. To date, parallel programming has not embraced much more than low-level libraries, which often require re-designing the application's architecture. In the hierarchy of abstractions, this is only slightly above toggling absolute binary on the machine's front panel. Such an approach cannot effectively scale to support mainstream software development, where human productivity, total cost, and time to solution are equally important, if not more so.

Research topics

  • Programming models and run-time systems for parallel computing.
    • Parallel programming models for multicomputers, multicores, and accelerators
    • Tools and system software for HPC: Workflows for hybrid HPC-cloud, storage
  • Cloud Engineering, virtualization, containerization (OpenStack, Kubernetes, etc.)
  • Distributed computing: distributed and federated deep learning
  • Foundational aspects of parallel processing

Parallel programming models, what are they?

Let us start from what a parallel programming model is not:

  • A parallel programming language is not a parallel programming model
  • A C++ or Java synchronisation or messaging library is not a programming model; a library is not a programming model
  • A programming framework is not a programming model
  • The shared-memory and message-passing paradigms are parallel programming models, even if they are very low-level … (see the sketch below)
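
To make the last bullet concrete, below is a minimal sketch (illustrative only, not code from the group) of a parallel sum written against the shared-memory model using plain C++ threads. The point is that the model, not the library, is the abstraction: the same shared-memory structure could equally be realized with pthreads or OpenMP, while a message-passing realization would keep the partial sums private and exchange them with explicit messages (for instance MPI_Send/MPI_Recv).

// Illustrative sketch of the shared-memory model: workers coordinate by
// reading a shared input and updating a shared, synchronized accumulator.
// The same model could be expressed with pthreads or OpenMP; in the
// message-passing model the partial sums would instead be private and
// exchanged via explicit messages.
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1000000, 1);   // shared, read-only input
    std::atomic<long> total{0};          // shared accumulator (the synchronization point)
    const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end = (t + 1 == nthreads) ? data.size() : begin + chunk;
            // Each worker reduces its own slice locally ...
            const long local = std::accumulate(data.begin() + begin, data.begin() + end, 0L);
            // ... and touches shared state only once, through the atomic.
            total += local;
        });
    }
    for (auto& w : workers) w.join();

    std::cout << "sum = " << total << '\n';   // expected: 1000000
    return 0;
}

Either way, the program's structure is dictated by the model (what is shared, what is communicated, where synchronization happens); the library or language used is merely one possible realization of it.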

We proudly run HPC4AI

Most of the research group’s recent activities revolve around the HPC4AI initiative, Turin’s centre on High-Performance Computing for Artificial Intelligence, and C3S, the University of Torino’s Competence Center on Scientific Computing. HPC4AI hosts three systems: an OpenStack cloud, a heterogeneous HPC cluster, and a system development cluster (with Arm and RISC-V processors and accelerators). Overall, HPC4AI hosts over 11k cores, 120 GPUs, and 6 storage systems (CEPH, EMC2, Lustre, BeeGFS). Proudly, the HPC4AI 250 kW Tier-3 datacenter is among the greenest worldwide (PUE < 1.1, see the PUE online monitor), and even more proudly it is designed for researchers and students: it is located in the middle of the Computer Science Department offices in such a way that students can work with it as in a living lab, and all the datacenter rooms, as well as the nearby offices, are constantly monitored for over 15 pollution sources (PM1/2.5/10, VOCt, CO2, NO2, NH3, etc., see the pollution online monitor). The OpenStack cloud, directly managed by the research group, has been running with zero incidents for over 3 years.

We are hiring

The alpha research group is active in many EU and national research projects. We are hiring: research engineers, Ph.D. candidates, and post-doctoral researchers (fixed-term and tenured appointments). The working language is English, and there are no teaching duties (unless desired). International researchers are welcome. For more information, please look at the open positions or contact Prof. Marco Aldinucci directly.

Contacts

Prof. Marco Aldinucci
Computer Science Department, University of Torino
Corso Svizzera 185, 10149 Torino, Italy
Tel: +39 011 6706852
E-mail: aldinuc@di.unito.it