Category Archives: news

Parallel computing group news

Claudia got her PhD in data analytics and is now a scientist at IBM TJ Watson

Claudia Misale received her PhD in Computer Science from the University of Torino on May 11, 2017, defending her thesis entitled “PiCo: A Domain-Specific Language for Data Analytics Pipelines”.

In her thesis, Claudia reviews and analyses the state-of-the-art frameworks for data analytics, proposes a methodology to compare their expressiveness, and advocates the design of a novel C++ DSL for big data analytics, called PiCo (Pipeline Composition). Unlike Spark, Flink and similar frameworks, PiCo is fully polymorphic and exhibits a clear separation between data and transformations. Together with the careful C++/FastFlow implementation, this eases application development: data scientists can experiment with different pipelines of transformations without adapting the data type (or its memory layout), since the type is inferred along the transformation pipeline in a “fluent” programming style. The clear separation between transformations, data types and memory layout makes it possible to optimise data movement, memory usage and, ultimately, performance: applications developed with PiCo exhibit up to a 10x smaller memory footprint than their Spark/Flink equivalents. The fully C++/FastFlow run-time support makes it possible to generate the network of run-time support processes directly from the data processing pipeline, achieving the maximum scalability allowed by true data dependencies (well beyond the simple master-worker paradigm of Spark and Flink). Since PiCo is written in C++11/14, it is already open to hosting native GPU offloading, which paves the way for the convergence of analytics and machine learning. See more at DOI:10.5281/zenodo.579753
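To give a flavour of the fluent, type-inferred pipelining idea, here is a toy C++14 sketch. This is not PiCo's actual API (the class and function names below are invented for illustration); it only shows how the element type can be inferred stage by stage while the pipeline description stays independent of the data it will later process.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Toy fluent pipeline: NOT PiCo's real interface, just the core idea.
// A Pipe<In, Out> is a composed transformation from In to Out.
template <typename In, typename Out>
class Pipe {
public:
    explicit Pipe(std::function<Out(In)> f) : f_(std::move(f)) {}

    // Append a map stage; the new output type is inferred from the
    // callable's return type ("fluent" style, no type annotations).
    template <typename F>
    auto map(F g) const {
        using Next = decltype(g(std::declval<Out>()));
        auto f = f_;
        return Pipe<In, Next>([f, g](In x) { return g(f(x)); });
    }

    // Apply the composed pipeline to a batch of inputs.
    std::vector<Out> run(const std::vector<In>& in) const {
        std::vector<Out> out;
        out.reserve(in.size());
        for (const auto& x : in) out.push_back(f_(x));
        return out;
    }

private:
    std::function<Out(In)> f_;
};

// Entry point of a pipeline over elements of type T.
template <typename T>
Pipe<T, T> source() {
    return Pipe<T, T>([](T x) { return x; });
}

// Doubling then stringifying: the int -> std::string output type is
// inferred stage by stage, never spelled out by the user.
std::vector<std::string> run_demo() {
    auto p = source<int>()
                 .map([](int x) { return x * 2; })
                 .map([](int x) { return std::to_string(x); });
    return p.run({1, 2, 3});
}
```

Swapping, reordering or inserting stages changes the inferred types automatically, which is the property that lets a data scientist rework a pipeline without touching the data representation.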

Claudia is flying today to New York to start her career as a scientist at the IBM TJ Watson research center within the Data-Centric Systems Solutions group.

Congratulations Claudia. It has been a pleasure working with you for the past 4 years.

Viva snapshots


Claudia viva talk


Viva committee: Prof. Jose Daniel Garcia Sanchez (UC3M, Madrid, Spain), Prof. Ernesto Damiani (Khalifa University, Abu Dhabi, UAE), Prof. Domenico Talia (Università della Calabria, Italy)





OptiBike experiment funded under Fortissimo2 EU I4MS

Our project OptiBike (total cost 230K €) has been incorporated into the Fortissimo2 EU @i4ms project (n. 680481). It is also the first EU-funded project at the HPC laboratory of ICxT@UNITO.

We are looking forward to the kick-off.


Maximum loads and principal directions in Global Stiffness Load case.


OptiBike: Robust Lightweight Composite Bicycle design and optimization

In the current design process for composite materials, the effect of manufacturing uncertainty on the structural and dynamic performance of mechanical structures (aeronautic, automotive and others) cannot be accurately assessed due to the limitations of current computational resources, and is hence compensated for by applying safety factors. This non-ideal situation usually leads to overdesigned structures that could achieve higher performance at the same safety levels.
The objective of this experiment is to establish a design workflow and service for composite material modelling, simulation and numerical optimization that uses uncertainty quantification and HPC to deliver high-performance, reliable composite material products. The workflow can be applied to any composite material structure, from aeronautics to bicycles, and gives SMEs easy-to-use, hassle-free access to advanced material design methodologies and the related HPC infrastructures that enable reliability-based design optimization (RBDO). To demonstrate the workflow, a full composite bicycle will be designed and optimized based on real manufacturing data provided by IDEC, a Spanish SME with unique design and manufacturing capabilities for composite materials.
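The uncertainty-quantification step at the heart of RBDO can be sketched in a few lines of C++. The snippet below is an illustration only, not OptiBike's actual method: it estimates, by Monte Carlo sampling, the probability that a structure whose strength varies with manufacturing tolerances fails under a given load. The distribution, its parameters and all numbers are hypothetical.

```cpp
#include <random>

// Illustrative Monte Carlo failure-probability estimate (hypothetical
// model, not the OptiBike workflow): the structure's strength is drawn
// from a normal distribution representing manufacturing scatter, and a
// sample counts as a failure when strength falls below the applied load.
double failure_probability(double mean_strength, double strength_sd,
                           double load, unsigned long samples = 100000,
                           unsigned seed = 42) {
    std::mt19937 gen(seed);
    std::normal_distribution<double> strength(mean_strength, strength_sd);
    unsigned long failures = 0;
    for (unsigned long i = 0; i < samples; ++i)
        if (strength(gen) < load) ++failures;  // strength below load => failure
    return static_cast<double>(failures) / samples;
}
```

An RBDO loop would wrap an optimizer around such an estimator, trading material (weight, cost) against a target failure probability instead of a blanket safety factor; the HPC angle is that each candidate design requires many such expensive simulations.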

The expected results of this experiment are:

  1. a design optimization approach for composite materials by means of uncertainty quantification techniques for reliability and robustness;
  2. a design optimization service that can be used by every user to perform reliability based product optimization (this will be demonstrated for the bicycle case of IDEC).


The experiment partners are:

  • Noesis Solutions, Belgium (coordinator)
  • IDEC, Spain
  • University of Torino (Alpha, ICxT and C3S), Italy
  • Arctur, Slovenia


C3S@UNITO will use INDIGO software tools

Adopting INDIGO-Data Cloud: the Scientific Computing Competence Centre of the University of Torino will use INDIGO software tools

The INDIGO-Data Cloud project is happy to announce that it signed a Memorandum of Understanding (MoU) with the Scientific Computing Competence Centre of the University of Torino (C3S).

A Memorandum of Understanding has just been signed.

C3S is a research centre that focuses on scientific computing technology and applications and manages OCCAM (Open Computing Cluster for Advanced data Manipulation), a multipurpose HPC cluster. Thanks to this agreement, a collaboration has been set up between C3S and the INDIGO-DataCloud project for the use and development of advanced tools for scientific computing, particularly for heterogeneous use-case management in an HPC infrastructure context.

C3S will have access to software tools developed by INDIGO-DataCloud and will be able to integrate them into the management layer of the OCCAM supercomputer. The INDIGO teams will collaborate with C3S in adapting and porting the tools to the specific use cases, giving support on a best-effort basis and providing, whenever feasible, patches and customisations for their software products.

“INDIGO-DataCloud aims at providing services that can be deployed on different computing platforms and enable the interoperability of heterogeneous e-infrastructures. C3S is a very interesting opportunity to test such capabilities and prove how our tools can really make the difference, providing seamless access, elasticity and scalability for the exploitation of data and computational resources,” says Giacinto Donvito, the Technical Director of the INDIGO-DataCloud Project.

“We have a very wide variety of use cases, from traditional HPC in computational chemistry, physics and astrophysics to data-intensive genomics and computational biology all the way to social sciences and even the humanities, so we will have to use the best tools to accommodate them all. We trust that many INDIGO products will help us to improve the performance and usability of our centre” says Matteo Sereno, professor at the Department of Computer Science of the University of Torino and C3S Director.

Technical information in:

M. Aldinucci, S. Bagnasco, S. Lusso, P. Pasteris, S. Vallero, and S. Rabellino, “The Open Computing Cluster for Advanced data Manipulation (OCCAM),” in The 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP), San Francisco, USA, 2016.

Research, innovation and education in the age of Big Data


The tumultuous development of digital technologies, and the rise of the Internet as a virtual world ever more integrated into everyday life, have exponentially increased the availability of information in every field of human knowledge. In this context, data science emerges as a new, intrinsically interdisciplinary body of knowledge, capable of profoundly transforming the methods and impact of university research.

The University of Torino has quickly equipped itself for this epochal challenge and wants to publicly present its innovative capacity in theory, applied research and education, ranging from algorithms to big data, from the Internet of Things to machine learning.

Thanks to the support of the Collegio Carlo Alberto and the Compagnia di San Paolo, it has been possible to organise a one-day seminar in which researchers and lecturers present some of their most advanced research and discuss the opportunities (and risks) that the new approaches offer to science, culture, society and the economy.


9:30      Welcome addresses

Gianmaria Ajani – Rector of the University of Torino

Francesco Profumo – President of the Compagnia di San Paolo

10:00    Introduction: data science and enabling platforms

Marco Guerzoni (Dip. di Economia e Statistica and Despina Big Data Lab)

Marco Aldinucci (Dip. di Informatica and C3S)

10:15    The big data revolution in the life sciences

Moderator: Paolo Provero (Dip. di Biotecnologie Molecolari e Scienze per la Salute)

–       The digital revolution and personalised medicine in oncology

Enzo Medico (Dip. di Oncologia)

–       Big data and plant genomics

Alberto Acquadro (Dip. di Scienze Agrarie, Forestali e Alimentari)

11:00    Complexity unveiled: big data for economics and business

Moderator: Magda Fontana (Dip. di Economia e Statistica and Despina Big Data Lab)

–       The degree of truth of data in decision-making processes

Elvira Di Nardo (Dip. di Matematica)

–       Predictive analytics and consumer behaviour

Massimiliano Nuccio (Dip. di Economia e Statistica and Despina Big Data Lab)

–       The regulatory challenges of big data, between legal standards and cross-border flows

Alberto Oddenino (Dip. di Giurisprudenza)

11:50    Data science: tools and models for networks and applications

Moderator: Filippo Barbera (Dip. di Culture, Politica e Società)

–       What mathematical models are for: the case of network studies

Laura Sacerdote (Dip. di Matematica)

–       Statistical models for networks: applications to neuroscience

Antonio Canale (Dip. di Scienze Economico-Sociali e Matematico-Statistiche)

–       Network science: from data to applications

Giancarlo Ruffo (Dip. di Informatica)

12:40    Big data teaching programmes at the University of Torino

Matteo Ruggiero (Dip. di Scienze Economico-Sociali e Matematico-Statistiche)


13:00-14:00    Lunch

14:00    Life and health

Moderator: Lorenzo Richiardi (Dip. di Scienze Mediche)

–       Clinical-analytical big data for highly targeted prevention and medical diagnostics

Marco Vincenti (Dip. di Chimica)

–       From data to models in veterinary epidemiology

Mario Giacobini (Dip. di Scienze Veterinarie)

14:45    Society, culture and smart cities

Moderator: Guido Boella (Dip. di Informatica)

–       Sensory maps and the new science of cities

Rossano Schifanella (Dip. di Informatica)

–       Social physics and big data at the service of citizens’ well-being

Marcello Bogetti (Labnet)

–       Smart big data for the humanities

Vincenzo Lombardo (Dip. di Informatica)

–       Fairness and sustainability in energy consumption: an IoT approach

Vito Frontuto (Dip. di Economia e Statistica)

15:50    ROUND TABLE: Data-driven innovation, the big data ecosystem in Torino

Moderator: Aldo Geuna (Dip. di Economia e Statistica and Collegio Carlo Alberto)

–      Valerio Cencig (Chief Data Officer – Intesa Sanpaolo)

–      Stefano Gallo (ICT Director – Città della Salute e della Scienza)

–      Roberto Moriondo (Director General – Comune di Novara)

–      Daniela Paolotti (Research Leader – Fondazione ISI)

–      Emilio Paolucci (BigData@Polito – Politecnico di Torino)

–      Gian Paolo Zanetta (Città della Salute e della Scienza)

16:45    ROUND TABLE: Institutions and big data

Moderator: Pietro Terna (President – Collegio Carlo Alberto)

–      Giuseppina De Santis (Councillor for Productive Activities and Innovation – Regione Piemonte)

–      Francesca Leon (Councillor for Culture – Comune di Torino)

–      Paola Pisano (Councillor for Innovation – Comune di Torino)

A poster session at the entrance will present some of the completed and ongoing research from the various departments of the University of Torino.

EU I4MS Digital Manufacturing HUB – 1st Workshop, Torino, Italy

1st Regional DIMA-HUB Workshop
November 3rd, 2016
Centro Congressi Unione Industriale di Torino

Don’t miss the HPC session (moderated by Marco Aldinucci) and the CPS session (moderated by Enrico Bini)

Register here

DIMA-HUB is a feasibility study within the mentoring and sponsorship programme of the I4MS initiative (Phase 2), aiming to extend the geographical coverage of the I4MS ecosystem by establishing a Regional Digital Manufacturing Innovation (RDMI) hub in the Piedmont Region, Italy. The mission of the hub is to foster and accelerate technology advances while supporting start-ups and SMEs in their digital transformation path, by connecting them with firms and competence centers within and outside the Piedmont Region. The hub covers four technology areas:


  • Advanced laser-based applications (including additive manufacturing)
  • Internet of Things (IoT)
  • Robotics, Cyber Physical Systems (CPS)
  • High Performance Computing (HPC) 


The consortium consists of five members, three of which specialize in research and development on digital manufacturing, while the remaining two are regional innovation clusters:

  1. Politecnico di Torino
  2. Università di Torino
  3. MESAP
  4. Torino Wireless (TOWL)
  5. Istituto Superiore Mario Boella (ISMB)

More information at DIMA-HUB home page


Paolo Inaudi is the best MSc in CS@UNITO 2015/16 with a thesis on High-Performance libfabric

Paolo Inaudi is the recipient of the “best MSc thesis of the year” award for 2015/16 in Computer Science at the University of Torino (the so-called “medaglia d’argento” of the University of Torino). Paolo graduated with a thesis entitled “Design and development of a libfabric provider for the Ronniee/A3Cube high-performance network”.

Paolo graduated right on schedule, with full marks in all his exams. During his MSc thesis, Paolo developed an almost complete libfabric provider, which is available on GitHub under the LGPLv3. The work was made possible thanks to direct support from A3Cube Inc. (under the UNITO–A3Cube MoU).

Paolo is also the third MSc student in a row, within the last 5 years, to achieve the best MSc thesis of the year in Computer Science at the University of Torino with a thesis in parallel computing carried out in the alpha group.

Congratulations Paolo!

Stay foolish, build your own supercomputer at UNITO

The novel Competence Center for Scientific Computing at the University of Torino and INFN Torino (C3S@UNITO) is finally opening this week. The inauguration workshop will take place on October 7, 2016 at the main theatre of the Campus Luigi Einaudi. The center involves over 16 departments of the University of Torino and hosts the brand-new OCCAM platform.

The program and (free) registration form is here. Everybody is invited.

The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multi-purpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Torino branch of the Istituto Nazionale di Fisica Nucleare. It is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing needs, as well as a platform for R&D activities on computational technologies themselves. Extending it with a novel CPU architecture, accelerator or hybrid microarchitecture (such as the forthcoming Intel Xeon Phi Knights Landing) should be as simple as plugging a node into a rack.

The initial system counts slightly more than 1100 CPU cores and includes different types of computing nodes (standard dual-socket nodes, large quad-socket nodes with 768 GB RAM, and multi-GPU nodes) and two separate disk storage subsystems: a smaller high-performance scratch area, based on the Lustre file system, intended for direct computational I/O, and a larger near-line area, on the order of 1 PB, for data archival. All the components of the system are interconnected through a 10 Gb/s Ethernet layer with a one-level topology and an InfiniBand FDR 56 Gb/s layer with a fat-tree topology.

A system of this kind, heterogeneous and reconfigurable by design, poses a number of challenges related to the frequency at which heterogeneous hardware resources may change their availability and shareability status, which in turn affects the methods and means to allocate, manage, optimize, bill and monitor VMs, virtual farms, jobs and interactive bare-metal sessions.

The topic of the workshop is indeed a description of some of the use cases that prompted the design and construction of the HPC cluster, of its architecture, and a first characterisation of its performance with some synthetic benchmark tools and a few realistic use-case tests.

More technical details at CHEP 2016: M. Aldinucci, S. Bagnasco, S. Lusso, P. Pasteris, and S. Rabellino, “The Open Computing Cluster for Advanced data Manipulation (OCCAM),” in The 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP), San Francisco, USA, 2016. 


3.5M € EU project REPARA: A success story for UNITO

The EU FP7 REPARA project @reparaproject is now complete. Running for 3 years (2013-2016) with a total cost of 3.5M €, it was rated “excellent” at the mid-term EU review, demonstrating its scientific value. Among its other results, the REPARA project paves the way for efficient yet easy-to-use parallel patterns in standard C++.
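As a flavour of what a parallel pattern in standard C++ looks like, here is a hand-rolled "map" pattern over std::thread. This is a minimal sketch of the style of structured parallelism REPARA targeted, not the REPARA toolchain's actual interface; `par_map` and its parameters are invented for illustration.

```cpp
#include <algorithm>
#include <thread>
#include <utility>
#include <vector>

// A minimal "map" parallel pattern (illustrative sketch, not REPARA's
// real API): apply f to every element of in, splitting the input into
// contiguous chunks, one per worker thread. Because each worker writes
// only its own slice of out, no locking is required.
template <typename T, typename F>
auto par_map(const std::vector<T>& in, F f, unsigned workers = 4) {
    using R = decltype(f(std::declval<T>()));
    std::vector<R> out(in.size());
    if (workers == 0) workers = 1;
    const std::size_t chunk = (in.size() + workers - 1) / workers;
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t lo = w * chunk;
        const std::size_t hi = std::min(in.size(), lo + chunk);
        if (lo >= hi) break;  // fewer elements than workers
        pool.emplace_back([&, lo, hi] {
            for (std::size_t i = lo; i < hi; ++i) out[i] = f(in[i]);
        });
    }
    for (auto& t : pool) t.join();
    return out;
}
```

The point of the pattern approach is that the user writes only the sequential kernel (the callable `f`); partitioning, thread management and synchronisation are factored out once, into the pattern.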

In the news

occam@UNITO> Hello, world!


OCCAM, our own first pleasantly-sized, oddly-heterogeneous computer, is born. Pictured here are the 5 InfiniBand switches that build a fat-tree over ~1K HT cores, ~16K CUDA cores, ~1 PB of archive storage and ~320 TB of high-performance scratch storage. Certainly not a huge machine in absolute terms; for research on HPC and big data at the University of Torino, certainly a huge opportunity. Thanks to Fondazione SanPaolo and its 960,000 Euro funding.