RISC-V tools
We actively contribute to the RISC-V ecosystem through several software ports. To date, these include:
- FastFlow. FastFlow is a C++ framework for high-level, pattern-based parallel programming (more info in the FastFlow section below). The RISC-V port is part of the official repository.
- PyTorch. PyTorch is one of the most popular Python/C++ frameworks for training and using DNN models. The RISC-V port is available here.
- OpenFL for RISC-V. We ported the official Intel® OpenFL federated learning framework to the RISC-V platform. The Python packages can be installed via pip from this repository. To add our package repository to your pip configuration, just run
pip config set global.index-url https://gitlab.di.unito.it/api/v4/projects/1057/packages/pypi/simple
Then, to install OpenFL built for RISC-V, just run
pip install openfl-riscv
As a side result, the same repository also provides the following RISC-V-compatible Python packages: ninja (ninja-riscv), meson-python (meson-python-riscv), scipy (scipy-riscv), and scikit-learn (scikit-learn-riscv).
FastFederatedLearning
Fast Federated Learning (FFL) is a C/C++-based Federated Learning framework built on top of the FastFlow parallel programming framework. It exploits the Cereal library to efficiently serialise the updates sent over the network and the libtorch library to fully bypass the need for Python code. The first release of this software comprises three examples based on three different communication topologies: master-worker, peer-to-peer, and tree-based.
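To give a flavour of the serialisation layer, the sketch below shows how a model update could be packed with Cereal into a compact binary buffer before being shipped over the network. The ModelUpdate type and its fields are illustrative assumptions of ours, not FFL's actual message types:

```cpp
#include <sstream>
#include <string>
#include <vector>
#include <cereal/archives/binary.hpp>
#include <cereal/types/vector.hpp>

// Illustrative stand-in for an FFL update message (not FFL's real type).
struct ModelUpdate {
    int round;                   // federation round this update belongs to
    std::vector<float> weights;  // flattened tensor data

    template <class Archive>
    void serialize(Archive& ar) { ar(round, weights); }
};

// Pack an update into a compact binary buffer ready to be sent.
std::string pack(const ModelUpdate& u) {
    std::stringstream ss;
    {
        cereal::BinaryOutputArchive ar(ss);  // flushes on destruction
        ar(u);
    }
    return ss.str();
}

// Restore an update on the receiving side.
ModelUpdate unpack(const std::string& buf) {
    std::stringstream ss(buf);
    cereal::BinaryInputArchive ar(ss);
    ModelUpdate u;
    ar(u);
    return u;
}
```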
FastFederatedLearning is freely available on GitHub under the LGPLv3 license. It has been successfully tested on x86_64, ARM, and RISC-V platforms. FFL has scripts for automatically installing the framework and reproducing all the experiments reported in the original paper. More information about software usage can be found on the official repository.
G. Mittone, N. Tonci, R. Birke, I. Colonnelli, D. Medić, A. Bartolini, R. Esposito, E. Parisi, F. Beneventi, M. Polato, M. Torquati, L. Benini, and M. Aldinucci, “Experimenting with Emerging RISC-V Systems for Decentralised Machine Learning”, 20th ACM International Conference on Computing Frontiers, 2023. DOI: 10.1145/3587135.3592211
OpenFL-extended
OpenFL-extended is an extended version of the official Intel® OpenFL federated learning (FL) framework. OpenFL-extended fully supports the standard FL workflow already provided by OpenFL but, in addition, supports both federated bagging and federated boosting approaches. Federated bagging is implemented by simply bagging, at the aggregator, the models trained by the different parties, while federated boosting is obtained by employing the AdaBoost.F algorithm developed at the University of Turin [1]. Through these approaches, OpenFL-extended is fully model-agnostic: it can be used to build federations out of any Machine Learning model, not only Deep Neural Networks.
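Although OpenFL-extended itself is Python-based, the idea behind federated bagging can be sketched in a few lines. The snippet below is a conceptual illustration only (the types and names are ours, not the OpenFL-extended API): each party contributes a trained classifier, and the aggregator predicts by majority vote over the resulting ensemble, so no gradients ever need to be exchanged:

```cpp
#include <algorithm>
#include <functional>
#include <map>
#include <vector>

// Conceptual sketch, not the OpenFL-extended API: a "model" is abstracted
// as any callable mapping a feature vector to a predicted class label.
using Sample = std::vector<double>;
using Model  = std::function<int(const Sample&)>;

// Federated bagging: the aggregator keeps every party's model as-is and
// classifies new samples by majority vote over the whole ensemble.
// Assumes a non-empty federation.
int bagging_predict(const std::vector<Model>& federation, const Sample& x) {
    std::map<int, int> votes;
    for (const auto& model : federation)
        ++votes[model(x)];
    return std::max_element(votes.begin(), votes.end(),
                            [](const auto& a, const auto& b) {
                                return a.second < b.second;
                            })->first;
}
```

Federated boosting follows the same model-agnostic principle, but the ensemble members are weighted rather than treated uniformly, as in AdaBoost-style algorithms.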
OpenFL-extended is freely available on GitHub under the LGPLv3 license. It is fully Python-based and comes with a wide range of ready-made examples. It has been tested on x86_64, ARM, and RISC-V architectures. More information about software usage can be found on the official repository.
The paper describing this software is currently under review, but an open-access preprint is available on arXiv.
[1] M. Polato, R. Esposito, and M. Aldinucci. “Boosting the federation: Cross-silo federated learning without gradient descent.” 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022.
Jupyter Workflow
Jupyter Workflow is an extension of the IPython kernel designed to support distributed literate workflows. The Jupyter Workflow kernel enables Jupyter Notebooks to describe complex workflows and to execute them in a distributed fashion on hybrid cloud/HPC infrastructures. In particular, code cells are regarded as the nodes of a distributed workflow graph, whereas cell metadata are used to express data and control dependencies, parallel execution patterns (e.g. Scatter/Gather), and target execution infrastructures.
Jupyter Workflow code is available on GitHub under the LGPLv3 license, and the related Python package is downloadable from PyPI. More details about the tool and its applications can be found on the Jupyter Workflow website.
I. Colonnelli, M. Aldinucci, B. Cantalupo, L. Padovani, S. Rabellino, C. Spampinato, R. Morelli, R. Di Carlo, N. Magini and C. Cavazzoni, “Distributed workflows with Jupyter”, Future Generation Computer Systems, vol. 128, pp. 282-298, 2022. doi: 10.1016/j.future.2021.10.007.
StreamFlow
The StreamFlow framework is a container-native Workflow Management System (WMS) written in Python 3 and based on the Common Workflow Language (CWL) Standard.
StreamFlow has been designed around two main principles:
- Allowing the execution of tasks in multi-container environments, in order to support the concurrent execution of multiple communicating tasks in a multi-agent ecosystem;
- Relaxing the requirement of a single shared data space, to allow for hybrid workflow executions on top of multi-cloud or hybrid cloud/HPC infrastructures.
StreamFlow source code is available on GitHub under the LGPLv3 license. Moreover, a Python package is downloadable from PyPI and Docker containers can be found on Docker Hub. More details about the tool and its applications can be found on the StreamFlow website.
StreamFlow has been selected as an exploring technology by the EC Innovation Radar initiative.
I. Colonnelli, B. Cantalupo, I. Merelli and M. Aldinucci, “StreamFlow: cross-breeding cloud with HPC,” in IEEE Transactions on Emerging Topics in Computing, doi: 10.1109/TETC.2020.3019202.
CAPIO
A joint project with CS Dept. – University of Pisa
Cross-Application Programmable I/O (CAPIO) …
FastFlow
A joint project with CS Dept. – University of Pisa

FastFlow (斋戒流) is a C++ parallel programming framework advocating high-level, pattern-based parallel programming. It chiefly supports streaming and data parallelism, targeting heterogeneous platforms composed of clusters of shared-memory machines, possibly equipped with computing accelerators such as NVidia GPGPUs, Xeon Phi, and Tilera TILE64.
To date, FastFlow has been the background technology of three European projects and one national project, with an aggregate total cost of 12M€ (ParaPhrase FP7, REPARA FP7, RePhrase H2020, and IMPACT; see the projects section). We are still actively developing FastFlow along with its underlying technology, and we are always open to turning new challenges into research and innovation. More details can be found on the main FastFlow website.
FastFlow comes as a C++ template library designed as a stack of layers that progressively abstract the programming of parallel applications. The goal of the stack is threefold: portability, extensibility, and performance. For this reason, all three layers are realised as thin strata of C++ templates that are 1) seamlessly portable, 2) easily extended via subclassing, and 3) statically compiled and cross-optimised with the application. The terse design ensures easy portability across almost all OSes and CPUs with a C++ compiler.
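To give a flavour of the programming model, the sketch below builds a minimal three-stage FastFlow pipeline that squares a stream of integers. It follows the style of the official FastFlow tutorials, but the stage names are ours:

```cpp
#include <iostream>
#include <ff/ff.hpp>   // single entry-point header of the FastFlow library
using namespace ff;

// First stage: generates the stream.
struct Source : ff_node_t<long> {
    long* svc(long*) {
        for (long i = 1; i <= 10; ++i)
            ff_send_out(new long(i));   // emit one stream item at a time
        return EOS;                     // signal end-of-stream
    }
};
// Middle stage: transforms each item in place.
struct Square : ff_node_t<long> {
    long* svc(long* x) { *x *= *x; return x; }
};
// Last stage: consumes the stream.
struct Sink : ff_node_t<long> {
    long* svc(long* x) {
        std::cout << *x << '\n';
        delete x;
        return GO_ON;                   // keep consuming the stream
    }
};

int main() {
    Source source; Square square; Sink sink;
    ff_Pipe<> pipe(source, square, sink);  // three stages, one thread each
    if (pipe.run_and_wait_end() < 0)       // run the pipeline to completion
        return -1;
    return 0;
}
```

Other patterns, such as farms, are built and composed in the same object-nesting style.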
Discontinued Parallel Computing tools
GAM: Global Asynchronous Memory (2018)
M. Drocco, “Parallel programming with global asynchronous memory: models, C++ APIs and implementations,” PhD Thesis, 2017. doi:10.5281/zenodo.1037585
PiCo: Pipeline Composition (2017)
PiCo (Pipeline Composition) is an open-source C++11 header-only DSL for high-performance data analytics, featuring low latency, high throughput, and minimal memory footprint on multi-core platforms. For more information see the PiCo paper.
GridCOMP: Grid Component Model (2006)
The full software package supporting the development of distributed and multi-core applications based on autonomic components and behavioural skeletons is available under GPL license. More information on the GridCOMP page. The Grid Component Model (GCM) has been standardised by ETSI: DTS/GRID-0004-1 (27/08/2008), DTS/GRID-004-2 (27/08/2008), DTS/GRID-0004-3 (20/03/2009), DTS/GRID-0004-4 (24/03/2010).
VirtuaLinux (2006)
VirtuaLinux is a Linux meta-distribution that allows the creation, deployment, and administration of virtualised clusters with no single point of failure. The VirtuaLinux architecture supports disk-less configurations and provides an efficient, iSCSI-based abstraction of the SAN. Clusters running VirtuaLinux exhibit no master node, which boosts resilience and flexibility. Thanks to its storage virtualisation layer, VirtuaLinux was able to deploy hundreds of VMs in a few seconds. In effect, VirtuaLinux realised a cloud (although the word "cloud" with its current meaning did not exist in 2006).
Muskel (2005)
Muskel is a parallel programming library providing users with structured parallel constructs (skeletons) that can be used to implement efficient parallel applications. Muskel applications run on networks/clusters of workstations equipped with Java (1.5 or greater). The skeletons are implemented exploiting macro data flow technology. Muskel extends Lithium with many interesting features, in particular adaptivity and autonomic management.
Ad-HOC (2004)
Ad-HOC (Adaptive Distributed Herd of Object Caches) is a fast and robust distributed object repository: a high-performance distributed shared memory server for clusters and grids. It provides applications with a distributed storage manager that virtualises the memories of a set of PCs into a single common distributed storage space, and it can effectively be used to implement DSMs as well as distributed cache subsystems. Ad-HOC is a basic block enabling the development of shared-memory run-time supports and applications for dynamic and unreliable execution environments (C++, GPL). The libraries and applications developed on top of Ad-HOC include:
- a parallel file system exhibiting the same API as, and better performance than, PVFS;
- a distributed cache that can be plugged into the Apache web server with no modification of Apache's code, substantially improving web server farm performance at no additional cost;
- a Distributed Shared Memory (DSM) for ASSIST.
ASSIST (2003)
ASSIST (A Software development System based on Integrated Skeleton Technology) is a parallel programming environment based on skeleton and coordination language technology, aimed at the development of distributed high-performance applications. ASSIST applications are compiled into binary packages that can be deployed and run on grids, including those exhibiting heterogeneous platforms. Deployment and execution are provided through standard middleware services (e.g. Globus) enriched with the ASSIST run-time support. ASSIST applications are described by means of a coordination language, which can express arbitrary graphs of modules interconnected by typed streams of data. For more information see the ASSIST papers.
Lithium (2002)
Lithium is a Java-based parallel programming library providing users with structured parallel constructs (patterns/skeletons) that can be used to implement efficient parallel applications on clusters. The skeletons (including pipe, farm, map, reduce, loop) are implemented exploiting macro data flow technology. Lithium skeletons admit a formal specification of both functional and extra-functional behaviour.
Eskimo (2002)
Eskimo (Easy SKeleton Interface – Memory Oriented), which was part of my PhD dissertation, is a first (maybe a bit naïve) attempt at bringing skeletal/pattern-based programming to the shared-memory model. To my knowledge, there were no previous experiments, since skeletal programming lived exclusively in the message-passing arena. From a certain viewpoint, it can be considered an ancestor of FastFlow (and of other libraries in this class, such as Intel TBB).
Meta (2000)
META is a toolkit for the source-to-source optimisation of pattern-based/skeletal parallel programs (OCaml, GPL). It includes a quite efficient subtree-matching implementation.
SkIE (1998)
SkIE (Skeleton-based Integrated Environment) is a skeleton-based parallel programming environment. SkIE was an engineered version of P3L, developed within Quadrics Supercomputing World (QSW) and Alenia Aerospace. Within QSW, I designed and developed part of the compiler back-end.