Alberto Riccardo Martinelli

PhD Student
Computer Science Department, University of Turin
Parallel Computing group
Via Pessinetto 12, 10149 Torino – Italy
E-mail: albertoriccardo.martinelli AT unito.it

Short Bio

Alberto Riccardo Martinelli is a PhD student in Computer Science at the University of Turin. He received his master’s degree with honors in Computer Science from the same university.

Fields of interest:

  • Parallel computing
  • Distributed computing
  • High-performance computing

Publications

2023

  • A. R. Martinelli, M. Torquati, M. Aldinucci, I. Colonnelli, and B. Cantalupo, “CAPIO: a Middleware for Transparent I/O Streaming in Data-Intensive Workflows,” in 2023 IEEE 30th International Conference on High Performance Computing, Data, and Analytics (HiPC), Goa, India, 2023. doi:10.1109/HiPC58850.2023.00031
    [BibTeX] [Abstract] [Download PDF]

    With the increasing amount of digital data available for analysis and simulation, the class of I/O-intensive HPC workflows is fated to quickly expand, further exacerbating the performance gap between computing, memory, and storage technologies. This paper introduces CAPIO (Cross-Application Programmable I/O), a middleware capable of injecting I/O streaming capabilities into file-based workflows, improving the computation-I/O overlap without the need to change the application code. The contribution is twofold: 1) at design time, a new I/O coordination language allows users to annotate workflow data dependencies with synchronization semantics; 2) at run time, a user-space middleware automatically and transparently to the user turns a workflow batch execution into a streaming execution according to the semantics expressed in the configuration file. CAPIO has been tested on synthetic benchmarks simulating typical workflow I/O patterns and two real-world workflows. Experiments show that CAPIO reduces the execution time by 10% to 66% for data-intensive workflows that use the file system as a communication medium.

    @inproceedings{23:hipc:capio,
    title = {{CAPIO}: a Middleware for Transparent I/O Streaming in Data-Intensive Workflows},
    author = {Alberto Riccardo Martinelli and Massimo Torquati and Marco Aldinucci and Iacopo Colonnelli and Barbara Cantalupo},
    year = {2023},
    month = dec,
    booktitle = {2023 IEEE 30th International Conference on High Performance Computing, Data, and Analytics (HiPC)},
    publisher = {{IEEE}},
    address = {Goa, India},
    doi = {10.1109/HiPC58850.2023.00031},
    abstract = {With the increasing amount of digital data available for analysis and simulation, the class of I/O-intensive HPC workflows is fated to quickly expand, further exacerbating the performance gap between computing, memory, and storage technologies. This paper introduces CAPIO (Cross-Application Programmable I/O), a middleware capable of injecting I/O streaming capabilities into file-based workflows, improving the computation-I/O overlap without the need to change the application code. The contribution is twofold: 1) at design time, a new I/O coordination language allows users to annotate workflow data dependencies with synchronization semantics; 2) at run time, a user-space middleware automatically and transparently to the user turns a workflow batch execution into a streaming execution according to the semantics expressed in the configuration file. CAPIO has been tested on synthetic benchmarks simulating typical workflow I/O patterns and two real-world workflows. Experiments show that CAPIO reduces the execution time by 10\% to 66\% for data-intensive workflows that use the file system as a communication medium.},
    keywords = {admire, eupex, icsc, capio},
    url = {https://iris.unito.it/retrieve/27380f37-0978-409e-a9d8-2b5e95a4bb85/CAPIO-HiPC23-preprint.pdf}
    }
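
    As a concrete illustration (not code from the paper), the sketch below shows the file-based producer/consumer pattern CAPIO targets: both steps use ordinary file I/O and never reference CAPIO, which is exactly what allows the middleware to stream the file transparently when the coordination language marks it as streamable. The file name and record format are hypothetical.

    // Illustrative sketch (C++), not taken from the paper. The two steps
    // communicate only through data.txt; CAPIO-style middleware would
    // intercept these I/O calls at run time and let the consumer start
    // reading while the producer is still writing.
    #include <fstream>
    #include <iostream>
    #include <string>

    // Producer step of the workflow: writes records to a shared file.
    void produce(const std::string& path, int n_records) {
        std::ofstream out(path);
        for (int i = 0; i < n_records; ++i)
            out << "record " << i << '\n';  // each line is one unit of work
    }

    // Consumer step: reads the same file line by line. In a batch execution
    // it must wait for the producer to finish; with streaming it can overlap.
    void consume(const std::string& path) {
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line))
            std::cout << "processing " << line << '\n';
    }

    int main() {
        produce("data.txt", 1000);  // in a real workflow these would be two
        consume("data.txt");        // separate executables run in sequence
    }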

  • G. Audrito, A. R. Martinelli, and G. Torta, “Parallelising an Aggregate Programming Framework with Message-Passing Interface,” in 2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C), 2023, pp. 140–145. doi:10.1109/ACSOS-C58168.2023.00054
    [BibTeX]
    @inproceedings{23:acsos:fcppmpi,
    title = {Parallelising an Aggregate Programming Framework with Message-Passing Interface},
    author = {Giorgio Audrito and Alberto Riccardo Martinelli and Gianluca Torta},
    year = {2023},
    booktitle = {2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C)},
    pages = {140--145},
    doi = {10.1109/ACSOS-C58168.2023.00054},
    keywords = {parallel}
    }

  • J. Garcia-Blas, G. Sanchez-Gallegos, C. Petre, A. R. Martinelli, M. Aldinucci, and J. Carretero, “Hercules: Scalable and Network Portable In-Memory Ad-Hoc File System for Data-Centric and High-Performance Applications,” in Euro-Par 2023: Parallel Processing, Cham, 2023, pp. 679–693.
    [BibTeX] [Abstract]

    The growing demands for data processing by new data-intensive applications are putting pressure on the performance and capacity of HPC storage systems. The advancements in storage technologies, such as NVMe and persistent memory, are aimed at meeting these demands. However, relying solely on ultra-fast storage devices is not cost-effective, leading to the need for multi-tier storage hierarchies to move data based on its usage. To address this issue, ad-hoc file systems have been proposed as a solution. They utilise the available storage of compute nodes, such as memory and persistent storage, to create a temporary file system that adapts to the application behaviour in the HPC environment. This work presents the design, implementation, and evaluation of a distributed ad-hoc in-memory storage system (Hercules), highlighting the new communication model included in Hercules. This communication model takes advantage of the Unified Communication X framework (UCX). This solution leverages the capabilities of RDMA protocols, including Infiniband, Omni-Path, shared memory, and zero-copy transfers. The preliminary evaluation results show excellent network utilisation compared with other existing technologies.

    @inproceedings{10.1007/978-3-031-39698-4_46,
    title = {Hercules: Scalable and Network Portable In-Memory Ad-Hoc File System for Data-Centric and High-Performance Applications},
    author = {Garcia-Blas, Javier and Sanchez-Gallegos, Genaro and Petre, Cosmin and Martinelli, Alberto Riccardo and Aldinucci, Marco and Carretero, Jesus},
    year = {2023},
    booktitle = {Euro-Par 2023: Parallel Processing},
    publisher = {Springer Nature Switzerland},
    address = {Cham},
    pages = {679--693},
    isbn = {978-3-031-39698-4},
    abstract = {The growing demands for data processing by new data-intensive applications are putting pressure on the performance and capacity of HPC storage systems. The advancements in storage technologies, such as NVMe and persistent memory, are aimed at meeting these demands. However, relying solely on ultra-fast storage devices is not cost-effective, leading to the need for multi-tier storage hierarchies to move data based on its usage. To address this issue, ad-hoc file systems have been proposed as a solution. They utilise the available storage of compute nodes, such as memory and persistent storage, to create a temporary file system that adapts to the application behaviour in the HPC environment. This work presents the design, implementation, and evaluation of a distributed ad-hoc in-memory storage system (Hercules), highlighting the new communication model included in Hercules. This communication model takes advantage of the Unified Communication X framework (UCX). This solution leverages the capabilities of RDMA protocols, including Infiniband, Omni-Path, shared memory, and zero-copy transfers. The preliminary evaluation results show excellent network utilisation compared with other existing technologies.},
    editor = {Cano, Jos{\'e} and Dikaiakos, Marios D. and Papadopoulos, George A. and Peric{\`a}s, Miquel and Sakellariou, Rizos}
    }

  • I. Colonnelli, B. Casella, G. Mittone, Y. Arfat, B. Cantalupo, R. Esposito, A. R. Martinelli, D. Medić, and M. Aldinucci, “Federated Learning meets HPC and cloud,” in Astrophysics and Space Science Proceedings, Catania, Italy, 2023, pp. 193–199. doi:10.1007/978-3-031-34167-0_39
    [BibTeX] [Abstract] [Download PDF]

    HPC and AI are fated to meet for several reasons. This article will discuss some of them and argue why this will happen through the set of methods and technologies that underpin cloud computing. As a paradigmatic example, we present a new federated learning system that collaboratively trains a deep learning model in different supercomputing centers. The system is based on the StreamFlow workflow manager designed for hybrid cloud-HPC infrastructures.

    @inproceedings{22:ml4astro,
    title = {Federated Learning meets {HPC} and cloud},
    author = {Iacopo Colonnelli and Bruno Casella and Gianluca Mittone and Yasir Arfat and Barbara Cantalupo and Roberto Esposito and Alberto Riccardo Martinelli and Doriana Medi\'{c} and Marco Aldinucci},
    year = {2023},
    booktitle = {Astrophysics and Space Science Proceedings},
    publisher = {Springer},
    address = {Catania, Italy},
    volume = {60},
    pages = {193--199},
    doi = {10.1007/978-3-031-34167-0_39},
    isbn = {978-3-031-34167-0},
    abstract = {HPC and AI are fated to meet for several reasons. This article will discuss some of them and argue why this will happen through the set of methods and technologies that underpin cloud computing. As a paradigmatic example, we present a new federated learning system that collaboratively trains a deep learning model in different supercomputing centers. The system is based on the StreamFlow workflow manager designed for hybrid cloud-HPC infrastructures.},
    editor = {Bufano, Filomena and Riggi, Simone and Sciacca, Eva and Schilliro, Francesco},
    url = {https://iris.unito.it/retrieve/5631da1c-96a0-48c0-a48e-2cdf6b84841d/main.pdf},
    bdsk-url-1 = {https://iris.unito.it/retrieve/5631da1c-96a0-48c0-a48e-2cdf6b84841d/main.pdf},
    keywords = {across, eupilot, streamflow, federated}
    }

2022

  • G. Agosta, M. Aldinucci, C. Alvarez, R. Ammendola, Y. Arfat, O. Beaumont, M. Bernaschi, A. Biagioni, T. Boccali, B. Bramas, C. Brandolese, B. Cantalupo, M. Carrozzo, D. Cattaneo, A. Celestini, M. Celino, I. Colonnelli, P. Cretaro, P. D’Ambra, M. Danelutto, R. Esposito, L. Eyraud-Dubois, A. Filgueras, W. Fornaciari, O. Frezza, A. Galimberti, F. Giacomini, B. Goglin, D. Gregori, A. Guermouche, F. Iannone, M. Kulczewski, F. Lo Cicero, A. Lonardo, A. R. Martinelli, M. Martinelli, X. Martorell, G. Massari, S. Montangero, G. Mittone, R. Namyst, A. Oleksiak, P. Palazzari, P. S. Paolucci, F. Reghenzani, C. Rossi, S. Saponara, F. Simula, F. Terraneo, S. Thibault, M. Torquati, M. Turisini, P. Vicini, M. Vidal, D. Zoni, and G. Zummo, “Towards EXtreme scale technologies and accelerators for euROhpc hw/Sw supercomputing applications for exascale: The TEXTAROSSA approach,” Microprocessors and Microsystems, vol. 95, p. 104679, 2022. doi:10.1016/j.micpro.2022.104679
    [BibTeX] [Abstract]

    In the near future, Exascale systems will need to bridge three technology gaps to achieve high performance while remaining under tight power constraints: energy efficiency and thermal control; extreme computation efficiency via HW acceleration and new arithmetic; methods and tools for seamless integration of reconfigurable accelerators in heterogeneous HPC multi-node platforms. TEXTAROSSA addresses these gaps through a co-design approach to heterogeneous HPC solutions, supported by the integration and extension of HW and SW IPs, programming models, and tools derived from European research.

    @article{textarossa2022micpro:,
    title = {Towards EXtreme scale technologies and accelerators for euROhpc hw/Sw supercomputing applications for exascale: The TEXTAROSSA approach},
    author = {Giovanni Agosta and Marco Aldinucci and Carlos Alvarez and Roberto Ammendola and Yasir Arfat and Olivier Beaumont and Massimo Bernaschi and Andrea Biagioni and Tommaso Boccali and Berenger Bramas and Carlo Brandolese and Barbara Cantalupo and Mauro Carrozzo and Daniele Cattaneo and Alessandro Celestini and Massimo Celino and Iacopo Colonnelli and Paolo Cretaro and Pasqua D'Ambra and Marco Danelutto and Roberto Esposito and Lionel Eyraud-Dubois and Antonio Filgueras and William Fornaciari and Ottorino Frezza and Andrea Galimberti and Francesco Giacomini and Brice Goglin and Daniele Gregori and Abdou Guermouche and Francesco Iannone and Michal Kulczewski and Francesca {Lo Cicero} and Alessandro Lonardo and Alberto R. Martinelli and Michele Martinelli and Xavier Martorell and Giuseppe Massari and Simone Montangero and Gianluca Mittone and Raymond Namyst and Ariel Oleksiak and Paolo Palazzari and Pier Stanislao Paolucci and Federico Reghenzani and Cristian Rossi and Sergio Saponara and Francesco Simula and Federico Terraneo and Samuel Thibault and Massimo Torquati and Matteo Turisini and Piero Vicini and Miquel Vidal and Davide Zoni and Giuseppe Zummo},
    year = {2022},
    journal = {Microprocessors and Microsystems},
    volume = {95},
    pages = {104679},
    doi = {10.1016/j.micpro.2022.104679},
    issn = {0141-9331},
    abstract = {In the near future, Exascale systems will need to bridge three technology gaps to achieve high performance while remaining under tight power constraints: energy efficiency and thermal control; extreme computation efficiency via HW acceleration and new arithmetic; methods and tools for seamless integration of reconfigurable accelerators in heterogeneous HPC multi-node platforms. TEXTAROSSA addresses these gaps through a co-design approach to heterogeneous HPC solutions, supported by the integration and extension of HW and SW IPs, programming models, and tools derived from European research.},
    bdsk-url-1 = {https://doi.org/10.1016/j.micpro.2022.104679},
    keywords = {textarossa}
    }

2021

  • G. Agosta, W. Fornaciari, A. Galimberti, G. Massari, F. Reghenzani, F. Terraneo, D. Zoni, C. Brandolese, M. Celino, F. Iannone, P. Palazzari, G. Zummo, M. Bernaschi, P. D’Ambra, S. Saponara, M. Danelutto, M. Torquati, M. Aldinucci, Y. Arfat, B. Cantalupo, I. Colonnelli, R. Esposito, A. R. Martinelli, G. Mittone, O. Beaumont, B. Bramas, L. Eyraud-Dubois, B. Goglin, A. Guermouche, R. Namyst, S. Thibault, A. Filgueras, M. Vidal, C. Alvarez, X. Martorell, A. Oleksiak, M. Kulczewski, A. Lonardo, P. Vicini, F. L. Cicero, F. Simula, A. Biagioni, P. Cretaro, O. Frezza, P. S. Paolucci, M. Turisini, F. Giacomini, T. Boccali, S. Montangero, and R. Ammendola, “TEXTAROSSA: Towards EXtreme scale Technologies and Accelerators for euROhpc hw/Sw Supercomputing Applications for exascale,” in Proc. of the 24th Euromicro Conference on Digital System Design (DSD), Palermo, Italy, 2021. doi:10.1109/DSD53832.2021.00051
    [BibTeX] [Abstract]

    To achieve high performance and high energy efficiency on near-future exascale computing systems, three key technology gaps need to be bridged. These gaps include: energy efficiency and thermal control; extreme computation efficiency via HW acceleration and new arithmetics; methods and tools for seamless integration of reconfigurable accelerators in heterogeneous HPC multi-node platforms. TEXTAROSSA aims at tackling these gaps through a co-design approach to heterogeneous HPC solutions, supported by the integration and extension of HW and SW IPs, programming models and tools derived from European research.

    @inproceedings{21:DSD:textarossa,
    title = {{TEXTAROSSA}: Towards EXtreme scale Technologies and Accelerators for euROhpc hw/Sw Supercomputing Applications for exascale},
    author = {Giovanni Agosta and William Fornaciari and Andrea Galimberti and Giuseppe Massari and Federico Reghenzani and Federico Terraneo and Davide Zoni and Carlo Brandolese and Massimo Celino and Francesco Iannone and Paolo Palazzari and Giuseppe Zummo and Massimo Bernaschi and Pasqua D'Ambra and Sergio Saponara and Marco Danelutto and Massimo Torquati and Marco Aldinucci and Yasir Arfat and Barbara Cantalupo and Iacopo Colonnelli and Roberto Esposito and Alberto Riccardo Martinelli and Gianluca Mittone and Olivier Beaumont and Berenger Bramas and Lionel Eyraud-Dubois and Brice Goglin and Abdou Guermouche and Raymond Namyst and Samuel Thibault and Antonio Filgueras and Miquel Vidal and Carlos Alvarez and Xavier Martorell and Ariel Oleksiak and Michal Kulczewski and Alessandro Lonardo and Piero Vicini and Francesco Lo Cicero and Francesco Simula and Andrea Biagioni and Paolo Cretaro and Ottorino Frezza and Pier Stanislao Paolucci and Matteo Turisini and Francesco Giacomini and Tommaso Boccali and Simone Montangero and Roberto Ammendola},
    year = {2021},
    month = aug,
    booktitle = {Proc. of the 24th Euromicro Conference on Digital System Design ({DSD})},
    publisher = {IEEE},
    address = {Palermo, Italy},
    doi = {10.1109/DSD53832.2021.00051},
    abstract = {To achieve high performance and high energy efficiency on near-future exascale computing systems, three key technology gaps need to be bridged. These gaps include: energy efficiency and thermal control; extreme computation efficiency via HW acceleration and new arithmetics; methods and tools for seamless integration of reconfigurable accelerators in heterogeneous HPC multi-node platforms. TEXTAROSSA aims at tackling these gaps through a co-design approach to heterogeneous HPC solutions, supported by the integration and extension of HW and SW IPs, programming models and tools derived from European research.},
    date-added = {2021-09-04 12:07:42 +0200},
    date-modified = {2021-09-04 12:23:41 +0200},
    bdsk-url-1 = {https://doi.org/10.1109/DSD53832.2021.00051},
    keywords = {textarossa, streamflow}
    }

  • M. Aldinucci, V. Cesare, I. Colonnelli, A. R. Martinelli, G. Mittone, and B. Cantalupo, “Practical Parallelization of a Laplace Solver with MPI,” in ENEA CRESCO in the fight against COVID-19, 2021, pp. 21–24.
    [BibTeX] [Abstract]

    This work exposes a practical methodology for the semi-automatic parallelization of existing code. We show how a scientific sequential code can be parallelized through our approach. The obtained parallel code is only slightly different from the starting sequential one, providing an example of how little re-designing our methodology involves. The performance of the parallelized code, executed on the CRESCO6 cluster, is then exposed and discussed. We also believe in the educational value of this approach and suggest its use as a teaching device for students.

    @inproceedings{21:laplace:enea,
    title = {Practical Parallelization of a {Laplace} Solver with {MPI}},
    author = {Aldinucci, Marco and Cesare, Valentina and Colonnelli, Iacopo and Martinelli, Alberto Riccardo and Mittone, Gianluca and Cantalupo, Barbara},
    year = {2021},
    booktitle = {ENEA CRESCO in the fight against COVID-19},
    publisher = {ENEA},
    pages = {21--24},
    abstract = {This work exposes a practical methodology for the semi-automatic parallelization of existing code. We show how a scientific sequential code can be parallelized through our approach. The obtained parallel code is only slightly different from the starting sequential one, providing an example of how little re-designing our methodology involves. The performance of the parallelized code, executed on the CRESCO6 cluster, is then exposed and discussed. We also believe in the educational value of this approach and suggest its use as a teaching device for students.},
    editor = {Francesco Iannone},
    keywords = {hpc4ai}
    }
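
    As an illustration of the kind of transformation the paper describes, the sketch below (simplified, not the paper’s actual code) parallelises a Jacobi iteration for the 2D Laplace equation with a row-wise MPI decomposition: the stencil loop is the unchanged sequential kernel, and the only additions are the domain split and the halo exchange. The grid size is assumed divisible by the number of ranks; boundary conditions and the convergence test are omitted for brevity.

    // Illustrative sketch (C++ with MPI), not taken from the paper.
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int N = 512;            // global grid is N x N
        const int rows = N / size;    // rows owned by this rank
        // local block plus one halo row above and one below
        std::vector<double> u((rows + 2) * N, 0.0), unew = u;
        auto at = [N](int i, int j) { return i * N + j; };

        int up   = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int down = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        for (int iter = 0; iter < 1000; ++iter) {
            // exchange halo rows with the neighbouring ranks
            MPI_Sendrecv(&u[at(1, 0)], N, MPI_DOUBLE, up, 0,
                         &u[at(rows + 1, 0)], N, MPI_DOUBLE, down, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[at(rows, 0)], N, MPI_DOUBLE, down, 1,
                         &u[at(0, 0)], N, MPI_DOUBLE, up, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            // unchanged sequential Jacobi kernel on the owned rows
            for (int i = 1; i <= rows; ++i)
                for (int j = 1; j < N - 1; ++j)
                    unew[at(i, j)] = 0.25 * (u[at(i - 1, j)] + u[at(i + 1, j)]
                                           + u[at(i, j - 1)] + u[at(i, j + 1)]);
            u.swap(unew);
        }
        MPI_Finalize();
    }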

  • M. Aldinucci, V. Cesare, I. Colonnelli, A. R. Martinelli, G. Mittone, B. Cantalupo, C. Cavazzoni, and M. Drocco, “Practical Parallelization of Scientific Applications with OpenMP, OpenACC and MPI,” Journal of Parallel and Distributed Computing, vol. 157, pp. 13–29, 2021. doi:10.1016/j.jpdc.2021.05.017
    [BibTeX] [Abstract] [Download PDF]

    This work aims at distilling a systematic methodology to modernize existing sequential scientific codes with a little re-designing effort, turning an old codebase into modern code, i.e., parallel and robust code. We propose a semi-automatic methodology to parallelize scientific applications designed with a purely sequential programming mindset, possibly using global variables, aliasing, random number generators, and stateful functions. We demonstrate that the same methodology works for the parallelization in the shared memory model (via OpenMP), message passing model (via MPI), and General Purpose Computing on GPU model (via OpenACC). The method is demonstrated parallelizing four real-world sequential codes in the domain of physics and material science. The methodology itself has been distilled in collaboration with MSc students of the Parallel Computing course at the University of Torino, who applied it for the first time to the project works that they presented for the final exam of the course. Every year the course hosts some special lectures from industry representatives, who present how they use parallel computing and offer codes to be parallelized.

    @article{21:jpdc:loop,
    title = {Practical Parallelization of Scientific Applications with {OpenMP, OpenACC and MPI}},
    author = {Aldinucci, Marco and Cesare, Valentina and Colonnelli, Iacopo and Martinelli, Alberto Riccardo and Mittone, Gianluca and Cantalupo, Barbara and Cavazzoni, Carlo and Drocco, Maurizio},
    year = {2021},
    journal = {Journal of Parallel and Distributed Computing},
    volume = {157},
    pages = {13--29},
    doi = {10.1016/j.jpdc.2021.05.017},
    abstract = {This work aims at distilling a systematic methodology to modernize existing sequential scientific codes with a little re-designing effort, turning an old codebase into \emph{modern} code, i.e., parallel and robust code. We propose a semi-automatic methodology to parallelize scientific applications designed with a purely sequential programming mindset, possibly using global variables, aliasing, random number generators, and stateful functions. We demonstrate that the same methodology works for the parallelization in the shared memory model (via OpenMP), message passing model (via MPI), and General Purpose Computing on GPU model (via OpenACC). The method is demonstrated parallelizing four real-world sequential codes in the domain of physics and material science. The methodology itself has been distilled in collaboration with MSc students of the Parallel Computing course at the University of Torino, that applied it for the first time to the project works that they presented for the final exam of the course. Every year the course hosts some special lectures from industry representatives, who present how they use parallel computing and offer codes to be parallelized.},
    date-added = {2021-06-10 22:05:54 +0200},
    date-modified = {2021-06-10 22:30:05 +0200},
    url = {https://iris.unito.it/retrieve/handle/2318/1792557/770851/Practical_Parallelization_JPDC_preprint.pdf},
    bdsk-url-1 = {https://iris.unito.it/retrieve/handle/2318/1792557/770851/Practical_Parallelization_JPDC_preprint.pdf},
    bdsk-url-2 = {https://doi.org/10.1016/j.jpdc.2021.05.017},
    keywords = {saperi}
    }
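
    The shared-memory step of this methodology can be made concrete with a minimal sketch (illustrative, not from the paper): once global state and aliasing have been removed, the sequential loop body stays untouched and parallelism is expressed by a single OpenMP directive, with a reduction clause for the accumulator that would otherwise be a data race.

    // Illustrative sketch (C++ with OpenMP), not taken from the paper.
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1000000;
        std::vector<double> x(n, 0.5);

        double sum = 0.0;
        // the reduction clause gives each thread a private accumulator
        // and combines them at the end, avoiding a race on 'sum'
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 0; i < n; ++i)
            sum += x[i] * x[i];       // unchanged sequential body

        std::printf("squared norm = %f\n", sum);
    }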

2018

  • C. Misale, M. Drocco, G. Tremblay, A. R. Martinelli, and M. Aldinucci, “PiCo: High-performance data analytics pipelines in modern C++,” Future Generation Computer Systems, vol. 87, pp. 392–403, 2018. doi:10.1016/j.future.2018.05.030
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present a new C++ API with a fluent interface called PiCo (Pipeline Composition). PiCo’s programming model aims at making easier the programming of data analytics applications while preserving or enhancing their performance. This is attained through three key design choices: (1) unifying batch and stream data access models, (2) decoupling processing from data layout, and (3) exploiting a stream-oriented, scalable, efficient C++11 runtime system. PiCo proposes a programming model based on pipelines and operators that are polymorphic with respect to data types in the sense that it is possible to reuse the same algorithms and pipelines on different data models (e.g., streams, lists, sets, etc.). Preliminary results show that PiCo, when compared to Spark and Flink, can attain better performances in terms of execution times and can hugely improve memory utilization, both for batch and stream processing.

    @article{18:fgcs:pico,
    title = {PiCo: High-performance data analytics pipelines in modern C++},
    author = {Claudia Misale and Maurizio Drocco and Guy Tremblay and Alberto R. Martinelli and Marco Aldinucci},
    year = {2018},
    journal = {Future Generation Computer Systems},
    volume = {87},
    pages = {392--403},
    doi = {10.1016/j.future.2018.05.030},
    abstract = {In this paper, we present a new C++ API with a fluent interface called PiCo (Pipeline Composition). PiCo's programming model aims at making easier the programming of data analytics applications while preserving or enhancing their performance. This is attained through three key design choices: (1) unifying batch and stream data access models, (2) decoupling processing from data layout, and (3) exploiting a stream-oriented, scalable, efficient C++11 runtime system. PiCo proposes a programming model based on pipelines and operators that are polymorphic with respect to data types in the sense that it is possible to reuse the same algorithms and pipelines on different data models (e.g., streams, lists, sets, etc.). Preliminary results show that PiCo, when compared to Spark and Flink, can attain better performances in terms of execution times and can hugely improve memory utilization, both for batch and stream processing.},
    date-added = {2018-05-18 21:24:31 +0000},
    date-modified = {2020-11-15 17:22:30 +0100},
    url = {https://iris.unito.it/retrieve/handle/2318/1668444/414280/fgcs_pico.pdf},
    bdsk-url-1 = {https://iris.unito.it/retrieve/handle/2318/1668444/414280/fgcs_pico.pdf},
    bdsk-url-2 = {https://doi.org/10.1016/j.future.2018.05.030},
    keywords = {toreador, bigdata, fastflow}
    }
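
    The fluent interface described in the abstract can be sketched with a self-contained toy (this is not PiCo’s actual API; the class and method names are invented for illustration): stages are composed by chained calls, and the same composed pipeline runs unchanged over different data sources, echoing PiCo’s polymorphism with respect to data models.

    // Illustrative toy (standard C++), not PiCo's real interface.
    #include <functional>
    #include <iostream>
    #include <vector>

    template <typename T>
    class Pipe {
        std::function<T(T)> fn = [](T x) { return x; };
    public:
        // chain another stage onto the pipeline (fluent interface)
        Pipe& map(std::function<T(T)> f) {
            auto prev = fn;
            fn = [prev, f](T x) { return f(prev(x)); };
            return *this;
        }
        // run the composed pipeline over any iterable source
        template <typename Container>
        void run(const Container& src) const {
            for (const auto& v : src) std::cout << fn(v) << '\n';
        }
    };

    int main() {
        Pipe<int> p;
        p.map([](int x) { return x + 1; })
         .map([](int x) { return x * x; });
        p.run(std::vector<int>{1, 2, 3});  // the same pipeline could consume
                                           // a stream or another container
    }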