Alberto Riccardo Martinelli

PhD Student
Computer Science Department, University of Turin
Parallel Computing group
Via Pessinetto 12, 10149 Torino – Italy
E-mail: alberto.martinelli AT edu.unito.it

Short Bio

Alberto Riccardo Martinelli is a Ph.D. student in Computer Science at the University of Turin, where he also received his master’s degree with honors in Computer Science.

Fields of interest:

  • Parallel computing
  • Distributed computing
  • High-performance computing

Publications

2018

  • C. Misale, M. Drocco, G. Tremblay, A. R. Martinelli, and M. Aldinucci, “PiCo: High-performance data analytics pipelines in modern C++,” Future Generation Computer Systems, vol. 87, pp. 392-403, 2018. doi:10.1016/j.future.2018.05.030

    In this paper, we present a new C++ API with a fluent interface called PiCo (Pipeline Composition). PiCo’s programming model aims at making easier the programming of data analytics applications while preserving or enhancing their performance. This is attained through three key design choices: (1) unifying batch and stream data access models, (2) decoupling processing from data layout, and (3) exploiting a stream-oriented, scalable, efficient C++11 runtime system. PiCo proposes a programming model based on pipelines and operators that are polymorphic with respect to data types in the sense that it is possible to reuse the same algorithms and pipelines on different data models (e.g., streams, lists, sets, etc.). Preliminary results show that PiCo, when compared to Spark and Flink, can attain better performances in terms of execution times and can hugely improve memory utilization, both for batch and stream processing.
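
    To make the pipelines-and-operators idea concrete, the sketch below shows a fluent map/reduce composition in C++. It is a minimal illustration written for this page, assuming C++14; the Pipe class and its map/reduce methods are hypothetical names and are not the published PiCo API.

    #include <iostream>
    #include <numeric>
    #include <utility>
    #include <vector>

    // Hypothetical illustration only -- not the published PiCo API.
    // A tiny fluent "pipeline" over a std::vector that composes a map
    // stage and a reduce stage.
    template <typename T>
    class Pipe {
    public:
        explicit Pipe(std::vector<T> data) : data_(std::move(data)) {}

        // map: apply f to every element and return a new pipeline stage.
        template <typename F>
        auto map(F f) const {
            using U = decltype(f(std::declval<T>()));
            std::vector<U> out;
            out.reserve(data_.size());
            for (const auto& x : data_) out.push_back(f(x));
            return Pipe<U>(std::move(out));
        }

        // reduce: fold all elements with a binary operator.
        template <typename F>
        T reduce(T init, F f) const {
            return std::accumulate(data_.begin(), data_.end(), init, f);
        }

    private:
        std::vector<T> data_;
    };

    int main() {
        // Fluent composition: square each element, then sum the results.
        int sum_of_squares = Pipe<int>({1, 2, 3, 4})
                                 .map([](int x) { return x * x; })
                                 .reduce(0, [](int a, int b) { return a + b; });
        std::cout << sum_of_squares << "\n";  // prints 30
        return 0;
    }

    Unlike this eager, vector-based toy, the PiCo runtime described in the paper is stream-oriented and decouples processing from data layout; the sketch only illustrates the fluent composition style mentioned in the abstract.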

    @article{18:fgcs:pico,
    abstract = {In this paper, we present a new C++ API with a fluent interface called PiCo (Pipeline Composition). PiCo's programming model aims at making easier the programming of data analytics applications while preserving or enhancing their performance. This is attained through three key design choices: (1) unifying batch and stream data access models, (2) decoupling processing from data layout, and (3) exploiting a stream-oriented, scalable, efficient C++11 runtime system. PiCo proposes a programming model based on pipelines and operators that are polymorphic with respect to data types in the sense that it is possible to reuse the same algorithms and pipelines on different data models (e.g., streams, lists, sets, etc.). Preliminary results show that PiCo, when compared to Spark and Flink, can attain better performances in terms of execution times and can hugely improve memory utilization, both for batch and stream processing.},
    author = {Claudia Misale and Maurizio Drocco and Guy Tremblay and Alberto R. Martinelli and Marco Aldinucci},
    date-added = {2018-05-18 21:24:31 +0000},
    date-modified = {2020-11-15 17:22:30 +0100},
    doi = {10.1016/j.future.2018.05.030},
    journal = {Future Generation Computer Systems},
    keywords = {toreador, bigdata, fastflow},
    pages = {392-403},
    title = {PiCo: High-performance data analytics pipelines in modern C++},
    url = {https://iris.unito.it/retrieve/handle/2318/1668444/414280/fgcs_pico.pdf},
    volume = {87},
    year = {2018}
    }