Iacopo Colonnelli

Iacopo Colonnelli
Ph.D. Student
Computer Science Department, University of Turin
Parallel Computing group
Via Pessinetto 12, 10149 Torino – Italy
Email: iacopo.colonnelli@unito.it

Short Bio

Iacopo Colonnelli is a Ph.D. student in Modeling and Data Science at Università di Torino. He received his master’s degree in Computer Engineering from Politecnico di Torino with a thesis on a high-performance parallel tracking algorithm for the ALICE experiment at CERN.

His research focuses on both statistical and computational aspects of data analysis at large scale and on workflow modeling and management in heterogeneous distributed architectures.

Publications

2020

  • I. Colonnelli, B. Cantalupo, I. Merelli, and M. Aldinucci, “StreamFlow: cross-breeding cloud with HPC,” IEEE Transactions on Emerging Topics in Computing, 2020. doi:10.1109/TETC.2020.3019202
    [BibTeX] [Abstract] [Download PDF]

    Workflows are among the most commonly used tools in a variety of execution environments. Many of them target a specific environment; few of them make it possible to execute an entire workflow in different environments, e.g. Kubernetes and batch clusters. We present a novel approach to workflow execution, called StreamFlow, that complements the workflow graph with the declarative description of potentially complex execution environments, and that makes it possible the execution onto multiple sites not sharing a common data space. StreamFlow is then exemplified on a novel bioinformatics pipeline for single cell transcriptomic data analysis workflow.

@article{20:streamflow:tetc,
    abstract = {Workflows are among the most commonly used tools in a variety of execution environments. Many of them target a specific environment; few of them make it possible to execute an entire workflow in different environments, e.g. Kubernetes and batch clusters. We present a novel approach to workflow execution, called StreamFlow, that complements the workflow graph with the declarative description of potentially complex execution environments, and that makes it possible the execution onto multiple sites not sharing a common data space. StreamFlow is then exemplified on a novel bioinformatics pipeline for single cell transcriptomic data analysis workflow.},
    author = {Iacopo Colonnelli and Barbara Cantalupo and Ivan Merelli and Marco Aldinucci},
    date-added = {2020-08-27 09:29:49 +0200},
    date-modified = {2020-08-27 09:36:33 +0200},
    doi = {10.1109/TETC.2020.3019202},
    url = {https://arxiv.org/pdf/2002.01558},
journal = {{IEEE} {T}ransactions on {E}merging {T}opics in {C}omputing},
    keywords = {deephealth, hpc4ai},
    title = {StreamFlow: cross-breeding cloud with {HPC}},
    year = {2020}
    }
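The core idea in the abstract, keeping the workflow graph separate from a declarative description of the execution environments and then binding each step to a site, can be sketched as a toy model. Everything below (class names, connector labels, step names) is hypothetical and is not StreamFlow's actual API:

```python
from dataclasses import dataclass

# Illustrative sketch only: the workflow graph and the description of the
# execution environments are kept separate, and a bindings table maps each
# step onto a deployment. Sites need not share a common data space, so a
# real runtime would also stage data between them explicitly.

@dataclass
class Deployment:
    name: str        # e.g. a Kubernetes cluster or an HPC batch queue
    connector: str   # hypothetical label for how the site is reached

@dataclass
class Step:
    name: str
    command: str

bindings = {
    "align": Deployment("hpc-batch", "slurm"),
    "report": Deployment("k8s-cluster", "kubernetes"),
}

def schedule(steps, bindings):
    """Return (step, site) pairs; data staging is omitted in this sketch."""
    return [(s.name, bindings[s.name].name) for s in steps]

steps = [Step("align", "bwa mem ..."), Step("report", "multiqc ...")]
print(schedule(steps, bindings))
# → [('align', 'hpc-batch'), ('report', 'k8s-cluster')]
```

The point of the separation is that the same workflow graph can be re-deployed on different environments by changing only the bindings, not the steps.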

  • V. Cesare, I. Colonnelli, and M. Aldinucci, “Practical parallelization of scientific applications,” in Proc. of 28th Euromicro Intl. Conference on Parallel, Distributed and Network-Based Processing (PDP), Västerås, Sweden, 2020, pp. 376-384. doi:10.1109/PDP50117.2020.00064
    [BibTeX] [Abstract] [Download PDF]

    This work aims at distilling a systematic methodology to modernize existing sequential scientific codes with a limited re-designing effort, turning an old codebase into modern code, i.e., parallel and robust code. We propose an automatable methodology to parallelize scientific applications designed with a purely sequential programming mindset, thus possibly using global variables, aliasing, random number generators, and stateful functions. We demonstrate the methodology by way of an astrophysical application, where we model at the same time the kinematic profiles of 30 disk galaxies with a Monte Carlo Markov Chain (MCMC), which is sequential by definition. The parallel code exhibits a 12 times speedup on a 48-core platform.

    @inproceedings{20:looppar:pdp,
    abstract = {This work aims at distilling a systematic methodology to modernize existing sequential scientific codes with a limited re-designing effort, turning an old codebase into modern code, i.e., parallel and robust code. We propose an automatable methodology to parallelize scientific applications designed with a purely sequential programming mindset, thus possibly using global variables, aliasing, random number generators, and stateful functions. We demonstrate the methodology by way of an astrophysical application, where we model at the same time the kinematic profiles of 30 disk galaxies with a Monte Carlo Markov Chain (MCMC), which is sequential by definition. The parallel code exhibits a 12 times speedup on a 48-core platform.},
    address = {V{\"a}ster{\aa}s, Sweden},
    author = {Valentina Cesare and Iacopo Colonnelli and Marco Aldinucci},
booktitle = {Proc. of 28th Euromicro Intl. Conference on Parallel, Distributed and Network-Based Processing (PDP)},
    date-modified = {2020-04-05 02:21:31 +0200},
    doi = {10.1109/PDP50117.2020.00064},
    keywords = {hpc4ai, c3s},
    pages = {376-384},
    publisher = {IEEE},
    title = {Practical Parallelization of Scientific Applications},
    url = {https://iris.unito.it/retrieve/handle/2318/1735377/601141/2020_looppar_PDP.pdf},
    year = {2020},
    }
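The methodology summarized above can be illustrated with a minimal, hypothetical sketch: stateful sequential code (globals, shared random number generators) is refactored into pure per-task functions, after which the 30 independent galaxy fits map naturally onto a worker pool. The names and the stand-in computation are illustrative, not the paper's actual code:

```python
import random
from multiprocessing import Pool

# Hedged sketch of the general recipe: eliminate shared state by giving
# each task its own inputs and its own RNG seed, so tasks become pure
# functions that can run in any order on any worker.

def fit_galaxy(args):
    """Purely functional task: per-task RNG, no globals, reproducible."""
    galaxy_id, seed = args
    rng = random.Random(seed)  # private generator instead of a shared one
    # Stand-in for an expensive MCMC fit of one kinematic profile.
    estimate = sum(rng.random() for _ in range(1000)) / 1000
    return galaxy_id, estimate

if __name__ == "__main__":
    tasks = [(i, 42 + i) for i in range(30)]  # 30 disk galaxies
    with Pool() as pool:                      # map independent fits onto cores
        results = dict(pool.map(fit_galaxy, tasks))
    print(len(results))  # → 30
```

Because each fit is independent once the hidden state is removed, the speedup is limited mainly by the number of cores and the cost of the slowest task.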

2019

  • P. Viviani, M. Drocco, D. Baccega, I. Colonnelli, and M. Aldinucci, “Deep learning at scale,” in Proc. of 27th Euromicro Intl. Conference on Parallel, Distributed and Network-Based Processing (PDP), Pavia, Italy, 2019, pp. 124-131. doi:10.1109/EMPDP.2019.8671552
    [BibTeX] [Abstract] [Download PDF]

    This work presents a novel approach to distributed training of deep neural networks (DNNs) that aims to overcome the issues related to mainstream approaches to data parallel training. Established techniques for data parallel training are discussed from both a parallel computing and deep learning perspective, then a different approach is presented that is meant to allow DNN training to scale while retaining good convergence properties. Moreover, an experimental implementation is presented as well as some preliminary results.

    @inproceedings{19:deeplearn:pdp,
    abstract = {This work presents a novel approach to distributed training of deep neural networks (DNNs) that aims to overcome the issues related to mainstream approaches to data parallel training. Established techniques for data parallel training are discussed from both a parallel computing and deep learning perspective, then a different approach is presented that is meant to allow DNN training to scale while retaining good convergence properties. Moreover, an experimental implementation is presented as well as some preliminary results.},
    address = {Pavia, Italy},
    author = {Paolo Viviani and Maurizio Drocco and Daniele Baccega and Iacopo Colonnelli and Marco Aldinucci},
booktitle = {Proc. of 27th Euromicro Intl. Conference on Parallel, Distributed and Network-Based Processing (PDP)},
    date-added = {2020-01-30 10:48:12 +0100},
    date-modified = {2020-01-30 10:48:12 +0100},
    doi = {10.1109/EMPDP.2019.8671552},
    keywords = {deep learning, distributed computing, machine learning, large scale, C++},
    pages = {124-131},
    publisher = {IEEE},
    title = {Deep Learning at Scale},
    url = {https://iris.unito.it/retrieve/handle/2318/1695211/487778/19_deeplearning_PDP.pdf},
    year = {2019},
    }
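The mainstream data-parallel baseline that the abstract discusses can be sketched in a few lines: each worker computes a gradient on its own data shard, the gradients are averaged (the all-reduce step), and every replica applies the same update so the model copies stay in sync. This toy example fits y = w·x by gradient descent across two simulated workers; it illustrates the established baseline, not the paper's alternative approach:

```python
# Data-parallel training sketch (illustrative): same model on every
# worker, different data shard per worker, averaged gradients.

def grad(w, shard):
    """Gradient of mean squared error for the model y = w * x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, shards, lr=0.05):
    g = sum(grad(w, s) for s in shards) / len(shards)  # all-reduce: mean gradient
    return w - lr * g                                  # identical update everywhere

# Two workers, data generated from the true model y = 3x.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = train_step(w, shards)
print(round(w, 3))  # → 3.0
```

The synchronization cost of the averaging step is exactly what makes this scheme hard to scale, which motivates the alternative explored in the paper.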

  • M. Drocco, P. Viviani, I. Colonnelli, M. Aldinucci, and M. Grangetto, “Accelerating spectral graph analysis through wavefronts of linear algebra operations,” in Proc. of 27th Euromicro Intl. Conference on Parallel, Distributed and Network-Based Processing (PDP), Pavia, Italy, 2019, pp. 9-16. doi:10.1109/EMPDP.2019.8671640
    [BibTeX] [Abstract] [Download PDF]

    The wavefront pattern captures the unfolding of a parallel computation in which data elements are laid out as a logical multidimensional grid and the dependency graph favours a diagonal sweep across the grid. In the emerging area of spectral graph analysis, the computing often consists in a wavefront running over a tiled matrix, involving expensive linear algebra kernels. While these applications might benefit from parallel heterogeneous platforms (multi-core with GPUs), programming wavefront applications directly with high-performance linear algebra libraries yields code that is complex to write and optimize for the specific application. We advocate a methodology based on two abstractions (linear algebra and parallel pattern-based run-time), that allows to develop portable, self-configuring, and easy-to-profile code on hybrid platforms.

    @inproceedings{19:gsp:pdp,
abstract = {The wavefront pattern captures the unfolding of a parallel computation in which data elements are laid out as a logical multidimensional grid and the dependency graph favours a diagonal sweep across the grid. In the emerging area of spectral graph analysis, the computing often consists in a wavefront running over a tiled matrix, involving expensive linear algebra kernels. While these applications might benefit from parallel heterogeneous platforms (multi-core with GPUs), programming wavefront applications directly with high-performance linear algebra libraries yields code that is complex to write and optimize for the specific application. We advocate a methodology based on two abstractions (linear algebra and parallel pattern-based run-time), that allows to develop portable, self-configuring, and easy-to-profile code on hybrid platforms.},
    address = {Pavia, Italy},
    author = {Maurizio Drocco and Paolo Viviani and Iacopo Colonnelli and Marco Aldinucci and Marco Grangetto},
booktitle = {Proc. of 27th Euromicro Intl. Conference on Parallel, Distributed and Network-Based Processing (PDP)},
    date-modified = {2019-03-22 23:07:10 +0100},
    doi = {10.1109/EMPDP.2019.8671640},
    keywords = {eigenvalues, wavefront, GPU, CUDA, linear algebra},
    pages = {9-16},
    publisher = {IEEE},
    title = {Accelerating spectral graph analysis through wavefronts of linear algebra operations},
    url = {https://iris.unito.it/retrieve/handle/2318/1695315/488105/19_wavefront_PDP.pdf},
    year = {2019},
    }
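The wavefront pattern the abstract describes can be made concrete: if tile (i, j) depends on its top and left neighbours, then all tiles on the same anti-diagonal i + j = d are mutually independent, so a runtime can execute each diagonal in parallel. This sequential sketch (with a toy kernel standing in for the expensive linear algebra kernels) shows the sweep order:

```python
# Wavefront sweep over an n x n tile grid (illustrative): diagonals are
# processed in order, and every tile on one diagonal could run in parallel
# because its dependencies lie on earlier diagonals.

def wavefront(n, kernel):
    """Sweep the grid anti-diagonal by anti-diagonal."""
    out = [[0] * n for _ in range(n)]
    for d in range(2 * n - 1):                          # diagonals i + j = d
        for i in range(max(0, d - n + 1), min(d, n - 1) + 1):
            j = d - i
            top = out[i - 1][j] if i > 0 else 0         # dependency above
            left = out[i][j - 1] if j > 0 else 0        # dependency to the left
            out[i][j] = kernel(top, left)               # stand-in kernel
    return out

# Toy kernel: each tile counts the monotone paths reaching it, so the grid
# fills with binomial coefficients (Pascal's triangle along diagonals).
grid = wavefront(4, lambda top, left: max(1, top + left))
print(grid[3][3])  # → 20
```

In the setting of the paper, the toy kernel would be replaced by expensive linear algebra operations, and the diagonal-level independence is what the pattern-based runtime exploits on hybrid CPU/GPU platforms.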