Iacopo Colonnelli


Assistant Professor (RTDA)
Alpha Research Group (Parallel Computing)
University of Turin, Computer Science Dept.
Via Pessinetto 12, 10149 Torino – Italy
Email: iacopo.colonnelli@unito.it
ORCiD: 0000-0001-9290-2017

Short Bio

Iacopo Colonnelli is an assistant professor (RTDA) in Computer Science at the University of Turin. He received his Ph.D. with honours in Modeling and Data Science from Università di Torino with a thesis on novel workflow models for heterogeneous distributed systems, and his master’s degree in Computer Engineering from Politecnico di Torino with a thesis on a high-performance parallel tracking algorithm for the ALICE experiment at CERN.

His research focuses on both statistical and computational aspects of large-scale data analysis and on workflow modeling and management in heterogeneous distributed architectures. He is a member of the Common Workflow Language (CWL) Technical Team.

Open Source Software

  • Creator and maintainer of StreamFlow, a container-native workflow management system that supports hybrid workflows and their execution on cloud-HPC infrastructures (a minimal configuration sketch follows this list).
  • Creator and maintainer of Jupyter Workflow, an extension of the IPython kernel designed to support literate workflows and to execute them in a distributed fashion on cloud-HPC architectures.
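
To give a flavour of how StreamFlow separates workflow logic from execution sites, the sketch below shows the overall shape of a streamflow.yml file binding one CWL workflow step to a remote deployment. It is only illustrative: the workflow name, file names, step path, deployment label, and connector options are hypothetical, and the exact field names may differ between StreamFlow releases.

    # Illustrative streamflow.yml sketch (names and fields are assumptions, not verbatim from any release)
    version: v1.0
    workflows:
      example:                         # hypothetical workflow name
        type: cwl                      # the workflow logic itself is written in standard CWL
        config:
          file: main.cwl               # hypothetical CWL workflow description
          settings: config.yml         # hypothetical CWL input object
        bindings:
          - step: /train               # hypothetical step defined in main.cwl
            target:
              deployment: hpc-cluster  # run this step on the deployment declared below
    deployments:
      hpc-cluster:                     # hypothetical deployment name
        type: slurm                    # other connectors target e.g. Docker, Kubernetes, or plain SSH
        config: {}                     # connector-specific options (queue, credentials, ...) omitted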

Research Projects

  • Space Center of Excellence (EC HE, HORIZON-EUROHPC-JU-2021-COE-01): Scalable Parallel and distributed Astrophysical Codes for Exascale (2023, 48 months, total cost 8M€, G.A. n. 101093441).
  • EUPILOT (EC H2020 RIA, EuroHPC-02-2020): The European PILOT (42 months, total cost 30M€, G.A. n. 101034126).
  • EUPEX (EC H2020 RIA, EuroHPC-02-2020): European Pilot for Exascale (2021, 48 months, total cost 41M€, G.A. n. 101033975).
  • TEXTAROSSA (EC H2020 RIA, EuroHPC-01-2019): Towards EXtreme scale Technologies and Accelerators for euROhpc hw/Sw Supercomputing Applications for exascale (2021, 36 months, total cost 6M€, G.A. n. 956831).
  • ACROSS (EC H2020 IA, EuroHPC-01-2019): HPC Big Data Artificial Intelligence cross-stack platform toward exascale (2021, 36 months, total cost 8M€, G.A. n. 955648).
  • DeepHealth (EC H2020 IA, ICT-2018-11): Deep-Learning and HPC to Boost Biomedical Applications for Health (2019, 36 months, total cost 14.8M€, G.A. 825111).
  • HPC4AI (Regione Piemonte, POR FESR Regione Piemonte): Turin’s centre in High-Performance Computing for Artificial Intelligence (2018, 24 months, total cost 4.5M€).

Achievements

Program Committee Memberships

  • PAW-ATM 2023 (6th Annual Parallel Applications Workshop, Alternatives To MPI+X), Denver, Colorado, USA, 2023.
  • HLPP 2023 (16th International Symposium on High-level Parallel Programming and Applications), Cluj-Napoca, Romania, 2023.
  • HPCMALL 2023 (2nd International Workshop on Malleability Techniques Applications in High-Performance Computing), Hamburg, Germany, 2023.
  • EURO-PAR 2023 (29th International European Conference on Parallel and Distributed Computing), Limassol, Cyprus, 2023.
  • PDP 2023 (31st Euromicro International Conference on Parallel, Distributed, and Network-Based Processing), Napoli, Italy, 2023.
  • HLPP 2022 (15th International Symposium on High-level Parallel Programming and Applications), Porto, Portugal, 2022.
  • EURO-PAR 2022 (28th International European Conference on Parallel and Distributed Computing), Glasgow, Scotland, United Kingdom, 2022.
  • HPCMALL 2022 (Malleability Techniques Applications in High-Performance Computing), Hamburg, Germany, 2022.
  • IPTA 2022 (11th International Conference on Image Processing Theory, Tools and Applications), Salzburg, Austria, 2022.
  • EURO-PAR 2021 (27th International European Conference on Parallel and Distributed Computing), Lisbon, Portugal, 2021.
  • HiPC 2020 (27th IEEE International Conference on High Performance Computing, Data, and Analytics), Pune, India, 2020.
  • IPTA 2019 (9th International Conference on Image Processing Theory, Tools and Applications), Istanbul, Turkey, 2019.

Publications

2023

  • G. Mittone, W. Riviera, I. Colonnelli, R. Birke, and M. Aldinucci, “Model-agnostic federated learning,” in Euro-Par 2023: Parallel Processing, Limassol, Cyprus, 2023. doi:10.48550/arXiv.2303.04906
    [BibTeX] [Abstract] [Download PDF]

    Since its debut in 2016, Federated Learning (FL) has been tied to the inner workings of Deep Neural Networks (DNNs). On the one hand, this allowed its development and widespread use as DNNs proliferated. On the other hand, it neglected all those scenarios in which using DNNs is not possible or advantageous. The fact that most current FL frameworks only allow training DNNs reinforces this problem. To address the lack of FL solutions for non-DNN-based use cases, we propose MAFL (Model-Agnostic Federated Learning). MAFL marries a model-agnostic FL algorithm, AdaBoost.F, with an open industry-grade FL framework: Intel OpenFL. MAFL is the first FL system not tied to any specific type of machine learning model, allowing exploration of FL scenarios beyond DNNs and trees. We test MAFL from multiple points of view, assessing its correctness, flexibility and scaling properties up to 64 nodes. We optimised the base software achieving a 5.5x speedup on a standard FL scenario. MAFL is compatible with x86-64, ARM-v8, Power and RISC-V.

    @inproceedings{23:mittone:mafl,
    title = {Model-Agnostic Federated Learning},
    author = {Mittone, Gianluca and Riviera, Walter and Colonnelli, Iacopo and Birke, Robert and Aldinucci, Marco},
    abstract = {Since its debut in 2016, Federated Learning (FL) has been tied to the inner workings of Deep Neural Networks (DNNs). On the one hand, this allowed its development and widespread use as DNNs proliferated. On the other hand, it neglected all those scenarios in which using DNNs is not possible or advantageous. The fact that most current FL frameworks only allow training DNNs reinforces this problem. To address the lack of FL solutions for non-DNN-based use cases, we propose MAFL (Model-Agnostic Federated Learning). MAFL marries a model-agnostic FL algorithm, AdaBoost.F, with an open industry-grade FL framework: Intel OpenFL. MAFL is the first FL system not tied to any specific type of machine learning model, allowing exploration of FL scenarios beyond DNNs and trees. We test MAFL from multiple points of view, assessing its correctness, flexibility and scaling properties up to 64 nodes. We optimised the base software achieving a 5.5x speedup on a standard FL scenario. MAFL is compatible with x86-64, ARM-v8, Power and RISC-V.},
    institution = {Computer Science Department, University of Torino},
    address = {Limassol, Cyprus},
    date-added = {2023-03-8 21:51:14 +0000},
    booktitle = {Euro-Par 2023: Parallel Processing},
    pages = {},
    publisher = {{Springer}},
    month = aug,
    year = {2023},
    isbn = {},
    keywords = {eupilot, icsc},
    doi = {10.48550/arXiv.2303.04906},
    url = {https://doi.org/10.48550/arXiv.2303.04906},
    note = {https://arxiv.org/abs/2303.04906}
    }

  • G. Mittone, N. Tonci, R. Birke, I. Colonnelli, D. Medić, A. Bartolini, R. Esposito, E. Parisi, F. Beneventi, M. Polato, M. Torquati, L. Benini, and M. Aldinucci, “Experimenting with emerging RISC-V systems for decentralised machine learning,” in 20th ACM International Conference on Computing Frontiers (CF ’23), Bologna, Italy, 2023. doi:10.1145/3587135.3592211
    [BibTeX] [Abstract] [Download PDF]

    Decentralised Machine Learning (DML) enables collaborative machine learning without centralised input data. Federated Learning (FL) and Edge Inference are examples of DML. While tools for DML (especially FL) are starting to flourish, many are not flexible and portable enough to experiment with novel systems (e.g., RISC-V), non-fully connected topologies, and asynchronous collaboration schemes. We overcome these limitations via a domain-specific language allowing to map DML schemes to an underlying middleware, i.e. the FastFlow parallel programming library. We experiment with it by generating different working DML schemes on two emerging architectures (ARM-v8, RISC-V) and the x86-64 platform. We characterise the performance and energy efficiency of the presented schemes and systems. As a byproduct, we introduce a RISC-V porting of the PyTorch framework, the first publicly available to our knowledge.

    @inproceedings{23:mittone:fl-riscv,
    title = {Experimenting with Emerging {RISC-V} Systems for Decentralised Machine Learning},
    author = {Mittone, Gianluca and Tonci, Nicolò and Birke, Robert and Colonnelli, Iacopo and Medić, Doriana and Bartolini, Andrea and Esposito, Roberto and Parisi, Emanuele and Beneventi, Francesco and Polato, Mirko and Torquati, Massimo and Benini, Luca and Aldinucci, Marco},
    abstract = {Decentralised Machine Learning (DML) enables collaborative machine learning without centralised input data. Federated Learning (FL) and Edge Inference are examples of DML. While tools for DML (especially FL) are starting to flourish, many are not flexible and portable enough to experiment with novel systems (e.g., RISC-V), non-fully connected topologies, and asynchronous collaboration schemes. We overcome these limitations via a domain-specific language allowing to map DML schemes to an underlying middleware, i.e. the FastFlow parallel programming library. We experiment with it by generating different working DML schemes on two emerging architectures (ARM-v8, RISC-V) and the x86-64 platform. We characterise the performance and energy efficiency of the presented schemes and systems. As a byproduct, we introduce a RISC-V porting of the PyTorch framework, the first publicly available to our knowledge.},
    institution = {Computer Science Department, University of Torino},
    address = {Bologna, Italy},
    date-added = {2023-03-14 15:34:00 +0000},
    booktitle = {20th {ACM} International Conference on Computing Frontiers (CF '23)},
    pages = {},
    publisher = {{ACM}},
    month = may,
    year = {2023},
    isbn = {979-8-4007-0140-5/23/05},
    keywords = {eupilot, icsc},
    doi = {10.1145/3587135.3592211},
    url = {https://hdl.handle.net/2318/1898473},
    note = {https://arxiv.org/abs/2302.07946}
    }

  • Y. Arfat, G. Mittone, I. Colonnelli, F. D’Ascenzo, R. Esposito, and M. Aldinucci, “Pooling critical datasets with federated learning,” in Proc. of 31st Euromicro Intl. Conference on Parallel, Distributed and Network-Based Processing (PDP), Napoli, Italy, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Federated Learning (FL) is becoming popular in different industrial sectors where data access is critical for security, privacy and the economic value of data itself. Unlike traditional machine learning, where all the data must be globally gathered for analysis, FL makes it possible to extract knowledge from data distributed across different organizations that can be coupled with different Machine Learning paradigms. In this work, we replicate, using Federated Learning, the analysis of a pooled dataset (with AdaBoost) that has been used to define the PRAISE score, which is today among the most accurate scores to evaluate the risk of a second acute myocardial infarction. We show that thanks to the extended-OpenFL framework, which implements AdaBoost.F, we can train a federated PRAISE model that exhibits comparable accuracy and recall as the centralised model. We achieved F1 and F2 scores which are consistently comparable to the PRAISE score study of a 16- parties federation but within an order of magnitude less time.

    @inproceedings{23:praise-fl:pdp,
    abstract = {Federated Learning (FL) is becoming popular in different industrial sectors where data access is critical for security, privacy and the economic value of data itself. Unlike traditional machine learning, where all the data must be globally gathered for analysis, FL makes it possible to extract knowledge from data distributed across different organizations that can be coupled with different Machine Learning paradigms. In this work, we replicate, using Federated Learning, the analysis of a pooled dataset (with AdaBoost) that has been used to define the PRAISE score, which is today among the most accurate scores to evaluate the risk of a second acute myocardial infarction. We show that thanks to the extended-OpenFL framework, which implements AdaBoost.F, we can train a federated PRAISE model that exhibits comparable accuracy and recall as the centralised model. We achieved F1 and F2 scores which are consistently comparable to the PRAISE score study of a 16- parties federation but within an order of magnitude less time.},
    address = {Napoli, Italy},
    author = {Yasir Arfat and Gianluca Mittone and Iacopo Colonnelli and Fabrizio D'Ascenzo and Roberto Esposito and Marco Aldinucci},
    booktitle = {Proc. of 31st Euromicro Intl. Conference on Parallel Distributed and network-based Processing (PDP)},
    date-added = {2023-02-04 18:16:36 +0100},
    date-modified = {2023-02-04 18:34:25 +0100},
    keywords = {admire, hpc4ai, c3s},
    publisher = {IEEE},
    title = {Pooling critical datasets with Federated Learning},
    url = {https://iris.unito.it/retrieve/491e22ec-3db5-4989-a063-085a199edd20/23_pdp_fl.pdf},
    year = {2023}
    }

  • S. G. Contaldo, L. Alessandri, I. Colonnelli, M. Beccuti, and M. Aldinucci, “Bringing cell subpopulation discovery on a cloud-HPC using rCASC and StreamFlow,” in Single Cell Transcriptomics: Methods and Protocols, R. A. Calogero and V. Benes, Eds., New York, NY: Springer US, 2023, p. 337–345. doi:10.1007/978-1-0716-2756-3_17
    [BibTeX] [Abstract] [Download PDF]

    The idea behind novel single-cell RNA sequencing (scRNA-seq) pipelines is to isolate single cells through microfluidic approaches and generate sequencing libraries in which the transcripts are tagged to track their cell of origin. Modern scRNA-seq platforms are capable of analyzing up to many thousands of cells in each run. Then, combined with massive high-throughput sequencing producing billions of reads, scRNA-seq allows the assessment of fundamental biological properties of cell populations and biological systems at unprecedented resolution.

    @inbook{Contaldo2023,
    abstract = {The idea behind novel single-cell RNA sequencing (scRNA-seq) pipelines is to isolate single cells through microfluidic approaches and generate sequencing libraries in which the transcripts are tagged to track their cell of origin. Modern scRNA-seq platforms are capable of analyzing up to many thousands of cells in each run. Then, combined with massive high-throughput sequencing producing billions of reads, scRNA-seq allows the assessment of fundamental biological properties of cell populations and biological systems at unprecedented resolution.},
    address = {New York, NY},
    author = {Contaldo, Sandro Gepiro and Alessandri, Luca and Colonnelli, Iacopo and Beccuti, Marco and Aldinucci, Marco},
    booktitle = {Single Cell Transcriptomics: Methods and Protocols},
    doi = {10.1007/978-1-0716-2756-3_17},
    editor = {Calogero, Raffaele Adolfo and Benes, Vladimir},
    isbn = {978-1-0716-2756-3},
    pages = {337--345},
    publisher = {Springer {US}},
    title = {Bringing Cell Subpopulation Discovery on a Cloud-{HPC} Using {rCASC} and {StreamFlow}},
    url = {https://datacloud.di.unito.it/index.php/s/KMfKo4m7GTGdZmF},
    year = 2023,
    keywords = {streamflow},
    bdsk-url-1 = {https://doi.org/10.1007/978-1-0716-2756-3_17}
    }

2022

  • I. Colonnelli, B. Casella, G. Mittone, Y. Arfat, B. Cantalupo, R. Esposito, A. R. Martinelli, D. Medić, and M. Aldinucci, “Federated learning meets HPC and cloud,” in Astrophysics and Space Science Proceedings, Catania, Italy, 2022.
    [BibTeX] [Abstract] [Download PDF]

    HPC and AI are fated to meet for several reasons. This article will discuss some of them and argue why this will happen through the set of methods and technologies that underpin cloud computing. As a paradigmatic example, we present a new federated learning system that collaboratively trains a deep learning model in different supercomputing centers. The system is based on the StreamFlow workflow manager designed for hybrid cloud-HPC infrastructures.

    @inproceedings{22:ml4astro,
    abstract = {HPC and AI are fated to meet for several reasons. This article will discuss some of them and argue why this will happen through the set of methods and technologies that underpin cloud computing. As a paradigmatic example, we present a new federated learning system that collaboratively trains a deep learning model in different supercomputing centers. The system is based on the StreamFlow workflow manager designed for hybrid cloud-HPC infrastructures.},
    address = {Catania, Italy},
    author = {Iacopo Colonnelli and Bruno Casella and Gianluca Mittone and Yasir Arfat and Barbara Cantalupo and Roberto Esposito and Alberto Riccardo Martinelli and Doriana Medi\'{c} and Marco Aldinucci},
    booktitle = {Astrophysics and Space Science Proceedings},
    keywords = {across, eupilot, streamflow},
    publisher = {Springer},
    title = {Federated Learning meets {HPC} and cloud},
    url = {https://iris.unito.it/retrieve/5631da1c-96a0-48c0-a48e-2cdf6b84841d/main.pdf},
    year = {2022}
    }

  • I. Colonnelli and M. Aldinucci, “Hybrid workflows for large-scale scientific applications,” in Sixth EAGE High Performance Computing Workshop, Milano, Italy, 2022, p. 1–5. doi:10.3997/2214-4609.2022615029
    [BibTeX] [Abstract] [Download PDF]

    Large-scale scientific applications are facing an irreversible transition from monolithic, high-performance oriented codes to modular and polyglot deployments of specialised (micro-)services. The reasons behind this transition are many: coupling of standard solvers with Deep Learning techniques, offloading of data analysis and visualisation to Cloud, and the advent of specialised hardware accelerators. Topology-aware Workflow Management Systems (WMSs) play a crucial role. In particular, topology-awareness allows an explicit mapping of workflow steps onto heterogeneous locations, allowing automated executions on top of hybrid architectures (e.g., cloud+HPC or classical+quantum). Plus, topology-aware WMSs can offer nonfunctional requirements OOTB, e.g. components’ life-cycle orchestration, secure and efficient data transfers, fault tolerance, and cross-cluster execution of urgent workloads. Augmenting interactive Jupyter Notebooks with distributed workflow capabilities allows domain experts to prototype and scale applications using the same technological stack, while relying on a feature-rich and user-friendly web interface. This abstract will showcase how these general methodologies can be applied to a typical geoscience simulation pipeline based on the Full Wavefront Inversion (FWI) technique. In particular, a prototypical Jupyter Notebook will be executed interactively on Cloud. Preliminary data analyses and post-processing will be executed locally, while the computationally demanding optimisation loop will be scheduled on a remote HPC cluster.

    @inproceedings{22:eage-hpc-workshop,
    author = {Iacopo Colonnelli and
    Marco Aldinucci},
    abstract = {Large-scale scientific applications are facing an irreversible transition from monolithic, high-performance oriented codes to modular and polyglot deployments of specialised (micro-)services. The reasons behind this transition are many: coupling of standard solvers with Deep Learning techniques, offloading of data analysis and visualisation to Cloud, and the advent of specialised hardware accelerators. Topology-aware Workflow Management Systems (WMSs) play a crucial role. In particular, topology-awareness allows an explicit mapping of workflow steps onto heterogeneous locations, allowing automated executions on top of hybrid architectures (e.g., cloud+HPC or classical+quantum). Plus, topology-aware WMSs can offer nonfunctional requirements OOTB, e.g. components’ life-cycle orchestration, secure and efficient data transfers, fault tolerance, and cross-cluster execution of urgent workloads. Augmenting interactive Jupyter Notebooks with distributed workflow capabilities allows domain experts to prototype and scale applications using the same technological stack, while relying on a feature-rich and user-friendly web interface. This abstract will showcase how these general methodologies can be applied to a typical geoscience simulation pipeline based on the Full Wavefront Inversion (FWI) technique. In particular, a prototypical Jupyter Notebook will be executed interactively on Cloud. Preliminary data analyses and post-processing will be executed locally, while the computationally demanding optimisation loop will be scheduled on a remote HPC cluster.},
    title = {Hybrid Workflows For Large-Scale Scientific Applications},
    address = {Milano, Italy},
    booktitle = {Sixth {EAGE} High Performance Computing Workshop},
    pages = {1--5},
    publisher = {{European Association of Geoscientists \& Engineers }},
    month = sep,
    year = {2022},
    issn = {2214-4609},
    keywords = {across, eupex},
    doi = {10.3997/2214-4609.2022615029},
    url = {https://iris.unito.it/retrieve/d79ddabb-f9d7-4a55-9f84-1528b1533ba3/Extended_Abstract.pdf}
    }

  • G. Agosta, M. Aldinucci, C. Alvarez, R. Ammendola, Y. Arfat, O. Beaumont, M. Bernaschi, A. Biagioni, T. Boccali, B. Bramas, C. Brandolese, B. Cantalupo, M. Carrozzo, D. Cattaneo, A. Celestini, M. Celino, I. Colonnelli, P. Cretaro, P. D’Ambra, M. Danelutto, R. Esposito, L. Eyraud-Dubois, A. Filgueras, W. Fornaciari, O. Frezza, A. Galimberti, F. Giacomini, B. Goglin, D. Gregori, A. Guermouche, F. Iannone, M. Kulczewski, F. Lo Cicero, A. Lonardo, A. R. Martinelli, M. Martinelli, X. Martorell, G. Massari, S. Montangero, G. Mittone, R. Namyst, A. Oleksiak, P. Palazzari, P. S. Paolucci, F. Reghenzani, C. Rossi, S. Saponara, F. Simula, F. Terraneo, S. Thibault, M. Torquati, M. Turisini, P. Vicini, M. Vidal, D. Zoni, and G. Zummo, “Towards EXtreme scale technologies and accelerators for euROhpc hw/Sw supercomputing applications for exascale: the TEXTAROSSA approach,” Microprocessors and Microsystems, vol. 95, p. 104679, 2022. doi:10.1016/j.micpro.2022.104679
    [BibTeX] [Abstract]

    In the near future, Exascale systems will need to bridge three technology gaps to achieve high performance while remaining under tight power constraints: energy efficiency and thermal control; extreme computation efficiency via HW acceleration and new arithmetic; methods and tools for seamless integration of reconfigurable accelerators in heterogeneous HPC multi-node platforms. TEXTAROSSA addresses these gaps through a co-design approach to heterogeneous HPC solutions, supported by the integration and extension of HW and SW IPs, programming models, and tools derived from European research.

    @article{textarossa2022micpro:,
    abstract = {In the near future, Exascale systems will need to bridge three technology gaps to achieve high performance while remaining under tight power constraints: energy efficiency and thermal control; extreme computation efficiency via HW acceleration and new arithmetic; methods and tools for seamless integration of reconfigurable accelerators in heterogeneous HPC multi-node platforms. TEXTAROSSA addresses these gaps through a co-design approach to heterogeneous HPC solutions, supported by the integration and extension of HW and SW IPs, programming models, and tools derived from European research.},
    author = {Giovanni Agosta and Marco Aldinucci and Carlos Alvarez and Roberto Ammendola and Yasir Arfat and Olivier Beaumont and Massimo Bernaschi and Andrea Biagioni and Tommaso Boccali and Berenger Bramas and Carlo Brandolese and Barbara Cantalupo and Mauro Carrozzo and Daniele Cattaneo and Alessandro Celestini and Massimo Celino and Iacopo Colonnelli and Paolo Cretaro and Pasqua D'Ambra and Marco Danelutto and Roberto Esposito and Lionel Eyraud-Dubois and Antonio Filgueras and William Fornaciari and Ottorino Frezza and Andrea Galimberti and Francesco Giacomini and Brice Goglin and Daniele Gregori and Abdou Guermouche and Francesco Iannone and Michal Kulczewski and Francesca {Lo Cicero} and Alessandro Lonardo and Alberto R. Martinelli and Michele Martinelli and Xavier Martorell and Giuseppe Massari and Simone Montangero and Gianluca Mittone and Raymond Namyst and Ariel Oleksiak and Paolo Palazzari and Pier Stanislao Paolucci and Federico Reghenzani and Cristian Rossi and Sergio Saponara and Francesco Simula and Federico Terraneo and Samuel Thibault and Massimo Torquati and Matteo Turisini and Piero Vicini and Miquel Vidal and Davide Zoni and Giuseppe Zummo},
    doi = {10.1016/j.micpro.2022.104679},
    issn = {0141-9331},
    journal = {Microprocessors and Microsystems},
    keywords = {textarossa},
    pages = {104679},
    title = {Towards EXtreme scale technologies and accelerators for euROhpc hw/Sw supercomputing applications for exascale: The TEXTAROSSA approach},
    volume = {95},
    year = {2022},
    bdsk-url-1 = {https://doi.org/10.1016/j.micpro.2022.104679}
    }

  • M. Aldinucci, D. Atienza, F. Bolelli, M. Caballero, I. Colonnelli, J. Flich, J. A. Gómez, D. González, C. Grana, M. Grangetto, S. Leo, P. López, D. Oniga, R. Paredes, L. Pireddu, E. Quiñones, T. Silva, E. Tartaglione, and M. Zapater, “The DeepHealth toolkit: a key European free and open-source software for deep learning and computer vision ready to exploit heterogeneous HPC and Cloud architectures,” in Technologies and Applications for Big Data Value, E. Curry, S. Auer, A. J. Berre, A. Metzger, M. S. Perez, and S. Zillner, Eds., Cham: Springer International Publishing, 2022, p. 183–202. doi:10.1007/978-3-030-78307-5_9
    [BibTeX] [Abstract] [Download PDF]

    At the present time, we are immersed in the convergence between Big Data, High-Performance Computing and Artificial Intelligence. Technological progress in these three areas has accelerated in recent years, forcing different players like software companies and stakeholders to move quickly. The European Union is dedicating a lot of resources to maintain its relevant position in this scenario, funding projects to implement large-scale pilot testbeds that combine the latest advances in Artificial Intelligence, High-Performance Computing, Cloud and Big Data technologies. The DeepHealth project is an example focused on the health sector whose main outcome is the DeepHealth toolkit, a European unified framework that offers deep learning and computer vision capabilities, completely adapted to exploit underlying heterogeneous High-Performance Computing, Big Data and cloud architectures, and ready to be integrated into any software platform to facilitate the development and deployment of new applications for specific problems in any sector. This toolkit is intended to be one of the European contributions to the field of AI. This chapter introduces the toolkit with its main components and complementary tools, providing a clear view to facilitate and encourage its adoption and wide use by the European community of developers of AI-based solutions and data scientists working in the healthcare sector and others.

    @incollection{22:TABDV,
    abstract = {At the present time, we are immersed in the convergence between Big Data, High-Performance Computing and Artificial Intelligence. Technological progress in these three areas has accelerated in recent years, forcing different players like software companies and stakeholders to move quickly. The European Union is dedicating a lot of resources to maintain its relevant position in this scenario, funding projects to implement large-scale pilot testbeds that combine the latest advances in Artificial Intelligence, High-Performance Computing, Cloud and Big Data technologies. The DeepHealth project is an example focused on the health sector whose main outcome is the DeepHealth toolkit, a European unified framework that offers deep learning and computer vision capabilities, completely adapted to exploit underlying heterogeneous High-Performance Computing, Big Data and cloud architectures, and ready to be integrated into any software platform to facilitate the development and deployment of new applications for specific problems in any sector. This toolkit is intended to be one of the European contributions to the field of AI. This chapter introduces the toolkit with its main components and complementary tools, providing a clear view to facilitate and encourage its adoption and wide use by the European community of developers of AI-based solutions and data scientists working in the healthcare sector and others.},
    address = {Cham},
    author = {Marco Aldinucci and David Atienza and Federico Bolelli and M\'{o}nica Caballero and Iacopo Colonnelli and Jos\'{e} Flich and Jon Ander G\'{o}mez and David Gonz\'{a}lez and Costantino Grana and Marco Grangetto and Simone Leo and Pedro L\'{o}pez and Dana Oniga and Roberto Paredes and Luca Pireddu and Eduardo Qui\~{n}ones and Tatiana Silva and Enzo Tartaglione and Marina Zapater},
    booktitle = {Technologies and Applications for Big Data Value},
    chapter = {9},
    doi = {10.1007/978-3-030-78307-5_9},
    editor = {Edward Curry and S\"{o}ren Auer and Arne J. Berre and Andreas Metzger and Maria S. Perez and Sonja Zillner},
    isbn = {978-3-030-78307-5},
    keywords = {deephealth, streamflow},
    pages = {183--202},
    publisher = {Springer International Publishing},
    title = {The {DeepHealth} Toolkit: A Key European Free and Open-Source Software for Deep Learning and Computer Vision Ready to Exploit Heterogeneous {HPC} and {C}loud Architectures},
    url = {https://link.springer.com/content/pdf/10.1007/978-3-030-78307-5_9.pdf},
    year = {2022},
    bdsk-url-1 = {https://link.springer.com/content/pdf/10.1007/978-3-030-78307-5_9.pdf},
    bdsk-url-2 = {https://doi.org/10.1007/978-3-030-78307-5_9}
    }

  • E. Quiñones, J. Perales, J. Ejarque, A. Badouh, S. Marco, F. Auzanneau, F. Galea, D. González, J. R. Hervás, T. Silva, I. Colonnelli, B. Cantalupo, M. Aldinucci, E. Tartaglione, R. Tornero, J. Flich, J. M. Martinez, D. Rodriguez, I. Catalán, J. Garcia, and C. Hernández, “The DeepHealth HPC infrastructure: leveraging heterogenous HPC and cloud computing infrastructures for IA-based medical solutions,” in HPC, Big Data, and AI Convergence Towards Exascale: Challenge and Vision, O. Terzo and J. Martinovič, Eds., Boca Raton, Florida: CRC Press, 2022, p. 191–216. doi:10.1201/9781003176664
    [BibTeX] [Abstract] [Download PDF]

    This chapter presents the DeepHealth HPC toolkit for an efficient execution of deep learning (DL) medical application into HPC and cloud-computing infrastructures, featuring many-core, GPU, and FPGA acceleration devices. The toolkit offers to the European Computer Vision Library and the European Distributed Deep Learning Library (EDDL), developed in the DeepHealth project as well, the mechanisms to distribute and parallelize DL operations on HPC and cloud infrastructures in a fully transparent way. The toolkit implements workflow managers used to orchestrate HPC workloads for an efficient parallelization of EDDL training operations on HPC and cloud infrastructures, and includes the parallel programming models for an efficient execution EDDL inference and training operations on many-core, GPUs and FPGAs acceleration devices.

    @incollection{22:deephealth:HPCbook,
    abstract = {This chapter presents the DeepHealth HPC toolkit for an efficient execution of deep learning (DL) medical application into HPC and cloud-computing infrastructures, featuring many-core, GPU, and FPGA acceleration devices. The toolkit offers to the European Computer Vision Library and the European Distributed Deep Learning Library (EDDL), developed in the DeepHealth project as well, the mechanisms to distribute and parallelize DL operations on HPC and cloud infrastructures in a fully transparent way. The toolkit implements workflow managers used to orchestrate HPC workloads for an efficient parallelization of EDDL training operations on HPC and cloud infrastructures, and includes the parallel programming models for an efficient execution EDDL inference and training operations on many-core, GPUs and FPGAs acceleration devices.},
    address = {Boca Raton, Florida},
    author = {Eduardo Qui\~{n}ones and Jesus Perales and Jorge Ejarque and Asaf Badouh and Santiago Marco and Fabrice Auzanneau and Fran\c{c}ois Galea and David Gonz\'{a}lez and Jos\'{e} Ram\'{o}n Herv\'{a}s and Tatiana Silva and Iacopo Colonnelli and Barbara Cantalupo and Marco Aldinucci and Enzo Tartaglione and Rafael Tornero and Jos\'{e} Flich and Jose Maria Martinez and David Rodriguez and Izan Catal\'{a}n and Jorge Garcia and Carles Hern\'{a}ndez},
    booktitle = {{HPC}, Big Data, and {AI} Convergence Towards Exascale: Challenge and Vision},
    chapter = {10},
    doi = {10.1201/9781003176664},
    editor = {Olivier Terzo and Jan Martinovi\v{c}},
    isbn = {978-1-0320-0984-1},
    keywords = {deephealth, streamflow},
    pages = {191--216},
    publisher = {{CRC} Press},
    title = {The {DeepHealth} {HPC} Infrastructure: Leveraging Heterogenous {HPC} and Cloud Computing Infrastructures for {IA}-based Medical Solutions},
    url = {https://iris.unito.it/retrieve/handle/2318/1832050/912413/Preprint.pdf},
    year = {2022},
    bdsk-url-1 = {https://iris.unito.it/retrieve/handle/2318/1832050/912413/Preprint.pdf},
    bdsk-url-2 = {https://doi.org/10.1201/9781003176664}
    }

  • I. Colonnelli, M. Aldinucci, B. Cantalupo, L. Padovani, S. Rabellino, C. Spampinato, R. Morelli, R. Di Carlo, N. Magini, and C. Cavazzoni, “Distributed workflows with Jupyter,” Future Generation Computer Systems, vol. 128, pp. 282-298, 2022. doi:10.1016/j.future.2021.10.007
    [BibTeX] [Abstract] [Download PDF]

    The designers of a new coordination interface enacting complex workflows have to tackle a dichotomy: choosing a language-independent or language-dependent approach. Language-independent approaches decouple workflow models from the host code’s business logic and advocate portability. Language-dependent approaches foster flexibility and performance by adopting the same host language for business and coordination code. Jupyter Notebooks, with their capability to describe both imperative and declarative code in a unique format, allow taking the best of the two approaches, maintaining a clear separation between application and coordination layers but still providing a unified interface to both aspects. We advocate the Jupyter Notebooks’ potential to express complex distributed workflows, identifying the general requirements for a Jupyter-based Workflow Management System (WMS) and introducing a proof-of-concept portable implementation working on hybrid Cloud-HPC infrastructures. As a byproduct, we extended the vanilla IPython kernel with workflow-based parallel and distributed execution capabilities. The proposed Jupyter-workflow (Jw) system is evaluated on common scenarios for High Performance Computing (HPC) and Cloud, showing its potential in lowering the barriers between prototypical Notebooks and production-ready implementations.

    @article{21:FGCS:jupyflow,
    abstract = {The designers of a new coordination interface enacting complex workflows have to tackle a dichotomy: choosing a language-independent or language-dependent approach. Language-independent approaches decouple workflow models from the host code's business logic and advocate portability. Language-dependent approaches foster flexibility and performance by adopting the same host language for business and coordination code. Jupyter Notebooks, with their capability to describe both imperative and declarative code in a unique format, allow taking the best of the two approaches, maintaining a clear separation between application and coordination layers but still providing a unified interface to both aspects. We advocate the Jupyter Notebooks' potential to express complex distributed workflows, identifying the general requirements for a Jupyter-based Workflow Management System (WMS) and introducing a proof-of-concept portable implementation working on hybrid Cloud-HPC infrastructures. As a byproduct, we extended the vanilla IPython kernel with workflow-based parallel and distributed execution capabilities. The proposed Jupyter-workflow (Jw) system is evaluated on common scenarios for High Performance Computing (HPC) and Cloud, showing its potential in lowering the barriers between prototypical Notebooks and production-ready implementations.},
    author = {Iacopo Colonnelli and Marco Aldinucci and Barbara Cantalupo and Luca Padovani and Sergio Rabellino and Concetto Spampinato and Roberto Morelli and Rosario {Di Carlo} and Nicol{\`o} Magini and Carlo Cavazzoni},
    doi = {10.1016/j.future.2021.10.007},
    issn = {0167-739X},
    journal = {Future Generation Computer Systems},
    keywords = {streamflow, jupyter-workflow},
    pages = {282-298},
    title = {Distributed workflows with {Jupyter}},
    url = {https://www.sciencedirect.com/science/article/pii/S0167739X21003976},
    volume = {128},
    year = {2022},
    bdsk-url-1 = {https://www.sciencedirect.com/science/article/pii/S0167739X21003976},
    bdsk-url-2 = {https://doi.org/10.1016/j.future.2021.10.007}
    }

2021

  • G. Agosta, W. Fornaciari, A. Galimberti, G. Massari, F. Reghenzani, F. Terraneo, D. Zoni, C. Brandolese, M. Celino, F. Iannone, P. Palazzari, G. Zummo, M. Bernaschi, P. D’Ambra, S. Saponara, M. Danelutto, M. Torquati, M. Aldinucci, Y. Arfat, B. Cantalupo, I. Colonnelli, R. Esposito, A. R. Martinelli, G. Mittone, O. Beaumont, B. Bramas, L. Eyraud-Dubois, B. Goglin, A. Guermouche, R. Namyst, S. Thibault, A. Filgueras, M. Vidal, C. Alvarez, X. Martorell, A. Oleksiak, M. Kulczewski, A. Lonardo, P. Vicini, F. Lo Cicero, F. Simula, A. Biagioni, P. Cretaro, O. Frezza, P. S. Paolucci, M. Turisini, F. Giacomini, T. Boccali, S. Montangero, and R. Ammendola, “TEXTAROSSA: towards EXtreme scale technologies and accelerators for euROhpc hw/Sw supercomputing applications for exascale,” in Proc. of the 24th Euromicro Conference on Digital System Design (DSD), Palermo, Italy, 2021. doi:10.1109/DSD53832.2021.00051
    [BibTeX] [Abstract]

    To achieve high performance and high energy efficiency on near-future exascale computing systems, three key technology gaps needs to be bridged. These gaps include: energy efficiency and thermal control; extreme computation efficiency via HW acceleration and new arithmetics; methods and tools for seamless integration of reconfigurable accelerators in heterogeneous HPC multi-node platforms. TEXTAROSSA aims at tackling this gap through a co-design approach to heterogeneous HPC solutions, supported by the integration and extension of HW and SW IPs, programming models and tools derived from European research.

    @inproceedings{21:DSD:textarossa,
    abstract = {To achieve high performance and high energy efficiency on near-future exascale computing systems, three key technology gaps needs to be bridged. These gaps include: energy efficiency and thermal control; extreme computation efficiency via HW acceleration and new arithmetics; methods and tools for seamless integration of reconfigurable accelerators in heterogeneous HPC multi-node platforms. TEXTAROSSA aims at tackling this gap through a co-design approach to heterogeneous HPC solutions, supported by the integration and extension of HW and SW IPs, programming models and tools derived from European research.},
    address = {Palermo, Italy},
    author = {Giovanni Agosta and William Fornaciari and Andrea Galimberti and Giuseppe Massari and Federico Reghenzani and Federico Terraneo and Davide Zoni and Carlo Brandolese and Massimo Celino and Francesco Iannone and Paolo Palazzari and Giuseppe Zummo and Massimo Bernaschi and Pasqua D'Ambra and Sergio Saponara and Marco Danelutto and Massimo Torquati and Marco Aldinucci and Yasir Arfat and Barbara Cantalupo and Iacopo Colonnelli and Roberto Esposito and Alberto Riccardo Martinelli and Gianluca Mittone and Olivier Beaumont and Berenger Bramas and Lionel Eyraud-Dubois and Brice Goglin and Abdou Guermouche and Raymond Namyst and Samuel Thibault and Antonio Filgueras and Miquel Vidal and Carlos Alvarez and Xavier Martorell and Ariel Oleksiak and Michal Kulczewski and Alessandro Lonardo and Piero Vicini and Francesco Lo Cicero and Francesco Simula and Andrea Biagioni and Paolo Cretaro and Ottorino Frezza and Pier Stanislao Paolucci and Matteo Turisini and Francesco Giacomini and Tommaso Boccali and Simone Montangero and Roberto Ammendola},
    booktitle = {Proc. of the 24th Euromicro Conference on Digital System Design ({DSD})},
    date-added = {2021-09-04 12:07:42 +0200},
    date-modified = {2021-09-04 12:23:41 +0200},
    doi = {10.1109/DSD53832.2021.00051},
    keywords = {textarossa, streamflow},
    month = aug,
    publisher = {IEEE},
    title = {{TEXTAROSSA}: Towards EXtreme scale Technologies and Accelerators for euROhpc hw/Sw Supercomputing Applications for exascale},
    year = {2021},
    bdsk-url-1 = {https://doi.org/10.1109/DSD53832.2021.00051}
    }

  • M. Aldinucci, V. Cesare, I. Colonnelli, A. R. Martinelli, G. Mittone, and B. Cantalupo, “Practical parallelization of a Laplace solver with MPI,” in ENEA CRESCO in the fight against COVID-19, 2021, p. 21–24.
    [BibTeX] [Abstract]

    This work exposes a practical methodology for the semi-automatic parallelization of existing code. We show how a scientific sequential code can be parallelized through our approach. The obtained parallel code is only slightly different from the starting sequential one, providing an example of how little re-designing our methodology involves. The performance of the parallelized code, executed on the CRESCO6 cluster, is then exposed and discussed. We also believe in the educational value of this approach and suggest its use as a teaching device for students.

    @inproceedings{21:laplace:enea,
    abstract = {This work exposes a practical methodology for the semi-automatic parallelization of existing code. We show how a scientific sequential code can be parallelized through our approach. The obtained parallel code is only slightly different from the starting sequential one, providing an example of how little re-designing our methodology involves. The performance of the parallelized code, executed on the CRESCO6 cluster, is then exposed and discussed. We also believe in the educational value of this approach and suggest its use as a teaching device for students.},
    author = {Aldinucci, Marco and Cesare, Valentina and Colonnelli, Iacopo and Martinelli, Alberto Riccardo and Mittone, Gianluca and Cantalupo, Barbara},
    booktitle = {ENEA CRESCO in the fight against COVID-19},
    editor = {Francesco Iannone},
    keywords = {hpc4ai},
    pages = {21--24},
    publisher = {ENEA},
    title = {Practical Parallelization of a {Laplace} Solver with {MPI}},
    year = {2021}
    }

  • I. Colonnelli, B. Cantalupo, C. Spampinato, M. Pennisi, and M. Aldinucci, “Bringing AI pipelines onto cloud-HPC: setting a baseline for accuracy of COVID-19 diagnosis,” in ENEA CRESCO in the fight against COVID-19, 2021. doi:10.5281/zenodo.5151511
    [BibTeX] [Abstract] [Download PDF]

    HPC is an enabling platform for AI. The introduction of AI workloads in the HPC applications basket has non-trivial consequences both on the way of designing AI applications and on the way of providing HPC computing. This is the leitmotif of the convergence between HPC and AI. The formalized definition of AI pipelines is one of the milestones of HPC-AI convergence. If well conducted, it allows, on the one hand, to obtain portable and scalable applications. On the other hand, it is crucial for the reproducibility of scientific pipelines. In this work, we advocate the StreamFlow Workflow Management System as a crucial ingredient to define a parametric pipeline, called “CLAIRE COVID-19 Universal Pipeline”, which is able to explore the optimization space of methods to classify COVID-19 lung lesions from CT scans, compare them for accuracy, and therefore set a performance baseline. The universal pipeline automatizes the training of many different Deep Neural Networks (DNNs) and many different hyperparameters. It, therefore, requires a massive computing power, which is found in traditional HPC infrastructure thanks to the portability-by-design of pipelines designed with StreamFlow. Using the universal pipeline, we identified a DNN reaching over 90% accuracy in detecting COVID-19 lesions in CT scans.

    @inproceedings{21:covi:enea,
    abstract = {HPC is an enabling platform for AI. The introduction of AI workloads in the HPC applications basket has non-trivial consequences both on the way of designing AI applications and on the way of providing HPC computing. This is the leitmotif of the convergence between HPC and AI. The formalized definition of AI pipelines is one of the milestones of HPC-AI convergence. If well conducted, it allows, on the one hand, to obtain portable and scalable applications. On the other hand, it is crucial for the reproducibility of scientific pipelines. In this work, we advocate the StreamFlow Workflow Management System as a crucial ingredient to define a parametric pipeline, called ``CLAIRE COVID-19 Universal Pipeline'', which is able to explore the optimization space of methods to classify COVID-19 lung lesions from CT scans, compare them for accuracy, and therefore set a performance baseline. The universal pipeline automatizes the training of many different Deep Neural Networks (DNNs) and many different hyperparameters. It, therefore, requires a massive computing power, which is found in traditional HPC infrastructure thanks to the portability-by-design of pipelines designed with StreamFlow. Using the universal pipeline, we identified a DNN reaching over 90\% accuracy in detecting COVID-19 lesions in CT scans.},
    author = {Colonnelli, Iacopo and Cantalupo, Barbara and Spampinato, Concetto and Pennisi, Matteo and Aldinucci, Marco},
    booktitle = {ENEA CRESCO in the fight against COVID-19},
    doi = {10.5281/zenodo.5151511},
    editor = {Francesco Iannone},
    keywords = {streamflow},
    publisher = {ENEA},
    title = {Bringing AI pipelines onto cloud-HPC: setting a baseline for accuracy of COVID-19 diagnosis},
    url = {https://iris.unito.it/retrieve/handle/2318/1796029/779853/21_AI-pipelines_ENEA-COVID19.pdf},
    year = {2021},
    bdsk-url-1 = {https://iris.unito.it/retrieve/handle/2318/1796029/779853/21_AI-pipelines_ENEA-COVID19.pdf},
    bdsk-url-2 = {https://doi.org/10.5281/zenodo.5151511}
    }

  • O. De Filippo, J. Kang, F. Bruno, J. Han, A. Saglietto, H. Yang, G. Patti, K. Park, R. Parma, H. Kim, L. De Luca, H. Gwon, M. Iannaccone, W. J. Chun, G. Smolka, S. Hur, E. Cerrato, S. H. Han, C. di Mario, Y. B. Song, J. Escaned, K. H. Choi, G. Helft, J. Doh, A. Truffa Giachet, S. Hong, S. Muscoli, C. Nam, G. Gallone, D. Capodanno, D. Trabattoni, Y. Imori, V. Dusi, B. Cortese, A. Montefusco, F. Conrotto, I. Colonnelli, I. Sheiban, G. M. de Ferrari, B. Koo, and F. D’Ascenzo, “Benefit of extended dual antiplatelet therapy duration in acute coronary syndrome patients treated with drug eluting stents for coronary bifurcation lesions (from the BIFURCAT registry),” The American Journal of Cardiology, 2021. doi:10.1016/j.amjcard.2021.07.005
    [BibTeX] [Abstract] [Download PDF]

    Optimal dual antiplatelet therapy (DAPT) duration for patients undergoing percutaneous coronary intervention (PCI) for coronary bifurcations is an unmet issue. The BIFURCAT registry was obtained by merging two registries on coronary bifurcations. Three groups were compared in a two-by-two fashion: short-term DAPT (≤ 6 months), intermediate-term DAPT (6-12 months) and extended DAPT (>12 months). Major adverse cardiac events (MACE) (a composite of all-cause death, myocardial infarction (MI), target-lesion revascularization and stent thrombosis) were the primary endpoint. Single components of MACE were the secondary endpoints. Events were appraised according to the clinical presentation: chronic coronary syndrome (CCS) versus acute coronary syndrome (ACS). 5537 patients (3231 ACS, 2306 CCS) were included. After a median follow-up of 2.1 years (IQR 0.9-2.2), extended DAPT was associated with a lower incidence of MACE compared with intermediate-term DAPT (2.8% versus 3.4%, adjusted HR 0.23 [0.1-0.54], p <0.001), driven by a reduction of all-cause death in the ACS cohort. In the CCS cohort, an extended DAPT strategy was not associated with a reduced risk of MACE. In conclusion, among real-world patients receiving PCI for coronary bifurcation, an extended DAPT strategy was associated with a reduction of MACE in ACS but not in CCS patients.

    @article{21:ajc:bifurcat,
    abstract = {Optimal dual antiplatelet therapy (DAPT) duration for patients undergoing percutaneous coronary intervention (PCI) for coronary bifurcations is an unmet issue. The BIFURCAT registry was obtained by merging two registries on coronary bifurcations. Three groups were compared in a two-by-two fashion: short-term DAPT (≤ 6 months), intermediate-term DAPT (6-12 months) and extended DAPT (>12 months). Major adverse cardiac events (MACE) (a composite of all-cause death, myocardial infarction (MI), target-lesion revascularization and stent thrombosis) were the primary endpoint. Single components of MACE were the secondary endpoints. Events were appraised according to the clinical presentation: chronic coronary syndrome (CCS) versus acute coronary syndrome (ACS). 5537 patients (3231 ACS, 2306 CCS) were included. After a median follow-up of 2.1 years (IQR 0.9-2.2), extended DAPT was associated with a lower incidence of MACE compared with intermediate-term DAPT (2.8\% versus 3.4\%, adjusted HR 0.23 [0.1-0.54], p <0.001), driven by a reduction of all-cause death in the ACS cohort. In the CCS cohort, an extended DAPT strategy was not associated with a reduced risk of MACE. In conclusion, among real-world patients receiving PCI for coronary bifurcation, an extended DAPT strategy was associated with a reduction of MACE in ACS but not in CCS patients.},
    author = {Ovidio De Filippo and Jeehoon Kang and Francesco Bruno and Jung-Kyu Han and Andrea Saglietto and Han-Mo Yang and Giuseppe Patti and Kyung-Woo Park and Radoslaw Parma and Hyo-Soo Kim and Leonardo De Luca and Hyeon-Cheol Gwon and Mario Iannaccone and Woo Jung Chun and Grzegorz Smolka and Seung-Ho Hur and Enrico Cerrato and Seung Hwan Han and Carlo di Mario and Young Bin Song and Javier Escaned and Ki Hong Choi and Gerard Helft and Joon-Hyung Doh and Alessandra Truffa Giachet and Soon-Jun Hong and Saverio Muscoli and Chang-Wook Nam and Guglielmo Gallone and Davide Capodanno and Daniela Trabattoni and Yoichi Imori and Veronica Dusi and Bernardo Cortese and Antonio Montefusco and Federico Conrotto and Iacopo Colonnelli and Imad Sheiban and Gaetano Maria de Ferrari and Bon-Kwon Koo and Fabrizio D'Ascenzo},
    doi = {10.1016/j.amjcard.2021.07.005},
    issn = {0002-9149},
    journal = {The American Journal of Cardiology},
    title = {Benefit of Extended Dual Antiplatelet Therapy Duration in Acute Coronary Syndrome Patients Treated with Drug Eluting Stents for Coronary Bifurcation Lesions (from the {BIFURCAT} Registry)},
    url = {https://www.sciencedirect.com/science/article/pii/S0002914921006354},
    year = {2021},
    bdsk-url-1 = {https://www.sciencedirect.com/science/article/pii/S0002914921006354},
    bdsk-url-2 = {https://doi.org/10.1016/j.amjcard.2021.07.005}
    }

  • M. Aldinucci, V. Cesare, I. Colonnelli, A. R. Martinelli, G. Mittone, B. Cantalupo, C. Cavazzoni, and M. Drocco, “Practical parallelization of scientific applications with OpenMP, OpenACC and MPI,” Journal of Parallel and Distributed Computing, vol. 157, pp. 13-29, 2021. doi:10.1016/j.jpdc.2021.05.017
    [BibTeX] [Abstract] [Download PDF]

    This work aims at distilling a systematic methodology to modernize existing sequential scientific codes with a little re-designing effort, turning an old codebase into modern code, i.e., parallel and robust code. We propose a semi-automatic methodology to parallelize scientific applications designed with a purely sequential programming mindset, possibly using global variables, aliasing, random number generators, and stateful functions. We demonstrate that the same methodology works for the parallelization in the shared memory model (via OpenMP), message passing model (via MPI), and General Purpose Computing on GPU model (via OpenACC). The method is demonstrated parallelizing four real-world sequential codes in the domain of physics and material science. The methodology itself has been distilled in collaboration with MSc students of the Parallel Computing course at the University of Torino, that applied it for the first time to the project works that they presented for the final exam of the course. Every year the course hosts some special lectures from industry representatives, who present how they use parallel computing and offer codes to be parallelized.

    @article{21:jpdc:loop,
    abstract = {This work aims at distilling a systematic methodology to modernize existing sequential scientific codes with a little re-designing effort, turning an old codebase into \emph{modern} code, i.e., parallel and robust code. We propose a semi-automatic methodology to parallelize scientific applications designed with a purely sequential programming mindset, possibly using global variables, aliasing, random number generators, and stateful functions. We demonstrate that the same methodology works for the parallelization in the shared memory model (via OpenMP), message passing model (via MPI), and General Purpose Computing on GPU model (via OpenACC). The method is demonstrated parallelizing four real-world sequential codes in the domain of physics and material science. The methodology itself has been distilled in collaboration with MSc students of the Parallel Computing course at the University of Torino, that applied it for the first time to the project works that they presented for the final exam of the course. Every year the course hosts some special lectures from industry representatives, who present how they use parallel computing and offer codes to be parallelized. },
    author = {Aldinucci, Marco and Cesare, Valentina and Colonnelli, Iacopo and Martinelli, Alberto Riccardo and Mittone, Gianluca and Cantalupo, Barbara and Cavazzoni, Carlo and Drocco, Maurizio},
    date-added = {2021-06-10 22:05:54 +0200},
    date-modified = {2021-06-10 22:30:05 +0200},
    doi = {10.1016/j.jpdc.2021.05.017},
    journal = {Journal of Parallel and Distributed Computing},
    keywords = {saperi},
    pages = {13-29},
    title = {Practical Parallelization of Scientific Applications with {OpenMP, OpenACC and MPI}},
    url = {https://iris.unito.it/retrieve/handle/2318/1792557/770851/Practical_Parallelization_JPDC_preprint.pdf},
    volume = {157},
    year = {2021},
    bdsk-url-1 = {https://iris.unito.it/retrieve/handle/2318/1792557/770851/Practical_Parallelization_JPDC_preprint.pdf},
    bdsk-url-2 = {https://doi.org/10.1016/j.jpdc.2021.05.017}
    }

  • I. Colonnelli, B. Cantalupo, R. Esposito, M. Pennisi, C. Spampinato, and M. Aldinucci, “HPC Application Cloudification: The StreamFlow Toolkit,” in 12th Workshop on Parallel Programming and Run-Time Management Techniques for Many-core Architectures and 10th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2021), Dagstuhl, Germany, 2021, p. 5:1–5:13. doi:10.4230/OASIcs.PARMA-DITAM.2021.5
    [BibTeX] [Abstract] [Download PDF]

    Finding an effective way to improve accessibility to High-Performance Computing facilities, still anchored to SSH-based remote shells and queue-based job submission mechanisms, is an open problem in computer science. This work advocates a cloudification of HPC applications through a cluster-as-accelerator pattern, where computationally demanding portions of the main execution flow hosted on a Cloud infrastructure can be offloaded to HPC environments to speed them up. We introduce StreamFlow, a novel Workflow Management System that supports such a design pattern and makes it possible to run the steps of a standard workflow model on independent processing elements with no shared storage. We validated the proposed approach's effectiveness on the CLAIRE COVID-19 universal pipeline, i.e. a reproducible workflow capable of automating the comparison of (possibly all) state-of-the-art pipelines for the diagnosis of COVID-19 interstitial pneumonia from CT scans images based on Deep Neural Networks (DNNs).

    @inproceedings{colonnelli_et_al:OASIcs.PARMA-DITAM.2021.5,
    abstract = {Finding an effective way to improve accessibility to High-Performance Computing facilities, still anchored to SSH-based remote shells and queue-based job submission mechanisms, is an open problem in computer science. This work advocates a cloudification of HPC applications through a cluster-as-accelerator pattern, where computationally demanding portions of the main execution flow hosted on a Cloud infrastructure can be offloaded to HPC environments to speed them up. We introduce StreamFlow, a novel Workflow Management System that supports such a design pattern and makes it possible to run the steps of a standard workflow model on independent processing elements with no shared storage. We validated the proposed approach's effectiveness on the CLAIRE COVID-19 universal pipeline, i.e. a reproducible workflow capable of automating the comparison of (possibly all) state-of-the-art pipelines for the diagnosis of COVID-19 interstitial pneumonia from CT scans images based on Deep Neural Networks (DNNs).},
    address = {Dagstuhl, Germany},
    annote = {Keywords: cloud computing, distributed computing, high-performance computing, streamflow, workflow management systems},
    author = {Colonnelli, Iacopo and Cantalupo, Barbara and Esposito, Roberto and Pennisi, Matteo and Spampinato, Concetto and Aldinucci, Marco},
    booktitle = {12th Workshop on Parallel Programming and Run-Time Management Techniques for Many-core Architectures and 10th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2021)},
    doi = {10.4230/OASIcs.PARMA-DITAM.2021.5},
    editor = {Bispo, Jo\~{a}o and Cherubin, Stefano and Flich, Jos\'{e}},
    isbn = {978-3-95977-181-8},
    issn = {2190-6807},
    keywords = {deephealth, hpc4ai, streamflow},
    pages = {5:1--5:13},
    publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
    series = {Open Access Series in Informatics (OASIcs)},
    title = {{HPC Application Cloudification: The StreamFlow Toolkit}},
    url = {https://drops.dagstuhl.de/opus/volltexte/2021/13641/pdf/OASIcs-PARMA-DITAM-2021-5.pdf},
    urn = {urn:nbn:de:0030-drops-136419},
    volume = {88},
    year = {2021},
    bdsk-url-1 = {https://drops.dagstuhl.de/opus/volltexte/2021/13641/pdf/OASIcs-PARMA-DITAM-2021-5.pdf},
    bdsk-url-2 = {https://doi.org/10.4230/OASIcs.PARMA-DITAM.2021.5}
    }

  • F. D'Ascenzo, O. De Filippo, G. Gallone, G. Mittone, M. A. Deriu, M. Iannaccone, A. Ariza-Solé, C. Liebetrau, S. Manzano-Fernández, G. Quadri, T. Kinnaird, G. Campo, J. P. Simao Henriques, J. M. Hughes, A. Dominguez-Rodriguez, M. Aldinucci, U. Morbiducci, G. Patti, S. Raposeiras-Roubin, E. Abu-Assi, G. M. De Ferrari, F. Piroli, A. Saglietto, F. Conrotto, P. Omedé, A. Montefusco, M. Pennone, F. Bruno, P. P. Bocchino, G. Boccuzzi, E. Cerrato, F. Varbella, M. Sperti, S. B. Wilton, L. Velicki, I. Xanthopoulou, A. Cequier, A. Iniguez-Romo, I. Munoz Pousa, M. Cespon Fernandez, B. Caneiro Queija, R. Cobas-Paz, A. Lopez-Cuenca, A. Garay, P. F. Blanco, A. Rognoni, G. Biondi Zoccai, S. Biscaglia, I. Nunez-Gil, T. Fujii, A. Durante, X. Song, T. Kawaji, D. Alexopoulos, Z. Huczek, J. R. Gonzalez Juanatey, S. Nie, M. Kawashiri, I. Colonnelli, B. Cantalupo, R. Esposito, S. Leonardi, W. Grosso Marra, A. Chieffo, U. Michelucci, D. Piga, M. Malavolta, S. Gili, M. Mennuni, C. Montalto, L. Oltrona Visconti, and Y. Arfat, "Machine learning-based prediction of adverse events following an acute coronary syndrome (PRAISE): a modelling study of pooled datasets," The lancet, vol. 397, iss. 10270, pp. 199-207, 2021. doi:10.1016/S0140-6736(20)32519-8
    [BibTeX] [Abstract] [Download PDF]

    Background The accuracy of current prediction tools for ischaemic and bleeding events after an acute coronary syndrome (ACS) remains insufficient for individualised patient management strategies. We developed a machine learning-based risk stratification model to predict all-cause death, recurrent acute myocardial infarction, and major bleeding after ACS. Methods Different machine learning models for the prediction of 1-year post-discharge all-cause death, myocardial infarction, and major bleeding (defined as Bleeding Academic Research Consortium type 3 or 5) were trained on a cohort of 19826 adult patients with ACS (split into a training cohort [80%] and internal validation cohort [20%]) from the BleeMACS and RENAMI registries, which included patients across several continents. 25 clinical features routinely assessed at discharge were used to inform the models. The best-performing model for each study outcome (the PRAISE score) was tested in an external validation cohort of 3444 patients with ACS pooled from a randomised controlled trial and three prospective registries. Model performance was assessed according to a range of learning metrics including area under the receiver operating characteristic curve (AUC). Findings The PRAISE score showed an AUC of 0.82 (95% CI 0.78-0.85) in the internal validation cohort and 0.92 (0.90-0.93) in the external validation cohort for 1-year all-cause death; an AUC of 0.74 (0.70-0.78) in the internal validation cohort and 0.81 (0.76-0.85) in the external validation cohort for 1-year myocardial infarction; and an AUC of 0.70 (0.66-0.75) in the internal validation cohort and 0.86 (0.82-0.89) in the external validation cohort for 1-year major bleeding. Interpretation A machine learning-based approach for the identification of predictors of events after an ACS is feasible and effective. The PRAISE score showed accurate discriminative capabilities for the prediction of all-cause death, myocardial infarction, and major bleeding, and might be useful to guide clinical decision making.

    @article{21:lancet,
    abstract = {Background The accuracy of current prediction tools for ischaemic and bleeding events after an acute coronary syndrome (ACS) remains insufficient for individualised patient management strategies. We developed a machine learning-based risk stratification model to predict all-cause death, recurrent acute myocardial infarction, and major bleeding after ACS.
    Methods Different machine learning models for the prediction of 1-year post-discharge all-cause death, myocardial infarction, and major bleeding (defined as Bleeding Academic Research Consortium type 3 or 5) were trained on a cohort of 19826 adult patients with ACS (split into a training cohort [80%] and internal validation cohort [20%]) from the BleeMACS and RENAMI registries, which included patients across several continents. 25 clinical features routinely assessed at discharge were used to inform the models. The best-performing model for each study outcome (the PRAISE score) was tested in an external validation cohort of 3444 patients with ACS pooled from a randomised controlled trial and three prospective registries. Model performance was assessed according to a range of learning metrics including area under the receiver operating characteristic curve (AUC).
    Findings The PRAISE score showed an AUC of 0.82 (95% CI 0.78-0.85) in the internal validation cohort and 0.92 (0.90-0.93) in the external validation cohort for 1-year all-cause death; an AUC of 0.74 (0.70-0.78) in the internal validation cohort and 0.81 (0.76-0.85) in the external validation cohort for 1-year myocardial infarction; and an AUC of 0.70 (0.66-0.75) in the internal validation cohort and 0.86 (0.82-0.89) in the external validation cohort for 1-year major bleeding.
    Interpretation A machine learning-based approach for the identification of predictors of events after an ACS is feasible and effective. The PRAISE score showed accurate discriminative capabilities for the prediction of all-cause death, myocardial infarction, and major bleeding, and might be useful to guide clinical decision making.},
    author = {Fabrizio D'Ascenzo and Ovidio {De Filippo} and Guglielmo Gallone and Gianluca Mittone and Marco Agostino Deriu and Mario Iannaccone and Albert Ariza-Sol\'e and Christoph Liebetrau and Sergio Manzano-Fern\'andez and Giorgio Quadri and Tim Kinnaird and Gianluca Campo and Jose Paulo {Simao Henriques} and James M Hughes and Alberto Dominguez-Rodriguez and Marco Aldinucci and Umberto Morbiducci and Giuseppe Patti and Sergio Raposeiras-Roubin and Emad Abu-Assi and Gaetano Maria {De Ferrari} and Francesco Piroli and Andrea Saglietto and Federico Conrotto and Pierluigi Omed\'e and Antonio Montefusco and Mauro Pennone and Francesco Bruno and Pier Paolo Bocchino and Giacomo Boccuzzi and Enrico Cerrato and Ferdinando Varbella and Michela Sperti and Stephen B. Wilton and Lazar Velicki and Ioanna Xanthopoulou and Angel Cequier and Andres Iniguez-Romo and Isabel {Munoz Pousa} and Maria {Cespon Fernandez} and Berenice {Caneiro Queija} and Rafael Cobas-Paz and Angel Lopez-Cuenca and Alberto Garay and Pedro Flores Blanco and Andrea Rognoni and Giuseppe {Biondi Zoccai} and Simone Biscaglia and Ivan Nunez-Gil and Toshiharu Fujii and Alessandro Durante and Xiantao Song and Tetsuma Kawaji and Dimitrios Alexopoulos and Zenon Huczek and Jose Ramon {Gonzalez Juanatey} and Shao-Ping Nie and Masa-aki Kawashiri and Iacopo Colonnelli and Barbara Cantalupo and Roberto Esposito and Sergio Leonardi and Walter {Grosso Marra} and Alaide Chieffo and Umberto Michelucci and Dario Piga and Marta Malavolta and Sebastiano Gili and Marco Mennuni and Claudio Montalto and Luigi {Oltrona Visconti} and Yasir Arfat},
    date-modified = {2021-03-26 23:53:19 +0100},
    doi = {10.1016/S0140-6736(20)32519-8},
    issn = {0140-6736},
    journal = {The Lancet},
    keywords = {deephealth, hpc4ai},
    number = {10270},
    pages = {199-207},
    title = {Machine learning-based prediction of adverse events following an acute coronary syndrome {(PRAISE)}: a modelling study of pooled datasets},
    url = {https://www.researchgate.net/profile/James_Hughes3/publication/348501148_Machine_learning-based_prediction_of_adverse_events_following_an_acute_coronary_syndrome_PRAISE_a_modelling_study_of_pooled_datasets/links/6002a81ba6fdccdcb858b6c2/Machine-learning-based-prediction-of-adverse-events-following-an-acute-coronary-syndrome-PRAISE-a-modelling-study-of-pooled-datasets.pdf},
    volume = {397},
    year = {2021},
    bdsk-url-1 = {https://www.researchgate.net/profile/James_Hughes3/publication/348501148_Machine_learning-based_prediction_of_adverse_events_following_an_acute_coronary_syndrome_PRAISE_a_modelling_study_of_pooled_datasets/links/6002a81ba6fdccdcb858b6c2/Machine-learning-based-prediction-of-adverse-events-following-an-acute-coronary-syndrome-PRAISE-a-modelling-study-of-pooled-datasets.pdf},
    bdsk-url-2 = {https://doi.org/10.1016/S0140-6736(20)32519-8}
    }

  • I. Colonnelli, B. Cantalupo, I. Merelli, and M. Aldinucci, "StreamFlow: cross-breeding cloud with HPC," IEEE Transactions on Emerging Topics in Computing, vol. 9, iss. 4, p. 1723–1737, 2021. doi:10.1109/TETC.2020.3019202
    [BibTeX] [Abstract] [Download PDF]

    Workflows are among the most commonly used tools in a variety of execution environments. Many of them target a specific environment; few of them make it possible to execute an entire workflow in different environments, e.g. Kubernetes and batch clusters. We present a novel approach to workflow execution, called StreamFlow, that complements the workflow graph with the declarative description of potentially complex execution environments, and that makes it possible the execution onto multiple sites not sharing a common data space. StreamFlow is then exemplified on a novel bioinformatics pipeline for single cell transcriptomic data analysis workflow.

    @article{20Lstreamflow:tetc,
    abstract = {Workflows are among the most commonly used tools in a variety of execution environments. Many of them target a specific environment; few of them make it possible to execute an entire workflow in different environments, e.g. Kubernetes and batch clusters. We present a novel approach to workflow execution, called StreamFlow, that complements the workflow graph with the declarative description of potentially complex execution environments, and that makes it possible the execution onto multiple sites not sharing a common data space. StreamFlow is then exemplified on a novel bioinformatics pipeline for single cell transcriptomic data analysis workflow.},
    author = {Iacopo Colonnelli and Barbara Cantalupo and Ivan Merelli and Marco Aldinucci},
    date-added = {2020-08-27 09:29:49 +0200},
    date-modified = {2020-08-27 09:36:33 +0200},
    doi = {10.1109/TETC.2020.3019202},
    journal = {{IEEE} {T}ransactions on {E}merging {T}opics in {C}omputing},
    keywords = {deephealth, hpc4ai, streamflow},
    number = {4},
    pages = {1723--1737},
    title = {{StreamFlow}: cross-breeding cloud with {HPC}},
    url = {https://arxiv.org/pdf/2002.01558},
    volume = {9},
    year = {2021},
    bdsk-url-1 = {https://arxiv.org/pdf/2002.01558},
    bdsk-url-2 = {https://doi.org/10.1109/TETC.2020.3019202}
    }

2020

  • V. Cesare, I. Colonnelli, and M. Aldinucci, "Practical parallelization of scientific applications," in Proc. of 28th euromicro intl. conference on parallel distributed and network-based processing (pdp), Västerås, Sweden, 2020, pp. 376-384. doi:10.1109/PDP50117.2020.00064
    [BibTeX] [Abstract] [Download PDF]

    This work aims at distilling a systematic methodology to modernize existing sequential scientific codes with a limited re-designing effort, turning an old codebase into modern code, i.e., parallel and robust code. We propose an automatable methodology to parallelize scientific applications designed with a purely sequential programming mindset, thus possibly using global variables, aliasing, random number generators, and stateful functions. We demonstrate the methodology by way of an astrophysical application, where we model at the same time the kinematic profiles of 30 disk galaxies with a Monte Carlo Markov Chain (MCMC), which is sequential by definition. The parallel code exhibits a 12 times speedup on a 48-core platform.

    @inproceedings{20:looppar:pdp,
    abstract = {This work aims at distilling a systematic methodology to modernize existing sequential scientific codes with a limited re-designing effort, turning an old codebase into modern code, i.e., parallel and robust code. We propose an automatable methodology to parallelize scientific applications designed with a purely sequential programming mindset, thus possibly using global variables, aliasing, random number generators, and stateful functions. We demonstrate the methodology by way of an astrophysical application, where we model at the same time the kinematic profiles of 30 disk galaxies with a Monte Carlo Markov Chain (MCMC), which is sequential by definition. The parallel code exhibits a 12 times speedup on a 48-core platform.},
    address = {V{\"a}ster{\aa}s, Sweden},
    author = {Valentina Cesare and Iacopo Colonnelli and Marco Aldinucci},
    booktitle = {Proc. of 28th Euromicro Intl. Conference on Parallel Distributed and network-based Processing (PDP)},
    date-modified = {2020-04-05 02:21:31 +0200},
    doi = {10.1109/PDP50117.2020.00064},
    keywords = {hpc4ai, c3s},
    pages = {376-384},
    publisher = {IEEE},
    title = {Practical Parallelization of Scientific Applications},
    url = {https://iris.unito.it/retrieve/handle/2318/1735377/601141/2020_looppar_PDP.pdf},
    year = {2020},
    bdsk-url-1 = {https://doi.org/10.1109/PDP50117.2020.00064},
    bdsk-url-2 = {https://iris.unito.it/retrieve/handle/2318/1735377/601141/2020_looppar_PDP.pdf}
    }
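
    Since the 30 per-galaxy fits described in the abstract above are mutually independent, the parallel structure reduces to a single loop over galaxies. The sketch below is illustrative only (the types and the fit_galaxy_mcmc kernel are hypothetical placeholders, not the paper's code); dynamic scheduling is one way to balance MCMC chains of uneven length across the cores.

        /* Illustrative sketch only: task-level parallelism over independent
         * galaxy fits. galaxy_t, result_t and fit_galaxy_mcmc() are
         * hypothetical placeholders for the paper's data and MCMC kernel. */
        #define N_GALAXIES 30

        typedef struct { double data[64]; }  galaxy_t;  /* per-galaxy inputs (assumed)  */
        typedef struct { double params[8]; } result_t;  /* fitted parameters (assumed)  */

        result_t fit_galaxy_mcmc(const galaxy_t *g);    /* sequential per-galaxy kernel */

        void fit_all(const galaxy_t galaxies[N_GALAXIES], result_t results[N_GALAXIES])
        {
            /* each iteration touches only its own galaxy and result slot */
            #pragma omp parallel for schedule(dynamic)
            for (int g = 0; g < N_GALAXIES; ++g)
                results[g] = fit_galaxy_mcmc(&galaxies[g]);
        }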

2019

  • P. Viviani, M. Drocco, D. Baccega, I. Colonnelli, and M. Aldinucci, "Deep learning at scale," in Proc. of 27th euromicro intl. conference on parallel distributed and network-based processing (pdp), Pavia, Italy, 2019, pp. 124-131. doi:10.1109/EMPDP.2019.8671552
    [BibTeX] [Abstract] [Download PDF]

    This work presents a novel approach to distributed training of deep neural networks (DNNs) that aims to overcome the issues related to mainstream approaches to data parallel training. Established techniques for data parallel training are discussed from both a parallel computing and deep learning perspective, then a different approach is presented that is meant to allow DNN training to scale while retaining good convergence properties. Moreover, an experimental implementation is presented as well as some preliminary results.

    @inproceedings{19:deeplearn:pdp,
    abstract = {This work presents a novel approach to distributed training of deep neural networks (DNNs) that aims to overcome the issues related to mainstream approaches to data parallel training. Established techniques for data parallel training are discussed from both a parallel computing and deep learning perspective, then a different approach is presented that is meant to allow DNN training to scale while retaining good convergence properties. Moreover, an experimental implementation is presented as well as some preliminary results.},
    address = {Pavia, Italy},
    author = {Paolo Viviani and Maurizio Drocco and Daniele Baccega and Iacopo Colonnelli and Marco Aldinucci},
    booktitle = {Proc. of 27th Euromicro Intl. Conference on Parallel Distributed and network-based Processing (PDP)},
    date-added = {2020-01-30 10:48:12 +0100},
    date-modified = {2020-11-15 15:00:34 +0100},
    doi = {10.1109/EMPDP.2019.8671552},
    keywords = {machine learning},
    pages = {124-131},
    publisher = {IEEE},
    title = {Deep Learning at Scale},
    url = {https://iris.unito.it/retrieve/handle/2318/1695211/487778/19_deeplearning_PDP.pdf},
    year = {2019},
    bdsk-url-1 = {https://iris.unito.it/retrieve/handle/2318/1695211/487778/19_deeplearning_PDP.pdf},
    bdsk-url-2 = {https://doi.org/10.1109/EMPDP.2019.8671552}
    }

  • M. Drocco, P. Viviani, I. Colonnelli, M. Aldinucci, and M. Grangetto, "Accelerating spectral graph analysis through wavefronts of linear algebra operations," in Proc. of 27th euromicro intl. conference on parallel distributed and network-based processing (pdp), Pavia, Italy, 2019, pp. 9-16. doi:10.1109/EMPDP.2019.8671640
    [BibTeX] [Abstract] [Download PDF]

    The wavefront pattern captures the unfolding of a parallel computation in which data elements are laid out as a logical multidimensional grid and the dependency graph favours a diagonal sweep across the grid. In the emerging area of spectral graph analysis, the computing often consists in a wavefront running over a tiled matrix, involving expensive linear algebra kernels. While these applications might benefit from parallel heterogeneous platforms (multi-core with GPUs),programming wavefront applications directly with high-performance linear algebra libraries yields code that is complex to write and optimize for the specific application. We advocate a methodology based on two abstractions (linear algebra and parallel pattern-based run-time), that allows to develop portable, self-configuring, and easy-to-profile code on hybrid platforms.

    @inproceedings{19:gsp:pdp,
    abstract = {The wavefront pattern captures the unfolding of a parallel computation in which data elements are laid out as a logical multidimensional grid and the dependency graph favours a diagonal sweep across the grid. In the emerging area of spectral graph analysis, the computing often consists in a wavefront running over a tiled matrix, involving expensive linear algebra kernels. While these applications might benefit from parallel heterogeneous platforms (multi-core with GPUs),programming wavefront applications directly with high-performance linear algebra libraries yields code that is complex to write and optimize for the specific application. We advocate a methodology based on two abstractions (linear algebra and parallel pattern-based run-time), that allows to develop portable, self-configuring, and easy-to-profile code on hybrid platforms.},
    address = {Pavia, Italy},
    author = {Maurizio Drocco and Paolo Viviani and Iacopo Colonnelli and Marco Aldinucci and Marco Grangetto},
    booktitle = {Proc. of 27th Euromicro Intl. Conference on Parallel Distributed and network-based Processing (PDP)},
    date-modified = {2021-04-24 23:22:22 +0200},
    doi = {10.1109/EMPDP.2019.8671640},
    pages = {9-16},
    publisher = {IEEE},
    title = {Accelerating spectral graph analysis through wavefronts of linear algebra operations},
    url = {https://iris.unito.it/retrieve/handle/2318/1695315/488105/19_wavefront_PDP.pdf},
    year = {2019},
    bdsk-url-1 = {https://iris.unito.it/retrieve/handle/2318/1695315/488105/19_wavefront_PDP.pdf},
    bdsk-url-2 = {https://doi.org/10.1109/EMPDP.2019.8671640}
    }
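
    The wavefront pattern described in the entry above has a simple skeleton: tile (i, j) depends on its left and upper neighbours, so all tiles lying on the same anti-diagonal can run concurrently. The C/OpenMP fragment below is an illustrative sketch of that sweep, not code from the paper; process_tile is a hypothetical callback standing in for the expensive linear algebra kernel.

        /* Illustrative sketch only: wavefront sweep over an n x n grid of tiles.
         * Tile (i, j) depends on (i-1, j) and (i, j-1), so every tile on the
         * anti-diagonal d = i + j is independent of the others on it. */
        void wavefront(int n, void (*process_tile)(int i, int j))
        {
            for (int d = 0; d < 2 * n - 1; ++d) {    /* sweep diagonals in order */
                int i_min = (d < n) ? 0 : d - n + 1;
                int i_max = (d < n) ? d : n - 1;
                /* tiles on one diagonal are independent: parallelize across them */
                #pragma omp parallel for
                for (int i = i_min; i <= i_max; ++i)
                    process_tile(i, d - i);
            }
        }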

Talks

2023

  • I. Colonnelli, UNITO tools presentation, CN HPC Flagship 3 Working Day, Bologna, Italy, May 2023.
    [BibTeX] [Download PDF]
    @misc{23:FL3WorkingDay,
    address = {Bologna, Italy},
    author = {Iacopo Colonnelli},
    keywords = {streamflow, jupyter-workflow},
    month = {May},
    howpublished = {CN HPC Flagship 3 Working Day},
    title = {{UNITO} tools presentation},
    url = {https://datacloud.di.unito.it/index.php/s/fgHbnLDQSFtcwLd},
    year = {2023}
    }

  • G. Mittone, N. Tonci, R. Birke, I. Colonnelli, D. Medić, A. Bartolini, R. Esposito, E. Parisi, F. Beneventi, M. Polato, M. Torquati, L. Benini, and M. Aldinucci, Experimenting with emerging RISC-V systems for decentralised machine learning, 20th ACM international conference on computing frontiers (CF '23), Invited talk, May 2023.
    [BibTeX] [Abstract] [Download PDF]

    Decentralised Machine Learning (DML) enables collaborative machine learning without centralised input data. Federated Learning (FL) and Edge Inference are examples of DML. While tools for DML (especially FL) are starting to flourish, many are not flexible and portable enough to experiment with novel processors (e.g., RISC-V), non-fully connected network topologies, and asynchronous collaboration schemes. We overcome these limitations via a domain-specific language allowing us to map DML schemes to an underlying middleware, i.e. the FastFlow parallel programming library. We experiment with it by generating different working DML schemes on x86-64 and ARM platforms and an emerging RISC-V one. We characterise the performance and energy efficiency of the presented schemes and systems. As a byproduct, we introduce a RISC-V porting of the PyTorch framework, the first publicly available to our knowledge.

    @misc{23:ACMCF,
    abstract = {Decentralised Machine Learning (DML) enables collaborative machine learning without centralised input data. Federated Learning (FL) and Edge Inference are examples of DML. While tools for DML (especially FL) are starting to flourish, many are not flexible and portable enough to experiment with novel processors (e.g., RISC-V), non-fully connected network topologies, and asynchronous collaboration schemes. We overcome these limitations via a domain-specific language allowing us to map DML schemes to an underlying middleware, i.e. the FastFlow parallel programming library. We experiment with it by generating different working DML schemes on x86-64 and ARM platforms and an emerging RISC-V one. We characterise the performance and energy efficiency of the presented schemes and systems. As a byproduct, we introduce a RISC-V porting of the PyTorch framework, the first publicly available to our knowledge.},
    author = {Gianluca Mittone and Nicolò Tonci and Robert Birke and Iacopo Colonnelli and Doriana Medić and Andrea Bartolini and Roberto Esposito and Emanuele Parisi and Francesco Beneventi and Mirko Polato and Massimo Torquati and Luca Benini and Marco Aldinucci},
    howpublished = {20th ACM international conference on computing frontiers (CF '23)},
    keywords = {invited, eupilot, icsc},
    month = {May},
    note = {Invited talk},
    title = {Experimenting with Emerging RISC-V Systems for Decentralised Machine Learning},
    url = {https://datacloud.di.unito.it/index.php/s/BYyqZbHzzN4DL8Z},
    year = {2023}
    }

  • S. Karvounari, E. Mathioulaki, M. R. Crusoe, and I. Colonnelli, Standardised workflows at EBRAINS, Human Brain Project Summit 2023, Marseille, France, Invited talk, March 2023.
    [BibTeX] [Abstract] [Download PDF]

    A hands-on training offer for Standardised Workflows in EBRAINS. A short presentation will be used as an introduction, while the main hands-on session will provide information about Writing and Executing Standardised Workflows. TC will give some guidelines, so attendees can experiment with writing CWL tools and workflows and then they will be given access to VM to execute these workflows. The Workflows Dashboard will be also presented during the same session, offering to the attendees the opportunity to understand the different functionalities, use it with TC support and provide useful comments.

    @misc{23:HBPSummit,
    abstract = {A hands-on training offer for Standardised Workflows in EBRAINS. A short presentation will be used as an introduction, while the main hands-on session will provide information about Writing and Executing Standardised Workflows. TC will give some guidelines, so attendees can experiment with writing CWL tools and workflows and then they will be given access to VM to execute these workflows. The Workflows Dashboard will be also presented during the same session, offering to the attendees the opportunity to understand the different functionalities, use it with TC support and provide useful comments.},
    address = {Marseille, France},
    author = {Sofia Karvounari and Eleni Mathioulaki and Michael R. Crusoe and Iacopo Colonnelli},
    howpublished = {Human Brain Project Summit 2023},
    keywords = {invited, streamflow, across, eupex, space},
    month = {March},
    note = {Invited talk},
    title = {Standardised Workflows at {EBRAINS}},
    url = {https://datacloud.di.unito.it/index.php/s/K5YQKTsX9N7NLT8},
    year = {2023}
    }

  • I. Colonnelli, CWL for HPC: are we there yet?, 2023 CWL Conference, Heidelberg, Germany, Invited talk, March 2023.
    [BibTeX] [Abstract] [Download PDF]

    Modern HPC applications are becoming so heterogeneous and complex that a modular approach to their design, deployment and orchestration is now necessary. This talk explores the benefits of using a vendor-agnostic workflow language (CWL) coupled with a hybrid workflow management system (StreamFlow) in the HPC ecosystem. Also, it will examine the requirements needed to model HPC applications effectively, the CWL’s readiness to meet such requirements, and the proposals made to improve the language where needed. Four real use cases will drive the discussion: the ACROSS Project (G.A. n. 955648), where CWL is the primary interface to model three HPC workflows, and the EUPEX Project (G.A. n. 101033975), where StreamFlow will be used for the rapid prototyping of a seismic engineering HPC application for a Modular Supercomputing Architecture (MSA) system.

    @misc{23:CWLConference,
    abstract = {Modern HPC applications are becoming so heterogeneous and complex that a modular approach to their design, deployment and orchestration is now necessary. This talk explores the benefits of using a vendor-agnostic workflow language (CWL) coupled with a hybrid workflow management system (StreamFlow) in the HPC ecosystem. Also, it will examine the requirements needed to model HPC applications effectively, the CWL’s readiness to meet such requirements, and the proposals made to improve the language where needed. Four real use cases will drive the discussion: the ACROSS Project (G.A. n. 955648), where CWL is the primary interface to model three HPC workflows, and the EUPEX Project (G.A. n. 101033975), where StreamFlow will be used for the rapid prototyping of a seismic engineering HPC application for a Modular Supercomputing Architecture (MSA) system.},
    address = {Heidelberg, Germany},
    author = {Iacopo Colonnelli},
    howpublished = {2023 CWL Conference},
    keywords = {invited, streamflow, across, eupex},
    month = {March},
    note = {Invited talk},
    title = {{CWL} for {HPC}: are we there yet?},
    url = {https://datacloud.di.unito.it/index.php/s/CMCd5LiZeXsxwEg},
    year = {2023}
    }

2022

  • I. Colonnelli and M. Aldinucci, Hybrid workflows for large-scale scientific applications, 6th EAGE High Performance Computing Workshop, Milano, Italy, Sep 2022.
    [BibTeX] [Download PDF]
    @misc{22:eage,
    abstract = {Large-scale scientific applications are facing an irreversible transition from monolithic, high-performance oriented codes to modular and polyglot deployments of specialised (micro-)services. The reasons behind this transition are many: coupling of standard solvers with Deep Learning techniques, offloading of data analysis and visualisation to Cloud, and the advent of specialised hardware accelerators. Topology-aware Workflow Management Systems (WMSs) play a crucial role. In particular, topology-awareness allows an explicit mapping of workflow steps onto heterogeneous locations, allowing automated executions on top of hybrid architectures (e.g., cloud+HPC or classical+quantum). Plus, topology-aware WMSs can offer non-functional requirements OOTB, e.g. components’ life-cycle orchestration, secure and efficient data transfers, fault tolerance, and cross-cluster execution of urgent workloads. Augmenting interactive Jupyter Notebooks with distributed workflow capabilities allows domain experts to prototype and scale applications using the same technological stack, while relying on a feature-rich and user-friendly web interface. This abstract will showcase how these general methodologies can be applied to a typical geoscience simulation pipeline based on the Full Wavefront Inversion (FWI) technique. In particular, a prototypical Jupyter Notebook will be executed interactively on Cloud. Preliminary data analyses and post-processing will be executed locally, while the computationally demanding optimisation loop will be scheduled on a remote HPC cluster.},
    author = {Iacopo Colonnelli and Marco Aldinucci},
    title = {Hybrid Workflows For Large-Scale Scientific Applications},
    howpublished = {6th EAGE High Performance Computing Workshop},
    month = {Sep},
    year = {2022},
    note = {},
    address = {Milano, Italy},
    keywords = {eupex, across, textarossa, jupyter-workflow},
    annote = {},
    url = {https://datacloud.di.unito.it/index.php/s/GScPS5LCPdt6Yoo}
    }

  • I. Colonnelli, B. Cantalupo, D. Medić, and M. Aldinucci, Hybrid workflows for heterogeneous distributed computing, 3rd Italian Workshop on HPC (ITWSHPC), Torino, Italy, Sep 2022.
    [BibTeX] [Download PDF]
    @misc{22:itwshpc,
    author = {Iacopo Colonnelli and Barbara Cantalupo and Doriana Medi\'{c} and Marco Aldinucci},
    title = {Hybrid workflows for heterogeneous distributed computing},
    howpublished = {3rd Italian Workshop on HPC (ITWSHPC)},
    month = {Sep},
    year = {2022},
    note = {},
    address = {Torino, Italy},
    keywords = {eupex, across, admire, eupilot, textarossa, eumaster4hpc},
    abstract = {},
    annote = {},
    url = {https://datacloud.di.unito.it/index.php/s/ienbcA2DJ26aioE}
    }

  • I. Colonnelli and M. Aldinucci, CINI HPC-KTT: HPC Key Technologies and Tools national lab, NVIDIA HPC Roundtable, Casalecchio di Reno, Italy, Invited talk, Sep 2022.
    [BibTeX] [Download PDF]
    @misc{22:nvidia_hpc_roundtable,
    author = {Iacopo Colonnelli and Marco Aldinucci},
    title = {{CINI HPC-KTT}: {HPC} {K}ey {T}echnologies and {T}ools National Lab},
    howpublished = {NVIDIA HPC Roundtable},
    month = {Sep},
    year = {2022},
    note = {Invited talk},
    address = {Casalecchio di Reno, Italy},
    keywords = {invited, eupex, across, admire, eupilot, textarossa, eumaster4hpc},
    abstract = {},
    annote = {},
    url = {https://datacloud.di.unito.it/index.php/s/9EQniZ2dGzdJ26f}
    }

  • I. Colonnelli and D. Tranchitella, Dossier: multi-tenant distributed Jupyter Notebooks, DoK Talks 141, Virtual event, Invited talk, July 2022.
    [BibTeX] [Abstract] [Download PDF]

    When providing data analysis as a service, one must tackle several problems. Data privacy and protection by design are crucial when working on sensitive data. Performance and scalability are fundamental for compute-intensive workloads, e.g. training Deep Neural Networks. User-friendly interfaces and fast prototyping tools are essential to allow domain experts to experiment with new techniques. Portability and reproducibility are necessary to assess the actual value of results. Kubernetes is the best platform to provide reliable, elastic, and maintainable services. However, Kubernetes alone is not enough to achieve large-scale multi-tenant reproducible data analysis. OOTB support for multi-tenancy is too rough, with only two levels of segregation (i.e. the single namespace or the entire cluster). Offloading computation to off-cluster resources is non-trivial and requires the user's manual configuration. Also, Jupyter Notebooks per se cannot provide much scalability (they execute locally and sequentially) and reproducibility (users can run cells in any order and any number of times). The Dossier platform allows system administrators to manage multi-tenant distributed Jupyter Notebooks at the cluster level in the Kubernetes way, i.e. through CRDs. Namespaces are aggregated in Tenants, and all security and accountability aspects are managed at that level. Each Notebook spawns into a user-dedicated namespace, subject to all Tenant-level constraints. Users can rely on provisioned resources, either in-cluster worker nodes or external resources like HPC facilities. Plus, they can plug their computing nodes in a BYOD fashion. Notebooks are interpreted as distributed workflows, where each cell is a task that one can offload to a different location in charge of its execution.

    @misc{22:data-on-kubernetes,
    optkey = {},
    author = {Iacopo Colonnelli and Dario Tranchitella},
    abstract = {When providing data analysis as a service, one must tackle several problems. Data privacy and protection by design are crucial when working on sensitive data. Performance and scalability are fundamental for compute-intensive workloads, e.g. training Deep Neural Networks. User-friendly interfaces and fast prototyping tools are essential to allow domain experts to experiment with new techniques. Portability and reproducibility are necessary to assess the actual value of results. Kubernetes is the best platform to provide reliable, elastic, and maintainable services. However, Kubernetes alone is not enough to achieve large-scale multi-tenant reproducible data analysis. OOTB support for multi-tenancy is too rough, with only two levels of segregation (i.e. the single namespace or the entire cluster). Offloading computation to off-cluster resources is non-trivial and requires the user's manual configuration. Also, Jupyter Notebooks per se cannot provide much scalability (they execute locally and sequentially) and reproducibility (users can run cells in any order and any number of times). The Dossier platform allows system administrators to manage multi-tenant distributed Jupyter Notebooks at the cluster level in the Kubernetes way, i.e. through CRDs. Namespaces are aggregated in Tenants, and all security and accountability aspects are managed at that level. Each Notebook spawns into a user-dedicated namespace, subject to all Tenant-level constraints. Users can rely on provisioned resources, either in-cluster worker nodes or external resources like HPC facilities. Plus, they can plug their computing nodes in a BYOD fashion. Notebooks are interpreted as distributed workflows, where each cell is a task that one can offload to a different location in charge of its execution.},
    title = {Dossier: multi-tenant distributed {J}upyter {N}otebooks},
    howpublished = {DoK Talks 141},
    month = {July},
    year = {2022},
    keywords = {jupyter-workflow, across, deephealth, hpc4ai},
    note = {Invited talk},
    address = {Virtual event},
    optannote = {},
    url = {https://datacloud.di.unito.it/index.php/s/RNqTGmTqWS66qHT}
    }

  • I. Colonnelli, StreamFlow, 2nd HealthyCloud Workshop: Analysis of existing orchestration mechanisms for distributed computational analyses, Virtual event, Invited talk, July 2022.
    [BibTeX] [Download PDF]
    @misc{22:healthycloud-workshop,
    optkey = {},
    author = {Iacopo Colonnelli},
    title = {{StreamFlow}},
    howpublished = {2nd HealthyCloud Workshop: Analysis of existing orchestration mechanisms for distributed computational analyses},
    month = {July},
    year = {2022},
    keywords = {invited, streamflow, deephealth, across, eupex, textarossa},
    note = {Invited talk},
    address = {Virtual event},
    optannote = {},
    url = {https://datacloud.di.unito.it/index.php/s/Taz8qtzmkmn9ffT}
    }

  • I. Colonnelli and M. Aldinucci, T4.1: streaming models, TEXTAROSSA General Meeting, Roma, Italy, June 2022.
    [BibTeX] [Download PDF]
    @misc{22:textarossa-ga-meeting,
    optkey = {},
    author = {Iacopo Colonnelli and Marco Aldinucci},
    title = {T4.1: Streaming models},
    howpublished = {TEXTAROSSA General Meeting},
    month = {June},
    year = {2022},
    keywords = {textarossa},
    address = {Roma, Italy},
    optannote = {},
    url = {https://datacloud.di.unito.it/index.php/s/cNBnwSnTc8GiCkN}
    }

  • I. Colonnelli, StreamFlow: a topology-aware WMS, ELIXIR Cloud, Data & AAI Bi-weekly Technical Calls, Virtual event, Invited talk, June 2022.
    [BibTeX] [Download PDF]
    @misc{22:elixir-streamflow,
    optkey = {},
    author = {Iacopo Colonnelli},
    title = {{StreamFlow}: a topology-aware {WMS}},
    howpublished = {ELIXIR Cloud, Data & AAI Bi-weekly Technical Calls},
    month = {June},
    year = {2022},
    keywords = {invited, streamflow, deephealth, across, eupex, textarossa},
    note = {Invited talk},
    address = {Virtual event},
    optannote = {},
    url = {https://datacloud.di.unito.it/index.php/s/Z9GsKnRCxmBdMd3}
    }

  • I. Colonnelli, StreamFlow: a framework for hybrid workflows, EUPEX WP5 bi-weekly meeting, Virtual event, April 2022.
    [BibTeX] [Download PDF]
    @misc{22:eupex-streamflow,
    optkey = {},
    author = {Iacopo Colonnelli},
    title = {{StreamFlow}: A framework for hybrid workflows},
    howpublished = {EUPEX WP5 bi-weekly meeting},
    month = {April},
    year = {2022},
    keywords = {streamflow, eupex},
    address = {Virtual event},
    optannote = {},
    url = {https://datacloud.di.unito.it/index.php/s/NjKEySP7HfrCQHZ}
    }

  • I. Colonnelli and D. Tranchitella, OpenDeepHealth: crafting a deep learning platform as a service with Kubernetes, J on The Beach 2022, Malaga, Spain, April 2022.
    [BibTeX] [Download PDF]
    @misc{22:jotb22,
    optkey = {},
    author = {Iacopo Colonnelli and Dario Tranchitella},
    title = {{OpenDeepHealth}: Crafting a Deep Learning Platform as a Service with {K}ubernetes},
    howpublished = {J on The Beach 2022},
    month = {April},
    year = {2022},
    keywords = {streamflow, jupyter-workflow, across, deephealth, hpc4ai},
    address = {Malaga, Spain},
    optannote = {},
    url = {https://datacloud.di.unito.it/index.php/s/n6J7STNnwdyqtET}
    }

  • I. Colonnelli, Distributed workflows with Jupyter, J on The Beach 2022, Malaga, Spain, Workshop, April 2022.
    [BibTeX] [Download PDF]
    @misc{22:jotb22-workshop,
    optkey = {},
    author = {Iacopo Colonnelli},
    title = {Distributed workflows with {J}upyter},
    howpublished = {J on The Beach 2022},
    month = {April},
    year = {2022},
    keywords = {jupyter-workflow, deephealth, across},
    address = {Malaga, Spain},
    optannote = {},
    note = {Workshop},
    url = {https://datacloud.di.unito.it/index.php/s/om89q55S6ePf2Ji}
    }

  • I. Colonnelli, StreamFlow: a framework for hybrid workflows, ACROSS WP4 meeting, Virtual event, February 2022.
    [BibTeX] [Download PDF]
    @misc{22:across-streamflow,
    optkey = {},
    author = {Iacopo Colonnelli},
    title = {{StreamFlow}: A framework for hybrid workflows},
    howpublished = {ACROSS WP4 meeting},
    month = {February},
    year = {2022},
    keywords = {streamflow, across},
    address = {Virtual event},
    optannote = {},
    url = {https://datacloud.di.unito.it/index.php/s/FXFTKtQSRf6anMX}
    }

  • I. Colonnelli, The OpenDeepHealth toolkit, DeepHealth Winter School, Torino, Italy, January 2022.
    [BibTeX] [Download PDF]
    @misc{22:DHWinterSchool,
    address = {Torino, Italy},
    author = {Iacopo Colonnelli},
    howpublished = {DeepHealth Winter School},
    keywords = {deephealth, hpc4ai},
    month = {January},
    title = {The {OpenDeepHealth} toolkit},
    url = {https://datacloud.di.unito.it/index.php/s/cJ8pRNsWRrfwPqr},
    year = {2022},
    bdsk-url-1 = {https://datacloud.di.unito.it/index.php/s/cJ8pRNsWRrfwPqr}
    }

2021

  • I. Colonnelli, StreamFlow: a framework for hybrid workflows, ACROSS WP4 meeting, Virtual event, October 2021.
    [BibTeX] [Download PDF]
    @misc{21:across-streamflow,
    optkey = {},
    author = {Iacopo Colonnelli},
    title = {{StreamFlow}: A framework for hybrid workflows},
    howpublished = {ACROSS WP4 meeting},
    month = {October},
    year = {2021},
    keywords = {streamflow, across},
    address = {Virtual event},
    optannote = {},
    url = {https://datacloud.di.unito.it/index.php/s/yrGYJL6CyNywF8a}
    }

  • I. Colonnelli, HPC containers, ACROSS WP4 meeting, Virtual event, July 2021.
    [BibTeX] [Download PDF]
    @misc{21:across-containers,
    optkey = {},
    author = {Iacopo Colonnelli},
    title = {{HPC} Containers},
    howpublished = {ACROSS WP4 meeting},
    month = {July},
    year = {2021},
    keywords = {across},
    address = {Virtual event},
    optannote = {},
    url = {https://datacloud.di.unito.it/index.php/s/ddf3YBjpm8KBGAF}
    }

  • M. Aldinucci and I. Colonnelli, The universal cloud-HPC pipeline for the AI-assisted explainable diagnosis of COVID-19 pneumonia, NVidia GTC'21, Virtual event, Invited talk, April 2021.
    [BibTeX] [Abstract] [Download PDF]

    We'll present a methodology to run DNN pipelines on hybrid cloud+HPC infrastructure. We'll also define a "universal pipeline" for medical images. The pipeline can reproduce all state-of-the-art DNNs to diagnose COVID-19 pneumonia, which appeared in the literature during the first Italian lockdown and following months. We can run all of them (across cloud+HPC platforms) and compare their performance in terms of sensitivity and specificity to set a baseline to evaluate future progress in the automated diagnosis of COVID-19. Also, the pipeline makes existing DNNs explainable by way of adversarial training. The pipeline is easily portable and can run across different infrastructures, adapting the performance-urgency trade-off. The methodology builds onto two novel software programs: the streamflow workflow system and the AI-sandbox concept (parallel container with user-space encrypted file system). We reach over 92% accuracy in diagnosing COVID pneumonia.

    @misc{21:gtc:clairecovid,
    abstract = {We'll present a methodology to run DNN pipelines on hybrid cloud+HPC infrastructure. We'll also define a "universal pipeline" for medical images. The pipeline can reproduce all state-of-the-art DNNs to diagnose COVID-19 pneumonia, which appeared in the literature during the first Italian lockdown and following months. We can run all of them (across cloud+HPC platforms) and compare their performance in terms of sensitivity and specificity to set a baseline to evaluate future progress in the automated diagnosis of COVID-19. Also, the pipeline makes existing DNNs explainable by way of adversarial training. The pipeline is easily portable and can run across different infrastructures, adapting the performance-urgency trade-off. The methodology builds onto two novel software programs: the streamflow workflow system and the AI-sandbox concept (parallel container with user-space encrypted file system). We reach over 92\% accuracy in diagnosing COVID pneumonia.},
    address = {Virtual event},
    author = {Marco Aldinucci and Iacopo Colonnelli},
    howpublished = {NVidia GTC'21},
    keywords = {invited, streamflow, deephealth, hpc4ai},
    month = {April},
    note = {Invited talk},
    title = {The Universal Cloud-{HPC} Pipeline for the {AI}-Assisted Explainable Diagnosis of {COVID-19} Pneumonia},
    url = {https://datacloud.di.unito.it/index.php/s/AkQLbPpEEtDzbbm},
    year = {2021},
    bdsk-url-1 = {https://datacloud.di.unito.it/index.php/s/AkQLbPpEEtDzbbm}
    }

  • I. Colonnelli, StreamFlow: cross breeding cloud with HPC, 2021 CWL Mini Conference, Virtual event, Invited talk, February 2021.
    [BibTeX] [Abstract] [Download PDF]

    Workflows are among the most commonly used tools in a variety of execution environments. Many of them target a specific environment; few of them make it possible to execute an entire workflow in different environments, e.g. Kubernetes and batch clusters. We present a novel approach to workflow execution, called StreamFlow, that complements the workflow graph with the declarative description of potentially complex execution environments, and that makes it possible the execution onto multiple sites not sharing a common data space.

    @misc{21:CWLMiniConference,
    abstract = {Workflows are among the most commonly used tools in a variety of execution environments. Many of them target a specific environment; few of them make it possible to execute an entire workflow in different environments, e.g. Kubernetes and batch clusters. We present a novel approach to workflow execution, called StreamFlow, that complements the workflow graph with the declarative description of potentially complex execution environments, and that makes it possible the execution onto multiple sites not sharing a common data space.},
    address = {Virtual event},
    author = {Iacopo Colonnelli},
    howpublished = {2021 CWL Mini Conference},
    keywords = {invited, streamflow, deephealth},
    month = {February},
    note = {Invited talk},
    title = {{StreamFlow}: cross breeding cloud with {HPC}},
    url = {https://datacloud.di.unito.it/index.php/s/Le9gg4PfjRxBwXD},
    year = {2021}
    }

2020

  • I. Colonnelli and S. Rabellino, JupyterFlow: Jupyter Notebooks su larga scala, Workshop GARR 2020, Virtual event, November 2020.
    [BibTeX] [Abstract] [Download PDF]

    Jupyter Notebooks are widely used in both industry and academia as a tool for teaching, prototyping, and exploratory analyses. Unfortunately, the standard Jupyter runtime system is not powerful enough to sustain real-world workloads, and often the only solution is to rewrite the code from scratch in a technology with HPC support. By integrating the Jupyter stack with StreamFlow (https://streamflow.di.unito.it/), Notebooks can be created through a web interface on the cloud and transparently executed remotely on a GPU-equipped VM or on HPC nodes.

    @misc{20:GarrWorkshop,
    abstract = {I Jupyter Notebook sono largamente utilizzati sia in ambito industriale che accademico come strumento di didattica, prototipazione e analisi esplorative. Purtroppo il sistema runtime standard di Jupyter non \`{e} abbastanza potente per sostenere carichi di lavoro reali e spesso l'unica soluzione \`{e} quella di riscrivere il codice da zero in una tecnologia con supporto HPC. Integrando lo stack Jupyter con StreamFlow (https://streamflow.di.unito.it/) \`{e} possibile creare i Notebook tramite un'interfaccia web su cloud ed eseguirli in maniera trasparente in remoto su una VM con GPU o su nodi HPC.},
    address = {Virtual event},
    author = {Iacopo Colonnelli and Sergio Rabellino},
    howpublished = {Workshop GARR 2020},
    keywords = {jupyter-workflow, hpc4ai, deephealth},
    month = {November},
    title = {{JupyterFlow}: {J}upyter {N}otebooks su larga scala},
    url = {https://datacloud.di.unito.it/index.php/s/ASPEmyXAj5QscgC},
    year = {2020},
    bdsk-url-1 = {https://datacloud.di.unito.it/index.php/s/ASPEmyXAj5QscgC},
    bdsk-url-2 = {https://www.eventi.garr.it/it/ws20/programma/speaker/680-iacopo-colonnelli}
    }

  • I. Colonnelli, StreamFlow: cross breeding cloud with HPC, HPC-Europa3 2nd Transnational Access Meeting (TAM), Virtual event, Invited talk, October 2020.
    [BibTeX] [Abstract] [Download PDF]

    Workflows are among the most commonly used tools in a variety of execution environments. Many of them target a specific environment; few of them make it possible to execute an entire workflow in different environments, e.g. Kubernetes and batch clusters. We present a novel approach to workflow execution, called StreamFlow, that complements the workflow graph with the declarative description of potentially complex execution environments, and that makes it possible the execution onto multiple sites not sharing a common data space. StreamFlow is then exemplified on a novel bioinformatics pipeline for single-cell transcriptomic data analysis workflow.

    @misc{20:HPCEuropa3TAM,
    abstract = {Workflows are among the most commonly used tools in a variety of execution environments. Many of them target a specific environment; few of them make it possible to execute an entire workflow in different environments, e.g. Kubernetes and batch clusters. We present a novel approach to workflow execution, called StreamFlow, that complements the workflow graph with the declarative description of potentially complex execution environments, and that makes it possible the execution onto multiple sites not sharing a common data space. StreamFlow is then exemplified on a novel bioinformatics pipeline for single-cell transcriptomic data analysis workflow.},
    address = {Virtual event},
    author = {Iacopo Colonnelli},
    howpublished = {HPC-Europa3 2nd Transnational Access Meeting (TAM)},
    keywords = {invited, streamflow},
    month = {October},
    note = {Invited talk},
    title = {{StreamFlow}: cross breeding cloud with {HPC}},
    url = {https://datacloud.di.unito.it/index.php/s/qPHHrSNxk8QXJDw},
    year = {2020},
    bdsk-url-1 = {https://datacloud.di.unito.it/index.php/s/qPHHrSNxk8QXJDw},
    bdsk-url-2 = {https://drive.google.com/file/d/1aTVhlsrS7R2FzEWRTTtm4Mzr7O8R4Jq1/view}
    }

2019

  • I. Colonnelli, StreamFlow: un approccio dichiarativo a workflow e pipeline di micro-servizi, Workshop GARR 2019, Roma, Italy, October 2019.
    [BibTeX] [Abstract] [Download PDF]

    In recent years, container-oriented approaches have proven particularly effective in guaranteeing the portability and reproducibility of scientific workflows. However, with the ever-growing volume of available data and the increasing complexity of analysis procedures in every field of research, performance and reusability requirements are also becoming more and more essential. The main goal of StreamFlow is to provide a new, fully declarative paradigm for describing and accelerating scientific workflows in distributed environments. The peculiarity of StreamFlow is that the execution environment is entirely described in terms of services (containers), the connections among them, and replication factors. Moreover, each workflow task is explicitly mapped onto the required type of service. This allows tighter control over resource usage and more precise scheduling policies, to the benefit of performance. The main advantages of a declarative approach are, instead, the easier understanding and extension of existing models, to the benefit of reusability.

    @misc{19:GarrWorkshop,
    abstract = {Negli ultimi anni, gli approcci orientati ai container si sono dimostrati particolarmente efficaci nel garantire portabilit\`{a} e riproducibilit\`{a} dei workflow scientifici. Tuttavia, con il continuo aumento del volume di dati a disposizione e la crescente complessit\`{a} delle procedure di analisi in ogni campo della ricerca, anche i requisiti di performance e riusabilit\`{a} si fanno via via sempre pi\`{u} essenziali. L'obiettivo principale di StreamFlow \`{e} quello di fornire un nuovo paradigma, totalmente dichiarativo, per la descrizione e l'accelerazione di workflow scientifici in ambienti distribuiti. La peculiarit\`{a} di StreamFlow risiede nel fatto che l'ambiente di esecuzione \`{e} interamente descritto in termini di servizi (container), connessioni tra essi e fattori di replica. Inoltre, ogni task del workflow \`{e} esplicitamente mappato sulla tipologia di servizio richiesta. Questo permette un maggior controllo sull'utilizzo delle risorse e politiche di scheduling pi\`{u} precise, a vantaggio delle performance. I principali vantaggi di un approccio dichiarativo sono invece la pi\`{u} facile comprensione ed estensione dei modelli esistenti, a vantaggio della riusabilit\`{a}.},
    address = {Roma, Italy},
    author = {Iacopo Colonnelli},
    howpublished = {Workshop GARR 2019},
    keywords = {streamflow},
    month = {October},
    title = {{StreamFlow}: un approccio dichiarativo a workflow e pipeline di micro-servizi},
    url = {https://datacloud.di.unito.it/index.php/s/kZqyiQnBEQNdXJe},
    year = {2019},
    bdsk-url-1 = {https://datacloud.di.unito.it/index.php/s/kZqyiQnBEQNdXJe},
    bdsk-url-2 = {https://www.eventi.garr.it/it/ws19/programma/speaker/580-iacopo-colonnelli}
    }

  • I. Colonnelli, Deep learning at scale, 27th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP 2019), Pavia, Italy: IEEE, February 2019.
    [BibTeX] [Abstract] [Download PDF]

    This work presents a novel approach to distributed training of deep neural networks (DNNs) that aims to overcome the issues related to mainstream approaches to data parallel training. Established techniques for data parallel training are discussed from both a parallel computing and deep learning perspective, then a different approach is presented that is meant to allow DNN training to scale while retaining good convergence properties. Moreover, an experimental implementation is presented as well as some preliminary results.

    @misc{19:PDPNNT,
    abstract = {This work presents a novel approach to distributed training of deep neural networks (DNNs) that aims to overcome the issues related to mainstream approaches to data parallel training. Established techniques for data parallel training are discussed from both a parallel computing and deep learning perspective, then a different approach is presented that is meant to allow DNN training to scale while retaining good convergence properties. Moreover, an experimental implementation is presented as well as some preliminary results.},
    address = {Pavia, Italy},
    author = {Iacopo Colonnelli},
    howpublished = {27th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP 2019)},
    publisher = {{IEEE}},
    keywords = {misc},
    month = {February},
    title = {Deep Learning at Scale},
    url = {https://datacloud.di.unito.it/index.php/s/nRW9M69C3AtpDoM},
    year = {2019},
    bdsk-url-1 = {https://datacloud.di.unito.it/index.php/s/nRW9M69C3AtpDoM}
    }

  • I. Colonnelli, Accelerating spectral graph analysis through wavefronts of linear algebra operations, 27th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP 2019), Pavia, Italy: IEEE, February 2019.
    [BibTeX] [Abstract] [Download PDF]

    The wavefront pattern captures the unfolding of a parallel computation in which data elements are laid out as a logical multidimensional grid and the dependency graph favours a diagonal sweep across the grid. In the emerging area of spectral graph analysis, the computing often consists in a wavefront running over a tiled matrix, involving expensive linear algebra kernels. While these applications might benefit from parallel heterogeneous platforms (multi-core with GPUs),programming wavefront applications directly with high-performance linear algebra libraries yields code that is complex to write and optimize for the specific application. We advocate a methodology based on two abstractions (linear algebra and parallel pattern-based run-time), that allows to develop portable, self-configuring, and easy-to-profile code on hybrid platforms.

    @misc{19:PDPArmadillo,
    abstract = {The wavefront pattern captures the unfolding of a parallel computation in which data elements are laid out as a logical multidimensional grid and the dependency graph favours a diagonal sweep across the grid. In the emerging area of spectral graph analysis, the computing often consists in a wavefront running over a tiled matrix, involving expensive linear algebra kernels. While these applications might benefit from parallel heterogeneous platforms (multi-core with GPUs),programming wavefront applications directly with high-performance linear algebra libraries yields code that is complex to write and optimize for the specific application. We advocate a methodology based on two abstractions (linear algebra and parallel pattern-based run-time), that allows to develop portable, self-configuring, and easy-to-profile code on hybrid platforms.},
    address = {Pavia, Italy},
    author = {Iacopo Colonnelli},
    howpublished = {27th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP 2019)},
    publisher = {{IEEE}},
    keywords = {misc},
    month = {February},
    title = {Accelerating spectral graph analysis through wavefronts of linear algebra operations},
    url = {https://datacloud.di.unito.it/index.php/s/zK4eSzdsdB8CfQX},
    year = {2019},
    bdsk-url-1 = {https://datacloud.di.unito.it/index.php/s/zK4eSzdsdB8CfQX}
    }