Samuele Fonio
PhD Student
Computer Science Department, University of Turin
Parallel Computing group
Via Pessinetto 12, 10149 Torino – Italy
E-mail: samuele.fonio@unito.it
Samuele Fonio is a PhD student in Modeling and Data Science at UniTo, funded by Leonardo Company.
He graduated in Mathematics in 2020 and in Stochastics and Data Science in 2022, with a thesis in Deep Learning comparing the behaviour of different geometries in image classification tasks using prototype learning.
Fields of interest:
- Deep Learning
- Geometric Deep Learning
- Federated Learning
Publications
2024
Samuele Fonio, Mirko Polato, Roberto Esposito
FedHP: Federated Learning with Hyperspherical Prototypical Regularization Proceedings Article
In: 32nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, (ESANN), Bruges, Belgium, 2024.
Abstract | Links | BibTeX | Tags: ai, icsc
@inproceedings{24:esann:fonio:fedhp,
title = {FedHP: Federated Learning with Hyperspherical Prototypical Regularization},
author = {Samuele Fonio and Mirko Polato and Roberto Esposito},
url = {https://www.esann.org/sites/default/files/proceedings/2024/ES2024-183.pdf},
year = {2024},
date = {2024-10-01},
booktitle = {32nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, (ESANN)},
address = {Bruges, Belgium},
abstract = {This paper presents FedHP, an algorithm that amalgamates federated learning, hyperspherical geometries, and prototype learning. Federated Learning (FL) has garnered attention as a privacy-preserving method for constructing robust models across distributed datasets. Traditionally, FL involves exchanging model parameters to uphold data privacy; however, in scenarios with costly data communication, exchanging large neural network models becomes impractical. In such instances, prototype learning provides a feasible solution by necessitating the exchange of a few class prototypes instead of entire deep learning models. Motivated by these considerations, our approach leverages recent advancements in prototype learning, particularly the benefits offered by non-Euclidean geometries. Alongside introducing FedHP, we provide empirical evidence demonstrating its comparable performance to other state-of-the-art approaches while significantly reducing communication costs.},
keywords = {ai, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
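The central mechanism described in this abstract, exchanging a small set of class prototypes on the unit hypersphere instead of full model parameters, can be sketched in a few lines. The Python/NumPy snippet below is only a hedged illustration of that general idea; the function names, shapes, and aggregation rule are assumptions for illustration and are not taken from the FedHP implementation.

import numpy as np

def local_prototypes(embeddings, labels, num_classes):
    # One unit-norm prototype per class from this client's embeddings.
    protos = np.zeros((num_classes, embeddings.shape[1]))
    for c in range(num_classes):
        class_emb = embeddings[labels == c]
        if len(class_emb) > 0:
            mean = class_emb.mean(axis=0)
            protos[c] = mean / (np.linalg.norm(mean) + 1e-12)
    return protos

def aggregate_prototypes(client_protos):
    # Server side: average client prototypes and project back onto the sphere.
    avg = np.mean(client_protos, axis=0)
    return avg / (np.linalg.norm(avg, axis=1, keepdims=True) + 1e-12)

# Toy round: 3 hypothetical clients, 10 classes, 64-dimensional embeddings.
rng = np.random.default_rng(0)
client_protos = []
for _ in range(3):
    emb = rng.normal(size=(200, 64))
    lab = rng.integers(0, 10, size=200)
    client_protos.append(local_prototypes(emb, lab, num_classes=10))
global_protos = aggregate_prototypes(np.stack(client_protos))
print(global_protos.shape)  # (10, 64): one prototype matrix per client is all that travels

In this toy setting each round exchanges a 10 x 64 matrix per client rather than a full network's worth of weights, which is the communication saving the abstract refers to.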
Marco Edoardo Santimaria, Samuele Fonio, Giulio Malenza, Iacopo Colonnelli, Marco Aldinucci
Benchmarking Parallelization Models through Karmarkar Interior-point method Proceedings Article
In: González-Vélez, Horacio; Chis, Adriana E. (Eds.): Proc. of 32nd Euromicro Intl. Conference on Parallel, Distributed and Network-based Processing (PDP), pp. 1-8, IEEE, Dublin, Ireland, 2024, ISSN: 2377-5750.
Abstract | Links | BibTeX | Tags: HPC, icsc
@inproceedings{24:pdp:karmarkar,
title = {Benchmarking Parallelization Models through Karmarkar Interior-point method},
author = {Marco Edoardo Santimaria and Samuele Fonio and Giulio Malenza and Iacopo Colonnelli and Marco Aldinucci},
editor = {Horacio González-Vélez and Adriana E. Chis},
url = {https://hdl.handle.net/2318/1964571},
doi = {10.1109/PDP62718.2024.00010},
issn = {2377-5750},
year = {2024},
date = {2024-03-01},
booktitle = {Proc. of 32nd Euromicro intl. Conference on Parallel, Distributed and Network-based Processing (PDP)},
pages = {1-8},
publisher = {IEEE},
address = {Dublin, Ireland},
abstract = {Optimization problems are one of the main focuses of scientific research. Their computationally intensive nature makes them prone to parallelization, with consistent improvements in performance. This paper sheds light on different parallel models for accelerating Karmarkar's Interior-point method. To do so, we assess parallelization strategies for individual operations within the aforementioned Karmarkar's algorithm using OpenMP, GPU acceleration with CUDA, and the recent Parallel Standard C++ Linear Algebra library (PSTL) executing both on GPU and CPU. Our different implementations yield interesting benchmark results that show the optimal approach for parallelizing interior point algorithms for general Linear Programming (LP) problems. In addition, we propose a more theoretical perspective of the parallelization of this algorithm, with a detailed study of our OpenMP implementation, showing the limits of optimizing the single operations.},
keywords = {HPC, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
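To make concrete which operations such a study parallelizes, the Python/NumPy sketch below runs one primal affine-scaling iteration, a close relative of Karmarkar's projective method. It is shown only to highlight the dense kernels (forming A D^2 A^T and solving the resulting normal equations) that OpenMP, CUDA, or parallel C++ algorithms would accelerate; it is not the implementation benchmarked in the paper, and all names are illustrative.

import numpy as np

def affine_scaling_step(A, c, x, gamma=0.9):
    # One iteration for min c^T x subject to A x = b, x > 0 (x strictly feasible).
    D2 = x ** 2                               # diag(x)^2, kept as a vector
    AD2 = A * D2                              # A @ diag(x^2): dense scaling kernel
    M = AD2 @ A.T                             # normal-equations matrix A D^2 A^T
    w = np.linalg.solve(M, AD2 @ c)           # dominant cost: solve the normal equations
    r = c - A.T @ w                           # reduced costs
    dx = -D2 * r                              # search direction
    neg = dx < 0
    if not neg.any():                         # no blocking component: stop the sketch
        return x, True
    alpha = gamma * np.min(-x[neg] / dx[neg]) # ratio test keeps x strictly positive
    return x + alpha * dx, False

# Toy LP: min x0 + x1 subject to x0 + x1 + s = 4, from a strictly feasible start.
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 1.0, 0.0])
x = np.array([1.0, 1.0, 2.0])
for _ in range(20):
    x, done = affine_scaling_step(A, c, x)
    if done:
        break
print(x.round(3), "objective:", round(float(c @ x), 3))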
Oussama Harrak, Bruno Casella, Samuele Fonio, Piero Fariselli, Gianluca Mittone, Tiziana Sanavia, Marco Aldinucci
Federated AdaBoost for Survival Analysis Proceedings Article
In: Proceedings of the ECML-PKDD Workshop, 2nd workshop on advancements in Federated Learning, Vilnius, Lithuania, 2024.
Abstract | BibTeX | Tags: epi, icsc
@inproceedings{harrak2024fedsurvboost,
title = {Federated AdaBoost for Survival Analysis},
author = {Oussama Harrak and Bruno Casella and Samuele Fonio and Piero Fariselli and Gianluca Mittone and Tiziana Sanavia and Marco Aldinucci},
year = {2024},
date = {2024-01-01},
booktitle = {Proceedings of the ECML-PKDD Workshop, 2nd workshop on advancements in Federated Learning},
address = {Vilnius, Lithuania},
abstract = {This work proposes FedSurvBoost, a federated learning pipeline for survival analysis based on the AdaBoost.F algorithm, which iteratively aggregates the best local weak hypotheses. Our method extends AdaBoost.F by removing the dependence on the number of classes coefficient from the computation of the weights of the best model. This makes it suitable for regression tasks, such as survival analysis. We show the effectiveness of our approach by comparing it with state-of-the-art methods, specifically developed for survival analysis problems, on two common survival datasets. Our code is available at https://github.com/oussamaHarrak/FedSurvBoost.},
keywords = {epi, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
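The aggregation step the abstract describes can be sketched as follows. This Python snippet is a hedged illustration of a generic federated boosting round in the spirit of AdaBoost.F: every client scores each candidate weak hypothesis on its own weighted data, the errors are pooled, and the committed hypothesis receives the plain AdaBoost weight log((1 - err) / err) with no multiclass log(K - 1) term. The data flow and names are assumptions for illustration, not the FedSurvBoost code.

import numpy as np

def client_error(hypothesis, X, y, sample_weights):
    # Weighted error mass of one candidate hypothesis on one client's data.
    wrong = (hypothesis(X) != y).astype(float)
    return float(np.sum(sample_weights * wrong)), float(np.sum(sample_weights))

def federated_boosting_round(clients, candidates):
    # Pool per-client weighted errors and commit the globally best hypothesis.
    best_idx, best_err = None, np.inf
    for i, h in enumerate(candidates):
        num, den = 0.0, 0.0
        for X, y, w in clients:               # in a real deployment this runs client-side
            e, s = client_error(h, X, y, w)
            num, den = num + e, den + s
        err = num / den
        if err < best_err:
            best_idx, best_err = i, err
    eps = 1e-12
    alpha = np.log((1.0 - best_err + eps) / (best_err + eps))  # no log(K - 1) term
    return best_idx, alpha

# Toy usage: two clients, two candidate "stumps" (one informative, one constant).
rng = np.random.default_rng(1)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 3))
    y = (X[:, 0] > 0).astype(int)
    clients.append((X, y, np.full(50, 1.0 / 50)))
candidates = [lambda X: (X[:, 0] > 0).astype(int),
              lambda X: np.zeros(len(X), dtype=int)]
idx, alpha = federated_boosting_round(clients, candidates)
print("chosen hypothesis:", idx, "weight:", round(alpha, 3))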
2023
Samuele Fonio, Lorenzo Paletto, Mattia Cerrato, Dino Ienco, Roberto Esposito
Hierarchical priors for Hyperspherical Prototypical Networks Proceedings Article
In: 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN, Bruges, Belgium, 2023, (In print).
Abstract | Links | BibTeX | Tags: ai, icsc
@inproceedings{23:esann:fonio,
title = {Hierarchical priors for Hyperspherical Prototypical Networks},
author = {Samuele Fonio and Lorenzo Paletto and Mattia Cerrato and Dino Ienco and Roberto Esposito},
url = {https://www.esann.org/sites/default/files/proceedings/2023/ES2023-65.pdf},
year = {2023},
date = {2023-10-01},
booktitle = {31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN},
address = {Bruges, Belgium},
abstract = {In this paper, we explore the usage of hierarchical priors to improve learning in contexts where the number of available examples is extremely low. Specifically, we consider a Prototype Learning setting where deep neural networks are used to embed data in hyperspherical geometries. In this scenario, we propose an innovative way to learn the prototypes by combining class separation and hierarchical information. In addition, we introduce a contrastive loss function capable of balancing the exploitation of prototypes through a prototype pruning mechanism. We compare the proposed method with state-of-the-art approaches on two public datasets.},
note = {In print},
keywords = {ai, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
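As a rough illustration of combining class separation with hierarchical information on the hypersphere, the PyTorch sketch below optimizes unit-norm prototypes with a separation term (pushing the nearest prototypes apart) plus a term that pulls classes sharing a parent slightly closer. The loss is an assumption made only for illustration and does not reproduce the contrastive loss or the prototype pruning mechanism of the paper.

import torch

def prototype_loss(protos, parents, lam=0.1):
    # protos: (C, d) raw parameters; parents: length-C list of parent class ids.
    P = torch.nn.functional.normalize(protos, dim=1)    # project onto the unit sphere
    cos = P @ P.T                                       # pairwise cosine similarities
    C = P.shape[0]
    off_diag = cos - 2.0 * torch.eye(C)                 # mask out self-similarity
    separation = off_diag.max(dim=1).values.mean()      # push the nearest prototypes apart
    same_parent = torch.tensor(
        [[parents[i] == parents[j] and i != j for j in range(C)] for i in range(C)]
    )
    siblings = cos[same_parent]
    hierarchy = -siblings.mean() if siblings.numel() > 0 else cos.sum() * 0.0
    return separation + lam * hierarchy                 # separation plus hierarchy prior

# Toy usage: 6 classes grouped under 2 parents, 16-dimensional prototypes.
protos = torch.randn(6, 16, requires_grad=True)
parents = [0, 0, 0, 1, 1, 1]
opt = torch.optim.Adam([protos], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = prototype_loss(protos, parents)
    loss.backward()
    opt.step()
print(round(loss.item(), 3))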
Samuele Fonio
Benchmarking Federated Learning Frameworks for Medical Imaging Tasks Proceedings Article
In: Foresti, G. L., Fusiello, A., Hancock, E. (Eds.): Image Analysis and Processing - ICIAP 2023 Workshops. ICIAP 2023, Springer, Cham, Udine, Italy, 2023, (In print).
Abstract | Links | BibTeX | Tags: ai, eupilot, icsc
@inproceedings{23:iciap:fedmed:ws:fonio,
title = {Benchmarking Federated Learning Frameworks for Medical Imaging Tasks},
author = {Samuele Fonio},
editor = {G. L. Foresti and A. Fusiello and E. Hancock},
url = {https://link.springer.com/chapter/10.1007/978-3-031-51026-7_20},
doi = {10.1007/978-3-031-51026-7_20},
year = {2023},
date = {2023-09-01},
booktitle = {Image Analysis and Processing - ICIAP 2023 Workshops. ICIAP 2023},
volume = {14366},
publisher = {Springer, Cham},
address = {Udine, Italy},
abstract = {This paper presents a comprehensive benchmarking study of various Federated Learning (FL) frameworks applied to the task of Medical Image Classification. The research specifically addresses the often neglected and complex aspects of scalability and usability in off-the-shelf FL frameworks. Through experimental validation using real case deployments, we provide empirical evidence of the performance and practical relevance of open source FL frameworks. Our findings contribute valuable insights for anyone interested in deploying a FL system, with a particular focus on the healthcare domain—an increasingly attractive field for FL applications.},
note = {In print},
keywords = {ai, eupilot, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
Gianluca Mittone, Samuele Fonio
Benchmarking Federated Learning Scalability Proceedings Article
In: Proceedings of the 2nd Italian Conference on Big Data and Data Science, ITADATA 2023, September 11-13, 2023, CEUR, Naples, Italy, 2023.
Abstract | Links | BibTeX | Tags: eupilot, HPC, icsc
@inproceedings{23:itadata:extabstract:mittone:fonio,
title = {Benchmarking Federated Learning Scalability},
author = {Gianluca Mittone and Samuele Fonio},
url = {https://hdl.handle.net/2318/1933852},
year = {2023},
date = {2023-09-01},
booktitle = {Proceedings of the 2nd Italian Conference on Big Data and Data Science, ITADATA 2023, September 11-13, 2023},
publisher = {CEUR},
address = {Naples, Italy},
abstract = {Federated Learning (FL) is a widespread Machine Learning paradigm handling distributed Big Data. In this work, we demonstrate that different FL frameworks expose different scaling performances despite adopting the same technologies, highlighting the need for a more comprehensive study on the topic.},
keywords = {eupilot, HPC, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
Bruno Casella, Samuele Fonio
Architecture-Based FedAvg for Vertical Federated Learning Proceedings Article
In: Proceedings of the 3rd Workshop on Distributed Machine Learning for the Intelligent Computing Continuum (DML-ICC), IEEE/ACM UCC 2023, Taormina, Italy, 4 December 2023, 2023, (https://iris.unito.it/bitstream/2318/1949730/1/HALF_HVL_for_DML_ICC23___Taormina-2.pdf).
Abstract | Links | BibTeX | Tags: ai, epi, icsc
@inproceedings{23:casella:architecturalfedavg,
title = {Architecture-Based FedAvg for Vertical Federated Learning},
author = {Bruno Casella and Samuele Fonio},
url = {https://iris.unito.it/retrieve/173d9960-8531-419d-9bd5-5acce6694c4e/Aggregation%20Based%20VFL.pdf},
doi = {10.1145/3603166.3632559},
year = {2023},
date = {2023-01-01},
booktitle = {Proceedings of the 3rd Workshop on Distributed Machine Learning for the Intelligent Computing Continuum (DML-ICC), IEEE/ACM UCC 2023, Taormina, Italy, 4 December 2023},
abstract = {Federated Learning (FL) has emerged as a promising solution to address privacy concerns by collaboratively training Deep Learning (DL) models across distributed parties. This work proposes an architecture-based aggregation strategy in Vertical FL, where parties hold data with different attributes but shared instances. Our approach leverages the identical architectural parts, i.e. neural network layers, of different models to selectively aggregate weights, which is particularly relevant when collaborating with institutions holding different types of datasets, i.e., image, text, or tabular datasets. In a scenario where two entities train DL models, such as a Convolutional Neural Network (CNN) and a Multi-Layer Perceptron (MLP), our strategy computes the average only for architecturally identical segments. This preserves data-specific features learned from demographic and clinical data. We tested our approach on two clinical datasets, i.e., the COVID-CXR dataset and the ADNI study. Results show that our method achieves comparable results with the centralized scenario, in which all the data are collected in a single data lake, and benefits from FL generalizability. In particular, compared to the non-federated models, our proposed proof-of-concept model exhibits a slight performance loss on the COVID-CXR dataset (less than 8%), but outperforms ADNI models by up to 12%. Moreover, communication costs between training rounds are minimized by exchanging only the dense layer parameters.},
note = {https://iris.unito.it/bitstream/2318/1949730/1/HALF_HVL_for_DML_ICC23___Taormina-2.pdf},
keywords = {ai, epi, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
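The "architecturally identical segments" idea can be sketched directly: average only the parameters whose names and shapes match across otherwise different client models, such as a dense head shared by a CNN and an MLP, and leave the modality-specific backbones local. The PyTorch snippet below is an illustrative reading of that strategy; the model classes and the key-matching rule are assumptions, not the authors' code.

import torch
from torch import nn

class SharedHead(nn.Module):
    # Dense head with an identical architecture in every client model.
    def __init__(self, in_dim=32, num_classes=2):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 16)
        self.fc2 = nn.Linear(16, num_classes)
    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

class ImageClient(nn.Module):
    # CNN backbone for image data plus the shared dense head.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(1, 8, 3), nn.AdaptiveAvgPool2d(2),
                                      nn.Flatten(), nn.Linear(32, 32))
        self.head = SharedHead()
    def forward(self, x):
        return self.head(self.backbone(x))

class TabularClient(nn.Module):
    # MLP backbone for tabular data plus the same shared dense head.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
        self.head = SharedHead()
    def forward(self, x):
        return self.head(self.backbone(x))

def architecture_based_average(state_dicts):
    # Average only parameters present in every client with identical shapes.
    ref = state_dicts[0]
    common = [k for k in ref
              if all(k in sd and sd[k].shape == ref[k].shape for sd in state_dicts[1:])]
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(0) for k in common}

clients = [ImageClient(), TabularClient()]
avg = architecture_based_average([m.state_dict() for m in clients])
print(sorted(avg))                         # only the shared head parameters
for m in clients:
    m.load_state_dict(avg, strict=False)   # update the shared head, keep local backbones

In practice the shared segment would be matched by an explicit list of layer names rather than by shape coincidence, but the shape check is enough to show why only the dense-layer parameters need to be exchanged between rounds.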
Talks
2023
Samuele Fonio
Benchmarking Federated Learning Frameworks for Medical Imaging Tasks Miscellaneous
Image Analysis and Processing - ICIAP 2023 - 22nd International Conference - FedMed, 2023.
Abstract | Links | BibTeX | Tags: ai, eupilot, fl, icsc
@misc{23:iciap:benchmed,
title = {Benchmarking Federated Learning Frameworks for Medical Imaging Tasks},
author = {Samuele Fonio},
url = {https://datacloud.di.unito.it/index.php/s/sR7YeTGgfH4DtCR},
year = {2023},
date = {2023-09-01},
address = {Udine, Italy},
abstract = {This paper presents a comprehensive benchmarking study of various Federated Learning (FL) frameworks applied to the task of Medical Image Classification. The research specifically addresses the often neglected and complex aspects of scalability and usability in off-the-shelf FL frameworks. Through experimental validation using real case deployments, we provide empirical evidence of the performance and practical relevance of open source FL frameworks. Our findings contribute valuable insights for anyone interested in deploying a FL system, with a particular focus on the healthcare domain—an increasingly attractive field for FL applications.},
howpublished = {Image Analysis and Processing - ICIAP 2023 - 22nd International Conference - FedMed},
keywords = {ai, eupilot, fl, icsc},
pubstate = {published},
tppubtype = {misc}
}
Gianluca Mittone, Samuele Fonio
Benchmarking Federated Learning Scalability Miscellaneous
2nd Italian Conference on Big Data and Data Science (ITADATA 2023), 2023.
Abstract | Links | BibTeX | Tags: ai, eupilot, fl, icsc
@misc{23:itadata:fl_scaling,
title = {Benchmarking Federated Learning Scalability},
author = {Gianluca Mittone and Samuele Fonio},
url = {https://datacloud.di.unito.it/index.php/s/QZGxC4X3s5LG5oT},
year = {2023},
date = {2023-09-01},
address = {Naples, Italy},
abstract = {Federated Learning (FL) is a widespread Machine Learning paradigm handling distributed Big Data. In this work, we demonstrate that different FL frameworks expose different scaling performances despite adopting the same technologies, highlighting the need for a more comprehensive study on the topic.},
howpublished = {2nd Italian Conference on Big Data and Data Science (ITADATA 2023)},
keywords = {ai, eupilot, fl, icsc},
pubstate = {published},
tppubtype = {misc}
}
Bruno Casella, Samuele Fonio
Architecture-Based FedAvg for Vertical Federated Learning Miscellaneous
2023.
Abstract | Links | BibTeX | Tags: ai, epi, fl, icsc
@misc{23:casella:architecturalfedavgtalk,
title = {Architecture-Based FedAvg for Vertical Federated Learning},
author = {Bruno Casella and Samuele Fonio},
url = {https://datacloud.di.unito.it/index.php/s/kJQxnqG4d2ZSicK},
year = {2023},
date = {2023-01-01},
booktitle = {Proceedings of the 3rd Workshop on Distributed Machine Learning for the Intelligent Computing Continuum (DML-ICC), IEEE/ACM UCC 2023, Taormina, Italy, 4 December 2023},
abstract = {Federated Learning (FL) has emerged as a promising solution to address privacy concerns by collaboratively training Deep Learning (DL) models across distributed parties. This work proposes an architecture-based aggregation strategy in Vertical FL, where parties hold data with different attributes but shared instances. Our approach leverages the identical architectural parts, i.e. neural network layers, of different models to selectively aggregate weights, which is particularly relevant when collaborating with institutions holding different types of datasets, i.e., image, text, or tabular datasets. In a scenario where two entities train DL models, such as a Convolutional Neural Network (CNN) and a Multi-Layer Perceptron (MLP), our strategy computes the average only for architecturally identical segments. This preserves data-specific features learned from demographic and clinical data. We tested our approach on two clinical datasets, i.e., the COVID-CXR dataset and the ADNI study. Results show that our method achieves comparable results with the centralized scenario, in which all the data are collected in a single data lake, and benefits from FL generalizability. In particular, compared to the non-federated models, our proposed proof-of-concept model exhibits a slight performance loss on the COVID-CXR dataset (less than 8%), but outperforms ADNI models by up to 12%. Moreover, communication costs between training rounds are minimized by exchanging only the dense layer parameters.},
keywords = {ai, epi, fl, icsc},
pubstate = {published},
tppubtype = {misc}
}