Robert René Maria Birke
Tenured assistant professor at the Computer Science Department, University of Turin
Parallel Computing group
Via Pessinetto 12, 10149 Torino – Italy
Email: robert.birke@unito.it
Short Bio
Robert Birke is a tenured assistant professor in the Parallel Computing research group at the University of Turin. He received his Ph.D. in Electronics and Communications Engineering from the Politecnico di Torino, Italy, in 2009.
He has been a visiting researcher at IBM Research Zurich, Switzerland, and a Principal Scientist at ABB Corporate Research, Switzerland. His research interests are in the broad area of virtual resource management, including network design, workload characterization, and AI and big-data application optimization.
He has published more than 90 papers in venues on communication, system performance, and machine learning, e.g., SIGCOMM, SIGMETRICS, FAST, INFOCOM, ACML, and JSAC.
He is a senior member of IEEE.
Publications
2024
Gianluca Mittone, Giulio Malenza, Marco Aldinucci, Robert Birke
Distributed Edge Inference: an Experimental Study on Multiview Detection Proceedings Article
In: Proc. of the 16th IEEE/ACM Intl. Conference on Utility and Cloud Computing Companion (UCC), pp. 1-6, ACM, Taormina, Italy, 2024, (eupilot, icsc).
Abstract | Links | BibTeX | Tags: ai, eupilot, icsc
@inproceedings{23:mittone:multiview,
title = {Distributed Edge Inference: an Experimental Study on Multiview Detection},
author = {Gianluca Mittone and Giulio Malenza and Marco Aldinucci and Robert Birke},
url = {https://iris.unito.it/handle/2318/1950083},
doi = {10.1145/3603166.3632561},
year = {2024},
date = {2024-12-01},
booktitle = {Proc. of the 16th IEEE/ACM Intl. Conference on Utility and Cloud Computing Companion (UCC)},
volume = {30},
pages = {1-6},
publisher = {ACM},
address = {Taormina, Italy},
institution = {Computer Science Department, University of Torino},
abstract = {Computing is evolving rapidly to cater to the increasing demand for sophisticated services, and Cloud computing lays a solid foundation for flexible on-demand provisioning. However, as the size of applications grows, the centralised client-server approach used by Cloud computing increasingly limits the applications' scalability. To achieve ultra-scalability, cloud/edge/fog computing converges into the compute continuum, completely decentralising the infrastructure to encompass universal, pervasive resources. The compute continuum makes devising applications benefitting from this complex environment a challenging research problem. We put the opportunities the compute continuum offers to the test through a real-world multi-view detection model (MvDet) implemented with the FastFL C/C++ high-performance edge inference framework. Computational performance is discussed considering many experimental scenarios, encompassing different edge computational capabilities and network bandwidths. We obtain up to 1.92x speedup in inference time over a centralised solution using the same devices.},
note = {eupilot, icsc},
keywords = {ai, eupilot, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
Miruna Bețianu, Abele Mălan, Marco Aldinucci, Robert Birke, Lydia Chen
DALLMi: Domain Adaption for LLM-based Multi-label Classifier Proceedings Article
In: Yang, De-Nian, Xie, Xing, Tseng, Vincent S., Pei, Jian, Huang, Jen-Wei, Lin, Jerry Chun-Wei (Ed.): Proceedings of the 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 277–289, Springer, Taipei, Taiwan, 2024.
Abstract | Links | BibTeX | Tags: ai, eupilot, icsc
@inproceedings{24:betianu:llm,
title = {DALLMi: Domain Adaption for LLM-based Multi-label Classifier},
author = {Miruna Bețianu and Abele Mălan and Marco Aldinucci and Robert Birke and Lydia Chen},
editor = {De-Nian Yang and Xing Xie and Vincent S. Tseng and Jian Pei and Jen-Wei Huang and Jerry Chun-Wei Lin},
url = {https://hdl.handle.net/2318/1976672},
doi = {10.1007/978-981-97-2259-4_21},
year = {2024},
date = {2024-05-01},
booktitle = {Proceedings of the 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining},
volume = {14647},
pages = {277–289},
publisher = {Springer},
address = {Taipei, Taiwan},
series = {Lecture Notes in Computer Science},
abstract = {Large language models (LLMs) increasingly serve as the backbone for classifying text associated with distinct domains and simultaneously several labels (classes). When encountering domain shifts, e.g., a classifier of movie reviews from IMDb to Rotten Tomatoes, adapting such an LLM-based multi-label classifier is challenging due to incomplete label sets at the target domain and daunting training overhead. The existing domain adaptation methods address either image multi-label classifiers or text binary classifiers. In this paper, we design DALLMi, Domain Adaptation Large Language Model interpolator, a first-of-its-kind semi-supervised domain adaptation method for text data models based on LLMs, specifically BERT. The core of DALLMi is the novel variation loss and MixUp regularization, which jointly leverage the limited positively labeled and large quantity of unlabeled text and, importantly, their interpolation from the BERT word embeddings. DALLMi also introduces a label-balanced sampling strategy to overcome the imbalance between labeled and unlabeled data. We evaluate DALLMi against partially-supervised and unsupervised approaches on three datasets under different scenarios of label availability for the target domain. Our results show that DALLMi achieves higher mAP than unsupervised and partially-supervised approaches by 19.9% and 52.2%, respectively.},
keywords = {ai, eupilot, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
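The MixUp interpolation at the core of DALLMi can be illustrated with a minimal sketch, assuming PyTorch; the function name mixup_embeddings and its arguments are hypothetical, and the paper's actual variation loss and sampling strategy are more involved than this blend.

import torch

def mixup_embeddings(emb_labeled, emb_unlabeled, alpha=0.4):
    # emb_*: (batch, seq_len, hidden) BERT word embeddings.
    # Draw the mixing coefficient from Beta(alpha, alpha), as in classic MixUp.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Interpolate labeled and unlabeled embeddings; lam can be reused
    # to weight the (partial) label targets of the labeled sample.
    return lam * emb_labeled + (1.0 - lam) * emb_unlabeled, lam

The mixed embeddings would then be fed through the encoder (e.g., via inputs_embeds) and scored by one binary head per label, which is how BERT-style models are typically adapted to multi-label classification.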
Chi Hong, Robert Birke, Pin-Yu Chen, Lydia Chen
On Dark Knowledge for Distilling Generators Proceedings Article
In: Yang, De-Nian, Xie, Xing, Tseng, Vincent S., Pei, Jian, Huang, Jen-Wei, Lin, Jerry Chun-Wei (Ed.): Proceedings of the 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 235–247, Springer, Taipei, Taiwan, 2024.
Abstract | Links | BibTeX | Tags: ai, epi, icsc
@inproceedings{24:chen:llm,
title = {On Dark Knowledge for Distilling Generators},
author = {Chi Hong and Robert Birke and Pin-Yu Chen and Lydia Chen},
editor = {De-Nian Yang and Xing Xie and Vincent S. Tseng and Jian Pei and Jen-Wei Huang and Jerry Chun-Wei Lin},
url = {https://hdl.handle.net/2318/1976671},
doi = {10.1007/978-981-97-2253-2_19},
year = {2024},
date = {2024-05-01},
booktitle = {Proceedings of the 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining},
volume = {14646},
pages = {235–247},
publisher = {Springer},
address = {Taipei, Taiwan},
series = {Lecture Notes in Computer Science},
abstract = {Knowledge distillation has been applied on generative models, such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). To distill the knowledge, the synthetic outputs of a teacher generator are used to train a student model. While the dark knowledge, i.e., the probabilistic output, is well explored in distilling classifiers, little is known about the existence of an equivalent dark knowledge for generative models and its extractability. In this paper, we derive the first kind of empirical risk bound for distilling generative models from a Bayesian perspective. Through our analysis, we show the existence of the dark knowledge for generative models, i.e., the Bayes probability distribution of a synthetic output from a given input, which achieves a lower empirical risk bound than merely using the synthetic output of the generators. Furthermore, we propose a Dark Knowledge based Distillation, DKtill, which trains the student generator based on the (approximate) dark knowledge. Our extensive evaluation on distilling VAE, conditional GANs, and translation GANs on the Facades and CelebA datasets shows that the FID of student generators trained by DKtill combining dark knowledge is lower than that of student generators trained only on the synthetic outputs by up to 42.66% and 78.99%, respectively.},
keywords = {ai, epi, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
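The distillation objective can be sketched as follows, assuming PyTorch. Averaging several stochastic teacher outputs is only one possible stand-in for the Bayes output distribution the paper calls dark knowledge, so treat distill_step as a hypothetical approximation, not DKtill itself.

import torch

def distill_step(student, teacher, z, n_samples=8):
    with torch.no_grad():
        # Monte-Carlo estimate of the teacher's output-distribution mean
        # for the same input; meaningful only if the teacher is stochastic
        # (e.g., a VAE decoder or a GAN with noise/dropout active).
        target = torch.stack([teacher(z) for _ in range(n_samples)]).mean(dim=0)
    # Plain distillation would instead regress on a single synthetic sample.
    return torch.nn.functional.mse_loss(student(z), target)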
Bruno Casella, Iacopo Colonnelli, Gianluca Mittone, Robert Birke, Walter Riviera, Antonio Sciarappa, Carlo Cavazzoni, Marco Aldinucci
A Performance Analysis for Confidential Federated Learning Proceedings Article
In: Proceedings of the 2024 Deep Learning Security and Privacy Workshop, IEEE Symposium on Security and Privacy 2024, San Francisco, CA, 2024.
Abstract | Links | BibTeX | Tags: ai, confidential, epi, icsc
@inproceedings{24:casella:sgx,
title = {A Performance Analysis for Confidential Federated Learning},
author = {Bruno Casella and Iacopo Colonnelli and Gianluca Mittone and Robert Birke and Walter Riviera and Antonio Sciarappa and Carlo Cavazzoni and Marco Aldinucci},
url = {https://iris.unito.it/retrieve/b5877a97-2d8d-4e95-8791-0aa4a1b953b3/DLSP___CONFIDENTIAL_FL.pdf},
doi = {10.1109/SPW63631.2024.00009},
year = {2024},
date = {2024-05-01},
booktitle = {Proceedings of the 2024 Deep Learning Security and Privacy Workshop, IEEE Symposium on Security and Privacy 2024},
address = {San Francisco, CA},
abstract = {Federated Learning (FL) has emerged as a solution to preserve data privacy by keeping the data locally on each participant's device. However, FL alone is still vulnerable to attacks that can cause privacy leaks. Therefore, it becomes necessary to take additional security measures at the cost of increasing runtimes. The Trusted Execution Environment (TEE) approach promises to offer the highest degree of security during execution. However, TEEs suffer from memory limits which prevent safe end-to-end FL training of modern deep models. State-of-the-art approaches limit secure training to selected layers, failing to avert the full spectrum of attacks or adopt layer-wise training affecting model performance. We benchmark the usage of a library OS (LibOS) to run the full, unmodified end-to-end FL training inside the TEE. We extensively evaluate and model the overhead of the different security mechanisms needed to protect the data and model during computation (TEE), communication (TLS), and storage (disk encryption). The obtained results across three datasets and two models demonstrate that LibOSes are a viable way to seamlessly inject security into FL with limited overhead (at most 2x), offering valuable guidance for researchers and developers aiming to apply FL in data-security-focused contexts.},
keywords = {ai, confidential, epi, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
Giulio Malenza, Valentina Cesare, Marco Edoardo Santimaria, Robert Birke, Alberto Vecchiato, Ugo Becciani, Marco Aldinucci
Performance portability via C++ PSTL, SYCL, OpenMP, and HIP: the Gaia AVU-GSR case study Proceedings Article
In: SC24-W: Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1152-1163, IEEE, 2024, ISBN: 979-8-3503-5554-3.
Abstract | Links | BibTeX | Tags: eupex, icsc
@inproceedings{Malenza_P3HPC_24,
title = {Performance portability via C++ PSTL, SYCL, OpenMP, and HIP: the Gaia AVU-GSR case study},
author = {Giulio Malenza and Valentina Cesare and Marco Edoardo Santimaria and Robert Birke and Alberto Vecchiato and Ugo Becciani and Marco Aldinucci},
url = {https://conferences.computer.org/sc-wpub/pdfs/SC-W2024-6oZmigAQfgJ1GhPL0yE3pS/555400b152/555400b152.pdf},
doi = {10.1109/SCW63240.2024.00157},
isbn = {979-8-3503-5554-3},
year = {2024},
date = {2024-01-01},
booktitle = {SC24-W: Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis},
pages = {1152-1163},
publisher = {IEEE},
abstract = {Applications that analyze data from modern scientific experiments will soon require a computing capacity of ExaFLOPs. The current trend to achieve such performance is to employ GPU-accelerated supercomputers and design applications to optimally exploit this hardware. Since each supercomputer is typically a one-off project, the necessity of having computational languages portable across diverse CPU and GPU architectures without performance losses is increasingly compelling. Here, we study the performance portability of the LSQR algorithm as found in the AVU-GSR code of the ESA Gaia mission. This code computes the astrometric parameters of the ~10^8 stars in our Galaxy. The LSQR algorithm is widely used across a broad range of high-performance computing (HPC) applications, elevating the study's relevance beyond the astrophysical domain. We developed different GPU-accelerated ports based on CUDA, C++ PSTL, SYCL, OpenMP, and HIP. We carefully verified the correctness of each port and tuned them to five different GPU-accelerated platforms from NVIDIA and AMD to evaluate the performance portability (PP) in terms of the harmonic mean of the application's performance efficiency across the tested hardware. HIP was demonstrated to be the most portable solution with a 0.94 average PP across the tested problem sizes, closely followed by SYCL coupled with AdaptiveCpp (ACPP) with 0.93. If we only consider NVIDIA platforms, CUDA would be the winner with 0.97. The tuning-oblivious C++ PSTL achieves 0.62 when coupled with vendor-specific compilers.},
keywords = {eupex, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
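The PP figures quoted above follow the standard harmonic-mean definition of performance portability (Pennycook et al.); a small helper makes the computation concrete. The efficiency values in the example are illustrative, not measurements from the paper.

def performance_portability(efficiencies):
    # efficiencies: per-platform performance efficiency in [0, 1],
    # i.e., achieved performance divided by best-known performance.
    # By convention PP is 0 if the code fails to run on any platform.
    if any(e == 0 for e in efficiencies):
        return 0.0
    return len(efficiencies) / sum(1.0 / e for e in efficiencies)

print(performance_portability([0.97, 0.95, 0.92, 0.93, 0.94]))  # ~0.94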
Adriano Marques Garcia, Giulio Malenza, Robert Birke, Marco Aldinucci
Assessing Large Language Models Inference Performance on a 64-core RISC-V CPU with Silicon-Enabled Vectors Proceedings Article
In: Antelmi, Alessia, Carlini, Emanuele, Dazzi, Patrizio (Ed.): Proceedings of BigHPC2024: Special Track on Big Data and High-Performance Computing, co-located with the 3rd Italian Conference on Big Data and Data Science, ITADATA2024, pp. 1-9, CEUR-WS.org, Pisa, Italy, 2024.
Abstract | Links | BibTeX | Tags: eupilot, icsc
@inproceedings{24:garcia:itadata,
title = {Assessing Large Language Models Inference Performance on a 64-core RISC-V CPU with Silicon-Enabled Vectors},
author = {Adriano Marques Garcia and Giulio Malenza and Robert Birke and Marco Aldinucci},
editor = {Alessia Antelmi and Emanuele Carlini and Patrizio Dazzi},
url = {https://iris.unito.it/retrieve/1540f675-5e88-4f57-95e7-df8e0fe5f1df/paper110.pdf},
year = {2024},
date = {2024-01-01},
booktitle = {Proceedings of BigHPC2024: Special Track on Big Data and High-Performance Computing, co-located with the 3rd Italian Conference on Big Data and Data Science, ITADATA2024},
volume = {3785},
pages = {1-9},
publisher = {CEUR-WS.org},
address = {Pisa, Italy},
series = {CEUR Workshop Proceedings},
abstract = {The rising usage of compute-intensive AI applications with fast response time requirements, such as text generation using large language models, underscores the need for more efficient and versatile hardware solutions. This drives the exploration of emerging architectures like RISC-V, which has the potential to deliver strong performance within tight power constraints. The recent commercial release of processors with RISC-V Vector (RVV) silicon-enabled extensions further amplifies the significance of RISC-V architectures, offering enhanced capabilities for parallel processing and accelerating tasks critical to large language models and other AI applications. This work aims to evaluate the BERT and GPT-2 language models inference performance on the SOPHON SG2042 64-core RISC-V architecture with silicon-enabled RVV v0.7.1. We benchmarked the models with and without RVV, using OpenBLAS and BLIS as BLAS backends for PyTorch to enable vectorization. Enabling RVV in OpenBLAS improved the inference performance by up to 40% in some cases.},
keywords = {eupilot, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
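A measurement of this kind reduces to timing forward passes under a given BLAS backend. A minimal sketch, assuming the Hugging Face transformers package is installed; whether RVV is actually exercised depends on how the OpenBLAS or BLIS library linked into PyTorch was compiled, not on this script.

import time
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()
inputs = tok("A short benchmark sentence.", return_tensors="pt")

with torch.no_grad():
    model(**inputs)                      # warm-up run
    start = time.perf_counter()
    for _ in range(10):                  # average over repeated inferences
        model(**inputs)
print((time.perf_counter() - start) / 10, "s per inference")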
Iacopo Colonnelli, Robert Birke, Giulio Malenza, Gianluca Mittone, Alberto Mulone, Jeroen Galjaard, Lydia Y. Chen, Sanzio Bassini, Gabriella Scipione, Jan Martinovič, Vit Vondrák, Marco Aldinucci
Cross-Facility Federated Learning Journal Article
In: Procedia Computer Science, vol. 240, pp. 3–12, 2024, ISSN: 1877-0509.
Abstract | Links | BibTeX | Tags: icsc, space, streamflow
@article{24:eurohpc:xffl,
title = {Cross-Facility Federated Learning},
author = {Iacopo Colonnelli and Robert Birke and Giulio Malenza and Gianluca Mittone and Alberto Mulone and Jeroen Galjaard and Lydia Y. Chen and Sanzio Bassini and Gabriella Scipione and Jan Martinovič and Vit Vondrák and Marco Aldinucci},
url = {https://www.sciencedirect.com/science/article/pii/S1877050924016909},
doi = {10.1016/j.procs.2024.07.003},
issn = {1877-0509},
year = {2024},
date = {2024-01-01},
booktitle = {Proceedings of the First EuroHPC user day},
journal = {Procedia Computer Science},
volume = {240},
pages = {3–12},
publisher = {Elsevier},
address = {Bruxelles, Belgium},
abstract = {In a decade, AI frontier research transitioned from the researcher's workstation to thousands of high-end hardware-accelerated compute nodes. This rapid evolution shows no signs of slowing down in the foreseeable future. While top cloud providers may be able to keep pace with this growth rate, obtaining and efficiently exploiting computing resources at that scale is a daunting challenge for universities and SMEs. This work introduces the Cross-Facility Federated Learning (XFFL) framework to bridge this compute divide, extending the opportunity to efficiently exploit multiple independent data centres for extreme-scale deep learning tasks to data scientists and domain experts. XFFL relies on hybrid workflow abstractions to decouple tasks from environment-specific technicalities, reducing complexity and enhancing reusability. In addition, Federated Learning (FL) algorithms eliminate the need to move large amounts of data between different facilities, reducing time-to-solution and preserving data privacy. The XFFL approach is empirically evaluated by training a full LLaMAv2 7B instance on two facilities of the EuroHPC JU, showing how the increased computing power completely compensates for the additional overhead introduced by two data centres.},
keywords = {icsc, space, streamflow},
pubstate = {published},
tppubtype = {article}
}
Simon Queyrut, Robert Birke, Pascal Felber, Valerio Schiavoni
CLUES: Collusive Theft of Conditional Generative Adversarial Networks Proceedings Article
In: 43rd International Symposium on Reliable Distributed Systems (SRDS), 2024.
BibTeX | Tags: ai, icsc
@inproceedings{24:queyrut:srds,
title = {CLUES: Collusive Theft of Conditional Generative Adversarial Networks},
author = {Simon Queyrut and Robert Birke and Pascal Felber and Valerio Schiavoni},
year = {2024},
date = {2024-01-01},
booktitle = {43rd International Symposium on Reliable Distributed Systems (SRDS)},
keywords = {ai, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
Zilong Zhao, Aditya Kunar, Robert Birke, Hiek van der Scheer, Lydia Y. Chen
CTAB-GAN+: enhancing tabular data synthesis Journal Article
In: Frontiers Big Data, vol. 6, 2024.
@article{24:fdata:zhao,
title = {CTAB-GAN+: enhancing tabular data synthesis},
author = {Zilong Zhao and Aditya Kunar and Robert Birke and Hiek van der Scheer and Lydia Y. Chen},
url = {https://doi.org/10.3389/fdata.2023.1296508},
doi = {10.3389/FDATA.2023.1296508},
year = {2024},
date = {2024-01-01},
journal = {Frontiers Big Data},
volume = {6},
keywords = {ai},
pubstate = {published},
tppubtype = {article}
}
Nur Zincir-Heywood, Robert Birke, Elias Bou-Harb, Takeru Inoue, Neeraj Kumar, Hanan Lutfiyya, Deepak Puthal, Abdallah Shami, Natalia Stakhanova
Guest Editorial: Special section on Networks, Systems, and Services Operations and Management Through Intelligence Journal Article
In: IEEE Trans. Netw. Serv. Manag., vol. 21, no. 3, pp. 2608–2612, 2024.
@article{24:tnsm:nur,
title = {Guest Editorial: Special section on Networks, Systems, and Services Operations and Management Through Intelligence},
author = {Nur Zincir-Heywood and Robert Birke and Elias Bou-Harb and Takeru Inoue and Neeraj Kumar and Hanan Lutfiyya and Deepak Puthal and Abdallah Shami and Natalia Stakhanova},
url = {https://doi.org/10.1109/TNSM.2024.3416861},
doi = {10.1109/TNSM.2024.3416861},
year = {2024},
date = {2024-01-01},
journal = {IEEE Trans. Netw. Serv. Manag.},
volume = {21},
number = {3},
pages = {2608–2612},
keywords = {ai},
pubstate = {published},
tppubtype = {article}
}
2023
Zilong Zhao, Robert Birke, Lydia Y. Chen
FCT-GAN: Enhancing Global Correlation of Table Synthesis via Fourier Transform Proceedings Article
In: 32nd ACM International Conference on Information and Knowledge Management (CIKM '23), ACM, Birmingham, United Kingdom, 2023.
Abstract | Links | BibTeX | Tags: icsc
@inproceedings{23:zhao:fctgan,
title = {FCT-GAN: Enhancing Global Correlation of Table Synthesis via Fourier Transform},
author = {Zilong Zhao and Robert Birke and Lydia Y. Chen},
url = {https://iris.unito.it/retrieve/966ba767-dbbd-41e1-b4e3-7ab7ba09303f/FCT-GAN.pdf},
doi = {10.1145/3583780.3615202},
year = {2023},
date = {2023-10-01},
booktitle = {32nd ACM International Conference on Information and Knowledge Management (CIKM '23)},
publisher = {ACM},
address = {Birmingham, United Kingdom},
abstract = {An alternative method for sharing knowledge while complying with strict data access regulations, such as the European General Data Protection Regulation (GDPR), is the emergence of synthetic tabular data. Mainstream table synthesizers utilize methodologies derived from Generative Adversarial Networks (GAN). Although several state-of-the-art (SOTA) tabular GAN algorithms inherit Convolutional Neural Network (CNN)-based architectures, which have proven effective for images, they tend to overlook two critical properties of tabular data: (i) the global correlation across columns, and (ii) the semantic invariance to the column order. Permuting columns in a table does not alter the semantic meaning of the data, but features extracted by CNNs can change significantly due to their limited convolution filter kernel size. To address the above problems, we propose FCT-GAN, the first conditional tabular GAN to adopt Fourier networks for table synthesis. FCT-GAN enhances permutation-invariant GAN training by strengthening the learning of global correlations via Fourier layers. Extensive evaluation on benchmarks and real-world datasets shows that FCT-GAN can synthesize tabular data with better (up to 27.8%) machine learning utility (i.e., a proxy of global correlations) and higher (up to 26.5%) statistical similarity to real data. FCT-GAN also has the least variation in synthetic data quality among 7 SOTA baselines on 3 different training-data column orders.},
keywords = {icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
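The Fourier-layer idea can be sketched with an FNet-style mixing block, assuming PyTorch; FCT-GAN's actual architecture may differ, so FourierMix is only an illustration of how a spectral transform gives every output a view of all columns at once.

import torch

class FourierMix(torch.nn.Module):
    # A 2D FFT over the (columns, features) axes mixes information
    # globally: each output coefficient depends on every column,
    # which is what lets such layers capture global correlations
    # largely independently of column order.
    def forward(self, x):          # x: (batch, n_columns, d_feature)
        return torch.fft.fft2(x).real

mixed = FourierMix()(torch.randn(4, 10, 16))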
Chi Hong, Jiyue Huang, Robert Birke, Lydia Y. Chen
Exploring and Exploiting Data-Free Model Stealing Proceedings Article
In: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), Turin, Italy, 2023.
Abstract | Links | BibTeX | Tags: eupilot, icsc
@inproceedings{23:hong:datafree,
title = {Exploring and Exploiting Data-Free Model Stealing},
author = {Chi Hong and Jiyue Huang and Robert Birke and Lydia Y. Chen},
url = {https://iris.unito.it/retrieve/ce44dec6-12c9-443d-99e7-f1141e50aa3a/Data-free%20Model%20Stealing.pdf},
doi = {10.1007/978-3-031-43424-2_2},
year = {2023},
date = {2023-09-01},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)},
address = {Turin, Italy},
abstract = {Deep machine learning models, e.g., image classifiers, are increasingly deployed in the wild to provide services to users. Adversaries are shown capable of stealing the knowledge of these models by sending inference queries and then training substitute models based on query results. The availability and quality of adversarial query inputs are undoubtedly crucial in the stealing process. The recent prior art demonstrates the feasibility of replacing real data by exploring synthetic adversarial queries, so-called data-free attacks, under strong adversarial assumptions, i.e., the deployed classifier returns not only class labels but also class probabilities. In this paper, we consider a general adversarial model and propose an effective data-free stealing algorithm, TandemGAN, which not only explores synthetic queries but also explicitly exploits the high-quality ones. The core of TandemGAN is composed of (i) a substitute model which imitates the target model through synthetic queries and their inferred labels; and (ii) a tandem generator consisting of two networks, Gx and Ge, which first explores the synthetic data space via Gx and then exploits high-quality examples via Ge to maximize the knowledge transfer from the target to the substitute model. Our results on four datasets show that the accuracy of our trained substitute model ranges between 67% and 96% of the target model's accuracy and outperforms the existing state-of-the-art data-free model stealing approach by up to 2.5X.},
keywords = {eupilot, icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
Gianluca Mittone, Walter Riviera, Iacopo Colonnelli, Robert Birke, Marco Aldinucci
Model-Agnostic Federated Learning Proceedings Article
In: Euro-Par 2023: Parallel Processing, pp. 383–396, Springer, Limassol, Cyprus, 2023.
Abstract | Links | BibTeX | Tags: ai, confidential, eupilot, icsc, riscv
@inproceedings{23:mittone:mafl,
title = {Model-Agnostic Federated Learning},
author = {Gianluca Mittone and Walter Riviera and Iacopo Colonnelli and Robert Birke and Marco Aldinucci},
url = {https://doi.org/10.1007/978-3-031-39698-4_26},
doi = {10.1007/978-3-031-39698-4_26},
year = {2023},
date = {2023-08-01},
booktitle = {Euro-Par 2023: Parallel Processing},
volume = {14100},
pages = {383–396},
publisher = {Springer},
address = {Limassol, Cyprus},
institution = {Computer Science Department, University of Torino},
abstract = {Since its debut in 2016, Federated Learning (FL) has been tied to the inner workings of Deep Neural Networks (DNNs). On the one hand, this allowed its development and widespread use as DNNs proliferated. On the other hand, it neglected all those scenarios in which using DNNs is not possible or advantageous. The fact that most current FL frameworks only allow training DNNs reinforces this problem. To address the lack of FL solutions for non-DNN-based use cases, we propose MAFL (Model-Agnostic Federated Learning). MAFL marries a model-agnostic FL algorithm, AdaBoost.F, with an open industry-grade FL framework: Intel OpenFL. MAFL is the first FL system not tied to any specific type of machine learning model, allowing exploration of FL scenarios beyond DNNs and trees. We test MAFL from multiple points of view, assessing its correctness, flexibility and scaling properties up to 64 nodes. We optimised the base software achieving a 5.5x speedup on a standard FL scenario. MAFL is compatible with x86-64, ARM-v8, Power and RISC-V.},
keywords = {ai, confidential, eupilot, icsc, riscv},
pubstate = {published},
tppubtype = {inproceedings}
}
Zilong Zhao, Robert Birke, Lydia Y. Chen
GDTS: GAN-based Distributed Tabular Synthesizer Proceedings Article
In: 16th IEEE International Conference on Cloud Computing (CLOUD), IEEE, Chicago, USA, 2023.
Abstract | Links | BibTeX | Tags: ai
@inproceedings{23:cloud:gdts,
title = {GDTS: GAN-based Distributed Tabular Synthesizer},
author = {Zilong Zhao and Robert Birke and Lydia Y. Chen},
url = {https://iris.unito.it/retrieve/8bc610de-3ccd-4a0a-b97f-ee329e487b76/GDTS_IEEE_CLOUD_preprint.pdf},
doi = {10.1109/CLOUD60044.2023.00078},
year = {2023},
date = {2023-07-01},
booktitle = {16th IEEE International Conference on Cloud Computing (CLOUD)},
publisher = {IEEE},
address = {Chicago, USA},
abstract = {Generative Adversarial Networks (GANs) are typically trained to synthesize data, from images and more recently tabular data, under the assumption of directly accessible training data. While learning image GANs on Federated Learning (FL) and Multi-Discriminator (MD) systems has just been demonstrated, it is unknown if tabular GANs can be learned from decentralized data sources. Different from image GANs, state-of-the-art tabular GANs require prior knowledge on the data distribution of each (discrete and continuous) column to agree on a common encoding – risking privacy guarantees. In this paper, we propose GDTS, a distributed framework for GAN-based tabular synthesizers. GDTS provides different system architectures to match the two training paradigms termed GDTS FL and GDTS MD. Key to enabling learning on distributed data is the proposed novel privacy-preserving multi-source feature encoding to capture the global data properties. In addition, GDTS encompasses a weighting strategy based on table similarity to counter the detrimental effects of non-IID data and a validation pipeline to easily assess and compare the performance of different paradigms and hyperparameters. We evaluate the effectiveness of GDTS in terms of synthetic data quality and overall training scalability. Experiments show that GDTS FL achieves better statistical similarity and machine learning utility between generated and original data compared to GDTS MD.},
keywords = {ai},
pubstate = {published},
tppubtype = {inproceedings}
}
Iacopo Colonnelli, Robert Birke, Marco Aldinucci
Experimenting with PyTorch on RISC-V Proceedings Article
In: RISC-V Summit Europe 2023, Barcelona, Spain, 2023, (Poster).
Abstract | Links | BibTeX | Tags: eupilot, icsc, riscv
@inproceedings{23:risc-v-summit,
title = {Experimenting with PyTorch on RISC-V},
author = {Iacopo Colonnelli and Robert Birke and Marco Aldinucci},
url = {https://iris.unito.it/retrieve/429bf344-9090-42c3-809c-1b8ac320a930/2023-06-08-Iacopo-COLONNELLI-abstract.pdf},
year = {2023},
date = {2023-06-01},
booktitle = {RISC-V Summit Europe 2023},
address = {Barcelona, Spain},
abstract = {RISC-V is an emerging instruction set architecture. Its modular and extensible open-source royalty-free design is increasingly attracting interest from both research and industry. Nowadays, different RISC-V-based boards can be bought off the shelf. However, software availability is equally vital in guaranteeing the RISC-V ecosystem's success. Here we contribute the first publicly available port of PyTorch. PyTorch is one of the most popular Deep Learning libraries available today. As such, it is a crucial enabler in running state-of-the-art AI applications on RISC-V-based systems and a first step towards a fully democratic end-to-end codesign process.},
note = {Poster},
keywords = {eupilot, icsc, riscv},
pubstate = {published},
tppubtype = {inproceedings}
}
Marco Aldinucci, Robert Birke, Antonio Brogi, Emanuele Carlini, Massimo Coppola, Marco Danelutto, Patrizio Dazzi, Luca Ferrucci, Stefano Forti, Hanna Kavalionak, Gabriele Mencagli, Matteo Mordacchini, Marcelo Pasin, Federica Paganelli, Massimo Torquati
A Proposal for a Continuum-aware Programming Model: From Workflows to Services Autonomously Interacting in the Compute Continuum Proceedings Article
In: 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC), IEEE, Turin, Italy, 2023.
Abstract | Links | BibTeX | Tags: icsc
@inproceedings{23:aldinucci:continuum,
title = {A Proposal for a Continuum-aware Programming Model: From Workflows to Services Autonomously Interacting in the Compute Continuum},
author = {Marco Aldinucci and Robert Birke and Antonio Brogi and Emanuele Carlini and Massimo Coppola and Marco Danelutto and Patrizio Dazzi and Luca Ferrucci and Stefano Forti and Hanna Kavalionak and Gabriele Mencagli and Matteo Mordacchini and Marcelo Pasin and Federica Paganelli and Massimo Torquati},
url = {https://iris.unito.it/retrieve/2ae13a33-5814-43da-8ea6-2d3e8b122384/Continuum-aware-PM.pdf},
doi = {10.1109/COMPSAC57700.2023.00287},
year = {2023},
date = {2023-06-01},
booktitle = {2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC)},
publisher = {IEEE},
address = {Turin, Italy},
abstract = {This paper proposes a continuum-aware programming model enabling the execution of application workflows across the compute continuum: cloud, fog and edge resources. It simplifies the management of heterogeneous nodes while alleviating the burden of programmers and unleashing innovation. This model optimizes the continuum through advanced development experiences by transforming workflows into autonomous service collaborations. It reduces complexity in positioning/interconnecting services across the continuum. A meta-model introduces high-level workflow descriptions as service networks with defined contracts and quality of service, thus enabling the deployment/management of workflows as first-class entities. It also provides automation based on policies, monitoring and heuristics. Tailored mechanisms orchestrate/manage services across the continuum, optimizing performance, cost, data protection and sustainability while managing risks. This model facilitates incremental development with visibility of design impacts and seamless evolution of applications and infrastructures. In this work, we explore this new computing paradigm showing how it can trigger the development of a new generation of tools to support the compute continuum progress.},
keywords = {icsc},
pubstate = {published},
tppubtype = {inproceedings}
}
Jani Valtari, Anna Kulmala, Sandro Schönborn, David Kozhaya, Robert Birke, Jyrki Reikko
Real-life Pilot of Virtual Protection and Control - Experiences and Performance Analysis Proceedings Article
In: 27th International Conference on Electricity Distribution (CIRED), Rome, Italy, 2023.
Abstract | Links | BibTeX | Tags: RT
@inproceedings{23:valtari:pilot,
title = {Real-life Pilot of Virtual Protection and Control - Experiences and Performance Analysis},
author = {Jani Valtari and Anna Kulmala and Sandro Schönborn and David Kozhaya and Robert Birke and Jyrki Reikko},
url = {https://iris.unito.it/retrieve/5de5fb00-02bf-4ba8-a4db-5876415d5105/virtualization_full_paper_cired2023_submitted.pdf},
doi = {10.1049/icp.2023.1219},
year = {2023},
date = {2023-06-01},
booktitle = {27th International Conference on Electricity Distribution (CIRED)},
address = {Rome, Italy},
abstract = {Virtualized protection and control (VPC) is seen as a promising evolution for the centralized protection and control (CPC) concept. Centralization of protection functions consolidates the functions of multiple traditional relays into one device. This consolidation reduces communications network complexity and offers effective ways to manage protection applications of the substation. Making the CPC available as a VPC software image instead of a dedicated device creates yet another degree of freedom. The solution becomes hardware independent, bringing more flexibility and scalability to the solution. ABB and Caruna together wanted to explore these possibilities in a real-life substation pilot. This paper describes the piloted VPC environment and the results from the piloting period. The results show that virtualization technology is suitable for time critical protection and control applications, with real-time performance comparable to existing non-virtualized solutions.},
keywords = {RT},
pubstate = {published},
tppubtype = {inproceedings}
}
Sandro Schönborn, Robert Birke, David Kozhaya, Thanikesavan Sivanthi
Real-Time Performance of Virtualised Protection and Control Software Proceedings Article
In: 27th International Conference on Electricity Distribution (CIRED), Rome, Italy, 2023.
Abstract | Links | BibTeX | Tags: RT
@inproceedings{23:schoenborn:vipac,
title = {Real-Time Performance of Virtualised Protection and Control Software},
author = {Sandro Schönborn and Robert Birke and David Kozhaya and Thanikesavan Sivanthi},
url = {https://iris.unito.it/retrieve/eb610327-6e38-4f5e-8673-e62f2d956821/10702-Scho%cc%88nborn.pdf},
doi = {10.1049/icp.2023.1028},
year = {2023},
date = {2023-06-01},
booktitle = {27th International Conference on Electricity Distribution (CIRED)},
address = {Rome, Italy},
abstract = {Substation automation is ever challenged by the integration of distributed energy resources which imposes higher deployment flexibility and adaptability for protection and control. Although virtualization helps to run software applications independent of the underlying platform in IT infrastructures and cloud computing, it is still not commonly used in the field of substation automation. This is mainly due to the real-time performance demands of substation automation protection and control applications. In this article, we present an approach for running substation automation protection and control software in virtual environments. We contrast the real-time performance of different virtualization technologies under different workloads and focus on the performance evaluation of protection and control software in container-based solutions running on Linux with PREEMPT RT. We also present additional results for performance achieved in virtual machines. Our results clearly demonstrate that it is possible to run substation automation protection and control software in virtual environments while still providing the necessary performance. This paves the way for the deployment of substation protection and control software in virtualisation environments.},
keywords = {RT},
pubstate = {published},
tppubtype = {inproceedings}
}
Gianluca Mittone, Nicolò Tonci, Robert Birke, Iacopo Colonnelli, Doriana Medić, Andrea Bartolini, Roberto Esposito, Emanuele Parisi, Francesco Beneventi, Mirko Polato, Massimo Torquati, Luca Benini, Marco Aldinucci
Experimenting with Emerging RISC-V Systems for Decentralised Machine Learning Proceedings Article
In: 20th ACM International Conference on Computing Frontiers (CF '23), ACM, Bologna, Italy, 2023, ISBN: 979-8-4007-0140-5/23/05, (https://arxiv.org/abs/2302.07946).
Abstract | Links | BibTeX | Tags: ai, confidential, eupilot, HPC, icsc, riscv
@inproceedings{23:mittone:fl-riscv,
title = {Experimenting with Emerging RISC-V Systems for Decentralised Machine Learning},
author = {Gianluca Mittone and Nicolò Tonci and Robert Birke and Iacopo Colonnelli and Doriana Medić and Andrea Bartolini and Roberto Esposito and Emanuele Parisi and Francesco Beneventi and Mirko Polato and Massimo Torquati and Luca Benini and Marco Aldinucci},
url = {https://dl.acm.org/doi/pdf/10.1145/3587135.3592211},
doi = {10.1145/3587135.3592211},
isbn = {979-8-4007-0140-5/23/05},
year = {2023},
date = {2023-05-01},
booktitle = {20th ACM International Conference on Computing Frontiers (CF '23)},
publisher = {ACM},
address = {Bologna, Italy},
institution = {Computer Science Department, University of Torino},
abstract = {Decentralised Machine Learning (DML) enables collaborative machine learning without centralised input data. Federated Learning (FL) and Edge Inference are examples of DML. While tools for DML (especially FL) are starting to flourish, many are not flexible and portable enough to experiment with novel systems (e.g., RISC-V), non-fully connected topologies, and asynchronous collaboration schemes. We overcome these limitations via a domain-specific language that allows mapping DML schemes onto an underlying middleware, i.e. the FastFlow parallel programming library. We experiment with it by generating different working DML schemes on two emerging architectures (ARM-v8, RISC-V) and the x86-64 platform. We characterise the performance and energy efficiency of the presented schemes and systems. As a byproduct, we introduce a RISC-V porting of the PyTorch framework, the first publicly available to our knowledge.},
note = {https://arxiv.org/abs/2302.07946},
keywords = {ai, confidential, eupilot, HPC, icsc, riscv},
pubstate = {published},
tppubtype = {inproceedings}
}
William Fornaciari, Federico Reghenzani, Federico Terraneo, Davide Baroffio, Cecilia Metra, Martin Omana, Josie E. Rodriguez Condia, Matteo Sonza Reorda, Robert Birke, Iacopo Colonnelli, Gianluca Mittone, Marco Aldinucci, Gabriele Mencagli, Francesco Iannone, Filippo Palombi, Giuseppe Zummo, Daniele Cesarini, Federico Tesser
RISC-V-based Platforms for HPC: Analyzing Non-functional Properties for Future HPC and Big-Data Clusters Proceedings Article
In: Embedded Computer Systems: Architectures, Modeling, and Simulation - 23rd International Conference, SAMOS 2023, Samos, Greece, 2023, (icsc).
Abstract | Links | BibTeX | Tags: icsc, riscv
@inproceedings{23:SAMOS,
title = {RISC-V-based Platforms for HPC: Analyzing Non-functional Properties for Future HPC and Big-Data Clusters},
author = {William Fornaciari and Federico Reghenzani and Federico Terraneo and Davide Baroffio and Cecilia Metra and Martin Omana and Josie E. Rodriguez Condia and Matteo Sonza Reorda and Robert Birke and Iacopo Colonnelli and Gianluca Mittone and Marco Aldinucci and Gabriele Mencagli and Francesco Iannone and Filippo Palombi and Giuseppe Zummo and Daniele Cesarini and Federico Tesser},
url = {https://iris.unito.it/retrieve/b627eab0-3aa1-4fd7-8685-f47c62c792b3/SAMOS_2023_CN_HPC_FL1.pdf},
doi = {10.1007/978-3-031-46077-7_26},
year = {2023},
date = {2023-01-01},
booktitle = {Embedded Computer Systems: Architectures, Modeling, and Simulation - 23rd International Conference, SAMOS 2023},
address = {Samos, Greece},
abstract = {High-Performance Computing (HPC) has evolved to be used to perform simulations of systems where physical experimentation is prohibitively impractical, expensive, or dangerous. This paper provides a general overview and showcases the analysis of non-functional properties in RISC-V-based platforms for HPCs. In particular, our analyses target the evaluation of power and energy control, thermal management, and reliability assessment of promising systems, structures, and technologies devised for current and future generations of HPC machines. The main set of design methodologies and technologies developed within the activities of the Future HPC & Big Data spoke of the National Centre of HPC, Big Data and Quantum Computing project is described along with the description of the testbed for experimenting with two-phase cooling approaches.},
note = {icsc},
keywords = {icsc, riscv},
pubstate = {published},
tppubtype = {inproceedings}
}
Amirmasoud Ghiassi, Robert Birke, Lydia Chen
Robust Learning via Golden Symmetric Loss of (un)Trusted Labels Proceedings Article
In: SDM '23: SIAM International Conference on Data Mining, pp. 568–576, 2023.
Abstract | Links | BibTeX | Tags: textarossa
@inproceedings{sdm-ghiassi23,
title = {Robust Learning via Golden Symmetric Loss of (un)Trusted Labels},
author = {Amirmasoud Ghiassi and Robert Birke and Lydia Chen},
url = {https://datacloud.di.unito.it/index.php/s/b6z3moNLxnNiCxz},
doi = {10.1137/1.9781611977653.ch64},
year = {2023},
date = {2023-01-01},
booktitle = {SDM '23: SIAM International Conference on Data Mining},
pages = {568–576},
abstract = {Learning robust deep models against noisy labels becomes ever critical when today's data is commonly collected from open platforms and subject to adversarial corruption. The information on the label corruption process, i.e., the corruption matrix, can greatly enhance the robustness of deep models but still falls behind in combating hard classes. In this paper, we propose to construct a golden symmetric loss (GSL) based on the estimated corruption matrix so as to avoid overfitting to noisy labels and learn effectively from hard classes. GSL is the weighted sum of the corrected regular cross entropy and reverse cross entropy. By leveraging a small fraction of trusted clean data, we estimate the corruption matrix and use it to correct the loss as well as to determine the weights of GSL. We theoretically prove the robustness of the proposed loss function in the presence of dirty labels. We provide a heuristic to adaptively tune the loss weights of GSL according to the noise rate and diversity measured from the dataset. We evaluate our proposed golden symmetric loss on both vision and natural language deep models subject to different types of label noise patterns. Empirical results show that GSL can significantly outperform the existing robust training methods on different noise patterns, showing accuracy improvements of up to 18% on CIFAR-100 and 1% on the real-world noisy dataset Clothing1M.},
keywords = {textarossa},
pubstate = {published},
tppubtype = {inproceedings}
}
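The loss described in the abstract, a weighted sum of a corruption-corrected cross entropy and a reverse cross entropy, can be written down directly. A minimal sketch assuming PyTorch; the paper's exact correction and its adaptive weighting heuristic may differ.

import torch
import torch.nn.functional as F

def golden_symmetric_loss(logits, labels, T, w1=1.0, w2=1.0, eps=1e-6):
    # T: estimated corruption matrix, T[i, j] = P(observed j | true i),
    # fit on a small trusted subset.
    p = F.softmax(logits, dim=1)
    p_noisy = p @ T                                       # predictions pushed through the noise
    ce = F.nll_loss(torch.log(p_noisy + eps), labels)     # corrected regular cross entropy
    onehot = F.one_hot(labels, p.size(1)).float()
    rce = -(p * torch.log(onehot + eps)).sum(dim=1).mean()  # reverse cross entropy
    return w1 * ce + w2 * rce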
2022
Yujin Zhu, Zilong Zhao, Robert Birke, Lydia Y. Chen
Permutation-Invariant Tabular Data Synthesis Proceedings Article
In: Tsumoto, Shusaku, Ohsawa, Yukio, Chen, Lei, Van den Poel, Dirk, Hu, Xiaohua, Motomura, Yoichi, Takagi, Takuya, Wu, Lingfei, Xie, Ying, Abe, Akihiro, Raghavan, Vijay (Ed.): IEEE International Conference on Big Data (Big Data), pp. 5855–5864, IEEE, 2022.
Abstract | Links | BibTeX | Tags: analytics
@inproceedings{bigdata-zhu22,
title = {Permutation-Invariant Tabular Data Synthesis},
author = {Yujin Zhu and Zilong Zhao and Robert Birke and Lydia Y. Chen},
editor = {Shusaku Tsumoto and Yukio Ohsawa and Lei Chen and Dirk Van den Poel and Xiaohua Hu and Yoichi Motomura and Takuya Takagi and Lingfei Wu and Ying Xie and Akihiro Abe and Vijay Raghavan},
url = {https://datacloud.di.unito.it/index.php/s/b6z3moNLxnNiCxz},
doi = {10.1109/BigData55660.2022.10020639},
year = {2022},
date = {2022-12-01},
booktitle = {IEEE International Conference on Big Data (Big Data)},
pages = {5855–5864},
publisher = {IEEE},
abstract = {Tabular data synthesis is an emerging approach to circumvent strict regulations on data privacy while discovering knowledge through big data. Although state-of-the-art AI-based tabular data synthesizers, e.g., table-GAN, CTGAN, TVAE, and CTAB-GAN, are effective at generating synthetic tabular data, their training is sensitive to column permutations of input data. In this paper, we first conduct an extensive empirical study to disclose such a property of permutation invariance and an in-depth analysis of the existing synthesizers. We show that changing the input column order worsens the statistical difference between real and synthetic data by up to 38.67% due to the encoding of tabular data and the network architectures. To fully unleash the potential of big synthetic tabular data, we propose two solutions: (i) AE-GAN, a synthesizer that uses an autoencoder network to represent the tabular data and GAN networks to synthesize the latent representation, and (ii) a feature sorting algorithm to find the suitable column order of input data for CNN-based synthesizers. We evaluate the proposed solutions on five datasets in terms of the sensitivity to the column permutation, the quality of synthetic data, and the utility in downstream analyses. Our results show that we enhance the property of permutation-invariance when training synthesizers and further improve the quality and utility of synthetic data, up to 22%, compared to the existing synthesizers.},
keywords = {analytics},
pubstate = {published},
tppubtype = {inproceedings}
}
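One plausible reading of the feature-sorting idea is a greedy ordering that keeps strongly correlated columns adjacent, so that a CNN's local filters see related features together; the paper's actual algorithm may differ. A sketch assuming NumPy:

import numpy as np

def sort_columns_by_correlation(X):
    # Greedily chain columns: always append the column most correlated
    # (in absolute value) with the last one placed.
    corr = np.abs(np.corrcoef(X, rowvar=False))
    order, remaining = [0], set(range(1, X.shape[1]))
    while remaining:
        nxt = max(remaining, key=lambda j: corr[order[-1], j])
        order.append(nxt)
        remaining.remove(nxt)
    return order  # permutation to apply to the input columns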
Christopher Stewart, Nathaniel Morris, Lydia Y. Chen, Robert Birke
Performance Modeling for Short-Term Cache Allocation Proceedings Article
In: Proceedings of the 51st International Conference on Parallel Processing (ICPP), pp. 31:1–31:11, ACM, 2022.
Abstract | Links | BibTeX | Tags: parallel
@inproceedings{icpp-stewart22,
title = {Performance Modeling for Short-Term Cache Allocation},
author = {Christopher Stewart and Nathaniel Morris and Lydia Y. Chen and Robert Birke},
url = {https://doi.org/10.1145/3545008.3545094},
doi = {10.1145/3545008.3545094},
year = {2022},
date = {2022-08-01},
booktitle = {Proceedings of the 51st International Conference on Parallel Processing (ICPP)},
pages = {31:1–31:11},
publisher = {ACM},
abstract = {Short-term cache allocation grants and then revokes access to processor cache lines dynamically. For online services, short-term allocation can speed up targeted query executions and free up cache lines reserved, but normally not needed, for performance. However, in collocated settings, short-term allocation can increase cache contention, slowing down collocated query executions. To offset slowdowns, collocated services may request short-term allocation more often, making the problem worse. Short-term allocation policies manage which queries receive cache allocations and when. In collocated settings, these policies should balance targeted query speedups against slowdowns caused by recurring cache contention. We present a model-driven approach that (1) predicts response time under a given policy, (2) explores competing policies and (3) chooses policies that yield low response time for all collocated services. Our approach profiles cache usage offline, characterizes the effects of cache allocation policies using deep learning techniques and devises novel performance models for short-term allocation with online services. We tested our approach using data processing, cloud, and high-performance computing benchmarks collocated on Intel processors equipped with Cache Allocation Technology. Our models predicted median response time with 11% absolute percent error. Short-term allocation policies found using our approach outperformed state-of-the-art shared cache allocation policies by 1.2-2.3X.},
keywords = {parallel},
pubstate = {published},
tppubtype = {inproceedings}
}
Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen
LABNET: A Collaborative Method for DNN Training and Label Aggregation Proceedings Article
In: Rocha, Ana Paula, Steels, Luc, van den Herik, H. Jaap (Ed.): 14th International Conference on Agents and Artificial Intelligence (ICAART), pp. 56–66, SCITEPRESS, 2022.
Abstract | Links | BibTeX | Tags:
@inproceedings{ghiassi/iccart22,
title = {LABNET: A Collaborative Method for DNN Training and Label Aggregation},
author = {Amirmasoud Ghiassi and Robert Birke and Lydia Y. Chen},
editor = {Ana Paula Rocha and Luc Steels and H. Jaap van den Herik},
url = {https://www.scitepress.org/Link.aspx?doi=10.5220/0010770400003116},
doi = {10.5220/0010770400003116},
year = {2022},
date = {2022-02-01},
booktitle = {14th International Conference on Agents and Artificial Intelligence (ICAART)},
pages = {56–66},
publisher = {SCITEPRESS},
abstract = {Today, to label the massive datasets needed to train Deep Neural Networks (DNNs), cheap and error-prone methods such as crowdsourcing are used. Label aggregation methods aim to infer the true labels from noisy labels annotated by crowdsourcing workers via label statistics features. Aggregated labels are the main data source to train deep neural networks, and their accuracy directly affects the deep neural network performance. In this paper, we argue that training DNNs and aggregating labels are not two separate tasks. Coupling DNN training and label aggregation connects data features, noisy labels, and aggregated labels. Since each image contains valuable knowledge about its label, the data features help aggregation methods enhance their performance. We propose LABNET, an iterative two-step method. Step one: the label aggregation algorithm provides labels to train the DNN. Step two: the DNN shares a representation of the data features with the label aggregation algorithm. These steps are repeated until the label aggregation error rate converges. To evaluate LABNET, we conduct an extensive empirical comparison on CIFAR-10 and CIFAR-100 under different noise and worker statistics. Our evaluation results show that LABNET achieves the highest mean accuracy with an increase of at least 8% to 0.6% and the lowest error rate with a reduction of 7.5% to 0.25% against existing aggregation and training methods in most cases.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
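The aggregation step such methods start from can be as simple as per-sample majority voting over worker labels; LABNET then iterates this with DNN training, feeding learned features back into the aggregator. A sketch of the plain voting step only, assuming NumPy:

import numpy as np

def majority_vote(votes):
    # votes: (n_samples, n_workers) matrix of integer labels from
    # crowdsourcing workers; returns the plurality label per sample.
    n_classes = votes.max() + 1
    counts = np.apply_along_axis(np.bincount, 1, votes, minlength=n_classes)
    return counts.argmax(axis=1)

print(majority_vote(np.array([[0, 0, 1], [2, 1, 2]])))  # [0 2]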
Bart Cox, Robert Birke, Lydia Y. Chen
Memory-aware and context-aware multi-DNN inference on the edge Journal Article
In: Pervasive and Mobile Computing, vol. 83, pp. 1–16, 2022, ISSN: 1574-1192.
Abstract | Links | BibTeX | Tags: ai
@article{COX2022101594,
title = {Memory-aware and context-aware multi-DNN inference on the edge},
author = {Bart Cox and Robert Birke and Lydia Y. Chen},
url = {https://www.sciencedirect.com/science/article/pii/S1574119222000372},
doi = {https://doi.org/10.1016/j.pmcj.2022.101594},
issn = {1574-1192},
year = {2022},
date = {2022-01-01},
journal = {Pervasive and Mobile Computing},
volume = {83},
pages = {1–16},
abstract = {Deep neural networks (DNNs) are becoming the core components of many applications running on edge devices, especially for real-time image-based analysis. Increasingly, multi-faceted knowledge is extracted by executing multiple DNN inference models, e.g., identifying objects, faces, and genders from images. It is of paramount importance to guarantee low response times of such multi-DNN executions as it affects not only users' quality of experience but also safety. The challenge, largely unaddressed by the state of the art, is how to overcome the memory limitation of edge devices without altering the DNN models. In this paper, we design and implement Masa, a responsive memory-aware multi-DNN execution and scheduling framework, which requires no modification of DNN models. The aim of Masa is to consistently ensure the average response time when deterministically and stochastically executing multiple DNN-based image analyses. The enabling features of Masa are (i) modeling inter- and intra-network dependency, (ii) leveraging complementary memory usage of each layer, and (iii) exploring the context dependency of DNNs. We verify the correctness and scheduling optimality via mixed integer programming. We extensively evaluate two versions of Masa, context-oblivious and context-aware, on three configurations of Raspberry Pi and a large set of popular DNN models triggered by different generation patterns of images. Our evaluation results show that Masa can achieve lower average response times by up to 90% on devices with small memory, i.e., 512 MB to 1 GB, compared to state-of-the-art multi-DNN scheduling solutions.},
keywords = {ai},
pubstate = {published},
tppubtype = {article}
}
2021
Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen
TrustNet: Learning from Trusted Data Against (A)symmetric Label Noise Proceedings Article
In: 8th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies (BDCAT), pp. 52–62, ACM, 2021.
Abstract | Links | BibTeX | Tags:
@inproceedings{bdcat-ghiassi21,
title = {TrustNet: Learning from Trusted Data Against (A)symmetric Label Noise},
author = {Amirmasoud Ghiassi and Robert Birke and Lydia Y. Chen},
url = {https://doi.org/10.1145/3492324.3494166},
doi = {10.1145/3492324.3494166},
year = {2021},
date = {2021-12-01},
booktitle = {8th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies (BDCAT)},
pages = {52–62},
publisher = {ACM},
abstract = {Big Data systems allow collecting massive datasets to feed the data-hungry deep learning. Labelling these ever-bigger datasets is increasingly challenging and label errors affect even highly curated sets. This makes robustness to label noise a critical property for weakly-supervised classifiers. The related works on resilient deep networks tend to focus on a limited set of synthetic noise patterns, and with disparate views on their impacts, e.g., robustness against symmetric vs. asymmetric noise patterns. In this paper, we first extend the theoretical analysis of test accuracy for any given noise patterns. Based on the insights, we design TrustNet that first learns the pattern of noise corruption, be it symmetric or asymmetric, from a small set of trusted data. Then, TrustNet is trained via a robust loss function, which weights the given labels against the inferred labels from the learned noise pattern. The weight is adjusted based on model uncertainty across training epochs. We evaluate TrustNet on synthetic label noise for CIFAR-10, CIFAR-100 and big real-world data with label noise, i.e., Clothing1M. We compare against state-of-the-art methods demonstrating the strong robustness of TrustNet under a diverse set of noise patterns.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
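The weighting described in the abstract can be sketched as a convex combination of two cross-entropy terms, assuming PyTorch; w is a plain scalar here, whereas the paper adjusts it across epochs based on model uncertainty.

import torch
import torch.nn.functional as F

def trustnet_loss(logits, given_labels, T, w=0.5):
    # T: learned corruption matrix, T[i, j] = P(observed j | true i),
    # estimated from the small trusted set.
    ce_given = F.cross_entropy(logits, given_labels)
    # Most likely true class for each observed label under T.
    inferred = T[:, given_labels].argmax(dim=0)
    ce_inferred = F.cross_entropy(logits, inferred)
    return w * ce_given + (1.0 - w) * ce_inferred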
Zilong Zhao, Aditya Kunar, Robert Birke, Lydia Y. Chen
CTAB-GAN: Effective Table Data Synthesizing Proceedings Article
In: Balasubramanian, Vineeth N., Tsang, Ivor (Ed.): Proceedings of The 13th Asian Conference on Machine Learning, pp. 97–112, PMLR, 2021.
Abstract | Links | BibTeX | Tags:
@inproceedings{pmlr-v157-zhao21a,
title = {CTAB-GAN: Effective Table Data Synthesizing},
author = {Zilong Zhao and Aditya Kunar and Robert Birke and Lydia Y. Chen},
editor = {Vineeth N. Balasubramanian and Ivor Tsang},
url = {https://proceedings.mlr.press/v157/zhao21a.html},
year = {2021},
date = {2021-11-01},
booktitle = {Proceedings of The 13th Asian Conference on Machine Learning},
volume = {157},
pages = {97–112},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
abstract = {While data sharing is crucial for knowledge development, privacy concerns and strict regulation (e.g., the European General Data Protection Regulation (GDPR)) unfortunately limit its full effectiveness. Synthetic tabular data emerges as an alternative to enable data sharing while fulfilling regulatory and privacy constraints. State-of-the-art tabular data synthesizers draw methodologies from Generative Adversarial Networks (GANs) and address the two main data types in industry, i.e., continuous and categorical. In this paper, we develop CTAB-GAN, a novel conditional table GAN architecture that can effectively model diverse data types, including a mix of continuous and categorical variables. Moreover, we address data imbalance and long-tail issues, i.e., certain variables having drastic frequency differences across large values. To achieve these aims, we first introduce the information loss, classification loss and generator loss to the conditional GAN. Secondly, we design a novel conditional vector, which efficiently encodes the mixed data types and skewed distributions of data variables. We extensively evaluate CTAB-GAN against state-of-the-art GANs that generate synthetic tables, in terms of data similarity and analysis utility. The results on five datasets show that the synthetic data of CTAB-GAN remarkably resembles the real data for all three types of variables and results in higher accuracy for five machine learning algorithms, by up to 17%.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
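As a sketch of the conditional-vector idea for imbalanced categorical columns, the snippet below samples a category by log frequency and one-hot encodes it. The names and the exact log-frequency choice are illustrative assumptions, not the paper's precise construction.

import numpy as np

def sample_cond_vector(column, rng=None):
    """column: 1-D array of categorical values from one table column."""
    rng = rng or np.random.default_rng()
    cats, counts = np.unique(column, return_counts=True)
    # Log-frequency sampling flattens the long tail so that rare
    # categories still condition the generator during training.
    probs = np.log(counts + 1.0)
    probs /= probs.sum()
    chosen = rng.choice(len(cats), p=probs)
    cond = np.zeros(len(cats), dtype=np.float32)
    cond[chosen] = 1.0            # one-hot conditional vector
    return cats[chosen], cond

In a CTAB-GAN-style setup, the returned cond vector would be concatenated to the generator's noise input and matched against the real sample drawn from the chosen category.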
Taraneh Younesian, Zilong Zhao, Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen
QActor: Active Learning on Noisy Labels Proceedings Article
In: Balasubramanian, Vineeth N., Tsang, Ivor (Ed.): Proceedings of The 13th Asian Conference on Machine Learning, pp. 548–563, PMLR, 2021.
Abstract | Links | BibTeX | Tags:
@inproceedings{pmlr-v157-younesian21a,
title = {QActor: Active Learning on Noisy Labels},
author = {Taraneh Younesian and Zilong Zhao and Amirmasoud Ghiassi and Robert Birke and Lydia Y. Chen},
editor = {Vineeth N. Balasubramanian and Ivor Tsang},
url = {https://proceedings.mlr.press/v157/younesian21a.html},
year = {2021},
date = {2021-11-01},
booktitle = {Proceedings of The 13th Asian Conference on Machine Learning},
volume = {157},
pages = {548–563},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
abstract = {Noisy labeled data is more a norm than a rarity for self-generated content that is continuously published on the web and social media by non-experts. Actively querying experts to label informative samples is a conventional alternative to relying on possibly incorrect labels. The new challenge that arises here is how to discern the informative and noisy labels which benefit most from expert cleaning. In this paper, we aim to leverage a stringent oracle budget to robustly maximize learning accuracy. We propose a noise-aware active learning framework, QActor, and a novel measure, CENT, which considers both cross-entropy and entropy to select informative and noisy labels for expert cleansing. QActor iteratively cleans samples via quality models and actively queries an expert on those noisy yet informative samples. To adapt to the learning capacity per iteration, QActor dynamically adjusts the query limit according to the learning loss of each learning iteration. We extensively evaluate different image datasets with label noise ratios ranging between 30% and 60%. Our results show that QActor can nearly match the optimal accuracy achieved using only clean data at the cost of only an additional 10% of ground truth data from the oracle.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
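A minimal sketch of a CENT-like score, combining the cross-entropy of the given label with the prediction entropy to surface noisy yet informative samples; the exact combination used by QActor may differ, so treat this as an illustration of the idea only.

import numpy as np

def cent_scores(probs, given_labels):
    """probs: (N, C) predicted class probabilities; given_labels: (N,)."""
    eps = 1e-12
    # Cross-entropy w.r.t. the given label: high values flag likely noise.
    ce = -np.log(probs[np.arange(len(given_labels)), given_labels] + eps)
    # Prediction entropy: high values flag informative, uncertain samples.
    ent = -(probs * np.log(probs + eps)).sum(axis=1)
    return ce + ent

# The expert would then be queried on the top-k scored samples, with k
# capped by the per-iteration budget that QActor adapts to the learning loss.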
Giuliano Albanese, Robert Birke, Georgia Giannopoulou, Sandro Schönborn, Thanikesavan Sivanthi
Evaluation of Networking Options for Containerized Deployment of Real-Time Applications Proceedings Article
In: 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1–8, IEEE, 2021.
Abstract | Links | BibTeX | Tags:
@inproceedings{etfa-albanese21,
title = {Evaluation of Networking Options for Containerized Deployment of Real-Time Applications},
author = {Giuliano Albanese and Robert Birke and Georgia Giannopoulou and Sandro Schönborn and Thanikesavan Sivanthi},
url = {https://doi.org/10.1109/ETFA45728.2021.9613320},
doi = {10.1109/ETFA45728.2021.9613320},
year = {2021},
date = {2021-09-01},
booktitle = {26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)},
pages = {1–8},
publisher = {IEEE},
abstract = {Enterprises in the field of industrial automation experience an increasing demand for providing virtualized software solutions. Inspired by the recent trends in serverless and cloud computing, software virtualization is considered even for safety-critical applications with hard real-time requirements, as a means of avoiding hardware vendor lock-in and reducing volume and maintenance cost of devices. In this work, we evaluate the applicability of OS-level virtualization to an industrial automation use case. Our application runs in Docker containers on top of Linux patched with PREEMPT_RT. We investigate the ability of Docker coupled with diverse networking technologies to fulfill the latency requirements of the application under normal or heavy system load. We empirically compare four networking technologies with respect to communication latency and frequency of missing packets. The results indicate that Docker with certain technologies, such as the Single Root I/O Virtualization interface, performs robustly even under heavy load, enabling sufficient performance isolation and low overhead that does not jeopardise the real-time performance of our application.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
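The kind of measurement behind such a comparison can be sketched as a round-trip probe that records latency percentiles and missed packets. This is a simplification of the paper's PREEMPT_RT setup: host, port and an echoing peer are assumptions of the sketch.

import socket
import statistics
import time

def probe_latency(host, port, n=1000, timeout_s=1.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)
    rtts, missed = [], 0
    for _ in range(n):
        t0 = time.perf_counter()
        sock.sendto(b"ping", (host, port))
        try:
            sock.recvfrom(64)                              # peer echoes the datagram
            rtts.append((time.perf_counter() - t0) * 1e6)  # microseconds
        except socket.timeout:
            missed += 1                                    # counts missing packets
    return {"p50_us": statistics.median(rtts),
            "p99_us": statistics.quantiles(rtts, n=100)[98],
            "missed": missed}

Running such a probe under idle and stress conditions, once per networking option (bridge, host, SR-IOV, ...), yields the latency/loss comparison the abstract describes.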
Amirmasoud Ghiassi, Robert Birke, Rui Han, Lydia Y. Chen
LABELNET: Recovering Noisy Labels Proceedings Article
In: International Joint Conference on Neural Networks (IJCNN), pp. 1–8, IEEE, 2021.
Abstract | Links | BibTeX | Tags:
@inproceedings{ijcnn-ghiassi21,
title = {LABELNET: Recovering Noisy Labels},
author = {Amirmasoud Ghiassi and Robert Birke and Rui Han and Lydia Y. Chen},
url = {https://doi.org/10.1109/IJCNN52387.2021.9533562},
doi = {10.1109/IJCNN52387.2021.9533562},
year = {2021},
date = {2021-07-01},
booktitle = {International Joint Conference on Neural Networks (IJCNN)},
pages = {1–8},
publisher = {IEEE},
abstract = {Today's available datasets in the wild, e.g., from social media and open platforms, present tremendous opportunities and challenges for deep learning, as there is a significant portion of tagged images, but often with noisy, i.e. erroneous, labels. Recent studies improve the robustness of deep models against noisy labels without the knowledge of true labels. In this paper, we advocate deriving a stronger classifier which proactively makes use of the noisy labels in addition to the original images - turning noisy labels into learning features. To this end, we propose a novel framework, LABELNET, composed of Amateur and Expert, which iteratively learn from each other. Amateur is a regular image classifier trained by the feedback of Expert, which imitates how human experts would correct the predicted labels from Amateur using the noise pattern learnt from the knowledge of both the noisy and ground truth labels. The trained Amateur and Expert proactively leverage the images and their noisy labels to infer image classes. Our empirical evaluations on noisy versions of MNIST, CIFAR-10, CIFAR-100 and real-world data of Clothing1M show that the proposed model can achieve robust classification against a wide range of noise ratios and with as little as 20-50% of the training data, compared to state-of-the-art deep models that solely focus on distilling the impact of noisy labels.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
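A rough skeleton of the Amateur/Expert interplay, assuming a small subset where ground-truth labels are available to train the Expert. Models, optimizers and the loader are placeholders, so this only shows the feedback loop, not the authors' code.

import torch
import torch.nn.functional as F

def train_round(amateur, expert, loader, opt_a, opt_e, n_classes):
    for images, noisy_y, true_y in loader:   # true_y: small trusted subset
        noisy_1h = F.one_hot(noisy_y, n_classes).float()
        # Expert learns to correct Amateur's prediction given the noisy label.
        with torch.no_grad():
            pred = amateur(images).softmax(dim=1)
        loss_e = F.cross_entropy(expert(pred, noisy_1h), true_y)
        opt_e.zero_grad(); loss_e.backward(); opt_e.step()
        # Amateur trains on the Expert's corrected labels as feedback.
        out = amateur(images)
        with torch.no_grad():
            target = expert(out.softmax(dim=1), noisy_1h).argmax(dim=1)
        loss_a = F.cross_entropy(out, target)
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()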
Chi Hong, Amirmasoud Ghiassi, Yichi Zhou, Robert Birke, Lydia Y. Chen
Online Label Aggregation: A Variational Bayesian Approach Proceedings Article
In: Leskovec, Jure, Grobelnik, Marko, Najork, Marc, Tang, Jie, Zia, Leila (Ed.): WWW '21: The Web Conference 2021, pp. 1904–1915, ACM / IW3C2, 2021.
Abstract | Links | BibTeX | Tags: ai
@inproceedings{www-hong21,
title = {Online Label Aggregation: A Variational Bayesian Approach},
author = {Chi Hong and Amirmasoud Ghiassi and Yichi Zhou and Robert Birke and Lydia Y. Chen},
editor = {Jure Leskovec and Marko Grobelnik and Marc Najork and Jie Tang and Leila Zia},
url = {https://doi.org/10.1145/3442381.3449933},
doi = {10.1145/3442381.3449933},
year = {2021},
date = {2021-04-01},
booktitle = {WWW '21: The Web Conference 2021},
pages = {1904–1915},
publisher = {ACM / IW3C2},
abstract = {Noisy labeled data is more a norm than a rarity for crowd-sourced content. Aggregating results from crowd workers is an effective way to distill noise and infer correct labels. To ensure time relevance and overcome the slow responses of workers, online label aggregation is increasingly requested, calling for solutions that can incrementally infer the true label distribution via subsets of data items. In this paper, we propose a novel online label aggregation framework, BiLA, which employs a variational Bayesian inference method and designs a novel stochastic optimization scheme for incremental training. BiLA is flexible enough to accommodate any generating distribution of labels via the exact computation of its posterior distribution. We also derive the convergence bound of the proposed optimizer. We compare BiLA with the state of the art based on minimax entropy, neural networks and expectation maximization algorithms, on synthetic and real-world data sets. Our evaluation results on various online scenarios show that BiLA can effectively infer the true labels, with an error rate reduction of at least 10 and 1.5 percentage points for synthetic and real-world datasets, respectively.},
keywords = {ai},
pubstate = {published},
tppubtype = {inproceedings}
}
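BiLA's variational Bayesian machinery does not fit in a few lines, but the incremental flavour of online aggregation can be sketched with per-worker reliability weights updated as items stream in. This is a deliberately simplified stand-in, not the paper's inference scheme.

from collections import defaultdict
import numpy as np

class OnlineAggregator:
    def __init__(self, n_classes, lr=0.1):
        self.n_classes, self.lr = n_classes, lr
        self.weight = defaultdict(lambda: 1.0)   # per-worker reliability

    def aggregate(self, votes):
        """votes: {worker_id: label}; returns the inferred label."""
        scores = np.zeros(self.n_classes)
        for w, y in votes.items():
            scores[y] += self.weight[w]
        label = int(scores.argmax())
        # Incremental update: reward agreement with the consensus, so the
        # estimate improves item by item without reprocessing old data.
        for w, y in votes.items():
            self.weight[w] += self.lr * ((y == label) - 0.5)
        return label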
Bart Cox, Jeroen Galjaard, Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen
Masa: Responsive Multi-DNN Inference on the Edge Proceedings Article
In: 19th IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 1–10, IEEE, 2021.
Abstract | Links | BibTeX | Tags:
@inproceedings{percom-cox21a,
title = {Masa: Responsive Multi-DNN Inference on the Edge},
author = {Bart Cox and Jeroen Galjaard and Amirmasoud Ghiassi and Robert Birke and Lydia Y. Chen},
url = {https://doi.org/10.1109/PERCOM50583.2021.9439111},
doi = {10.1109/PERCOM50583.2021.9439111},
year = {2021},
date = {2021-03-01},
booktitle = {19th IEEE International Conference on Pervasive Computing and Communications (PerCom)},
pages = {1–10},
publisher = {IEEE},
abstract = {Deep neural networks (DNNs) are becoming the core components of many applications running on edge devices, especially for real-time image-based analysis. Increasingly, multi-faceted knowledge is extracted by executing multiple DNN inference models, e.g., identifying objects, faces, and genders from images. The response times of such multi-DNN executions strongly affect users' quality of experience as well as safety. Different DNNs exhibit diverse resource requirements and execution patterns across layers and networks, which may easily exceed the available device memory and risk degrading responsiveness. In this paper, we design and implement Masa, a responsive memory-aware multi-DNN execution framework: an on-device middleware that models inter- and intra-network dependencies and leverages the complementary memory usage of each layer. Masa can consistently ensure the average response time when deterministically and stochastically executing multiple DNN-based image analyses. We extensively evaluate Masa on three configurations of Raspberry Pi and a large set of popular DNN models triggered by different generation patterns of images. Our evaluation results show that Masa can achieve lower average response times by up to 90% on devices with small memory, i.e., 512 MB to 1 GB, compared to state-of-the-art multi-DNN scheduling solutions.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
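A toy version of memory-aware interleaving: at each step, run the largest pending layer that still fits the memory budget, so multiple DNNs make progress without exceeding device memory. Masa's real scheduler additionally models dependencies and context; this fragment only illustrates the budgeted interleaving.

def schedule(jobs, mem_budget_mb):
    """jobs: list of per-DNN layer lists, each entry (layer_name, est_mem_mb)."""
    order = []
    cursors = [0] * len(jobs)
    while any(c < len(j) for c, j in zip(cursors, jobs)):
        pending = [(jobs[i][cursors[i]][1], i)
                   for i in range(len(jobs)) if cursors[i] < len(jobs[i])]
        # Prefer the largest layer that fits, so memory-hungry layers
        # are not starved behind small ones.
        for mem, i in sorted(pending, reverse=True):
            if mem <= mem_budget_mb:
                order.append(jobs[i][cursors[i]][0])
                cursors[i] += 1
                break
        else:   # nothing fits: the budget is below every pending layer
            raise RuntimeError("memory budget too small for pending layers")
    return order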
Jeroen Galjaard, Bart Cox, Amirmasoud Ghiassi, Lydia Y. Chen, Robert Birke
MemA: Fast Inference of Multiple Deep Models Proceedings Article
In: 19th IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events, pp. 281–286, IEEE, 2021.
Abstract | Links | BibTeX | Tags:
@inproceedings{percom-galjaard21,
title = {MemA: Fast Inference of Multiple Deep Models},
author = {Jeroen Galjaard and Bart Cox and Amirmasoud Ghiassi and Lydia Y. Chen and Robert Birke},
url = {https://doi.org/10.1109/PerComWorkshops51409.2021.9430952},
doi = {10.1109/PerComWorkshops51409.2021.9430952},
year = {2021},
date = {2021-03-01},
booktitle = {19th IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events},
pages = {281–286},
publisher = {IEEE},
abstract = {The execution of deep neural network (DNN) inference jobs on edge devices has become increasingly popular. Multiple such inference models can concurrently analyse the on-device data, e.g. images, to extract valuable insights. Prior art focuses on low-power accelerators, compressed neural network architectures, and specialized frameworks to reduce the execution time of single inference jobs on resource-constrained edge devices. However, little is known about how different scheduling policies can further improve the runtime performance of multi-inference jobs without additional edge resources. To enable the exploration of scheduling policies, we first develop an execution framework, EdgeCaffe, which splits DNN inference jobs into the loading and execution of each network layer. We empirically characterize the impact of loading and scheduling policies on the execution time of multi-inference jobs and point out their dependency on the available memory space. We propose a novel memory-aware scheduling policy, MemA, which opportunistically interleaves the executions of different types of DNN layers based on their estimated run-time memory demands. Our evaluation on exhaustive combinations of five networks, data inputs, and memory configurations shows that MemA can alleviate the degradation of multi-inference execution times (by up to 5×) under severely constrained memory, compared to standard scheduling policies and without affecting accuracy.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
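The interleaving policy can be pictured as a priority queue over the next layer of each job, ordered by estimated memory demand. EdgeCaffe implements the underlying per-layer load/execute split natively, so this Python fragment is only a model of the ordering, under assumed per-layer memory estimates.

import heapq

def mema_order(jobs):
    """jobs: {job_id: [(layer_name, est_mem_mb), ...]} -> execution order."""
    heap = [(layers[0][1], jid, 0) for jid, layers in jobs.items() if layers]
    heapq.heapify(heap)
    order = []
    while heap:
        mem, jid, idx = heapq.heappop(heap)       # cheapest pending layer first
        order.append((jid, jobs[jid][idx][0]))
        if idx + 1 < len(jobs[jid]):              # enqueue this job's next layer
            heapq.heappush(heap, (jobs[jid][idx + 1][1], jid, idx + 1))
    return order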
Zilong Zhao, Robert Birke, Rui Han, Bogdan Robu, Sara Bouchenak, Sonia Ben Mokhtar, Lydia Y. Chen
Enhancing Robustness of On-Line Learning Models on Highly Noisy Data Journal Article
In: IEEE Trans. Dependable Secur. Comput., vol. 18, no. 5, pp. 2177–2192, 2021.
Abstract | Links | BibTeX | Tags: ai
@article{ZhaoBHRBMC21,
title = {Enhancing Robustness of On-Line Learning Models on Highly Noisy Data},
author = {Zilong Zhao and Robert Birke and Rui Han and Bogdan Robu and Sara Bouchenak and Sonia Ben Mokhtar and Lydia Y. Chen},
url = {https://doi.org/10.1109/TDSC.2021.3063947},
doi = {10.1109/TDSC.2021.3063947},
year = {2021},
date = {2021-01-01},
journal = {IEEE Trans. Dependable Secur. Comput.},
volume = {18},
number = {5},
pages = {2177–2192},
abstract = {Classification algorithms have been widely adopted to detect anomalies for various systems, e.g., IoT, cloud and face recognition, under the common assumption that the data source is clean, i.e., features and labels are correctly set. However, data collected from the wild can be unreliable due to careless annotations or malicious data transformation, leading to incorrect anomaly detection. In this article, we extend a two-layer on-line data selection framework, Robust Anomaly Detector (RAD), with a newly designed ensemble prediction where both layers contribute to the final anomaly detection decision. To adapt to the on-line nature of anomaly detection, we consider the additional features of conflicting opinions of classifiers, repetitive cleaning, and oracle knowledge. We learn on-line from incoming data streams and continuously cleanse the data, so as to adapt to the increasing learning capacity from the larger accumulated data set. Moreover, we explore the concept of oracle learning that provides additional information on true labels for difficult data points. We specifically focus on three use cases: (i) detecting 10 classes of IoT attacks, (ii) predicting 4 classes of task failures of big data jobs, and (iii) recognising the faces of 100 celebrities. Our evaluation results show that RAD can robustly improve the accuracy of anomaly detection, to reach up to 98.95 percent for IoT device attacks (i.e., +7%), up to 85.03 percent for cloud task failures (i.e., +14%) under 40 percent label noise, and, for its extension, up to 77.51 percent for face recognition (i.e., +39%) under 30 percent label noise. The proposed RAD and its extensions are general and can be applied to different anomaly detection algorithms.},
keywords = {ai},
pubstate = {published},
tppubtype = {article}
}
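The two-layer selection can be sketched as a label-quality model that gates which stream samples reach the task classifier. The full framework adds ensembling of both layers, repetitive cleaning and oracle queries; here scikit-learn models stand in for the paper's learners, as an assumption of the sketch.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

class RADLike:
    def __init__(self, threshold=0.6):
        self.quality = RandomForestClassifier(n_estimators=50)   # layer 1
        self.task = RandomForestClassifier(n_estimators=50)      # layer 2
        self.threshold = threshold

    def bootstrap(self, X_clean, y_clean):
        self.quality.fit(X_clean, y_clean)   # small clean window of the stream

    def update(self, X, y):
        # Keep samples whose given label the quality model finds probable;
        # the rest would be cleansed again or sent to an oracle.
        proba = self.quality.predict_proba(X)
        cols = np.searchsorted(self.quality.classes_, y)
        keep = proba[np.arange(len(y)), cols] >= self.threshold
        self.task.fit(X[keep], y[keep])
        return keep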
Robert Birke, Juan F. Pérez, Zhan Qiu, Mathias Björkqvist, Lydia Y. Chen
sPARE: Partial Replication for Multi-Tier Applications in the Cloud Journal Article
In: IEEE Trans. Serv. Comput., vol. 14, no. 2, pp. 574–588, 2021.
Abstract | Links | BibTeX | Tags: parallel
@article{BirkePQBC21,
title = {sPARE: Partial Replication for Multi-Tier Applications in the Cloud},
author = {Robert Birke and Juan F. Pérez and Zhan Qiu and Mathias Björkqvist and Lydia Y. Chen},
url = {https://doi.org/10.1109/TSC.2017.2780845},
doi = {10.1109/TSC.2017.2780845},
year = {2021},
date = {2021-01-01},
journal = {IEEE Trans. Serv. Comput.},
volume = {14},
number = {2},
pages = {574–588},
abstract = {Offering consistently low latency remains a key challenge for distributed applications, especially when deployed on the cloud where virtual machines (VMs) suffer from capacity variability caused by co-located tenants. Replicating redundant requests has been shown to be an effective mechanism to defend application performance from high capacity variability. While the prior art centers on single-tier systems, it remains an open question how to design replication strategies for distributed multi-tier systems. In this paper, we design a first-of-its-kind PArtial REplication system, sPARE, that replicates and dispatches read-only workloads for distributed multi-tier web applications. The two key components of sPARE are (i) the variability-aware replicator that coordinates the replication levels on all tiers via an iterative searching algorithm, and (ii) the replication-aware arbiter that uses a novel token-based arbitration algorithm (TAD) to dispatch requests in each tier. We evaluate sPARE on web serving and searching applications, i.e., MediaWiki and Solr, the former deployed on our private cloud and the latter on Amazon EC2. Our results based on various interference patterns and traffic loads show that sPARE is able to improve the tail latency of MediaWiki and Solr by factors of almost 2.7x and 2.9x, respectively.},
keywords = {parallel},
pubstate = {published},
tppubtype = {article}
}
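The dispatching side of partial replication can be sketched in a few lines: issue a read-only request to d replicas and keep the first reply. sPARE's token-based arbiter and per-tier search for replication levels are much richer than this fragment; fetch is an assumed client function.

import random
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def replicated_get(servers, request, d, fetch):
    """Send request to d of the servers; return the first response."""
    replicas = random.sample(servers, d)
    with ThreadPoolExecutor(max_workers=d) as pool:
        futures = [pool.submit(fetch, s, request) for s in replicas]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()   # best effort: abandon the slower replicas
        return next(iter(done)).result()

The first completed reply masks a slow replica hit by interference, which is exactly the tail-latency defence the abstract describes.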
Talks
2024
Iacopo Colonnelli, Robert Birke, Giulio Malenza, Gianluca Mittone, Alberto Mulone, Marco Aldinucci
Cross-Facility Federated Learning - Part II Miscellaneous
2024, (Invited talk).
Links | BibTeX | Tags: eupex, icsc, space
@misc{24:ic:elise:xffl,
title = {Cross-Facility Federated Learning - Part II},
author = {Iacopo Colonnelli and Robert Birke and Giulio Malenza and Gianluca Mittone and Alberto Mulone and Marco Aldinucci},
url = {https://datacloud.di.unito.it/index.php/s/7HonBpcWPxotXLX},
year = {2024},
date = {2024-06-01},
address = {Helsinki, Finland},
note = {Invited talk},
keywords = {eupex, icsc, space},
pubstate = {published},
tppubtype = {misc}
}
Robert Birke
FLaaS: Federated Learning as a Service Miscellaneous
ICSC - Spoke 1 meeting, 2024.
Abstract | Links | BibTeX | Tags: ai, icsc
@misc{24:icsc:spoke1:ifab,
title = {FLaaS: Federated Learning as a Service},
author = {Robert Birke},
url = {https://datacloud.di.unito.it/index.php/s/yHXdTnC8xEqoJ6Y},
year = {2024},
date = {2024-02-01},
address = {Torino, Italy},
abstract = {Presentation about the Innovation Grant in collaboration with IFAB},
howpublished = {ICSC - Spoke 1 meeting},
keywords = {ai, icsc},
pubstate = {published},
tppubtype = {misc}
}
Robert Birke
The impact of the advances in generative models on applications and systems Miscellaneous
8th GDR RSD / ASF Winter School on Distributed Systems & Networks 2024, 2024, (Keynote talk).
Abstract | Links | BibTeX | Tags: ai, eupilot, textarossa
@misc{24:ASF:WINTER,
title = {The impact of the advances in generative models on applications and systems},
author = {Robert Birke},
url = {https://datacloud.di.unito.it/index.php/s/QYTCMfWp4sY5qx4},
year = {2024},
date = {2024-01-01},
address = {Le Pleynet, France},
abstract = {Generative models have achieved unprecedented quality levels across a wide range of data types. This advance often stems from the ever-increasing data and compute used to train larger and larger models. One major use case of such synthetic data is privacy-compliant data sharing. Gartner predicts that by 2025 synthetic data will reduce the need for real data in analytics and machine learning by 70%. We will look at generative models, with a special focus on tabular data, and the issue of democratizing large model training.},
howpublished = {8th GDR RSD / ASF Winter School on Distributed Systems & Networks 2024},
note = {Keynote talk},
keywords = {ai, eupilot, textarossa},
pubstate = {published},
tppubtype = {misc}
}
2023
Iacopo Colonnelli, Robert Birke, Giulio Malenza, Gianluca Mittone, Alberto Mulone, Marco Aldinucci, Valerio Basile, Marco Antonio Stranisci, Viviana Patti, Jeroen Galjaard, Lydia Y. Chen, Sanzio Bassini, Massimiliano Guarrasi, Gabriella Scipione, Jan Martinovič, Vit Vondrák
Cross-Facility Federated Learning Miscellaneous
1st EuroHPC User Day, 2023.
Links | BibTeX | Tags: across, ai, eupex, eupilot, HPC
@misc{23:eurohpc,
title = {Cross-Facility Federated Learning},
author = {Iacopo Colonnelli and Robert Birke and Giulio Malenza and Gianluca Mittone and Alberto Mulone and Marco Aldinucci and Valerio Basile and Marco Antonio Stranisci and Viviana Patti and Jeroen Galjaard and Lydia Y. Chen and Sanzio Bassini and Massimiliano Guarrasi and Gabriella Scipione and Jan Martinovič and Vit Vondrák},
url = {https://datacloud.di.unito.it/index.php/s/DDAz4QkJP3WZ68M},
year = {2023},
date = {2023-12-01},
address = {Bruxelles, Belgium},
howpublished = {1st EuroHPC User Day},
keywords = {across, ai, eupex, eupilot, HPC},
pubstate = {published},
tppubtype = {misc}
}
Gianluca Mittone, Giulio Malenza, Marco Aldinucci, Robert Birke
Distributed Edge Inference: an Experimental Study on Multiview Detection Miscellaneous
The 16th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2023), 2023.
Abstract | Links | BibTeX | Tags: ai, eupilot, icsc
@misc{23:ucc:multiview,
title = {Distributed Edge Inference: an Experimental Study on Multiview Detection},
author = {Gianluca Mittone and Giulio Malenza and Marco Aldinucci and Robert Birke},
url = {https://datacloud.di.unito.it/index.php/s/XfjNZEPSNfSKPFr},
year = {2023},
date = {2023-12-01},
address = {Taormina, Italy},
abstract = {Computing is evolving rapidly to cater to the increasing demand for sophisticated services, and Cloud computing lays a solid foundation for flexible on-demand provisioning. However, as the size of applications grows, the centralised client-server approach used by Cloud computing increasingly limits the applications' scalability. To achieve ultra-scalability, cloud/edge/fog computing converges into the compute continuum, completely decentralising the infrastructure to encompass universal, pervasive resources. The compute continuum makes devising applications benefitting from this complex environment a challenging research problem. We put the opportunities the compute continuum offers to the test through a real-world multi-view detection model (MvDet) implemented with the FastFL C/C++ high-performance edge inference framework. Computational performance is discussed considering many experimental scenarios, encompassing different edge computational capabilities and network bandwidths. We obtain up to 1.92x speedup in inference time over a centralised solution using the same devices.},
howpublished = {The 16th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2023)},
keywords = {ai, eupilot, icsc},
pubstate = {published},
tppubtype = {misc}
}
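The experimental setup can be pictured as scattering per-camera inference to edge workers and fusing the results centrally. The sketch below uses plain Python threads, whereas the study relies on the FastFL C/C++ framework; extract and fuse are assumed callables.

from concurrent.futures import ThreadPoolExecutor

def multiview_infer(views, workers, extract, fuse):
    """views: one frame per camera; workers: one edge device per view."""
    with ThreadPoolExecutor(max_workers=len(workers)) as pool:
        # Each edge device runs the per-view feature extractor in parallel.
        feats = list(pool.map(lambda wv: extract(*wv), zip(workers, views)))
    # A central node fuses the per-view features into the final detection.
    return fuse(feats)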
Gianluca Mittone, Walter Riviera, Iacopo Colonnelli, Robert Birke, Marco Aldinucci
Model-Agnostic Federated Learning Miscellaneous
29th International European Conference on Parallel and Distributed Computing (Euro-Par '23), 2023.
Abstract | Links | BibTeX | Tags: ai, eupilot, icsc
@misc{23:europar:mafl,
title = {Model-Agnostic Federated Learning},
author = {Gianluca Mittone and Walter Riviera and Iacopo Colonnelli and Robert Birke and Marco Aldinucci},
url = {https://datacloud.di.unito.it/index.php/s/9T6G2tRreRomBAE},
year = {2023},
date = {2023-09-01},
address = {Limassol, Cyprus},
abstract = {Since its debut in 2016, Federated Learning (FL) has been tied to the inner workings of Deep Neural Networks (DNNs); this allowed its development as DNNs proliferated but neglected those scenarios in which using DNNs is not possible or advantageous. The fact that most current FL frameworks only support DNNs reinforces this problem. To address the lack of non-DNN-based FL solutions, we propose MAFL (Model-Agnostic Federated Learning). MAFL merges a model-agnostic FL algorithm, AdaBoost.F, with an open industry-grade FL framework: Intel® OpenFL. MAFL is the first FL system not tied to any machine learning model, allowing exploration of FL beyond DNNs. We test MAFL from multiple points of view, assessing its correctness, flexibility, and scaling properties up to 64 nodes of an HPC cluster. We also show how we optimised OpenFL achieving a 5.5x speedup over a standard FL scenario. MAFL is compatible with x86-64, ARM-v8, Power and RISC-V.},
howpublished = {29th International European Conference on Parallel and Distributed Computing (Euro-Par '23)},
keywords = {ai, eupilot, icsc},
pubstate = {published},
tppubtype = {misc}
}
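The model-agnostic aggregation at MAFL's core can be sketched in the spirit of AdaBoost.F: each federation round, clients train arbitrary weak learners and the server keeps the one with the lowest weighted error. This is an illustrative reading of the algorithm, not MAFL's code, which builds on Intel OpenFL.

import numpy as np

def federated_boost_round(client_models, X_val, y_val, sample_w):
    """client_models: any objects with .predict(); returns (model, alpha, w)."""
    best, best_err = None, np.inf
    for model in client_models:              # trees, SVMs, ... no DNN required
        miss = (model.predict(X_val) != y_val).astype(float)
        err = (sample_w * miss).sum() / sample_w.sum()
        if err < best_err:
            best, best_err = model, err
    alpha = 0.5 * np.log((1.0 - best_err) / max(best_err, 1e-12))
    miss = (best.predict(X_val) != y_val).astype(float)
    sample_w = sample_w * np.exp(alpha * (2.0 * miss - 1.0))   # up-weight errors
    return best, alpha, sample_w / sample_w.sum()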
Gianluca Mittone, Robert Birke, Marco Aldinucci
Model-Agnostic Federated Learning Miscellaneous
29th International European Conference on Parallel and Distributed Computing (Euro-Par '23), 2023.
Abstract | Links | BibTeX | Tags: eupilot, icsc
@misc{23:europar:phdtalk,
title = {Model-Agnostic Federated Learning},
author = {Gianluca Mittone and Robert Birke and Marco Aldinucci},
url = {https://datacloud.di.unito.it/index.php/s/pT3qxkwzzsHR3nS},
year = {2023},
date = {2023-08-01},
address = {Limassol, Cyprus},
abstract = {Since its debut in 2016, Federated Learning (FL) has been tied to the inner workings of Deep Neural Networks (DNNs); this allowed its development as DNNs proliferated but neglected those scenarios in which using DNNs is not possible or advantageous. The fact that most current FL frameworks only support DNNs reinforces this problem. To address the lack of non-DNN-based FL solutions, we propose MAFL (Model-Agnostic Federated Learning). More broadly, FL is an instance of Decentralised Machine Learning (DML), which enables collaborative machine learning without centralised input data; Edge Inference is another example. While tools for DML (especially FL) are starting to flourish, many are not flexible and portable enough to experiment with novel processors (e.g., RISC-V), non-fully connected network topologies, and asynchronous collaboration schemes. We overcome these limitations via a domain-specific language allowing us to map DML schemes to an underlying middleware, i.e. the FastFlow parallel programming library.},
howpublished = {29th International European Conference on Parallel and Distributed Computing (Euro-Par '23)},
keywords = {eupilot, icsc},
pubstate = {published},
tppubtype = {misc}
}
Gianluca Mittone, Nicolò Tonci, Robert Birke, Iacopo Colonnelli, Doriana Medić, Andrea Bartolini, Roberto Esposito, Emanuele Parisi, Francesco Beneventi, Mirko Polato, Massimo Torquati, Luca Benini, Marco Aldinucci
Experimenting with Emerging RISC-V Systems for Decentralised Machine Learning Miscellaneous
20th ACM international conference on computing frontiers (CF '23), 2023, (Invited talk).
Abstract | Links | BibTeX | Tags: ai, eupilot, icsc
@misc{23:ACMCF,
title = {Experimenting with Emerging RISC-V Systems for Decentralised Machine Learning},
author = {Gianluca Mittone and Nicolò Tonci and Robert Birke and Iacopo Colonnelli and Doriana Medić and Andrea Bartolini and Roberto Esposito and Emanuele Parisi and Francesco Beneventi and Mirko Polato and Massimo Torquati and Luca Benini and Marco Aldinucci},
url = {https://datacloud.di.unito.it/index.php/s/BYyqZbHzzN4DL8Z},
year = {2023},
date = {2023-05-01},
abstract = {Decentralised Machine Learning (DML) enables collaborative machine learning without centralised input data. Federated Learning (FL) and Edge Inference are examples of DML. While tools for DML (especially FL) are starting to flourish, many are not flexible and portable enough to experiment with novel processors (e.g., RISC-V), non-fully connected network topologies, and asynchronous collaboration schemes. We overcome these limitations via a domain-specific language allowing us to map DML schemes to an underlying middleware, i.e. the FastFlow parallel programming library. We experiment with it by generating different working DML schemes on x86-64 and ARM platforms and an emerging RISC-V one. We characterise the performance and energy efficiency of the presented schemes and systems. As a byproduct, we introduce a RISC-V porting of the PyTorch framework, the first publicly available to our knowledge.},
howpublished = {20th ACM international conference on computing frontiers (CF '23)},
note = {Invited talk},
keywords = {ai, eupilot, icsc},
pubstate = {published},
tppubtype = {misc}
}