2022

Introducing Arm: the most widely adopted computing architecture


Abstract. Arm is a family of reduced instruction set computer (RISC) instruction set architectures for computer processors, configured for various environments. It is estimated that more than 200 billion Arm-based devices have been built so far, from the tiniest IoT device to the most powerful supercomputer currently in the world. Every day we touch tens, if not hundreds, of Arm-based microcontrollers and processors. This lecture will introduce the Arm architecture, its history, and its key characteristics, with particular attention to the Arm-based high-end CPUs powering current and future supercomputers.

Bio. Filippo is a member of the NVIDIA EMEA HPC team, working as HPC Developer Relations manager. In this role, he works closely with computational scientists from several science domains to understand their needs and help them prepare their software to run efficiently on GPU-accelerated HPC systems and achieve ground-breaking new science. He has been developing and contributing to various HPC codes (mainly in Physics, Chemistry, and Engineering) for more than a decade. His GPU journey started back in 2010, at the time of the Fermi architecture. He has been a long-term contributor to Quantum ESPRESSO and has co-authored various porting iterations. Prior to NVIDIA, Filippo was a Staff Research Engineer at Arm Research, where his work focused on HW-SW co-design for HPC and HPDA workloads. Before Arm, he worked for five years at the Research Computing Services of the University of Cambridge as Head of Research Software Engineering, leading several HPC projects with both academia and private companies. Since January 2020 he has been a member of the EPSRC e-Infrastructure Strategic Advisory Team (SAT), an advisory body for UK e-Infrastructure strategy (HPC, AI, Cloud, and Software).

How to optimize an HPC system among the ten most powerful in the world


Abstract. With real-world applications, it is unlikely that all the resources of a cluster node can be fully utilized at the same time by a single application. Consider, for example, how improbable it is that a hypothetical application can fully exploit both the accelerator and the network card’s available bandwidth at the same time. Therefore, whenever a single application completely exhausts one of the node’s resources, it may prevent other applications from exploiting the remaining resources, which are likely to remain partially unused. Many applications are data-intensive, and seismic imaging is one of them. With data-intensive computations, the bottleneck is likely to be memory bandwidth, memory size, network congestion, and/or storage access, while the computing units may or may not be stressed to their maximum. Moreover, since most seismic imaging algorithms are embarrassingly parallel, in a multi-user, multi-project seismic imaging scenario the challenge is often to efficiently execute a very large number of asynchronous and loosely coupled tasks. This combines with the nature of modern full-wave imaging and velocity analysis, whose memory requirements and computing-unit occupancy can vary by several orders of magnitude, depending on the geophysical parameter of the maximum frequency of the analysis (a rough scaling sketch is given after this abstract).
Accordingly, whenever the majority of the workload has these characteristics, in our experience there are several recommendable actions that can be taken to maximize cluster resource utilization: pack more than one job on each node; if needed, apply strong scaling to prevent a single job from exhausting any of the node’s resources; and, if applicable, run lightweight distributed services to relax some of their bottlenecks.
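
To make the frequency dependence mentioned in the abstract concrete, here is a rough scaling sketch under the assumption of a 3D finite-difference full-wave model (an assumption made for illustration, not a detail taken from the talk): the grid spacing must resolve the shortest wavelength, and the time step follows the grid spacing, so

    h \propto \frac{v_{\min}}{f_{\max}}, \qquad
    N_{\mathrm{grid}} \propto \left(\frac{L}{h}\right)^{3} \propto f_{\max}^{3}, \qquad
    N_{\mathrm{steps}} \propto \frac{T}{\Delta t} \propto f_{\max}.

Under these assumptions memory grows roughly as f_max^3 and total work as f_max^4: doubling the maximum frequency of the analysis costs about 8x the memory and 16x the compute, consistent with the orders-of-magnitude variability described above.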

Bio. Luca Bortot, HPC project leader at ENI (Milano).


2021

How to program NVIDIA GPU: past, present and future


Abstract. The idea of General-Purpose GPU Computing (GPGPU) is more than a decade old. Over time it took off and has become ubiquitous. It is widely recognized by the HPC community that reaching the next milestone in performance, Exascale, will require accelerated systems in order to meet power and budget constraints. NVIDIA is not the only technology provider with a portfolio of GPU technology, but it was the first to believe in the relevance of accelerated computing and to invest in both hardware and software innovation. Hardware is difficult; software is as difficult, or even harder. Any hardware technology that is difficult to program will inevitably die or become irrelevant, so investing in the software ecosystem and developer tools is essential for the success of any technology. This talk will provide an overview of the languages (mainly C/C++, Fortran, and Python), programming models, and frameworks that can be used to develop code for NVIDIA GPUs. NVIDIA’s strategy and vision for parallel computing will be discussed, and a few examples will be shown to highlight the pros and cons of the various approaches. The talk does not require any prior knowledge of GPU computing.
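
As a hedged illustration of one directive-based programming model in the space the talk surveys (a sketch under assumptions, not material from the talk itself), here is a SAXPY loop written in standard C++ and annotated with OpenACC. Built with an OpenACC-capable compiler (e.g. nvc++ -acc) the loop is offloaded to the GPU; a plain C++ compiler simply ignores the pragma.

    // Minimal OpenACC sketch: y = a*x + y, offloaded to the GPU when compiled
    // with an OpenACC-capable compiler; compiled serially otherwise.
    #include <cstdio>
    #include <vector>

    void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
        const long n = static_cast<long>(x.size());
        const float* xp = x.data();
        float* yp = y.data();
        // Ask the compiler to parallelize the loop on the accelerator and
        // manage the data movement for the two arrays.
        #pragma acc parallel loop copyin(xp[0:n]) copy(yp[0:n])
        for (long i = 0; i < n; ++i)
            yp[i] = a * xp[i] + yp[i];
    }

    int main() {
        std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
        saxpy(3.0f, x, y);
        std::printf("y[0] = %f\n", y[0]);  // expect 5.0
        return 0;
    }

The same loop could also be expressed with CUDA, OpenMP target directives, or ISO C++ parallel algorithms, which is precisely the trade-off space the abstract refers to.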

Bio. Filippo is a member of the NVIDIA EMEA HPC team, working as HPC Developer Relations manager. In this role, he works closely with computational scientists from several science domains to understand their needs and help them prepare their software to run efficiently on GPU-accelerated HPC systems and achieve ground-breaking new science. He has been developing and contributing to various HPC codes (mainly in Physics, Chemistry, and Engineering) for more than a decade. His GPU journey started back in 2010, at the time of the Fermi architecture. He has been a long-term contributor to Quantum ESPRESSO and has co-authored various porting iterations. Prior to NVIDIA, Filippo was a Staff Research Engineer at Arm Research, where his work focused on HW-SW co-design for HPC and HPDA workloads. Before Arm, he worked for five years at the Research Computing Services of the University of Cambridge as Head of Research Software Engineering, leading several HPC projects with both academia and private companies. Since January 2020 he has been a member of the EPSRC e-Infrastructure Strategic Advisory Team (SAT), an advisory body for UK e-Infrastructure strategy (HPC, AI, Cloud, and Software).

Cloud Computing and what you can do with it


Abstract. Research on cloud computing and infrastructures is highly relevant to both industry and academia. According to Gartner, Inc., more than 75% of organizations using cloud services indicate that they have a cloud-first strategy, meaning that there is a strong shift from traditional on-premises data centers to cloud-based technologies and resources. This allows organizations to use computing, network, and storage resources made available by cloud providers without having to take care of the maintenance of such resources. In this seminar, we will see what cloud computing is, what kinds of services it provides, and the different flavors it comes in. We will also talk about how we can write programs to run on the cloud, namely cloud-native applications, with a particular focus on container technologies and the de-facto standard container orchestrator, Kubernetes. Finally, we will move towards cloud and high-performance computing convergence, and the efforts IBM Research and Red Hat OpenShift are making to close the gap, in collaboration with National Laboratories and the open-source community.

Bio. Claudia Misale is a Research Staff Member in the Hybrid Cloud Infrastructure Software group at IBM T.J. Watson Research Center (NY). She received her Ph.D. in Computer Science at the University of Torino in May 2017, with a thesis on programming models for big data analytics. She works on Kubernetes-level security for IBM Public Cloud, but her research also focuses on porting scientific HPC workflows to the Cloud by enabling batch scheduling alternatives for Kubernetes. She collaborates with Lawrence Livermore National Laboratory, Red Hat, the University of Illinois at Urbana-Champaign, and the Barcelona Supercomputing Center, addressing Cloud, HPC, and their convergence.
She is mainly interested in cloud computing, and her background is in high-level parallel programming models and patterns for parallel and distributed computing, and in big data analytics on HPC platforms.

Practical Distributed Programming in C++


Abstract. The need to couple high performance with productivity is steering the recent evolution of the C++ language, where low-level aspects of parallel and distributed computing are now part of the standard or under discussion for inclusion. The Standard Template Library (STL) includes containers and algorithms as primary notions, coupled with execution policies that allow exploiting parallel platforms (e.g., multi-cores) on top of a well-defined operational semantics. However, as of today, there is no support for distributed-memory platforms: as opposed to the so-called parallel STL, there is no such thing as a “distributed STL”. In this lecture, I will discuss the first state-of-the-art effort towards a distributed STL, which I led as a member of the HPC group at the Pacific Northwest National Laboratory (PNNL) in 2019. The effort yielded the design and implementation of a stack for STL-compliant programs (containers, iterators, algorithms, and execution policies) running on distributed-memory systems. The distributed STL is currently in the process of being proposed for C++ standardization.
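
For context, here is a minimal sketch of the shared-memory parallel STL the abstract contrasts against (not code from the PNNL effort): a standard algorithm whose behaviour is selected by an execution policy. It assumes a C++17 toolchain; with GCC/libstdc++ the parallel policies additionally require linking against TBB.

    // Parallel STL: the execution policy is the only change needed to run the
    // algorithm across the cores of a single node. No standard policy exists
    // (yet) that would distribute the same call across the nodes of a cluster.
    #include <algorithm>
    #include <cstdio>
    #include <execution>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<double> v(1'000'000);
        std::iota(v.begin(), v.end(), 0.0);

        std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
                       [](double x) { return x * x; });

        const double sum = std::reduce(std::execution::par, v.begin(), v.end(), 0.0);
        std::printf("sum of squares = %.0f\n", sum);
        return 0;
    }

A “distributed STL” as described in the abstract would keep this container/algorithm/policy shape while spreading the data and the work over multiple memory domains.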

Bio. Maurizio Drocco is a Research Staff Member at the IBM Thomas J. Watson Research Center (NY), in the Hybrid Cloud Infrastructure group. He received his Ph.D. in Computer Science at the University of Torino in October 2017, with a thesis on distributed programming in C++. He has been a Research Associate at the US DOE Pacific Northwest National Laboratory (Richland, WA) and a Research Intern at the IBM Research centers in Dublin and NY. He has co-authored papers in international journals and conferences (Google Scholar h-index 13). His research focuses on the security and programming aspects of the cloud, and his background is in parallel, distributed, and high-performance computing.

High-Performance Computing and Digital Transformation


Abstract. In a recent speech, IBM CEO Arvind Krishna said that “digital transformation has been accelerated during the COVID-19 pandemic, and ultimately every company will become an AI company.” This is not strictly true, but what is true is that every company will have to adopt AI technologies, with AI understood in a broad sense. More precisely, every industry will have to apply digital technologies, and this determines a paradigm shift: the value of goods and services moves from the exploitation of physical systems to the exploitation of knowledge. AI, computer simulations, and other digital technologies help mine out more knowledge, faster. The more, the better. In this scenario, HPC is a tool to process Big Data, enable AI, and perform simulations; it can accelerate the creation of value by generating new knowledge and producing more accurate predictions. While computational capacity is a fundamental resource for competitiveness, raw computational capacity alone is useless: a crunching device transforming sequences of 1s and 0s into other sequences of 1s and 0s. Software is the key to unlocking the value. This is why, besides the supercomputer, we need to build the capability to implement new applications and improve existing ones. In this talk, I will present how Leonardo, with the key contribution of the HPC Lab, intends to implement leadership-class software tools and a computational infrastructure able to add value to the company and ultimately transform it to be more digital than physical.

Bio. Born in Formigine (MO), Italy, on 20/05/1970, the speaker is presently head of Cloud Computing at Leonardo and director of the Leonardo HPC Lab. Before joining Leonardo, he spent more than 20 years at Cineca (the Italian supercomputing center), where he became head of HPC R&D, with responsibility for the evolution and exploitation of the national and European HPC infrastructure. He is a member of the EuroHPC Research and Innovation Advisory Board, a steering board member of the ETP4HPC association, and the Leonardo representative in GAIA-X. He holds a Ph.D. in computational materials science from the International School for Advanced Studies (ISAS-SISSA) of Trieste, defended in 1998. During his Ph.D. he studied various problems concerning the implementation and efficiency of parallel numerical algorithms used in computer simulations. He collaborates with different user communities to enable applications on massively parallel HPC systems and innovative architectures, as well as cloud solutions. In particular, he is responsible for the parallel design of the Quantum ESPRESSO suite of codes and is one of the core developers of the EXSCALATE drug design platform. He has co-authored more than 100 peer-reviewed articles, including in Science, Physical Review Letters, Nature Materials, and many others.

Paolo Viviani (Links foundation)
June 7th, 2021 – 16.00 CEST

Quantum Computing: A hype-avoiding introduction



Abstract. Quantum computing is generating more hype than ever, amid the race to boast the most qubits and prophecies of an encryption Armageddon, but how much of this hype is reasonable?
This talk will start with a brief introduction to quantum computation; it will then present the main technical features and challenges of the most popular approaches to quantum computing (quantum gates, annealing, and, briefly, quantum simulators). In doing so, it will try to avoid the hype and provide an honest review of what is possible, and useful, to achieve on current Noisy Intermediate-Scale Quantum (NISQ) machines, with a particular focus on combinatorial optimization. Finally, a short demo of an optimization problem solved on D-Wave machines will be provided.
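
To give a concrete, hedged idea of the kind of combinatorial optimization problem an annealer such as a D-Wave machine samples (this is an illustrative classical sketch, not the demo from the talk, and the coefficients are made up), the snippet below brute-forces a tiny QUBO, i.e. it minimizes a quadratic objective over binary variables.

    // Brute-force minimization of E(x) = sum_i Q[i][i]*x_i + sum_{i<j} Q[i][j]*x_i*x_j
    // over binary variables x_i in {0,1}. A quantum annealer samples low-energy
    // states of exactly this kind of objective; the coefficients below are
    // arbitrary illustrative values.
    #include <cstdio>

    int main() {
        constexpr int n = 4;
        const double Q[n][n] = {
            {-1.0,  2.0,  0.0,  0.0},
            { 0.0, -1.0,  2.0,  0.0},
            { 0.0,  0.0, -1.0,  2.0},
            { 0.0,  0.0,  0.0, -1.0},
        };

        double best_energy = 1e300;
        unsigned best_bits = 0;
        for (unsigned bits = 0; bits < (1u << n); ++bits) {
            double e = 0.0;
            for (int i = 0; i < n; ++i) {
                if (!((bits >> i) & 1u)) continue;
                e += Q[i][i];
                for (int j = i + 1; j < n; ++j)
                    if ((bits >> j) & 1u) e += Q[i][j];
            }
            if (e < best_energy) { best_energy = e; best_bits = bits; }
        }

        std::printf("minimum energy %.2f at x =", best_energy);
        for (int i = 0; i < n; ++i) std::printf(" %u", (best_bits >> i) & 1u);
        std::printf("\n");
        return 0;
    }

An annealer tackles the same objective by physically sampling low-energy states instead of enumerating them, which is why formulating a problem as a QUBO is the practical entry point for this class of hardware.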

Bio. I am a senior researcher at the Links Foundation in Torino, in the area of Advanced Computing and Applications. Previously I was a Research Engineer at Noesis Solutions NV, working on HPC and AI applications. I got my Ph.D. in Computer Science at the Department of Computer Science of the University of Torino and my M.Sc. in Theoretical Physics at the University of Torino.

Daniele Gregori (E4 Engineering)
June 9th, 2021 – 10.00 CEST

HPC architecture overview


Abstract. The presentation deals with HPC architecture, analyzing the general characteristics of a high-performance computing cluster both from the hardware point of view and from the management-software point of view. Some important emerging features that extend those of the classic HPC cluster will also be presented.

Bio. Daniele Gregori holds a Ph.D. in Physics. He worked on the control and monitoring system of the LHCb experiment at CERN. Subsequently, he moved to the development and management of the INFN-CNAF Tier-1 computing center, in particular its high-performance storage and Grid computing systems. Since 2015 he has been involved in the design of HPC systems at E4 Computer Engineering, and since 2021 he has been the technical manager of the European projects in which E4 is involved.


2020

Luca Cipriani (Arduino)
May 28th, 2020 

Comparing 8-bit architectures: Atmega328p vs. Atmega32u4


