Here are the Euro-Par 2018 presentation slides. Abstract submission deadline: February 9, 2018, 23:59 AoE.
Claudia Misale received her PhD in Computer Science from the University of Torino on May 11, 2017, defending her thesis entitled “PiCo: A Domain-Specific Language for Data Analytics Pipelines”.
In her thesis, Claudia reviews and analyses the state-of-the-art frameworks for data analytics, proposes a methodology to compare their expressiveness, and advocates the design of a novel C++ DSL for big data analytics, called PiCo (Pipeline Composition). Unlike Spark, Flink and similar frameworks, PiCo is fully polymorphic and exhibits a clear separation between data and transformations. Together with the careful C++/FastFlow implementation, this eases application development, since data scientists can experiment with pipelines of different transformations without any need to adapt the data type (and its memory layout). Types are inferred along the transformation pipeline in a “fluent” programming fashion. The clear separation between transformations, data types and their memory layout makes it possible to optimise data movement, memory usage and, ultimately, performance. Applications developed with PiCo exhibit up to a 10x smaller memory footprint than their Spark/Flink equivalents. The C++/FastFlow run-time support makes it possible to generate the network of run-time support processes directly from the data-processing pipeline, thus achieving the maximum scalability allowed by true data dependencies (well beyond the simple master-worker paradigm of Spark and Flink). Since PiCo is written in C++11/14, it is already open to hosting native GPU offloading, which paves the way for the convergence of analytics and machine learning. See more at DOI:10.5281/zenodo.579753
Claudia is flying today to New York to start her career as a scientist at the IBM T.J. Watson Research Center, within the Data-Centric Systems Solutions group.
Congratulations, Claudia! It has been a pleasure working with you for the past four years.
Our project OptiBike (total cost: 230K €) has been incorporated into the Fortissimo2 EU @i4ms project (n. 680481). It is also the first EU-funded project at @C3sUnito and at the HPC laboratory of ICxT@UNITO.
We are looking forward to the kick-off.
OptiBike: Robust Lightweight Composite Bicycle Design and Optimization
In the current design process for composite materials, the effect of manufacturing uncertainty on the structural and dynamic performance of mechanical structures (aeronautic, automotive and others) cannot be accurately assessed, due to the limitations of current computational resources, and is therefore compensated for by applying safety factors. This non-ideal situation usually leads to overdesigned structures that could potentially achieve higher performance at the same safety levels.
The objective of this experiment is to establish a design workflow and service for composite material modelling, simulation and numerical optimization that uses uncertainty quantification and HPC to deliver high-performance, reliable composite material products. The workflow can be applied to any composite structure, from aeronautics to bicycles, and offers SMEs easy-to-use, hassle-free access to advanced material design methodologies and the related HPC infrastructures enabling reliability-based design optimization (RBDO). To demonstrate the workflow, a full composite bicycle will be designed and optimized based on real manufacturing data provided by IDEC, a Spanish SME with unique design and manufacturing capabilities for composite materials.
The expected results of this experiment are:
Adopting INDIGO-Data Cloud: the Scientific Computing Competence Centre of the University of Torino will use INDIGO software tools
The INDIGO-Data Cloud project is happy to announce that it signed a Memorandum of Understanding (MoU) with the Scientific Computing Competence Centre of the University of Torino (C3S).
A Memorandum of Understanding has just been signed.
C3S is a research centre that focuses on scientific computing technology and applications and manages OCCAM (Open Computing Cluster for Advanced data Manipulation), a multipurpose HPC cluster. Thanks to this agreement, a collaboration has been set up between C3S and the INDIGO-DataCloud project for the use and development of advanced tools for scientific computing, particularly for heterogeneous use-case management in an HPC infrastructure context.
C3S will have access to software tools developed by INDIGO-DataCloud and will be able to integrate them into the management layer of the OCCAM supercomputer. The INDIGO teams will collaborate with C3S in adapting and porting the tools to the specific use cases, giving support on a best-effort basis and providing, whenever feasible, patches and customisations for its software products.
“INDIGO-DataCloud aims at providing services that can be deployed on different computing platforms and enable the interoperability of heterogeneous e-infrastructures. C3S is a very interesting opportunity to test such capabilities and prove how our tools can really make the difference, providing seamless access, elasticity and scalability for the exploitation of data and computational resources,” says Giacinto Donvito, the Technical Director of the INDIGO-DataCloud project.
“We have a very wide variety of use cases, from traditional HPC in computational chemistry, physics and astrophysics to data-intensive genomics and computational biology all the way to social sciences and even the humanities, so we will have to use the best tools to accommodate them all. We trust that many INDIGO products will help us to improve the performance and usability of our centre” says Matteo Sereno, professor at the Department of Computer Science of the University of Torino and C3S Director.
Technical information in:
M. Aldinucci, S. Bagnasco, S. Lusso, P. Pasteris, S. Vallero, and S. Rabellino, “The Open Computing Cluster for Advanced data Manipulation (OCCAM),” in The 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP), San Francisco, USA, 2016.
The tumultuous development of digital technologies and the rise of the Internet as a virtual world ever more integrated into everyday life have exponentially increased the availability of information in every field of human knowledge. In this context, data science emerges as a new, intrinsically interdisciplinary field of knowledge, capable of profoundly transforming the methods and impact of university research.
The University of Torino has quickly equipped itself for this epochal challenge and wishes to publicly present its innovative capacity in theory, applied research and education, ranging from algorithms to big data, from the Internet of Things to machine learning.
Thanks to the support of the Collegio Carlo Alberto and the Compagnia di San Paolo, it has been possible to organize a one-day seminar in which researchers and faculty present some of their most advanced research and discuss the opportunities (and risks) that these new approaches offer to science, culture, society and the economy.
Gianmaria Ajani – Rector of the Università degli Studi di Torino
Francesco Profumo – President of the Compagnia di San Paolo
Marco Guerzoni (Dip. di Economia e Statistica and Despina Big Data Lab)
Marco Aldinucci (Dip. di Informatica and C3S)
Moderator: Paolo Provero (Dip. di Biotecnologie Molecolari e Scienze per la Salute)
Enzo Medico (Dip. di Oncologia)
Alberto Acquadro (Dip. di Scienze Agrarie, Forestali e Alimentari)
Moderator: Magda Fontana (Dip. di Economia e Statistica and Despina Big Data Lab)
Elvira Di Nardo (Dip. di Matematica)
Massimiliano Nuccio (Dip. di Economia e Statistica and Despina Big Data Lab)
Alberto Oddenino (Dip. di Giurisprudenza)
Moderator: Filippo Barbera (Dip. di Culture, Politica e Società)
Laura Sacerdote (Dip. di Matematica)
Antonio Canale (Dip. di Scienze economico-sociali e matematico-statistiche)
Giancarlo Ruffo (Dip. di Informatica)
Matteo Ruggiero (Dip. di Scienze economico-sociali e matematico-statistiche)
Moderator: Lorenzo Richiardi (Dip. di Scienze Mediche)
Marco Vincenti (Dip. di Chimica)
Mario Giacobini (Dip. di Scienze Veterinarie)
Moderator: Guido Boella (Dip. di Informatica)
– Sensory maps and the new science of cities
Rossano Schifanella (Dip. di Informatica)
Marcello Bogetti (Labnet)
Vincenzo Lombardo (Dip. di Informatica)
Vito Frontuto (Dip. di Economia e Statistica)
Moderator: Aldo Geuna (Dip. di Economia e Statistica and Collegio Carlo Alberto)
– Valerio Cencig (Chief Data Officer – Intesa S. Paolo)
– Stefano Gallo (ICT Director – Città della Salute e della Scienza)
– Roberto Moriondo (Director General – Comune di Novara)
– Daniela Paolotti (Research Leader – Fondazione ISI)
– Emilio Paolucci (BigData@Polito – Politecnico di Torino)
– Gian Paolo Zanetta (Città della Salute e della Scienza)
Moderator: Pietro Terna (President – Collegio Carlo Alberto)
– Giuseppina De Santis (Councillor for Productive Activities and Innovation – Regione Piemonte)
– Francesca Leon (Councillor for Culture – Comune di Torino)
– Paola Pisano (Councillor for Innovation – Comune di Torino)
A poster session set up at the entrance will present some of the research completed or in progress at the various departments of the University of Torino.
1st Regional DIMA-HUB Workshop
November 3rd, 2016
Centro Congressi Unione Industriale di Torino
Don’t miss the HPC session (moderated by Marco Aldinucci) and the CPS session (moderated by Enrico Bini).
DIMA-HUB is a feasibility study positioned within the mentoring and sponsorship programme of the I4MS initiative (Phase 2), aiming to extend the geographical coverage of the I4MS ecosystem by establishing a Regional Digital Innovation (RDMI) hub in the Piedmont Region, Italy. The mission of the hub is to foster and accelerate technology advances while supporting start-ups and SMEs on their digital transformation path, connecting them with firms and competence centres within and outside the Piedmont Region.
The consortium consists of five members, three of which specialize in research and development on digital manufacturing, while the remaining two are regional innovation clusters:
More information at DIMA-HUB home page
Paolo Inaudi is the recipient of the “best MSc thesis of the year” award 2015/16 in Computer Science at the University of Torino (the so-called “medaglia d’argento” of the University of Torino). Paolo graduated with a thesis entitled “Design and development of a libfabric provider for the Ronniee/A3Cube high-performance network”.
Paolo graduated exactly on time, with full marks in all his exams. During his MSc thesis, Paolo developed an almost complete libfabric provider, which is available on GitHub under the LGPLv3. The work was made possible thanks to the direct support of A3Cube Inc. (under the UNITO-A3Cube MoU).
Notably, Paolo is the third MSc student in a row over the last 5 years to achieve the best MSc thesis of the year in Computer Science at the University of Torino with a thesis in parallel computing within the alpha group.
The new Competence Centre for Scientific Computing at the University of Torino and INFN Torino (C3S@UNITO) is finally opening this week. The inauguration workshop will take place on October 7, 2016 at the main theatre of the Campus Luigi Einaudi. The centre involves over 16 departments of the University of Torino and hosts the brand new OCCAM platform.
The program and the (free) registration form are available here. Everybody is invited.
The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multi-purpose, flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Torino branch of the Istituto Nazionale di Fisica Nucleare. It aims to provide a flexible, reconfigurable and extendable infrastructure catering to a wide range of scientific computing needs, as well as a platform for R&D activities on computational technologies themselves. Extending it with novel CPU architectures, accelerators or hybrid microarchitectures (such as the forthcoming Intel Xeon Phi Knights Landing) should be as simple as plugging a node into a rack.
The initial system counts slightly more than 1100 CPU cores and includes different types of computing nodes (standard dual-socket nodes, large quad-socket nodes with 768 GB of RAM, and multi-GPU nodes) and two separate disk storage subsystems: a smaller high-performance scratch area, based on the Lustre file system, intended for direct computational I/O, and a larger one, on the order of 1 PB, for near-line data archival. All components of the system are interconnected through a 10 Gb/s Ethernet layer with a one-level topology and an InfiniBand FDR 56 Gb/s layer with a fat-tree topology.
A system of this kind, heterogeneous and reconfigurable by design, poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affects the methods and means used to allocate, manage, optimize, bill and monitor VMs, virtual farms, jobs and interactive bare-metal sessions.
The workshop indeed covers some of the use cases that prompted the design and construction of the HPC cluster, its architecture, and a first characterisation of its performance through synthetic benchmark tools and a few realistic use-case tests.
More technical details at CHEP 2016: M. Aldinucci, S. Bagnasco, S. Lusso, P. Pasteris, and S. Rabellino, “The Open Computing Cluster for Advanced data Manipulation (OCCAM),” in The 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP), San Francisco, USA, 2016.
The EU FP7 REPARA project @reparaproject is now complete. Running for three years (2013-2016) with a total cost of 3.5M €, it was rated “excellent” at the mid-term EU review, demonstrating its scientific value. Among its other results, the REPARA project paves the way for efficient yet easy-to-use parallel patterns in standard C++.