Maurizio’s Poster awarded at PNNL PoGo Symposium

On Aug. 21, Discovery Hall at PNNL hosted the 10th Annual Post Graduate Research Symposium, where post graduates have the opportunity to present their work. Judges score both oral and poster presentations in each degree category (i.e., Post Bachelors, Post Masters, and Post Doctorate).

Bo Peng earned the Post Doctorate Top Oral Presentation award for “The Impact of Secondary Phase on SCR Catalyst Durability.” Maurizio Drocco claimed the Post Doctorate Top Poster Presentation award for his “Dynamic Allocation Schemas in Task-Based Runtimes.” Kuan-Ting Lin won the Post Masters poster category with his “Renewable Polymer Precursor(s) from Acetaldehyde Condensation.” Hope Lackey won the Post Bachelors poster category with his “Elemental Separation of Fourteen Lanthanides via Isotachophoresis in a Microfluidic Device.”

Past and present Linus Pauling fellows presented their research at the 2018 Symposium.
Terms of use: the images are freely and publicly available with the credit line “Andrea Starr | Pacific Northwest National Laboratory”; please use the provided caption information in the appropriate context.

The 13M€ EU H2020 DeepHealth project is heading to kick-off

DeepHealth:  Deep-Learning and HPC to Boost Biomedical Applications for Health

EU H2020 ICT-11-2018 Innovation Action – Total cost 12.8M€, 2019-2021 (36 months)

The Department of Computer Science of the University of Turin, together with the University's departments of neuroscience and medical sciences, the consortium of public hospitals of the city of Turin (Città della Salute e della Scienza), and 20 other European academic and industrial partners, is joining forces to push Artificial Intelligence methods, supported by High-Performance Computing, towards superhuman precision levels in supporting the diagnosis, monitoring, and treatment of diseases.

As the parallel computing research group, we will support DeepHealth with a novel platform for training deep networks on medical images. The platform will be high-performance, easy to use, fully open, and available to the whole national research community through the HPC cloud ecosystem at HPC4AI (Competence Centre for High-Performance and Artificial Intelligence, Turin).

Excited.


DeepHealth:  Deep-Learning and HPC to Boost Biomedical Applications for Health

Health scientific discovery and innovation are expected to quickly move forward under the so-called “fourth paradigm of science”, which relies on unifying the traditionally separated and heterogeneous high-performance computing and big data analytics environments.
Under this paradigm, the DeepHealth project will put HPC computing power at the service of biomedical applications, and apply Deep Learning (DL) techniques to large and complex biomedical datasets to support new and more efficient ways of diagnosing, monitoring, and treating diseases.

DeepHealth will develop a flexible and scalable framework for the combined HPC + Big Data environment, based on two new libraries: the European Distributed Deep Learning Library (EDDLL) and the European Computer Vision Library (ECVL). The framework will be validated in 14 use cases that will make it possible to train models on data from different medical areas (migraine, dementia, depression, etc.). The resulting trained models and the libraries will be integrated and validated in 7 existing biomedical software platforms, including: a) commercial platforms (e.g., the PHILIPS Clinical Decision Support System or THALES PIAF); and b) research-oriented platforms (e.g., CEA's ExpressIF™ or CRS4's Digital Pathology). Impact is measured by tracking the time-to-model-in-production (TTMIP).

Through this approach, DeepHealth will also tailor HPC resources to the needs of DL applications, and underpin the compatibility and uniformity of the set of tools used by medical staff and expert users. The final DeepHealth solution will be compatible with HPC infrastructures ranging from those in supercomputing centres to those in hospitals.
DeepHealth involves 21 partners from 9 European countries, gathering a multidisciplinary group of research organisations (9), health organisations (4), large industrial partners (4), and SMEs (4), with a strong commitment towards innovation, exploitation, and sustainability.

Maurizio got his PhD and is now a postdoctoral researcher at Pacific Northwest National Laboratory (PNNL)

Maurizio Drocco got his PhD in Computer Science at the University of Torino in October 2017, defending his thesis entitled “Parallel Programming with Global Asynchronous Memory: Models, C++ APIs and Implementations”. Tomorrow he will start a new adventure as a postdoctoral researcher at Pacific Northwest National Laboratory (PNNL), WA, USA.

Congratulations, Maurizio. It has been a pleasure working with you for the past 4 years.

M. Drocco, “Parallel Programming with Global Asynchronous Memory: Models, C++ APIs and Implementations,” PhD Thesis, 2017. doi:10.5281/zenodo.1037585

Viva snapshot

Viva committee: Prof. Jose Daniel Garcia Sanchez (UC3M, Madrid, Spain), Prof. Marco Danelutto Damiani (Università di Pisa, Italy), Prof. Siegfried Benkner (University of Vienna, Austria).


Claudia got her PhD on data analytics and is now a scientist at IBM T.J. Watson

Claudia Misale got her PhD in Computer Science at the University of Torino on May 11, 2017, defending her thesis entitled “PiCo: A Domain-Specific Language for Data Analytics Pipelines”.

In her thesis, Claudia reviews and analyses state-of-the-art frameworks for data analytics, proposes a methodology to compare their expressiveness, and advocates the design of a novel C++ DSL for big data analytics (so-called PiCo: Pipeline Composition). Unlike Spark, Flink, and similar frameworks, PiCo is fully polymorphic and exhibits a clear separation between data and transformations. Together with a careful C++/FastFlow implementation, this eases application development: data scientists can experiment with pipelining different transformations without any need to adapt the data type (and its memory layout). Types are inferred along the transformation pipeline in a “fluent” programming style. The clear separation between transformations, data types, and memory layout makes it possible to genuinely optimise data movement, memory usage, and ultimately performance. Applications developed with PiCo exhibit a large, roughly 10x smaller memory footprint than their Spark/Flink equivalents. The fully C++/FastFlow runtime support makes it possible to generate the network of runtime-support processes directly from the data-processing pipeline, achieving the maximum scalability allowed by true data dependencies (well beyond the simple master-worker paradigm of Spark and Flink). Being written in C++11/14, PiCo is already open to hosting native GPU offloading, which paves the way for the convergence of analytics and machine learning. See more at DOI:10.5281/zenodo.579753
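To convey the flavour of this separation between data and transformations, here is a minimal fluent-pipeline sketch in Python. It is purely illustrative and is not PiCo's actual C++ API: the point is only that transformations are composed first, as values, and data flows in only when the pipeline is run.

```python
# Minimal fluent-pipeline sketch (illustrative only; NOT PiCo's real C++ API).
# Transformations are composed independently of any data source; data enters
# only when run() is called, so one pipeline can be reused on any iterable.

class Pipe:
    def __init__(self, stages=None):
        self.stages = stages or []

    def map(self, f):
        # Pipelines are immutable values: each call returns a new pipeline.
        return Pipe(self.stages + [("map", f)])

    def filter(self, p):
        return Pipe(self.stages + [("filter", p)])

    def run(self, source):
        # Data is attached only here; stages are applied lazily, in order.
        data = iter(source)
        for kind, f in self.stages:
            data = map(f, data) if kind == "map" else filter(f, data)
        return list(data)

# One pipeline definition, reusable across different inputs.
clean = Pipe().map(str.strip).filter(lambda s: s)
print(clean.run(["  a ", "", " b"]))  # ['a', 'b']
```

Because the pipeline exists as a value before any data is attached, a runtime is free to analyse and optimise the whole composition, which is the idea behind PiCo's decoupling of transformations from data layout.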

Claudia is flying today to New York to start her career as a scientist at the IBM T.J. Watson research center, within the Data-Centric Systems Solutions group.

Congratulations, Claudia. It has been a pleasure working with you for the past 4 years.

Viva snapshots

Claudia-thesis

Claudia viva talk


Viva committee: Prof. Jose Daniel Garcia Sanchez (UC3M, Madrid, Spain), Prof. Ernesto Damiani (Khalifa University, Abu Dhabi, UAE), Prof. Domenico Talia (Università della Calabria, Italy).



OptiBike experiment funded under Fortissimo2 EU I4MS

Our project OptiBike (total cost €230K) has been incorporated into the Fortissimo2 EU I4MS project (n. 680481). It is also the first EU-funded project at the HPC laboratory of ICxT@UNITO.

We are looking forward to the kick-off.

 

Maximum loads and principal directions in Global Stiffness Load case.


OptiBike: Robust Lightweight Composite Bicycle design and optimization

In the current design process for composite materials, the effect of manufacturing uncertainty on the structural and dynamic performance of mechanical structures (aeronautic, automotive, and others) cannot be accurately assessed due to the limitations of current computational resources, and is therefore compensated for by applying safety factors. This non-ideal situation usually leads to overdesigned structures that could achieve higher performance at the same safety levels.
The objective of this experiment is to establish a design workflow and service for composite material modelling, simulation, and numerical optimization that uses uncertainty quantification and HPC to deliver high-performance, reliable composite material products. This design workflow can be applied to any composite structure, from aeronautics to bicycles, and gives SMEs easy-to-use, hassle-free access to advanced material design methodologies and the related HPC infrastructures that enable reliability-based design optimization (RBDO). To demonstrate this design workflow, a full composite bicycle will be designed and optimized based on real manufacturing data provided by IDEC, a Spanish SME with unique design and manufacturing capabilities for composite materials.
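As a rough illustration of the uncertainty-quantification idea behind RBDO, the sketch below estimates a failure probability by Monte Carlo sampling of a scattered material strength. All numbers are invented for illustration; they have nothing to do with IDEC's actual manufacturing data or OptiBike's workflow.

```python
import random

# Toy Monte Carlo reliability estimate (illustrative only; made-up parameters).
# A sample "fails" when the randomly drawn material strength falls below the
# applied load; RBDO would tune design variables until the estimated failure
# probability meets a target, instead of applying a blanket safety factor.

def failure_probability(mean_strength, std_strength, load, n=100_000, seed=42):
    rng = random.Random(seed)
    failures = sum(rng.gauss(mean_strength, std_strength) < load
                   for _ in range(n))
    return failures / n

# A stronger laminate (higher mean strength, same scatter) lowers the
# estimated failure probability for the same load.
p_baseline = failure_probability(mean_strength=100.0, std_strength=10.0, load=90.0)
p_improved = failure_probability(mean_strength=120.0, std_strength=10.0, load=90.0)
print(p_baseline, p_improved)
```

In the real experiment each "sample" would be an expensive composite simulation rather than a one-line draw, which is why HPC resources are needed to make this kind of analysis tractable.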

The expected results of this experiment are:

  1. a design optimization approach for composite materials by means of uncertainty quantification techniques for reliability and robustness;
  2. a design optimization service that can be used by every user to perform reliability based product optimization (this will be demonstrated for the bicycle case of IDEC).

Partners

  • Noesis Solutions, Belgium (coordinator)
  • IDEC, Spain
  • University of Torino (Alpha, ICxT and C3S), Italy
  • Arctur, Slovenia

 

C3S@UNITO will use INDIGO software tools

Adopting INDIGO-Data Cloud: the Scientific Computing Competence Centre of the University of Torino will use INDIGO software tools

The INDIGO-Data Cloud project is happy to announce that it signed a Memorandum of Understanding (MoU) with the Scientific Computing Competence Centre of the University of Torino (C3S).

A Memorandum of Understanding has just been signed.

C3S is a research centre that focuses on scientific computing technology and applications and manages OCCAM (Open Computing Cluster for Advanced data Manipulation), a multipurpose HPC cluster. Thanks to this agreement, a collaboration has been set up between C3S and the INDIGO-DataCloud project for the use and development of advanced tools for scientific computing, particularly for heterogeneous use-case management in an HPC infrastructure context.

C3S will have access to software tools developed by INDIGO-DataCloud and will be able to integrate them into the management layer of the OCCAM supercomputer. The INDIGO teams will collaborate with C3S in adapting and porting the tools to the specific use cases, giving support on a best-effort basis and providing, whenever feasible, patches and customisations for its software products.

“INDIGO-DataCloud aims at providing services that can be deployed on different computing platforms and enable the interoperability of heterogeneous e-infrastructures. C3S is a very interesting opportunity to test such capabilities and prove how our tools can really make the difference, providing seamless access, elasticity and scalability for the exploitation of data and computational resources”, says Giacinto Donvito, the Technical Director of the INDIGO-DataCloud project.

“We have a very wide variety of use cases, from traditional HPC in computational chemistry, physics and astrophysics to data-intensive genomics and computational biology all the way to social sciences and even the humanities, so we will have to use the best tools to accommodate them all. We trust that many INDIGO products will help us to improve the performance and usability of our centre” says Matteo Sereno, professor at the Department of Computer Science of the University of Torino and C3S Director.

http://cordis.europa.eu/news/rcn/138515_en.html

Technical information in:

M. Aldinucci, S. Bagnasco, S. Lusso, P. Pasteris, S. Vallero, and S. Rabellino, “The Open Computing Cluster for Advanced data Manipulation (OCCAM),” in The 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP), San Francisco, USA, 2016.

Research, innovation and education in the age of Big Data

The tumultuous development of digital technologies, and the rise of the Internet as a virtual world ever more integrated into everyday life, have exponentially increased the availability of information in every field of human knowledge. In this context, data science presents itself as a new, intrinsically interdisciplinary field of knowledge, capable of profoundly transforming the methods and impact of university research.

The University of Torino has quickly equipped itself for this epochal challenge and wants to publicly present its innovative capacity in theory, applied research and education, ranging from algorithms to big data, from the Internet of Things to machine learning.

Thanks to the support of the Collegio Carlo Alberto and the Compagnia di San Paolo, it has been possible to organise a one-day seminar in which researchers and lecturers present some of the most advanced research and discuss the opportunities (and risks) that new approaches offer to science, culture, society and the economy.

PROGRAMME

9:30      Institutional greetings

Gianmaria Ajani – Rector of the University of Torino

Francesco Profumo – President of the Compagnia di San Paolo

10:00    Introduction. Data science and enabling platforms

Marco Guerzoni (Dip. di Economia e Statistica and Despina Big Data Lab)

Marco Aldinucci (Dip. di Informatica and C3S)

10:15    The big data revolution in the life sciences

Moderator: Paolo Provero (Dip. di Biotecnologie Molecolari e Scienze per la Salute)

–       The digital revolution and personalised medicine in oncology

Enzo Medico (Dip. di Oncologia)

–       Big data and plant genomics

Alberto Acquadro (Dip. di Scienze Agrarie, Forestali e Alimentari)

11:00    Complexity unveiled: big data for the economy and business

Moderator: Magda Fontana (Dip. di Economia e Statistica and Despina Big Data Lab)

–       The degree of truth of data in decision-making processes

Elvira Di Nardo (Dip. di Matematica)

–       Predictive analytics and consumer behaviour

Massimiliano Nuccio (Dip. di Economia e Statistica and Despina Big Data Lab)

–       The regulatory challenges of big data, between legal standards and cross-border data flows

Alberto Oddenino (Dip. di Giurisprudenza)

11:50    Data science: tools and models for networks and applications

Moderator: Filippo Barbera (Dip. di Culture, Politica e Società)

–       What mathematical models are for: the case of network studies

Laura Sacerdote (Dip. di Matematica)

–       Statistical models for networks: applications to neuroscience

Antonio Canale (Dip. di Scienze Economico-Sociali e Matematico-Statistiche)

–       Network science: from data to applications

Giancarlo Ruffo (Dip. di Informatica)

12:40    Big data teaching programmes at the University of Torino

Matteo Ruggiero (Dip. di Scienze Economico-Sociali e Matematico-Statistiche)

13:00-14:00 Lunch

14:00    Life and health

Moderator: Lorenzo Richiardi (Dip. di Scienze Mediche)

–       Analytical-clinical big data for highly focused prevention and medical diagnostics

Marco Vincenti (Dip. di Chimica)

–       From data to models in veterinary epidemiology

Mario Giacobini (Dip. di Scienze Veterinarie)

14:45    Society, culture and smart cities

Moderator: Guido Boella (Dip. di Informatica)

–       Sensory maps and the new science of cities

Rossano Schifanella (Dip. di Informatica)

–       Social physics and big data at the service of citizens' well-being

Marcello Bogetti (Labnet)

–       Smart big data for the humanities

Vincenzo Lombardo (Dip. di Informatica)

–       Fairness and sustainability in energy consumption: an IoT approach

Vito Frontuto (Dip. di Economia e Statistica)

15:50    ROUND TABLE – Data-driven innovation: the big data ecosystem in Torino

Moderator: Aldo Geuna (Dip. di Economia e Statistica and Collegio Carlo Alberto)

–      Valerio Cencig (Chief Data Officer – Intesa Sanpaolo)

–      Stefano Gallo (ICT Director – Città della Salute e della Scienza)

–      Roberto Moriondo (Director General – Comune di Novara)

–      Daniela Paolotti (Research Leader – Fondazione ISI)

–      Emilio Paolucci (BigData@Polito – Politecnico di Torino)

–      Gian Paolo Zanetta (Città della Salute e della Scienza)

16:45    ROUND TABLE – Institutions and big data

Moderator: Pietro Terna (President – Collegio Carlo Alberto)

–      Giuseppina De Santis (Councillor for Productive Activities and Innovation – Regione Piemonte)

–      Francesca Leon (Councillor for Culture – Comune di Torino)

–      Paola Pisano (Councillor for Innovation – Comune di Torino)

A poster session at the entrance will present some of the research completed or in progress at the various departments of the University of Torino.

http://www.unitonews.it/index.php/en/event_detail/luniversita-di-torino-verso-il-futuro-ricerca-innovazione-e-formazione-al-tempo-dei-big-data

EU I4MS Digital Manufacturing HUB – 1st Workshop, Torino, Italy

1st Regional DIMA-HUB Workshop
November 3rd, 2016
Centro Congressi Unione Industriale di Torino

Don’t miss the HPC session (moderated by Marco Aldinucci) and the CPS session (moderated by Enrico Bini).

Register here


DIMA-HUB is a feasibility study positioned within the mentoring and sponsorship programme of the I4MS initiative (Phase 2), aiming to extend the geographical coverage of the I4MS ecosystem by establishing a Regional Digital Manufacturing Innovation (RDMI) hub in the Piedmont Region, Italy. The mission of the hub is to foster and accelerate technology advances, while supporting start-ups and SMEs on their digital transformation path by connecting them with firms and competence centres within and beyond the Piedmont Region.

Themes

  • Advanced laser-based applications (including additive manufacturing)
  • Internet of Things (IoT)
  • Robotics, Cyber Physical Systems (CPS)
  • High Performance Computing (HPC) 

 

The consortium consists of five members, three of which specialise in research and development on digital manufacturing, while the remaining two are regional innovation clusters:

  1. Politecnico di Torino
  2. Università di Torino
  3. MESAP
  4. Torino Wireless (TOWL)
  5. Istituto Superiore Mario Boella (ISMB)

More information at DIMA-HUB home page


Paolo Inaudi is the best MSc in CS@UNITO 2015/16 with a thesis on High-Performance libfabric

Paolo Inaudi is the recipient of the “best MSc thesis of the year” award for 2015/16 in Computer Science at the University of Torino (the so-called “medaglia d’argento” of the University of Torino). Paolo graduated with a thesis entitled “Design and development of a libfabric provider for the Ronniee/A3Cube high-performance network”.

Paolo graduated exactly on time, with full marks in all exams. During his MSc thesis work, Paolo developed an almost complete libfabric provider, which is available on GitHub under the LGPLv3. The work was made possible by direct support from A3Cube Inc. (under the UNITO-A3Cube MoU).

Paolo is the third MSc student in a row in the last 5 years to achieve the best MSc thesis of the year in Computer Science at the University of Torino with a thesis in parallel computing within the alpha group.

Congratulations Paolo!