The lab was founded in 1985, when the Image Processing course was included for the first time in the curriculum of the Master's Degree in Information Science. In 1990 the lab was named in honor of Giovanni Tamburelli, professor at the University of Turin. Over the years the lab has evolved along with technological progress. Nowadays it constitutes the research hub of the activities of the EIDOS group in the following areas of image processing and computer graphics:
Image processing
The research activities focus on the design of image processing algorithms for restoration, segmentation, feature extraction, and watermarking for data protection and integrity verification. The fields of application are image and video coding (both 2D and 3D), cultural heritage, and biomedical imaging.
Computer vision
The research activities are devoted to the development of computer vision algorithms, ranging from the design of interactive interfaces for 3D graphics applications (e.g., videogames) to industrial vision techniques for object recognition and to biometric and forensic applications for identity verification and attribution.
Virtual reality
In this area the lab's goal is to develop serious games using virtual reality for learning, training, and medical rehabilitation. Simulation in a virtual world allows simple interaction with objects that in the real world are far away in space and/or in time. Here all the previous competences come together in the design of innovative and complex virtual and augmented reality systems built on state-of-the-art graphics and physics simulation libraries.
Projects
Our latest works
deep learning
Pruning
Watermarking deep neural networks
health
Covid-19 Detection from CXRs
DeepHealth project
Colorectal Polyps Characterization
Lung Nodules Segmentation
Perfusion Maps Prediction
virtual reality
Virtual reality simulations in our CAVE
Pruning
Pruning deep neural networks
Deep Neural Networks (DNNs) can solve challenging tasks thanks to complex stacks of (convolutional) layers with millions of learnable parameters. DNNs are however challenging to deploy in scenarios where memory is limited (e.g., mobile devices), since their memory footprint grows linearly with the number of parameters. A number of strategies have been proposed to tackle this issue, including ad-hoc topology designs, parameter quantization and parameter pruning. Parameter pruning consists of dropping synapses between neurons, i.e., setting to zero part of the entries in the matrices representing the connections between layers. Concerning the choice of the parameters to prune, a number of different approaches have been proposed. Let us define the sensitivity of a parameter as the derivative of the network output(s) with respect to that parameter. It was shown that parameters with small sensitivity can be pruned from the topology with negligible impact on the network performance, outperforming approaches based on norm minimization. Concerning the network topology resulting from pruning, two different classes of strategies can be identified.
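To make the sensitivity criterion concrete, the following minimal PyTorch-style sketch estimates per-parameter sensitivities on a single batch and zeroes out the least sensitive weights. The function names, the single-batch estimate and the global pruning fraction are illustrative assumptions, not the implementation used in the referenced papers.

```python
import torch
import torch.nn as nn

def parameter_sensitivity(model: nn.Module, inputs: torch.Tensor) -> dict:
    """Estimate the sensitivity of each parameter as |d output / d parameter| on one batch."""
    model.zero_grad()
    outputs = model(inputs)
    # Back-propagating the sum of the outputs yields, for every parameter,
    # the derivative of the (summed) network outputs with respect to it.
    outputs.sum().backward()
    return {name: p.grad.detach().abs() for name, p in model.named_parameters()}

def prune_low_sensitivity(model: nn.Module, sensitivity: dict, fraction: float = 0.5) -> None:
    """Set to zero the given fraction of parameters with the smallest sensitivity."""
    scores = torch.cat([s.flatten() for s in sensitivity.values()])
    threshold = torch.quantile(scores, fraction)
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.mul_((sensitivity[name] > threshold).float())  # keep only the more sensitive weights
```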
Unstructured strategies aim at maximizing the pruning ratio, i.e., the number of parameters pruned from the network, regardless of the resulting topology. For example, LOBSTER (LOss-Based SensiTivity rEgulaRization) is a loss-based regularizer that drives some, but not all, parameters towards zero. It shrinks parameters for which the loss derivative is small, so that many parameters are first driven towards zero and then pruned with a threshold mechanism. In a number of scenarios, this method yields competitive results in terms of pruning ratio. The resulting connection matrices are however randomly sparse, i.e., they have no structure. Representing sparse matrices in a memory-efficient format is a non-trivial problem, thus high pruning ratios do not necessarily translate into reduced memory footprints.
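The shrink-then-threshold idea can be sketched as a single training step, shown below. The specific insensitivity weighting, the hyper-parameters (lr, lam, tau) and the function name are assumptions chosen for illustration; they do not reproduce the exact LOBSTER regularizer described in [4].

```python
import torch

def shrink_and_threshold_step(model, loss, lr=1e-3, lam=1e-4, tau=1e-3):
    """One update: gradient descent, extra shrinkage where the loss gradient is small,
    then hard-thresholding of parameters that have been driven close to zero."""
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            p.add_(p.grad, alpha=-lr)                   # usual gradient descent step
            insensitivity = 1.0 / (1.0 + p.grad.abs())  # close to 1 where the loss barely reacts
            p.mul_(1.0 - lam * insensitivity)           # shrink insensitive parameters harder
            p.mul_((p.abs() > tau).float())             # prune near-zero parameters
```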
Structured strategies aim at pruning parameters from the network while enforcing a constraint on the resulting topology. For example, SeReNe (Sensitivity-based Regularization of Neurons) is a method for learning sparse topologies with a structure, exploiting neural sensitivity as a regularizer. Here, the sensitivity of a neuron is defined as the variation of the network output with respect to the variation of the activity of the neuron. The lower the sensitivity of a neuron, the less the network output is perturbed if the neuron's output changes. This term is included in the cost function as a regularization term: in this way, SeReNe is able to prune entire neurons, at the cost of a somewhat lower pruning ratio.
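For contrast with the unstructured case, the sketch below removes entire neurons of a fully connected layer given a per-neuron score; the score, the keep ratio and the helper name are illustrative assumptions and do not reproduce the exact SeReNe procedure.

```python
import torch
import torch.nn as nn

def prune_neurons(layer: nn.Linear, neuron_scores: torch.Tensor, keep_ratio: float = 0.7) -> None:
    """Zero out whole output neurons (rows of the weight matrix) with the lowest scores.

    `neuron_scores` holds one value per output neuron, e.g. an estimate of how much
    the network output varies when that neuron's activation varies."""
    num_keep = max(1, int(keep_ratio * layer.out_features))
    keep = torch.topk(neuron_scores, num_keep).indices
    mask = torch.zeros(layer.out_features, dtype=torch.bool)
    mask[keep] = True
    with torch.no_grad():
        layer.weight[~mask] = 0.0    # drop all incoming connections of pruned neurons
        if layer.bias is not None:
            layer.bias[~mask] = 0.0  # and their biases
```

Because whole rows are zeroed, the pruned layer can later be shrunk to a smaller dense matrix, which is why structured sparsity translates more directly into memory and speed savings than random sparsity.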
References:
[1] Tartaglione, E., Lepsøy, S., Fiandrotti, A., & Francini, G. (2018). Learning Sparse Neural Networks via Sensitivity-Driven Regularization. NeurIPS.
[2] Tartaglione, E., Bragagnolo, A., & Grangetto, M. (2020). Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima. International Conference on Artificial Neural Networks. Springer, Cham.
[3] Tartaglione, E., Bragagnolo, A., Odierna, F., Fiandrotti, A., & Grangetto, M. (2021). SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks. arXiv preprint arXiv:2102.03773.
[4] Tartaglione, E., Bragagnolo, A., Fiandrotti, A., & Grangetto, M. (2020). LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks. arXiv preprint arXiv:2011.09905.
Watermarking deep neural networks
Delving in the loss landscape to embed robust watermarks into neural networks
In the last decade the use of artificial neural networks (ANNs) in many fields, such as image processing and speech recognition, has become common practice because of their effectiveness in solving complex tasks. However, in such a rush, very little attention has been paid to security aspects. In this work we explore the possibility of embedding a watermark into the ANN parameters.
We exploit the model's redundancy and adaptation capacity to lock a subset of its parameters to carry the watermark sequence. The watermark can be extracted in a simple way to claim copyright on models, but it can also be easily removed by attacks based on model fine-tuning.
To tackle this issue we devise a novel watermark-aware training strategy. We delve into the loss landscape to find a configuration of the parameters such that the watermark is robust to fine-tuning attacks on the watermarked parameters.
Our experimental results on classical ANN models trained on the well-known MNIST and CIFAR-10 datasets show that the proposed approach makes the embedded watermark robust to fine-tuning and compression attacks.
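As a toy illustration of how a bit string can be carried by a subset of weights, the sketch below embeds a binary watermark by forcing the sign of a few pseudo-randomly chosen parameters and then reads it back. The sign-based rule, the carrier selection and the magnitude are assumptions made for illustration; they do not reproduce the loss-landscape-aware embedding of the cited paper.

```python
import torch

def embed_watermark(params: torch.Tensor, bits: torch.Tensor, idx: torch.Tensor,
                    magnitude: float = 1e-2) -> None:
    """Force the sign of selected parameters to encode the bits (0 -> negative, 1 -> positive)."""
    with torch.no_grad():
        signs = bits.float() * 2.0 - 1.0            # map {0, 1} to {-1, +1}
        params.view(-1)[idx] = magnitude * signs

def extract_watermark(params: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:
    """Recover the bits from the signs of the carrier parameters."""
    return (params.view(-1)[idx] > 0).long()

# Hypothetical usage: the carrier positions are chosen pseudo-randomly from a secret seed.
g = torch.Generator().manual_seed(42)
weights = torch.randn(1000)                              # stands in for a layer's weights
idx = torch.randperm(weights.numel(), generator=g)[:64]  # 64 carrier positions
bits = torch.randint(0, 2, (64,), generator=g)           # 64-bit watermark
embed_watermark(weights, bits, idx)
assert torch.equal(extract_watermark(weights, idx), bits)
```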
References:
[1] Tartaglione, E., Grangetto, M., Cavagnino, D. & Botta, M. (2021). Delving in the loss landscape to embed robust watermarks into neural networks. International Conference on Pattern Recognition.
Covid-19 Detection from CXRs
References:
[1] E. Tartaglione, C. A. Barbano, C. Berzovini, M. Calandri, and M. Grangetto, "Unveiling COVID-19 from chest x-ray with deep learning: a hurdles race with small data," International Journal of Environmental Research and Public Health, vol. 17, no. 18, p. 6933, 2020.
[2] C. A. Barbano, E. Tartaglione, C. Berzovini, M. Calandri, and M. Grangetto, "A two-step explainable approach for COVID-19 computer-aided diagnosis from chest x-ray images," arXiv preprint arXiv:2101.10223, 2021.
DeepHealth project
The Deep-Learning and HPC to Boost Biomedical Applications for Health (DeepHealth) project is funded by the EC under the topic ICT-11-2018-2019 "HPC and Big Data enabled Large-scale Test-beds and Applications". DeepHealth is a 3-year project, kicked off in mid-January 2019 and expected to conclude its work in December 2021. The aim of DeepHealth is to offer a unified framework, completely adapted to exploit underlying heterogeneous HPC and Big Data architectures, and assembled with state-of-the-art techniques in Deep Learning and Computer Vision. In particular, the project combines High-Performance Computing (HPC) infrastructures with Deep Learning (DL) and Artificial Intelligence (AI) techniques to support biomedical applications that require the analysis of large and complex biomedical datasets, thus enabling new and more efficient ways of diagnosis, monitoring and treatment of diseases.
More information available at https://deephealth-project.eu.
Colorectal Polyps Characterization
The use case focuses on gastrointestinal pathology, specifically colon biopsies: these samples represent a cornerstone activity for any surgical pathology laboratory. The differential diagnosis includes a limited number of entities, mostly neoplastic (i.e., adenomas) and more rarely inflammatory. Histopathological characterization of colorectal polyps by pathologists is the major tool for deciding the subsequent clinical/therapeutic management of patients. The histological slides contain sections of the biological specimens stained with hematoxylin and eosin (H&E); H&E are chemical substances used to achieve visible colour contrast, allowing morphological diagnosis based on pattern recognition and the assessment of specific features.
Deep learning is being investigated by the scientific community as a possible tool to cut the overall laboratory workload, but also to improve the diagnostic and prognostic efficacy of histological examination. In this use case, some recent results in colorectal polyps classification will be taken into account and, if possible, used as a benchmark and independently validated. The medical experts participating in the use case selected six different classes for automatic classification, which represent the most common diagnoses and lead to different patient management.
The Pathology Unit of UNITO will provide a labeled set of whole-slide biopsy images that will be exploited by machine learning experts in the team for model training and testing on the DeepHealth ODH platform (PF5). [1], [2], [3]
[1] C. A. Barbano et al., "UniToPatho, a labeled histopathological dataset for colorectal polyps classification and adenoma dysplasia grading," arXiv preprint arXiv:2101.09991, 2021.
[2] D. Perlo, E. Tartaglione, L. Bertero, P. Cassoni, and M. Grangetto, "Dysplasia grading of colorectal polyps through CNN analysis of WSI," arXiv preprint arXiv:2102.05498, 2021.
[3] L. B. C. A. B. D. P. E. T. P. C. M. G. A. F. A. G. L. Cavallo, "UNITOPATHO," IEEE Dataport, 2021, doi: 10.21227/9fsv-tm25.
Lung Nodules Segmentation
On January 12th, 2021 at 15:00, Marco Grangetto together with the UNITO team (Marco Aldinucci, Barbara Cantalupo, Iacopo Colonnelli, Riccardo Renzulli, Enzo Tartaglione) presented the demo "Lung nodules segmentation in CT scans by DeepHealth toolkit" at ICPR2020.
The demo (https://www.micc.unifi.it/icpr2020/index.php/demos/), presented in the Session: “Medical and Industrial Imaging”, showed how a Deep Learning pipeline, whose goal is to train a model to recognize lung nodules from chest CT scans, can be executed on the hybrid HPC-Cloud infrastructure. Specifically, the OpenDeepHealth infrastructure available at the University of Torino was presented, along with the usage of the Deep Learning libraries developed in the DeepHealth EU project.
ICPR2020 is the flagship conference of IAPR, the International Association for Pattern Recognition, and the premier conference in pattern recognition, covering computer vision, image, sound, speech and sensor pattern processing, and machine intelligence (https://www.micc.unifi.it/icpr2020/). The conference took place from 10 to 15 January 2021 and, due to COVID-19, it was held entirely online on the UNDERLINE web platform. Only registered attendees could access the event.
Perfusion Maps Prediction
References:
[1] Gava, U. A., D’Agata, F., Tartaglione, E., Grangetto, M., Bertolino, F., Santonocito, A., … & Bergui, M. (2021). Neural Network-derived perfusion maps: a Model-free approach to computed tomography perfusion in patients with acute ischemic stroke. arXiv preprint arXiv:2101.05992.
CAVE@Unito
3D simulation with CAVE technology
Thanks to the "Human Social Science and Humanities (HSSH) With & For Industry 4.0" project funded by Regione Piemonte, it is possible to run 3D immersive simulations with CAVE technology.
Events
2022
EIDOS @ NEURIPS 2022
36th Conference on Neural Information Processing Systems (NeurIPS 2022)