Tutorial sessions will take place on Monday, 11th September, 2023 at the same venue as the workshop, i.e. the NTNU campus in Gjøvik; see the map below.
Tutorial 1
Davit Gigilashvili
NTNU, Norway
Aditya Sole
NTNU, Norway
Hard and soft metrology challenges in Material Appearance
Time: 09:30
Date: Monday, 11th September, 2023
Location: Ametyst building, room A154
Abstract
Proper measurement, reproduction, and communication of how objects and materials look are of utmost importance in various fields, such as additive manufacturing, computer graphics, and aesthetic medicine. While colour measurement, perception, and reproduction have attracted a lot of scholarly attention, research on capturing and reproducing overall appearance – which, in addition to colour, also includes gloss, translucency, and texture – remains in its infancy. Appearance can be measured using hard metrology, i.e. measuring optical material properties instrumentally, as well as using soft metrology, i.e. measuring perceptual attributes based on human responses. The link between the two remains largely unclear. This tutorial will present the challenges of both hard and soft metrology approaches to appearance measurement, and the practical implications of the existing gap between the two. The presentation will be followed by a practical hands-on tutorial comprising a) instrumental measurement of optical properties and b) a demonstration of appearance reproduction using Augmented Reality. The presentation will consist of two talks and last for 1.5 hours, while the hands-on demonstration will be 1 hour long.
Speaker’s Bio
Davit Gigilashvili received his BSc degree in Information Technologies from the University of Georgia (2015), a joint MSc in Color and Applied Computer Science from Jean Monnet University, University of Granada, and NTNU (2017), and PhD in Computer Science also from NTNU (2021). Currently he is a postdoctoral researcher at NTNU’s Colourlab. His research interests include material appearance and virtual reconstruction of cultural heritage. He has taught multiple courses and co-authored more than 20 publications on material appearance perception and reproduction.
Speaker’s Bio
Aditya Sole received his Bachelors in Engineering in Printing Technology from the PVG’s College of Engineering and Technology at Pune University, Pune, India (2005), MSc in Digital Colour Imaging from London College of Communication, University of the Arts London, London, UK (2006), and PhD in Computer Science from the Norwegian University of Science and Technology, Gjøvik, Norway (2019). Currently he works as an associate professor at the Department of Computer Science at NTNU’s Colourlab. His current research interests include measuring and understanding visual appearance, graphical 3D printing, and bidirectional reflectance measurements. Aditya teaches at both bachelor and master degree levels at NTNU and has co-authored a number of publications within the fields of colour imaging, material appearance measurement and understanding, and bidirectional reflectance measurements.
Tutorial 2
Joseph Suresh Paul
School of Informatics, Kerala University, India
Recent trends in MRI reconstruction
Time: 09:30
Date: Monday, 11th September, 2023
Location: Kobolt building, room K109
Abstract
The tutorial covers recent trends in MRI reconstruction. MRI reconstruction is the process of generating an image from the raw data acquired during an MRI scan. It can be considered an ill-posed problem because it involves the inversion of a non-linear and ill-conditioned mathematical model, which maps the underlying anatomy of the patient to the acquired MRI signal. The inversion process is complicated by a number of factors, including noise in the acquired data, incomplete sampling of k-space, and artifacts caused by patient motion, field inhomogeneities, and other sources.
In MRI reconstruction, small errors in the acquired data can lead to significant artifacts and distortions in the reconstructed image, which can affect the accuracy of the diagnosis. MRI reconstruction is also a computationally intensive task that can require a significant amount of memory, especially for large data sets, and can be very time-consuming.
There are several MRI reconstruction algorithms applied to different types of accelerated MRI acquisition. These include partial Fourier acquisition, parallel imaging, compressed sensing, and non-Cartesian MRI acquisition. The tutorial mainly focuses on reconstruction methods for non-Cartesian MRI acquisition. The process of creating an image from raw MRI data recorded using a non-Cartesian k-space sampling pattern, such as radial or spiral sampling, is known as non-Cartesian MRI reconstruction. The Nyquist sampling requirement, which is necessary for acquisition on a Cartesian grid, is not satisfied by the non-uniformly sampled k-space data, which makes non-Cartesian MRI reconstruction a challenging problem. However, non-Cartesian MRI reconstruction has the potential to improve imaging speed and resolution in various applications, such as real-time MRI and high-resolution imaging of small structures.
The tutorial will provide a detailed view of the following topics related to non-Cartesian MRI reconstruction:
- The evolution of algorithms for non-Cartesian MRI reconstruction from the early 2000s to the present.
- The problem of tuning free parameters to balance reconstruction speed and accuracy.
- Types of model-based optimization techniques used for reconstruction and their implementation.
- Dealing with large encoding matrices and preconditioning during the reconstruction process.
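As a flavour of what non-Cartesian reconstruction involves, the sketch below is a deliberately simplified toy example (not a method from the tutorial): it simulates radially sampled k-space data for a tiny phantom, applies a ramp density-compensation weighting, grids the samples onto a Cartesian grid by nearest-neighbour assignment, and recovers an image with an inverse FFT. All names and sizes here are illustrative assumptions; practical pipelines use proper NUFFT gridding with interpolation kernels.

```python
import numpy as np

# Toy phantom: a bright square on a 32 x 32 grid
N = 32
x = np.zeros((N, N))
x[12:20, 12:20] = 1.0

# Radial k-space trajectory: spokes through the centre (cycles/pixel)
n_spokes, n_read = 64, 64
angles = np.linspace(0, np.pi, n_spokes, endpoint=False)
r = np.linspace(-0.5, 0.5, n_read, endpoint=False)
kx = np.outer(np.cos(angles), r).ravel()
ky = np.outer(np.sin(angles), r).ravel()

# "Acquire" samples with an explicit (slow) DFT at the trajectory points
ix, iy = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
A = np.exp(-2j * np.pi * (np.outer(kx, ix.ravel()) + np.outer(ky, iy.ravel())))
data = A @ x.ravel()

# Density compensation: radial samples cluster at the centre (ramp weights)
dcf = np.abs(np.tile(r, n_spokes))
dcf[dcf == 0] = r[1] - r[0]  # avoid zero weight at DC

# Nearest-neighbour gridding onto a Cartesian k-space grid
grid = np.zeros((N, N), dtype=complex)
gx = np.clip(np.round((kx + 0.5) * N).astype(int), 0, N - 1)
gy = np.clip(np.round((ky + 0.5) * N).astype(int), 0, N - 1)
np.add.at(grid, (gx, gy), data * dcf)

# Inverse FFT back to image space (ifftshift moves DC to the corner)
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(grid)))
```

The square phantom reappears in `recon`, blurred by the crude interpolation; the tutorial's model-based methods replace this one-shot adjoint with iterative optimisation over the encoding matrix.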
Speaker’s Bio
Joseph Suresh Paul is a Professor in the School of Informatics at the Kerala University of Digital Sciences, Innovation and Technology (formerly IIITM-K), Thiruvananthapuram, India. He obtained his Masters and Ph.D. degrees in Electrical Engineering from the Indian Institute of Technology, Madras, India, in 2000. Previously, he held postdoctoral positions at the Johns Hopkins University School of Medicine, Baltimore, MD, USA, and faculty positions at the National University of Singapore and the University of New South Wales, Sydney, Australia. His current interests include machine learning for compressed sensing, image reconstruction for parallel MRI, neuroimaging, and image processing for diagnostic applications.
Tutorial 3
Maria Vanrell
Universitat Autònoma de Barcelona, Spain
Color in computer vision
Time: 13:30
Date: Monday, 11th September, 2023
Location: Beryll building, room B211
Abstract
The tutorial introduces the colour image formation model considered in computer vision, and the problems that have been addressed in the field from a computational point of view, but with perceptual considerations. The problems are introduced through: (a) colour constancy as a pre-processing step; (b) how colour texture can be represented, going from a point to a feature; and (c) the role of colour in global image understanding, where colour descriptors are reviewed from flat to deep descriptors. The presentation will consist of two talks and last for 1.5 hours, while the hands-on demonstration will be 2 hours long.
Outline:
- Basic definitions of Colour
- Image formation for Computer Vision
- Dichromatic Reflection model and the usual assumptions in CV
- From the colour of a point to Colour texture
- Colour Constancy
- Colour for image understanding
- Flat colour texture descriptors: Jointly or separately?
- Deep colour descriptors using Colour Selectivity Index
  - A lot of colour-selective neurons through all layers
  - Neurons with strong entanglement between colour and shape
  - Correlation with the colour distribution of the dataset
  - Parallelisms with human vision
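To make the colour constancy item above concrete, here is a minimal sketch of the classic grey-world algorithm, one of the simplest pre-processing approaches of the kind the tutorial discusses. It assumes the average scene reflectance is achromatic, so each channel's mean estimates the illuminant; the function name and the toy scene are illustrative, not taken from the tutorial material.

```python
import numpy as np

def grey_world(img):
    """Grey-world colour constancy for an H x W x 3 float image."""
    illuminant = img.reshape(-1, 3).mean(axis=0)  # per-channel mean estimates the cast
    corrected = img / illuminant                  # divide out the estimated illuminant
    return corrected * illuminant.mean()          # rescale to preserve overall brightness

# A neutral grey scene rendered under a reddish illuminant
scene = np.full((4, 4, 3), 0.5)
cast = scene * np.array([1.2, 1.0, 0.8])

out = grey_world(cast)  # the colour cast is removed; the scene is grey again
```

For this synthetic scene the correction is exact and `out` returns to a uniform 0.5; on real images, grey-world fails whenever a single colour dominates the scene, which is one motivation for the more sophisticated, perceptually grounded methods covered in the tutorial.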
Speaker’s Bio
Maria Vanrell is Full Professor at the Computer Science Department of the Universitat Autònoma de Barcelona (UAB). She graduated in Computer Engineering in 1990 and received a Ph.D. from the same university in 1996. She is currently attached to the Computer Vision Centre, where she created the Colour in Context research group and is head of projects. Her main publications essentially address Colour in Computer Vision, with some focus on bio-inspired considerations for computational colour models. Her main contributions are to the problems of predicting colour saliency, naming, texture, induction, constancy, segmentation, recognition, and decomposition into intrinsic components from a single image. Currently, she is the director of the Master in Computer Vision in Barcelona, which is run jointly by 5 different universities.
Tutorial 4
Peter Eisert
Humboldt University Berlin & Fraunhofer HHI, Germany
From Volumetric Video to Interactive Virtual Humans
Time: 13:30
Date: Monday, 11th September, 2023
Location: Gneis building, room G303
Abstract
The use of immersive media and extended reality (XR) applications is rapidly increasing in many industries. The digital modeling of humans holds enormous potential for the development of innovative applications, as human models can be used as natural and intuitive interfaces. Here, photo-realistic modelling and rendering of virtual humans is extremely important, as the human body and face are highly complex and exhibit large shape variability, but also, especially, because humans are extremely sensitive to how other humans look. Volumetric video technology can build highly accurate and photorealistic reconstructions of humans but does not allow animation and interactivity.
In this tutorial, we will address the creation of high-quality animatable models from volumetric video data using modern AI technologies. We will look into various methods for the creation of volumetric video, represented by geometry, appearance, and shading. Standard volumetric video typically shows high realism but is restricted to replay with free-viewpoint rendering capabilities. In order to use such representations in interactive environments like VR/AR scenes, games, virtual assistants, or video communication scenarios, they need to be animatable or editable, e.g. to adapt head pose or gaze direction, or to create novel animated assets from the captured content. Here, we will show how to create representations that enable animation, editing, or relighting. In recent years, there has also been a breakthrough in render quality through AI methods for synthesis, such as neural rendering and implicit representations like NeRFs. We will show how these techniques can be used for high-quality rendering of virtual humans and how interaction with such representations can be established. Finally, we will showcase different applications and demonstrate their enormous potential for modern XR applications.
Speaker’s Bio
Peter Eisert is Professor for Visual Computing at Humboldt University Berlin and heads the Vision & Imaging Department at Fraunhofer HHI. In 2000, he received his PhD “with highest honors” from the University of Erlangen. In 2001, he worked as a postdoctoral fellow at Stanford University. He has coordinated and initiated numerous national and international third-party funded projects with a total budget of more than 25 million euros. He has published more than 250 papers and is Associate Editor of the International Journal of Image and Video Processing as well as the Journal of Visual Communication and Image Representation. His research interests include 3D image analysis and synthesis, face and body processing, computer vision, computer graphics, as well as deep learning in application areas like multimedia, production, security, and medicine.