This course covers the mathematical aspects of image reconstruction, including inverse problems, measurement errors, and regularization strategies. It combines theoretical concepts with hands-on tutorials, introducing deep learning for inverse problems and a Bayesian approach, along with opportunities for self-guided learning.
This 7-hour course covers typical mathematical tasks and caveats of image reconstruction problems. We introduce the theoretical framework of inverse problems and show how different imaging modalities and measurement errors can lead to unfavorable reconstructions. Furthermore, we consider regularization strategies to overcome these issues, from both the theoretical and the practical side. This fundamental knowledge serves as a starting point for self-guided learning during and beyond the course time. Building on the classical regularization approaches, the course then gives an introduction to deep learning for inverse problems. Finally, we explore the Bayesian viewpoint, where we also address the problem of uncertainty quantification.
The workshop consists of a lecture and a hands-on tutorial part, during which the instructors are available for guidance, feedback and advice.
Learning goals
Lecture (1h):
- Understand mathematical tasks and caveats of inverse problems, with a special focus on image reconstruction problems.
- Recognize classical regularization approaches and their most relevant mathematical properties.
- Understand the key concepts and pitfalls of learning for inverse problems.
- Realize the importance of uncertainty quantification and the probabilistic/Bayesian approach to inverse problems.
Tutorial part 1 (1h): Introduction to Inverse Problems
In the first part of the tutorial, we briefly introduce the world of inverse problems. With the help of small code examples, we explore the practical implications of ill-posedness. Using existing implementations, we also learn about the Radon transform and how to use it in Python.
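To give a flavor of these code examples, here is a minimal sketch using the Radon transform implementation from scikit-image; the tutorial materials may rely on a different package, and the phantom, angle count and noise level are purely illustrative:

```python
# Minimal sketch: forward Radon transform and a naive inversion (scikit-image).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), scale=0.5)      # ground-truth phantom
angles = np.linspace(0.0, 180.0, 90, endpoint=False)   # projection angles
sinogram = radon(image, theta=angles)                  # forward operator A x

# Small measurement errors get amplified by the (approximate) inverse:
noisy_sinogram = sinogram + 0.5 * np.random.randn(*sinogram.shape)
reco_clean = iradon(sinogram, theta=angles)
reco_noisy = iradon(noisy_sinogram, theta=angles)
# Comparing reco_clean and reco_noisy illustrates the effect of ill-posedness.
```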
Tutorial part 2 (1h): Model-based Regularization
In the second part of the tutorial, we investigate regularization approaches numerically. After introducing the classic Tikhonov approach, we focus on sparsity-promoting regularization. Here, we consider wavelet denoising and methods based on the so-called Total Variation, which is a common choice of regularizer for natural images. All approaches are illustrated with hands-on coding examples, where the governing test problem is the CT reconstruction task.
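A rough sketch of what such experiments can look like is given below; the operator, test data and regularization weights are illustrative placeholders, and the Total Variation and wavelet steps use the scikit-image implementations rather than the tutorial's own code:

```python
# Hedged sketches of two regularization approaches on toy data.
import numpy as np
from skimage.restoration import denoise_tv_chambolle, denoise_wavelet

# (1) Tikhonov: closed-form minimizer of ||A x - y||^2 + alpha ||x||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 100))            # under-determined operator
x_true = np.zeros(100)
x_true[::10] = 1.0                            # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(80)
alpha = 0.1
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(100), A.T @ y)

# (2) Sparsity-promoting denoising of a noisy image via TV and wavelets
noisy = 0.5 * np.ones((64, 64)) + 0.1 * rng.standard_normal((64, 64))
img_tv = denoise_tv_chambolle(noisy, weight=0.1)
img_wav = denoise_wavelet(noisy)
```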
Tutorial part 3 (2h): Data Driven Approaches
In the third part of the tutorial, we introduce data-driven and deep-learning-based reconstruction approaches. We first learn about so-called post-processing strategies, in which a typical U-Net architecture is employed. Using a synthetic dataset made up of simple shapes, we train a U-Net to solve a limited-angle CT problem. Here, we identify both the great potential and the pitfalls of learning-based approaches. Finally, we consider plug-and-play methods, which combine model- and learning-based approaches.
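The following sketch indicates how such a post-processing network can be trained in PyTorch; the MiniUNet class, the random placeholder data and the hyperparameters are illustrative assumptions and do not reproduce the architecture or dataset used in the tutorial:

```python
# Sketch: a small U-Net-style network learns to map artifact-laden
# reconstructions (e.g. filtered backprojection) to clean images.
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x):
        e = self.enc(x)                         # encoder features
        m = self.up(self.mid(self.down(e)))     # coarse features, upsampled
        return x + self.dec(torch.cat([e, m], dim=1))  # residual correction

model = MiniUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder data standing in for FBP inputs and ground-truth targets
fbp = torch.randn(8, 1, 64, 64)
target = torch.randn(8, 1, 64, 64)

for step in range(5):                           # a few illustrative steps
    opt.zero_grad()
    loss = loss_fn(model(fbp), target)
    loss.backward()
    opt.step()
```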
Tutorial part 4 (2h): Uncertainty Quantification
In the fourth part of the tutorial, we consider the Bayesian viewpoint of inverse problems. We start by illustrating that, as the noise realization changes, the reconstructed image changes as well. This introduces the concepts of noise, prior and solution distributions. We explore sampling techniques and how they can help quantify uncertainties in reconstructions.
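As a simple illustration of this variability, the sketch below repeats a Tikhonov reconstruction for many noise realizations and inspects the pixel-wise spread; this is only a stand-in for the sampling techniques covered in the tutorial, and all names and parameters are illustrative:

```python
# Repeat a regularized reconstruction for many noise realizations and use
# the spread of the results as a crude proxy for uncertainty.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 100))
x_true = np.sin(np.linspace(0, 4 * np.pi, 100))
alpha, sigma = 0.1, 0.05
H = np.linalg.inv(A.T @ A + alpha * np.eye(100)) @ A.T   # Tikhonov inverse map

recos = []
for _ in range(200):
    y = A @ x_true + sigma * rng.standard_normal(80)     # new noise vector
    recos.append(H @ y)
recos = np.array(recos)

mean_reco = recos.mean(axis=0)   # average reconstruction
std_reco = recos.std(axis=0)     # spread: larger values = less certainty
```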
Prerequisites
To participate in this course, you need
- basic knowledge of coding in Python (as taught, e.g., in the course “First steps in Python”) and of the packages NumPy and PyTorch,
- to be able to run Python code in your own setup.
Target group
This course is mainly aimed at researchers and students interested in the theory and application of mathematical image reconstruction.