December 7, 2017, 11:00–12:15
Room MS 001
For the stable solution of ill-posed inverse problems, it is necessary to incorporate a priori knowledge about both the solution and the noise into the solution process. A typical assumption about the solution is its regularity, which can for instance be measured with respect to some homogeneous Sobolev norm. The noise, in contrast, results from unavoidable errors in the measurement process and can often be realistically modelled as the realisation of some (e.g. i.i.d. Gaussian) random process. As a consequence, it makes sense to minimise some Sobolev norm of the approximate solution subject to the constraint that the residual behaves like a typical sample of an i.i.d. Gaussian random variable.

In this talk we consider a specific variational regularisation method based on this construction. The main idea is to measure the size of the residual using the multi-resolution norm, which is defined as the maximum of weighted local averages over a family of subsets of the domain of the residual. This norm scales with the logarithm of the number of measurements when applied to samples of an i.i.d. Gaussian random variable, but with the square root of the number of measurements for samples of a continuous function. Moreover, it is possible to derive interpolation inequalities involving the multi-resolution norm and different homogeneous Sobolev norms. These interpolation inequalities then allow for the derivation of convergence rates in expectation for the reconstruction as the number of measurements tends to infinity. These rates turn out to be asymptotically almost optimal, provided that the true solution is sufficiently smooth and the operator to be inverted has a well-defined degree of ill-posedness.

This is joint work with Housen Li (National University of Defense Technology) and Axel Munk (University of Göttingen).
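A rough sketch of the construction described above; the notation here (the operator $K$, the grid points $x_i$, the system of subsets $\mathcal{S}$, and the threshold $\gamma_n$) is assumed for illustration and is not taken verbatim from the talk:

```latex
% Illustrative setup (all symbols are assumptions, not the speakers' notation).
% Data model: noisy measurements of Ku at n grid points,
%   Y_i = (Ku)(x_i) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0,\sigma^2) \text{ i.i.d.}
% Multi-resolution norm: maximum of weighted local averages over a family
% \mathcal{S} of subsets of the measurement domain:
\|v\|_{\mathrm{MR}} \;=\; \max_{S \in \mathcal{S}} \,
  \frac{1}{\sqrt{|S|}} \Bigl| \sum_{i \in S} v_i \Bigr|
% Variational estimator: minimise a homogeneous Sobolev norm subject to a
% multi-resolution constraint on the residual, with \gamma_n on the order of
% \sqrt{\log n} so that true Gaussian noise satisfies the constraint with
% high probability:
\hat{u} \;\in\; \operatorname*{arg\,min}_{u} \; \|u\|_{\dot{H}^s}
  \quad \text{subject to} \quad \|Y - Ku\|_{\mathrm{MR}} \le \gamma_n .
```

Under this normalisation, the $\sqrt{\log n}$ versus $\sqrt{n}$ scaling contrast mentioned in the abstract is what makes the constraint discriminate between pure noise and a residual that still contains signal.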