November 17, 2016, 11:00–12:15
Toulouse
Room MC 202
MAD-Stat. Seminar
Abstract
For linear inverse problems $Y=\mathsf{A}\mu+\xi$, it is classical to recover the unknown function $\mu$ by an iterative scheme $(\widehat \mu^{(m)},\, m=0,1,\ldots)$ and to output $\widehat\mu^{(\tau)}$, where $\tau$ is some stopping rule. Stopping should be decided adaptively, that is, in a data-driven way, independently of the true function $\mu$. For deterministic noise $\xi$, the discrepancy principle is usually applied to determine $\tau$. In the context of stochastic noise $\xi$, we study oracle adaptation (that is, performance compared to the best possible stopping iteration). For a stopping rule based on the residual process, oracle adaptation bounds within a certain domain are established. For Sobolev balls, the domain of adaptivity matches a corresponding lower bound. The proofs use bias and variance transfer techniques from the weak prediction error to the strong $L^2$-error, as well as convexity arguments and concentration bounds for the stochastic part. The performance of our stopping rule for Landweber and spectral cutoff methods is illustrated numerically. (Joint work with Gilles Blanchard, Potsdam, and Marc Hoffmann, Paris)
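To make the setting concrete, the following is a minimal sketch (not the authors' actual procedure) of Landweber iteration for $Y = \mathsf{A}\mu + \xi$ with a residual-based early-stopping rule in the spirit of the discrepancy principle: iterate $\widehat\mu^{(m+1)} = \widehat\mu^{(m)} + \omega\,\mathsf{A}^\top(Y - \mathsf{A}\widehat\mu^{(m)})$ and stop at the first $m$ where the squared residual $\|Y - \mathsf{A}\widehat\mu^{(m)}\|^2$ drops below a threshold $\kappa$. The function name, the threshold parameter `kappa`, and the default step size are illustrative assumptions.

```python
import numpy as np

def landweber_early_stopping(A, Y, kappa, omega=None, max_iter=10_000):
    """Landweber iteration with a residual-based (discrepancy-type) stop.

    Iterates mu_{m+1} = mu_m + omega * A.T @ (Y - A @ mu_m), starting from
    mu_0 = 0, and stops at the first m with ||Y - A mu_m||^2 <= kappa.
    Returns the estimate and the stopping time tau.  Illustrative sketch only.
    """
    if omega is None:
        # Convergence requires 0 < omega < 2 / ||A||^2 (spectral norm squared).
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    mu = np.zeros(A.shape[1])
    for m in range(max_iter):
        residual = Y - A @ mu
        if residual @ residual <= kappa:  # data-driven stopping rule tau
            return mu, m
        mu = mu + omega * A.T @ residual
    return mu, max_iter
```

In practice $\kappa$ would be calibrated to the noise level of $\xi$; in the noiseless case the residual decays geometrically and the rule stops once the fit is sufficiently close.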