Working paper

Inference robust to outliers with L1‐norm penalization

Jad Beyhum

Abstract

This paper considers the problem of inference in a linear regression model with outliers, where the number of outliers can grow with the sample size but their proportion goes to 0. We apply an estimator penalizing the L1-norm of a random vector which is non-zero for outliers. We derive rates of convergence and asymptotic normality. Our estimator has the same asymptotic variance as the OLS estimator in the standard linear model. This enables building tests and confidence sets in the usual and simple manner. The proposed procedure is also computationally advantageous, as it amounts to solving a convex optimization program. Overall, the suggested approach constitutes a practical robust alternative to the ordinary least squares estimator.
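The estimator described in the abstract fits the model y = Xβ + α + ε, where α is a vector with one entry per observation that is non-zero only for outliers, and penalizes the L1-norm of α. The following sketch (not the paper's own code) illustrates one standard way to solve this convex program: block coordinate descent, alternating an exact OLS step in β with an exact soft-thresholding step in α. The function names, the fixed penalty level `lam`, and the simulated data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    # Componentwise soft-thresholding: exact minimizer of
    # 0.5*(z - a)^2 + t*|a| in a.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_ols_l1(X, y, lam, n_iter=200):
    """Sketch: minimize over (beta, alpha)
        0.5 * ||y - X beta - alpha||_2^2 + lam * ||alpha||_1
    by alternating exact minimization in each block.
    The objective is convex, so this converges to a global minimizer.
    """
    n, p = X.shape
    alpha = np.zeros(n)
    X_pinv = np.linalg.pinv(X)        # reused for every OLS step
    beta = X_pinv @ y                 # plain OLS initialization
    for _ in range(n_iter):
        alpha = soft_threshold(y - X @ beta, lam)  # exact min over alpha
        beta = X_pinv @ (y - alpha)                # exact min over beta
    return beta, alpha
```

Observations whose residual exceeds the threshold `lam` receive a non-zero alpha and are effectively downweighted in the OLS step, so a few large outliers no longer contaminate the slope estimate. In practice the penalty level would be chosen in a data-driven way rather than fixed as here.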

Keywords

robust regression; L1-norm penalization; unknown variance

Superseded by

Jad Beyhum, "Inference robust to outliers with L1-norm penalization", ESAIM: Probability and Statistics, vol. 24, November 2020, pp. 688–702.

Reference

Jad Beyhum, "Inference robust to outliers with L1-norm penalization", TSE Working Paper, no. 19-1032, August 2019.

Published in

TSE Working Paper, no. 19-1032, August 2019