Abstract
In appropriate frameworks, automatic differentiation is transparent to the user, at the cost of a significant computational burden when the number of operations is large. For iterative algorithms, implicit differentiation alleviates this issue but requires a custom implementation of Jacobian evaluation. In this paper, we study one-step differentiation, also known as Jacobian-free backpropagation, a method as easy as automatic differentiation and as performant as implicit differentiation for fast algorithms (e.g., superlinear optimization methods). We provide a complete theoretical approximation analysis with specific examples (Newton's method, gradient descent) along with its consequences in bilevel optimization. Several numerical examples illustrate the well-foundedness of the one-step estimator.
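As a rough illustration of the idea (a minimal sketch, not the paper's implementation): run the inner solver with gradients blocked, then backpropagate through a single extra application of the iteration map. The JAX sketch below uses a hypothetical one-dimensional inner problem, f(x, θ) = ½(x − θ)² + 0.1 x⁴, solved by Newton's method; the names `newton_step`, `solve`, and `x_star_one_step` are illustrative choices, not from the paper.

```python
import jax
import jax.numpy as jnp

# Toy inner problem (an assumption for illustration):
#   min_x f(x, theta),  f(x, theta) = 0.5*(x - theta)**2 + 0.1*x**4

def newton_step(x, theta):
    grad = (x - theta) + 0.4 * x**3   # f'(x, theta)
    hess = 1.0 + 1.2 * x**2           # f''(x, theta)
    return x - grad / hess            # one Newton iteration

def solve(theta, n_iter=20):
    # Run the solver; derivatives through this loop are discarded below.
    x = jnp.zeros_like(theta)
    for _ in range(n_iter):
        x = newton_step(x, theta)
    return x

def x_star_one_step(theta):
    # One-step differentiation (Jacobian-free backpropagation):
    # block gradients through the whole solve, then differentiate
    # a single application of the iteration map.
    x = jax.lax.stop_gradient(solve(theta))
    return newton_step(x, theta)

theta = 1.0
x = solve(theta)
one_step = jax.grad(x_star_one_step)(theta)
implicit = 1.0 / (1.0 + 1.2 * x**2)   # dx*/dtheta via the implicit function theorem
print(one_step, implicit)             # the two derivative estimates agree
```

For Newton's method, differentiating the last iteration at the fixed point recovers the implicit-differentiation Jacobian, which is the fast-algorithm regime the abstract refers to; for a slow method such as plain gradient descent, the same estimator is only an approximation.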
Reference
Jérôme Bolte, Edouard Pauwels and Samuel Vaiter, "One-step differentiation of iterative algorithms", in Advances in Neural Information Processing Systems 36, edited by A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt and S. Levine, 2023, pp. 77089–77103.
Published in
Advances in Neural Information Processing Systems 36, edited by A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt and S. Levine, 2023, pp. 77089–77103
