Consistency, Robustness and Sparsity for Learning Algorithms

Language
en
Document Type
Doctoral Thesis
Granting Institution
Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Naturwissenschaftliche Fakultät
Issue Date
2024
Authors
Roith, Tim
Abstract

This thesis is concerned with consistency, robustness and sparsity of supervised and semi-supervised learning algorithms.

For the latter, we consider the so-called Lipschitz learning task (cf. Nadler, Srebro, and Zhou, "Statistical analysis of semi-supervised learning: the limit of infinite unlabelled data", Advances in Neural Information Processing Systems 22, 2009), for which we prove Gamma-convergence and convergence rates of discrete solutions to their continuum counterparts in the infinite-data limit.
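
To make the two objects in this statement concrete, the following is a minimal sketch, not the thesis' exact notation, of the discrete Lipschitz learning problem on a weighted graph and a continuum counterpart of L-infinity type; the symbols V, w_xy, O, g and Omega are placeholders introduced here for illustration.

\[
  \text{(discrete)} \qquad \min_{u\colon V \to \mathbb{R}} \; \max_{x,y \in V} w_{xy}\,\lvert u(x) - u(y)\rvert \quad \text{subject to } u = g \text{ on } O \subset V,
\]
\[
  \text{(continuum)} \qquad \min_{u\colon \Omega \to \mathbb{R}} \; \operatorname*{ess\,sup}_{x \in \Omega} \lvert \nabla u(x)\rvert \quad \text{subject to } u = g \text{ on } O \subset \Omega.
\]

In this reading, Gamma-convergence of the discrete functionals to the continuum one is, roughly speaking, the consistency statement: minimizers of the graph problems converge to minimizers of the continuum problem as the amount of data grows.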

In the supervised regime, we deal with input robustness with respect to adversarial attacks and resolution changes. For the multi-resolution setting, we analyze the role of Fourier neural operators (Li et al., "Fourier neural operator for parametric partial differential equations", arXiv:2010.08895, 2020) and their connection to standard convolutional layers. Concerning the computational complexity of neural network training, we propose an algorithm based on Bregman iterations (Osher et al., "An iterative regularization method for total variation-based image restoration", Multiscale Modeling & Simulation 4(2), 2005) that maintains sparse weight matrices throughout training. We also provide a convergence analysis for the stochastic adaptation of the original Bregman iterations.
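
As an illustration of the Bregman-based training idea, the following is a hedged sketch of a stochastic linearized Bregman update with soft-thresholding; the function names, the parameters lam, tau, delta, and the initialization of the subgradient variable are assumptions made for this example and do not reproduce the thesis' exact algorithm.

import numpy as np

def soft_shrink(v, lam):
    # Proximal map of lam * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def stochastic_linearized_bregman(theta0, grad_batch, lam=0.1, tau=1e-2, delta=1.0, steps=1000):
    # Hedged sketch: theta0 is the initial parameter vector (numpy array),
    # grad_batch is a callable returning a stochastic gradient of the loss at theta.
    theta = theta0.copy()
    v = theta / delta + lam * np.sign(theta)  # a subgradient of lam*||.||_1 + ||.||^2/(2*delta) at theta0
    for _ in range(steps):
        v = v - tau * grad_batch(theta)       # stochastic gradient step on the subgradient (dual) variable
        theta = delta * soft_shrink(v, lam)   # primal parameters via soft-thresholding, hence sparse
    return theta

The point of such a construction is that the parameters are always the output of the proximal map of the l1-regularizer, so sparsity is present at every iterate rather than being enforced only after training.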
