Document Type
Article
Publication Date
2024
Publication Title
Journal of Machine Learning Research
Volume
25
Pages
1-45
Abstract
Sparsity of a learning solution is a desirable feature in machine learning. Certain reproducing kernel Banach spaces (RKBSs) are appropriate hypothesis spaces for sparse learning methods. The goal of this paper is to understand what kinds of RKBSs promote sparsity of learning solutions. We consider two typical learning models in an RKBS: the minimum norm interpolation (MNI) problem and the regularization problem. We first establish an explicit representer theorem for solutions of these problems, which represents the extreme points of the solution set as linear combinations of the extreme points of the subdifferential set of the norm function, which is data-dependent. We then propose sufficient conditions on the RKBS under which this explicit representation can be transformed into a sparse kernel representation having fewer terms than the number of observed data points. Under the proposed sufficient conditions, we investigate the role of the regularization parameter in the sparsity of the regularized solutions. We further show that two specific RKBSs, the sequence space ℓ¹(ℕ) and the space of measures, admit sparse representer theorems for both the MNI and regularization models.
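For reference, the two learning models named above take the following standard forms; this is a hedged sketch in generic notation (hypothesis space ℬ, data (x_j, y_j), loss L), and the paper's precise loss and norm assumptions may differ:

```latex
% Minimum norm interpolation (MNI) in an RKBS \mathcal{B}:
\min_{f \in \mathcal{B}} \ \|f\|_{\mathcal{B}}
\quad \text{subject to} \quad f(x_j) = y_j, \quad j = 1, \dots, m.

% Regularization with loss L and regularization parameter \lambda > 0:
\min_{f \in \mathcal{B}} \ \sum_{j=1}^{m} L\bigl(f(x_j), y_j\bigr)
  + \lambda \, \|f\|_{\mathcal{B}}.
```

A sparse representer theorem expresses a solution of either problem as a kernel expansion whose number of nonzero terms can be smaller than the number m of observations.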
Rights
© 2024 Rui Wang, Yuesheng Xu, and Mingsong Yan
Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.
Original Publication Citation
Wang, R., Xu, Y. S., & Yan, M. S. (2024). Sparse representer theorems for learning in reproducing kernel Banach spaces. Journal of Machine Learning Research, 25, 1-45, Article 93. https://www.jmlr.org/papers/v25/23-0645.html
Repository Citation
Wang, Rui; Xu, Yuesheng; and Yan, Mingsong, "Sparse Representer Theorems for Learning in Reproducing Kernel Banach Spaces" (2024). Mathematics & Statistics Faculty Publications. 253.
https://digitalcommons.odu.edu/mathstat_fac_pubs/253