Document Type

Article

Publication Date

2024

DOI

10.1137/24M162844X

Publication Title

SIAM/ASA Journal on Uncertainty Quantification

Volume

12

Issue

3

Pages

759-787

Abstract

Sparse Bayesian learning (SBL) models are extensively used in signal processing and machine learning to promote sparsity through hierarchical priors. The hyperparameters in SBL models are crucial to model performance, but they are often difficult to estimate due to the nonconvexity and high dimensionality of the associated objective function. This paper presents a comprehensive framework for hyperparameter estimation in SBL models, encompassing well-known algorithms such as the expectation-maximization, MacKay, and convex bounding algorithms. These algorithms are cohesively interpreted within an alternating minimization and linearization (AML) paradigm, each distinguished by its particular linearized surrogate function. Additionally, a novel algorithm within the AML framework is introduced and shown to be more efficient, especially at low signal-to-noise ratios. Efficiency is further improved by a new alternating minimization and quadratic approximation paradigm, which includes a proximal regularization term. The paper substantiates these advancements with a thorough convergence analysis and numerical experiments, demonstrating the algorithm's effectiveness under various noise conditions and signal-to-noise ratios.
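
For context, below is a minimal sketch of the classical EM-style hyperparameter update for SBL, one of the well-known algorithms the abstract places within the AML paradigm. It is illustrative only, not the paper's new algorithm; the function name `sbl_em_sketch`, the initialization, and the stopping rule are assumptions made for the example.

```python
import numpy as np

def sbl_em_sketch(Phi, y, n_iter=200, tol=1e-6):
    """Illustrative EM-style sparse Bayesian learning (SBL) update.

    Model assumed here: y = Phi @ w + noise, noise ~ N(0, sigma2 * I),
    with hierarchical prior w_i ~ N(0, 1 / alpha_i). The hyperparameters
    alpha (prior precisions) and sigma2 (noise variance) are the
    quantities whose estimation the paper studies; all defaults here
    are illustrative assumptions.
    """
    N, M = Phi.shape
    alpha = np.ones(M)                 # prior precisions (hyperparameters)
    sigma2 = 0.1 * np.var(y) + 1e-12   # rough initial noise-variance guess

    for _ in range(n_iter):
        # Posterior of w given the current hyperparameters (E-step).
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(alpha))
        mu = Sigma @ Phi.T @ y / sigma2

        # EM hyperparameter updates (M-step).
        diag_Sigma = np.diag(Sigma)
        alpha_new = 1.0 / (mu**2 + diag_Sigma)
        gamma = 1.0 - alpha * diag_Sigma   # "well-determined" degrees of freedom
        resid = y - Phi @ mu
        sigma2 = (resid @ resid + sigma2 * gamma.sum()) / N

        # Stop when the relative change in alpha is small.
        if np.max(np.abs(alpha_new - alpha) / alpha) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new

    return mu, alpha, sigma2
```

In a toy run, `mu, alpha, sigma2 = sbl_em_sketch(Phi, y)` returns the posterior mean of the weights along with the estimated hyperparameters; components whose precision alpha_i grows large are driven toward zero, which is how the hierarchical prior promotes sparsity.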

Rights

© 2024 by SIAM and ASA. All rights reserved. Unauthorized reproduction of this article is prohibited.

Included with the kind written permission of the copyright holders and the authors.

Original Publication Citation

Yu, F., Shen, L., & Song, G. (2024). Hyperparameter estimation for sparse Bayesian learning models. SIAM/ASA Journal on Uncertainty Quantification, 12(3), 759-787. https://doi.org/10.1137/24M162844X
