Bayesian Methodology for Adaptive Sparsity and Shrinkage in Regression

TBD, 2024

Bayesian and frequentist regularization techniques are routinely used to estimate regression models from data. However, most existing techniques require the practitioner to specify a priori whether the target is a dense or a sparse regression model. A recent proposal (the GLP prior distribution) overcomes this difficulty and bridges the sparse and dense model regimes. This paper introduces a novel prior distribution (ACSS) that aims to induce a negative dependence between the degree of sparsity and the amount of shrinkage in regression models. Hence, analogously to the GLP prior distribution, it can effectively estimate regression models exhibiting various degrees of sparsity, from extremely sparse to completely dense. Sampling from the ACSS distribution is shown to be computationally very efficient via a Gibbs sampling strategy. Extensive numerical work on synthetic and real data sets shows that estimating the regression coefficients under ACSS is orders of magnitude faster than under GLP, while often also providing a small advantage in estimation and predictive accuracy.
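To illustrate the kind of blocked Gibbs scheme alluded to above, the sketch below implements a generic Bayesian regression sampler with a conjugate global shrinkage (ridge-type) prior: coefficients are drawn given the scale parameters, and the scales given the coefficients. This is not the ACSS (or GLP) sampler from the paper, whose hierarchy is different; it only shows the standard sampling pattern, and all function and parameter names are illustrative.

```python
# Minimal sketch: Gibbs sampler for Bayesian linear regression with a
# conjugate global shrinkage prior (beta_j ~ N(0, tau2), sigma2 and tau2
# inverse-gamma). Illustrative only; not the ACSS sampler from the paper.
import numpy as np

def gibbs_shrinkage_regression(X, y, n_iter=2000, a0=1.0, b0=1.0, c0=1.0, d0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y

    # Initial values for the error variance and the global shrinkage scale.
    sigma2, tau2 = 1.0, 1.0
    draws = np.empty((n_iter, p))

    for t in range(n_iter):
        # beta | sigma2, tau2, y ~ N(mu, Sigma), Sigma = (X'X/sigma2 + I/tau2)^{-1}
        prec = XtX / sigma2 + np.eye(p) / tau2
        cov = np.linalg.inv(prec)
        mu = cov @ (Xty / sigma2)
        beta = rng.multivariate_normal(mu, cov)

        # sigma2 | beta, y ~ Inverse-Gamma(a0 + n/2, b0 + ||y - X beta||^2 / 2)
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))

        # tau2 | beta ~ Inverse-Gamma(c0 + p/2, d0 + ||beta||^2 / 2)
        tau2 = 1.0 / rng.gamma(c0 + p / 2, 1.0 / (d0 + 0.5 * beta @ beta))

        draws[t] = beta
    return draws

# Example usage on synthetic data with a sparse truth.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
beta_true = np.r_[2.0, -1.5, np.zeros(8)]
y = X @ beta_true + rng.standard_normal(200)
samples = gibbs_shrinkage_regression(X, y)
print(samples[1000:].mean(axis=0).round(2))  # posterior means after burn-in
```

In this toy version a single global scale tau2 controls shrinkage for all coefficients; adaptive sparsity priors such as ACSS instead let the amount of shrinkage vary with the (estimated) degree of sparsity, which is what the negative dependence described in the abstract refers to.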

Link to Paper