Copyright © 2010 Terrance Savitsky and Marina Vannucci. This is an open access article distributed under the
Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
We expand a framework for Bayesian variable selection for
Gaussian process (GP) models by employing spiked Dirichlet process (DP)
prior constructions over set partitions of the covariates. Our approach
results in a nonparametric treatment of the distribution of the covariance parameters of the GP covariance matrix that in turn induces a clustering of the
covariates. We evaluate two prior constructions: the first employs a mixture of a point mass and a continuous distribution as the centering distribution
for the DP prior, thereby clustering all covariates. The second employs a
mixture of a spike and a DP prior with a continuous centering distribution, which induces clustering of the selected covariates only. DP
models borrow information across covariates through model-based clustering.
In particular, our simulation results show a reduction in posterior sampling
variability and, in turn, improved prediction performance. For both model formulations we accomplish posterior inference by employing novel combinations and extensions of existing algorithms for inference with DP prior models, and we
compare performance under the two prior constructions.
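
To make the two prior constructions described in the abstract concrete, the following display is a schematic sketch under notation introduced here for illustration (not necessarily the authors' own symbols): $\theta_k$ denotes the covariance parameter associated with covariate $k$, $\delta_{\theta_0}$ a point mass at the spike value $\theta_0$, $F$ a continuous distribution, $\alpha$ the DP mass parameter, and $\pi$ a prior mixing weight.

% Schematic forms of the two spiked DP constructions (illustrative notation only)
\begin{align*}
&\text{Construction 1 (spiked centering distribution; all covariates clustered):}\\
&\qquad \theta_k \mid G \sim G, \qquad G \sim \mathrm{DP}(\alpha,\, G_0), \qquad G_0 = \pi\,\delta_{\theta_0} + (1-\pi)\,F;\\[4pt]
&\text{Construction 2 (spike mixed with a DP; selected covariates clustered):}\\
&\qquad \theta_k \mid \gamma_k, G \sim \gamma_k\, G + (1-\gamma_k)\,\delta_{\theta_0}, \qquad \gamma_k \sim \mathrm{Bernoulli}(\pi), \qquad G \sim \mathrm{DP}(\alpha,\, F).
\end{align*}

Under the first form every $\theta_k$ is drawn from the common DP, so all covariates, including those assigned to the spike, share cluster labels; under the second, only covariates with $\gamma_k = 1$ participate in the DP clustering.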