Conference Paper

Inference algorithms and learning theory for Bayesian sparse factor analysis

MPS-Authors

Stegle,  O
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Former Research Group Machine Learning and Computational Biology, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Rattray, M., Stegle, O., Sharp, K., & Winn, J. (2009). Inference algorithms and learning theory for Bayesian sparse factor analysis. Bristol, UK: Institute of Physics. doi:10.1088/1742-6596/197/1/012002.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-C2EA-9
Abstract
Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab-and-spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms, as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory of learning performance using the replica method. We compare results from the MCMC and VB/EP algorithms on simulated data against the theoretical predictions. The MCMC results agree closely with the theory, as expected. The VB/EP results are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC becomes computationally infeasible, and the VB/EP algorithm then provides a very useful, computationally efficient alternative.
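
The sketch below illustrates the kind of generative model the abstract refers to: a factor analysis model whose loading matrix has a slab-and-spike (spike-and-slab) prior, so each loading is exactly zero with some probability and Gaussian otherwise. All dimensions, variable names, and hyperparameter values are illustrative assumptions, not the authors' settings, and no inference algorithm from the paper is implemented here.

```python
import numpy as np

# Minimal generative sketch of sparse factor analysis with a
# spike-and-slab prior on the loadings (assumed setup, not the
# authors' experimental configuration).

rng = np.random.default_rng(0)

G, N, K = 100, 50, 3      # observed dimensions (e.g. genes), samples, latent factors
pi = 0.1                  # slab probability: expected fraction of non-zero loadings
sigma_w = 1.0             # slab (non-zero loading) standard deviation
sigma_x = 0.1             # observation noise standard deviation

# Spike-and-slab loadings: each entry is exactly zero with probability
# 1 - pi (the "spike") or drawn from a Gaussian with probability pi (the "slab").
s = rng.random((G, K)) < pi                  # binary inclusion indicators
W = s * rng.normal(0.0, sigma_w, (G, K))     # sparse loading matrix

# Standard-normal latent factors and noisy observations: X = W Z + noise.
Z = rng.normal(0.0, 1.0, (K, N))
X = W @ Z + rng.normal(0.0, sigma_x, (G, N))

print(f"Non-zero loadings: {s.sum()} of {G * K}")
```

In a regulatory-network application, a non-zero entry of W would correspond to a regulatory link between a latent factor and an observed gene; the inference algorithms compared in the paper (MCMC, VB, and the VB/EP hybrid) aim to recover which entries are non-zero from data like X.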