
Record


Released

Conference Paper

Efficient inference in matrix-variate Gaussian models with iid observation noise

MPG Authors
/persons/resource/persons84969

Stegle, O
Max Planck Institute for Developmental Biology, Max Planck Society;

/persons/resource/persons84763

Lippert, C
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

/persons/resource/persons84090

Mooij, J
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

/persons/resource/persons75313

Borgwardt, K
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;
Max Planck Institute for Developmental Biology, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Stegle, O., Lippert, C., Mooij, J., Lawrence, N., & Borgwardt, K. (2012). Efficient inference in matrix-variate Gaussian models with iid observation noise. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, & K. Weinberger (Eds.), Advances in Neural Information Processing Systems 24 (pp. 630-638). Red Hook, NY, USA: Curran.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-B876-D
Abstract
Inference in matrix-variate Gaussian models has major applications for multi-output prediction and joint learning of row and column covariances from matrix-variate data. Here, we discuss an approach for efficient inference in such models that explicitly accounts for iid observation noise. Computational tractability can be retained by exploiting the Kronecker product between row and column covariance matrices. Using this framework, we show how to generalize the Graphical Lasso in order to learn a sparse inverse covariance between features while accounting for a low-rank confounding covariance between samples. We demonstrate practical utility in applications to biology, where we model covariances with more than 100,000 dimensions. We find greater accuracy in recovering biological network structures and are able to better reconstruct the confounders.