  Invariant Gaussian Process Latent Variable Models and Application in Causal Discovery

Zhang, K., Schölkopf, B., & Janzing, D. (2010). Invariant Gaussian Process Latent Variable Models and Application in Causal Discovery. In P. Grünwald, & P. Spirtes (Eds.), 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010) (pp. 717-724). Corvallis, OR, USA: AUAI Press.

Files

Pub3_[0].pdf (Any fulltext), 325KB
Name: Pub3_[0].pdf
Description: -
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -
License: -

Creators

Creators:
Zhang, K. (1, 2), Author
Schölkopf, B. (1, 2), Author
Janzing, D. (1, 2), Author
Affiliations:
(1) Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
(2) Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: In nonlinear latent variable models or dynamic models, if we consider the latent variables as confounders (common causes), the noise dependencies imply further relations between the observed variables. Such models are then closely related to causal discovery in the presence of nonlinear confounders, which is a challenging problem. However, generally in such models the observation noise is assumed to be independent across data dimensions, and consequently the noise dependencies are ignored. In this paper we focus on the Gaussian process latent variable model (GPLVM), from which we develop an extended model called invariant GPLVM (IGPLVM), which can adapt to arbitrary noise covariances. With the Gaussian process prior put on a particular transformation of the latent nonlinear functions, instead of the original ones, the algorithm for IGPLVM involves almost the same computational load as that for the original GPLVM. Besides its potential application in causal discovery, IGPLVM has the advantage that its estimated latent nonlinear manifold is invariant to any nonsingular linear transformation of the data. Experimental results on both synthetic and real-world data show its encouraging performance in nonlinear manifold learning and causal discovery.
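
Note: For orientation, below is a minimal NumPy sketch of the baseline GPLVM marginal likelihood that the abstract builds on: each observed dimension is modelled as an independent GP over shared latent points with i.i.d. Gaussian observation noise. The IGPLVM described in the abstract relaxes exactly this independent-noise assumption by placing the GP prior on a transformation of the latent functions; the function names, kernel parameters, and toy data below are illustrative assumptions, not taken from the paper.

import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel over the latent points X (N x Q).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gplvm_neg_log_marginal(X, Y, lengthscale=1.0, variance=1.0, noise=0.1):
    # Standard GPLVM objective: Y (N x D) observed, X (N x Q) latent.
    # Each column of Y is an independent GP draw over X plus isotropic
    # Gaussian noise; IGPLVM replaces this independent-noise assumption
    # with an arbitrary noise covariance across data dimensions.
    N, D = Y.shape
    K = rbf_kernel(X, lengthscale, variance) + noise * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))   # K^{-1} Y
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (D * log_det + np.sum(Y * alpha) + N * D * np.log(2.0 * np.pi))

# Toy usage: score a random latent configuration for synthetic data.
rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 5))
X = rng.standard_normal((50, 2))
print(gplvm_neg_log_marginal(X, Y))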

Details

Language(s):
 Dates: 2010-07
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: BibTex Citekey: 6629
 Degree: -

Event

Title: 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010)
Place of Event: Catalina Island, CA, USA
Start-/End Date: 2010-07-08 - 2010-07-11

Source 1

Title: 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010)
Source Genre: Proceedings
 Creator(s):
Grünwald, P, Editor
Spirtes, P, Editor
Affiliations:
-
Publ. Info: Corvallis, OR, USA : AUAI Press
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 717 - 724
Identifier: ISBN: 978-0-9749039-6-5