

Journal Article

Distinguishing cause from effect using observational data: methods and benchmarks

MPS-Authors

Zscheischler, Jakob
Empirical Inference of the Earth System, Dr. Miguel D. Mahecha, Department Biogeochemical Integration, Dr. M. Reichstein, Max Planck Institute for Biogeochemistry, Max Planck Society

Fulltext (public)

BGC2203.pdf
(Publisher version), 4MB

Supplementary Material (public)

BGC2203s.zip
(Supplementary material), 6MB

Citation

Mooij, J. M., Peters, J., Janzing, D., Zscheischler, J., & Schölkopf, B. (2016). Distinguishing cause from effect using observational data: methods and benchmarks. Journal of Machine Learning Research, 17(32), 1-102.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0024-B93F-D
Abstract
The discovery of causal relationships from purely observational data is a fundamental problem in science. The most elementary form of such a causal discovery problem is to decide whether X causes Y or, alternatively, Y causes X, given joint observations of two variables X,Y. An example is to decide whether altitude causes temperature, or vice versa, given only joint measurements of both variables. Even under the simplifying assumptions of causal sufficiency, no feedback loops, and no selection bias, such bivariate causal discovery problems are very challenging. Nevertheless, several approaches for addressing those problems have been proposed in recent years. We review two families of such methods: Additive Noise Methods (ANM) and Information Geometric Causal Inference (IGCI). We present the benchmark CauseEffectPairs that consists of data for 96 different cause-effect pairs selected from 34 datasets from various domains (e.g., meteorology, biology, medicine, engineering, economy, etc.). We motivate our decisions regarding the "ground truth" causal directions of all pairs. We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and in addition on artificially simulated data. Our empirical results indicate that certain methods are able to distinguish cause from effect using only purely observational data with an accuracy of 63-69%. Because of multiple-testing corrections, however, considerably more benchmark data would be needed to obtain statistically significant conclusions. A theoretical contribution of this paper is a proof of the consistency of the additive-noise method as originally proposed by Hoyer et al. (2009).
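The Additive Noise Method reviewed in the paper can be sketched as follows: regress each variable on the other and prefer the direction in which the residuals look independent of the regressor. This is only an illustrative approximation, not the benchmark implementation; the paper uses Gaussian-process regression and an HSIC independence test, whereas this sketch substitutes polynomial regression and a crude correlation-based dependence proxy. The function name and synthetic data are invented for illustration.

```python
import numpy as np

def anm_direction(x, y, deg=3):
    """Decide between X->Y and Y->X with an Additive Noise Model
    heuristic: fit a regression in each direction and prefer the
    direction whose residuals appear independent of the input.

    Sketch only: the dependence score below (correlation between the
    input and the squared residuals) is a rough stand-in for the HSIC
    independence test used in the actual benchmark.
    """
    def residual_dependence(a, b):
        # Fit b ~ f(a) with a polynomial, then measure how strongly
        # the squared residuals still vary with a.
        coeffs = np.polyfit(a, b, deg)
        resid = b - np.polyval(coeffs, a)
        return abs(np.corrcoef(a, resid ** 2)[0, 1])

    # Lower residual dependence = more plausible causal direction.
    if residual_dependence(x, y) < residual_dependence(y, x):
        return "X->Y"
    return "Y->X"

# Synthetic pair with known ground truth X -> Y: y = x^3 + additive noise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, 2000)
y = x ** 3 + rng.normal(0.0, 1.0, 2000)
print(anm_direction(x, y))
```

In the forward (true) direction the residuals are just the additive noise, so they carry no information about x; in the backward direction the noise is no longer additive and independent, which the dependence score picks up. This asymmetry is what makes the identifiability result (consistency of the Hoyer et al. estimator) possible.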