
Item Details


Public

Report

Multivariate Regression with Stiefel Constraints

MPS-Authors
/persons/resource/persons83791

Bakir,  GH
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83946

Gretton,  A
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83919

Franz,  MO
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84193

Schölkopf,  B
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)

MPIK-TR-128.pdf
(publisher version), 410 KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Bakir, G., Gretton, A., Franz, M., & Schölkopf, B. (2004). Multivariate Regression with Stiefel Constraints (128). Tübingen, Germany: Max Planck Institute for Biological Cybernetics.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-F347-E
Abstract
We introduce a new framework for regression between multi-dimensional spaces. Standard methods for solving this problem typically reduce it to one-dimensional regression by choosing features in the input and/or output spaces. These methods, which include PLS (partial least squares), KDE (kernel dependency estimation), and PCR (principal component regression), select features based on different a priori judgments as to their relevance. Moreover, the loss function and constraints are chosen not primarily on statistical grounds, but to simplify the resulting optimization. By contrast, in our approach the feature construction and the regression estimation are performed jointly, directly minimizing a loss function that we specify, subject to a rank constraint. A major advantage of this approach is that the loss is no longer chosen according to algorithmic requirements, but can be tailored to the characteristics of the task at hand; the features will then be optimal with respect to this objective. Our approach also allows for the use of a regularizer in the optimization. Finally, by processing the observations sequentially, our algorithm is able to work on large-scale problems.
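The rank constraint described in the abstract can be illustrated with a simple sketch, not the report's actual algorithm: solve a regularized least-squares problem for a multi-output linear map, then project it onto the set of rank-r matrices via truncated SVD. The right factor of the SVD has orthonormal columns, i.e. it lies on a Stiefel manifold. All function and variable names below are hypothetical.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank, ridge=1e-6):
    """Hypothetical sketch: fit a rank-constrained linear map W (d x p)
    so that Y ~= X @ W.

    Steps: (1) solve ridge-regularized least squares for a full-rank W,
    (2) project W onto rank-`rank` matrices by truncated SVD. The factor
    B returned below has orthonormal columns (a Stiefel-manifold point).
    This is an illustration of a rank constraint, not the method of the
    report, which optimizes features and regression jointly.
    """
    d = X.shape[1]
    # Ridge-regularized least-squares solution (full rank).
    W_full = np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ Y)
    # Truncated SVD: best rank-r approximation in Frobenius norm.
    U, s, Vt = np.linalg.svd(W_full, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # d x r (scaled left factor)
    B = Vt[:rank].T              # p x r, orthonormal columns
    return A @ B.T, B

# Toy data: a genuinely rank-1 multi-output regression problem.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
W_true = rng.standard_normal((5, 1)) @ rng.standard_normal((1, 4))
Y = X @ W_true + 0.01 * rng.standard_normal((200, 4))

W_hat, B = reduced_rank_regression(X, Y, rank=1)
print(np.linalg.matrix_rank(W_hat))     # rank constraint holds
print(np.allclose(B.T @ B, np.eye(1)))  # B has orthonormal columns
```

Note that this two-stage sketch fixes the squared loss; the point of the report's joint approach is precisely that the loss need not be chosen for algorithmic convenience.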