
Item Details

This item has been withdrawn.


Conference Paper

Unifying Divergence Minimization and Statistical Inference Via Convex Duality

MPS-Authors
/persons/resource/persons83782

Altun, Y
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resource
There are no locators available
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There is no public fulltext available
Supplementary Material (public)
There is no public supplementary material available
Citation

Altun, Y. (2006). Unifying Divergence Minimization and Statistical Inference Via Convex Duality. Learning Theory: 19th Annual Conference on Learning Theory (COLT 2006), 139-153.


Abstract
In this paper we unify divergence minimization and statistical inference by means of convex duality. In the process of doing so, we prove that, as a special case, the dual of approximate maximum entropy estimation is maximum a posteriori estimation. Moreover, our treatment leads to stability and convergence bounds for many statistical learning problems. Finally, we show how an algorithm by Zhang can be used to solve this class of optimization problems efficiently.
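The duality the abstract refers to can be illustrated with a standard special case (a sketch under common assumptions; the notation below is generic and not taken from the paper's full text, which is not available here). Approximate maximum entropy estimation relaxes the exact moment-matching constraints of classical maximum entropy to a norm ball:

```latex
% Primal: KL divergence minimization with relaxed moment constraints
\min_{p} \; \mathrm{KL}(p \,\|\, p_0)
\quad \text{s.t.} \quad
\bigl\| \mathbb{E}_{p}[\phi(x)] - \tilde{\mu} \bigr\|_2 \le \varepsilon .

% Its Fenchel dual is a norm-penalized log-likelihood over
% exponential-family models  p_\theta(x) \propto p_0(x)\, e^{\langle \theta, \phi(x) \rangle}:
\max_{\theta} \;
\langle \theta, \tilde{\mu} \rangle
- \log \sum_{x} p_0(x)\, e^{\langle \theta, \phi(x) \rangle}
- \varepsilon \, \| \theta \|_2 .
```

The penalty term $\varepsilon \|\theta\|_2$ can be read as the log of a prior on $\theta$, so the dual problem takes the form of maximum a posteriori estimation; setting $\varepsilon = 0$ recovers ordinary maximum likelihood as the dual of exact maximum entropy.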