
Released

Talk

Optimization for Machine Learning

MPS-Authors

Sra, S
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Nowozin, S
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Sra, S., Nowozin, S., & Vishwanathan, S. (2008). Optimization for Machine Learning. Talk presented at NIPS 2008 Workshop: Optimization for Machine Learning. Whistler, BC, Canada. 2008-12-12.


Cite as: https://hdl.handle.net/21.11116/0000-0003-A094-0
Abstract
Classical optimization techniques have found widespread use in machine learning. Convex optimization has occupied center stage, and significant effort continues to be devoted to it. New problems constantly emerge in machine learning, e.g., structured learning and semi-supervised learning, while at the same time fundamental problems such as clustering and classification continue to be better understood. Moreover, machine learning is now very important for real-world problems with massive datasets, streaming inputs, the need for distributed computation, and complex models. These challenging characteristics of modern problems and datasets indicate that we must go beyond the traditional optimization approaches common in machine learning. What is needed is optimization tuned for machine learning tasks. For example, techniques such as non-convex optimization (for semi-supervised learning, sparsity constraints), combinatorial optimization and relaxations (structured learning), stochastic optimization (massive datasets), decomposition techniques (parallel and distributed computation), and online learning (streaming inputs) are relevant in this setting. These techniques naturally draw inspiration from other fields, such as operations research, polyhedral combinatorics, theoretical computer science, and the optimization community.
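One of the techniques the abstract singles out, stochastic optimization for massive datasets, can be illustrated with a minimal stochastic gradient descent sketch. This is an illustrative example only, not material from the talk; the function name, data, and hyperparameters are all invented for the demonstration.

```python
import numpy as np

# A minimal sketch of stochastic gradient descent (SGD) for least-squares
# regression, illustrating why it suits massive datasets: each update
# touches a single sample rather than the full dataset.

def sgd_least_squares(X, y, lr=0.01, epochs=50, seed=0):
    """Minimize (1/2n) * ||X w - y||^2 with single-sample gradient steps."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):          # reshuffle samples each epoch
            grad = (X[i] @ w - y[i]) * X[i]   # gradient of one sample's loss
            w -= lr * grad                    # cheap per-sample update
    return w

# Synthetic check: recover a known weight vector from noisy observations.
rng = np.random.default_rng(42)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(500, 2))
y = X @ w_true + 0.01 * rng.normal(size=500)
w_hat = sgd_least_squares(X, y)
```

With a constant step size the iterates hover in a neighborhood of the optimum whose size scales with the learning rate and the noise; decaying the step size over epochs is the standard refinement when exact convergence is needed.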