
Released

Book Chapter

Machine Learning Methods for Automatic Image Colorization

MPS-Authors

Charpiat,  G.
Max Planck Society;


Bezrukov,  I.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;


Hofmann,  M.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;


Altun,  Y.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;


Schölkopf,  B.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Charpiat, G., Bezrukov, I., Hofmann, M., Altun, Y., & Schölkopf, B. (2011). Machine Learning Methods for Automatic Image Colorization. In R. Lukac (Ed.), Computational Photography: Methods and Applications (pp. 395-418). Boca Raton, FL, USA: CRC Press.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0010-4D6C-F
Abstract
We aim to color greyscale images automatically, without any manual intervention. The proposed colors can then be interactively corrected with user-provided color landmarks if necessary. Automatic colorization is nontrivial since there is usually no one-to-one correspondence between color and local texture. The contribution of our framework is that we deal directly with multimodality and estimate, for each pixel of the image to be colored, the probability distribution over all possible colors, instead of choosing the most probable color at the local level. We also predict the expected variation of color at each pixel, thus defining a non-uniform spatial coherency criterion. We then use graph cuts to maximize the probability of the whole colored image at the global level. We work in the L-a-b color space in order to approximate the human perception of distances between colors, and we use machine learning tools to extract as much information as possible from a dataset of colored examples. The resulting algorithm is fast, designed to be more robust to texture noise, and is above all able to deal with ambiguity, in contrast to previous approaches.
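The abstract describes minimizing an energy that combines per-pixel color distributions with a spatial-coherence penalty, using graph cuts over the whole image. As a minimal illustration of that kind of energy (not the chapter's actual method), the sketch below restricts the image to a 1D chain of pixels with a handful of discrete color bins, where the same unary-plus-Potts energy can be minimized exactly by dynamic programming; the probabilities and the `smoothness` weight are invented for demonstration.

```python
import math

def colorize_chain(unary_probs, smoothness=1.0):
    """Toy 1D analogue of the colorization energy: choose one color bin
    per pixel minimizing
        sum_i -log p_i(c_i)  +  smoothness * sum_i [c_i != c_{i+1}]
    (a Potts-style coherence term), solved exactly by Viterbi DP.
    unary_probs[i][c] = predicted probability of color bin c at pixel i."""
    n, k = len(unary_probs), len(unary_probs[0])
    # Unary costs: negative log-probabilities (clamped to avoid log(0)).
    cost = [[-math.log(max(p, 1e-12)) for p in row] for row in unary_probs]
    best = cost[0][:]   # best[c] = min energy of the prefix ending in color c
    back = []           # backpointers for recovering the optimal labeling
    for i in range(1, n):
        prev_min = min(best)
        nxt, ptr = [], []
        for c in range(k):
            stay = best[c]                  # keep the same color: no penalty
            switch = prev_min + smoothness  # change color: pay the Potts cost
            if stay <= switch:
                nxt.append(cost[i][c] + stay)
                ptr.append(c)
            else:
                nxt.append(cost[i][c] + switch)
                ptr.append(best.index(prev_min))
        best = nxt
        back.append(ptr)
    # Backtrack from the best final color.
    c = best.index(min(best))
    labels = [c]
    for ptr in reversed(back):
        c = ptr[c]
        labels.append(c)
    return labels[::-1]
```

With a strong coherence weight, an isolated pixel whose local distribution favors a different color is overruled by its neighbors (`colorize_chain([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9]], smoothness=5.0)` returns `[0, 0, 0]`), whereas with `smoothness=0.0` each pixel keeps its locally most probable color. On 2D grids this exact DP no longer applies, which is why the chapter resorts to graph cuts.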