
Released

Conference Paper

Learning the Similarity Measure for Multi-Modal 3D Image Registration

MPS-Authors

Lee, D
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Hofmann, M
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Steinke, F
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Altun, Y
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Schölkopf, B
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Lee, D., Hofmann, M., Steinke, F., Altun, Y., Cahill, N., & Schölkopf, B. (2009). Learning the Similarity Measure for Multi-Modal 3D Image Registration. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (pp. 186-193). Piscataway, NJ, USA: IEEE Service Center.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-C48D-B
Abstract
Multi-modal image registration is a challenging problem in medical imaging. The goal is to align anatomically identical structures; however, their appearance in images acquired with different imaging devices, such as CT or MR, may be very different. Registration algorithms generally deform one image, the floating image, such that it matches a second, the reference image, by maximizing some similarity score between the deformed and the reference image. Instead of using a universal but a priori fixed similarity criterion such as mutual information, we propose learning a similarity measure in a discriminative manner, such that the reference and correctly deformed floating images receive high similarity scores. To this end, we develop an algorithm derived from max-margin structured output learning and employ the learned similarity measure within a standard rigid registration algorithm. Compared to other approaches, our method adapts to the specific registration problem at hand and exploits correlations between neighboring pixels in the reference and the floating image. Empirical evaluation on CT-MR/PET-MR rigid registration tasks demonstrates that our approach yields robust performance and outperforms state-of-the-art methods for multi-modal medical image registration.
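The registration loop the abstract describes — transform the floating image and maximize a similarity score against the reference — can be sketched as below. This is a minimal illustration using the fixed mutual-information criterion that the paper takes as its baseline, not the learned max-margin measure proposed in the paper; the search is restricted to integer translations for brevity, and all function names are hypothetical.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    # Mutual information estimated from the joint intensity histogram
    # of the reference image a and the (moved) floating image b.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0  # avoid log(0) on empty histogram cells
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def register_translation(reference, floating, max_shift=3):
    # Exhaustive search over integer shifts of the floating image;
    # keep the shift with the highest similarity to the reference.
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            moved = np.roll(np.roll(floating, dy, axis=0), dx, axis=1)
            score = mutual_information(reference, moved)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# Synthetic "multi-modal" pair: same structure, different intensity
# mapping (inverted contrast), displaced by a known shift.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
flo = np.roll(1.0 - ref, (2, -1), axis=(0, 1))
shift = register_translation(ref, flo)  # undoes the (2, -1) roll
```

Mutual information handles the contrast inversion because it rewards any deterministic intensity relationship at alignment, not just linear correlation; the paper's point is that a learned, problem-specific measure can do better than this fixed criterion.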