
Record


Released

Talk

Multi-rigid motion correction of MR images

MPG Authors
/persons/resource/persons84372

Loktyushin, A
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

/persons/resource/persons84145

Pohmann, R
Department High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84193

Schölkopf, B
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

External Resources

Link
(Publisher version)

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary material (freely accessible)
There is no freely accessible supplementary material available.
Citation

Loktyushin, A., Nickisch, H., Pohmann, R., & Schölkopf, B. (2013). Multi-rigid motion correction of MR images. Talk presented at 30th Annual Scientific Meeting ESMRMB 2013. Toulouse, France.


Citation link: https://hdl.handle.net/21.11116/0000-0001-4ECF-0
Abstract
Purpose/Introduction: Much work has been done in recent years on non-rigid motion correction in MRI. Prospective methods are limited to global motion correction, and thus to affine transformations [3]. Retrospective methods can address multi-rigid motion but require a motion reference, usually a navigator [1]. We propose a retrospective method for multi-rigid motion correction that does not require any external motion reference.

Subjects and Methods: We extend our blind motion-correction framework [2] to cover multi-rigid motion (multiple rigid bodies in the FoV). Our method is based on an analytic forward model of multi-rigid motion in the MR scanner. As input, the algorithm requires a splitting of a given 2D/3D image into patches of arbitrary shape. We allow for both translational and rotational motion in each patch. To recover the image, we perform alternating optimization with respect to the image and the motion parameters using a gradient-based approach. The objective consists of a quadratic data-fidelity term and two regularization terms. The data-fidelity term ensures that the reconstructed result fits the observation. We use total variation as a regularizer for the image, and for the motion we place a quadratic penalty on the difference of consecutive motion parameters, thus penalizing rapid changes in the motion trajectory. We explore the unknown image/motion parameter space efficiently using derivative-driven non-linear optimization. To ensure feasible computation times, we implement our algorithm to run on modern graphics cards (GPUs).

Results: We evaluated our algorithm on data acquired with a Siemens 3T Trio scanner. Using a wrist coil, we imaged both hands of a subject moving against each other. In a second acquisition we imaged a single hand with a moving index finger while the hand itself was stationary. In both cases we used two patches for the multi-rigid splitting, and the motion involved both translation and rotation. Figure 1 shows that our algorithm significantly improves the image quality.

Discussion/Conclusion: Our experimental results demonstrate the feasibility of blind multi-rigid motion correction using only the raw image data. Although our framework is general enough to be applied to arbitrary patch splittings, runtime remains an issue. Future work will concentrate on coupling motion parameters across patches to reduce the computational burden. This will allow denser patch splittings and thus more realistic motion to be addressed.
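The methods paragraph above compresses the optimization into a few sentences. The sketch below is not the authors' implementation, but it illustrates the structure they describe: an analytic forward model in which each acquired k-space line sees every patch under its own motion state, and alternating gradient-based minimization of a quadratic data-fidelity term plus a total-variation image prior and a quadratic penalty on differences of consecutive motion parameters. For brevity it assumes 2D Cartesian sampling, one motion state per phase-encode line, and translation-only patch motion (the actual method also handles rotations); all sizes, names, and step sizes are illustrative.

```python
# Minimal sketch (not the authors' code) of blind multi-rigid motion correction by
# alternating optimization, using JAX so gradients of the forward model come from
# automatic differentiation.
import jax
import jax.numpy as jnp

N = 64   # toy image size (N x N)
T = N    # number of motion states (one per phase-encode line)
P = 2    # number of rigid patches

def forward(x, thetas, masks):
    """Analytic forward model: k-space line t sees patch p translated by thetas[t, p]."""
    patch_ffts = jnp.fft.fft2(x[None, :, :] * masks)             # (P, N, N)
    ky = jnp.fft.fftfreq(N)[:, None]                              # cycles per pixel
    kx = jnp.fft.fftfreq(N)[None, :]
    # Translation in image space is a linear phase in k-space; ramps has shape (T, P, N, N).
    ramps = jnp.exp(-2j * jnp.pi * (ky * thetas[:, :, 0, None, None]
                                    + kx * thetas[:, :, 1, None, None]))
    full = jnp.sum(ramps * patch_ffts[None], axis=1)              # (T, N, N): k-space at time t
    return full[jnp.arange(T), jnp.arange(T), :]                  # keep only line t at time t

def tv(x, eps=1e-6):
    """Smoothed isotropic total variation of the image."""
    dy = jnp.roll(x, -1, axis=0) - x
    dx = jnp.roll(x, -1, axis=1) - x
    return jnp.sum(jnp.sqrt(dy ** 2 + dx ** 2 + eps))

def objective(x, thetas, masks, y, lam_tv=1e-3, lam_m=1e-1):
    """Quadratic data fidelity + TV image prior + quadratic penalty on motion differences."""
    fit = jnp.sum(jnp.abs(forward(x, thetas, masks) - y) ** 2)
    smooth = jnp.sum((thetas[1:] - thetas[:-1]) ** 2)
    return fit + lam_tv * tv(x) + lam_m * smooth

grad_x = jax.jit(jax.grad(objective, argnums=0))   # gradient w.r.t. the image
grad_t = jax.jit(jax.grad(objective, argnums=1))   # gradient w.r.t. the motion parameters

def reconstruct(y, masks, iters=200, lr_x=1e-4, lr_t=1e-3):
    """Alternating gradient descent over the image and the per-line, per-patch translations."""
    x = jnp.zeros((N, N))
    thetas = jnp.zeros((T, P, 2))                  # (dy, dx) per phase-encode line and patch
    for _ in range(iters):
        x = x - lr_x * grad_x(x, thetas, masks, y)
        thetas = thetas - lr_t * grad_t(x, thetas, masks, y)
    return x, thetas

# Example patch splitting: left and right halves of the FoV (e.g. two hands in a wrist coil).
# y (shape (T, N)) would be the acquired raw k-space lines.
left = jnp.zeros((N, N)).at[:, : N // 2].set(1.0)
masks = jnp.stack([left, 1.0 - left])
```

The quadratic penalty on consecutive motion states is what couples the otherwise independent per-line parameters and keeps the blind (navigator-free) estimation well behaved; as the abstract notes, a GPU implementation is needed to keep this kind of per-line optimization within feasible runtimes.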