Item Details


Released

Poster

Joint sequence optimization beats pure neural network approaches for super-resolution TSE

MPS-Authors
/persons/resource/persons230667

Glang, F
Department High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons214560

Zaiss, M
Department High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resource
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Dang, H., Golkov, V., Endres, J., Weinmüller, S., Glang, F., Wimmer, T., Cremers, D., Dörfler, A., Maier, A., & Zaiss, M. (2024). Joint sequence optimization beats pure neural network approaches for super-resolution TSE. Poster presented at ISMRM & ISMRT Annual Meeting & Exhibition 2024, Singapore.


Cite as: https://hdl.handle.net/21.11116/0000-000F-397B-8
Abstract
Motivation: TSE flip-angle trains can strongly influence the actual resolution of the acquired image and consequently have a considerable impact on the performance of a super-resolution task.
Goal(s): We demonstrate the advantage of end-to-end optimization of sequence and neural network parameters compared to pure network training approaches.
Approach: This MR-physics-informed training procedure jointly optimizes the radiofrequency pulse trains of a PD- and T2-weighted TSE and a subsequently applied CNN that predicts the corresponding PDw and T2w super-resolution TSE images.
Results: The method generalizes from simulation-based optimization to in vivo measurements, and the acquired super-resolution images show higher accuracy compared to pure network training approaches.
Impact: The acquired super-resolution images may improve evaluation of the data. The reduction of acquisition time compared to a direct high-resolution acquisition increases patient comfort and minimizes motion artifacts.
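The following is a minimal sketch of the joint-optimization idea described in the Approach, assuming a PyTorch-style autodiff setup. The forward model simulate_tse (a crude flip-angle-weighted decay model standing in for a full Bloch/EPG simulation), the network SRNet, and all shapes and hyperparameters are illustrative assumptions, not the authors' implementation; the point is only that the refocusing flip-angle train is a trainable parameter optimized against the same loss as the CNN weights.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def simulate_tse(phantom, flip_angles, t2=0.08, esp=0.01):
    """Hypothetical differentiable stand-in for a TSE acquisition:
    echo amplitudes depend on the (trainable) flip-angle train and T2
    decay; average pooling mimics the low-resolution acquisition."""
    signal = torch.sin(flip_angles / 2) ** 2                      # toy flip-angle weighting
    decay = torch.exp(-esp * torch.arange(1, flip_angles.numel() + 1) / t2)
    echoes = phantom.unsqueeze(0) * (signal * decay).view(-1, 1, 1)
    return F.avg_pool2d(echoes.unsqueeze(0), 2).squeeze(0)        # (n_echoes, H/2, W/2)

class SRNet(nn.Module):
    """Small CNN mapping the low-res echo stack to a high-res image."""
    def __init__(self, n_echoes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_echoes, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear"),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

n_echoes = 8
# Trainable refocusing flip-angle train, initialized at 90 degrees.
flip_angles = torch.full((n_echoes,), math.pi / 2, requires_grad=True)
net = SRNet(n_echoes)
# Joint optimization: sequence parameters and CNN weights share one optimizer.
opt = torch.optim.Adam([flip_angles] + list(net.parameters()), lr=1e-3)

phantom = torch.rand(64, 64)            # stand-in for a simulated training phantom
target = phantom.view(1, 1, 64, 64)     # high-resolution reference

for step in range(200):
    opt.zero_grad()
    lowres = simulate_tse(phantom, flip_angles)   # differentiable "acquisition"
    pred = net(lowres.unsqueeze(0))               # CNN super-resolution
    loss = F.mse_loss(pred, target)
    loss.backward()                               # gradients reach flip_angles too
    opt.step()
```

Because the super-resolution loss backpropagates through the signal model into flip_angles, the sequence itself is shaped to produce echoes the network can best super-resolve, which is what distinguishes this end-to-end scheme from training the CNN alone on a fixed sequence.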