Imaging error reduction for MR guided radiotherapy with deep learning-based intra-frame motion compensation
Keywords: MR-Linac, intra-frame motion, motion compensation, imaging latency, deep learning
Author: Sui, Zhuojie
Year: 2025
Language: English
Publisher: Universitätsbibliothek der Ludwig-Maximilians-Universität München
Citation: Sui, Zhuojie (2025): Imaging error reduction for MR guided radiotherapy with deep learning-based intra-frame motion compensation. Dissertation, LMU München: Fakultät für Physik.
Full text: Sui_Zhuojie.pdf (PDF, 24 MB)

Abstract

Radiotherapy in the presence of intra-fractional motion can benefit significantly from real-time magnetic resonance imaging (MRI) guidance, owing to its superior soft-tissue contrast and the absence of ionizing radiation. However, motion-related imaging errors have been identified as the primary contributor to the overall loop latency in MR guided radiotherapy (MRgRT), leading to residual geometric tracking errors and thereby reducing the effectiveness of active motion management. This thesis explores the feasibility of reducing these errors in MRgRT through deep learning-based intra-frame motion compensation techniques.

Firstly, a motion-dependent k-space sampling simulation procedure was developed to investigate dynamic MR imaging behavior and motion-related imaging errors. Building upon this, a methodology for intra-frame motion dataset creation and augmentation was proposed, pairing motion-corrupted data with its real-time ground-truth counterpart, with a primary focus on rapid anatomical changes. Specifically, based on a coarse-to-fine grid-scale representation of patient-specific motion data, 4D MRI digital anthropomorphic phantoms were generated to model lung cancer patients, and a dedicated intra-frame motion model was constructed using a piecewise linear approximation between consecutive control points. Additionally, a motion pattern perturbation scheme was introduced to comprehensively explore potential anatomical structure positions and to enhance the diversity of intra-frame motion trajectories.

Secondly, a proof-of-concept study in Cartesian cine-MRI was conducted, demonstrating that UNet models can effectively compensate for intra-frame motion by estimating the final-position image at the end of frame acquisition from motion-corrupted input. Quantitatively, for gross tumor volume (GTV) contouring in the testing dataset, the median Dice similarity coefficient (DSC) increased from 89% to 97%, while the 95th percentile Hausdorff distance (HD95) decreased from 4.1 mm to 1.4 mm. Geometric errors in targets undergoing considerable intra-frame deformation were successfully corrected, showing close agreement with the ground truth in both target shape and position. Saliency maps indicated that, for inference, the model focused predominantly on the later-acquired k-space components and, correspondingly in the spatial domain, on the edges of the moving structures at their real-time final positions.

Thirdly, a proof-of-concept study in radial cine-MRI was conducted, proposing "TransSin-UNet", a novel dual-domain deep learning framework. Within the radial k-space reconstruction window, the long-range spatio-temporal dependencies among the sinogram representation of the spokes were modeled by a transformer encoder subnetwork, followed by a UNet subnetwork operating in the spatial domain for pixel-level refinement. The network was trained and extensively evaluated on datasets with varying azimuthal radial profile increments. Compared to conventional direct image reconstruction from the motion-corrupted spokes, TransSin-UNet required only an additional 4.8 ms per frame for compensation. It consistently outperformed architectures relying solely on transformer encoders or UNets across all comparative evaluations, yielding a noticeable improvement in image quality and target positioning accuracy. The normalized root mean squared error (NRMSE) decreased by 50% from an initial average of 0.188, while the mean GTV DSC increased from 85.1% to 96.2% in the investigated testing cases. Furthermore, the ground-truth positions of anatomical structures undergoing substantial deformation were precisely recovered.

This work constitutes a substantial advancement toward the clinical implementation of cine-MR tracking error reduction strategies to support enhanced real-time motion management in MRgRT.
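
To make the motion-dependent k-space sampling simulation concrete, the sketch below illustrates the general idea for Cartesian cine-MRI: each phase-encode line of the composite k-space is taken from the Fourier transform of the anatomy at the instant that line is acquired, so motion during the acquisition window corrupts the directly reconstructed frame. This is a minimal illustration, not the dissertation's implementation; the `frames` input (one ground-truth image per readout line) is an assumed format.

```python
import numpy as np

def simulate_motion_corrupted_frame(frames):
    """Simulate a Cartesian frame acquired line by line while the anatomy moves.

    frames: array of shape (n_lines, ny, nx); frames[i] is the ground-truth
            image at the moment phase-encode line i is acquired.
    Returns the magnitude image reconstructed from the composite k-space.
    """
    n_lines, ny, nx = frames.shape
    assert n_lines == ny, "one ground-truth image per phase-encode line is assumed"
    kspace = np.zeros((ny, nx), dtype=complex)
    for i in range(n_lines):
        # Fourier transform of the instantaneous anatomy ...
        k_i = np.fft.fftshift(np.fft.fft2(frames[i]))
        # ... but only the currently acquired phase-encode line is kept.
        kspace[i, :] = k_i[i, :]
    # Direct reconstruction of the composite (motion-corrupted) k-space.
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
```

In such a setup, the paired ground truth for network training would be the final-position image `frames[-1]`, i.e. the anatomy at the end of the acquisition window.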
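The piecewise linear intra-frame motion model between consecutive control points can be pictured as a simple temporal interpolation of deformation vector fields. The following is a minimal sketch under that assumption; `dvfs` and `control_times` are hypothetical inputs and the dissertation's exact parameterization may differ.

```python
import numpy as np

def interpolate_dvf(dvfs, control_times, t):
    """Piecewise linear interpolation of deformation vector fields (DVFs).

    dvfs:          array of shape (n_control, ...) with one DVF per control point
    control_times: increasing acquisition times of the control points, shape (n_control,)
    t:             query time within [control_times[0], control_times[-1]]
    """
    idx = np.searchsorted(control_times, t)
    # Clamp so that t always lies between two consecutive control points.
    idx = np.clip(idx, 1, len(control_times) - 1)
    t0, t1 = control_times[idx - 1], control_times[idx]
    w = (t - t0) / (t1 - t0)
    # Linear blend of the two neighboring control-point DVFs.
    return (1.0 - w) * dvfs[idx - 1] + w * dvfs[idx]
```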
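The Cartesian proof-of-concept maps a motion-corrupted frame to an estimate of the final-position image with a UNet. The dissertation's network configuration is not reproduced here; the sketch below is a generic, minimal 2D UNet in PyTorch with assumed channel counts, depth, and single-channel magnitude input, intended only to show the encoder-decoder structure with skip connections.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the standard UNet building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 4, base * 8)
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, 2, stride=2)
        self.dec3 = conv_block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        # Decoder with skip connections from the matching encoder levels.
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: map a motion-corrupted 256x256 magnitude frame to a final-position estimate.
model = UNet()
corrupted = torch.randn(1, 1, 256, 256)
predicted_final = model(corrupted)
```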
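For the radial study, the spokes within a reconstruction window are distributed according to an azimuthal profile increment. The sketch below only generates spoke angles for a chosen increment (the golden angle of about 111.25° is a common choice in radial MRI); the specific increments evaluated in the dissertation are not listed here.

```python
import numpy as np

def spoke_angles(n_spokes, increment_deg, start_index=0):
    """Azimuthal angles (radians) of consecutive radial spokes.

    increment_deg: azimuthal profile increment, e.g. 180/n_spokes for uniform
                   ordering or ~111.246 deg for golden-angle ordering.
    """
    indices = np.arange(start_index, start_index + n_spokes)
    return np.deg2rad((indices * increment_deg) % 180.0)

# Golden angle: 180 deg divided by the golden ratio (~111.246 deg).
golden = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0

# Spokes of one reconstruction window; each spoke samples a different motion
# state, which is what makes the directly reconstructed frame motion-corrupted.
angles = spoke_angles(n_spokes=30, increment_deg=golden)
```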
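The TransSin-UNet architecture itself is not reproduced here. The sketch below only illustrates the underlying idea of the sinogram-domain stage: each spoke (one sinogram row) is treated as a token, and dependencies among spokes within the reconstruction window are modeled with a standard PyTorch transformer encoder. Embedding size, layer counts, and the learned positional embedding are assumptions, and the transform back to the image domain plus the UNet refinement stage are omitted.

```python
import torch
import torch.nn as nn

class SpokeTransformer(nn.Module):
    """Model dependencies among radial spokes represented as sinogram rows."""

    def __init__(self, spoke_len=256, n_spokes=30, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        # Each spoke (one sinogram row) becomes one token.
        self.embed = nn.Linear(spoke_len, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_spokes, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.project = nn.Linear(d_model, spoke_len)

    def forward(self, sinogram):
        # sinogram: (batch, n_spokes, spoke_len)
        tokens = self.embed(sinogram) + self.pos
        refined = self.encoder(tokens)
        # Back to sinogram shape; in a dual-domain setup, an image-domain UNet
        # would follow after mapping the refined spokes to the image domain.
        return self.project(refined)

# Example: a reconstruction window of 30 spokes with 256 samples each.
model = SpokeTransformer()
corrected_sinogram = model(torch.randn(2, 30, 256))
```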
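Finally, the evaluation metrics quoted above can be computed roughly as follows. This is a generic sketch; the dissertation's exact conventions (e.g. the surface extraction for HD95 or the NRMSE normalization, assumed here to use the reference intensity range) may differ.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    """95th percentile Hausdorff distance between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    # Boundary voxels: each mask minus its binary erosion.
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest boundary voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    d = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return np.percentile(d, 95)

def nrmse(pred, ref):
    """Root mean squared error normalized by the reference intensity range."""
    rmse = np.sqrt(np.mean((pred - ref) ** 2))
    return rmse / (ref.max() - ref.min())
```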