Purpose: Lumbar spinal stenosis (LSS) is a frequently occurring condition defined by narrowing of the spinal or nerve root canal due to degenerative changes. Physicians use MRI scans to determine the severity of stenosis, occasionally complementing it with X-ray or CT scans during the diagnostic work-up. However, manual grading of stenosis is time-consuming and induces inter-reader variability as a standardized grading system is lacking. Machine Learning (ML) has the potential to aid physicians in this process by automating segmentation and classification of LSS. However, it is unclear what models currently exist to perform these tasks.
Methods: A systematic review of the literature was performed by searching the Cochrane Library, Embase, Emcare, PubMed, and Web of Science databases for studies describing an ML-based algorithm to perform segmentation or classification of the lumbar spine for LSS. Risk of bias was assessed through an adjusted version of the Newcastle-Ottawa Quality Assessment Scale that is more applicable to ML studies. Qualitative analyses were performed based on type of algorithm (conventional ML or Deep Learning (DL)) and task (segmentation or classification).
Results: A total of 27 articles were included, of which nine covered segmentation, 16 covered classification, and two covered both tasks. The majority of studies focused on algorithms for MRI analysis. There was wide variety among the outcome measures used to express model performance. Overall, ML algorithms are able to perform segmentation and classification tasks with excellent accuracy, and DL methods tend to outperform conventional ML models. For segmentation, the best performing DL models were U-Net based. For classification, U-Net-based and unspecified CNN architectures achieved the best performance on the majority of outcome metrics. The number of models with external validation was limited.
Conclusion: DL models achieve excellent performance in segmentation and classification tasks for LSS, outperforming conventional ML algorithms. However, comparisons between studies are challenging due to the variety in outcome measures and test datasets. Future studies should focus on the segmentation task using DL models, and express model performance using a standardized set of outcome measures and a publicly available test dataset. In addition, these models need to be externally validated to assess generalizability.
@article{Verheijen2025,
title = {Artificial Intelligence for Segmentation and Classification in Lumbar Spinal Stenosis: an overview of current methods},
author = {Verheijen, E.J.A. and Kapogiannis, T. and Munteh, D. and Chabros, J. and Staring, M. and Smith, T.R. and Vleggeert-Lankamp, C.L.A.},
journal = {European Spine Journal},
volume = {},
pages = {},
month = {},
year = {2025},
}
This paper explores how artificial intelligence (AI) techniques can address common challenges in astronomy and (bio)medical imaging. It focuses on applying convolutional neural networks (CNNs) and other AI methods to tasks such as image reconstruction, object detection, anomaly detection, and generative modeling. Drawing parallels between domains like MRI and radio astronomy, the paper highlights the critical role of AI in producing high-quality image reconstructions and reducing artifacts. Generative models are examined as versatile tools for tackling challenges such as data scarcity and privacy concerns in medicine, as well as managing the vast and complex datasets found in astrophysics. Anomaly detection is also discussed, with an emphasis on unsupervised learning approaches that address the difficulties of working with large, unlabeled datasets. Furthermore, the paper explores the use of reinforcement learning to enhance CNN performance through automated hyperparameter optimization and adaptive decision-making in dynamic environments. The focus of this paper remains strictly on AI applications, without addressing the synergies between measurement techniques or the core algorithms specific to each field.
@article{Rezaei2025,
title = {Bridging Gaps with Computer Vision: AI in (Bio)Medical Imaging and Astronomy},
author = {Rezaei, Samira and Chegeni, Amirmohammad and Javadpour, Amir and VafaeiSadr, Alireza and Cao, Lu and R{\"o}ttgering, Huub and Staring, Marius},
journal = {Astronomy and Computing},
volume = {51},
pages = {100921},
month = apr,
year = {2025},
}
Artificial Intelligence (AI)-based auto-delineation technologies rapidly delineate multiple structures of interest, such as organs-at-risk and tumors, in 3D medical images, reducing personnel load and facilitating time-critical therapies. Despite its accuracy, the AI may produce flawed delineations, requiring clinician attention. Quality assessment (QA) of these delineations is laborious and demanding. Delineation error detection systems (DEDS) aim to aid QA, yet questions linger about potential challenges to their adoption and time-saving potential. To address these queries, we first conducted a user study with two clinicians from Holland Proton Therapy Center, a Dutch cancer treatment center. Based on the study’s findings about the clinicians’ error detection workflows with and without DEDS assistance, we developed a simulation model of the QA process, which we used to assess different error detection workflows on a retrospective cohort of 42 head and neck cancer patients. Results suggest possible time savings, provided the per-slice analysis time stays close to the current baseline and trading off delineation quality is acceptable. Our findings encourage the development of user-centric delineation error detection systems and provide a new way to model and evaluate these systems’ potential clinical value.
@article{ChavezDePlaza2025,
title = {Implementation of Delineation Error Detection Systems in Time-Critical Radiotherapy: Do AI-Supported Optimization and Human Preferences Meet?},
author = {Chaves-de-Plaza, Nicolas F. and Mody, Prerak and Hildebrandt, Klaus and Staring, Marius and Astreinidou, Eleftheria and de Ridder, Mischa and de Ridder, Huib and Vilanova, Anna and van Egmond, Rene},
journal = {Cognition, Technology \& Work},
volume = {},
pages = {},
month = {},
year = {2025},
}
Objective: The integration of proton beamlines with X-ray imaging/irradiation platforms has opened up possibilities for image-guided Bragg peak irradiations in small animals. Such irradiations allow selective targeting of normal tissue substructures and tumours. However, their small size and location pose challenges in designing experiments. This work presents a simulation framework for optimizing beamlines, imaging protocols, and the design of animal experiments. The usage of the framework is demonstrated, focusing mainly on the imaging part.
Approach: The fastCAT toolkit was modified with Monte Carlo (MC)-calculated primary and scatter data of a small animal imager for the simulation of micro-CT scans. The simulated CT of a mini-calibration phantom from fastCAT was validated against a full MC TOPAS CT simulation. A realistic beam model of a preclinical proton facility was obtained from beam transport simulations to create irradiation plans in matRad. Simulated CT images of a digital mouse phantom were generated using single-energy CT (SECT) and dual-energy CT (DECT) protocols and their accuracy in proton stopping power ratio (SPR) estimation and their impact on calculated proton dose distributions in a mouse were evaluated.
Main Results: The CT numbers from fastCAT agree within 11 HU with TOPAS, except for materials at the centre of the phantom; discrepancies for central inserts are caused by beam hardening. The root mean square deviations in the SPR for the best SECT (90 kV/Cu) and DECT (50 kV/Al-90 kV/Al) protocols are 3.7% and 1.0%, respectively. Dose distributions calculated for SECT and DECT datasets revealed range shifts <0.1 mm, gamma pass rates (3%/0.1 mm) greater than 99%, and no substantial dosimetric differences for any structure. These outcomes suggest that SECT is sufficient for proton treatment planning in animals.
Significance: The framework is a useful tool for the development of an optimized experimental configuration without using animals and beam time.
@article{Malimban2024,
title = {A simulation framework for preclinical proton irradiation workflow},
author = {Malimban, Justin and Ludwig, Felix and Lathouwers, Danny and Staring, Marius and Verhaegen, Frank and Brandenburg, Sytze},
journal = {Physics in Medicine and Biology},
volume = {69},
pages = {215040},
month = {},
year = {2024},
}
Visual scoring of interstitial lung disease in systemic sclerosis (SSc-ILD) from CT scans is laborious, subjective and time-consuming. This study aims to develop a deep learning framework to automate SSc-ILD scoring. The automated framework is a cascade of two neural networks. The first network selects the craniocaudal positions of the five scoring levels. Subsequently, for each level, the second network estimates the ratio of three patterns to the total lung area: the total extent of disease (TOT), ground glass (GG) and reticulation (RET). To overcome the score imbalance in the second network, we propose a method to augment the training dataset with synthetic data. To explain the network’s output, a heat map method is introduced to highlight the candidate interstitial lung disease regions. The explainability of the heat maps was evaluated by two human experts and by a quantitative method that uses the heat map to produce the score. The results show that our framework achieved a κ of 0.66, 0.58, and 0.65 for the TOT, GG and RET scoring, respectively. Both experts agreed with the heat maps in 91%, 90% and 80% of cases, respectively. It is therefore feasible to develop a framework for automated SSc-ILD scoring that performs competitively with human experts and provides high-quality explanations using heat maps. The model’s generalizability should be confirmed in future studies.
@article{Jia2024b,
title = {Explainable fully automated CT scoring of interstitial lung disease for patients suspected of systemic sclerosis by cascaded regression neural networks and its comparison with experts},
author = {Jia, Jingnan and Hern{\'a}ndez Gir{\'o}n, Irene and Schouffoer, Anne A. and De Vries-Bouwstra, Jeska K. and Ninaber, Maarten K. and Korving, Julie C. and Staring, Marius and Kroft, Lucia J.M. and Stoel, Berend C.},
journal = {Scientific Reports},
volume = {14},
pages = {26666},
month = {},
year = {2024},
}
Pulmonary function tests (PFTs) are important clinical metrics to measure the severity of interstitial lung disease for systemic sclerosis patients. However, PFTs cannot always be performed by spirometry if there is a risk of disease transmission or other contraindications. In addition, it is unclear how lung function is affected by changes in lung vessels. Convolutional neural networks (CNNs) have been previously proposed to estimate PFTs from chest CT scans (CNN-CT) and extracted vessels (CNN-Vessel). Due to GPU memory constraints, however, these networks used down-sampled images, which causes a loss of information on small vessels. Previous work based on CNNs has indicated that detailed vessel information from CT scans can be helpful for PFT estimation. Therefore, this paper proposes to use a point cloud neural network (PNN-Vessel) and a graph neural network (GNN-Vessel) to estimate PFTs from point cloud and graph-based representations of pulmonary vessel centerlines, respectively. After that, we perform multiple-variable step-wise regression analysis to explore whether vessel-based networks can contribute to PFT estimation, in addition to CNN-CT. Results showed that both PNN-Vessel and GNN-Vessel outperformed CNN-Vessel, by 14% and 4% respectively, when averaged across the ICC scores of four PFT metrics. In addition, compared to CNN-Vessel, PNN-Vessel used 30% of the training time (1.1 hours) and 7% of the parameters (2.1 M), and GNN-Vessel used only 7% of the training time (0.25 hours) and 0.7% of the parameters (0.2 M). Our multiple-variable regression analysis verified that more detailed vessel information could provide further explanation for PFT estimation from anatomical imaging.
@article{Jia2024a,
title = {Using 3D point cloud and graph-based neural networks to improve the estimation of pulmonary function tests from chest CT},
author = {Jia, Jingnan and Yu, Bo and Mody, Prerak and Ninaber, Maarten K. and Schouffoer, Anne A. and Kroft, Lucia J.M. and Staring, Marius and Stoel, Berend C.},
journal = {Computers in Biology and Medicine},
volume = {182},
pages = {109192},
month = nov,
year = {2024},
}
Increased usage of automated tools like deep learning in medical image segmentation has alleviated the bottleneck of manual contouring. This has shifted manual labour to quality assessment (QA) of automated contours, which involves detecting errors and correcting them. A potential solution to semi-automated QA is to use deep Bayesian uncertainty to recommend potentially erroneous regions, thus reducing time spent on error detection. Previous work has investigated the correspondence between uncertainty and error; however, no work has been done on improving the “utility” of Bayesian uncertainty maps such that uncertainty is only present in inaccurate regions and not in accurate ones. Our work trains the FlipOut model with the Accuracy-vs-Uncertainty (AvU) loss, which promotes uncertainty to be present only in inaccurate regions. We apply this method on datasets of two radiotherapy body sites, namely head-and-neck CT and prostate MR scans. Uncertainty heatmaps (i.e. predictive entropy) are evaluated against voxel inaccuracies using Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves. Numerical results show that, when compared to the Bayesian baseline, the proposed method successfully suppresses uncertainty for accurate voxels, with a similar presence of uncertainty for inaccurate voxels. Code to reproduce experiments is available at https://github.com/prerakmody/bayesuncertainty-error-correspondence.
@article{Mody:2024b,
title = {Improving Uncertainty-Error Correspondence in Deep Bayesian Medical Image Segmentation},
author = {Mody, Prerak and Chaves-de-Plaza, Nicolas and Rao, Chinmay and Astreinidou, Eleftheria and De Ridder, Mischa and Hoekstra, Nienke and Hildebrandt, Klaus and Staring, Marius},
journal = {The Journal of Machine Learning for Biomedical Imaging},
volume = {2},
pages = {1048 -- 1082},
month = aug,
year = {2024},
}
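The ROC-based evaluation of uncertainty heatmaps against voxel inaccuracies described in the abstract above can be sketched with plain NumPy. This is our own illustration, not the paper's implementation (which is in the linked repository); `roc_auc` computes the area under the ROC curve via the Mann-Whitney rank identity, with tied scores averaged:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Entropy of per-voxel class probabilities, shape (..., n_classes)."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores: flattened per-voxel uncertainty values (e.g. predictive entropy).
    labels: 1 where the segmentation is inaccurate, 0 where accurate.
    """
    scores = np.asarray(scores, dtype=float).ravel()
    labels = np.asarray(labels).ravel().astype(bool)
    order = np.argsort(scores, kind="mergesort")
    s = scores[order]
    ranks = np.empty(len(s))
    i = 0
    while i < len(s):                       # average ranks over tied scores
        j = i
        while j + 1 < len(s) and s[j + 1] == s[i]:
            j += 1
        ranks[i:j + 1] = 0.5 * (i + j) + 1  # mean of ranks i+1 .. j+1
        i = j + 1
    r = np.empty(len(s))
    r[order] = ranks
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    return (r[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A higher AUC means the uncertainty map concentrates on erroneous voxels, which is the "utility" property the AvU loss is meant to promote.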
The contour depth methodology enables non-parametric summarization of contour ensembles by extracting their representatives, confidence bands, and outliers for visualization (via contour boxplots) and robust downstream procedures. We address two shortcomings of these methods. Firstly, we significantly expedite the computation and recomputation of Inclusion Depth (ID), introducing a linear-time algorithm for epsilon ID, a variant used for handling ensembles with contours with multiple intersections. We also present the inclusion matrix, which contains the pairwise inclusion relationships between contours, and leverage it to accelerate the recomputation of ID. Secondly, extending beyond the single distribution assumption, we present the Relative Depth (ReD), a generalization of contour depth for ensembles with multiple modes. Building upon the linear-time eID, we introduce CDclust, a clustering algorithm that untangles ensemble modes of variation by optimizing ReD. Synthetic and real datasets from medical image segmentation and meteorological forecasting showcase the speed advantages, illustrate the use case of progressive depth computation and enable non-parametric multimodal analysis. To promote research and adoption, we offer the contour-depth Python package.
@article{Chaves-de-Plaza:2024,
title = {Depth for Multi-Modal Contour Ensembles},
author = {Chaves-de-Plaza, N.F. and Molenaar, M. and Mody, P. and Staring, M. and van Egmond, R. and Eisemann, E. and Vilanova, A. and Hildebrandt, K.},
journal = {Computer Graphics Forum},
volume = {43},
number = {3},
pages = {e15083},
year = {2024},
}
Background and Purpose: Retrospective dose evaluation for organ-at-risk auto-contours has previously used small cohorts due to additional manual effort required for treatment planning on auto-contours. We aimed to do this at large scale, by a) proposing and assessing an automated plan optimization workflow that used existing clinical plan parameters and b) using it for head-and-neck auto-contour dose evaluation.
Materials and Methods: Our automated workflow emulated our clinic’s treatment planning protocol and reused existing clinical plan optimization parameters. This workflow recreated the original clinical plan (POG) with manual contours (PMC) and evaluated the dose effect (POG - PMC) on 70 photon and 30 proton plans of head-and-neck patients. As a use-case, the same workflow (and parameters) created a plan using auto-contours (PAC) of eight head-and-neck organs-at-risk from a commercial tool and evaluated their dose effect (PMC - PAC).
Results: For plan recreation (POG - PMC), our workflow had a median impact of 1.0% and 1.5% across dose metrics of auto-contours, for photon and proton plans respectively. Computation time of automated planning was 25% (photon) and 42% (proton) of manual planning time. For auto-contour evaluation (PMC - PAC), we observed a median impact of 2.0% and 2.6% for photon and proton radiotherapy, respectively. All evaluations had a median ΔNTCP (Normal Tissue Complication Probability) of less than 0.3%.
Conclusions: The plan replication capability of our automated program provides a blueprint for other clinics to perform auto-contour dose evaluation with large patient cohorts. Finally, despite geometric differences, auto-contours had a minimal median dose impact, hence inspiring confidence in their utility and facilitating their clinical adoption.
@article{Mody:2024a,
title = {Large-scale dose evaluation of deep learning organ contours in head-and-neck radiotherapy by leveraging existing plans},
author = {Mody, Prerak and Huiskes, Merle and Chaves-de-Plaza, Nicolas and Onderwater, Alice and Lamsma, Rense and Hildebrandt, Klaus and Hoekstra, Nienke and Astreinidou, Eleftheria and Staring, Marius and Dankers, Frank},
journal = {Physics and Imaging in Radiation Oncology},
volume = {30},
pages = {100572},
month = apr,
year = {2024},
}
Artificial intelligence techniques, specifically deep learning, have already affected daily life in a wide range of areas. Likewise, initial applications have been explored in rheumatology. Deep learning might not easily surpass the accuracy of classic techniques when performing classification or regression on low-dimensional numerical data. With images as input, however, deep learning has become so successful that it has already outperformed the majority of conventional image-processing techniques developed during the past 50 years. As with any new imaging technology, rheumatologists and radiologists need to consider adapting their arsenal of diagnostic, prognostic and monitoring tools, and even their clinical role and collaborations. This adaptation requires a basic understanding of the technical background of deep learning, to efficiently utilize its benefits but also to recognize its drawbacks and pitfalls, as blindly relying on deep learning might be at odds with its capabilities. To facilitate such an understanding, it is necessary to provide an overview of deep-learning techniques for automatic image analysis in detecting, quantifying, predicting and monitoring rheumatic diseases, and of currently published deep-learning applications in radiological imaging for rheumatology, with critical assessment of possible limitations, errors and confounders, and conceivable consequences for rheumatologists and radiologists in clinical practice.
@article{Stoel:2024,
title = {Deep Learning in Rheumatologic Image Interpretation},
author = {Stoel, Berend C. and Staring, Marius and Reijnierse, Monique and van der Helm-van Mil, Annette H.M.},
journal = {Nature Reviews Rheumatology},
volume = {20},
pages = {182 -- 195},
month = mar,
year = {2024},
}
Multi-sequence magnetic resonance imaging (MRI) has found wide applications in both modern clinical studies and deep learning research. However, in clinical practice, it frequently occurs that one or more of the MRI sequences are missing due to different image acquisition protocols or contrast agent contraindications of patients, limiting the utilization of deep learning models trained on multi-sequence data. One promising approach is to leverage generative models to synthesize the missing sequences, which can serve as a surrogate acquisition. State-of-the-art methods tackling this problem are based on convolutional neural networks (CNN) which usually suffer from spectral biases, resulting in poor reconstruction of high-frequency fine details. In this paper, we propose Conditional Neural fields with Shift modulation (CoNeS), a model that takes voxel coordinates as input and learns a representation of the target images for multi-sequence MRI translation. The proposed model uses a multi-layer perceptron (MLP) instead of a CNN as the decoder for pixel-to-pixel mapping. Hence, each target image is represented as a neural field that is conditioned on the source image via shift modulation with a learned latent code. Experiments on BraTS 2018 and an in-house clinical dataset of vestibular schwannoma patients showed that the proposed method outperformed state-of-the-art methods for multi-sequence MRI translation both visually and quantitatively. Moreover, we conducted spectral analysis, showing that CoNeS was able to overcome the spectral bias issue common in conventional CNN models. To further evaluate the usage of synthesized images in clinical downstream tasks, we tested a segmentation network using the synthesized images at inference. The results showed that CoNeS improved the segmentation performance when some MRI sequences were missing and outperformed other synthesis models. We concluded that neural fields are a promising technique for multi-sequence MRI translation.
@article{Chen:2024,
author = {Chen, Yunjie and Staring, Marius and Neve, Olaf M. and Romeijn, Stephan R. and Hensen, Erik F. and Verbist, Berit M. and Wolterink, Jelmer M. and Tao, Qian},
title = {CoNeS: Conditional neural fields with shift modulation for multi-sequence MRI translation},
journal = {The Journal of Machine Learning for Biomedical Imaging},
volume = {2},
pages = {657 -- 685},
year = {2024},
}
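The shift-modulation idea described above, an MLP over voxel coordinates whose hidden activations are shifted by vectors derived from a latent code of the source image, can be illustrated with a toy NumPy forward pass. This is a sketch only: all sizes and names are ours, and the paper's model additionally learns the latent codes and weights end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden, out_dim, depth):
    """Random weights for a small coordinate MLP (illustrative only)."""
    dims = [in_dim] + [hidden] * depth + [out_dim]
    return [(rng.standard_normal((a, b)) / np.sqrt(a), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def conditional_field(coords, shifts, params):
    """Forward pass where each hidden layer is shift-modulated:
    h = relu(h @ W + b + shift_l), with shift_l derived from a latent code."""
    h = coords
    for l, (W, b) in enumerate(params[:-1]):
        h = np.maximum(h @ W + b + shifts[l], 0.0)
    W, b = params[-1]
    return h @ W + b  # predicted target intensities per coordinate

# toy usage: 2-D coordinates on an 8x8 grid, 3 hidden layers of width 16
params = init_mlp(2, 16, 1, 3)
shifts = [rng.standard_normal(16) * 0.1 for _ in range(3)]
coords = np.stack(np.meshgrid(np.linspace(-1, 1, 8),
                              np.linspace(-1, 1, 8)), -1).reshape(-1, 2)
out = conditional_field(coords, shifts, params)  # shape (64, 1)
```

Changing the shift vectors changes the predicted field everywhere, which is how a single decoder can represent different target images conditioned on different source scans.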
Ensembles of contours arise in various applications like simulation, computer-aided design, and semantic segmentation. Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting analysis of an ensemble’s distributional components, like the mean and median, confidence intervals, and outliers. Contour boxplots, powered by Contour Band Depth (CBD), are a popular nonparametric ensemble summarization method that benefits from CBD’s generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This facilitates implementing ID and understanding its results. Third, the computational complexity of ID scales quadratically in the number of members of the ensemble, improving on CBD’s cubic complexity. In practice, this speeds up the computation, enabling the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations, like clustering. In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID’s performance and demonstrate its capabilities for the visual analysis of contour ensembles.
@article{Chaves-de-Plaza:2025,
author = {Chaves-de-Plaza, Nicolas and Mody, Prerak P. and Staring, Marius and van Egmond, Ren{\'e} and Vilanova, Anna and Hildebrandt, Klaus},
title = {Inclusion Depth for Contour Ensembles},
journal = {IEEE Transactions on Visualization and Computer Graphics},
volume = {},
number = {},
pages = {},
year = {2024},
}
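The inside/outside principle behind Inclusion Depth can be illustrated on binary segmentation masks. The sketch below is our own naive quadratic-time simplification, under the assumption that a member's depth is the smaller of the fraction of other members that contain it and the fraction it contains; see the paper for the exact definition and the faster algorithms.

```python
import numpy as np

def inclusion_depth(masks):
    """Naive inclusion depth for an ensemble of binary masks, O(N^2) pairs.

    masks: array of shape (N, ...) with entries in {0, 1}; member i is
    'inside' member j when every foreground pixel of i is also foreground in j.
    """
    masks = np.asarray(masks, dtype=bool)
    n = len(masks)
    depths = np.zeros(n)
    for i in range(n):
        inside = sum(np.all(masks[i] <= masks[j]) for j in range(n) if j != i)
        contains = sum(np.all(masks[j] <= masks[i]) for j in range(n) if j != i)
        depths[i] = min(inside, contains) / (n - 1)
    return depths
```

For a nested family of masks, the most central member receives the highest depth, which is the behavior a contour boxplot needs in order to pick its representative median.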
Background: MR acquisition is a time-consuming process, making it susceptible to patient motion during scanning. Even motion on the order of a millimeter can introduce severe blurring and ghosting artifacts, potentially necessitating re-acquisition. MRI can be accelerated by acquiring only a fraction of k-space, combined with advanced reconstruction techniques leveraging coil sensitivity profiles and prior knowledge. AI-based reconstruction techniques have recently been popularized, but generally assume an ideal setting without intra-scan motion.
Purpose: To retrospectively detect and quantify the severity of motion artifacts in undersampled MRI data. This may prove valuable as a safety mechanism for AI-based approaches, provide useful information to the reconstruction method, or prompt for re-acquisition while the patient is still in the scanner.
Methods: We developed a deep learning approach that detects and quantifies motion artifacts in undersampled brain MRI. We demonstrate that synthetically motion-corrupted data can be leveraged to train the CNN-based motion artifact estimator, generalizing well to real-world data. Additionally, we leverage the motion artifact estimator by using it as a selector for a motion-robust reconstruction model in case a considerable amount of motion was detected, and a high data consistency model otherwise.
Results: Training and validation were performed on 4387 and 1304 synthetically motion-corrupted images and their uncorrupted counterparts, respectively. Testing was performed on undersampled in vivo motion-corrupted data from 28 volunteers, where our model distinguished head motion from motion-free scans with 91% and 96% accuracy when trained on synthetic and on real data, respectively. It predicted a manually defined quality label (‘Good’, ‘Medium’ or ‘Bad’ quality) correctly 76% and 85% of the time when trained on synthetic and real data, respectively. When used as a selector, it selected the appropriate reconstruction network 93% of the time, achieving near-optimal SSIM values.
Conclusions: The proposed method quantified motion artifact severity in undersampled MRI data with high accuracy, enabling real-time motion artifact detection that can help improve the safety and quality of AI-based reconstructions.
@article{Beljaards:2024,
author = {Beljaards, Laurens and Pezzotti, Nicola and Rao, Chinmay and Doneva, Mariya and van Osch, Matthias J.P. and Staring, Marius},
title = {AI-Based Motion Artifact Severity Estimation in Undersampled MRI Allowing for Selection of Appropriate Reconstruction Models},
journal = {Medical Physics},
volume = {51},
number = {5},
pages = {3555 -- 3565},
year = {2024},
}
Pulmonary function tests (PFTs) play an important role in screening and following up pulmonary involvement in systemic sclerosis (SSc). However, some patients are not able to perform PFTs due to contraindications. In addition, it is unclear how lung function is affected by changes in lung structure in SSc. Therefore, this study aims to explore the potential of automatically estimating PFT results from chest CT scans of SSc patients, and how different regions influence the estimation of PFT values. Deep regression networks were developed with transfer learning to estimate PFT results from 316 SSc patients. Segmented lungs and vessels were used to mask the CT images to train the network with different inputs: from the entire CT scan, to lungs-only, to vessels-only. The network trained on entire CT scans with transfer learning achieved an ICC of 0.71, 0.76, 0.80, and 0.81 for the estimation of DLCO, FEV1, FVC and TLC, respectively. The performance of the networks gradually decreased when trained on data from lungs-only and vessels-only. Regression attention maps showed that regions close to large vessels are highlighted more than other regions, and occasionally regions outside the lungs are highlighted. These results indicate that, apart from the lungs and large vessels, other regions contribute to the estimation of PFTs. In addition, adding manually designed biomarkers increased the correlation (R) from 0.75, 0.74, 0.82, and 0.83 to 0.81, 0.83, 0.88, and 0.90, respectively, meaning that manually designed imaging biomarkers can still contribute to explaining the relation between lung function and structure.
@article{Jia:2023,
author = {Jia, Jingnan and Marges, Emiel R. and Ninaber, Maarten K. and Kroft, Lucia J.M. and Schouffoer, Anne A. and Staring, Marius and Stoel, Berend C.},
title = {Automatic pulmonary function estimation from chest CT scans using deep regression neural networks: the relation between structure and function in systemic sclerosis},
journal = {IEEE Access},
volume = {11},
pages = {135272 -- 135282},
month = nov,
year = {2023},
}
Objective. Validation of automated 2-dimensional (2D) diameter measurements of vestibular schwannomas on magnetic resonance imaging (MRI).
Study Design. Retrospective validation study using 2 data sets containing MRIs of vestibular schwannoma patients.
Setting. University Hospital in The Netherlands.
Methods. Two data sets were used, 1 containing 1 scan per patient (n = 134) and the other containing at least 3 consecutive MRIs of 51 patients, all with contrast-enhanced T1 or high-resolution T2 sequences. 2D measurements of the maximal extrameatal diameters in the axial plane were automatically derived from a 3D convolutional neural network and compared to manual measurements by 2 human observers. Intra- and interobserver variabilities were calculated using the intraclass correlation coefficient (ICC), and agreement on tumor progression using Cohen’s kappa.
Results. The human intra- and interobserver variability showed a high correlation (ICC: 0.98-0.99) and limits of agreement of 1.7 to 2.1 mm. Comparing the automated to human measurements resulted in an ICC of 0.98 (95% confidence interval [CI]: 0.974; 0.987) and 0.97 (95% CI: 0.968; 0.984), with limits of agreement of 2.2 and 2.1 mm for diameters parallel and perpendicular to the posterior side of the temporal bone, respectively. There was satisfactory agreement on tumor progression between automated measurements and human observers (Cohen’s κ = 0.77), better than the agreement between the human observers (Cohen’s κ = 0.74).
Conclusion. Automated 2D diameter measurements and growth detection of vestibular schwannomas are at least as accurate as human 2D measurements. In clinical practice, measurements of the maximal extrameatal tumor (2D) diameters of vestibular schwannomas provide important complementary information to total tumor volume (3D) measurements. Combining both in an automated measurement algorithm facilitates clinical adoption.
@article{Neve:2023,
author = {Neve, Olaf M. and Romeijn, Stephan R. and Chen, Yunjie and Nagtegaal, Larissa and Grootjans, Willem and Jansen, Jeroen C. and Staring, Marius and Verbist, Berit M. and Hensen, Erik F.},
title = {Automated 2-dimensional measurement of vestibular schwannoma: validity and accuracy of an artificial intelligence algorithm},
journal = {Otolaryngology - Head and Neck Surgery},
volume = {169},
number = {6},
pages = {1582 -- 1589},
month = dec,
year = {2023},
}
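The agreement statistics used in the study above (ICC and Bland-Altman limits of agreement) are standard; as a minimal illustration (our own helper names, not the paper's code), the 95% limits of agreement and the two-way random, single-measure ICC(2,1) of Shrout and Fleiss can be computed as:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two raters' measurements,
    using the usual mean difference +/- 1.96 * SD convention."""
    d = np.asarray(a, float) - np.asarray(b, float)
    m, s = d.mean(), d.std(ddof=1)
    return m - 1.96 * s, m + 1.96 * s

def icc_2_1(X):
    """Two-way random, single-measure ICC(2,1); X has shape (subjects, raters)."""
    X = np.asarray(X, float)
    n, k = X.shape
    gm = X.mean()
    mean_s = X.mean(axis=1)            # per-subject means
    mean_r = X.mean(axis=0)            # per-rater means
    msr = k * ((mean_s - gm) ** 2).sum() / (n - 1)          # between subjects
    msc = n * ((mean_r - gm) ** 2).sum() / (k - 1)          # between raters
    mse = (((X - mean_s[:, None] - mean_r[None, :] + gm) ** 2).sum()
           / ((n - 1) * (k - 1)))                           # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With perfectly agreeing raters the ICC is 1, and narrower limits of agreement indicate that one measurement method can substitute for the other in clinical use.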
The particular mechanical obstruction of pulmonary embolism (PE) and chronic thromboembolic pulmonary hypertension (CTEPH) may affect pulmonary arteries and veins differently. Therefore, we evaluated whether pulmonary vascular morphology and densitometry using CT pulmonary angiography (CTPA) in arteries and veins could distinguish PE from CTEPH.
We analyzed CTPA images from a convenience cohort of 16 PE patients, 6 CTEPH patients and 15 controls without PE or CTEPH. Pulmonary vessels were extracted with a graph-cuts method, and separated into arteries and veins using a deep-learning classification method. By analyzing the distribution of vessel radii, vascular morphology was quantified into a slope (α) and intercept (β) for the entire pulmonary vascular tree, and for arteries and veins, separately. To quantify lung perfusion, the median pulmonary vascular density was calculated. As a reference, lung perfusion was also quantified by the contrast enhancement in the parenchymal areas, pulmonary trunk and descending aorta. All quantifications were compared between the three groups.
Vascular morphology did not differ between groups, in contrast to vascular density values (both arterial and venous; p-values 0.006 - 0.014). The median vascular density (interquartile range) was -452 (95), -567 (113) and -470 (323) HU, for the PE, control and CTEPH group, respectively. The perfusion curves from all measurements showed different patterns between groups.
In this proof-of-concept study, vascular density rather than vascular morphology differentiated between normal and thrombotically obstructed vasculature. For distinction at the individual patient level, further technical improvements are needed, both in terms of image acquisition/reconstruction and post-processing.
@article{Zhai:2023,
author = {Zhai, Zhiwei and Boon, Gudula J.A.M. and Staring, Marius and van Dam, Lisette F. and Kroft, Lucia J.M. and Giron, Irene Hernandez and Ninaber, Maarten K. and Bogaard, Harm Jan and Meijboom, Lilian J. and Vonk Noordegraaf, Anton and Huisman, Menno V. and Klok, Frederikus A. and Stoel, Berend C.},
title = {Automated Quantification of the Pulmonary Vasculature in Pulmonary Embolism and Chronic Thromboembolic Pulmonary Hypertension},
journal = {Pulmonary Circulation},
volume = {13},
number = {2},
pages = {e12223},
year = {2023},
}
@article{Goedmakers:2022,
author = {Goedmakers, C.M.W. and Pereboom, L.M. and Schoones, J.W. and de Leeuw den Bouter, M.L. and Remis, R.F. and Staring, M. and Vleggeert-Lankamp, C.L.A.},
title = {Machine learning for image analysis in the cervical spine: Systematic review of the available models and methods},
journal = {Brain and Spine},
volume = {2},
pages = {101666},
year = {2022},
}
Purpose: To develop automated vestibular schwannoma measurements on contrast-enhanced T1- and T2-weighted MRI.
Material and methods: MRI data from 214 patients in 37 different centers were retrospectively analyzed between 2020 and 2021. Patients with hearing loss (134 vestibular schwannoma positive [mean age ± SD, 54 ± 12 years; 64 men], 80 negative) were randomized to a training and validation set and an independent test set. A convolutional neural network (CNN) was trained using five-fold cross-validation for two models (T1 and T2). Quantitative measures including the Dice index, Hausdorff distance, surface-to-surface distance (S2S), and relative volume error were used to compare the computer and the human delineations. Furthermore, an observer study was performed in which two experienced physicians evaluated both delineations.
Results: The T1-weighted model showed state-of-the-art performance with a mean S2S distance of less than 0.6 mm for the whole tumor and the intrameatal and extrameatal tumor parts. The whole tumor Dice index and Hausdorff distance were 0.92 and 2.1 mm in the independent test set. T2-weighted images had a mean S2S distance less than 0.6 mm for the whole tumor and the intrameatal and extrameatal tumor parts. Whole tumor Dice index and Hausdorff distance were 0.87 and 1.5 mm in the independent test set. The observer study indicated that the tool was comparable to human delineations in 85-92% of cases.
Conclusion: The CNN model detected and delineated vestibular schwannomas accurately on contrast-enhanced T1 and T2-weighted MRI and distinguished the clinically relevant difference between intrameatal and extrameatal tumor parts.
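The quantitative comparison in this study uses standard segmentation metrics. As an illustrative sketch (not the study's implementation), the Dice index and a symmetric Hausdorff distance between two binary masks can be computed as follows; note this voxel-based Hausdorff is a simplification of the surface-based variant typically reported:

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_index(a, b):
    """Dice overlap between two binary masks (1 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between the foreground voxels of two
    masks, in physical units if `spacing` (mm/voxel) is given. Computed on
    all foreground voxels rather than the surface only, as a simplification."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = cdist(pa, pb)  # pairwise Euclidean distances between voxel coordinates
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For example, two 3×3 squares shifted by one voxel have a Dice index of 2/3 and a Hausdorff distance of one voxel.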
@article{Neve:2022,
author = {Neve, Olaf and Chen, Yunjie and Tao, Qian and Romeijn, Stephan and de Boer, Nick and Grootjans, Willem and Kruit, Mark and Lelieveldt, Boudewijn and Jansen, Jeroen and Hensen, Erik and Verbist, Berit and Staring, Marius},
title = {Fully Automated 3D Vestibular Schwannoma Segmentation with and without Gadolinium Contrast: a multi-center, multi-vendor study},
journal = {Radiology: Artificial Intelligence},
volume = {4},
number = {4},
pages = {e210300},
year = {2022},
}
Background suppression (BGS) in arterial spin labeling (ASL) MRI leads to a higher temporal SNR (tSNR) of the perfusion images compared to ASL without BGS. The performance of the BGS, however, depends on the tissue relaxation times and on inhomogeneities of the scanner’s magnetic fields, which differ between subjects and are unknown at the moment of scanning. Therefore, we developed a feedback loop (FBL) mechanism that optimizes the BGS for each subject in the scanner during acquisition. We implemented the FBL for 2D pseudo-continuous ASL (PCASL) scans with an echo-planar imaging (EPI) readout. After each dynamic scan, acquired ASL images were automatically sent to an external computer and processed with a Python processing tool. Inversion times were optimized on-the-fly using 80 iterations of the Nelder-Mead method, by minimizing the signal intensity in the label image while maximizing the signal intensity in the perfusion image. The performance of this method was first tested in a 4-component phantom. The regularization parameter was then tuned in 6 healthy subjects (3 male, 3 female, age 24-62 years) and set as λ=4 for all other experiments. Resulting ASL images, perfusion images and tSNR maps obtained from the last 20 iterations of the FBL scan were compared to those obtained without BGS and to standard BGS in 12 healthy volunteers (5 male, 7 female, age 24-62 years) (including the 6 volunteers used for tuning of λ). The FBL resulted in perfusion images with a statistically significantly higher tSNR (2.20) compared to standard BGS (1.96) (P < 5×10^-3, two-sided paired t-test). Minimizing signal in the label image furthermore resulted in control images from which approximate changes in perfusion signal can directly be appreciated. This could be relevant to ASL applications that require a high temporal resolution.
Future work is needed to minimize the number of initial acquisitions during which the performance of BGS is reduced compared to standard BGS and to extend the technique to 3D ASL.
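The on-the-fly optimization step can be illustrated with a toy model. The sketch below is not the scanner implementation: it uses a simplified mono-exponential relaxation model, hypothetical tissue T1 values, and omits the perfusion term and λ-regularization of the actual objective. It only shows the core idea of refining inversion times with a bounded number of Nelder-Mead iterations to null static-tissue signal in the label image:

```python
import numpy as np
from scipy.optimize import minimize

def mz_at_readout(t1, inv_times, t_readout=3.0):
    """Longitudinal magnetization at readout after a train of inversion
    pulses, assuming simple mono-exponential relaxation toward Mz = 1."""
    mz, t = 1.0, 0.0
    for ti in np.sort(inv_times):
        mz = 1.0 - (1.0 - mz) * np.exp(-(ti - t) / t1)  # relax from t to ti
        mz, t = -mz, ti                                  # inversion pulse
    return 1.0 - (1.0 - mz) * np.exp(-(t_readout - t) / t1)

def label_signal(inv_times, tissue_t1s=(0.8, 1.4, 4.3)):
    """Residual static-tissue signal in the label image: the quantity that
    background suppression aims to minimize. Tissue T1s are assumptions."""
    inv_times = np.clip(inv_times, 0.01, 2.99)  # keep pulses before readout
    return sum(abs(mz_at_readout(t1, inv_times)) for t1 in tissue_t1s)

# Refinement of two inversion times with 80 Nelder-Mead iterations,
# mirroring the iteration budget mentioned in the abstract.
res = minimize(label_signal, x0=[0.5, 2.0], method="Nelder-Mead",
               options={"maxiter": 80})
```

In the feedback loop, each optimization step would be driven by measured images rather than a relaxation model, which is what makes the result subject-specific.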
@article{Koolstra:2022,
author = {Koolstra, Kirsten and Staring, Marius and de Bruin, Paul and van Osch, Mathias J.P.},
title = {Subject-specific optimization of background suppression for arterial spin labeling MRI using a feedback loop on the scanner},
journal = {NMR in Biomedicine},
volume = {35},
number = {9},
pages = {e4746},
month = sep,
year = {2022},
}
Purpose. Parallel RF transmission (PTx) is one of the key technologies enabling high quality imaging at ultrahigh field strengths (≥7T). Compliance with regulatory limits on the local specific absorption rate (SAR) typically involves over-conservative safety margins to account for intersubject variability, which negatively affect the utilization of ultra-high field MR. In this work, we present a method to generate a subject-specific body model from a single T1-weighted dataset for personalized local SAR prediction in PTx neuroimaging at 7T.
Methods. Multi-contrast data were acquired at 7T (N=10) to establish ground truth segmentations in eight tissue types. A 2.5D convolutional neural network was trained using the T1-weighted data as input in a leave-one-out cross-validation study. The segmentation accuracy was evaluated through local SAR simulations in a quadrature birdcage as well as a PTx coil model.
Results. The network-generated segmentations reached overall Dice coefficients of 86.7% ± 6.7% (mean ± standard deviation) and were shown to successfully address the severe intensity bias and contrast variations typical of 7T. Errors in the obtained peak local SAR were below 3.0% in the quadrature birdcage. Results obtained in the PTx configuration indicated that a safety margin of 6.3% ensures conservative local SAR estimates in 95% of the random RF shims, compared to an average overestimation of 34% in the generic "one-size-fits-all" approach.
Conclusion. A subject-specific body model can be automatically generated from a single T1-weighted dataset by means of deep learning, providing the necessary inputs for accurate and personalized local SAR predictions in PTx neuroimaging at 7T.
@article{Brink:2022,
author = {Brink, Wyger M. and Yousefi, Sahar and Bhatnagar, Prerna and Remis, Rob F. and Staring, Marius and Webb, Andrew G.},
title = {Personalised Local SAR Prediction for Parallel Transmit Neuroimaging at 7T from a Single T1-weighted Dataset},
journal = {Magnetic Resonance in Medicine},
volume = {88},
number = {1},
pages = {464 -- 475},
month = jul,
year = {2022},
}
For image-guided small animal irradiations, the whole workflow of imaging, organ contouring, irradiation planning, and delivery is typically performed in a single session requiring continuous administration of anesthetic agents. Automating contouring leads to a faster workflow, which limits exposure to anesthesia and thereby reduces its impact on experimental results and on animal wellbeing. Here, we trained the 2D and 3D U-Net architectures of no-new-Net (nnU-Net) for autocontouring of the thorax in mouse micro-CT images. We trained the models only on native CTs and evaluated their performance using an independent testing dataset (i.e., native CTs not included in the training and validation). Unlike previous studies, we also tested the model performance on an external dataset (i.e., contrast-enhanced CTs) to see how well they predict on CTs completely different from what they were trained on. We also assessed the interobserver variability using the generalized conformity index (CIgen) among three observers, providing a stronger human baseline for evaluating automated contours than previous studies. Lastly, we showed the benefit in contouring time compared to manual contouring. The results show that 3D models of nnU-Net achieve superior segmentation accuracy and are more robust to unseen data than 2D models. For all target organs, the mean surface distance (MSD) and the Hausdorff distance (95p HD) of the best performing model for this task (nnU-Net 3d_fullres) are within 0.16 mm and 0.60 mm, respectively. These values are below the minimum required contouring accuracy of 1 mm for small animal irradiations, and improve significantly upon the state-of-the-art 2D U-Net-based AIMOS method. Moreover, the conformity indices of the 3d_fullres model also compare favourably to the interobserver variability for all target organs, whereas the 2D models perform poorly in this regard. Importantly, the 3d_fullres model offers a 98% reduction in contouring time.
@article{Malimban:2022,
author = {Malimban, Justin and Lathouwers, Danny and Qian, Haibin and Verhaegen, Frank and Wiedemann, Julia and Brandenburg, Sytze and Staring, Marius},
title = {Deep learning-based segmentation of the thorax in mouse micro-CT scans},
journal = {Scientific Reports},
volume = {12},
number = {1},
pages = {1822},
year = {2022},
}
Medical image registration and segmentation are two of the most frequent tasks in medical image analysis. As these tasks are complementary and correlated, it would be beneficial to apply them simultaneously in a joint manner. In this paper, we formulate registration and segmentation as a joint problem via a Multi-Task Learning (MTL) setting, allowing these tasks to leverage their strengths and mitigate their weaknesses through the sharing of beneficial information. We propose to merge these tasks not only on the loss level, but on the architectural level as well. We studied this approach in the context of adaptive image-guided radiotherapy for prostate cancer, where planning and follow-up CT images as well as their corresponding contours are available for training. At testing time the contours of the follow-up scans are not available, which is a common scenario in adaptive radiotherapy. The study involves two datasets from different manufacturers and institutes. The first dataset was divided into training (12 patients) and validation (6 patients), and was used to optimize and validate the methodology, while the second dataset (14 patients) was used as an independent test set. We carried out an extensive quantitative comparison between the quality of the automatically generated contours from different network architectures as well as loss weighting methods. Moreover, we evaluated the quality of the generated deformation vector field (DVF). We show that MTL algorithms outperform their Single-Task Learning (STL) counterparts and achieve better generalization on the independent test set. The best algorithm achieved a mean surface distance of 1.06±0.3 mm, 1.27±0.4 mm, 0.91±0.4 mm, and 1.76±0.8 mm on the validation set for the prostate, seminal vesicles, bladder, and rectum, respectively. 
The high accuracy of the proposed method, combined with its fast inference speed, makes it a promising method for automatic re-contouring of follow-up scans for adaptive radiotherapy, potentially reducing treatment-related complications and therefore improving patients' quality of life after treatment. The source code is available at https://github.com/moelmahdy/JRS-MTL.
@article{Elmahdy:2021,
author = {Elmahdy, Mohamed S. and Beljaards, Laurens and Yousefi, Sahar and Sokooti, Hessam and Verbeek, Fons and van der Heide, U.A. and Staring, Marius},
title = {Joint Registration and Segmentation via Multi-Task Learning for Adaptive Radiotherapy of Prostate Cancer},
journal = {IEEE Access},
volume = {9},
pages = {95551 -- 95568},
month = jun,
year = {2021},
}
Manual or automatic delineation of the esophageal tumor in CT images is known to be very challenging. This is due to the low contrast between the tumor and adjacent tissues, the anatomical variation of the esophagus, as well as the occasional presence of foreign bodies (e.g. feeding tubes). Physicians therefore usually exploit additional knowledge such as endoscopic findings, clinical history, and additional imaging modalities like PET scans. Acquiring this additional information is time-consuming, and the results are error-prone and might be non-deterministic. In this paper we aim to investigate if and to what extent a simplified clinical workflow based on CT alone allows one to automatically segment the esophageal tumor with sufficient quality. For this purpose, we present a fully automatic end-to-end esophageal tumor segmentation method based on convolutional neural networks (CNNs). The proposed network, called Dilated Dense Attention Unet (DDAUnet), leverages spatial and channel attention gates in each dense block to selectively concentrate on determinant feature maps and regions. Dilated convolutional layers are used to manage GPU memory and increase the network receptive field. We collected a dataset of 792 scans from 288 distinct patients including varying anatomies with air pockets, feeding tubes and proximal tumors. Repeatability and reproducibility studies were conducted for three distinct splits of training and validation sets. The proposed network achieved a DSC value of 0.79 ± 0.20, a mean surface distance of 5.4 ± 20.2 mm and a 95% Hausdorff distance of 14.7 ± 25.0 mm for 287 test scans, demonstrating promising results with a simplified clinical workflow based on CT alone. Our code is publicly available via https://github.com/yousefis/DenseUnet_Esophagus_Segmentation.
@article{Yousefi:2021,
author = {Yousefi, Sahar and Sokooti, Hessam and Elmahdy, Mohamed S. and Lips, Irene M. and Manzuri Shalmani, Mohammad T. and Zinkstok, Roel T. and Dankers, Frank J.W.M. and Staring, Marius},
title = {Esophageal Tumor Segmentation in CT Images using a Dilated Dense Attention Unet (DDAUnet)},
journal = {IEEE Access},
volume = {9},
pages = {99235 -- 99248},
month = jul,
year = {2021},
}
In this paper we propose a supervised method to predict registration misalignment using convolutional neural networks (CNNs). This task is cast as a classification problem with multiple classes of misalignment: "correct" 0-3 mm, "poor" 3-6 mm and "wrong" over 6 mm. Rather than a direct prediction, we propose a hierarchical approach, where the prediction is gradually refined from coarse to fine. Our solution is based on a convolutional Long Short-Term Memory (LSTM), using hierarchical misalignment predictions on three resolutions of the image pair, leveraging the intrinsic strengths of an LSTM for this problem. The convolutional LSTM is trained on a set of artificially generated image pairs obtained from artificial displacement vector fields (DVFs). Results on chest CT scans show that incorporating multi-resolution information, hierarchically via an LSTM, leads to overall better F1 scores, with fewer misclassifications in a well-tuned registration setup. The final system yields an accuracy of 87.1%, and an average F1 score of 66.4% aggregated in two independent chest CT scan studies.
@article{Sokooti:2021,
author = {Sokooti, Hessam and Yousefi, Sahar and Elmahdy, Mohamed S. and Lelieveldt, Boudewijn P.F. and Staring, Marius},
title = {Hierarchical Prediction of Registration Misalignment using a Convolutional LSTM: Application to Chest CT Scans},
journal = {IEEE Access},
volume = {9},
pages = {62008 -- 62020},
month = apr,
year = {2021},
}
Purpose: Efficient compression of images while preserving image quality has the potential to be a major enabler of effective remote clinical diagnosis and treatment, since poor Internet connection conditions are often the primary constraint in such services. This paper presents a framework for organ-specific image compression for teleinterventions based on a deep learning approach and an anisotropic diffusion filter.
Methods: The proposed method, DLAD, uses a CNN architecture to extract a probability map for the organ of interest; this probability map guides an anisotropic diffusion filter that smooths the image except at the location of the organ of interest. Subsequently, a compression method, such as BZ2 and HEVC-visually lossless, is applied to compress the image. We demonstrate the proposed method on 3D CT images acquired for radio frequency ablation (RFA) of liver lesions. We quantitatively evaluate the proposed method on 151 CT images using peak-signal-to-noise ratio (PSNR), structural similarity (SSIM) and compression ratio (CR) metrics. Finally, we compare the assessments of two radiologists on the liver lesion detection and the liver lesion center annotation using 33 sets of the original images and the compressed images.
Results: The results show that the method can significantly improve CR of most well-known compression methods. DLAD combined with HEVC-visually lossless achieves the highest average CR of 6.45, which is 36% higher than that of the original HEVC and outperforms other state-of-the-art lossless medical image compression methods. The means of PSNR and SSIM are 70 dB and 0.95, respectively. In addition, the compression effects do not statistically significantly affect the assessments of the radiologists on the liver lesion detection and the lesion center annotation.
Conclusions: We thus conclude that the method has a high potential to be applied in teleintervention applications.
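The core idea of guiding smoothing by an organ probability map can be sketched in a few lines. This is an illustrative 2D Perona-Malik-style toy, not the DLAD implementation; the parameters (`kappa`, `dt`, iteration count) and the `(1 - prob)` weighting are assumptions chosen for demonstration:

```python
import numpy as np

def organ_guided_diffusion(img, prob, n_iter=20, kappa=30.0, dt=0.15):
    """Edge-stopping smoothing whose update is scaled by (1 - prob), so
    regions where the organ probability is high are left untouched while
    the background is smoothed (and hence compresses better)."""
    img = img.astype(float).copy()
    for _ in range(n_iter):
        flux = np.zeros_like(img)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            g = np.roll(img, shift, axis=axis) - img  # finite difference
            flux += np.exp(-(g / kappa) ** 2) * g     # Perona-Malik conductivity
        img += dt * (1.0 - prob) * flux               # freeze the organ region
    return img
```

Pixels with probability 1 are reproduced exactly, while the background loses high-frequency content, which is what allows a subsequent generic compressor such as HEVC to achieve higher compression ratios.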
@article{Luu:2021,
author = {Luu, Ha Manh and van Walsum, Theo and Franklin, Daniel and Pham, Phuong Cam and Vu, Luu Dang and Moelker, Adriaan and Staring, Marius and Van Hoang, Xiem and Niessen, Wiro and Trung, Nguyen Linh},
title = {Efficiently Compressing 3D Medical Images for Teleinterventions via CNNs and Anisotropic Diffusion},
journal = {Medical Physics},
volume = {48},
number = {6},
pages = {2877 -- 2890},
month = jun,
year = {2021},
}
Adaptive intelligence aims at empowering machine learning techniques with the additional use of domain knowledge. In this work, we present the application of adaptive intelligence to accelerate MR acquisition. Starting from undersampled k-space data, an iterative learning-based reconstruction scheme inspired by compressed sensing theory is used to reconstruct the images. We developed a novel deep neural network to refine and correct prior reconstruction assumptions given the training data. The network was trained and tested on a knee MRI dataset from the 2019 fastMRI challenge organized by Facebook AI Research and NYU Langone Health. All submissions to the challenge were initially ranked based on similarity with a known groundtruth, after which the top 4 submissions were evaluated radiologically. Our method was evaluated by the fastMRI organizers on an independent challenge dataset. It ranked #1 on the 8x accelerated multi-coil track, shared #1 on the 4x multi-coil track, and ranked #3 on the 4x single-coil track. This demonstrates the superior performance and wide applicability of the method.
@article{Pezzotti:2020,
author = {Pezzotti, Nicola and Yousefi, Sahar and Elmahdy, Mohamed S. and van Gemert, Jeroen and Sch{\"u}lke, Christophe and Doneva, Mariya and Nielsen, Tim and Kastryulin, Sergey and Lelieveldt, Boudewijn P.F. and van Osch, Matthias J.P. and de Weerdt, Elwin and Staring, Marius},
title = {An Adaptive Intelligence Algorithm for Undersampled Knee MRI Reconstruction},
journal = {IEEE Access},
volume = {8},
pages = {204825 -- 204838},
year = {2020},
}
Imaging pulmonary fissures by CT provides useful information for the diagnosis of pulmonary diseases. Automatic segmentation of fissures is a challenging task due to the variable appearance of fissures, such as inhomogeneous intensities, pathological deformation and imaging noise. To overcome these challenges, we propose an anisotropic differential operator called the directional derivative of plate (DDoP) filter to probe the presence of fissure objects in 3D space by modeling the profile of a fissure patch with three parallel plates. To reduce the huge computational burden of dense matching with rotated DDoP kernels, a family of spherical harmonics is utilized for acceleration. Additionally, a two-stage post-processing scheme is introduced to segment fissures. The performance of our method was verified in experiments using 55 scans from the publicly available LOLA11 dataset and 50 low-dose CT scans of lung cancer patients from the VIA-ELCAP database. Our method showed superior performance compared to the derivative of sticks (DoS) method and the Hessian-based method in terms of median and mean F1-score. The median F1-score for the DDoP, DoS-based and Hessian-based methods on the LOLA11 dataset was 0.899, 0.848 and 0.843, respectively, and the mean F1-score was 0.858 ± 0.103, 0.781 ± 0.165 and 0.747 ± 0.239, respectively.
@article{Zhao:2020,
author = {Zhao, Hong and Stoel, Berend C. and Staring, Marius and Bakker, M. Els and Stolk, Jan and Zhou, Ping and Xiao, Changyan},
title = {A framework for pulmonary fissure segmentation in 3D CT images using a directional derivative of plate filter},
journal = {Signal Processing},
volume = {173},
pages = {107602},
month = aug,
year = {2020},
}
The problem of motion detection has received considerable attention due to the explosive growth of its applications in video analysis and surveillance systems. While previous approaches can produce good results, accurate detection of motion remains a challenging task due to the difficulties raised by illumination variations, occlusion, camouflage, sudden motions appearing in bursts, dynamic texture, and environmental changes such as weather conditions and sunlight changes during the day. In this study, a novel per-pixel motion descriptor is proposed for motion detection in video sequences, which outperforms the current methods in the literature, particularly in severe scenarios. The proposed descriptor is based on two complementary three-dimensional discrete wavelet transforms (3D-DWT) and a three-dimensional wavelet leader. In this approach, a feature vector is extracted for each pixel by applying a novel three-dimensional wavelet-based motion descriptor. Then, the extracted features are clustered by the well-known K-means algorithm. The experimental results demonstrate the effectiveness of the proposed method compared to state-of-the-art approaches on several public benchmark datasets. The application of the proposed method and additional experimental results for several challenging datasets are available online.
@article{Yousefi:2019,
author = {Yousefi, Sahar and Manzuri Shalmani, M. T. and Lin, Jeremy and Staring, Marius},
title = {A Novel Motion Detection Method Using 3D Discrete Wavelet Transform},
journal = {IEEE Transactions on Circuits and Systems for Video Technology},
volume = {29},
number = {12},
pages = {3487 -- 3500},
month = dec,
year = {2019},
}
Purpose: To evaluate the feasibility of fiducial markers as a surrogate for GTV position in image-guided radiotherapy of rectal cancer.
Methods and Materials: We analyzed 35 fiducials in 19 rectal cancer patients who received short-course radiotherapy or long-course chemoradiotherapy. An MRI exam was acquired before and after the first week of radiotherapy, and daily pre- and post-irradiation CBCT scans were acquired in the first week of radiotherapy. Between the two MRI exams, the fiducial displacement relative to the center of gravity of the GTV (COGGTV) and the COGGTV displacement relative to bony anatomy were determined. Using the CBCT scans, inter- and intrafraction fiducial displacement relative to bony anatomy was determined.
Results: The systematic error of the fiducial displacement relative to the COGGTV was 2.8, 2.4 and 4.2 mm in the left-right (LR), anterior-posterior (AP) and craniocaudal (CC) directions, respectively. Large interfraction systematic errors of up to 8.0 mm and random errors of up to 4.7 mm were found for COGGTV and fiducial displacements relative to bony anatomy, mostly in the AP and CC directions. For tumors located in the mid and upper rectum these errors were up to 9.4 mm (systematic) and 5.6 mm (random), compared to 4.9 and 2.9 mm for tumors in the lower rectum. Systematic and random errors of the intrafraction fiducial displacement relative to bony anatomy were ≤ 2.1 mm in all directions.
Conclusions: Large interfraction errors of the COGGTV and the fiducials relative to bony anatomy were found. Therefore, despite the observed fiducial displacement relative to the COGGTV, the use of fiducials as a surrogate for GTV position reduces the required margins in the AP and CC direction for a GTV boost using image-guided radiotherapy of rectal cancer. This reduction may be larger in patients with tumors located in the mid- and upper rectum compared to the lower rectum.
@article{vandenEnde:2019a,
author = {van den Ende, R.P.J. and Kerkhof, E.M. and Rigter, L.S. and van Leerdam, M.E. and Peters, F.P. and van Triest, B. and Staring, M. and Marijnen, C.A.M. and van der Heide, U.A.},
title = {Feasibility of gold fiducial markers as a surrogate for GTV position in image-guided radiotherapy of rectal cancer},
journal = {International Journal of Radiation Oncology, Biology, Physics},
volume = {105},
number = {5},
pages = {1151 -- 1159},
month = dec,
year = {2019},
}
Objective: Our goal was to investigate the performance of an open source deformable image registration package, elastix, for fast and robust contour propagation in the context of online adaptive intensity-modulated proton therapy (IMPT) for prostate cancer.
Methods: A planning CT scan and 7-10 repeat CT scans were available for 18 prostate cancer patients. Automatic contour propagation of repeat CT scans was performed using elastix and compared with manual delineations in terms of geometric accuracy and runtime. Dosimetric accuracy was quantified by generating IMPT plans using the propagated contours expanded with a 2 mm (prostate) and 3.5 mm margin (seminal vesicles and lymph nodes) and calculating dosimetric coverage based on the manual delineation. A coverage of V95% ≥ 98% (at least 98% of the target volumes receive at least 95% of the prescribed dose) was considered clinically acceptable.
Results: Contour propagation runtime varied between 3 and 30 seconds for different registration settings. For the fastest setting, 83 of 93 (89.2%), 73 of 93 (78.5%), and 91 of 93 (97.9%) registrations yielded clinically acceptable dosimetric coverage of the prostate, seminal vesicles, and lymph nodes, respectively. For the prostate, seminal vesicles, and lymph nodes the Dice Similarity Coefficient (DSC) was 0.87 ± 0.05, 0.63 ± 0.18 and 0.89 ± 0.03 and the mean surface distance (MSD) was 1.4 ± 0.5 mm, 2.0 ± 1.2 mm and 1.5 ± 0.4 mm, respectively.
Conclusion: With a dosimetric success rate of 78.5% to 97.9%, this software may facilitate online adaptive IMPT of prostate cancer using a fast, free and open implementation.
@article{Qiao:2019a,
author = {Qiao, Yuchuan and Jagt, Thyrza and Hoogeman, Mischa and Lelieveldt, Boudewijn P.F. and Staring, Marius},
title = {Evaluation of an open source registration package for automatic contour propagation in online adaptive intensity-modulated proton therapy of prostate cancer},
journal = {Frontiers in Oncology},
volume = {9},
pages = {1297},
month = nov,
year = {2019},
}
Purpose: Gas exchange in systemic sclerosis (SSc) is known to be affected by fibrotic changes in the pulmonary parenchyma. However, SSc patients without detectable fibrosis can still have impaired gas transfer. We aim to investigate whether pulmonary vascular changes could partly explain a reduction in gas transfer of systemic sclerosis (SSc) patients without fibrosis.
Materials and Methods: We selected 77 patients, whose visual CT scoring showed no fibrosis. Pulmonary vessels were detected automatically in CT images and their local radii were calculated. The frequency of occurrence for each radius was calculated, and from this radius histogram two imaging biomarkers (α and β) were extracted, where α reflects the relative contribution of small vessels compared to large vessels and β represents the vessel tree capacity. Correlations between imaging biomarkers and gas transfer (DLCOc %predicted) were evaluated with Spearman’s correlation. Multivariable stepwise linear regression was performed with DLCOc %predicted as dependent variable and age, BMI, sPAP, FEV1 %predicted, TLC %predicted, FVC %predicted, α, β, voxel size and CT-derived lung volume as independent variables.
Results: Both α and β were significantly correlated with gas transfer (R=-0.29, p-value=0.011 and R=0.32, p-value=0.004, respectively). The multivariable step-wise linear regression analysis selected sPAP (coefficient=-0.78, 95%CI=[-1.07, -0.49], p-value<0.001), β (coefficient=8.6, 95%CI=[4.07, 13.1], p-value<0.001) and FEV1 %predicted (coefficient=0.3, 95%CI=[0.12, 0.48], p-value=0.001) as significant independent predictors of DLCOc %predicted (R=0.71, p-value<0.001).
Conclusions: In SSc patients without detectable pulmonary fibrosis, pulmonary vascular morphology is associated with gas transfer, indicating that impaired gas exchange is associated with vascular changes.
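The α and β biomarkers are derived from the frequency of occurrence of vessel radii. A hedged sketch of this idea, assuming the fit is a straight line through the radius histogram in log-log space (the exact binning and fitting details of the study may differ):

```python
import numpy as np

def radius_histogram_biomarkers(radii, bins=10):
    """Fit a line to the vessel radius histogram in log-log space.
    The slope (alpha) reflects the relative contribution of small versus
    large vessels; the intercept (beta) relates to vessel-tree capacity.
    Binning and fitting choices here are illustrative assumptions."""
    counts, edges = np.histogram(radii, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0  # the log of empty bins is undefined
    alpha, beta = np.polyfit(np.log(centers[keep]), np.log(counts[keep]), 1)
    return alpha, beta
```

On synthetic radii following an approximate power law (counts proportional to r^-2), the fitted slope comes out near -2, illustrating how α summarizes the balance between small and large vessels.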
@article{Zhai:2019a,
author = {Zhai, Zhiwei and Staring, Marius and Ninaber, Maarten K. and de Vries-Bouwstra, Jeska and Schouffoer, Anne A. and Kroft, Lucia J. and Stolk, Jan and Stoel, Berend C.},
title = {Pulmonary Vascular Morphology Associated with Gas Exchange in Systemic Sclerosis without Lung Fibrosis},
journal = {Journal of Thoracic Imaging},
volume = {34},
number = {6},
pages = {373 -- 379},
month = nov,
year = {2019},
}
Stochastic gradient descent (SGD) is commonly used to solve (parametric) image registration problems. In case of badly scaled problems, SGD however only exhibits sublinear convergence properties. In this paper we propose an efficient preconditioner estimation method to improve the convergence rate of SGD. Based on the observed distribution of voxel displacements in the registration, we estimate the diagonal entries of a preconditioning matrix, thus rescaling the optimization cost function. The preconditioner is efficient to compute and employ, and can be used for mono-modal as well as multi-modal cost functions, in combination with different transformation models like the rigid, affine and B-spline model. Experiments on different clinical data sets show that the proposed method indeed improves the convergence rate compared to SGD with speedups around 2-5 in all tested settings, while retaining the same level of registration accuracy.
@article{Qiao:2019b,
author = {Qiao, Yuchuan and Lelieveldt, Boudewijn P.F and Staring, Marius},
title = {An efficient preconditioner for stochastic gradient descent optimization of image registration},
journal = {IEEE Transactions on Medical Imaging},
volume = {38},
number = {10},
pages = {2314 -- 2325},
month = oct,
year = {2019},
}
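As an illustrative sketch of the idea in the abstract above, the snippet below runs gradient descent on a badly scaled two-parameter quadratic, with and without a diagonal preconditioner. The quadratic cost, the step sizes and the preconditioner (inverse per-parameter curvature) are illustrative assumptions, not the paper's registration cost function or its displacement-based estimator.

```python
# Sketch: diagonal preconditioning rescales a badly scaled cost function.
# f(x) = 0.5 * (a1*x1^2 + a2*x2^2), with gradient g = (a1*x1, a2*x2).
# Scales, step sizes and iteration count are illustrative assumptions.

def gradient(x, scales):
    return [a * xi for a, xi in zip(scales, x)]

def descend(x0, scales, precond, step, iters):
    x = list(x0)
    for _ in range(iters):
        g = gradient(x, scales)
        # Preconditioned update: x <- x - step * P * g (P diagonal)
        x = [xi - step * p * gi for xi, p, gi in zip(x, precond, g)]
    return x

scales = [100.0, 1.0]                 # badly scaled problem
x0 = [1.0, 1.0]
cost = lambda x: 0.5 * sum(a * xi * xi for a, xi in zip(scales, x))

# Identity preconditioner: step size limited by the largest curvature.
plain = descend(x0, scales, [1.0, 1.0], 0.01, 50)
# Diagonal preconditioner ~ inverse curvature: all directions rescaled.
pre = descend(x0, scales, [1.0 / a for a in scales], 0.5, 50)
```

After 50 iterations the preconditioned run reaches a far lower cost than the plain run, mirroring the convergence speedup the abstract reports.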
Purpose: Morphological changes to anatomy resulting from invasive surgical procedures or pathology, typically alter the surrounding vasculature. This makes it useful as a descriptor for feature-driven image registration in various clinical applications. However, registration of vasculature remains challenging, as vessels often differ in size and shape, and may even miss branches, due to surgical interventions or pathological changes. Furthermore, existing vessel registration methods are typically designed for a specific application. To address this limitation, we propose a generic vessel registration approach useful for a variety of clinical applications, involving different anatomical regions.
Methods: A probabilistic registration framework based on a hybrid mixture model, with a refinement mechanism to identify missing branches (denoted as HdMM+) during vasculature matching, is introduced. Vascular structures are represented as 6-dimensional hybrid point sets comprising spatial positions and centerline orientations, using Student’s t-distributions to model the former and Watson distributions for the latter.
Results: The proposed framework is evaluated for intraoperative brain shift compensation, and for monitoring changes in pulmonary vasculature resulting from chronic lung disease. Registration accuracy is validated using both synthetic and patient data. Our results demonstrate that HdMM+ reduces the initial error by more than 85% for both applications, and outperforms state-of-the-art point-based registration methods such as coherent point drift (CPD) and the Student's t-distribution mixture model (TMM) in terms of mean surface distance, modified Hausdorff distance, Dice and Jaccard scores.
Conclusion: The proposed registration framework models complex vascular structures using a hybrid representation of vessel centerlines, and accommodates intricate variations in vascular morphology. Furthermore, it is generic and flexible in its design, enabling its use in a variety of clinical applications.
@article{Bayer:2019,
author = {Bayer, Siming and Zhai, Zhiwei and Strumia, Maddalena and Tong, Xiaoguang and Gao, Ying and Staring, Marius and Stoel, Berend and Fahrig, Rebecca and Nabavi, Arya and Maier, Andreas and Ravikumar, Nishant},
title = {Registration of vascular structures using a hybrid mixture model},
journal = {International Journal of Computer Assisted Radiology and Surgery},
volume = {14},
number = {9},
pages = {1507 -- 1516},
month = sep,
year = {2019},
}
Purpose: Vascular remodeling is a significant pathological feature of various pulmonary diseases, which may be assessed by quantitative CT imaging. The purpose of this study was therefore to develop and validate an automatic method for quantifying pulmonary vascular morphology in CT images.
Methods: The proposed method consists of pulmonary vessel extraction and quantification. For extracting pulmonary vessels, a graph-cuts based method is proposed which considers appearance (CT intensity) and shape (vesselness from a Hessian-based filter) features, and incorporates distance to the airways into the cost function to prevent false detection of airway walls. For quantifying the extracted pulmonary vessels, a radius histogram is generated by counting the occurrence of vessel radii, calculated from a distance transform based method. Subsequently, two biomarkers, slope α and intercept β, are calculated by linear regression on the radius histogram. A public data set from the VESSEL12 challenge was used to independently evaluate the vessel extraction. The quantitative analysis method was validated using images of a 3D printed vessel phantom, scanned by a clinical CT scanner and a micro-CT scanner (to obtain a gold standard). To confirm the association between imaging biomarkers and pulmonary function, 77 scleroderma patients were investigated with the proposed method.
Results: In the independent evaluation with the public data set, our vessel segmentation method obtained an area under the ROC curve of 0.976. The median radius difference between clinical and micro-CT scans of a 3D printed vessel phantom was 0.062 ± 0.020 mm, with interquartile range of 0.199 ± 0.050 mm. In the studied patient group, a significant correlation between diffusion capacity for carbon monoxide and the biomarkers, α (R=-0.27, p-value=0.018) and β (R=0.321, p-value=0.004), was obtained.
Conclusions: In conclusion, the proposed method was highly accurate, validated with a public data set and a 3D printed vessel phantom data set. The correlation between imaging biomarkers and diffusion capacity in a clinical data set confirmed an association between lung structure and function. This quantification of pulmonary vascular morphology may be helpful in understanding the pathophysiology of pulmonary vascular diseases.
@article{Zhai:2019b,
author = {Zhai, Zhiwei and Staring, Marius and Giron, Irene Hernandez and Veldkamp, Wouter J.H. and Kroft, Lucia J. and Ninaber, Maarten K. and Stoel, Berend C.},
title = {Automatic quantitative analysis of pulmonary vascular morphology in CT images},
journal = {Medical Physics},
volume = {46},
number = {9},
pages = {3985 -- 3997},
month = sep,
year = {2019},
}
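The abstract above derives two biomarkers, slope α and intercept β, by linear regression on the radius histogram. The sketch below shows one possible instantiation in pure Python; the log10 transform of the counts, the unit-width bins and the synthetic radii are illustrative assumptions, since the abstract does not specify these conventions.

```python
# Sketch: radius histogram and linear-regression biomarkers (alpha, beta).
# Log10 of the counts and unit-width bins are illustrative assumptions.
import math
from collections import Counter

def radius_histogram(radii, bin_width=1.0):
    """Count occurrences of vessel radii per bin, as in the abstract."""
    counts = Counter(int(r / bin_width) for r in radii)
    return sorted((b * bin_width, n) for b, n in counts.items())

def fit_line(xs, ys):
    """Closed-form least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Synthetic radii: counts fall by 10x per mm of radius (assumption).
radii = [r for r in range(1, 6) for _ in range(10 ** (6 - r))]
hist = radius_histogram(radii)
xs = [r for r, _ in hist]
ys = [math.log10(n) for _, n in hist]
alpha, beta = fit_line(xs, ys)   # slope alpha, intercept beta
```

On this synthetic histogram the fit recovers the slope and intercept exactly, which is the kind of summary the abstract correlates with diffusion capacity.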
Purpose: To develop and validate a robust and accurate registration pipeline for automatic contour propagation for online adaptive Intensity-Modulated Proton Therapy (IMPT) of prostate cancer using elastix software and deep learning.
Methods: A 3D Convolutional Neural Network was trained for automatic bladder segmentation of the CT scans. The automatic bladder segmentation and the CT scan were jointly optimized to add explicit knowledge about the underlying anatomy to the registration algorithm. We included three datasets from different institutes and CT manufacturers. The first was used for training and testing the ConvNet, while the second and third were used for evaluation of the proposed pipeline. The system performance was quantified geometrically using the Dice Similarity Coefficient (DSC), the Mean Surface Distance (MSD), and the 95% Hausdorff Distance (HD). The propagated contours were validated clinically by generating the associated IMPT plans and comparing them with the IMPT plans based on the manual delineations. Propagated contours were considered clinically acceptable if their treatment plans met the dosimetric coverage constraints on the manual contours.
Results: The bladder segmentation network achieved a DSC of 88% and 82% on the test datasets. The proposed registration pipeline achieved an MSD of 1.29 ± 0.39, 1.48 ± 1.16, and 1.49 ± 0.44 mm for the prostate, seminal vesicles, and lymph nodes, respectively, on the second dataset, and an MSD of 2.31 ± 1.92 and 1.76 ± 1.39 mm for the prostate and seminal vesicles on the third dataset. The automatically propagated contours met the dose coverage constraints in 86%, 91%, and 99% of the cases for the prostate, seminal vesicles, and lymph nodes, respectively. A Conservative Success Rate (CSR) of 80% was obtained, compared to 65% when only using intensity-based registration.
Conclusions: The proposed registration pipeline obtained highly promising results for generating treatment plans adapted to the daily anatomy. With 80% of the automatically generated treatment plans directly usable without manual correction, a substantial improvement in system robustness was reached compared to a previous approach. The proposed method therefore facilitates more precise proton therapy of prostate cancer, potentially leading to fewer treatment related adverse side effects.
@article{Elmahdy:2019,
author = {Elmahdy, Mohamed S. and Jagt, Thyrza and Zinkstok, Roel Th. and Qiao, Yuchuan and Shahzad, Rahil and Sokooti, Hessam and Yousefi, Sahar and Incrocci, Luca and Marijnen, Corrie A.M. and Hoogeman, Mischa and Staring, Marius},
title = {Robust contour propagation using deep learning and image registration for online adaptive proton therapy of prostate cancer},
journal = {Medical Physics},
volume = {46},
number = {8},
pages = {3329 -- 3343},
month = aug,
year = {2019},
}
Predicting registration error can be useful for evaluation of registration procedures, which is important for the adoption of registration techniques in the clinic. In addition, quantitative error prediction can be helpful in improving the registration quality. The task of predicting registration error is demanding due to the lack of a ground truth in medical images. This paper proposes a new automatic method to predict the registration error in a quantitative manner, and is applied to chest CT scans. A random regression forest is utilized to predict the registration error locally. The forest is built with features related to the transformation model and features related to the dissimilarity after registration. The forest is trained and tested using manually annotated corresponding points between pairs of chest CT scans in two experiments: SPREAD (trained and tested on SPREAD) and inter-database (including three databases SPREAD, DIR-Lab-4DCT and DIR-Lab-COPDgene). The results show that the mean absolute errors of regression are 1.07 ± 1.86 and 1.76 ± 2.59 mm for the SPREAD and inter-database experiment, respectively. The overall accuracy of classification in three classes (correct, poor and wrong registration) is 90.7% and 75.4%, for SPREAD and inter-database respectively. The good performance of the proposed method enables important applications such as automatic quality control in large-scale image analysis.
@article{Sokooti:2019,
author = {Sokooti, Hessam and Saygili, Gorkem and Glocker, Ben and Lelieveldt, Boudewijn P.F. and Staring, Marius},
title = {Quantitative Error Prediction of Medical Image Registration using Regression Forests},
journal = {Medical Image Analysis},
volume = {56},
number = {8},
pages = {110 -- 121},
month = aug,
year = {2019},
}
Background and purpose: A GTV boost is suggested to result in higher complete response rates in rectal cancer patients, which is attractive for organ preservation. Fiducials may offer GTV position verification on (CB)CT, if the fiducial-GTV spatial relationship can be accurately defined on MRI. The study aim was to evaluate the MRI visibility of fiducials inserted in the rectum.
Materials and methods: We tested four fiducial types (two Visicoil types, Cook and Gold Anchor), inserted in five patients each. Four observers identified fiducial locations on two MRI exams per patient in two scenarios: without (scenario A) and with (scenario B) (CB)CT available. A fiducial was defined to be consistently identified if 3 out of 4 observers labeled that fiducial at the same position on MRI. Fiducial visibility was scored on an axial and sagittal T2-TSE sequence and a T1 3D GRE sequence.
Results: Fiducial identification was poor in scenario A for all fiducial types. The Visicoil 0.75 and Gold Anchor were the most consistently identified fiducials in scenario B with 7 out of 9 and 8 out of 11 consistently identified fiducials in the first MRI exam and 2 out of 7 and 5 out of 10 in the second MRI exam, respectively. The consistently identified Visicoil 0.75 and Gold Anchor fiducials were best visible on the T1 3D GRE sequence.
Conclusion: The Visicoil 0.75 and Gold Anchor fiducials were the most visible fiducials on MRI as they were most consistently identified. The use of a registered (CB)CT and a T1 3D GRE MRI sequence is recommended.
@article{vandenEnde:2019b,
author = {van den Ende, R.P.J. and Rigter, L.S. and Kerkhof, E.M. and van Persijn van Meerten, E.L. and Rijkmans, E.C. and Lambregts, D.M.J. and van Triest, B. and van Leerdam, M.E. and Staring, M. and Marijnen, C.A.M. and van der Heide, U.A.},
title = {MRI visibility of gold fiducial markers for image-guided radiotherapy of rectal cancer},
journal = {Radiotherapy & Oncology},
volume = {132},
number = {3},
pages = {93 -- 99},
month = mar,
year = {2019},
}
Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far, training of ConvNets for registration has been supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby to increase convenience of training ConvNets for image registration, we propose the Deep Learning Image Registration (DLIR) framework for unsupervised affine and deformable image registration. In the DLIR framework ConvNets are trained for image registration by exploiting image similarity analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can be used to register pairs of unseen images in one shot. We propose flexible ConvNet designs for affine image registration and for deformable image registration. By stacking multiple of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. We show, for registration of cardiac cine MRI and registration of chest CT, that the performance of the DLIR framework is comparable to conventional image registration while being several orders of magnitude faster.
@article{DeVos:2019,
author = {De Vos, Bob and Berendsen, Floris F. and Viergever, Max A. and Sokooti, Hessam and Staring, Marius and I{\v{s}}gum, Ivana},
title = {A Deep Learning Framework for Unsupervised Affine and Deformable Image Registration},
journal = {Medical Image Analysis},
volume = {52},
number = {2},
pages = {128 -- 143},
month = feb,
year = {2019},
}
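The DLIR framework above trains by exploiting image similarity rather than example registrations. Normalized cross correlation (NCC) is one common intensity-based similarity that can serve as such an unsupervised loss; the sketch below computes it on flat lists of intensities in pure Python (no deep-learning framework), and the toy image values are assumptions.

```python
# Sketch: normalized cross correlation as an unsupervised similarity loss.
# The toy intensity lists are illustrative assumptions.
import math

def ncc(a, b):
    """Normalized cross correlation between two equal-length intensity lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

fixed  = [10.0, 20.0, 30.0, 40.0]
moving = [11.0, 19.0, 31.0, 39.0]   # warped image, close to fixed
loss = 1.0 - ncc(fixed, moving)     # dissimilarity a network could minimize
```

In an unsupervised registration setup, such a loss would be evaluated on the warped moving image against the fixed image and backpropagated through the transformation, with no ground-truth deformations needed.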
In recent years, machine learning approaches have been successfully applied to the field of neuroimaging for classification and regression tasks. However, many approaches do not give an intuitive relation between the raw features and the diagnosis. Therefore, they are difficult for clinicians to interpret. Moreover, most approaches treat the features extracted from the brain (for example, voxelwise gray matter concentration maps from brain MRI) as independent variables and ignore their spatial and anatomical relations. In this paper, we present a new Support Vector Machine (SVM)-based learning method for the classification of Alzheimer’s disease (AD), which integrates spatial-anatomical information. In this way, spatial-neighbor features in the same anatomical region are encouraged to have similar weights in the SVM model. Secondly, to make the learned model more interpretable, we introduce a group lasso penalty to induce structure sparsity, which may help clinicians to assess the key regions involved in the disease. For solving this learning problem, we use an accelerated proximal gradient descent approach. We tested our method on the subset of ADNI data selected by Cuingnet et al. (2011) for Alzheimer’s disease classification, as well as on an independent larger dataset from ADNI. Good classification performance is obtained for distinguishing cognitive normals (CN) vs. AD, as well as for distinguishing between various sub-types (e.g. CN vs. Mild Cognitive Impairment). The model trained on Cuingnet’s dataset for AD vs. CN classification was applied directly, without re-training, to the independent larger dataset. Good performance was achieved, demonstrating the generalizability of the proposed methods. For all experiments, the classification results are comparable to or better than the state-of-the-art, while the weight map more clearly indicates the key regions related to Alzheimer’s disease.
@article{Sun:2018,
author = {Sun, Zhuo and Qiao, Yuchuan and Lelieveldt, Boudewijn P.F. and Staring, Marius},
title = {Integrating Spatial-Anatomical Regularization and Structure Sparsity into SVM: Improving Interpretation of Alzheimer's Disease Classification},
journal = {NeuroImage},
volume = {178},
pages = {445 -- 460},
month = sep,
year = {2018},
}
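The group lasso penalty described in the abstract above is typically handled inside proximal gradient descent via block soft-thresholding of the per-group weights. The sketch below shows that proximal operator on hand-picked weight groups; the groups, weights and λ are illustrative assumptions, not the paper's trained SVM model.

```python
# Sketch: proximal operator of the group lasso penalty
# (block soft-thresholding). Groups, weights and lam are assumptions.
import math

def prox_group_lasso(groups, lam):
    """Shrink each group's L2 norm by lam; groups with norm below lam
    are zeroed entirely, inducing the structure sparsity in the abstract."""
    out = []
    for g in groups:
        norm = math.sqrt(sum(w * w for w in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out.append([scale * w for w in g])
    return out

weights = [[3.0, 4.0],     # norm 5.0: shrunk but kept
           [0.3, 0.4]]     # norm 0.5: zeroed entirely
shrunk = prox_group_lasso(weights, lam=1.0)
```

Inside an accelerated proximal gradient loop, this operator would be applied after each gradient step on the smooth SVM loss, so that whole anatomical regions drop out of the model rather than scattered individual voxels.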
Objectives: Balloon pulmonary angioplasty (BPA) in patients with inoperable chronic thromboembolic pulmonary hypertension (CTEPH) can have variable outcomes. To gain more insight into this variation, we designed a method for visualizing and quantifying changes in pulmonary perfusion by automatically comparing computed tomography (CT) pulmonary angiography before and after BPA treatment. We validated these quantifications of perfusion changes against hemodynamic changes measured with right-sided heart catheterization.
Materials and Methods: We studied 14 consecutive CTEPH patients (12 women; age, 70.5 ± 24), who underwent CT pulmonary angiography and right-sided heart catheterization, before and after BPA. Posttreatment images were registered to pretreatment CT scans (using the Elastix toolbox) to obtain corresponding locations. Pulmonary vascular trees and their centerlines were detected using a graph cuts method and a distance transform method, respectively. Areas distal from vessels were defined as pulmonary parenchyma. Subsequently, the density changes within the vascular centerlines and parenchymal areas were calculated and corrected for inspiration level differences. For visualization, the densitometric changes were displayed in color-coded overlays. For quantification, the median and interquartile range of the density changes in the vascular and parenchymal areas (ΔVD and ΔPD) were calculated. The recorded changes in hemodynamic parameters, including changes in systolic, diastolic, mean pulmonary artery pressure (ΔsPAP, ΔdPAP and ΔmPAP, respectively) and vascular resistance (ΔPVR), were used as reference assessments of the treatment effect. Spearman correlation coefficients were employed to investigate the correlations between changes in perfusion and hemodynamic changes.
Results: Comparative imaging maps showed distinct patterns in perfusion changes among patients. Within pulmonary vessels, the interquartile range of ΔVD correlated significantly with ΔsPAP (R=0.58, p=0.03), ΔdPAP (R=0.71, p=0.005), ΔmPAP (R=0.71, p=0.005), and ΔPVR (R=0.77, p=0.001). In the parenchyma, the median of ΔPD had significant correlations with ΔdPAP (R=0.58, p=0.030) and ΔmPAP (R=0.59, p=0.025).
Conclusions: Comparative imaging analysis in CTEPH patients offers insight into differences in BPA treatment effect. Quantification of perfusion changes provides noninvasive measures that reflect hemodynamic changes.
@article{Zhai:2018,
author = {Zhai, Zhiwei and Ota, Hideki and Staring, Marius and Stolk, Jan and Sugimura, Koichiro and Takase, Kei and Stoel, Berend C.},
title = {Treatment Effect of Balloon Pulmonary Angioplasty in Chronic Thromboembolic Pulmonary Hypertension Quantified by Automatic Comparative Imaging in Computed Tomography Pulmonary Angiography},
journal = {Investigative Radiology},
volume = {53},
number = {5},
pages = {286 -- 292},
month = may,
year = {2018},
}
With an increasing number of large-scale population-based cardiac magnetic resonance (CMR) imaging studies being conducted nowadays, there comes the mammoth task of image annotation and image analysis. Such population-based studies would greatly benefit from automated pipelines, with an efficient CMR image analysis workflow. The purpose of this work is to investigate the feasibility of using a fully-automatic pipeline to segment the left ventricular endocardium and epicardium simultaneously on two orthogonal (vertical and horizontal) long-axis cardiac cine MRI scans. The pipeline is based on a multi-atlas-based segmentation approach and a spatio-temporal registration approach. The performance of the method was assessed by: (i) comparing the automatic segmentations to those obtained manually at both the end-diastolic and end-systolic phase, (ii) comparing the automatically obtained clinical parameters, including end-diastolic volume, end-systolic volume, stroke volume and ejection fraction, with those defined manually and (iii) by the accuracy of classifying subjects to the appropriate risk category based on the estimated ejection fraction. Automatic segmentation of the left ventricular endocardium was achieved with a Dice similarity coefficient (DSC) of 0.93 on the end-diastolic phase for both the vertical and horizontal long-axis scan; on the end-systolic phase the DSC was 0.88 and 0.85, respectively. For the epicardium, a DSC of 0.94 and 0.95 was obtained on the end-diastolic vertical and horizontal long-axis scans; on the end-systolic phase the DSC was 0.90 and 0.88, respectively. With respect to the clinical volumetric parameters, Pearson correlation coefficient (R) of 0.97 was obtained for the end-diastolic volume, 0.95 for end-systolic volume, 0.87 for stroke volume and 0.84 for ejection fraction. 
Risk category classification based on ejection fraction showed that 80% of the subjects were assigned to the correct risk category and only one subject (< 1%) was more than one risk category off. We conclude that the proposed automatic pipeline presents a viable and cost-effective alternative for manual annotation.
@article{Shahzad:2017,
author = {Shahzad, Rahil and Tao, Qian and Dzyubachyk, Oleh and Staring, Marius and Lelieveldt, Boudewijn P.F. and van der Geest, Rob J.},
title = {Fully-Automatic Left Ventricular Segmentation from Long-Axis Cardiac Cine MR Scans},
journal = {Medical Image Analysis},
volume = {39},
pages = {44 -- 55},
month = jul,
year = {2017},
}
Mild Cognitive Impairment (MCI) is an intermediate stage between healthy and Alzheimer’s disease (AD). To enable early intervention, it is important to identify the MCI subjects that will convert to AD in an early stage. In this paper, we provide a new method to distinguish between MCI patients that either convert to Alzheimer’s Disease (MCIc) or remain stable (MCIs), using only longitudinal T1-weighted MRI. Currently, most longitudinal studies focus on volumetric comparison of a few anatomical structures, thereby ignoring more detailed development inside and outside those structures. In this study we propose to exploit the anatomical development within the entire brain, as found by a non-rigid registration approach. Specifically, this anatomical development is represented by the stationary velocity field (SVF) from registration between the baseline and follow-up images. To make the SVFs comparable among subjects, we use the parallel transport method to align them in a common space. The normalized SVF together with derived features are then used to distinguish between MCIc and MCIs subjects. This novel feature space is reduced using a Kernel Principal Component Analysis method, and a linear support vector machine is used as a classifier. Extensive comparative experiments are performed to inspect the influence of several aspects of our method on classification performance, specifically the feature choice, the smoothing parameter in the registration and the use of dimensionality reduction. The optimal result from a 10-fold cross-validation using 36-month follow-up data shows competitive results: accuracy 92%, sensitivity 95%, specificity 90%, and AUC 94%. Based on the same dataset, the proposed approach outperforms two alternative ones that either depend on the baseline image only, or use longitudinal information from larger brain areas. Good results were also obtained when scans at 6, 12 or 24 months were used for training the classifier.
Besides the classification power, the proposed method can quantitatively compare brain regions that have a significant difference in development between the MCIc and MCIs groups.
@article{Sun:2017,
author = {Sun, Zhuo and van der Giessen, Martijn and Lelieveldt, Boudewijn P.F. and Staring, Marius},
title = {Detection of conversion from mild cognitive impairment to Alzheimer's disease using longitudinal brain MRI},
journal = {Frontiers in Neuroinformatics},
volume = {11},
pages = {16},
month = feb,
year = {2017},
}
Purpose. To develop and validate a method for performing inter-station intensity standardization in multi-spectral whole-body MR data.
Methods. Different approaches for mapping the intensity of each acquired image stack into the reference intensity space were developed and validated. The registration strategies included: "direct" registration to the reference station (Strategy 1), "progressive" registration to the neighbouring stations without (Strategy 2) and with (Strategy 3) using information from the overlap regions of the neighbouring stations. For Strategy 3, two regularized modifications were proposed and validated. All methods were tested on two multi-spectral whole-body MR data sets: a multiple myeloma patients data set (48 subjects) and a whole-body MR angiography data set (33 subjects).
Results. For both data sets, all strategies showed significant improvement of intensity homogeneity with respect to the vast majority of the validation measures (p < 0.005). Strategy 1 exhibited the best performance, closely followed by Strategy 2. Strategy 3 and its modifications performed worse, in the majority of cases significantly so (p < 0.05).
Conclusions. We propose several strategies for performing inter-station intensity standardization in multi-spectral whole-body MR data. All the strategies were successfully applied to two types of whole-body MR data, and the "direct" registration strategy was concluded to perform the best.
@article{Dzyubachyk:2017,
author = {Dzyubachyk, Oleh and Staring, Marius and Reijnierse, Monique and Lelieveldt, Boudewijn P.F. and van der Geest, Rob J.},
title = {Inter-Station Intensity Standardization for Whole-Body MR Data},
journal = {Magnetic Resonance in Medicine},
volume = {77},
number = {1},
pages = {422 -- 433},
month = jan,
year = {2017},
}
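The strategies above map each station's intensities into a reference intensity space. One simple way to build such a mapping is a linear transform that matches the mean and standard deviation of corresponding regions between two stations; this is only a minimal sketch of the idea, and the function name, toy intensities and mean/std matching rule are assumptions, not the paper's registration-based standardization.

```python
# Sketch: linear inter-station intensity mapping estimated from
# corresponding regions. The mean/std matching rule and the toy
# intensity values are illustrative assumptions.
import statistics

def linear_intensity_map(stack, overlap_src, overlap_ref):
    """Map stack intensities so that the source region's statistics
    match those of the reference region: v -> a*v + b."""
    a = statistics.pstdev(overlap_ref) / statistics.pstdev(overlap_src)
    b = statistics.mean(overlap_ref) - a * statistics.mean(overlap_src)
    return [a * v + b for v in stack]

src_overlap = [100.0, 110.0, 120.0]   # source station, shared region
ref_overlap = [200.0, 220.0, 240.0]   # reference station, same region
mapped = linear_intensity_map([100.0, 130.0], src_overlap, ref_overlap)
```

After mapping, intensities from the source station fall in the reference station's range, which is the homogeneity that the validation measures in the abstract quantify.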
A retrospective view on the past two decades of the field of medical image registration is presented, guided by the article "A survey of medical image registration" (Maintz and Viergever, 1998). It shows that the classification of the field introduced in that article is still usable, although some modifications to do justice to advances in the field would be due. The main changes over the last twenty years are the shift from extrinsic to intrinsic registration, the primacy of intensity-based registration, the breakthrough of nonlinear registration, the progress of inter-subject registration, and the availability of generic image registration software packages. Two problems that were called urgent already 20 years ago, are even more urgent nowadays: Validation of registration methods, and translation of results of image registration research to clinical practice. It may be concluded that the field of medical image registration has evolved, but still is in need of further development in various aspects.
@article{Viergever:2016,
author = {Viergever, Max A. and Maintz, J.B. Antoine and Klein, Stefan and Murphy, Keelin and Staring, Marius and Pluim, Josien P.W.},
title = {A survey of medical image registration - under review},
journal = {Medical Image Analysis},
volume = {33},
pages = {140 -- 144},
month = oct,
year = {2016},
}
Pulmonary fissures are important landmarks for recognition of lung anatomy. In CT images, automatic detection of fissures is complicated by factors like intensity variability, pathological deformation and imaging noise. To circumvent this problem, we propose a derivative of stick (DoS) filter for fissure enhancement and a post-processing pipeline for subsequent segmentation. Considering a typical thin curvilinear shape of fissure profiles inside 2D cross-sections, the DoS filter is presented by first defining nonlinear derivatives along a triple stick kernel in varying directions. Then, to accommodate pathological abnormality and orientational deviation, a max-min cascading and multiple plane integration scheme is adopted to form a shape-tuned likelihood for 3D surface patches discrimination. During the post-processing stage, our main contribution is to isolate the fissure patches from adhering clutters by introducing a branch-point removal algorithm, and a multi-threshold merging framework is employed to compensate for local intensity inhomogeneity. The performance of our method was validated in experiments with two clinical CT data sets including 55 publicly available LOLA11 scans as well as separate left and right lung images from 23 GLUCOLD scans of COPD patients. Compared with manually delineating interlobar boundary references, our method obtained a high segmentation accuracy with median F1-scores of 0.833, 0.885, and 0.856 for the LOLA11, left and right lung images respectively, whereas the corresponding indices for a conventional Wiemker filtering method were 0.687, 0.853, and 0.841. The good performance of our proposed method was also verified by visual inspection and demonstration on abnormal and pathological cases, where typical deformations were robustly detected together with normal fissures.
@article{Xiao:2016,
author = {Xiao, Changyan and Stoel, Berend C. and Bakker, M. Els and Peng, Yuanyuan and Stolk, Jan and Staring, Marius},
title = {Pulmonary Fissure Detection in CT Images Using a Derivative of Stick Filter},
journal = {IEEE Transactions on Medical Imaging},
volume = {35},
number = {6},
pages = {1488 -- 1500},
month = jun,
year = {2016},
}
In this paper, we propose a novel method to estimate the confidence of a registration that does not require any ground truth, is independent of the registration algorithm, and yields a confidence that is correlated with the amount of registration error. We first apply a local search to match patterns between the registered image pairs. The local search induces a cost space per voxel, which we explore further to estimate the confidence of the registration, similar to confidence estimation algorithms for stereo matching. We test our method on both synthetically generated registration errors and on real registrations with ground truth. The experimental results show that our confidence measure can estimate registration errors and that it is correlated with local errors.
@article{Saygili:2016,
author = {Saygili, Gorkem and Staring, Marius and Hendriks, Emile A.},
title = {Confidence Estimation for Medical Image Registration Based On Stereo Confidences},
journal = {IEEE Transactions on Medical Imaging},
volume = {35},
number = {2},
pages = {539 -- 549},
month = feb,
year = {2016},
}
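The abstract above explores a per-voxel cost space from a local search, in the spirit of stereo confidence measures. One such measure rates a match as confident when its best cost is clearly below the runner-up; the sketch below shows that idea on hand-made cost curves. The particular measure (one minus the best/second-best cost ratio) and the toy cost values are illustrative assumptions.

```python
# Sketch: stereo-style confidence from a local-search cost curve.
# The measure and the toy cost values are illustrative assumptions.

def confidence(costs):
    """Higher when the best match cost clearly beats the runner-up."""
    ordered = sorted(costs)
    best, second = ordered[0], ordered[1]
    return 1.0 - best / second if second > 0 else 0.0

peaked = [0.10, 0.90, 0.80, 0.95]    # distinct minimum: confident match
flat   = [0.50, 0.52, 0.51, 0.53]    # ambiguous cost space: low confidence
```

A peaked cost curve yields a high confidence and a flat one a low confidence, matching the intuition that ambiguous local matches indicate possible registration error.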
Fast automatic image registration is an important prerequisite for image guided clinical procedures. However, due to the large number of voxels in an image and the complexity of registration algorithms, this process is often very slow. Among many classical optimization strategies, stochastic gradient descent is a powerful method to iteratively solve the registration problem. This procedure relies on a proper selection of the optimization step size, which is important for the optimization procedure to converge. This step size selection is difficult to perform manually, since it depends on the input data, similarity measure and transformation model. The Adaptive Stochastic Gradient Descent (ASGD) method has been proposed to automatically choose the step size, but it comes at a high computational cost, dependent on the number of transformation parameters.
In this paper, we propose a new computationally efficient method (fast ASGD) to automatically determine the step size for gradient descent methods, by considering the observed distribution of the voxel displacements between iterations. A relation between the step size and the expectation and variance of the observed distribution is derived. While ASGD has quadratic complexity with respect to the transformation parameters, the fast ASGD method only has linear complexity. Extensive validation has been performed on different datasets with different modalities, inter/intra subjects, different similarity measures and transformation models. To perform a large scale experiment on 3D MR brain data, we have developed efficient and reusable tools to exploit an international high performance computing facility. For all experiments, we obtained similar accuracy as ASGD. Moreover, the estimation time of the fast ASGD method is reduced to a very small value, from 40 seconds to less than 1 second when the number of parameters is 10^5, almost 40 times faster. Depending on the registration settings, the total registration time is reduced by a factor of 2.5-7x for the experiments in this paper.
@article{Qiao:2016,
author = {Qiao, Yuchuan and van Lew, Baldur and Lelieveldt, Boudewijn P.F. and Staring, Marius},
title = {Fast Automatic Step Size Estimation for Gradient Descent Optimization of Image Registration},
journal = {IEEE Transactions on Medical Imaging},
volume = {35},
number = {2},
pages = {391 -- 403},
month = feb,
year = {2016},
}
With the wide access to studies of selected gene expressions in transgenic animals, mice have become the dominant species as cerebral disease models. Many of these studies are performed on animals no more than eight weeks old, declared as adult animals. Based on earlier reports that full brain maturation requires at least three months in rats, there is a clear need to discern the corresponding minimal animal age that provides an "adult brain" in mice, in order to avoid modulation of disease progression/therapy studies by ongoing developmental changes. For this purpose, we have studied anatomical brain alterations of mice during their first six months of age. Using T2-weighted and diffusion-weighted MRI, structural and volume changes of the brain were identified and compared with histological analysis of myelination. Mouse brain volume was found to be almost stable already at three weeks, but cortex thickness kept decreasing continuously, with maximal changes during the first three months. Myelination is still increasing between three and six months, although the most dramatic changes are over by three months. While our results emphasize that mice should be at least three months old when adult animals are needed for brain studies, the preferred choice of one particular metric for a given investigation goal will result in somewhat varying age windows of stabilization.
@article{Hammelrath:2016,
author = {Hammelrath, Luam and {\v{S}}koki{\'c}, Sini{\v{s}}a and Khmelinskii, Artem and Hess, Andreas and van der Knaap, Noortje and Staring, Marius and Lelieveldt, Boudewijn P.F. and Wiedermann, Dirk and Hoehn, Mathias},
title = {Morphological maturation of the mouse brain: An in vivo MRI and histology investigation},
journal = {NeuroImage},
volume = {125},
number = {15},
pages = {144 -- 152},
month = jan,
year = {2016},
}
In this work, we present a fully automated algorithm for extraction of the 3D arterial tree and labelling of the tree segments from whole-body magnetic resonance angiography (WB-MRA) sequences. The algorithm consists of two core parts: (i) 3D volume reconstruction from different stations with simultaneous correction of different types of intensity inhomogeneity, and (ii) extraction of the arterial tree and subsequent labelling of the pruned extracted tree. Extraction of the arterial tree is performed using the probability map of the "contrast" class, which is obtained as one of the results of the inhomogeneity correction scheme. We demonstrate that such an approach is more robust than using the difference between the pre- and post-contrast channels traditionally used for this purpose. Labelling of the extracted tree is performed using a combination of graph-based and atlas-based approaches. Validation of our method with respect to the extracted tree was performed on the arterial tree subdivided into 32 segments, of which 82.4% were completely detected, 11.7% partially detected, and 5.9% missed, on a cohort of 35 subjects. With respect to automated labelling accuracy of the 32 segments, various registration strategies were investigated on a training set consisting of 10 scans. Further analysis on the test set consisting of 25 data sets indicates that 69% of the vessel centerline tree in the head and neck region, 80% in the thorax and abdomen region, and 84% in the legs was accurately labelled to the correct vessel segment. These results indicate the clinical potential of our approach in enabling fully automated and accurate analysis of the entire arterial tree. This is the first study that not only automatically extracts the WB-MRA arterial tree, but also labels the vessel tree segments.
@article{Shahzad:2015,
author = {Shahzad, Rahil and Dzyubachyk, Oleh and Staring, Marius and Kullberg, Joel and Johansson, Lars and Ahlstr{\"o}m, H{\r{a}}kan and Lelieveldt, Boudewijn P. F. and van der Geest, Rob J.},
title = {Automated Extraction and Labelling of the Arterial Tree from Whole-Body MRA Data},
journal = {Medical Image Analysis},
volume = {24},
number = {1},
pages = {28 -- 40},
month = aug,
year = {2015},
}
Introduction. Interstitial lung disease occurs frequently in patients with systemic sclerosis (SSc). Quantitative computed tomography (CT) densitometry using the percentile density method may provide a sensitive assessment of lung structure for monitoring parenchymal damage. Therefore, we aimed to evaluate the optimal percentile density score in SSc by quantitative CT densitometry, against pulmonary function.
Material and Methods. We investigated 41 SSc patients by chest CT scan, spirometry and gas transfer tests. Lung volumes and the nth percentile density (between 1 and 99%) of the entire lungs were calculated from CT histograms. The nth percentile density is defined as the threshold density, in Hounsfield units, below which n% of the lung voxels are distributed. A prerequisite for an optimal percentile was its correlation with baseline DLCO%predicted. Two patients showed distinct changes in lung function 2 years after baseline. We obtained follow-up CT scans from these patients and performed progression analysis.
Results. Regression analysis for the relation between DLCO%predicted and the nth percentile density was optimal at 85% (Perc85). There was significant agreement between Perc85 and DLCO%predicted (R = -0.49, P = 0.001) and FVC%predicted (R = -0.64, P < 0.001). Two patients showed a marked change in Perc85 over a two year period, but the localisation of change differed clearly.
Conclusions. We identified Perc85 as the optimal lung density parameter, which correlated significantly with DLCO and FVC, confirming a lung parenchymal structure-function relation in SSc. This provides support for future studies to determine whether structural changes precede lung function decline.
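The percentile density computation described in the Methods can be sketched as follows (nearest-rank percentile; the function name and the toy voxel data are illustrative, not the study's implementation):

```python
import random

def percentile_density(hu_values, n=85):
    """HU threshold below which n% of the voxel densities fall (nearest-rank).
    Illustrative sketch of the percentile density method; not the paper's code."""
    ordered = sorted(hu_values)
    rank = max(0, min(len(ordered) - 1, round(n / 100 * len(ordered)) - 1))
    return ordered[rank]

# toy "lung": mostly air-like voxels around -850 HU
rng = random.Random(0)
voxels = [rng.gauss(-850, 60) for _ in range(10_000)]
perc85 = percentile_density(voxels, 85)  # denser (fibrotic) tissue raises this value
```

As parenchymal damage adds denser tissue to the histogram, the 85th-percentile threshold shifts upward, which is why it can track structural change.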
@article{Ninaber:2015,
author = {Ninaber, Maarten K. and Stolk, Jan and Smit, Jasper and Le Roy, Ernest J. and Kroft, Lucia J.M. and Bakker, M.E. and de Vries Bouwstra, Jeska K. and Schouffoer, Anna A. and Staring, Marius and Stoel, Berend C.},
title = {Lung Structure And Function Relation In Systemic Sclerosis: Application Of Lung Densitometry},
journal = {European Journal of Radiology},
volume = {84},
number = {5},
pages = {975 -- 979},
month = may,
year = {2015},
}
The Alberta Stroke Program Early CT Score (ASPECTS) scoring method is frequently used for quantifying early ischemic changes (EICs) in patients with acute ischemic stroke in clinical studies. However, reported interobserver agreement varies and is often limited. Therefore, our goal was to develop and evaluate an automated brain densitometric method. It divides CT scans of the brain into ASPECTS regions using atlas-based segmentation. EICs are quantified by comparing the brain density between contralateral sides. This method was optimized and validated using CT data from 10 and 63 patients, respectively. The automated method was validated against manual ASPECTS, stroke severity at baseline and clinical outcome after 7 to 10 days (NIH Stroke Scale, NIHSS) and 3 months (modified Rankin Scale, mRS). Manual and automated ASPECTS showed similar and statistically significant correlations with baseline NIHSS (R=-0.399 and -0.277, respectively) and with follow-up mRS (R=-0.256 and -0.272), but not with follow-up NIHSS. Agreement between automated and consensus ASPECTS reading was similar to the interobserver agreement of manual ASPECTS (differences <1 point in 73% of cases). The automated ASPECTS method could therefore be used as a supplementary tool to assist manual scoring.
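The contralateral density comparison mentioned above can be illustrated with a one-line measure (a hypothetical sketch; the actual method aggregates densities per ASPECTS region):

```python
# Hypothetical sketch of a contralateral comparison: relative density
# difference between the mean HU of an ASPECTS region and its mirrored
# counterpart. A clearly negative value suggests an early ischemic change
# (hypodensity) on that side. Not the paper's exact formulation.
def contralateral_density_ratio(region_hu_mean, mirrored_hu_mean):
    avg = (region_hu_mean + mirrored_hu_mean) / 2.0
    return (region_hu_mean - mirrored_hu_mean) / avg
```

Comparing a region against its own mirror image makes the measure robust to patient-to-patient differences in absolute brain density.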
@article{Stoel:2015,
author = {Stoel, Berend C. and Marquering, Henk A. and Staring, Marius and Beenen, Ludo F. and Slump, Cornelis H. and Roos, Yvo B. and Majoie, Charles B.},
title = {Automated brain computed tomographic densitometry of early ischemic changes in acute stroke},
journal = {Journal of Medical Imaging},
volume = {2},
number = {1},
pages = {014004},
month = mar,
year = {2015},
}
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer-aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time-consuming, any real-world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at the International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.
@article{Rudyanto:2014,
author = {Rudyanto, Rina D. and Kerkstra, Sjoerd and van Rikxoort, Eva M. and Fetita, Catalin and Brillet, Pierre-Yves and Lefevre, Christophe and Xue, Wenzhe and Zhu, Xiangjun and Liang, Jianming and {\"O}ks{\"u}z, Ilkay and {\"U}nay, Devrim and Kadipasaoglu, Kamuran and Est{\'e}par, Ra{\'u}l San Jos{\'e} and Ross, James C. and Washko, George R. and Prieto, Juan-Carlos and Hoyos, Marcela Hern{\'a}ndez and Orkisz, Maciej and Meine, Hans and H{\"u}llebrand, Markus and St{\"o}cker, Christina and Mir, Fernando Lopez and Naranjo, Valery and Villanueva, Eliseo and Staring, Marius and Xiao, Changyan and Stoel, Berend C. and Fabijanska, Anna and Smistad, Erik and Elster, Anne C. and Lindseth, Frank and Foruzan, Amir Hossein and Kiros, Ryan and Popuri, Karteek and Cobzas, Dana and Jimenez-Carretero, Daniel and Santos, Andres and Ledesma-Carbayo, Maria J. and Helmberger, Michael and Urschler, Martin and Pienn, Michael and Bosboom, Dennis G. H. and Campo, Arantza and Prokop, Mathias and de Jong, Pim A. and Ortiz-de-Solorzano, Carlos and Mu{\~n}oz-Barrutia, Arrate and van Ginneken, Bram},
title = {Comparing algorithms for automated vessel segmentation in Computed Tomography scans of the lung: The VESSEL12 study},
journal = {Medical Image Analysis},
volume = {18},
number = {7},
pages = {1217 -- 1232},
month = oct,
year = {2014},
}
Purpose: Whole lung densitometry on chest CT images is an accepted method for measuring tissue destruction in patients with pulmonary emphysema in clinical trials. Progression measurement is required for evaluation of change in health condition and the effect of drug treatment. Information about the location of emphysema progression within the lung may be important for the correct interpretation of drug efficacy, or for determining a treatment plan. The purpose of this study is therefore to develop and validate methods that enable the local measurement of lung density changes, which requires proper modeling of the effect of respiration on density.
Methods: Four methods, all based on registration of baseline and follow-up chest CT scans, are compared. The first naive method subtracts registered images. The second employs the so-called dry sponge model, where volume correction is performed using the determinant of the Jacobian of the transformation. The third and the fourth introduce a novel adaptation of the dry sponge model that circumvents its constant-mass assumption, which is shown to be invalid. The latter two methods require a third CT scan at a different inspiration level to estimate the patient-specific density-volume slope, where one method employs a global and the other a local slope. The methods were validated on CT scans of a phantom mimicking the lung, where mass and volume could be controlled. In addition, validation was performed on data of 21 patients with pulmonary emphysema.
Results: The image registration method was optimized, leaving a registration error below half the slice increment (median 1.0 mm). The phantom study showed that the locally adapted slope model most accurately measured local progression. The systematic error in estimating progression, as measured on the phantom data, was below 2 g/l for a 70 ml (6%) volume difference, and 5 g/l for a 210 ml (19%) difference, if volume correction was applied. On the patient data an underlying linearity assumption relating lung volume change with density change was shown to hold (fit R^2 = 0.94), and globalized versions of the local models are consistent with global results (R^2 of 0.865 and 0.882 for the two adapted slope models, respectively).
Conclusions: In conclusion, image matching and subsequent analysis of differences according to the proposed lung models i) has good local registration accuracy on patient data, ii) effectively eliminates a dependency on inspiration level at acquisition time, iii) accurately predicts progression in phantom data, and iv) is reasonably consistent with global results in patient data. It is therefore a potential future tool for assessing local emphysema progression in drug evaluation trials and in clinical practice.
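The dry sponge correction compared in the Methods can be reduced to a minimal sketch (assuming HU + 1000 as a rough proxy for density in g/l; the names and this proxy are illustrative, not the paper's implementation):

```python
# Minimal sketch of the "dry sponge" volume correction: under the model's
# constant-mass assumption, local density scales inversely with local volume,
# so the registered follow-up density is multiplied by the determinant of the
# Jacobian of the transformation before subtraction. HU + 1000 serves as a
# rough proxy for density in g/l (air ~ 0, water ~ 1000); illustrative only.
def sponge_corrected_change(baseline_hu, followup_hu, jacobian_det):
    rho_baseline = baseline_hu + 1000.0
    rho_followup = (followup_hu + 1000.0) * jacobian_det  # volume correction
    return rho_followup - rho_baseline  # > 0: densification, < 0: tissue loss
```

Under this model, a region that doubles in volume (det J = 2) while its density halves is reported as unchanged, since its mass is conserved; the adapted slope models described above circumvent exactly this constant-mass assumption.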
@article{Staring:2014,
author = {Staring, Marius and Bakker, M.E. and Stolk, Jan and Shamonin, Denis P. and Reiber, Johan H.C. and Stoel, Berend C.},
title = {Towards Local Progression Estimation of Pulmonary Emphysema using CT},
journal = {Medical Physics},
volume = {41},
number = {2},
pages = {021905-1 -- 021905-13},
month = feb,
year = {2014},
}
Nonrigid image registration is an important but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, e.g. for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial.
In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: i) parallelization on the CPU, to speed up the cost function derivative calculation; ii) parallelization on the GPU building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; iii) exploitation of certain properties of the B-spline transformation model; iv) further software optimizations.
The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer’s disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer’s Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration.
Parallelization and optimization resulted in an acceleration factor of 4-5x on an 8-core machine. Using OpenCL a speedup factor of ∼2 was realized for computation of the Gaussian pyramids, and 15-60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operator characteristic curve of 88% and 90%, respectively, both for standard and accelerated registration.
We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license.
@article{Shamonin:2014,
author = {Shamonin, Denis P and Bron, Esther E and Lelieveldt, Boudewijn P.F. and Smits, Marion and Klein, Stefan and Staring, Marius},
title = {Fast Parallel Image Registration on CPU and GPU for Diagnostic Classification of Alzheimer's Disease},
journal = {Frontiers in Neuroinformatics},
volume = {7},
number = {50},
pages = {1 -- 15},
month = jan,
year = {2014},
}
State-of-the-art fluoroscopic knee kinematic analysis methods require the patient-specific bone shapes segmented from CT or MRI. Substituting the patient-specific bone shapes with personalizable models, such as statistical shape models (SSMs), could eliminate the CT/MRI acquisitions, and thereby decrease costs and radiation dose (when eliminating CT). SSM-based kinematics, however, have not yet been evaluated on clinically relevant joint motion parameters.
Therefore, in this work the applicability of SSMs for computing knee kinematics from biplane fluoroscopic sequences was explored. Kinematic precision with an edge-based automated bone tracking method using SSMs was evaluated on 6 cadaver and 10 in-vivo fluoroscopic sequences. The SSMs of the femur and the tibia-fibula were created using 61 training datasets. Kinematic precision was determined for medial-lateral tibial shift, anterior-posterior tibial drawer, joint distraction-contraction, flexion, tibial rotation and adduction. The relationship between kinematic precision and bone shape accuracy was also investigated.
The SSM-based kinematics resulted in sub-millimeter (0.48-0.81 mm) and approximately one degree (0.69-0.99°) median precision on the cadaveric knees compared to bone-marker-based kinematics. The precision on the in-vivo datasets was comparable to the cadaveric sequences when evaluated with a semi-automatic reference method. These results are promising, though further work is necessary to reach the accuracy of CT-based kinematics. We also demonstrated that a better shape reconstruction accuracy does not automatically imply a better kinematic precision. This result suggests that the ability to accurately fit the edges in the fluoroscopic sequences has a larger role in determining the kinematic precision than the overall 3D shape accuracy.
@article{Baka:2014,
author = {Baka, Nora and Kaptein, Bart L. and Giphart, J. Erik and Staring, Marius and de Bruijne, Marleen and Lelieveldt, Boudewijn P.F. and Valstar, Edward},
title = {Evaluation of automated statistical shape model based knee kinematics from biplane fluoroscopy},
journal = {Journal of Biomechanics},
volume = {47},
number = {1},
pages = {122 -- 129},
month = jan,
year = {2014},
}
Longitudinal studies on brain pathology and assessment of therapeutic strategies rely on a fully mature adult brain to exclude confounds of cerebral developmental changes. Thus, knowledge about the onset of adulthood is indispensable for discriminating between the developmental phase and adulthood. We have performed a high-resolution longitudinal MRI study at 11.7T of male Wistar rats between 21 days and six months of age, characterizing cerebral volume changes and tissue-specific myelination as a function of age. Cortical thickness reaches its final value at one month, while the volume increases of cortex, striatum and whole brain end only after two months. Myelin accretion is pronounced until the end of the third postnatal month. After this time, continuing myelination increases in the cortex are still seen on histological analysis but are no longer reliably detectable with diffusion-weighted MRI due to parallel tissue restructuring processes. In conclusion, cerebral development continues over the first three months of age. This is of relevance for future studies on brain disease models, which should not start before the end of month 3 to exclude serious confounds of continuing tissue development.
@article{Mengler:2014,
author = {Mengler, Luam and Khmelinskii, Artem and Diedenhofen, Michael and Po, Chrystelle and Staring, Marius and Lelieveldt, Boudewijn P.F. and Hoehn, Mathias},
title = {Brain maturation of the adolescent rat cortex and striatum: changes in volume and myelination},
journal = {NeuroImage},
volume = {84},
number = {1},
pages = {35 -- 44},
month = jan,
year = {2014},
}
Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only non-deformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, we quantified patient motion during scanning to investigate the need for correction.
Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from fifty-five TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1W TFE, TOF, T2W TSE, and pre- and post-contrast T1W TSE. The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying through-plane and in-plane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and moving image after registration.
Results: The average required manual translation per image slice was 1.33 mm. Translations were larger the longer the patient had been inside the scanner. Manual alignment took 187.5 seconds per patient, resulting in a mean surface distance of 0.271 ± 0.127 mm. After minimal user interaction to generate the mask in the fixed image, the remaining sequences are automatically registered with a computation time of 52.0 seconds per patient. The optimal registration strategy used a circular mask with a diameter of 10 mm, a 3D B-spline transformation model with a control point spacing of 15 mm, mutual information as image similarity metric, and the pre-contrast T1W TSE as fixed image. A mean surface distance of 0.288 ± 0.128 mm was obtained with these settings, which is very close to the accuracy of the manual alignment procedure. The exact registration parameters and software were made publicly available.
Conclusions: An automated registration method was developed and optimized, only needing two mouse clicks to mark the start and end point of the artery. Validation on a large group of patients showed that automated image registration has similar accuracy as the manual alignment procedure, substantially reducing the amount of user interactions needed, and is multiple times faster. In conclusion, we believe that the proposed automated method can replace the current manual procedure, thereby reducing the time to analyze the images.
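The optimal settings reported above map naturally onto an elastix parameter file; the fragment below is an illustrative sketch under that assumption, not the authors' published configuration:

```
// Illustrative elastix parameter fragment matching the reported optimal
// settings (3D B-spline transform, 15 mm control point spacing, mutual
// information); not the authors' published parameter file.
(Registration "MultiResolutionRegistration")
(Transform "BSplineTransform")
(FinalGridSpacingInPhysicalUnits 15.0)
(Metric "AdvancedMattesMutualInformation")
(Optimizer "AdaptiveStochasticGradientDescent")
(NumberOfResolutions 3)
```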
@article{vantKlooster:2013,
author = {van 't Klooster, Ronald and Staring, Marius and Klein, Stefan and Kwee, R.M. and Kooi, M.E. and Reiber, J.H.C and Lelieveldt, Boudewijn P.F. and van der Geest, Rob J.},
title = {Automated registration of multispectral MR vessel wall images of the carotid artery},
journal = {Medical Physics},
volume = {40},
number = {12},
pages = {121904-1 -- 121904-12},
month = dec,
year = {2013},
}
Purpose Whole-body MRI is seeing increasing use in the study and diagnosis of disease progression. In this, a central task is the visual assessment of the progressive changes that occur between two whole-body MRI datasets, taken at baseline and follow-up. The current radiological workflow for this consists of manually searching for each organ of interest on both scans, usually in multiple data channels, for further visual comparison. The large size of the datasets, significant posture differences, and changes in patient anatomy turn manual matching into an extremely labour-intensive task that requires high concentration from radiologists for long periods of time. This strongly limits productivity and increases the risk of underdiagnosis.
Materials and Methods We present a novel approach to the comparative visual analysis of whole-body MRI follow-up data. Our method is based on interactive derivation of locally rigid transforms from a pre-computed whole-body deformable registration. Using this approach, baseline and follow-up slices can be interactively matched with a single mouse-click in the anatomical region of interest. In addition to the synchronized side-by-side baseline and matched follow-up slices, we have integrated four techniques to further facilitate the visual comparison of the two datasets: the "deformation sphere", the color fusion view, the magic lens, and a set of uncertainty iso-contours around the current region of interest.
Results We have applied our method to the study of cancerous bone lesions over time in patients with Kahler’s disease. During these studies, the radiologist carefully visually examines a large number of anatomical sites for changes. Our interactive locally rigid matching approach was found helpful in localization of cancerous lesions and visual assessment of changes between different scans. Furthermore, each of the features integrated in our software was separately evaluated by the experts.
Conclusions We demonstrated how our method significantly facilitates examination of whole-body MR datasets in follow-up studies by enabling the rapid interactive matching of regions of interest and by the explicit visualization of change.
@article{Dzyubachyk:2013,
author = {Dzyubachyk, Oleh and Blaas, Jorik and Botha, Charl P. and Staring, Marius and Reijnierse, Monique and Bloem, Johan L. and van der Geest, Rob J. and Lelieveldt, Boudewijn P.F.},
title = {Comparative Exploration of Whole-Body MR through Locally Rigid Transforms},
journal = {International Journal of Computer Assisted Radiology and Surgery},
volume = {8},
number = {4},
pages = {635 -- 647},
month = jul,
year = {2013},
}
The intensity or gray-level derivatives have been widely used in image segmentation and enhancement. Conventional derivative filters often suffer from an undesired merging of adjacent objects, due to their intrinsic use of an inappropriately broad Gaussian kernel; as a result, neighboring structures cannot be properly resolved. To avoid this problem, we propose to replace the low-level Gaussian kernel with a bi-Gaussian function, which allows independent selection of scales for foreground and background. By selecting a narrow neighborhood for the background relative to the foreground, the proposed method reduces interference from adjacent objects while preserving the ability of intra-region smoothing. Our idea is inspired by a comparative analysis of existing line filters, in which several traditional methods, including the vesselness, gradient flux and medialness models, are integrated into a uniform framework. The comparison subsequently aids in understanding the principles of the different filtering kernels, which is also a contribution of the paper. Based on some axiomatic scale-space assumptions, the full representation of our bi-Gaussian kernel is deduced. The popular γ-normalization scheme for multi-scale integration is extended to the bi-Gaussian operators. Finally, combined with a parameter-free shape estimation scheme, a derivative filter is developed for the typical applications of curvilinear structure detection and vasculature image enhancement. Experiments using synthetic and real data verify that the proposed method outperforms several conventional filters in separating closely located objects as well as being robust to noise.
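The bi-Gaussian idea can be sketched in 1D with a simplified piecewise kernel that matches only kernel values at the junction (the paper's construction also matches derivatives; all names here are illustrative):

```python
import math

def gauss(x, sigma):
    """Normalized 1D Gaussian."""
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def bi_gaussian(x, sigma_f, sigma_b, x0):
    """Simplified bi-Gaussian sketch: foreground scale sigma_f inside |x| <= x0,
    a narrower background scale sigma_b outside, with the outer piece shifted
    and rescaled so the two pieces meet continuously at |x| = x0."""
    ax = abs(x)
    if ax <= x0:
        return gauss(ax, sigma_f)
    return gauss(ax - x0, sigma_b) * gauss(x0, sigma_f) / gauss(0.0, sigma_b)
```

With sigma_b < sigma_f the kernel's tails fall off faster than those of a plain Gaussian of scale sigma_f, which is what reduces interference from nearby background structures while keeping foreground smoothing intact.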
@article{Xiao:2013,
author = {Xiao, Changyan and Staring, Marius and Wang, Yaonan and Shamonin, Denis P. and Stoel, Berend C.},
title = {A Multiscale Bi-Gaussian Filter for Adjacent Curvilinear Structures Detection With Application to Vasculature Images},
journal = {IEEE Transactions on Image Processing},
volume = {22},
number = {1},
pages = {174 -- 188},
month = jan,
year = {2013},
}
EMPIRE10 (Evaluation of Methods for Pulmonary Image REgistration 2010) is a public platform for fair and meaningful comparison of registration algorithms which are applied to a database of intrapatient thoracic CT image pairs. Evaluation of nonrigid registration techniques is a nontrivial task. This is compounded by the fact that researchers typically test only on their own data, which varies widely. For this reason, reliable assessment and comparison of different registration algorithms has been virtually impossible in the past. In this work we present the results of the launch phase of EMPIRE10, which comprised the comprehensive evaluation and comparison of 20 individual algorithms from leading academic and industrial research groups. All algorithms are applied to the same set of 30 thoracic CT pairs. Algorithm settings and parameters are chosen by researchers expert in the configuration of their own method and the evaluation is independent, using the same criteria for all participants. All results are published on the EMPIRE10 website (http://empire10.isi.uu.nl). The challenge remains ongoing and open to new participants. Full results from 24 algorithms have been published at the time of writing. This paper details the organization of the challenge, the data and evaluation methods and the outcome of the initial launch with 20 algorithms. The gain in knowledge and future work are discussed.
@article{Murphy:2011,
author = {Murphy, Keelin and van Ginneken, Bram and Reinhardt, Joseph M. and Kabus, Sven and Ding, Kai and Deng, Xiang and Cao, Kunlin and Du, Kaifang and Christensen, Gary E. and Garcia, Vincent and Vercauteren, Tom and Ayache, Nicholas and Commowick, Olivier and Malandain, Gregoire and Glocker, Ben and Paragios, Nikos and Navab, Nassir and Gorbunova, Vladlena and Sporring, Jon and de Bruijne, Marleen and Han, Xiao and Heinrich, Mattias P. and Schnabel, Julia A. and Jenkinson, Mark and Lorenz, Cristian and Modat, Marc and McClelland, Jamie R. and Ourselin, Sebastien and Muenzing, Sascha E.A. and Viergever, Max A. and De Nigris, Dante and Collins, D.L. and Arbel, Tal and Peroni, Marta and Li, Rui and Sharp, Gregory C. and Schmidt-Richberg, Alexander and Ehrhardt, Jan and Werner, Rene and Smeets, Dirk and Loeckx, Dirk and Song, Gang and Tustison, Nicholas and Avants, Brian and Gee, James C. and Staring, Marius and Klein, Stefan and Stoel, Berend C. and Urschler, Martin and Werlberger, Manuel and Vandemeulebroucke, Jef and Rit, Simon and Sarrut, David and Pluim, Josien P.W.},
title = {Evaluation of Registration Methods on Thoracic CT: The EMPIRE10 Challenge},
journal = {IEEE Transactions on Medical Imaging},
volume = {30},
number = {11},
pages = {1901 -- 1920},
month = nov,
year = {2011},
}
The traditional Hessian-related vessel filters often suffer from detecting complex structures like bifurcations due to an over-simplified cylindrical model. To solve this problem, we present a shape-tuned strain energy density function to measure vessel likelihood in 3D medical images. This method is initially inspired by established stress-strain principles in mechanics. By considering the Hessian matrix as a stress tensor, the three invariants from orthogonal tensor decomposition are used, independently or in combination, to formulate distinctive functions for measuring vascular shape, brightness contrast and structure strength. Moreover, a mathematical description of Hessian eigenvalues for general vessel shapes is obtained, based on an intensity continuity assumption, and a relative Hessian strength term is presented to ensure the dominance of second-order derivatives as well as suppress undesired step edges. Finally, we adopt a multi-scale scheme to find an optimal solution through scale space. The proposed method is validated in experiments with a digital phantom and non-contrast-enhanced pulmonary CT data. It is shown that our model performed more effectively in enhancing vessel bifurcations and preserving details, compared to three existing filters.
@article{Xiao:2011,
author = {Xiao, Changyan and Staring, Marius and Shamonin, Denis P. and Reiber, Johan H.C. and Stolk, Jan and Stoel, Berend C.},
title = {A Strain Energy Filter for 3D Vessel Enhancement with Application to Pulmonary CT Images},
journal = {Medical Image Analysis},
volume = {15},
number = {1},
pages = {112--124},
month = feb,
year = {2011},
}
Quantitative evaluation of image registration algorithms is a difficult and under-addressed issue due to the lack of a reference standard in most registration problems. In this work a method is presented whereby detailed reference standard data may be constructed in an efficient semi-automatic fashion. A well-distributed set of n landmarks is detected fully automatically in one scan of a pair to be registered. Using a custom-designed interface, observers define corresponding anatomic locations in the second scan for a specified subset of s of these landmarks. The remaining n - s landmarks are matched fully automatically by a thin-plate-spline based system using the s manual landmark correspondences to model the relationship between the scans. The method is applied to 47 pairs of temporal thoracic CT scans, three pairs of brain MR scans and five thoracic CT datasets with synthetic deformations. Interobserver differences are used to demonstrate the accuracy of the matched points. The utility of the reference standard data as a tool in evaluating registration is shown by the comparison of six sets of registration results on the 47 pairs of thoracic CT data.
@article{Murphy:2012,
author = {Murphy, Keelin and van Ginneken, Bram and Klein, Stefan and Staring, Marius and Viergever, Max A. and Pluim, Josien P.W.},
title = {Semi-Automatic Construction of Reference Standards for Evaluation of Image Registration},
journal = {Medical Image Analysis},
volume = {15},
number = {1},
pages = {71--84},
month = feb,
year = {2011},
}
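The thin-plate-spline propagation step can be sketched with SciPy's `RBFInterpolator` (an assumption for illustration; the paper uses a custom-designed system): fit the mapping on the s manual landmark pairs, then predict the remaining n - s landmarks. The affine ground-truth transform below is synthetic, purely to make the sketch checkable:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# "Manual" correspondences: s landmarks annotated in both scans.
s_fixed = rng.uniform(0, 100, size=(12, 3))
A = np.array([[1.02, 0.01, 0.0], [0.0, 0.98, 0.02], [0.01, 0.0, 1.01]])
t = np.array([2.0, -1.5, 0.5])
s_moving = s_fixed @ A.T + t  # ground-truth mapping (affine here, for testing)

# Fit a thin-plate-spline mapping from the s manual pairs ...
tps = RBFInterpolator(s_fixed, s_moving, kernel='thin_plate_spline')

# ... and propagate the remaining n - s landmarks automatically.
rest_fixed = rng.uniform(0, 100, size=(50, 3))
rest_pred = tps(rest_fixed)
rest_true = rest_fixed @ A.T + t
```

Because the thin-plate spline contains a first-degree polynomial term, it reproduces an affine relationship between the scans exactly; real anatomical deformations are only approximated between the manual pairs.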
Thoracic computed tomography (CT) scans provide information about cardiovascular risk status, but because these scans are not ECG-synchronized, precise quantification of coronary calcifications is difficult. Aortic calcium scoring is less sensitive to cardiac motion and is therefore an alternative to coronary calcium scoring as an indicator of cardiovascular risk. We developed and evaluated a computer-aided system for automatic detection and quantification of aortic calcifications in low-dose non-contrast-enhanced chest CT, trained and tested on scans from participants of a lung cancer screening trial. A total of 433 low-dose, non-ECG-synchronized, non-contrast-enhanced 16-detector row examinations of the chest were randomly divided into 340 training and 93 test data sets. A first observer manually identified aortic calcifications on training and test scans; a second observer did the same on the test scans only. First, a multi-atlas-based segmentation method was developed to delineate the aorta. Subsequently, the training data was used to train the system, based on statistical pattern recognition theory, to automatically identify calcifications in the aortic wall. Calcium volume scores for computer system and first observer, and for the two observers, were compared using descriptive statistics and Spearman rank correlation coefficients. The computer system correctly detected on average 768 mm3 out of 871 mm3 of calcified plaque volume in the aorta, with an average of 61 mm3 of false positive volume per scan. The Spearman rank correlation coefficient was ρ=0.97 between system and first observer, compared to ρ=0.99 between the two observers. Automatic calcium scoring in the aorta appears feasible, with good correlation between manual and automatic scoring.
@article{Isgum:2010,
author = {I{\v{s}}gum, Ivana and Rutten, Annemarieke and Prokop, Mathias and Staring, Marius and Klein, Stefan and Pluim, Josien P.W. and Viergever, Max A. and van Ginneken, Bram},
title = {Automated aortic calcium scoring on low-dose chest computed tomography},
journal = {Medical Physics},
volume = {37},
number = {2},
pages = {714--723},
month = feb,
year = {2010},
}
Medical image registration is an important task in medical image processing. It refers to the process of aligning data sets, possibly from different modalities (e.g., magnetic resonance (MR) and computed tomography (CT)), different time points (e.g., follow-up scans), and/or different subjects (in case of population studies). A large number of methods for image registration are described in the literature. Unfortunately, there is not one method that works for all applications. We have therefore developed elastix, a publicly available computer program for intensity-based medical image registration. The software consists of a collection of algorithms that are commonly used to solve medical image registration problems. The modular design of elastix allows the user to quickly configure, test, and compare different registration methods for a specific application. The command-line interface enables automated processing of large numbers of data sets, by means of scripting. The usage of elastix for comparing different registration methods is illustrated with three example experiments, in which individual components of the registration method are varied.
@article{KleinStaring:2010,
author = {Klein, Stefan and Staring, Marius and Murphy, Keelin and Viergever, Max A. and Pluim, Josien P.W.},
title = {elastix: a toolbox for intensity-based medical image registration},
journal = {IEEE Transactions on Medical Imaging},
volume = {29},
number = {1},
pages = {196--205},
month = jan,
year = {2010},
}
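elastix itself is a modular C++ toolbox; purely as a toy illustration of the intensity-based principle it implements (a similarity metric, a transform model, and an optimizer — none of this is elastix's actual API), here is a minimal translation registration by exhaustive search:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images of equal shape."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_translation(fixed, moving, max_shift=5):
    """Exhaustive search for the integer shift maximizing the similarity."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = ncc(fixed, np.roll(moving, (dy, dx), axis=(0, 1)))
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

rng = np.random.default_rng(1)
fixed = rng.random((64, 64))
moving = np.roll(fixed, (-3, 2), axis=(0, 1))  # misaligned copy of the fixed image
```

In practice the transform is parameterized (e.g., B-splines) and a gradient-based optimizer replaces the exhaustive search; the metric–transform–optimizer decomposition is what elastix makes configurable.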
Atlas-based segmentation is a powerful generic technique for automatic delineation of structures in volumetric images. Several studies have shown that multi-atlas segmentation methods outperform schemes that use only a single atlas, but running multiple registrations on volumetric data is time-consuming. Moreover, for many scans or regions within scans, a large number of atlases may not be required to achieve good segmentation performance and may even deteriorate the results. It would therefore be worthwhile to include the decision of which and how many atlases to use for a particular target scan in the segmentation process. To this end, we propose two generally applicable multi-atlas segmentation methods, AMAS and ALMAS. AMAS automatically selects the most appropriate atlases for a target image and automatically stops registering atlases when no further improvement is expected. ALMAS takes this concept one step further by locally deciding how many and which atlases are needed to segment a target image. The methods employ a computationally cheap atlas selection strategy, an automatic stopping criterion, and a technique to locally inspect registration results and determine how much improvement can be expected from further registrations.
AMAS and ALMAS were applied to segmentation of the heart in computed tomography scans of the chest and compared to a conventional multi-atlas method (MAS). The results show that ALMAS achieves the same performance as MAS at a much lower computational cost. When the available segmentation time is fixed, both AMAS and ALMAS perform significantly better than MAS. In addition, AMAS was applied to an on-line segmentation challenge for delineation of the caudate nucleus in brain MRI scans, where it achieved the best score of all results submitted to date.
@article{vanRikxoort:2010,
author = {van Rikxoort, Eva M. and I{\v{s}}gum, Ivana and Arzhaeva, Yulia and Staring, Marius and Klein, Stefan and Viergever, Max A. and Pluim, Josien P.W. and van Ginneken, Bram},
title = {Adaptive Local Multi-Atlas Segmentation: Application to the Heart and the Caudate Nucleus},
journal = {Medical Image Analysis},
volume = {14},
number = {1},
pages = {39--49},
month = feb,
year = {2010},
}
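The baseline that AMAS and ALMAS refine is plain multi-atlas fusion. A minimal per-voxel majority-voting sketch (illustrative only, without the adaptive atlas selection and stopping that are the paper's contribution):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse propagated atlas label maps by per-voxel majority voting."""
    stack = np.stack(label_maps)                         # (n_atlases, *shape)
    n_labels = int(stack.max()) + 1
    # count votes per label, then pick the winning label per voxel
    votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)                          # (*shape,)

# Three toy propagated atlas segmentations of a 2x2 image
a1 = np.array([[0, 1], [1, 1]])
a2 = np.array([[0, 1], [0, 1]])
a3 = np.array([[1, 1], [0, 1]])
fused = majority_vote([a1, a2, a3])
```

AMAS/ALMAS decide, globally or locally, how many such label maps to feed into this fusion step in the first place.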
Radiation therapy for cervical cancer can benefit from image registration in several ways, for example by studying the motion of organs, or by (partially) automating the delineation of the target volume and other structures of interest. In this paper, the registration of cervical data is addressed using mutual information (MI) of not only image intensity, but also features that describe local image structure. Three aspects of the registration are addressed to make this approach feasible. Firstly, instead of relying on a histogram-based estimation of mutual information, which poses problems for a larger number of features, a graph-based implementation of α-mutual information (α-MI) is employed. Secondly, the analytical derivative of α-MI is derived. This makes it possible to use a stochastic gradient descent method to solve the registration problem, which is substantially faster than non-derivative-based methods. Thirdly, the feature space is reduced by means of a principal component analysis, which also decreases the registration time. The proposed technique is compared to a standard approach, based on the mutual information of image intensity only. Experiments are performed on 93 T2-weighted MR clinical data sets acquired from 19 patients with cervical cancer. Several characteristics of the proposed algorithm are studied on a subset of 19 image pairs (one pair per patient). On the remaining data (36 image pairs, one or two pairs per patient) the median overlap is shown to improve significantly compared to standard MI from 0.85 to 0.86 for the clinical target volume (CTV, p = 2 × 10^-2), from 0.75 to 0.81 for the bladder (p = 8 × 10^-6) and from 0.76 to 0.77 for the rectum (p = 2 × 10^-4). The registration error is improved at important tissue interfaces, such as that of the bladder with the CTV, and the interface of the rectum with the uterus and cervix.
@article{Staring:2009,
author = {Staring, Marius and van der Heide, Uulke A. and Klein, Stefan and Viergever, Max A. and Pluim, Josien P.W.},
title = {Registration of Cervical MRI Using Multifeature Mutual Information},
journal = {IEEE Transactions on Medical Imaging},
volume = {28},
number = {9},
pages = {1412--1421},
month = sep,
year = {2009},
}
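The paper replaces histogram-based MI with a graph-based α-MI over feature vectors. For reference, the standard histogram estimate of intensity MI that it extends can be sketched as follows (bin count arbitrary):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """MI of two intensity images from their joint histogram (in nats)."""
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                    # high: fully dependent
mi_rand = mutual_information(img, rng.random((64, 64)))   # near zero: independent
```

The histogram estimator becomes impractical as the number of feature channels grows, which is precisely why the paper switches to a graph-based estimator.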
A novel atlas-based segmentation approach based on the combination of multiple registrations is presented. Multiple atlases are registered to a target image. To obtain a segmentation of the target, labels of the atlas images are propagated to it. The propagated labels are combined by spatially varying decision fusion weights. These weights are derived from local assessment of the registration success. Furthermore, an atlas selection procedure is proposed that is equivalent to sequential forward selection from statistical pattern recognition theory. The proposed method is compared to three existing atlas-based segmentation approaches, namely (1) single atlas-based segmentation, (2) average-shape atlas-based segmentation, and (3) multi-atlas-based segmentation with averaging as decision fusion. These methods were tested on the segmentation of the heart and the aorta in computed tomography scans of the thorax. The results show that the proposed method outperforms other methods and yields results very close to those of an independent human observer. Moreover, the additional atlas selection step led to a faster segmentation at a comparable performance.
@article{Isgum:2009,
author = {I{\v{s}}gum, Ivana and Staring, Marius and Rutten, Annemarieke and Prokop, Mathias and Viergever, Max A. and van Ginneken, Bram},
title = {Multi-Atlas-Based Segmentation With Local Decision Fusion - Application to Cardiac and Aortic Segmentation in CT Scans},
journal = {IEEE Transactions on Medical Imaging},
volume = {28},
number = {7},
pages = {1000--1010},
month = jul,
year = {2009},
}
We present a stochastic gradient descent optimisation method for image registration with adaptive step size prediction. The method is based on the theoretical work by Plakhov and Cruz (2004). Our main methodological contribution is the derivation of an image-driven mechanism to select proper values for the most important free parameters of the method. The selection mechanism employs general characteristics of the cost functions that commonly occur in intensity-based image registration. Also, the theoretical convergence conditions of the optimisation method are taken into account. The proposed adaptive stochastic gradient descent (ASGD) method is compared to a standard, non-adaptive Robbins-Monro (RM) algorithm. Both ASGD and RM employ a stochastic subsampling technique to accelerate the optimisation process. Registration experiments were performed on 3D CT and MR data of the head, lungs, and prostate, using various similarity measures and transformation models. The results indicate that ASGD is robust to these variations in the registration framework and is less sensitive to the settings of the user-defined parameters than RM. The main disadvantage of RM is the need for a predetermined step size function. The ASGD method provides a solution for that issue.
@article{Klein:2009,
author = {Klein, Stefan and Pluim, Josien P.W. and Staring, Marius and Viergever, Max A.},
title = {Adaptive Stochastic Gradient Descent Optimisation for Image Registration},
journal = {International Journal of Computer Vision},
volume = {81},
number = {3},
pages = {227--239},
month = mar,
year = {2009},
}
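The adaptive mechanism can be sketched on a toy quadratic: an artificial "time" t controls the step size a/(t+A) and is updated through a sigmoid of the inner product of successive gradients, so steps stay large while gradients agree and shrink when they oscillate. The constants and exact sigmoid below are illustrative, not the image-driven values the paper derives:

```python
import numpy as np

def asgd(grad, x0, a=1.0, A=10.0, fmax=1.0, fmin=-0.5, omega=1.0, iters=200):
    """Toy sketch of adaptive stochastic gradient descent: the 'time' t shrinks
    while successive gradients agree (keep large steps) and grows when they
    oscillate (reduce the step)."""
    x, t, g_prev = np.asarray(x0, float), 0.0, None
    for _ in range(iters):
        g = grad(x)
        if g_prev is not None:
            s = -float(g @ g_prev)  # positive when gradients oscillate
            # sigmoid-shaped time update bounded in [fmin, fmax]
            t = max(0.0, t + fmin + (fmax - fmin)
                    / (1.0 - (fmax / fmin) * np.exp(-s / omega)))
        x = x - (a / (t + A)) * g   # gradient step with adaptive size
        g_prev = g
    return x

# Quadratic test problem 0.5*||x||^2 with gradient x: minimum at the origin.
x_star = asgd(lambda x: x, np.array([5.0, -3.0]))
```

In the registration setting the gradient itself is a stochastic estimate computed from a random voxel subset, which is where the "stochastic" in ASGD comes in.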
Objectives: To study the impact of image subtraction of registered images on the detection of change in pulmonary ground-glass nodules identified on chest CT.
Materials and Methods: A cohort of 33 individuals (25 men, 8 women; age range 51 to 75 years) with 37 focal ground-glass opacities (GGO) were recruited from a lung cancer screening trial. For every participant, one to three follow-up scans were available (total number of pairs, 84). Pairs of scans of the same nodule were registered nonrigidly, and then subtracted to enhance differences in size and density. Four observers rated size and density change of the GGO between pairs of scans by visual comparison alone and with additional availability of a subtraction image and indicated their confidence. An independent experienced chest radiologist served as an arbiter having all reader data, clinical data and follow-up examinations available. Nodule pairs for which the arbiter could not establish definite progression, regression, or stability were excluded from further evaluation. This left 59 and 58 pairs for evaluation of size and density change, respectively. Weighted kappa statistics were used to assess inter-observer agreement and agreement with the arbiter. Statistical significance was tested with a kappa z-test.
Results: When the subtraction image was available, the average inter-observer agreement improved from 0.52 to 0.66 for size change and from 0.47 to 0.57 for density change. Average agreement with the arbiter improved from 0.61 to 0.76 for size change and from 0.53 to 0.64 for density change. The effect was more pronounced when observer confidence without the subtraction image was low: agreement improved from 0.26 to 0.57 and from 0.19 to 0.47 in those cases.
Conclusions: Image subtraction improves the evaluation of subtle changes in pulmonary ground-glass opacities and decreases inter-observer variability.
@article{Staring:2010,
author = {Staring, Marius and Pluim, Josien P.W. and de Hoop, Bartjan and Klein, Stefan and van Ginneken, Bram and Gietema, Hester and Nossent, George and Schaefer-Prokop, Cornelia and van de Vorst, Saskia and Prokop, Mathias},
title = {Image Subtraction Facilitates Assessment of Volume and Density Change in Ground-Glass Opacities in Chest CT},
journal = {Investigative Radiology},
volume = {44},
number = {2},
pages = {61--66},
month = feb,
year = {2009},
}
An automatic method for delineating the prostate (including the seminal vesicles) in 3D magnetic resonance (MR) scans is presented. The method is based on nonrigid registration of a set of prelabelled atlas images. Each atlas image is nonrigidly registered with the target patient image. Subsequently, the deformed atlas label images are fused to yield a single segmentation of the patient image. The proposed method is evaluated on 50 clinical scans, which were manually segmented by three experts. The Dice similarity coefficient (DSC) is used to quantify the overlap between the automatic and manual segmentations. We investigate the impact of several factors on the performance of the segmentation method. For the registration, two similarity measures are compared: mutual information and a localised version of mutual information. The latter turns out to be superior (median diff. DSC = 0.02, p < 0.01 with a paired two-sided Wilcoxon test) and comes at no added computational cost, thanks to the use of a novel stochastic optimisation scheme. For the atlas fusion step we consider a majority voting rule and the "simultaneous truth and performance level estimation" (STAPLE) algorithm, both with and without a preceding atlas selection stage. The differences between the various fusion methods appear to be small and mostly not statistically significant (p > 0.05). To assess the influence of the atlas composition, two atlas sets are compared. The first set consists of 38 scans of healthy volunteers. The second set is constructed by a leave-one-out approach using the 50 clinical scans that are used for evaluation. The second atlas set gives substantially better performance (diff. DSC = 0.04, p < 0.01), stressing the importance of a careful atlas definition. With the best settings, a median DSC of around 0.85 is achieved, which is close to the median interobserver DSC of 0.87.
The segmentation quality is especially good at the prostate-rectum interface, where the segmentation error remains below 1 mm in 50% of the cases and below 1.5 mm in 75% of the cases.
@article{Klein:2008,
author = {Klein, Stefan and van der Heide, Uulke A. and Lips, Irene M. and van Vulpen, Marco and Staring, Marius and Pluim, Josien P.W.},
title = {Automatic Segmentation of the Prostate in 3D MR Images by Atlas Matching using Localised Mutual Information},
journal = {Medical Physics},
volume = {35},
number = {4},
pages = {1407--1417},
month = apr,
year = {2008},
}
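The DSC used throughout this evaluation is 2|A∩B| / (|A|+|B|); a minimal sketch with synthetic segmentations (the shapes are arbitrary):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentations."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 36-voxel squares, shifted by one row: overlap is 30 voxels.
auto = np.zeros((10, 10), int)
auto[2:8, 2:8] = 1
manual = np.zeros((10, 10), int)
manual[3:9, 2:8] = 1
overlap_dsc = dice(auto, manual)
```

A DSC of 1 means perfect overlap and 0 means none; values around 0.85, as reported above, are close to the interobserver agreement for the prostate.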
In present-day medical practice it is often necessary to nonrigidly align image data. Current registration algorithms do not generally take the characteristics of tissue into account. Consequently, rigid tissue, such as bone, can be deformed elastically, growth of tumours may be concealed, and contrast-enhanced structures may be reduced in volume.
We propose a method to locally adapt the deformation field at structures that must be kept rigid, using a tissue-dependent filtering technique. This adaptive filtering of the deformation field results in locally linear transformations without scaling or shearing. The degree of filtering is related to tissue stiffness: more filtering is applied at stiff tissue locations, less at parts of the image containing nonrigid tissue. The tissue-dependent filter is incorporated in a commonly used registration algorithm, using mutual information as a similarity measure and cubic B-splines to model the deformation field. The new registration algorithm is compared with this popular method.
Evaluation of the proposed tissue-dependent filtering is performed on 3D CT data of the thorax and on 2D Digital Subtraction Angiography (DSA) images. The results show that tissue-dependent filtering of the deformation field leads to improved registration results: tumour volumes and vessel widths are preserved rather than affected.
@article{Staring:2007a,
author = {Staring, Marius and Klein, Stefan and Pluim, Josien P.W.},
title = {Nonrigid Registration with Tissue-Dependent Filtering of the Deformation Field},
journal = {Physics in Medicine and Biology},
volume = {52},
number = {23},
pages = {6879--6892},
month = dec,
year = {2007},
}
A popular technique for nonrigid registration of medical images is based on the maximization of their mutual information, in combination with a deformation field parameterized by cubic B-splines. The coordinate mapping that relates the two images is found using an iterative optimization procedure. This work compares the performance of eight optimization methods: gradient descent (with two different step size selection algorithms), quasi-Newton, nonlinear conjugate gradient, Kiefer-Wolfowitz, simultaneous perturbation, Robbins-Monro, and evolution strategy. Special attention is paid to computation time reduction by using fewer voxels to calculate the cost function and its derivatives. The optimization methods are tested on manually deformed CT images of the heart, on follow-up CT chest scans, and on MR scans of the prostate acquired using a BFFE, T1, and T2 protocol. Registration accuracy is assessed by computing the overlap of segmented edges. Precision and convergence properties are studied by comparing deformation fields. The results show that the Robbins-Monro method is the best choice in most applications. With this approach, the computation time per iteration can be lowered approximately 500 times without affecting the rate of convergence by using a small subset of the image, randomly selected in every iteration, to compute the derivative of the mutual information. From the other methods the quasi-Newton and the nonlinear conjugate gradient method achieve a slightly higher precision, at the price of larger computation times.
@article{Klein:2007,
author = {Klein, Stefan and Staring, Marius and Pluim, Josien P.W.},
title = {Evaluation of Optimization Methods for Nonrigid Medical Image Registration using Mutual Information and B-splines},
journal = {IEEE Transactions on Image Processing},
volume = {16},
number = {12},
pages = {2879--2890},
month = dec,
year = {2007},
}
Medical images that are to be registered for clinical application often contain both structures that deform and ones that remain rigid. Nonrigid registration algorithms that do not model properties of different tissue types may result in deformations of rigid structures. In this article a local rigidity penalty term is proposed which is included in the registration function in order to penalize the deformation of rigid objects. This term can be used for any representation of the deformation field capable of modelling locally rigid transformations. By using a B-spline representation of the deformation field, a fast algorithm can be devised. The proposed method is compared with an unconstrained nonrigid registration algorithm. It is evaluated on clinical three-dimensional CT follow-up data of the thorax and on two-dimensional DSA image sequences. The results show that nonrigid registration using the proposed rigidity penalty term is capable of nonrigidly aligning images, while keeping user-defined structures locally rigid.
@article{Staring:2007b,
author = {Staring, Marius and Klein, Stefan and Pluim, Josien P.W.},
title = {A Rigidity Penalty Term for Nonrigid Registration},
journal = {Medical Physics},
volume = {34},
number = {11},
pages = {4098--4108},
month = nov,
year = {2007},
}
Cardiac magnetic resonance imaging (CMR) is a crucial tool for diagnosing and treating cardiac diseases. However, the lengthy scanning time remains a significant drawback. To address this, accelerated imaging techniques have been introduced that undersample k-space, which reduces the quality of the resulting images. Recent advancements in deep learning have aimed to expedite the scanning process while maintaining high image quality. However, deep learning models still struggle to adapt to different sampling modes, and achieving generalization across a wide range of undersampling factors remains challenging. An effective universal model for processing random undersampling is therefore essential and promising. In this work, we introduce UPCMR, an unrolled model designed for random sampling CMR reconstruction. This model incorporates two kinds of learnable prompts, an undersampling-specific prompt and a spatial-specific prompt, and combines them with a UNet structure in each block, aiming to provide an effective and versatile solution to the above challenge.
@inproceedings{Lyu:2024,
author = {Lyu, Donghang and Rao, Chinmay S. and Staring, Marius and van Osch, Matthias J.P. and Doneva, Mariya and Lamb, Hildo and Pezzotti, Nicola},
title = {UPCMR: A Universal Prompt-guided Model for Random Sampling Cardiac MRI Reconstruction},
booktitle = {Statistical Atlases and Computational Modeling of the Heart (STACOM)},
address = {Marrakech, Morocco},
series = {Lecture Notes in Computer Science},
volume = {},
pages = {},
month = oct,
year = {2024},
}
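The input such reconstruction models receive can be sketched with NumPy's FFT: undersample k-space with a random line mask (the pattern and ratio here are arbitrary) and form the aliased zero-filled reconstruction that a network like UPCMR would then refine:

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.random((64, 64))

# Fully sampled k-space of the image
kspace = np.fft.fftshift(np.fft.fft2(image))

# Random undersampling mask over phase-encode lines, keeping the center
mask = rng.random(64) < 0.3   # retain ~30% of the lines at random
mask[28:36] = True            # fully sampled low-frequency k-space center
undersampled = kspace * mask[None, :]

# Zero-filled reconstruction: the aliased input a learned model would refine
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
```

Varying the mask and acceleration factor between training examples is what forces a universal model to generalize across sampling modes.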
Vestibular schwannomas (VS) are benign tumors that are generally managed by active surveillance with MRI examination. To further assist clinical decision-making and avoid overtreatment, an accurate prediction of tumor growth based on longitudinal imaging is highly desirable. In this paper, we introduce DeepGrowth, a deep learning method that incorporates neural fields and recurrent neural networks for prospective tumor growth prediction. In the proposed model, each tumor is represented as a signed distance function (SDF) conditioned on a low-dimensional latent code. Unlike previous studies, we predict the latent codes of the future tumor and generate the tumor shapes from them using a multilayer perceptron (MLP). To deal with irregular time intervals, we introduce a time-conditioned recurrent module based on a ConvLSTM and a novel temporal encoding strategy, which enables the proposed model to output varying tumor shapes over time. The experiments on an in-house longitudinal VS dataset showed that the proposed model significantly improved performance (≥ 1.6% Dice score and ≥ 0.20 mm 95% Hausdorff distance), in particular for the top 20% of tumors that grow or shrink the most (≥ 4.6% Dice score and ≥ 0.73 mm 95% Hausdorff distance). Our code is available at https://github.com/cyjdswx/DeepGrowth.
@inproceedings{Chen:2024,
author = {Chen, Yunjie and Wolterink, Jelmer M. and Neve, Olaf M. and Romeijn, Stephan R. and Verbist, Berit M. and Hensen, Erik F. and Tao, Qian and Staring, Marius},
title = {Vestibular schwannoma growth prediction from longitudinal MRI by time-conditioned neural fields},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
address = {Marrakech, Morocco},
series = {Lecture Notes in Computer Science},
volume = {15003},
pages = {508--518},
month = oct,
year = {2024},
}
Electrocardiography is the most common method to investigate the condition of the heart through the observation of cardiac rhythm and electrical activity, for both diagnosis and monitoring purposes. Analysis of electrocardiograms (ECGs) is commonly performed through the investigation of specific patterns, which are visually recognizable by trained physicians and are known to reflect cardiac (dis)function. In this work we study the use of β-variational autoencoders (VAEs) as explainable feature extractors, and improve on their predictive capacities by jointly optimizing signal reconstruction and cardiac function prediction. The extracted features are then used for cardiac function prediction using logistic regression. The method is trained and tested on data from 7255 patients, who were treated for acute coronary syndrome at the Leiden University Medical Center between 2010 and 2021. The results show that our method significantly improved prediction and explainability compared to a vanilla β-VAE, while still yielding similar reconstruction performance.
@inproceedings{vanderValk:2023,
author = {van der Valk, Viktor and Atsma, Douwe and Scherptong, Roderick and Staring, Marius},
title = {Joint optimization of a {\beta}-VAE for ECG task-specific feature extraction},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
address = {Vancouver, Canada},
series = {Lecture Notes in Computer Science},
volume = {14221},
pages = {554--563},
month = oct,
year = {2023},
}
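The core β-VAE objective being jointly optimized is a reconstruction error plus a β-weighted KL divergence between the latent posterior and the prior. A minimal NumPy sketch (β, shapes, and the MSE reconstruction term are illustrative, and the paper's prediction head is omitted):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), per sample."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    """Reconstruction (MSE) plus beta-weighted KL divergence, averaged."""
    recon = np.sum((x - x_hat) ** 2, axis=-1)
    return float(np.mean(recon + beta * gaussian_kl(mu, logvar)))

# One toy sample whose posterior exactly matches the prior (KL term = 0)
x = np.array([[0.0, 1.0]])
x_hat = np.array([[0.1, 0.9]])
mu = np.zeros((1, 8))
logvar = np.zeros((1, 8))
loss_prior = beta_vae_loss(x, x_hat, mu, logvar)
```

Increasing β pushes the latent codes toward the prior and tends to disentangle them, which is what makes the extracted features more amenable to explanation.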
Image registration plays a vital role in understanding changes that occur in 2D and 3D scientific imaging datasets. Registration involves finding a spatial transformation that aligns one image to another by optimizing relevant image similarity metrics. In this paper, we introduce itk-elastix, a user-friendly Python wrapping of the mature elastix registration toolbox. The open-source tool supports rigid, affine, and B-spline deformable registration, making it versatile for various imaging datasets. By utilizing the modular design of itk-elastix, users can efficiently configure and compare different registration methods, and embed these in image analysis workflows.
@inproceedings{Ntatsis:2023,
author = {Ntatsis, Konstantinos and Dekker, Niels and van der Valk, Viktor and Birdsong, Tom and Zukic, Dzenan and Klein, Stefan and Staring, Marius and McCormick, Matthew},
title = {itk-elastix: Medical image registration in Python},
booktitle = {Proceedings of the 22nd Python in Science Conference},
editor = {Agarwal, Meghann and Calloway, Chris and Niederhut, Dillon},
pages = {101--105},
month = jul,
year = {2023},
}
In radiological practice, multi-sequence MRI is routinely acquired to characterize anatomy and tissue. However, due to the heterogeneity of imaging protocols and contra-indications to contrast agents, some MRI sequences, e.g. the contrast-enhanced T1-weighted image (T1ce), may not be acquired. This creates difficulties for large-scale clinical studies for which heterogeneous datasets are aggregated. Modern deep learning techniques have demonstrated the capability of synthesizing missing sequences from existing sequences, through learning from an extensive multi-sequence MRI dataset. In this paper, we propose a novel MR image translation solution based on local implicit neural representations. We split the available MRI sequences into local patches and assign to each patch a local multi-layer perceptron (MLP) that represents the patch in the T1ce. The parameters of these local MLPs are generated by a hypernetwork based on image features. Experimental results and ablation studies on the BraTS challenge dataset showed that the local MLPs are critical for recovering fine image and tumor details, as they allow for local specialization that is highly important for accurate image translation. Compared to a classical pix2pix model, the proposed method demonstrated visual improvement and significantly improved quantitative scores (MSE 0.86 × 10^-3 vs. 1.02 × 10^-3 and SSIM 94.9 vs. 94.3).
@inproceedings{Chen:2023,
author = {Chen, Yunjie and Staring, Marius and Wolterink, Jelmer M. and Tao, Qian},
title = {Local implicit neural representations for multi-sequence MRI translation},
booktitle = {IEEE International Symposium on Biomedical Imaging (ISBI)},
address = {Cartagena de Indias, Colombia},
month = apr,
year = {2023},
}
Literature on medical image segmentation claims that hybrid UNet models containing both Transformer and convolutional blocks perform better than purely convolutional UNet models. This recently touted success of Transformers warrants an investigation into which of its components contribute to its performance. Previous work is also limited to analyses at fixed data scales and to unfair comparisons with other models whose parameter counts are not equivalent. This work investigates the performance of the window-based Transformer for prostate CT Organ-at-Risk (OAR) segmentation at different data scales, in the context of replacing its various components. To compare with the literature, the first experiment replaces the window-based Transformer block with convolution. Results show that convolution prevails as the data scale increases. In the second experiment, to reduce complexity, the self-attention mechanism is replaced with an equivalent albeit simpler spatial mixing operation, i.e. max-pooling. We observe improved performance for max-pooling at smaller data scales, indicating that the window-based Transformer may not be the best choice at either small or large data scales. Finally, since convolution has an inherent local inductive bias of positional information, we conduct a third experiment to imbue the Transformer with such a property by exploring two kinds of positional encodings. The results show insignificant improvements after adding positional encoding, indicating the Transformer's deficiency in capturing positional information given our data scales. We hope that our approach can serve as a framework for others evaluating the utility of Transformers for their tasks. Code is available via GitHub.
@inproceedings{Tan:2023,
author = {Tan, Yicong and Mody, Prerak and van der Valk, Viktor and Staring, Marius and van Gemert, Jan},
title = {Analyzing Components of a Transformer under Different Data Scales in 3D Prostate CT Segmentation},
booktitle = {SPIE Medical Imaging: Computer-Aided Diagnosis},
editor = {Colliot, Olivier and I{\v{s}}gum, Ivana},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {12464},
pages = {1246408},
month = feb,
year = {2023},
}
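The max-pooling substitution described in the abstract above amounts to a parameter-free spatial token mixer. A minimal numpy sketch on a 1D token sequence (the function name and toy values are illustrative, not from the paper, which operates on 3D windows):

```python
import numpy as np

def maxpool_token_mixer(tokens: np.ndarray, window: int) -> np.ndarray:
    """Parameter-free spatial mixing: replace window self-attention by
    taking, per channel, the max over a local window of tokens.
    tokens: (n_tokens, channels); borders padded with -inf."""
    n, _ = tokens.shape
    pad = window // 2
    padded = np.pad(tokens, ((pad, pad), (0, 0)), constant_values=-np.inf)
    out = np.empty_like(tokens)
    for i in range(n):
        out[i] = padded[i:i + window].max(axis=0)
    return out

x = np.array([[0., 1.], [2., 0.], [1., 3.]])  # 3 tokens, 2 channels
mixed = maxpool_token_mixer(x, window=3)       # [[2,1],[2,3],[2,3]]
```

Unlike self-attention, this mixer has no learnable parameters, which is what makes the comparison with the window-based Transformer block a test of whether the attention weights themselves matter at a given data scale.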
Bayesian Neural Nets (BNNs) are increasingly used for robust organ auto-contouring. Uncertainty heatmaps extracted from BNNs have been shown to correspond to inaccurate regions. To help speed up the mandatory quality assessment (QA) of contours in radiotherapy, these heatmaps could be used as stimuli to direct the visual attention of clinicians to potential inaccuracies. In practice, this is non-trivial to achieve, since many accurate regions also exhibit uncertainty. To influence the output uncertainty of a BNN, we propose a modified accuracy-versus-uncertainty (AvU) metric as an additional objective during model training that penalizes both accurate regions exhibiting uncertainty and inaccurate regions exhibiting certainty. For evaluation, we use an uncertainty-ROC curve that can help differentiate between Bayesian models by comparing the probability of uncertainty in inaccurate versus accurate regions. We train and evaluate a FlipOut BNN model on the MICCAI2015 Head and Neck Segmentation challenge dataset and on the DeepMind-TCIA dataset, and observe an increase in the AUC of the uncertainty-ROC curves by 5.6% and 5.9%, respectively, when using the AvU objective. The AvU objective primarily reduced false positive regions (uncertain yet accurate), drawing less visual attention to these regions and thereby potentially improving the speed of error detection.
@inproceedings{Mody:2022,
author = {Mody, Prerak P. and Chaves-de-Plaza, Nicolas F. and Hildebrandt, Klaus and Staring, Marius},
title = {Improving Error Detection in Deep Learning based Radiotherapy Autocontours using Bayesian Uncertainty},
booktitle = {Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, MICCAI workshop},
address = {Singapore},
series = {Lecture Notes in Computer Science},
volume = {13563},
pages = {70 - 79},
month = sep,
year = {2022},
}
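The AvU metric that the training objective above builds on counts the four accuracy/uncertainty combinations per voxel. A minimal numpy sketch of the evaluation-style computation (the paper's training objective is a modified, differentiable variant; function name and threshold here are illustrative):

```python
import numpy as np

def accuracy_vs_uncertainty(correct: np.ndarray,
                            uncertainty: np.ndarray,
                            u_thresh: float) -> float:
    """AvU = (n_AC + n_IU) / total: the fraction of voxels that are either
    accurate-and-certain or inaccurate-and-uncertain.  `correct` is a
    boolean map, `uncertainty` a per-voxel scalar (e.g. entropy)."""
    uncertain = uncertainty > u_thresh
    n_ac = np.sum(correct & ~uncertain)   # accurate, certain
    n_au = np.sum(correct & uncertain)    # accurate, uncertain (penalized)
    n_ic = np.sum(~correct & ~uncertain)  # inaccurate, certain (penalized)
    n_iu = np.sum(~correct & uncertain)   # inaccurate, uncertain
    return float(n_ac + n_iu) / float(n_ac + n_au + n_ic + n_iu)

correct = np.array([True, True, False, False])
unc = np.array([0.1, 0.8, 0.9, 0.2])
avu = accuracy_vs_uncertainty(correct, unc, u_thresh=0.5)  # 2/4 = 0.5
```

Maximizing AvU during training pushes uncertainty out of accurate regions and into inaccurate ones, which is exactly the behaviour wanted of a QA stimulus.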
Delineation of tumors and organs-at-risk permits detecting and correcting changes in the patients’ anatomy throughout the treatment, making it a core step of adaptive proton therapy (APT). Although AI-based auto-contouring technologies have sped up this process, the time needed to perform quality assessment (QA) of the generated contours remains a bottleneck, taking clinicians anywhere from several minutes up to an hour to complete. This paper introduces a fast contouring workflow suitable for time-critical APT, enabling detection of anatomical changes in shorter time frames and with a lower demand on clinical resources. The proposed human-centered, AI-infused workflow follows two principles uncovered after reviewing the APT literature and conducting several interviews and an observational study in two radiotherapy centers in the Netherlands. First, enable targeted inspection of the generated contours by leveraging AI uncertainty and clinically relevant features such as the proximity of the organs-at-risk to the tumor. Second, minimize the number of interactions needed to edit faulty delineations with redundancy-aware editing tools that provide the user a sense of predictability and control. We use a proof of concept, validated with clinicians, to demonstrate how current and upcoming AI capabilities support the workflow and how it would fit into clinical practice.
@inproceedings{Chaves-de-Plaza:2022,
author = {Chaves-de-Plaza, Nicolas and Mody, Prerak P. and Hildebrandt, Klaus and de Ridder, Huib and Staring, Marius and van Egmond, Ren{\'e}},
title = {Towards Fast and Robust AI-Infused Human-Centered Contouring Workflows For Adaptive Proton Therapy in the Head and Neck},
booktitle = {European Chapter of the Human Factors and Ergonomics Society},
address = {Turin, Italy},
month = apr,
year = {2022},
}
Visually scoring lung involvement in systemic sclerosis (SSc) from CT scans plays an important role in monitoring progression, but its labor intensiveness hinders practical application. We therefore propose an automatic scoring framework that consists of two cascaded deep regression neural networks. The first (3D) network aims to predict the craniocaudal position of five anatomically defined scoring levels on the 3D CT scans. The second (2D) network receives the resulting 2D axial slices and predicts the scores. We used 227 3D CT scans to train and validate the first network, and the resulting 1135 axial slices were used in the second network. Two experts independently scored a subset of the data to obtain intra- and inter-observer variabilities, and the ground truth for all data was obtained in consensus. To alleviate the imbalance in training labels for the second network, we introduced a sampling technique, and to increase the diversity of the training samples, synthetic data was generated mimicking ground glass and reticulation patterns. The 4-fold cross-validation showed that our proposed network achieved an average MAE of 5.90, 4.66 and 4.49, and a weighted kappa of 0.66, 0.58 and 0.65, for total score (TOT), ground glass (GG) and reticular pattern (RET), respectively. Our network performed slightly worse than the best experts on TOT and GG prediction, but it has competitive performance on RET prediction and has the potential to be an objective alternative for the visual scoring of SSc in CT thorax studies.
@inproceedings{Jia:2022,
author = {Jia, Jingnan and Staring, Marius and Hern{\'a}ndez Gir{\'o}n, Irene and Kroft, Lucia J.M. and Schouffoer, Anne A. and Stoel, Berend C.},
title = {Prediction of Lung CT Scores of Systemic Sclerosis by Cascaded Regression Neural Networks},
booktitle = {SPIE Medical Imaging: Computer-Aided Diagnosis},
editor = {Colliot, Olivier and I{\v{s}}gum, Ivana},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {12033},
pages = {1203338},
month = feb,
year = {2022},
}
Deep learning models for organ contouring in radiotherapy are poised for clinical usage, but currently, there exist few tools for automated quality assessment (QA) of the predicted contours. Bayesian models and their associated uncertainty can potentially automate the process of detecting inaccurate predictions. We investigate two Bayesian models for auto-contouring, DropOut and FlipOut, using a quantitative measure - expected calibration error (ECE) and a qualitative measure - region-based accuracy-vs-uncertainty (R-AvU) graphs. It is well understood that a model should have low ECE to be considered trustworthy. However, in a QA context, a model should also have high uncertainty in inaccurate regions and low uncertainty in accurate regions. Such behaviour could direct the visual attention of expert users to potentially inaccurate regions, leading to a speed-up in the QA process. Using R-AvU graphs, we qualitatively compare the behaviour of different models in accurate and inaccurate regions. Experiments are conducted on the MICCAI2015 Head and Neck Segmentation Challenge and on the DeepMind-TCIA CT dataset using three models: DropOut-DICE, DropOut-CE (Cross Entropy) and FlipOut-CE. Quantitative results show that DropOut-DICE has the highest ECE, while DropOut-CE and FlipOut-CE have the lowest ECE. To better understand the difference between DropOut-CE and FlipOut-CE, we use the R-AvU graph, which shows that FlipOut-CE has better uncertainty coverage in inaccurate regions than DropOut-CE. Such a combination of quantitative and qualitative metrics explores a new approach that helps to select which model can be deployed as a QA tool in clinical settings.
@inproceedings{Mody:2023,
author = {Mody, Prerak P. and Chaves-de-Plaza, Nicolas F. and Hildebrandt, Klaus and van Egmond, Ren{\'e} and Vilanova, Anna and Staring, Marius},
title = {Comparing Bayesian Models for Organ Contouring in Head and Neck Radiotherapy},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Colliot, Olivier and I{\v{s}}gum, Ivana},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {12032},
pages = {120320F},
month = feb,
year = {2022},
}
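The expected calibration error used above has a standard binned formulation. A minimal numpy sketch (bin count and the toy example are illustrative):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: bin predictions by confidence and accumulate the gap between
    mean confidence and empirical accuracy, weighted by bin occupancy."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)   # half-open bins (lo, hi]
        if in_bin.any():
            gap = abs(conf[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Overconfident toy model: 90% confidence but 75% accuracy -> ECE = 0.15
ece = expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 1, 1, 0])
```

A perfectly calibrated model (confidence matching empirical accuracy in every bin) yields ECE 0; the overconfident DropOut-DICE model in the abstract would score a larger gap.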
Deep supervised models often require a large amount of labelled data, which is difficult to obtain in the medical domain. Semi-supervised learning (SSL) has therefore been an active area of research, due to its promise of minimizing training costs by leveraging unlabelled data. Previous research has shown that SSL is especially effective in low labelled data regimes; we show that this outperformance can be extended to high data regimes by applying Stochastic Weight Averaging (SWA), which incurs zero additional training cost. Our model was trained on a prostate CT dataset and achieved improvements of 0.12 mm, 0.14 mm, 0.32 mm, and 0.14 mm for the prostate, seminal vesicles, rectum, and bladder respectively, in terms of median test set mean surface distance (MSD) compared to the supervised baseline in our high data regime.
@inproceedings{Li:2022,
author = {Li, Yichao and Elmahdy, Mohamed S. and Lew, Michael S. and Staring, Marius},
title = {Transformation-Consistent Semi-Supervised Learning for Prostate CT Radiotherapy},
booktitle = {SPIE Medical Imaging: Computer-Aided Diagnosis},
editor = {Colliot, Olivier and I{\v{s}}gum, Ivana},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {12033},
pages = {120333O},
month = feb,
year = {2022},
}
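Stochastic Weight Averaging as referenced above simply maintains a running average of weight snapshots taken along the SGD trajectory and uses the average at test time, hence the zero additional training cost. A minimal sketch (class name and snapshot schedule are illustrative):

```python
import numpy as np

class StochasticWeightAverage:
    """Running average of model weights sampled along the SGD trajectory;
    the averaged weights replace the final weights at test time."""
    def __init__(self):
        self.n = 0
        self.avg = None

    def update(self, weights):
        w = np.asarray(weights, dtype=float)
        if self.avg is None:
            self.avg = w.copy()
        else:
            self.avg = (self.avg * self.n + w) / (self.n + 1)
        self.n += 1

swa = StochasticWeightAverage()
for w in ([1.0, 2.0], [3.0, 4.0], [5.0, 6.0]):  # e.g. weights at epoch ends
    swa.update(w)
# swa.avg is now [3., 4.], the mean of the three snapshots
```

In practice the snapshots are taken late in training (often with a cyclic or constant learning rate), so the average lands in a flatter, better-generalizing region of the loss surface.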
The 2019 fastMRI challenge was an open challenge designed to advance research in the field of machine learning for MR image reconstruction. The goal for the participants was to reconstruct undersampled MRI k-space data. The original challenge left open the question of how well the reconstruction methods would perform in a setting where there is a systematic difference between training and test data. In this work we tested the generalization performance of the submissions with respect to various perturbations, and found that, despite differences in model architecture and training, all of the methods perform very similarly.
@inproceedings{Johnson:2021,
author = {Johnson, Patricia M. and Jeong, Geunu and Hammernik, Kerstin and Schlemper, Jo and Qin, Chen and Duan, Jinming and Rueckert, Daniel and Lee, Jingu and Pezzotti, Nicola and De Weerdt, Elwin and Yousefi, Sahar and Elmahdy, Mohamed S. and Van Gemert, Jeroen Hendrikus Franciscus and Schuelke, Chistophe and Doneva, Mariya and Nielsen, Tim and Kastryulin, Sergey and Lelieveldt, Boudewijn P. F. and Van Osch, Matthias J. P. and Staring, Marius and Chen, Eric Z. and Wang, Puyang and Chen, Xiao and Chen, Terrence and Patel, Vishal M. and Sun, Shanhui and Shin, Hyungseob and Jun, Yohan and Eo, Taejoon and Kim, Sewon and Kim, Taeseong and Hwang, Dosik and Putzky, Patrick and Karkalousos, Dimitrios and Teuwen, Jonas and Miriakov, Nikita and Bakker, Bart and Caan, Matthan and Welling, Max and Muckley, Matthew J. and Knoll, Florian},
title = {Evaluation of the Robustness of Learned MR Image Reconstruction to Systematic Deviations Between Training and Test Data for the Models from the fastMRI Challenge},
booktitle = {Machine Learning for Medical Image Reconstruction, MICCAI workshop},
editor = {Haq, Nandinee F. and Johnson, Patricia and Maier, Andreas and W{\"u}rfl, Tobias and Yoo, Jaejun},
address = {Strasbourg, France},
series = {Lecture Notes in Computer Science},
volume = {12964},
pages = {25 - 34},
month = oct,
year = {2021},
}
Pulmonary lobe segmentation is an important preprocessing task for the analysis of lung diseases. Traditional methods relying on fissure detection or other anatomical features, such as the distribution of pulmonary vessels and airways, can provide reasonably accurate lobe segmentations. Deep learning based methods can outperform these traditional approaches, but require large datasets. Deep multi-task learning is expected to utilize labels of multiple different structures; however, such labels are commonly distributed over multiple datasets. In this paper, we propose a multi-task semi-supervised model that can leverage information on multiple structures from unannotated datasets and from datasets annotated with different structures. A focused alternating training strategy is presented to balance the different tasks. We evaluated the trained model on an external independent CT dataset. The results show that our model significantly outperforms single-task alternatives, improving the mean surface distance from 7.174 mm to 4.196 mm. We also demonstrated that our approach is successful for different network architectures as backbones.
@inproceedings{Jia:2021,
author = {Jia, Jingnan and Zhai, Zhiwei and Bakker, M. Els and Hern{\'a}ndez Gir{\'o}n, I. and Staring, Marius and Stoel, Berend C.},
title = {Multi-Task Semi-Supervised Learning for Pulmonary Lobe Segmentation},
booktitle = {IEEE International Symposium on Biomedical Imaging (ISBI)},
address = {Nice, France},
pages = {1329 - 1332},
month = apr,
year = {2021},
}
Recently, joint registration and segmentation has been formulated in a deep learning setting, by the definition of joint loss functions. In this work, we investigate joining these tasks at the architectural level. We propose a registration network that integrates segmentation propagation between images, and a segmentation network to predict the segmentation directly. These networks are connected into a single joint architecture via so-called cross-stitch units, allowing information to be exchanged between the tasks in a learnable manner. The proposed method is evaluated in the context of adaptive image-guided radiotherapy, using daily prostate CT imaging. Two datasets from different institutes and manufacturers were involved in the study. The first dataset was used for training (12 patients) and validation (6 patients), while the second dataset was used as an independent test set (14 patients). In terms of mean surface distance, our approach achieved 1.06 ± 0.3 mm, 0.91 ± 0.4 mm, 1.27 ± 0.4 mm, and 1.76 ± 0.8 mm on the validation set and 1.82 ± 2.4 mm, 2.45 ± 2.4 mm, 2.45 ± 5.0 mm, and 2.57 ± 2.3 mm on the test set for the prostate, bladder, seminal vesicles, and rectum, respectively. The proposed multi-task network outperformed single-task networks, as well as a network only joined through the loss function, thus demonstrating the capability to leverage the individual strengths of the segmentation and registration tasks. The obtained performance as well as the inference speed make this a promising candidate for daily re-contouring in adaptive radiotherapy, potentially reducing treatment-related side effects and improving quality-of-life after treatment.
@inproceedings{Beljaards:2020,
author = {Beljaards, Laurens and Elmahdy, Mohamed S. and Verbeek, Fons and Staring, Marius},
title = {A Cross-Stitch Architecture for Joint Registration and Segmentation in Adaptive Radiotherapy},
booktitle = {Medical Imaging with Deep Learning},
editor = {Pal, Christopher and Descoteaux, Maxime},
address = {Montreal, Canada},
series = {Proceedings of Machine Learning Research},
volume = {121},
pages = {62 - 74},
month = jul,
year = {2020},
}
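The cross-stitch units that join the registration and segmentation networks above are learnable linear mixers of the two tasks' activations. A minimal numpy sketch with an illustrative near-identity initialization (names and values are not from the paper):

```python
import numpy as np

def cross_stitch(x_seg, x_reg, alpha):
    """Cross-stitch unit: each task's next-layer input is a learnable
    linear combination of both tasks' activations.
    alpha is the 2x2 mixing matrix [[a_ss, a_sr], [a_rs, a_rr]]."""
    out_seg = alpha[0, 0] * x_seg + alpha[0, 1] * x_reg
    out_reg = alpha[1, 0] * x_seg + alpha[1, 1] * x_reg
    return out_seg, out_reg

alpha = np.array([[0.9, 0.1],
                  [0.2, 0.8]])  # near-identity init: mostly task-specific
s, r = cross_stitch(np.array([1.0, 0.0]), np.array([0.0, 1.0]), alpha)
# s == [0.9, 0.1], r == [0.2, 0.8]
```

Because alpha is learned jointly with the network weights, the model itself decides how much information flows between the segmentation and registration branches at each stitched layer.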
Contouring of the target volume and Organs-At-Risk (OARs) is a crucial step in radiotherapy treatment planning. In an adaptive radiotherapy setting, updated contours need to be generated based on daily imaging. In this work, we leverage personalized anatomical knowledge accumulated over the treatment sessions to improve the segmentation accuracy of a pre-trained Convolutional Neural Network (CNN) for a specific patient. We investigate a transfer learning approach, finetuning the baseline CNN model to a specific patient based on imaging acquired in earlier treatment fractions. The baseline CNN model is trained on a prostate CT dataset of 379 patients from one hospital. This model is then fine-tuned and tested on an independent dataset from another hospital of 18 patients, each having 7 to 10 daily CT scans. For the prostate, seminal vesicles, bladder and rectum, the model fine-tuned on each specific patient achieved a Mean Surface Distance (MSD) of 1.64 ± 0.43 mm, 2.38 ± 2.76 mm, 2.30 ± 0.96 mm, and 1.24 ± 0.89 mm, respectively, which was significantly better than the baseline model. The proposed personalized model adaptation is therefore very promising for clinical implementation in the context of adaptive radiotherapy of prostate cancer.
@inproceedings{Elmahdy:2020,
author = {Elmahdy, Mohamed S. and Ahuja, Tanuj and van der Heide, Uulke A. and Staring, Marius},
title = {Patient-Specific Finetuning of Deep Learning Models for Adaptive Radiotherapy in Prostate CT},
booktitle = {IEEE International Symposium on Biomedical Imaging (ISBI)},
address = {Iowa City, Iowa, USA},
pages = {577 - 580},
month = apr,
year = {2020},
}
Current unsupervised deep learning-based image registration methods are trained with mean squares or normalized cross correlation as a similarity metric. These metrics are suitable for registration of images in which a linear relation between image intensities exists. When such a relation is absent, knowledge from the conventional image registration literature suggests the use of mutual information. In this work we investigate whether mutual information can be used as a loss for unsupervised deep learning image registration by evaluating it on two datasets: breast dynamic contrast-enhanced MR and cardiac MR images. The results show that training with mutual information as a loss gives performance on par with conventional image registration on contrast-enhanced images, and that the approach is generally applicable, performing on par with normalized cross correlation in single-modality registration.
@inproceedings{deVos:2020,
author = {de Vos, Bob D. and van der Velden, Bas H. M. and Sander, J{\"o}rg and Gilhuijs, Kenneth G.A. and Staring, Marius and I{\v{s}}gum, Ivana},
title = {Mutual information for unsupervised deep learning image registration},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {I{\v{s}}gum, Ivana and Landman, Bennett A.},
address = {Houston, Texas, USA},
series = {Proceedings of SPIE},
volume = {11313},
pages = {113130R},
month = feb,
year = {2020},
}
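Mutual information as a similarity metric can be estimated from a joint intensity histogram of the fixed and moving images. A minimal numpy sketch (in a deep learning setting the histogram is typically replaced by a differentiable Parzen-window estimate; names and bin count here are illustrative):

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """MI(F, M) = sum p(f, m) log(p(f, m) / (p(f) p(m))), estimated from a
    joint intensity histogram; its negative can serve as a registration loss."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint probability p(f, m)
    pf = p.sum(axis=1, keepdims=True)       # marginal p(f)
    pm = p.sum(axis=0, keepdims=True)       # marginal p(m)
    nz = p > 0                              # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (pf @ pm)[nz])))

# Identical two-level images: MI equals the entropy, log(2)
a = np.repeat([0.0, 1.0], 50)
mi = mutual_information(a, a, bins=2)
```

Because MI only requires a statistical (not linear) relation between intensities, it remains informative when contrast agent changes one image's intensities but not the other's.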
Graph Convolutional Networks (GCNs) are a novel and powerful method for dealing with non-Euclidean data, while Convolutional Neural Networks (CNNs) can learn features from Euclidean data such as images. In this work, we propose a novel method to combine CNNs with GCNs (CNN-GCN), which can consider both Euclidean and non-Euclidean features and can be trained end-to-end. We applied this method to separate the pulmonary vascular trees into arteries and veins (A/V). Chest CT scans were pre-processed by vessel segmentation and skeletonization, from which a graph was constructed: voxels on the skeletons form the vertex set and their connections the adjacency matrix. 3D patches centered around each vertex were extracted from the CT scans, oriented perpendicularly to the vessel. The proposed CNN-GCN classifier was trained and applied on the constructed vessel graphs, where each node is then labeled as artery or vein. The proposed method was trained and validated on data from one hospital (11 patients, 22 lungs), and tested on independent data from a different hospital (10 patients, 10 lungs). A baseline CNN method and human observer performance were used for comparison. The CNN-GCN method obtained a median accuracy of 0.773 (0.738) in the validation (test) set, compared to a median accuracy of 0.817 by the observers, and 0.727 (0.693) by the CNN. In conclusion, the proposed CNN-GCN method combines local image information with graph connectivity information, improving pulmonary A/V separation over a baseline CNN method, approaching the performance of human observers.
@inproceedings{Zhai:2019,
author = {Zhai, Zhiwei and Staring, Marius and Zhou, Xuhui and Xie, Qiuxia and Xiao, Xiaojuan and Bakker, M. Els and Kroft, Lucia J. and Lelieveldt, Boudewijn P.F. and Boon, Duliette and Klok, Frederikus A. and Stoel, Berend C.},
title = {Linking convolutional neural networks with graph convolutional networks: application in pulmonary artery-vein separation},
booktitle = {Graph Learning in Medical Imaging, MICCAI workshop},
editor = {Zhang, D. and Zhou, L. and Jie, B. and Liu, M.},
address = {Shenzhen, China},
series = {Lecture Notes in Computer Science},
volume = {11849},
pages = {36 - 43},
month = oct,
year = {2019},
}
Hadamard time-encoded pseudo-continuous arterial spin labeling (te-pCASL) is a signal-to-noise ratio (SNR)-efficient MRI technique for acquiring dynamic pCASL signals that encodes the temporal information into the labeling according to a Hadamard matrix. In the decoding step, the contribution of each sub-bolus can be isolated, resulting in dynamic perfusion scans. When acquiring te-pCASL both with and without flow-crushing, the ASL signal in the arteries can be isolated, resulting in 4D angiographic information. However, obtaining multi-timepoint perfusion and angiographic data requires two acquisitions. In this study, we propose a 3D Dense-Unet convolutional neural network with a multilevel loss function for reconstructing multi-timepoint perfusion and angiographic information from interleaved 50%-sampled crushed and 50%-sampled non-crushed data, thereby negating the additional scan time. We present a framework to generate dynamic pCASL training and validation data, based on models of the intravascular and extravascular te-pCASL signals. The proposed network achieved SSIM values of 97.3 ± 1.1 and 96.2 ± 11.1, respectively, for 4D perfusion and angiographic data reconstruction on 313 test datasets.
@inproceedings{Yousefi:2019,
author = {Yousefi, Sahar and Hirschler, L. and van der Plas, M. and Elmahdy, Mohamed S. and Sokooti, Hessam and van Osch, Matthias J.P. and Staring, Marius},
title = {Fast Dynamic Perfusion and Angiography Reconstruction using an end-to-end 3D Convolutional Neural Network},
booktitle = {Machine Learning for Medical Image Reconstruction, MICCAI workshop},
editor = {Knoll, Florian and Maier, Andreas and Rueckert, Daniel and Ye, Jong Chul},
address = {Shenzhen, China},
series = {Lecture Notes in Computer Science},
volume = {11905},
pages = {25 - 35},
month = oct,
year = {2019},
}
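The Hadamard encode/decode principle behind te-pCASL can be illustrated with a toy example: each acquisition is a ±1 combination of sub-bolus signals (rows of the Hadamard matrix), and decoding exploits the matrix's orthogonality. This is a simplified sketch (real te-pCASL uses label/control patterns rather than literal ±1 signals; names and values are illustrative):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

n = 4
H = hadamard(n)
sub_boluses = np.array([3.0, 1.0, 2.0, 0.5])  # toy per-sub-bolus signals
acquired = H @ sub_boluses                     # encoded acquisitions
decoded = H.T @ acquired / n                   # H^T H = n I, so this recovers x
# decoded == [3., 1., 2., 0.5]
```

The SNR efficiency comes from every acquisition containing contributions from all sub-boluses, while the orthogonal decoding still isolates each sub-bolus exactly.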
Joint image registration and segmentation has long been an active area of research in medical imaging. Here, we reformulate this problem in a deep learning setting using adversarial learning. We consider the case in which fixed and moving images as well as their segmentations are available for training, while segmentations are not available during testing; a common scenario in radiotherapy. The proposed framework consists of a 3D end-to-end generator network that estimates the deformation vector field (DVF) between fixed and moving images in an unsupervised fashion and applies this DVF to the moving image and its segmentation. A discriminator network is trained to evaluate how well the moving image and segmentation align with the fixed image and segmentation. The proposed network was trained and evaluated on follow-up prostate CT scans for image-guided radiotherapy, where the planning CT contours are propagated to the daily CT images using the estimated DVF. A quantitative comparison with conventional registration using elastix showed that the proposed method improved performance and substantially reduced computation time, thus enabling real-time contour propagation necessary for online-adaptive radiotherapy.
@inproceedings{Elmahdy:2019,
author = {Elmahdy, Mohamed S. and Wolterink, Jelmer M. and Sokooti, Hessam and I{\v{s}}gum, Ivana and Staring, Marius},
title = {Adversarial optimization for joint registration and segmentation in prostate CT radiotherapy},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Shen, Dinggang and Liu, Tianming and Peters, Terry M. and Staib, Lawrence H. and Essert, Caroline and Zhou, Sean and Yap, Pew-Thian and Khan, Ali},
address = {Shenzhen, China},
series = {Lecture Notes in Computer Science},
volume = {11769},
pages = {366 - 374},
month = oct,
year = {2019},
}
We have developed an open source, collaborative platform for researchers to develop, compare, and improve medical image registration algorithms. The platform handles data management, unit testing, and benchmarking of registration methods in a fully automatic fashion. In this paper we describe the platform and present the Continuous Registration Challenge. The challenge focuses on registration of lung CT and brain MR images and includes eight publicly available data sets. The platform is made available to the community as an open source project and can be used for organization of future challenges.
@inproceedings{Marstal:2019,
author = {Marstal, Kasper and Berendsen, Floris and Dekker, Niels and Staring, Marius and Klein, Stefan},
title = {The Continuous Registration Challenge: Evaluation-As-A-Service for Medical Image Registration Algorithms},
booktitle = {IEEE International Symposium on Biomedical Imaging (ISBI)},
address = {Venice, Italy},
pages = {1399 - 1402},
month = apr,
year = {2019},
}
Invasive right-sided heart catheterization (RHC) is currently the gold standard for assessing treatment effects in pulmonary vascular diseases, such as chronic thromboembolic pulmonary hypertension (CTEPH). Quantifying morphological changes by matching vascular trees (pre- and post-treatment) may provide a non-invasive alternative for assessing hemodynamic changes. In this work, we propose a method for quantifying morphological changes, consisting of three steps: constructing vascular trees from the detected pulmonary vessels, matching vascular trees while preserving local tree topology, and quantifying local morphological changes based on Poiseuille’s law (changes in r⁻⁴, ∆r⁻⁴). Subsequently, the median and interquartile range (IQR) of all local ∆r⁻⁴ were calculated as global measurements for assessing morphological changes. The vascular tree matching method was validated with 10 synthetic trees, and the relation between clinical RHC parameters and quantifications of morphological changes was investigated in 14 CTEPH patients, pre- and post-treatment. In the evaluation with synthetic trees, the proposed method achieved an average residual distance of 3.09 ± 1.28 mm, a substantial improvement over a coherent point drift method (4.32 ± 1.89 mm) and a method with global-local topology preservation (3.92 ± 1.59 mm). In the clinical evaluation, the morphological changes (IQR of ∆r⁻⁴) were significantly correlated with the changes in RHC examinations, ∆sPAP (R=-0.62, p-value=0.019) and ∆mPAP (R=-0.56, p-value=0.038). Quantifying morphological changes may provide a noninvasive assessment of treatment effects in CTEPH patients, consistent with hemodynamic changes from invasive RHC.
@inproceedings{Zhai:2018,
author = {Zhai, Zhiwei and Staring, Marius and Ota, Hideki and Stoel, Berend C.},
title = {Pulmonary vessel tree matching for quantifying changes in vascular morphology},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Frangi, A.F. and Schnabel, J.A. and Davatzikos, C. and Alberola-Lopez, C. and Fichtinger, G.},
address = {Granada, Spain},
series = {Lecture Notes in Computer Science},
volume = {11071},
pages = {517 - 524},
month = sep,
year = {2018},
}
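The ∆r⁻⁴ quantification follows from Poiseuille's law, under which flow resistance scales with the inverse fourth power of the vessel radius. A minimal numpy sketch with toy branch radii (the function name and values are illustrative, not from the paper):

```python
import numpy as np

def radius4_change(r_pre, r_post):
    """Per-branch change in r^-4 (proportional to Poiseuille flow
    resistance), summarized globally by the median and IQR."""
    delta = np.asarray(r_post, dtype=float) ** -4 - np.asarray(r_pre, dtype=float) ** -4
    q1, med, q3 = np.percentile(delta, [25, 50, 75])
    return med, q3 - q1

# Toy trees: one branch widens from radius 1.0 to 2.0, two stay unchanged
med, iqr = radius4_change([1.0, 1.0, 2.0], [2.0, 1.0, 2.0])
```

Because r enters at the fourth power, even modest radius changes after treatment produce large resistance changes, which is why the IQR of ∆r⁻⁴ tracks the hemodynamic RHC parameters.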
Accurate gross tumor volume (GTV) segmentation in esophagus CT images is a critical task in computer aided diagnosis (CAD) systems. However, because of the similarity in contrast between the esophageal GTV and its neighbouring tissues in CT scans, this problem has received only limited attention. In this paper we present a 3D end-to-end method based on a convolutional neural network (CNN) for this purpose. We leverage design elements from DenseNet in a typical U-shape. The proposed architecture consists of a contracting path and an expanding path, which include dense blocks for extracting contextual features and for recovering the lost resolution, respectively. Using dense blocks leads to deep supervision, feature re-usability, and parameter reduction, while helping the network to be more accurate. The proposed architecture was trained and tested on a dataset containing 553 scans from 49 distinct patients. The proposed network achieved a Dice value of 0.73 ± 0.20, and a 95% mean surface distance of 3.07 ± 1.86 mm for 85 test scans. The experimental results indicate the effectiveness of the proposed method for clinical diagnosis and treatment systems.
@inproceedings{Yousefi:2018,
author = {Yousefi, Sahar and Sokooti, Hessam and Elmahdy, Mohamed S. and Peters, Femke P. and Manzuri Shalmani, Mohammad T. and Zinkstok, Roel T. and Staring, Marius},
title = {Esophageal Gross Tumor Volume Segmentation using a 3D Convolutional Neural Network},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Frangi, A.F. and Schnabel, J.A. and Davatzikos, C. and Alberola-Lopez, C. and Fichtinger, G.},
address = {Granada, Spain},
series = {Lecture Notes in Computer Science},
volume = {11073},
pages = {343 - 351},
month = sep,
year = {2018},
}
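The Dice value reported above is the standard overlap measure between predicted and ground-truth masks. A minimal numpy sketch (mask values are illustrative):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2 |A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

gt   = np.array([[1, 1, 0], [0, 1, 0]])
pred = np.array([[1, 0, 0], [0, 1, 1]])
dsc = dice(gt, pred)  # 2*2 / (3+3) = 0.666...
```

Dice rewards overlap relative to total mask size, which is why it is a common choice for small, low-contrast targets such as the esophageal GTV.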
Delineation of the target volume and Organs-At-Risk (OARs) is a crucial step for proton therapy dose planning of prostate cancer. Adaptive proton therapy mandates automatic delineation, as manual delineation is too time consuming; the automatic method should be fast and robust. In this study, we propose an accurate and robust automatic propagation of the delineations from the planning CT to the daily CT by means of Deformable Image Registration (DIR). The proposed algorithm is a multi-metric DIR method that jointly optimizes the registration of the bladder contours and the CT images. A 3D Dilated Convolutional Neural Network (DCNN) was trained for automatic bladder segmentation of the daily CT. The network was trained and tested on prostate data of 18 patients, each having 7 to 10 daily CT scans. The network achieved a Dice Similarity Coefficient (DSC) of 92.7% ± 1.6% for automatic bladder segmentation. For the automatic contour propagation of the prostate, lymph nodes, and seminal vesicles, the system achieved a DSC of 0.87 ± 0.03, 0.89 ± 0.02, and 0.67 ± 0.11, and a Mean Surface Distance of 1.4 ± 0.30 mm, 1.4 ± 0.29 mm, and 1.5 ± 0.37 mm, respectively. The proposed algorithm is therefore very promising for clinical implementation in the context of online adaptive proton therapy of prostate cancer.
@inproceedings{Elmahdy:2018,
author = {Elmahdy, Mohamed S. and Jagt, Thyrza and Hoogeman, Mischa S. and Zinkstok, Roel and Staring, Marius},
title = {Evaluation of multi-metric registration for online adaptive proton therapy of prostate cancer},
booktitle = {International Workshop on Biomedical Image Registration (WBIR)},
editor = {Klein, Stefan and Staring, Marius and Durrleman, Stanley and Sommer, Stefan Horst},
address = {Leiden, The Netherlands},
series = {Lecture Notes in Computer Science},
volume = {10883},
pages = {94 - 104},
month = jun,
year = {2018},
}
Currently, non-rigid image registration algorithms are too computationally intensive to use in time-critical applications. Existing implementations that focus on speed typically address this either by parallelization on GPU hardware, or by introducing methodically novel techniques into CPU-oriented algorithms. Stochastic gradient descent (SGD) optimization and variations thereof have proven to drastically reduce the computational burden for CPU-based image registration, but have not been successfully applied on GPU hardware due to their stochastic nature. This paper proposes 1) NiftyRegSGD, an SGD optimization for the GPU-based image registration tool NiftyReg, and 2) the random chunk sampler, a new random sampling strategy that better utilizes the memory bandwidth of GPU hardware. Experiments have been performed on 3D lung CT data of 19 patients, comparing NiftyRegSGD (with and without the random chunk sampler) with CPU-based elastix Fast Adaptive SGD (FASGD) and NiftyReg. The registration runtime was 21.5s, 4.4s and 2.8s for elastix-FASGD, NiftyRegSGD without, and NiftyRegSGD with random chunk sampling, respectively, while similar accuracy was obtained. Our method is publicly available at https://github.com/SuperElastix/NiftyRegSGD.
@inproceedings{Bhosale:2018,
author = {Bhosale, Parag and Staring, Marius and Al-Ars, Zaid and Berendsen, Floris F.},
title = {GPU-based stochastic-gradient optimization for non-rigid medical image registration in time-critical applications},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Angelini, Elsa A. and Landman, Bennett A.},
address = {Houston, Texas, USA},
series = {Proceedings of SPIE},
volume = {10574},
pages = {105740R},
month = feb,
year = {2018},
}
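The random chunk sampler draws contiguous runs of voxels rather than fully independent random voxels, so that reads coalesce in GPU memory. A hypothetical CPU-side sketch of the sampling pattern (function and parameter names are illustrative, not NiftyRegSGD's API):

```python
import numpy as np

def sample_chunks(n_voxels: int, n_samples: int, chunk_size: int, rng) -> np.ndarray:
    """Draw ~n_samples voxel indices as random contiguous chunks.

    Contiguous runs map to coalesced GPU memory reads, unlike fully
    independent random voxel sampling. (Illustrative sketch only.)"""
    n_chunks = max(1, n_samples // chunk_size)
    starts = rng.integers(0, n_voxels - chunk_size, size=n_chunks)
    # Each chunk is the contiguous run [start, start + chunk_size)
    idx = (starts[:, None] + np.arange(chunk_size)[None, :]).ravel()
    return idx

rng = np.random.default_rng(0)
idx = sample_chunks(n_voxels=1_000_000, n_samples=2048, chunk_size=64, rng=rng)
print(idx.shape)  # (2048,)
```

The trade-off is a mild loss of sample independence per iteration in exchange for much better memory bandwidth utilization.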
In this work we propose a deep learning network for deformable image registration (DIRNet). The DIRNet consists of a convolutional neural network (ConvNet) regressor, a spatial transformer, and a resampler. The ConvNet analyzes a pair of fixed and moving images and outputs parameters for the spatial transformer, which generates the displacement vector field that enables the resampler to warp the moving image to the fixed image. The DIRNet is trained end-to-end by unsupervised optimization of a similarity metric between input image pairs. A trained DIRNet can be applied to perform registration on unseen image pairs in one pass, thus non-iteratively. Evaluation was performed with registration of images of handwritten digits (MNIST) and cardiac cine MR scans (Sunnybrook Cardiac Data). The results demonstrate that registration with DIRNet is as accurate as a conventional deformable image registration method with short execution times.
@inproceedings{deVos:2017,
author = {de Vos, Bob D. and Berendsen, Floris and Viergever, Max A. and Staring, Marius and I{\v{s}}gum, Ivana},
title = {End-to-End Unsupervised Deformable Image Registration with a Convolutional Neural Network},
booktitle = {Deep Learning in Medical Image Analysis Workshop at MICCAI},
editor = {Cardoso, M. Jorge and Arbel, Tal and Carneiro, Gustavo and Syeda-Mahmood, Tanveer and Tavares, Jo{\"a}o Manuel R.S. and Moradi, Mehdi and Bradley, Andrew and Greenspan, Hayit and Papa, Jo{\"a}o Paulo and Madabhushi, Anant and Nascimento, Jacinto C. and Cardoso, Jaime S. and Belagiannis, Vasileios and Lu, Zhi},
address = {Quebec, Canada},
series = {Lecture Notes in Computer Science},
volume = {10553},
pages = {204 - 212},
month = sep,
year = {2017},
}
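The resampler step described above warps the moving image according to the displacement vector field. A minimal dense-warp sketch in 2D using SciPy (illustrative only, not the DIRNet code):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving: np.ndarray, dvf: np.ndarray) -> np.ndarray:
    """Warp a 2D moving image with a displacement vector field.

    dvf has shape (2, H, W): for each fixed-image pixel x, the moving
    image is sampled at x + dvf[:, x] with linear interpolation."""
    h, w = moving.shape
    grid = np.mgrid[0:h, 0:w].astype(float)  # identity coordinates
    coords = grid + dvf                      # displaced sampling positions
    return map_coordinates(moving, coords, order=1, mode='nearest')

moving = np.arange(16, dtype=float).reshape(4, 4)
dvf = np.zeros((2, 4, 4))
dvf[1] = 1.0  # sample one pixel to the right -> image shifts left
warped = warp(moving, dvf)
```

In DIRNet this resampling is differentiable, so the similarity metric between the warped moving image and the fixed image can be back-propagated through to the ConvNet regressor.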
In this paper we propose a method to solve nonrigid image registration through a learning approach, instead of via iterative optimization of a predefined dissimilarity metric. We design a Convolutional Neural Network (CNN) architecture that, in contrast to all other work, directly estimates the displacement vector field (DVF) from a pair of input images. The proposed RegNet is trained using a large set of artificially generated DVFs, does not explicitly define a dissimilarity metric, and integrates image content at multiple scales to equip the network with contextual information. At testing time nonrigid registration is performed in a single shot, in contrast to current iterative methods. We tested RegNet on 3D chest CT follow-up data. The results show that the accuracy of RegNet is on par with a conventional B-spline registration, for anatomy within the capture range. Training RegNet with artificially generated DVFs is therefore a promising approach for obtaining good results on real clinical data, thereby greatly simplifying the training problem. Deformable image registration can therefore be successfully cast as a learning problem.
@inproceedings{Sokooti:2017,
author = {Sokooti, Hessam and de Vos, Bob and Berendsen, Floris and Lelieveldt, Boudewijn P.F. and I{\v{s}}gum, Ivana and Staring, Marius},
title = {Nonrigid Image Registration Using Multi-Scale 3D Convolutional Neural Networks},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Descoteaux, Maxime and Maier-Hein, Lena and Franz, Alfred and Jannin, Pierre and Collins, D. Louis and Duchesne, Simon},
address = {Quebec, Canada},
series = {Lecture Notes in Computer Science},
volume = {10433},
pages = {232 - 239},
month = sep,
year = {2017},
}
This paper reports a new automatic algorithm to estimate the misregistration in a quantitative manner. A random regression forest is constructed, predicting the local registration error. The forest is built using local and modality independent features related to the registration precision, the transformation model and intensity-based similarity after registration. The forest is trained and tested using manually annotated corresponding points between pairs of chest CT scans. The results show that the mean absolute error of regression is 0.72 ± 0.96 mm and the accuracy of classification in three classes (correct, poor and wrong registration) is 93.4%, comparing favorably to a competing method. In conclusion, a method was proposed that for the first time shows the feasibility of automatic registration assessment by means of regression, and promising results were obtained.
@inproceedings{Sokooti:2016,
author = {Sokooti, Hessam and Saygili, Gorkem and Glocker, Ben and Lelieveldt, Boudewijn P.F. and Staring, Marius},
title = {Accuracy Estimation for Medical Image Registration Using Regression Forests},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Ourselin, Sebastien and Joskowicz, Leo and Sabuncu, Mert R. and Unal, Gozde and Wells, William},
address = {Athens, Greece},
series = {Lecture Notes in Computer Science},
volume = {9902},
pages = {107 - 115},
month = oct,
year = {2016},
}
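The regression output (a local error in millimetres) is subsequently binned into the three quality classes. A toy sketch of that final step; the thresholds below are illustrative placeholders, not the paper's values:

```python
import numpy as np

def error_to_class(err_mm: np.ndarray, t_poor: float = 2.0, t_wrong: float = 5.0) -> np.ndarray:
    """Map predicted registration error (mm) to 0=correct, 1=poor, 2=wrong.

    Thresholds t_poor and t_wrong are illustrative placeholders."""
    cls = np.zeros_like(err_mm, dtype=int)
    cls[err_mm >= t_poor] = 1
    cls[err_mm >= t_wrong] = 2
    return cls

pred = np.array([0.5, 2.4, 7.1, 1.9])  # predicted errors from the regressor
print(error_to_class(pred))  # [0 1 2 0]
```

Casting the problem as regression first, then thresholding, retains the continuous error estimate for downstream use.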
In this paper we present SimpleElastix, an extension of SimpleITK designed to bring the Elastix medical image registration library to a wider audience. Elastix is a modular collection of robust C++ image registration algorithms that is widely used in the literature. However, its command-line interface introduces overhead during prototyping, experimental setup, and tuning of registration algorithms. By integrating Elastix with SimpleITK, Elastix can be used as a native library in Python, Java, R, Octave, Ruby, Lua, Tcl and C# on Linux, Mac and Windows. This allows Elastix to integrate naturally with many development environments so the user can focus more on the registration problem and less on the underlying C++ implementation. As a means of demonstration, we show how to register MR images of brains and natural pictures of faces using a minimal amount of code. SimpleElastix is open source, licensed under the permissive Apache License Version 2.0 and available at https://github.com/kaspermarstal/SimpleElastix.
@inproceedings{Marstal:2016,
author = {Marstal, Kasper and Berendsen, Floris and Staring, Marius and Klein, Stefan},
title = {SimpleElastix: A user-friendly, multi-lingual library for medical image registration},
booktitle = {International Workshop on Biomedical Image Registration (WBIR)},
editor = {Schnabel, Julia and Mori, Kensaku},
address = {Las Vegas, Nevada, USA},
series = {IEEE Conference on Computer Vision and Pattern Recognition Workshops},
pages = {574 - 582},
month = jul,
year = {2016},
}
A large diversity of image registration methodologies has emerged from the research community. The scattering of methods over toolboxes impedes rigorous comparison to select the appropriate method for a given application. Toolboxes typically tailor their implementations to a mathematical registration paradigm, which makes internal functionality nonexchangeable. Subsequently, this forms a barrier for adoption of registration technology in the clinic. We therefore propose a unifying, role-based software design that can integrate a broad range of functional registration components. These components can be configured into an algorithmic network via a single high-level user interface. A generic component handshake mechanism provides users feedback on incompatibilities. We demonstrate the viability of our design by incorporating two paradigms from different code bases. The implementation is done in C++ and is available as open source. The progress of embedding more paradigms can be followed via https://github.com/kaspermarstal/SuperElastix.
@inproceedings{Berendsen:2016,
author = {Berendsen, Floris and Marstal, Kasper and Klein, Stefan and Staring, Marius},
title = {The design of SuperElastix - a unifying framework for a wide range of image registration methodologies},
booktitle = {International Workshop on Biomedical Image Registration (WBIR)},
editor = {Schnabel, Julia and Mori, Kensaku},
address = {Las Vegas, Nevada, USA},
series = {IEEE Conference on Computer Vision and Pattern Recognition Workshops},
pages = {498 - 506},
month = jul,
year = {2016},
}
In rectal cancer patients large day-to-day target volume deformations occur, leading to large PTV margins. The introduction of MR-guided RT, with its excellent soft-tissue contrast, facilitates adaptive procedures such as online re-planning with smaller margins. Time constraints demand automatic contouring of the daily MR. A possible fast solution is contour propagation with deformable image registration (DIR). In rectal cancer patients DIR is challenging because of large local deformations of the CTV (meso-rectum) caused by (dis)appearing rectal and bladder content. To deal with this challenge, pre-treatment delineations can be used to define a region-of-interest (ROI) that limits DIR to the relevant part of the anatomy and allows excluding regions with (dis)appearing content. In this study, we investigate optimal parameter settings of MR-to-MR DIR, in the context of contour propagation of the meso-rectum.
@inproceedings{Uilkema:2016,
author = {Uilkema, Sander and van der Heide, Uulke A. and Sonke, Jan-Jakob and Staring, Marius and Nijkamp, Jasper},
title = {MR-based contour propagation for rectal cancer patients},
booktitle = {18th International Conference on the use of Computers in Radiation Therapy},
editor = {Oelfke, Uwe and Partridge, Mike},
address = {London, UK},
month = jun,
year = {2016},
}
Accurate lung vessel segmentation is an important operation for lung CT analysis. Hessian-based filters are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is inaccurate. Some studies therefore turn to graph cuts for more accurate segmentation, as it incorporates neighbourhood information. In this work, we propose a new graph cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. In order to make the graph representation computationally tractable, voxels that are considered clearly background are removed using a low threshold on the vesselness map. The graph structure is then established based on the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph cuts optimization framework. We optimized the parameters and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used the 20 CT scans of the VESSEL12 challenge. The evaluation results on the sub-volumes dataset show that the proposed method produces a more accurate vessel segmentation. For the VESSEL12 dataset, our method obtained a competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
@inproceedings{Zhai:2016,
author = {Zhai, Zhiwei and Staring, Marius and Stoel, Berend C.},
title = {Lung vessel segmentation in CT images using graph cuts},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Styner, Martin A. and Angelini, Elsa A.},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {9784},
pages = {97842K - 97842K-8},
month = feb,
year = {2016},
}
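The shape term above comes from a Hessian-based vesselness filter: bright tubular structures yield one near-zero and one strongly negative Hessian eigenvalue. A simplified 2D Frangi-style sketch (illustrative, not the paper's exact filter or parameter values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness2d(img: np.ndarray, sigma: float = 2.0,
                 beta: float = 0.5, c: float = 15.0) -> np.ndarray:
    """Simplified Frangi-style vesselness for bright 2D lines (illustrative)."""
    # Second-order Gaussian derivatives form the Hessian entries
    haa = gaussian_filter(img, sigma, order=(2, 0))
    hbb = gaussian_filter(img, sigma, order=(0, 2))
    hab = gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the symmetric 2x2 Hessian, ordered so |l1| <= |l2|
    tmp = np.sqrt((haa - hbb) ** 2 + 4 * hab ** 2)
    l1 = 0.5 * (haa + hbb + tmp)
    l2 = 0.5 * (haa + hbb - tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # line vs blob measure
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                          # keep bright-on-dark lines only
    return v

# Bright horizontal line on a dark background responds strongly
img = np.zeros((32, 32))
img[16, :] = 100.0
v = vesselness2d(img)
```

Thresholding this map yields candidate vessel voxels; the paper combines it with CT intensity inside a graph cuts energy instead of thresholding alone.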
Image registration is often very slow because of the high dimensionality of the images and complexity of the algorithms. Adaptive stochastic gradient descent (ASGD) outperforms deterministic gradient descent and even quasi-Newton in terms of speed. This method, however, only exploits first-order information of the cost function. In this paper, we explore a stochastic quasi-Newton method (s-LBFGS) for non-rigid image registration. It uses the classical limited memory BFGS method in combination with noisy estimates of the gradient. Curvature information of the cost function is estimated once every L iterations and then used for the next L iterations in combination with a stochastic gradient. The method is validated on follow-up data of 3D chest CT scans (19 patients), using a B-spline transformation model and a mutual information metric. The experiments show that the proposed method is robust, efficient and fast. s-LBFGS obtains a similar accuracy as ASGD and deterministic LBFGS. Compared to ASGD the proposed method uses about 5 times fewer iterations to reach the same metric value, resulting in an overall reduction in run time of a factor of two. Compared to deterministic LBFGS, s-LBFGS is almost 500 times faster.
@inproceedings{Qiao:2015,
author = {Qiao, Yuchuan and Sun, Z. and Lelieveldt, Boudewijn P.F. and Staring, Marius},
title = {A Stochastic Quasi-Newton Method for Non-rigid Image Registration},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Navab, N. and Hornegger, J. and Wells, W.M. and Frangi, A.F.},
address = {M{\"u}nchen, Germany},
series = {Lecture Notes in Computer Science},
volume = {9350},
pages = {297 - 304},
month = sep,
year = {2015},
}
Longitudinal brain image series offer the possibility to study individual brain anatomical changes over time. Mathematical models are needed to study such developmental trajectories in detail. In this paper, we present a novel approach to study the individual brain anatomy over time via a linear geodesic shape regression method. In our method, we integrate separate pairwise registrations between the baseline image and the follow-up images into a unified spatial registration plus temporal regression framework. Different from previous geodesic shape regression approaches, which use the LDDMM framework to estimate the brain anatomical change over time, our method is based on the LogDemons method to decrease the computation cost, while maintaining the diffeomorphic property of the deformation over time. Moreover, a temporal regression constraint is explicitly implemented in each optimization iteration to make sure that the entire developmental trajectory can be compactly represented by the baseline image and an optimal stationary velocity field. Our method is mathematically well founded in the Alternating Direction Method of Multipliers (ADMM), which for our image regression application is interpreted in diffeomorphic space instead of Euclidean space. We evaluate our new method on 2D synthetic images and real 3D brain longitudinal image series, and the experiments show promising results in regression accuracy as well as estimated deformations.
@inproceedings{Sun:2015,
author = {Sun, Zhuo and Lelieveldt, Boudewijn P.F. and Staring, Marius},
title = {Fast Linear Geodesic Shape Regression Using Coupled Logdemons Registration},
booktitle = {IEEE International Symposium on Biomedical Imaging (ISBI)},
editor = {Angelini, Elsa and Kovacevic, Jelena},
address = {New York, USA},
pages = {1276 - 1279},
month = apr,
year = {2015},
}
This paper describes a novel method for segmentation and modeling of branching vessel structures in medical images using adaptive subdivision surface fitting. The method starts with a rough initial skeleton model of the vessel structure. A coarse triangular control mesh consisting of hexagonal rings and dedicated bifurcation elements is constructed from this skeleton. Special attention is paid to ensure a topologically sound control mesh is created around the bifurcation areas. Then, a smooth tubular surface is obtained from this coarse mesh using a standard subdivision scheme. This subdivision surface is iteratively fitted to the image. During the fitting, the target update locations of the subdivision surface are obtained using a scanline search along the surface normals, finding the maximum gradient magnitude (of the imaging data). In addition to this surface fitting framework, we propose an adaptive mesh refinement scheme. In this step the coarse control mesh topology is updated based on the current segmentation result, enabling adaptation to varying vessel lumen diameters. This enhances the robustness and flexibility of the method and reduces the amount of prior knowledge needed to create the initial skeletal model. The method was applied to publicly available CTA data from the Carotid Bifurcation Algorithm Evaluation Framework, resulting in an average Dice index of 89.2% with the ground truth. The method was also applied to the complex vascular structure of a coronary artery tree in CTA and to MRI data to show the versatility and flexibility of the proposed framework.
@inproceedings{Kitslaar:2015,
author = {Kitslaar, Pieter H. and van 't Klooster, Ronald and Staring, Marius and Lelieveldt, Boudewijn P.F. and van der Geest, Rob J.},
title = {Segmentation of Branching Vascular Structures using Adaptive Subdivision Surface Fitting},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Ourselin, Sebastien and Styner, Martin A.},
address = {Orlando, Florida, USA},
series = {Proceedings of SPIE},
volume = {9413},
pages = {94133Z},
month = feb,
year = {2015},
}
In medical imaging, registration is used to combine images containing information from different modalities or to track treatment effects over time in individual patients. Most registration software packages do not provide an easy-to-use interface that facilitates the use of registration. 2D visualization techniques are often used for visualizing 3D datasets.
RegistrationShop was developed to improve and ease the process of volume registration using 3D visualizations and intuitive interactive tools. It supports several basic visualizations of 3D volumetric data. Interactive rigid and non-rigid transformation tools can be used to manipulate the volumes and immediate visual feedback for all rigid transformation tools allows the user to examine the current result in real-time. In this context, we introduce 3D comparative visualization techniques, as well as a way of placing landmarks in 3D volumes. Finally, we evaluated our approach with domain experts, who underlined the potential and usefulness of RegistrationShop.
@inproceedings{Smit:2014,
author = {Smit, Noeska N. and Klein Haneveld, Berend and Staring, Marius and Eisemann, Elmar and Botha, Charl P. and Vilanova, Anna},
title = {RegistrationShop: An Interactive 3D Medical Volume Registration System},
booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
editor = {Viola, I and Buehler, K. and Ropinski, T.},
address = {Vienna, Austria},
pages = {145 - 153},
month = sep,
year = {2014},
}
Image registration is often used in the clinic, for example during radiotherapy and image-guided surgery, but also for general image analysis. Currently, this process is often very slow, yet for intra-operative procedures speed is crucial. For intensity-based image registration, a nonlinear optimization problem has to be solved, usually by (stochastic) gradient descent. This procedure relies on a proper setting of a parameter that controls the optimization step size. This parameter is difficult to choose manually, however, since it depends on the input data, optimization metric and transformation model. Previously, the Adaptive Stochastic Gradient Descent (ASGD) method has been proposed that automatically chooses the step size, but it comes at high computational cost. In this paper, we propose a new computationally efficient method to automatically determine the step size, by considering the observed distribution of the voxel displacements between iterations. A relation between the step size and the expectation and variance of the observed distribution is then derived. Experiments have been performed on 3D lung CT data (19 patients) using a nonrigid B-spline transformation model. For all tested dissimilarity metrics (mean squared distance, normalized correlation, mutual information, normalized mutual information), we obtained similar accuracy as ASGD. Compared to ASGD, whose estimation time increases progressively with the number of parameters, the estimation time of the proposed method is substantially reduced to an almost constant time, from 40 seconds to no more than 1 second when the number of parameters is 10^5.
@inproceedings{Qiao:2014,
author = {Qiao, Yuchuan and Lelieveldt, Boudewijn P.F. and Staring, Marius},
title = {Fast automatic estimation of the optimization step size for nonrigid image registration},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Ourselin, Sebastien and Styner, Martin A.},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {9034},
pages = {90341A},
month = feb,
year = {2014},
}
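The core idea above is to pick the step size so that the voxel displacements per iteration have a desired magnitude. A toy sketch of that relation (illustrative only, not the paper's estimator):

```python
import numpy as np

def estimate_step_size(disp_per_unit_step: np.ndarray, delta: float) -> float:
    """Choose step size gamma so the mean voxel displacement per iteration ≈ delta.

    disp_per_unit_step holds, per sampled voxel, the displacement vector
    induced by a unit gradient descent step (transform Jacobian times cost
    gradient). Toy sketch of the idea, not the paper's derivation."""
    mags = np.linalg.norm(disp_per_unit_step, axis=1)
    return delta / (mags.mean() + 1e-12)

rng = np.random.default_rng(0)
d_unit = rng.normal(size=(1000, 3))            # displacements for a unit step
gamma = estimate_step_size(d_unit, delta=0.5)  # aim for 0.5 voxel per iteration
mean_disp = np.linalg.norm(gamma * d_unit, axis=1).mean()
```

Scaling the step this way keeps the per-iteration motion at a controlled, resolution-independent magnitude, which is what makes the estimation time nearly constant in the number of parameters.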
Whole-body MR receives increasing interest as a potential alternative to many conventional diagnostic methods. Typical whole-body MR scans contain multiple data channels and are acquired in a multistation manner. Quantification of such data typically requires correction of two types of artefacts: different intensity scaling on each acquired image stack, and intensity inhomogeneity (bias) within each stack. In this work, we present an all-in-one method that is able to correct for both mentioned types of acquisition artefacts. The most important properties of our method are: 1) All the processing is performed jointly on all available data channels, which is necessary for preserving the relation between them, and 2) It allows easy incorporation of additional knowledge for estimation of the bias field. Validation performed on two types of whole-body MR data confirmed the superior performance of our approach in comparison with state-of-the-art bias removal methods.
@inproceedings{Dzyubachyk:2013,
author = {Dzyubachyk, Oleh and van der Geest, Rob J. and Staring, Marius and B{\"o}rnert, Peter and Reijnierse, Monique and Bloem, Johan L. and Lelieveldt, Boudewijn P.F.},
title = {Joint Intensity Inhomogeneity Correction for Whole-Body MR Data},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Mori, K. and Sakuma, I. and Sato, Y. and Barillot, C. and Navab, N.},
address = {Nagoya, Japan},
series = {Lecture Notes in Computer Science},
volume = {8149},
pages = {106 - 113},
month = sep,
year = {2013},
}
Pulmonary fissures are important landmarks for automated recognition of lung anatomy. We propose a derivative of stick (DoS) filter for fissure detection in CT scans by considering the thin linear shape across multiple transverse planes. Based on a stick decomposition of a rectangle neighborhood, our main contribution is to define a nonlinear derivative vertical to the stick orientation. Then, combined with the standard deviation of intensity along the stick, the composed likelihood function gives a strong response to fissure-like bright lines, and tends to suppress undesired structures including large vessels, step edges and blobs. Applying the 2D filter sequentially to the sagittal, coronal and axial planes, an approximate 3D co-planar constraint is implicitly exerted through the cascaded pipeline, which helps to further remove the non-fissure tissues. To generate a clear segmentation, we adopt a connected component based post-processing scheme, and a branch-point finding algorithm is introduced to disconnect the residual adjacent clutters from the fissures, after binarizing the filter response with a relatively low threshold. The performance of our filter has been verified in experiments on a dataset of 23 patients, in which pathological deformations to different extents are included. It compared favorably with prior algorithms.
@inproceedings{Xiao:2013,
author = {Xiao, Changyan and Staring, Marius and Wang, Juan and Shamonin, Denis P. and Stoel, Berend C.},
title = {A Derivative of Stick Filter for Pulmonary Fissure Detection in CT Images},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Ourselin, S. and Haynor, D.R.},
address = {Orlando, Florida, USA},
series = {Proceedings of SPIE},
volume = {8669},
pages = {86690V},
month = feb,
year = {2013},
}
Multi-contrast MRI is a frequently used imaging technique in preclinical brain imaging. In longitudinal cross-sectional studies, exploiting and browsing through this high-throughput, heterogeneous data can become a very demanding task. The goal of this work was to build an intuitive and easy to use, dedicated visualization and side-by-side exploration tool for heterogeneous, co-registered multi-contrast, follow-up cross-sectional MRI data. Registration by-products were exploited: the deformation field was used to automatically link the same voxel in the displayed datasets of interest, and the determinant of its Jacobian (detJac) was used for a faster and more accurate visual assessment and comparison of brain deformation between the follow-up scans. This was combined with an efficient data management scheme. We investigated the functionality and the utility of our tool in the neuroimaging research field by means of a case study evaluation with three experienced domain scientists, using longitudinal, cross-sectional multi-contrast MRI rat brain data. Based on the performed case study evaluation we can conclude that the proposed tool improves the visual assessment of high-throughput cross-sectional, multi-contrast, follow-up data and can further assist in guiding quantitative studies.
@inproceedings{Khmelinskii:2013,
author = {Khmelinskii, A. and Mengler, L. and Kitslaar, P. and Staring, M. and Hoehn, M. and Lelieveldt, B.P.F.},
title = {A visualization platform for high-throughput, follow-up, co-registered multi-contrast MRI rat brain data},
booktitle = {SPIE Medical Imaging: Biomedical Applications in Molecular, Structural, and Functional Imaging},
editor = {Weaver, J.B. and Molthen, R.C.},
address = {Orlando, Florida, USA},
series = {Proceedings of SPIE},
volume = {8672},
pages = {86721W},
month = feb,
year = {2013},
}
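The determinant of the Jacobian (detJac) used above summarizes local volume change of a deformation: values above 1 indicate expansion, below 1 compression. A minimal 2D finite-difference sketch (illustrative):

```python
import numpy as np

def jacobian_determinant_2d(dvf: np.ndarray) -> np.ndarray:
    """detJac of the transform T(x) = x + u(x) for a 2D displacement field.

    dvf has shape (2, H, W); derivatives are taken with central finite
    differences via np.gradient."""
    du0_d0, du0_d1 = np.gradient(dvf[0])
    du1_d0, du1_d1 = np.gradient(dvf[1])
    # J = I + du/dx, so detJac = (1 + u0_0)(1 + u1_1) - u0_1 * u1_0
    return (1 + du0_d0) * (1 + du1_d1) - du0_d1 * du1_d0

# Uniform 10% expansion along both axes: detJac = 1.1 * 1.1 = 1.21 everywhere
h = w = 8
g0, g1 = np.mgrid[0:h, 0:w].astype(float)
dvf = np.stack([0.1 * g0, 0.1 * g1])
dj = jacobian_determinant_2d(dvf)
```

Mapping detJac to a color scale over the anatomy is the kind of visual deformation assessment the tool above provides.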
For multi atlas-based segmentation approaches, a segmentation fusion scheme which considers local performance measures may be more accurate than a method which uses a global performance measure. We improve upon an existing segmentation fusion method called SIMPLE and extend it to be localized and suitable for multi-labeled segmentations. We demonstrate the algorithm's performance on 23 CT scans of COPD patients using a leave-one-out experiment. Our algorithm performs significantly better (p < 0.01) than majority voting, STAPLE, and SIMPLE, with a median overlap of the fissure of 0.45, 0.48, 0.55 and 0.6 for majority voting, STAPLE, SIMPLE, and the proposed algorithm, respectively.
@inproceedings{Agarwal:2012,
author = {Agarwal, Maruti and Bakker, M. Els and Hendriks, Emile A. and Stoel, Berend C. and Reiber, Johan H.C. and Staring, Marius},
title = {Local SIMPLE Multi Atlas-Based Segmentation Applied to Lung Lobe Detection on Chest CT},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Haynor, D.R. and Ourselin, S.},
address = {San Diego, California, USA},
series = {Proceedings of SPIE},
volume = {8314},
pages = {831410},
month = feb,
year = {2012},
}
We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures and removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures.
The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.
@inproceedings{Shamonin:2012,
author = {Shamonin, Denis P. and Staring, Marius and Bakker, M. Els and Xiao, Changyan and Stolk, Jan and Reiber, Johan H.C. and Stoel, Berend C.},
title = {Automatic Lung Lobe Segmentation of COPD Patients using Iterative B-Spline Fitting},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Haynor, D.R. and Ourselin, S.},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {8314},
pages = {83140W},
month = feb,
year = {2012},
}
Nonrigid registration is a technique to recover spatial deformations between images. It can be formulated as an optimization problem that minimizes the image dissimilarity. A regularization term, usually employed in a homogeneous or spatially variant fashion, is added to reduce undesirable deformations. When spatially variant regularization is used in nonrigid registration of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), the local coefficients have so far been determined by manual segmentation of the tissues of interest. We propose a framework to generate regularization coefficients for nonrigid registration in DCE-MRI, where tumor locations are to be transformed in a rigid fashion. The coefficients are obtained by applying a sigmoid function to subtraction images from a pre-registration. All parameters of the function are automatically determined using k-means clustering. The validation study compares three regularization weighting schemes in nonrigid registration: a constant coefficient for a volume-preserving term, binary coefficients obtained by manual segmentation, and real-valued coefficients obtained with the proposed method on a rigidity term. Evaluation is performed using displacements, intensity changes and volume changes of tumors on synthetic and clinical DCE-MR breast images. As a result, registration using spatially variant rigidity terms performs better than using homogeneous volume-preserving terms. Among the coefficient generation methods for a rigidity term, the proposed method can replace the binary coefficients that require manual tumor segmentation.
@inproceedings{Liang:2011,
author = {Liang, Xi and Kotagiri, R. and Yang, Q. and Staring, Marius and Pitman, A.},
title = {Generating Coefficients for Regularization Terms in Nonrigid Registration of Contrast-Enhanced MRI},
booktitle = {Medical Image Computing and Computer-Assisted Intervention, Workshop on Breast Image Analysis},
editor = {Tanner, C. and Schnabel, J. and Karssemeijer, N. and Nielsen, Mads and Giger, M. and Hawkes, D.J.},
address = {Toronto, Canada},
pages = {1 - 8},
month = sep,
year = {2011},
}
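The coefficient-generation step applies a sigmoid to the absolute subtraction image so that strongly enhancing regions (tumor) receive rigidity coefficients near 1. A toy sketch; the parameters a and b below are illustrative placeholders (the paper estimates them automatically with k-means):

```python
import numpy as np

def rigidity_coefficients(diff: np.ndarray, a: float = 2.0, b: float = 1.5) -> np.ndarray:
    """Map absolute subtraction-image values to [0, 1] rigidity coefficients.

    Large enhancement differences -> coefficient near 1 (keep rigid);
    small differences -> near 0 (deform freely). a (steepness) and
    b (midpoint) are illustrative placeholders."""
    return 1.0 / (1.0 + np.exp(-a * (np.abs(diff) - b)))

diff = np.array([0.0, 1.5, 5.0])   # toy subtraction-image values
coeff = rigidity_coefficients(diff)
```

The continuous [0, 1] output avoids the hard tumor/non-tumor decision a binary segmentation would impose.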
In vivo MicroCT imaging of disease models at multiple time points is of great importance for preclinical oncological research, to monitor disease progression. However, the great postural variability between animals in the imaging device complicates data comparison.
In this paper we propose a method for automated registration of whole-body MicroCT follow-up datasets of mice. First, we register the skeleton, the lungs and the skin of an articulated animal atlas (Segars et al. 2004) to MicroCT datasets, yielding point correspondence of these structures over all time points. This correspondence is then used to regularize an intensity-based B-spline registration. This two-step approach combines the robustness of model-based registration with the high accuracy of intensity-based registration.
We demonstrate our approach using challenging whole-body in vivo follow-up MicroCT data and obtain subvoxel accuracy for the skeleton and the skin, based on the Euclidean surface distance. The method is computationally efficient and enables high resolution whole-body registration in ≈17 minutes with unoptimized code, mostly executed single-threaded.
@inproceedings{Baiker:2011,
author = {Baiker, Martin and Staring, Marius and L{\"o}wik, Clemens W.G.M. and Reiber, Johan H.C. and Lelieveldt, Boudewijn P.F.},
title = {Automated Registration of Whole-Body Follow-Up MicroCT Data of Mice},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Fichtinger, G. and Martel, A. and Peters, T.},
address = {Toronto, Canada},
series = {Lecture Notes in Computer Science},
volume = {6892},
pages = {516 - 523},
month = sep,
year = {2011},
}
We present a stochastic optimisation method for intensity-based monomodal image registration. The method is based on a Robbins-Monro stochastic gradient descent method with adaptive step size estimation, and adds a preconditioning matrix. The derivation of the preconditioner is based on the observation that, after registration, the deformed moving image should approximately equal the fixed image. This prior knowledge allows us to approximate the Hessian at the minimum of the registration cost function, without knowing the coordinate transformation that corresponds to this minimum. The method is validated on 3D fMRI time-series and 3D CT chest follow-up scans. The experimental results show that the preconditioning strategy improves the rate of convergence.
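The optimisation idea translates to a short numpy sketch on a toy quadratic cost; the decaying gain sequence a/(k+1+A)^alpha and the additive noise model are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def preconditioned_rm_sgd(grad, x0, P, a=1.0, A=10.0, alpha=0.602,
                          iters=300, noise=0.05, seed=0):
    """Robbins-Monro stochastic gradient descent with a fixed preconditioner P
    (an approximation of the inverse Hessian at the minimum). The noise term
    mimics stochastic evaluations of the cost-function derivative."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = grad(x) + noise * rng.standard_normal(x.shape)
        step = a / (k + 1 + A) ** alpha          # decaying gain sequence
        x = x - step * (P @ g)
    return x

# Toy ill-conditioned quadratic 0.5 x^T H x; preconditioning with H^{-1}
# equalises the convergence rate along both axes.
H = np.diag([100.0, 1.0])
P = np.linalg.inv(H)
xopt = preconditioned_rm_sgd(lambda x: H @ x, [5.0, 5.0], P)
```

Without the preconditioner, the step size would have to be tuned to the stiffest direction, slowing convergence along the others; this is the effect the paper's prior-knowledge Hessian approximation removes.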
@inproceedings{Klein:2011,
author = {Klein, Stefan and Staring, Marius and Andersson, Patrik and Pluim, Josien P.W.},
title = {Preconditioned Stochastic Gradient Descent Optimisation for Monomodal Image Registration},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Fichtinger, G. and Martel, A. and Peters, T.},
address = {Toronto, Canada},
series = {Lecture Notes in Computer Science},
volume = {6892},
pages = {549 - 556},
month = sep,
year = {2011},
}
The advantage of 2D-3D image registration methods over direct image-to-patient registration is that these methods generally do not require user interaction (such as manual annotations), additional machinery or additional acquisition of 3D data.
A variety of intensity-based similarity measures has been proposed and evaluated for different applications. These studies showed that the registration accuracy and capture range are influenced by the choice of similarity measure. However, the influence of the optimization method on intensity-based 2D-3D image registration has not been investigated. We have compared the registration performance of seven optimization methods in combination with three similarity measures: gradient difference, gradient correlation, and pattern intensity. Optimization methods included in this study were: regular step gradient descent, Nelder-Mead, Powell-Brent, Quasi-Newton, nonlinear conjugate gradient, simultaneous perturbation stochastic approximation, and evolution strategy. Registration experiments were performed on multiple patient data sets that were obtained during cerebral interventions. Various component combinations were evaluated on registration accuracy, capture range, and registration time. The results showed that for the same similarity measure, different registration accuracies and capture ranges were obtained when different optimization methods were used. For gradient difference, largest capture ranges were obtained with Powell-Brent and simultaneous perturbation stochastic approximation. Gradient correlation and pattern intensity had the largest capture ranges in combination with Powell-Brent, Nelder-Mead, nonlinear conjugate gradient, and Quasi-Newton. Average registration time, expressed in the number of DRRs required for convergence, was the lowest for Powell-Brent. Based on these results, we conclude that Powell-Brent is a reliable optimization method for intensity-based 2D-3D registration of x-ray images to CBCT, regardless of the similarity measure used.
@inproceedings{vanderBom:2011,
author = {van der Bom, Martijn J. and Klein, Stefan and Staring, Marius and Homan, R. and Bartels, L. Wilbert and Pluim, Josien P.W.},
title = {Evaluation of optimization methods for intensity-based 2D-3D registration in x-ray guided interventions},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Dawant, B.M. and Haynor, D.R.},
address = {Orlando, Florida, USA},
series = {Proceedings of SPIE},
volume = {7962},
pages = {796223},
month = feb,
year = {2011},
}
Nonrigid image registration is an important, but resource-demanding and time-consuming task in medical image analysis. This limits its application in time-critical clinical routines. In this paper we explore acceleration of a registration algorithm by means of parallel processing. The serial algorithm is analysed and automatically rewritten (re-coded) by a recently introduced automatic parallelisation tool, Daedalus. Daedalus identifies task parallelism (which is more difficult than data parallelism) and converts the serial algorithm to a Polyhedral Process Network (PPN). Each process node in the PPN corresponds to a task that is mapped to a separate thread (of the CPU, but possibly also the GPU). The threads communicate via first-in-first-out (FIFO) buffers. Difficulties such as deadlocks, race conditions and synchronisation issues are automatically taken care of by Daedalus. Data parallelism is not automatically recognised by Daedalus, but can be achieved by manually pre-factoring the serial code to make it explicit. We evaluated the performance gain on a 4-core CPU and compared it to an OpenMP implementation exploiting only data parallelism. A speedup factor of 3.4 was realised using Daedalus, versus 2.6 using OpenMP. The automated Daedalus approach thus seems a promising means of accelerating image registration based on task parallelisation.
@inproceedings{Farago:2010,
author = {Farago, Tamas and Nikolov, Hristo and Klein, Stefan and Reiber, Johan H.C. and Staring, Marius},
title = {Semi-Automatic Parallelisation for Iterative Image Registration with B-splines},
booktitle = {Medical Image Computing and Computer-Assisted Intervention, High Performance workshop},
editor = {Gong, Leiguang and others},
address = {Beijing, China},
month = sep,
year = {2010},
}
Accurate registration of thoracic CT is clinically useful, but also challenging due to the elastic nature of lung tissue deformations. The goal of the EMPIRE10 challenge (Evaluation of Methods for Pulmonary Image Registration 2010), a workshop of the MICCAI 2010 conference, is to provide a platform for in-depth evaluation and fair comparison of available registration algorithms for this application. To this end we entered the challenge as team RubberBand. The goal of our submission is to determine what a standard, but fully automatic, intensity-based image registration algorithm can achieve compared to the competition.
The algorithm, implemented in elastix, optimises the normalised correlation criterion, using a fast, parameter-free and robust stochastic optimisation procedure. A combination of an affine and two nonrigid B-spline transformations models the spatial relationship. The approach is embedded in a multi-resolution framework for both the image data and the transformation. No explicit regularisation is used.
Of the 34 submitted algorithms, our contribution achieved 7th place with an average rank of 13.13 (best 8.03, worst 31.46). The incorporation of a regularisation term may improve the ranking of the algorithm, since our final score was most negatively influenced by the score for folding.
@inproceedings{Staring:2010,
author = {Staring, Marius and Klein, Stefan and Reiber, Johan H.C. and Niessen, Wiro J. and Stoel, Berend C.},
title = {Pulmonary Image Registration With elastix Using a Standard Intensity-Based Algorithm},
booktitle = {Medical Image Computing and Computer-Assisted Intervention, Medical Image Analysis for the Clinic: A Grand Challenge},
editor = {van Ginneken, Bram and Murphy, Keelin and Heimann, Tobias and Pekar, Vladimir and Deng, Xiang},
address = {Beijing, China},
month = sep,
year = {2010},
}
Traditional Hessian-related vessel filters often have trouble handling non-cylindrical objects. To remedy this shortcoming, we present a shape-tuned strain energy density function to measure vessel likelihood in 3D images. Based on the tensor invariants and the stress-strain principle in mechanics, a new shape-discriminating and vessel strength measure function is formulated. Experiments on synthetic and clinical data verify the performance of our method in enhancing complex vascular structures including branches, bifurcations, and feature details.
@inproceedings{Xiao:2010,
author = {Xiao, Changyan and Staring, Marius and Shamonin, Denis P. and Reiber, Johan H.C. and Stolk, Jan and Stoel, Berend C.},
title = {A Strain Energy Filter for 3D Vessel Enhancement},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Jiang, T. and others},
address = {Beijing, China},
series = {Lecture Notes in Computer Science},
volume = {6363},
pages = {367 - 374},
year = {2010},
}
Non-rigid registration of MR images to a common reference image results in deformation fields, from which anatomical differences can be statistically assessed, within and between populations. Without further assumptions, nonparametric tests are required, and currently the analysis of deformation fields is performed by permutation tests. For deformation fields, often the vector magnitude is chosen as test statistic, resulting in a loss of information. In this paper, we consider the three-dimensional Moore-Rayleigh test as an alternative to permutation tests. This nonparametric test offers two novel features: first, it incorporates both the directions and the magnitude of the deformation vectors; second, as its distribution function is available in closed form, the test statistic can be used in a clinical setting. Using synthetic data that represent variations as commonly encountered in clinical data, we show that the Moore-Rayleigh test outperforms the classical permutation test.
@inproceedings{Scheenstra:2009,
author = {Scheenstra, Alize E.H. and Muskulus, M. and Staring, Marius and van den Maagdenberg, A.M.J.V. and Verduyn Lunel, S. and Reiber, Johan H.C. and van der Weerd, L. and Dijkstra, Jouke},
title = {The 3D Moore-Rayleigh Test for the Quantitative Groupwise Comparison of MR Brain Images},
booktitle = {Information Processing in Medical Imaging},
editor = {Prince, J. L. and Pham, D.L. and Myers, K.J.},
address = {Williamsburg, Virginia, USA},
series = {Lecture Notes in Computer Science},
volume = {5636},
pages = {564 - 575},
month = jul,
year = {2009},
}
Progression measurement of emphysema is required to evaluate the health condition of a patient and the effect of drugs. To locally estimate progression we use image registration, which allows for volume correction using the determinant of the Jacobian of the transformation. We introduce an adaptation of the so-called sponge model that circumvents its constant-mass assumption. Preliminary results from CT scans of a lung phantom and from CT data sets of two patients suggest that image registration may be a suitable method to locally estimate emphysema progression.
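The volume correction mentioned above, via the determinant of the Jacobian of the transformation, can be sketched in 2-D with finite differences; an illustrative numpy fragment, not the registration code used in the paper:

```python
import numpy as np

def jacobian_determinant_2d(ux, uy, spacing=(1.0, 1.0)):
    """det(J) of the transformation T(x) = x + u(x), for a displacement
    field (ux, uy) sampled on a regular grid (axis 0 = x, axis 1 = y),
    using central finite differences. det(J) < 1 indicates local
    compression, det(J) > 1 local expansion."""
    dux_dx, dux_dy = np.gradient(ux, *spacing)
    duy_dx, duy_dy = np.gradient(uy, *spacing)
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
```

For a uniform 10% expansion (u(x) = 0.1 x) this returns 1.1 × 1.1 = 1.21 everywhere, the factor by which local tissue volume (area, in 2-D) has grown.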
@inproceedings{Staring:2009,
author = {Staring, Marius and Bakker, M. Els and Shamonin, Denis P. and Stolk, Jan and Reiber, Johan H.C. and Stoel, Berend C.},
title = {Towards Local Estimation of Emphysema Progression Using Image Registration},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Pluim, J.P.W. and Dawant, B.M.},
address = {Orlando, Florida, USA},
series = {Proceedings of SPIE},
volume = {7259},
pages = {72590O},
month = feb,
year = {2009},
}
An algorithm is presented for the efficient semi-automatic construction of a detailed reference standard for registration in thoracic CT. A well-distributed set of 100 landmarks is detected fully automatically in one scan of a pair to be registered. Using a custom-designed interface, observers locate corresponding anatomic locations in the second scan. The manual annotations are used to learn the relationship between the scans and after approximately twenty manual marks the remaining points are matched automatically. Inter-observer differences demonstrate the accuracy of the matching and the applicability of the reference standard is demonstrated on two different sets of registration results over 19 CT scan pairs.
@inproceedings{Murphy:2008b,
author = {Murphy, Keelin and van Ginneken, Bram and Pluim, Josien P.W. and Klein, Stefan and Staring, Marius},
title = {Semi-automatic Reference Standard Construction for Quantitative Evaluation of Lung CT Registration},
booktitle = {Medical Image Computing and Computer-Assisted Intervention},
editor = {Fichtinger, G. and Martel, A. and Peters, T.},
series = {Lecture Notes in Computer Science},
volume = {5242},
pages = {1006 - 1013},
month = sep,
year = {2008},
}
A novel method for quantitative evaluation of registration systems in thoracic CT is utilised to examine the effects of varying system parameters on registration error. Regional analysis is implemented to determine whether registration error is more prevalent in particular areas of the lungs. Experiments on twenty-four CT scan pairs show that in many cases significant reductions in processing time can be achieved without much loss of registration accuracy. More difficult cases require additional steps in order to achieve maximum precision. Larger errors appear more frequently in the lower regions of the lungs, close to the diaphragm.
@inproceedings{Murphy:2008a,
author = {Murphy, Keelin and van Ginneken, Bram and Pluim, Josien P.W. and Klein, Stefan and Staring, Marius},
title = {Quantitative Assessment of Registration in Thoracic CT},
booktitle = {The First International Workshop on Pulmonary Image Analysis},
editor = {Brown, Matthew and de Bruijne, Marleen and van Ginneken, Bram and Kiraly, Atilla and Kuhnigk, Jan Martin and Lorenz, Cristian and Mori, Kensaku and Reinhardt, Joseph},
address = {New York, USA},
pages = {203 - 211},
month = sep,
year = {2008},
}
Atlas-based segmentation is a popular generic technique for automated delineation of structures in volumetric data sets. Several studies have shown that multi-atlas based segmentation methods outperform schemes that use only a single atlas, but running multiple registrations on large volumetric data is too time-consuming for routine clinical use. We propose a generally applicable adaptive local multi-atlas segmentation method (ALMAS) that locally decides how many and which atlases are needed to segment a target image. Only the selected parts of atlases are registered. The method is iterative and automatically stops when no further improvement is expected. ALMAS was applied to segmentation of the heart on chest CT scans and compared to three existing atlas-based methods. It performed significantly better than single-atlas methods and as well as multi-atlas methods, at a much lower computational cost.
@inproceedings{vanRikxoort:2008,
author = {van Rikxoort, Eva and I{\v{s}}gum, Ivana and Staring, Marius and Klein, Stefan and van Ginneken, Bram},
title = {Adaptive local multi-atlas segmentation: application to heart segmentation in chest CT scans},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Reinhardt, J.M. and Pluim, J.P.W.},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {6914},
pages = {691407},
month = feb,
year = {2008},
}
In this paper, an automatic method for delineating the prostate in MR scans is presented. The method is based on nonrigid registration of a set of prelabelled atlas images. Each atlas image is nonrigidly registered with the target patient image. After that, the atlas images that match the patient image well are selected, and the segmentation is obtained by a majority voting rule. Two registration methods are investigated. The first one uses the common mutual information as a similarity measure. The second one uses a localised version of mutual information. Experiments are performed on 38 MR images using a leave-one-out approach. The automatic segmentations are evaluated against manual segmentations by computing their overlap. The localised mutual information measure outperforms the commonly used global version and achieves a median Dice similarity coefficient of 0.82. The spatial distribution of the segmentation errors is visualised using a spherical coordinate mapping of the prostate boundary.
@inproceedings{Klein:2007b,
author = {Klein, Stefan and van der Heide, Uulke A. and Staring, Marius and Kotte, Alexis N.T.J. and Raaymakers, Bas W. and Pluim, Josien P.W.},
title = {Segmentation of the Prostate in MR images by Atlas Matching using Localised Mutual Information},
booktitle = {XVth International Conference on the use of Computers in Radiation Therapy},
editor = {Jaffray, D.A. and Sharpe, M. and van Dyk, J. and Bissonnette, J.P.},
address = {Toronto, Canada},
volume = {2},
pages = {585 - 589},
month = jun,
year = {2007},
}
Prostate cancer treatment by radiation therapy requires an accurate localisation of the prostate. For the treatment planning, primarily computed tomography (CT) images are used, but increasingly magnetic resonance (MR) images are added, because of their soft-tissue contrast. In current practice at our hospital, a manual delineation of the prostate is made, based on the CT and MR scans, which is a labour-intensive task. We propose an automatic segmentation method, based on nonrigid registration of a set of prelabelled MR atlas images. The algorithm consists of three stages. Firstly, the target image is nonrigidly registered with each atlas image, using mutual information as the similarity measure. After that, the best registered atlas images are selected by comparing the mutual information values after registration. Finally, the segmentation is obtained by averaging the selected deformed segmentations and thresholding the result. The method is evaluated on 22 images by calculating the overlap of automatic and manual segmentations. This results in a median Dice similarity coefficient of 0.82.
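The selection-and-fusion stage of this pipeline reduces to a short numpy sketch; in practice the similarity scores would be the mutual information values after registration, and the deformed labels would come from the nonrigid registrations:

```python
import numpy as np

def fuse_atlas_labels(deformed_labels, similarity, n_select=3, threshold=0.5):
    """Select the n_select atlases with the highest post-registration
    similarity, average their (deformed) binary label images, and threshold
    the average to obtain the final segmentation."""
    order = np.argsort(similarity)[::-1][:n_select]
    mean_label = np.mean([deformed_labels[i] for i in order], axis=0)
    return mean_label >= threshold
```

With three selected atlases and a 0.5 threshold this is exactly majority voting: a voxel is foreground when at least two of the three deformed segmentations agree.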
@inproceedings{Klein:2007a,
author = {Klein, Stefan and van der Heide, Uulke A. and Raaymakers, Bas W. and Kotte, Alexis N.T.J. and Staring, Marius and Pluim, Josien P.W.},
title = {Segmentation of the prostate in MR images by atlas matching},
booktitle = {IEEE International Symposium on Biomedical Imaging (ISBI)},
editor = {Fessler, J.A. and Denney Jr., T.S.},
address = {Washington, USA},
pages = {1300 - 1303},
month = apr,
year = {2007},
}
Nonrigid registration of medical images usually does not model properties of different tissue types. This results for example in nonrigid deformations of structures that are rigid. In this work we address this problem by employing a local rigidity penalty term. We illustrate this approach on a 2D synthetic image, and evaluate it on clinical 2D DSA image sequences, and on 3D CT follow-up data of the thorax of patients suffering from lung tumours. The results show that the rigidity penalty term does indeed penalise nonrigid deformations of rigid structures, whereas the standard nonrigid registration algorithm compresses those.
@inproceedings{Staring:2006b,
author = {Staring, Marius and Klein, Stefan and Pluim, Josien P.W.},
title = {Evaluation of a Rigidity Penalty Term for Nonrigid Registration},
booktitle = {Workshop on Image Registration in Deformable Environments},
editor = {Bartoli, Adrien and Navab, Nassir and Lepetit, Vincent},
address = {Edinburgh, UK},
pages = {41 - 50},
month = sep,
year = {2006},
}
Mutual information based nonrigid registration of medical images is a popular approach. The coordinate mapping that relates the two images is found in an iterative optimisation procedure. In every iteration a computationally expensive evaluation of the mutual information’s derivative is required. In this work two acceleration strategies are compared. The first technique aims at reducing the number of iterations, and, consequently, the number of derivative evaluations. The second technique reduces the computational costs per iteration by employing stochastic approximations of the derivatives. The performance of both methods is tested on an artificial registration problem, where the ground truth is known, and on a clinical problem involving low-dose CT scans and large deformations. The experiments show that the stochastic approximation approach is superior in terms of speed and robustness. However, more accurate solutions are obtained with the first technique.
@inproceedings{Klein:2006,
author = {Klein, Stefan and Staring, Marius and Pluim, Josien P.W.},
title = {A Comparison of Acceleration Techniques for Nonrigid Medical Image Registration},
booktitle = {International Workshop on Biomedical Image Registration (WBIR)},
editor = {Pluim, J.P.W. and Likar, B. and Gerritsen, F.A.},
address = {Utrecht, The Netherlands},
series = {Lecture Notes in Computer Science},
volume = {4057},
pages = {151 - 159},
month = jul,
year = {2006},
}
Nonrigid registration is a technique commonly used in the field of medical imaging. A drawback of most current nonrigid registration algorithms is that they model all tissue as being nonrigid. When a nonrigid registration is performed, the rigid objects in the image, such as bony structures or surgical instruments, may also transform nonrigidly. Other consequences are that tumour growth between follow-up images may be concealed, or that structures containing contrast material in one image and not in the other may be compressed by the registration algorithm.
In this paper we propose a novel regularisation term, which is added to the cost function in order to penalise nonrigid deformations of rigid objects. This regularisation term can be used for any representation of the deformation field capable of modelling locally rigid deformations. By using a B-spline representation of the deformation field, a fast algorithm can be devised. We show on 2D synthetic data, on clinical CT slices, and on clinical DSA images, that the proposed rigidity constraint is successful, thus improving registration results.
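A rigidity penalty of this kind can be illustrated with a simplified 2-D measure: the deviation of the local deformation gradient J from orthonormality (JᵀJ = I), which is zero exactly for locally rigid maps. This is an illustrative stand-in, not the paper's B-spline-specific formulation:

```python
import numpy as np

def rigidity_penalty_2d(ux, uy, mask=None, spacing=(1.0, 1.0)):
    """Mean squared deviation of the local deformation gradient
    J = I + grad(u) from orthonormality, optionally restricted to a
    mask of rigid structures. Zero iff the map is locally rigid."""
    dux_dx, dux_dy = np.gradient(ux, *spacing)
    duy_dx, duy_dy = np.gradient(uy, *spacing)
    j11, j12 = 1.0 + dux_dx, dux_dy
    j21, j22 = duy_dx, 1.0 + duy_dy
    # entries of J^T J - I (symmetric, so the off-diagonal counts twice)
    a = j11 * j11 + j21 * j21 - 1.0
    b = j11 * j12 + j21 * j22
    d = j12 * j12 + j22 * j22 - 1.0
    pen = a * a + 2.0 * b * b + d * d
    if mask is not None:
        pen = pen[mask]
    return pen.mean()
```

Added with a weight to the dissimilarity term, such a penalty leaves rotations and translations free (penalty zero) while penalising scaling and shearing inside the masked rigid structures.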
@inproceedings{Staring:2006a,
author = {Staring, Marius and Klein, Stefan and Pluim, Josien P.W.},
title = {Nonrigid Registration Using a Rigidity Constraint},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Reinhardt, J.M. and Pluim, J.P.W.},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {6144},
pages = {355 - 364},
month = feb,
year = {2006},
}
In-vivo multi-spectral images pose typical problems in acquisition, registration, visualization and analysis. As their spatial and spectral axes do not share the same unit, standard image algorithms often do not apply. The images are often so large that it is hard to analyze them interactively. In a clinical setting, image motion will always occur during acquisition times of up to 30 seconds, since (elderly) patients often have difficulty retaining their pose. In this paper, we discuss how the acquisition, registration, display and analysis of in-vivo multi-spectral images can be optimized.
@inproceedings{Noordmans:2006,
author = {Noordmans, Herke Jan and de Roode, Rowland and Staring, Marius and Verdaasdonk, Rudolf},
title = {Registration and analysis of in-vivo multi-spectral images for correction of motion and comparison in time},
booktitle = {SPIE Photonics West: Multimodal Biomedical Imaging},
editor = {Azar, F.S. and Metaxas, D.N.},
address = {San Jose, CA, USA},
series = {Proceedings of SPIE},
volume = {6081},
pages = {35 - 43},
month = jan,
year = {2006},
}
In present-day medical practice it is often necessary to nonrigidly align image data, either intra- or inter-patient. Current registration algorithms usually do not take different tissue types into account. A problem that might occur with these algorithms is that rigid tissue, like bone, also deforms elastically. We propose a method to correct a deformation field that is calculated with a nonrigid registration algorithm. The correction is based on a second feature image, which represents the tissue stiffness. The amount of smoothing of the deformation field is related to this stiffness coefficient. By filtering the deformation field on rigid tissue, the deformation field will represent a locally rigid transformation. Other parts of the image, containing nonrigid tissue, are smoothed less, which leaves the original elastic deformation (almost) untouched. It is shown on a synthetic example and on inspiration-expiration CT data of the thorax that filtering the deformation field based on tissue type indeed keeps rigid tissue rigid, thus improving the registration results.
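The stiffness-weighted filtering can be sketched with numpy: smooth the deformation field and blend the smoothed and original fields per pixel according to a stiffness map in [0, 1]. The simple 4-neighbour diffusion smoother below is an illustrative choice, not the adaptive filter kernel of the paper:

```python
import numpy as np

def smooth(field, iters=10):
    """Repeated 4-neighbour averaging: a simple diffusion-like smoother."""
    f = field.astype(float).copy()
    for _ in range(iters):
        p = np.pad(f, 1, mode='edge')
        f = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
    return f

def adaptive_filter(deformation, stiffness, iters=10):
    """Blend smoothed and original deformation per pixel: stiff (rigid)
    regions receive the heavily smoothed, locally rigid-like field, while
    soft regions keep their original elastic deformation."""
    return stiffness * smooth(deformation, iters) + (1.0 - stiffness) * deformation
```

Where the stiffness map is 1 (bone), sharp local variations in the deformation are flattened out; where it is 0 (soft tissue), the field passes through unchanged.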
@inproceedings{Staring:2005,
author = {Staring, Marius and Klein, Stefan and Pluim, Josien P.W.},
title = {Nonrigid Registration with Adaptive, Content-Based Filtering of the Deformation Field},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Fitzpatrick, J.M. and Reinhardt, J.M.},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {5747},
pages = {212 - 221},
month = feb,
year = {2005},
}
Nonrigid registration of medical images by maximisation of their mutual information, in combination with a deformation field parameterised by cubic B-splines, has been shown to be robust and accurate in many applications. However, the high computation time is a big disadvantage. This work focusses on the optimisation procedure. Many implementations follow a gradient-descent like approach. The time needed for computing the derivative of the mutual information with respect to the B-spline parameters is the bottleneck in this process. We investigate the influence of several gradient approximation techniques on the number of iterations needed and the computation time per iteration. Three methods are studied: a simple finite difference strategy, the so-called simultaneous perturbation method, and a more analytic computation of the gradient based on a continuous, differentiable representation of the joint histogram. In addition, the effect of decreasing the number of image samples used for computing the gradient in each iteration is investigated. Two types of experiments are performed. Firstly, the registration of an image to itself, after application of a known, randomly generated deformation, is considered. Secondly, experiments are performed with 3D ultrasound brain scans and 3D CT follow-up scans of the chest. The experiments show that the method using an analytic gradient computation outperforms the other two. Furthermore, the computation time per iteration can be drastically decreased, without affecting the rate of convergence and final accuracy, by using very few samples of the image (randomly chosen every iteration) to compute the derivative. With this approach, large data sets (256³) can be registered within 5 minutes on a moderate PC.
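The subsampling strategy above can be illustrated on a toy estimation problem; a hedged numpy sketch in which the "image samples" are scalar values and the cost is a simple least-squares fit, standing in for the mutual information derivative:

```python
import numpy as np

def stochastic_descent(grad_fn, theta0, n_total, n_samples=50,
                       lr=0.1, iters=300, seed=0):
    """Gradient descent where every iteration evaluates the derivative on a
    small, freshly drawn random subset of the available samples."""
    rng = np.random.default_rng(seed)
    theta = float(theta0)
    for _ in range(iters):
        idx = rng.choice(n_total, size=n_samples, replace=False)
        theta = theta - lr * grad_fn(theta, idx)
    return theta

# Toy cost: minimise mean((theta - data[i])^2), i.e. estimate the mean of
# 10000 samples while touching only 50 of them per iteration.
rng = np.random.default_rng(1)
data = 3.0 + rng.standard_normal(10000)
theta = stochastic_descent(lambda th, idx: 2.0 * (th - data[idx].mean()),
                           0.0, len(data))
```

The per-iteration cost drops by a factor n_total/n_samples while the redrawn subsets keep the gradient unbiased, which is the mechanism behind the reported speed-up.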
@inproceedings{Klein:2005,
author = {Klein, Stefan and Staring, Marius and Pluim, Josien P.W.},
title = {Comparison of gradient approximation techniques for optimisation of mutual information in nonrigid registration},
booktitle = {SPIE Medical Imaging: Image Processing},
editor = {Fitzpatrick, J.M. and Reinhardt, J.M.},
address = {San Diego, CA, USA},
series = {Proceedings of SPIE},
volume = {5747},
pages = {192 - 203},
month = feb,
year = {2005},
}
In this paper we study the use of an adaptive quantization step size, instead of a fixed one, for the Scalar Costa Scheme. We propose an adaptation method based on Weber’s law. This allows for a more effective embedding, which is also shown to render the watermark robust against sample value scaling. A model for the bit error probability due to the estimation of the adaptive quantization step size at the detector is derived, which provides insight into the required precision of this estimation.
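A toy scalar quantization-index-modulation sketch of the idea: the step size adapts to a local context value in a Weber's-law fashion (proportional to signal magnitude), which the detector can re-derive. The exact adaptation rule, the floor value, and the bit-dependent dither below are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def adaptive_delta(context, base=0.1):
    """Weber's-law-style step size: proportional to the local magnitude,
    with a floor to avoid a vanishing quantization step."""
    return base * max(abs(context), 1.0)

def embed_bit(x, bit, context):
    """Quantise x to the bit-dependent lattice with the adaptive step."""
    delta = adaptive_delta(context)
    d = bit * delta / 2.0
    return delta * np.round((x - d) / delta) + d

def detect_bit(y, context):
    """Re-derive the step from the (unmodified) context and pick the bit
    whose lattice lies closer to the received sample."""
    delta = adaptive_delta(context)
    errs = []
    for bit in (0, 1):
        d = bit * delta / 2.0
        errs.append(abs(y - (delta * np.round((y - d) / delta) + d)))
    return int(errs[1] < errs[0])
```

Because the step scales with the signal, multiplying both sample and context by a constant rescales the lattice along with them, which is why such adaptive schemes resist amplitude scaling.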
@inproceedings{Oostveen:2004,
author = {Oostveen, Job C. and Kalker, A.A.C. and Staring, Marius},
title = {Adaptive quantization watermarking},
booktitle = {Security, Steganography, and Watermarking of Multimedia Contents VI},
editor = {Delp III, Edward J. and Wong, Ping W.},
address = {San Jose, California, USA},
series = {Proceedings of SPIE},
volume = {5306},
pages = {296 - 303},
month = jan,
year = {2004},
}
In this paper we study the problem of optimizing the distortion compensation parameter for the Scalar Costa Scheme, which is a practical version of the class of Distortion Compensated Dither Modulation schemes. In the literature, a number of results are known for finding the value of the distortion compensation parameter that maximizes the capacity of the watermarking channel. Instead, in this paper, we look at minimization of the bit error probability as the criterion for determining the optimal value of the distortion compensation parameter. To this end, we derive a model for the bit error probability, which is subsequently approximated and minimized. This is done both for the cases of Gaussian noise and uniform noise. The results match very well with earlier results by Eggers.
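The distortion-compensated embedding itself is compact; an illustrative numpy sketch with a plain minimum-distance decoder, omitting the paper's noise models and optimal-parameter derivation:

```python
import numpy as np

def scs_embed(x, bits, delta=1.0, alpha=0.7):
    """Scalar Costa Scheme embedding: quantise each sample to the
    bit-dependent lattice, but move only a fraction alpha of the way
    towards it (distortion compensation)."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(bits) * delta / 2.0
    q = delta * np.round((x - d) / delta) + d
    return x + alpha * (q - x)

def scs_detect(s, delta=1.0):
    """Minimum-distance decoding: pick the bit whose lattice is closest."""
    s = np.asarray(s, dtype=float)
    errs = []
    for bit in (0, 1):
        d = bit * delta / 2.0
        q = delta * np.round((s - d) / delta) + d
        errs.append(np.abs(s - q))
    return (errs[1] < errs[0]).astype(int)
```

Lowering alpha reduces the embedding distortion (it scales with alpha²) at the price of leaving residual "self-noise" of up to (1 − alpha)·delta/2, which is exactly the trade-off the distortion compensation parameter controls.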
@inproceedings{Staring:2003,
author = {Staring, Marius and Oostveen, Job C. and Kalker, A.A.C.},
title = {Optimal distortion compensation for quantization watermarking},
booktitle = {International Conference on Image Processing},
address = {Barcelona, Spain},
volume = {2},
pages = {727 - 730},
month = sep,
year = {2003},
}
Mid-field MR scanners (0.1T-1T) have gained increasing attention in the last few years (Lavrova et al., 2024; Campbell-Washburn et al., 2019) due to increased safety and lower manufacturing and maintenance costs, consequently improving the accessibility of MRI for clinical purposes (Arnold et al., 2023). The reduced field strength leads to a reduced signal and a change in T1 relaxation times, and therefore to a different contrast and signal-to-noise ratio compared to 1.5T and 3T systems.
MRI is often used for diagnosing and monitoring brain tumors, lesions, and disorders such as neurodegenerative diseases. Especially for the latter, gray matter (GM) and white matter (WM) volumes are interesting markers due to the atrophy associated with these diseases. Automatic segmentation models, including Adaptive Maximum A Posteriori (MAP) segmentation (Rajapakse et al., 1997) and partial volume estimation (PVE) (Tohka et al., 2004), are well-established models used by applications such as CAT12 to estimate GM and WM volumes.
In this work we study the quality of T1-weighted-based brain segmentations at 0.6T in the context of brain volume measurements in healthy volunteers. We compare segmentations from 0.6T and 1.5T and study how these might be affected by noise suppression through the vendor-provided deep learning based reconstruction.
@inproceedings{Jabarimani:2025a,
author = {Jabarimani, Navid and Ercan, Ece and Dong, Yiming and Pezzotti, Nicola and Webb, Andrew and B{\"o}rnert, Peter and Staring, Marius and van Osch, Matthias J.P. and Nagtegaal, Martijn},
title = {Characterizing differences between white and gray matter T1W-based segmentations at 0.6T and 1.5T},
booktitle = {International Society for Magnetic Resonance in Medicine},
month = may,
year = {2025},
}
Fluid-Attenuated Inversion Recovery (FLAIR) images are an important part of clinical brain protocols, especially due to their excellent contrast for diagnosing lesions, edema, etc. (Campbell-Washburn et al., 2019). Mid-field MR scanners (0.1T-1T) provide a more accessible and affordable option for clinical use compared to high-field scanners (Arnold et al., 2023). However, lower field strength often produces lower-quality FLAIR images due to the longer T1 relaxation time and lower signal-to-noise ratio (Lavrova et al., 2024). Moreover, in our initial tests with a 0.6T MRI scanner, we noticed that the relative drop in quality compared to 1.5T was much higher for FLAIR than for T2W images, for example, which could limit their diagnostic value.
This project aims to address this issue by applying a content/style-based plug-and-play reconstruction framework, PnP-MUNIT (Rao et al., 2024), to guide the reconstruction of FLAIR images using information from T2-weighted scans, with the goal of improving image quality and reducing scan time. In this work we assess this concept using data from a 3T scanner and adapt it for 0.6T data, applying the content/style model to the lower field strength in a zero-shot manner, without any fine-tuning.
@inproceedings{Jabarimani:2025b,
author = {Jabarimani, Navid and Rao, Chinmay and Ercan, Ece and Dong, Yiming and Pezzotti, Nicola and Doneva, Mariya and de Weerdt, Elwin and van Osch, Matthias J.P. and Staring, Marius and Nagtegaal, Martijn},
title = {Accelerated FLAIR imaging at 0.6T using T2w-guided multi-contrast deep learning-based reconstruction using a Zero-Shot approach},
booktitle = {International Society for Magnetic Resonance in Medicine},
month = may,
year = {2025},
}
The use of Imageless MR sequences, combined with deep-learning methods, could offer a rapid, cost-effective screening technique suitable for large-scale, population-wide deployment. We showcase how this framework yields accurate detection and lesion size estimation using an MS lesion case study.
@inproceedings{Gonzalez:2025,
author = {Gonz{\'a}lez-Cebri{\'a}n, Alba and Garc{\'i}a-Crist{\'o}bal, Pablo and Galve, Fernando and Van Der Valk, Viktor and Ilicak, Efe and Staring, Marius and Webb, Andrew and Alonso, Joseba},
title = {An Imageless Magnetic Resonance Diagnosis procedure for fast and affordable screening and follow-up},
booktitle = {International Society for Magnetic Resonance in Medicine},
month = may,
year = {2025},
}
We investigate a prototype 0.6T MRI system for free-breathing functional lung imaging. Our findings demonstrate improved image quality compared to 1.5T, with better tissue-background contrast and homogeneity of the functional maps, underscoring the system’s robustness and potential for non-invasive pulmonary imaging.
@inproceedings{Ilicak:2025,
author = {Ilicak, Efe and Ercan, Ece and Dong, Yiming and Staring, Marius and Webb, Andrew and van Osch, Matthias JP and B{\"o}rnert, Peter and Nagtegaal, Martijn},
title = {Free-Breathing Functional Lung Imaging at 0.6T compared to 1.5T},
booktitle = {International Society for Magnetic Resonance in Medicine},
month = may,
year = {2025},
}
Motivation: Scans within an MR exam share redundant information, since they depict the same underlying structures. One contrast can hence be used to guide the reconstruction of another, thereby requiring fewer measurements.
Goals: Multimodal guided reconstruction to reduce scanning times.
Approach: Our method exploits AI-based content/style decomposition in an iterative reconstruction algorithm. We explored this concept via numerical simulation and subsequently validated it on in vivo data.
Results: Compared to a conventional compressed sensing baseline, our method showed consistent improvement in simulations and produced sharper reconstructions from undersampled in vivo data. By enforcing data consistency, it was also more reliable than blind image translation.
Impact: In the clinic, this can potentially enable a reduced MR exam time for a given image quality or improve image quality given a scan time budget. The former can reduce strain on the patient, whereas the latter can improve diagnosis.
@inproceedings{Rao:2024,
author = {Rao, Chinmay and Beljaards, Laurens and van Osch, Matthias and Doneva, Mariya and Meineke, Jakob and Sch{\"u}lke, Christophe and Pezzotti, Nicola and de Weerdt, Elwin and Staring, Marius},
title = {Guided Multicontrast Reconstruction based on the Decomposition of Content and Style},
booktitle = {International Society for Magnetic Resonance in Medicine},
month = may,
year = {2024},
}
Previous work on auto-contour dose evaluation has used both manual [1] and automated [2,3] techniques, albeit with small (~20) test patient cohorts. This is due to the extensive manual effort required for additional contour refinement and treatment planning on the auto-contours. Moreover, automated planning techniques, if not already clinically implemented, are difficult to adopt. Our primary goal is to investigate the dosimetric effect of auto-contouring for proton radiotherapy using a large-scale cohort of patients. A secondary goal is to develop and evaluate a workflow that is both automated and uses existing plan parameters, hence enabling evaluation for a large patient cohort.
@inproceedings{Mody:2024,
author = {Mody, Prerak and Huiskes, Merle and Chaves de Plaza, Nicolas and Onderwater, Alice and Lamsma, Rense and Hildebrandt, Klaus and Hoekstra, Nienke and Astreinidou, Eleftheria and Staring, Marius and Dankers, Frank},
title = {Dose evaluation using existing plan parameters of auto-contouring in head-and-neck radiotherapy},
booktitle = {Radiotherapy and Oncology (ESTRO)},
volume = {194},
pages = {S3074 -- S3077},
month = may,
year = {2024},
}
As populations age and the prevalence of cervical spine degeneration rises, the demand for computer-aided diagnostics and prognostics in neurosurgery grows. Not all patients benefit from surgical treatment, and predicting who will remains challenging. Automating parts of the radiological image analysis process using Machine Learning could provide more accurate and consistent assessment with increased time efficiency, and potentially yield new disease insights. The purpose of this study was to identify which image features on cervical radiographs are important for the prediction of clinical success one year after surgery for cervical disc disease, by developing and validating a deep learning algorithm that predicts clinical success solely based on the radiograph.
@inproceedings{Goedmakers:2023,
author = {Goedmakers, C.M.W and Pereboom, L.M. and de Leeuw den Bouter, M.L. and Remis, R.F. and Staring, M. and Vleggeert-Lankamp, C.L.A.},
title = {Deep Learning on Preoperative Radiographs for Clinical Success Prediction after Surgery for Cervical Degenerative Disease},
booktitle = {Brain and Spine},
volume = {3},
pages = {101842},
year = {2023},
}
The size of orthotopic tumors in small animals (typically a few mm) presents some challenges in preclinical proton dose delivery. For tumors situated deeper in the animal, close to critical organs, determination of the actual dose distribution and conformity is challenging, especially for Bragg peak irradiations. Therefore, it is important to optimize beam properties, verify the CT HU-RSP calibration, and ensure the quality of dose distributions.
In this work, we present a simulation framework that (1) allows generation of realistic X-ray μ-CBCT images, (2) facilitates CT HU calibration, and (3) performs proton dose calculations.
A μ-CBCT model was developed using the fastCAT toolkit. Monte Carlo simulations were performed to generate the primary and scatter kernels and imaging dose calibration appropriate for μ-CBCT scans. CTs were then generated for a mini Gammex phantom and the MOBY/ROBY digital rodent phantoms. The HU-SPR conversion is performed with the mini Gammex phantom. The resulting calibration parameters are then used to convert the CTs of the MOBY/ROBY phantoms to SPR maps. These are then used to calculate dose distributions in TOPAS for treatment plans created in matRad using realistic beams based on measured emittances and simulations of the beam transport with BDSIM. Since the composition of the MOBY/ROBY phantoms is known, a ground truth exists against which the accuracy of the calibration and dose distributions can be verified.
This framework is used to optimize the irradiation setup and assess the quality of small animal irradiations.
@inproceedings{Malimban:2023,
author = {Malimban, Justin and Ludwig, Felix and Lathouwers, Danny and Staring, Marius and Verhaegen, Frank and Brandenburg, Sytze},
title = {A simulation framework of the preclinical proton irradiation workflow},
booktitle = {Particle Therapy Co-operative Group},
address = {Madrid, Spain},
month = jun,
year = {2023},
}
Traditional MR fingerprinting involves matching the acquired signal evolutions against a dictionary of expected tissue fingerprints to obtain the corresponding tissue parameters. Since this dictionary is essentially a discrete representation of a physical model and the matching process amounts to brute-force search in a discretized parameter space, there arises a tradeoff between discretization error and parameter estimation time. In this work, we investigate this tradeoff and show via numerical simulation how a neural net-based approach solves it. We additionally conduct a phantom study using 1.5T and 3T data to demonstrate the consistency of neural net-based estimation with dictionary matching.
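The brute-force dictionary search described above can be sketched roughly as follows. This is a toy illustration with a hypothetical mono-exponential signal model (the function and grid names are ours, not from the study), showing the tradeoff: a finer T1 grid reduces discretization error but enlarges the search.

```python
import numpy as np

def build_dictionary(t1_grid, times):
    # One simulated fingerprint per discretized T1 value (toy mono-exponential model)
    return np.exp(-times[None, :] / t1_grid[:, None])

def dictionary_match(signal, dictionary, t1_grid):
    # Brute-force search: return the T1 whose normalized fingerprint
    # correlates best with the measured signal
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    return t1_grid[np.argmax(d @ s)]

times = np.linspace(0.01, 3.0, 50)     # acquisition time points (s)
coarse = np.linspace(0.2, 2.0, 19)     # coarse grid: fast search, large discretization error
fine = np.linspace(0.2, 2.0, 1801)     # fine grid: small error, slow brute-force search
true_t1 = 1.234
signal = np.exp(-times / true_t1)
est_coarse = dictionary_match(signal, build_dictionary(coarse, times), coarse)
est_fine = dictionary_match(signal, build_dictionary(fine, times), fine)
```

A neural network trained to regress the parameters directly sidesteps this tradeoff, since its output is not restricted to the grid and its inference cost does not grow with dictionary size.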
@inproceedings{Rao:2023,
author = {Rao, Chinmay and Meineke, Jakob and Pezzotti, Nicola and Staring, Marius and van Osch, Matthias and Doneva, Mariya},
title = {Analysis of the Discretization Error vs. Estimation Time Tradeoff of MRF Dictionary Matching and the Advantage of the Neural Net-based Approach},
booktitle = {International Society for Magnetic Resonance in Medicine},
month = jun,
year = {2023},
}
Advanced diffusion weighted self-navigated multi-shot MRI can run at high scan efficiencies resulting in good image quality. However, the model-based image reconstruction used is rather time consuming. Deep learning-based reconstruction approaches could function as a faster alternative. Tailored network architectures with appropriately set physical model constraints can help to shorten reconstruction times, resulting in good image quality with reduced noise propagation.
@inproceedings{Dong:2023,
author = {Dong, Yiming and Koolstra, Kirsten and Beljaards, Laurens and Staring, Marius and van Osch, Matthias J.P. and B{\"o}rnert, Peter},
title = {Deep Learning Based Self-Navigated Diffusion Weighted Multi-Shot EPI with Supervised Denoising},
booktitle = {International Society for Magnetic Resonance in Medicine},
month = jun,
year = {2023},
}
With compressed sensing (CS), undersampled data points are used for MR image reconstruction to reduce acquisition times while preserving SNR, but CS tends to simplify image content at higher levels of acceleration. Deep learning (DL) reconstruction methods could accelerate the acquisition process while preserving high image quality by learning from high-complexity images. In cardiac MR imaging, high levels of acceleration allow multi-slice imaging during one breath-hold (BH), which could reduce scan times significantly. We investigate the feasibility of a prospectively assessed DL-based reconstruction technique combined with different levels of acceleration using a Compressed SENSE artificial intelligence framework in cardiac MR imaging.
@inproceedings{Lu:2023,
author = {Lu, Huangling and Juffermans, Joe F. and Pezzotti, Nicola and Staring, Marius and Lamb, Hildo J.},
title = {Deep learning-based acceleration of Compressed SENSE Cardiac MR imaging - accelerating total scan-times and reducing the number of breath holds},
booktitle = {Society for Cardiovascular Magnetic Resonance},
month = jan,
year = {2023},
}
MRI can be accelerated via (AI-based) reconstruction by undersampling k-space. Current methods typically ignore intra-scan motion, although even a few millimeters of motion can introduce severe blurring and ghosting artifacts that necessitate reacquisition. In this short paper we investigate the effects of rigid-body motion on AI-based reconstructions. Leveraging the Bloch equations we simulate motion corrupted MRI acquisitions with a linear interleaved scanning protocol including spin history effects, and investigate i) the effect on reconstruction quality, and ii) if this corruption can be mitigated by introducing motion-corrupted data during training. We observe an improvement from 0.787 to 0.844 in terms of SSIM when motion-corrupted brain data is included during training, demonstrating that training with motion-corrupted data can partially compensate for motion corruption. Inclusion of spin-history effects did not influence the results.
@inproceedings{Beljaards:2022,
author = {Beljaards, Laurens and Pezzotti, Nicola and Sch{\"u}lke, Christophe and van Osch, Matthias J.P. and Staring, Marius},
title = {The effect of intra-scan motion on AI reconstructions in MRI},
booktitle = {Medical Imaging with Deep Learning},
month = jul,
year = {2022},
}
Introduction: Organ contouring is one of the most laborious and time-consuming stages in the preclinical irradiation workflow. Since deep learning algorithms have shown excellent performance on human organ segmentation, their application to animals has also recently been explored. However, previously developed deep learning-based animal autocontouring models were mostly trained on one type of dataset, and their predictive performance was also evaluated on the same distribution as the training data. In a preclinical facility wherein studies involving various strains of animals and image acquisition protocols are performed, it is important to demonstrate the robustness of these tools across different populations and settings. Therefore, in this work, we externally validated two deep learning pipelines for mouse thorax segmentation to assess their usability and portability when implemented on a larger scale.
Materials & Methods: We trained the 2D and 3D models of nnU-Net (i.e., one of the best performing algorithms for clinical segmentation) and compared them with the state-of-the-art AIMOS pipeline for segmentation of the mouse thorax. We allotted 105 native micro-CT scans of mice for training and initially performed internal validation using 35 native micro-CT scans not included in the model development to determine the best nnU-Net model. Then, the proposed nnU-Net model and AIMOS were externally validated using 35 contrast-enhanced micro-CTs, which comprise scans with a different mouse strain and different imaging parameters than the training data. The predictive performance was evaluated in terms of the Dice score (DSC), mean surface distance (MSD), and 95% Hausdorff distance (95% HD).
Results: When tested against native micro-CTs, all models of nnU-Net (3d_fullres, 3d_cascade, 3d_lowres, 2d) and AIMOS generated accurate contours, achieving average DSCs greater than 0.94, 0.90, 0.97 and 0.95 for the heart, spinal cord, right and left lungs, respectively. The average MSD was less than the in-plane voxel size of 0.14 mm, while the average 95% HD was below 0.60 mm except for the right lung results of nnU-Net 2d. Among the nnU-Net models, 3d_fullres was considered the superior model, producing the most accurate contours at a reasonable speed. The nnU-Net 3d_fullres model and AIMOS were then evaluated against the external dataset. The nnU-Net 3d_fullres model achieved average DSCs of 0.92, 0.85, 0.96 and 0.95, whereas AIMOS showed inferior results: 0.83, 0.82, 0.87 and 0.77. Consistently across all organs, AIMOS recorded unacceptably large 95% HDs (> 1 mm) and produced incomplete contours. Moreover, AIMOS failed to distinguish the right lung from the left lung. This entails that AIMOS requires more labor-intensive corrections than nnU-Net 3d_fullres for data on which the model was not trained.
Conclusion: We have shown that the nnU-Net 3d_fullres model is more robust and generalizable than the current best performing algorithm for mouse segmentation (AIMOS). Our findings also demonstrate the importance of thoroughly evaluating the performance of autocontouring tools before implementation in routine preclinical practice.
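The evaluation metrics named above (DSC and percentile Hausdorff distance) can be sketched for binary masks as follows. This is a minimal illustration on made-up toy masks, not the study's actual evaluation code; real evaluations typically use dedicated packages.

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd_percentile(pts_a, pts_b, q=95.0):
    # Symmetric q-th percentile Hausdorff distance between two point
    # sets of shape (N, d) and (M, d); q=100 gives the classic Hausdorff distance
    dists = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(np.percentile(dists.min(axis=1), q),
               np.percentile(dists.min(axis=0), q))

# Two 6x6 squares shifted by one voxel in each direction
a = np.zeros((10, 10)); a[2:8, 2:8] = 1
b = np.zeros((10, 10)); b[3:9, 3:9] = 1
dsc = dice(a, b)                                           # 2*25 / (36+36) ≈ 0.694
hd = hd_percentile(np.argwhere(a), np.argwhere(b), q=100)  # sqrt(2)
```

The 95th-percentile variant used in the abstract is less sensitive to single outlier voxels than the classic (q=100) Hausdorff distance, which is why it is preferred for contour evaluation.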
@inproceedings{Malimban:2022,
author = {Malimban, Justin and Lathouwers, Danny and Qian, Haibin and Verhaegen, Frank and Wiedemann, Julia and Brandenburg, Sytze and Staring, Marius},
title = {External validation of deep learning models for mouse thorax autocontouring},
booktitle = {5th Conference on Small Animal Precision Image-guided Radiotherapy},
month = mar,
year = {2022},
}
Background suppression (BGS) in arterial spin labeling (ASL) leads to perfusion images with a higher temporal signal-to-noise ratio (tSNR) compared to ASL without BGS. The optimal inversion times (TIs), and therefore the quality of the BGS, depend on the T1 relaxation times of the underlying tissue and on inhomogeneities of the scanner’s magnetic fields (B0, B1+). In this work, we designed and implemented a feedback mechanism that optimized the quality of background suppression in real time on the scanner. The results show an increased tSNR for the subject-specific optimization of BGS compared to standard BGS in 12 healthy volunteers.
@inproceedings{Koolstra:2022,
author = {Koolstra, Kirsten and Staring, Marius and de Bruin, Paul and van Osch, Matthias J.P.},
title = {Subject-specific optimization of background suppression for arterial spin labeling MRI using a real-time feedback loop on the scanner},
booktitle = {International Society for Magnetic Resonance in Medicine},
address = {London, UK},
month = may,
year = {2022},
}
Ultra-high field (UHF) MRI (B0 > 7T) shows great promise to yield higher-resolution structural and physiological information than available at 3T, particularly in the brain. Parallel RF transmission (PTx) is a key technology for UHF-MRI to address the increased spatial variations in the radiofrequency (RF) field distribution. However, it has not yet reached widespread clinical adoption. The main factors include the intersubject variability in local specific absorption rate (SAR), leading to large safety margins to ensure compliance with regulatory limits, and the time-consuming B1+ calibration procedures required for tailored RF pulse design. Together, these technological challenges limit the clinical impact of PTx and the utilization of UHF-MRI.
In this work, we demonstrate a fast subject-specific method based on deep learning and a fast EM solver for predicting both SAR and B1+ fields using only a 9 second long localizer scan.
@inproceedings{Brink:2022,
author = {Brink, Wyger and Staring, Marius and Remis, Rob and Webb, Andrew},
title = {Fast Subject-Specific SAR and B1+ Prediction for PTx at 7T using only an Initial Localizer Scan},
booktitle = {International Society for Magnetic Resonance in Medicine},
address = {London, UK},
month = may,
year = {2022},
}
Image-guided small animal irradiations are typically performed in a single session, requiring continuous administration of anesthesia. Prolonged exposure to anesthesia can potentially affect experimental outcomes and thus, a fast preclinical irradiation workflow is desired. Similar to the clinic, delineation of organs remains one of the most time-consuming and labor-intensive stages in the preclinical workflow, and this is amplified by the fact that hundreds of animals are involved in a single study. In this work, we evaluated the accuracy and efficiency of deep learning pipelines for automated contouring of organs in the mouse thorax.
@inproceedings{Malimban:2024,
author = {Malimban, Justin and Lathouwers, Danny and Qian, Haibin and Verhaegen, Frank and Wiedemann, Julia and Brandenburg, Sytze and Staring, Marius},
title = {Autocontouring of the mouse thorax using deep learning},
booktitle = {Radiotherapy and Oncology (ESTRO)},
month = may,
year = {2022},
}
Suppression of background signal in ASL leads to perfusion images with a higher signal-to-noise ratio (SNR) compared to ASL without background suppression (BGS). BGS is obtained by applying multiple inversion pulses before and during the post-label delay (PLD). The optimal inversion times, and therefore the quality of the BGS, depend on the relaxation times of the underlying tissue (T1, T2) and on imperfections of the scanner’s magnetic fields (B0, B1+). Although this results in inter-subject differences, current ASL protocols make use of one set of predefined inversion times for all subjects, primarily because these inter-scan variations are not known at the moment of scanning. This means that the quality of the resulting perfusion images is not optimal for all subjects. In this work, we develop and implement a feedback loop that optimizes the timings of ASL BGS pulses in real time on the scanner, generating individually optimized perfusion images for each subject.
@inproceedings{Koolstra:2021,
author = {Koolstra, Kirsten and Staring, Marius and de Bruin, Paul and van Osch, Matthias J.P.},
title = {Individually optimized ASL background suppression using a real-time feedback loop on the scanner},
booktitle = {ESMRMB},
address = {online},
month = oct,
year = {2021},
}
Short summary: For the evaluation of vestibular schwannoma (VS) progression and treatment planning, accurate measurement from MRI is important. In clinical practice, manual linear measurements are performed from MRI. Manual 3D measurement is time-consuming, and 2D measurement is subjective and reflects the highly variable tumour volume poorly. We developed an AI model to detect and segment VS from MRI automatically.
Purpose/Objectives: We present a model for the detection and segmentation of VS, based on deep learning and suited to process multi-centre, multi-vendor MR images. The model’s performance is evaluated and compared to humans in an observer study.
Methods & materials: In total 214 cases (134 VS-positive and 80 negative) with gadolinium-enhanced T1 and native T2 weighted MR images were acquired from 37 centres and 12 different MRI scanners. The intra- and extrameatal parts of the tumour were manually delineated by two observers under supervision of an experienced head and neck radiologist. Cases were divided into three non-overlapping sets (training, validation, and testing). A model was trained using the 3D no-new-Unet deep learning segmentation method. In addition, an observer study was performed, in which the radiologist, blinded to case information and delineation method, compared model and human delineations.
Results: The model correctly detected VS in all positive cases and excluded it in the negative cases. Evaluation of the T1 model against the human delineation resulted in a Dice index of 90.4±13.0, a Hausdorff distance of 2.12±9.32 mm, and a mean surface-to-surface distance of 0.49±1.52 mm. Intra- and extrameatal tumour parts had Dice indices of 77.5±21.3 and 82.2±28.0, respectively. The observer study showed that in 103 out of 111 cases (93%) the model was comparable to or better than human delineation.
Conclusion: The proposed model can accurately detect and delineate VS from MRI in a multi-centre, multi-vendor setting. As such, it is a robust tool well suited to the reality of clinical practice. The model performed comparably to human delineations in the observer study.
@inproceedings{Neve:2021,
author = {Neve, Olaf and Tao, Qian and Romeijn, Stephan and Chen, Yunjie and de Boer, Nick P. and Grootjans, Willem and Kruit, Mark C. and Lelieveldt, Boudewijn P.F. and Jansen, Jeroen and Hensen, Erik and Staring, Marius and Verbist, Berit},
title = {Fully Automated 3D Vestibular Schwannoma Segmentation: A multicentre multi-vendor study},
booktitle = {European Society of Head and Neck Radiology (ESHNR)},
address = {online},
month = sep,
year = {2021},
}
Compliance with RF exposure limits in ultra-high field MRI is typically based on “one-size-fits-all” safety margins to account for the intersubject variability of local SAR. In this work we have developed a semantic segmentation method based on deep learning, which is able to generate a subject-specific body model for personalized RF exposure prediction at 7T.
@inproceedings{Brink:2021,
author = {Brink, Wyger and Yousefi, Sahar and Bhatnagar, Prernna and Staring, Marius and Remis, Rob and Webb, Andrew},
title = {Numerical Body Model Inference for Personalized RF Exposure Prediction in Neuroimaging at 7T},
booktitle = {International Society for Magnetic Resonance in Medicine},
address = {Vancouver, BC, Canada},
month = may,
year = {2021},
}
PURPOSE: Accurate measurement of vestibular schwannoma (VS) is important for evaluation of VS progression and treatment planning. In clinical practice, linear measurements are manually performed from MRI. Manual measurement is time-consuming, subjective, and restricted to 2D planes. In this study, we aim to develop a deep learning convolutional neural network (CNN) model to automatically detect and segment VS in 3D from Gadolinium (Gd)-enhanced MRI.
METHOD AND MATERIALS: In total 124 patients with unilateral hearing loss referred for an MRI exam were enrolled, including 84 VS-positive and 40 VS-negative cases. MRI data were acquired from 37 centers, by 12 different MRI scanners from 3 vendors. Typical image resolution was 0.35x0.35x1 mm, and field of view ranged from 130x130x24 mm to 270x270x188 mm. In the 84 positive cases, VS was manually delineated by two observers, supervised by a senior radiologist. The 124 subjects were randomly divided into three non-overlapping sets: training set (N=72), validation set (N=18), and test set (N=34). We trained a 3D no-new-Unet CNN for both VS detection and segmentation. Training was performed on an NVIDIA Tesla V100 graphics processing unit with 16GB memory.
RESULTS: Applied to the test set, the CNN correctly detected VS in 24 subjects and excluded VS in 10 (sensitivity 100%, specificity 100%). We evaluated the Dice index, Hausdorff distance, and surface-to-surface (S2S) distance in two scenarios, namely, CNN vs. observer 1, and observer 1 vs. observer 2. No significant differences were found: Dice 0.91±0.06 vs. 0.92±0.05 (p=0.5 by paired Wilcoxon test), Hausdorff 1.3±1.5 mm vs. 1.2±1.0 mm (p=0.6), S2S 0.4±0.3 mm vs. 0.4±0.2 mm (p=0.9). The annotation time was 6.0±3.3 min for the observer and 2.5±2.8 min for the CNN model.
CONCLUSION: In a multi-center and multi-vendor setting, a CNN model can accurately detect and delineate VS in 3D from Gd-enhanced MRI, to facilitate diagnosis and measurement of VS in clinical practice.
@inproceedings{Tao:2020,
author = {Tao, Qian and Romeijn, Stephan and Neve, Olaf and de Boer, Nick and Grootjans, Willem and Kruit, Mark C. and Lelieveldt, Boudewijn P.F. and Jansen, Jeroen and Hensen, Erik and Verbist, Berit and Staring, Marius},
title = {Deep-Learning-based Detection and Segmentation of Vestibular Schwannoma: A Multi-Center and Multi-Vendor MRI Study},
booktitle = {RSNA},
address = {Chicago, USA},
month = nov,
year = {2020},
}
Purpose: Balloon pulmonary angioplasty (BPA) is a treatment of obstructed pulmonary arteries (PAs), for patients with inoperable chronic thromboembolic pulmonary hypertension (CTEPH). Since the effect of BPA in untreated (i.e. unobstructed) PAs is unknown, we investigated the treatment response in treated and untreated PAs, by analyzing CT Pulmonary Angiography (CTPA).
Methods and Materials: We studied 22 consecutive CTEPH patients (20 female; age: 67 ± 14), who underwent CTPA and right-heart catheterization (RHC), pre- and post-BPA. In consensus, three experts selected treated artery segments based on the BPA locations and approximately 5 untreated artery segments at a similar level. Post-BPA CTPA scans were registered to pre-BPA scans, and local intravascular density changes were measured. The median density change in treated (MDCT) and untreated segments (MDCU) was calculated based on manual selections. The difference between MDCU and MDCT was tested with a paired t-test. The difference in density changes (ΔMDC) between treated and untreated PAs was calculated (MDCT-MDCU). Changes in RHC parameters included systolic, diastolic and mean pulmonary artery pressure (ΔsPAP, ΔdPAP and ΔmPAP) and pulmonary vascular resistance (ΔPVR). The relation between hemodynamic changes and ΔMDC was studied with Spearman’s correlation.
Results: MDCT (51 ± 85 HU) and MDCU (-23 ± 103 HU) were significantly different and in opposite direction (p=0.001). ΔMDC was significantly correlated with ΔdPAP (R=-0.55, p=0.008) and ΔPVR (R=-0.47, p=0.026), and marginally correlated with ΔmPAP (R=-0.4, p=0.068).
Conclusion: Perfusion in treated PAs increased, whereas perfusion in untreated PAs decreased. Not only improved perfusion in treated arteries, but also normalization in untreated arteries may play a significant role in improving hemodynamics by BPA.
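The Spearman correlation used above can be sketched as a Pearson correlation of ranks. This is a minimal rank-based implementation without tie correction (our own sketch; real analyses would typically use scipy.stats.spearmanr):

```python
import numpy as np

def spearman_r(x, y):
    # Spearman rank correlation: Pearson correlation of the ranks.
    # No tie correction is applied, so this assumes all values are distinct.
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x (0-based)
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y (0-based)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))
```

A monotonically increasing relation gives +1, a monotonically decreasing one gives -1; the negative R values reported above thus indicate that larger density increases were associated with larger pressure and resistance decreases.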
@inproceedings{Zhai:2020,
author = {Zhai, Zhiwei and Ota, Hideki and Staring, Marius and Stolk, Jan and Sugimura, Koichiro and Takase, Kei and Stoel, Berend C.},
title = {Response to balloon pulmonary angioplasty in treated versus untreated pulmonary arteries in CTEPH patients},
booktitle = {European Congress of Radiology},
address = {Vienna, Austria},
month = mar,
year = {2020},
}
Non-rigid registration is essential for a wide range of clinical applications, such as intraoperative image guidance, postoperative follow-up assessment, and longitudinal image analysis for disease diagnosis and monitoring. Vascular structures are a rich descriptor of organ deformation, since the vasculature permeates all organs within the body. As vasculature differs in size, shape and topology following surgical intervention/treatment or due to disease progression, non-rigid vessel matching remains a challenging task. Recently, hybrid mixture models (HdMM) have been applied to tackle this challenge, demonstrating significant improvements in accuracy and robustness relative to the state-of-the-art. However, the smoothness constraint this approach enforces on the deformation field only accounts for the global topology of the vasculature, resulting in a reduced capacity to accurately match localized changes to vascular structures and preserve local topology. In this work, we propose a modified version of HdMM, henceforth referred to as HdMMad, that formulates an adaptive kernel to enforce a local smoothness constraint on the deformation field. The proposed HdMMad framework is evaluated on retrospectively acquired cerebral and pulmonary vasculature. The registration results for both data sets demonstrate that the proposed approach outperforms registration algorithms also designed to preserve local topology. Using HdMMad, around 80% of the initial registration error was reduced for both data sets.
@inproceedings{Bayer:2019,
author = {Bayer, S. and Zhai, Z. and Strumia, M. and Tong, X. and Gao, Y. and Staring, M. and Stoel, B.C. and Ostermeier, M. and Fahrig, R. and Nabavi, A. and Maier, A. and Ravikumar, N.},
title = {Local topology preservation for vascular centerline matching using a hybrid mixture model},
booktitle = {IEEE Nuclear Science Symposium and Medical Imaging Conference},
address = {Manchester, UK},
month = oct,
year = {2019},
}
Purpose/Introduction: Arterial spin labeling (ASL) is a non-invasive technique for acquiring quantitative measures of cerebral blood flow (CBF). Hadamard time-encoded (te-)pCASL allows time-efficient acquisition of dynamic ASL data, and when performed with and without flow-crushing, 4D MRA and arterial input function measurements can be obtained. While improving quantification, this approach is also a factor of two slower. In this study, we propose an end-to-end 3D convolutional neural network (CNN) to accelerate CBF quantification from sparse sampling (50%) of te-pCASL data acquired with and without flow crushers. For training and evaluation of the CNN, we propose a framework to simulate the te-pCASL signal.
Subjects and Methods: Fig. 1a shows the proposed framework for generating the training and validation data. The ASL signal is simulated by a tracer kinetic model of te-pCASL. This model is a function of arterial arrival time, tissue arrival time and blood flow, which were extracted from in vivo data and registered to BrainWeb scans. This study contains 1676 simulated subjects, each including crushed and non-crushed input data at 8 timepoints, and angiographic and perfusion output data at 7 timepoints. The proposed CNN, Fig. 1b, leverages design elements from DenseNet with loop connectivity patterns in a typical U-shape (∼400K parameters). We use a Huber loss function for training the network, with weights based on the gradient magnitudes from Fig. 2a. In order to manage memory usage, we utilize patch-based training. The simulated dataset was divided into 1174 subjects for training, 167 for validation, and 335 for testing. For augmenting the training data, noise, flipping and ±13° rotations were applied randomly. The network was trained for 30k iterations.
Results: Fig. 2b shows the network’s training loss for perfusion and angiography separately, as well as their combination. The average MSE for perfusion and angiography is 5.30±0.22 and 5.02±0.17, respectively. Fig. 3 shows the outputs of the network for some slices of perfusion/angiography at different timepoints. Due to the averaging property of the Huber loss function, the results suffer from over-smoothing. It takes an average of 0.12±0.18 s to reconstruct all perfusion and angiography scans from the sparsely sampled crushed/non-crushed data (size 101³).
Discussion/Conclusion: This study demonstrates that CNNs are a promising approach for reconstructing angiographic and perfusion images from sparsely sampled Hadamard ASL data. Next steps are the use of perceptual losses to improve the sharpness of the results, as well as validation on in vivo data.
@inproceedings{Yousefi:2019,
author = {Yousefi, Sahar and Sokooti, Hessam and Hirschler, L. and van der Plas, M. and Petitclerc, L. and Staring, Marius and van Osch, Mathias J.P.},
title = {Reconstruction of Dynamic Perfusion and Angiography Images from Sub-sampled Hadamard Time-encoded ASL Data using Deep Convolutional Neural Networks},
booktitle = {ESMRMB},
volume = {32},
number = {1},
pages = {S96 - S97},
month = oct,
year = {2019},
}
Purpose: To evaluate the feasibility of fiducial markers as a surrogate for GTV position in image-guided radiotherapy of rectal cancer.
Methods and Materials: We analyzed 35 fiducials in 19 rectal cancer patients who received short-course radiotherapy or long-course chemoradiotherapy (LC-CRT). Two MRI exams and daily pre- and post-irradiation CBCT scans were acquired in the first week of radiotherapy. Weekly CBCT scans were acquired thereafter for patients who received LC-CRT. Between the two MRI exams, the fiducial displacement relative to the center of gravity of the GTV (COGGTV) and the COGGTV displacement relative to bony anatomy were determined. Using the CBCT scans, inter- and intrafraction fiducial displacement relative to bony anatomy was determined.
Results: The systematic error of the fiducial displacement relative to the COGGTV was 2.8, 2.4 and 4.2 mm in the left-right (LR), anterior-posterior (AP) and craniocaudal (CC) directions, respectively. Large interfraction systematic errors of up to 8.0 mm and random errors of up to 4.7 mm were found for COGGTV and fiducial displacements relative to bony anatomy, mostly in the AP and CC directions. For tumors located in the mid and upper rectum these errors were 9.4 mm (systematic) and 5.6 mm (random), compared to 4.9 and 2.9 mm for tumors in the lower rectum. Systematic and random errors of the intrafraction fiducial displacement relative to bony anatomy were <2.1 mm in all directions.
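The systematic and random errors reported above are population statistics over per-patient displacement series. A common convention in the radiotherapy literature (assumed here; the abstract does not state its exact definitions) takes the systematic error as the SD of the per-patient mean displacements and the random error as the root-mean-square of the per-patient SDs:

```python
import numpy as np

def population_errors(per_patient_displacements):
    """Systematic (Sigma) and random (sigma) error for one direction
    (e.g. AP), given one displacement series per patient.

    Convention assumed here: Sigma = SD of per-patient means,
    sigma = RMS of per-patient SDs.
    """
    means = np.array([np.mean(d) for d in per_patient_displacements])
    sds = np.array([np.std(d, ddof=1) for d in per_patient_displacements])
    sigma_systematic = np.std(means, ddof=1)
    sigma_random = np.sqrt(np.mean(sds ** 2))
    return sigma_systematic, sigma_random
```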
Conclusions: Large interfraction errors of the COGGTV and the fiducials relative to bony anatomy were found. Therefore, despite the observed fiducial displacement relative to the COGGTV, the use of fiducials as a surrogate for GTV position reduces the required margins in the AP and CC direction for a GTV boost using image-guided radiotherapy of rectal cancer. This reduction may be larger in patients with tumors located in the mid- and upper rectum compared to the lower rectum.
@inproceedings{vandenEnde:2019,
author = {van den Ende, Roy P.J. and Kerkhof, Ellen M. and Rigter, L.S. and van Leerdam, M. and Peters, Femke P. and van Triest, B. and Staring, Marius and Marijnen, Corrie A.M. and van der Heide, Uulke A.},
title = {Feasibility of gold fiducial markers as a surrogate for GTV position in image-guided radiotherapy of rectal cancer},
booktitle = {American Association of Physicists in Medicine (AAPM)},
address = {San Antonio, TX, USA},
month = jul,
year = {2019},
}
Purpose: To develop a method to automatically adapt treatment plans in near real-time to the anatomy-of-the-day for prostate and cervical cancer.
Material / Methods: The starting point is a prior plan optimized on the planning CT. First, spot positions (Bragg peaks) from the prior plan are restored by adjusting the energy of each pencil beam to the water-equivalent path length in the daily CT. Subsequently, to compensate for deformations of the target and OARs, pencil beams are added, followed by a pencil-beam weight optimization using the Reference Point Method. This method generates a Pareto-optimal plan for the anatomy-of-the-day, with trade-offs similar to those in the prior plan. The method was evaluated using 8-10 daily CTs of 11 prostate cancer patients (88 CTs) and 3-4 daily CTs of 5 cervical cancer patients (19 CTs). Evaluation was done by comparing, for each CT, a full multi-criteria optimization without time constraints (benchmark) to the proposed method and to a forward dose calculation of the prior plan on each CT (no replanning).
Results: The figures show large dosimetric differences between no replanning and benchmark, while the differences between the proposed method and benchmark are substantially smaller. The use of replanning improved target coverage to clinically acceptable levels in 85/88 CTs and 19/19 CTs for prostate and cervix, respectively. All plans showed reduced OAR doses. Replanning took on average 2.9 and 3.6 minutes for prostate and cervix, respectively, using 50% for dose computation.
Conclusion: The automation and realized replanning times make the proposed method an important step towards real-time adaptive proton therapy.
@inproceedings{Jagt:2019a,
author = {Jagt, T. and Breedveld, S. and van Haveren, R. and Nout, R. and Astreinidou, E. and Staring, M. and Heijmen, B. and Hoogeman, M.},
title = {An automated replanning strategy for near real-time adaptive proton therapy},
booktitle = {Particle Therapy Co-operative Group},
address = {Manchester, UK},
month = jun,
year = {2019},
}
Purpose/objective: Intensity-modulated proton therapy (IMPT) is very sensitive to small daily density variations along the pencil-beam paths and to variations in target and OAR shapes. This makes IMPT particularly challenging for sites with large inter-fraction target deformations, such as in the treatment of cervical cancer. Online replanning is an option to achieve adequate target dose in each fraction. This study evaluates a novel approach employing a plan-library established pre-treatment as prior information in automated online replanning for IMPT of cervical cancer.
Material and Methods: CT data of 5 cervical cancer patients was available, comprising a full-bladder and an empty-bladder CT and 3-4 repeat CTs. The prescribed dose for the primary tumor and the pelvic ± para-aortic lymph nodes was 45 Gy. Pre-treatment plan-libraries were created to provide prior spot distributions for replanning on the repeat CTs. One consisted of two treatment plans, based on the full- and empty-bladder CTs with an 8 mm margin; the other consisted of a single treatment plan encompassing all target deformation observed in the full- and empty-bladder CTs with a 10 mm margin, i.e. a large ITV. In case of the 2-plan-library, the daily bladder volume was used to select the prior plan for replanning.
The reoptimization method starts with a spot-position (Bragg peak) restoration from the selected prior plan, adjusting the energy of each pencil beam to the water-equivalent path length in the repeat CT. To further compensate for deformations, new spots are added. The Reference Point Method (RPM) is then used to optimize the spot weights. The RPM was automatically tuned on benchmark plans of 4 CTs (i.e. optimized from scratch without time constraints) and results in a reoptimized Pareto-optimal plan for the new anatomy, with trade-offs similar to those in the benchmark plan. Replanning was performed for each repeat CT using tight margins of 5/2 mm (primary tumor/nodes), meant only to account for intra-fraction motion. The prior and reoptimized plans were evaluated on the repeat CTs using the 5/2 mm PTVs and compared to benchmark plans on the repeat CTs.
Results: Evaluating the prior plans on the repeat CTs without replanning resulted in V95%<95% in most CTs, with values down to 50% (see Fig 1). For both plan-library approaches, reoptimization increased the number of repeat CTs with adequate coverage (PTV V95%≥95% and V107%≤2%) from 2/19 to 19/19 CTs. Fig 2 shows the differences between the reoptimized and benchmark plans on the repeat CTs using the ITV or 2-plan-library as prior. Median improvements are seen up to 4.5%-point for bladder V30Gy when using the 2-plan-library instead of the ITV plan, with outliers up to 13.8%-point. Reoptimization took 3.6 min on average.
Conclusion: With fully automated replanning, adequate target coverage was restored for all CTs and OAR doses were decreased. The use of a 2-plan-library yielded lower OAR doses than a single ITV prior plan. With an average time of 3.6 minutes, this method is an important step towards online-adaptive IMPT for cervical cancer.
@inproceedings{Jagt:2019b,
author = {Jagt, Thyrza and Breedveld, Sebastiaan and van Haveren, Rens and Nout, Remi and Astreinidou, Eleftheria and Staring, Marius and Heijmen, Ben and Hoogeman, Mischa},
title = {Plan-library supported automated re-planning for online-adaptive IMPT of cervical cancer},
booktitle = {Radiotherapy and Oncology (ESTRO)},
volume = {133},
number = {Supplement 1},
pages = {S38 - S39},
month = apr,
year = {2019},
}
Purpose/objective: Online-adaptive radiotherapy holds the promise to mitigate daily anatomical uncertainties thereby increasing treatment precision. One of the major challenges here is the development and validation of sufficiently fast, accurate, and robust segmentation algorithms for target volumes and organs at risk. The purpose of this study is to improve contour propagation in the pelvic region by combining deep-learning based auto-segmentation with image registration and to validate it geometrically and dosimetrically for online-adaptive Proton Therapy (PT) for prostate cancer.
Material and Methods: The proposed registration pipeline registers the daily CT scan to the planning CT, based on a combination of image intensities and an automatic segmentation of the bladder from the daily CT scan, obtained using a novel 3D convolutional neural network. The bladder was chosen for its influence on prostate motion. Registration performance is further enhanced by digital inpainting of gas pockets with realistic bowel content, using a state-of-the-art generative adversarial network. We furthermore perform data normalization and, in relevant cases, digitally remove contrast agent from the bladder.
Evaluation was performed on CT data from 18 prostate cancer patients, each with 7 to 10 repeat CT scans. Manual delineations of the prostate, lymph nodes, seminal vesicles, bladder and rectum were available for evaluation. Geometric performance was quantified using the Mean Surface Distance (MSD). The pipeline was validated dosimetrically on 11 of the 18 patients by simulating an online-adaptive PT workflow based on the propagated contours. To this end, for each repeat CT, a treatment plan was generated based on the propagated contours and the plan was evaluated using the manual delineations. A dose of 74 Gy was assigned to the high-dose PTV (prostate) and 55 Gy to the low-dose PTV (lymph nodes and seminal vesicles). The generated treatment plans were considered clinically acceptable if dosimetric coverage constraints derived from the manual contours were met (PTV V95%≥98% and V107%≤2%).
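The Mean Surface Distance used for geometric evaluation can be computed from binary masks. A minimal sketch using scipy distance transforms follows; the study's exact surface definition and implementation are not specified, so the details here are assumptions:

```python
import numpy as np
from scipy import ndimage

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance (in mm, given voxel spacing)
    between two binary masks."""
    def surface(mask):
        # Surface voxels: mask voxels with at least one background neighbor.
        return mask & ~ndimage.binary_erosion(mask)

    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    # Euclidean distance of every voxel to the other contour's surface.
    dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    return (dist_to_b[sa].mean() + dist_to_a[sb].mean()) / 2.0
```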
Results: The proposed pipeline achieved an MSD of 1.29 ± 0.33, 1.44 ± 0.68, and 1.52 ± 0.45 mm for the prostate, seminal vesicles, and lymph nodes, respectively (Fig. 1). The propagated contours met the dose coverage constraints in 85%, 91%, and 99% of the cases for the prostate, seminal vesicles, and lymph nodes, respectively (Fig. 2). 78% of the cases met all constraints simultaneously, compared to 65% when using a standard registration approach. The average runtime of the proposed pipeline is 98 seconds per registration.
Conclusion: The proposed registration pipeline obtained highly promising results for generating treatment plans adapted to the daily anatomy. With 78% of the automatically generated treatment plans directly usable without manual correction, a substantial improvement in system robustness was reached compared to an existing approach. The proposed method therefore facilitates more precise PT of prostate cancer.
@inproceedings{Elmahdy:2019,
author = {Elmahdy, Mohamed S. and Jagt, Thyrza and Zinkstok, R. Th. and Marijnen, C.A.M. and Hoogeman, Mischa and Staring, Marius},
title = {Deep learning improves robustness of contour propagation for online adaptive IMPT of prostate cancer},
booktitle = {Radiotherapy and Oncology (ESTRO)},
volume = {133},
number = {Supplement 1},
pages = {S543-S544},
month = apr,
year = {2019},
}
Introduction: Image registration is a common preprocessing step in many medical image analysis applications. However, registration methods are implemented in a large variety of toolboxes and are rarely compared on the same datasets. This lack of standardization makes it challenging for end-users to select the right registration algorithm and hyperparameters for their application.
To standardize comparison, registration methods can enter a Grand Challenge (GC). GCs are competitions with standardized datasets, evaluation methods, and experimental setups that focus on specific research topics. The experiments are run by third parties, which ensures fair, independent evaluations. However, most modern GCs are static, one-time events that allow closed-source contributions. This hampers collaboration and reproducibility.
Methods: To address these limitations, and inspired by modern software development practices, we proposed the Continuous Registration Challenge (CRC). For this challenge, we developed a fully automatic platform for benchmarking registration methods on many different datasets. The platform consists of a C++ API for running registrations, a Python framework and a Continuous Integration system for running experiments, a compute backend, and a website with public leaderboards. The platform and all submissions are open source. The C++ API, SuperElastix, was designed with a role-based architecture that allows many different registration paradigms to co-exist in the same framework. The challenge focuses on pairwise registration of lungs and brains, two problems frequently encountered in clinical settings.
Results: The system described above is used for the Continuous Registration Challenge. The system allows researchers to test their methods in a standardized way, using a fully automatic experimental setup. The experiments are running every week, and participants can follow the leaderboards, which are updated every weekend (https://continuousregistration.grand-challenge.org/leaderboard/). The leaderboards will continue to be updated and the repository will be open for contributions even after the challenge ends. The results will be presented and discussed at the Workshop On Biomedical Image Registration (WBIR 2018, https://wbir2018.nl/). All participants are invited to collaborate on a paper which we plan to submit to a leading journal in the field.
Conclusion: We present an open source framework for the continuous and automated benchmarking of image registration algorithms.
@inproceedings{Marstal:2018,
author = {Marstal, Kasper and Berendsen, Floris and Dekker, Niels and Staring, Marius and Klein, Stefan},
title = {The Continuous Registration Challenge},
booktitle = {8th International Workshop on Biomedical Image Registration},
address = {Leiden, The Netherlands},
month = jun,
year = {2018},
}
Purpose/objective: In rectal cancer patients with a complete clinical response, an organ-preservation strategy seems safe. Dose-response analyses suggest that higher tumor doses result in higher complete response rates. The tumor dose can be increased by applying a boost with external-beam radiotherapy, endorectal brachytherapy or contact therapy. With position verification using CT, CBCT or a radiograph, the tumor position itself is difficult to verify due to limited soft-tissue contrast. Fiducial markers can be used as a surrogate for tumor position, after their position relative to the tumor has been established on MRI. The aim of this study was to evaluate the MRI visibility of different gold fiducial markers implanted in the tumor, rectal wall or mesorectum.
Material/methods: We included 20 rectal cancer patients who received neoadjuvant (chemo)radiotherapy. Three or four markers were inserted in the tumor, rectal wall or mesorectum by sigmoidoscopy or endoscopic ultrasonography. We tested 4 marker types (Visicoil 0.5x5 mm and 0.75x5 mm [IBA Dosimetry GmbH, Germany], Cook 0.64x3.4 mm [Cook Medical, Limerick, Ireland] and Gold Anchor 0.28x20 mm [Naslund Medical AB, Sweden]), each placed in 5 patients. Two radiologists and two radiation oncologists were blinded to marker type and identified marker locations on MRI in two scenarios: without (scenario A) and with (scenario B) a rigidly registered CT or CBCT with markers available to aid in identifying the marker locations on MRI. The included MRI sequences were a transverse and a sagittal T2-TSE, a T1 3D with short TE (1.6-2.5 ms), a T1 3D with long TE (5-15 ms) and a transverse B0 map. Observers labeled marker positions on the sequence on which the marker could be identified most accurately. In addition, the observers graded the visibility of each identified marker on each sequence (0=not visible, 1=poor/average, 2=good/excellent). A marker was defined as consistently identified if at least three observers labeled that marker at the same position on MRI.
Results: Of the 64 inserted markers, 41 were still present at the time of MRI as determined on corresponding CT or CBCT. Table 1 summarizes the results for scenario B. The Gold Anchor marker was the most consistently identified marker (9 out of 12). In comparison, in scenario A only 4 out of 12 present Gold Anchor markers were consistently identified. The consistently identified Gold Anchor markers were best visible on the T1 3D (long TE) sequence (86% good/excellent) and 73% were labeled on that sequence. The markers were least visible on both T2-TSE sequences (43-46% good/excellent). Examples of the Gold Anchor marker on the different MRI sequences are shown in Figure 1.
Conclusion: The Gold Anchor marker was the best visible marker on MRI, as it was the most consistently identified marker. The use of a rigidly registered CT or CBCT improves marker identification on MRI. Standard anatomical MRI sequences are not sufficient to identify markers; it is therefore recommended to include a T1 3D (long TE) sequence.
@inproceedings{vandenEnde:2018,
author = {van den Ende, R.P.J. and Rigter, L.S. and Kerkhof, E.M. and van Persijn van Meerten, E.L. and Rijkmans, E.C. and Lambregts, D.M.J. and van Triest, B. and van Leerdam, M.E. and Staring, M. and Marijnen, C.A.M. and van der Heide, U.A.},
title = {EP-2115: MRI visibility of gold fiducial markers for image-guided radiotherapy for rectal cancer},
booktitle = {Radiotherapy and Oncology},
volume = {127},
number = {Supplement 1},
pages = {S1163 - S1164},
month = apr,
year = {2018},
}
Purpose/objective: Prompt gamma (PG) emission profiles can be used to determine the proton range in patients, but studies on the correlation between PG measurements and relevant dosimetric parameters are mostly lacking. The aim of this study was to investigate the feasibility of using PG emission profiles to monitor dosimetric changes in pencil beam scanning (PBS) proton therapy as a result of day-to-day variation in patient anatomy.
Material/methods: We included 11 prostate patients with a planning CT scan and 7-9 repeat CT scans (99 CT scans in total), illustrating daily variation in patient anatomy. For each patient, we had a PBS treatment plan with two lateral fields. We determined the real-time PG emission profiles on a cylindrical surface around the patient by simulating each plan on the planning CT and on the repeat CT scans of each patient using the Geant4-based TOPAS Monte Carlo code. The scored (i.e. detected) PGs were discriminated on the basis of energy (E ≥ 1 MeV) and angle of incidence (87° ≤ θ ≤ 93°) so as to select PGs perpendicular to the treatment beam. The treatment plans consisted of a mean of 1417 spots and the PGs were scored for each spot individually.
From the planned and simulated dose distributions, we determined the V95% of the GTV and the Dmean and V60Gy of the rectum. Next, the PG profiles that corresponded with the 5% most intense spots (i.e. with the highest number of protons) were selected. We fitted sigmoid functions to the falloff region of all selected PG emission profiles and used the 50% point of the sigmoid curve (X50) as a measure for the falloff location (which is known to correlate strongly with the Bragg peak location of the corresponding spot). We used the distribution of the absolute differences between the X50 (|ΔX50|) of all selected spots simulated using the planning CT scan and the repeat CT scans for each patient as a measure of similarity between simulations. To evaluate the validity of using |ΔX50|, we determined Pearson correlation coefficients (r) between the mean and standard deviation (SD) of |ΔX50| and dosimetric differences between simulations.
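The falloff fit described above can be sketched as follows: a sigmoid is fitted to a profile's distal region and its 50% point is returned as X50. The parameterization and initial guesses below are illustrative assumptions, not the study's exact fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x50, slope, amplitude, offset):
    """Falling sigmoid modeling the distal falloff of a PG profile."""
    return offset + amplitude / (1.0 + np.exp(slope * (x - x50)))

def fit_x50(depth, profile):
    """Fit the falloff region and return the 50% point X50.

    Initial guesses are simple heuristics (an assumption of this sketch):
    start x50 where the profile is closest to half its maximum.
    """
    p0 = [depth[np.argmin(np.abs(profile - profile.max() / 2.0))],
          1.0, profile.max(), profile.min()]
    popt, _ = curve_fit(sigmoid, depth, profile, p0=p0, maxfev=5000)
    return popt[0]
```

Per-spot |ΔX50| values are then the absolute differences between X50 fitted on the planning-CT and repeat-CT simulations.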
Results: Figure 1 illustrates dosimetric differences due to anatomical changes. An increase in Dmean and V60Gy of the rectum of up to 16.0 Gy and 13.6%-point, respectively, and a decrease in V95% of the GTV of up to 20.7%-point, were observed. Measurable correlations were observed between the change in V95% when simulating the treatment plan on the repeat CT scans and the mean |ΔX50| (|r|≥0.51 for 6 out of 11 patients; mean |r| of 0.56 (SD: 0.29)). In addition, the SD of |ΔX50| appears to be a potential predictor for a change in Dmean of the rectum (|r|≥0.58 for 6 patients; mean |r| of 0.46 (SD: 0.29)) (Figure 2). No significant predictor was found for V60Gy due to the small mean difference between simulations.
Conclusion: These promising results show, as a proof of principle, that PG emission profiles can be used to monitor daily dosimetric changes in proton therapy as a result of day-to-day anatomical variation.
@inproceedings{Lens:2018,
author = {Lens, Eelco and Jagt, Thyrza and Hoogeman, Mischa and Staring, Marius and Schaart, Dennis R.},
title = {OC-0082: Using prompt gamma emission profiles to monitor day-to-day dosimetric changes in proton therapy},
booktitle = {Radiotherapy and Oncology},
volume = {127},
number = {Supplement 1},
pages = {S40 - S41},
month = apr,
year = {2018},
}
Purpose: Balloon pulmonary angioplasty (BPA) in patients with inoperable chronic thromboembolic pulmonary hypertension (CTEPH) can have variable outcomes. To gain more insight into this variation, we aimed to visualize and quantify changes in lung perfusion using CT pulmonary angiography (CTPA). We validated these measurements of perfusional changes against hemodynamic changes measured during right-heart catheterization.
Materials and Methods: We studied 14 consecutive CTEPH patients (12 female; age: 65±17), who underwent CTPA and right-heart catheterization before and after BPA. Post-treatment images were registered to pre-treatment CT scans (using the elastix toolbox) to obtain corresponding locations. Pulmonary vascular trees and their centerlines were detected using a graph-cuts method and a distance transform. Areas distal from vessels were defined for measuring perfusional changes in the parenchyma. Subsequently, the density changes within the vascular and parenchymal areas were calculated, corrected for inspiration-level differences, and displayed in color-coded overlays. For quantification, the median and inter-quartile range (IQR) of the density changes were calculated in the vascular and parenchymal areas (ΔVD and ΔPD, respectively). The recorded changes in hemodynamic parameters included changes in systolic, diastolic and mean pulmonary artery pressure (ΔsPAP, ΔdPAP and ΔmPAP, respectively) and in vascular resistance (ΔPVR). Spearman correlation coefficients between perfusional and hemodynamic changes were computed.
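The rank correlations between perfusional and hemodynamic changes can be computed with scipy's `spearmanr`; the per-patient values below are hypothetical, purely to illustrate the computation, and are not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-patient values (NOT the study's data): IQR of vascular
# density change and change in pulmonary vascular resistance.
delta_vd_iqr = np.array([12.0, 8.5, 20.1, 5.2, 15.3, 9.9, 18.0])
delta_pvr = np.array([-3.1, -1.2, -6.5, -0.4, -4.8, -2.0, -5.9])

rho, p_value = spearmanr(delta_vd_iqr, delta_pvr)
# These example values are perfectly rank-inverted, so rho is -1.
```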
Results: PAP and PVR were significantly improved after BPA. Comparative imaging maps showed distinct patterns in perfusional changes between patients. Within vessels, the IQR of ΔVD correlated with ΔsPAP (R=-0.58, p=0.03), ΔdPAP (R=-0.71, p=0.005), ΔmPAP (R=-0.71, p=0.005) and ΔPVR (R=-0.77, p=0.001, see Figure). In the parenchyma, the median of ΔPD correlated with ΔdPAP (R=-0.71, p=0.005) and ΔmPAP (R=-0.68, p =0.008).
Conclusion: Comparative imaging in CTEPH patients offers insight into differences in BPA treatment effect. Quantification of these perfusional changes provides non-invasive measures that reflect hemodynamic changes.
@inproceedings{Zhai:2017,
author = {Zhai, Zhiwei and Ota, Hideki and Staring, Marius and Stolk, Jan and Sugimura, Koichiro and Takase, Kei and Stoel, Berend C.},
title = {Treatment Effect of Balloon Pulmonary Angioplasty in CTEPH Quantified by Automatic Comparative Imaging in CTPA},
booktitle = {RSNA},
address = {Chicago, USA},
month = nov,
year = {2017},
}
Purpose: To monitor tumor changes in rectal cancer patients undergoing sequential MRI during radiotherapy, deformable image registration (DIR) is required. However, applying DIR may result in unrealistic tissue expansion or folding, particularly in the case of tumor shrinkage. We developed and evaluated two registration approaches for T2-weighted (T2W) MRI for spatial mapping of the tumor and mesorectum.
Methods and Materials: Thirteen patients received weekly repeated T2W-MRI (3T) for five weeks (n=62 scans). Both approaches were implemented in elastix, using a B-spline transformation and mutual information as the similarity metric. For registration of non-tumor structures in the mesorectum, manually selected landmarks and/or rectum contours were used to refine the registration when needed. For tumor mapping, the transit point from tumor to normal rectal wall was used for tumor alignment, assuming tumor volume preservation.
Results: Registration of non-tumor structures resulted in an average Dice similarity coefficient (DSC) of 90±6% and a mean surface distance (MSD) of 1.53±0.48 mm between mesorectum segmentations. Refinement using landmarks was required for one patient, improving the MSD from 18.0 to 1.27 mm and the DSC from 62% to 96%. Refinement using rectum contours was required for another patient, improving the MSD from 3.0 to 2.7 mm. Tumor mapping resulted in satisfactory alignment when evaluated visually. For an average-sized tumor (~30 cc) with 44% volume shrinkage, the MSD was 1.5 mm between rectum delineations.
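The Dice similarity coefficient used above can be computed directly from binary masks; a minimal sketch (the study's exact implementation is not specified):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient 2|A∩B| / (|A|+|B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks overlap perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom
```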
Conclusion: We obtained accurate registrations between T2W-MRIs over time within the mesorectum. This can be used for monitoring tumor response and toxicity of normal tissue in radiotherapy of rectal tumors.
@inproceedings{BaniYassien:2017,
author = {Bani Yassien, Ahmed and Gobadi, Ghazaleh and Staring, Marius and van Triest, Baukelien and Lambregts, Doenja and Betgen, Anja and Marijnen, Corrie A.M. and van der Heide, Uulke A.},
title = {T2-weighted MR image registration of rectal tumours and mesorectum during radiotherapy},
booktitle = {European Congress of Radiology},
address = {Vienna, Austria},
month = mar,
year = {2017},
}
Population-based whole-body MR (WB-MR) screening studies for the assessment of arterial diseases are widely conducted nowadays. With large population studies comes the mammoth task of manual annotation and analysis. A common practice for diagnosing arterial trees on WB-MR is visual inspection of maximum intensity projection (MIP) images. By performing the analysis on MIP images, the full potential of WB-MR scans is not utilized. A method for automatic extraction of the 3D vascular tree would therefore be very useful for arterial disease analysis and quantification.
@inproceedings{Shahzad:2016,
author = {Shahzad, Rahil and Dzyubachyk, Oleh and Staring, Marius and Lelieveldt, Boudewijn P.F. and van der Geest, Rob J.},
title = {Framework For Automatic Labelling Of Arterial Tree From WB-MRA},
booktitle = {IEEE International Symposium on Biomedical Imaging (ISBI)},
editor = {Kybic, Jan and Sonka, Milan},
address = {Prague, Czech Republic},
month = apr,
year = {2016},
}
Introduction: Longitudinal studies on brain pathology rely on a steady-state adult brain to exclude confounds of brain development. Thus, knowledge of when adulthood is reached is indispensable to discriminate between the developmental phase and adulthood. We are conducting a lifespan study on rats, imaging juvenile development as well as ageing processes of the brain with noninvasive techniques including functional and anatomical MRI and different PET tracers. Here we present a high-resolution longitudinal MRI study at 11.7T of male Wistar rats between 21 days and six months of age, characterizing cerebral volume changes and tissue-specific myelination as a function of age.
Methods: Twelve Wistar rats, housed pairwise in cages, were used in MRI experiments from the age of three weeks up to three or six months, respectively. MR experiments were conducted on an 11.7T Bruker BioSpec scanner (Bruker Biospin, Germany). Animals were anaesthetized using 2% isoflurane in 70:30 N2O:O2 and vital functions were monitored continuously. T2 maps were chosen for their anatomical detail and quantitative reproducibility; diffusion tensor imaging for information on tissue anisotropy and myelination. Maps were calculated from a multi-slice multi-echo sequence (10 echoes) with TE=10 ms, TR=5000 ms, an in-plane resolution of 0.146x0.146 mm and 0.5 mm slice thickness (no gaps). For all individuals, T2 maps from the different ages were coregistered non-rigidly using a B-spline transform (1), and the corresponding deformation fields were calculated. Volumetric changes of brain regions were derived from the deformation fields. At three weeks, three and six months, a subset of four rats was sacrificed for histological evaluation using Cresyl Violet and Black and Gold II staining for myelin.
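T2 maps from such a multi-echo acquisition are typically obtained by fitting a mono-exponential decay per voxel; a minimal log-linear sketch (the actual map-calculation routine used in the study may differ):

```python
import numpy as np

def fit_t2(te, signal):
    """Fit S(TE) = S0 * exp(-TE / T2) by linear regression on log(S).

    `te` and `signal` are 1D arrays over one voxel's echo train.
    Returns (T2, S0); assumes strictly positive, low-noise signals.
    """
    slope, intercept = np.polyfit(te, np.log(signal), 1)
    return -1.0 / slope, np.exp(intercept)
```

Applied voxel-wise over a 10-echo train (TE = 10, 20, ..., 100 ms), this yields the T2 value at each voxel, which is then assembled into the T2 map.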
Results: Cortical thickness reaches its final value at 1 month, but volume increases of the cortex, the striatum and the whole brain end only after two months. Myelin accretion is pronounced until the end of the third month. Continuing increases in cortical myelination are seen on histological analysis, but are no longer reliably detectable with diffusion-weighted MRI due to parallel tissue restructuring processes. The combination of reduced T2 and decreased cortical volume implies an increasing tissue density (myelination and cell number) during development, lowering the relative amount of free water and thus reducing the cortical volume. This was confirmed by histological evaluation.
Conclusions: In conclusion, cerebral development continues over the first three months of age. This is of relevance for future studies on brain disease models, which should not start before the end of month 3 to exclude the serious confound of continuing tissue development.
Acknowledgement: This work was financially supported by grants from BMBF (0314104) and the EU-FP7 program TargetBraIn (HEALTH-F2-2012-279017).
References: (1) Klein, Staring et al., "elastix: a toolbox for intensity-based medical image registration", IEEE Transactions on Medical Imaging, 2010.
@inproceedings{Mengler:2013,
author = {Mengler, Luam and Khmelinskii, Artem and Diedenhofen, Michael and Po, Chrystelle and Staring, Marius and Lelieveldt, Boudewijn P.F. and Hoehn, Mathias},
title = {When is a young rat brain adult? Volume and myelination changes in cortex and striatum},
booktitle = {European Molecular Imaging Meeting},
address = {Torino, Italy},
month = may,
year = {2013},
}
Introduction: The detailed assessment of atherosclerosis of the carotid artery requires high-resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. During scanning, patient movement may cause misalignment between the sequences. Registration is required to correct for these misalignments. Manual alignment is the current procedure, but is time-consuming; therefore, automatic image registration has been applied in a number of studies. In previous work, automatic registration was performed only in 2D [1-3], while patient movement occurs in 3D, and no quantitative validation was performed [2-4]. The purpose of this study was 1) to develop an optimal 3D automatic registration method, and 2) to perform a quantitative validation of the registration results against a gold standard.
Methods and Materials: Images from fifty TIA/stroke patients with ipsilateral <70% carotid artery stenosis, with adequately visualized vessel wall boundaries and no imaging artifacts, were randomly selected from a larger cohort. MR images were obtained on a 1.5T scanner using a dedicated carotid surface coil (both Philips Healthcare). Five MR pulse sequences were obtained around the carotid bifurcation, each containing nine transverse slices: T1W TFE, TOF, T2W TSE, and pre- and post-contrast T1W TSE as described in [5]. Acquired pixel size was 0.39x0.39 mm2 and slice thickness was 3.0 mm. All images were reconstructed to have an in-plane pixel size of 0.2x0.2 mm2. The data was manually segmented by delineating the lumen contour in each vessel wall sequence, which acted as the ground truth. In this study the pre-contrast T1W TSE was used as the reference sequence. In addition, an expert manually aligned the images to this reference by applying an in-plane translation to each image slice.
Automatic image registrations using various transforms were investigated: 2D translation per image slice, a 3D rigid transform, a 3D affine transform and a 3D B-spline transform. To prevent the automatic registration from aligning the studies to the dominant neck-air boundary, a circular image mask centered over the lumen was used. Mutual information was chosen as the image similarity metric, tri-linear interpolation as the interpolator, and stochastic gradient descent as the optimizer. In all cases various mask sizes were tested to obtain the optimal mask size. Similarly, for the 3D B-spline transform, different B-spline grid sizes were tested. The different registration methods were applied using the publicly available elastix software [6].
Evaluation of the automatic registration was performed by comparing the lumen segmentation of the reference image after registration with the lumen segmentation of the moving image. To compare the results of 2D and 3D registrations, all contours were converted into 3D tubular meshes and the distance between the tubular meshes was used to quantify the accuracy. A smaller distance indicates a better registration result.
Results: The box plot in Figure 1 shows the distribution of the residual registration distance. The first box shows the distance without applying any form of registration. The second box shows the results after manual registration by the expert. The remaining columns show the different automatic registration strategies. The performance of the 3D B-spline registration is very close to the manual alignment. Paired t-tests showed no significant difference between the manual alignment and the 3D B-spline registration (at the 0.05 significance level). The optimal mask size was a circle with a diameter of 1 cm. The optimal grid size for the 3D B-spline registration was 15 mm.
Discussion and Conclusion: Different methods for the automatic registration of multispectral MR vessel wall images of the carotid artery were investigated and quantitatively validated. Registration using a 3D B-spline transform performed equally well as the manual alignment by an expert, with a final residual distance on the order of the in-plane voxel size. These results show that automatic image registration can replace manual alignment, reducing the time needed to analyze carotid vessel wall images.
References and Acknowledgements:
[1] Biasiolli, L. et al., Proc. SPIE, 2010;7623;76232N.
[2] Hofman, J.M. et al., Magn Reson Med, 2006;55(4):790-9.
[3] Liu, F. et al., Magn Reson Med, 2006;55(3):659-68.
[4] Fei, B. et al., Proc. IEEE EMBS, 2003; 646 - 648.
[5] Kwee, R.M. et al., PLoS One, 2011;6(2).
[6] Klein, S. et al., IEEE Transactions on Medical Imaging, 2010;29(1):196-205.
This research was supported by the Center for Translational Molecular Medicine and the Dutch Heart Foundation (PARISk).
@inproceedings{vantKlooster:2012,
author = {van 't Klooster, Ronald and Staring, Marius and Klein, Stefan and Kwee, Robert M. and Kooi, M. Eline and Reiber, Johan H.C and Lelieveldt, Boudewijn P.F. and van der Geest, Rob J.},
title = {Automatic Registration of Multispectral MR Vessel Wall Images of the Carotid Artery},
booktitle = {International Society for Magnetic Resonance in Medicine},
address = {Melbourne, Australia},
month = may,
year = {2012},
}
Introduction: In pre-clinical research, the combination of structural (MRI, CT, ultrasound), functional (PET, SPECT, specialized MRI protocols) and optical (BLI, FLI) imaging modalities enables longitudinal and cross-sectional studies in living organisms.
The goal of this work is to develop software for interactive exploration of heterogeneous multi-modal data in follow-up studies.
Methods: To enable comparison and integration of follow-up multimodal data, image registration is required, where we differentiate inter-modal registration and intra-modal registration to detect changes over time.
To combine different modalities, rigid registration is used to compensate for any rotation or translation between datasets. To detect changes in different regions of the brain over the life-cycle (deformation, tumor growth, etc.), the different time-points are registered elastically/non-rigidly to each other. For each elastic registration, the deformation field and the corresponding determinant of the Jacobian (detJac) are calculated. When comparing two time-points, the information provided by the deformation field can be used: without distorting the original data, one can automatically pinpoint the exact same region/voxel in both datasets and see what deformation the brain underwent from one time-point to the other in all directions. By analyzing the detJac, one can determine whether a specific region of the brain underwent local compression or expansion.
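The detJac computation can be illustrated with a minimal numpy sketch (a hypothetical example on a dense 2D deformation field, not the elastix implementation used in this work; detJac = 1 means volume preservation, > 1 local expansion, < 1 local compression):

```python
import numpy as np

def jacobian_determinant_2d(def_field):
    """Determinant of the spatial Jacobian of a 2D deformation field.

    def_field has shape (2, H, W) and stores the mapped (y, x)
    coordinate for every voxel. Finite differences (np.gradient)
    approximate the partial derivatives of the mapping.
    """
    dy_dy, dy_dx = np.gradient(def_field[0])
    dx_dy, dx_dx = np.gradient(def_field[1])
    return dy_dy * dx_dx - dy_dx * dx_dy

# identity mapping: no deformation, so detJac is 1 everywhere
yy, xx = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
identity = np.stack([yy, xx]).astype(float)

# uniform 10% expansion in both directions: detJac = 1.1 * 1.1 = 1.21
expanded = 1.1 * identity
```

In the real pipeline the deformation field comes from the non-rigid registration itself; this sketch only shows how local expansion and compression are read off from its Jacobian.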
With the registration results for any possible combination of data at hand, one can easily choose what to visualize and compare side by side: same subject-same modality-different time-points, different subjects-same modality-same time-points, same subject-different modalities-different time-points, different subjects-different modalities-same time-point, etc.
In the proposed method the registration was performed using elastix [1] and the visualization interface was built using MeVisLab.
Results: The proposed approach was first tested using multi-modal follow-up data of 1 male Wistar rat (Harlan-Winkelmann), which was scanned repeatedly (every 2 months from 10 to 20 months of age) under 2% isoflurane anesthesia using a horizontal bore 11.7T Bruker BioSpec MRI scanner. Diffusion tensor imaging was used to calculate fractional anisotropy, mean diffusivity and eigenvalue maps; a multislice multiecho experiment was performed to calculate T2 maps; both datasets were acquired with identical geometry and spatial resolution. T2* maps were acquired with a multi-gradient-echo sequence, and angiography scans with a FLASH-2D TOF sequence with or without saturation of venous blood.
The system was used to identify and follow over time a spontaneous brain tumor, later identified ex vivo as a meningioma. The automatic linking of the same ROI/voxel in non-rigidly registered datasets and the use of the detJac to search for asymmetries in brain deformation allowed for a more accurate comparison of follow-up data.
Conclusions: In this work we describe the first step taken towards an interactive and intuitive-to-use exploration system for multi-modal longitudinal and cross-sectional studies. In the future, quantification tools will be added to the platform and an intensive validation will be performed with multi-modal life-span rat brain data.
References:
[1] Klein & Staring et al., "elastix: a toolbox for intensity based medical image registration," IEEE-TMI, 2010.
@inproceedings{Khmelinskii:2011,
author = {Khmelinskii, Artem and Mengler, Luam and Kitslaar, Pieter and Staring, Marius and Po, Chrystelle and Reiber, Johan H.C. and Hoehn, Mathias and Lelieveldt, Boudewijn P.F.},
title = {Interactive system for exploration of multi-modal rat brain data},
booktitle = {European Molecular Imaging Meeting},
address = {Leiden, The Netherlands},
pages = {122},
month = jun,
year = {2011},
}
Introduction: Animal models can provide mechanistic insight into disease pathology and evolution, as well as a platform to test potential therapies. However, most pre-clinical research employs juvenile, 3-month-old (300 g) animals, which is not necessarily representative of the natural disease state.
We are conducting a lifespan study on rats, imaging the juvenile development as well as ageing processes of the brain with non-invasive techniques including functional and anatomical MRI and different PET-tracers, respectively.
An explorative study like this benefits from an automated evaluation technique that is based on individual structural changes rather than manual ROI analysis.
To this end, we used elastic coregistration and individual deformation fields to assess changes in the frontal cortex from MR images.
Material and Methods: From postnatal day 21, two groups of four male Wistar rats were housed pairwise. Food restriction (80% of ad libitum consumption) started at month 3 in order to minimize obesity, a risk factor for age-related diseases [1].
Animals were employed in MR experiments on a bimonthly basis. Group 1 was followed from the age of 3 weeks up to 14 months, Group 2 from 10 until 20 months. MR experiments were conducted on an 11.7T Bruker BioSpec horizontal bore scanner. Animals were anaesthetized using 2% isoflurane in 70:30 N2O:O2 and vital functions were monitored continuously.
T2 maps were chosen for their anatomical detail and quantitative reproducibility. Maps were calculated from an MSME (multi slice multi echo) sequence (10 echoes) with TE=10ms, TR=5000ms, FOV=28x28mm and a resolution of 0.146x0.146 mm in plane and 0.5 mm slice thickness (without gaps). For every individual all T2 maps from the different ages were coregistered non-rigidly using a B-spline transform [2], and the corresponding deformation fields calculated. Deformation maps indicated volumetric changes of brain regions.
At specific time points (3 weeks and 3 months) a subset of rats was sacrificed for histological evaluation (cresyl violet).
Results: The deformation maps revealed a decrease in cortical thickness during juvenile development (3 weeks to 3 months). In parallel, quantitative evaluation of the frontal cortex showed a reduction of T2 relaxation time.
A further T2 reduction was observed after the age of 6 months, however, without significant changes in volume. Preliminary histological evaluation revealed a higher cortical cell density at 3 months when compared to 3 weeks of age.
Conclusions: Elastic coregistration is a useful tool for lifespan studies, providing unbiased information on volume changes on an individual basis. The deformation fields allow the creation of physiologically meaningful ROIs for quantitative analysis of imaging parameters.
A combination of reduced T2 and decreased cortical volume is implying an increasing tissue density (myelination and cell number) during development, lowering the relative amount of free water, and thus reducing the cortical volume. This was confirmed by histological evaluation.
Acknowledgements: This work was financially supported by BMBF (0314104) and ENCITE EU-FP7 (HEALTH-F5-2008-201842) program.
References:
[1] Mattson and Wan, Beneficial effect of intermittent fasting and caloric restriction on the cardiovascular and cerebrovascular systems, Journal of Nutritional Biochemistry, 16 (129-137), 2005.
[2] Klein & Staring et al., "elastix: a toolbox for intensity based medical image registration," IEEE-TMI, 2010.
@inproceedings{Mengler:2011,
author = {Mengler, Luam and Khmelinskii, Artem and Po, Chrystelle and Staring, Marius and Reiber, Johan H.C. and Lelieveldt, Boudewijn P.F. and Hoehn, Mathias},
title = {Juvenile development and ageing mediated changes in cortical structure and volume in the rat brain},
booktitle = {European Molecular Imaging Meeting},
address = {Leiden, The Netherlands},
pages = {160},
month = jun,
year = {2011},
}
Introduction: Whole lung densitometry on chest CT images is a clinically accepted method for measuring tissue destruction in pulmonary emphysema. Assessment of the effect of new drugs for emphysema warrants local quantification of changes in emphysema severity.
Objectives: To develop methods to locally evaluate emphysema progression.
Methods: Methods are based on matching follow-up chest CT scans using an intensity-based image registration technique, followed by subtracting baseline from follow-up images to estimate progression, while taking lung volume change into account. The first method assumes that lung mass is preserved and the second method is based on the observation that the volume-density slope is not necessarily fixed. The latter requires a third CT scan at a different inspiration level to estimate this slope locally. In a pilot study, both methods were applied to a lung phantom, where mass is known to be constant. Additionally, emphysema progression of three patients was graded visually by an MD, separately for apex, middle region and lung base, and these results were compared to the automatic methods.
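The first model, which assumes preserved lung mass, can be sketched as follows (a hypothetical minimal illustration, not the authors' code; it assumes registered densities in HU and segmented lung volumes as inputs):

```python
def sponge_corrected_progression(hu_baseline, hu_followup, v_baseline, v_followup):
    """Local emphysema progression under the mass-preservation assumption.

    Shifting HU by +1000 makes the value proportional to local tissue
    density (air ~ 0, water ~ 1000). If the lung inflates from v_baseline
    to v_followup while tissue mass is preserved, the expected density
    scales with the inverse volume ratio; the residual difference between
    the measured and expected follow-up HU is attributed to progression.
    """
    expected_hu = (hu_baseline + 1000.0) * (v_baseline / v_followup) - 1000.0
    return hu_followup - expected_hu
```

With equal lung volumes this reduces to a plain subtraction of the registered images; the second model replaces the fixed mass-preservation assumption with a locally estimated volume-density slope from a third scan.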
Results: Both methods were able to reproduce the expected absence of progression in the phantom, the second method having the lowest error (mean error: 3.1 ± 5.8 HU and -0.2 ± 4.0 HU). Both methods showed good correspondence with the visual assessment, the second method being slightly better.
Conclusions: Image matching and the subsequent analysis of differences according to the proposed lung models is a potential tool for localizing emphysema progression in drug evaluation trials.
@inproceedings{Staring:2009,
author = {Staring, Marius and Stolk, Jan and Bakker, M. Els and Shamonin, Denis and Reiber, Hans and Stoel, Berend C.},
title = {Local progression estimation of pulmonary emphysema using image matching},
booktitle = {European Respiratory Society Annual Congress},
address = {Vienna, Austria},
month = sep,
year = {2009},
}
Purpose To demonstrate that image subtraction improves detection of change in pulmonary ground glass nodules identified on chest CT.
Methods We recruited 33 participants with 37 ground glass nodules from a lung cancer screening trial. Each participant had at least one follow-up scan (86 scans total; 2 to 4 scans per participant). Pairs of scans of the same nodule were presented in random order, and 4 observers with varying experience in chest CT were asked to rate growth and density change between the two images (increase, no change, decrease). The experiment was repeated with a new random sequence, where additionally subtraction images (after data registration) were provided for each pair of nodules. An experienced chest radiologist established a reference standard using all available information. Weighted kappa statistics kw were used to assess inter-observer agreement and agreement with the reference standard.
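For reference, a linearly weighted kappa can be computed as in this minimal sketch (an illustration of the statistic itself, not the software used in the study):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Cohen's weighted kappa for two raters on an ordinal scale 0..n_cat-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()                                  # observed proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance-expected proportions
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

Linear weights penalize disagreement proportionally to its distance on the ordinal scale (here: increase, no change, decrease), so near-misses count less than opposite ratings.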
Results The reference standard established a regression over time in 5/37 ground glass nodules and no change in 16/37 nodules. In 16/37 nodules the size increased, and in 8/16 nodules density increased as well. When the subtraction image was available, average inter-observer kw improved from 0.46 to 0.53 for size change and from 0.36 to 0.50 for density change. Average agreement with the standard of reference improved from kw = 0.53 to 0.63 for size change and from 0.48 to 0.57 for density change.
Conclusion Subtraction imaging improves the detection of subtle changes in pulmonary ground glass nodules and decreases inter-observer variability.
@inproceedings{Staring:2008,
author = {Staring, Marius and Pluim, Josien P.W. and de Hoop, Bartjan and Klein, Stefan and Prokop, Mathias},
title = {Subtraction Imaging for Improved Detection of Change in Ground Glass Nodules in Chest Computed Tomography},
booktitle = {European Radiology Supplements},
address = {Vienna, Austria},
volume = {18, supplement 1},
pages = {137 - 336},
month = mar,
year = {2008},
}
Image registration, the task of aligning images, has been one of the key methods in medical image analysis. The goal in image registration is to find a coordinate mapping between a fixed target image and a moving source image. Until recently, machine learning was mainly used to aid image registration, e.g., for detecting misalignment. Nowadays, machine learning methods perform image registration themselves. In particular, deep learning-based image registration methods are gradually taking over from slower conventional methods. The benefit of machine learning-based image registration is that the spatial statistical relation between images is learned in an offline training phase. This means that at online inference time, image registration is performed very rapidly in one shot. In this chapter we will explain how to train global and local image registration methods with supervised and unsupervised machine learning.
@incollection{deVos:2024,
author = {de Vos, Bob D. and Sokooti, Hessam and Staring, Marius and I{\v{s}}gum, Ivana},
title = {Chapter 19 - Machine learning in image registration},
booktitle = {Medical Image Analysis},
editor = {Frangi, Alejandro F. and Prince, Jerry L. and Sonka, Milan},
publisher = {Academic Press},
series = {The MICCAI Society book Series},
chapter = {19},
pages = {501 -- 515},
year = {2024},
}
Preclinical radiation studies play a crucial role in cancer research because they serve as an experimental system for investigating the biological, chemical, and physical aspects of the radiation response. These studies aim to address open questions regarding long term side effects, regional tissue radiosensitivities, immunomodulatory effects, and new treatment strategies such as very high dose rate (FLASH) and spatially fractionated irradiations. However, small animal experiments present a unique practical challenge. Unlike in the clinic where imaging, contouring, and treatment planning are conducted over a period of several days, the preclinical workflow typically requires these steps to be carried out consecutively while the animal is in the irradiation position under sedation. To mitigate effects of the anaesthesia and ensure the animal’s wellbeing, a fast workflow is thus essential.
This thesis explores the application of deep learning for autocontouring and fast proton dose engines for dose calculations, with the primary goal of streamlining irradiation planning for small animal studies. These tools greatly benefit the preclinical community by enhancing workflow efficiency, increasing experiment capacity, and reducing the overall workload of physicists and biologists. The reduction in planning time not only boosts animal throughput but also contributes positively to animal welfare. Additionally, the research on CT HU calibration methods has provided valuable insights into the benefits of SECT and DECT calibration for proton irradiation planning in preclinical settings. Ultimately, the work described in this thesis brings us one step closer to achieving more accurate and efficient image-guided irradiations of small animals for radiobiological studies.
@phdthesis{Malimban:2025,
author = {Malimban, Justin},
title = {Irradiation planning in small animal radiation biology research},
school = {University of Groningen},
address = {},
month = jan,
year = {2025},
}
HRCT is an important modality to non-invasively diagnose pulmonary diseases and assess treatment effects. In this thesis, we developed automatic methods to quantify SSc disease based on HRCT. Chapter 1 provides a general introduction to pulmonary anatomy, SSc, PFTs, chest CT and deep learning on chest CT. A lung lobe segmentation method was proposed in Chapter 2, as accurately extracting lungs and lobes is an essential step for subsequent SSc disease analysis. An explainable, fully automated SSc-ILD scoring framework was proposed in Chapter 3. This framework can automatically select five levels and estimate the ratio of SSc-ILD to lung area for each level in the order of several seconds. In Chapter 4, an automatic PFT estimation network was developed, which can help to understand the relation between lung function and structure and to estimate PFTs from CT scans for patients with contraindications to PFTs. Because of GPU memory limitations, the CT scans used in Chapter 4 were down-sampled. Therefore, Chapter 5 achieved higher PFT regression performance with less training time by converting vessel centerlines from HRCT into point cloud and graph data.
@phdthesis{Jia:2024,
author = {Jia, Jingnan},
title = {Automatic Analysis of Chest CT in Systemic Sclerosis Using Deep Learning},
school = {Leiden University Medical Center},
address = {Albinusdreef 2, 2333 ZA Leiden},
month = sep,
year = {2024},
}
Adapting a radiotherapy treatment plan to the daily anatomy is a crucial task to ensure adequate irradiation of the target without unnecessary exposure of healthy tissue. This adaptation can be performed by automatically generating contours of the daily anatomy together with fast re-optimization of the treatment plan. These measures can compensate for the daily variation and ensure the delivery of the prescribed dose distribution at small margins and high robustness settings. In this thesis, we focused on developing a deep learning-based methodology for automatic contouring for real-time adaptive radiotherapy, guided by either CT or MR imaging.
@phdthesis{Elmahdy:2022,
author = {Elmahdy, Mohamed S.},
title = {Deep learning for online adaptive radiotherapy},
school = {Leiden University Medical Center},
address = {Albinusdreef 2, 2333 ZA Leiden},
month = mar,
year = {2022},
}
The aim of this thesis is to develop a learning-based image registration method as a much faster alternative to conventional methods without requiring hyper-parameter tuning. We also aimed to improve accuracy as well as inference time of registration misalignment detection methods, via a fully automatic solution. Although all the proposed methods in this thesis are generic, all the experiments are performed on chest CT scans.
Chapter 2 presents a novel method to solve nonrigid image registration through a learning approach, instead of via iterative optimization of a predefined dissimilarity metric. Chapter 3 extends chapter 2 into a practical pipeline based on efficient supervised learning from artificial deformations. Chapter 4 proposes a new automatic method to predict the registration error in a quantitative manner and is applied to chest CT scans. Chapter 5 presents a supervised method to predict registration misalignment using convolutional neural networks (CNNs).
@phdthesis{Sokooti:2021,
author = {Sokooti, Hessam},
title = {Supervised Learning in Medical Image Registration},
school = {Leiden University Medical Center},
address = {Albinusdreef 2, 2333 ZA Leiden},
month = dec,
year = {2021},
}
The aim of this thesis is to develop methods for quantifying pulmonary vascular diseases and assessing treatment effects, based on CT images. In particular, the following objectives have been pursued in this thesis: 1) to develop an accurate lung vessel segmentation method; 2) to propose and validate an automatic method for quantifying pulmonary vascular morphology; 3) to investigate pulmonary vascular remodeling in SSc patients with impaired DLCO, but in the absence of pulmonary fibrosis; 4) to investigate changes in pulmonary vascular densitometry and morphology in patients with CTEPH treated with BPA.
@phdthesis{Zhai:2020,
author = {Zhai, Zhiwei},
title = {Automatic Quantitative Analysis of Pulmonary Vessels in CT: Methods and Applications},
school = {Leiden University Medical Center},
address = {Albinusdreef 2, 2333 ZA Leiden},
month = mar,
year = {2020},
}
Image registration is important for medical image analysis. However, its clinical application is sometimes limited by the speed of the algorithm. For example, in online adaptive radiation therapy a few seconds is ideal, while registration usually takes several minutes at the least. In this thesis, we consider acceleration techniques for parametric intensity-based image registration problems, focusing on the optimization routine, specifically the step size and the search direction. The proposed methods are thoroughly evaluated on different datasets across modalities, subjects, similarity measures and transformation models. Depending on the registration settings, the estimation time of the step size is reduced from 40 seconds to less than 1 second when the number of parameters is 10^5, almost 40 times faster. The total registration time with the new acceleration techniques (FASGD) is reduced by a factor of 2.5-7 compared with ASGD for the experiments in this thesis. All methods were implemented in C++ in the open source registration package elastix. Based on these acceleration schemes, we evaluated elastix on the application of automatic contour propagation in online adaptive intensity-modulated proton therapy for prostate cancer.
@phdthesis{Qiao:2017,
author = {Qiao, Yuchuan},
title = {Fast optimization methods for image registration in adaptive radiation therapy},
school = {Leiden University Medical Center},
address = {Albinusdreef 2, 2333 ZA Leiden},
month = dec,
year = {2017},
}
In pre-clinical research, whole-body small animal imaging is widely used for the in vivo visualization of functional and anatomical information to study cancer, neurological and cardiovascular diseases, and to help with a faster development of new drugs. Functional information is provided by imaging modalities such as PET, SPECT and specialized MRI. Structural imaging modalities like radiography, CT, MRI and ultrasound provide detailed depictions of anatomy. Optical imaging modalities, such as BLI and near-infrared fluorescence imaging, offer a high sensitivity in visualizing molecular processes in vivo. The combination of these modalities makes it possible to follow the subject(s) and molecular processes over time, in living animals.
With these advances in image acquisition, the problem has shifted from data acquisition to data processing. The organization, analysis and interpretation of this heterogeneous whole-body imaging data has become a demanding task.
In this thesis, the data processing approach depicted in Figure 1.1 was further explored. This approach is based on an articulated whole-body atlas as a common reference to normalize the geometric heterogeneity caused by postural and anatomical differences between individuals and geometric differences between imaging modalities. Mapping to this articulated atlas has the advantage that all the different imaging modalities can be (semi-)automatically registered to a common anatomical reference; postural variations can be corrected, and the different animals can be scaled properly, while allowing for proper management of this high-throughput whole-body data.
In this thesis, we have focused on three complementary aspects of the approach described in Figure 1.1, and worked towards an automated analysis pipeline for quantitative small animal image analysis. The specific goals of this thesis were:
@phdthesis{Khmelinskii:2013,
author = {Khmelinskii, Artem},
title = {Multi-modal small-animal imaging: image processing challenges and applications},
school = {Leiden University Medical Center},
address = {Albinusdreef 2, 2333 ZA Leiden},
month = oct,
year = {2013},
}
Chapter 2 gives a general overview of image registration and describes the registration framework and software used in this thesis. The registration software was developed at the Image Sciences Institute jointly with Stefan Klein for our research projects. All experiments described in this thesis were performed with this software package, called elastix.
A first approach to tackle the rigid-nonrigid tissue problem uses adaptive filtering of the deformation field. The technique is reported in Chapter 3. Issues that remain with this approach are solved with another technique based on a local rigidity penalty term, see Chapter 4. The last technique is clinically employed for the detection of subtle changes in ground-glass opacities in the lung. The results of this study are presented in Chapter 5.
To address the problem of insufficient registration quality, in Chapter 6 the registration cost function is extended with multiple image features, instead of using image intensity only, as is commonly done.
@phdthesis{Staring:2008,
author = {Staring, M.},
title = {Intrasubject Registration for Change Analysis in Medical Imaging},
school = {Utrecht University},
address = {Image Sciences Institute},
month = oct,
year = {2008},
}
In order to protect (copy)rights of digital content, means are sought to stop piracy. Several methods are known to be instrumental in achieving this goal. This report considers one such method: digital watermarking, more specifically quantization-based watermarking methods. A general watermarking scheme consists of a watermark embedder, a channel representing some sort of processing on the watermarked signal, and a watermark detector. The problems related to any watermarking method are the perceptual quality of the watermarked signal, and the possibility to retrieve the embedded information at the detector.
From current quantization-based watermarking algorithms, like QIM, DC-QIM, SCS, etc., it is known that the achievable rates are promising, but that it is hard to meet the required robustness demands. Therefore, improvements of current algorithms are sought that are more robust against normal processing. This report focuses on two possible improvements, namely the use of error correcting codes (ECC) and the use of adaptive quantization.
Watermarking can be seen as a form of communication, so the robustness demand for watermarking is equivalent to the demand of reliable communication in communication models. The use of ECC therefore certainly gives an improvement in robustness, which is confirmed by experiments. Repetition codes are simple to implement and already give a gain in robustness. The concatenation of convolutional codes with repetition codes gives an improvement only in the case of mild degradations due to the above-mentioned processing.
In this report, watermarking of signals with a luminance component is considered, like digital images and video data. Adaptive quantization refers to the use of a larger quantization step size for high luminance values, and a smaller quantization step size for low luminance values. It is known from Weber’s law that the human eye is less sensitive to brightness changes at high luminance values than at low luminance values. Therefore, using adaptive quantization does not come at the cost of a loss of perceptual quality of the host signal. Adaptive quantization gives a large robustness gain for brightness scaling attacks. However, the adaptive quantization step size must be estimated at the detector, which potentially introduces an additional source of errors in the retrieved message. By means of experiments it is shown that this is not such a big problem. Therefore, adaptive quantization improves the robustness of the watermarking scheme.
It is valuable to know the performance of the watermarking scheme with the two improvements. The performance measure used is the bit error probability. The total bit error probability is built up from two components: one estimates the bit error probability for the case of fixed quantization, with an Additive White Gaussian Noise (AWGN) or uniform noise attack; the other estimates the bit error probability for the case of adaptive quantization, without any attack. Models for these two bit error probabilities are developed.
At the embedder the distortion compensation parameter α has to be set. The optimal value for this parameter is derived for the case of a Gaussian host signal and an AWGN channel. The value of this optimal parameter α* is compared with an earlier result of Eggers and is shown to be identical. But whereas Eggers found a numerical function, which he numerically optimizes, our result leads to an analytical function, which can be optimized numerically.
So, we use two methods to improve robustness, namely the use of error correcting codes and an adaptive quantization step size. These two methods are shown to be improvements. Also an analytical model for the performance is derived, which can be used to verify analytically the robustness improvement.
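The core embedding and detection mechanism of distortion-compensated QIM can be sketched as follows (a toy scalar implementation for illustration only; the step size delta and the compensation parameter alpha are free parameters, no dither key or attack channel is modelled, and this is not the report's actual code):

```python
import numpy as np

def dcqim_embed(x, bit, delta, alpha):
    """Distortion-compensated QIM on a scalar host sample x.

    The bit selects one of two interleaved quantization lattices
    (offset by delta/2); alpha controls how far the sample is moved
    toward the nearest lattice point (alpha = 1 recovers plain QIM).
    """
    d = bit * delta / 2.0
    q = delta * np.round((x - d) / delta) + d   # nearest point on the bit's lattice
    return x + alpha * (q - x)

def qim_detect(y, delta):
    """Minimum-distance detector: return the bit whose lattice is closest to y."""
    dists = [abs(y - (delta * np.round((y - b * delta / 2.0) / delta) + b * delta / 2.0))
             for b in (0, 1)]
    return int(np.argmin(dists))

# embed and recover a short message (noise-free channel, alpha = 0.8)
rng = np.random.default_rng(0)
host = rng.uniform(-100.0, 100.0, size=32)
bits = rng.integers(0, 2, size=32)
marked = np.array([dcqim_embed(s, b, 4.0, 0.8) for s, b in zip(host, bits)])
recovered = [qim_detect(y, 4.0) for y in marked]
```

Without an attack, any alpha > 0.5 still decodes correctly, since the residual offset (1 - alpha) * delta/2 stays smaller than the distance alpha * delta/2 to the competing lattice; the robustness analysis in the report concerns what happens when channel noise consumes this margin.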
@mastersthesis{Staring:2002,
author = {Staring, M.},
title = {Analysis of Quantization based Watermarking},
school = {University of Twente, Faculty of Applied Mathematics},
address = {Drienerlolaan 5, 7522 NB Enschede},
month = dec,
year = {2002},
}