Large-scale longitudinal neuroimaging studies with diffusion imaging techniques are necessary to test and validate models of white matter neurophysiological processes that change over time, both in healthy and diseased brains. The predictive power of such longitudinal models will always be limited by the reproducibility of repeated measures acquired during different sessions. At present, there is limited quantitative knowledge about the across-session reproducibility of standard diffusion metrics in 3T multi-centric studies on subjects in stable conditions, in particular when using tract-based spatial statistics and with elderly people. In this study we implemented a multi-site brain diffusion protocol at 10 clinical 3T MRI sites distributed across 4 countries in Europe (Italy, Germany, France and Greece) using vendor-provided sequences on Siemens (Allegra, Trio Tim, Verio, Skyra, Biograph mMR), Philips (Achieva) and GE (HDxt) scanners. We acquired DTI data (2 × 2 × 2 mm³, b = 700 s/mm², 5 b0 and 30 diffusion-weighted volumes) of a group of healthy, stable elderly subjects (5 subjects per site) in two separate sessions at least a week apart. For each subject and session, four scalar diffusion metrics were considered: fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD) and axial diffusivity (AD). The diffusion metrics from multiple subjects and sessions at each site were aligned to their common white matter skeleton using tract-based spatial statistics. The reproducibility at each MRI site was examined by computing group averages of absolute changes relative to the mean (%) for several parameters: i) reproducibility of the signal-to-noise ratio (SNR) of the b0 images in the centrum semiovale, ii) full-brain test-retest differences of the diffusion metric maps on the white matter skeleton, iii) reproducibility of the diffusion metrics on atlas-based white matter ROIs on the white matter skeleton.
Despite the differences in MRI scanner configurations across sites (vendors, models, RF coils and acquisition sequences), we found good and consistent test-retest reproducibility. White matter b0 SNR reproducibility was on average 7 ± 1%, with no significant MRI site effects. Whole-brain analysis resulted in no significant test-retest differences at any of the sites with any of the DTI metrics. The atlas-based ROI analysis showed that the mean reproducibility errors largely remained in the 2-4% range for FA and AD and the 2-6% range for MD and RD, averaged across ROIs. Our results show reproducibility values comparable to those reported in studies using a smaller number of MRI scanners, slightly different DTI protocols and mostly younger populations. We therefore show that the acquisition and analysis protocols used are appropriate for multi-site experimental scenarios.
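The core reliability measure used throughout, the absolute between-session change relative to the session mean (%), averaged over subjects, can be sketched as follows. This is an illustrative implementation, not the study's code; the function name and the FA values in the usage example are assumptions.

```python
import numpy as np

def reproducibility_error(session1, session2):
    """Per-subject absolute test-retest change relative to the mean (%):
    100 * |x1 - x2| / ((x1 + x2) / 2)."""
    s1 = np.asarray(session1, dtype=float)
    s2 = np.asarray(session2, dtype=float)
    return 100.0 * np.abs(s1 - s2) / ((s1 + s2) / 2.0)

# Hypothetical ROI-mean FA values for 5 subjects at one site, two sessions
fa_session1 = np.array([0.45, 0.48, 0.44, 0.47, 0.46])
fa_session2 = np.array([0.46, 0.47, 0.45, 0.46, 0.47])

# Group-average reproducibility error for this ROI (%)
group_mean_error = reproducibility_error(fa_session1, fa_session2).mean()
```

In the study, this statistic is computed per diffusion metric, per site, over the white matter skeleton and atlas-based ROIs, then compared across sites.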
Large-scale longitudinal multi-site MRI brain morphometry studies are becoming increasingly crucial to characterize both normal and clinical population groups using fully automated segmentation tools. The test-retest reproducibility of morphometry data acquired across multiple scanning sessions, and for different MR vendors, is an important reliability indicator since it defines the sensitivity of a protocol to detect longitudinal effects in a consortium. There is very limited knowledge about how the across-session reliability of morphometry estimates might be affected by different 3T MRI systems. Moreover, there is a need for optimal acquisition and analysis protocols in order to reduce sample sizes. A recent study has shown that the longitudinal FreeSurfer segmentation offers improved within-session test-retest reproducibility relative to the cross-sectional segmentation at one 3T site using a nonstandard multi-echo MPRAGE sequence. In this study we implemented a multi-site 3T MRI morphometry protocol based on vendor-provided T1-weighted structural sequences (3D MPRAGE on Siemens and Philips, 3D IR-SPGR on GE) at 8 sites located in 4 European countries. The protocols used mild acceleration factors (1.5-2) when possible. We acquired across-session test-retest structural data of a group of healthy elderly subjects (5 subjects per site) and compared the across-session reproducibility of two full-brain automated segmentation methods based on either longitudinal or cross-sectional FreeSurfer processing. The segmentations included cortical thickness as well as intracranial, ventricular and subcortical volumes. Reproducibility was evaluated as absolute changes relative to the mean (%), Dice coefficients for volume overlap and intraclass correlation coefficients across the two sessions. We found that this acquisition and analysis protocol gives reproducibility results comparable to previous studies that used longer acquisitions without acceleration.
We also show that the longitudinal processing is systematically more reliable across sites regardless of MRI system differences. The reproducibility errors of the longitudinal segmentations are on average approximately half of those obtained with the cross-sectional analysis for all volume segmentations and for entorhinal cortical thickness. No significant differences in reliability are found between the segmentation methods for the other cortical thickness estimates. Averaging the two MPRAGE volumes acquired within each test-retest session did not systematically improve the across-session reproducibility of morphometry estimates. Our results extend those of previous studies that showed improved reliability of the longitudinal analysis at single sites and/or with non-standard acquisition methods. The multi-site acquisition and analysis protocol presented here is promising for clinical applications since it allows for smaller sample sizes per MRI site, or shorter trials, in studies evaluating the role of potential biomarkers to predict disease progression or treatment effects.
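The two overlap and agreement measures named in the morphometry abstract, the Dice coefficient for volume overlap and the intraclass correlation coefficient across sessions, can be sketched as below. This is an illustrative implementation; the abstract does not specify the ICC form, so a two-way random-effects, absolute-agreement, single-measures ICC(2,1) is assumed here.

```python
import numpy as np

def dice_coefficient(mask1, mask2):
    """Dice overlap of two binary segmentation masks: 2|A ∩ B| / (|A| + |B|)."""
    m1 = np.asarray(mask1, dtype=bool)
    m2 = np.asarray(mask2, dtype=bool)
    denom = m1.sum() + m2.sum()
    return 2.0 * np.logical_and(m1, m2).sum() / denom if denom else 1.0

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    `data` is an (n_subjects, k_sessions) array of morphometry estimates."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between sessions
    ss_e = ((data
             - data.mean(axis=1, keepdims=True)
             - data.mean(axis=0, keepdims=True)
             + grand) ** 2).sum()
    ms_e = ss_e / ((n - 1) * (k - 1))                              # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical hippocampal volumes (mm³) for 5 subjects, sessions 1 and 2
volumes = np.array([[3900, 3920],
                    [4100, 4080],
                    [3750, 3770],
                    [4300, 4310],
                    [3980, 3960]])
session_icc = icc_2_1(volumes)
```

In practice, Dice is computed on the voxelwise segmentation labels of the two sessions (after within-subject alignment), while the ICC is computed per structure across the subject group.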