Project details
Description
Development of Robust Brain Measurement Tools Informed by Ultrahigh Field 7T MRI

Abstract: Neuroimaging can provide safe, non-invasive, whole-brain measurements for large clinical and research studies of brain disorders. However, many disorders, such as Alzheimer's Disease (AD), cause complex spatiotemporal patterns of brain alterations that are often difficult to tease out given the limited image quality afforded by the widely used 3T MRI scanners (20,000+ units available worldwide). Although 7T MRI scanners provide better image quality, these ultrahigh field scanners are not widely available (only 40+ units worldwide) and are not used clinically. Tools for reconstructing 7T-like, high-quality MRI from 3T MRI scans are therefore highly desirable. One way to achieve this is to learn the relationship between 3T and 7T MRI scans from training samples. This renewal project is dedicated to developing a set of novel learning-based methods that transfer the image contrast and tissue/anatomical labels of 7T MRI from training subjects to the 3T MRI of new subjects for 1) image quality enhancement, 2) high-precision tissue segmentation, 3) accurate anatomical ROI (region of interest) labeling, and eventually 4) early detection of brain disorders such as AD.

(Aim 1) To enhance the image quality of 3T MRI, we will develop a novel deep learning architecture that learns a complex multi-layer 3T-to-7T mapping from training subjects, each with coupled 3T and 7T MRI scans. This mapping will then be applied to reconstruct quality-enhanced, 7T-like MRI scans from new 3T MRI scans.

(Aim 2) For brain structural measurement (e.g., brain atrophy and hippocampal volume shrinkage), a crucial step is brain tissue segmentation. We will therefore develop a robust and accurate random forest tissue segmentation method that maps 7T label information to 3T scans. The mapping function is trained using tissue labels generated from 7T scans, rather than from 3T scans, which often have limited image contrast.

(Aim 3) To further quantify local atrophy in ROIs or even sub-ROIs (e.g., hippocampal subfields), we will develop a deformable multi-ROI segmentation method that employs (a) a random forest to predict the deformation from each image location to the target boundary through adaptive integration of multimodal (anatomical, structural, and functional connectivity) information and (b) an auto-context model to iteratively refine the ROI segmentation results. The adaptive integration of multimodal MRI data, especially resting-state fMRI (rs-fMRI), is critical for segmenting sub-ROIs such as hippocampal subfields, since local functional connectivity patterns can help distinguish boundaries between neighboring subfields, which often have different cortico-cortical connections.

(Aim 4) Finally, by integrating anatomical features from all accurately segmented ROIs/sub-ROIs together with structural and functional connectivity features between them, we can more effectively detect early-stage brain disorders, such as the conversion from Mild Cognitive Impairment (MCI) to AD. We will integrate information from different imaging datasets and multiple imaging centers using our novel multi-task learning approach, which jointly learns the respective disease prediction models.
Applications. These computational methods will find applications in diverse areas, such as quantifying brain abnormalities associated with neurological diseases (e.g., Alzheimer's disease and schizophrenia), measuring the effects of different pharmacological interventions on the brain, and finding associations between imaging measures and clinical scores.
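The abstract does not specify the network design for Aim 1. As a rough illustration of the 3T-to-7T mapping idea only, the minimal sketch below trains a small residual 3D CNN on paired 3T/7T patches; the PyTorch framework, patch size, network depth, and L1 loss are all assumptions made for illustration, not the project's actual architecture.

```python
# Minimal sketch (assumptions): a small residual 3D CNN trained on paired
# 3T/7T patches to learn a 3T-to-7T intensity mapping in the spirit of Aim 1.
# Architecture, patch size, and loss are illustrative choices only.
import torch
import torch.nn as nn

class Map3Tto7T(nn.Module):
    """Residual 3D CNN: predicts a 7T-like patch from a 3T patch."""
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv3d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv3d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Learn the residual between 3T and 7T contrast, then add it back.
        return x + self.net(x)

def train_step(model, optimizer, patch_3t, patch_7t):
    """One optimization step on a batch of paired 3T/7T patches."""
    optimizer.zero_grad()
    pred_7t = model(patch_3t)
    loss = nn.functional.l1_loss(pred_7t, patch_7t)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = Map3Tto7T()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy paired patches (batch, channel, depth, height, width).
    x3t = torch.rand(2, 1, 32, 32, 32)
    x7t = torch.rand(2, 1, 32, 32, 32)
    print(train_step(model, opt, x3t, x7t))
```

Predicting the residual rather than the 7T intensities directly is a common stabilizing choice for image-to-image mappings; it is not something stated in the abstract.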
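For Aims 2-3, the abstract pairs random forest classification with an auto-context model. The sketch below shows that combination in its simplest form, assuming scikit-learn, voxel-wise intensity-patch features, and the voxel's own class probabilities as the context signal; the project's 7T-derived labels and multimodal connectivity features are not modeled here.

```python
# Minimal sketch (assumptions): voxel-wise random forest tissue segmentation
# with auto-context iteration. Features are local intensity patches plus the
# previous stage's class probabilities; everything else in Aims 2-3
# (7T-derived labels, connectivity features) is omitted.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(image, coords, radius=2):
    """Flattened intensity patch around each voxel coordinate."""
    padded = np.pad(image, radius, mode="edge")
    feats = []
    for z, y, x in coords:
        patch = padded[z:z + 2 * radius + 1,
                       y:y + 2 * radius + 1,
                       x:x + 2 * radius + 1]
        feats.append(patch.ravel())
    return np.asarray(feats)

def autocontext_train(image, labels, coords, n_iters=3):
    """Train a cascade of forests; each stage also sees the previous stage's
    class probabilities as extra context features."""
    forests = []
    base = patch_features(image, coords)
    context = np.zeros((len(coords), len(np.unique(labels))))
    for _ in range(n_iters):
        X = np.hstack([base, context])
        rf = RandomForestClassifier(n_estimators=50, n_jobs=-1).fit(X, labels)
        context = rf.predict_proba(X)  # refined context for the next stage
        forests.append(rf)
    return forests

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((16, 16, 16))
    coords = [(z, y, x) for z in range(2, 14) for y in range(2, 14) for x in (4, 10)]
    labels = rng.integers(0, 3, size=len(coords))  # e.g., CSF / GM / WM
    forests = autocontext_train(img, labels, coords)
    print(len(forests), "auto-context stages trained")
```

In a full auto-context pipeline the context features would be probability maps sampled over a spatial neighborhood rather than the single-voxel probabilities used here, which only keep the sketch short.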
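For Aim 4, one way to sketch joint (multi-task) learning of MCI-to-AD conversion classifiers across imaging centers is a shared representation with one prediction head per center. The shared/head split, toy feature dimension, and training loop below are illustrative assumptions; the abstract does not describe the project's actual multi-task formulation.

```python
# Minimal sketch (assumptions): multi-task learning across imaging centers.
# A shared linear layer couples the tasks; each center keeps its own head.
# Feature sets, dimensions, and the training schedule are placeholders.
import torch
import torch.nn as nn

class MultiCenterClassifier(nn.Module):
    def __init__(self, n_features, n_centers, shared_dim=16):
        super().__init__()
        self.shared = nn.Linear(n_features, shared_dim)   # shared across tasks
        self.heads = nn.ModuleList(
            [nn.Linear(shared_dim, 1) for _ in range(n_centers)]  # per-center
        )

    def forward(self, x, center):
        return self.heads[center](torch.relu(self.shared(x))).squeeze(-1)

def joint_train(model, datasets, epochs=50, lr=1e-2):
    """Alternate over centers each epoch so the shared layer sees all tasks."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for center, (X, y) in enumerate(datasets):
            opt.zero_grad()
            loss = loss_fn(model(X, center), y)
            loss.backward()
            opt.step()
    return model

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy ROI-derived features and converter/non-converter labels per center.
    datasets = [(torch.randn(40, 10), torch.randint(0, 2, (40,)).float())
                for _ in range(3)]
    model = joint_train(MultiCenterClassifier(n_features=10, n_centers=3), datasets)
    print(model(datasets[0][0][:2], center=0))
```

Sharing one layer across centers is only one way to couple the tasks; regularization-based multi-task learning (e.g., joint sparsity across per-center weight vectors) would be an equally plausible reading of the abstract.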
| Status | Finished |
| --- | --- |
| Start date/End date | 17/9/08 → 31/5/21 |
| Links | https://projectreporter.nih.gov/project_info_details.cfm?aid=9977173 |
Funding
- National Institute of Biomedical Imaging and Bioengineering: USD 402,999.00
ASJC Scopus Subject Areas
- Neuroscience (all)
Fingerprint
Explore the research topics touched on by this project. These labels are generated based on the underlying awards/grants. Together they form a unique fingerprint.