Project Details
Description
Project Summary/Abstract
Lung cancer 5-year survival rates drop from 61% for early-stage diagnosis to just 6% for late-stage diagnosis.
Currently, fewer than 1 in 5 cases are diagnosed at an early stage. The increasing frequency of chest CT scans
and changes in lung cancer screening guidelines are expected to increase the number of incidentally discovered
lung lesions, representing an opportunity for earlier lung cancer diagnosis. Bronchoscopy is currently the safest,
least invasive, and least expensive diagnostic option, but its poor diagnostic yield greatly limits its procedural
benefit. Even when advanced techniques like radial endobronchial ultrasound and electromagnetic navigation
are used, the diagnostic yield is just 50-60%. This is primarily due to challenges with intraoperative localization
of the bronchoscope prior to needle deployment. Additionally, access to these techniques is limited because they
require expensive equipment and unique expertise. Efforts relying on the bronchoscope's built-in camera require
no additional equipment or specialization, but have struggled with generalizability across individuals in part due
to limited data availability and assumptions about airway features.
The objective of this proposal is to improve the diagnostic success rate of traditional bronchoscopy by addressing limitations in intraoperative localization using a data-driven model that is robust to differences in human anatomy. This work has potential for significant public health benefit by (1) increasing early lung cancer detection, (2) reducing morbidity and mortality by decreasing the number of invasive procedures, and (3) making minimally invasive bronchoscopy more accessible in areas without expert bronchoscopists. The proposed work will be accomplished via two Specific Aims. In Aim 1, a dataset will be generated of virtual and real bronchoscopy videos with video-frame-matched six-degrees-of-freedom (6-DoF) poses (position and orientation in three dimensions) of the bronchoscope's distal tip. This data will be made publicly available as the first large dataset of its kind to promote future research and reproducibility.
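As a rough illustration of the kind of frame-matched record Aim 1 describes, the Python sketch below pairs one camera frame with a 6-DoF tip pose. All field names, units, and the quaternion convention are assumptions for illustration only; the abstract does not specify the dataset's actual schema.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class FramePoseRecord:
    """One bronchoscopy video frame paired with the distal tip's 6-DoF pose.

    Hypothetical schema for illustration; the dataset's real format, units,
    and orientation convention are not specified in the abstract.
    """

    frame: np.ndarray        # camera image, shape (H, W, 3), uint8 RGB
    timestamp_s: float       # time of the frame within the video, in seconds
    position_mm: np.ndarray  # tip position (x, y, z) in millimeters, shape (3,)
    quaternion: np.ndarray   # tip orientation as a unit quaternion (w, x, y, z), shape (4,)
    source: str              # "virtual" (rendered bronchoscopy) or "real" (recorded video)


# Example record for a single simulated frame.
record = FramePoseRecord(
    frame=np.zeros((480, 640, 3), dtype=np.uint8),
    timestamp_s=0.0,
    position_mm=np.array([12.5, -3.1, 87.0]),
    quaternion=np.array([1.0, 0.0, 0.0, 0.0]),  # identity orientation
    source="virtual",
)
```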
In Aim 2, a real-time bronchoscope localization model will be developed using advances in machine learning, including deep neural networks, that have shown success in camera localization for non-medical applications. These models will regress the pose of the bronchoscope's distal tip using current and past video frames from the bronchoscope's built-in camera (a minimal sketch of such a model follows this summary). The clinical utility of the system will be evaluated in simulation, 3D-printed lung phantoms, and ex vivo porcine lung experiments. The research, tightly coupled clinical experience, and associated training plan will provide a unique interdisciplinary skill set in computer science, medical robotics, and procedural medicine. The outstanding research and clinical environment for this training at the University of North Carolina at Chapel Hill ensures exceptional preparation for a career conducting cutting-edge research as a physician-scientist in medical robotics.
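The PyTorch sketch below shows one plausible form of the Aim 2 approach: a network that encodes the current and recent camera frames and regresses a translation plus a unit quaternion for the distal tip. The architecture, layer sizes, and window length are illustrative assumptions, not the proposal's actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoseRegressor(nn.Module):
    """Minimal sketch: regress the tip's 6-DoF pose from a window of frames.

    Assumed design (not the proposal's actual model): a small CNN encodes
    each frame, an LSTM aggregates the temporal window, and a linear head
    outputs a translation (3 values) plus a unit quaternion (4 values).
    """

    def __init__(self, feat_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.temporal = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 7)  # 3 translation + 4 quaternion

    def forward(self, frames: torch.Tensor):
        # frames: (batch, time, 3, H, W) -- the current frame plus recent history
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.temporal(feats)
        pose = self.head(out[:, -1])  # predict the pose at the most recent frame
        translation = pose[:, :3]
        quaternion = F.normalize(pose[:, 3:], dim=1)  # project to a valid unit quaternion
        return translation, quaternion


# Example: estimate the tip pose from the last 8 frames of camera video.
model = PoseRegressor()
t_hat, q_hat = model(torch.randn(1, 8, 3, 128, 128))
print(t_hat.shape, q_hat.shape)  # torch.Size([1, 3]) torch.Size([1, 4])
```

Normalizing the quaternion head guarantees a valid rotation regardless of the raw network output; a common alternative in the camera-relocalization literature is to predict relative inter-frame motion and integrate it over time.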
| Status | Finished |
| --- | --- |
| Effective start/end date | 1/9/21 → 31/8/24 |
| Links | https://projectreporter.nih.gov/project_info_details.cfm?aid=10676966 |
Funding
- National Cancer Institute: US$45,856.00
- National Cancer Institute: US$37,546.00
- National Cancer Institute: US$38,262.00
ASJC Scopus Subject Areas
- Cancer Research
- Artificial Intelligence
- Oncology