Deep-learning algorithm estimates gestational age from smartphone images
21 Dec 2018
Prematurity is a significant cause of neonatal mortality, and knowledge of an infant’s gestational age is critical for planning post-delivery treatment to reduce neonatal deaths. In high-income countries, prenatal ultrasound scans are the gold-standard measure of gestational age, but in lower-income countries, access to ultrasound technology and medical experts is limited.
“If we could accurately estimate gestational age for newborns using simple, portable technology, we would be able to administer a proper post-treatment plan to reduce the risk of mortality in many under-serviced regions,” says Arjun Desai from Duke University, first author of a new study on an automatic system for estimating gestational age.
The cross-disciplinary team, led by Sina Farsiu, has developed a system based on the previously reported inverse correlation between blood vessel density in the anterior lens capsule region and gestational age. Located behind the pupil, the anterior lens capsule vasculature (ALCV) can be assessed by an expert using an ophthalmoscope.
In the new study, the team attached an ophthalmoscope to a handheld, smartphone-based device to record videos of the ALCV of 124 premature neonates in their first 48 hours of life. They have now reported a fully automatic, deep-learning algorithm that estimates gestational age from these videos (Biomed. Opt. Express 10.1364/BOE.9.006038).
Configuring machine learning
Recording good-quality videos during the clinical trial wasn’t easy, and the doctors involved often had to use cotton buds (or Q-tips) to gently hold the newborns’ eyes open for filming.
“Most of the information acquired in the videos was pretty irrelevant. We wanted to focus on the eye, but each video frame mostly contained the externals of the infant’s face, the room, etc. Our algorithm first removed all extraneous information,” explains Desai, who helped develop the deep-learning algorithm that extracts the eye region of interest from each frame of the videos.
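For illustration only, a cropping step like the one Desai describes might look like the sketch below. It uses OpenCV’s stock Haar-cascade eye detector purely as a stand-in for the study’s learned region-of-interest extractor, and the video filename is hypothetical:

```python
# Sketch: crop an eye region of interest (ROI) from each video frame.
# OpenCV's stock Haar eye detector stands in for the study's learned
# ROI-extraction network; the file path is hypothetical.
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_eye_rois(video_path):
    """Yield cropped eye regions from every frame of a video."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.1, 5):
            yield frame[y:y + h, x:x + w]
    cap.release()

rois = list(extract_eye_rois("neonate_alcv_video.mp4"))  # hypothetical file
```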
Blind image analysers then selected the clearest single frame of each neonate’s eye region, and these frames were passed to a pre-trained neural network to extract representative features. “Deep learning is still a pseudo black box, and one of the challenges is optimizing it to look for important features while not really knowing ahead of time what these features are and the best methods for extracting them,” says Desai.
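In the study this selection was done by graders, but an automated pipeline could approximate both steps: score frames with a simple focus measure such as the variance of the Laplacian, then pull features from an off-the-shelf pre-trained backbone. The sketch below uses torchvision’s ResNet-18 purely as an illustrative stand-in; the study’s actual network and selection procedure may differ:

```python
# Sketch: pick the sharpest ROI by variance of the Laplacian, then extract
# deep features with a pre-trained CNN. ResNet-18 is an illustrative
# stand-in, not necessarily the network used in the study.
import cv2
import torch
from torchvision import models, transforms

def sharpest(rois):
    """Return the ROI with the highest variance-of-Laplacian focus score."""
    return max(rois, key=lambda img: cv2.Laplacian(
        cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), cv2.CV_64F).var())

# Pre-trained backbone with the classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),                       # HWC uint8 -> CHW float
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])

def extract_features(roi):
    """Map one BGR image crop to a 512-dimensional feature vector."""
    rgb = cv2.cvtColor(roi, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        return backbone(preprocess(rgb).unsqueeze(0)).squeeze(0).numpy()
```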
The selected features were then fed to support vector machines to produce binary classifications. The team trained the classifiers on the ground-truth ultrasound gestational ages to produce a binary yes (1) or no (0) answer at each of several thresholds. Each threshold posed the question of whether the features from an image came from a specified gestational week or lower. Six thresholds were used, from 33 to 38 gestational weeks.
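One plausible reading of this scheme is to train one binary SVM per threshold week and combine the six yes/no answers into a week estimate. The scikit-learn sketch below is an interpretation under that assumption (the training features X and ultrasound-derived ages y_weeks are assumed to be available as NumPy arrays), not the authors’ actual code:

```python
# Sketch: one binary SVM per gestational-age threshold (33..38 weeks).
# Classifier t answers: "is this sample at or below week t?"
# Interpretation of the paper's thresholding scheme, not the authors' code;
# X (feature vectors) and y_weeks (ultrasound ages) are assumed given.
import numpy as np
from sklearn.svm import SVC

THRESHOLDS = range(33, 39)  # 33, 34, ..., 38 gestational weeks

def train_threshold_svms(X, y_weeks):
    """Fit one binary SVM per threshold against ground-truth ages."""
    return {t: SVC(kernel="linear").fit(X, (y_weeks <= t).astype(int))
            for t in THRESHOLDS}

def estimate_week(svms, x):
    """The lowest threshold answered 'yes' gives the estimated week;
    all 'no' answers mean the sample is older than 38 weeks."""
    for t in THRESHOLDS:
        if svms[t].predict(x.reshape(1, -1))[0] == 1:
            return t
    return 39  # older than the highest threshold
```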
Manual versus automatic
The researchers tested their automatic results against several manual methods, which used manually extracted ALCV features from premature neonates to estimate gestational age. This analysis was time-consuming, requiring the manual selection of the clearest frame from each infant’s video and then annotation of the vasculature to estimate features such as ALCV density, branch length and tortuosity (“bendiness”). The best-performing manual method fit a linear regression between ALCV density and gestational age.
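In outline, that baseline amounts to regressing ultrasound gestational age on measured ALCV density. A minimal scikit-learn sketch with placeholder numbers (not data from the study; the reported relationship is inverse, so the fitted slope is negative):

```python
# Sketch of the manual baseline: linear regression of gestational age on
# manually measured ALCV density. The numbers are placeholders, not data
# from the study; the inverse correlation gives a negative slope.
import numpy as np
from sklearn.linear_model import LinearRegression

density = np.array([0.42, 0.35, 0.30, 0.22, 0.15]).reshape(-1, 1)  # placeholder
age_weeks = np.array([31.0, 33.0, 34.5, 36.0, 38.0])               # placeholder

model = LinearRegression().fit(density, age_weeks)
print(f"slope = {model.coef_[0]:.1f} weeks per unit density (negative)")
print(f"predicted age at density 0.25: {model.predict([[0.25]])[0]:.1f} weeks")
```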
“The automatic algorithm performed as well as or better than the manual methods at all gestational ages, except 33 weeks,” says Desai. The new automatic method is far less time-consuming than manual segmentation of images and doesn’t require medical expertise to operate.
Data driven
“There may still be some work required to fine-tune this algorithm, but as we collect more data we’ll be better able to do that,” says Desai.
It’s not clear how the ALCV’s correlation with gestational age is affected by nutritional status or race, and Desai points out that the study’s focus on neonates within the United States is a “good start” but may not be “representative enough”. In collaboration with Jennifer Griffin, a research epidemiologist from RTI International in Los Angeles, the team will test and fine-tune the algorithm in a large-scale clinical trial, funded by the Bill and Melinda Gates Foundation, in sub-Saharan Africa and South Asia.
The automated software is open-source, so that communities in low-income countries can freely access what the team hopes will become an influential tool for remote neonatal care.
Louisa Cockbill is a science writer based in the UK