Research Brief

Neural Network Detects Breast Cancer in Ultrasound Images

After quick training, deep learning software matches diagnostic performance of radiologists.

Janelle Weaver, Contributor
Wednesday, March 7, 2018


Ultrasound can improve the detection of breast cancer when combined with mammography, particularly in patients with dense breast tissue. But it requires more of the radiologist’s time than mammography alone, and it still produces a high number of false positives.

A study published Jan. 10 in the British Journal of Radiology shows that algorithms can aid in the diagnosis of breast cancer based on ultrasound images. The software learns faster and better than a medical student with no prior experience in breast imaging. The ratings of the neural network were most similar to those of a radiology resident, indicating that the software may still be outperformed by more seasoned radiologists.

The researchers used deep learning software, a type of artificial neural network currently applied across industries to quality-inspection tasks such as detecting defects on metal surfaces, real-time traffic analysis and appearance-based product identification.

It took seven minutes for the software to be trained on a set of 445 breast ultrasound images, compared with 48 minutes for a fourth-year medical student with no prior experience in breast imaging. After this initial training, another set of 192 breast ultrasound images was used to compare the performance of the software and the medical student, as well as a radiology resident and a radiologist with three years of experience in breast imaging. In only 3.7 seconds, the neural network demonstrated a diagnostic accuracy comparable to that of the radiologists, who took approximately 25 minutes to evaluate the images.

The sensitivity, or the proportion of positives that were correctly identified, was 84.2 percent for both the software and the radiologists, compared with 73.7 percent for the medical student. The specificity, or the proportion of negatives that were correctly identified, was 80.4 percent for the neural network, compared with 89 percent for the radiologist, 82.7 percent for the radiology resident and 72.8 percent for the medical student.
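Both metrics follow directly from the counts of true and false classifications. A minimal sketch of how they are computed (the counts below are hypothetical illustrations, not the study's raw data):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of actual positives correctly identified (TP / (TP + FN))."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    """Proportion of actual negatives correctly identified (TN / (TN + FP))."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical example: a reader calls 16 of 19 malignant lesions malignant
# and 90 of 112 benign lesions benign.
sens = sensitivity(true_positives=16, false_negatives=3)
spec = specificity(true_negatives=90, false_positives=22)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```

A reader with high sensitivity misses few cancers, while a reader with high specificity produces few false positives; the trade-off between the two is what the study's comparison captures.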

According to the authors, deep learning software could serve as a visual aid for inexperienced physicians, allowing highly accurate, real-time analysis during an ultrasound examination.