From the 2021 HVPAA National Conference
Rohan Bhansali (Duke University), Avinash Komarlingam (University of Maryland)
Chest X-rays are the most frequently performed medical imaging procedure, owing to their critical role in diagnosing thoracic diseases such as lung carcinoma, pneumonia, and tuberculosis. Despite their ubiquity, they are among the most difficult radiographs to interpret, a challenge exacerbated by human limitations such as cognitive and perceptual biases. Moreover, the overwhelming majority of the global population lacks access to radiologists, creating a stark shortage of qualified diagnostic expertise.
Recent advances in deep learning algorithms, specifically convolutional neural networks, have demonstrated significant promise toward automated, large-scale chest X-ray classification, alleviating this shortage while also decreasing healthcare costs and diagnostic delays, even beyond resource-deficient regions. However, these algorithms fall short of incorporating different view positions and patient symptoms, which are essential for rigorous diagnosis.
Accordingly, we developed a multi-network model that concurrently classifies posteroanterior, anteroposterior, and lateral chest X-rays and outputs a unified diagnosis across fourteen disease classifications. We optimized our model’s hyperparameters using the MIMIC-CXR dataset, a collection of 377,110 chest X-ray images sourced from 227,835 imaging studies of 65,379 patients at the Beth Israel Deaconess Medical Center Emergency Department between 2011 and 2016. The images were passed through a Laplacian filter to highlight meaningful features within the scans, thereby reducing computational expense while boosting performance. The Laplacian is a second-order differential operator defined as the divergence of the gradient field, whose magnitude is largest where the measured value changes most rapidly. Applied to an image, the filter therefore acts as an edge detector: edges involve large, abrupt changes in pixel values and appear as white areas in the transformed image, while regions of gradual or no change appear dark. We then used the processed images to train three distinct 121-layer convolutional neural networks and subsequently concatenated their outputs to produce a fused prediction.
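The edge-detecting behaviour of the Laplacian described above can be illustrated with a minimal sketch. The abstract does not name an implementation, so the use of `scipy.ndimage.laplace` on a toy image here is purely illustrative:

```python
import numpy as np
from scipy import ndimage

# Toy 8x8 "radiograph": a bright square on a dark background.
image = np.zeros((8, 8), dtype=float)
image[2:6, 2:6] = 255.0

# Discrete Laplacian (sum of second-order derivatives along each axis).
# Its magnitude is large where pixel values change abruptly (edges)
# and zero in flat regions -- hence the edge-detector behaviour.
filtered = ndimage.laplace(image)

print(filtered[2, 2] != 0.0)  # border of the square: strong edge response
print(filtered[4, 4] == 0.0)  # flat interior: no response
```

Bright edge pixels survive the transform while uniform areas are driven toward zero, which is the sparsification the abstract credits with reducing computational expense.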
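The late-fusion step, in which the three view-specific networks are concatenated into a single fourteen-class prediction, might be sketched as follows. The per-view probability vectors, the untrained linear fusion head, and the softmax output are all hypothetical placeholders standing in for the trained 121-layer networks:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 14  # the fourteen disease classifications

# Placeholder outputs: each view-specific CNN (PA, AP, lateral)
# would emit a 14-way score vector for the same study.
pa, ap, lat = (rng.random(NUM_CLASSES) for _ in range(3))

# Late fusion: concatenate the three view-level outputs...
features = np.concatenate([pa, ap, lat])        # shape (42,)

# ...and map them to one unified 14-way diagnosis with a
# (hypothetical, untrained) linear fusion head plus softmax.
W = rng.random((NUM_CLASSES, features.size))
logits = W @ features
probs = np.exp(logits) / np.exp(logits).sum()

print(probs.shape)                  # one fused 14-class prediction
print(np.isclose(probs.sum(), 1.0))
```

The design point is that fusion happens after each view has been encoded independently, so each network can specialize in the geometry of its own view position.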
We found that applying the Laplacian filter significantly improved performance across the board, raising our model’s classification accuracy from 89.2% to 92.8% when validated on a testing set of 74,384 chest X-rays.
Our model’s performance exceeded that of practicing radiologists in both efficiency and accuracy. By comparison, radiologists attained an average accuracy of 78% across the same fourteen disease classifications and required diagnosis times longer by multiple orders of magnitude. The implications of these results are twofold: they reaffirm prior research integrating deep learning into clinical diagnosis while suggesting the newfound efficacy of the Laplacian filter, namely its potential for application in other medical imaging modalities.
We describe an inexpensive, efficient, and reliable screening tool for cardiopulmonary diseases capable of reading and interpreting the nearly two billion chest X-rays taken annually. Its versatility allows it to be deployed in diverse environments, from aiding developing countries plagued by inadequate healthcare to streamlining metropolitan hospitals brimming with patients.