Deep Learning for Understanding Multi-label Imbalanced Chest X-ray Datasets
DOI: https://doi.org/10.69996/jcai.2025004
Keywords: Convolutional Neural Networks, Grad-CAM, AUC, ROC, Chest X-rays
Abstract
In recent years, convolutional neural networks (CNNs) have become essential in medical image analysis, especially for diagnosing diseases from chest X-rays. However, the black-box nature of these models and the complexity of multi-label classification in healthcare create significant interpretability challenges. Our project focuses on making these models more transparent through explainable AI techniques such as Grad-CAM, which generates heatmaps that visually show which areas of the X-ray the model relied on for its predictions. The work addresses the complex task of diagnosing multiple diseases from a single chest X-ray, where some conditions are far rarer than others. To ensure accuracy and reliability, we evaluate the models with metrics such as the F1 score and ROC AUC. By combining advanced deep learning methods with these explainability and evaluation techniques, the work aims to improve both the accuracy and interpretability of AI-driven diagnoses in healthcare.
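To make the described pipeline concrete, the sketch below shows how Grad-CAM heatmaps can be produced for one label of a multi-label chest X-ray classifier. This is a minimal illustration, not the paper's implementation: the DenseNet-121 backbone, the 14-label sigmoid head, and the 224x224 input size are assumptions chosen because they are common in chest X-ray work.

```python
# A minimal, illustrative Grad-CAM sketch for a multi-label chest X-ray
# classifier. The DenseNet-121 backbone, the 14-label head, and the 224x224
# input are assumptions (common choices), not the paper's exact setup.
import torch
import torch.nn.functional as F
from torchvision import models

NUM_LABELS = 14  # assumed number of findings, one sigmoid output each

model = models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, NUM_LABELS)
model.eval()

# Cache the last convolutional feature maps and their gradients.
acts, grads = {}, {}

def fwd_hook(_module, _inputs, output):
    acts["v"] = output                                 # maps, shape (1, C, h, w)
    output.register_hook(lambda g: grads.update(v=g))  # gradient of same maps

model.features.register_forward_hook(fwd_hook)

def grad_cam(x, label_idx):
    """Heatmap (H, W) in [0, 1] for one label of a multi-label model."""
    logits = model(x)                       # (1, NUM_LABELS), pre-sigmoid
    model.zero_grad()
    logits[0, label_idx].backward()         # backprop one label's logit only
    w = grads["v"].mean(dim=(2, 3), keepdim=True)       # channel importance
    cam = F.relu((w * acts["v"].detach()).sum(dim=1))   # weighted sum + ReLU
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam = cam - cam.min()
    return (cam / (cam.max() + 1e-8)).numpy()

heatmap = grad_cam(torch.randn(1, 3, 224, 224), label_idx=0)  # dummy input
```

The F1 and ROC AUC evaluation the abstract mentions can be computed per label and macro-averaged; below is a minimal sketch with scikit-learn, assuming sigmoid probabilities thresholded at 0.5 for F1, with random arrays standing in for a real validation set.

```python
# Illustrative multi-label evaluation; random data stands in for the real
# ground-truth labels and sigmoid probabilities of a validation set.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

NUM_LABELS = 14
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, NUM_LABELS))   # binary label matrix
y_prob = rng.random(size=(100, NUM_LABELS))           # sigmoid outputs

macro_auc = roc_auc_score(y_true, y_prob, average="macro")
macro_f1 = f1_score(y_true, y_prob >= 0.5, average="macro")  # 0.5 threshold
print(f"macro ROC AUC: {macro_auc:.3f}  macro F1: {macro_f1:.3f}")
```

Macro averaging weights every label equally, which is the usual choice when some findings are much rarer than others, as in the imbalanced datasets this work targets.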