Gender imbalance in AI-based medical imaging

A study finds evidence that gender imbalance in medical imaging datasets used to train classifiers for computer-aided diagnosis produces biased models. Artificial intelligence (AI) is increasingly applied to disease diagnosis through medical imaging, but potential gender and racial biases in AI systems remain a significant concern. Enzo Ferrante and colleagues analyzed the performance of three deep learning models for chest X-ray diagnosis on two large, publicly available datasets. The datasets, maintained by the US National Institutes of Health and Stanford University, include the disease diagnoses and gender information of 30,805 and 65,240 individuals, respectively. The authors found a consistent drop in the models’ diagnostic performance when the training set was drawn mostly from male patients and the models were tested on images of female patients, and vice versa. Compared with a gender-balanced training set, a training set with a 25%/75% gender split yielded significantly lower classification performance for the underrepresented group. The findings suggest that gender imbalance in the medical datasets used to train AI-based diagnostic systems produces biased classifiers, which could exacerbate healthcare disparities, according to the authors.
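The core of the experimental design described above is constructing training sets with controlled gender ratios (e.g. 25%/75% versus a 50%/50% baseline) and then evaluating the trained model separately on each gender. A minimal sketch of such a split, assuming image identifiers have already been grouped by patient gender (the function and variable names are illustrative, not taken from the authors' code):

```python
import random

def gender_ratio_split(male_ids, female_ids, n_train, male_fraction, seed=0):
    """Sample a training set with a fixed male/female ratio.

    male_ids / female_ids: image identifiers grouped by patient gender
    male_fraction: 0.75 for a 75%/25% male-skewed condition,
                   0.5 for the balanced baseline
    """
    rng = random.Random(seed)
    n_male = round(n_train * male_fraction)
    n_female = n_train - n_male
    train = rng.sample(male_ids, n_male) + rng.sample(female_ids, n_female)
    rng.shuffle(train)  # avoid gender-ordered batches during training
    return train

# Illustrative use: build the skewed and balanced conditions, then (not
# shown) train one model per condition and compare per-gender test metrics.
males = list(range(0, 1000))       # placeholder identifiers
females = list(range(1000, 2000))
skewed = gender_ratio_split(males, females, n_train=100, male_fraction=0.75)
balanced = gender_ratio_split(males, females, n_train=100, male_fraction=0.5)
```

In the study's setup, each such training condition would be used to fit a chest X-ray classifier, with performance then reported separately on held-out male and female test images to expose the gap for the underrepresented group.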

Article #19-19012: “Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis,” by Agostina J. Larrazabal, Nicolás Nieto, Victoria Peterson, Diego H. Milone, and Enzo Ferrante.

MEDIA CONTACT: Enzo Ferrante, Ciudad Universitaria UNL, Santa Fe, ARGENTINA, tel: +54 9 343 452 6369; email: [email protected]

###

Source: https://www.eurekalert.org/pub_releases/2020-05/potn-gii052020.php
