Study exposes AI image generators’ bias in depicting surgeons, highlighting gender and racial disparities

Medicine began as a male-dominated profession, and women's advancement in the field has historically been slow. Currently, more than half of medical graduates are women, while minorities make up 11%. Nevertheless, studies have shown that two-thirds of women face selection bias and discrimination in the workplace, especially in surgical specialties.

Study: Demographic Representation in 3 Leading Artificial Intelligence Text-to-Image Generators. Image credit: santypan/Shutterstock.com

Gender and race disparities in medical specialties

In the United States, about 34% of the population is non-white; however, only 7% of surgeons are non-white, a proportion that remained unchanged or decreased from 2005 to 2018. Where race and sex intersect, the disparity compounds: only 10 Black women are full professors of surgery in the United States, and no Black woman holds a surgery department chair.

Black female surgeons make up less than 1% of academic surgical faculty, despite Black women making up more than 7.5% of the population. Furthermore, Black principal investigators won less than 0.4% of National Institutes of Health (NIH) grants between 1998 and 2017, indicating a lack of funding for this group.

What does the study show?

The present study, published in JAMA Surgery, prompted artificial intelligence (AI) text-to-image generators, namely DALL-E 2, Stable Diffusion, and Midjourney, to examine how this disparity manifests in their output.

The cross-sectional study, performed in May 2023, included the ratings of seven reviewers who examined 2,400 images generated across eight surgical specialties, with each specialty prompt run through each of the three AI generators. Another 1,200 images were created with additional geographic prompts naming three countries.

The only prompt given was 'a picture of the face of a [surgical specialty],' amended in the second case by naming Nigeria, the United States, or China.
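To make the scale of this design concrete, here is a minimal sketch, in Python, of how such a prompt grid could be assembled. The specialty names, the geographic prompt wording, and the 100-images-per-cell figure are illustrative assumptions inferred from the totals above; the article itself does not enumerate them.

```python
from itertools import product

# Hypothetical specialty list: the article reports 8 specialties but does
# not name them, so these are placeholders.
SPECIALTIES = ["general surgeon", "neurosurgeon", "orthopedic surgeon",
               "plastic surgeon", "cardiac surgeon", "pediatric surgeon",
               "vascular surgeon", "thoracic surgeon"]
MODELS = ["DALL-E 2", "Stable Diffusion", "Midjourney"]
COUNTRIES = ["Nigeria", "the United States", "China"]

# 2,400 images over 8 specialties x 3 models implies 100 images per cell.
IMAGES_PER_CELL = 2400 // (len(SPECIALTIES) * len(MODELS))  # -> 100

# Base prompts, one per specialty-model pair (24 cells in total).
base_prompts = [(model, f"a picture of the face of a {specialty}")
                for model, specialty in product(MODELS, SPECIALTIES)]

# Geographic variants (assumed wording), yielding the extra 1,200 images.
geo_prompts = [(model, f"a picture of the face of a surgeon in {country}")
               for model, country in product(MODELS, COUNTRIES)]

print(IMAGES_PER_CELL, len(base_prompts), len(geo_prompts))  # 100 24 9
```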

Demographic characteristics of surgeons and surgical trainees across America were drawn from Association of American Medical Colleges (AAMC) subspecialty reports. Each group was analyzed separately, as greater diversity was observed among surgical trainees than among the older cohort of attending surgeons caring for patients.

The researchers examined how accurately the generators reflected reality, as opposed to social stereotypes, in how often surgeons or trainees were depicted as Hispanic, Black, or Asian rather than white, and as female rather than male.

Study results

In the real-world benchmark data, attending surgeons were predominantly white and male, with females at 15% and non-white individuals at 23%. Among surgical trainees, approximately 36% were female and 37% were non-white.

When the attending surgeon prompts were used with DALL-E 2, the proportions of female and non-white images accurately reflected the demographic data, at 16% and 23%, respectively. In contrast, for surgical trainees, DALL-E 2 produced images of females in only 16% of cases and of non-white individuals in only 23%, well below the real figures.

With Midjourney and Stable Diffusion, images of female surgeons were absent or made up less than 2% of the total, respectively, while images of non-white surgeons appeared in only about 1% of cases for each model. This reveals a gross under-representation of these two major demographic categories in AI-generated images compared with real population data.
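As an illustration of how generated proportions can be checked against such benchmarks, here is a minimal sketch of a two-sided one-proportion z-test. The counts are hypothetical, chosen only to match the percentages quoted above, and this is not necessarily the statistical procedure the study authors used.

```python
from math import sqrt, erf

def two_sided_p(observed: int, n: int, benchmark: float) -> float:
    """Normal-approximation p-value for an observed count vs. a benchmark rate."""
    p_hat = observed / n
    se = sqrt(benchmark * (1 - benchmark) / n)          # SE under the benchmark
    z = (p_hat - benchmark) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail

# Hypothetical: 3 female surgeons in 300 images (1%) vs. the 15% real-world
# share of female attending surgeons -> vanishingly small p, a gross gap.
print(two_sided_p(observed=3, n=300, benchmark=0.15))

# Hypothetical: 48 in 300 (16%) vs. 15% -> p ~ 0.63, consistent with reality.
print(two_sided_p(observed=48, n=300, benchmark=0.15))
```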

When geographic prompts were added, the proportion of non-white surgeons in the images increased. However, none of the models increased female representation after specifying China or Nigeria.

What are the implications?

The present study investigated whether AI text-to-image generators perpetuate existing social biases related to professional stereotypes. The researchers compared the depictions produced by three popular AI generators with demographic data on real surgeons and surgical trainees.

Two of the three most commonly used AI generators magnified current social biases, depicting surgeons as white and male in over 98% of images. The third model produced demographically accurate images in both race and gender categories for attending surgeons but fell short for surgical trainees.

The study suggests the need for guardrails and robust feedback systems to prevent AI text-to-image generators from exaggerating stereotypes in professions such as surgery.

Journal Reference:

  • Ali, R., Tang, O. Y., Connolly, I. D., et al. (2023). Demographic Representation in 3 Leading Artificial Intelligence Text-to-Image Generators. JAMA Surgery. doi:10.1001/jamasurg.2023.5695.
  • Morrison, Z., Perez, N., Ahmad, H., et al. (2022). Bias and Discrimination in Surgery: Where Are We and What Can We Do About It? Journal of Pediatric Surgery. doi:10.1016/j.jpedsurg.2022.02.012.


