Study finds bias in how doctors talk to Black, female patients

Biases based on gender and ethnicity have been well documented throughout society, including in medical care, but a data analysis by a UO researcher shows exactly how those biases also surface in the language doctors use in their caregiver reports.

Physicians were more likely to refer to female patients impersonally than male patients, and they were less likely to use negative emotion words in their reports on Black patients than in reports on white patients.

Those were among the findings of assistant professor David Markowitz, a psychology of language researcher in the School of Journalism and Communication, who analyzed 1.8 million medical records for the language doctors used and found distinct differences in the way they communicated about those in their care.

“What was most surprising is how clear the signals were in the data,” Markowitz said. “It really paints a picture that bias is not just a one-off phenomenon among certain physicians or individuals. Bias is systemic, subtle and consequential in medicine.”

Markowitz set out to investigate how biases could appear among some of the most vulnerable populations and focused on patients in critical care.

“The evidence suggests bias manifests in how physicians talk about their patients,” he said. “And it's probably not too far of a leap to also suggest it might affect their care as well.”

Markowitz used a database of medical records for nearly 46,000 critical care patients, spanning more than 58,000 hospital admissions to Beth Israel Deaconess Medical Center in Boston. The records excluded patient names but included demographic data and notes from doctors and nurses about the patients’ care.

He used an automated text analysis tool that measured impersonal pronouns (such as it, someone and who), positive emotion terms (brave, safe and gentle), negative emotion terms (bad, weak and panic), body terms (nerve, spine, stomach), analytic thinking, and cognitive processing terms (solve, determine and perhaps).
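
Tools of this kind generally work by counting how often words from predefined category dictionaries occur in a text and normalizing by the note’s length. The sketch below, in Python, is a minimal illustration of that approach; the word lists are tiny, made-up stand-ins, not the study’s actual dictionaries or software.

```python
# Minimal sketch of dictionary-based text analysis. The category word
# lists here are illustrative stand-ins, not the study's actual dictionaries.
from collections import Counter
import re

CATEGORIES = {
    "impersonal_pronouns": {"it", "someone", "who", "anybody"},
    "positive_emotion": {"brave", "safe", "gentle", "calm"},
    "negative_emotion": {"bad", "weak", "panic", "distress"},
    "body": {"nerve", "spine", "stomach", "heart"},
    "cognitive_processing": {"solve", "determine", "perhaps", "consider"},
}

def category_rates(note: str) -> dict:
    """Return each category's share of the total words in a note."""
    words = re.findall(r"[a-z']+", note.lower())
    total = len(words) or 1  # avoid division by zero on empty notes
    counts = Counter(words)
    return {
        category: sum(counts[w] for w in vocab) / total
        for category, vocab in CATEGORIES.items()
    }

print(category_rates("Patient remains weak; someone should determine next steps."))
```

A real system would use validated dictionaries with thousands of entries and handle word stems, but the core measurement is this kind of normalized category count.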

“I looked at language patterns reflecting bias across groups,” Markowitz said. “So, for example, there are patients who are identified as white or Black or male, female, etc. Using computational methods to analyze language, I identified patterns of bias through the descriptions of a patient’s condition and progress.”
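
As a rough illustration of that kind of group comparison, the per-note rates from the sketch above could be averaged within each demographic group. The records and field names below are hypothetical, and the code reuses the category_rates function defined earlier.

```python
# Illustrative group comparison, reusing category_rates() from the sketch
# above. The records and field names are hypothetical, not study data.
from statistics import mean

notes = [
    {"gender": "female", "text": "Someone should determine whether panic recurs."},
    {"gender": "male", "text": "Spine and stomach exam normal; patient calm."},
]

def mean_rate_by_group(records, field, category):
    """Average one category's per-note rate, split by a demographic field."""
    groups = {}
    for rec in records:
        rate = category_rates(rec["text"])[category]
        groups.setdefault(rec[field], []).append(rate)
    return {group: mean(rates) for group, rates in groups.items()}

print(mean_rate_by_group(notes, "gender", "impersonal_pronouns"))
```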

The study provides some of the first evidence, drawn from actual medical records and patient interactions, that language plays a central role in the patient-physician relationship, and it demonstrates systematic gender and ethnicity biases in medicine through language.

The evidence establishes a link between communication patterns and bias that is often unobserved or underexamined in medicine, Markowitz noted. It also builds on other, nonlinguistic findings that indicate such inequalities are widespread in medicine.

The analysis suggested that physicians refer to female patients in more impersonal and emotional terms than male patients, attend less to the negative experiences of Black patients than to those of white patients, and require more effort to organize their thoughts when resolving issues for Black women than for other groups. Additionally, physicians writing about male patients focused more on the body than those writing about female patients.

The data also indicated physicians thought in more analytical and rational terms when attending to male patients than to female patients, and they used fewer indicators of needing to psychologically “work through” diagnoses for male patients than for female patients.

When gender and ethnicity were considered together, physicians demonstrated the greatest need to work through diagnoses for Black women, whereas patients of other genders and ethnicities received less questioning and required less cognitive effort from caregivers. Black women, on average, were described with the lowest rate of positive affect of any group.

“Based on the evidence, caregivers of Black women tend to communicate, at least linguistically, with the greatest indicators of bias,” Markowitz said.

Markowitz noted some caveats with the data: They were collected from only one hospital, and they could not account for physician demographics. It is unclear whether the physicians in the study are typical of most hospitals in the U.S.

He said next steps could include incorporating findings for individual physicians into annual performance evaluations or, perhaps eventually, using them in real-time situations.

“These data are unlikely to be useful as a bias detector, but instead they could be used as a red flag to indicate how some physicians are communicating with patients and make them aware of possible harmful behavior,” Markowitz said. “It would be fantastic to also get ratings from patients, too, to figure out whether they can pick up on some of these subtle biases reflected in language.”

Markowitz said it would be ideal to scale up the study to incorporate data from additional hospitals and determine to what degree the findings hold across institutions.

“How can we take this information and then either provide trainings or provide ways of understanding the systems that are perpetuating bias?” Markowitz said. “Because medicine as an institution has a significant negative history of undermining people of color and undermining women. So how can we use this evidence to motivate change? We need ways to improve the system because making it more inclusive not only provides better care, it's also just the right thing to do.”

By Jim Murez, University Communications