Artificial intelligence offers better and more empathetic advice than human doctors

Khushbu Kumari

The panel of healthcare professional raters preferred ChatGPT responses to those of human physicians 79% of the time

While technology is often expected to be cold in its interactions with humans, a recent study found that the artificial intelligence assistant ChatGPT may be more empathetic than human doctors when giving health advice.

A new study published in JAMA Internal Medicine and led by John W. Ayers, MD, of the Qualcomm Institute at the University of California, San Diego, offers a first glimpse of the role AI assistants could play in medicine.

The study compared physicians' written responses and ChatGPT's responses to real-world health questions. A panel of licensed healthcare professionals preferred the ChatGPT responses 79% of the time and rated them higher in both quality and empathy.

“The opportunities to improve healthcare with AI are enormous,” says Ayers, who is also deputy director of innovation in the Division of Infectious Diseases and Global Public Health at the UC San Diego School of Medicine. “AI-augmented care is the future of medicine.”

In the new study, the research team set out to answer the question: Can ChatGPT accurately answer the questions that patients send to their doctors? If so, AI models could be integrated into healthcare systems to improve physician responses to questions submitted by patients and alleviate the increasing burden on physicians.

To get a large and diverse sample of health questions and responses from doctors that did not contain personally identifiable information, the team turned to social media, where millions of patients post medical questions that doctors respond to: Reddit's AskDocs.

AskDocs is a subreddit with approximately 452,000 members, where users post medical questions and verified healthcare professionals submit answers. Although anyone can reply to a question, moderators verify the credentials of healthcare professionals, and responses display the respondent's credential level. The result is a broad and diverse set of medical questions from patients and corresponding answers from licensed medical professionals.

The team randomly selected 195 AskDocs exchanges in which a verified physician answered a public question. The researchers supplied each original question to ChatGPT and asked it to draft a response. A panel of three licensed healthcare professionals then evaluated each question alongside both answers, without knowing whether a given answer came from a physician or from ChatGPT. They compared the responses on information quality and empathy and noted which they preferred.

ChatGPT responses were better

The panel of healthcare professional evaluators preferred ChatGPT's responses to the physicians' responses 79% of the time.

“ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient's questions than the physicians' responses,” said Jessica Kelley, a nurse practitioner with Human Longevity in San Diego and a co-author of the study.

In addition, the quality of the ChatGPT responses was rated significantly higher than that of the physician responses: the proportion of responses rated good or very good was 3.6 times higher for ChatGPT (78.5% vs. 22.1% for physicians). ChatGPT's responses were also more empathic: the proportion rated empathetic or very empathetic was 9.8 times higher for ChatGPT (45.1% vs. 4.6% for physicians).

“I never imagined saying this,” added Dr. Aaron Goodman, an associate clinical professor at UC San Diego School of Medicine and study co-author, “but ChatGPT is a prescription I'd like to give to my inbox. The tool will transform the way I support my patients.”
