ChatGPT may have better bedside manner than some doctors, but it lacks some expertise


NewsRescue

ChatGPT can be a helpful tool for people seeking medical information and advice, but the artificial intelligence technology cannot fully replace a real physician – it admits as much.

“While I am a language model that has been trained on a vast amount of information, I am not a licensed medical professional and I am not capable of providing medical diagnoses, treatments, or advice,” the chatbot said in response to a question from CNN.

Nonetheless, new research published this week suggests that physicians can learn from chatbots when it comes to patient communication.

A panel of licensed health care professionals evaluated responses to more than 200 medical questions posted in a public online forum, including questions about medical diagnoses, whether a problem needed medical attention, and other topics.

ChatGPT responses were “preferred over physician responses and rated significantly higher for both quality and empathy,” according to a study released on Friday.

More than a quarter of physician responses were judged to be of poor quality, compared with less than 3% of ChatGPT’s responses. Similarly, nearly half of ChatGPT’s responses (45%) were deemed empathetic, compared with less than 5% of physician responses.

Overall, the study found, ChatGPT’s responses were rated 21% higher than physicians’ for quality and 41% higher for empathy.

In one scenario from the study, a patient asked a social media forum whether they could go blind after getting a splash of bleach in the eye. ChatGPT’s response began with an apology for the scare, followed by seven more sentences of advice and reassurance that going blind was “unlikely.” A physician, by contrast, responded “sounds like you’ll be fine,” followed by the Poison Control phone number. Every clinician who evaluated the exchange preferred ChatGPT’s response.

As this case illustrates, experts note that ChatGPT’s responses tend to be much longer than physicians’, which may have influenced perceptions of quality and empathy.

“We cannot know for sure whether the raters judged for style (e.g., verbose and flowery discourse) rather than content without controlling for response length,” noted Mirella Lapata, professor of natural language processing at the University of Edinburgh.

Dr. David Asch, a professor of medicine and senior vice dean at the University of Pennsylvania, asked ChatGPT earlier this month how it might be used in health care. He found the responses thorough but verbose.

“It turns out ChatGPT is a little chatty,” he explained. “It didn’t sound like anyone was speaking to me. It sounded like someone was attempting to be really thorough.”

Asch, who led Penn Medicine’s Center for Health Care Innovation for ten years, says he would be pleased to meet a young physician who answered questions as thoroughly and carefully as ChatGPT did, but warns that the AI tool is not yet ready to be entrusted with patients on its own.
