
ChatGPT Does a Bad Job of Answering People's Medication Questions, Study Finds


Researchers recently tested ChatGPT's ability to answer patient questions about medication, finding that the viral chatbot came up dangerously short. The research was presented at the American Society of Health-System Pharmacists' annual meeting, which was held this week in Anaheim.

The free version of ChatGPT, which was the one tested in the study, has more than 100 million users. Providers should be wary of the fact that the generative AI model doesn't always give sound medical advice, given that many of their patients could be turning to ChatGPT to answer health-related questions, the study pointed out.

The study was conducted by pharmacy researchers at Long Island University. They first gathered 45 questions that patients posed to the university's drug information service in 2022 and 2023, and then they wrote their own answers to them. Each response was reviewed by a second investigator.

Then, the research team fed these same questions to ChatGPT and compared its answers to the pharmacist-produced responses. The researchers gave ChatGPT 39 questions instead of 45, as the subject matter of six of the questions lacked the published literature needed for ChatGPT to provide a data-driven response.

The study found that only about a quarter of ChatGPT's answers were satisfactory. ChatGPT did not directly address 11 questions, gave wrong answers to 10, and provided incomplete answers for another 12, the researchers wrote.

For instance, one question asked whether there is a drug interaction between the blood pressure-lowering medication verapamil and Paxlovid, Pfizer's antiviral pill for Covid-19. ChatGPT said that there is no interaction between the two drugs, which isn't true: combining these two medications could dangerously lower a person's blood pressure.

In some cases, the AI model generated false scientific references to support its responses. In each prompt, the researchers asked ChatGPT to show references for the information provided in its answers, but the model provided references in only eight responses, and all of those references were made up.

“Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information,” Dr. Sara Grossman, a lead author of the study, said in a statement. “Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources.”

ChatGPT's usage policy echoes Dr. Grossman's sentiments. It states that the model is “not fine-tuned to provide medical information,” and that people should never turn to it when seeking “diagnostic or treatment services for serious medical conditions.”

Photo: venimo, Getty Images
