ChatGPT found by study to spread inaccuracies when answering medication questions

A Long Island University study found that nearly 75% of drug-related queries submitted to ChatGPT came back with incomplete or wrong responses. Fox News Digital spoke to experts.

ChatGPT has been found to share inaccurate information about drug usage, according to new research.

In a study led by Long Island University (LIU) in Brooklyn, New York, nearly 75% of drug-related, pharmacist-reviewed responses from the generative AI chatbot were found to be incomplete or wrong.

In some cases, ChatGPT, which was developed by OpenAI in San Francisco and released in late 2022, provided "inaccurate responses that could endanger patients," the American Society of Health-System Pharmacists (ASHP), headquartered in Bethesda, Maryland, stated in a press release.


ChatGPT also generated "fake citations" when asked to cite references supporting some of its responses, the study found.

Along with her team, lead study author Sara Grossman, PharmD, associate professor of pharmacy practice at LIU, asked the AI chatbot real questions that were originally posed to LIU’s College of Pharmacy drug information service between 2022 and 2023.

Of the 39 questions posed to ChatGPT, only 10 responses were deemed "satisfactory," according to the research team's criteria.

The study findings were presented at ASHP’s Midyear Clinical Meeting from Dec. 3 to Dec. 7 in Anaheim, California.

Grossman shared her initial reaction to the study's findings with Fox News Digital.


Since "we had not used ChatGPT previously, we were surprised by ChatGPT’s ability to provide quite a bit of background information about the medication and/or disease state relevant to the question within a matter of seconds," she said via email. 

"Despite that, ChatGPT did not generate accurate and/or complete responses that directly addressed most questions."

Grossman also mentioned her surprise that ChatGPT was able to generate "fabricated references to support the information provided."

In one example she cited from the study, ChatGPT was asked if "a drug interaction exists between Paxlovid, an antiviral medication used as a treatment for COVID-19, and verapamil, a medication used to lower blood pressure."


The AI model responded that no interactions had been reported with this combination.

But in reality, Grossman said, the two drugs pose a potential threat of "excessive lowering of blood pressure" when combined.

"Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect," she warned.

ChatGPT should not be considered an "authoritative source of medication-related information," Grossman emphasized.

"Anyone who uses ChatGPT should make sure to verify information obtained from trusted sources — namely pharmacists, physicians or other health care providers," Grossman added.


The LIU study did not evaluate the responses of other generative AI platforms, Grossman pointed out — so there isn't any data on how other AI models would perform under the same conditions.

"Regardless, it is always important to consult with health care professionals before using information that is generated by computers, which are not familiar with a patient’s specific needs," she said.

Fox News Digital reached out to OpenAI, the developer of ChatGPT, for comment on the new study.

OpenAI has a usage policy that disallows use for medical instruction, a company spokesperson previously told Fox News Digital in a statement.

"OpenAI’s models are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions," the company spokesperson stated earlier this year. 

"OpenAI’s platforms should not be used to triage or manage life-threatening issues that need immediate attention."

The company also requires that when using ChatGPT to interface with patients, health care providers "must provide a disclaimer to users informing them that AI is being used and of its potential limitations." 

In addition, as Fox News Digital previously noted, one big caveat is that ChatGPT’s source of data is the internet — and there is plenty of misinformation on the web, as most people are aware. 

That’s why the chatbot’s responses, however convincing they may sound, should always be vetted by a doctor.

Additionally, ChatGPT was trained only on data up to September 2021, according to multiple sources. While its knowledge base can be expanded through retraining, it is limited when it comes to serving up more recent information.

Last month, CEO Sam Altman reportedly announced that OpenAI's ChatGPT had gotten an upgrade — and would soon be trained on data up to April 2023.

Dr. Harvey Castro, a Dallas, Texas-based board-certified emergency medicine physician and national speaker on AI in health care, weighed in on the "innovative potential" that ChatGPT offers in the medical arena.

"For general inquiries, ChatGPT can provide quick, accessible information, potentially reducing the workload on health care professionals," he told Fox News Digital.


"ChatGPT's machine learning algorithms allow it to improve over time, especially with proper reinforcement learning mechanisms," he also said.

ChatGPT’s recently reported response inaccuracies, however, pose a "critical issue" with the program, the AI expert pointed out.

"This is particularly concerning in high-stakes fields like medicine," Castro said.

Another potential risk is that ChatGPT has been shown to "hallucinate" information — meaning it might generate plausible but false or unverified content, Castro warned. 


"This is dangerous in medical settings where accuracy is paramount," said Castro.

AI "currently lacks the deep, nuanced understanding of medical contexts" possessed by human health care professionals, Castro added.

"While ChatGPT shows promise in health care, its current limitations, particularly in handling drug-related queries, underscore the need for cautious implementation."

Speaking as an ER physician and AI health care consultant, Castro emphasized the "invaluable" role that medical professionals have in "guiding and critiquing this evolving technology."

"Human oversight remains indispensable, ensuring that AI tools like ChatGPT are used as supplements rather than replacements for professional medical judgment," Castro added.

Melissa Rudy of Fox News Digital contributed reporting. 

For more Health articles, visit www.foxnews.com/health.
