
The Perils of Relying on ChatGPT and GoogleMD for Medical Advice


In an age where artificial intelligence (AI) is increasingly woven into our lives, including healthcare, it's crucial to understand the limitations of relying solely on AI-powered tools like ChatGPT and GoogleMD for managing medical conditions. Recent studies have uncovered concerning inaccuracies in these platforms, highlighting the risks they pose to users' health.


Two independent studies¹ have revealed alarming findings about ChatGPT's reliability. In both studies, ChatGPT provided incomplete or incorrect responses and, in some cases, even fabricated references to support its answers.


First Study: Inaccurate and Incomplete Information

In the first study, researchers compared ChatGPT's responses to questions about medications with information provided by a drug information service for pharmacists.


Strikingly, ChatGPT either failed to provide a response or offered inaccurate or incomplete information in 74% of cases.

For instance, when asked about potential interactions between a prescription COVID-19 treatment and a high blood pressure medication, ChatGPT stated that there were none, overlooking the known risk that CYP3A4 inhibition can amplify the blood pressure medication's effects. Even more concerning, ChatGPT fabricated references to support its answers, pointing users toward nonexistent studies and misleading information.


Second Study: Missing Crucial Information

In the second study, researchers assessed ChatGPT's ability to identify medication side effects.


They found that ChatGPT consistently missed at least half of the established side effects for 26 out of 30 different medications.

A tool that misses half or more of a medication's established side effects could have serious consequences for users who rely on it to manage their health conditions.


Can ChatGPT Replace Physicians?

The short answer is no, it cannot. In an article available through the National Library of Medicine, Jan Homolak states that “…they are notoriously bad at context and nuance (8) – two things critical for safe and effective patient care, which requires the implementation of medical knowledge, concepts, and principles in real-world settings”. Users should approach AI-powered medical advice platforms with caution and skepticism, recognizing their limitations and confirming what they read with reliable sources, such as healthcare professionals or reputable medical websites.


In conclusion, while AI technologies hold promise for revolutionizing healthcare, including the delivery of medical advice and information, users must exercise caution and critical thinking when relying on platforms like ChatGPT and GoogleMD. The risk of inaccurate or fabricated AI-generated responses underscores the importance of seeking guidance from qualified healthcare professionals to manage medical conditions effectively and safely.


Tria Health Can Help

Tria Health provides one-on-one pharmacist consultations for members with chronic conditions. Since medications are the primary treatment for chronic diseases and pharmacists are medication specialists, pharmacists are the best resource to provide this support. In addition, Tria Health uses intelligent remote monitoring to improve health literacy between telehealth appointments.


During a consultation, the Tria Health pharmacist will review all of the member's current medications, including vitamins and supplements. Tria Health will also help members identify possible drug interactions and savings opportunities! Pharmacists will work with members and their doctor(s) to ensure that medications are achieving their intended outcomes.

Questions?

Contact the Tria Health Desk at 1.888.799.8742.


Resources
