
1 in 5 Doctors Use AI Chatbots: Should Patients Be Worried?

By James Morales
Key Takeaways

  • A recent survey of U.K. doctors found that 20% had used generative AI tools in a medical setting.
  • Medical use cases for the technology include coming up with lists of potential diagnoses.
  • However, research suggests ChatGPT’s diagnostic accuracy is poor.

Doctors are increasingly turning to AI chatbots to help them generate medical documentation, file insurance requests, and even assist in diagnosis. For example, in a recent survey of general practitioners in the U.K., 20% reported using generative AI in clinical practice.

However, with one in five doctors now embracing chatbots, the impact on patients remains uncertain.

1 in 5 U.K. Doctors Use AI

In the survey of 1,006 doctors, 205 reported using AI tools at work.

Of the GPs who said they used chatbots, 29% reported using them to generate documentation after patient appointments. Meanwhile, 28% said they used them to suggest differential diagnoses.

While professionals in many fields will be able to relate to doctors’ use of AI to expedite routine post-appointment paperwork, the use of chatbots to help with diagnostics raises ethical questions. 

The Risk of Misdiagnosis

Despite ChatGPT’s increasingly sophisticated reasoning capabilities, relying on AI to diagnose patients based on their symptoms carries a real risk of misdiagnosis.

In a study published in JAMA Pediatrics earlier this year, researchers investigated the diagnostic accuracy of ChatGPT. They found that when presented with real medical cases, the chatbot rendered an incorrect diagnosis 61% of the time.

In some instances, the AI was way off. For example, it attributed one patient’s rash to immune thrombocytopenic purpura when the patient was actually diagnosed with scurvy.

However, in other cases, the AI came close to the correct diagnosis but was not sufficiently detailed, such as when it diagnosed branchio-oto-renal syndrome as a more generic branchial cleft cyst.

The evidence suggests chatbots aren’t capable of making critical medical decisions; however, the researchers acknowledged that they can be important “supplementary tools” in the diagnostic process. 

How Medical Professionals Use Chatbots

Although the U.K. survey suggests doctors are using AI to help with diagnoses, this appears to be an example of the technology’s “supplementary” use.

Cited use cases include using AI to generate differential diagnoses, i.e., a preliminary list of possible conditions doctors create before making a final diagnosis.

But there are plenty of other ways doctors use the technology.

With more medical practitioners embracing AI, medical technology startups have developed new solutions that specifically address the field’s unique privacy and safety requirements. 

Billed as smart assistants, these tools can streamline charting and letter-writing, leading to more effective communication with patients.

In one case, a doctor using one of these smart assistants claimed to have halved the time he spent writing prior authorization requests for insurers. Not only was he able to write letters faster, but he also observed a higher success rate when using AI, which he could prompt to tailor requests according to each patient’s insurance policy.
