New AI tools may record your medical appointment or draft a message from your doctor

By CARLA K. JOHNSON (AP Medical Writer)

Don’t be surprised if your doctors start sending you unusually friendly messages. They could be getting help from artificial intelligence.

New AI tools are helping doctors communicate with their patients, some by answering messages and others by taking notes during exams. It’s been 15 months since OpenAI released ChatGPT, and already thousands of doctors are using similar products based on large language models. One company says its tool works in 14 languages.

Supporters say AI saves doctors time and prevents burnout. It also shifts the doctor-patient relationship, raising questions about trust, transparency, privacy and the future of human connection.

A look at how the new AI tools affect patients:

IS MY DOCTOR USING AI?

In recent years, medical devices with machine learning have been doing things like reading mammograms, diagnosing eye disease and detecting heart problems. What’s new is generative AI’s ability to respond to complex instructions by predicting language.

Your next check-up could be recorded by an AI-powered smartphone app that listens, documents, and instantly organizes everything into a note for you to access later. The tool can also result in more revenue for the doctor's employer as it will not overlook details that could legitimately be billed to insurance.

Your doctor should seek your consent before using the tool. You might also notice some new language in the forms you sign at the doctor's office.

Other AI tools might assist your doctor in composing a message, but you may never be aware of it.

“Your physician may inform you that they are using it, or they may not,” said Cait DesRoches, director of OpenNotes, a Boston-based group that promotes transparent communication between doctors and patients. Some health systems encourage disclosure; others do not.

Doctors or nurses must approve the AI-generated messages before sending them. In one Colorado health system, such messages include a sentence indicating they were automatically generated. However, doctors can remove that line.

“It sounded exactly like him. It was remarkable,” said patient Tom Detner, 70, of Denver, who recently received an AI-generated message that began: “Hello, Tom, I’m glad to hear that your neck pain is improving. It’s important to listen to your body.” The message concluded with “Take care” and a disclaimer that it had been automatically generated and edited by his doctor.

Detner said he appreciated the transparency. “Full disclosure is very important,” he said.

WILL AI MAKE MISTAKES?

Large language models can misinterpret input or even fabricate inaccurate responses, a phenomenon known as hallucination. The new tools have internal safeguards intended to keep inaccuracies from reaching patients or being filed in electronic health records.

“You don’t want those false things entering the clinical notes,” said Dr. Alistair Erskine, who leads digital innovations for Georgia-based Emory Healthcare, where hundreds of doctors are using a product from Abridge to document patient visits.

The tool runs the doctor-patient conversation through several large language models and removes odd ideas, according to Erskine. He said it’s a way of engineering out hallucinations.

Ultimately, said Dr. Shiv Rao, CEO of Abridge, “the doctor is the most important guardrail.” As doctors review AI-generated notes, they can click on any word to listen to that segment of the patient’s visit and check it for accuracy.

In Buffalo, New York, a different AI tool misheard Dr. Lauren Bruckner when she told a teenage cancer patient it was a good thing she didn’t have an allergy to sulfa drugs. The AI-generated note incorrectly said: “Allergies: Sulfa.”

The tool somehow misunderstood the conversation, said Bruckner, chief medical information officer at Roswell Park Comprehensive Cancer Center. “That doesn’t happen often, but clearly that’s a problem,” she said.

WHAT ABOUT THE HUMAN TOUCH?

AI tools can be prompted to be friendly, empathetic and informative.

However, they can go too far. In Colorado, a patient with a runny nose was alarmed to learn from an AI-generated message that the problem could be a brain fluid leak. (It wasn’t.) A nurse had not proofread carefully and mistakenly sent the message.

“At times, it’s an astounding help and at times it’s no help at all,” said Dr. C.T. Lin, who leads technology innovations at Colorado-based UCHealth, where about 250 doctors and staff use a Microsoft AI tool to write the first draft of messages to patients. The messages are delivered through Epic’s patient portal.

The tool had to learn about a new RSV vaccine because it was drafting messages saying there was no such thing. However, with routine advice — like rest, ice, compression, and elevation for an ankle sprain — “it’s excellent for that,” Lin said.

Another positive aspect is that doctors using AI are no longer tied to their computers during medical appointments. They can make eye contact with their patients because the AI tool records the exam.

The tool needs audible words, so doctors are learning to explain things aloud, according to Dr. Robert Bart, chief medical information officer at Pittsburgh-based UPMC. A doctor might say: “I am currently examining the right elbow. It is quite swollen. It feels like there’s fluid in the right elbow.”

Talking through the exam for the benefit of the AI tool can also help patients understand what’s going on, Bart said. “I’ve been in an examination where you hear the hemming and hawing while the physician is doing it. And I’m always wondering, ‘Well, what does that mean?’”

WHAT ABOUT PRIVACY?

U.S. law requires health care systems to get assurances from business associates that they will safeguard protected health information, and the companies could face investigation and fines from the Department of Health and Human Services if they mess up.

Doctors interviewed for this article said they feel confident in the data security of the new products and that the information will not be sold.

Still, information shared with the new tools is used to improve them, which could add to the risk of a health care data breach.

Dr. Lance Owens, chief medical information officer at the University of Michigan Health-West, where 265 doctors, physician assistants, and nurse practitioners are using a Microsoft tool to document patient exams, believes patient data is being protected.

Owens said his team trusts the companies’ assurances that patient information is kept safe and protected.

___

The Associated Press Health and Science Department is supported by the Howard Hughes Medical Institute’s Science and Educational Media Group. The AP is entirely responsible for all its content.
