Dr. Tiffany Kung

Modern Healthcare reporters take a deep dive with leaders in the industry who are standing out and making a difference in their organization or their field. We hear from Dr. Tiffany Kung, researcher at virtual pulmonary rehab treatment center AnsibleHealth, about how the ChatGPT model—which uses natural language processing to generate text responses based on available data—could eventually be used in the healthcare industry.

What are some ways that health systems could use artificial intelligence technology such as ChatGPT to augment care and provider operations?

At AnsibleHealth, we’re already using ChatGPT every day. We’ve incorporated it into our electronic health record so our providers are able to use ChatGPT to better communicate with patients, and we’re using it to talk to our insurance providers—to do things like rewrite an appeal letter if [payers have] denied a claim. All our providers have undergone training to make sure that everything’s deidentified, so it’s HIPAA-compliant.

ChatGPT is most commonly being used right now to communicate with insurance [companies] and to do a lot of administrative work, since physicians now spend so much of their time dealing with things that are not direct patient care: paperwork and billing.

In terms of the chatbot’s potential shortcomings, where might providers run into issues with ChatGPT? In what ways is this technology not fully equipped for use in the healthcare sector?

ChatGPT and most other existing AI tools are not HIPAA-compliant at the moment. That means they can’t handle any patient data that’s sensitive or confidential. That’s really one of their big shortcomings. For us to incorporate ChatGPT and other AI more into our everyday use, we have to do a lot of rigorous testing. Just like any novel drug or any new technology, we need to test its safety, usability and efficacy.

You recently led a study in which researchers had ChatGPT take the U.S. Medical Licensing Exam. How is the chatbot’s performance on that exam an indicator of its possible effectiveness in medical education?

We were really excited to see that ChatGPT was capable of passing the U.S. Medical Licensing Exam. It [scored] about 60%, which was the passing threshold. That puts it at roughly the 1st to 2nd percentile of performance on this exam.

So by no means is ChatGPT capable of being your physician or being a good doctor right now. There’s a lot of work to be done. Everything is still very early, but we’re really excited about the potential.

What do you think some of that potential might amount to?

There are a lot of different applications. It’s still very early. At AnsibleHealth, we take care of patients who are extraordinarily sick: They have respiratory diseases like chronic obstructive pulmonary disease, and they also have other comorbidities like cardiac conditions and kidney conditions. A lot of the work we do is coordinating care among the many doctors and specialists these patients need. We help improve communication among the patients, cardiologists, nephrologists and pulmonologists. That’s something that AI can do: improve care coordination.

Healthcare leaders have several concerns about the chatbot’s inaccuracies, which could have detrimental effects on patient care. What are your impressions of the healthcare industry’s perception of this tool?

As a whole, healthcare has a really high bar for using anything for patient care. Our bar is so high because we’re dealing with patient lives. So anything we use has to be the safest possible.

Additionally, a lot of physicians are cautious when dealing with new technology or new drugs. Every day in the hospital, we communicate with each other using pagers: pretty antiquated technology, but it shows how sometimes healthcare is wary of new technologies, and we like things we’re comfortable with.


“My greatest hope is that AI will allow me to spend more time with patients … and allow me to be a better physician.”


At AnsibleHealth, how are you testing the chatbot to see if it works in daily operations with both providers and patients?

We’re using it every day, in a way that’s HIPAA-compliant. We like to do things in an iterative way, so we’ll put different example [questions into ChatGPT]. We’ll see the response, and we always review it to make sure things are accurate before sending it to a patient or to a different provider.

If a physician has a question about getting a [clinical] decision report on a diagnosis or is looking for a broad differential diagnosis, they could type in some symptoms, and then ChatGPT could give them an example of a really broad differential.

Which aspects of ChatGPT could be improved upon to make this technology more specifically helpful to those in the healthcare sector?

When we completed our research study looking at ChatGPT, we gave it a lot of classical case vignettes. That’s how the questions are [presented] on the U.S. Medical Licensing Exam. Sometimes the answers were incredibly impressive, and it would spell out exactly what it believed the diagnosis was and further management techniques. A lot of times it gave what we classified as indeterminate answers: It wouldn’t make a decision, and it would be very vague.

ChatGPT was just trained on general data, not specifically on healthcare data. As we move forward, I’m excited to see the possibilities for combining a really intelligent technology like ChatGPT with medical data.

We have a responsibility to make sure our healthcare AI is being trained on good-quality data. For example, a lot of the literature right now that dominates PubMed is based in high-income countries like the United States or Europe. We want to make sure that we’re [training tools on] data that [represent] a diverse healthcare population and diverse people of different socioeconomic statuses.

What are some outcomes that you hope will come from using this kind of technology on a regular basis?

I hope that patients are able to receive better care. For physicians right now, there are a lot of challenges that we face with the number of tasks that we’re supposed to do every day. There are so many administrative tasks, so many insurance tasks, so much billing—and a lot of this takes us away from actually being able to interact one-on-one and provide quality care. My greatest hope is that AI will allow me to spend more time with patients … and allow me to be a better physician.

I want physicians to recognize that AI is here. This is a really exciting time for us to be in medicine. 

