Given the growing ubiquity of machine learning and artificial intelligence in healthcare settings, it’s become increasingly important to meet patient needs and engage users.
And as panelists noted during a HIMSS Machine Learning and AI for Healthcare Forum session this week, designing technology with the user in mind is a vital way to ensure tools become an integral part of workflow.
“Big Tech has stumbled somewhat” in this regard, said Bill Fox, healthcare and life sciences lead at SambaNova Systems. “The patients, the providers – they don’t really care that much about the technology, how cool it is, what it can do from a technological standpoint.
“It really has to work for them,” Fox added.
Jai Nahar, a pediatric cardiologist at Children’s National Hospital, agreed, stressing the importance of human-centered AI design in healthcare delivery.
“Whenever we’re trying to roll out a productive solution that incorporates AI,” he said, “right from the designing [stage] of the product or service itself, the patients should be involved.”
That inclusion should extend to provider users as well, he said: “Before rolling out any product or service, we should involve physicians or clinicians who are going to use the technology.”
The panel, moderated by Rebekah Angove, vice president of evaluation and patient experience at the Patient Advocate Foundation, noted that AI is already affecting patients both directly and indirectly.
In ideal scenarios, for example, it’s empowering doctors to spend more time with individuals. “There’s going to be a human in the loop for a very long time,” said Fox.
“We can power the clinician with better information from a much larger data set,” he continued. AI is also enabling screening tools and patient access, said the experts.
“There are many things that work in the background that impact [patient] lives and experience already,” said Piyush Mathur, staff anesthesiologist and critical care physician at the Cleveland Clinic.
At the same time, the panel pointed to the role clinicians can play in building patient trust around artificial intelligence and machine learning technology.
Nahar said that as a provider, he considers several questions when using an AI-powered tool for his patient. “Is the technology … really needed for this patient to solve this problem?” he said he asks himself. “How will it improve the care that I deliver to the patient? Is it something reliable?”
“Those are the points, as a physician, I would like to know,” he said.
Mathur also raised the issue of educating clinicians about AI. “We have to understand it a little bit better to be able to translate that science to the patients in their own language,” he said. “We have to be the guardians of making sure that we’re providing the right data for the patient.”
The panelists discussed the problem of bias, about which patients may have concerns – and rightly so.
“There are multiple entry points at which bias can be introduced,” said Nahar.
During the design process, he said, multiple stakeholders need to be involved to closely consider where bias could be coming from and how it can be mitigated.
Echoing a point panelists made at other sessions, he also emphasized the importance of evaluating tools on an ongoing basis.
Developers and users should be asking themselves, “How can we improve and make it better?” he said.
Overall, said Nahar, best practices and guidance need to be established to better implement and operationalize AI from both the patient and provider perspectives.
The onus is “upon us to make sure we use this technology in the correct way to improve care for our patients,” added Mathur.