
The Medical Minute: AI nothing new to health care, but enhancements offer possibilities, pitfalls

Although the term was first coined 67 years ago, artificial intelligence has dominated headlines in recent months.

AI ― loosely defined as the ability for computers to learn and reason ― isn’t some ominous new invention. And its application to health care dates back decades. For example, hospitals and doctors’ offices have been using computer programs that pore over notes and documentation to find the right terms to apply to someone’s billing for at least 40 years.

Advancements in AI systems have helped doctors find tumors more easily and root out hard-to-find disease in earlier stages than anyone ever dreamed possible without it.

Then why all the histrionics lately? A natural language processing tool (NLP) called ChatGPT, along with a handful of others, has put the stunning possibilities of AI on everyone’s laptop. Want a highly specific design for your dream house? A meticulously planned vacation tailored to your personality? A David Mamet-style play in three acts with you as the protagonist? AI can produce them all at speeds blinding enough to excite some and frighten others.

The rapid developments stoke some natural fears about everything from movie-making to health care. Among them: Will AI take people out of systems where they’re needed most?

“AI has been creeping in like a slow leak,” said Dr. Will Hazard, a physician in the Neurological Intensive Care Unit at Penn State Health Milton S. Hershey Medical Center. “This is like a dam breaking.”

Like doctors everywhere, Hazard has been using AI for years. And while there are dangers, as there are with any information system, he believes the potential benefits are profound.

Hazard discusses AI, how he uses it, where it can go from here and why patients need not worry: their good, old-fashioned flesh-and-blood doctors will still be on the job.

What’s new about NLPs like ChatGPT?

“Older NLPs were good, but they were trained on finite amounts of information,” Hazard said. For example, a hospital might purchase a program that just searches for information on neurological disorders and helps provide solutions.

Programs like OpenAI’s ChatGPT and others can search for anything. And they can make that search through virtually all available electronic information. AI programs can process it, interpret it, learn from it and create responses. In seconds.

“That’s what makes it so universal,” Hazard said. “You have access to all the information that exists. You just need to think about how you want to get it and how you want to automate it.

“And the language output is very, very vernacular,” he added. “It’s close to the spoken language.”

What does that mean for medicine?

If doctors and health care organizations use it to its full capability, this development could “revolutionize medicine in nearly all fields,” Hazard said.

Think about this: The amount of medical literature doctors use to help research and diagnose illnesses doubles every 73 days. “Even if I devoted 24 hours a day to it, there’s no way my brain could assemble all of that information,” he said.

Doctors like Hazard keep up with the vast volumes of research materials as much as they can, but with so much being published all the time, they might miss something.

AI systems are helping fill the gaps.

How do you use it?

“I think of it as a really smart colleague that I have access to 24/7,” Hazard said.

He doesn’t count on a program like ChatGPT to make his diagnoses for him. Rather, he uses the program to test his logic, to open up other possibilities that might not occur to him. “Now I can listen to a consultant on my shoulder,” he said, “or I can say no, that’s not correct. I know for a fact that’s not true.”

He points to a story in the book “The AI Revolution in Medicine.” One of the authors, Dr. Isaac Kohane, a Harvard professor and pediatric endocrinologist, used ChatGPT-4 for the first time to offer “differential diagnoses” – or other alternatives to his diagnosis – for a patient. The system made the correct diagnosis. “He was one of five pediatric endocrinologists in the world that could make this diagnosis,” Hazard said.

The AI system got it right within seconds.

“For me that was validation,” Hazard said. “That’s the way I’m using it. I’m confirming my diagnosis and looking for things that I didn’t think of in my own delivery of care.”

What are the dangers?

Among the valid concerns people have is what the AI industry refers to as “hallucinations.”

For example, a lawyer who used an AI product to prepare for a trial reported that the system, attempting to bolster his argument, invented cases out of thin air to show legal precedent. The AI made them up. It hallucinated.

Incidents like this are becoming rarer as the systems are perfected and, spookier still, perfect themselves, Hazard said. And when you’re ordering airline tickets or asking for help with stock picks, you can live with glitches.

But when you’re dealing with someone’s care, hallucinations are worrisome. And as use of the technology expands, problems like hallucinations underscore the importance of keeping human beings in the system and making sure it’s used correctly.

In “The AI Revolution in Medicine,” while extolling ChatGPT-4’s “mind-blowing” abilities, Kohane says, “its potential risks are so significant that I’d like to state my conclusion up front: For the foreseeable future, GPT-4 cannot be used in a medical setting without direct human supervision.”

Hazard agrees.

“Garbage in, garbage out,” he said. “You have to be a purveyor of the information that comes out. If you blindly follow it, bad things will certainly happen.”

What message do you have for people worried about the use of AI in their medical care?

“Don’t fear it, because it’s been here for quite some time,” Hazard said. “Your physicians have been successfully utilizing it to help in your care for years, if not decades. In no way, shape or form are we going on vacation and leaving Chat on call. There has to be a very systematic approach to using this in a smart way.”

The Medical Minute is a weekly health news feature produced by Penn State Health. Articles feature the expertise of faculty, physicians and staff, and are designed to offer timely, relevant health information of interest to a broad audience.