Artificial Intelligence Conference Examines Impacts on Health care, Research, Education
Image by Evan Lewis
The fears and benefits of using artificial intelligence (AI) in health care research, education and clinical care took center stage during the inaugural Innovations in Artificial Intelligence Conference at the University of Arkansas for Medical Sciences (UAMS).
More than 200 UAMS faculty, staff and employees attended the Oct. 25 event in the Jackson T. Stephens Spine & Neurosciences Institute’s Fred Smith Auditorium; the event was also livestreamed to a virtual audience of 80.
Carole Tucker, PT, Ph.D., associate dean of research at the University of Texas School of Health Professions, emphasized in her presentation, “Artificial Intelligence in Health Care, Practice & Research,” that AI “is not scary.”
“It’s just complex connected math and logic models,” she said, adding, “the magic emerges” when it is used in combination with data sources.
Tucker, chair and professor of the Department of Physical Therapy & Rehabilitation Services at UT’s Medical Branch in Galveston, said AI originated in 1951 with Marvin Minsky, a computer scientist who died in 2016. He co-founded the Massachusetts Institute of Technology’s AI laboratory and built the first randomly wired neural network learning machine.
“Somebody back then was like, well, if we have these little silicon bits that we turn on and off that can be combined to do basic electrical engineering circuits, if we take thousands of them and put them in parallel, then it’s not human intelligence, it’s artificial intelligence,” she said.
“Artificial intelligence is actually layers on logic models,” said the former electrical engineer, comparing its operation to that of transistors.
“I can tell you,” she said, “This whole concept of it taking over the world has existed for 75 years. Hasn’t happened yet.”
Tucker acknowledged that the recent popularity of AI can cause disruptions.
“We think about disruption — disruption in society, disruption in health care, disruption in our personal lives,” she said. When some byproducts of the COVID-19 pandemic are included, she said, “within the last 10 years we’ve seen some amazing disruptions that really have made us think about the way we work.
“The neat thing about disruption in creativity and science,” she continued, “is that in order to move into new forms, into new knowledge, you almost have to be disruptive. If you’re complacent, if you’re comfortable, if you have everything you need, there’s not much motivation to move forward.”
Tucker compared the ways that scientists trained early AI models to make connections between input and output to the way a child learns or a person who has had a stroke relearns that something is “dangerous” or “hot” or “good.”
“It’s kind of that same process with these artificial neural networks,” she said, comparing the silicon bits to human neurons.
“Now, I don’t want to humanize this too much,” she said. “There are different ways that we can train it,” such as through the type of input and the ways AI is asked to categorize the information.
“Just in the last year, neuroscientists have published that by using reinforcement learning, their artificial neural networks now have a deeper understanding of some neuroscience concepts,” she said. “It’s a really interesting time. Again, this goes back to how it’s built. Eventually you have something that seems to have knowledge or think. And that’s what AI really is.”
Tucker said she considers machine learning to be a form of artificial intelligence, rather than a separate entity.
“It’s really programs with the ability to think and reason like humans,” she said, calling the artificial networks “just a mode, just a tool to do some statistical pattern.”
“What makes it a little more powerful is Big Data,” she said, referring to data that is characterized by high volume and that is created and collected at high velocity.
“At UTMB,” she said, “we input everything into our health care system every day. Then it’s up for the researchers to call up the next day. This is really high velocity if you think of seeing tens of thousands of patients in a given day across a health care system,” which includes not only patient data, but insurance and pharmacological data as well.
Tucker said high variability in the data and the quality of some data can be “a huge mess,” but that AI and machine learning “are the tools we’re applying to these data sets.”
“To me, that’s where we need to have control, and where we need to be careful,” Tucker said. “When you’re looking at large data sets, how many of you have collected data but for that one person that’s an outlier, and you have to have big discussions with a lot of people about why you’re going to toss the outlier?”
She said ChatGPT has been in use for two years now, but many people don’t realize that the source of its data was “everything on the World Wide Web since 2007, with the next largest component from web links on Reddit.com that had three or more upvotes.”
Tucker cautioned that “everything you put into ChatGPT is funneled back into the model,” which is why she’s careful which models she uses for research.
“I use Consensus for my research,” she said, referring to an AI-powered academic search engine. “If I ask it, it will give me the references so I can verify the sources of where it’s coming from.”
Tucker emphasized that there is a difference between data and AI.
“Data is just random things, not necessarily things that add value,” she said. “When we start to tie data together is when we get information. Then when we take that information and we aggregate it is when we get knowledge. Things start to come together. We start to see the patterns. Wisdom is interpretation, when I add my biases, my clinical interpretation, onto what is knowledge.”
By using AI, she said, “We’re not looking to do anything more than augment what we can do.”
Tucker also invited anyone who is hesitant about AI to keep in mind that “you have more data in your cell phone than the first Apollo rocket did.”
“AI is neither good nor evil,” she said. “It’s a technology. It depends on how we use it.”
Joining Tucker for a panel discussion that kicked off the day’s presentations were Carolyn Clancy, M.D., assistant undersecretary for health at the Veterans Health Administration, and Sean Gardner, scientific program manager at the National Center for Advancing Translational Sciences at the National Institutes of Health.
Clancy presented “Artificial Intelligence in Healthcare,” while Gardner’s talk was titled “Artificial Intelligence in Research.”
After the conference, each of the speakers met individually with key UAMS leaders to discuss ways in which UAMS can adopt AI to jump-start its joint research, clinical and educational missions.
“AI is definitely the future of health care,” said Wendy Ward, Ph.D., a professor in the UAMS College of Medicine and associate provost for faculty in the Department of Academic Affairs, which presented the conference. “We plan to have another conference next year.”