Corporate Health | November 24, 2025

How artificial intelligence is transforming healthcare

In this video, Alex Tyrrell, Head of Advanced Technology at Wolters Kluwer, talks to Michael Vizard of Techstrong Group about how artificial intelligence (AI) is transforming healthcare. He explains the unique challenges organizations now face when securing sensitive medical information in the age of shadow AI. Read a summary below or visit the Techstrong site for the full story.
Many programs are exceedingly helpful, but the guardrails sometimes come to an end. It's like having a guard dog that, when the robbers show up, shows them where the keys are. How do you see this?

When we start to use these technologies, we see that they have value in our daily lives, and I think that can cause people to let their guard down about what they put in there. So when you think about personally identifiable information or sensitive information, people may think: what's the harm, right?

What you don't realize is that the bad guys can connect the dots. They're looking for pieces of information that might be here and there, and if they can get access to them, it can be a real threat to the individual. What's the threat of someone knowing something about my cholesterol? Well, that's not really the end game. The end game for these actors is to empty your bank account, and that is deeply profound.

Alex Tyrrell

And that's not to say it isn't profound to have access to personal healthcare information. I think it's about raising awareness so people understand to be very careful in how they interact with these technologies, particularly with LLMs.

What you share, what you put into them: think about how it might come back to get you. And of course, as technologists and IT organizations, we have to focus on more than just the guardrails. We have to make sure that the LLMs don't hallucinate, and also that they cannot be exploited.

Do you think we might be able to use AI to help enforce the guardrails so we'll have a kind of layered framework in place?
We focus on having an expert in the loop when we look at a particular place where shadow IT can become a problem. There's great information out there, and these LLMs have brought vast amounts of information to patients and to clinicians.


You can see a GenAI tool potentially giving answers, insights, guidance, and recommendations fast, which is good: it cuts down on burnout and the administrative burden. But you don't have to forego the ability to verify that you have trusted information, which you may not get if you're just using some of these tools on the internet.

What is the one thing you see people doing today that makes you shake your head and go, "Folks, we should be a little bit smarter than that"?
I think there's such excitement around the technology that you have to be very careful when you want to get your answers quickly. It's all really neat, it can be a cool technology, and it can maybe help you with things at work, but make sure you're using tools your organization has approved.

And when I see tools on the internet, I'm not going to put my LinkedIn profile in there. There's a saying: what happens in Vegas stays in Vegas. LLMs have the same characteristic: they're so large, the information is so vast, and they're so expensive that the minute you accidentally tip your hand and share something with them, it's going to be very difficult to ever get it back out.

And the way the models evolve, generation on generation, that information could be in that fossil record virtually forever. GenAI has really created a conundrum: in the past, you could maybe extract information that you really should not have shared. That's almost impossible now with these models.

The other thing you need to be a little bit careful about: suddenly, the country of origin of the LLM matters.
We work with a number of partners to make sure that we're getting access to models that are trusted and verified. We know the provenance, we know exactly what we're getting, we know that we're deploying in trusted zones, and we know the policy around data retention and what is collected and what is not. It's really important to focus on that, because these models want to learn, and if you're not careful, if you're using a particular kind of shadow AI…

And I think the quote is: if you're not paying for the product, you ARE the product! Through your interactions, you're potentially feeding these models with what will become the next generation. People need to realize that these tools are potentially harvesting interactions to build the next version of that model. At Wolters Kluwer, we are very specifically focused on making sure that does not happen.

Wolters Kluwer - Alex Tyrrell
Head of Advanced Technology
Alex Tyrrell, PhD, serves as Head of Advanced Technology at Wolters Kluwer and Chief Technology Officer for Wolters Kluwer Health. He oversees the Wolters Kluwer AI Center of Excellence, which focuses on accelerating innovation across all Wolters Kluwer divisions in the areas of GenAI, agentic AI, machine learning, and data analytics.