Recently, Alex Tyrrell talked with Francis Gorman from The Entropy Podcast about safe and transparent AI in highly regulated environments. Please listen to the podcast here or find a transcript of the conversation below.
A few soundbites: "We don't compromise on trust." and "Expert AI by experts for experts."
Alex oversees Wolters Kluwer’s AI Center of Excellence, which is focused on accelerating innovation across all of Wolters Kluwer’s divisions in the areas of generative AI, agentic AI, machine learning, and data analytics. Alex has extensive experience designing and delivering commercial-scale machine learning and analytics platforms and setting technology strategy for enterprise content management, digital transformation, and new product development.
The experts really help us break down complex problems. ‘Owning the outcome’ means you need to really understand what the intent is, what you are trying to achieve, what the purpose is, and what the risks are.
We feel it doesn't make a lot of sense to have AI engineers develop complex solutions in a clinical setting they really know nothing about, and then rely on simple benchmarks and statistical proxies as their guiding principles…
That really doesn’t reflect the complexities of the real world. The expert in the loop establishes the guardrails and the firm foundation, and makes sure that – as we innovate and really speed up those innovation cycles – we're doing it in a safe and trustworthy manner, and that we can provide that explainability and transparency along the way.