Corporate · December 12, 2025

The best build-versus-buy strategy with modern AI

Alex Tyrrell, Head of Advanced Technology at Wolters Kluwer, talked to Pat Brans of CIO.com about the best strategy for build-versus-buy in modern AI deployments. As the technology matures beyond experimentation, the build-versus-buy question has returned with urgency, but the decision is harder than ever, Pat Brans writes. Unlike traditional software, agentic AI is not a single product. CIOs can no longer ask simply, “Do we build or do we buy?” Please see Alex Tyrrell’s contribution and the whole article, or read the summary below.

Customer-facing systems have special considerations regarding data privacy, trust, auditability, and explainability. Those become critical, especially when using third-party models. How do these governance needs influence the build-versus-buy decision for agentic AI?
You must be prepared to evaluate any third-party model from first principles. You cannot take the latest benchmarks as your source of truth. This involves creating LLM evaluations and formal rubrics that really pressure-test the model under realistic conditions capturing the complexity of your use case. It should also faithfully identify and mitigate any potential sources of bias or other risk. Subject matter experts are key to creating the right evaluations and rubrics.

Alex Tyrrell

When developing our UpToDate Expert AI solutions for clinical decision support using GenAI – which helps doctors provide patient care and improve outcomes – we introduced an Expert-in-the-loop approach to rigorously test and evaluate our solution.

UpToDate Expert AI is also grounded in our trusted and verified content, delivering a clinical GenAI solution that provides the transparency and trust needed for real world care decisions.

Now when it comes to privacy and security, you have to be careful with LLMs – especially third-party services that are marketed as ‘free’ or ‘easily accessible’ without proper governance or vetting by an organization.

When you use a GenAI solution, are you sure nothing is being harvested from your interactions? Are you sure that if you create a derived work, you actually own the IP? These are important things to consider when procuring GenAI and agentic solutions – due diligence and strong governance are critical. No harvesting of user behaviors should take place, clear data retention policies should be in place, and no training of models on your data should take place either.

If the agentic AI capability for customer interactions is being built in-house, what architectures or design patterns do you favor to ensure it remains maintainable and scalable?
You really want to look at the use cases. We rely heavily on model grounding with our verified expert content to prevent hallucinations and to ensure trust and explainability. With that in place, you can link straight to verified facts with attribution, and you can explain the reasoning and steps that led to a response or outcome.

These are key attributes for many of our use cases. So we focus a lot on vector databases, embeddings, and many of the artifacts that comprise retrieval-augmented generation (RAG). This is a common approach. We actually go well beyond RAG and introduce expert reasoning so that we can break down complex problems into steps and get much better outcomes.
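The grounding idea described above can be sketched in a few lines. This is a toy illustration, not Wolters Kluwer's implementation: the `embed` function below is a stand-in bag-of-words counter, whereas a real system would use a learned embedding model and a vector database.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: word counts. Real systems use learned embedding
    # models stored in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def grounded_prompt(query: str, corpus: list[str]) -> str:
    # Supply retrieved, attributed facts to the model so the answer can
    # cite verified content instead of hallucinating.
    facts = retrieve(query, corpus, k=2)
    context = "\n".join(f"[source {i + 1}] {f}" for i, f in enumerate(facts))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
```

The attribution markers (`[source N]`) are what let a downstream response link straight back to verified facts.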

That brings me to the next point: when solving complex problems, you often need to orchestrate multiple agents. The Model Context Protocol (MCP) is fast becoming the standard for incorporating traditional sources of information – think of connecting databases into an agent – while the Agent-to-Agent (A2A) protocol is useful for communication and orchestration between multiple agents.
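A minimal sketch of the orchestration pattern, under stated assumptions: the "agents" here are plain functions and the orchestrator a simple pipeline, standing in for what MCP (tool and data access) and A2A (inter-agent messaging) would provide in a production framework. The agent names are hypothetical.

```python
from typing import Callable

# An agent is anything that takes a task string and returns a result.
Agent = Callable[[str], str]

def research_agent(task: str) -> str:
    # Hypothetical agent: would gather facts via MCP-connected sources.
    return f"facts for: {task}"

def drafting_agent(task: str) -> str:
    # Hypothetical agent: would compose an answer from gathered facts.
    return f"draft based on ({task})"

def orchestrate(question: str, agents: list[Agent]) -> str:
    # Pipe each agent's output into the next, mimicking the step-by-step
    # decomposition of a complex problem across cooperating agents.
    result = question
    for agent in agents:
        result = agent(result)
    return result
```

Usage: `orchestrate("summarize the case", [research_agent, drafting_agent])` runs the research step first and hands its output to the drafting step.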

Agentic AI is rapidly evolving. How might the build-versus-buy calculus shift over the next two to three years for customer-facing use cases? For example, will modularization, open source, or ecosystem platforms tip the scale towards building more or buying more?
You'll always measure build versus buy. But a third leg of the stool is emerging: strategic partnerships, as well as ecosystems where agents work together to solve more complex and impactful problems.

In the ecosystem model, you don't need to solve every problem. You focus on what you're good at and you look for strategic partners that add a synergistic component. Rather than simply buy, you can strategically partner – and protocols like MCP and A2A will make this much easier and more impactful in the future.

So when evaluating a vendor-provided agentic AI system for customer interactions, which technical risks or constraints do you scrutinize most?
Two big factors are latency and cost. With basic LLMs, we all got used to the idea of Time To First Token (TTFT) and the expectation of no noticeable latency. When it comes to integration into a SaaS offering directly in a workflow – which is where agentic AI shows real promise – customers may be used to a certain transactional experience, and suddenly waiting for a result could lead to a bad outcome for the user. Understanding the source of latency in a vendor-supplied agentic solution may also be difficult.

The more you understand about the agents and the LLMs being used, the better. Can you reserve model capacity? Can you enable global regional routing? Is it possible to choose smaller or more efficient models within a model family?
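TTFT is straightforward to measure against any streaming endpoint. A minimal sketch, assuming a generator that yields tokens as they arrive; `fake_stream` is a stand-in for a real streaming client, with sleeps simulating network and inference delay.

```python
import time
from typing import Iterator

def fake_stream() -> Iterator[str]:
    # Stand-in for a streaming LLM response: the first token arrives
    # only after the model has started generating.
    time.sleep(0.05)            # simulated pre-generation delay
    yield "Hello"
    for tok in [",", " world"]:
        time.sleep(0.01)        # simulated inter-token delay
        yield tok

def time_to_first_token(stream: Iterator[str]) -> float:
    # TTFT: the gap between issuing the request and the first token.
    start = time.monotonic()
    next(stream)
    return time.monotonic() - start

ttft = time_to_first_token(fake_stream())
```

Tracking this number per vendor and per model size is one concrete way to compare the "smaller or more efficient models" question above.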

Second is cost. What may look like a simple chat or question-answering-style interface could involve complex inferencing, especially when performing model grounding – the process of supplying facts to an LLM to prevent hallucination.

Then there are context windows and token counts, especially with in-context learning. In other words, if you provide examples of the expected model behavior on the fly, these prompts can get very large and costly. If you're used to costing out IT spend based on CPU utilization, storage, and IOPS (Input/Output Operations Per Second), it's a different calculus, and LLMs and agentic AI can give you real sticker shock.

Wolters Kluwer - Alex Tyrrell
Head of Advanced Technology
Alex Tyrrell, PhD, serves as Head of Advanced Technology at Wolters Kluwer and Chief Technology Officer for Wolters Kluwer Health. He oversees the Wolters Kluwer AI Center of Excellence, focused on accelerating innovation across all Wolters Kluwer divisions in the areas of GenAI, agentic AI, machine learning, and data analytics.