Legal | April 16, 2026

Why the Future of Legal AI Depends on Trust, Authoritative Content, and Connected Workflows

By Viktor von Essen, CEO, Libra by Wolters Kluwer

Legal AI is moving from experimentation into everyday legal work. Viktor von Essen, CEO of Libra by Wolters Kluwer, explains why trust, high‑quality content, and integrated workflows are now essential to delivering value at scale.



Over the past two years, legal AI has progressed rapidly. What began as experimentation with isolated use cases is now becoming part of daily legal work across law firms and corporate legal departments. This shift brings opportunity — but also responsibility.

Legal professionals are no longer asking whether AI is capable in principle. They are asking whether it can be relied on in practice. Can it be used consistently across teams and jurisdictions? Can it be embedded into established workflows without increasing risk? And can it support professional judgment rather than undermine it?

These questions framed a recent keynote I delivered at the Legal Geek conference in Amsterdam. But they are far broader than any single event. They reflect a structural change in how the legal market now evaluates AI.

Moving beyond experimentation
Once AI is expected to operate in real legal contexts, the standards change. Speed and novelty are no longer sufficient. Accuracy, consistency, explainability, and accountability become foundational requirements.

This marks the end of a phase in which legal AI could afford to be disconnected or experimental. Tools that operate in isolation, without regard for professional context, struggle to meet the expectations of legal practitioners who bear responsibility for outcomes. Mature legal AI must therefore be designed for operational use from the outset, not retrofitted after the fact.

Why trusted legal content is central
Legal AI cannot be separated from legal content. Lawyers work with statutes, case law, commentary, and expert analysis. When AI systems are used to support research, drafting, or review, their output is only as reliable as the sources it draws on.

Without authoritative, current, and professionally curated content, AI introduces uncertainty at precisely the moment when confidence matters most. For a profession grounded in accountability and accuracy, this is a decisive issue.

Trusted legal content is therefore not an enhancement to AI capability. It is a prerequisite for responsible use.

From tools to connected ecosystems
Many legal teams work with a growing number of specialized tools, each solving a narrow task. Over time, this fragmentation increases complexity, duplicates effort, and disrupts workflows.

Legal AI offers an opportunity to reverse this trend — but only if it is designed as part of a connected ecosystem. Research, drafting, review, and collaboration should flow together within a single environment that reflects how legal professionals really work.

When AI is integrated into such workflows, its value is amplified. It reduces friction rather than adding to it, and it supports end‑to‑end legal work instead of optimizing isolated steps.

A profession‑led approach to innovation
Sustainable progress in legal AI does not come from adopting the newest capability as quickly as possible. It comes from aligning technology with the realities of legal practice: professional standards, trusted sources, and established ways of working.

As AI becomes more deeply embedded in legal work, the challenge will not be whether it can do more, but whether it can do better. Better aligned with legal reasoning, better supported by authoritative content, and better integrated into professional workflows.
