Legal | April 08, 2026

Legal Leaders Exchange - Podcast episode 35

AI is here to stay: Turning adoption into a future-ready legal practice

The conversation in this episode examines how legal organizations are navigating the rapid adoption of artificial intelligence, grounded in findings from the 2026 Future Ready Lawyer Survey Report. The discussion centers on the reality that a large majority of legal professionals now use AI in some capacity, while trust, compliance, and governance concerns remain significant barriers to deeper adoption. Participants emphasize that responsible AI use requires transparency around data handling, compliance with privacy and regulatory requirements, and a clear understanding of how AI systems generate outputs.

The panel also explores the practical realities of implementation, including the risks of over‑automation, the importance of keeping humans accountable for outcomes, and the growing interest in agent‑based AI systems. The speakers stress the need for clearly defined use cases, clean data, benchmarking, and workflow‑specific tools, and highlight how AI can shift legal work away from routine tasks toward higher‑value analysis and decision‑making.

Listen to hear our speakers discuss:

  • Trust and compliance concerns surrounding the use of AI-driven technology in a legal setting.
  • The importance of keeping a human in the loop to review AI outputs.
  • Why AI agents are best suited to specific tasks rather than entire legal roles or end‑to‑end workflows.
  • How top‑down pressure to use AI can stall adoption if not aligned with organizational strategy.
  • The growing value of technological fluency in legal teams. 

Be sure to follow Legal Leaders Exchange on:

Apple Podcasts | Spotify | Audible | iHeartRadio

Transcript

Greg Corombos

Hello, and welcome back to Legal Leaders Exchange. In this episode, Jen McIver explores the realities of responsible AI adoption in legal teams. As AI integration continues to grow, legal professionals face new challenges in balancing innovation with responsibility. Join Jen as she sits down with industry experts Vince Venturella, Christian Hartz, and Ciaran Flaherty as they discuss what it takes to implement AI safely and effectively. This discussion offers clear strategies to help your organization adopt AI tools with confidence. Now, let’s hand it over to Jen.

Jen McIver

Thank you, Greg. I am so excited for today’s episode of the Legal Leaders Exchange. We’re really going to focus on the 2026 Future Ready Lawyer Report. And I mean, I can’t get past the fact that 92% of legal professionals now use AI daily. And we’re going to unpack that today, because what does that really mean for the business and the practice of law? To help me do so, I have Vince Venturella, Christian Hartz, and Ciaran Flaherty. We’re just going to dive in, draw on the experience of the three of you, and really talk about what it means when we’re using AI at that level now. I of course start with the level of AI, 92%. But really, when you look at the report, we still have a few roadblocks, especially when it comes to adoption as well as compliance and trust. And when I say compliance and trust, we’re talking ethics, privacy, governance. This survey highlighted that 39% of legal professionals still have ethical concerns, maybe data privacy concerns. 46% rank data privacy compliance as their highest information security challenge. But I do think it really does come down to trust. And so Christian, I want to start with you today and talk about the role of technology providers. We’re talking about how we can assist legal teams to remove some of these barriers. What are you and your team doing that can really move this forward and maybe, hopefully, bring down the percentages in our next report?

Christian Hartz

One of the things that you really need to see is that there are legal provisions everywhere. For legal professionals in Germany, for example, there is even a provision in the criminal code telling them that they shouldn’t send their data anywhere. And that really frightens them. So it’s not just the GDPR thingy that everyone is aware of; there’s even the criminal code, which tells you: hey, sorry, but if you send your data somewhere it doesn’t belong, then you really have an issue. And those are the things they need to deal with. And yeah, you can make it better by giving them options, by explaining where the data is hosted, by making sure that they know what happens when they use your tools. That’s all the basic stuff. So you really need to make sure that everyone feels comfortable. But also, we need to see how we can help governments make sure they’re ready for AI too, so that, together with us as companies, they can give the trust back to the users and people really can start.

Jen McIver

Christian, I love that you called it the GDPR thingy, because I feel like that’s what I say across the pond. So hearing it from both sides now, I really like that a lot. You know, one of the things that really goes into the trust is, I think, that as vendors, we’re doing a great job explaining where the data comes from, and we’ll talk a little bit more about that, or where the data may be going. But also, I think the idea of hallucinations is still there. So Vince, do you want to talk a little bit about what you are seeing from a trust perspective? And especially in the media?

Vince Venturella

Absolutely. I mean, look, let’s be real; not a day, or a week at least, I should say, goes by where we don’t see some fun article about how somebody practicing law, usually for themselves, usually without expertise, put something in front of a judge or turned something in that hallucinated some citations on a brief or something like that. And the reality is that those all make big news and they’re splashy; we see them. What we don’t see is the thousands and thousands of lawyers who are using it successfully every day and mitigating any kind of hallucinations because they’re using well-trained tools with good prompts. They know what they’re doing, and most importantly, they’re keeping a human in the loop who’s actually reviewing that work. Frankly, the same way you would as a senior attorney. If you had a first-year attorney prepare something for you, you would check their citations and make sure that everything was correct. So the same rules apply here, right? It’s not magic, it’s not sorcery. It doesn’t simply know everything perfectly all the time, and you still need to bring your legal expertise to the table. But it can expedite a lot of those tasks for you, especially when it comes to that initial gathering, as long as you’re still the arbiter and, realistically, have the final say over the algorithm.

Jen McIver

I think that’s great. The human in the loop, I think, is important. You know, a lot of folks, I hear it all the time, say, “I want the easy button.” AI is just going to take care of it. And I think that we often think that way; it’s just going to, almost magically, take care of things. You might even think about something like self-driving cars, right? Magically, they’re doing that. And Ciaran, I know you’ve really looked at how, as legal leaders push their teams to move faster with AI, there are governance choices that might get deferred, or tensions that may show up later. What are your thoughts on just that human in the loop and accountability?

Ciaran Flaherty

Yeah, I think, to Vince’s point as well, there’s a tremendous reporting bias toward when it does go wrong. And frankly, no one wants to be that story. Whether it’s, Christian, in Europe, where there are legitimate criminal and regulatory concerns, no one wants to be that guy, right? And so what it comes down to is doing this, I think really importantly, in a way that you understand. And that’s where it comes down to trusting not just an answer that you get, but the architecture that delivers you that answer as well: using purpose-built tools that have an explainable architecture and an explainable place where your data is.

You know, naturally, if you’re in Europe, it’s really, really important to have that visibility. But then also in the outputs, to Vince’s point, you wouldn’t trust a decision or a recommendation from a really junior member of the team without evidence. And so one of the biggest points of tool design, of architecture, of audit generally around these tools, is not just taking answers, but asking for how it got there and the evidence associated with that. So we always use citations. I think lawyers are very familiar with the idea of citation, but also checking, of course, that those citations actually exist. And so checking it just the way you would a really enthusiastic junior’s work is very much the way in which we see that adoption curve, right? That’s how you build trust: understanding that they’re using appropriate processes and that you’re aligning with the way in which they’re using those sources as well.

Jen McIver

So AI is not quite a Waymo yet, right? We’re not ready for the fully self-driving cars or the fully self-driving legal teams.

Ciaran Flaherty

Yeah, I think lawyers don’t want to be in the situation that self-driving cars are, where it becomes a kind of finger pointing when they do hit something as to whose fault it is. The reality is if you are negotiating on behalf, if you’re an authorized signatory for an organization, it is your responsibility. It is your job as the lawyer to understand that risk much more vividly than if you’re the driver behind the wheel of a Waymo.

Jen McIver

I know that responsible AI is really important for us here at Wolters Kluwer. And so I’m kind of curious, Vince or Christian, either of you, feel free to even speak at the same time, we’ll figure it out. But I’m kind of curious, as we’re developing AI and as you’re talking to clients about AI, whether it’s the law firms or in-house legal: what are we doing to convey how Wolters Kluwer is really looking at the data concerns, the privacy concerns?

Vince Venturella

I’ll take it first. This is simply an active area of discussion. And by the by, this isn’t the first time this has happened. If we think back to when we first started doing SaaS solutions, you’d have these big review processes and internal governance boards around SaaS-based products, and that’s exactly what I see now around the AI products, right? Because it’s a new tech. People have a lot of questions. They need to see those extra layers.

Now, what we at Wolters Kluwer have done, obviously, is put our responsible AI framework in place. That means we use a ring-fenced version of these frontier models and things like that, so your data is not going out into the world where you can’t control it. Very basic stuff like that. But at the same time, we also work to make sure that things are auditable and understandable. You know where your data is going, and we can show you where your data is. We can remove your data.

So, we mentioned GDPR earlier; we comply with those sorts of requirements. We also use, frankly, a lot of the same security controls that we already have in our trusted platforms to guide the AI. So you can’t use the AI to get access to information you couldn’t get access to in one of our systems. The same rules of the road apply.

Your story of the Waymo is very important. Yes, it’s an automated thing driving, but the Waymo still has to stop at red lights and follow the speed limit, right? So, we already have the rules of the road. We just make sure that the AI follows those existing rules.

Christian Hartz

And additionally, you also have a lot of benchmarking for doing those things. Just imagine you’re building a solution where, within legal or regulatory, you have twelve-plus different countries with their own solutions, and you need to measure that. So that also means you have a lot of different ways to measure. You have simple questions, you have complex questions, you have workflows, you have reviews, and all of those things. And you need to bring all of that together to really make sure that the quality works, that it works for all of the countries in a similar way.

And also keep in mind that some languages aren’t well represented in the language models. Just imagine Hungarian, which is a really nice language, but it represents only a very, very tiny fraction of all the internet training data the LLMs were trained on. Still, we need to make sure that the model understands what the user really wanted and that we can still deliver the best quality. That’s why making sure the quality is as expected is of such high importance to us.

Jen McIver

And I think that really leads, Christian, into quality. We’re talking about transparency, but that leads us into trust, and that leads us into the ability to really work with legal teams, both law firms and in-house legal teams, on taking that trust and moving it forward to adoption, and not just adoption of enterprise tools used daily. I’ll be honest, I question the 92% figure. I think it could be a Copilot, it could be a Claude, it could be anything, a Gemini, or it could be legal tools.

And I really do want to take us into adoption, embedded legal workflows, and a little more consistency on that. Ciaran, adoption sometimes stalls. And I’d love to get into whether, when adoption of AI stalls, that’s a technology issue, a process issue, or a people issue. What are your thoughts on that?

Ciaran Flaherty

I think it can be both. I think that’s where approaching these situations in a transparent and strategic way is really important. Because what we saw, probably from 2024 onward, was this big top-down instruction of “you have to do AI,” right? And I think that led to many of the usages you’ve spoken about there. The Copilot usages. The “Hey, I threw something at ChatGPT and it gave me this answer. I am now one of the 92% using AI.”

But that’s not necessarily how to deliver success. And I don’t think it’s those tools’ fault if they’re not appropriate for that exercise. It’s simply not what they’re designed for. And that’s where it goes back to Christian’s point about the importance of benchmarking and testing. Because if you’re utilizing a platform or models that haven’t been evaluated for this type of work or these workflows, you’re going to have lower trust. You’re going to have lower adoption. And crucially, that lower adoption becomes a self-reinforcing cycle, because the output isn’t of a sufficiently high quality.

Jen McIver

And I think you brought this up earlier. It’s throwing AI at things, using it in different ways. It’s kind of like AI for the sake of AI. That’s something that I’ve heard a lot, even internally. It’s like, we’ve got to do more AI. And I’m like, but why are we doing the AI? And when we talked earlier, you mentioned it’s kind of like the enthusiastic kid who wants to help with everything. I thought that was a really great analogy, because the kid can definitely help with everything, but should the kid be helping with everything? I think that’s a really good question, and you did a really great job talking about how you have to figure out where those tasks are more meaningful for the kid, or the AI.

Ciaran Flaherty

Yeah, 100%. The way to achieve this is not to do AI and produce something at the end of it. It’s to identify specific tasks, build agents, validate those agents, benchmark those agents appropriately for that task, and then to build orchestrations between them so that they’re able to speak together. The other risk you can have is that you end up with this stable of tools, and every toddler’s got their own toy in the sandbox, and they’re not talking to each other. And when one person tries to take the digger off the other, then suddenly they’re in an argument. And I don’t think that’s the situation we want either. And so establishing a unified platform-based approach with specialized agents designed to speak to each other inside of it is very much the approach we would naturally recommend.

Jen McIver

I think you said the magic word there, agents. That’s what I hear everywhere. And so Christian, just talking about agents and talking about being an attorney, and by the way, for those that don’t know, Christian is an attorney by trade as well, what are your thoughts on just getting in there and letting an agent do law?

Christian Hartz

Agents are awesome, but not in every single use case. And that’s what we need to bring it back to. Because I think you can just use an agent to do research, whatever you want. There are those new shiny things where you can even have your agent running agentic loops forever and ever and ever, and you will get something out. Would it be great? Maybe. Will it be completely wrong? Maybe that, too.

And that’s the thing you don’t want, because that maybe of getting it right and maybe of getting it wrong is nothing I as a lawyer would like to see in my daily practice. That’s also where the WK tagline, “when you have to be right,” comes into play. I don’t want to be partially right and partially wrong. I need to make sure the agent is working exactly as I need it, and that’s where WK knowledge comes into play, where you have workflows, where you have content that’s really authoritative. You can bring all of that together in pieces that work. It’s not necessarily a fully autonomous agentic loop. It’s the combination of the workflow knowledge, bringing in the expertise and the knowledge of how to use which piece of content in which stage of your research. You can do these agentic things, but it doesn’t all need to be agentic. It needs to be in balance and lead to the best outcomes, not just done for the purpose of doing agentic AI. It’s a nice thing to be able to say about your product, but in the end, if it isn’t better, or if it’s even worse, it doesn’t help.

Ciaran Flaherty

I think this is where we often talk about enterprise amnesia, or agent amnesia, between the two components. If you ask it to do the workflow without the why and without the sourcing to do it appropriately, it absolutely will produce an answer, but it’s not going to produce the answer that you’d want. You can think of it almost like outsourcing an exercise. If you ask a consultant to go and do an exercise, but you just send them one email and don’t give them the explanation of why we’re doing this, what we’ve done before, and here are the sources you should use, they’ll produce an output. It will be in line with some idea of standards, but it won’t necessarily be what you’re looking for as an organization.

Jen McIver

I think that’s important, Ciaran. And Vince, coming to you, and I know we’ve had this discussion before: can you name one early adoption mistake that organizations consistently make, especially when it comes to agentic AI and the idea of wanting agentic to do everything? What do you think that is?

Vince Venturella

Yeah, so the analogy I always use, or that I’ve heard, and I didn’t make this up, is that your data center, which is where all this AI comes from, is like a building full of the most genius six-year-olds you’ve ever met in your life, right? Okay, they’re really, really smart, but they’re also six-year-olds. And you wouldn’t trust any six-year-old, no matter how smart, to go and do everything for you. And I think the problem most people run into when they’re picking these agent workflows, and I’ll list a couple in rapid fire, actually: one, not making sure the underlying data, structure, or systems you want the agent to talk to are actually ready for AI. If it’s messy, if it’s dirty, if it’s in seven different systems that don’t talk to each other, well, guess what? The agents won’t be successful, and your AI project won’t be successful at the same time. Two is that you try to bite off more than you can chew. This is just a classic thing we’ve seen with all software challenges. It seems really exciting, we’ve got this new tool, let’s throw it at everything, right? But the reality is that the people who’ve been the most successful find an existing use case, some workflow, some challenge, some real problem statement where the data is accessible, clean, and ready. Then they take that, map the process, and figure out where the opportunity tasks are. Agents are really good at doing tasks. They’re really bad at doing jobs, and that is a really important difference. Most of us have a job that consists of lots of tasks. The agent can probably do a good number of those, not all of them. Figure out which ones the agent actually can take, and then you do the rest. That’s where you stay in the loop. That’s my advice.

Jen McIver

And I think that really gets back to what Christian was saying earlier about mapping, and Ciaran, I think you said it too, about really knowing where the model needs to step in or where it may not, and then benchmarking to make sure those tasks are doing what they should be doing, or at least what you expect. And I think that really goes to something else that we noticed in the Future Ready Lawyer survey results: leadership expectations show up as a trend. I read leadership expectations as the expectation to use AI. And I know we mentioned this before, and I feel like sometimes I’m on a recording loop about using AI for the sake of AI. But I think leadership really does expect that legal teams, again both law firms and in-house legal teams, are leveraging it and instantly getting ROI. Like instantly: we’re going to reduce our staffing or we’re going to get more revenue. And we do definitely have some statistics saying we’re able to do either of those. But I am kind of wondering, and Christian, maybe I’ll go to you on this: how do you balance the leadership pressure for a rapid rollout of AI against the realities of readiness? Vince already said data is a piece of it, but can you expand on that a little for me?

Christian Hartz

Especially when I talk to law firms, there are discussions about where to start. The urge to use AI is more or less everywhere. Leadership also has the mandate to ask for AI, but the question is whether their own processes are ready for that. There are some isolated usages of AI throughout the law firms. Some are shadow AI, which just happened because someone tried something out, and no one is aware of it. Those are even worse than anything else, because you don’t know that they exist. But there are also other parts where things are tried out in a more planned way. Still, there is often no really good plan for reaching a final conclusion on how to really use it as a process within the law firm, and that’s exactly where I see the responsibility of the leadership team. They need to make sure there is a structured process for how they want to tackle it. It’s not just, oh okay, there is Copilot; let’s just use Copilot to reformulate emails. That’s a nice thing, but that’s not a strategy. That’s just one tool that can be used. We need to get away from a scattered collection of tools, or at least build a tool landscape so that we can then start connecting the tools. Just having tools for the sake of having AI in a tool, that’s what we really need to get away from, and leadership is responsible for making sure that this is stopped.

Ciaran Flaherty

I completely agree. There’s this kind of AI blur that seems to go on around use cases, where because it’s AI, it’s exciting and useful. But that’s not necessarily actually a meaningful contribution, right? And so I do think there’s a very important conversation that needs to happen around all of these use cases, which is: remove the AI. Is this something that you would want to happen and feel comfortable happening, or not? Even if it’s not an AI use case, it’s like, should the sales team be negotiating a contract? Okay, maybe under a certain value, maybe not over it. There is context there that’s lost if you just say, hey, we’re going to use AI to do X.

Jen McIver

I think, Vince, I’d love to hear your perspective on the tension between moving faster and managing risk, and really, how can vendors help? And Ciaran, definitely chime in with Vince as well. But how can vendors help with that and figure that out?

Vince Venturella

So, one of the ways we’re looking at helping with this is with specialized tools where we’ve put our own training on top, what we would think of as domain language models and/or prompts. Specialized tools for specialized workflows where we know we can be successful. There are things we’re working on right now, say, like invoice review within the corporate legal world. There’s a world where you’ve got attorneys spending a lot of time doing that kind of thing. That’s a task an agent can be very successful at, for example. But I do want to say, as with any of these things, it’s important to remember this is a new technology, and there is a learning curve here. So no matter what the specific task is, the best way we can help you be successful is, one, giving you these specialized tools that we’ve created to do specific tasks where you have higher trust. They have a specific mandate. They can’t go off on their own. But at the same time, it’s also important to remember that no matter what tools you give your people, there has to be some period where you’re still working with them. You don’t just flip the switch and suddenly, oh, guess what, productivity increases 8X overnight. No, people are going to learn this the same as with any other tool set. I don’t think we can expect that all of the gains suddenly happen immediately. It will still take time for your people to learn how to get the best use out of the tools.

Ciaran Flaherty

I think that’s where we have a tremendous advantage as vendors and as organizations that sit across the deployments of our technology in a specialist space. If you ask someone at a frontier model provider, there’s only a vague idea of how those technologies might be applied in our space. But if you live and breathe that deployment every day, the expertise is not just in fine-tuning a model for the area; it’s in how you deploy those technologies as well. One of our biggest assets as a business is not just that, yes, we have a tool that performs something very well. For example, yes, you can drop in ten thousand contracts and have AI read them and pull out a thousand data points. It’s: what does that empower your organization to do? Because it’s not just that there is a leadership expectation on lawyers to use AI. It’s that lawyers now, unlike at really any other point in modern in-house history, can have real quantified data on: hey, how often are we deviating? How often are we going against our standards? Should we be evolving our standards, not just from a gut-feeling perspective, but with these tools actually empowering you to do that? That’s a complete shift in the way that role works. And it’s not something that happens just when a lawyer is thinking about how to do today’s job differently. It can also come from the vendor: hey, this is what this technology can empower as well. That’s where I think it’s really exciting to be part of that journey. I think you can also set up customer networks and peer networks and things like that to understand not just how you think your technology should be used, but how your customers think your technology should be used, and share those learnings as well.

Jen McIver

I agree with you on that, and I find that really interesting though, when you talk about the attorney and the expectations. I know the Future Ready Lawyer survey data talked about the demand for new skills and that 70% of legal professionals view technological expertise as highly important when recruiting new talent. And so, I want to throw it to you, Christian, being the attorney turned legal engineer in the AI world. How much do you think attorneys are going to need to know from a technology perspective in AI? How deep do you think that that expertise needs to go?

Christian Hartz

I got a similar question when I joined WK about seven years ago, because it was the first time my title was legal engineer. And the discussion was, do lawyers now need to code? I think it has changed a bit. The question now might be, do lawyers have to vibe code? A different type of coding, but still a type of coding. I guess the answer is still similar: not everyone needs to, but you need at least a basic understanding, and in some cases a more in-depth understanding, of how law works, how technology works, and the combination of the two. Seven years ago, all of that was called legal engineering. Meanwhile, it has become a job family with a lot of different types of legal engineers. And this is exactly what we currently have to look into. Some legal professionals in the law firms do need that knowledge, because they are at the forefront of implementing processes with AI in their daily practice. But not everyone needs to know it. So I think you have to have a strong organization with a very diverse team and very diverse roles, which really makes it possible to thrive together as one big team.

Jen McIver

I like that. I think the reality is there are a lot of legal departments out there, and a lot of law firms, that don’t have the capability or even the ability to staff at that level. So Vince, I am curious: now that technical fluency is kind of table stakes, what do you think will differentiate a high-performing team that might not have the ability to have a legal engineer or the like on their team?

Vince Venturella

Well, a lot of these tools are going to have a very democratizing effect. One important thing to understand is that when these tools are used correctly, when you have the right specific tools in place to handle these specific tasks, building on everything we’ve talked about thus far, it allows people who maybe don’t have the big resources, who aren’t the biggest firms or the biggest departments, to suddenly have access to roles, knowledge, toolsets, and capabilities that they otherwise would never have gotten hold of. So what I really see in the long term is that a lot of these smaller departments are suddenly going to be able to simply do a lot more. Yes, you could have some of your folks in-house vibe code some simple tools for your office or your department. But what it also really means is that a lot of those simpler or rote tasks, even if they’re value-add, are just going to go away. And I think where that really transforms things is that the whole industry gets to focus a lot more on true value-add tasks. So if you’re a law firm, you suddenly can do a lot more value-add for your clients. Not only do more actual business, but a lot more real, deep value-add: spending time treating each matter, each case, each whatever as its own individual special snowflake, where you actually take the time to say, hey, can we employ some alternate strategies here, resolve this quicker, or use something we might not have had the time to explore before because we had to do all the grunt work. When you look at something like that, especially when you are a smaller law firm, that’s going to be very powerful for delivering a much higher level of service.

Ciaran Flaherty

Yeah, I think that opportunity to pivot to high value-add tasks is something that we get really excited about. For me personally, I went to law school and had the opportunity to go and be a summer associate at a law firm and experience what that junior lawyer work was like. And the really interesting bits are a very small proportion of what junior lawyer work has looked like for a long time. And so being able to pivot that into actually being part of meaningful decisions, and as a law firm, being able to offer much more meaningful insight, is really valuable. I do think we need to think about the allocation of those resources as well. There’s always an interesting element of: do you want your general counsel being a vibe coder? Or do you want your general counsel being the expert in making decisions for your organization, using these tools to empower them with even more data to do so, and spending their time on that? And so there is always that push and pull in really small teams. Someone needs to get those tools in. I think that’s where we as vendors can really be a great bridge. But the skill of law and the practice of law has this really exciting trajectory toward being much more of a value-add, business guidance facility, compared with historically, where maybe we’ve often been seen as a facilitator, or at times a barrier, to getting stuff done.

Jen McIver

I absolutely agree with that, and I think the question there, Ciaran, too, is how much do you want to devote to the technology as a lawyer versus the strategic work? I think it’s going to be interesting as we see new associates coming up through the ranks, those that are coming through law school who are really coming through in an AI-first type environment. And I think it’s going to take a few years for us to really see the differences, if there are any. But I do think that that’s somewhere the transformation of law is really going to hit. Speaking of transformation, AI has been moving rapidly. I mean, we’ve been on several podcasts together now in the last year and a half, and I feel like every time we take the conversation to the next level. This is the first time we’ve done vibe coding, so that’s a good new one for me. But I am curious, as we wrap up today’s episode, if you really had to look at the changes that happened in the last six months or so, and you really want to reach out to those that are listening today, whether that’s legal operations professionals or folks in a law firm: what’s something that you want our listeners to ponder or to really think about as they continue the AI journey? And Vince, I am going to start with you on that.

Vince Venturella

Oh, what an important question! The one thing to take away? Oh my gosh! Oh boy! There is so much. No, I mean look, the answer is simple. This landscape is changing very fast. Usually at least once a week, Anthropic or some company releases something that erases, oh I don’t know, a couple hundred billion dollars of value from somebody’s market cap somewhere, because some great new toolset has come out. It’s easy to get your head spun around trying to follow all of this. What I want people to walk away with is not, oh boy, I need to tap in and follow every little bit of news and every little thing like that. That’s not reality. What you need to understand is that this technology is advancing, and what you want to look at is: where are the tools going that are going to be useful to me in my space? And so, things like this, where you have a legal-focused AI podcast or something like that, or the companies that are working on this, like WK and so on. That’s honestly going to be your best bet to follow along and see what’s happening. You don’t need to move right now. You are not falling behind. That’s the thing I want them to walk away with. Because right now a lot of these tools honestly still take a lot of your own work and change and figuring out and unlocking and, oh, here’s how you set this up and do this. That’s not going to be super useful to you as an enterprise. It’s just not. You’re not behind, you’re on curve. My answer is: what you can do right now is look at your systems, look at your data, find the tasks that are clean, that are automatable, that are agent-friendly, and start a few discrete, small, targeted projects to bring that in. That will teach you the technology and set you on the right track. And it’ll show you what you need to do in your other areas in the future, as the technology continues to advance, for you to succeed.

Ciaran Flaherty

I think from my perspective, as you say, Vince, it’s thinking much more about the tools that are going to be useful in my space than about the tools themselves. I don’t think that following every notification is necessarily useful, and it’s certainly not necessary. But what I would encourage anyone thinking about this to consider is: what are the questions they’d really like to know the answers to? What have they always wondered, rather than what they’ve always had to know? I think legal is so often driven by, I have to know this to answer this question, to respond to this, etc., that we don’t have time to think about, wouldn’t it be great if I knew this? I would love to know this. And the way the tools and the availability of information are going, you’re actually going to be able to find out those answers, and that will make you better.

Christian Hartz

Yeah, and so we talked about technology, we talked about processes. We also should talk about the people, and that’s the thing I would like everyone to take with them. You are not alone in feeling the pressure, or fearing that the AI hype is here and that you need to speed up. Talk to your peers, talk to your colleagues. If you’re not a leader, talk to your leader, or vice versa, talk to your team, and make sure that everyone is on the same page, because this is a game we can only play together.

Jen McIver

I like that a lot. And I think something that all three of you wrapped together is “the art of the possible.” And Ciaran, I really did like your statement: think about the things that you’ve always wondered about, that you’ve just never had time to even think about, because you’re so busy on those tasks. And then, in turn, Vince, the tasks that are probably ripe and ready for agentic AI. I want to thank all three of you for joining today. It’s been amazing, Christian, Ciaran and Vince. Thank you, everybody who’s been listening to our episode. I do want to encourage you, if you haven’t done so already, to download the 2026 Future Ready Lawyer Survey Report. You can find that at wolterskluwer.com. Just take a look at that data and maybe think about the “art of the possible” and those things that you really haven’t had time to think about. And definitely look for some upcoming Future Ready Lawyer webinars; we have some coming up throughout the course of April and into May. Thank you again, Ciaran, thank you, Christian, thank you, Vince. And I think we can call this a wrap!

Greg Corombos

Thank you for joining us on this episode of Legal Leaders Exchange. As we explored today, integrating AI into your daily operations is a strategic process that requires the right balance of technology and human oversight. We extend a special thank you to our moderator, Jennifer McIver, and our guests, Vince Venturella, Christian Hartz and Ciaran Flaherty, for sharing their expertise and helping us navigate these complex topics. If you found this episode valuable, please subscribe to Legal Leaders Exchange on your favorite podcast platform. You can leave a review and share the show with your peers too. And please join us for future episodes.

Back To Top