Legal · May 7, 2026

Legaltech leaders stress change management and human assets as keys to effective AI scaling, drawing on data from the Wolters Kluwer 2026 Future Ready Lawyer Survey

By: Thomas Long

Key Takeaways

  • AI adoption is a change initiative, not just a tech project—success depends on trust, skills, and cultural alignment.
  • Responsible speed matters—organizations must move fast with clear governance, data security, and regulatory frameworks.
  • Human infrastructure is critical—training, continuous learning, and leadership example drive confident, effective AI use.

In a recent webinar, a panel of legal technology experts tackled the question of how leaders can successfully drive rapid adoption of artificial intelligence tools while accounting for regulatory pressure and geopolitical uncertainty. Titled “Building Confidence in an AI Era – Leading Global Enterprises Through Accelerated AI Adoption,” the webinar was the fourth in a series of seven panel discussions exploring the recently released Wolters Kluwer Legal & Regulatory 2026 Future Ready Lawyer Survey. Panelists focused on techniques and approaches for encouraging AI use by law firm employees and outside counsel engaged by corporate legal departments while fostering a culture of responsibility and safety.

The session was moderated by Joy Heath Rush, CEO of the International Legal Technology Association (ILTA). Panelists were Philipp Eder, lawyer and legal tech specialist; Kevin Cohn, General Manager of Brightflag; Marlene Gebauer, Practice Support Attorney at K&L Gates and co-host of the Geek in Review Podcast; Sergio Liscia, Vice President & General Manager of Legal Software at Wolters Kluwer Legal & Regulatory; and Cathy Wolfe, Executive Vice President & General Manager at Wolters Kluwer Corporate & Legal Compliance.

What does “confidence” in AI look like in legal practice today?

Rush kicked off the webinar by asking panelists to describe the meaning of “confidence” in the practice of law today with respect to the proliferation of AI, and whether there are differences in this respect between corporate legal departments and law firms.

According to Cohn, “The AI transformation is unlike any technology transformation that has come before it as it pertains to knowledge workers, because it's causing everyone to have to fundamentally rethink what are the skills that make them valuable and differentiated in the workforce.” He explained that generative AI tools can do a lot of things that previously could only be done by people with specialized knowledge. “That doesn't mean that humans are no longer relevant,” he said. “It doesn't mean that humans are no longer valuable, but it does require that you sort of double click on what has made you successful as a professional to date and which of those skills are durable and transferable into new ways of working in an AI native world.”

The AI transformation … is causing everyone to fundamentally rethink the skills that make them valuable and differentiated in the workforce.
Kevin Cohn

Gebauer said that it is key for organizations to gather the data necessary to know what AI tools are capable of and “us[e] that knowledge to identify where AI eliminates waste, where it produces impactful outcomes, where the results are repeatable and scalable.” She noted that the Future Ready Lawyer report shows that 62% of respondents report weekly time savings of 6% to 20%, indicating that AI has value. However, while 92% of legal professionals use AI daily, Gebauer pointed out that only 31% feel prepared with respect to information security and governance. This, she said, is an organizational preparedness problem.

Cathy Wolfe emphasized taking time to think about strategy and to identify opportunities and threats. She explained the importance of content expertise in developing tools and expressed the view that employee engagement in AI can be furthered by giving employees the opportunity to play with AI in a secure environment before using it for actual work product.

According to Sergio Liscia, “There must be transparency about the processes, as well as alignment among the parties with regard to how the use of AI is impacting departments across the organization.”

What challenges does scaling of AI pose to leadership?

Rush next noted that the Future Ready Lawyer report shows that legal organizations are rapidly embracing AI while facing rising complexity. Complicating factors include evolving regulations, conflicting outside counsel guidelines, and geopolitical uncertainty. She asked panelists to discuss the first leadership challenge that surfaces when trying to scale AI across a global enterprise.

Cohn started with a caveat: “I am at the risk of saying something that might be a little bit unpopular.” He elaborated, “I think that the most important thing is to clearly communicate as a leader that this is not optional. It's not experimentation. It's the expectation.” Cohn also said that the expectation of perfect results must be set aside. “There are situations in which you really do have to get it right,” he said. “And then there are many, many more such situations in which getting it mostly right is a perfectly fine starting point.”

Eder said that a focus on data was a critical leadership challenge for confident implementation of AI. First, because AI is only as good as the data it is built on, leadership must develop data literacy, he said. Second, “We must help employees redefine their value[, moving] away from repetitive tasks towards interpretation,” Eder explained. Third, organizations need to treat change management as a permanent discipline, with AI serving as a transformation lever rather than just a tool. “Every AI project is a change project,” Eder said. “Technology is the easy half. The difficult half is building trust, addressing fears, developing skills, redefining roles, and enabling leaders to guide the transition.”

Technology is the easy half. The difficult half is building trust, addressing fears, developing skills, redefining roles, and enabling leaders to guide the transition.
Philipp Eder

How can leaders innovate quickly while maintaining responsible, compliant, and secure AI practices?

Rush then turned the discussion to a balance that leaders need to strike as AI capabilities rapidly develop. She asked speakers how to reconcile the pressure to innovate quickly with the need to comply with varying regulations across jurisdictions and to keep systems secure.

Liscia addressed this topic first. In the early days of the AI transformation, “the mindset was, let's move fast and we will fix the issues later,” he said. “Now it's move fast, but move fast with clear guidelines.” He said that AI implementers want the results to be controlled, which means accepting a certain level of accuracy below 100%, “but you have to know it.” Liscia pointed to the survey’s finding that 40% of respondents expressed concerns related to ethics, regulation, data privacy, and cybersecurity. It is not true, he stated, that one must move fast without guidelines. “If you want to be really efficient, you want to have both,” Liscia said. “You want to move fast in a clear framework, you want to move fast with a clear governance.”

Wolfe spoke to the importance of getting closer to customers to understand their needs. Regarding product development, she said, “It's a much more collaborative process with AI. You need subject matter experts who are much, much more involved in the process. And then as you develop something, it's important to keep going back to the user at every stage, and make sure that you're truly headed to deliver the value you thought you were going to deliver or pivot if you need to.”

You want to move fast in a clear framework, you want to move fast with a clear governance.
Sergio Liscia

What operating or governance models are effective for managing enterprise AI?

Rush next mentioned that the survey report underlines the importance of continuous learning and strong compliance frameworks. She asked the panel which approaches were effective for carrying out these goals at an enterprise scale.

Gebauer noted that the survey showed that while only 31% of organizations feel prepared for AI, 92% of their people are already using it on a daily basis. “There’s definitely a readiness and learning gap, and there’s leadership responsibility to close that,” she said. She stressed the importance of having continuous learning woven into work, as well as conversations that normalize AI use. “Bans and restrictions don’t work,” she said. “Lawyers are going to use AI regardless. If the tools aren’t approved, there’s the possibility of using those things in an uncontrolled and dangerous way.”

According to Cohn, “There's a cohort of corporate legal departments that are taking what I would say is the wrong approach to this and putting a lot of restrictions in place in their guidelines.” This, he said, is not going to drive usage of AI. “What we encourage customers to do is to, number one, very clearly communicate that the expectation is, as I said earlier, you're going to use AI whenever it delivers a faster service, a better service, a less expensive service. And number two, instead of looking for signals that they are using AI, look for the occasions where they are billing you by the hour for humans to do something that very likely AI should be able to do.”

Bans and restrictions don’t work…. If the tools aren’t approved, there’s the possibility of using those things in an uncontrolled and dangerous way.
Marlene Gebauer

How are legal leaders ensuring that employees feel confident in using AI responsibly?

In response to Rush’s next question about developing employee confidence, Gebauer said, “Building a foundation for confident, responsible use is critical.” She emphasized training and “getting people comfortable with the technology and using it.” Employees should be given practical how-to’s and task-specific instruction related to work they’re actually doing, rather than training at an abstract level. “A litigator and a transactional lawyer are going to use the AI differently,” she pointed out, “and so will a partner and a first-year or a junior associate.”

Eder added, “We need defined guidelines, templates, safe tools, as you said, and clear do’s and don'ts.” There must be psychological safety, he said, in the sense that employees are allowed to try things. He also emphasized the importance of a top-down approach to adoption. “If leaders do not use AI, no one else will,” he said.

Panelists summarize keys to effective AI acceleration

Finally, panelists were asked to provide short “nuggets” to sum up their views.

Liscia spoke of responsible scaling. “If I had to summarize, I would say that AI is already delivering value,” he said. “Now it's time to scale it responsibly. That's it.”

Wolfe said, “People are using AI already. That's great. Now it's about incorporating it into your strategy, building the skills of your staff and designing solutions that add real value.”

According to Eder, changing the mindset of the organization is critical. “AI is not a technology project,” he said. “It's a culture project. If you change the mindset, you change the organization. If you only introduce tools, you change nothing.”

Gebauer pointed to building the human infrastructure needed to use AI well, explaining that maintaining a successful strategy over time depends on it. To this end, she stressed the importance of “shared learning, proactive conversations with clients about what they need and successful change management to move people from deeply ingrained ways of working toward an approach that harnesses this technology.”

Cohn emphasized positivity. “Celebrate when the team uses AI successfully and when they try to use AI and they are not successful,” he said. “It’s equally important to celebrate the failures.”

Thomas Long
Content Management Consultant, Wolters Kluwer Legal & Regulatory U.S.
Thomas Long is a Content Management Consultant at Wolters Kluwer Legal & Regulatory U.S. with over 25 years of experience in legal analysis, writing, editing, and product development. His areas of focus include intellectual property, advertising and marketing law, privacy and data security, consumer protection, and competition law.