Health | January 23, 2026

Health system size impacts AI privacy and security concerns

Administrators at larger health systems are more concerned about data and privacy violations, according to a new survey on shadow AI in healthcare.

As AI tools are increasingly used in healthcare organizations, providers and administrators alike share concerns related to data breaches and privacy, according to a new survey.

These concerns are well-founded. A 2025 IBM report found that the average cost of a data breach in the healthcare industry exceeded $7.4 million, and that 97% of organizations with AI-related security incidents lacked proper AI access controls.

The survey, commissioned by Wolters Kluwer Health, asked over 500 administrators and providers about unauthorized AI tool usage, known as “shadow AI,” and about overall sentiment toward AI solutions.1 Respondents were asked to rank the top risks of AI tools; unsurprisingly, patient safety was the top concern across all participants. However, privacy and data breaches were also significant concerns.

1. Concerns for data breaches and privacy are higher at large health systems

Across all respondents, nearly 30% ranked data breaches as their #1 or #2 risk. Among participants from health systems with over 25,000 employees, however, that figure jumps to 57%.

When it comes to privacy, 33% of all respondents and 35% of administrators ranked it as their first or second concern. The concern rises at larger health systems, reaching 46% among administrators of hospitals with 12,000 or more employees.

2. Administrators are generally more concerned about privacy

Examining the differences between the two surveyed groups, administrators were more concerned about privacy overall than providers. Over 20% of administrators (CEOs, CMOs, CIOs) named it their top concern, compared with 14% of providers, and 37% of CFOs ranked it first.

Support AI tool literacy with enterprise-wide training

Secure enterprises start with technology training and literacy, especially with the rise of AI tools. Solutions like secure, enterprise-grade clinical decision support, in addition to training on enterprise-approved tools and AI risk policies, help leaders educate employees across healthcare organizations on the risks posed to data and security through unsanctioned AI tools.

Explore more survey findings and action steps in our free whitepaper, “Shadow AI: A hidden risk for healthcare.”

“In 2025, shadow AI surged across healthcare organizations, as staff across all aspects of care sought ways to improve efficiency amid persistent burnout, staffing shortages, and other factors. As a result, in 2026, healthcare leaders will be forced to rethink AI governance models and implement more formalized organization-wide frameworks that ensure the responsible use of AI, including proper training around the technology and appropriate guardrails to maintain compliance.”

Alex Tyrrell, Senior Vice President and Chief Technology Officer, Wolters Kluwer
  1. Online survey of hospital and health system providers and administrators conducted on behalf of Wolters Kluwer Health. N=518, comprised of 256 providers and 262 administrators. Conducted December 2025. Data on file.