Operating in the dark: Shadow AI risks transparency and safety

Health systems are under mounting operational and financial pressure, including funding cuts, increased administrative work, and clinician shortages. One study estimated that primary care physicians would need an impossible 26.7 hours per day to provide guideline-recommended care.3 Healthcare professionals simply need more time and fewer administrative tasks to meet patient care requirements.

Healthcare has traditionally lagged behind other industries in adopting technology, even when new tools promise wider efficiencies or better data sharing. AI is the exception: healthcare has adopted AI tools at more than twice the rate of other industries, reflecting the urgency of rising costs, labor shortages, and shifting patient expectations.4 These tools offer a wide range of services, from office administration support to patient portal chatbots to generative information searching. A modern, fast, digital patient and clinician experience is now a competitive differentiator, and AI adoption has become a mix of opportunity and survival.

However, in many cases, adoption and innovation are outpacing policy and enterprise decision-making, leading employees to use whatever tools they can get their hands on to accomplish their tasks. When this happens, system leadership loses the ability to regulate tools or maintain full security oversight of them, and clinical leaders grow more concerned about variations in care resulting from disparate tools.

The good news is that healthcare organizations can help mitigate these risks by establishing enterprise-wide guidelines for AI tool usage and communicating these policies to their teams, fostering a safer, more efficient, and more secure technology future.

Figure: “Ranking AI risks in healthcare, top three selections.” Percentage of Provider and Administrator respondents who selected each risk among their top three concerns.

  Risk                              Providers   Administrators
  Patient safety                    49%         55%
  Privacy                           45%         49%
  Data breaches                     45%         47%
  Inaccurate outputs                42%         41%
  Regulatory compliance             32%         33%
  Lack of transparency in sources   31%         29%
  Bias                              24%         30%
  Deskilling                        26%         23%

Overall, both groups share similar concerns, with patient safety, privacy, and data breaches emerging as the most significant risks.

A booming technology space can introduce risks to data and patient safety

Using unsanctioned AI tools can have wide-ranging—and costly—impacts. A 2025 IBM study found that 97% of organizations that had experienced an AI-related security incident in their models or applications lacked proper AI access controls, and 63% of organizations surveyed lacked AI governance policies.5 The average security breach in the healthcare industry cost over $7.4 million in 2025, and healthcare breaches took the longest of any industry to identify and contain.

The Wolters Kluwer Health survey reflected these concerns. When asked to rank a series of AI risks to healthcare, both providers and administrators selected patient safety, privacy, and data breaches as their top concerns.

Within the healthcare space, generic AI tools—especially generative AI and chatbots—can pose more serious risks for organizations if they are embedded within patient data applications or used for clinical decision support. These tools still run the risk of hallucinations, inconsistencies, and biases, creating threats to patient safety along with HIPAA violations and other compliance concerns.6 Even in cases where patient data has been de-identified to protect privacy, some tools can re-identify datasets, potentially allowing data to be relinked to individuals.7

If generic AI solutions aren’t grounded in evidence and pull information from broad sources, they can lack transparency and introduce bias and risk into the clinical decision-making process. Understanding the black box of AI—how it generates outputs or recommendations—is critical for any healthcare organization, especially if the tool interacts with patient care decision-making. Leaders who don’t properly analyze this risk selecting a tool that harms the enterprise or creates variations in care.

Ultimately, addressing shadow AI is not about restricting access to productivity tools. Leaders must understand why teams are using unsanctioned tools and which challenges they’re trying to solve, and then identify enterprise-level tools that can accomplish these goals safely and securely.

As shadow AI continues to be more prevalent, clinicians should only use purpose-built GenAI systems that are trained on expert-validated evidence, transparent with source citations, and capable of tailored recommendations. GenAI will provide an increase in staff efficiency and care quality, but we must preserve safety and clinician-patient relationships by reframing workflows that elevate GenAI from a tool to a partner, keeping patients at the center of care.

Greg Samios, CEO, Wolters Kluwer Health

In 2025, shadow AI surged across healthcare organizations, as staff across all aspects of care sought ways to improve efficiency amid persistent burnout, staffing shortages, and other factors. As a result, in 2026, healthcare leaders will be forced to rethink AI governance models and implement more formalized organization-wide frameworks that ensure the responsible use of AI, including proper training around the technology and appropriate guardrails to maintain compliance.

Alex Tyrrell, SVP and CTO, Wolters Kluwer

My biggest concern is ensuring that AI tools are safe, accurate, and compliant. Particularly that they do not compromise patient safety, privacy, or regulatory compliance.

IT Executive

My biggest concern about AI in healthcare is algorithmic bias: If AI systems are trained on datasets that underrepresent certain groups (e.g., elderly patients, racial minorities), they may produce less accurate recommendations for these populations.

Resident

UpToDate supports clinical decision-making with UpToDate Expert AI


References

  1. Bruce, Giles. “Shadow AI goes ‘mainstream’ in healthcare: 5 notes.” Becker’s Health IT. December 18, 2025. https://www.beckershospitalreview.com/healthcare-information-technology/ai/shadow-ai-goes-mainstream-in-healthcare-5-notes/
  2. Online survey of hospital and health system providers and administrators conducted on behalf of Wolters Kluwer Health. N=518, comprised of 256 providers and 262 administrators. Conducted December 2025. Data on file.
  3. Porter, Justin, et al. “Revisiting the Time Needed to Provide Adult Primary Care.” Journal of General Internal Medicine. 38, 1. (2023): 147-155. doi:10.1007/s11606-022-07707-x
  4. Jain, Sachin. “AI Adoption In Healthcare Is Surging: What A New Report Reveals.” Forbes. October 21, 2025. https://www.forbes.com/sites/sachinjain/2025/10/21/ai-adoption-in-healthcare-is-surging-what-a-new-report-reveals/
  5. IBM. Cost of a Data Breach Report 2025. Accessed December 2025. https://www.ibm.com/reports/data-breach
  6. Bonis, Peter. “Avoiding a future where the ‘cause of death’ is an AI chatbot.” Chief Healthcare Executive. July 8, 2025. https://www.chiefhealthcareexecutive.com/view/avoiding-a-future-where-the-cause-of-death-is-an-ai-chatbot-viewpoint
  7. Donnellan, Alison. “Engineering identity: Anonymous data remains vulnerable to re-identification through basic details.” MSN.com. November 28, 2025. https://www.msn.com/en-us/technology/cybersecurity/engineering-identity-anonymous-data-remains-vulnerable-to-re-identification-through-basic-details/ar-AA1Rlrwz