GenAI has entered the (healthcare) chat
Generative AI (GenAI) has entered the healthcare industry as a transformative tool with the potential to revolutionize clinical decision-making, increase speed-to-answer, and improve health outcomes. Medical students and residents are entering the clinical environment with experience using GenAI tools to get quick clinical responses, and nursing schools are planning to more than double GenAI use in the next 2-3 years.
The main thing clinicians need is more time. A 2022 study found that primary care clinicians would need an impossible 26.7 hours per day to provide guideline-recommended care. Using GenAI tools at the point of care can help clinicians get faster answers, reduce administrative time, and spend more of the day focused on patients. Currently, however, the tools in use can vary from team to team and from clinician to clinician, exposing organizations to inconsistencies in both care and tool security and complicating risk mitigation.
How much risk are leaders willing to tolerate when it comes to using nascent AI technology at the point of care? Where is the line between accepting people will use available tools to save time and opening up organizations to security risks?
New tools require new policies
While well-intended, using various free or unsanctioned tools on personal or work devices can introduce risk to an organization. The healthcare industry remains a major cybersecurity target, with breaches averaging $9.8 million per incident, and a KLAS Research analysis identified asset management as a common opening for breaches. The analysis suggests healthcare organizations take a far more reactive than proactive approach.
Leaders are trying to figure out how AI technology fits into policymaking. AI is no longer confined to a handful of major technologies; it is integrated into administrative tools, analytics, and decision support. The 2024 Healthcare Cyber Security Survey Report from HIMSS highlighted that stronger cybersecurity governance is essential, including in areas involving AI.
Outside of healthcare, many organizations are establishing risk policies for AI. From chatbots to generative content creation to analytics, AI has become commonplace in most organizations and is available through various tools. These tools, however, carry organizational risks, including data privacy and security concerns, bias, and copyright issues.
A gap between organizational enthusiasm for GenAI and policy preparedness
The 2025 Wolters Kluwer Health Future Ready Healthcare report found stark differences between healthcare organizations’ enthusiasm for GenAI and their policy preparedness. Among respondents, 80% cited “optimizing workflows within departments and across practices” as a top priority, and 63% said they were “prepared” to use GenAI to optimize workflows. Yet only 18% were aware of published policies for GenAI use in their organization.
This gap leads back to the GenAI tools themselves. Without published policies or vetted tools, leaders leave the door open for users to select their own, introducing risk to the organization and, when tools are used at the point of care, to patient safety and privacy. Patient safety can be at risk depending on whether a given tool was purpose-built for healthcare use and whether it references trusted evidence. Additionally, without proper guidance and training, employees may inadvertently expose confidential information through free GenAI tools.