Health | October 13, 2025

No tradeoffs: The triple mandate for clinical GenAI

In the GenAI era, clinicians may feel that decision support tools require tradeoffs between evidence, speed, and security. But this doesn't have to be the case.

With the growth of AI solutions in healthcare, clinicians are looking for quick, concise insights to keep care moving. Patients have complex cases, expectations are changing, and administrative tasks continue to be a burden.

The promise of generative AI (GenAI) tools to drive efficiency in this urgent and complex care environment is compelling. Many first-to-market tools, and even general-use LLMs, interest providers aiming for faster speed-to-answer without sacrificing time with patients. But in adopting these tools early, some clinicians may feel forced to accept a tradeoff: speed at the cost of trust in the answer, privacy, or added risk.

Clinicians have high expectations for GenAI—as they should

When it comes to GenAI in clinical decision support, healthcare leaders need to consider the problems clinicians are trying to solve, and they shouldn't accept tradeoffs. To maximize adoption, they should also consider how clinicians feel about the available tools. A Wolters Kluwer survey of physicians found that while overall interest in GenAI was high, respondents felt their ideal solution would meet the following criteria:


Using unvetted solutions, or solutions that force a tradeoff, means that providers are electing to compromise. Health system leaders should take note that they don't need to, especially in three crucial areas: trusted evidence, speed to answer, and data privacy.

1. Trust in evidence

Evidence for clinical care goes beyond simply accessing information from research studies. It involves insights that provide differential diagnoses, identify complexity, and offer possible treatment options, while also considering industry and local guidelines. Care can be nuanced, and AI solutions can’t always detect these complexities or conflicting guidance.

Many of these insights can come to fruition only when experts are kept in the loop through content creation, revision, and grading. This helps the AI understand how to prioritize recommendations and identify considerations for care.

At the end of the day, clinical decision support should always augment and encourage clinical thinking, rather than replace it—the introduction of GenAI shouldn’t change this. Our Future Ready Healthcare survey showed that 57% of respondents are concerned that an overreliance on GenAI may erode clinical thinking skills. It’s critical that clinicians continue to exercise clinical judgment and draw on their expertise as they consult the evidence.

2. Speed when it’s needed

A study of primary care physicians found they would need an impossible 26.7 hours per day to provide all guideline-recommended primary care. GenAI tools offer one way to trim those hours by responding more rapidly and surfacing research and information faster than ever. Even saving 10 or 30 seconds during a patient visit adds up, giving clinicians more time with patients.

But even with fast GenAI responses, clinicians need the context of the information to validate it against the evidence. Some solutions offer speed but may hallucinate responses or require additional fact-checking, which negates the speed savings.

Speed also depends on where clinicians access the information. Workflow integrations, whether through EHRs or technology partnerships, can bring information to clinicians faster and in a more accessible manner. These insights need to be surfaced intuitively, require fewer clicks, and be built for the pace and operations of modern care.

3. Privacy as a non-negotiable

Understanding how GenAI tools intake and process data is also essential for any organizational governance. According to the Future Ready Healthcare survey, 56% of respondents were concerned about data privacy and security for GenAI tools, and only 18% were aware of published policies for GenAI use in their organizations.

Some GenAI tools can risk user privacy by sharing engagement information with third-party partners or introducing advertisements at the point of care. Any form of advertising could have nuanced, or even undetected, influences on clinician decision-making and could risk exposing patient data.

Leaders need to understand which tools are being used across teams and how those tools could influence care, and they need to establish and communicate AI governance policies. Doing so can help streamline operations, reduce care variability, and improve security.

Evidence, speed, and privacy in medical AI

AI systems can balance trustworthiness and efficiency. Future-focused AI should deliver on the technological promise of saving time, optimizing resources, and reducing the cognitive load and burden on humans, while also being mindful of energy consumption. At the same time, innovators should work constantly to mitigate bias, demonstrate transparency, offer robust evidence, and remain accountable. It's essential to find this balance rather than settle for tradeoffs.

The good news is that clinicians can get all of the above. UpToDate Expert AI is built on a trusted foundation of evidence and a human-centric editorial process, and it is pressure-tested with customers and experts. It also keeps care decisions private between patients and providers, without sharing data with third-party advertisers.

Download our whitepaper, “Breaking the compromise: Why healthcare AI shouldn’t be an ‘either-or’ decision,” to learn how clinicians can have the best of GenAI decision support without compromise.
