Compliance | 30 September 2025

Agentic AI in EHS needs a human-in-the-loop approach

Artificial Intelligence (AI) is already being used in Environment, Health, and Safety (EHS) management. While adoption is still in the early stages, AI is expected to fundamentally reshape how organizations manage safety and risk.

Most of the spotlight today is on Generative AI (GenAI). Powered by Large Language Models (LLMs), GenAI works in a straightforward way: a user provides a prompt, and the system generates an output such as text, images, code, summaries, or insights.
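To make that flow concrete, here is a minimal, purely illustrative Python sketch. The call_llm function is a hypothetical placeholder for whichever LLM service an organization uses; it is not a specific vendor API.

```python
# Minimal sketch of the GenAI pattern: one prompt in, one generated output back.
# call_llm is a hypothetical placeholder for an LLM service, not a real vendor API.

def call_llm(prompt: str) -> str:
    # In practice this would send the prompt to an LLM endpoint and return its reply.
    return f"[generated text responding to: {prompt!r}]"

if __name__ == "__main__":
    prompt = "Summarize last month's near-miss reports and highlight recurring hazards."
    print(call_llm(prompt))
```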

But Agentic AI is the future, and your organization must be ready to adopt it. While it’s more autonomous than GenAI, Agentic AI will still rely on human validation to ensure safe and reliable outcomes.

AI Agents and guardrails

Agentic AI goes further than GenAI. AI agents can reason, plan, act, and self-correct with a degree of autonomy. They can also access external databases, tools, and systems through APIs. Like GenAI, they rely on LLMs, but in this case, the LLM serves as the agent’s brain.

Multiple AI agents can even collaborate, with each assigned a specific role, in what’s known as a “multi-agent” design pattern.
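For illustration only, the sketch below shows the basic shape of such an agent in Python: an LLM “brain” (here a hypothetical call_llm placeholder, not any specific vendor API) plans a step, calls a registered tool, observes the result, and repeats until it decides the task is done. Running two agents with different roles hints at the multi-agent pattern described above.

```python
# Purely illustrative agent loop; call_llm is a hypothetical placeholder for an
# LLM service, and the tool shown is a made-up example, not a real product API.

def call_llm(role: str, context: str) -> str:
    """Placeholder for the LLM 'brain' that decides the agent's next step."""
    if "Observation:" in context:
        return "DONE: summary drafted from the retrieved incident history"
    return "CALL lookup_incident_history Site A"  # canned decision so the sketch runs

def lookup_incident_history(site: str) -> str:
    """Hypothetical external tool reached through an API."""
    return f"3 slip/trip incidents recorded at {site} in the last 12 months"

TOOLS = {"lookup_incident_history": lookup_incident_history}

def run_agent(role: str, task: str, max_steps: int = 5) -> str:
    """Plan-act-observe loop; the step limit is a basic guard against endless loops."""
    context = task
    for _ in range(max_steps):
        decision = call_llm(role, context)
        if decision.startswith("DONE:"):
            return decision.removeprefix("DONE:").strip()
        _, tool_name, tool_arg = decision.split(" ", 2)
        context += "\nObservation: " + TOOLS[tool_name](tool_arg)
    return "Stopped: step limit reached"

if __name__ == "__main__":
    # Two agents with different roles sketch the "multi-agent" idea:
    # one gathers data, another reviews its output.
    print(run_agent("data-gathering agent", "Summarize incident history for Site A"))
    print(run_agent("review agent", "Check the summary for missing hazards"))
```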

In the context of EHS, Agentic AI offers the potential to automate complex processes like risk assessments, incident management, and compliance.

To operate accurately and safely, Agentic AI relies on guardrails that aim to prevent hallucinations, poor decisions, or endless task loops. Guardrails may include:

  • Restrictions on which APIs an agent can access
  • Controls on the types of queries an agent is allowed to process
  • Backup or fallback mechanisms where another agent or a human takes over if the first agent fails
  • Human validation checkpoints that require human approval before a task can proceed or be completed (a simple sketch follows this list)
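As a purely illustrative sketch (every function and name below is hypothetical, not part of any specific product), the last two guardrails can be as simple as pausing the workflow until a qualified reviewer approves or rejects the agent’s proposal, and escalating to a human owner when approval is withheld:

```python
# Illustrative human-in-the-loop checkpoint; all names are hypothetical examples.

def propose_corrective_action(incident_summary: str) -> str:
    """Stand-in for an AI agent's output; a real agent would generate this text."""
    return f"Proposed action for '{incident_summary}': add machine guarding and retrain operators."

def human_validation_checkpoint(proposal: str) -> bool:
    """The task proceeds only if an EHS professional explicitly approves."""
    answer = input(f"AGENT PROPOSAL:\n{proposal}\nApprove? [y/N] ")
    return answer.strip().lower() == "y"

def run_with_guardrails(incident_summary: str) -> None:
    proposal = propose_corrective_action(incident_summary)
    if human_validation_checkpoint(proposal):
        print("Approved: corrective action logged and assigned.")
    else:
        # Fallback guardrail: hand off to a human instead of acting autonomously.
        print("Rejected: escalated to the EHS team for manual review.")

if __name__ == "__main__":
    run_with_guardrails("Forklift near-miss in Warehouse 2")
```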

The last safeguard is often referred to as the “human-in-the-loop” approach. In EHS, its importance is immense. A poor AI decision or hallucination, such as missing a hazard, overlooking a control, or recommending an unsafe procedure, could result in a serious injury or fatality.

Agentic AI won’t replace the judgment of EHS professionals, who remain essential at critical validation checkpoints to confirm proposed root causes, controls, barriers, and actions.

GenAI and Agentic AI in EHS

Agentic AI will revolutionize EHS management, but its value and effectiveness will ultimately depend on the judgment of practitioners at critical checkpoints. As part of the human-in-the-loop approach, EHS professionals validate and strengthen AI outputs to ensure safety is never compromised. Future success in EHS will belong to organizations that embrace this partnership between Agentic AI and sound human judgment.

Download our insight brief on the role of AI in EHS to learn more about the importance of human input and critical thinking, and how AI can augment and improve your EHS processes.

Content Thought Leader - Wolters Kluwer Enablon
Jean-Grégoire Manoukian is Content Thought Leader at Wolters Kluwer Enablon. He’s responsible for thought leadership, content creation and the management of articles and social media activities. JG started at Enablon in 2014 as Content Marketing Manager and has more than 25 years of experience, including many years as a product manager for chemical management and product stewardship solutions. He also worked as a product marketing manager in the telecommunications industry.