ChannelLife New Zealand - Industry insider news for technology resellers
Steve Wilson
Wed, 14th Jan 2026

Exabeam has launched a connected set of security workflows that centres on behavioural analytics for AI agents and visibility into AI security posture, as organisations report increased operational and governance risks from agent-driven activity.

The company said the release extends its user and entity behaviour analytics approach into AI agent behaviour analytics. It also adds a unified investigation view for AI-related activity and a posture view for AI agent security.

Exabeam framed the move as a response to early enterprise experiences with AI agents operating across business systems. It said some organisations have seen AI agents share sensitive data, override internal policies and make unsanctioned changes, while security teams often lack visibility into who authorised those actions or why they occurred.

Agent behaviour focus

Exabeam said the new release places AI agent behaviour analytics at the centre of how security teams detect and investigate AI-related activity, unifying AI investigations in one place and adding posture insight for AI usage and AI agent activity.

The company described maturity tracking and recommendations as part of the posture view. It also pointed to updated data and analytics that model emerging agent behaviours.

Exabeam linked the announcement to earlier work in the area. It said it introduced what it described as the first user and entity behaviour analytics designed to detect AI agent behaviour through an integration with what is now Google Gemini Enterprise. It said the integration gave organisations a way to detect, investigate and respond to agent activity.

Security vendors have started to address a shift in identity and activity monitoring, as AI tools move from chat interfaces into autonomous and semi-autonomous agents that take actions across corporate systems. Many organisations now run pilots where agents query internal knowledge bases, draft documents, create tickets, modify configuration settings, or call other tools through application programming interfaces.

"Securing the use of AI and AI agent behaviour requires more than brittle guardrails; it requires understanding what normal behaviour looks like for agents and having the ability to detect risky deviations," said Steve Wilson, Chief AI and Product Officer, Exabeam.

"Exabeam is the first to apply UEBA to AI agents, and this release further extends that agent behavior analytics leadership," said Wilson. "These capabilities give security teams the behavioural insight needed to identify risk early, investigate AI agent activity quickly, and continuously strengthen resilience as AI usage and agents become integral to enterprise workflows," said Wilson.

Posture tracking

Exabeam said the latest release strengthens security teams' ability to assess security posture around AI usage and agent activity, providing a structured framework for understanding AI activity and conducting investigations.

Governance has become a central concern as AI agents begin to act on behalf of employees and departments. Security teams have started to look for ways to tie agent actions back to authorisation, policy, and identity controls. They also need audit records that show what data an agent accessed and what it changed.

"AI agents have the potential to radically transform how businesses operate and serve their customers, but only if they can be governed responsibly," said Pete Harteveld, CEO, Exabeam.

"Executives need clear insight into AI agent behaviour and an understanding of whether their security posture is strong enough to support safe adoption," said Harteveld. "These new capabilities from Exabeam provide that insight and give organisations a path to continuously improve, ensuring we protect our customers, their customers, and the broader ecosystem from emerging AI-driven threats," said Harteveld.

Systems integrators and security services firms have also started to position agent governance as part of broader cyber and risk programmes. They increasingly treat AI agents as new operational entities that require monitoring and policy enforcement.

"As AI adoption accelerates, one of our greatest priorities is understanding and managing agent behaviour," said Joep Kremer, Business Unit Director, ilionx. "The new connected capabilities from Exabeam provide the ability to see when an AI agent deviates from expected patterns, follow its activity through a unified investigation, and continuously improve our defences with posture insights," said Kremer. "This level of connected visibility and governance for AI agent activity is extremely valuable for ourselves and our end customers, and I look forward to seeing Exabeam continue to expand upon these capabilities," said Kremer.

New security category

Exabeam said the industry increasingly views AI agent oversight as a distinct security category, alongside identity, cloud, and data protection. It also argued that tools designed for static users and devices do not match the risks and operational patterns of decision-making agent systems.

The company said the connected set of behavioural analytics, centralised investigation, and AI posture visibility marks an expansion of its security operations offering, as enterprises formalise controls for AI agents and increase the number of automated tasks that agents perform across business applications.