ChannelLife New Zealand - Industry insider news for technology resellers

Australian firms to treat AI like staff by 2026, experts say

Fri, 12th Dec 2025

Australian organisations are likely to treat artificial intelligence systems as they do human staff within the next two years, according to new forecasts from OpenText's Australian and New Zealand leadership.

The company expects a fundamental shift in how businesses in the region manage AI, identity, and data risk by 2026. It predicts tighter controls on machine identities, a stronger focus on data quality, and rising cyber exposure for small and medium enterprises.

George Harb, Vice President for Australia and New Zealand at OpenText, said organisations were already moving beyond early AI trials.

"Australian organisations will increasingly treat AI agents and digital workers as if they were human employees when it comes to risk, access and oversight," said Harb.

He said many businesses still run AI tools in isolated environments, and that as they expand AI across more systems they discover gaps in basic data awareness.

"Right now, many organisations are using AI in very siloed ways. They are moving beyond proof-of-concept deployments and starting to deploy agentic AI across more systems, only to discover they do not actually know where all their data is or how it is exposed. Legacy databases and servers remain connected to the network, even after modernisation to the cloud and containerised software, which means sensitive data can be brought back online and exposed without people realising it," said Harb.

Harb said organisations face the same core risk whether the decision maker is human or machine.

"The core risk does not change whether the decision maker is a person or an AI agent. Organisations will need to manage AI with the same controls they expect for humans, including clarity on what information it can see, how it uses that information, and how they prevent inappropriate access or leakage under evolving privacy and cyber laws," said Harb.

Data as 'fuel'

OpenText expects the emphasis in AI programmes to shift from model size to data quality and governance.

"The next step in enterprise AI will be less about model size and more about whether the data feeding those models is clean, governed and fit for purpose," said Harb.

He compared AI systems with finely tuned vehicles that rely on the right input.

"You can have a high-performance vehicle tuned to perfection. If you put the wrong fuel in it, you will not get the result you expect. The same principle applies to AI. Large language models can be powerful, but if you put dirty or poorly governed data into them, they will produce outcomes that cannot be trusted," said Harb.

He said organisations are creating new leadership roles focused on data discipline.

"In response, more Australian organisations are appointing Chief Data Officers and Chief AI Officers whose focus is to engineer data, not only to clean up what exists but also to change how the organisation captures and manages new data so it stays fit for purpose over time. The right data is king. Tech leaders who fail to get this right face not only wasted AI spend but serious exposure under privacy and cyber regulation, including the risk of very large penalties if they mishandle sensitive information," said Harb.

Machine identities

OpenText also expects non-human identities to become central to cyber risk management by 2026. This includes bots, digital twins, application interfaces, and background service accounts.

"In 2026, non-human identities such as bots, digital twins, APIs and service accounts will move to the centre of identity and access management in Australia and New Zealand," said Harb.

He said security teams will need to apply the same access rules to humans and machines.

"Every non-human identity will need to be managed in the same way as a human identity. That means applying the same identity and access management controls across every agentic AI persona, every digital worker and every automated process that can act on behalf of an employee," said Harb.

Harb said this shift will include clear rules on who owns each identity and what it can do.

"In practice, that includes authentication, authorisation, auditability and clear ownership for each identity, whether human or machine," said Harb.

He said more staff will work alongside AI-based colleagues.

"We are heading toward a world where many employees will have a virtual colleague by their side, taking action and handling their workload. If those machine identities are not properly governed, the risk is no different from a compromised employee account, but at a greater speed and scale," said Harb.

SME exposure

The forecasts also highlight small and medium enterprises as a growing weak point in Australia's AI and data ecosystem. Harb said larger enterprises attract more attention in governance debates, while SMEs often run with lighter controls.

"Small and medium enterprises will emerge as one of the most exposed segments in the Australian AI and data landscape," said Harb.

He said larger organisations have more resources for privacy, security, and compliance programmes.

"Most of the current AI and data governance conversation is happening at the enterprise level, where large organisations have people and budgets dedicated to privacy, cybersecurity, and compliance. In contrast, many SMEs still assume they are too small to be targets and lack the security measures, data governance, and identity controls that larger organisations are now implementing," said Harb.

He expects attackers to respond as larger firms strengthen defences.

"As large enterprises become harder to breach, attackers will move down the chain of command. SMEs hold valuable customer and operational data, but often operate with open or lightly protected systems. This creates a growing pool of data privacy and cyber risk that has not yet been fully acknowledged. Tech leaders and business owners in this segment will need to understand where their data resides, how it is protected, and how AI uses it, or risk finding out the hard way through regulatory action or a serious breach," said Harb.
