ChannelLife New Zealand - Industry insider news for technology resellers
Cloudera hackathon highlights enterprise AI data challenges
Tue, 12th May 2026
David Shilovsky, Interview Editor

The rapid rise of enterprise AI has created a familiar tension across industries: organisations are eager to move from experimentation to deployment, but many remain constrained by weak data governance, fragmented infrastructure, and uncertainty around operational controls.

Those themes emerged clearly during Cloudera's recent hackathon, where partners and technology leaders were challenged to develop commercially viable AI solutions within an accelerated timeframe.

The event was less about theoretical innovation and more about proving whether enterprise AI can move quickly from concept to deployment while remaining governed and commercially relevant. 

For Citadel Edge, the hackathon challenge revolved around a common enterprise pain point: manually interpreting unstructured RFPs. The Citadel Edge team addressed this bottleneck by developing an 'RFP-to-Proof-of-Concept (PoC) Builder,' designed to automate and accelerate the process of moving from RFP analysis to a working prototype, reducing what used to take three to four weeks down to a matter of hours.

Converting the abstract into real-world applications was high on the agenda, according to Todd Trevillion, Executive General Manager - Strategic Partnerships and Relations for Citadel Edge, and Keir Garrett, Managing Director of Cloudera Australia and New Zealand.

"As a software technology firm, we get a lot of RFPs coming through, and the process of responding, architecting, and designing solutions can take significant time. The goal was to compress that cycle and accelerate delivery," said Trevillion.

The solution ultimately evolved beyond simple proposal generation into a full coding development environment that is fully traceable, with human-in-the-loop controls built in.

The rapid development cycle itself became one of the hackathon's defining lessons.

"One of the things that we found excellent to come out of the hackathon was the actual speed at which we could develop the use case and solution that we were conceptualising," Trevillion said.

"The Cloudera tools, such as AI Studio, RAG Studio, Agent Studio and the overall platform, really allowed us to rapidly build out the platform solution."

Garrett agreed and noted, "It was such a pressure pot of going from inception to output and to the outcome, because we gave them a very short window of time to come up with the concept and then go through iterations. And some of the partners actually switched directions partway through and refocused the outcomes that they were looking to achieve."

"The whole concept was really all about focusing on creating a deployable and commercial outcome, not just for learning's sake. It was actually 'come up with something that we could go to market with and that would resonate with the market'."

While organisations continue to heavily invest in AI initiatives, many remain trapped in PoC cycles without the governance or operational maturity required for enterprise deployment. 

The sophistication of a particular AI model is one issue, but if there is a question mark over the veracity of the data companies feed into it, the model's outputs may not be trustworthy either.

This was echoed by a recent AI readiness assessment conducted by Cloudera, which suggested that only a fraction of organisations are actually prepared from a data perspective.

"Everybody wants AI, but the real reality is, the first question we should be asking is, do you trust your data?" Garrett said. 

"If you can't trust your data, then it's very hard to trust the outputs and leverage AI. Only seven per cent of organisations are truly data ready.

"A lot of organisations are really very low on that maturity curve because they don't know where to start, they don't have funding and they can't take their prototypes to production because of that funding."

Garrett also warned that many companies are allowing AI adoption to outpace existing frameworks, sometimes to a very significant extent, and ultimately only to their own detriment.

She cited one large organisation reportedly using more than 40 separate AI tools across the enterprise, a level of sprawl that is ultimately unsustainable.

The concern stems not only from operational sprawl but also from growing financial opacity and security exposure. Every AI tool deployed within a company carries infrastructure, data, and model consumption costs, often without centralised visibility.

"At times, it almost feels like we're going back to the old Access database days," Garrett said. 

"The proliferation of consumption of let's build this, let's build that, and then it gets out of control. So we've got to pull that in sooner and put the right guardrails in place."

"Every person that's deploying an AI tool or an agent across their organisation is effectively spending the company money, without the company fully realising the cumulative cost of all that data and AI consumption."

Sovereignty and data control emerged as equally urgent concerns, particularly for the companies working with sensitive information.

Teams drew a clear distinction between sovereign cloud infrastructure and sovereign AI, highlighting a persistent misconception in the market. Many organisations still equate sovereignty with simply hosting data on hyperscaler infrastructure within Australia. In practice, that addresses only part of the risk.

True AI sovereignty extends further, encompassing where models are trained, how data is processed, and whether information leaves jurisdictional boundaries at any point in the lifecycle.

"We work with a lot of regulated industries, including defence and federal government agencies," Trevillion said.

"In that context, knowing exactly where your data is and where it's going is critical. Even when hosted in Australia, models can still process data offshore."

The implication is clear: hosting infrastructure locally is no longer enough. Sovereignty is defined by control. That includes flexibility over model and infrastructure choice, the ability to avoid vendor lock-in, and governance frameworks that ensure data remains secure and compliant throughout the AI lifecycle.