Ensuring secure and sustainable AI innovation in the public sector


By Prasanna Raghavendra*
Monday, 17 March, 2025


The concept of AI governance may be unfamiliar territory for most government agencies. The consequences that result from a lack of governance, however, are not.

The meteoric rise of DeepSeek’s AI-powered chatbot — which seemingly emerged overnight to global acclaim — was abruptly overshadowed by a massive cyber attack on the company weeks ago. As the dust settles, security teams have begun raising alarms over data security and privacy risks inherent in the AI model. But DeepSeek is only the latest such incident. In the race to innovate, I’d expect more vulnerabilities to slip through the cracks — and affect individuals, businesses and governments looking to embed AI into everything they do.

These challenges underscore the urgency for organisations to establish guardrails and standards across development life cycles that ensure secure, transparent and tracked integration of AI models — along with the data used to train and fuel them.

This is especially true for the public sector, which stands to gain the most from AI adoption but also faces the highest stakes when incidents occur. With so much on the line, nothing less than a comprehensive and formal approach to AI governance — one that places security and compliance at its core — will do.

Navigating AI security risks with strong governance

The concept of AI governance may be unfamiliar territory for most government agencies. The consequences that result from a lack of governance, however, are not. The fallout of the DeepSeek hack — which exposed massive amounts of sensitive user data — is a stark reminder of the dangers of rapid AI integration into development life cycles without proper security oversight. For agencies, any data breach caused by vulnerabilities in AI applications doesn’t just result in the violation of Australia’s privacy laws — it also erodes public trust in the government’s ability to protect the sensitive information of its citizens.

The absence of governance and compliance processes also leaves agencies unprepared for incoming AI regulations. Australia’s recent policy for responsible AI use in government offers a glimpse into areas that future laws or regulatory frameworks might cover. If they proactively align AI development with the security and privacy requirements outlined in the nascent policy, agencies should be able to minimise risks and disruption to AI initiatives down the line.

Taken together, governance isn’t a barrier to innovation and progress, as some might fear. Instead, the structure and boundaries provided by a strong governance framework give agencies the direction, accountability and security needed to confidently leverage AI to its fullest potential — while successfully navigating the security threats and regulatory pitfalls that may follow.

Getting started with governance for AI development

How should government agencies go about designing their governance framework? What security and compliance considerations should be included? Below are several measures and best practices that agencies can adopt throughout their development cycle.

1. Adopt a ‘trust less, verify first’ approach

Before integrating any AI model, it’s worth looking at what’s under the hood. Mandate reviews for all new AI models — including updated versions of previously vetted ones — to identify vulnerabilities. Developers should incorporate MLOps best practices, along with zero trust principles and threat modelling, into code reviews and security testing.
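To make this concrete, the sketch below shows one way a vetting gate could be wired into a build pipeline: a model artefact is accepted only if its exact version and checksum appear in a registry maintained by the security review team. The registry file, model name and paths are illustrative assumptions rather than features of any particular tool.

```python
"""Minimal sketch of a pre-integration model vetting gate.

Assumes a hypothetical 'approved_models.json' registry maintained by the
security team, mapping model names to reviewed versions and SHA-256 digests.
"""
import hashlib
import json
from pathlib import Path

REGISTRY = Path("approved_models.json")  # hypothetical registry of vetted models


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a model artefact on disk."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model(name: str, version: str, artefact: Path) -> bool:
    """Allow integration only if this exact model version has been reviewed."""
    registry = json.loads(REGISTRY.read_text())
    entry = registry.get(name, {}).get(version)
    if entry is None:
        return False  # new or updated model: send back for security review
    return entry["sha256"] == sha256_of(artefact)


if __name__ == "__main__":
    ok = verify_model("summariser", "2.1.0", Path("models/summariser-2.1.0.bin"))
    raise SystemExit(0 if ok else 1)  # a non-zero exit fails the pipeline
```

Because updated versions of a vetted model return to the review queue by default, the gate enforces the "verify first" rule without relying on developers to remember it.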

2. Embed model testing and risk analysis in development

Conventional wisdom in software development says you should never push anything to production without testing it first — why should AI model integration be an exception? Include testing and risk analysis stages across the AI development workflow, supported by versioning, compliance and sign-off gates, to ensure all security risks are identified and addressed before an AI-driven application is cleared for its intended use case.

It’s also worth air-gapping the tests and data used within lab environments or security platforms. This allows development teams to properly verify the safety of an AI model during development without exposing it to real-time data.
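As a rough illustration, a promotion gate along these lines could read a release manifest produced by the pipeline and refuse to clear a build unless tests have passed, a risk analysis is on record and every required role has signed off. The manifest file, its fields and the roles listed are assumptions made for this sketch, not a prescribed format.

```python
"""Illustrative promotion gate combining test results and sign-off metadata.

The 'release_manifest.json' file and its fields are assumptions for this
sketch; agencies would define their own artefacts and approval workflow.
"""
import json
from pathlib import Path

REQUIRED_SIGNOFFS = {"security", "compliance", "model_owner"}  # assumed roles


def promotion_allowed(manifest_path: Path) -> bool:
    """Clear a model build for release only if tests passed, a risk analysis
    is recorded, and every required role has signed off."""
    manifest = json.loads(manifest_path.read_text())
    tests_passed = manifest.get("test_suite", {}).get("status") == "passed"
    risk_recorded = bool(manifest.get("risk_analysis", {}).get("completed_at"))
    signed = {s["role"] for s in manifest.get("signoffs", []) if s.get("approved")}
    return tests_passed and risk_recorded and REQUIRED_SIGNOFFS <= signed


if __name__ == "__main__":
    ok = promotion_allowed(Path("release_manifest.json"))
    raise SystemExit(0 if ok else 1)  # block release if any gate is unmet
```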

3. Implement continuous security and compliance monitoring

Vigilance doesn’t stop after deployment. Agencies must establish frameworks for regular AI model audits to detect drift, bias, emerging vulnerabilities or non-compliance with evolving regulations. Additionally, proactive incident response mechanisms are crucial for addressing AI-related security breaches, unethical use or compliance violations.
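One common way to detect drift after deployment is the population stability index (PSI), which compares the distribution of production inputs against the training baseline. The sketch below is a minimal illustration; the 0.2 alert threshold is a commonly cited rule of thumb rather than a mandated value, and the data here is synthetic.

```python
"""Minimal sketch of a post-deployment drift check using the population
stability index (PSI). The threshold and sample data are illustrative."""
import numpy as np


def population_stability_index(baseline, current, bins: int = 10) -> float:
    """Compare the current input distribution with the training baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, avoiding zero bins.
    base_pct = np.clip(base_counts / base_counts.sum(), 1e-6, None)
    curr_pct = np.clip(curr_counts / curr_counts.sum(), 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # distribution seen in training
    current = rng.normal(0.3, 1.2, 10_000)   # distribution seen in production
    psi = population_stability_index(baseline, current)
    if psi > 0.2:  # 0.2 is a commonly cited threshold for significant shift
        print(f"PSI={psi:.3f}: drift detected, trigger audit and review")
    else:
        print(f"PSI={psi:.3f}: within tolerance")
```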

The data used by AI models is also a major point of vulnerability for any organisation. Enforce strict data governance policies that define what data can be used with AI, along with rules for disclosure and reporting should those guidelines be breached. Governance teams must continuously monitor compliance with these guidelines — and rectify non-compliance the moment it occurs.
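A simple screening step can enforce part of such a policy in code: for example, only letting records with an approved classification label reach an AI service and flagging everything else for reporting. The labels and allow-list below are assumptions for illustration, not a prescribed classification scheme.

```python
"""Sketch of a data governance check applied before records reach an AI
service. The classification labels and the rule that only 'public' and
'internal' data may be used are assumptions made for illustration."""

ALLOWED_CLASSIFICATIONS = {"public", "internal"}  # assumed policy


def screen_records(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into those cleared for AI use and those to be blocked
    and reported to the governance team."""
    cleared, blocked = [], []
    for record in records:
        label = record.get("classification", "unclassified")
        (cleared if label in ALLOWED_CLASSIFICATIONS else blocked).append(record)
    return cleared, blocked


if __name__ == "__main__":
    sample = [
        {"id": 1, "classification": "public"},
        {"id": 2, "classification": "sensitive:personal"},
        {"id": 3},  # a missing label defaults to 'unclassified' and is blocked
    ]
    ok, flagged = screen_records(sample)
    print(f"cleared: {[r['id'] for r in ok]}, flagged: {[r['id'] for r in flagged]}")
```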

4. Form a multidisciplinary AI governance committee

A strong AI governance framework requires collaboration across multiple domains. Agencies should establish AI governance committees comprising experts from legal, compliance, cybersecurity, MLOps and development teams. This cross-functional approach ensures comprehensive oversight, mitigating risks that could jeopardise AI initiatives.

Takeaways

For government agencies to fully harness AI’s potential while safeguarding public trust, AI governance is not an obstacle — it’s a necessity. Far from being a barrier to progress, AI governance provides the critical foundation needed for the secure and responsible development of AI capabilities. As AI regulations inevitably take shape across Australia, agencies that establish structured oversight and controls now will be well positioned to adapt — without serious disruption to AI adoption or innovation.

Additionally, governance for AI development shouldn’t be overly complicated. Developers can take cues from existing software development processes and best practices to begin building and securing the integration of AI models at every step. Proactively aligning AI development with governance frameworks today will prevent costly backtracking when regulations become more stringent.

With strong governance frameworks in place, Australia’s public sector won’t just be ready to capitalise on the benefits of the incoming AI revolution. It will also set the standard for businesses and industries looking to securely and safely weave AI into every aspect of their operations.

*Prasanna Raghavendra is Senior Director of Research and Development at JFrog.

Top image credit: iStock.com/Chayada Jeeratheepatanont


