Building security-centric AI: why it is key to the government's AI ambitions
As government agencies test the waters of AI, public sector leaders must consider how they can secure and protect their investment.
The adoption of artificial intelligence (AI) continues unimpeded across multiple industries, grabbing the interest of even the public sector. It’s easy to see why. The capabilities of AI could radically transform a wide range of government operations, from the streamlining of bureaucratic processes to the automation of routine administrative tasks. The technology can also empower public employees to deliver greater value and elevate the quality of public services, reshaping how government institutions could engage and build trust with the citizens they serve.
Given such benefits, it’s unsurprising that government agencies across Australia are looking to double down on AI spending.1 But even as they test the waters of this new technology, public sector leaders must consider how they can secure and protect their investment. One sensible approach is to reassess cyber and data security capabilities. Here, we’ll explore why robust security is more essential than ever in the early days of AI adoption and the strategies or practices government organisations can employ to strengthen their defences.
The two sides to AI security
The development and deployment of AI-based solutions have changed significantly over the past few years. Previously, constrained by a limited variety of mathematical models and by available computing power, creating an AI that truly understood an organisation and kept adding value over an extended period typically required continuous training on large volumes of data, much of it after deployment. That alone was a hurdle for any government's AI ambitions: depending on the use case, the training data might range from ordinary operational data to sensitive information such as medical records or personally identifiable data.
Now, vendors have access to more sophisticated models and theoretically unlimited computing power, making it far easier to match the right model to the task at hand. This advancement significantly reduces the time required to train on real data, making the process more efficient, secure and productive.
That said, the use of any personally identifiable data raises security questions. Should sensitive national information be used for the purpose of improving public sector efficiency and productivity? Are the risks to national sovereignty and security justifiable?
No single person can answer these questions, but one thing is clear: government organisations cannot afford to rely on off-the-shelf AI solutions from just any provider. The potential exposure of sensitive national data to external, possibly hostile, entities through a third-party AI solution poses unacceptable risks to privacy and national security.
Building a security-centric AI foundation
So, how do you ensure AI is built with security at its core? Below are solutions and best practices governments should employ throughout their AI program.
Security by design
‘Secure by design’ should be a crucial principle for AI development, emphasising the integration of security from the outset rather than as an afterthought. While this is a multi-pronged approach, part of it involves using access management tools to limit access to AI models and sensitive data to only those involved in development, training and processing. This minimises potential risk vectors and helps ensure secure, responsible data use, backed by clear accountability.
Moreover, a comprehensive data governance framework, encompassing clear policies, privacy compliance and regular audits, helps ensure that data is handled responsibly and ethically.
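To make the least-privilege approach above concrete, here is a minimal sketch in Python of how an access check over AI assets might be expressed. The roles, asset labels and request structure are illustrative assumptions, not a reference to any particular access management product.

# Hypothetical sketch of least-privilege access checks for AI assets.
# Role names, asset labels and the request structure are illustrative only.
from dataclasses import dataclass

# Map each role to the AI assets it is allowed to touch.
ROLE_PERMISSIONS = {
    "model_developer": {"model_weights", "training_pipeline"},
    "data_steward": {"training_data", "audit_logs"},
    "auditor": {"audit_logs"},
}

@dataclass
class AccessRequest:
    user: str
    role: str
    asset: str

def is_authorised(request: AccessRequest) -> bool:
    """Return True only if the requester's role explicitly covers the asset."""
    allowed = ROLE_PERMISSIONS.get(request.role, set())
    return request.asset in allowed

# Example: a data steward may read training data, but not model weights.
print(is_authorised(AccessRequest("a.lee", "data_steward", "training_data")))  # True
print(is_authorised(AccessRequest("a.lee", "data_steward", "model_weights")))  # False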
Prioritise supply chain security
The security of the software supply chain is fundamental to ensuring the fortification of modern AI applications. This involves safeguarding the entire life cycle, from data acquisition and model development to deployment and maintenance.
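One practical element of supply chain security is integrity checking of model artefacts before they enter the deployment pipeline. The Python sketch below assumes a hypothetical artefact path and a pinned SHA-256 digest obtained out of band (for example, from a vendor's signed release notes or an internal artefact registry); both values are placeholders.

# Illustrative sketch: verify a model artefact against a known-good digest
# before deployment. The path and expected digest below are placeholders.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"
ARTEFACT_PATH = Path("models/classifier-v1.bin")  # placeholder path

def artefact_is_trusted(path: Path, expected_digest: str) -> bool:
    """Hash the artefact in chunks and compare against the pinned digest."""
    if not path.exists():
        return False
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_digest

if not artefact_is_trusted(ARTEFACT_PATH, EXPECTED_SHA256):
    print("Model artefact missing or failed integrity check; do not deploy.")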
Join forces with other public and private organisations
Collaboration is key in the development and deployment of AI. By working together with other public and private organisations, public sector bodies can leverage diverse expertise, resources and data to tackle complex challenges, accelerate research and drive innovation.
Don’t forget about AI-enabled security
AI- and ML-powered cybersecurity solutions allow IT teams to act more swiftly and decisively against modern evolving threats. AI-powered observability tools, for instance, offer a unified dashboard for visibility into security events across networks, infrastructures, applications and databases.
By enabling observability across hybrid IT environments and augmenting it with security management tools, IT and security admins can quickly identify and diagnose security issues and address regulatory compliance problems across complex and distributed infrastructures. This also helps break down IT silos and fosters cross-domain correlation and collaboration.
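As a simplified illustration of cross-domain correlation, the sketch below groups security events from different sources that occur within a short time window. The event fields, sources and window size are assumptions made for the example, not the output of any specific observability tool.

# Minimal sketch of cross-domain correlation: cluster events from different
# sources (network, application, database) that occur close together in time.
from datetime import datetime, timedelta

events = [
    {"source": "network", "time": datetime(2024, 5, 1, 9, 0, 5), "detail": "port scan"},
    {"source": "application", "time": datetime(2024, 5, 1, 9, 0, 40), "detail": "failed admin login"},
    {"source": "database", "time": datetime(2024, 5, 1, 9, 1, 10), "detail": "bulk export"},
]

WINDOW = timedelta(minutes=2)

def correlate(events, window):
    """Return clusters of events whose timestamps fall within the same window."""
    ordered = sorted(events, key=lambda e: e["time"])
    clusters, current = [], [ordered[0]]
    for event in ordered[1:]:
        if event["time"] - current[0]["time"] <= window:
            current.append(event)
        else:
            clusters.append(current)
            current = [event]
    clusters.append(current)
    return clusters

for cluster in correlate(events, WINDOW):
    if len({e["source"] for e in cluster}) > 1:
        print("Correlated incident across domains:", [e["detail"] for e in cluster])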
Consider tenant isolation
Isolating AI infrastructure in dedicated physical servers or data centres creates a physical buffer against external security threats.
This strategy helps protect against risks like data poisoning while maintaining local or private cloud access to AI capabilities. It’s a concept not unlike what the finance industry established decades ago with its PCI zones.
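Physical and tenant isolation is largely an infrastructure decision rather than a coding one, but the network-access side can be sketched briefly. The example below, using an assumed private address range, admits requests to an AI service only from within the isolated zone; the CIDR block and function name are placeholders.

# Hedged sketch: reject requests to an AI service that do not originate from
# the dedicated private network segment. The range below is an assumption.
import ipaddress

PRIVATE_AI_ZONE = ipaddress.ip_network("10.20.0.0/16")  # placeholder range

def is_from_isolated_zone(client_ip: str) -> bool:
    """Allow access only from within the dedicated AI network segment."""
    return ipaddress.ip_address(client_ip) in PRIVATE_AI_ZONE

print(is_from_isolated_zone("10.20.4.17"))   # True: inside the isolated zone
print(is_from_isolated_zone("203.0.113.9"))  # False: external address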
Employ proven cybersecurity solutions
Proven cybersecurity solutions such as firewalls, monitoring and observability tools remain core essentials for maintaining secure AI infrastructure. A combination of these solutions allows IT teams to detect anomalies, monitor suspicious activity and deploy proactive protection measures to prevent unauthorised access.
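As one simple example of the kind of anomaly detection mentioned above, the sketch below flags an hourly request count that sits far outside a recent baseline. The figures and threshold are illustrative assumptions, not recommended values.

# Illustrative anomaly check: flag a count well outside the recent baseline.
from statistics import mean, stdev

baseline_counts = [120, 115, 130, 125, 118, 122]  # recent "normal" hourly counts
latest_count = 640                                # new value to test

def is_anomalous(history, value, threshold=3.0):
    """Flag a value more than `threshold` standard deviations from the baseline."""
    centre, spread = mean(history), stdev(history)
    return spread > 0 and abs(value - centre) / spread > threshold

print(is_anomalous(baseline_counts, latest_count))  # True: far above the baseline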
Prioritising security is a necessary trade-off
Truthfully, prioritising security in AI development might increase costs and slow the development of future AI initiatives, but it’s a necessary trade-off when national security and public trust are on the line.
In the long run, this investment not only helps protect sensitive data but also paves the way for the far-reaching benefits sovereign AI offers — namely, enhanced public service efficiency, a better customer experience and stronger national competitiveness on the global stage.
1. Francis L 2024, ‘AI and GenAI investment areas for public sector in Asia Pacific’, GovInsider, <https://govinsider.asia/intl-en/article/ai-and-genai-investment-areas-for-public-sector-in-asia-pacific>