Building security-centric AI: why it is key to the government's AI ambitions

SolarWinds

By Sascha Giese*
Tuesday, 12 November, 2024


As government agencies test the waters of AI, public sector leaders must consider how they can secure and protect their investment.

The adoption of artificial intelligence (AI) continues unimpeded across multiple industries, capturing the interest of even the public sector. It’s easy to see why. The capabilities of AI could radically transform a wide range of government operations, from streamlining bureaucratic processes to automating routine administrative tasks. The technology can also empower public employees to deliver greater value and elevate the quality of public services, reshaping how government institutions engage and build trust with the citizens they serve.

Given such benefits, it’s unsurprising that government agencies across Australia are looking to double down on AI spending.1 But even as they test the waters of this new technology, public sector leaders must consider how they can secure and protect their investment. One sensible approach is to reassess cyber and data security capabilities. Here, we’ll explore why robust security is more essential than ever in the early days of AI adoption, and the strategies and practices government organisations can employ to strengthen their defences.

The two sides to AI security

The development and deployment of AI-based solutions have changed significantly over the past few years. Previously, creating an AI system that truly understood an organisation and added value over an extended period typically required continuous training on large volumes of data, much of it gathered post-deployment, which alone was a hurdle to any government’s AI ambitions. Depending on the use case, the training data might range from ordinary operational data to sensitive information such as medical records or personally identifiable data. This requirement stemmed from the limited variety of mathematical models and computing power available at the time.

Now, vendors have access to more sophisticated models and theoretically unlimited computing power, making it easier to find a proper match for a given task. This advancement significantly reduces the time required for training on real data, making the process more efficient, secure and productive.

That said, the use of any personally identifiable data raises security questions. Should sensitive national information be used for the purpose of improving public sector efficiency and productivity? Are the risks to national sovereignty and security justifiable?

No single person can answer these questions, but one thing is clear: government organisations cannot afford to rely on off-the-shelf AI solutions from just any provider. The potential exposure of sensitive national data to external, possibly hostile, entities through a third-party AI solution poses unacceptable risks to privacy and national security.

Building a security-centric AI foundation

So, how do you ensure AI is built with security at its core? Below are solutions and best practices governments should employ throughout their AI program.

Security by design

‘Secure by design’ should be a crucial principle for AI development, emphasising the integration of security from the outset, rather than as an afterthought. While this is a multi-pronged approach, part of it involves using access management tools to limit access to AI models and sensitive data to only those involved in development, training and processing. This minimises potential risk vectors and helps ensure secure, responsible data use, backed by clear accountability.
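As a minimal illustration of that access-limiting principle, a deny-by-default permission check with an accountability trail might look like the sketch below. All role, resource and action names are hypothetical and not drawn from any specific tool.

```python
# Minimal role-based access control (RBAC) sketch for AI assets.
# Roles, resources and actions are hypothetical examples.

ROLE_PERMISSIONS = {
    "model-developer": {("model-weights", "read"), ("model-weights", "write")},
    "data-engineer":   {("training-data", "read"), ("training-data", "write")},
    "auditor":         {("access-log", "read")},
}

AUDIT_LOG = []  # every decision is recorded for accountability


def check_access(role: str, resource: str, action: str) -> bool:
    """Allow an action only if the role is explicitly granted it (deny by default)."""
    allowed = (resource, action) in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append((role, resource, action, allowed))
    return allowed
```

The key design choice is deny-by-default: any role or resource not explicitly listed is refused, and every decision, allowed or not, lands in the audit trail.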

Moreover, a comprehensive data governance framework, encompassing clear policies, privacy compliance and regular audits, helps ensure that data is handled responsibly and ethically.

Prioritise supply chain security

The security of the software supply chain is fundamental to ensuring the fortification of modern AI applications. This involves safeguarding the entire life cycle, from data acquisition and model development to deployment and maintenance.
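One small piece of that life cycle can be sketched in code: verifying that a model artifact matches a pinned checksum before it is deployed, so a tampered or substituted file is rejected. The artifact name and contents below are hypothetical placeholders.

```python
import hashlib

# Pinned SHA-256 checksums for approved artifacts.
# The entry below is a hypothetical example, not a real model.
APPROVED_ARTIFACTS = {
    "model-v1.bin": hashlib.sha256(b"trusted model bytes").hexdigest(),
}


def verify_artifact(name: str, content: bytes) -> bool:
    """Accept an artifact only if its hash matches the pinned value."""
    expected = APPROVED_ARTIFACTS.get(name)
    if expected is None:
        return False  # unknown artifacts are rejected outright
    return hashlib.sha256(content).hexdigest() == expected
```

In practice this pinning would sit alongside signature verification and a vetted artifact registry, but the principle is the same: nothing enters the pipeline that cannot be traced to an approved source.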

Join forces with other public and private organisations

Collaboration is key in the development and deployment of AI. By working together with other public and private organisations, public sector bodies can leverage diverse expertise, resources and data to tackle complex challenges, accelerate research and drive innovation.

Don’t forget about AI-enabled security

AI- and ML-powered cybersecurity solutions allow IT teams to act more swiftly and decisively against modern evolving threats. AI-powered observability tools, for instance, offer a unified dashboard for visibility into security events across networks, infrastructures, applications and databases.

By enabling observability across hybrid IT environments and augmenting it with security management tools, IT and security admins can quickly identify and diagnose security issues and address regulatory compliance problems across complex and distributed infrastructures. This also helps break down IT silos and fosters cross-domain correlation and collaboration.

Consider tenant isolation

Isolating AI infrastructure in dedicated physical servers or data centres creates a physical buffer against external security threats.

This strategy helps protect against risks like data poisoning while maintaining local or private cloud access to AI capabilities. It’s a concept not unlike what the finance industry established decades ago with its PCI zones.
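At the network layer, part of that isolation can be illustrated with a simple source-address check: traffic reaches the AI zone only from inside its own private range. The address range below is a hypothetical example, not a recommendation.

```python
import ipaddress

# Hypothetical private network assigned to the isolated AI zone.
AI_ZONE = ipaddress.ip_network("10.20.0.0/16")


def request_allowed(source_ip: str) -> bool:
    """Admit traffic to the AI zone only from addresses within it."""
    return ipaddress.ip_address(source_ip) in AI_ZONE
```

In a real deployment this boundary would be enforced by firewalls and network segmentation rather than application code, but the rule itself, deny everything outside the zone, is the same.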

Employ proven cybersecurity solutions

Proven cybersecurity solutions such as firewalls, monitoring and observability tools remain core essentials for maintaining secure AI infrastructure. A combination of these solutions allows IT teams to detect anomalies, monitor suspicious activity and deploy proactive protection measures to prevent unauthorised access.

Prioritising security is a necessary trade-off

Truthfully, prioritising security in AI development might increase costs and slow the development of future AI initiatives, but it’s a necessary trade-off when national security and public trust are on the line.

In the long run, this investment not only helps protect sensitive data but also paves the way for the far-reaching benefits sovereign AI offers — namely, enhanced public service efficiency, a better customer experience and stronger national competitiveness on the global stage.

1. Francis L 2024, ‘AI and GenAI investment areas for public sector in Asia Pacific’, GovInsider, <https://govinsider.asia/intl-en/article/ai-and-genai-investment-areas-for-public-sector-in-asia-pacific>

*Sascha Giese is Global Tech Evangelist, Observability at SolarWinds. He has more than 15 years of technical IT experience, four of which have been as a senior pre-sales engineer at SolarWinds. Sascha has been responsible for product training for SolarWinds channel partners and customers and has contributed to the company’s professional certification program, SolarWinds Certified Professional.

