Building secure AI: a critical guardrail for Australian policymakers

Palo Alto Networks

By Sarah Sloan, Head Government Affairs and Public Policy ANZ and Indonesia, Palo Alto Networks
Friday, 22 November, 2024


Building secure AI: a critical guardrail for Australian policymakers

Imagine a malicious actor known to authorities is boarding a plane. They move confidently through the airport, blending in seamlessly with the crowd. At security, they present their ID and boarding pass, placing their belongings into trays for screening.

An AI system is deployed as part of airport security, designed to detect threats automatically during scans. This AI model is trained to identify prohibited items such as weapons by analysing X-ray images of passenger luggage.

At the same time, the actor’s cyber team has exploited the AI model’s vulnerabilities using ‘adversarial’ examples: subtly altered images that trick the AI into misclassifying objects. By modifying the appearance of items like knives or guns — perhaps by overlaying a pattern or altering the visual properties in ways that are imperceptible to humans but highly significant to the AI — they manage to deceive the system. The AI misidentifies the weapon as a benign item, such as a metal pen or a toy, allowing it to pass through undetected.

With no red flags raised, they gather their belongings, including the concealed weapons in their bag, and head towards the gate, unnoticed and undetected.

The case for a security guardrail

In October, the Australian Government concluded its consultation on the ‘Safe and Responsible AI in Australia — response to Proposals Paper for introducing mandatory guardrails for AI in high-risk settings’.

The paper outlines governance mechanisms to ensure AI is developed and used safely and responsibly in Australia, proposing 10 guardrails for organisations developing or deploying high-risk AI systems. However, while the paper highlights the importance of ‘safe and responsible AI’, the omission of ‘security’ is notable, especially given that similar frameworks in the UK and US emphasise ‘safe, secure and responsible AI’. Cybersecurity is briefly mentioned under Guardrail 3, which calls for appropriate data governance, privacy and cybersecurity measures, but lacks specific weight and guidance on what cybersecurity measures should be implemented.

As highlighted in the scenario above, while AI has the potential to significantly enhance Australia’s national security, economic prosperity and social cohesion, ensuring its security is paramount.

According to Unit 42’s recently released report, ‘Preparing for Emerging AI Risks’, we have seen evidence of a threat group using AI-enabled tools in attacks. And we have seen attacks that occur at a scale that suggests the presence of AI. Thankfully, at this time AI-powered changes in attacks seem to be evolutionary, not revolutionary. This means attackers are enhancing techniques they already know how to use, rather than using AI to create attacks that have never been seen before. However, this may rapidly change in the current threat environment.

AI security is integral to having safe, responsible and trustworthy AI, and a dedicated security guardrail focused on securing AI at every stage of its development and deployment would set security as a foundational element of Australia’s emerging AI ecosystem.

A cybersecurity guardrail

Given the pivotal role AI will play in shaping Australia’s digital future, it is essential to address the following security practices when establishing a comprehensive guardrail dedicated to securing AI systems throughout their development and deployment.

Recognise secure development practices unique to GenAI systems

While the majority of the recognised secure software development principles can help mitigate common security issues with GenAI systems, there are unique threats and vulnerabilities that must be addressed to help bolster GenAI security. To that end, Palo Alto Networks researchers and experts have developed a GenAI Security Framework that comprises five core security aspects to help enterprises across the AI value chain secure their GenAI development and deployment processes. These include:

  1. Hardening GenAI input/output integrity: mitigating security risks through strong input and output data validation and sanitisation.
  2. Trusted GenAI data lifecycle practices: ensuring that GenAI models are trained on trustworthy and reliable data.
  3. Secure GenAI system infrastructure: safeguarding the system from (model) denial of service attacks.
  4. Trustworthy GenAI governance: ensuring GenAI aligns with desired objectives and ethical standards.
  5. Adversarial GenAI threat defence: detecting and responding effectively to emerging threats to improve security posture.
Further include secure AI software development practices

The above specific development practices for GenAI systems must also be complemented by broader secure AI software development practices including:

  1. Risk identification — identifying risk across the lifecycle of advanced AI systems: CISOs and security teams need to understand AI threats and cybersecurity risks in order to properly secure their enterprise AI transformation.
  2. Risk evaluation — testing measures, red-teaming and exercises: Organisations should ensure that red-team testing processes are supported by clear and understandable metrics to measure multiple attributes. An effective initial metric for red teams to evaluate the security risk is the AI system’s fragility against attacks. This can generally be framed as measuring the attacker’s cost of exploiting an AI system against their potential gains in a successful attack. It is also recommended that tabletop exercises are conducted on a semi-annual basis.
  3. Risk management — advancing the security of AI systems (secure AI by design): Comprehensive AI security posture management is recommended to maintain the security and integrity of AI and ML systems. This involves continuous monitoring, assessment and improvement of the security posture of AI models, data and infrastructure. AI-SPM includes identifying and addressing vulnerabilities, misconfigurations and potential risks associated with AI adoption, as well as ensuring compliance with relevant privacy and security regulations. It focuses on the core functions of visibility and discovery; data governance; vulnerability management; runtime monitoring and detection; risk mitigation and response; and governance and compliance.
  4. Post-deployment monitoring and reporting — comprehensive AI usage monitoring and data protection: Securing enterprise AI usage (employees using enterprise AI apps) requires new levels of visibility and control to inventory and sanction AI usage and protect data, AI apps and AI models. Organisations need the ability to: discover all AI apps, models, users and data automatically; assess the AI ecosystem risk exposure; and protect runtime AI applications, models and datasets from AI-specific threats and foundational attacks, like insecure outputs, prompt injection attacks and sensitive data leaks.
  5. Transparency reporting and organisational governance — risk-based governance, principles and policies: As we grow our AI capabilities, we also recognise the need for transparency in the governance process we follow in our AI development and deployment.
  6. Information security — best practices for data security of advanced AI systems: This includes ensuring complete visibility of attack surfaces, leveraging the power of AI and automation, promoting Secure AI by Design, implementing enterprise-wide zero trust network architecture, protecting cloud infrastructure, applications and data, and maintaining an incident response plan to prepare for and respond to cyber incidents.

Conclusion

Collaborating to secure AI is not just essential but also an opportunity to shape the future of technology in a way that safeguards Australia’s interests. By partnering with industry, academia and policymakers, we can ensure that AI systems are developed responsibly, securely and in alignment with national values. Together we can create a framework that encourages innovation while maintaining the integrity of our digital landscape, ensuring that AI remains a force for good, supporting national security, economic growth and societal wellbeing.

Image credit: iStock.com/BlackJack3D

Related Articles

Building security‍-‍centric AI: why it is key to the government's AI ambitions

As government agencies test the waters of AI, public sector leaders must consider how they can...

State government agencies still struggling with securing user access

Audit reports have shown that Australian government agencies in four states experience challenges...

Balancing digital innovation and cybersecurity in the public sector

Balancing digital innovation with cybersecurity is not an easy task; however, it is one that...


  • All content Copyright © 2024 Westwick-Farrow Pty Ltd