Risk vs productivity: how AI is impacting cybersecurity in the public sector

Tanium

By James Greenwood*
Wednesday, 12 June, 2024



If Australia is aiming to be a modern and leading digital economy by 2030, we need to embrace AI.

A recent Kingston AI Group report found AI could boost the Australian economy by $200 billion a year.1 Despite this, key industries including government are unlikely to realise these benefits because they are not yet embracing AI. Much of this hesitation stems from the challenge of balancing perceived risk against potential productivity gains.

When we think about how AI can increase efficiencies and boost productivity in government, one of the areas where it is proving most beneficial is in defending against cyber threats. Organisations need to realise that attackers are already leveraging AI to find new, more effective ways to breach their systems. Sitting on our hands and succumbing to analysis paralysis will only waste time and stifle the productivity gains AI can bring. That said, there are some steps that need to be taken before an organisation can safely deploy AI.

The role of AI in securing the public sector and economy

While it’s always wise to approach new technologies with a certain level of caution — especially ones as powerful as AI — the real risks of AI are likely to be much more mundane than the worst-case scenarios we hear about in the media.

What is a reality, though, is that bad actors are deploying AI to create more sophisticated attacks — and in a government context, breaches are significantly more serious. The good news is that AI cuts both ways. Organisations can leverage the same technology as attackers to build resilience against AI threats. It’s like fighting fire with fire. Deloitte found2 that more than two-thirds (69%) of enterprises believe AI is necessary for cybersecurity because threats have risen to levels beyond the capacity of human cybersecurity analysts.

The benefits of AI when it comes to securing the public sector are clear. It can process vast amounts of data in real time, shifting away from a traditionally manual and error-prone process of identifying and remediating vulnerabilities. For example, public sector organisations are required to comply with the Essential Eight. However, proving compliance, which is typically done through manual audits, is near impossible without visibility over all endpoints. When holistic environmental data is provided, AI can deliver that visibility in real time at the click of a button.
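
To make the endpoint-visibility point concrete, here is a toy sketch of rolling per-endpoint telemetry up into an Essential Eight-style compliance view. The endpoint records and control names are hypothetical illustrations, not any real agency's schema or Tanium's product behaviour.

```python
# Toy illustration: aggregate endpoint telemetry into a compliance summary.
# The records and control names below are hypothetical.

endpoints = [
    {"host": "ws-001", "patched": True,  "mfa_enabled": True},
    {"host": "ws-002", "patched": False, "mfa_enabled": True},
    {"host": "ws-003", "patched": True,  "mfa_enabled": False},
]

def compliance_summary(endpoints: list) -> dict:
    """Percentage of endpoints meeting each control, rounded down."""
    total = len(endpoints)
    return {
        "patched": 100 * sum(e["patched"] for e in endpoints) // total,
        "mfa_enabled": 100 * sum(e["mfa_enabled"] for e in endpoints) // total,
    }

print(compliance_summary(endpoints))
# {'patched': 66, 'mfa_enabled': 66}
```

The point of an automated roll-up like this is that it can run continuously over live data, whereas a manual audit captures only a snapshot.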

Overall, having AI in our arsenal decreases the chances of an attack, or at least reduces the impact an attack can have. As we all know, attacks are costly and, for the public sector, can have a significant impact on the economy. Think about key government services, such as Centrelink or the public transport network, going down. By delaying AI uptake, the public sector risks leaving its windows open to attacks.

How to safely prepare for AI adoption

While deploying AI to improve public sector cyber resilience is critical, there are some steps that need to be taken before a government agency can be considered ‘AI-ready’. AI is only as good as the data it’s given, and if an agency doesn’t have strong data governance in place, then any AI efforts won’t be set up to succeed.

Firstly, agencies need to put a data governance program in place. This involves setting strong boundaries to prevent breaching any compliance or privacy regulations when using AI technologies. This might mean ensuring data is encrypted or not accessible to certain staff. As part of this program it’s also important to establish clear responsibilities for the AI project. For example, who is responsible for managing the AI deployment? How will the data be collected, stored and managed?
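
The access-control boundary described above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical role names and record fields; a real agency would enforce this in its identity and data platforms rather than application code.

```python
# Minimal sketch: role-based filtering before records reach an AI pipeline.
# Roles and field names are illustrative assumptions.

ROLE_PERMISSIONS = {
    "analyst": {"incident_id", "timestamp", "severity"},
    "data_steward": {"incident_id", "timestamp", "severity",
                     "tax_file_number"},
}

def filter_record(record: dict, role: str) -> dict:
    """Return only the fields the given role is permitted to see."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "incident_id": "INC-042",
    "timestamp": "2024-06-12T09:30:00",
    "severity": "high",
    "tax_file_number": "123 456 789",
}

# An analyst's view excludes the tax file number entirely.
print(filter_record(record, "analyst"))
```

Enforcing the boundary before data enters the AI workflow, rather than trusting the model or its users, is what keeps the governance program auditable.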

Secondly, in order to properly train an AI model, organisations need to establish strong data hygiene: data must be validated and sanitised to make sure no sensitive, regulated or intellectual property data is misused. This is incredibly important in reducing the impact of a potential breach. Lastly, incident response plans need to be updated to ensure they address any new AI tools should they misbehave or come under attack.
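
The sanitisation step can be as simple as redacting obvious identifiers before text is used to train or prompt a model. The sketch below is an assumption-laden toy: the two patterns (email addresses and Australian Tax File Numbers) are illustrative only, and a production pipeline would use a vetted PII-detection library rather than hand-rolled regular expressions.

```python
import re

# Toy sanitiser: redact email addresses and TFN-like numbers before
# text reaches an AI pipeline. Patterns are illustrative, not exhaustive.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
TFN = re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b")  # e.g. 123 456 789

def sanitise(text: str) -> str:
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    text = TFN.sub("[REDACTED_TFN]", text)
    return text

sample = "Contact jane.citizen@example.gov.au, TFN 123 456 789."
print(sanitise(sample))
# Contact [REDACTED_EMAIL], TFN [REDACTED_TFN].
```

Validating and redacting at ingestion means a compromised model or prompt log exposes placeholders, not the underlying identifiers.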

While AI adoption has come with a lot of hesitation, so have other revolutionary technologies, from electricity to cloud computing. If Australia is aiming to be a modern and leading digital economy by 2030, we need to embrace AI. Although a healthy level of caution is wise, too much paralysis, particularly in the public sector, will leave us behind the eight ball in years to come.

1. Davidson J 2024, ‘AI to create 150,000 jobs, claim the academics who study it’, Australian Financial Review, 16 April 2024, <https://www.afr.com/technology/how-ai-will-bring-200b-to-the-australian-economy-20240415-p5fjuk>
2. Deloitte 2023, ‘AI in cybersecurity: A double-edged sword’, Deloitte Middle East Point of View, Fall 2023, <https://www2.deloitte.com/xe/en/pages/about-deloitte/articles/securing-the-future/ai-in-cybersecurity.html>

*James Greenwood is a Regional Vice President of Technical Account Management at Tanium with more than 20 years of experience in IT. James has a passion for helping customers solve complex problems through the use of technology and automation.



