AI regulation: work with new technologies, not against them
Generative AI is changing the landscape. How do we mitigate the risks and effectively regulate its use?
We all knew generative AI tools like ChatGPT were fast approaching, yet what has been interesting to watch is the subsequent backstepping from the very people who created them. We all saw Geoffrey Hinton, dubbed one of the “Godfathers of AI”, quit his job at Google just months after the launch of ChatGPT so he could speak out about the risks of AI. And we can’t forget the more than 1000 high-profile industry executives, from Elon Musk to Steve Wozniak, who signed an open letter calling for a six-month pause on AI development to allow regulation to catch up.
The hesitation from those behind the technology has served as a much-needed warning about blindly adopting a technology before understanding its consequences.
Minister for Industry and Science Ed Husic has already signalled intent to regulate ‘high-risk’ AI to curb threats like deepfakes and algorithmic bias. In response, Microsoft urged the government to assess risk based on a tool’s outcomes rather than banning AI technologies outright, particularly given that the tools to measure potential AI harm are themselves still in development.
The thing is, we can’t put generative AI back in the box it came in. It’s here, and organisations, their employees and the general public are going to use it to find efficiencies, reduce human error and make more informed decisions. And whilst the AI waters are murky, it’s clear that as a country, and even at a global level, we don’t know enough about the risks to regulate against them effectively and in a timely way.
So, if regulation is not the answer to mitigating AI risk in the short term, how can the government make sure that Australian organisations stay protected?
When regulation is not the answer
When it comes to legislating technologies, we’ve seen the same situation play out time and time again. Whether it’s updating the Privacy Act or legislating data sovereignty, by the time the government has defined the problem, the issue has evolved so rapidly that any resulting regulation is already out of date.
The fact of the matter is that government legislation is not what we need to mitigate the immediate threats posed by AI. The legislative process is far too slow, and the adoption of AI is going to outpace any new policies before the ink is even dry. Instead, we’d be far better off with frameworks that can be drafted and then adapted quickly to evolve with the technology.
The other hurdle is that Australia doesn’t currently have the talent to ensure effective regulation. But the truth is, nowhere does. There simply aren’t enough people in Australia who understand generative AI and its consequences well enough to craft the right level of legislation. It will take time, and collaboration with the private sector, for governments around the world to better understand the risks and how to mitigate them.
Lastly, even if we were to regulate AI, it’s all a waste of time if the rules aren’t enforced. Australia is the land of the slap on the wrist: nobody has been fined 30% of their domestic revenue for breaching the Privacy Act, despite that penalty being written into law. The government needs a clear path to enforcement before starting down the road of regulating AI.
If it’s not clear by now, my point is that protecting Australians against AI today will largely fall on the private sector. What we need is a pragmatic approach, outlined by the government, that accepts AI as an integral part of the future of business whilst addressing the risks it poses.
A pragmatic approach to risk
There’s a lot of confusion within organisations when it comes to AI adoption. We know effective regulation is a long way off, meaning the government needs to offer clear advice to establish a baseline for the private sector. Making sweeping statements threatening to ban AI is not going to stop adoption or reduce potential harm in the short term. The government needs to accept that we need to work with AI, not against it.
Whilst AI in the workplace is likely to start off as a tool to help with relatively benign tasks like researching or writing presentations, it only takes one person uploading sensitive data into a third-party AI tool to open the company up to significant breaches of data privacy laws.
The government should be advocating for a collaborative, company-wide approach to implementing AI tools: properly assessing the risks and educating staff on how to use the tools in a way that aligns with company policies and compliance requirements like data privacy and sovereignty. It may sound like an arduous task for IT teams, but this means reading terms of service to determine exactly how data is processed, stored and accessed. From there, organisations can create practical strategies to reduce the risks AI tools pose.
To go one step further, the government should be encouraging the private sector to establish formal working groups at an organisational level to investigate the benefits of each AI tool and develop a tailored action plan for rolling out new technologies across the business. A working group will ensure that AI tools provide actual value to the organisation, rather than adoption becoming a free-for-all driven by individual needs. It will also be critical in making sure that AI adoption is consistent across all functions and meets ethical, regulatory and compliance requirements.
As AI evolves, so will our understanding of the benefits and risks it brings to organisations and society more broadly. While regulation catches up, the private sector needs the government to wake up to the reality that the technology is already pervasive across Australian businesses and to provide clear advice accordingly. A pragmatic, outcomes-based approach to AI risk is required to protect Australian organisations and citizens whilst the government grapples with long-term regulation strategies.