Unleashing AI with mature, risk-based regulation

By Paul Leahy, Director Public Sector, Workday Australia and New Zealand
Wednesday, 25 October, 2023


AI is shaping the future of work. Proactive organisations can use AI models, services and applications to amplify human potential, grow value and free employees to focus on more strategic and fulfilling tasks. However, without adequate safeguards AI can also pose very real risks to businesses and individuals, and it's critically important that we navigate these risks now to ensure a responsible AI future for everyone.

Since 2019, we've been helping to lay the groundwork for smart regulatory safeguards and taking a leading role in shaping AI-focused policy discussions around the world. In the United States, our company actively participates in federal, state and local policy dialogues to advocate for thoughtful and effective regulations governing AI applications. We've also built strong partnerships with the European Union, working collaboratively to develop comprehensive policy approaches that promote the responsible use of AI.

With AI innovation advancing at breakneck speed, we applaud the Australian Government's examination of the domestic regulatory landscape to minimise policy overlap and ensure coherence, and we welcome the release of the Department of Industry, Science and Resources' discussion paper on Safe and Responsible AI in Australia. This proactive regulatory review is a constructive first step towards the safe and responsible development and adoption of AI in Australia.

Globally, our organisation is pleased to see many policymakers recognising the importance of a risk-based approach to AI governance, which paves the way for innovation and adoption while minimising potential harm.

Focus regulation on consequential decisions

To strike the most effective balance between innovation and safeguards, regulation should be risk-based. Because AI risk varies by individual use cases, regulators should focus on those cases that have the potential to impact people's lives in a significant way. For example, the tools used to hire, promote or terminate employment, or determine access to credit, health care or housing, should be examined closely and with the recognition that risk profiles can vary widely even within a given area. Rather than assign risks by sector or take other sweeping approaches, regulators should define higher-risk use cases and focus their regulatory efforts accordingly.

We support a tiered approach based on different levels of risk and recommend that the Australian Government adopt a risk management framework aligned with leading frameworks in the European Union and the United States.

Target automated AI rather than AI that includes human decision-making

Policymakers should also focus on fully automated AI tools that aim to replace human decision-making rather than those that augment human activity. Automated tools can make consequential decisions at a higher volume and greater velocity than a human, but without the careful judgment a human brings. Maintaining a "human in the loop" is critical to ensuring trust in AI-driven systems.

While impact assessments may still be warranted in some cases where human oversight exists, they may not be appropriate where the efficiency gains of automation at scale outweigh potentially minor impacts on individuals. This nuanced distinction is important to the safe and responsible development and adoption of AI.

Adopt AI impact assessments rather than immature third-party audits

With AI governance best practices, standards and accountability tools still maturing, Workday believes policymakers should weigh which accountability approaches are proven and workable today against those that remain immature.

Impact assessments are key to the success of risk management frameworks because they are a proven accountability tool. Many organisations already use them to identify, document and mitigate the risks posed by technology, particularly in privacy and data protection, where they are required under Europe's General Data Protection Regulation. They can also help detect and mitigate potential bias that could result in unlawful discrimination.

There is also a growing consensus among lawmakers, business leaders and civil society that impact assessments for high-risk AI tools are the most promising AI accountability tool available. By contrast, AI auditing is a field that is still in development, with no technical standards or common criteria to audit AI tools against. Instituting premature third-party audit requirements may instead diminish trust in the AI marketplace by failing to promote consistent accountability.

Delivering safe and responsible AI in Australia requires a regulatory approach that applies smart safeguards to protect individuals and businesses, while minimising constraints on innovation. The Australian Government's review is a unique opportunity to get these settings right at a crucial point in the development of AI.

Image credit: iStock.com/Blue Planet Studio
