Responsible AI

ServiceNow Australia Pty Ltd

By John Asquith, Government Relations Lead, ServiceNow Australia
Tuesday, 06 September, 2022


Global advances in artificial intelligence (AI) and predictive analytics are revealing better ways to get things done. Millions of Aussies have embraced AI-based applications to make everyday tasks easier, faster and more meaningful. This is especially true for the 40% of knowledge workers who have adopted hybrid work arrangements post-pandemic.

Dr Catriona Wallace, CEO of Ethical AI Advisory, believes the pace of change will only accelerate and it must be met with an increase in responsible design and development.

“Over the next few decades, AI will become the most intelligent entity on the planet,” she said.

“We should be excited about this possibility, but conscious of the risks. Leaders need to act now, double down on responsible and ethical AI, and get diversity into the design and build of AI tools.”

In a new report commissioned by our team, Wallace looks at how AI will transform the way people work, live and play by 2032, paving the way for a nationwide digital gold rush. She believes that when it comes to AI, employees and customers will overwhelmingly favour organisations that actively practise ethics, accessibility and fairness.

A new set of rules

With Australia’s rapid uptake of digital services, business leaders are playing catch-up to build trust in how AI is governed and used. Questions of integrity and responsibility loom large. Who’s making the rules for how AI is applied, and how do we hold them to account?

People want convenience, choice and frictionless services, but not at the expense of fairness. They don’t want biased or opaque decision-making processes that can’t be understood or questioned.

As investment in digital transformation accelerates, ethical decision-making — and the technology that underpins it — presents a new layer of organisational responsibility for leaders.

Invisible AI

The average Australian already interacts with AI around 100 times each day, according to our report. However, there’s a trust gap between consumers and business leaders when it comes to AI use. Our report reveals that 96% of Australian executives believe AI is becoming pervasive, yet only 22% of Aussies trust how companies are currently implementing AI.

This trust gap presents a challenge for businesses and a need for organisations to be active in their ethical decision-making. The need for AI ethics is more pertinent than ever, given the predicted pervasiveness of the technology.

According to Wallace, by 2032 we will interact with AI in almost every activity and function we perform, hundreds of times a day, even when we sleep. AI will be everywhere, all the time, often without us knowing.

So, how can executives better prepare for AI’s omnipresence? Wallace advises organisations to focus on three priorities that will enable them to realise AI’s full potential, while reducing organisational risk.

1. Ethics and diversity must be built into AI

Designers of AI systems know what data goes in — and what answers come out. However, what happens in between is often a mystery. Hidden biases in the data can deliver results that are inaccurate, unethical and even illegal.
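
To make this concrete, the short sketch below shows one simple way a hidden bias can surface in an audit: comparing a model’s outcomes across groups. It is a minimal illustration only; the data, group labels and 0.8 disparity threshold are hypothetical and not drawn from the report.

# Minimal, illustrative bias audit (hypothetical data, not from the report).
from collections import defaultdict

# Hypothetical model decisions: (applicant group, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

# Count decisions and approvals per group.
totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True/False counts as 1/0

rates = {g: approvals[g] / totals[g] for g in totals}
print("Approval rates by group:", rates)

# Flag groups approved at well below the best-performing group's rate
# (a rough "four-fifths"-style check, used here purely for illustration).
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact: {group} approved at {rate:.0%} "
              f"vs best group at {best:.0%}")

Even a basic check like this can reveal a skew that was invisible when only aggregate accuracy was reported.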

Many companies struggle to quantify the ROI of AI governance measures such as building fairness into systems, earning the trust of employees and customers, and ensuring regulatory compliance. Wallace predicts the risks of inaction are increasing, and that Australian regulators will move more aggressively against irresponsible operators.

Faced with increased stakeholder pressure, organisations must develop responsible AI strategies that reduce the chance of causing unintended harm to employees or customers. These must be clearly articulated in company policies and deployed wherever AI is being used.

The window for treating transparency as optional is closing. Reports show that this year, more companies will factor ethical responsibility into employee and customer journeys.

“Voluntary guidelines like the Australian Government’s AI Ethics Framework will soon be replaced with minimum required standards, and responsible use of AI will be required by law,” Wallace has predicted.

“When this happens, responsible AI will join other risk and compliance topics as a board-level imperative.”

2. Governance is key

At our recent Knowledge 2022 conference, speakers — including NSW Minister for Customer Experience and Digital Government Victor Dominello — suggested a credential-based digital identity would be a catalyst for open ecosystems, marketplaces and platforms, representing the next generation of digital citizen experiences.

Dominello stressed the urgency for citizen services to catch up with consumer-grade experiences, saying “the biggest productivity play we have as a nation is getting digital identity sorted for Australians — we’re still mucking around with paper and plastic cards. But to do that, trust in how data is secured and managed is critical.”

3. Meet people where they are

Our research has found Australians generally want speed, transparency and a personalised approach when resolving customer service issues. However, Wallace’s analysis reveals two new user mentalities are emerging. The ‘digital experiencers’ will embrace technology with few limits, while the ‘organic experiencers’ (roughly 25% of the population) will demand choice in how they interact with brands and employers. This group will reject digital-only models, preferring to pick and choose between touchpoints based on the task at hand.

This divide means business and government will need to design products and services that cater to both groups.

Take our client, Energy Queensland, as an example. The energy provider manages 247,000 km of network for more than two million customers. This large-scale operation requires a seamless flow of real-time information across the organisation. Disconnected, home-grown systems meant employees were wasting valuable time waiting for decisions, actions and responses, causing bottlenecks and eroding trust.

When implementing a new digital strategy, the organisation wanted a consistent experience, without forcing every employee on to digital channels. Energy Queensland created a process that allowed people to log requests via an app, but also continued to offer a phone helpdesk, giving employees time to adjust to the change. Meeting staff ‘where they are’ and where their preferences lie increased uptake, while also allowing digital users to gain back critical time.

Responsible AI becomes business strategy

Employees and customers will increasingly decide which brands they engage with, based on responsibility standards set out by leaders. By focusing on the ethical design and delivery of products and services today, organisations can become more agile and adaptive, while staying ahead of the regulatory curve.

Forward-thinking firms will invest in responsible AI, fair business practices that meet stakeholder expectations and systems that empower stakeholders via greater choice in how and when they interact with brands.
