Why we urgently need a GenAI regulatory framework
From writing code and composing essays to answering questions and assisting with everyday tasks, generative artificial intelligence (GenAI) is transforming the way people work, study and find information. Despite current limitations, including ‘hallucinations’, misinformation, bias and a lack of explainability, the technology is poised to drive the next wave of disruption across nearly every industry.
Regulatory moves so far
Due to these concerns, there’s a growing consensus in Australia that GenAI needs a regulatory framework to ensure its safe and responsible development and deployment.
The Albanese government has released two papers on the issue. The Safe and Responsible AI in Australia discussion paper examines existing regulatory and governance responses in Australia and overseas to identify potential gaps and recommend options to strengthen the framework. The National Science and Technology Council’s Rapid Response Report: Generative AI assesses the potential risks and opportunities of AI, providing a scientific basis for discussions about the way forward.
In its submission to the federal government’s call for industry input, Google has urged regulators to look at the bigger picture. The company suggests copyright laws should be relaxed to allow AI systems to be trained on copyrighted material, arguing that talent and opportunity are at risk: “the lack of such copyright flexibilities means that investment in and development of AI and machine-learning technologies is happening and will continue to happen overseas”. It also argues for greater flexibility in sending sensitive data offshore, and says companies should not face overly cumbersome requirements to justify how AI arrives at its decisions.
The federal government has issued eight voluntary AI Ethics Principles and called for public submissions on the matter, the results of which are expected to drive decisions on potential regulatory changes.
Other nations have also taken steps in this direction. The European Union has developed legislation that would require companies behind GenAI platforms such as ChatGPT to disclose any copyrighted material used to train them. In the United States, the Department of Justice (DOJ) and other federal agencies have issued a joint statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems.
Industries that require regulation
The education sector has been hugely impacted by generative AI. There have been instances of students using ChatGPT to game the system and pass assessments, from high school graduation exams to legal and medical board exams. In response, educators have rushed to deploy tools that try to detect AI-generated essays and ‘cheating’.
There is considerable debate around whether such technology should be allowed at all. Curtailing GenAI in education may not be the answer, as it seems inevitable that students will continue to use it and find ways to circumvent bans.
Beyond the classroom, regulation is needed in industries where compliance and safety are crucial, to ensure accountability, mitigate risks and safeguard public welfare. The productivity and data-analysis benefits of applying AI models should be combined with human oversight, which may mean incorporating additional checks and validation processes to verify compliance.
Establishing clear guidelines and validation processes will ensure AI systems align with regulatory requirements, maintaining standards for safety and compliance.
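As a purely illustrative example, the sketch below (all names and the workflow are hypothetical, not a prescribed implementation) shows one shape such a validation process might take: generated output is held in a draft state and cannot be published until a human reviewer approves it.

```python
# Minimal sketch of a human-oversight gate (all names hypothetical):
# AI output is blocked from publication until a reviewer signs off.
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    approved: bool = False

def ai_generate(prompt: str) -> Draft:
    """Stand-in for any GenAI call; always returns an unapproved draft."""
    return Draft(content=f"AI-generated response to: {prompt}")

def human_review(draft: Draft, reviewer_ok: bool) -> Draft:
    """Compliance checkpoint: only a human reviewer can approve a draft."""
    draft.approved = reviewer_ok
    return draft

def publish(draft: Draft) -> str:
    """Refuse to release anything that has not passed human review."""
    if not draft.approved:
        raise PermissionError("Draft has not passed human review")
    return draft.content

draft = ai_generate("Summarise the new safety regulation")
draft = human_review(draft, reviewer_ok=True)  # reviewer verifies compliance
print(publish(draft))
```

The essential design choice is that approval is a separate, human-controlled step: the generation code cannot mark its own output as compliant.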
As exciting as these applications are, GenAI is intended to support and assist human workers, not replace them. The more powerful and widely used GenAI becomes, the greater its potential for negative influence, and harnessing it will require a careful and responsible approach to ensure trustworthiness and transparency.
While AI requires regulation, technology providers cannot simply outsource the ‘how’ to regulators; safeguards need to be built into the technology itself. This is where graph technology comes in, helping to solve issues on both the innovation and regulation fronts and making AI more productive and ethical in the process.
GenAI algorithms can inherit or amplify biases from training data, leading to discriminatory outcomes that reinforce existing inequalities. For example, if training data used to create a hiring algorithm is biased towards specific demographics, the algorithm may discriminate against other groups.
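To make the mechanism concrete, here is a minimal, self-contained sketch using fabricated toy numbers: a model fitted to historically skewed hiring records simply learns the skew, regardless of individual merit.

```python
# Toy illustration only: hypothetical hiring records skewed towards group A.
# A naive model fitted to these outcomes learns the historical bias.
records = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def learned_hire_rate(group: str) -> float:
    """The hire probability a frequency-based model would learn for a group."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("A", "B"):
    print(f"Group {group}: learned hire probability = {learned_hire_rate(group):.2f}")

# Prints 0.80 for group A and 0.30 for group B: candidates from group B
# are ranked lower purely because of the historical data they resemble.
```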
However, the technology that powers GenAI, namely large language models (LLMs), also presents opportunities to overcome these challenges.
Enabling LLMs to ingest large volumes of text and make sense of it using knowledge graphs and graph data science algorithms can significantly reduce the risk of errors. Training LLMs on curated, high-quality structured data helps GenAI achieve accuracy, explainability, compliance and reproducibility.
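As a simplified illustration of the idea (the graph contents, facts and function names here are hypothetical), a knowledge graph of curated facts can be used to check a model’s claim before it reaches the user:

```python
# Minimal sketch: grounding a GenAI answer against a curated knowledge graph.
# The tiny dict stands in for a real graph database; facts are illustrative.
knowledge_graph = {
    ("AI Ethics Principles", "issued_by"): "Australian Government",
    ("AI Ethics Principles", "count"): "8",
}

def grounded_answer(subject: str, relation: str, llm_claim: str) -> str:
    """Return the LLM's claim only if the knowledge graph confirms it;
    otherwise fall back to the curated fact, or flag the gap."""
    fact = knowledge_graph.get((subject, relation))
    if fact is None:
        return f"[unverified] {llm_claim}"  # no curated fact: flag the claim
    if llm_claim.strip() == fact:
        return llm_claim                    # claim matches curated data
    return fact                             # override the hallucinated claim

# The LLM 'hallucinates' ten principles; the graph corrects it to eight.
print(grounded_answer("AI Ethics Principles", "count", "10"))  # -> "8"
```

In a production setting the dictionary would be replaced by queries against a graph database, but the pattern is the same: the curated graph, not the model, is the source of truth.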
For technologies where compliance or safety is important, the ideal response is a policy environment that informs their use. Our best approach is to blend the best of GenAI with other tools so that safety, rigour and transparency are upheld and the technology truly benefits organisations, businesses and society at large. The data and AI investment decisions made today will determine the leaders of the future.