US urged to double non-defence AI spending
An expert advisory committee on the use of AI in national security has urged the US Congress to double non-defence AI spending in FY21. In its first quarterly set of recommendations since its initial report in November, the National Security Commission on Artificial Intelligence has made 43 actionable recommendations for policymakers to consider when setting budgets for the fiscal year.
While the panel has a national security focus, this first quarterly update concentrates on improving non-defence AI spending, the area where the panel believes the US is falling further behind.
The report therefore recommends that US Congress increase non-defence AI funding by around US$1 billion (A$1.64 billion) in FY21. It calls for US$450 million of the additional funding to be allocated to the National Science Foundation (NSF) to support foundational AI software and hardware research.
A further US$300 million should be provided to the Department of Energy (DoE), which oversees 17 national laboratories.
The report also states that US$125 million should be allocated to the National Institutes of Health (NIH) to explore opportunities for incorporating AI into the health sector, including using AI to help manage pandemics such as the current COVID-19 crisis.
The remaining recommended allocations are US$75 million for NASA to explore deployment of AI in space missions, US$50 million for the National Institute of Standards and Technology (NIST) to allow the agency to take the lead in developing protocols and nationwide testing standards for AI, and US$100 million to expand AI fellowship and scholarship programs managed by various government agencies.
The US Department of Defense (DoD) is already spending heavily on AI and is further along in terms of developing the frameworks and principles for AI research.
For example, the department recently adopted a series of ethical principles governing the use of AI in warfare. The principles state that the use of AI in national defence should be responsible, equitable, traceable, reliable and governable.
The report recommends that efforts to advance ethical and responsible use of AI be expanded by integrating ethical and responsible AI training within general AI courses and sharing access to these courses broadly with US law enforcement organisations.
The report also recommends that funding for the Defense Advanced Research Projects Agency's microelectronics program be increased to US$500 million.
Government spending on microelectronics research should be expanded so the US can develop novel and resilient sources for producing, integrating, assembling and testing AI-enabling microelectronics, the committee said.
More broadly, the report recommends that additional defence and non-defence AI research funding for FY21 be concentrated on six specific areas of AI which have the potential to benefit most from government research funding.
These are 1) novel machine learning approaches, 2) testing, evaluation, verification and validation of AI systems, 3) methods for overcoming barriers to machine learning, 4) complex multi-agent scenarios, 5) AI for modelling, simulation and design, and 6) advanced scene understanding.
As well as funding recommendations, the report advocates the creation of a task force charged with establishing a national AI research centre.
It calls for US$25 million in funding for the first year of a five-year pilot program to develop National AI Research Resource (NAIRR) infrastructure to help bridge the resource gap between AI researchers in academia and those in the more deeply funded private sector.
The report also includes organisational restructuring recommendations that would accelerate the adoption of AI within the DoD. Other recommendations cover strengthening the AI workforce, promoting US leadership in the development of AI hardware, and deepening research into understanding and responding to AI-related threats from foreign adversaries.
Finally, the report includes a series of recommendations aimed at improving AI cooperation among key allies and partners — including Australia and the other Five Eyes alliance members.
Australia has started establishing its own frameworks supporting AI research. Standards Australia this year launched an AI Standards Roadmap containing recommendations aimed at allowing Australia to take a leading international role in AI development.