Some AI research should be halted, top academic says

Image: University of New South Wales

Technologies such as facial recognition pose too great a danger, Australian artificial intelligence forum hears

Artificial intelligence researchers should avoid working in certain areas because of the ethical risks, a national forum has heard.

Toby Walsh (pictured), an AI researcher at the University of New South Wales and a fellow of the Association for Computing Machinery, said that facial recognition was “such a potentially dangerous technology that we shouldn’t be working on it at all”.

Walsh was speaking at a forum on AI and the law, organised by the Australian Academy of Science and the Australian Academy of Law on 7 October.

He later told Research Professional News that ethical considerations needed to come first, pointing to medicine as an example. Medicine is a good starting point “in regards to what are the ethical principles that should be guiding the safe deployment of this technology in our lives”, he said.

‘Inherently dual-use’

Walsh said that he would like to see the recommendations of a May 2021 report from the Australian Human Rights Commission implemented. He served as an expert adviser for the report, which recommended “a moratorium on some high-risk uses of facial recognition technology, and on the use of ‘black box’ or opaque AI in decision-making by corporations and by government”.

The commission found that only one in three Australians said they trusted AI technology, and it recommended the creation of a national AI safety commissioner.

Walsh said that even if certain uses were banned, it was “a good question” whether researchers were considering the possible uses of their work before going ahead. All AI technology is “inherently dual-use”, he said. Ethics committees consider only research with clear impacts on humans, but some work could be repurposed later.

There is also a disconnect in ethics decision-making between the clear question of physical harm and less clear mental or emotional harm, he said.

While stopping research on a particular topic can be “very challenging”, the research community has done it in the past, including in relation to genetic research on humans, he said.

Dangers and unintended consequences

Walsh also worked on a recent global report on AI, published by Stanford University, known as the AI100 report. It says that “government institutions are still behind the curve, and sustained investment of time and resources will be needed to meet the challenges posed by rapidly evolving technology”.

“The AI research community itself has a critical role to play in this regard, learning how to share important trends and findings with the public in informative and actionable ways, free of hype and clear about the dangers and unintended consequences along with the opportunities and benefits. AI researchers should also recognise that complete autonomy is not the eventual goal for AI systems,” the AI100 report says.

Australia should look to developments in Europe around regulation not only of the use of AI but also of the research that produces it, Walsh said.

He added that even when public research was constrained by ethical guidelines, there were serious concerns about the activities of private organisations. For example, driverless cars are being tested without any clarity about the algorithms behind them or their actual effects, and without peer-reviewed data. “Tesla doesn’t publish,” he said.

Facial recognition is potentially so dangerous that the risks outweigh the benefits, except possibly in limited uses such as improving access for disabled people, he said. “Facial recognition is troubling when it works because it allows you to scale up surveillance…and it’s troubling when it gets it wrong. You can’t change your face.”