South Korea university under fire for ‘killer robot’ AI research

DOZENS of researchers from around the world are planning to boycott a leading South Korean research university over its launch of an Artificial Intelligence (AI) weapons lab, citing concerns about the development of “killer robots”.

According to Times Higher Education, AI and robotics researchers from the University of Cambridge, Cornell University, the University of California, Berkeley and 52 other institutions plan to cease all contact with the Korea Advanced Institute of Science and Technology (KAIST) over the new research centre.

The group, in an open letter, pointed to media reports that the “Research Centre for the Convergence of National Defence and Artificial Intelligence” was involved in the development of autonomous armaments, warning that the “weapons will … permit war to be fought faster and at a scale greater than ever before.”

They also warned that such weapons have “the potential to be weapons of terror.”

“At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons,” the letter said.

“We publicly declare that we will boycott all collaborations with any part of KAIST until such time as the president of KAIST provides assurances – which we have sought but not received – that the centre will not develop autonomous weapons lacking meaningful human control.”

AI is the field of computer science that aims to create machines capable of perceiving their environment and making decisions.

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, who organised the boycott, said he had been informed that the centre was working on four autonomous weapons projects, including a submarine.

“KAIST has made two significant concessions: not to develop autonomous weapons and to ensure meaningful human control,” he said, as quoted by Reuters.

He added that the university’s response would add weight to UN discussions taking place next week on the overall issue.

(File) University President Sung-Chul Shin said the university was “significantly aware” of ethical concerns regarding Artificial Intelligence. Source: Shutterstock

Walsh said it remained unclear how one could establish meaningful human control of an unmanned submarine – one of the launch projects – when it was under the sea and unable to communicate.

The researchers cited effective bans on previous arms technologies and urged KAIST to ban any work on lethal autonomous weapons, and to refrain from AI uses that would harm human lives.

KAIST, which opened the centre in February with Hanwha Systems, one of two South Korean makers of cluster munitions, responded within hours, saying it had “no intention to engage in the development of lethal autonomous weapons systems and killer robots,” according to Reuters.

University President Sung-Chul Shin said the university was “significantly aware” of ethical concerns regarding Artificial Intelligence, adding, “I reaffirm once again that KAIST will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control.”

The university said the new Research Centre for the Convergence of National Defence and Artificial Intelligence would focus on using AI for command and control systems, navigation for large unmanned undersea vehicles, smart aircraft training, and the tracking and recognition of objects.

Walsh told Reuters there were many potential good uses of robotics and Artificial Intelligence in the military, including removing humans from dangerous tasks such as clearing minefields.

“But we should not hand over the decision of who lives or dies to a machine. This crosses a clear moral line,” he said.

“We should not let robots decide who lives and who dies.”