International rules are being drawn up to stop military robots with artificial intelligence (AI) from autonomously deciding which targets to choose and whether to use lethal force.

Although the guidelines under consideration at an international conference opening in Geneva on Aug. 20 will not be legally binding, key countries, including the permanent members of the U.N. Security Council and Japan, are likely to agree to them.

The guiding principles will effectively become the foundation of international rules for the development of AI-equipped robotic weapons that can autonomously move and kill or injure people.

It is widely believed that weapons and technologies to create such lethal autonomous weapons systems are now being developed by the United States, Russia, Israel, South Korea and other countries.

Human rights organizations assert that autonomous robots could make bad calls that result in dire consequences, and for this reason should be banned altogether.

Against this background, countries working in tandem with nongovernmental organizations (NGOs) have been exploring the creation of international rules governing AI-equipped robotic weapons since 2017.

According to draft documents obtained by The Asahi Shimbun, the guiding principles state, “Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines.”

They also say such weapons systems must comply with international law, including international humanitarian law.

The draft also states, “When developing or acquiring new weapons systems based on emerging technologies, the risk of acquisition by terrorist groups and the risk of proliferation should be considered.”

Whatever guiding principles are adopted, they are bound to be ones that member countries can easily agree to.

The countries are expected to spend several years discussing whether the principles should be developed into a legally binding treaty. Human rights organizations, however, contend that the countries are simply trying to delay the issue.

The Japanese government takes the position that it will refrain from developing completely autonomous weapons systems that kill people.

Even so, the government is funding research into AI-equipped robotic weapons as long as human involvement is guaranteed, on the grounds that the technology will reduce the chance of human error and be less labor-intensive.

The U.S. Defense Department stipulates in its internal rules that autonomous weapons must not be used without human involvement. At present, however, the decision-making process for attacks remains unclear.

The Pentagon is now developing technologies to make that process accountable. This could lead to the emergence of AI-equipped robotic weapons that meet the guiding principles.

Heigo Sato, a professor of national security at Takushoku University, who is a member of the Japanese government delegation at the Geneva conference, places high value on the guiding principles.

“It will be a fruitful result if the direction in which international rules are headed is shown,” he said.

However, Sato cautioned that if some advanced countries develop AI weapons that meet the guiding principles, countries without the technologies to develop such weapons may become dissatisfied.

“It could lead them to call for a total ban on AI-equipped weapons,” he said.

Sato fears that efforts to establish full-fledged rules could be jeopardized.