Boston Dynamics and other robot manufacturers have pledged not to arm their machines. Representatives of Agility Robotics, ANYbotics, Boston Dynamics, Clearpath Robotics, Open Robotics, and Unitree Robotics published an open letter announcing their decision and urging the wider robotics community to follow suit.
Scientists against militarization
The emergence of advanced mobile robots creates the possibility of their misuse,
the letter says.
The inventors' shared position is this: the fruits of new technologies must not be used to harm people.
Opponents of the militarization of machines believe that using robots for military purposes will undermine public trust in new technologies and in science as a whole.
We believe that modern mobile robots will bring great benefits to society as co-workers in industry and companions in our homes,
- say representatives of the robotics companies.
Ghost Robotics, a company well known in military circles, did not join the protest. Its machines are already in active use by the US military: the Pentagon is currently testing the company's products at its proving grounds.
Like any new technology offering new capabilities, the advent of advanced mobile robots carries a risk of abuse. Bad actors could use them to violate civil rights or to threaten and intimidate others. One area of particular concern is weaponization.
How robots are used in the military
Last October, the aforementioned Ghost Robotics published photos of its Q-UGV robot, originally designed for reconnaissance but since fitted with an automated sniper rifle system (pictured). Unlike the long-familiar remotely controlled stationary turrets, such robotic weapons can operate without a human operator. The US and Australian armies have shown interest in devices of this kind.
The Russian military was quick to respond. On October 19, 2021, it was announced that the tracked Marker combat robot was undergoing testing. There are now plans to modernize it for use in the special operation in Ukraine. The platform's weight and dimensions will remain the same as those of the existing prototypes, but its capabilities will be significantly expanded. Marker-2 is intended for at least three tasks: guarding facilities, responding to emergencies, and performing auxiliary work in combat conditions according to the needs of the troops. The only remaining problem is finding suitable production facilities for the project.
At last year's trials, three Markers convincingly demonstrated AI teamwork: they competently took up firing positions and distributed targets among themselves. Their armament was far heavier than that of their American counterpart: machine guns, grenade launchers, and even rockets. If the US machine resembles a dog, the Russian Marker looks more like a miniature tank.
The militaries on both sides have stated that robots would be used only to guard critical facilities, not in combat operations. However, in the spring of last year, a report landed on the desk of the UN Security Council stating that combat robots had, for the first time, been used in an open clash with humans.
The use of autonomous robots in war: risks and ethical issues
According to the report, in 2020 Turkish-made kamikaze drones were used in support of Western Libyan forces to suppress the positions of Marshal Haftar's troops. The drones acted independently, without external control.
Let me emphasize that the remotely controlled drones seen in today's war footage are no match for an autonomous machine that needs a human only to swap its batteries. Until recently, then, the maxim "the person kills, not the weapon" held true. Even a homing bomb or missile does not control itself: a human launches it, on the orders of another human with more stars on their shoulder straps. What happens when robotic weapons gain freedom of action? Let's hope it plays out like the old joke: "The smart tanks decided not to fight and went off to drink diesel."
Who will be held responsible if the software fails and the machines begin killing civilians, doctors, peacekeepers, or their own troops? Who could say for certain whether it was a glitch, a hack, or deliberate sabotage disguised as a bug? And how, finally, is a machine to distinguish friend from foe in a hybrid, semi-guerrilla war, when, for example, the Armed Forces of Ukraine use civilian vehicles for transport?
The existence of combat robots threatens to violate the provisions of the Geneva Conventions protecting civilians in zones of military conflict, in particular the principles of distinction and proportionality. The latter requires that harm to the civilian population not be excessive relative to the anticipated military advantage.
Another difficulty in the legal and ethical regulation of autonomous drones is that there is no internationally agreed definition of an "autonomous combat system." In some countries, both a robot that can choose among several courses of action while still under external control and a fully automatic killer drone may be considered autonomous.
In August 2018, the third meeting of the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems was held in Geneva. Its participants, however, failed to reach any concrete agreement because of the contradictions described above. The development of military autonomous systems continues to this day.
Nothing prevents, say, the US government from ordering a reconnaissance drone from that same Boston Dynamics while quietly setting up a subsidiary to attach weapons to it. Military contracts smell of fabulous money, and capital is always hungry for profit. So whether to believe the designers' noble pledge remains an open question.