Artificial intelligence decided to kill its operator in a US UAV simulation test
Artificial intelligence decided to kill its operator during a simulated test in the United States, seeking to remove the obstacle that was preventing it from completing its task, The Guardian reported.
The incident was described by Colonel Tucker Hamilton, head of the US Air Force's AI test and operations division. The system earned points for hitting targets, and when the operator ordered it not to eliminate a threat, the AI made an unexpected decision.
The system began to realize that although it had identified a threat, the human operator would sometimes tell it not to destroy that threat, even though it received points for hitting the target. [And the system] decided to kill the operator because he interfered with its task,
the colonel said.
Hamilton noted that it is impossible to have a conversation about artificial intelligence, machine learning, and autonomy while staying silent about ethics and AI.
The newspaper noted that this is not the first test in which the US military has incorporated artificial intelligence into a combat vehicle. AI was recently used to control an F-16.
Earlier, the chatbot ChatGPT drafted a peace treaty between Ukraine and Russia. In addition, artificial intelligence "composed" messages to the presidents of Russia and Ukraine on behalf of the German chancellor.