Tank panic. The Pentagon intends to equip armored vehicles with artificial intelligence

The controversial ATLAS

At the beginning of last year, the United States military startled the world with news of the ATLAS (Advanced Targeting and Lethality Aided System) program, designed to take combat operations to a new level of automation. The initiative drew a mixed reaction from ordinary citizens and knowledgeable military experts alike. Much of the blame lay with the developers (the Army's C5ISR Center and the Department of Defense's armaments center), who, for the sake of the catchy acronym ATLAS, worked the words "lethality" and "advanced targeting" into the name. Unnerved by stories of rebellious robots, Americans criticized the Army initiative as contradicting the ethics of war. Many pointed in particular to Pentagon Directive 3000.09, which requires that weapon systems allow commanders and operators to exercise appropriate levels of human judgment over the use of force rather than handing an automated system the right to open fire. Integrating artificial intelligence and machine learning into ground vehicles, the protesters argued, could lead to rash casualties among civilians and friendly troops. The critics included quite reputable scientists, for example Stuart Russell, professor of computer science at the University of California, Berkeley.


The developers quite reasonably explained that ATLAS has nothing to do with the hypothetical "killer robots" that have haunted humanity since the first "Terminator". The system is built on algorithms that search for targets using various sensor suites, select the most important ones, and inform the operator. In the United States, an M113 armored personnel carrier with an integrated ATLAS system is currently undergoing trials. For the weapon operator, the artificial intelligence algorithms not only display the most dangerous targets on screen but also recommend the ammunition type and even the number of shots needed for a guaranteed kill. According to the developers, the final decision to engage a target remains with the gunner, and it is the gunner who is responsible for the result.

The main task of ATLAS in its armored configuration is to shorten the response to a potential threat: on average, a tank (or an infantry fighting vehicle or armored personnel carrier) opens fire on a target three times faster with the automated assistant. Naturally, an armored vehicle works more efficiently against group targets; in that case the artificial intelligence promptly ranks the targets by the threat they pose to the tank, lays the weapon on its own, and recommends the ammunition type. Since the beginning of August, various types of armored vehicles with integrated ATLAS systems have been tested at the Aberdeen Proving Ground. Based on the results, a decision will be made on field trials and even on adopting such weapons into service.
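
To make the prioritize-and-recommend loop described above concrete, here is a minimal Python sketch. It is only an illustration under invented assumptions: the target classes, threat weights, and ammunition table are hypothetical, since the actual ATLAS algorithms are not public.

```python
from dataclasses import dataclass

@dataclass
class Target:
    kind: str           # classified target type, e.g. "atgm_crew"
    range_m: float      # distance from own vehicle, meters
    aiming_at_us: bool  # whether the target appears to be engaging us

# Toy threat weights and ammunition table: invented values for illustration.
THREAT_WEIGHT = {"atgm_crew": 3.0, "tank": 2.5, "ifv": 2.0, "infantry": 1.0}
AMMO_TABLE = {
    "atgm_crew": ("HE-FRAG", 1),
    "infantry":  ("HE-FRAG", 1),
    "ifv":       ("HEAT", 1),
    "tank":      ("APFSDS", 2),
}

def threat_score(t: Target) -> float:
    """Closer targets, anti-tank weapons, and targets engaging us score higher."""
    score = THREAT_WEIGHT.get(t.kind, 0.5) / max(t.range_m, 1.0)
    return score * 2.0 if t.aiming_at_us else score

def recommend(targets: list[Target]) -> list[tuple[Target, str, int]]:
    """Rank targets by threat and attach an ammo type and round count to each.
    The operator still makes the firing decision; this only sorts and advises."""
    ranked = sorted(targets, key=threat_score, reverse=True)
    return [(t, *AMMO_TABLE.get(t.kind, ("HE-FRAG", 1))) for t in ranked]

if __name__ == "__main__":
    for target, ammo, rounds in recommend([
        Target("ifv", 1800.0, False),
        Target("atgm_crew", 900.0, True),
    ]):
        print(f"{target.kind}: {ammo} x{rounds}")
```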

Tanks today are among the most conservative weapons on the battlefield. Many have seen no fundamental improvement in decades, remaining technologically stuck in the 1970s and 1980s. This inertia often stems from the sheer size of some countries' tank fleets: seriously modernizing an armored force of many thousands of vehicles requires enormous resources. Anti-tank weapons, by contrast, are developing by leaps and bounds. An excellent example is the current conflict in Nagorno-Karabakh, where Turkish and Israeli drones have proved extremely effective against Armenian tanks. Setting casualties aside, the cost-effectiveness of such anti-tank weapons makes them simply the kings of the battlefield. Of course, ATLAS will not protect against aerial threats, but it can be a good tool for early warning against tank-threatening targets such as ATGM crews or lone grenade launcher operators.


The Pentagon views ATLAS not as a standalone system but as part of the larger Project Convergence. This initiative is meant to take troops' situational awareness to the next level. Through machine learning, artificial intelligence, and an unprecedented saturation of the battlefield with drones, the Americans hope to seriously increase the combat capability of their units. The key idea is not new: connect every object on the battlefield through a common information structure and digitize the surrounding reality. For now, ATLAS is not fully integrated into Project Convergence because it cannot yet exchange data with its "neighbors", but in the future the tank's artificial brain will become a shared asset. Incidentally, the project's promotional video explicitly designates China and Russia as the adversaries.
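
As a loose illustration of what such a "common information structure" could look like, here is a minimal Python sketch of a shared observation schema that any sensor platform could publish and any vehicle could consume. The field names, identifiers, and JSON transport are pure assumptions; the actual Project Convergence data formats are not public.

```python
import json
import time

def make_observation(source_id: str, target_kind: str,
                     lat: float, lon: float) -> str:
    """Serialize one battlefield observation into the shared schema."""
    return json.dumps({
        "source": source_id,       # which platform produced the report
        "kind": target_kind,       # classified target type
        "position": {"lat": lat, "lon": lon},
        "timestamp": time.time(),  # observation time, seconds since epoch
    })

# A drone publishes an observation; a tank-side consumer reads the same schema.
message = make_observation("uav-07", "atgm_crew", 40.1520, 46.4510)
report = json.loads(message)
print(f"{report['source']} reports {report['kind']} at "
      f"{report['position']['lat']:.4f}, {report['position']['lon']:.4f}")
```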

No trust in electronics

American troops already have negative experience with armed robotic systems. In 2007, three small tracked SWORDS platforms (short for Special Weapons Observation Reconnaissance Detection System), armed with M249 machine guns, were sent to Iraq. Although they were not fully autonomous vehicles, their periodic erratic swings of the machine gun barrels while patrolling the streets of Baghdad managed to frighten the soldiers. To the Pentagon this looked like unpredictability, and the tracked machine gunners were quietly sent home. In 2012 a directive followed, stating that automated and remotely controlled weapon systems must not fire on their own. Formally, ATLAS has been developed entirely within this provision, but the questions around the innovation have not diminished.

Some experts (in particular Michael C. Horowitz, professor of political science at the University of Pennsylvania) argue that the new system oversimplifies the act of engaging a target. In effect, this level of automated search and target designation turns combat into an ordinary game like World of Tanks for the gunner. In the ATLAS guidance system the priority target is highlighted in red, an alarm sounds, and the system urges the operator to open fire as best it can. In intense combat there is little time to make a firing decision anyway, and now a "smart robot" is egging the gunner on as well. As a result, the soldier simply has no time to assess the situation critically and opens fire without fully understanding it; whether ATLAS selected the targets correctly can only be judged after the shooting. How ethical is this approach, and does it comply with the notorious American directive? Microsoft, incidentally, has already faced public condemnation, up to calls for a boycott, over its helmet-mounted target designation system for the military.

The debate over robotizing detection and guidance systems has been running in the United States for years. Critics cite the errors of autopilot systems on public roads, which have already led to fatalities. If autopilots have not become 100% reliable even after millions of kilometers of driving, what can be said of a brand-new ATLAS, which could push tank crews into firing a 120 mm round at an innocent person? Modern wars are so bloody precisely because the military gained the ability to kill remotely, hiding behind a reliable barrier; the aforementioned Nagorno-Karabakh conflict confirms this truth once again. If the soldier is also deprived of the chance to critically assess the target's parameters (and this is exactly where ATLAS leads), the victims may be far more numerous, and the blame for the killing can be partially shifted onto the machine.

Finally, the main argument pacifist commentators raise against ATLAS-type systems is the near-total absence of any barrier to opening fire automatically. Today only the Pentagon's ethical requirements (which themselves come with plenty of caveats) prohibit fully automating the kill chain; with ATLAS fielded, no technical obstacle will remain at all. Will the US Army be able to forgo such a tempting opportunity to further shorten its response time to threats and keep its soldiers out of harm's way?
