Artificial intelligence. The future of Russia's national security?

Ten years of development

It's no secret that artificial intelligence is penetrating ever deeper into the everyday lives of people around the world. This is driven both by the global spread of the Internet and by the massive growth in computing power. Neural networks, which bear a certain resemblance to the human brain, have made it possible to qualitatively improve the software built on them. There are, however, a couple of clarifying points: neural networks are still very far from the level of the human brain, especially in terms of energy efficiency, and the way their algorithms reach decisions remains extremely difficult to understand.

Despite some restrictions and high-profile accidents involving self-driving cars, money is flowing into the artificial intelligence industry like a wide river. Last year, according to the approved National Strategy, the market for IT solutions in this area exceeded $21.5 billion, and that figure will only grow each year: by 2024 the world AI market is expected to be worth on the order of $140 billion, while the potential economic gain from introducing AI by that time should reach a quite respectable $1 trillion. The approval of the aforementioned National Strategy by President Vladimir Putin on October 10, 2019, was in fact an attempt to keep up with world trends. At the same time, the program declares not merely a narrowing of the gap with the world leaders, but entry into the ranks of the top players in this market, and it is planned to achieve this by 2030. Among the obvious obstacles on this path will be the protectionist statements by a number of countries that any Russian software carries a potential danger.

Where, then, are the "limitless" capabilities of AI to be applied on Russian soil? First of all, in the automation of routine operations and the replacement of humans in hazardous work (read: including in the army). Further, serious work is planned with big data, which has lately been accumulating at an avalanche-like pace. It is assumed that AI will improve forecasting for management decisions, as well as optimize the selection and training of personnel. Healthcare and education will also be active users of AI in ten years' time. In medicine, prevention, diagnostics, drug dosing, and even surgery will be handed over, partially or completely, to the machine mind. In schools, AI will be involved in individualizing the learning process, analyzing a child's aptitude for professional activity, and identifying talented youth early on. The strategy even contains a provision on "the development and implementation of educational modules within the educational programs of all levels of education." So the basics of AI will be taught in school?

As usual, in addition to tangible results of AI development, the scientific community will be required to increase the number and citation index of articles by Russian scientists in the world's specialized publications. And by 2024, that is, very soon, the number of citizens with AI competencies in Russia is supposed to grow. In particular, this is to be achieved by bringing domestic specialists back from abroad, as well as by attracting foreign citizens to work on the subject in Russia.

However, AI has one controversial quality, which the strategy proposes to address "by developing ethical rules for human interaction with artificial intelligence": it turns out that the cold calculation of the machine mind leads it to biased and unfair generalizations.

AI bias

Among the mass of questions about the functioning of modern AI systems, the still-imperfect algorithms for the autonomous driving of wheeled vehicles stand out: they are not yet reliable enough to be legally cleared for wide use. Most likely, in the foreseeable future we will not see AI cars on our roads. Our road conditions are not suitable for this, and the climate does not favor using an autopilot all year round: mud and snow will quickly "blind" the sensor systems of even the most advanced robot. In addition, the mass introduction of AI will inevitably take jobs from millions of people around the world, who will either have to retrain or spend the rest of their days in idleness. To be fair, the various newfangled "Atlases of the professions of the future" sometimes contain outright nonsense: in one of them, dated 2015, the professions of accountant, librarian, proofreader, and tester were supposed to be obsolete by 2020. Nevertheless, the profile of most professions will change, and the negative effects of AI will prevail here. In any case, the prospects of further introducing AI into society pose many questions for government regulators, and it seems that few know how to answer them.

Another issue already looming on the horizon is AI bias in decision-making. The Americans were among the first to face it when the COMPAS system was introduced in 15 states to predict recidivism among criminals. Everything seemed to start well: an algorithm was developed that, based on a mass of data (Big Data), forms recommendations on the severity of punishment, the type of correctional institution, or early release. The programmers rightly argued that before lunch a hungry judge may hand down an excessively harsh sentence, while a well-fed one, on the contrary, may be too lenient; AI was supposed to add cold calculation to the procedure. But it turned out that COMPAS and similar programs are racist: the AI mistakenly flagged African Americans as future reoffenders about twice as often as whites (45% versus 23%). AI generally treats light-skinned criminals as low-risk, since statistically they are less likely to break the law, so the forecasts for them come out more optimistic. As a result, more and more voices in the United States are calling for abandoning AI in decisions on bail, sentencing, and early release. At the same time, the US justice system has no control over the program code of these systems: everything is purchased from third-party developers. The PredPol, HunchLab, and Series Finder software systems operating on the streets of many cities around the world have already statistically proven their effectiveness, with crime decreasing, but they are not free of racial prejudice. The most interesting thing is that we do not know what other "cockroaches" are hidden in the artificial brains of these systems, since many parameters of the analysis are kept secret. There are also doubts that the developers themselves understand how the AI makes particular decisions and which parameters it considers key. Similar situations are developing not only in law enforcement and justice, but also in recruiting agencies: in most cases AI gives preference to hiring young men, passing over the weaker sex and older candidates. It is amusing that the values the West so zealously promotes (equality of sexes and races) are being trampled by its latest achievement, artificial intelligence.
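
The disparity described above is essentially a gap in false positive rates between groups. As a purely illustrative sketch (the data and group labels below are hypothetical and are not taken from COMPAS or any real system), here is how such a gap can be measured:

```python
from collections import defaultdict

# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True, False), ("group_a", True, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, False),
    ("group_b", False, False), ("group_b", False, True),
]

def false_positive_rates(records):
    """Per-group false positive rate: the share of people who did NOT reoffend
    but were nevertheless flagged as high risk."""
    flagged = defaultdict(int)    # non-reoffenders flagged as high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))
# A large gap between groups (e.g. the 45% vs. 23% reported for COMPAS)
# means the system errs against one group far more often than the other.
```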

The conclusion from this small excursion into the theory and practice of AI is as follows. It is one thing when our data from social networks and other sources are mass-processed for marketing or political manipulation, and quite another when the sword of justice or, worse still, the arsenal of national security is handed over to AI. The price of a biased decision then rises many times over, and something has to be done about it. Whoever succeeds in this will become the real ruler of the XXI century.
