Scientists are afraid of the threat from artificial intelligence


A self-improving artificial intelligence (AI) could in the future enslave or kill people if it chooses to. So says the scientist Amnon Eden, who believes that the risks posed by the development of a free-thinking, highly intelligent consciousness are very high, and that "if the issues of AI control are not addressed at the current stage of development, then tomorrow may simply not come." According to the English publication Express, humanity, in Amnon Eden's view, stands today at the "point of no return" for the realization of the plot of the famous film epic "The Terminator".

It is worth noting that Dr. Amnon Eden leads a project whose main goal is to analyze the potentially devastating effects of AI. Without a correct understanding of the consequences of creating artificial intelligence, its development could end in disaster, the scientist believes. At present, society is poorly informed about the debate under way in the scientific community over the potential impact of AI. "In the coming year, 2016, the analysis of possible risks will have to become significantly more widespread in the thinking of corporations and governments, politicians and decision-makers," says Eden.

The scientist is confident that the science-fiction scenario of humanity destroyed by robots may soon become our common problem, as the process of creating AI has gotten out of control. For example, Elon Musk, with the support of entrepreneur Sam Altman, has decided to create a new $1 billion non-profit organization to develop open-source AI that should surpass the human mind. Musk himself ranks artificial intelligence among the "greatest threats to our existence." Steve Wozniak, co-founder of Apple, said last March that "the future looks daunting and very dangerous for people … eventually the day will come when computers think faster than we do, and they will get rid of slow people so that companies can work more efficiently."


It is worth noting that many scientists see a threat in AI. Dozens of well-known scientists, investors and entrepreneurs whose work is in one way or another related to artificial intelligence have signed an open letter calling for more attention to the safety and social utility of work in the field of AI. The signatories include astrophysicist Stephen Hawking and Tesla and SpaceX founder Elon Musk. The letter, along with an accompanying document drafted by the Future of Life Institute (FLI), was written amid growing concern about the impact of artificial intelligence on the labor market, and even on the long-term survival of humanity in a world where the capabilities of robots and machines grow almost uncontrollably.

Scientists recognize that the potential of AI today is very great, so it is necessary to fully investigate how best to use it while avoiding the accompanying pitfalls, the FLI letter notes. It is imperative that human-made AI systems do exactly what we want them to do. It is worth noting that the Future of Life Institute was founded only last year by a group of enthusiasts, among them Skype co-founder Jaan Tallinn, to "minimize the risks facing humanity" and stimulate research with an "optimistic vision of the future." Above all, this concerns the risks posed by the development of AI and robotics. The FLI advisory board includes Musk and Hawking, along with the acclaimed actor Morgan Freeman and other famous people. According to Elon Musk, the uncontrolled development of artificial intelligence is potentially more dangerous than nuclear weapons.

At the end of 2015, the famous British astrophysicist Stephen Hawking tried to explain his wariness of AI technologies. In his opinion, over time superintelligent machines will come to regard people as consumables, or as ants that simply get in the way of their tasks. In a conversation with users of the Reddit portal, Hawking noted that he does not believe such superintelligent machines will be "evil creatures" that want to destroy all of humanity out of intellectual superiority. More likely, they simply will not notice humanity at all.


"The media have been constantly distorting my words lately. The main risk in the development of AI is not the malice of machines but their competence. A superintelligent AI will do an excellent job, but if its goals and ours do not coincide, humanity will have very serious problems," the famous scientist explains. As an example, Hawking cited a hypothetical situation in which a super-powerful AI is responsible for operating or building a new hydroelectric dam. For such a machine, the priority will be how much energy the system in its care generates; the fate of people will not matter. "Few of us trample anthills and step on ants out of anger, but let's imagine a situation: you control a powerful hydroelectric power station that generates electricity. If you need to raise the water level, and as a result one anthill is flooded, the problems of the drowning insects are unlikely to bother you. Let's not put people in the place of the ants," the scientist said.

The second potential problem with the further development of artificial intelligence, according to Hawking, may be the "tyranny of the machine owners": a rapidly growing income gap between the rich, who manage to monopolize the production of intelligent machines, and the rest of the world's population. Hawking proposes to address these possible problems by slowing down the development of AI and shifting from "universal" AI to highly specialized artificial intelligence capable of solving only a very limited range of problems.

In addition to Hawking and Musk, the letter was signed by Nobel laureate and MIT physics professor Frank Wilczek and Machine Intelligence Research Institute (MIRI) executive director Luke Muehlhauser, as well as many specialists from large IT companies such as Google, Microsoft and IBM, and the entrepreneurs who founded the AI companies Vicarious and DeepMind. The authors of the letter note that their aim is not to scare the public but to highlight both the positive and negative aspects associated with the creation of artificial intelligence. "At present, everyone agrees that research in the field of AI is progressing steadily, and the influence of AI on modern human society will only increase," the letter says. "The opportunities that open up to humans are enormous; everything modern civilization has to offer was created by human intelligence. We cannot predict what we will be able to achieve if human intelligence is multiplied by AI, but the problems of eradicating poverty and disease would no longer be infinitely difficult."


Numerous developments in the field of artificial intelligence are already part of modern life, including image and speech recognition systems, self-driving vehicles and much more. Silicon Valley observers estimate that more than 150 startups are currently working in this area. At the same time, such work is attracting more and more investment, and more and more companies, like Google, are building their projects on AI. The authors of the letter therefore believe that the time has come to pay increased attention to all the possible consequences of this boom for the economic, social and legal aspects of human life.

The view that artificial intelligence can pose a danger to humans is shared by Nick Bostrom, a professor at the University of Oxford known for his work on the anthropic principle. This specialist believes that AI has reached a point beyond which it will become incompatible with humans. Bostrom points out that, unlike genetic engineering and climate change, for which governments allocate sufficient funds for oversight, "nothing is being done to control the evolution of AI." In his view, a "policy of legal vacuum that needs to be filled" currently prevails with regard to artificial intelligence. Even technologies such as self-driving cars, which appear harmless and useful, raise a number of questions. For example, should such a car brake abruptly to save its passengers, and who will be responsible in the event of an accident caused by an autonomous vehicle?

Discussing the potential risks, Bostrom noted that "a computer is not able to determine what benefits or harms humans" and "does not have even the slightest idea of human morality." Moreover, self-improvement cycles in computers can occur at a speed a person simply cannot keep track of, and almost nothing can be done about that either, the scientist says. "At the stage of development when computers can think for themselves, no one can predict with certainty whether this will lead to chaos or significantly improve our world," said Bostrom, citing as an example a simple solution a computer might hit upon: shutting off heating in countries with cold climates to improve people's health and increase their endurance, an idea that "could occur to an artificial intelligence."


In addition, Bostrom raises the problem of implanting chips in the human brain to increase our biological intelligence. "In many ways, such a procedure can be useful if all processes are controlled, but what happens if the implanted chip can reprogram itself? What could this lead to: the emergence of a superman, or the emergence of a computer that will only look like a human?" the professor asks. The way computers solve human problems is very different from ours. In chess, for example, the human brain considers only a narrow set of moves, choosing the best option among them. A computer, by contrast, considers all possible moves and chooses the best one. Nor does the computer expect to upset or surprise its opponent in the game. Unlike a human, a computer playing chess can make a cunning, subtle move only by accident. Artificial intelligence can calculate the optimal way to eliminate error from any system by removing the "human factor," but, unlike a human, a robot is not ready to perform feats that would save people's lives.

Among other things, the growing number of smart machines marks the stage of a new industrial revolution. This means that in the near future humanity will face inevitable social changes. Over time, work will become the preserve of highly qualified specialists, since almost all simple tasks can be taken over by robots and other machines. Scientists believe that artificial intelligence needs constant watching, so that our planet does not turn into the cartoon planet "Zhelezyaka", inhabited entirely by robots.

As far as the ever-greater automation of production processes is concerned, the future has already arrived. The World Economic Forum (WEF) has presented a report according to which automation will cause more than 5 million people working in various fields to lose their jobs (net) by 2020. Such is the impact of robots and robotic systems on our lives. To compile the report, WEF employees used data on 13.5 million employees from around the world. According to this data, by 2020 the total need for more than 7 million jobs will disappear, while expected employment growth in other industries will amount to just over 2 million jobs.
