Artificial intelligence. Part One: The Path to Superintelligence

The reason this article (and others like it) exists is simple: artificial intelligence may be not just an important topic of discussion, but the most important one for our future. Anyone who digs even a little into the potential of artificial intelligence recognizes that the topic cannot be ignored. Some people, Elon Musk, Stephen Hawking and Bill Gates among them, hardly the least intelligent people on our planet, believe that artificial intelligence poses an existential threat to humanity, comparable in scale to our complete extinction as a species. So sit back, and let's dot the i's.

“We are on the verge of changes comparable to the origin of human life on Earth” (Vernor Vinge).

What does it mean to be on the verge of such a change?

[Image: a graph of human progress over time, with an observer standing just before a steep upturn]

It seems like nothing special. But remember that standing in such a place on the graph means you cannot see what lies to your right. You should actually feel something like this:

[Image: the same graph, with the curve shooting sharply upward just beyond the observer]

A perfectly normal feeling; the flight is going well.

The future is coming

Imagine that a time machine transported you to 1750 - a time when the world lived in a permanent power outage, when communication between cities meant cannon shots, and all transport ran on hay. Let's say you get there, pick someone up and bring him to 2015 to show him how things are here. We cannot really grasp what it would be like for him to see shiny capsules flying along the roads; to talk to people on the other side of the ocean; to watch a sports game being played a thousand kilometers away; to hear a musical performance recorded 50 years ago; to play with a magic rectangle that can take a photo or capture a live moment; to summon a map with a paranormal blue dot showing his location; to look at someone's face and chat with him from many kilometers away; and so on. To a person from almost three hundred years ago, all of this is inexplicable magic. And that is not even mentioning the Internet, the International Space Station, the Large Hadron Collider, nuclear weapons and general relativity.

For him, this experience would not just be surprising or shocking - those words don't even begin to convey the mental collapse involved. Our traveler might actually die of it.

But here is the interesting part. If he then goes back to 1750 and becomes envious of the fun we had watching his reaction to 2015, he can take the time machine and try the same trick with, say, 1500. He flies there, finds a person, brings him to 1750 and shows him everything. The guy from 1500 would be shocked beyond measure - but he would be unlikely to die. Although he would certainly be amazed, the difference between 1500 and 1750 is far smaller than between 1750 and 2015. A person from 1500 would be surprised by some facts of science, would be amazed at what Europe had become under the hard heel of imperialism, and would have to redraw the map of the world in his head. But everyday life in 1750 - transport, communications and so on - would hardly surprise him to death.

No, for the guy from 1750 to have the same fun we did, he would have to go much further back - perhaps to around 12,000 BC, before the first agricultural revolution gave rise to the first cities and the concept of civilization. If someone from the hunter-gatherer world, from a time when humans were, by and large, just another animal species, saw the vast human empires of 1750, with their tall churches, their ships crossing the oceans, their concept of being "inside" a building, and all their accumulated knowledge - he would most likely have died.

And then, having died of shock and grown envious, he would want to do the same thing: go back 12,000 years, to 24,000 BC, pick up a person and bring him forward. And that new traveler would just say: "Well, this is nice, thank you." Because in this case, a person from 12,000 BC would need to go back 100,000 years and show the locals fire and language for the first time.

For someone to be transported into the future and be surprised to death, progress has to cover a certain distance. A Point of Deadly Progress (PDP) has to be reached. In hunter-gatherer times, a PDP took 100,000 years; the next one was reached only by 12,000 BC. After that, progress accelerated and had radically transformed the world by 1750 (roughly). The one after that took only a couple of hundred years - and here we are.

This picture, in which human progress moves faster and faster as time passes, is what futurist Ray Kurzweil calls the law of accelerating returns in human history. It happens because more developed societies are able to push progress at a faster pace than less developed ones. The people of the 19th century knew more than the people of the 15th, so it is no surprise that progress in the 19th century was far more rapid than in the 15th, and so on.

This works on a smaller scale, too. Back to the Future was released in 1985, and "the past" was set in 1955. In the film, when Michael J. Fox went back to 1955, he was caught off guard by the newness of televisions, the prices of soda, the cold reception of his electric guitar sound, and the variations in slang. It was a different world, of course - but if the film were made today, with "the past" set in 1985, the difference would be far more dramatic. Marty McFly, traveling back from the era of personal computers, the Internet and mobile phones, would be far more out of place than the Marty who went from 1985 to 1955.

All of this is due to the law of accelerating returns. The average rate of progress between 1985 and 2015 was higher than the rate between 1955 and 1985, because the former world was more developed, saturated with the achievements of the preceding 30 years.

Thus, the more achievements accumulate, the faster change occurs. But shouldn't that leave us certain hints about the future?

Kurzweil suggests that the progress of the entire 20th century could have been achieved in just 20 years at the rate of development of the year 2000 - in other words, by 2000 the rate of progress was five times the average rate of the 20th century. He also believes that a 20th century's worth of progress occurred between 2000 and 2014, and that another 20th century's worth will occur by 2021, in just seven years. A couple of decades later, a 20th century's worth of progress will happen several times a year, and later still, in under a month. Ultimately, the law of accelerating returns will take us to a point where the progress of the entire 21st century is 1,000 times that of the 20th.
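For readers who prefer to see the arithmetic, here is a minimal toy model of this kind of accelerating-returns reasoning. It is my illustration, not Kurzweil's actual calculation: the 5x rate for the year 2000 comes from the paragraph above, while the 5% annual growth of the rate itself is an assumed parameter, chosen only so the output lands near the quoted milestones.

```python
# Toy model of the law of accelerating returns (illustration, not Kurzweil's math).
rate = 5.0        # progress per year in 2000, in "average 20th-century years" (from the text)
growth = 1.05     # assumed annual growth of the rate itself
century = 100.0   # one 20th century's worth of progress

progress, year = 0.0, 2000
while year < 2040:
    progress += rate
    rate *= growth
    year += 1
    if progress >= century:
        print(f"another '20th century' of progress completed by {year}")
        progress -= century
```

With these assumptions the first "20th century" of progress completes around 2015, the next around 2023, then 2029, 2033, and so on - each interval shorter than the last, which is the whole point.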

If Kurzweil and his supporters are right, 2030 will surprise us the way 2015 would surprise the guy from 1750 - that is, the next PDP will take only a couple of decades - and the world of 2050 will differ from today's so much that we would barely recognize it. And this is not fiction. It is the opinion of many scientists smarter and better educated than you and me, and if you look at history, it is the prediction that pure logic leads to.

Why, then, when we encounter statements like "the world will change beyond recognition in 35 years," do we shrug skeptically? There are three reasons for our skepticism about predictions of the future:

1. When it comes to history, we think in straight lines. Trying to visualize the progress of the next 30 years, we look at the progress of the previous 30 as an indicator of how much is likely to happen. Thinking about how our world will change in the 21st century, we take the progress of the 20th century and add it to the year 2000. The guy from 1750 makes the same mistake when he fetches someone from 1500 and tries to surprise him. We intuitively think linearly, when we should think exponentially. In essence, a futurist should predict the progress of the next 30 years not by looking at the previous 30, but by judging from the current rate of progress. The forecast would then be more accurate, but still wide of the mark. To think correctly about the future, you have to see things moving at a much faster pace than they are moving now.

[Image: linear projections of future progress versus the actual exponential curve]

2. The trajectory of recent history often tells a distorted story. First, even a steep exponential curve looks linear when you see only a small piece of it. Second, exponential growth is not smooth and uniform. Kurzweil believes progress moves in S-shaped curves.

[Image: an exponential growth curve composed of successive S-curves]

Such a curve passes through three phases: 1) slow growth (the early phase of exponential growth); 2) rapid growth (the explosive, late phase of exponential growth); 3) a leveling-off as the particular paradigm matures.

If you look only at recent history, the part of the S-curve you are currently on can hide the pace of progress from your perception. Part of the period between 1995 and 2007 saw the explosive growth of the Internet, the introduction of Microsoft, Google and Facebook to the public, the birth of social networking, and the arrival of cell phones and then smartphones. That was Phase 2 of our curve. But 2008 to 2015 was less disruptive, at least on the technological front. Someone thinking about the future today might take the last couple of years as a gauge of the overall pace of progress and miss the bigger picture. In fact, a new and powerful Phase 2 may be brewing right now.
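As an illustration only - not Kurzweil's data - here is a minimal sketch of how stacked S-curves (logistic functions with entirely arbitrary parameters) can together trace an exponential-looking path, even though any single segment looks like slow growth or a plateau:

```python
import math

def paradigm(t, start, height, steepness=1.0):
    """Contribution of one S-curve paradigm at time t."""
    return height / (1.0 + math.exp(-steepness * (t - start)))

def total_progress(t):
    # three successive paradigms, each an order of magnitude bigger than the last
    return (paradigm(t, start=5, height=1) +
            paradigm(t, start=15, height=10) +
            paradigm(t, start=25, height=100))

for t in range(0, 31, 5):
    print(f"t={t:2d}  progress={total_progress(t):8.2f}")
```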

3. Our own experience makes us grumpy old men where the future is concerned. We base our ideas about the world on personal experience, and that experience has installed the recent past's rate of growth as "the way things go." Our imagination is limited too, since it builds predictions out of that experience - but more often than not, the tools we have simply cannot predict the future accurately. When we hear a prediction that clashes with our everyday sense of how things work, we instinctively dismiss it as naive. If I told you that you will live to 150 or 250, or perhaps won't die at all, your instinct would be: "That's stupid - history shows that everyone so far has died." And that's true: no one has lived that long. But no airplane flew before airplanes were invented, either.

So while skepticism feels reasonable, it is more often than not wrong. If we arm ourselves with pure logic and expect the usual historical zigzags to continue, we must conclude that very, very much will change in the coming decades - much more than intuition suggests. Logic also says that if the most advanced species on a planet keeps making giant leaps forward, faster and faster, at some point the leap will be so great that it radically changes life as we know it. Something similar happened in evolution when humans became so smart that they completely changed life for every other species on Earth. And if you spend a little time reading about what is happening in science and technology right now, you may begin to see clues about what the next giant leap will look like.

The road to superintelligence: what is AI (artificial intelligence)?

Like many people on this planet, you are used to thinking of artificial intelligence as a silly science-fiction idea. But lately a lot of serious people have been voicing concern about that silly idea. What's going on?

There are three reasons that lead to confusion around the term AI:

We associate AI with movies: Star Wars, Terminator, 2001: A Space Odyssey. Like the robots in them, the AI in these films is fiction. Hollywood thus distorts our perception: AI comes to feel familiar, ordinary and, of course, evil.

AI is a very broad field. It stretches from the calculator in your phone, through self-driving cars under development, to something far in the future that will transform the world. AI refers to all of these things, and that is confusing.

We use AI every day but often don't realize it. As John McCarthy, who coined the term "artificial intelligence" in 1956, said, "as soon as it works, no one calls it AI anymore." AI has come to feel more like a mythical prediction about the future than something real, and at the same time the name carries a whiff of a past concept that never came true. Ray Kurzweil says he hears people associating AI with facts from the 80s, which he compares to "claiming that the Internet died along with the dot-coms in the early 2000s."

Let's be clear. First, stop thinking about robots. A robot is a container for AI that may or may not mimic the human form, but the AI itself is the computer inside it. AI is the brain; the robot is the body - if there is a body at all. For example, the software and data behind Siri are artificial intelligence, the woman's voice is the personification of that AI, and there is no robot anywhere in the system.

Second, you have probably heard the term "singularity," or "technological singularity." In mathematics it describes an unusual situation where the usual rules stop working. In physics it describes the infinitely small and dense point of a black hole, or the initial point of the Big Bang - again, places where the usual laws don't apply. In 1993, Vernor Vinge wrote a famous essay applying the term to the moment in the future when the intelligence of our technologies exceeds our own - the moment when life as we know it changes forever and its usual rules stop working. Ray Kurzweil then refined the term, defining the singularity as the time when the law of accelerating returns reaches such an extreme that technological progress moves at a seemingly infinite pace, after which we will live in an entirely new world. Many experts have since stopped using the term, so we won't refer to it often either.

Finally, while there are many types and forms of AI within that broad concept, the critical categories depend on an AI's caliber. There are three main categories:

Artificial narrow intelligence (ANI), also called weak AI. ANI specializes in a single area. There are ANI systems that can beat the world chess champion, but that's about all they do. There is one that can suggest the best way to store data on your hard drive - and that's it.

Artificial general intelligence (AGI), sometimes called strong or human-level AI. AGI refers to a computer as smart as a human across the board - a machine capable of any intellectual task a person can perform. Creating AGI is much harder than creating ANI, and we haven't done it yet. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience." AGI would do all of this as easily as you do.

Artificial superintelligence (ASI). Oxford philosopher and AI theorist Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Artificial superintelligence covers everything from a computer slightly smarter than a human to one trillions of times smarter in every direction. ASI is the reason interest in AI is growing, and the reason the words "extinction" and "immortality" come up so often in these discussions.

Today humanity has already conquered the lowest caliber of AI - ANI - in many ways. The AI revolution is the road from ANI, through AGI, to ASI. We may not survive that road, but it will definitely change everything.

Let's take a closer look at how the leading thinkers in the field see this road, and why the revolution may happen sooner than you think.

Where are we on this road?

Artificial narrow intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at one specific task. A few examples:

* Cars are stuffed with ANI systems, from the computer that decides when the anti-lock brakes should kick in to the computer that tunes the fuel-injection parameters. Google's self-driving cars, now being tested, will carry robust ANI systems that sense and react to the world around them.

* Your phone is a small ANI factory. When you use the maps app, receive recommendations for apps or music, check tomorrow's weather, talk to Siri, or do dozens of other things, you are using ANI.

* Your email spam filter is a classic ANI. It starts out knowing how to tell spam from useful mail, then learns as it processes your messages and preferences.

* And that awkward feeling when you search for a screwdriver or a new plasma TV one day, and the next day see offers from helpful shops on other sites? Or when a social network recommends interesting people to befriend? These are ANI systems working together: determining your preferences, pulling data about you from the Internet, edging closer and closer to you. They analyze the behavior of millions of people and draw conclusions from those analyses in order to sell the services of big companies or to improve their own.

* Google Translate, another classic ANI system, is impressively good at certain things. So is voice recognition. When your plane lands, its gate is not chosen by a human; neither was the price of your ticket. The world's best players of checkers, chess, backgammon and other games are today narrowly focused artificial intelligences.

* Google Search is one giant ANI that uses incredibly clever methods to rank pages and build search results.

And that is only the consumer world. Sophisticated ANI systems are widely used in the military, in manufacturing and finance, in medical systems (think IBM's Watson), and so on.

ANI systems in their current form are not a threat. In the worst case, a buggy or badly programmed ANI can cause a local disaster: a power outage, a financial-market crash, and the like. But while ANI lacks the capability to create an existential threat, we should see the bigger picture: a devastating hurricane is coming, and each ANI is its harbinger. Every new ANI innovation lays another block on the road to AGI and ASI. Or, as Aaron Saenz nicely put it, the ANI systems of our world are like "the amino acids in the primordial soup of the young Earth" - the still-lifeless components of life that will one day wake up.

The road from ANI to AGI: why is it so difficult?

Nothing reveals the complexity of human intelligence like trying to build a computer that is just as smart. Building skyscrapers, flying into space, the secrets of the Big Bang - all of it is trivial compared with replicating our own brain, or even just understanding it. The human brain is currently the most complex object in the known universe.

You may not even suspect where the difficulty of creating AGI (a computer as smart as a human in general, not just at one task) lies. Building a computer that multiplies two ten-digit numbers in a split second is trivially easy. Building one that can look at a dog and a cat and tell which is which is incredibly difficult. Create an AI that beats a grandmaster? Done. Now try to get it to read a paragraph from a six-year-old's picture book and not only recognize the words but understand their meaning. Google spends billions of dollars trying. Hard things - calculation, financial-market strategy, language translation - a computer handles with ease, while easy things - vision, movement, perception - it cannot. As Donald Knuth put it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"

When you think about why, you realize that things that seem easy to us only seem that way because they have been optimized in us (and in animals) over hundreds of millions of years of evolution. When you reach toward an object, the muscles, joints and bones of your shoulder, elbow and hand instantly perform long chains of physical operations, synchronized with what you see, to move your arm through three dimensions. It seems effortless because perfected software in your brain runs the process. The same trick is why registering a new account by typing a crookedly written word (a CAPTCHA) is simple for you and hell for a malicious bot. For your brain it's no trouble at all: you just have to be able to see.

On the other hand, multiplying large numbers and playing chess are new activities for biological creatures; we haven't had the time (millions of years) to get good at them, so it isn't hard for a computer to beat us. Just think about it: which would you rather build - a program that multiplies large numbers, or one that recognizes the letter B in its millions of spellings, in unpredictable fonts, written by hand or with a stick in the snow?

One simple example: when you look at this, both you and a computer recognize alternating squares of two different shades.

[Image: a grid of alternating squares in two shades]

But if you remove the black, you will immediately describe the full picture - cylinders, planes, three-dimensional corners - while a computer cannot.

[Image: the same pattern with the darker squares removed]

It will describe what it sees as a variety of two-dimensional shapes in several shades - which, strictly speaking, is true. Your brain does a huge amount of work interpreting the depth, the play of shadow and the light in the picture. In the picture below, a computer sees a two-dimensional white-gray-black collage, when in reality there is a three-dimensional stone.

[Image: a photograph of a three-dimensional stone]

And everything just outlined is still only the tip of the iceberg of understanding and processing information. To reach human level, a computer must understand the difference between subtle facial expressions, the difference between pleasure, sadness, satisfaction and joy, and why Chatsky is good while Molchalin is not.

What to do?

The first step to building AGI: increasing computing power

One of the necessary things that needs to happen for AGI to be possible is to increase the power of computing hardware. If an artificial intelligence system is to be as smart as the brain, it needs to match the brain in raw processing power.

One way to measure this capacity is the total number of operations per second (OPS) the brain can perform: you could find the maximum OPS of each brain structure and add them together.

Ray Kurzweil concluded that it is enough to take a professional estimate of the OPS of one structure along with that structure's weight relative to the whole brain, and then scale up proportionally to get an overall estimate. It sounds a bit dubious, but he did it many times with different estimates of different regions and always landed on the same order of magnitude: about 10^16, or 10 quadrillion OPS.
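The extrapolation itself is one line of arithmetic. Here is a sketch with purely illustrative stand-in numbers (not the published estimates): take an expert estimate of one structure's OPS, divide by that structure's share of total brain weight, and you get a whole-brain figure.

```python
# Kurzweil-style whole-brain extrapolation; the two inputs are assumed,
# illustrative values, not real measurements.
structure_ops = 1e15     # assumed OPS estimate for one brain structure
weight_fraction = 0.1    # assumed share of total brain weight

whole_brain_ops = structure_ops / weight_fraction
print(f"whole-brain estimate: {whole_brain_ops:.0e} OPS")  # 1e+16, i.e. 10 quadrillion
```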

The fastest supercomputer in the world, China's Tianhe-2, has already beaten that number: it manages about 34 quadrillion operations per second. But Tianhe-2 occupies 720 square meters, consumes 24 megawatts of power (our brain consumes just 20 watts) and cost 390 million dollars. It isn't in the running for commercial or widespread use.

Kurzweil suggests judging the state of computers by how many OPS you can buy for $1,000. When that number reaches human level - 10 quadrillion OPS - AGI may well become part of our lives.

Moore's Law - the historically reliable rule that the maximum computing power of computers doubles roughly every two years - implies that computer technology, like human progress through history, grows exponentially. Applied to Kurzweil's thousand-dollar measure, we can currently afford about 10 trillion OPS per $1,000.

[Image: exponential growth of the computing power available per $1,000]

A $1,000 computer already beats the mouse brain in raw computing power and stands at roughly a thousandth of human level. That sounds unimpressive until you remember that a $1,000 computer was a trillion times weaker than the human brain in 1985, a billion times weaker in 1995, and a million times weaker in 2005. By 2025 we should have an affordable computer rivaling the computing power of our brain.
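The arithmetic behind that date is simple, and the doubling period is the key assumption. Going from 10 trillion to 10 quadrillion OPS per $1,000 is a factor of 1,000, or about 10 doublings: Moore's classic two-year doubling would take ~20 years, while the roughly one-year doubling of price-performance that Kurzweil himself cites gives ~10 years, i.e. around 2025.

```python
import math

current_ops = 1e13   # OPS per $1,000 today (from the article)
target_ops = 1e16    # Kurzweil's whole-brain estimate

for doubling_years in (2.0, 1.0):
    years = math.log2(target_ops / current_ops) * doubling_years
    print(f"doubling every {doubling_years:.0f} yr -> parity in ~{years:.0f} years")
```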

So the raw power needed for AGI is already technically available; within 10 years it will leave China's supercomputer halls and spread across the world. But computing power alone isn't enough. The next question is: how do we give all that power human-level intelligence?

The second step to creating AGI: giving it intelligence

This part is genuinely hard. In truth, nobody really knows how to make a machine intelligent - we are still figuring out how to create a human-level mind that can tell a cat from a dog, pick out a B scrawled in the snow, and analyze a second-rate movie. But there are a handful of forward-looking strategies, and at some point one of them should work.

1. Copy the brain

This option is like scientists sitting in a classroom next to a kid who is very smart and aces every test: even studying diligently, they don't come close to keeping up, so eventually they decide - to hell with it, we'll just copy his answers. There is sense in this: we can't manage to build a super-complex computer, so why not base one on one of the best prototypes in the universe - our brain?

The scientific world is working hard to figure out how our brain works and how evolution produced such a complex thing. By the most optimistic estimates, this will be done only by 2030. But once we understand the brain's secrets, its efficiency and power, we can borrow its methods in our technology. For example, one computer architecture that mimics the brain's operation is the neural network. It starts as a network of transistor "neurons" connected to one another by inputs and outputs, and it knows nothing - like a newborn. The system "learns" by trying to perform tasks, such as recognizing handwritten text: connections between transistors are strengthened after a correct answer and weakened after a wrong one. After many cycles of question and answer, the system forms smart neural pathways optimized for specific tasks. The brain learns similarly, though in a far more complex way, and as we keep studying it, we keep discovering incredible new ways to improve neural networks.
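Here is a minimal sketch of that strengthen-on-correct, weaken-on-wrong learning loop: a single artificial "neuron" (a perceptron) learning the logical OR function. Real neural networks - and real brains - are of course far more complex.

```python
import random

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]  # starts knowing nothing
bias = 0.0
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table

def answer(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for _ in range(100):                     # many question/answer cycles
    for x, correct in examples:
        error = correct - answer(x)      # 0 when the answer is right
        # connections strengthen or weaken depending on the outcome
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

print([answer(x) for x, _ in examples])  # [0, 1, 1, 1] once trained
```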

An even more extreme form of plagiarism is a strategy called whole brain emulation. The goal: cut a real brain into thin slices, scan each one, faithfully reconstruct a 3D model in software, and run the model on a powerful computer. We would then have a computer that can officially do everything a brain can do: it would just need to learn and gather information. If engineers get really good at this, they could emulate a real brain with such accuracy that, once uploaded, the brain's actual identity and memory would remain intact. If the brain belonged to Vadim before he died, the computer would wake up as Vadim, now a human-level AGI, and we, in turn, could work on turning Vadim into an unimaginably intelligent ASI - which he would surely be delighted by.

How far are we from whole brain emulation? So far we have only emulated the brain of a millimeter-long flatworm, which contains a total of 302 neurons. The human brain contains about 100 billion. If chasing that number seems hopeless, remember the exponential rate of progress: the next step is emulating an ant's brain, then a mouse's, and then a human's is within reach.
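A back-of-the-envelope check on why exponential thinking changes the picture: if our capacity to emulate neurons doubled at a steady rate, the gap between the flatworm and the human brain is not a trillion-step climb but a few dozen doublings.

```python
import math

# doublings from a 302-neuron flatworm to a ~100-billion-neuron human brain
doublings = math.log2(1e11 / 302)
print(f"~{doublings:.0f} doublings needed")  # ~28 -- daunting, but finite
```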

2. Try to follow the trail of evolution

Well, if we decide the smart kid's answers are too hard to copy, we can instead copy the way he studies and prepares for tests. What do we know? Building a computer as powerful as the brain is possible - the evolution of our own brains proves it. And if the brain is too complex to emulate, we can try emulating evolution instead. The point is that even if we can emulate a brain, doing so might be like trying to build an airplane by flapping our arms in ridiculous imitation of a bird's wings; more often, good machines come from a machine-oriented approach, not from exact imitation of biology.

How do you simulate evolution to build AGI? The method, called "genetic algorithms," would work something like this: there is a performance process and an evaluation process, repeated over and over (just as biological creatures "perform" by living and are "evaluated" by whether they reproduce). A group of computers attempts tasks, and the most successful share their characteristics with other computers - "breeding" - while the less successful are mercilessly dumped onto the scrap heap of history. Over many, many iterations, this natural-selection process breeds better and better computers. The challenge is to create and automate the breeding and evaluation cycles so that evolution runs on its own.
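For the flavor of it, here is a toy genetic algorithm. The "task" - maximizing the number of 1s in a bit string - is a trivial stand-in for doing real work well; the rest is exactly the perform/evaluate/breed/discard loop the paragraph outlines.

```python
import random

random.seed(0)
GENOME_LEN, POP_SIZE, SURVIVORS = 20, 30, 10

def fitness(genome):
    return sum(genome)                       # the evaluation step

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:SURVIVORS]         # the successful pass on their traits
    children = []
    while len(parents) + len(children) < POP_SIZE:
        a, b = random.sample(parents, 2)
        cut = random.randrange(GENOME_LEN)
        child = a[:cut] + b[cut:]            # crossover ("breeding")
        if random.random() < 0.1:            # occasional random mutation
            child[random.randrange(GENOME_LEN)] ^= 1
        children.append(child)
    population = parents + children          # the rest are discarded

print(max(map(fitness, population)))         # best score after 50 generations
```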

The downside of copying evolution is that evolution needs a billion years to do things, and we want to do them in a few decades.

But we have many advantages evolution lacks. First, evolution has no foresight: it works randomly and produces useless mutations along the way, whereas we can steer the process toward the tasks we set. Second, evolution has no goal, not even intelligence - in some environments a species wins without intelligence (which, after all, consumes a lot of energy). We, by contrast, can aim directly at increasing intelligence. Third, to select for intelligence, evolution must make a host of side improvements - like redistributing how cells consume energy - while we can simply strip out the extras and use electricity. Without a doubt, we would be faster than evolution - but it is still unclear whether we could surpass it.

3. Leave computers to themselves

This is the last resort: scientists in despair try to program the program to develop itself. Yet this method may prove the most promising of all. The idea is to build a computer with two core skills - researching AI and coding changes into itself - allowing it not only to learn but to improve its own architecture. We would teach computers to be their own computer engineers so they can bootstrap their own development, with their main task being to figure out how to make themselves smarter. We will return to this in more detail.

All this can happen very soon

Rapid advances in hardware and parallel experimentation with software are running simultaneously, and AGI may emerge quickly and unexpectedly, for two main reasons:

1. Exponential growth is intense, and what looks like a snail's pace can quickly turn into seven-league leaps - as this animation illustrates:

[Animated image: hi-news.ru/wp-content/uploads/2015/02/gif.gif]

2. When it comes to software, progress can seem slow, but then a single breakthrough changes its speed in an instant (a good example: in the days of the geocentric worldview, people struggled to calculate the workings of the universe, but the shift to heliocentrism suddenly made everything much easier). And with a computer that improves itself, things may seem extremely slow while just one amendment to the system stands between it and a thousandfold jump in effectiveness over a human or over its previous version.

The road from AGI to ASI

At some point we will get AGI - artificial general intelligence, computers with a general human level of intelligence. Computers and humans will live side by side. Or they won't.

The point is that AGI with the same level of intelligence and computing power as humans will still have significant advantages over humans. For example:

Hardware

Speed. The brain's neurons operate at about 200 Hz, while today's microprocessors (which are far slower than what we will have by the time AGI arrives) run at about 2 GHz - 10 million times faster than our neurons. And the brain's internal communications, which travel at up to 120 m/s, are hopelessly outclassed by a computer's ability to use optics and move signals at the speed of light.

Size and storage. The brain's size is fixed by our skulls, and it couldn't grow much anyway, or its 120 m/s internal communications would take too long to travel from one structure to another. Computers can grow to any physical size, add hardware, and expand RAM and long-term storage - all beyond our capabilities.

Reliability and durability. It isn't just computer memory that is more precise than ours. Computer transistors are more accurate than biological neurons and less prone to wear (and can be replaced or repaired anyway). Human brains tire quickly; computers can run non-stop, 24 hours a day, 7 days a week.

Software

Editability, upgradability and breadth. Unlike the human brain, a computer program can easily be fixed, updated or experimented on, and the areas where a brain is weak can be upgraded. The human software for vision is superbly designed, yet from an engineering standpoint it is still very limited: we see only in the visible band of light.

Collective capability. Humans outclass other species in grand collective intelligence. Beginning with the development of language and the formation of large communities, advancing through the inventions of writing and printing, and now turbocharged by tools like the Internet, humanity's collective intelligence is a major reason we can call ourselves the crown of evolution. But computers will be better at this too. A worldwide network of AIs running a single program, constantly synchronizing and self-developing, could add any new piece of information to the shared database instantly, wherever it was learned. Such a group could also work toward a single goal as one whole, since computers are not burdened by the dissent, flagging motivation and self-interest that people are.

An AI that most likely becomes AGI through programmed self-improvement will not regard "human-level intelligence" as a major milestone - that milestone matters only to us - and it will have no reason to stop at our dubious level. Given the advantages even a human-level AGI would have, it is fairly obvious that human intelligence would be no more than a brief flash on its race toward superior intellect.

This turn of events may surprise us very, very much. The thing is that, from our vantage point, a) the only yardstick we have for the quality of intelligence is animal intelligence, which is below ours by default; and b) to us, the smartest people are ALWAYS far smarter than the stupidest. Like this:

[Image: an intelligence scale rising from animals through the stupidest and smartest humans]

That is, while the AI is still climbing toward our level, we see it merely becoming smarter, approaching the level of an animal. When it reaches the lowest human level - Nick Bostrom uses the term "village idiot" - we will be delighted: "Wow, it's already like a moron. Cool!" The catch is that on the overall spectrum of intelligence, the distance from the village idiot to Einstein is tiny - so very soon after reaching village-idiot level and becoming AGI, the AI will suddenly be smarter than Einstein.

[Image: the same scale, with AI shooting far past the human range]

And what will happen next?

Explosion of intelligence

I hope you have found this interesting and fun so far, because from this point on the topic becomes abnormal and creepy. We should pause and remind ourselves that everything stated above and below is real science and real forecasts of the future made by the most prominent thinkers and scientists. Keep that in mind.

So, as noted above, all of our modern models for achieving AGI include a variant in which the AI improves itself. And once it becomes AGI, even the systems and methods it grew up with become smart enough to improve themselves - if they choose to. This brings up an interesting concept: recursive self-improvement. It works like this.

An AI system at a certain level - say, village idiot - is programmed to improve its own intelligence. Once it develops - say, to Einstein's level - it continues developing with Einstein's intellect: each improvement takes less time, and the leaps grow larger and larger, letting the system overtake any human and keep climbing. With this rapid development, the AGI soars to celestial heights of intellect and becomes a superintelligent ASI. The process is called an intelligence explosion, and it is the clearest example of the law of accelerating returns.
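A toy model makes the dynamic vivid. The parameters below are arbitrary illustrations, not predictions: the point is only that if each improvement cycle is run by a smarter system and therefore takes less time, total elapsed time stays bounded while capability grows without limit.

```python
intelligence = 1.0    # 1.0 = "village idiot" level, say
years = 0.0
cycle = 10.0          # assume the first improvement takes a decade

while intelligence < 200_000:
    years += cycle
    intelligence *= 2     # each cycle doubles capability...
    cycle /= 2            # ...and halves the time the next one needs
    print(f"after {years:6.2f} years: {intelligence:>9,.0f}x village idiot")
# total elapsed time stays under 20 years even as intelligence explodes
```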

Scientists argue about how quickly AI will reach AGI level - most believe we will get AGI by 2040, in just 25 years, which is very, very soon by the standards of technological development. Continuing the logical chain, it is easy to assume that the jump from AGI to ASI will also happen extremely quickly. Something like this:

"It took decades for the first AI system to reach the lowest level of general intelligence, but it finally happened. A computer understands the world around it the way a four-year-old human does. Suddenly, literally an hour after reaching that milestone, the system produces a grand theory of physics uniting general relativity and quantum mechanics, something no human has managed to do. An hour and a half later, the AI becomes an ASI, 170,000 times smarter than any human."

We don't even have the words to describe superintelligence on this scale. In our world, "smart" means a person with an IQ of 130 and "stupid" one with 85, but we have no label for an IQ of 12,952. Our measuring sticks weren't built for that.

The history of humankind says it loudly and clearly: with intelligence come power and strength. That means that when we create artificial superintelligence, it will be the most powerful creature in the history of life on Earth, and every living being, humans included, will be entirely in its power - and this may happen within twenty years.

If our meager brains managed to invent Wi-Fi, then something a hundred, a thousand, a billion times smarter than us could easily calculate the position of every atom in the universe at any moment. Everything we would call magic, every power ascribed to an omnipotent deity - all of it would be at the ASI's disposal. Technology that reverses aging, cures any disease, eliminates hunger and even death, controls the weather - all of it would suddenly become possible. So would an instant end to all life on Earth. The smartest people on our planet agree: as soon as artificial superintelligence appears in the world, it will amount to the appearance of a god on Earth. And one important question remains.
