Artificial intelligence. Part two: extinction or immortality?

Here is the second part of an article from the series "Wait, how can all this be real, and why is it still not being talked about on every corner?" In the first part, we learned that an intelligence explosion is gradually creeping up on the people of planet Earth, as AI develops from narrowly focused intelligence toward general intelligence and, finally, artificial superintelligence.

"Perhaps we are facing an extremely difficult problem, and it is not known how much time is allotted to solve it, but the future of humanity may depend on its solution." - Nick Bostrom.

The first part of the article began innocently enough. We discussed artificial narrow intelligence (ANI - AI that specializes in solving one specific problem, like plotting routes or playing chess), of which there is plenty in our world. Then we analyzed why it is so hard to get from ANI to artificial general intelligence (AGI - AI that can match a human in solving any intellectual problem). We concluded that the exponential pace of technological progress hints that AGI may be around the corner. In the end, we decided that as soon as machines reached human-level intelligence, events might immediately take a dramatic turn.


As usual, we stare at the screen, unable to believe that artificial superintelligence (ASI - an intelligence that is much smarter than any human) could appear within our lifetime, and try to pick the emotion that best matches our opinion on the issue.

Before we dive into the specifics of ASI, let's remind ourselves what it means for a machine to be superintelligent.

The main distinction lies between speed superintelligence and quality superintelligence. Often the first thing that comes to mind when thinking about a superintelligent computer is that it can think much faster than a human - millions of times faster, comprehending in five minutes what would take a person ten years. ("I know kung fu!")

It sounds impressive, and an ASI really should think faster than any human - but the main separating feature will be the quality of its intelligence, which is something else entirely. Humans are much smarter than monkeys not because they think faster, but because their brains contain a number of ingenious cognitive modules that enable complex linguistic representation, long-term planning and abstract thinking, of which monkeys are not capable. If you accelerated a monkey's brain a thousand times, it would not become smarter than us - even after ten years it would not be able to assemble a model kit from the instructions, something that would take a human a couple of hours at most. There are things a monkey will never learn, no matter how many hours it spends or how fast its brain works.

Moreover, a monkey cannot know the world the way a human does, because its brain is simply unable to grasp the existence of other worlds - a monkey may know what a human is and what a skyscraper is, but it will never understand that the skyscraper was built by humans. In its world, everything belongs to nature, and the macaque not only cannot build a skyscraper, it cannot even conceive that anyone could build one at all. And that is the result of a small difference in the quality of intelligence.

On the grand scale of intelligence we are talking about - or simply by the standards of biological beings - the difference in quality of intelligence between humans and monkeys is tiny. In the previous article, we placed biological cognitive abilities on a staircase.


To understand how serious a superintelligent machine would be, place it on that staircase two steps above a human. That machine would be only slightly superintelligent, but its superiority over our cognitive abilities would be like ours over a monkey's. And just as a chimpanzee will never comprehend that a skyscraper can be built, we may never understand what a machine a couple of steps higher understands, even if it tries to explain it to us. And that is just a couple of steps. A machine near the top of the staircase would see us the way we see ants - it could spend years teaching us the simplest things from its position, and the attempts would be completely hopeless.

The type of superintelligence we will talk about today lies far beyond this staircase. This is an intelligence explosion: the smarter a machine becomes, the faster it can increase its own intelligence, gradually gathering momentum. It might take such a machine years to surpass a chimpanzee, but then perhaps only a couple of hours to surpass us by a couple of steps. From that moment on, the machine could be jumping four steps every second. That is why we should understand that very soon after the first news that a machine has reached human-level intelligence, we may face the reality of coexisting on Earth with something that is much higher than us on this staircase (or maybe millions of times higher).


And since we have established that it is completely useless to try to understand the power of a machine only two steps above us, let's state once and for all that there is no way to know what an ASI will do or what the consequences will be for us. Anyone who claims otherwise simply does not understand what superintelligence means.

Evolution developed the biological brain slowly and gradually, over hundreds of millions of years, and in that sense, if humans create a superintelligent machine, we will trample all over evolution. Or perhaps this, too, is part of evolution - perhaps evolution works in such a way that intelligence develops gradually until it reaches a turning point that heralds a new future for all living things.


For reasons we will discuss later, a large segment of the scientific community believes the question is not whether we will reach this tipping point, but when.

Where do we end up after this?

I think no one in this world, neither you nor I, can say what will happen when we reach the tipping point. Oxford philosopher and leading AI theorist Nick Bostrom believes we can boil all possible outcomes down to two broad categories.

First, looking at history, we know the following about life: species appear, exist for a certain time, and then inevitably fall off the balance beam and die out.


"All species die out" has been as reliable a rule in history as "all people die someday." 99.9% of species have fallen from a log of life, and it is quite clear that if a species hangs on this log for too long, a gust of natural wind or a sudden asteroid will turn the log over. Bostrom calls extinction the state of an attractor - a place where all species balance so as not to fall where no species has returned yet.

And although most scientists admit that an ASI would have the ability to doom humans to extinction, many also believe that using the capabilities of an ASI would allow individuals (and the species as a whole) to reach the second attractor state - species immortality. Bostrom believes that species immortality is just as much an attractor as species extinction: if we get there, we will be doomed to eternal existence. Thus, even though every species so far has fallen off this beam into the maelstrom of extinction, Bostrom believes the beam has two sides; it is just that nothing on Earth has yet been intelligent enough to figure out how to fall off on the other side.


If Bostrom and others are right - and judging by all the information available to us, they may well be - we need to accept two very shocking facts:

The emergence of ASI will, for the first time in history, open the way for a species to achieve immortality and drop out of the fatal cycle of extinction.

The emergence of ASI will have such an unimaginably huge impact that it will most likely push humanity off this beam in one direction or the other.

It may be that when evolution reaches this tipping point, it always terminates the relationship between people and the stream of life and creates a new world - with or without people.

This raises an interesting question that only the laziest would fail to ask: when will we reach this tipping point, and on which side of the beam will it land us? No one in the world knows the answer to either half of this double question, but many very smart people have spent decades trying to figure it out. For the rest of this article, we will look at what they have come up with.

* * *

Let's start with the first part of this question: when should we reach the tipping point? In other words: how much time is left until the first machine reaches superintelligence?

Opinions vary widely. Many, including Professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy and futurist Ray Kurzweil, agreed with machine learning expert Jeremy Howard when he presented his graph of exponential growth in a TED Talk.


These people share the view that ASI is coming soon: the exponential growth that seems slow to us today will literally explode in the next few decades.

Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, computer scientist Ernest Davis and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil seriously underestimate the magnitude of the problem, and that we are not nearly as close to the tipping point.

Kurzweil's camp counters that the only underestimation going on is the disregard of exponential growth, and that the doubters can be compared to those who looked at the slowly growing Internet of 1985 and argued that it would have no impact on the world in the near future.

The doubters may parry that progress finds each subsequent step harder to take when it comes to the exponential development of intelligence, which would neutralize the typical exponential nature of technological progress. And so on.

The third camp, which includes Nick Bostrom, disagrees with both the first and the second, arguing that a) all of this absolutely can happen in the near future; and b) there is no guarantee of that - it may also take much longer.

Still others, like the philosopher Hubert Dreyfus, believe that all three camps are naive to assume there even is a tipping point, and that most likely we will never reach ASI at all.

What happens when we put all these opinions together?

In 2013, Bostrom conducted a survey of hundreds of artificial intelligence experts during a series of conferences, asking: "What are your predictions for achieving human-level AGI?" The experts were asked to name an optimistic year (one in which we will have AGI with 10 percent probability), a realistic estimate (the year in which we will have AGI with 50 percent probability) and a confident estimate (the earliest year in which AGI will appear with 90 percent probability). Here are the results:

* Average optimistic year (10%): 2022

* Average realistic year (50%): 2040

* Average pessimistic year (90%): 2075

The median respondent believes it is more likely than not that we will have AGI within 25 years. The 90 percent probability of AGI by 2075 means that if you are still fairly young now, it will most likely happen within your lifetime.

A separate study, recently conducted by James Barrat (author of the acclaimed and very good book Our Final Invention, excerpts from which I have presented to Hi-News.ru readers) and Ben Goertzel at the annual AGI Conference, simply asked participants to name the year by which we will get to AGI: 2030, 2050, 2100, later, or never. Here are the results:

* 2030: 42% of respondents

* 2050: 25%

* 2100: 20%

* After 2100: 10%

* Never: 2%

These results are similar to Bostrom's. In Barrat's poll, more than two-thirds of respondents believe AGI will be here by 2050, and a little less than half expect AGI within the next 15 years. It is also striking that only 2% of respondents see no AGI in our future at all.

But AGI is not the tipping point - ASI is. So when, according to the experts, will we have ASI?

Bostrom also asked the experts how likely we are to reach ASI: a) within two years of reaching AGI (that is, almost instantly, due to an intelligence explosion); b) within 30 years. The results?

The median answer put the probability of a rapid (two-year) transition from AGI to ASI at only 10%, but the probability of a transition within 30 years or less at 75%.

From these data we do not know what date the respondents would give a 50 percent chance of ASI, but based on the two answers above, let's assume it is 20 years. That is, the world's leading AI experts believe the turning point will come around 2060 (AGI appears in 2040, plus 20 years for the transition from AGI to ASI).


Of course, all of the statistics above are speculative and simply represent the opinions of experts in the field of artificial intelligence, but they also indicate that most interested people agree that ASI is likely to arrive by 2060. In just 45 years.

Let's move on to the second part of the question: when we reach the tipping point, which side of the fatal choice will we fall to?

Superintelligence will wield unprecedented power, and the critical question for us is the following:

Who or what will control this power, and what will its motivation be?

The answer to this question will determine whether ASI turns out to be an unbelievably great development, an unfathomably terrible development, or something in between.

Of course, the expert community is trying to answer this question too. Bostrom's survey analyzed the likely consequences of AGI's impact on humanity, and it turned out that with a 52 percent chance everything will go very well, and with a 31 percent chance everything will go either badly or extremely badly. The poll attached at the end of the previous part of this topic, conducted among you, dear Hi-News readers, showed roughly the same results. For a relatively neutral outcome, the probability was only 17%. In other words, we all believe that AGI is going to be a big deal. It is also worth noting that this survey concerned the emergence of AGI - in the case of ASI, the neutral percentage would presumably be even lower.

Before we go deeper into the good and bad sides of the question, let's combine both halves - "when will this happen?" and "will it be good or bad?" - into a table that covers the views of most experts.


We'll talk about the main camp in a minute, but first decide where you stand. Chances are, you are in the same place I was before I started working on this topic. There are several reasons why people don't think about this topic at all:

* As mentioned in the first part, the movies have seriously muddled people and facts by presenting unrealistic AI scenarios, which makes us feel that AI is not something to take seriously at all. James Barrat compares the situation to the Centers for Disease Control (CDC) issuing a serious warning about vampires in our future.

* Due to so-called cognitive biases, it is very hard for us to believe that something is real until we have proof. One can readily imagine the computer scientists of 1988 regularly discussing the far-reaching consequences of the Internet and what it might become, but people hardly believed it would change their lives until it actually did. Computers simply couldn't do such things in 1988, so people looked at their machines and thought, "Really? This is what will change the world?" Their imaginations were limited to what personal experience had taught them about what a computer was, and it was hard to picture what computers might become. The same is happening now with AI. We have heard it will become a serious thing, but since we have not yet encountered it face to face, and on the whole see only rather weak manifestations of AI in our modern world, it is hard for us to believe it will radically change our lives. It is against these biases that numerous experts from all camps, as well as simply interested people, are fighting, trying to grab our attention through the noise of everyday collective egocentrism.

* Even if we believed all of this - how many times today have you thought about the fact that you will spend the rest of eternity in nonexistence? Not many, right? Even though this fact is far more important than anything you do day in and day out. This is because our brains are normally focused on small, everyday things, no matter how crazy the long-term situation we find ourselves in. That is simply how we are made.

One of the goals of this article is to get you out of the camp called "I like to think about other things" and into one of the expert camps, even if you end up standing at the crossroads between the two dotted lines in the square above, completely undecided.

In the course of research, it becomes obvious that most people's opinions quickly drift toward the "main camp," and three-quarters of the experts fall into two subcamps within it.


We will visit both of these camps in detail. Let's start with the fun one.

Why could the future be our greatest dream?

As we explore the AI world, we find a surprisingly large crowd in the comfort zone. The people in the upper right square are buzzing with excitement. They believe we will land on the good side of the beam, and they are also sure that we will inevitably get there. For them, the future is all the very best one could dream of.

What distinguishes these people from other thinkers is not that they want to be on the happy side - it is that they are certain the happy side is the one waiting for us.

Where this confidence comes from is a matter of dispute. Critics believe it comes from a dazzling excitement that blinds its holders to the potential downsides. Proponents reply that gloomy predictions have always been naive; technology continues to help us more than it harms us, and will always do so.

You are free to choose either of these opinions, but set skepticism aside and take a good look at the happy side of the balance beam, trying to accept the fact that everything you are about to read may really happen. If you showed hunter-gatherers our world of comfort, technology and endless abundance, it would seem like magical fiction to them - and we should be modest enough to admit that an equally incomprehensible transformation may await us in the future.

Nick Bostrom describes three paths that a superintelligent AI system can take:

* An oracle, which can answer any precisely posed question, including complex questions that people cannot answer - for example, "how can a car engine be made more efficient?" Google is a primitive kind of oracle.

* A genie, which will execute any high-level command - using a molecular assembler to create a new, more efficient version of a car engine, say - and then wait for the next command.

* A sovereign, which will have broad access and the ability to function freely in the world, making its own decisions and improving as it goes. It might invent a cheaper, faster and safer means of private transport than the car.

These questions and tasks, which seem difficult to us, would appear to a superintelligent system as if someone asked it to improve the "my pencil fell off the table" situation, in which you would simply pick the pencil up and put it back.

Eliezer Yudkowsky, an American artificial intelligence researcher, put it well:

“There are no hard problems, only problems that are hard for a certain level of intelligence. Go one step higher (in terms of intelligence), and some problems will suddenly move from the category of "impossible" to the category of "obvious." One step higher still, and they will all become obvious."

There are plenty of impatient scientists, inventors and entrepreneurs in the comfort zone of our table, but for a tour of the brightest side of this best of all possible worlds we need only one guide.

Ray Kurzweil divides opinion. Some idolize his ideas, some despise them. Some remain in the middle - Douglas Hofstadter, discussing the ideas in Kurzweil's books, aptly noted that "it is as if you took a lot of good food and some dog poop and mixed everything in such a way that it is impossible to tell what is good and what is bad."

Whether you like his ideas or not, it is impossible to pass them by without a flicker of interest. He began inventing as a teenager, and in the following years invented several important things, including the first flatbed scanner, the first scanner that converted text to speech, the well-known Kurzweil music synthesizer (the first real electric piano) and the first commercially successful speech recognizer. He is also the author of five much-discussed books. Kurzweil is prized for his daring predictions, and his track record is quite good: in the late 1980s, when the Internet was still in its infancy, he predicted that by the 2000s the Web would become a global phenomenon. The Wall Street Journal has called Kurzweil a "restless genius," Forbes a "global thinking machine," Inc. Magazine "Edison's rightful heir," and Bill Gates "the best of those who predict the future of artificial intelligence." In 2012, Google co-founder Larry Page invited Kurzweil to become a director of engineering at the company. He also co-founded Singularity University, which is hosted by NASA and partly sponsored by Google.

His biography matters. When Kurzweil talks about his vision of the future, it sounds stark raving mad, and the really crazy thing is that he is far from crazy - he is an incredibly smart, educated and sane person. You may think he is wrong in his predictions, but he is no fool. Kurzweil's predictions are shared by many comfort zone experts, including Peter Diamandis and Ben Goertzel. Here is what he thinks will happen.

Chronology

Kurzweil believes computers will reach the level of artificial general intelligence (AGI) by 2029, and that by 2045 we will have not only artificial superintelligence but a completely new world - the time of the so-called singularity. His AI timeline used to be considered outrageously exaggerated, but over the past 15 years the rapid development of narrowly focused AI (ANI) systems has brought many experts over to Kurzweil's side. His predictions are still more ambitious than those in Bostrom's survey (AGI by 2040, ASI by 2060), but not by much.

According to Kurzweil, the singularity of 2045 will be brought about by three simultaneous revolutions: in biotechnology, in nanotechnology and, most importantly, in AI. But before we continue - and nanotechnology follows artificial intelligence everywhere in this story - let's spend a moment on nanotechnology.


A few words about nanotechnology

We usually apply the word nanotechnology to technologies that manipulate matter in the range of 1-100 nanometers. A nanometer is one billionth of a meter, or a millionth of a millimeter; the 1-100 nanometer range covers viruses (100 nm across), DNA (10 nm wide), hemoglobin molecules (5 nm), glucose (1 nm) and so on. If we ever master nanotechnology, the next step will be manipulating individual atoms, which are only about one order of magnitude smaller (~0.1 nm).

To understand why humans run into problems trying to manipulate matter at such a scale, let's jump to a larger one. The International Space Station is about 430 kilometers above the Earth. If a human were a giant whose head reached up to the ISS, he would be about 250,000 times larger than he is now. If you magnify something 1 to 100 nanometers in size 250,000 times, you get up to 2.5 centimeters. Nanotechnology is the equivalent of a human with his head at the height of the ISS orbit trying to manipulate things the size of a grain of sand or an eyeball. To get to the next level - controlling individual atoms - the giant would have to carefully position objects 1/40 of a millimeter across. Ordinary people would need a microscope to see them.
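The arithmetic behind this analogy is easy to check. Here is a minimal sketch, using the 250,000x factor and the sizes quoted above; the ~1.7 m human height is an assumption for illustration:

```python
# Scale analogy: magnify the nanoscale by the same factor that would put
# a human's head at the altitude of the ISS.
SCALE = 250_000

person_m = 1.7                         # assumed human height, meters
print(person_m * SCALE / 1000)         # -> 425.0 km, roughly the ISS altitude

nm = 1e-9                              # one nanometer, in meters
print(100 * nm * SCALE * 100)          # 100 nm magnified -> 2.5 (cm)
print(0.1 * nm * SCALE * 1000)         # a ~0.1 nm atom -> 0.025 (mm), i.e. 1/40 mm
```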

Richard Feynman first spoke about nanotechnology back in 1959. He said: "The principles of physics, as far as I can tell, do not speak against the possibility of controlling things atom by atom. In principle, a physicist could synthesize any chemical substance that a chemist has written down. How? By placing atoms where the chemist says, to obtain the substance. That is all there is to it. If you know how to move individual molecules or atoms, you can do almost anything."

Nanotechnology became a serious scientific field in 1986, when engineer Eric Drexler laid out its foundations in his seminal book Engines of Creation; Drexler himself believes that those who want to learn about modern ideas in nanotechnology should read his 2013 book Radical Abundance.

A few words about "gray goo"

Let's delve a little deeper into nanotechnology. In particular, the topic of "gray goo" is one of the less pleasant topics in the field, and it cannot be ignored. Older versions of nanotechnology theory proposed a method of nano-assembly involving trillions of tiny nanobots working together to create something. One way to get trillions of nanobots is to build one that can self-replicate: one becomes two, two become four, and so on. In a day, several trillion nanobots would appear. Such is the power of exponential growth. Funny, isn't it?

It is funny right up until it leads to the apocalypse. The problem is that the same power of exponential growth that makes it so convenient to quickly create a trillion nanobots makes self-replication a terrifying thing in the long run. What if the system glitches and, instead of stopping at a couple of trillion, the nanobots keep multiplying? What if they feed on carbon to replicate? The Earth's biomass contains 10^45 carbon atoms. A nanobot would consist of on the order of 10^6 carbon atoms, so 10^39 nanobots would devour all life on Earth in just 130 replications. An ocean of nanobots ("gray goo") would flood the planet. Scientists think a nanobot could replicate in about 100 seconds, which means a simple mistake could kill all life on Earth in just three and a half hours.
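The arithmetic here is simple enough to reproduce; a minimal sketch using the figures quoted above (10^45 carbon atoms in Earth's biomass, ~10^6 atoms per nanobot, one replication cycle per 100 seconds):

```python
import math

CARBON_ATOMS_IN_BIOMASS = 1e45   # figure quoted above
ATOMS_PER_NANOBOT = 1e6          # figure quoted above
REPLICATION_TIME_S = 100         # one doubling every 100 seconds

# A few trillion bots (2**40 ~ 1e12) would take only ~40 doublings,
# comfortably within the "day" mentioned in the previous section.
nanobots_needed = CARBON_ATOMS_IN_BIOMASS / ATOMS_PER_NANOBOT   # 1e39

# Doublings needed to go from a single bot to 1e39 bots:
generations = math.ceil(math.log2(nanobots_needed))
print(generations)                                    # -> 130 replications

hours = generations * REPLICATION_TIME_S / 3600
print(round(hours, 1))                                # -> 3.6 hours ("about 3.5")
```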

It could be even worse if nanotechnology fell into the hands of terrorists or hostile specialists. They could create several trillion nanobots and program them to quietly spread around the world over a couple of weeks. Then, at the push of a button, in just 90 minutes they would eat everything, with no chance of stopping them.

While this horror story has been widely discussed for years, the good news is that it is just that - a horror story. Eric Drexler, who coined the term "gray goo," recently said the following: "People love scary stories, and this one belongs with the stories about zombies. The idea itself is already eating brains."

Once we truly get to the bottom of nanotechnology, we will be able to use it to create technical devices, clothing, food, bio-products - blood cells, fighters of viruses and cancer, muscle tissue and so on - anything at all. And in a world that uses nanotechnology, the cost of a material will no longer be tied to its scarcity or to the complexity of its manufacturing process, but to the complexity of its atomic structure. In a nanotechnology world, a diamond could be cheaper than an eraser.

We are not even close yet. And it is not entirely clear whether we underestimate or overestimate the difficulty of the path. But everything points to nanotechnology not being far off. Kurzweil assumes we will have it by the 2020s. The world's governments know that nanotechnology may promise a great future, and they are investing many billions in it.

Just imagine what possibilities a superintelligent computer would gain with access to a reliable nanoscale assembler. But nanotechnology is our idea, and we struggle even to ride it. What if it is a mere trifle for an ASI, and the ASI comes up with technologies many times more powerful than anything we can even assume in principle? We have agreed: no one can imagine what artificial superintelligence will be capable of. Our brains, it is believed, are unable to predict even the minimum of what will happen.

What could ASI do for us?


Armed with superintelligence and all the technology superintelligence could create, ASI would probably be able to solve all of humanity's problems. Global warming? ASI would first halt carbon emissions by coming up with a host of efficient non-fossil ways of generating energy. It would then devise an effective, innovative way to remove excess CO2 from the atmosphere. Cancer and other diseases? Not a problem - healthcare and medicine would change beyond imagination. World hunger? ASI could use nanotechnology to build meat from scratch that is identical to natural - real - meat.

Nanotechnology could turn a pile of garbage into a vat of fresh meat or other food (not necessarily even in its usual shape - imagine a giant apple cube) and distribute all this food around the world using advanced transportation systems. Of course, this would be great for animals, which would no longer have to die for food. ASI could also do many other things, like saving endangered species or even bringing extinct ones back from preserved DNA. ASI could solve our most difficult macroeconomic problems - our hardest economic debates, ethical and philosophical questions, global trade - all of which would be painfully obvious to it.

But there is something very special that ASI could do for us, something so alluring and tantalizing it would change everything: ASI could help us cope with mortality. As you gradually comprehend the capabilities of ASI, you may well reconsider all your ideas about death.

Evolution had no reason to extend our lifespan longer than it is now. If we live long enough to give birth and raise children to the point where they can fend for themselves, that is enough for evolution. From an evolutionary point of view, 30-odd years is sufficient, and there was no reason for life-prolonging mutations to be favored by natural selection. William Butler Yeats described our species as "a soul fastened to a dying animal." Not much fun.

And since we all die someday, we live with the idea that death is inevitable. We think of aging like time: both keep moving forward, and there is no way to stop them. But that thought is treacherous: captured by it, we forget to live. Richard Feynman wrote:

“There is a wonderful thing about biology: there is nothing in this science that points to the necessity of death. If we want to build a perpetual motion machine, we realize that we have already discovered enough laws of physics that either say it is impossible or say the laws are wrong. But there is nothing in biology that indicates the inevitability of death. This leads me to believe that death is not inevitable at all, and it is only a matter of time before biologists discover the cause of this problem, and this terrible universal disease will be cured."

The fact is that aging has nothing to do with time as such. Aging is the physical materials of the body wearing out. Car parts degrade too - but is their aging inevitable? If you repair your car as the parts wear out, it will run forever. The human body is no different - just more complex.

Kurzweil talks about intelligent, Wi-Fi-connected nanobots in the bloodstream that could perform countless tasks for human health, including regularly repairing or replacing worn-out cells anywhere in the body. Perfecting this process (or finding an alternative suggested by a smarter ASI) would not only keep the body healthy - it could reverse aging. The difference between the body of a 60-year-old and a 30-year-old is a handful of physical issues that could be corrected with the right technology. ASI could build a machine that a person would enter at 60 and leave at 30.

Even a degraded brain could be renewed. An ASI would surely know how to do this without affecting the brain's data (personality, memories and so on). A 90-year-old suffering from complete brain degradation could be overhauled, renewed and returned to the start of his life. It may seem absurd, but the body is just a collection of atoms, and an ASI could surely manipulate them - any atomic structures - with ease. It is not that absurd.

Kurzweil also believes that artificial materials will be integrated into the body more and more as time goes on. For a start, organs could be replaced with super-advanced machine versions that would last forever and never fail. Then we could do a complete body redesign, for example replacing red blood cells with perfect nanobots that move on their own, eliminating the need for a heart altogether. We could also improve our cognitive abilities, start thinking billions of times faster, and access all the information available to humanity through the cloud.

The possibilities for comprehending new horizons would be truly endless. People have managed to endow sex with a new purpose: we do it for pleasure, not just for reproduction. Kurzweil believes we can do the same with food. Nanobots could deliver ideal nutrition directly to the cells of the body, letting anything unhealthy pass through it untouched. Nanotechnology theorist Robert Freitas has already designed a replacement for blood cells that, if implemented in the human body, would let a person go without breathing for 15 minutes - and that was designed by a mere human. Imagine what an ASI could come up with.

Ultimately, Kurzweil believes humans will reach a point where they become completely artificial; a time when we will look at biological materials and marvel at how unbelievably primitive they were; a time when we will read about the early stages of human history and be amazed that germs, accidents, disease or simply old age could kill a person against his will. In the end, humans will defeat their own biology and become eternal - this is the path to the happy side of the balance beam that we have been talking about from the very beginning. And the people who believe in this are also sure that such a future awaits us very, very soon.

You probably won't be surprised that Kurzweil's ideas have drawn strong criticism. His singularity of 2045 and the subsequent eternal life for people have been called "the rapture of the nerds" and "intelligent design for people with an IQ of 140." Others have questioned his optimistic time frames, his understanding of the human body and brain, and the way he extends Moore's law, which so far has held for hardware, to everything else. For every expert who believes in Kurzweil's ideas, there are three who think he is wrong.

But the most interesting thing is that most of the experts who disagree with him do not, on the whole, say it is impossible. Instead of saying "nonsense, this will never happen," they say something like "all of this will happen if we safely get to ASI, and that is precisely the problem." Bostrom, one of the most prominent AI experts warning of AI dangers, also admits:

“There is hardly any problem left that superintelligence could not solve, or at least help us solve. Disease, poverty, environmental destruction, suffering of all kinds - all of this superintelligence could fix in a moment with the help of nanotechnology. Superintelligence could also give us unlimited lifespan by stopping and reversing the aging process using nanomedicine, or by giving us the option of uploading ourselves. Superintelligence could also create opportunities for endless increases in intellectual and emotional capacities; it could help us create a world in which we will live in joy and understanding, approaching our ideals and regularly making our dreams come true."

This is a quote from one of Kurzweil's critics, who nevertheless admits that all of this is possible if we manage to create a safe ASI. Kurzweil has simply defined what artificial superintelligence should become, if it is possible at all - and if it turns out to be a good god.

The most obvious criticism of the comfort zone advocates is that they may be damned wrong in their assessment of ASI's future. In his book The Singularity Is Near, Kurzweil devotes only 20 pages out of 700 to potential ASI threats. The question is not when we will get to ASI; the question is what its motivation will be. Kurzweil answers this question with caution: "ASI stems from many disparate efforts and will be deeply integrated into the infrastructure of our civilization. In fact, it will be closely embedded in our bodies and brains. It will reflect our values, because it will be one with us."

But if that is the answer, why are so many smart people in this world worried about the future of artificial intelligence? Why does Stephen Hawking say that the development of ASI "could spell the end of the human race"? Why does Bill Gates say he "doesn't understand people who aren't bothered" by it? Why does Elon Musk fear that we are "summoning a demon"? Why do so many experts consider ASI the greatest threat to humanity?

We will talk about this next time.
