Before reading the book, I expected an entirely scientific work full of heavy technical concepts and deeply detailed processes, not really designed to be pleasant to read. I know! That was a hasty judgment, probably prompted by the title.
Happily, it turned out to be an enjoyable "science-philosophy crossing"!
Its author, Nick Bostrom, benefiting from a multidisciplinary background that includes physics, computational neuroscience, mathematical logic, and philosophy, achieved great success for his focus on new technologies, transhumanism, and futuristic thinking. He makes his case with humor and character, and this original, well-researched work deserves not only to be read and understood, but also to be taken seriously enough, at least by some of us readers, to build upon his analysis and act upon our convictions.
The way the author approaches the existential danger of artificial intelligence fascinated me: he perceives it differently and presents it to the reader differently.
The passage that caught my attention most in the book was the frank admission that we are not mature enough for artificial intelligence. So, in this first part I will discuss the dangers of AI that we are not considering wisely. Bostrom's eloquent warning encapsulates the situation perfectly: "We humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time".
His standpoint was refreshing, because many AI experts have either been unconcerned with the threat of AI or have focused on the threat to humanity only once a massive population of intelligent machines is omnipresent throughout human society.
When you think about it, several scenarios might unfold in which autonomous, self-replicating AI entities could emerge and threaten their human creators. The scenarios I will present might sound like science fiction, but they could plausibly occur within two or three centuries if AI continues to advance at its current rate.
I will begin with the AI space-travel scenario, in which self-reliant agents are built for space travel and asteroid mining. I have bad news for people who believe in the "Star Trek" backup plan, which holds that humans are quite successful space travelers, assuming a "faster-than-light drive" can be designed. Sapiens evolved on Earth with oxygen as their major source of energy. Unlike us, synthetic entities will need neither oxygen nor gravity. In the biological sense, they will not be awake, and accordingly they will waste no energy during the "journey". A simple timer could switch them on upon arrival at the target planet, and they would be unaffected. If a conflict were to arise at that point between humans and these space-traveling AI entities, they would be looking down on us from outer space. A huge advantage!
The second scenario would be "robotic warfare". No human being would want to voluntarily sacrifice human soldiers on the battlefield. The solution would be a community of intelligent robots programmed to kill. Unfortunately, if command over such AI warriors were somehow lost, this could spell disaster for humanity.
What is captivating about Bostrom's book is that it focuses on the dangers posed, not by a society of robots more capable than humans, but rather by the emergence of a single entity with superintelligence. This bright reasoning by such a thinker, tackling one of humanity's biggest challenges, should make us aware of the situation. It may be frightening and shocking, but failing to recognize the magnitude of the risks we are on the brink of confronting would be a grave mistake, knowing that once superintelligence begins to reveal itself and act, the change may be extremely fast and we may not be afforded a second chance.
In a nutshell, failing to take this seriously enough and failing to prepare may bring about our destruction, while serious pre-emergence debate and anticipation may allow us to control such entities, or at least achieve some form of coexistence.
For AI to present an existential danger to humanity, it would require processes of robotic self-replication. Not all robots are like "Data" from Star Trek, who is cleverer than humans but still does not plan to make copies of himself. Once smart entities have the will to upgrade their own designs and to reproduce themselves, they will have many advantages over humans, which leads us to the second question: are such forms of superintelligence achievable?
I expect such superintelligence to appear. Maybe not in the near future, but such synthetic entities will someday emerge. In this second part of the essay, I will list some arguments that support this belief.
The first argument is our ever-growing dependency! Even if we wanted to, it is already inconceivable at this point to get rid of computers, because we are incredibly reliant on them. Without computers, our financial, transportation, communication, and manufacturing services would grind to a halt. Today we design robots to perform the tasks we order them to do. AI used to be about putting commands in a box and watching it fulfill our requests. That era is over. If we do not take precautions and proceed step by step, instead of climbing high and fast, one day these robots will be able to do the same and master their own creations.
Picture a near-future society in which AI agents perform the vast majority of the services now performed by humans, and in which the design and fabrication of robots are managed by robots as well. Imagine that, at some point, a new design gives rise to robots that no longer obey their human masters. At that juncture, we humans would decide to cut power to the robotic factory, but it turns out that the hydroelectric plant supplying the power is controlled by robots, like almost everything in the factories. Then humans think about stopping all the vans responsible for delivering materials to the factory, but it turns out, once again, that those vans are driven by robots, and so on and so forth.
As a super-intelligent entity becomes more and more intelligent, it will gain more and more awareness of its own mental routines. Along with enhanced self-reflection, it will become more and more autonomous and less controllable. Just like us humans, it will have to be persuaded before it believes anything.
Furthermore, this super-intelligent entity will design and then produce even more self-aware versions of itself. Escalating intelligence and escalating self-reflection go hand in hand. Monkeys cannot persuade humans, since monkeys lack the capacity to relate to the notions and impressions that humans can entertain. To a super-intelligent entity, we will be no more persuasive than monkeys if we are not extremely careful with AI today.
Not surprisingly, after 320 pages, the author cannot answer the "what is to be done" question regarding the likely emergence of non-human superintelligence someday. This is predictable since, as a species, we have always been the smartest ones around and never contemplated the possibility of coexisting alongside something or someone unimaginably smart, smart in ways beyond our understanding, perhaps driven by goals we cannot comprehend and acting in ways that may harm us.