This book is truly a mine of impressive notions and inspiring concepts, and choosing just one is difficult. However, what struck me most while reading it is how rapidly we are developing superintelligent machines, and consequently how this technology could shape our future.
What does the term superintelligence actually mean? According to Bostrom, “we can
tentatively define a superintelligence as any intellect that greatly exceeds
the cognitive performance of humans in virtually all domains of interest or a system that can do all
that a human intellect can do, but much faster”. That sounds promising, yet the speed at which human-level artificial intelligence could develop into ‘extreme’ superintelligence raises a question: could this represent a potential risk to humanity?
Arguably, the biggest threat from AI comes from developing engines that are better decision makers than we are, systems that can interpret situations and make better calls than humans in those roles. They already enhance our lives in countless ways: we use them to help us shop, translate and navigate, and soon they will drive our cars.
However, AI can also cause harm and discrimination when used out of context. As stated in the book, “We would want
the solution to the safety problem before somebody figures out the solution to
the AI problem.” Bostrom repeatedly emphasized the danger
of becoming slaves of automated decision makers and warned about the consequences once they become so intent on their own goals that they could end up crushing mankind without even intending to. With that in mind, it wouldn’t be surprising if, in a few years, the military will be
using AI “killer robots” on the battlefield as research
into autonomous robots and drones is richly funded today in many nations,
including Germany. These machines could potentially make the kill decision, the decision to target and kill not only enemies but also innocent people, without a human in the loop.
This resonates with me personally, as I trade some cryptocurrencies. We also need to consider the dangers of trusting AI to make stock trades. Most trading algorithms react to specific incidents with specific strategies, which can cause drops throughout the market in a cascading effect. This is not merely hypothetical: it already happened in the Flash Crash of 2010.
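The cascading mechanism can be illustrated with a minimal sketch. All numbers here are hypothetical, and the model is deliberately crude: each trader sells once when the price falls below its stop-loss threshold, and every sale pushes the price down further, triggering the next trader.

```python
# Minimal sketch (hypothetical numbers): many algorithms with similar
# stop-loss rules can turn one small sell-off into a cascade.

def cascade(price, thresholds, impact_per_sale):
    """Each trader sells once when the price drops below its stop-loss
    threshold; every sale depresses the price by impact_per_sale."""
    remaining = sorted(thresholds, reverse=True)
    sold = 0
    while remaining and price < remaining[0]:
        remaining.pop(0)          # highest-threshold trader sells first
        sold += 1
        price -= impact_per_sale  # the sale pushes the price down further
    return price, sold

# A small shock (price falls from 100 to 99.5) trips the first stop-loss,
# and the resulting sales trip every other trader in turn.
final_price, sales = cascade(99.5, [99.6, 99.0, 98.5, 98.0], 0.6)
print(final_price, sales)
```

In this toy run a 0.5-point shock ends up dragging the price down by several more points because each stop-loss sits just below the previous trade, the same kind of feedback loop blamed for the 2010 crash.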
On the other hand, the technology has already taken a toll on Wall Street jobs: by 2025, AI technologies are projected to cut roughly 230,000 jobs in capital markets worldwide. In this context, the author claims in his book, “When we are headed the wrong way, the last thing we need is progress.”
All these facts remind me of the interview in which Microsoft founder Bill Gates warned that robotics and advanced algorithms will likely eliminate many jobs. In the same vein, Tesla and SpaceX CEO Elon Musk has warned about the dangers of AI, saying it could eventually escape our understanding and control. On the other hand, some major figures have argued against the doomsday scenarios; Facebook CEO Mark Zuckerberg, for example, has said he is “really optimistic” about the future of AI.
We can clearly infer that the purpose of the book is to raise people’s awareness and prompt action to prevent such outcomes. The problem is a research challenge worthy of the next generation’s best mathematical talent, and I personally think that if we are smart enough to build machines with superhuman intelligence, we will not be stupid enough to give them infinite power to shape our future.