Dubbed “the philosopher of the end of the world” by the New Yorker, Nick Bostrom does not look like a raving apocalyptic. He is rather the image of moderation and reflection, even if his ideas unleash controversy every time he presents them, winning him sworn enemies with the same ease with which they attract zealous advocates. Among the latter are minds as brilliant as Stephen Hawking and technology visionaries such as Bill Gates and Elon Musk.
What has drawn attention to this Swedish philosopher, director and founder of the Future of Humanity Institute at the University of Oxford, are his arguments about the dangers that artificial intelligence poses to our species. Against the naive optimism of those who see thinking machines as the solution to all our problems, Bostrom warns that we must be wary. Or, to put it in his own words, that we must stop behaving like “small children playing with a bomb.”
The possibility that the computers or robots we are building will one day exceed us in intelligence is not far-fetched. What until very recently was the domain of science fiction should now be seen as a very likely horizon: “I think that if there is something that can fundamentally change the nature of life on Earth, it is the transition into the age of intelligent machines. We have no option but to confront this challenge.
The superintelligence of machines is like a portal that humanity must necessarily pass through, but we must make sure not to crash against the wall when we do.” Although it may seem otherwise, Bostrom's message is not discouraging, nor does he preach a war against new technologies, as if he were a digital version of the Luddites who destroyed machines during the first industrial revolution. In fact, his confidence in the possibilities of science led him to found, in 1998, the World Transhumanist Association, which advocates strengthening human capacities through hybridization with technology.
In an interview with El País, Bostrom insisted that his aim is to call for deep reflection, not to demonize machines: “There are many things that are not going well in this world: people who die of hunger, people who are bitten by a mosquito and contract malaria, people who decay with aging, inequality, injustice, poverty, and many of these are preventable. In general, I think there is a race between our ability to do things, to rapidly advance our technological capabilities, and our wisdom, which progresses much more slowly.
We need a certain level of wisdom and collaboration by the time we reach certain technological milestones, in order to survive those transitions.”
As he argues in his book Superintelligence: Paths, Dangers, Strategies, published in 2014 (and which quickly entered the bestseller list of the New York Times Book Review), the real challenge lies not so much in the intelligence that machines may attain as in the moral development of our species. In the end, as Jean-Paul Sartre already postulated, we are condemned to be free. And that can be dangerous, but it is also an excellent opportunity to take another evolutionary leap. (http://www.nickbostrom.com).