What You Need to Know About the Future of Humanity Institute
If you have been paying attention to global news, you will know that there is no shortage of potential threats that could theoretically wipe out humankind. A huge asteroid on a collision course with Earth? Computers and robots taking over the world, as in countless works of science fiction? These are just some of the scenarios that could pave the way for a catastrophic extinction event. So it is no surprise that there is a branch of academia dedicated to studying these scenarios in an effort to prevent them from happening in the first place.
We're talking about the Future of Humanity Institute. In this article, you will learn about this research center and how it works to predict and prevent potential risks to humankind.
What is the Future of Humanity Institute?
The Future of Humanity Institute (FHI) is part of the Faculty of Philosophy and the Oxford Martin School at the University of Oxford. The institute was founded in 2005 by Prof. Nick Bostrom. The main mission of the FHI is to use the tools of philosophy, mathematics, science, and the social sciences to look at the big picture of human civilization. The institute believes that humanity has the potential for a long and flourishing future, but that a number of crucial considerations will shape that future. The Future of Humanity Institute is entrusted with the mission of shedding light on these considerations.
The Founder of FHI
Nick Bostrom founded the FHI in 2005, at the age of 32, two years after coming to Oxford from Yale. If there is a poster boy for the study of threats to humankind, it has to be Bostrom. He tends to attract press attention because he writes a great deal about human extinction, so much so that his work has earned him a reputation as a secular Daniel: a doomsday prophet, if you will.
With his growing audience, he finds himself giving keynote talks on extinction risks at conferences around the world. He has also become an advisor to the Centre for the Study of Existential Risk at Cambridge, which counted the renowned physicist Stephen Hawking among its advisors.
The Main Threats to Humankind According to FHI
In an ever-changing world, all sorts of threats are rearing their ugly heads, and the FHI works to stay on top of them for the sake of humanity's long-term future. Some of the most significant threats, according to the FHI, are:
Extinction Risks by Nature
Many movies have depicted Earth being wiped out by some cosmic force. However, Bostrom and the FHI are not especially concerned with the extinction risks posed by nature. This may seem surprising, since roughly every 50 years a star somewhere in the Milky Way explodes as a supernova. If one of these explosions happened close enough to us, the radiation could strip away Earth's thin protective atmosphere. Luckily for us, the Sun sits in a relatively quiet, low-density pocket of the galaxy, with no nearby stars likely to go supernova any time soon.
Nuclear Weapons
Nuclear weapons have the dubious honor of being the first technology to pose a threat to all of humankind. Although a nuclear war would have catastrophic effects on the world, the FHI does not consider it the most pressing extinction risk. Now that the Cold War has ended, a nuclear exchange large enough to cause our extinction is far less likely. While thousands of warheads remain, existing arsenals are probably not sufficient to level the planet and kill every last human.
Machine Superintelligence
One technology has been taking up most of Bostrom's time of late. The idea of computers taking over the world is a common theme in science fiction, but according to Bostrom, machine superintelligence is the most plausible existential threat to humans. He estimates roughly a 50% probability that human-level artificial intelligence (strong AI) will be developed by the middle of the century, though there is a great deal of uncertainty around that estimate, so it could happen sooner or later than that.
Superintelligence poses an existential risk, meaning that artificial intelligence could trigger an extinction event. This is why Professor Stephen Hawking, Elon Musk, and Bill Gates have all warned about the dangers of artificial intelligence.
As the world progresses, the threats to our very existence will likely multiply. As you can see, though, the chief worry is not cosmic catastrophes or nuclear war: artificial intelligence is the most likely threat, at once one of humanity's most important achievements and its most daunting challenge. The Future of Humanity Institute is working hard to make sure that we as a species stay on top of these existential risks.