The end of the world as we know it?

Hannah Graham 17 October 2014

As a Cambridge student it's easy to get so wrapped up in your own personal essay crises and problem sheets that you forget that our university is full of people working at the very cutting edge of fascinating topics in hundreds of different areas. In a new regular feature, TCS aims to present a broader view of what's going on in this amazing university by exploring some of the most interesting research projects under way in Cambridge right now.

Arguably the most interesting and terrifying research being done in Cambridge happens at the Centre for the Study of Existential Risk (CSER). Set up by an astronomer, a philosopher and one of the people who invented Skype, CSER studies scenarios that wouldn't look out of place in a disaster movie: artificial intelligence gone rogue, super-viruses released from labs, even the bizarre-sounding possibility that advances in nanotechnology could lead to an uncontrollable plague of self-replicating machines.

The centre aims to examine potential risks to the future of human existence that arise from our own technological development. While large amounts of research and funding are directed at the risks posed by natural disasters – floods, earthquakes, asteroids and the like – the risks posed by new technologies are a relatively new field of study. According to Professor Huw Price, philosopher and founding member of CSER, the risk posed by human technologies is arguably far greater than that posed by natural disasters, hence the pressing need for further study. The threats in question are likely to come from either "error or terror": our downfall may come from the accidental consequences of new technologies or, in the words of Professor Martin Rees, another founder, from "some weirdo" who decides to trigger a world-ending catastrophe.

One of the scariest of the centre's research projects examines the threat posed by artificial intelligence (AI). Writing in the New York Times, Huw Price explains that if computers become able to think for themselves, human life may find itself at risk. While the machines are unlikely to rise up, Matrix-style, to overthrow their former masters, the greatest risk comes from AI's lack of interest in the things that humans value. Their indifference to our well-being may be a far greater threat than their hostility, as intelligent machines adapt their environment to suit their own needs and priorities rather than ours. Imagine a world in which the greatest intelligence was originally created to produce flat-pack furniture. The beauty of the rainforest, the value of art and even the sanctity of human life mean nothing to this machine, and soon everything we know and love is co-opted for the creation of IKEA products.

On the other hand, it's possible that a fanatic will release a super-virus from a lab before we ever reach the artificial intelligence "singularity"; or perhaps 3D printers will make deadly weapons so readily available that we'll all kill each other before machines or viruses can do it for us. Either way, it is perhaps comforting to know that someone right here in Cambridge is considering these potential risks on our behalf.