Siri, explain artificial intelligence

Simon Langer 13 November 2017

Artificial intelligence (AI) is defined in computer science as the study of intelligent agents: devices that perceive their environment and take actions that maximize their chances of reaching a certain goal. Although it has existed as an academic discipline since 1956, the study of AI has shown accelerated progress in the 21st century thanks to advances in computing performance and improved theoretical approaches.

The overall purpose of AI is to create technology and computers that function intelligently and to apply said technologies to better the lives of humans. However, intelligence is a highly complex psychological trait. In the domain of AI, intelligence has often been divided into traits such as reasoning, knowledge, planning, learning, perception, creativity, and social as well as general intelligence. In this issue we describe a recent study of self-taught machine learning, namely the AlphaGo Zero AI from DeepMind.

The application areas of artificial intelligence are diverse and prevalent in our modern everyday life. Straightforward examples include digital assistants like Amazon’s Alexa and Apple’s Siri, facial recognition software, and self-driving vehicles such as cars or underground trains. Subtler examples include online shopping, translation software, analysis and prognosis of stock exchange trends, and automated weapons. Humanity has crossed a line: at this point we are already heavily dependent on artificial intelligence.

Many public figures have weighed in on the question of whether the development and improvement of artificial intelligence will be the downfall or the salvation of humanity. In this article, we present representatives and arguments from both sides.

A prominent critic is business magnate, engineer and Tesla CEO Elon Musk. He is known for his futuristic ideas and willingness to gamble, but Mr. Musk fears an overhasty development of AI, which he warns could “lead to a third world war”.

One problem Musk names is that the human and humanitarian way is not always the calculated best way; in an interview with The Guardian, he warned of an AI that decides a pre-emptive strike is the most probable path to victory. Musk is one of 100 signatories calling for a United Nations ban on lethal autonomous weaponry. Such weapons raise the possibility that large-scale armed conflict could be started far more carelessly. According to Elon Musk, the development of AI is a Pandora’s box, and Shane Legg, a co-founder of DeepMind Technologies, is quoted by Vanity Fair as saying, “I think human extinction will probably occur, and technology will likely play a part in this”.

Another problem with AI, at least at this point of its development, is the safety issues that arise when the AI does not do its job. Suppose, for example, that AI software is used to detect weapons outside a sports stadium. Researchers from MIT showed just recently that they were able to convince Google’s object recognition software that a turtle was a rifle. They used a technique called an “adversarial image”, gradually adding visual noise to the image of the turtle until the AI was confused, while a human would have had no trouble recognizing the difference.
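The idea behind adversarial noise can be sketched in a few lines. The example below is a deliberately tiny stand-in, not Google’s actual image model: it assumes a toy logistic classifier with hand-picked weights, and perturbs the input along the sign of the loss gradient (the “fast gradient sign” approach, one common way such adversarial perturbations are generated).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1 under a logistic model."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y_true, eps):
    """Perturb x along the sign of the loss gradient.

    For a logistic model, the gradient of the cross-entropy loss
    with respect to the input x is (p - y) * w, where p = sigmoid(w.x + b).
    """
    p = predict(w, b, x)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy, hypothetical classifier weights standing in for a real image model.
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([1.0, 0.5])      # original input, classified as class 1
p_before = predict(w, b, x)   # above 0.5, so class 1

# Push the model away from the true label y = 1 with a small bounded step.
x_adv = fgsm_perturb(w, b, x, y_true=1.0, eps=1.0)
p_after = predict(w, b, x_adv)  # now below 0.5: the classification flips
```

Each input dimension moves by at most `eps`, yet the prediction flips; with images, the same effect means a perturbation small enough to be invisible to a human can change the model’s answer entirely.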

Proponents of AI include Bill Gates, who disagrees with Elon Musk, believes that AI will make human life more productive and creative, and urges the public not to panic about artificial intelligence; Mark Zuckerberg, who calls Musk’s statements about AI “pretty irresponsible”; and Vladimir Putin, who said that “artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world”.

Cambridge’s own Prof. Stephen Hawking, speaking at the opening of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, took a moderate approach. Prof. Hawking is quoted as saying that the creation of powerful AI will “either be the best or worst thing ever to happen to humanity”. Furthermore, Hawking said: “We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation. And surely we will aim to finally eradicate disease and poverty”.

As experts disagree about the fate of humanity and the role of AI, everyone needs to make up their own mind about the ethics behind its development. But maybe the question the experts ask themselves is the wrong one. Perhaps we should not be asking “will the development of AI lead to the end of humanity or save it?” but rather how we should go about the development process, and what principles and rules we need to set ourselves in order to create sophisticated AI that leads to a bright future without compromising it.