Artificial Intelligence is creating a new cultural divide

15 May 2018
Image Credit: Valentine Kim

For many, the term Artificial Intelligence (AI) is inherently frightening. It conjures images of the machine uprising, spearheaded by an army of killer robots. This sits alongside a deeper, more realistic fear: that increased automation is eroding our sense of purpose and of what it means to be human.

For others, AI is an exciting prospect, promising a future of advanced technology, free from suffering. They are driven by the challenge of building intelligent systems, and by the potential those systems have to solve the world’s problems.

This is bringing about a new cultural divide. On one side are technical, scientifically minded designers and researchers with a deep understanding of AI systems. On the other, lay members of society: from writers to taxi drivers to policy makers. As AI increasingly defines how society functions, this divide is becoming more and more problematic. The designers, often removed from cultures other than their own, are creating systems that alienate many sections of society.

A big part of the problem is a lack of understanding of what artificial intelligence actually is. In general, AI systems are those which perform tasks typically associated with human intelligence. This includes learning, pattern recognition, and decision making. Right now, the way this is achieved is with data. Lots and lots of data. Google Translate learns from swathes of text translated by humans. Targeted advertising systems recognise patterns in the data from your online behaviour to determine what interests you. The products Amazon recommends are chosen based on what has worked for millions of other shoppers before you.
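To make that concrete, here is a minimal sketch, in Python, of the kind of co-occurrence counting that sits behind a “shoppers like you also bought” recommendation. The baskets and products are invented for illustration; this is not Amazon’s actual system, just the general idea of learning from what other people have done.

```python
from collections import Counter

# Toy purchase histories: each set holds the items one shopper bought.
# Invented data for illustration; a real system learns from millions of shoppers.
baskets = [
    {"kettle", "teapot", "mugs"},
    {"kettle", "teapot"},
    {"kettle", "toaster"},
    {"teapot", "mugs"},
]

def recommend(item, baskets, top_n=2):
    """Suggest the items that most often appear in the same basket as `item`."""
    co_bought = Counter()
    for basket in baskets:
        if item in basket:
            co_bought.update(basket - {item})
    return [other for other, _ in co_bought.most_common(top_n)]

print(recommend("kettle", baskets))  # items most often bought alongside a kettle
```

Nothing here is “intelligent” in the science-fiction sense: the system simply finds patterns in past behaviour and projects them onto you.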

Many of those working with AI, however, have bigger goals in mind. DeepMind, the world-leading AI research company that was acquired by Google in 2014, say their core goal is to “solve intelligence” and “use it to make the world a better place”. OpenAI, the non-profit AI research company co-founded by Elon Musk, has the stated aim of “discovering and enacting the path to safe artificial general intelligence”. The idea is to make something more intelligent than any human alive today, and then get it to solve all of our problems.

This task is in equal parts terrifying, exciting and horribly difficult. Making sure that it doesn’t go wrong is the focus of a huge amount of AI safety research, and the prospect that it very well could is what most people find so scary. This single-minded focus, however, blinds people to the very real issues presented by the narrower, data-driven AI algorithms already in place today.

One such algorithm assigns “risk assessment scores” to convicts, predicting how likely they are to reoffend. In May 2016, an investigation by ProPublica found that this algorithm, which is widely used to inform decisions about when people are set free, was twice as likely to mistakenly flag black defendants as “high risk”. The algorithm was showing a racial bias.
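To see what a finding like that involves, here is a minimal sketch of the kind of audit behind it: comparing, for each group, how often people who did not reoffend were nonetheless flagged as high risk. The records below are invented for illustration; they are not ProPublica’s data or methodology, just the basic arithmetic of a false positive rate.

```python
# Invented records for illustration only: (group, flagged_high_risk, reoffended)
records = [
    ("A", True,  False),
    ("A", True,  False),
    ("A", False, False),
    ("A", True,  True),
    ("B", True,  False),
    ("B", False, False),
    ("B", False, False),
    ("B", True,  True),
]

def false_positive_rate(records, group):
    """Share of people in `group` who did not reoffend but were flagged high risk."""
    flags = [flagged for g, flagged, reoffended in records
             if g == group and not reoffended]
    return sum(flags) / len(flags)

for group in ("A", "B"):
    print(group, round(false_positive_rate(records, group), 2))
# With this toy data, group A's false positive rate (0.67) is twice group B's (0.33).
```

In practice, running this comparison requires access to both the algorithm’s outputs and the eventual outcomes, which is exactly the data that is hardest to obtain.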

Biases like these are increasingly common, as AI is used to automate hiring processes at many large companies and is used more and more in legal and medical contexts. These biases reinforce existing inequalities and slow the progress of social justice. Uncovering algorithmic bias like this is difficult, and often made impossible by the sensitive nature of the data involved.

“It’s not killer robots we should be worrying about”, says Victor Parchment, a philosopher at Cambridge who is also in the business of developing AI. “Rather it is the entrenching of problems that we already face.” He thinks that we need people with a background in the humanities to help tackle these issues.

A recent article by DeepMind’s Ethics and Society unit, a research group dedicated to the impact of the company’s work on society, echoes this sentiment. It calls for a “social-systems approach” to understanding and tackling the problems created by AI. This approach has already been used to show that an AI system intended to identify people likely to be involved in a shooting was ineffective: while it increased the chance that those people were targeted by the police, it did not reduce crime. The article goes on to say that we need to “draw on philosophy, law, sociology and anthropology among other disciplines” in order to make sure AI is used for good.

The problem is made worse when there is an economic incentive. For the most part, AI systems are implemented to cut costs and increase efficiency. However, the push for efficiency can conceal difficult issues. For instance, AI is increasingly being used to automate processes in hospitals, yet it is still unclear who is held accountable when these systems fail, and little research has been done to assess their effects on patients’ mental health.

AI systems have the potential to help reduce inequality by freeing up resources and increasing the efficiency of government programs. But without an understanding of how these systems could affect society, the reverse is likely.

Christopher Wylie, the whistleblower who outed the recent misuse of AI by Cambridge Analytica, was originally approached by Steve Bannon with the task of “quantifying culture, in order to change it”. Wylie believes this was possible: he helped build the AI which, using data from 87 million Facebook profiles, targeted political ads based on people’s personalities. “People are the units of culture”, Wylie said at a press conference in March. By manipulating people, Cambridge Analytica’s AI may have played a crucial part in changing America’s culture.

Wylie was persuaded to blow the whistle by the journalist Carole Cadwalladr. The pair spoke over the phone for years before the story came out, often for hours at a time. He recounts Cadwalladr’s ability to be comforting and patient, despite the stress and chaos of his situation; this is what ultimately allowed him to stand up for what he thought was right. “Discussion of the role of technology in society should not be confined to technical people”, he said, stressing the importance of storytelling in humanising technology.

Political manipulation and oppression are not the only dangers. A recent report by the Future of Humanity Institute at Oxford highlighted several other ways AI could be used maliciously. The bulk of these are “cyber attacks”, which use AI to find weaknesses in existing technological systems. One example it gives is the potential use of speech and video synthesis programs to dupe people into handing over sensitive information. AI is already eerily close to perfectly impersonating people over the phone, and it won’t be long until this technology is in widespread use.

The report also offers potential solutions. “Policy makers need to collaborate closely with technical researchers. Collaboration ensures responses will be informed by the technical realities of technologies at hand.” The reverse is also true. Those researching and designing AI systems have to be informed by the realities of the society they are changing. It is down to our generation to bridge this divide.