The Cambridge Union has hosted famous speakers in debate for over 200 years, but on 21st November, for the first time, a non-human voice resonated within the Union's walls during a debate.
The motion, “This House Believes AI Will Bring More Harm Than Good”, featured opening proposition and opposition speeches from IBM Research’s ‘Project Debater’, the world’s first artificial intelligence platform that can spar with human debaters. In what was perhaps the most popular debate of the term, the Union was packed to the brim with eager members and journalists from mainstream news outlets, including the BBC and CNN.
Noam Slonim, Chief Investigator of Project Debater, kicked off the night by introducing the AI system, which works by identifying, selecting, and synthesising more than 1,100 crowdsourced arguments about the pros and cons of AI, submitted to IBM through a website in the week before the debate. Whilst Slonim admits that the system is “not perfect” in its ability to produce “quality arguments”, he believes the technology has huge potential beyond debating: it could help companies understand what customers think of a product, or governments understand the views of their citizens.
The evening’s debate, reasonably scoped to cover only AI developments between 2020 and 2030, kicked off as ‘Project Debater’ delivered the opening proposition argument in a monotone feminine voice. It clearly established the harms of AI through four key points: firstly, that biased data could be used to discriminate against minorities; secondly, unemployment and job displacement; thirdly, the wider societal effects of disconnecting individuals from one another; and lastly, the moral dangers of AI being put to illegal and harmful purposes if placed in the wrong hands. On one of the occasions when Project Debater referred to itself in the first person, it delivered a sarcastic line, “I am not employed, so I would hope [not]!”, during its point about employment, which garnered chuckles from the crowd.
In an interesting and slightly unnerving dynamic, Project Debater then turned to counter its own earlier points in an opposition speech consisting of main points only, with no rebuttals. It argued for the benefits that automation would bring to society and for the creation of new jobs. On the topic of technology, which Project Debater said “is an issue close to my artificial heart”, it asserted that AI can play a crucial role in increasing human well-being through medical and scientific advancements, as well as by removing human error from complex procedures. Project Debater clearly established the key arguments for the rest of the debate in a manner that showed awareness of its own position as an AI platform; it did not hesitate to say, “I will likely speculate my own opinion… if I have any!” The subtext of the humour, according to Noam Slonim, is that the AI is only a machine, not trying to replace humans. The humour is not scripted and is intended to be used as a “rhetorical tool not more than a few times”, Slonim later shared in a Q&A.
The human team members left to debate the points raised by Project Debater delivered equally engaging speeches.
Sharmila Parmanand, a PhD candidate at Cambridge speaking for the proposition, helped contextualise the points raised by Project Debater about the harms of AI by situating them, firstly, within governance structures with existing power hierarchies and political agendas, and secondly, within regulatory frameworks in a privatised context with limited accountability. Crucially, she contributed a more detailed analysis of the unequal ratio of job losses to job creation across sectors, ultimately showing that more people would be displaced than new jobs created.
The opposition’s response to Parmanand was delivered by Sylvie Delacroix, Professor in Law and Ethics at the University of Birmingham. Affectionately naming Project Debater “Debby”, Professor Delacroix praised “Debby” for her opening remarks, before steering the debate away from “the instrumental considerations” of AI towards a human-centric view. “AI is ultimately about ‘us’, who we are and what we want to become”, she argued. Seeing AI as just a tool, Professor Delacroix maintained that as long as humans remain in control, the good or harm of AI depends on how humans use it, not on what AI inherently is. She extended this line of argument by saying that because “AI is only as good as the data from us”, it requires a re-evaluation of the biases within our data collection methods and sources.
The closing speeches did not disappoint, introducing some new points about attitudes towards AI. Professor Neil Lawrence, the DeepMind Professor of Machine Learning at Cambridge, reframed Professor Delacroix’s point about the data humans provide to AI, noting that “by placing AI entities in positions with a lot of data, they now know us better than we know ourselves.” Drawing on cognition theory, he coined his own term, “system 0”, which he defined as divisive behaviour in society that undermines the human ability to work together. Reminding the audience of the dangers of data manipulation during the eugenics era, Professor Lawrence urged the floor to vote for the motion, asserting that we should believe AI will do us harm so that there is greater awareness of its potentially harmful effects.
Lastly, the closing opposition speaker, Oxford and Cambridge graduate Harish Natarajan, was the one to consciously weigh up the relative probability and scale of impact of the points raised by earlier speakers. On the point of AI being biased, Natarajan pointed to the difference AI could make between life and death to argue for the relative good it brings. As for the loss of jobs to AI, he argued that AI’s liberating nature empowers and democratises technology for the masses, which could lead to new innovation and businesses. This democratisation, he argued, means that AI can serve as a “natural mitigator” against the so-called “bad guys” who would use AI for harmful purposes.
Ultimately, at the heart of the debate is a clash of ideas about how AI will have uneven impacts across different spatial and temporal contexts. The fact that AI remains fundamentally ‘human’, because it is built by humans and learns from humans, raises interesting questions about human rationality. Project Debater offered logical and well-structured arguments, but lacked the persuasiveness and dynamic engagement that its human team members supplied afterwards.