“Artificial intelligence will kill over half the world’s jobs,” says Kai-Fu Lee

Munira Rajkotwalla 26 January 2018

In an interview, you stated that artificial intelligence will kill over half the world’s jobs. Set against an ever-growing world population, what do you think the future will look like, and what needs to change to balance this disparity?

Well, for a start, I think that many of the jobs that will be replaced by AI are non-aspirational jobs. They’re jobs that involve routine and repetition, and in some sense we can see this as potentially freeing up more time for us to do things that we’re good at – things that are aspirational, things we’re excited about. I think that two types of jobs cannot be replaced by AI. One is creative jobs, and that goes from scientific engineering to storytelling and entertainment, all the way across the spectrum. The other type of job that will not be replaced involves human contact and compassion, because machines have no feelings, and they cannot fake them. This brings us back to people. Jobs that are service-oriented – jobs that involve helping other people, like caretakers, tour guides, concierges – will be created in bulk, and people will be trained to enter these types of professions.

So, looking forward, you see the outcome as changing, but positive?

I think it will be positive if we create jobs that are satisfying. If people can be trained, are happy in their new jobs, and can feel fulfilled, then I think it will be a good outcome. There is certainly the possibility of a negative outcome, which is that a large number of people end up out of work and become dependent on some form of social welfare funded by AI profits. However, if people feel they are not contributing and not gaining the respect they want from society, then even if they receive social welfare, that can quickly become a source of social instability. The downside exists, so we have to do something about it.

Do you think that AI will pose a threat to humanity?

I think the dangers it poses are posed by humanity itself. If we adopt it too quickly and displace too many jobs without offering a good future outlook, that would be one possible danger. If we develop it without proper security and put it in the hands of bad people, that could endanger humanity. For example, if AI weaponry or autonomous vehicles are hacked and turned into something that threatens people, that could be another danger. The other threat to consider is the widening gap between the haves and have-nots as a result of AI’s introduction – because it will make some people extraordinarily rich and exacerbate the current gap. A gap that becomes too wide is a danger to us all. However, I currently do not see the danger to humanity as the dystopia from the media, or the AI from science fiction taking over our world.

Photo credit: philippkoehler.com

Speaking about future outlooks, what advice would you give to young people?

Go after creative jobs. Go after your passion. Do things you love, do things you’re really good at. Create things out of nothing. Don’t turn yourself into a machine. The biggest danger for us is not so much the AI becoming human, but humans becoming machines. If we do machine-like, repetitive jobs, we’re bound to be replaced.

As a politics student, I’m interested in your background. You grew up in Taiwan, but founded Google China. How did that come about?

Everyone in Taiwan is Chinese. My parents were from mainland China, and my father had always wanted me to return to the mainland, which I did; so despite certain tensions, my family and my upbringing have always taught me that we are Chinese.

You were banned from Weibo after complaining about Chinese censorship laws.

[chuckles]

How did this affect you and how did this shape your understanding of the Chinese relationship with the internet (as well as their control over it)?

I think every country has its rules and regulations that it chooses to impose. When you live in that country, you are to be governed according to those rules.

Do you think they should be loosened?

All governments have their reasons for having either loose or tight internet regulations. We can see today that there are both benefits and downsides. Take the American internet: the positive is that people can say whatever they please; the negative is the mass of “fake news” confusing people. So I think governments should decide according to what is most appropriate at the time. I’m used to posting on both ends of the spectrum, and I modify my posts accordingly, while still advocating the things that are important to me.

The future appears quite uncertain with AI, fake news and other challenges on the horizon. How do you remain positive about the prospects ahead?

I like to think about technology as an enabling tool that can be used for good. My role as a technologist is to advocate the use of technology for beneficial purposes. If more people like me behave responsibly and optimistically, then the outcome will become a self-fulfilling prophecy. If people think negatively, worry, and disconnect – trying to stop technology’s advancement – then the prophecy will turn to the negative side. We must remain positive to become positive.