INTRODUCTION
In March 2016, Google’s AlphaGo computer program beat master player Lee Se-dol at the notoriously difficult and abstract ancient Chinese board game Go [Ref: Guardian], in what was seen as another example of the march of artificial intelligence. It followed in the footsteps of IBM’s Deep Blue, which beat world chess champion Garry Kasparov in 1997 [Ref: Time Magazine], and Watson, another IBM machine, which defeated two former champions on the US TV quiz show Jeopardy! in 2011, demonstrating an ability to understand questions in natural language [Ref: TechRepublic]. However, artificial intelligence isn’t just being used to beat humans at games – for some, its impact will have profound implications for the way we live our lives in the future. AI is currently being developed in numerous fields, such as driverless transport, finance, fraud detection, robotics, and text and speech recognition. Supporters see it as “a massive opportunity for humanity, not a threat” [Ref: Huffington Post], arguing that machines which can learn to do tasks currently requiring humans could speed up processes, allowing us more leisure time in the future [Ref: The Times]. But critics worry that if we develop machines that can learn very rapidly, drive our cars and do our jobs, we may reach a situation where they become more intelligent than humans – posing existential questions about the future of humans in the workplace, as well as our place in the world more broadly. Given the continued development of aspects of AI such as deep learning [Ref: Tech World], opponents wonder whether at some point it might develop interests of its own and come to dominate humanity, or do us harm in particular situations. In light of these concerns, should we fear advances in artificial intelligence?
DEBATE IN CONTEXT
This section provides a summary of the key issues in the debate, set in the context of recent discussions and the competing positions that have been adopted.
The ethics of AI
By way of definition, “AI can be seen as a collection of technologies that can be used to imitate or even to outperform tasks performed by humans using machines” [Ref: The Conversation]. It encompasses everything from internet search engines to self-teaching programs with the ability to learn from experience, such as Google’s DeepMind technology [Ref: Financial Times]. At a time when “machines are rapidly taking on ever more challenging cognitive tasks, encroaching on the fundamental ability that sets humans apart as a species: to make complex decisions, to solve problems – and, most importantly, to learn” [Ref: Financial Times], AI will continue to pose fundamental ethical questions for society. For example, how should we view the potential for AI to be used in the military arena? Although there is currently a consensus that “giving robots the agency to kill humans would trample over a red line that should never be crossed” [Ref: Financial Times], robots are already present in bomb disposal, mine clearance and anti-missile systems. Some, such as software engineer Ronald Arkin, think that developing ‘ethical robots’ programmed to strict ethical codes could be beneficial in the military, if they are programmed never to break rules of combat that humans might flout [Ref: Nature]. Similarly, the increased autonomy and decision-making that AI embodies opens up a moral vacuum that some suggest needs to be addressed by society, governments and legislators [Ref: The Times], whilst others argue that a code of ethics for robotics is urgently needed [Ref: The Times]. After all, who would be responsible for a decision badly made by a machine? The programmer, the engineer, the owner or the robot itself? Furthermore, critics note that driverless cars may face situations where there is a split-second decision either to swerve, possibly killing the passengers, or not to swerve, possibly killing another road user. How should a machine decide? To what extent should we even allow machines to decide? [Ref: Aeon] Others argue that technology is fundamentally ‘morally neutral’: “The same technology that launched deadly missiles in WWII brought Neil Armstrong and Buzz Aldrin to the surface of the moon. The harnessing of nuclear power laid waste to Hiroshima and Nagasaki but it also provides power to billions without burning fossil fuels.” In this sense, “AI is another tool and we can use it to make the world a better place, if we wish” [Ref: Gadgette].
A threat to humanity?
“Entrenched in our culture is the idea that when man overreaches himself by playing God, he faces disaster” [Ref: The Times], and for some critics, advances in AI pose very real existential problems for humanity. Oxford professor Nick Bostrom, for instance, has voiced concerns about what might happen if machines’ ability to learn for themselves accelerates very rapidly – what he calls an ‘intelligence explosion’. Bostrom believes that “at some point we will create machines that are superintelligent, and that the first machine to attain superintelligence may become extremely powerful to the point of being able to shape the future according to its preferences” [Ref: Vox]. Professor Stephen Hawking has expressed the fear more bluntly: “The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” [Ref: BBC News] Technology entrepreneurs Bill Gates and Elon Musk have also publicly stated fears about the dangers of artificial intelligence, cautioning that there are very real risks if the technology’s march is left unchecked [Ref: Guardian]. Autonomy is a key issue for some critics: despite proposals for driverless cars, autonomous weapons and automated surgical assistants, technologist Tom Dietterich warns that AI systems should never be fully autonomous, because “by definition a fully autonomous system is one that we have no control over, and I don’t think we ever want to be in that situation” [Ref: Business Insider]. There are also practical issues critics are keen to explore, such as the future of work, with many suggesting that advances in automation will make certain jobs obsolete. Commentator Clare Foges reflects on these developments, drawing parallels with the Luddites 200 years ago, who attempted to resist the increasing automation of their jobs during the onset of the industrial revolution [Ref: History.com]. Noting recent forecasts that up to 5 million people could lose their jobs to automation [Ref: The Times], she writes: “Two hundred years on, a braver newer world is arriving at astonishing speed, and threatens to make luddites out of us all. The robots are coming, they are here; creeping stealthily into factory, office and shop.” [Ref: The Times]
A brave new world?
For advocates, the advance of AI has the potential to change the world in unimaginable ways, and they largely dismiss warnings about the dangers it may pose. As Adam Jezard observes: “Such concerns are not new… From the weaving machines of the industrial revolution to the bicycle, mechanisation has prompted concerns that technology will make people redundant or alter society in unsettling ways.” [Ref: Financial Times] Moreover, supporters ask us to consider the benefits AI has already brought, such as speech recognition and autonomous vehicles, which will continue to develop and revolutionise the way we live our lives. In the field of medicine, one commentator posits the increasingly plausible idea of a program which may in future be able to recognise cancerous tumours infinitely better than humans can, which would revolutionise healthcare [Ref: The Times]. Others reject arguments that advances in AI signal the end of humanity: “After so much talking about the risks of super intelligent machines, it’s time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual challenges.” [Ref: Aeon] Perhaps more profoundly, others question why we are so quick to underestimate our abilities as humans and to fear AI. Author Nicholas Carr observes that although “every day we are reminded of the superiority of computers”, “what we forget is that our machines are built by our own hands”, and in actual fact: “If computers had the ability to be amazed, they’d be amazed by us.” [Ref: New York Times] In addition, fundamental to the pro-AI argument is the idea that technological progress is a good thing in and of itself. Futurist Dominic Basulto summarises this point when he speaks of ‘existential reward’, arguing that “humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential.” [Ref: Washington Post] From the industrial revolution onwards, we have gradually made our everyday lives easier and safer through innovation, automation and technology. For instance, driverless vehicles are predicted to drastically reduce the number of road traffic incidents in the future, and: “Machines known as automobiles long ago made horses redundant in the developed world – except riding for a pure leisure pursuit or in sport” [Ref: The Times]. So, with all of these arguments in mind, are critics right to be wary of the proliferation of AI in our lives, and the ethical and practical problems it may present humanity in the future? Or should we embrace the technological progress that AI represents, and all of its potential to change our lives for the better?
ESSENTIAL READING
It is crucial for debaters to have read the articles in this section, which provide essential information and arguments for and against the debate motion. Students will be expected to have additional evidence and examples derived from independent research, but they can expect to be criticised if they lack a basic familiarity with the issues raised in the essential reading.
Brave new era in technology needs new ethics
John Thornhill Financial Times 20 January 2016
FOR
Are robots going to steal your job? Probably
Moshe Y Vardi Guardian 7 April 2016
It’s time to put these robots in their place
Ben Macintyre The Times 11 March 2016
This Oxford professor thinks artificial intelligence will destroy us all
Dylan Matthews Vox 19 August 2014
Automated Ethics
Tom Chatfield Aeon 31 March 2014
AGAINST
I for one welcome the rise of the robots. They can do the work while I play
Dominic Lawson The Times 20 March 2016
Will artificial intelligence destroy humanity? Here are five reasons not to worry
Timothy B. Lee Vox 29 July 2015
Why robots will always need us
Nicholas Carr New York Times 20 May 2015
Why the world’s most intelligent people shouldn’t be so afraid of AI
Dominic Basulto Washington Post 20 January 2015
IN DEPTH
The doomsday invention
Raffi Khatchadourian New Yorker 25 November 2015
Omens
Ross Anderson Aeon 25 February 2013
KEY TERMS
Definitions of key concepts that are crucial for understanding the topic. Students should be familiar with these terms and the different ways in which they are used and interpreted and should be prepared to explain their significance.
Artificial Intelligence
Deep learning
Go – board game
Luddite
BACKGROUNDERS
Davos 2016: The state of artificial intelligence
World Economic Forum 20 January 2016
Should we be afraid of AI?
Luciano Floridi Aeon 9 May 2016
Robot revolution: rise of the intelligent automated workforce
Danushka Bollegala The Conversation 5 May 2016
How unprepared are we for the robot revolution?
Martin Ford Financial Times 3 May 2016
Military killer robots create a moral dilemma
John Thornhill Financial Times 25 April 2016
The robot age will make Luddites out of all of us
Clare Foges The Times 25 April 2016
The danger isn’t artificial intelligence – it’s us
Jennifer Harrison Gadgette 6 April 2016
What is the future of artificial intelligence?
Michael Brooks New Statesman 18 March 2016
How much should we fear the rise of artificial intelligence?
Tom Chatfield Guardian 18 March 2016
All systems GO for the robot takeover
Josh Glancy The Times 13 March 2016
Technophobia is so last century: fears of robots, AI and drones are not new
Adam Jezard Financial Times 2 March 2016
They robots
The Times 1 January 2016
Robot panic peaked in 2015 – so where will AI go next?
Charles Arthur Guardian 27 December 2015
Will artificial intelligence surpass our own?
Christof Koch Scientific American 1 September 2015
Machine ethics: The robot’s dilemma
Boer Deng Nature 1 July 2015
Why I don’t fear artificial intelligence
Peter Diamandis Huffington Post 18 May 2015
Artificial intelligence: Rise of the machines
Economist 9 May 2015
March of the robots
Andrew Keen The Times 22 February 2015
Did Deep Blue beat Kasparov because of a system glitch?
Jennifer Latson Time Magazine 17 February 2015
IBM Watson: The story of how the Jeopardy-winning supercomputer was born
Jo Best TechRepublic
IN THE NEWS
Google patents ‘sticky’ cars to save pedestrians hit by driverless vehicles
Telegraph 19 May 2016
Brave new world? ‘Sci-fi fears hold back progress of AI’ warns expert
Guardian 12 April 2016
Microsoft created a Twitter bot to learn from users. It quickly became a racist jerk
New York Times 24 March 2016
Google’s AI wins final GO challenge
BBC News 15 March 2016
Google driverless car crash was ‘not a surprise’
Independent 14 March 2016
Rise of robots ‘will cost 5 million jobs’
The Times 19 January 2016
Artificial intelligence: Elon Musk backs open project ‘to benefit humanity’
Guardian 12 December 2015
The real problem with artificial intelligence
Business Insider 10 September 2015
Robot kills worker at Volkswagen plant in Germany
Guardian 2 July 2015
Apple co-founder Steve Wozniak says humans will be robots’ pets
Guardian 25 June 2015
Bill Gates on AI: “I don’t understand why some people are not concerned”
Washington Post 29 January 2015
Elon Musk and Stephen Hawking join forces to avoid “pitfalls” of artificial intelligence
Washington Post 12 January 2015
“Artificial intelligence could spell end of human race” – Stephen Hawking
Guardian 2 December 2014
Artificial intelligence is our biggest existential threat
Guardian 27 October 2014