
AI and making our future

by Dr Simon Longstaff AO
28 July 2017
TECHNOLOGY
Jokes about robots taking our jobs are made every day. AI is on our minds and in our future. Ethicist Dr Simon Longstaff considers how we make that tomorrow good.
 
Way back in 1950, the great computer scientist Alan Turing published a paper in Mind that set out a test for determining whether or not a machine possesses ‘artificial intelligence’. In essence, the Turing Test is passed if a human communicating with others by text cannot tell the difference between a human response and one produced by the machine. The important thing to note about Turing’s test is that he does not try to prove whether or not machines can ‘think’ as humans do – just whether or not they can successfully imitate the outcomes of human thinking.
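The structure of Turing’s ‘imitation game’ can be sketched in a few lines of code. This is only an illustration of the set-up described above – the function and respondent names are hypothetical, not anything Turing specified:

```python
import random

def run_turing_test(judge, human_reply, machine_reply, questions):
    """Return True if the machine passes: the judge, seeing only text,
    fails to say which anonymous channel is the machine."""
    # Randomly assign the two respondents to anonymous channels 'A' and 'B'
    channels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        channels = {"A": machine_reply, "B": human_reply}
    # The judge sees only text transcripts, never the respondents themselves
    transcripts = {ch: [reply(q) for q in questions] for ch, reply in channels.items()}
    guess = judge(transcripts)  # the channel the judge believes is the machine
    machine_channel = "A" if channels["A"] is machine_reply else "B"
    return guess != machine_channel  # passed if the judge guessed wrong
```

Note that nothing in this structure inspects how the machine produces its answers – only whether its text output is distinguishable from a human’s, which is exactly Turing’s point about imitation rather than thought.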
 
Although the makers of a chatbot called Eugene Goostman claimed it passed the test (by masquerading as a 13-year-old boy), the general opinion is that the bot was designed to ‘game’ the system, using the boy’s apparent youth as a plausible excuse for the mistakes it made in the course of the test. Even so, the development of computers continues apace – with ‘expert systems’ and robots predicted to displace humans in a variety of occupations, from lawyers to taxi drivers and miners.
 
All of this is causing considerable anxiety – not unlike that felt by people whose lives were upended by the development of steam power and mass production during the first Industrial Revolution. Back then, people could more or less understand what was going on. The machines (and how they worked) were fairly obvious. These days the inner workings of our advanced machines are far more mysterious. Coal or timber burning in a furnace is tactile and observable. But what exactly is an electron? How do you see it? And a qubit?
 
Add to this the extraordinary power of modern machines and it is not surprising that some people (including the likes of Stephen Hawking) are expressing caution about the potential threat our own technologies present, not only to our lifestyles, but to human existence. Of course, not everybody is so pessimistic. However, the key thing to note here is that we have choices to make about how we develop our technology. The future is not inevitable – we make it. And that is where ethics comes in.
 
IQ2 debate | ‘Humanity is designing its own demise’ | Sydney | 24 August | Tickets here
 
One small example of how our choices matter: imagine how wonderful it would be if we could build batteries that never need re-charging. That might seem to solve a raft of problems. However, as Raja Jurdak and Brano Kusy observe in The Conversation, there may be ‘downsides’ to consider: “creating indefinitely powered devices that can sense, think, and act moves us closer to creating artificial life forms. Couple that with an ability to reproduce through 3D printing, for example, and to learn their own program code, and you get most of the essential components for creating a self-sustaining species of machines.” Dystopian images of the Terminator come to mind.
 
However, back to Turing and his test. As noted above, computers that pass his test will not necessarily be thinking. Instead, they will be imitating what it means to be a thinking human being. This may be a crucial difference. What will we make of a medi-bot that tells us we have cancer and tries to comfort us – but that we know can have no authentic sense of its (or our) mortality? No matter how good it is at imitating sympathy, won’t the machine’s lack of genuine understanding and compassion lead us to discount the worth of its ‘support’?
 
Then there is the fundamental problem at the heart of the ethical life lived by human beings. Our form of being is endowed with the capacity to make conscious, ethical choices in conditions of fundamental uncertainty. It is our lot to be faced with genuine ethical dilemmas in which there is, in principle, no ‘right’ answer. This is because values like truth and compassion can be held with equal ‘weight’ and yet pull us in opposite directions. As humans we know what it means to make a responsible decision – even in the face of such radical uncertainty. And we do it all the time.
 
What will a machine do when there is no right answer? Will it do the equivalent of flipping a coin? Will it be indifferent to the answer it gets and act on the results of chance alone? Will that ever be good enough for us and the world we inhabit?
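To make the ‘coin flip’ worry concrete, here is a deliberately naive sketch of how a machine might resolve a dilemma when two values carry equal weight. The function name and the weights are hypothetical, purely for illustration:

```python
import random

def choose(options):
    """Naive dilemma resolver (hypothetical): pick the highest-weighted
    option, falling back to pure chance when the weights are tied -
    the machine's 'coin flip' when there is no right answer."""
    best = max(options.values())
    tied = [name for name, weight in options.items() if weight == best]
    return random.choice(tied)  # indifferent among equally weighted options

# Truth and compassion held with equal weight, pulling in opposite directions
dilemma = {"tell the hard truth": 1.0, "soften the news": 1.0}
print(choose(dilemma))  # decided by chance alone
```

The point of the sketch is what it lacks: nothing in it corresponds to taking responsibility for the choice, which is precisely what the article argues humans do when facing radical uncertainty.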

Dr Simon Longstaff is Executive Director of The Ethics Centre.  
 
