Imagine two people talking in a bar and one says that they believe in God and the other says that there is no such thing. The conversation moves on. One says that they think their Apple phone and tablet are the best things ever and the other says that if most of the world uses Android that must prove their thinking is wrong. The conversation moves on. One person says that Trump is the best person to lead the USA into the future and the other says that Trump will only harm the country’s standing in the world.
It doesn’t
matter which person you identify with in each of those discussions, what it
shows is that not all people agree on these three and many other issues. But we
knew that already. The reason why it is important is because those two
hypothetical people could be responsible for the training of two different
pieces of artificial intelligence (AI) software. The views, opinions, beliefs,
and values of the person responsible for the training of an AI could influence
the ‘thinking’ of the AI and the responses that it comes up with when asked
questions by users. And those people could be mainframe users.
Britannica
tells us that the “term ethics may refer to the philosophical study of the
concepts of moral right and wrong and moral good and bad, to any philosophical
theory of what is morally right and wrong or morally good and bad, and to any
system or code of moral rules, principles, or values”.
Let’s suppose that someone with the mindset and ethics of Adolf Hitler trained a popular AI, or perhaps one of the founding fathers of the USA was responsible for the training. What kind of AI would they produce. The founding fathers of the USA were generally quite happy with the idea of slavery. The men still thought that women didn’t need to be educated because their poor feeble female brains couldn’t cope. And that women basically belonged to their fathers until they were married when ownership passed to their husbands. Much the same thinking applied over most of Europe. It’s the way most Europeans thought in the 17th and 18th centuries.
So, let’s
suppose that a piece of AI software – and, nowadays, you can hardly buy a new
device without it being advertised as coming with some super new AI – has been
trained with some ethical value that the majority of people don’t agree with.
However, because that is such a small part and everything else seems OK, the
software gets installed on your mainframe. Let’s suppose it’s a piece of
security software that is identifying unusual activity on your mainframe.
Perhaps a systems programmer has apparently logged in from a foreign country at
2am, and is now making changes to the system. Perhaps he is giving some
software higher access levels than before. Perhaps he is deleting certain
files. Now, hopefully, your security AI will spot this as unusual, and quickly suspend
the job until someone can check exactly what is happening. Then, if it’s all
OK, the job can continue. If it’s not OK, then not too much damage has been
done.
The users of
the AI will assume that the AI is on their side. It has the same values as them
and knows what’s good and bad, or right and wrong in the same way as the user.
But what if it doesn’t? You don’t usually expect software to have ethical
values, but with AI, this becomes more of a concern. What about using an
open-source AI. How can you check whether the values that have been trained
into it match yours?
There’s lots of
talk about the ethics of using AI software. Should students use AI to write
their essays. Should AI be used to create nude videos of famous (and not so
famous) people. And there are so many other areas. But what no-one talks about
is the actual ethical values of the AI software itself.
We’re all
familiar with the Terminator movies. Suppose the AI decides that humans
are destroying the planet, and the right thing is to remove them from
existence. Or, more worryingly, suppose the AI decides that someone logging
into your mainframe from one specific foreign country is permissible because
they are our friends, and lets them launch a ransomware attack on your
mainframe.
Ethical
conversations over a beer usually pass off without anyone getting too upset.
The embedded ethics of AI software might have more far-reaching consequences.
No comments:
Post a Comment