Coming ethical dilemmas in Artificial Intelligence

Artificial Intelligence technology continues to advance ever faster on its exponential path towards the technological singularity, and alongside its truly amazing achievements we can see a growing profusion of ethical problems. I’ve already written about the ethical issues involved in the self-driving car domain, but that is just one example. Here I look at some more ethical dilemmas in AI.

Consider the judicial system. Say you’re arrested and brought before a judge, and the question of bail – whether you can go home until your trial – comes up. The judge denies bail; off to prison with you! You ask, “Why did you deny me bail?” The judge says, “Because it’s likely you’d murder someone between now and the trial.” “What made you think that?!?”, you protest. And His Honor replies, “I dunno… my computer told me it is so. I’ve no idea why.”

Shades of Minority Report! But this is not a dystopian science fiction story: an AI system trained on countless examples of past cases is in use today in many US courts, recommending to judges whether to grant bail. And there are two problems here. One is that the system seems racially biased; reports say it treats African Americans more harshly than whites. The other – which is also a hindrance to fixing the first – is that we don’t know why it recommends one way or the other. By “we” I don’t mean just the judges: the computer scientists who created and trained the system have no idea either. This is of course a key trait of neural networks, the brain-inspired technology at the heart of most of today’s AI systems. You train them by examples, and they extract meaning by tweaking their internal parameters – the synaptic weights – of which they can have thousands or even millions. Usually there is no way to make sense of these parameters; we just know that they cause the system to make correct predictions when we test it. But without knowing what makes them tick, how can we be sure there won’t be a category of data where they’ll go wrong? And how can we verify their recommendations ourselves?
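To make this opacity concrete, here’s a minimal sketch – synthetic data and scikit-learn, purely for illustration; the real court systems are proprietary and their internals are not public – of training a small neural network and then staring at its weights:

```python
# A toy illustration of neural-network opacity; NOT the actual bail system.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 10))                 # 500 hypothetical cases, 10 features
y = (X[:, 0] + X[:, 3] > 1).astype(int)   # a synthetic "risk" label

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X, y)

print(clf.predict(X[:5]))    # confident yes/no recommendations...
print(clf.coefs_[0])         # ...but the learned weights are just a matrix of
                             # numbers, with no human-readable reason attached
```

The network happily issues recommendations, yet nothing in those weight matrices tells you why any particular case was flagged as risky.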

Then there’s the issue of the Big Data used to train the networks. Take a health setting: the wonderful AI tools that can advise your doctor how to treat you when you get seriously ill – or, preferably, before you do – need to learn from past cases; the data in those cases is derived from real patients, often without their consent. Implanted devices like heart pacemakers generate data that is sent back to the manufacturer around the clock – and is used or sold. Who owns this data? And before you say “who cares”, remember the murder suspect who claimed he had been sleeping at the time of the crime – the court subpoenaed his pacemaker data and proved he had been very much awake… And yet, heart patients also benefit from having their data sent to their doctors, to alert them to any life-threatening anomalies.

More on medicine: remember the historian Yuval Noah Harari’s amazing statement that “Organisms are Algorithms” – and can be hacked? (I recommend you watch Harari’s talk on this.) Indeed, our genome is data – and we have a lot of this data, now that the price of genome sequencing is falling within reach of the average citizen. AI systems can analyze this data and learn to predict life outcomes at the individual level. And since we can edit the genome (of plants, farm animals, and humans), the potential is as great as the danger. AI will play a big role in understanding and modifying DNA in order to increase crop yields and create better medical treatments – and the specter of “designer babies” and eugenics is just as real.
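As a toy illustration of “our genome is data” – the encoding and model below are my own simplification, not any actual research pipeline – one can code genetic variants as plain numbers and feed them to an ordinary classifier:

```python
# Toy sketch: genotypes as machine-learning features; entirely synthetic data.
# Each SNP (a single-letter genome variant) is coded as 0, 1 or 2 copies.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
snps = rng.integers(0, 3, size=(1000, 50))  # 1000 people, 50 hypothetical SNPs
trait = (snps[:, 7] + snps[:, 21] + rng.random(1000) > 3).astype(int)

model = LogisticRegression(max_iter=1000).fit(snps, trait)
print(model.predict_proba(snps[:3]))        # per-person probability of the trait
```

Real genomic prediction uses millions of variants and far subtler statistics, but the principle is the same: once the genome is numbers, it is grist for any learning algorithm.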

Next: with enough data, very soon AI will be able to predict with good accuracy the cause and time of your natural death (barring accidents). It may even be able to do so in your infancy, or maybe in utero. What should we do with such prophetic power? Do people want to know when they’ll die? Will it make them happier – or will it drive some of them to depression or suicide?
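The statistical machinery for this already exists: survival models estimate time-to-event from an individual’s features. Here is a minimal sketch using the lifelines library – the data is made up, and a real mortality model would need vastly richer inputs:

```python
# Toy survival-analysis sketch with synthetic data; not a real mortality model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "age": rng.integers(20, 80, 300),
    "biomarker": rng.random(300),                # an invented health indicator
    "years_observed": rng.exponential(10, 300),  # follow-up time per person
    "died": rng.integers(0, 2, 300),             # 1 if death was observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_observed", event_col="died")
print(cph.predict_median(df[["age", "biomarker"]].iloc[:3]))  # median survival
```

Swap in genomes, medical records and wearable data, and you get exactly the prophetic power described above.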

Disturbing as all this is, it is just the beginning. We can be certain that AI is going to widen and deepen its involvement in every aspect of our lives, and sooner rather than later. Intelligent systems will learn to do anything we can, and do it better on average, and we’ll be less and less able to understand how they do it. Not only will the systems themselves do their thing without a shred of ethical understanding, but the humans who program them will have no ability to foresee the problems that will arise. Just like the proverbial blind leading the blind… and the rest of us will need to hope for the best.

To conclude this post on an optimistic note: it will certainly be interesting to see what happens!

Nathan Zeldes

Nathan Zeldes is a globally recognized thought leader in the search for improved knowledge worker productivity. After a 26-year career as a manager and principal engineer at Intel Corporation, he now helps organizations solve core problems at the intersection of information technology and human behavior.
