There are still many unanswered questions about the age of AI and how we will live alongside artificially intelligent machines and robots that may one day become more intelligent than we are. How can we coexist comfortably and conveniently if the machines we have created begin to think for themselves? Do you believe in the technological singularity, and is it near? Here are some common ethical dilemmas we will face in the age of AI.
In this context, an AI machine can be a computer or a smart device; it can also be a robot, with or without appendages, that physically emulates human life.
Machines Lack Empathy
Many would argue that you can't program the feeling of empathy: it's an innate capacity that we're born with and share with others who can be empathetic in return.
In the distant future, some machines may have to make decisions for humans. In the case of empathy, imagine a robot having to decide whether resuscitation attempts on an unresponsive human should be undertaken, and if so, for how long.
If a machine had to choose between saving the life of a child or a parent, which would it choose? Some would argue that you can never write code equivalent to the depth of personality, judgement and empathy that humans possess.
Machines Lack Moral Judgement
Let's look at some decisions requiring moral judgement that may arise if Artificial Intelligence were to take a more central role in our daily lives.
What about a hostage situation? Would a machine be able to assess and act correctly in that scenario? What steps would be needed to ensure a robot is ready for deployment into, for example, emergency rescue services?
Could a robot decide whether to risk itself to save 100 people, or instead save one person at no risk to itself while leaving the 100 in peril?
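To see why dilemmas like this resist programming, consider a deliberately naive sketch (hypothetical code, not any real system) of a purely utilitarian decision rule. The function name, data fields, and scenarios here are all invented for illustration:

```python
# A deliberately naive, purely utilitarian decision rule (hypothetical).
# It counts lives only -- ignoring empathy, relationships, self-preservation,
# and context, which is precisely the difficulty the article raises.

def choose_rescue(options):
    """Pick the option that saves the most lives; all other factors ignored."""
    return max(options, key=lambda option: option["lives_saved"])

options = [
    {"action": "save the group", "lives_saved": 100, "robot_destroyed": True},
    {"action": "save one person", "lives_saved": 1, "robot_destroyed": False},
]

chosen = choose_rescue(options)
print(chosen["action"])  # the rule always picks whichever option saves more lives
```

A rule this simple cannot express why a parent might still choose their own child, or when the robot's own destruction should matter, which is the article's point: the hard part is not writing a rule, but deciding whose judgement the rule should encode.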
Along with empathy, it may not be possible to program moral judgement into Artificial Intelligence. Would you ever trust your child to be babysat by an AI robot? Would it clearly know the difference between toy scissors and real ones?
Machines Lack a Full Understanding of Risk
When we drive our vehicles or walk through a crowded parking lot full of moving traffic, we almost completely understand the inherent risk involved in these activities. We also assume responsibility for those around us while we captain our way to our destinations.
Machines, on their own, serve one purpose and do not stop serving that purpose until they break or shut down. This poses a problem when we finally relinquish control to AI. Will they show the same instinct for self-preservation in all the unique situations we can find ourselves in but can't always plan for? Can a self-driving vehicle override its presets in the face of a surprise that could endanger itself and, by extension, us?
What are the Rules on How to Manage Them?
Making rules for such computers may prove even more challenging than teaching an artificial intelligence how to act on them. As has been true for centuries, humans by nature disagree on thousands of ethical and moral issues. This is partly why we segregate into smaller groups of people who share the same beliefs; it helps us feel more firmly planted and justified in our ways.
It stands to reason that if machines can act or vote on our behalf, we might have to create them to live just as we do, separating into like-minded groups. Would that make AI racist? Would robots need to know that other AI beings think differently from them in order for each to be a critical thinker? Could AIs even communicate properly with each other?
Should Machines Have Rights?
If artificially intelligent machines become truly as lifelike as us, even if only in their ability to think, they may not have rights when they are created. But in time they may well demand those rights, and it might be out of our hands; they might even look for Artificial Intelligence love.
When it comes down to it, is it not our morals and empathy that lead us to having rights in the first place? Why would our creation not come to that conclusion as well?
By Eran Abramson - December 7, 2016