As technology improves and artificial intelligence becomes less of a movie shtick and more of a viable business option, we’re tasked with answering a wave of questions on the ethics of automation.
And while we’ve had to answer some of these questions before, some of them are unique to the growing complexity of AI and the increasingly automated workforce that we see today.
What’s more, these questions force us to re-examine our own values and consider just what it is that makes us human, and what makes everything else anything but.
What Do We Do with the Displaced Workers?
The first question that usually springs to mind in almost any discussion on the ethics of automation is whether or not it is right to replace so many workers with machines.
To begin, this question is one that’s as old as progress itself. When a new technology comes along that can do the job quicker, better, and cheaper than a human, is it right to use that technology knowing it’s going to cost someone their job?
While it’s certainly difficult to deny the emotional pull of employing the man rather than the machine, the drive of progress is just as human a feeling as sympathy. And more often than not, when that drive is supplemented with the reward of money, it’s likely to be the obvious choice for any business owner.
Rather than debate the issue of whether or not to allow machines to replace workers, then, it might be more beneficial to discuss how to better help these displaced workers find new employment. Putting funds into workforce transition programs may be the key to moving ever forward while still holding onto our humanity.
How Do We Account for Growing Income Inequality?
A mechanical workforce brings with it a variety of benefits. In general, automated workers make fewer errors, are more efficient, and can work without breaks and with significantly less downtime.
These three factors alone are enough to bring in substantial profits for any business, even despite the large initial investment.
And most importantly, these workers aren’t paid a salary. When you take into account that many businesses spend around one-third of their revenue on paying their employees, it’s easy to see why a mechanized workforce can be an absolute game changer.
But one question that arises out of all this extra dough: should automated businesses be required to contribute more to society? Essentially, should they be forced to pay higher taxes because their business is run mostly by machines?
After all, as the owners continue to get richer, the people they used to employ are now out of a job, causing the income gap to grow even wider.
And when income inequality grows, economic instability has been shown to rise as well. What’s more, if the majority of the population is out of work, how are they supposed to keep buying the products produced by the company that replaced them in the first place?
Requiring these business owners to shoulder a heavier tax rate could help keep money flowing into the social and financial systems that the displaced population now depends on, thereby staving off economic disaster for the entire country.
How Do We Decide to Treat Advanced Robotic Intelligence?
Worrying about the rights and liberties of displaced workers is certainly one of the biggest topics of concern when it comes to the ethics of automation. But there’s one other type of worker whose rights may need to be considered in this brave new world: the robot.
We may still be a few decades (or centuries) away from this type of scenario, but what level of intelligence and personality must a machine have before it becomes necessary to consider whether or not it should have legal rights?
Is it intellect that gives something the right to legal recognition as an individual? A personality? Emotions?
The rapid advancement of artificial intelligence has put humans in an ethical quandary that has never needed to be dealt with before. What if we’re faced with a being that is just as smart and capable as we are (and perhaps, at some point, even more so) but isn’t technically human? Does it deserve to be treated as an equal?
And how will doing so change our systems of taxation, of voting, of healthcare, welfare, and citizenship?
How Will We Instill Human Values into Non-Human Systems?
And finally, if we do plan on implementing some sort of value system into artificial intelligence, how do we do so in a way that truly reflects what we think is important in society?
An artificial intelligence built entirely on logic, for instance, may determine that the only way to protect the human race indefinitely would be to enslave it, à la the supercomputer VIKI from the popular film I, Robot.
And while this kind of reasoning is obviously flawed from our perspective, it would only seem flawed to the machine if the values that make it absurd to us (i.e., the importance of liberty, self-realization, etc.) were actually communicated to the artificial intelligence system in the first place.
Before implementing such a sophisticated AI system, then, it would be absolutely crucial that we find a way to build a comprehensive value system by which these systems could operate. Basing decisions on logic alone may bring with it a host of unforeseen consequences.
The Ethics of Automation: A Complex Problem
While a large number of ethics-based automation conversations are already happening, the problem is likely to become far more complex than we anticipate. As new technologies become more common, unforeseen ethical questions are bound to arise.
And while the outcome of such conversations may help put policies and regulations into place that are bound to change the world (for better or for worse), they also serve as a way to examine our own values a little more closely. And that alone makes the conversations worth having.