This matters in public relations, because inevitably we will find ourselves at the intersection of AI and its effects on people. We can expect much of the conversation to center on the ethical issues at play.
Here are the nine ethical issues the WEF identified:
- Unemployment. What happens after the end of jobs?
- Inequality. How do we distribute wealth created by machines?
- Humanity. How do machines affect our behavior and interaction?
- Artificial Stupidity. How can we guard against mistakes?
- Racist Robots. How do we eliminate AI bias?
- Security. How do we keep AI safe from adversaries?
- Evil Genies. How do we protect against unintended consequences?
- Singularity. How do we stay in control of a complex intelligent system?
- Robot Rights. How do we define humane treatment of AI?
Every one of these questions is serious, real and provocative. These are issues that will present themselves whether our leaders address them preemptively or not. Anyone who has “public relations” or “communications” in his or her title will be required to explain what it all means.
You may be that person.
For this reason, it may be worth examining how the climate for communications could take shape. Obviously, no effective communication can happen without a solid grasp of the technologies at play. Further, it will require a mastery of ethics at several depths, from basic human ethics and morality, to the ethics of behaviors in business, in government and in communications.
But before we even try to wrap our heads around all of that, we will need a fundamental centeredness that begins with our own individual moral compass. Personally, we need to have a clear idea about right and wrong, and an instinctive sense of pragmatism. Or to put it more simply, I’ll use the words of my late father: “You need to use the good common sense God gave you.”
Common Sense and Those Nine Questions
Each WEF question rests on the premise that we can control all of the variables that will determine the outcome. Since we don’t, and therefore can’t, the common-sense answer to all nine questions above is, “You don’t.”
AI is man-made, but once it starts to take on a life of its own, control over its evolution will become far more fragmented and difficult to achieve. Of course, society must do everything it can to influence positive outcomes. But for those of us in charge of communications, the first mistake we can make is to accept what appears to be the WEF’s premise that a singular group or body is qualified to define what’s right and wrong for everyone else, or, just as importantly, that even if one were, it could deliver.
“End of Jobs?”
AI visionaries predict that employment as we know it will end. They are probably right in the same way as those who might have predicted an end to transportation jobs when blacksmiths were replaced by automobile mechanics, or when telephone operators were replaced by automatic switches, or when elevator operators were replaced by … buttons.
It is probably true that society will need to brace itself for yet another revolution in the way we work, perhaps on a par with the transformation from an agricultural economy to an industrial one, and then many decades later to an informational one. But “end of jobs” sounds a bit melodramatic.
Our role in public relations will be to assess at every step the impact of AI on the work force and help explain not only how that impact is taking shape, but also where the new opportunities may be as work itself continues to evolve. We’ve done this many times before. It’s one of our strengths and probably the one area where the communications profession is most prepared to step in and pave the way for AI.
Who Gets to Define “Inequality”?
In history class, or political science class, or economics class, we learned about the basic systems of government and economics. Some are pretty straightforward. Under dictatorships or monarchies, the lines of inequality are clear: you have the few who make all the decisions on “wealth distribution,” and then you have everyone else, who are not deemed “equal” or deserving.
Under the communist and socialist regimes of the past 100 years, you had what was written on paper, and then you had those theories put into practice, which usually ended up, in some form or fashion, looking much like what I just described in the previous paragraph.
Because a free and democratic society is founded on the rights individuals possess, there is a key distinction between rights and outcomes. Economically, we have a right to work or start a business, but it’s on us to go out and earn. The system (in the U.S.) is structured to assure us the right to earn, but not the entitlement to receive. Of course, governments have certain entitlement programs, but the economic engines that drive growth, prosperity and feed tax coffers rely on income- and revenue-generation. With this in mind, it is largely assumed that the distribution of wealth is self-determined and based on all of the factors that go into making a living.
For PR pros, the major issue with the question of how to distribute wealth created by machines is its premise: that an individual or a small group of individuals should be given the power and authority to decide how to allocate wealth, and to whom. There will most likely be public relations professionals on all sides of these issues.
Machine Impacts on Human Behavior
Perhaps the most common and pressing issue that public relations professionals will face as AI is integrated more deeply into our daily lives will be the impact those machines will have on our own human behavior and interaction.
All you have to do is sit in the food court of any shopping mall and you’ll see how machines are changing human interaction. Watch a group of teens stare at their phones instead of talking to each other, or stroll past the growing number of empty storefronts in the mall, thanks to the rising dominance of ecommerce.
At every turn in this evolution, it will be PR’s job to educate, persuade and inform on the full range of issues where new technologies continue to change the way humans interact with each other.
Artificial Stupidity: Guarding against mistakes?
This, we know. AI is only as good as its makers, and its makers are human and therefore imperfect. It’s not hard to imagine a world reliant on self-driving cars, where some of those cars kill people. It’s equally easy to envision an AI-controlled drone every now and then falling from the sky, putting people’s safety at risk. And did I mention invasion-of-privacy issues?
Over the decades, society has learned to accept certain trade-offs with increased automation. The goal is usually to minimize mistakes with the understanding that perfection is not attainable. What makes this issue even more challenging is the scale of power and influence AI has the potential to wield. Power grids, entire cities and regions can be affected, and millions of people can be impacted by a single event.
For communicators, one major dynamic will change – accountability.
Until now, human accountability has always been the cornerstone of ethical decision-making and behavior. When something goes wrong, we immediately and innately look for the responsible humans to address the issue. And to do the right thing, those humans rely on their own survival instincts, from something as basic as wanting to physically survive a crisis, to the more common motivations of fear of being criminally prosecuted, sued or fired.
AI removes all of these emotions and dynamics and puts a disconcerting buffer between responsible humans and decision-making. This presents big challenges for communications professionals, who will still be required to look for accountable parties, people who will be held responsible when machines make bad decisions. PR will have to play a role in sorting that out.
AI and Bias
The WEF points out that Google had some problems of its own with AI and how it was used to predict future criminals. Apparently, Google’s AI showed bias against African Americans.
Before getting into the bias of the technology, it’s worth asking a more fundamental ethics question: Who gave Google the right to predict criminal behavior based on appearances?
At the moment, it’s understood that AI is not our justice system, and that we have a right to the presumption of innocence until guilt is proven. In other words, we have a highly regulated justice system of checks and balances, one designed to be slow and deliberate.
So, a reliance on algorithms to predict criminal behavior based on appearances can lead to all sorts of issues that can create or perpetuate AI-driven stereotypes across all demographics in any number of situations.
As a result, it’s quite possible that in the future, when AI is involved, one of our most important roles may be to give voice to the concern that an organization is relying too heavily on value judgments made by machines.
Keeping AI from Adversaries
The WEF is concerned that evil people may see AI as a powerful new weapon in their arsenal. This concern is not only valid but probably as serious as conventional policies to keep weapons of mass destruction out of the hands of the bad guys. The problem is, AI may eventually be so ubiquitous that to try to “keep it” from adversaries may not be realistic.
In the PR profession, our role may be to sound the alarm on issues as they relate to policy. This will allow decision-makers to better create policies that favor the good AI can do for the world, while not underestimating the bad it can do in the wrong hands.
What if AI Turns Against Us?
Mary Shelley wrote the iconic Frankenstein story in 1818, 200 years ago, and it’s even more relevant today when we contemplate the power and the risks of AI. In that story, Dr. Victor Frankenstein creates a monster that eventually turns against him, its creator. No longer science fiction, today scientists are creating machines that learn and decide and learn again. They control other machines and the many mechanisms that allow society to function.
The WEF cites one of the most classic concerns of science fiction authors: What if the machines turn against us?
Rather than assign nefarious motives, the WEF points to the real possibility that through some glitch or misunderstanding in programming, a machine could misinterpret a stimulus, data, an image or the very presence of a human, and make an errant decision that causes harm.
As with other scenarios, it may be the role of communications professionals to speak up when they see the potential for human risk and how that risk could play out.
What if the Machine Becomes Smarter than Us?
The difference between humans and all other beings on Earth, says the WEF, is our intelligence. So, the ethical question becomes, “What could happen if the machine becomes ‘smarter’ than its creator?”
This question appears to raise the issue of whether technology policy-making in the future will call for a figurative “kill switch” to be built into any AI technology to serve as a means to shut down a system before it can do much damage.
Public relations professionals will play an important role in the debate over how such policies should be crafted and enforced, and eventually implemented.
What if Machines Develop Feelings?
The final WEF question entertains the extreme notion that robots could (should?) have rights not unlike human or animal rights and that machines could deserve “humane treatment.” To give this question serious consideration, we would have to accept the premise that machines are “living” beings with feelings.
At the moment, and with so many other higher priorities when it comes to AI, assigning feelings to machines and then assigning them the same rights as humans may be a bridge too far. It’s probably best to let the next generation of PR pros deal with that.