Before You Put Your Faith in AI for Crisis Communications, Learn About “Automation Bias”


In the closing chapter of my book, The Essential Crisis Communications Plan, I touch on one of the most transformative developments to come along in crisis communications in recent years: the emergence of generative artificial intelligence (AI).

First, I’ll recap what the book says, and then I’ll elaborate further:

“As a tool, (AI) will be powerful. It will dramatically accelerate the gathering and collection of information. It will enhance the analysis process. As the technology becomes more advanced, it may even be able to tap the “collective brain” to make some serious and mind-boggling discernments.

But as vast and powerful as AI may become, it can only capture a fraction of the human emotional and intellectual experience. Here is what AI will never be able to do. It will never be human. It will never be truly sentient in that sense. It won’t be able to feel or comprehend human feelings as only humans can. It won’t be able to empathize. It won’t be able to appreciate goodness for its own sake. It won’t fear evil or comprehend optimism, or selfishness, or selflessness, or ego, or humility, or pessimism, or anxiety, or morality, or faith, or God.

Only humans can grasp these things, and we must if we are to be truly effective crisis managers and communicators. Because the people who are impacted by crises are humans, and that’s how they see the world.

We call them “stakeholders” for the expediency of communications planning, but in the end, they are people, imperfect and human. Their welfare, their well-being is the root of their trust in us or their distrust of us. They can only trust us because we are people, too.

Imperfect and flawed, we are like them. That is our superpower. We can never forget this.”

What Does This All Mean?

There is much excitement over AI. In some of my non-crisis work these days, I’m working closely with AI scientists, developers and creators, so I understand that excitement.

But at the same time, we have to be aware of something called automation bias: the tendency to put more faith in a technology or an automated process than in the humans or the human-designed processes already in place.

Machines make mistakes. We have known this for a long time. AI makes mistakes too, sometimes serious ones.

To deal with this, we have to remember, always, that AI is just a tool, and that when we use it in crisis communications, we cannot be over-reliant on its capabilities. We have to maintain our own roles as the overseers, the verifiers, the fact-checkers, the quality control managers.

I’m particularly happy about the amount of time AI will save me in collecting data, beginning to organize it, and even laying the foundation for analysis. But for me, AI will be on a short leash. I won’t ever trust that it got all the facts right on its own, or that it will produce the kind of quality I think my clients deserve. I look at the tool as an efficient starting point for the very human process of providing clients with responsible, ethical, and effective crisis communications counsel.

∼ ∼ ∼

Tim is the author of “The Essential Crisis Communications Plan: A Crisis Management Process that Fits Your Culture.” He is the founder of O’Brien Communications and has provided crisis communications and issues management support to clients ranging from Fortune 100 firms and national nonprofits to emerging start-ups. Over the decades, Tim has handled hundreds of crises, large and small, working with some of the most iconic brands in the world along the way.
