(Not) trusting AI


Humans and AI working together typically do a better job than either could on their own. People’s innate wariness toward AI presents an impediment, but one that can be overcome by fostering fruitful cooperation, which breeds trust. Hence, the objective of this text is to suggest an alternative, pragmatic perspective on the trust issue when it comes to working with AI in a company.

When working with AI, sooner or later the topic of trust comes up.

“We don’t trust the system to make responsible decisions” or “if we can’t see exactly how the algorithm came up with all the individual steps, we can’t trust it” are typical statements describing mistrust towards artificial intelligence.

This lack of trust is particularly challenging, because AI’s biggest potential lies in humans and AI working together and complementing each other. This is most clearly on display in games, but it is no less true in the working world. Yet, something is holding us back from making full use of that potential. And that something is within us.

Since it is obvious that trust plays such an important role in this relationship, let’s think about that a little bit.

Humans trusting humans vs. humans trusting AI

At the beginning of interpersonal relations, there usually is a leap of faith, and most of our life together is based on mutual trust. Starting off with a leap of faith has paid off from an evolutionary perspective and makes life much easier than mistrusting everyone from the beginning. The British journalist, businessman and writer Matt Ridley even suggests in his book “The Origins of Virtue” that it has paid off to such an extent that cooperation itself might be in our DNA, as genes which generated reciprocal altruistic behavior would have been likely to be passed on. And according to psychoanalyst Erik Erikson, the development of basic trust is the first stage of psychosocial development, occurring during the first two years of life.

Trust, communication, reputation and mechanisms to regulate opportunistic behavior are the foundation of successful cooperation. Game theory has demonstrated beautifully that it is actually in our own best interest to renounce short-term profit maximization when cooperating with other human beings, and to instead care for the common good. To summarize, trust is one of the key factors that lead us humans to cooperate with each other (figure 1).
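The game-theory point can be made concrete with a minimal iterated prisoner's dilemma. The sketch below is purely illustrative: the payoff values are the conventional textbook ones, and the strategy names (`tit_for_tat`, `always_defect`) are this example's own. Over repeated rounds, two players who trust and reciprocate earn far more than two players who each grab the short-term maximum every round.

```python
# Iterated prisoner's dilemma: why renouncing short-term profit
# maximization pays off in repeated cooperation.
# Conventional textbook payoffs: both cooperate -> 3 each,
# both defect -> 1 each, lone defector -> 5, betrayed cooperator -> 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Start with a leap of faith, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Maximize short-term profit in every single round."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Play repeated rounds and return the two cumulative scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each reacts to the other's past
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Mutual trust outperforms mutual opportunism over 100 rounds:
print(play(tit_for_tat, tit_for_tat))        # (300, 300)
print(play(always_defect, always_defect))    # (100, 100)
```

Each defector "wins" any single round against a cooperator, yet over the long run the pair of trusting reciprocators collects three times the payoff of the pair of opportunists, which is the sense in which caring for the common good serves one's own interest.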

Figure 1: Trust between humans leads to cooperation

Alas, these age-old mechanisms do not transfer very well to human-AI interaction. Perhaps if AI resembled humans more closely, we would be more inclined to extend to it the same basic trust that we grant each other. For the time being, however, we have to find ways to overcome our emotional reluctance to cooperate with AI, even though we know on a rational level that such cooperation can be far superior to either humans or AI working alone.

How to increase trust in AI

This raises the question of how trust in AI could be increased. Interestingly, there is research suggesting that trust is better understood as a result of, rather than a precondition for, cooperation. That is to say, if we know what part of the job an algorithm is supposed to do, we understand how it does it, and we can see it making a solid attempt at its goal, we are going to trust it. The practical implication, then, would be to make it easier to learn about how a given AI works and to integrate it into one’s workflow in an accountable manner. Trust should then be the natural consequence of successful cooperation (figure 2).

Figure 2: Cooperation between human and AI leads to trust

A lot of effort is already going into the design side of facilitating human-AI cooperation (think human-centered design), working out systems with which humans can collaborate more easily. If we complement these efforts from the other side and give people more opportunities to learn about AI on a broader level – especially its actual capabilities and limits – we would foster human-AI cooperation and at the same time work on demystifying AI.

The European Commission has expanded on this line of thought in its recently published “Ethics guidelines for trustworthy AI”. According to the guidelines, trustworthy AI should be (1) lawful, respecting all applicable laws and regulations; (2) ethical, respecting ethical principles and values; and (3) robust, both technically and in terms of its social environment. The aim of the guidelines is both to promote trustworthy AI and to set out a framework for achieving it. The third of these requirements – robustness – is of particular interest to the subject at hand: when aiming to work with AI, one of the major objectives has to be making AI more reliable, explainable and thus understandable for humans.

As we try to foster tangible, traceable and transparent human-AI interaction on a larger scale, we may look to other countries such as Finland, which has set out to teach one percent of its population the basic concepts of artificial intelligence (https://www.elementsofai.com/), thus giving people easy access to AI and the chance to familiarize themselves with it in a conducive way.

To many employees – even those who are personally very interested in learning more about AI and how they could benefit from working with it – AI still seems to be more of a curiosity than something they can integrate into their daily work lives. One lesson to draw from the discussion above is that we need to increase the number of touch points people have with AI, such as demos and training opportunities for all employees of a company. Visualizations that companies put up on their walls and that work as a communication tool by displaying the changes lying ahead (“change pictures”) can also help decrease the perceived distance between humans and AI. Last but not least, more actively rewarding human-AI cooperation works as an additional motivator on the way to developing a positive AI culture in an organization.