Here’s the thing: there are big problems associated with having trust models that are too complex for people to understand. Most importantly, if a model is too complex, a human being would not actually wish to use it to help them. Furthermore, if a system can’t explain how it got somewhere, the reason to trust it is rather small.
It gets worse: complexity is exactly what gets in the way when we want real people to use our shiny tools.
There are some aspects of this argument that are perhaps a little problematic. Firstly, I have previously suggested that complexity actually necessitates trust: if something is too complex to understand or predict (and thus control), what else is there to do but think in terms of trust? I think this is a fair argument, and it’s one that Piotr Cofta’s work supports.
So, why wouldn’t a complex trust model be worthwhile, since it puts us, ironically, into the ‘need to trust’ zone? It’s a fair question.
It also mixes up our metaphors a little.
On the one hand, we can talk about the fact that complexity means we have little choice but to trust a system for some purpose. On the other, complexity and obfuscation mean that we have little reason to do so.
What a conundrum!
A counter-argument is of course that the model need not be decipherable, it just needs to be followed — suggestions need to be seen as, as it were, commands.
This is the very antithesis of Trust Empowerment (indeed, by definition it is Trust Enforcement). This is not the right way to do things: we should be trying to design systems, including those that help us with trust decisions, that are more understandable and by extension more empowering. It should go without saying that helping people understand better is a good thing.
To be clear, I’m talking about trust models in autonomous systems that help these systems make decisions for us (like recommending hiring someone, buying stocks and bonds, things like that). Clearly, there are situations where the autonomous systems we create will use their own models of trust. In these cases, the opaqueness of the model itself — either because it is extremely complex or because it is obfuscated in some way — is of use, at least to strengthen the society against trust attacks (we’ll get back to that sometime!).
However, even in these circumstances there remains a need to explain.
Steve’s First Law of Computing (hey, that’s me!) is that, sooner or later, every computational system impacts humans. This is important to remember because at some point a human being will either be affected or want to know what is happening, and often both.
It is at this point that the whole explainable bit becomes critical. There is a lot of literature around things like explainable AI, explainable systems and so forth, but what it comes down to is this: The complex tools that we create for people to use have to be able to explain why they have chosen, or recommended, or acted the way they have.
They may need to do this after the fact (let’s call those justification systems) or before (in which case “explanation” or “decision support” is probably more apropos).
As an aside, it’s probably fair to say that we as humans don’t always know why we did something, or trusted someone when we did, for example. Asking us to explain ourselves often results in a “just because”.
A long time ago, when Expert Systems were all the rage, explainability was recognized as important too. If an Expert System came up with some diagnosis (there are lots of medical ones) or prediction, it was possible to ask it “why” or, more correctly, “how” it came to the conclusions it did. The system could show which rules were triggered (and the system’s confidence in them) by which data, all the way back to the first evidence or question it got. It’s pretty neat actually, and as a form of justification it works. In fact, the whole notion of Explainable AI is looking for a similar kind of support system (in spirit at least).
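To make that concrete, here is a minimal sketch of the idea: a toy forward-chaining rule system that records which rules fired, on which facts, with what confidence, so it can answer “how” after the fact. The rule names, facts, and confidence values are all illustrative assumptions, not drawn from any real Expert System.

```python
# A toy rule-based "expert system" that keeps a trace of triggered rules,
# so it can justify its conclusion. Rules, facts, and confidences here
# are made up purely for illustration.

class TinyExpertSystem:
    def __init__(self):
        # Each rule: (name, preconditions, conclusion, confidence)
        self.rules = [
            ("R1", {"fever", "cough"}, "flu_suspected", 0.7),
            ("R2", {"flu_suspected", "fatigue"}, "recommend_rest", 0.9),
        ]
        self.trace = []  # the justification: which rules fired, and on what

    def infer(self, facts):
        facts = set(facts)
        changed = True
        while changed:  # forward-chain until no new facts appear
            changed = False
            for name, pre, concl, conf in self.rules:
                if pre <= facts and concl not in facts:
                    facts.add(concl)
                    self.trace.append((name, sorted(pre), concl, conf))
                    changed = True
        return facts

    def explain(self):
        # Walk the trace from the first evidence to the final conclusion
        return [f"{name}: {pre} -> {concl} (confidence {conf})"
                for name, pre, concl, conf in self.trace]

es = TinyExpertSystem()
es.infer({"fever", "cough", "fatigue"})
for step in es.explain():
    print(step)
```

The trace is the whole trick: the conclusion alone is just an assertion, but the recorded chain of fired rules is a justification a human can inspect and challenge.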
The thing is, complex systems like neural networks, genetic algorithms, black-box artificial intelligence(s), or even complex mathematical trust models in an eCommerce setting, can’t really explain themselves like that. They can, however, maybe pretend to, or perhaps lead the human through a path that makes sense to them. This may actually be enough. After all, bear in mind that humans are notoriously bad at explaining themselves too. Holding other systems to a higher standard, especially in uncertain circumstances, does seem a little demanding.
In many situations we do expect good explanations from humans. “Why did you hit Steve?”, “Why did you swerve off the road?”, “How did this happen?”, “Why do you think this is true?”, “Why should I trust Steve to mail a letter?”.
Going out on a limb here: If we are using systems that act in our name, the systems that we use have to be able to explain or justify in some reasonable way. The more complex, the more imperative this is.
This of course raises a question: What does “in our name” actually mean? Perhaps this is something we need to think about a little more. I’m sure we’ll get back to that sometime.
To bring us back to trust for a moment: it’s probably pretty clear that better explanations lead to increased trust, but that’s not the point here. The point is that building trust systems so complex that people can’t understand them leads nowhere fast unless those systems can explain themselves.
Simple or complex: explanations are key. But they are also just part of the problem. In part two I’ll jump a little further into this.