Many urgent questions surround the systems we are deploying in the world, and not all of them are being addressed particularly well. Beyond the obvious racism, sexism and other bias in the tools we are creating, complexity and understanding are, to be honest, amongst the most pressing.
If we are using systems that act in our name, those systems have to be able to explain or justify their actions in some reasonable way. The more complex the system, the more imperative this becomes.
If anything is true, it’s that AI has become a hot topic in recent years. However, it’s almost as if the people who ‘make’ AI are rather worried that we won’t trust it. There is a great deal of chatter about making AI more trustworthy in some way, as if this will be a solution… Continue reading “On “Trustworthy” AI”
I’ve been working for many years on what is called computational trust. It’s basically taking trust the way humans do it, thinking about how computers might do it, and creating ways to make that happen. This usually involves a bit of mathematics, but since I am not that much of a fan of maths, I tend… Continue reading “What are Trust Systems?”