An extract from Trust Systems, the book

On Reputation Systems and Social Credit

(According to Ulysses this should take between 6 and 10 minutes to read).

In the course of my career, I have been lucky in the things I have been able to do. After nearly 30 years thinking about trust I figure it’s time to give back. Here’s how.

For the last few years I have been teaching a course called Trust Systems at Ontario Tech. It covers trust from what you might call first principles (the human stuff) all the way through to trust in and of AI and other autonomous systems. There are various digressions along the way. This year I’ve also covered a bit about TrustLess Systems like Blockchains and Zero Trust Security.

I have been writing a textbook to go along with the course. Beyond serving an undergraduate course, my goal is for it to stand alone as a book for the interested, non-expert reader. It’s aimed at, well, basically anyone who might like to learn a bit more. While it is moderately scientific in nature, it’s more of a personal journey. This of course means that references in the text are minimal, though there will be a large “Further Reading” section at the end. As it stands, in-text references are basically hyperlinks to various sources (ResearchGate, Amazon, Google Books, even (the horror!) Wikipedia).

Since I live on Turtle Island and am a treaty person, I am also working toward acknowledging Indigenous ways of knowing in this work. Trust is a hugely contextual and cultural phenomenon and in the past I haven’t done enough to bring all of this in. As a result there is much to do to bring in diversity.

This is important: the systems we create today will have an impact on the people of tomorrow. I know that if anything builds on the work I do it will have such an impact. This isn’t self-importance: my work is quite heavily cited and I basically founded the field of computational trust. These things are facts. It is the responsibility of all of us to acknowledge this and to try to be the difference.

In this, I, like the book, am a work in progress.

The ‘give back’ bit is that it will be released under a Creative Commons licence (specifically Creative Commons Attribution-NonCommercial-ShareAlike, or BY-NC-SA), like all the words I have written on this website.

The book will be finished in the next few months, but in the interim I’m going to use this blog to post a few different ‘bits’ of it — if you like it, let me know. If not, the same. I’m open to all feedback.

So, for today, a section from the “Reputation and Recommendation” chapter.

I’m no fan of these systems actually, as will likely become apparent. I think there are so many better ways we could be doing the things we do, but I also appreciate that, on the Internet, nobody knows you are a dog. So sure, let’s throw some mathematics at it and see if we can fix that, shall we?

Anyway, the following discusses social credit with a little lean toward the Chinese Social Credit System. The pictures are ones I drew. I’m no artist but like I said, this is a personal journey. I hope you like it.

———————————————————

This brings us to the dystopian disguised as ‘good for society’. In China, you may have heard, there is a social credit system. What is this? Well, it’s not (totally) Douglas’s idea of social credit, that’s for sure. Not sure what that means? Look at http://socialcredit.com.au.

The Social Credit System in China is still in development as of the time of writing, which means there are many open questions about how it works and how it might not. There are also different kinds of representation within the system itself (numerical credit scores in some places, blacklists and whitelists in others). It basically works like this: do things that are socially acceptable or correct, like donating blood or volunteering, and you gain credit. Do things that people don’t like, like playing loud music, jaywalking, or bribery, and you lose credit. How is it enforced? By citizen participation (oh, those crowds again; we’ll get back to crowds, don’t worry), facial recognition systems (and we know how perfect those are, right?), and a whole lot of AI. Things like that. There’s also evidence that you can buy credit, but of course, that would be wrong, so it never happens (yes, that was sarcasm too).
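Stripped to its mechanics, a score-and-threshold system like this is almost embarrassingly simple to sketch. Here is a toy Python model; the actions, point values, starting score, and blacklist threshold are all my own inventions for illustration (the real system is far murkier, and not one single system at all):

```python
# A toy model of a score-based social credit ledger.
# All point values and the blacklist threshold are invented
# for illustration, not taken from any real system.

BLACKLIST_THRESHOLD = 0

# Hypothetical point values for 'approved' and 'disapproved' behaviour.
ACTION_POINTS = {
    "donate_blood": +20,
    "volunteer": +15,
    "jaywalking": -10,
    "loud_music": -5,
    "bribery": -50,
}

class CitizenLedger:
    def __init__(self, start=100):
        self.score = start

    def record(self, action):
        # Unknown actions are ignored here; real systems are not so forgiving.
        self.score += ACTION_POINTS.get(action, 0)

    @property
    def blacklisted(self):
        return self.score < BLACKLIST_THRESHOLD

ledger = CitizenLedger()
for action in ["jaywalking", "bribery", "loud_music", "bribery", "jaywalking"]:
    ledger.record(action)

print(ledger.score)        # 100 - 10 - 50 - 5 - 50 - 10 = -25
print(ledger.blacklisted)  # True: no travel, no office, and so on
```

Notice how little the model knows: no context, no ‘why’, just a number crossing a line. Keep that poverty of information in mind for what follows.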

And so: control.

The Social Credit System in China is a control mechanism. It’s possible to see it as a form of reputation, and the behaviour is not far from Whuffie: if you have bad credit you will be blacklisted, and you won’t be allowed to travel (this has already happened), or stand for political office, or get your children into universities. To become un-blacklisted? Do good things for society.

You get the idea.

Doesn’t it sound wonderful? Sure. Until you start asking questions like ‘who gets to decide what is good and what isn’t?’ Is posting on a social network that isn’t Chinese good or not? What about reading a certain kind of book? How about running for political office?

Like many things which seem interesting, promising, and plausible at first glance, there are huge issues here.

What could possibly go wrong? Blackmail, coercion, corruption, mistaken identity. The list is quite long.

And just in case you think it couldn’t happen here, wherever here is, consider: a good (financial) credit score gets you a long way. Moreover, see those reputation scores you’re building on all those nice sites you use? Who controls them? Who decides what is ‘good’?

Figure R21: War is Peace. Freedom is Slavery. Ignorance is Strength. Social Credit is Truth.

In fact, the concept of social capital is closely linked to this. Social capital is basically the idea that positive connections with the people around us make us somehow happier, more trusting, less lonely and so on… Social capital, like reputation, can be used in situations where we are in trouble (asking for help), need a little extra push (getting your child into that next best school), or want a small recognition (like getting your coffee paid for by a co-worker every so often). You can gain social capital, and you can lose it. And if you lose it, you don’t get the good things. It isn’t a specific number, but the crowd you are part of calculates it and implicitly shares it: by showing that you are accepted, by valuing your presence, things like that. It’s about shared values, and making sure that you share them, in order to move groups, society, companies, people who look like you, forward.

Does that sound at all familiar?

Political capital is a similar thing; you could see it as an extension of social capital.

It’s all reputation.

It has come to the attention of some thinkers (like Botsman and Rogers, 2010) that all of this stuff hanging around is pretty useful. I mean, if you have a great reputation in one context, why isn’t it used in other contexts? This has led to the idea of “Reputation Banks”, where you can manage reputation capital so as to use it across different contexts. Good reputation capital means you get to choose your passengers as an Uber driver, or get nice seats at restaurants, and so on.
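On paper, a ‘reputation bank’ is just an aggregator: collect per-context scores, emit one portable number. A minimal sketch (the context names, the 0-to-5 scale, and the plain averaging are all my own assumptions, not anything Botsman and Rogers propose in detail) makes the premise visible:

```python
# A naive 'reputation bank': pools per-context scores into one
# portable number. Context names and the scale are invented here.

from statistics import fmean

class ReputationBank:
    def __init__(self):
        self.scores = {}  # context -> score on a 0-to-5 scale

    def deposit(self, context, score):
        self.scores[context] = score

    def portable_score(self):
        # The central (and dubious) assumption: an average over
        # unrelated contexts says something meaningful about you.
        return round(fmean(self.scores.values()), 2)

bank = ReputationBank()
bank.deposit("uber_driver", 4.9)
bank.deposit("ebay_seller", 4.2)
bank.deposit("airbnb_guest", 3.1)

print(bank.portable_score())  # 4.07
```

The bug isn’t in the code; it’s in the premise. A 4.9 as a driver tells you nothing about someone as a landlord or a borrower, yet the single portable number invites exactly that inference.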

How familiar does that sound?

By the way, I think it’s an absolutely awful idea.

So, why do I sound so down about all of this?

Reputation systems, especially when pushed to the limits we see in China’s Social Credit System, or even in the concept of social capital, are a means to control the behaviour of others. This is where the whole nudge theory stuff comes from. That’s fine when we think of some of the behaviour that we don’t like. I’m sure I don’t need to tell you what that might be. And there’s one of the problems, because your opinion and mine may well differ. I might happen to think that certain behaviours are okay, perhaps because I have a more profound insight into why they happen than you do, whereas you just see them as a nuisance. In the superb “This is Water”, David Foster Wallace talks about putting yourself aside for just a moment and trying to figure out why things are happening, or why people are behaving the way they are. Like trust, this stuff is very personal and subjective. It’s also information-driven in a way that we haven’t figured out properly yet. If someone is exhibiting a certain kind of (for the sake of simplicity, let’s call it ‘anti-social’) behaviour, why are they doing it? Do you know? I am sure I don’t. But the crowd believes that it does (I told you we’d get back there).

What is the crowd? Well, it’s usually not the people who are exhibiting behaviour that challenges it in some way (I’m sorry, that was a difficult sentence to parse). Let’s imagine it’s neurotypical, probably Caucasian, depending on where you are, almost certainly what you might call ‘middle to upper class’, possibly male-dominated. None of that has worked out particularly well for the planet so far, so why would we expect it to work out now in systems that expand its power exponentially?

It’s also stupid. Crowds are not ‘wise’. Their behaviour may be explainable by some statistical measures, but that doesn’t mean that what the crowd thinks is good for everyone actually is.

Figure R22: Sure, crowds are wise...

It doesn’t mean that what anyone might think is good for us actually is.

To argue the point a little, consider flying a plane. If you put enough people together in a crowd, none of whom know how to fly it, they’re not going to get any better at it (thanks to Jeremy Pitt for this example). You need an expert. Some problems are expert problems. Some problems (and their solutions) are domain-specific. I would venture to suggest that figuring out who is more skilled than anyone else, or which is the correct switch to flick on an airplane’s controls, are both expert and domain-specific problems.

What if the behaviour you see in someone — for example a woman shouting at her son — is the result of a personal tragedy that she is dealing with, one that leaves her emotionally and physically exhausted (I got this example from “This is Water”, which, if you haven’t read it, is worth it — see, a recommendation!)? None of this information fits into a Social Credit System. Even if a credit agency is supposed to let you put a note someplace to explain a discrepancy, that doesn’t change the score. If you have missed some payments because you had to pay for your child’s medication, the score doesn’t care. It’s a score. It makes people money, and it helps people who have money decide how to treat those who may not.

If you use a reputation system to decide whether to buy something from someone, then use it as a tool to help, not a tool to tell. A tool to inform, not to dictate. Or, in the language we’ve used up to now, a tool to empower, not to enforce. Enforcement goes two ways — it enforces behaviour the crowd sees as correct, and it enforces the ‘right’ choice (the choice that the system wants you to make).
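The difference between the two designs is small in code and large in consequence. A hypothetical sketch (the data, field names, and the 4.5 cutoff are all mine, purely for illustration) of a tool that tells versus a tool that informs:

```python
# Two designs over the same reputation data.
# The data, field names, and threshold are invented for illustration.

seller = {
    "score": 4.1,
    "num_ratings": 12,
    "recent_complaints": ["late shipping"],
}

# A tool to tell: the system makes the decision for you.
def should_buy(seller):
    return seller["score"] >= 4.5  # the system's idea of the 'right' choice

# A tool to inform: the system surfaces the evidence, and you decide.
def summarize(seller):
    complaints = ", ".join(seller["recent_complaints"]) or "none"
    return (f"{seller['score']}/5 from {seller['num_ratings']} ratings; "
            f"recent complaints: {complaints}")

print(should_buy(seller))  # False: decision made, context discarded
print(summarize(seller))   # evidence presented, judgement left to you
```

The first function enforces; the second empowers. The data is identical, which is rather the point.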

Reputation systems are fine. They give you information that can help you make decisions. Just don’t use them to judge people or things. Or to believe one kind of thing over another. Or to choose friends or people to date. Use your head, that’s what it’s for.

Published by Steve

Partner, Dad, Prof, Writer
