Complexity, Part Two

(Ten Commandments!)

This should take about 4 minutes to read.

In my previous post I talked about how complex systems need to be able to explain themselves. That’s undoubtedly true, but they also could be made less, well, complex.

This brings us to a paper we wrote in 2012 which saw the potential pitfalls of complexity and tried to address them, or at least to make a start. It’s called “Rendering unto Cæsar the Things That Are Cæsar’s: Complex Trust Models and Human Understanding” which is a bit of a pretentious title (I can say that because I co-wrote it! My long-suffering colleagues Natasha Dwyer and Anirban Basu probably rolled their eyes when I suggested the title…) but the contents are important.

The argument I made in the previous post was also made in this paper, and in the almost ten years since, not much has changed for the better.

This is a little bit of a shame.

There are a significant number of open questions around the systems we are deploying in the world. Many of them are urgent, and not all of them are being addressed all that well. Setting aside the obvious racism, sexism and other biases in the tools we are creating, complexity and understanding are, to be honest, amongst the most urgent.

The paper presented ten commandments (actually, it presented 8, but in a later paper we introduced a couple more because one should always have ten commandments). I repeat the 8 and the extra 2 here. Since the words in the paper speak for themselves, I’ll just use those words and not dress them up unless there is a need to explain further. The first 8 are from pages 197-198:

  1. The model is for people.
  2. The model should be understandable, not just by mathematics professors, but by the people who are expected to use and make decisions with or from it.
  3. Allow for monitoring and intervention. Understand that a human’s conception of trust and risk is difficult to conceptualize. Many mathematical and economic models of trust assume (or hope for) a ‘rational man’ who makes judgments based on self-interest. However, in reality, humans weigh trust and risk in ways that cannot be fully predicted. A human needs to be able to make the judgment.
  4. The model should not fail silently, but should prompt for and expect input on ‘failure’ or uncertainty.
  5. The model should allow for a deep level of configuration. Trust models should not assume what is ‘best’ for the user. Often design tends to guide users towards what the owner or developer of the site thinks people should be doing. However, only the user can make that call.
  6. The model should allow for querying: a user may want to know more about a system or a context. A trust interface working in the interest of the user should gather and present data the user regards as relevant. Some questions will be difficult for a system to predict and a developer to pre-prepare, so a level of dynamic information exchange is necessary.
  7. The model should cater for different time priorities. In some cases, a trust decision does need to be made quickly. But in other cases, a speedy response is not necessary, and it is possible to take advantage of new information as it comes to hand. A trust model working for humans needs to be able to respond to different timelines and not always seek a short-cut.
  8. The model should allow for incompleteness. Many models aim to provide a definitive answer. Human life is rarely like that. A more appropriate approach is to keep the case open; allowing for new developments, users to change their minds, and for situations to be re-visited.

And the other two? They are from this paper (Security Enhancement with Foreground Trust, Comfort, and Ten Commandments for Real People) and go like this:
  9. Trust (and security) is an ongoing relationship that changes over time. Do not assume that the context in which the user is situated today will be identical tomorrow.
  10. It is important to acknowledge risk up front.
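
To make a couple of those a little more concrete, here is a minimal sketch in Python. Everything in it (the names, the numbers, the averaging rule) is invented for illustration rather than taken from the paper; the point is simply to show a trust decision that respects commandments 3, 4, 6 and 8: it keeps its evidence so it can be queried, it refuses to fail silently, and it is allowed to say "I don't know yet".

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


class NeedsHumanInput(Exception):
    """Raised instead of guessing: the human makes the call (commandments 3 and 4)."""


@dataclass
class TrustDecision:
    """A toy trust decision that can be queried and can stay open.
    Names, numbers and the averaging rule are invented for this sketch."""
    subject: str
    value: Optional[float] = None                       # None = "not decided yet" (commandment 8)
    evidence: List[Tuple[str, float]] = field(default_factory=list)

    def update(self, source: str, observation: float) -> None:
        # Keep the raw evidence so the user can later ask "why?" (commandment 6).
        self.evidence.append((source, observation))
        self.value = sum(o for _, o in self.evidence) / len(self.evidence)

    def decide(self, threshold: float = 0.7) -> bool:
        # Don't fail silently: with no evidence, hand the judgment back to the human.
        if self.value is None:
            raise NeedsHumanInput(f"No evidence yet about {self.subject}")
        return self.value >= threshold

    def explain(self) -> str:
        lines = [f"Current trust in {self.subject}: {self.value:.2f}"]
        lines += [f"  because {src} reported {obs}" for src, obs in self.evidence]
        return "\n".join(lines)


d = TrustDecision("a new online store")
d.update("a colleague's recommendation", 0.8)
d.update("a review site", 0.6)
print(d.explain())   # the user can always see where the number came from
print(d.decide())    # True (0.7 is right on the invented threshold)
```

None of this is hard, which is rather the point: the commandments are about design choices, not clever mathematics.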

So, why was it called “Rendering unto Cæsar…”? Because there are times when the complex models serve important purposes — the world is a complex place after all. But there are also times when humans (Cæsar, if you will) need to be acknowledged. The commandments aim to satisfy that need.

The most important thing to bear in mind: Put people first.

And that brings us to Slow Computing, which I will get to at some point.

Thanks for reading this far. As always, I’d be glad to have feedback!

On Complexity, Part One

(Explaining Stuff!)

This should take around 5 minutes to read.

Here’s the thing: there are big problems associated with having trust models that are too complex for people to understand. Most importantly, if a model is too complex, a human being would not actually wish to use it to help them. Furthermore, if a system can’t explain how it got somewhere, the reason to trust it is rather small.

It gets worse because complexity is a problem when we want real people to use our shiny tools.

There are some aspects of this argument that are perhaps a little problematic. Firstly, I have previously suggested that complexity actually necessitates trust: if something is too complex to understand or predict (and thus control) what else is there to do but think in terms of trust? I think this is a fair argument, and it’s one that Piotr Cofta’s work supports.

So, why wouldn’t a complex trust model be worthwhile, since it puts us, ironically, into the ‘need to trust’ zone? It’s a fair question.

It also mixes up our metaphors a little.

On the one hand, we can talk about the fact that complexity means we have little choice but to trust a system for some purpose. On the other, complexity and obfuscation mean that we have little reason to do so.

What a conundrum!

A counter-argument is of course that the model need not be decipherable, it just needs to be followed — suggestions need to be seen as, as it were, commands.

This is the very antithesis of Trust Empowerment (indeed, by definition it is Trust Enforcement). It is not the right way to do things: one of the things we should be trying to do is to design systems, including those that help us with trust decisions, that are more understandable and, by extension, more empowering. It should go without saying that helping people understand better is a good thing.

To be clear, I’m talking about trust models in autonomous systems that help these systems make decisions for us (like recommending hiring someone, buying stocks and bonds, things like that). Clearly, there are situations where the autonomous systems we create will use their own models of trust. In these cases, the opaqueness of the model itself — either because it is extremely complex or because it is obfuscated in some way — is of use, at least to strengthen the society against trust attacks (we’ll get back to that sometime!).

However, even in these circumstances there remains a need to explain.

Steve’s First Law of Computing (hey, that’s me!) is that, sooner or later every computational system impacts humans. This is important to remember because at some point a human being will either be affected or want to know what is happening, and often both.

It is at this point that the whole explainable bit becomes critical. There is a lot of literature around things like explainable AI, explainable systems and so forth, but what it comes down to is this: The complex tools that we create for people to use have to be able to explain why they have chosen, or recommended, or acted the way they have.

They may need to do this after the fact (let’s call those justification systems) or before (in which case “explanation” or “decision support” is probably more apropos).

As an aside, it’s probably fair to say that we as humans don’t always know why we did something, or trusted someone when we did, for example. Asking us to explain ourselves often results in a “just because”.

A long time ago, when Expert Systems were all the rage, explainability was recognized as important too. If an Expert System came up with some diagnosis (there are lots of medical ones) or prediction, it was possible to ask it “why” or, more correctly, “how” it came to the conclusions it did. The system could show which rules were triggered (and the system’s confidence in them) by which data, all the way back to the first evidence or question it got. It’s pretty neat actually, and as a form of justification it works. In fact, the whole notion of Explainable AI is looking for a similar kind of support system (in spirit at least).
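
For flavour, here is a tiny toy of that rule-firing-with-a-trace idea. It is a sketch only: the rules, facts and confidence numbers are made up, and real expert system shells were considerably more sophisticated, but the ability to answer "how?" by replaying the trace is the same in spirit.

```python
# A toy forward-chaining rule engine that keeps a trace of which rules fired,
# so it can answer "how did you reach that conclusion?" (rules and facts invented).

RULES = [
    # (name, premises, conclusion, confidence)
    ("R1", {"fever", "cough"}, "flu_suspected", 0.7),
    ("R2", {"flu_suspected", "short_of_breath"}, "see_doctor", 0.9),
]

def infer(initial_facts):
    facts, trace = set(initial_facts), []
    changed = True
    while changed:                       # keep firing rules until nothing new appears
        changed = False
        for name, premises, conclusion, conf in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((name, sorted(premises), conclusion, conf))
                changed = True
    return facts, trace

def explain(trace):
    # The "how": replay the chain of rules, with the system's confidence in each.
    for name, premises, conclusion, conf in trace:
        print(f"{name}: {' and '.join(premises)} -> {conclusion} (confidence {conf})")

_, trace = infer({"fever", "cough", "short_of_breath"})
explain(trace)
# R1: cough and fever -> flu_suspected (confidence 0.7)
# R2: flu_suspected and short_of_breath -> see_doctor (confidence 0.9)
```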

The thing is, complex systems like neural networks or genetic algorithms or black box artificial intelligence(s), or even complex mathematical trust models in an eCommerce setting, can’t really explain like that. They can, however, maybe pretend to, or perhaps lead the human through a path that makes sense to them. This may actually be enough. After all, bear in mind that humans are notoriously bad at explaining themselves too. Holding up other systems to a higher standard, especially in uncertain circumstances, does rather seem a little demanding.

But.

In many situations we do expect good explanations from humans. “Why did you hit Steve?”, “Why did you swerve off the road?”, “How did this happen?”, “Why do you think this is true?”, “Why should I trust Steve to mail a letter?”.

Going out on a limb here: If we are using systems that act in our name, the systems that we use have to be able to explain or justify in some reasonable way. The more complex, the more imperative this is.

This of course begs a question: What does “in our name” actually mean? Perhaps this is something we need to think about a little more. I’m sure we’ll get back to that sometime.

To bring us back to trust for a moment, it’s probably pretty clear that better explanations lead to increased trust, but that’s not the point here. The point is that having trust systems so complex that people find them too hard to understand leads nowhere fast unless those systems can explain themselves.

Simple or complex: explanations are key. But they are also just part of the problem. In part two I’ll jump a little further into this.

On “Trustworthy” AI.

If anything is true, it’s that AI has become a hot topic in recent years. However, it’s almost as if the people who ‘make’ AI are rather worried that we won’t trust it. There is a great deal of chatter about making AI more trustworthy in some way, as if this will be a solution to the problem.

In one of the chapters of the Trust Systems book I make an attempt to address the problem. It’s not what you think it is.

Here’s an extract. I hope you enjoy it. Comments are, as ever, welcomed.

Consider this: when an AI is released into the “real” world, every experience it has changes it. It is almost instantly no longer the thing that was released. Who is to blame if it fails? When Tay was released (rather: subjected) to Twitter in 2016 she was innocent in the sense that she didn’t know any different (although some things she did know not to touch). Her subsequent descent into racism and homophobia was perfectly understandable (have you seen what gets posted on ‘social’ media?). Much more to the point, she had ceased to be the agent that was released onto Twitter almost as soon as she was released. There really is no-one to blame.

Truly.

Sure, Microsoft apologized, but most importantly, Microsoft apologized like this: “We are deeply sorry for the unintended offensive and hurtful tweets from Tay…” It is easy to say that Microsoft was at fault, but Tay posted the tweets.

Did you notice something in the preceding paragraphs? I’ll leave you to think about it.

There is a great deal of airtime devoted to making AI more trustworthy by, for example, increasing transparency, or predictability, or whatever, in the hope that people will trust it. The goal is to get people to trust AI, of course, so that all its beneficence will be showered upon us, and we will be, as it were, “All watched over by machines of loving grace.” (Which, if you don’t know it, was the last line of a poem by Richard Brautigan, as well as the rock band!).

Sure, that was sarcasm, but the point is this: some people want us to “trust” AI. Naturally, the answer would seem to be to make AI more trustworthy.

This is answering the wrong question.

Trustworthiness is something that, if you have got this far, you know is the province of the thing or person you are thinking of trusting. That is to say, we don’t give trustworthiness to something, it either is or is not trustworthy to some extent. What we (can choose to) give is trust. More to the point, we can choose to give trust even if the thing we are trusting is untrustworthy. Even if we know it is untrustworthy.

To labour the point a little more, let’s turn to the Media Equation. As a reminder: people treat technology as a social actor (they are even polite to technology).

The argument that we shouldn’t trust technology because it is basically just an inanimate, manufactured ‘thing’ is entirely moot.

I’m not going to argue one way or another about whether or not we should trust an AI. That cat is already out of the proverbial bag. If you haven’t seen that yet, let me spell it out for you: that people already see their technology as a social actor means that they almost certainly also think of it in terms of trust. It truly doesn’t matter if they should or not, they just do.

This leaves us with only one option, which is what Reeves and Nass told us all along: design technology on a path of least resistance. Accept that people will be doing what people do and make it easier for them to do so. Even if you don’t, they will anyway, so why make it hard?

Let’s briefly return to the trustworthiness of AI. I’ve already said it’s pretty much a done deal anyway — we will see AI in terms of trust regardless of what might happen. The argument that we should make AI more trustworthy so that people will trust it is pointless.

What is not pointless is thinking about what “trustworthy” actually means. It doesn’t mean “more transparent”, for instance. Consider: the more we know about something, the more we can control (or predict) its actions, and so the less we need to even consider trust. Transparency doesn’t increase trustworthiness, it just removes the need to trust in the first place (or gives us a really good reason not to trust!).

But of course, AI, autonomous vehicles, robot surgeons and the like are not transparent. As we already alluded to in a 2012 paper, we’ve already crossed the line of making things too hard for mere mortals to understand. Coupled with the rather obvious fact that there is no way you can make a learning system really transparent to even its creator after it has learned something that wasn’t controlled, we are left with only the choice to consider trust. There is not another choice. Transparency is a red herring.

That given, what can we do? We are already in a situation where people will be thinking about trust, one way or another. What is it that we can do to make them be more positive?

Again: this is not the right question.

Really.

If you want someone to trust you, be trustworthy. It’s actually simple. Behave in a trustworthy fashion. Be seen as trustworthy. Don’t steal information. Don’t make stupid predictions. Don’t accuse people with different skin colours of being more likely to re-offend than others. Don’t treat women differently from men. Don’t flag black or brown people as cheating in exams simply because of the colour of their skin.

Just don’t. It’s honestly not that hard.

It’s actually not rocket science (which is good because I am not a rocket scientist). If the systems we create behave in a way that people see is untrustworthy, they will not trust them. And of course, with excellent reason.

And if you are about to say “it is hard, actually”, then what follows is almost certainly true: we are applying AI in all kinds of places where we shouldn’t, because the AI can’t do it properly yet.

And we expect people will want to trust it positively?

Let me ask one question: if you saw a human being behaving the way much of the AI we have experienced does toward different kinds of people, what would you do?

An extract from Trust Systems, the book

On Reputation Systems and Social Credit

(According to Ulysses this should take between 6 and 10 minutes to read).

In the course of my career, I have been lucky in the things I have been able to do. After nearly 30 years thinking about trust I figure it’s time to give back. Here’s how.

For the last few years I have been teaching a course called Trust Systems at Ontario Tech. It covers trust from what you might call first principles (the human stuff) all the way through to trust in and of AI and other autonomous systems. There are various digressions along the way. This year I’ve also covered a bit about TrustLess Systems like Blockchains and Zero Trust Security.

I have been writing a textbook to go along with the course. In addition to being a textbook for an undergraduate course, it is my goal for it to be a stand-alone book for the interested, non-expert reader. It’s aimed at, well, basically anyone who might like to learn a bit more. Whilst it is moderately scientific in nature, it’s more of a personal journey. This of course means that the references are minimal in the text, but there is to be a large “Further Reading” section at the end. As it stands in-text references are basically hyperlinks to various sources (ResearchGate, Amazon, Google Books, even (the horror!) Wikipedia).

Since I live on Turtle Island and am a treaty person, I am also working toward acknowledging Indigenous ways of knowing in this work. Trust is a hugely contextual and cultural phenomenon and in the past I haven’t done enough to bring all of this in. As a result there is much to do to bring in diversity.

This is important: the systems we create today will have an impact on the people of tomorrow. I know that if anything builds on the work I do it will have such an impact. This isn’t self-importance: my work is quite heavily cited and I basically founded the field of computational trust. These things are facts. It is the responsibility of all of us to acknowledge this and to try to be the difference.

In this, I, like the book, am a work in progress.

The ‘give back’ bit is that it will be released under a Creative Commons licence (specifically Creative Commons Attribution-NonCommercial-ShareAlike, or by-nc-sa), like all the words that I have written on this website.

The book will be finished in the next few months, but in the interim I’m going to use this blog to post a few different ‘bits’ of it — if you like it, let me know. If not, the same. I’m open to all feedback.

So, for today, a section from the “Reputation and Recommendation” chapter.

I’m no fan of these systems actually, as will likely become apparent. I think there are so many better ways we could be doing the things we do, but I also appreciate that, on the Internet, nobody knows you are a dog. So sure, let’s throw some mathematics at it and see if we can fix that, shall we?

Anyway, the following discusses social credit with a little lean toward the Chinese Social Credit System. The pictures are ones I drew. I’m no artist but like I said, this is a personal journey. I hope you like it.

———————————————————

This brings us to dystopia disguised as ‘good for society’. In China, you may have heard, there is a social credit system. What is this? Well, it’s not (totally) Douglas’ idea of social credit, that’s for sure. Not sure what that means? Look at http://socialcredit.com.au.

The Social Credit System in China is still in development as of the time of writing, which means that there are many questions about how it works and how it might not. There are also different kinds of representation in the system itself (like numerical values for credit or black/whitelisting for credit). It basically works like this: do things that are socially acceptable or correct — like donate blood or volunteer — and you get credit. Do things that people don’t like — like playing loud music, jaywalking, bribery and so on — and you lose credit. How is it enforced? By citizen participation (oh, those crowds again, we’ll get back to crowds, don’t worry), facial recognition systems (and we know how perfect those are, right?), a whole lot of AI. Things like that. There’s also evidence that you can buy credit, but of course, that would be wrong, so it never happens (yes, that was sarcasm too).

And so: control.

The Social Credit System in China is a control mechanism. It’s possible to see it as a form of reputation, and the behaviour is not far from Whuffie: if you have bad credit you will be blacklisted, and you won’t be allowed to travel (already happened), or stand for politics, or get your children into universities. To become un-blacklisted? Do good things for society.
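
Just to make the mechanics of the previous couple of paragraphs concrete (and nothing more than that: this is a deliberately crude toy with invented actions, weights and thresholds, not a description of the actual system), the whole thing boils down to something like this:

```python
# A deliberately crude sketch of a score-and-blacklist mechanic.
# Actions, weights and the threshold are all invented for illustration.

ACTION_WEIGHTS = {
    "donate_blood": +5,
    "volunteer": +3,
    "jaywalk": -2,
    "loud_music": -1,
    "bribery": -20,
}

BLACKLIST_THRESHOLD = 0

def update_score(score: int, reported_action: str) -> int:
    # Whoever controls this table controls what counts as "good".
    return score + ACTION_WEIGHTS.get(reported_action, 0)

def is_blacklisted(score: int) -> bool:
    return score < BLACKLIST_THRESHOLD

score = 10
for action in ["jaywalk", "loud_music", "bribery"]:
    score = update_score(score, action)

print(score, is_blacklisted(score))   # -13 True: no travel, no politics, and so on
```

Everything that matters is hidden in who writes that table and who picks that threshold, which is rather the point.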

You get the idea.

Doesn’t it sound wonderful? Sure. Until you start asking questions like ‘who gets to decide what is good and what isn’t?’ Is posting on a social network that isn’t Chinese good or not? What about reading a certain kind of book? How about running for political office?

Like many things which seem interesting, promising, and plausible at first sight, there are huge issues here.

What could possibly go wrong? Blackmail, coercion, corruption, mistaken identity. The list is quite long.

And just in case you think it couldn’t happen here, wherever here is, consider: a good (financial) credit score gets you a long way. Moreover, see those reputation scores you’re building on all those nice sites you use? Who controls them? Who decides what is ‘good’?

Figure R21: War is Peace. Freedom is Slavery. Ignorance is Strength. Social Credit is Truth.

In fact, the concept of social capital is closely linked to this. Social capital is basically the idea that positive connections with people around us mean that we are somehow happier, more trusting, less lonely and so on… Social capital, like reputation, can be used in situations where we are in trouble (ask for help) or need a little extra push (getting your child into that next best school) or a small recognition (like getting your coffee paid for by a co-worker every so often). You can gain social capital, and you can lose it. And if you lose it, then you don’t get the good things. It isn’t a specific number, but the crowd you are part of calculates it and implicitly shares it — by showing that you are accepted, by valuing your presence, things like that. It’s about shared values and making sure that you share them in order to move groups, society, companies, people who look like you, forward.

Does that sound at all familiar?

Political capital is a similar thing; you could see it as an extension of social capital.

It’s all reputation.

It has come to the attention of some thinkers (like Rogers and Botsman, 2010) that all of this stuff hanging around is pretty useful. I mean, if you have a great reputation in one context, why is it that this isn’t used in other contexts? This has led to the idea of “Reputation Banks” where you can manage reputation capital to be able to use it in different contexts. Good reputation capital means you get to choose your passengers as an Uber driver, or get nice seats at restaurants, and so on.

How familiar does that sound?

By the way, I think it’s an absolutely awful idea.

So, why do I sound so down about all of this?

Reputation systems, especially when pushed to the limits we see in China’s Social Credit System or even the concept of social capital, are a means to control the behaviour of others. This is where the whole nudge theory stuff comes from. That’s fine when we think of some of the behaviour that we don’t like. I’m sure I don’t need to tell you what that might be. And there’s one of the problems, because your opinion and mine may well differ. I might happen to think that certain behaviours are okay — perhaps I have a more profound insight into why they happen than you do. Whereas you just see them as a nuisance. In the superb “This is Water” David Foster Wallace talks about putting yourself aside for just a moment and trying to figure out why things are happening, or why people are behaving the way they are. Like trust, this stuff is very personal and subjective. It’s also information-driven in a way that we haven’t figured out properly yet. If someone is exhibiting a certain kind of (for the sake of simplicity, let’s call it ‘anti-social’) behaviour, why are they doing it? Do you know? I am sure I don’t. But the crowd believes that it does (I told you we’d get back there).

What is the crowd? Well, it’s usually not the people who are exhibiting behaviour that challenges it in some way (I’m sorry, that was a difficult sentence to parse). Let’s imagine it’s neurotypical, probably Caucasian, depending on where you are, almost certainly what you might call ‘middle to upper class’, possibly male-dominated. None of that has worked out particularly well for the planet so far, so why would we expect it to work out now in systems that expand its power exponentially?

It’s also stupid. Crowds are not ‘wise’. Their behaviour may be explainable by some statistical measures, but that doesn’t mean that what the crowd thinks is good for everyone actually is.

Figure R22: Sure, crowds are wise...

It doesn’t mean that what anyone might think is good for us actually is.

To argue the point a little, consider the flying of a plane. If you put enough people together in a crowd who don’t know how to fly it, they’re not going to get any better at it (thanks to Jeremy Pitt for this example). You need an expert. Some problems are expert problems. Some problems (and their solutions) are domain-specific. I would venture to suggest that figuring out who is more skilled than anyone else, or which is the correct switch to flick on an airplane’s controls, are certainly both expert and domain-specific problems.

What if the behaviour you see in someone — for example a woman shouting at her son — is the result of a personal tragedy that she is dealing with that leaves her emotionally and physically exhausted (I got this example from “This is Water” which if you haven’t read is worth it — see, a recommendation!)? None of this information fits into a Social Credit System. Even if a credit agency is supposed to let you put a note someplace to explain a discrepancy, that doesn’t change the score. If you have missed some payments because you had to pay for your child’s medication, the score doesn’t care. It’s a score. It makes people money, and it helps people who have money decide how to treat those who may not.

If you use a reputation system to decide whether to buy something from someone, then use it as a tool to help, not a tool to tell. A tool to inform, not to dictate. Or, in the language we’ve used up to now, a tool to empower, not to enforce. Enforcement goes two ways — it enforces behaviour the crowd sees as correct, and it enforces the ‘right’ choice (the choice that the system wants you to make).

Reputation systems are fine. They give you information that can help you make decisions. Just don’t use them to judge people or things. Or to believe one kind of thing over another. Or to choose friends or people to date. Use your head, that’s what it’s for.

What are Trust Systems?


I’ve been working for many years on what is called computational trust. It’s basically taking trust the way humans do it, thinking about how computers might do it, and creating ways to make that happen. This usually involves a bit of mathematics, but since I am not that much of a fan of maths, I tend to make it simple (others make it much more complicated, and that’s fine). I teach a course at Ontario Tech University called Trust (and Trustless) Systems and it’s based on the research and philosophizing that I do. The idea of Trust Systems is one that has evolved in my head to become something identifiable (and even published). It’s not entirely new as a name (for example, in 2006 this paper talks about trust systems, and there’s even a company in the UK called Trust Systems, but we won’t be treading on their toes with what we talk about here).

New or not, it’s a way of thinking about trust systemically, and making sure that all the bits fit together. It’s recently become much more important because the AI that is coming to dominate our lives in many ways is something we are going to have to think really hard about in terms of trust.

So what exactly is a Trust System? For me, it’s a combination of three important things working more or less together to accomplish something. I call them my “3 Ps”:

  • People
  • Process
  • Place

People are pretty obvious — they are us! But it goes a little further than that because a Trust System is a partnership where humans and computational systems interact. So ‘people’ in our sense means humans and computational systems — systems such as robots, autonomous agents, regular general purpose computers, phones, tablets, things like that. Don’t worry, as we go on down this rabbit hole, it’s going to become clearer!

Process is the way the computational part of the partnership works. This is through trust models, for the most part. There are many many (many) trust models out there. I have my own, and in this place I’ll talk about what that looks like as well as introducing others. But people are part of this too, and so the process covers the human side of the partnership (the judgments and interventions people make) as well.

Place refers to the context the partnership exists in — the physical place, the time, the electronic ‘place’ (networks, status, things like that) — in fact, any kind of data that we can gather from the environment.
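
If you like code, here is a very rough skeleton of how I think of the three fitting together. The names and the placeholder model are purely illustrative (not any particular published trust model); the interesting work all lives in whatever you plug in as the process.

```python
from dataclasses import dataclass
from typing import Callable

# A rough, illustrative skeleton of the "3 Ps". Names are invented for this sketch.

@dataclass
class Person:
    name: str
    kind: str          # "human", "agent", "robot", "phone", ...

@dataclass
class Place:
    location: str      # the physical place
    network: str       # the electronic 'place'
    timestamp: float   # the time

@dataclass
class TrustSystem:
    truster: Person
    trustee: Person
    process: Callable[[Person, Person, Place], float]   # the trust model goes here

    def assess(self, place: Place) -> float:
        return self.process(self.truster, self.trustee, place)

# The process could be any trust model; this placeholder just trusts
# a familiar network a little more (made up, of course).
def naive_model(truster: Person, trustee: Person, place: Place) -> float:
    return 0.8 if place.network == "home_wifi" else 0.5

system = TrustSystem(Person("Steve", "human"), Person("shopping agent", "agent"), naive_model)
print(system.assess(Place("home office", "home_wifi", 0.0)))   # 0.8
```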

That last sentence should make you think about privacy. I think about it too. All the time. One of the problems with things like reputation systems for people, social credit, things like that, is that privacy is directly impacted. I’ll be talking much more about that as time goes by too.

I long ago decided I had to write a book about all this stuff. After all, I’m an academic and writing is pretty much one of the most important things we do. Writing in an accessible way, maybe not so much, so I decided to write a book about Trust (and Trustless) Systems that was both informative and accessible. And also fun. I’m in the middle of it now and I do all the writing and the illustrating (and I’m a pretty rubbish artist!). I’ll be posting it here in chapters, and also as an ebook. I’ll also be posting explainer videos and interviews with experts in the field (with their permission). It’s my hope that this becomes a focus point for my version of what things like computational trust are all about. But it won’t stop there because I rarely can keep my mouth shut about many things — politics, life, working at home and teaching from home, things like that. I’ll post stuff about that too. And finally, because I was asked by some of my students, I’m going to start posting explainers for how to program (not just how to code). Just not quite yet. I’m a little busy and I need to make sure I do this stuff properly.

What technology do I use? Before I tell you that, I’m going to tell you I don’t make any money from telling you. I use these tools because they work for me, I’ve used them for a long time for the most part, and I love the way they fit together to make a great experience for me. I’m happy to answer any questions about how I use them if you ask them. An M1 Mac mini, an iPad Pro (and the essential Apple Pencil and Magic Keyboard). I’m using Ulysses to write it (and to write this blog too!) and enjoying it a lot. I use Tayasui Sketches as well as Explain Everything to do the pictures. I’m also recording a bunch of interviews as well as explainers (I use Explain Everything for them too, as well as LumaFusion to edit and finalize the videos). I use Notability a whole bunch for keeping, marking up and maintaining my little reference library, as well as sometimes to record lectures and explainers. Other than that, regular stuff!

I’m going to be posting irregularly, but I’m writing the book as I go. I plan to post drafts here for anyone to look at and comment on, so watch this space.

Thanks for reading this far! Until next time.

First Things

The usual thing to say is welcome, I guess, so, welcome.

I’m going to use this to post musings about whatever tends to bubble up. Some will be about trust and trustworthiness, some about computational trust (I created that one), sometimes just stuff that is happening. Probably not many people will read it, which is just fine.

Who am I? My name is Steve and I am from England. I live in Ontario, Canada, close to the border with Quebec. I’m a Prof at Ontario Tech University, I teach a bunch of stuff but my first love is trust (especially with computers). I can teach people about trust, business, privacy, Design Thinking and a few other things to do with security.

I read a lot (fantasy, SF, mystery, non-fiction), write random stuff, and am working on improving my sketchnoting. I use Explain Everything, LumaFusion, Tayasui Sketches Pro and Ulysses. I’m currently in the process of writing a book on Trust Systems, and I’ll publish it here, along with a bunch of associated material. I have another site at http://stephenmarsh.wikidot.com which has even more things on it.

This is not a regular thing. I won’t be posting every day. Possibly not every week. But sometimes.