Fostering Trust under Intermediate Agency
I wish I could home in further on what trust is, but the concept itself seems sufficiently fuzzy that I cannot.
Etymological research has proved interesting. It's not definitive -- past meanings, origins, and evolutions of meaning are neither present meaning nor psychological or philosophical demonstrations of fundamental principles. But they do illuminate usages and connections through time.
Reputation is related to credit (Latin credere, credo, "to believe"), and to trust, ultimately from the PIE root *deru-, "to be firm, solid, steadfast". So both are related to risk, probability, reliability, likelihood.
"Past performance is no guarantee of future returns."
I'm not even sure democratic means work, all told. Athens democratically condemned Socrates to death, only to regret the decision shortly afterward (and he wasn't the only one this happened to). I've been poking at philosophical and information-theoretic concepts of truth and epistemology, as an outsider, for a few years. I tend to side fairly strongly with the pragmatists and empiricists -- truth is that which gives reliable guidance to the behaviour or properties of the observed world, according to the models and senses we have for predicting, reasoning about, and perceiving it.
So trust grounded in evidence, collective assessment, admission of error where it's due, and more seems a fairly good basis.
There's another problem, which I've been noodling at as well: information technology improvements seem, as a whole, to create breakdowns in social trust institutions. The more you can ensure that someone does what they're supposed to be doing by checking up on them frequently -- every month, week, day, hour, minute, second, microsecond, ... -- the less you have to trust that they'll do what they said, because you can verify it instead.
Instead of "trust, but verify", we end up with "verify, and distrust".
And that very much cuts both ways.
Neal Stephenson's The Diamond Age features a vignette in which members of one of the phyles comprising a future, quasi-dystopian society go through trust rituals. Individuals, unknown to one another, are tasked with performing some action. The failure of one to do this, either when or as required, means the certain death of the other. That's an interesting concept.
Someone in 20th-century politics or the military noted that the best way to earn trust is to give it. Give someone control over a thing, with the understanding that they might succeed or fail, but that the outcome is almost entirely in their hands.
The question then -- and no, I don't have the answer -- is whether this can be extended, not to, but into, our mediated spaces, in ways that foster not merely reliability but social trust itself. (I think the term "online" is a badly dated misnomer: it misconstrues the fact that this is not a separate space, but a continuation of physical and immediate presence, though one with greater levels of intermediate agency imposed.)
That's where I'm at.
From a G+ thread by Akira Bergman, exploring questions of identity, information, reliability, truth, and trust, and whom I'd really like to entice over here. If he'd only trust me....