Facebook has now introduced a reputational metric to help detect and then limit fake, false and flaky content. This metric assigns Facebook users a secret “reputation score” based on the trustworthiness that other users ascribe to them. These reputational scores are not evidence-based assessments of trustworthiness or untrustworthiness. Reputation is, after all, in the eye of the beholder. Judgments of others’ trustworthiness may be sound or unsound, and we often get them wrong. Some people thought Bernie Madoff was trustworthy and entrusted their savings to him — but then he made off with their money. Other people mistakenly judge vaccine conspiracy theorists as trustworthy, needlessly putting their children’s health at risk.
Reputational evidence is not enough to support well-placed trust or well-placed mistrust because it need not track factual claims, available evidence or trustworthy undertakings. Reputations, as all of us know, can be unearned and undeserved. Facebook has rightly been careful to indicate that its reputation scores are not intended to be the final word on someone’s credibility. It has not published details of the metric’s methodology or its limitations, however. This may not matter greatly, since reputational metrics are simply not designed to track trustworthiness.
Reputational rankings offer useful and reliable clues to others’ trustworthiness only in a limited range of cases, and unfortunately these are special cases from which we cannot generalize. Reputational rankings can work well when consumers rank standardized products and services, such as manufactured goods or the services provided by hotel or restaurant chains. These rankings can provide reasonably accurate indicators of trustworthiness provided they meet two conditions. First, the rankings must reflect the experiences, and not merely the attitudes, of those who have actually used (or tried to use) the standardized product or service. And second, the rankings must come from a diverse and adequately representative range of users.
If either of these conditions is not met — if a product or service is variable rather than standardized, or if the scores are provided by a small or unrepresentative set of respondents — reputational rankings may not offer evidence of trustworthiness. For that reason, when considering content on Facebook, rankings that tally users’ attitudes toward political messages or other campaigns are unlikely to offer good evidence of trustworthiness or untrustworthiness. Social media users’ reactions to the claims made by political campaigners, reputation managers, hidden persuaders and “influencers” are not a reliable indicator of trustworthiness.
So in most circumstances, reputational metrics won’t offer a reliable shortcut for judging trustworthiness or untrustworthiness. This does not greatly matter in everyday life, since many of us are fairly good at judging trustworthiness in situations involving familiar people and activities. But many of us are unable to judge the trustworthiness of complex claims and activities, particularly if they involve technical information or arcane expertise, complex institutional structures or many intermediaries.
In some standardized cases we can appeal to proxy evidence provided by audits and inspections, or to readily available evidence of an institution or agent’s track record. But in many cases, we need to assess complex and incomplete evidence that must be probed and interrogated, checked and challenged, if we are to reach even tentative estimates of others’ trustworthiness in specific matters.
In short, reputational metrics can’t show who is trustworthy in which matters. There is no easy shortcut for detecting fake, false or flaky content, for judging whose claims are factual or evidence-based, or for telling whose commitments can and can’t be trusted. Judging who is trustworthy in which matters requires a focus on facts and evidence. Appeals to reputations and attitudes are not an adequate substitute.