#robertkmerton

dredmorbius@joindiaspora.com

Lead epitomises much of my revised thinking on technology, impacts, speech, liability, risk, and other concepts.

See "Leaded petrol is gone – but lead pollution may linger for a very long time" and discussion on @Andrew Pam 's post

I increasingly view technology as a verb: technology is a means, a mechanism or process, to some ends. (This borrows heavily from J.S. Mill.) The devices we build as artefacts of technology merely serve to channel and control those processes. Materials and inputs take part in the processes; some are consumed, some are not. We tend to mistake the tangible objects for the intangible process (more on that when discussing cognizability).

Impacts

The problem begins when we realise that there are intended and unintended consequences. There's the end we want, and the end we get. All technology has positive and negative impacts, varying with time, cognizability, and expressibility.

Time is the easiest of these three to address: there are short- / near-term effects, and long-term effects. Ends that happen closer to means are easier to recognise and realise.

Cognizability is a somewhat unfashionable word (though you may recognise similarities to others) expressing the capacity of a thing to be perceived or known. And for technology, more cognizable effects dominate in social realisation over less cognizable ones. In general, simple, clear, distinct, large, and immediate effects are more cognizable.

Expressibility simply means the ease or difficulty of describing or communicating about a factor. Something that's complex, multi-factored, long-term, subtle, and indistinct is exceedingly difficult to communicate, especially in mass media, which relies on a minimum viable audience and a low common level of understanding and perception. There's also the challenge of competing for time and attention within a crowded media sphere.

This gives multiple factors or a matrix defining technological impacts:

X = f(p, n, t, c, e)

Where X is technology (from the Greek chi), p is positive impacts, n is negative impacts, t is time, c is cognizability, e is expressibility.
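As a purely illustrative sketch of how those factors might interact (the functional form, the weights, and the exponential discounting below are my own assumptions, not part of the argument above):

```python
import math

# Toy model only: how a technology's impact might be socially *realised*.
# p, n : positive and negative impact magnitudes
# t    : years until the effect is realised
# c, e : cognizability and expressibility, each in (0, 1]
def realised_impact(p, n, t, c, e, discount=0.05):
    # Delayed, less cognizable, less expressible effects count for less
    # in social realisation.
    weight = math.exp(-discount * t) * c * e
    return (p - n) * weight

# Leaded petrol, crudely: a clear, immediate benefit paired with an equally
# large but slow, hard-to-perceive, hard-to-communicate harm.
perceived = (realised_impact(10, 0, t=1, c=0.9, e=0.9)
             + realised_impact(0, 10, t=30, c=0.2, e=0.2))
print(perceived)  # comes out net positive despite equal raw magnitudes
```

The point is only that equal raw magnitudes can yield very different social realisations once time, cognizability, and expressibility are weighted in.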

This also ties strongly to Robert K. Merton's notions of both latent vs. manifest functions, and of unintended consequences.

Risk

Too much to get into here, but I increasingly find discussions of risk to be unsatisfactory. Generally:

  • Risks have contexts. Individual risk isn't the same as global risk. Your individual risk of dying in an automobile accident may be roughly equal to that of dying in a meteor impact. One is common but small-scale (at least in the current era), one is uncommon but global. But the odds of all of humanity, or all life on Earth, dying in an auto accident are minuscule relative to those of dying in a meteor impact. Global catastrophic risks are global. I don't know if it's the Western focus on individualism that gives rise to this fallacy, but I see it constantly.

  • There's a distinction between randomness and uncertainty. Radioactive decay is random, but (in aggregate) its behaviour is highly certain (a sketch following this list illustrates that aggregate certainty). Abstract risks, say, of Roko's Basilisk, are highly uncertain. We simply don't know what the probabilities are. (Numerous other "it can only happen once, because once it happens, it's all over" events are similar: global total nuclear war, grey goo, Skynet, global catastrophic logistical collapse, etc.) Treating these as intrinsically similar is ... well, I'm pretty sure it's just plain wrong.

  • Risks accrue differently to different parties. All life is a risk-externalising mechanism, and within its own domains, market-capitalism is as well. Profits are privatised, risks are socialised, as we've become profoundly aware over the past two decades. This is inherent.

  • Risks in space differ profoundly from risks in time. Private insurance works best for risks which occur frequently, at small scale, within a given market, and in an uncorrelated fashion. Automobile accidents and house fires are classic examples. Rare, large-area, highly-correlated risks affecting many policyholders simultaneously are far more difficult to insure against (another sketch below illustrates why). Wildfires, urban firestorms, earthquakes, major flooding events, sea level rise, cyclonic storms, droughts, and famines are widespread events; some are global. Conventional commercial insurance providers fail to address these well, if at all. In most cases, "insurance" comes in the form of government (state or national) disaster response, or international aid. An asteroid impact, gamma-ray burst, nearby supernova, major solar storm, or supervolcano eruption would be truly global. Global warming moves more slowly but is of a similar nature (as are other global catastrophic risks).
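A minimal sketch of the randomness-versus-uncertainty point, assuming nothing beyond an arbitrary illustrative half-life: each individual decay time is random, yet the aggregate becomes highly predictable as numbers grow.

```python
import math
import random

HALF_LIFE = 5.0                   # arbitrary units, purely illustrative
LAM = math.log(2) / HALF_LIFE     # corresponding decay constant

def fraction_remaining(n_atoms, t):
    """Simulate n_atoms independent random decays; return the fraction surviving at time t."""
    survivors = sum(1 for _ in range(n_atoms) if random.expovariate(LAM) > t)
    return survivors / n_atoms

for n in (10, 1_000, 100_000):
    # Each atom's fate is random; the aggregate converges tightly on 0.5
    # after one half-life as n grows.
    print(n, fraction_remaining(n, HALF_LIFE))
```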
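And a separate sketch, with placeholder numbers of my own choosing, of why uncorrelated, frequent, small losses are insurable while correlated, rare, large ones are not: the expected annual loss is the same in both regimes, but its variability is wildly different.

```python
import random
import statistics

POLICYHOLDERS = 1_000
CLAIM = 1.0        # loss per affected policyholder, arbitrary units

def loss_uncorrelated(p_claim=0.01):
    # Independent losses: house fires, automobile accidents.
    return sum(CLAIM for _ in range(POLICYHOLDERS) if random.random() < p_claim)

def loss_correlated(p_event=0.01):
    # One regional event (earthquake, firestorm) hits every policyholder at once.
    return CLAIM * POLICYHOLDERS if random.random() < p_event else 0.0

years = 5_000
uncorr = [loss_uncorrelated() for _ in range(years)]
corr = [loss_correlated() for _ in range(years)]

# Roughly equal expected annual losses; enormously different spread.
print("uncorrelated:", statistics.mean(uncorr), statistics.stdev(uncorr))
print("correlated:  ", statistics.mean(corr), statistics.stdev(corr))
```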

Liability

Numerous private industries benefitted from the use of lead whilst externalising most of the costs and impacts. (Thomas Midgley, somewhat infamously, was not immune to the effects and did suffer lead poisoning.) More generally, though, investors and creditors faced minimal direct exposure, whilst front-line workers and the public at large, especially in poorer areas more exposed to contamination, bore the brunt.

Profits were privatised, costs socialised.

Speech

Industry and its advocates were strongly motivated to confound the issue. They lied, misled, delayed, and otherwise contaminated not just the physical environment but the epistemological one. It's here that I have some extreme misgivings over popular notions of free speech, in which rights to say anything are at odds with the general public's right to accurate and truthful information. It seems to me that there's a profound conflict here, and a growing problem. It's not one that's easily resolved, though my thinking in terms of #AutonomousCommunication is poking around that space. See here https://joindiaspora.com/posts/622677903778013902fd002590d8e506

(I'm not happy with that term. "Information Autonomy" or "Communication Autonomy" are probably better.)

See also especially Oreskes and Conway, Merchants of Doubt.

#lead #leadedGasoline #environment #contamination #risk #speech #liability #technology #manifestation #RobertKMerton #NaomiOreskes #MerchantsOfDoubt #ErikConway

dredmorbius@joindiaspora.com

Steven Pinker's Panglossianism has long annoyed me

A key to understanding why is in the nature of technical debt, complexity traps (Joseph Tainter) or progress traps (Ronald Wright), closely related to Robert K. Merton's notions of unintended consequences and manifest vs. latent functions.

You can consider any technology (or intervention) as having attributes along several dimensions. Two of those are impact (positive or negative) and realisation timescale (short or long).

                    Positive            Negative
Short realisation   Obviously good      Obviously bad
Long realisation    Unobviously good    Unobviously bad

Technologies with obvious quickly-realised benefits are generally and correctly adopted, those with obvious quickly-realised harms rejected. But we'll also unwisely reject technologies whose benefits are not immediately or clearly articulable, and unwisely adopt those whose harms are long-delayed or unapparent. And the pathological case is when short-term obvious advantage is paired with long-term non-evident harm.

By "clearly articulable", I'm referring to the ability at social scale to effectively and accurately convey true benefit or harm. The notion of clear articuability itself not being especially clearly articuable....

For illustration: cheesecake has obvious short-term advantages, walking on hot coals obvious harms. A diet and gym routine afford only distant benefits. Leaded gasoline, Freon, DDT, and animal wet markets have all proven to have catastrophic long-term consequences.

As Merton notes, the notion of latent functions is itself significant:

The discovery of latent functions represents significant increments in sociological knowledge. There is another respect in which inquiry into latent functions represents a distinctive contribution of the social scientist. It is precisely the latent functions of a practice or belief which are not common knowledge, for these are unintended and generally unrecognized social and psychological consequences. As a result, findings concerning latent functions represent a greater increment in knowledge than findings concerning manifest functions. They represent, also, greater departures from "common-sense" knowledge about social life. Inasmuch as the latent functions depart, more or less, from the avowed manifestations, the research which uncovers latent functions very often produces "paradoxical" results. The seeming paradox arises from the sharp modification of a familiar popular perception which regards a standardized practice or belief only in terms of its manifest functions by indicating some of its subsidiary or collateral latent functions. The introduction of the concept of latent function in social research leads to conclusions which show that "social life is not as simple as it first seems." For as long as people confine themselves to certain consequences (e.g., manifest consequences), it is comparatively simple for them to pass moral judgements upon the practice or belief in question.

-- Robert K. Merton, "Manifest and Latent Functions", in Social Theory Re-Wired

Emphasis in original.

In the argument between optimism and pessimism, the optimists have the advantage of pointing to a current set of known good states --- facts in the present which can be clearly demonstrated. A global catastrophic risk by definition has not yet occurred and therefore of necessity exists in a latent state. Worse, it shares non-existence with an infinite universe of calamities, many or most of which cannot or never will occur, and any accurate Cassandra has the burden of arguing why the risk she warns of is not among the unrealisable set. The side arguing for pessimism cannot point to any absolute proof or evidence, only indirect evidence such as similar past history, theory, probability distributions, and the like. To further compound matters, our psychological makeup resists treating such hypotheticals with the same respect granted manifested scenarios.

(There are some countervailing dynamics favouring pessimism biases. My sense is that on balance these are overwhelmed by optimism bias.)

The notion of technical debt gives us one tool for at least conceptualising, if not actually directly measuring, such costs. As a technical project, or technological adoption, progresses, trade-offs are made for present clear benefit in exchange for some future and ill-defined cost. At this point a clarification of specific aspects of risk is necessary. The future risk is not merely stochastic, the playing out of random variance on some well-known variable function, but unknown. We don't even know the possible values the dice may roll, or what cards are within the deck. I don't know of a risk terminology that applies here, though I'd suggest model risk as a term: the risk is that we don't yet have even a useful model for assessing possible outcomes or their probabilities, as contrasted with stochastic risk given a known probability function. And again, optimists and boosters have the advantage of pointing to demonstrable or clearly articulable benefits.
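A rough sketch of that distinction, with candidate models invented purely for illustration: within any single model the spread of outcomes is something you can estimate, but plausible models disagree about what the distribution even is, and nothing in hand tells you which one holds.

```python
import random
import statistics

# Three hypothetical models of the same future loss; none is privileged.
models = {
    "benign":       lambda: random.gauss(0.0, 1.0),
    "heavy_tailed": lambda: random.paretovariate(1.5),
    "catastrophic": lambda: 0.0 if random.random() > 1e-3 else 1e6,
}

for name, draw in models.items():
    sample = [draw() for _ in range(100_000)]
    # Stochastic risk: the spread *within* one row. Model risk: the gulf *between* rows.
    print(name, round(statistics.mean(sample), 2), round(statistics.stdev(sample), 2))
```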

Among other factors in play are the likely value function on the one hand and global systemic interconnectedness on the other.

For some entity --- a cell, an individual, household, community, firm, organisation, nation, all of humanity --- any given intervention or technology offers some potential value return, falling to negative infinity at some origin (death or dissolution), and rising, at a diminishing rate, always (or very nearly always) to some finite limit. Past a point, more of a thing is virtually always net negative. Which suggests that the possible positive benefit of any given technology is limited.
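One toy functional form with roughly that shape --- a pole at the origin, diminishing returns, and an eventual net-negative tail --- chosen purely for illustration (the logarithmic benefit and linear carrying cost are my own assumptions, not anything argued above):

```python
import math

def net_value(x, benefit_scale=1.0, carrying_cost=0.1):
    """Toy net value of quantity x (> 0) of some intervention."""
    # Unboundedly negative near the origin, diminishing returns as x grows,
    # eventually net negative once carrying costs dominate.
    return benefit_scale * math.log(x) - carrying_cost * x

for x in (0.01, 0.1, 1, 10, 100):
    print(x, round(net_value(x), 2))
# Deeply negative near zero, peaks near x = benefit_scale / carrying_cost,
# then declines: more of a good thing is eventually net negative.
```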

The development of an increasingly interdependent global human system --- economic, technical, political, social, epidemiological, and more --- means both that few effects are localised and that the system as a whole runs closer to its limits, with more constraints and fewer tolerances than ever before. This is Tainter's complexity trap: yes, the system's overall complexity affords capabilities not previously possible, but the complexity cost must be paid; the cost of efficiency is lost resilience.

Pinker ... ignores all this.


Adapted from a comment to a private share.

#StevenPinker #DrPangloss #risk #JosephTainter #RonaldWright #RobertKMerton #complexity #resilience #efficiency #ModelRisk #interdependence #optimism #pessimism #bias #manifestation #UnintendedConsequences #LatentFunctions

dredmorbius@joindiaspora.com

On Surveillance Capitalism, Manifestation, Latency, Tangibility, and Cognizability

On why pervasive facial recognition is recognised as "creepy" in ways that other forms of surveillance, such as the massive personal and location data tracking afforded by mobile phones, are not.

In addition to the frequently noted fact that your phone is separable in ways your face, Nick Cage and John Travolta excepted, is not, there's the notion of manifest versus latent (or tangible vs. intangible) perceptions.

Humans are visual creatures. To "see" is synonymous with "to understand". Vision is a high-fidelity sense, in ways that even other senses (hearing, smell, taste, touch) are not. And all our senses are more immediate than perceptions mediated by devices (as with radiation or magnetism) or delivered via symbols, data, or maths.

This is a tremendously significant factor in individual and group psychology. It's also one that's poorly explored and expressed -- Robert K. Merton's work on latent vs. manifest functions, described as the consequences or implications of systems, tools, ideas, or institutions, is about the closest I've been able to find, and whilst this captures much of the sense I'm trying to convey, it doesn't quite catch all of it.

But his work does provide one extraordinarily useful notion, that of the significance of latent functions (or perceptions):

The discovery of latent functions represents significant increments in sociological knowledge. There is another respect in which inquiry into latent functions represents a distinctive contribution of the social scientist. It is precisely the latent functions of a practice or belief which are not common knowledge, for these are unintended and generally unrecognized social and psychological consequences. As a result, findings concerning latent functions represent a greater increment in knowledge than findings concerning manifest functions. They represent, also, greater departures from "common-sense" knowledge about social life. Inasmuch as the latent functions depart, more or less, from the avowed manifestations, the research which uncovers latent functions very often produces "paradoxical" results. The seeming paradox arises from the sharp modification of a familiar popular perception which regards a standardized practice or belief only in terms of its manifest functions by indicating some of its subsidiary or collateral latent functions. The introduction of the concept of latent function in social research leads to conclusions which show that "social life is not as simple as it first seems." For as long as people confine themselves to certain consequences (e.g., manifest consequences), it is comparatively simple for them to pass moral judgements upon the practice or belief in question.

-- Robert K. Merton, "Manifest and Latent Functions", in Social Theory Re-Wired.

Emphasis in original.

Another related concept and term is cognizability, that is, the capacity of being known or apprehended, a concept I first encountered in William Stanley Jevons's qualities of the material of money. Which has a clear relation to recognition as well.

(Adapted from a Hacker News comment.)

#Manifestation #ManifestVsLatent #tangible #cognisability #RobertKMerton #Sociology #Surveillance #SurveillanceCapitalism #SurveillanceState