#latentfunctions

dredmorbius@joindiaspora.com

Perrow, Normal Accidents, and complex systems determinants

From comments to a post by @Joerg Fliege, preserved for easier retrieval.

Charles Perrow's model in Normal Accidents is Interactions vs. Coupling. This seems ... overly reductionist? Simple is good, too simple is not.

Breaking down Perrow's taxonomy, these are the dimensions or factors I might apply (a rough scoring sketch follows the list). Ranges run generally from "easy" to "hard" in terms of successful control:

  • Coupling flexibility: loose/tight
  • Coupling count: low/high
  • Internal complexity: low/high
  • Threshold sensitivity: high/low
  • Self-restabilisation tendency: high/low
  • Constraints/tolerances (design, manufacture, operational, maintenance, training, financial): loose/tight
  • Incident consequence: low/high
  • Scale (components, mass, distance, time, energy (kinetic/potential), information, decision): low/high (absolute log)
  • Decision or response cycle: long/short
  • Environmental uniformity: high/low
  • Environmental stability: high/low
  • State determinability: high/low
  • Risk determinability: high/low
  • Controls coupling: tight/loose
  • Controls response: high/low
  • Controls limits: high/low
  • Controls complexity: low/high
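
A rough sketch of how such a scoring might be encoded, purely illustrative: the field names and the 0-to-1 "difficulty" scale below are my own assumptions, not Perrow's, and a plain mean stands in for whatever weighting a real model would need.

    # Illustrative only: rate each factor from 0.0 ("easy") to 1.0 ("hard")
    # and summarise with a plain mean. A real model would need weights and
    # interaction terms, which is exactly the open question here.
    FACTORS = [
        "coupling_flexibility", "coupling_count", "internal_complexity",
        "threshold_sensitivity", "self_restabilisation", "constraints_tolerances",
        "incident_consequence", "scale", "decision_cycle",
        "environmental_uniformity", "environmental_stability",
        "state_determinability", "risk_determinability",
        "controls_coupling", "controls_response", "controls_limits",
        "controls_complexity",
    ]

    def control_difficulty(scores):
        """Mean difficulty over whichever factors have been rated."""
        rated = [scores[f] for f in FACTORS if f in scores]
        return sum(rated) / len(rated) if rated else 0.0

    # e.g. a tightly coupled, opaque, fast-moving system:
    print(control_difficulty({"coupling_flexibility": 0.9,
                              "state_determinability": 0.8,
                              "decision_cycle": 0.7}))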

That's a bunch of factors, giving a complex model, but many of these are related. I see general parameters of complexity or arity, of change (itself complexity), of tolerances or constraints, of responses, of controls, of perception or sensing. These themselves are elements of a standard control or systems model.

                   update (learn)
                         ^
                         |
state -> observe -> apply model -> decide -> act (via controls)
  ^        ^  ^                                        |
  |       /    \                                       |
  |  system    environment                             |
  |                                                    |
  +----------------------------------------------------+
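
A minimal, self-contained instance of that loop, as a sketch under my own assumptions rather than any particular control framework: a noisy scalar state held near a setpoint, with an integral-style "update (learn)" step estimating a constant environmental disturbance.

    import random

    def control_loop(setpoint=0.0, gain=0.5, learn_rate=0.1, steps=50):
        state = 5.0              # the system's true (hidden) state
        disturbance = 0.2        # a constant push from the environment
        bias_estimate = 0.0      # the controller's learned model of that push
        for _ in range(steps):
            observation = state + random.gauss(0, 0.1)    # observe (noisy sensor)
            error = observation - setpoint                 # apply model
            action = -gain * error - bias_estimate         # decide
            state += action + disturbance                  # act (via controls)
            bias_estimate += learn_rate * error            # update (learn)
        return state

    print(control_loop())   # settles near the setpoint despite the disturbance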

Coupling is how the system relates to its environment and controls. Those couplings may also be sensors or controls.

Consequence refers to the result of undesired or uncontrolled states. It relates strongly to resilience or fragility.

Internal complexity, threshold sensitivity, self-stabilisation, constraints, tolerances, and scale (a form or attribute of complexity) are all aspects of the system and its model. Consequence is a component of risk.

Decision cycle --- how rapidly responses must be made to ensure desired or controlled function --- is its own element.

Environmental uniformity and stability are exogenous factors.

State and risk determinability apply to observation and model, respectively. State is overt or manifest, risk is covert or latent. State is inherently more apparent than risk.

The controls aspects all relate to how intuitive, responsive, limited, and complex control is. Controls mapping directly to desired outcomes decrease complexity; controls providing precise and immediate response likewise. High limits (large allowed inputs) increase control, low limits decrease it and require greater planning or more limited environments. Complexity ... may need some further refinement. Degrees of control mapping to freedoms of movement of the controlled system are useful, but complexity of interactions or in specifying inputs generally adds complexity.

On scale, I added the note "absolute log". That recognises that it's not simply being large or small that is complex, but departure from familiar or equilibrium norms. A model isn't a smaller representation but a simplified one -- we model both galaxies and atoms. Starting with some familiar scale or equilibrium state, noting the orders of magnitude above or below that of a given system along various dimensions, and taking the absolute value of that, seems a reasonable first approximation of the complexity of that system in that dimension.
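
A minimal rendering of that idea, with the familiar reference scale as an explicit parameter (the example values here are my own):

    import math

    def scale_complexity(value, familiar):
        """Orders of magnitude separating a quantity from a familiar or
        equilibrium scale, regardless of direction (larger or smaller)."""
        return abs(math.log10(value / familiar))

    # taking ~1 metre as the familiar length scale:
    print(scale_complexity(1e-10, 1.0))   # an atom: ~10 orders of magnitude away
    print(scale_complexity(1e21, 1.0))    # a galaxy: ~21 orders of magnitude away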

Reducing my factors:

  • System complexity: coupling, scale, internal complexity, stability, constraints, tolerances.
  • Environmental complexity: uniformity, stability, observability, predictability.
  • State determinability.
  • Risk determinability.
  • Model complexity, accuracy, and usefulness.
  • Decision cycle: required speed, number of decisions & actions with time.
  • Consequence: Risks. Result of undesired or uncontrolled state. These may be performance degradation, harm or damage to the system itself, loss of assets, reduced production or delivery, harm to operators, harm to third-party property, environmental degradation, epistemic harm, global systemic risk.
  • Controls: appropriateness, completeness, precision, responsiveness, limits, complexity.

That may still be too many moving parts, but I'm having trouble reducing them.

Perhaps:

  • Complexity (state, system, environment, model, controls)
  • Determinability (state, risk, consequence, decision)
  • Risk (Or fragility, resilience, consequence?)

I'm not satisfied, but it's a start.

#complexity #CharlesPerrow #ComplexSystems #NormalAccidents #control #ControlTheory #SystemsTheory #Cybernetics #Risk #Manifestation #UnintendedConsequences #ManifestFunctions #LatentFunctions #RobertKMerton

dredmorbius@joindiaspora.com

Steven Pinker's Panglossianism has long annoyed me

A key to understanding why is in the nature of technical debt, complexity traps (Joseph Tainter) or progress traps (Ronald Wright), closely related to Robert K. Merton's notions of unintended consequences and manifest vs. latent functions.

You can consider any technology (or intervention) as having attributes along several dimensions. Two of those are impact (positive or negative) and realisation timescale (short or long).

                      Positive           Negative
  Short realisation   Obviously good     Obviously bad
  Long realisation    Unobviously good   Unobviously bad

Technologies with obvious quickly-realised benefits are generally and correctly adopted, those with obvious quickly-realised harms rejected. But we'll also unwisely reject technologies whose benefits are not immediately or clearly articulable, and adopt those whose harms are long-delayed or unapparent. And the pathological case is when short-term obvious advantage is paired with long-term nonevident harm.

By "clearly articulable", I'm referring to the ability at social scale to effectively and accurately convey true benefit or harm. The notion of clear articuability itself not being especially clearly articuable....

For illustration: cheesecake has obvious short-term advantage, walking on hot coals obvious harms. A diet and gym routine afford only distant benefits. Leaded gasoline, Freon, DDT, and animal wet markets have all proven to have long-term catastrophic consequences.

As Merton notes, the notion of latent functions is itself significant:

The discovery of latent functions represents significant increments in sociological knowledge. There is another respect in which inquiry into latent functions represents a distinctive contribution of the social scientist. It is precisely the latent functions of a practice or belief which are not common knowledge, for these are unintended and generally unrecognized social and psychological consequences. As a result, findings concerning latent functions represent a greater increment in knowledge than findings concerning manifest functions. They represent, also, greater departures from "common-sense" knowledge about social life. Inasmuch as the latent functions depart, more or less, from the avowed manifestations, the research which uncovers latent functions very often produces "paradoxical" results. The seeming paradox arises from the sharp modification of a familiar popular perception which regards a standardized practice or belief only in terms of its manifest functions by indicating some of its subsidiary or collateral latent functions. The introduction of the concept of latent function in social research leads to conclusions which show that "social life is not as simple as it first seems." For as long as people confine themselves to certain consequences (e.g., manifest consequences), it is comparatively simple for them to pass moral judgements upon the practice or belief in question.

-- Robert K. Merton, "Manifest and Latent Functions", in Social Theory Re-Wired

Emphasis in original.

In the argument between optimists and pessimists, the optimists have the advantage of pointing to a current set of known good states --- facts in the present which can be clearly pointed to and demonstrated. A global catastrophic risk by definition has not yet occurred and therefore of necessity exists in a latent state. Worse, it shares non-existence with an infinite universe of calamities, many or most of which cannot or never will occur, and any accurate Cassandra has the burden of arguing why the risk she warns of is not among the unrealisable set. The side arguing for pessimism cannot point to any absolute proof or evidence, only indirect evidence such as similar past history, theory, probability distributions, and the like. To further compound matters, our psychological makeup resists treating such hypotheticals with the same respect granted manifested scenarios.

(There are some countervailing dynamics favouring pessimism biases. My sense is that on balance these are overwhelmed by optimism bias.)

The notion of technical debt gives us one tool for at least conceptualising, if not actually directly measuring, such costs. As a technical project, or technological adoption, progresses, trade-offs are made for present clear benefit in exchange for some future and ill-defined cost. At this point a clarification of the specific nature of the risk is necessary. The future risk is not merely stochastic, the playing out of random variance on some well-known probability function, but unknown. We don't even know the possible values the dice may roll, or what cards are within the deck. I don't know of a risk terminology that applies here, though I'd suggest model risk as a term: the risk is that we don't yet have even a useful model for assessing possible outcomes or their probabilities, as contrasted with stochastic risk given a known probability function. And again, optimists and boosters have the advantage of pointing to demonstrable or clearly articulable benefits.
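
A toy contrast between the two, purely illustrative (the outcome values and the two candidate models are invented): under stochastic risk the expected loss can be estimated because the distribution is known; under model risk the same calculation swings by orders of magnitude depending on which model of the outcomes we assume.

    import random

    # Stochastic risk: outcomes and their probabilities are known, so the
    # expected loss can be estimated as tightly as we care to sample.
    KNOWN_OUTCOMES = [0, 0, 0, 10]          # one-in-four chance of losing 10
    est = sum(random.choice(KNOWN_OUTCOMES) for _ in range(100_000)) / 100_000
    print("stochastic estimate:", round(est, 2))   # ~2.5

    # Model risk: we don't know which outcome set applies; the same
    # calculation gives wildly different answers under different models.
    CANDIDATE_MODELS = {
        "benign":     [0, 0, 0, 10],
        "heavy_tail": [0, 0, 0, 10_000],
    }
    for name, outcomes in CANDIDATE_MODELS.items():
        print(name, "expected loss:", sum(outcomes) / len(outcomes))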

Among other factors in play are the likely value function on the one hand and global systemic interconnectedness on the other.

For some entity --- a cell, an individual, household, community, firm, organisation, nation, all of humanity --- any given intervention or technology offers some potential value return, falling to negative infinity at some origin (death or dissolution), and rising, at a diminishing rate, always (or very nearly always) to some finite limit. Past a point, more of a thing is virtually always net negative. Which suggests that the possible positive benefit of any given technology is limited.
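
One illustrative shape with roughly those properties (my own choice of curve, not derived from anything above; it trades the finite ceiling for showing the eventual downturn): logarithmic benefit minus a linear cost, negatively infinite at the origin, rising at a diminishing rate, with returns turning negative at the margin past a point.

    import math

    def value(x, a=1.0, b=0.1):
        """Illustrative value curve: negative infinity at the origin, rising
        at a diminishing rate, peaking at x = a/b, after which more of the
        thing is net negative at the margin."""
        return a * math.log(x) - b * x

    # diminishing, then negative, returns as the dose increases:
    print([round(value(x), 2) for x in (0.1, 1, 5, 10, 50, 100)])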

The development of an increasingly interdependent global human system --- economic, technical, political, social, epidemiological, and more --- means both that few effects are localised and that the system as a whole runs closer to its limits, with more constraints and fewer tolerances than ever before. This is Tainter's complexity trap: yes, the system's overall complexity affords capabilities not previously possible, but the complexity cost must be paid; the cost of efficiency is lost resilience.

Pinker ... ignores all this.


Adapted from a comment to a private share.

#StevenPinker #DrPangloss #risk #JosephTainter #RonaldWright #RobertKMerton #complexity #resilience #efficiency #ModelRisk #interdependence #optimism #pessimism #bias #manifestation #UnintendedConsequences #LatentFunctions