#unintendedconsequences

dredmorbius@joindiaspora.com

James Burke, on how he would extend Connections (2004)

In 2004 the creator of the 1979 public broadcasting masterpiece of technological history, James Burke, spoke with KCSM, a local San Francisco Bay Area television station, about that series and its successors. Those included two further series based on Connections, neither fully measuring up to the original in my view, as well as The Day the Universe Changed, which did, and possibly even exceeded it.

The interview is online at the Internet Archive, and I recommend it in its entirety:

https://archive.org/details/JamesBurkeReConnections_0

At about 47:45 into the interview, Burke is asked where he would take the series today, extending the story of eight key inventions: the telephone, plastics, the atomic bomb, mass production, manned space flight, the jet airplane, television, and the B-52 bomber. I find his rationale of specific interest: that he follows "the connective principle that you'd go for something fairly unexpected". It's not the invention itself or its direct effects that are most significant, but what it interacts with. Anticipating those interactions is difficult, and I think he misses on a few of his answers, but two seem painfully prescient: those concerning telephones and jet airplanes.

The underlying principle is that of unintended consequences, a concept developed by sociologist Robert K. Merton, and strongly tied to another concept of his, that of manifest and latent functions. A manifest function is one which is readily and immediately apparent; a latent function is its opposite. Merton specifically discusses the significance of these, noting that awareness of latent functions represents the greater advance of knowledge precisely because they are less apparent to the observer.

I've transcribed the segment here, lightly edited for clarity.


Transcript

Q: James, you ended the 1979 Connections with eight key modern inventions. Now the world has changed a lot since then --- if you were to make Connections now, I'm curious how you'd continue each of those threads in turn. Could I list them for you?

I'll try.

The Telephone

I think I'd probably go forward on the connective principle that you'd go for something fairly unexpected. For the telephone, I think I'd go to what's going to happen when very, very cheap wireless communication gets to the Third World.

Q: The idea that the Third World doesn't need copper wire; they can go right to wireless.

And it's going to cause massive social change.

Plastics

Plastic is a difficult one. I suppose, really continuing that kind of work, the next big thing in that field would be finding some kind of plastic solar cell that makes it very, very cheap to generate electricity. The point about that being that when that happens you change the face of the planet, and you need to start thinking about heat budgets, and about when you cannot use electricity, not when you can.

The Atomic Bomb

I think probably the atomic bomb would have taken us to the Internet. When Russia gets the bomb, the DEW [Distant Early Warning] Line across the northern frontier of Canada gets built to protect us against incoming bombers, and that's the beginning of distributed networks, and that's the beginning of the Internet.

Q: ... And that gives us DARPAnet, which gives us the Internet.

That's right.

Mass Production

I suppose from mass production I would have gone forward to the end of mass production, because that's what's happening, and the consumer as designer. I don't think it's too far-fetched to see, quite soon, intelligent agents acting on behalf of the consumer to go and make the object that the consumer wants to buy, customised totally to that individual consumer's desire.

Q: True mass customisation.

Yes.

Manned space flight ...

Ah...

Q: The story that ends ...

... Nowhere. I would have gone forward, I think, to unmanned spaceflight: things like GPS, Earth imaging, and maybe ultimately the use of satellite imagery to look at things like taxes and land ownership and that kind of social aspect, rather than adventuring out into the black yonder.

The Jet Airplane

Pandemics.

Pandemics. [Repeated]

The more people fly, the more they contact each other, the more we're going to see more and more viruses moving around the world.

Television

I would probably have gone to us being out of a job.

Q: Fade to black.

Almost.

I think when those relatively new cellphones with a camera looking at you and a camera looking at the scene get out there, that's the end of formal reporting and the end of formal international television in the old sense. There will be 20 million reporters.

Q: Do we end up with better news, or do we end up with Fahrenheit 451, where you're inside the television serial?

I think we end up with better news. I'll tell you why: I think what we're getting at the moment is ethnocentric news; we're getting the kind of news that the British or the American or the French television people think their audience wants, and we're getting it from that point of view. With these new devices you're going to get very local news, from a very local point of view, whether you like it or not.

The B-52

Q: You finished the original Connections, in its penultimate image, with the B-52. Is that still the ultimate invention, the final connection, or have we moved to something else, and what would it be now?

I think if I had to do the series all over again and end with something that pulled everything together, it would be some aspect of the coming marriage between electronics and nanotechnology, between the life sciences and electronics. I think that's going to bring about a revolution the likes of which we have not even begun to understand yet.

Q: You mean like biochips, DNA?

The modification of life. Why not?

Q: Does that become the final link in the chain? Forever?

In a sense, of course, it is, because from then on you don't discover, you invent.


What Burke gets right and wrong

One concept I've (dredmorbius) been playing with is what the mechanisms of technology are, arriving at a list of nine fundamental modalities: materials, fuels, power transmission and transformation, process ("technical" or "how") knowledge, causal ("scientific" or "why") knowledge, networks, systems, information, and hygiene (unintended / unwanted consequences or side effects).

In many of these cases, Burke is, or perhaps ought to be, addressing the last of these --- the unintended and often disruptive side effects or consequences of technology.

In the two cases I'd noted, his views strongly suggest this, most directly in the case of air travel (when you listen to the passage, he sounds far more certain of this conclusion than of most others, with no hesitancy). Events of the past two years with the COVID-19 global pandemic strongly bear out his concern. Similarly, his assessment that the impact of telephones would be severely disruptive, notably in the developing world, also seems borne out: his comments predate the Arab Spring, the Syrian civil war, and the conflict in Myanmar. I strongly suspect we've not seen the full development. And of course there's the role that mobile phones, social media, and algorithmic amplification and manipulation have played in the US and Europe.

Burke's answers in the case of plastics and television strike me as weaker, though he's also less certain. To my view, what we're discovering with plastics is the consequence both of bioactive materials, where many compounds found in plastics mimic hormones found in plants and animals, and of the simple problem of a material which does not readily degrade, with plastics accumulating in the environment on land and in the oceans.

For television, his focus on capture devices (camera-equipped phones) neglects both the role of journalism itself and that of the ultimate distribution network. He does make a strong point on the ethnic (and socioeconomic) framing of media, and I'd argue that much of the present cultural backlash seen in the US and elsewhere is a response to formerly-repressed viewpoints and frames being presented. Established power does not like this, and rarely has. But simple cameras-on-the-ground are not the same as true investigative journalism. As Gil Scott-Heron said, the revolution will not be televised; in a later interview he explained that the revolution is in your head, out of reach of the camera lens. Storytelling involves image, yes, but also narrative which connects and relates those elements. Burke also fails to consider the role of networks and gatekeepers in filtering and amplifying stories, which has a huge impact on what stories reach which audiences. This is true both of mass-media broadcast and of directly-targeted online media.

A book I've been reading, Andrew L. Shapiro's The Control Revolution (1999), addresses this last point at length.

His insight that spaceflight turns out to be largely of informational significance is also powerful, though I'd add climate science to the list of benefits delivered.

Still, prediction is hard, as they say, and in both specifics and spirit, Burke does quite well here.

#JamesBurke #Connections #Technology #Forecasting #UnintendedConsequences

dredmorbius@joindiaspora.com

If Software Companies Ruled the World (1987)

... Where the shmoo-factor comes in and software executives begin to grit their teeth is when a PC user decides to make a copy of a commercially-produced program for a friend. Suddenly there are two programs where there once was one, and there's a good chance that the recipient of the copied disk will never break down and buy his own legitimate copy. This scenario, which is repeated daily all over the world, is the bane of the software industry, which contends it is losing millions of dollars in potential sales through this penny-ante thievery. ...

https://www.jaykinney.com/Texts/shmoo.html

#software #copyright #unintendedConsequences #licensing #WholeEarthCatalog #WholeEarthReview #1987 #satire #predictions

dredmorbius@joindiaspora.com

Perrow, Normal Accidents, and complex systems determinants

From comments to a post by @Joerg Fliege, preserved for easier retrieval.

Charles Perrow's model in Normal Accidents is Interactions vs. Coupling. This seems ... overly reductionist? Simple is good, too simple is not.

Breaking down Perrow's taxonomy, these are the dimensions or factors I might apply. Ranges generally run from "easy" to "hard" in terms of successful control:

  • Coupling flexibility: loose/tight
  • Coupling count: low/high
  • Internal complexity: low/high
  • Threshold sensitivity: high/low
  • Self-restabilisation tendency: high/low
  • Constraints/tolerances (design, manufacture, operational, maintenance, training, financial): loose/tight
  • Incident consequence: low/high
  • Scale (components, mass, distance, time, energy (kinetic/potential), information, decision): low/high (absolute log)
  • Decision or response cycle: long/short
  • Environmental uniformity: high/low
  • Environmental stability: high/low
  • State determinability: high/low
  • Risk determinability: high/low
  • Controls coupling: tight/loose
  • Controls response: high/low
  • Controls limits: high/low
  • Controls complexity: low/high

That's a bunch of factors, giving a complex model, but many of these are related. I see general parameters of complexity or arity, of change (itself complexity), of tolerances or constraints, of responses, of controls, of perception or sensing. These themselves are elements of a standard control or systems model.

                   update (learn)
                         ^
                         |
state -> observe -> apply model -> decide -> act (via controls)
  ^        ^  ^                                        |
  |       /    \                                       |
  |  system    environment                             |
  |                                                    |
  +----------------------------------------------------+
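
As a toy sketch of that loop (my own illustration, not Perrow's; the thermostat framing, function names, and numbers are all assumptions, and the update/learn arm is omitted for brevity), in Python:

    import random

    def observe(state, noise=0.5):
        """Sensing: the true state is seen only through noisy couplings."""
        return state + random.uniform(-noise, noise)

    def decide(estimate, setpoint, limit=1.0):
        """Apply a (trivial) model and pick an action within control limits."""
        correction = setpoint - estimate
        return max(-limit, min(limit, correction))

    def act(state, action, drift=0.3):
        """Feed the action back via the controls; the environment adds drift."""
        return state + action + random.uniform(-drift, drift)

    state, setpoint = 20.0, 22.0
    for step in range(10):
        estimate = observe(state)            # observe
        action = decide(estimate, setpoint)  # apply model -> decide
        state = act(state, action)           # act via controls
        print(f"step {step}: estimate={estimate:5.2f}  state={state:5.2f}")

The `limit` parameter here is a crude stand-in for the "controls limits" factor above: shrink it, or raise the noise and drift, and the system becomes much harder to hold near its setpoint.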

Coupling is how the system relates to its environment and controls. Those couplings may also be sensors or controls.

Consequence refers to the result of undesired or uncontrolled states, and relates strongly to resilience or fragility.

Internal complexity, threshold sensitivity, self-stabilisation, constraints, tolerances, and scale (a form or attribute of complexity) are all aspects of the system and its model. Consequence is a component of risk.

Decision cycle --- how rapidly responses must be made to ensure desired or controlled function --- is its own element.

Environmental uniformity and stability are exogenous factors.

State and risk determinability apply to observation and model, respectively. State is overt or manifest, risk is covert or latent. State is inherently more apparent than risk.

The controls aspects all relate to how intuitive, responsive, limited, and complex control is. Controls mapping directly to desired outcomes decrease complexity; controls providing precise and immediate response likewise. High limits (large allowed inputs) increase control; low limits decrease it and require greater planning or more limited environments. Complexity ... may need some further refinement. Degrees of control mapping to freedoms of movement of the controlled system are useful, but complexity of interactions, or in specifying inputs, generally adds complexity.

On scale, I added the note "absolute log". That recognises that it's not simply large or small that is complex, but departure from familiar or equilibrium norms. A model isn't a smaller representation but a simplified one --- we model both galaxies and atoms. Starting with some familiar scale or equilibrium state, noting the orders of magnitude above or below it for a given system along various dimensions, and taking the absolute value, seems a reasonable first approximation of the complexity of that system in that dimension.
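
A minimal formalisation of that approximation (my own; the reference value is an arbitrary assumption):

    from math import log10

    def scale_complexity(scale, reference):
        """Orders of magnitude separating a system's scale from a
        familiar reference scale, in either direction."""
        return abs(log10(scale / reference))

    # With human height (~1.7 m) as the familiar reference, a galaxy and
    # an atom are both remote from it, in opposite directions:
    print(scale_complexity(1e21, 1.7))   # galactic scale: ~20.8 orders
    print(scale_complexity(1e-10, 1.7))  # atomic scale:   ~10.2 orders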

Reducing my factors:

  • System complexity: coupling, scale, internal complexity, stability, constraints, tolerances.
  • Environmental complexity: uniformity, stability, observability, predictability.
  • State determinability.
  • Risk determinability.
  • Model complexity, accuracy, and usefulness.
  • Decision cycle: required speed, number of decisions & actions with time.
  • Consequence: Risks. The result of an undesired or uncontrolled state. These may be performance degradation, harm or damage to the system itself, loss of assets, reduced production or delivery, harm to operators, harm to third-party property, environmental degradation, epistemic harm, or global systemic risk.
  • Controls: appropriateness, completeness, precision, responsiveness, limits, complexity.

That may still be too many moving parts, but I'm having trouble reducing them.

Perhaps:

  • Complexity (state, system, environment, model, controls)
  • Determinability (state, risk, consequence, decision)
  • Risk (Or fragility, resilience, consequence?)

I'm not satisfied, but it's a start.
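
For what it's worth, a sketch of how those three condensed dimensions might be carried as data when comparing systems; the field names, the 0-to-1 scoring, and the example scores are all my assumptions, not Perrow's:

    from dataclasses import dataclass

    @dataclass
    class SystemProfile:
        """Hypothetical container for the three condensed dimensions."""
        complexity: float       # 0 = simple ... 1 = highly complex
        determinability: float  # 0 = opaque ... 1 = fully determinable
        risk: float             # 0 = benign ... 1 = catastrophic consequence

    # Illustrative scores only, in the spirit of Perrow's contrasts:
    nuclear_plant = SystemProfile(complexity=0.9, determinability=0.3, risk=0.95)
    bakery        = SystemProfile(complexity=0.2, determinability=0.9, risk=0.10)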

#complexity #CharlesPerrow #ComplexSystems #NormalAccidents #control #ControlTheory #SystemsTheory #Cybernetics #Risk #Manifestation #UnintendedConsequences #ManifestFunctions #LatentFunctions #RobertKMerton

dredmorbius@joindiaspora.com

Steven Pinker's Panglossianism has long annoyed me

A key to understanding why is in the nature of technical debt, complexity traps (Joseph Tainter), or progress traps (Ronald Wright), closely related to Robert K. Merton's notions of unintended consequences and manifest vs. latent functions.

You can consider any technology (or interventions) as having attributes along several dimensions. Two of those are impact (positive or negative) and realisation timescale (short or long).

                     Positive           Negative
  Short realisation  Obviously good     Obviously bad
  Long realisation   Unobviously good   Unobviously bad

Technologies with obvious, quickly-realised benefits are generally and correctly adopted; those with obvious, quickly-realised harms are rejected. But we'll also unwisely reject technologies whose benefits are not immediately or clearly articulable, and unwisely adopt those whose harms are long-delayed or unapparent. The pathological case is when short-term obvious advantage is paired with long-term nonevident harm.
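
The adoption logic of that two-by-two can be rendered as a toy classifier (my own sketch; the labels simply restate the paragraph above):

    def adoption_outcome(impact, realisation):
        """impact: 'positive' | 'negative'; realisation: 'short' | 'long'."""
        if impact == "positive":
            return ("correctly adopted" if realisation == "short"
                    else "unwisely rejected: benefit not articulable")
        return ("correctly rejected" if realisation == "short"
                else "unwisely adopted: harm latent")

    print(adoption_outcome("positive", "short"))  # correctly adopted
    print(adoption_outcome("negative", "long"))   # the pathological quadrant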

By "clearly articulable", I'm referring to the ability at social scale to effectively and accurately convey true benefit or harm. The notion of clear articuability itself not being especially clearly articuable....

For illustration: cheesecake has obvious short-term advantages, walking on hot coals obvious harms. A diet and gym routine afford only distant benefits. Leaded gasoline, Freon, DDT, and animal wet markets have all proven to have long-term catastrophic consequences.

As Merton notes, the notion of latent functions is itself significant:

The discovery of latent functions represents significant increments in sociological knowledge. There is another respect in which inquiry into latent functions represents a distinctive contribution of the social scientist. It is precisely the latent functions of a practice or belief which are not common knowledge, for these are unintended and generally unrecognized social and psychological consequences. As a result, findings concerning latent functions represent a greater increment in knowledge than findings concerning manifest functions. They represent, also, greater departures from "common-sense" knowledge about social life. Inasmuch as the latent functions depart, more or less, from the avowed manifestations, the research which uncovers latent functions very often produces "paradoxical" results. The seeming paradox arises from the sharp modification of a familiar popular perception which regards a standardized practice or belief only in terms of its manifest functions by indicating some of its subsidiary or collateral latent functions. The introduction of the concept of latent function in social research leads to conclusions which show that "social life is not as simple as it first seems." For as long as people confine themselves to certain consequences (e.g., manifest consequences), it is comparatively simple for them to pass moral judgements upon the practice or belief in question.

-- Robert K. Merton, "Manifest and Latent Functions", in Social Theory Re-Wired

Emphasis in original.

In the argument between optimists and pessimists, the optimists have the advantage of pointing to a current set of known good states --- facts in the present which can be clearly pointed to and demonstrated. A global catastrophic risk by definition has not yet occurred, and therefore of necessity exists in a latent state. Worse, it shares non-existence with an infinite universe of calamities, many or most of which cannot or never will occur, and any accurate Cassandra has the burden of arguing why the risk she warns of is not among the unrealisable set. The side arguing for pessimism cannot point to any absolute proof or evidence, only indirect evidence such as similar past history, theory, probability distributions, and the like. To further compound matters, our psychological makeup resists treating such hypotheticals with the same respect granted manifested scenarios.

(There are some countervailing dynamics favouring pessimism biases. My sense is that on balance these are overwhelmed by optimism bias.)

The notion of technical debt gives us one tool for at least conceptualising, if not actually directly measuring, such costs. As a technical project, or technological adoption, progresses, trade-offs are made for present clear benefit in exchange for some future and ill-defined cost. At this point a clarification of specific aspects of risk is necessary. The future risk is not merely stochastic, the playing out of random variance on some well-known distribution, but unknown: we don't even know the possible values the dice may roll, or what cards are in the deck. I don't know of a risk terminology that applies here, though I'd suggest model risk as a term: the risk is that we don't yet have even a useful model for assessing possible outcomes or their probabilities, as contrasted with stochastic risk given a known probability function. And again, optimists and boosters have the advantage of pointing to demonstrable or clearly articulable benefits.

Among other factors in play are the likely value function on the one hand and global systemic interconnectedness on the other.

For some entity --- a cell, an individual, household, community, firm, organisation, nation, all of humanity --- any given intervention or technology offers some potential value return, falling to negative infinity at some origin (death or dissolution), and rising at a diminishing rate, always (or very nearly always), to some finite limit. Past a point, more of a thing is virtually always net negative, which suggests that the possible positive benefit of any given technology is limited.
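
For illustration (my own formalisation, not drawn from Tainter, Wright, or Merton): with scale or dose x > 0, a form such as

    V(x) = L - c/x        (L, c > 0)

falls to negative infinity as x approaches zero and rises at a diminishing rate toward the finite limit L. Adding a cost term instead, as in V(x) = a·ln(x) - b·x, produces a peak at x = a/b and decline beyond it, capturing the remark that past a point more of a thing is net negative.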

The development of an increasingly interdependent global human system --- economic, technical, political, social, epidemiological, and more --- means both that few effects are localised and that the system as a whole runs closer to its limits, with more constraints and tighter tolerances than ever before. This is Tainter's complexity trap: yes, the system's overall complexity affords capabilities not previously possible, but the complexity cost must be paid; the cost of efficiency is lost resilience.

Pinker ... ignores all this.


Adapted from a comment to a private share.

#StevenPinker #DrPangloss #risk #JosephTainter #RonaldWright #RobertKMerton #complexity #resilience #efficiency #ModelRisk #interdependence #optimism #pessimism #bias #manifestation #UnintendedConsequences #LatentFunctions