#systemstheory

dredmorbius@joindiaspora.com

Bullshit Arguments That Must Die: The "Materialism" Snarl Word

So, I've been accused of materialism, by which I understand them to mean dialectical materialism.

Elsewhere, after responding to a tired rejection of the risks of computational propaganda:

The literal basis of all of our lives is propaganda. We are wholly propagandized from the start to end of every single day. How can people be so empty to turn and say 'Russia!', after the 87th advertisement experienced in three hours?

With:

  1. Changing the nature of the propaganda changes the social and political balance.
  2. Computational propaganda operates at a scale, rate, capability, intensity, and pervasiveness unmatched in history, in an already globally interconnected and precarious world.

And noting that the printing press triggered the Reformation and Thirty Years War; widespread literacy, the Revolutions of 1848; Yellow Journalism, the Boer, Spanish-American, and First World Wars; radio and hi-fi sound recording and playback, Fascism. Our current tools are vastly more powerful. My interlocutor brilliantly quipped:

Yeah, materialism is a hell of a drug.

Asked for clarification, they quote Wikipedia.

If I'm reading them accurately, "materialism" sounds like a snarl word: a shallow dismissal.

Theories of history have evolved: mythic, religious, Great Man, ideas. A systems model, in which inputs, information, relationships, and capabilities are mutually determinative, is closest to my views, though you seem more interested in projecting a preconceived label than inquiring as to my own understanding.

Changing any element of the system changes its function. Information capabilities, including sensing, processing, storage, retrieval, and transmission --- media technology being a large part of this --- have major impacts on systemic function. So does the material ability to affect the environment or culture through capital and energy, scientific and technical understanding, motivating values, and other factors.

Ideas tend to emerge regardless of capabilities (subject to limits on empirical observation, existing foundations, and collaborative capabilities). But their ability to become prominent or dominant is enabled or limited by capabilities, including technological ones. Similar concepts have at times arisen independently in widely separated areas, the Axial Age (Greek, Hindu, and Chinese philosophical traditions) being one example. Democratic government, empiricism, liberal democracy, socialist principles, nationalism/tribalism, feudalism, market economics, polytheism, monotheism, and humanism similarly. Ideas themselves face Darwinian selection and fitness to specific niches.

And yes, scale effects absolutely matter. The modern industrialised world, and its ideas, aren't possible without vast energy, material resource, agricultural productivity, transport, communications, information processing/storage/retrieval, organisational, technological, infrastructural, transformational, and sanitary/hygienic/public health capabilities and scales. No quantity or intensity of ideas will move an Airbus A380 through the skies across continents, smelt a gigaton of steel, sequence a virus and distribute a vaccine worldwide, or create and run a broadcast network or social media server farm without the requisite energy, materials, scientific and technical knowledge, and the human and social systems to manage and direct them. Put your ideas on an airless bit of space dust and see what they accomplish.

Long-time readers may recognise most of the nine elements of my ontology of technological mechanisms here: fuels, materials, process knowledge, causal knowledge, networks, systems, power transmission and transformation, information, and hygiene factors.

#BullshitArgumentsThatMustDie #ideas #materialism #Cybernetics #SystemsTheory #TechOntology

dredmorbius@joindiaspora.com

Perrow, Normal Accidents, and complex systems determinants

From comments to a post by @Joerg Fliege, preserved for easier retrieval.

Charles Perrow's model in Normal Accidents is Interactions vs. Coupling. This seems ... overly reductionist? Simple is good, too simple is not.

Breaking down Perrow's taxonomy, here are the dimensions or factors I might apply. Ranges generally run from "easy" to "hard" in terms of successful control:

  • Coupling flexibility: loose/tight
  • Coupling count: low/high
  • Internal complexity: low/high
  • Threshold sensitivity: high/low
  • Self-restabilisation tendency: high/low
  • Constraints/tolerances (design, manufacture, operational, maintenance, training, financial): loose/tight
  • Incident consequence: low/high
  • Scale (components, mass, distance, time, energy (kinetic/potential), information, decision): low/high (absolute log)
  • Decision or response cycle: long/short
  • Environmental uniformity: high/low
  • Environmental stability: high/low
  • State determinability: high/low
  • Risk determinability: high/low
  • Controls coupling: tight/loose
  • Controls response: high/low
  • Controls limits: high/low
  • Controls complexity: low/high
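These dimensions can be sketched as a simple data structure, scoring each factor from 0.0 (the easy end of its range) to 1.0 (the hard end). The field names and the unweighted mean are my own illustration for making the taxonomy concrete, not anything found in Perrow:

```python
from dataclasses import dataclass, fields

# Each field scores one dimension from 0.0 (easy end of its range, e.g.
# loose coupling) to 1.0 (hard end, e.g. tight coupling). Names paraphrase
# the list above; the scoring scheme itself is my own sketch, not Perrow's.
@dataclass
class ControlDifficulty:
    coupling_flexibility: float = 0.0   # loose -> tight
    coupling_count: float = 0.0         # low -> high
    internal_complexity: float = 0.0    # low -> high
    threshold_sensitivity: float = 0.0  # high -> low
    self_restabilisation: float = 0.0   # high -> low
    constraint_tightness: float = 0.0   # loose -> tight
    incident_consequence: float = 0.0   # low -> high
    scale_departure: float = 0.0        # low -> high (absolute log)
    decision_cycle: float = 0.0         # long -> short
    env_uniformity: float = 0.0         # high -> low
    env_stability: float = 0.0          # high -> low
    state_determinability: float = 0.0  # high -> low
    risk_determinability: float = 0.0   # high -> low
    controls_coupling: float = 0.0      # tight -> loose
    controls_response: float = 0.0      # high -> low
    controls_limits: float = 0.0        # high -> low
    controls_complexity: float = 0.0    # low -> high

    def overall(self) -> float:
        """Naive unweighted mean: 0 = easy to control, 1 = hard."""
        vals = [getattr(self, f.name) for f in fields(self)]
        return sum(vals) / len(vals)
```

An unweighted mean is surely wrong in detail (some factors interact multiplicatively, as discussed below), but it makes the dimensionality of the model explicit.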

That's a bunch of factors, giving a complex model, but many of these are related. I see general parameters of complexity or arity, of change (itself complexity), of tolerances or constraints, of responses, of controls, of perception or sensing. These themselves are elements of a standard control or systems model.

                   update (learn)
                         ^
                         |
state -> observe -> apply model -> decide -> act (via controls)
  ^        ^  ^                                        |
  |       /    \                                       |
  |  system    environment                             |
  |                                                    |
  +----------------------------------------------------+
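The loop above can be rendered as a minimal simulation. This is a generic proportional control loop of my own devising to illustrate the diagram, not Perrow's formalism, and the update/learn arm is omitted:

```python
import random

def control_loop(state, steps, target=0.0, gain=0.5, noise=0.1):
    """Minimal observe -> apply model -> decide -> act cycle.

    `state` is perturbed by environmental noise; the controller observes
    it (imperfectly), applies a trivial model (error from target), decides
    a correction, and acts via a proportional control. The diagram's
    update/learn arm is not modelled here.
    """
    rng = random.Random(42)  # deterministic, for reproducibility
    for _ in range(steps):
        observed = state + rng.uniform(-noise, noise)  # observe (with sensor noise)
        error = observed - target                      # apply model
        action = -gain * error                         # decide
        state += action + rng.uniform(-noise, noise)   # act; environment perturbs
    return state

final = control_loop(state=10.0, steps=50)  # pulled close to the target of 0
```

Even this toy version exhibits the determinability problem: the controller never sees the true state, only a noisy observation of it.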

Coupling is how the system relates to its environment and controls. Those couplings may also be sensors or controls.

Consequence refers to the result of undesired or uncontrolled states. It relates strongly to resilience or fragility.

Internal complexity, threshold sensitivity, self-stabilisation, constraints, tolerances, and scale (a form or attribute of complexity) are all aspects of the system and its model. Consequence is a component of risk.

Decision cycle --- how rapidly responses must be made to ensure desired or controlled function --- is its own element.

Environmental uniformity and stability are exogenous factors.

State and risk determinability apply to observation and model, respectively. State is overt or manifest, risk is covert or latent. State is inherently more apparent than risk.

The controls aspects all relate to how intuitive, responsive, limited, and complex control is. Controls mapping directly to desired outcomes decrease complexity. Controls providing precise and immediate response likewise. High limits (large allowed inputs) increase control; low limits decrease it and require greater planning or more limited environments. Complexity ... may need some further refinement. Degrees of control mapping to freedoms of movement of the controlled system are useful, but complexity of interactions, or in specifying inputs, generally adds complexity.

On scale, I added the note "absolute log". That recognises that it's not simple large or small that is complex, but departure from familiar or equilibrium norms. A model isn't a smaller representation but a simplified one --- we model both galaxies and atoms. Starting with some familiar scale or equilibrium state, noting the orders of magnitude above or below that of a given system along various dimensions, and taking the absolute value of that, seems a reasonable first approximation of complexity of that system in that dimension.
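That "absolute log" measure can be made concrete. The reference scales below are purely illustrative choices of mine:

```python
import math

def scale_complexity(value, reference):
    """Orders of magnitude a quantity departs from a familiar reference
    scale, taken as an absolute value: very large and very small
    departures both register as complex."""
    return abs(math.log10(value / reference))

# Relative to an (illustrative) 100 kg human-scale reference mass:
# an A380 at ~5.6e5 kg and a bacterium at ~1e-12 kg both score high,
# despite sitting on opposite sides of the reference.
plane = scale_complexity(5.6e5, 100)    # ~3.7 orders of magnitude up
microbe = scale_complexity(1e-12, 100)  # ~14 orders of magnitude down
```

The absolute value is what captures "departure from the familiar" rather than mere bigness.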

Reducing my factors:

  • System complexity: coupling, scale, internal complexity, stability, constraints, tolerances.
  • Environmental complexity: uniformity, stability, observability, predictability.
  • State determinability.
  • Risk determinability.
  • Model complexity, accuracy, and usefulness.
  • Decision cycle: required speed, number of decisions & actions with time.
  • Consequence: Risks. Result of undesired or uncontrolled state. These may be performance degradation, harm or damage to the system itself, loss of assets, reduced production or delivery, harm to operators, harm to third-party property, environmental degradation, epistemic harm, or global systemic risk.
  • Controls: appropriateness, completeness, precision, responsiveness, limits, complexity.
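One hedged sketch of how the reduced factors might compose, treating each axis as a score from 0 to 1. This is my own toy formula, not a calibrated model: risk is conventionally consequence times likelihood, with complexity and poor determinability inflating the likelihood term.

```python
def control_risk(complexity, determinability, consequence):
    """Toy composition of the three reduced axes, each in [0, 1]:
    high complexity and low determinability raise the chance of an
    uncontrolled state arising unnoticed; consequence scales what
    such a state costs. Purely illustrative, not a validated model."""
    p_uncontrolled = complexity * (1.0 - determinability)
    return p_uncontrolled * consequence
```

The multiplicative form at least captures one intuition from Normal Accidents: a system that is complex but fully determinable, or consequential but simple, is far less dangerous than one that is all three at once.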

That may still be too many moving parts, but I'm having trouble reducing them.

Perhaps:

  • Complexity (state, system, environment, model, controls)
  • Determinability (state, risk, consequence, decision)
  • Risk (Or fragility, resilience, consequence?)

I'm not satisfied, but it's a start.

#complexity #CharlesPerrow #ComplexSystems #NormalAccidents #control #ControlTheory #SystemsTheory #Cybernetics #Risk #Manifestation #UnintendedConsequences #ManifestFunctions #LatentFunctions #RobertKMerton