#jevonsparadox

dredmorbius@joindiaspora.com

Ransomware and other Cybersecurity Attacks as a Threshold Phenomenon

Some threats emerge only once a viability or cost threshold is crossed. The recent set of high-profile cybersecurity and ransomware attacks[1] is a case in point.

There's a historical analogue to these.

For cities, recurring plagues began occurring during Roman times and limited maximum city populations to about one million until the advent of modern sanitation, hygiene, public health, waste removal, and food quality. (Actual medical care and treatment had little to do with this, though vaccines and antibiotics helped.) Soberingly, plagues and disease co-evolved with their host populations and civilisations.[2]

Industrial pollution lagged industrial development by about 50--100 years, with degraded air and water quality and material contamination (heavy metals, asbestos, organic solvents, synthetic hormone disruptors, and other bio-active contaminants) playing major roles. Global-level contamination, as with CO~2~ and CFC emissions (resulting in global warming and ozone depletion respectively), also lagged substantial usage by about 50--100 years.

Increases in travel, transport, and communications almost always directly facilitate fraud. The Greek/Roman gods Hermes/Mercury represented communication, messages, travel, transportation, commerce, trickery, and thieves. The term "confidence man" was popularised by Herman Melville's novel The Confidence-Man, set on the first great highway of the United States, the steamboat-plied Mississippi.

Mail begat mail fraud. Telegraph and telephones begat wire fraud. Cheap broadcast radio and television begat payola and game-show frauds. Email begat spam and phishing.

Some of these have been somewhat contained; all remain actively practised. Falling costs mean that attacks are now launched from far further afield than previously. Ironically, it's postal mail, with its high costs and low speed, that has proved most resistant to rampant abuse.

Computerised business practices of the 1990s and 2000s employed systems with abysmal security, but those systems were spared even worse consequences by the general lack of networking, the relatively small size of global computer networks, limited disk storage, limited network bandwidth, and the effective air-gapping provided by paper-driven processing steps. Billing might be submitted or computed electronically, but a paper check still had to be cut and signed. Even had a payment mechanism similar to today's cryptocurrencies been available, draining accounts or data simply wasn't possible without running up against the inherent limitations of the computing infrastructure of the time. The poor security of the 1990s and 2000s thus produced highly localised tragedies, not wide-ranging ones. There were exceptions, but they were rare, typically occurring well apart from one another and not as part of a connected series of attacks or vulnerabilities.

If my assessment is correct, we'll be seeing much more of the types of attacks notable in the past year.

Attackers have low costs. Victims have highly interconnected but poorly defended systems, composed of multiple components, each complex in its own right, and lacking any effective overall security accountability. End-to-end automation exists, facilitating both productive work and effective attacks. A viable and tracking-resistant payment mechanism exists. Regions from which attacks can be made with impunity exist, and are well connected to global data networks.
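
As a back-of-the-envelope illustration of that cost asymmetry, here's a minimal sketch of automated-campaign economics; every figure is an assumption of mine, not an empirical estimate:

```python
# Back-of-the-envelope model of automated-attack viability.
# All numbers are illustrative assumptions, not empirical figures.

def expected_profit(targets, cost_per_attempt, compromise_rate,
                    payment_rate, avg_ransom):
    """Expected attacker profit from a scripted, untargeted campaign."""
    cost = targets * cost_per_attempt
    revenue = targets * compromise_rate * payment_rate * avg_ransom
    return revenue - cost

# Scanning a million hosts at a tenth of a cent each, with only a
# tiny fraction compromised and paying, still clears the threshold:
print(expected_profit(
    targets=1_000_000,
    cost_per_attempt=0.001,   # automation makes attempts nearly free
    compromise_rate=0.0005,   # 1 in 2,000 hosts vulnerable
    payment_rate=0.25,        # 1 in 4 victims pays
    avg_ransom=50_000,
))  # -> 6249000.0
```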

Backups alone are not an effective defence, as they protect against data loss but not data disclosure. Full defence will require radically different thinking, protection, risk assessment, and law-enforcement capabilities.

Until then, get used to more of this, at both large and small scales.

There are some potential bright spots.

  • I suspect attackers aren't targeting specific facilities but are instead conducting automated and scripted attacks against vulnerable facilities.
  • For data-encryption ransom attacks, this means that the decryption key is all but certainly derivable from information on the attacked system, perhaps encoded as filenames or contents; see the sketch following this list. Determining this mechanism may at least allow for data recovery. (It of course does nothing against data disclosure, long-term surveillance, or access-denial attacks.) The likelihood that attackers maintain a database of victims and passwords seems low.
  • Attackers are themselves subject to trust and suspicion attacks, and turning members or safe harbours against them is probably a useful countermeasure.
  • State-level sanctions, falling short of military attacks, may also prove effective.
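
On the decryption-key point above, here's a hypothetical sketch of how a scripted attack might derive per-victim keys from values it also leaves on the victim's machine; the scheme, names, and parameters are illustrative only, not those of any actual ransomware family:

```python
# Hypothetical per-victim key derivation of the kind speculated
# about above; scheme and identifiers are illustrative only.
import hashlib

def derive_key(victim_id: str, campaign_tag: str) -> bytes:
    """Derive a per-victim key from values the attack script also
    leaves on the victim's own system, e.g. embedded in renamed-file
    extensions or the ransom note."""
    return hashlib.sha256(f"{campaign_tag}:{victim_id}".encode()).digest()

# If both inputs are recoverable from the compromised machine, so is
# the key -- which is the basis for the recovery hope above.
key = derive_key(victim_id="9f3a-7c21", campaign_tag="WAVE-2021-06")
print(key.hex())
```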

Notes:

  1. Note that ransomware is only one of a class of current cyberthreats. Others include information disclosure, service disruption (both as a side effect and as a direct objective), surveillance, leverage for further attacks, manipulation and computational propaganda, and the like.

  2. The notion that disease co-evolves with the host population is a long-standing one. The association with historical plagues is a major element of Kyle Harper's The Fate of Rome, which looks at this co-evolution as a major contributor to the ultimate fall of the Roman Empire.

#cybercrime #ransomware #infosec #costs #JevonsParadox #thresholds #emergentPhenomena

dredmorbius@joindiaspora.com

Data Facilitates Surveillance, Privacy Violation, and Manipulation Directly Through Increased Efficiencies

Digitisation and distribution of data (to multiple agencies, organisations, and firms) have been ongoing mostly since the 1960s, accelerating greatly as disk storage costs fell below personal-budget thresholds in the late 1990s.

I'd been working with industry data in the early 1990s, when several analytic departments at a mid-sized firm might share a couple of gigabytes of minicomputer storage. At a conference during the 1990s, in an audience of several hundred data analysts, only a few hands went up for those dealing with GB-scale datasets (the raised hands representing telecoms data, as I recall). I realised circa 2000 that storage capable of holding a few hundred bytes of data (plenty for a basic dossier) on every individual in a large country, or soon the world, would be within a modest household budget. Shortly afterward, the first news stories of data brokers started appearing, as well as of Total Information Awareness, which often contracted with those same data brokers.
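
The arithmetic behind that realisation, redone as a minimal sketch; population and price figures are rough circa-2000 assumptions:

```python
# Circa-2000 back-of-the-envelope: what would a basic dossier on
# everyone cost to store? All figures are rough assumptions.
BYTES_PER_DOSSIER = 300            # "a few hundred bytes"
US_POPULATION     = 280_000_000    # circa 2000
WORLD_POPULATION  = 6_000_000_000  # circa 2000
USD_PER_GB        = 10.0           # rough retail disk price, ~2000

def dossier_storage_cost(people: int) -> float:
    gigabytes = people * BYTES_PER_DOSSIER / 1e9
    return gigabytes * USD_PER_GB

print(f"US:    ${dossier_storage_cost(US_POPULATION):>8,.0f}")     # ~$840
print(f"World: ${dossier_storage_cost(WORLD_POPULATION):>8,.0f}")  # ~$18,000
```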

Early social networking sites were beginning to apply collaborative-filtering moderation systems, which, I quickly realised (having helped design several myself), were themselves prodigious collectors of reviewers' personal-preference data. Rating systems, like many swords, cut two ways: reviewers rate content, but ratings and content preferences also rate the reviewers. (An interesting twist on the Quis custodiet ipsos custodes question.)
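
A toy illustration of that two-way cut: the same ratings table, read by column, moderates content; read by row, it profiles the reviewers (data entirely invented):

```python
# A ratings matrix serves two masters: columns profile the content,
# rows profile the reviewers. Toy data only.
ratings = {
    "alice": {"A": 5, "B": 1, "C": 4},
    "bob":   {"A": 1, "B": 5, "C": 2},
    "carol": {"A": 4, "B": 2, "C": 5},
}

# Moderation view: average score per item.
for item in ("A", "B", "C"):
    scores = [prefs[item] for prefs in ratings.values()]
    print(item, sum(scores) / len(scores))

# Surveillance view: the very same data, transposed, is a
# preference dossier on each reviewer.
for reviewer, prefs in ratings.items():
    liked = [item for item, score in prefs.items() if score >= 4]
    print(reviewer, "likes", liked)
```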

In 1900, the only routinely digitised mass citizen data were US Census tabulations, updated decennially and not generally accessible. By 1960, telephone, banking, and airline data (the last through SABRE) were digitised, largely, as with Census data, on punch cards. Tape, and further expansion to credit, insurance, and utility data, developed by the 1970s, though punch cards remained in heavy use through the 1980s. The first widespread data-privacy outcries came in the 1970s; see, for example, Newsweek's 1970 article "The Assault on Privacy" (PDF), though early infotech pioneers such as packet-switched-networking pioneer Paul Baran were writing on data, surveillance, privacy, and ethical concerns in the 1960s. (An aside: those publications have been made freely available online by RAND at my request.) Marketing and advertising were increasingly represented by the 1990s, as was healthcare data, though records there remained (and still remain) highly fragmented.

By the 1990s, previously offline court and legal documents were being digitised in bulk (a practice begun years earlier), sometimes by local courts, more frequently by aggregation services such as LexisNexis, Westlaw, JustCite, HeinOnline, Bloomberg Law, VLex, LexEur, and others, which took advantage of public access to compile and store their own aggregations, often literally by sending individuals to local courthouses to record or duplicate records, one at a time, from clerks.

Access costs matter. And by costs I mean all inputs, not just money: time, knowledge, rates of availability, periodic caps (e.g., 4 records/hr., but a daily cap of 8 records, 16/week, and 32/month, effectively imposing an 8 hr/month access restriction), travel, parsing or interpretation, the ability to compile independent archives (rather than relying on the source or origin archive), etc.
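
With nested caps like those, the binding constraint is simply the tightest cap once all are normalised to a common period. A quick sketch using the example numbers above (the hours-open figure is an assumption):

```python
# Effective throughput under nested access caps: normalise each cap
# to records per month and take the minimum. Caps are the example
# numbers from the text; hours of availability are assumed.
HOURS_OPEN_PER_MONTH = 8 * 22    # ~8 hours/day, ~22 business days

caps_per_month = {
    "4/hour":   4 * HOURS_OPEN_PER_MONTH,  # 704
    "8/day":    8 * 22,                    # 176
    "16/week":  16 * 4.33,                 # ~69
    "32/month": 32,
}
binding = min(caps_per_month, key=caps_per_month.get)
print(binding, caps_per_month[binding])    # -> 32/month 32
# 32 records/month at 4 records/hour is 8 hours of useful access per
# month, matching the figure in the text.
```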

Aggregation itself is an invasion of privacy. Reduced search, parsing, and inference-drawing costs enable observation, surveillance, and manipulation.
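
A minimal sketch of why aggregation itself is the invasion: two datasets, each fairly innocuous alone, joined on shared quasi-identifiers, support an inference neither permits on its own (all records invented):

```python
# Two individually innocuous datasets, linked on quasi-identifiers
# (ZIP code + date of birth), yield an inference neither permits
# alone. All records are invented.
voter_roll = [
    {"name": "J. Doe",   "zip": "99999", "dob": "1970-01-02"},
    {"name": "A. Smith", "zip": "99998", "dob": "1980-03-04"},
]
health_records = [  # "anonymised": no names included
    {"zip": "99999", "dob": "1970-01-02", "diagnosis": "condition X"},
]

for voter in voter_roll:
    for record in health_records:
        if (voter["zip"], voter["dob"]) == (record["zip"], record["dob"]):
            print(voter["name"], "->", record["diagnosis"])
# -> J. Doe -> condition X
```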

Reduced costs don't simply facilitate existing uses; they facilitate new, lower-value activities. This is a rephrasing of the Jevons paradox: increased efficiency increases consumption. Trying to reduce consumption through greater efficiency is like fucking for virginity. Another characteristic is that many of these new uses are of very limited, or negative, social benefit --- very often fraud or predatory practices.
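
A minimal numeric sketch of that rephrasing, assuming demand responds to the effective price with constant elasticity; the functional form and numbers are illustrative, not a claim about any actual market:

```python
# Jevons paradox in arithmetic: demand responds to the effective
# price as price**(-elasticity); with elasticity > 1, an efficiency
# gain raises total resource use. Illustrative numbers only.
def resource_use(efficiency: float, elasticity: float) -> float:
    price = 1.0 / efficiency           # cost per unit of service
    demand = price ** (-elasticity)    # units of service consumed
    return demand / efficiency         # resource actually consumed

for eff in (1.0, 2.0, 4.0):
    print(f"{eff}x efficiency -> {resource_use(eff, 1.5):.2f}x resource use")
# 2x efficiency -> ~1.41x consumption; 4x -> 2x. More efficiency,
# more total consumption.
```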

Technology is far less an equaliser than a power multiplier, amplifying inequalities. Information technology especially so.

Data corrupts. Absolute data corrupts absolutely.

#data #InfoTech #surveillance #SurveillanceState #SurveillanceCapitalism #privacy #manipulation #DataAggregation #JevonsParadox #ActonsLaw #PaulBaran