Tuesday, May 27, 2025

Lapse in Judgment

On Sept. 26, 1983, Stanislav Yevgrafovich Petrov saved the world. An officer with the Soviet Air Defense Forces, he oversaw a computer-controlled network of satellites that monitored the skies for a possible nuclear strike from the United States. On that day the computer system reported five incoming missiles. Petrov double-checked the data: The probability of a nuclear attack was 100%.

The system was new and Petrov didn't entirely trust it. There was no confirmation from ground radar, and Petrov thought that if the U.S. were going to strike, it would be all-out, not just five missiles. Nevertheless, his duty was to report the attack -- a duty reinforced by a command structure that could be brutally severe -- and if he did, the chances were good that Soviet leadership, given the very short time window, would order a retaliatory strike.

Petrov didn't report it. He exercised judgment and made a decision. A subsequent investigation uncovered a flaw in the Soviet early-detection system.

I learned of this episode recently when reading *A Web of Our Own Making* by Antón Barba-Kay. The book is a philosophical critique of our digital age and a hard look at what our digital realms are doing to us as people. There's much that's not good. One thing is the erosion of the willingness, and possibly the ability, of people to exercise judgment. Instead we're handing that chore over to computers and big data.

The allure is that a computer's judgment will be unbiased and fair. Human emotion and fallibility will not intrude. But what is the risk of deferring to computers? Beyond the fact that computers are themselves fallible and their datasets often imperfect, there's the reality that the less people exercise judgment, the worse at it they'll become (use it or lose it), and the more they'll feel the need to rely on technology to tell them what the correct decisions are. Barba-Kay doesn't hedge:

"Data analysis has eclipsed human judgment as our shared conception of what's most authoritative."

In Petrov's time, the dilemma was clear cut. The technology gave him an answer and he had to decide whether or not to trust it. It was a moment that demanded a decision, one with potentially catastrophic consequences. Today's deferral of decision-making to algorithms and datasets is piecemeal: we parcel out our decisions in bits. And maybe that's more pernicious.

A while back, my brother and I were playing with AI, trying to produce an image we wanted for a role-playing game. Despite using templates and elaborate prompt instructions, the best we could do was only kind of close to what we wanted. The temptation is to settle for that instead of the more challenging and time-consuming work of creating the image ourselves. That kind of temptation permeates people's experience with the digital realm. Our choices are defined there, and we tend to settle for what's available.

AI is, of course, the amazing, splashy, scary new thing but digital services have been funneling people's choices for some time by learning what they like (at a given moment) and reinforcing those preferences. "They give us what they make us feel we want," writes Barba-Kay.

It ends up, he says, as a kind of abdication of responsibility to companies and the data they've collected. They exist to do our thinking for us.

Online chats and social media message boards have their own dynamics that erode thoughtfulness. The like feature becomes an addictive pursuit where folks post to please and attract attention, a further channeling of behavior toward narrow, self-supporting avenues of thought. These venues favor shock or sensation over careful thinking. Strong emotions are exciting; circumspection is boring.

The allure of buying things to satisfy immediate wants, the draw of social media cuteness, and the drama of chatroom shouting matches -- it all creates an emphasis on the new and the buzz of the now. The scope of our digital world is short-term, Barba-Kay says, and it erodes society's capacity for long-term thinking. Addressing problems like environmental degradation in a meaningful way has become essentially impossible. The speed of the digital world and the slow pace of climate change are, as Barba-Kay says, "temporal opposites."

"So long as the new is normal, no sustained vision can accumulate into settled practice."

Then there's the fracturing of people into groups based on interests and viewpoints. Online we can avoid folks who don't share our perspectives. Barba-Kay calls the resulting groupings of like-minded people the "social imaginary," and they tend to erode the skills we need to get along in actual, physical communities where "we must inescapably work out our differences about shared concerns."

Actual communities, as opposed to virtual ones, have the tendency to soften divisions and act as anchors against extreme views. Virtual communities tend to encourage divisions and reinforce extremes.

Then there's the kind of slow divorce from mundane virtues of non-digital life. One recent study, Barba-Kay points out, shows that people are more likely to be honest when they write things down on paper, as opposed to texting, emails, chats, or other digital communication. The ink on paper has a "presence" the digital realm cannot provide, and it's a presence people tend to instinctively respect. Online, the need to adhere to the constraints of honesty fades.

Participation in social media is a projecting of a ghostly self that's physically invulnerable. This projected self is a kind of armor, and a license to say and do things you might not otherwise. It also offers a way to curate your projected personality. To do that in face-to-face socializing takes more effort and can never be wholly complete. Some of what's less savory always slips through. But knowing that people see you as something other than your idealized self is important, if for nothing else than as an antidote to self-delusion. Online, our idealized selves can feel like our real selves.

Strangely, this dynamic can make us more psychologically vulnerable. The lack of constraint can lead to ugly, vicious comments that almost no one would make in face-to-face settings. The sting of those can create surprising anxiety even when it's clear the comments are unfounded. The glow of one's online persona is dimmed.

Ultimately, I believe, and so does Barba-Kay, that the non-digital world is far richer than the virtual one can ever be. Technology is at its best when it enables us to make our lives richer, to see more. Digital technology certainly can, and does, do this. But that aspect has almost become an afterthought. Instead, digital technology has evolved into a siren call beckoning us away from the substantive world into a virtual one where we end up seeing less.

"Whatever boon it [digital technology] offers -- and those are untold and undeniable -- we are, in my view, continuing to trade away the richest possibilities of human vision, the kinds of ideals that justify our very existence, for the lesser comforts of safe and predictable rule by data."

Perhaps the fundamental lapse in judgment is the belief that technology can make us better. It can't. It can make us better off, but not better.