It was a steady human hand that saved us all from dying in the early morning hours of Sept. 26, 1983. In the Russian mind of Stanislav Petrov, a warm thought against the cold threat of nuclear war took the form of a simple, knowing “No. I will not comply.”
The sophisticated military computer systems powering Stanislav’s screen were aggressively asserting that five nuclear-armed intercontinental ballistic missiles were inbound, launched from the United States. “The siren howled, but I just sat there for a few seconds, staring at the big, back-lit, red screen with the word ‘launch’ on it,” he later recounted in an interview with the BBC. Abandoning protocol, which required him to phone leadership immediately so that nuclear retaliation could begin, Stanislav did the unthinkable: He slowed down to think. He checked to see if there had been a computer error. It was confirmed. There had been a glitch in the machine.
There were no nukes screaming across the open sea to knock our planet’s assembly of nations into history’s gutters of Armageddon. For that to have been true, two human officers in the United States would have had to have the gall to separately, simultaneously insert and turn their doomsday keys to initiate the launch of missiles. That requirement, known as the “Two-Man Rule,” and the unexpected, audacious gut instinct of Stanislav Petrov to not pick up the phone are two of the human fail-safes embedded in a system otherwise designed to burn away the atmosphere you and I breathe. These are the humans in the loop.
Human-In-The-Loop, or HITL for short, is the technical term for the design decision of placing humans in a mechanical system that could otherwise be fully automated. In the recent explosion of artificial intelligence-based businesses, the phrase “human-in-the-loop” has leapt from military protocol and deep engineering methodology to take on a new halo of Silicon Valley tech jargon. It is coming to represent a sales pitch, brand promise, design ethos, and career opportunity. “Incorporating human judgment is crucial,” reads a recent blog post from Amazon Web Services that serves as an example of this trend, “especially in complex and high-risk decision-making scenarios. This involves building a human-in-the-loop process where humans play an active role in decision making alongside the AI system.”
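In software terms, the process the AWS post describes boils down to a gate: the automated system acts on its own only when it is confident, and otherwise waits for a person. Below is a minimal, hypothetical sketch in Python; the names (`decide`, `ask_human`) and the 0.95 threshold are illustrative assumptions, not drawn from AWS or any real vendor’s API.

```python
# A minimal, hypothetical sketch of a human-in-the-loop gate.
# Nothing here is a real vendor API; the names and threshold are illustrative.

AUTO_APPROVE_THRESHOLD = 0.95  # assumed: below this confidence, a human decides


def model_decision(case: dict) -> tuple[str, float]:
    """Stand-in for an AI system's recommendation and its confidence score."""
    # A real system would call a trained model here.
    return "flag_for_action", 0.62


def ask_human(case: dict, recommendation: str) -> str:
    """The human in the loop: free to approve, or to say no and ask why."""
    answer = input(f"Model recommends {recommendation!r} for {case}. Approve? [y/N] ")
    return recommendation if answer.strip().lower() == "y" else "hold_for_review"


def decide(case: dict) -> str:
    recommendation, confidence = model_decision(case)
    if confidence >= AUTO_APPROVE_THRESHOLD:
        return recommendation  # low-risk path: fully automated
    return ask_human(case, recommendation)  # high-risk path: a person decides


if __name__ == "__main__":
    print(decide({"id": 42, "source": "satellite feed"}))
```

The essay keeps circling that last branch: whether the system is ever allowed to say “yes” without a person in the room.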
“Be the Human In The Loop,” proclaims Wharton Business School in the title of a session for its Global Youth Program Cross-Program Speaker Series, which features Wharton professors lecturing to the hundreds of high school students gathered on campus for summer business programs. Across the continent, Stanford’s SLAC National Accelerator Laboratory is profiled in “A day in the life of a human-in-the-loop engineer” in Symmetry Magazine, a joint media production with the U.S. Department of Energy, highlighting cutting-edge professional development opportunities at the lab. Meanwhile, MIT’s social impact organization, Solve, cites HITL as an opportunity to achieve global economic equality by “providing refugees with work and skills to drive forward the artificial intelligence industry.”
These academic institutions are urgently mirroring trends driven by tech’s private sector, always quick to turn a new phrase into a business necessity. Companies interested in adding the assurance, or selling point, of HITL to their AI-based systems can work with firms like humansintheloop.org, which offers to “close the loop for your AI model by automating the retraining process with human feedback” through a variety of services provided by, well, humans. It’s an interesting turn of events when humans become the product that assists the technology. Perhaps it’s the logical continuation of our digital evolution.
It takes an intercontinental ballistic missile roughly 15 minutes to reach across the planet. How long does it take for a TikTok dance to go viral? Our modern world is full of loops we humans tend to get caught in. The personification of “the algorithm” by social media users has become shorthand to describe their sense of being used. Cultural resentment over being made to dance like monkeys or tricked into pushing buttons on screens has boiled over into moments of full-blown anger that look and feel like sparks of a modern global labor movement. Look at the rise of “anti-work” and the rage-quitting of day jobs in the West or at “tang ping” in China, the subtle act of simply lying down and doing nothing as protest.
Perhaps these loops are the wheels of capitalism turning at ever-increasing speeds. How quickly will they turn and what of our humanity will get caught in them?
I’ve stood on the floor of a marketing conference, pulling my hair, audibly groaning, and gasping in protest alongside a crowd of fellow creative professionals as we heard one of the world’s foremost neuroscientists describe the frontiers of intelligence researchers are exploring to advance technology.
The human brain, it seems, has its unique advantages. It is capable of computing complexities that even our most advanced AI can’t quite achieve. Some researchers have supposedly seized on findings that human brain tissue may stay active for up to 10 hours after death and are theorizing ways to transport brain cells to server rooms, where they could be preserved and wired in to lend their uniquely complex processing power to computer systems. Observing the horror on our faces, the neuroscientist laughed and assured us that our Matrix nightmares can wait: scientists are also researching ways to use lab-grown brain tissue to create new systems called “biocomputers” that won’t require our dead bodies.
Maybe it was a joke to poke at the artistic sensibilities of the crowd already jittery over generative AI. Maybe it was a warning, or maybe a genuine pitch. To this day, I still can’t tell if he was sharing these visions for our brave new world with a grin or a sardonic smile.
There are some loops we just don’t want to be a part of.
Inside Japan’s Tokyo Detention House is a room with a trap door. This is the execution room where hundreds of prisoners sentenced to death have been hanged. The trap door is opened by one of three buttons housed in the next room. The system is designed so that when the prison’s staff members walk in to perform the dark ritual of death, each presses a button simultaneously and none knows who is personally responsible. The practice is echoed the world over in systems designed for capital punishment, including the distribution of blanks to firing squads and elaborate sequences of false switches for the delivery of lethal injections.
These are efforts to remove humans from the loop for systems that carry out decisions most people don’t want to make, but someone must. Some might say that is core to the purpose of technology: to do the things we humans won’t or can’t. Others might say that the purpose of technology is to be scaffolding for our human potential, to extend beyond our reach so that we may grow into a greater reality. It’s what we choose to do with the humanity in our hands that decides.
Technology needs human hands on its buttons because someone must press them to activate the “yes” in its design, but it is only our humanity that knows the power of saying “no” and the importance of asking “why?”
This is not the first time our species has confronted grave ethical questions of how to handle the power of our creations. It won’t be the last. This is the loop of loops we humans are destined to be in. How we navigate each loop’s choices is each generation’s challenge to decide.