- While technology has transformed the world we live in, the debate over its impact on employment has evolved very little.
- The discussions we are having now have been going on for hundreds of years, since the reign of Queen Elizabeth I.
“The future of work” is suddenly everywhere—which is an interesting feat for a 500-year-old discussion.
Today many worry that strides in artificial intelligence—new machines that can parse legal documents, diagnose diseases, drive trucks, and complete other jobs once thought too complex to automate—will result in widespread unemployment, just as, in the late 16th century, Queen Elizabeth I supposedly denied a patent to the inventor of a new automated knitting machine because she feared it would take the jobs of “young maidens who obtain their daily bread by knitting.”
Technology has, of course, transformed the world since the 16th century. But the debate around how it will impact jobs in the future has evolved remarkably little in the process.
As is the case today, pessimists throughout history have fretted about the impact of new inventions on the value of human labor, while optimists have pointed to past examples of how technology has improved the human condition. In our current discussion, there’s also a common counter-argument to this point. “Those weren’t thinking machines,” summarizes Vasant Dhar, a data scientist and professor at NYU. “This is not the same as last time, not the same as previous kinds of technology that changed the nature of work.”
But this, too, is not a unique argument. In 1933, the New York Times argued that the technology of the era would have unique consequences in a story headlined “the threat of the machine age”:
“We are frightened today because in the lessons of the past there is no reassurance. The past never knew such momentum, such vibration, such dislocation, such jarring transitions as we are in for.”
We’ve been having the same conversation for hundreds of years. Here are some highlights from the last 150 of them.
A group of New York City tailors threatened to strike unless their employer stopped using sewing machines. A newspaper article detailed the inevitable consequences of the machines for seamstresses:
“Oh dear,” said a poor girl, as she held up a salt bag to my view, “this was sewed by a machine; it is too bad; poor girls will soon have nothing to do. I know sights and sights of girls who used to make their living by sewing these bags and other coarse things, and now they are all out of work; it is too bad.”
The same article concluded that the women whose labors could be replicated by machines should direct their talents to higher pursuits. “I do not think the time will ever come when dresses will be cut and fitted by machines, and they grow altogether more and more elaborate in their forms and finishings,” it noted.
“Shovelers” who handled grain that arrived at US ports formed a protective union that refused to work with employers who used grain elevators.
The Metropolitan Record, a newspaper, scolded “very many young women” for fearing the arrival of French’s Conical Washing-Machine. “This machine will lighten the labor, save the hands, and relieve many of the wearing and disagreeable features of hand-washing, but is not designed to, and will not, take the place of a single young woman at service, we feel confident.”
England’s “Red Flag Act” required a person carrying a flag to walk in front of steam-powered road locomotives, so frightening was the prospect of a machine choosing its route without the guidance of a horse’s intelligence. The red flag rule was later applied to automobiles as well.
“We are just at the beginning of the revolution,” said Raymond B. Fosdick, who would later become president of the Rockefeller Foundation, during a commencement address at Wellesley College in 1922. “We could not stop it if we would. It is advancing by leaps and bounds, gaining in impetus with each year. It is giving us more machines, faster machines, machines increasingly more intricate and complex. Life in the future will be speeded up infinitely beyond the present.”
He asked: “Can education run fast enough, not only to overcome the lead which science has obtained, but to keep abreast in the race?”
Economist John Maynard Keynes famously coined the term “technological unemployment.” But he also argued that technological advances could eventually lead to an “age of leisure and of abundance,” and predicted that we would one day work a 15-hour week—not out of necessity, but because we wouldn’t know what else to do with ourselves.
In 1932, a final report on social trends presented to US president Herbert Hoover used Keynes’ term. “There are so many new inventions indicating displacement of labor that technological unemployment may be an even more serious problem of the near future than it is now,” it concluded.
Ford replaced an original engine assembly line, where each person did a special job as a car rolled down the line, with an automated control that performed more than 500 operations, “without the touch of a hand,” according to the New York Times.
“A few men at a control board can direct an operation that formerly took hundreds or thousands of workers,” noted the Times in another story, echoing a fear for jobs that persists today. It also repeated the same optimistic argument that many people use today to counter fears of automation: “The advent of the horseless carriage struck a mortal blow at the carriage industry, and the harness manufacturers and even the faithful horse, but it created many thousands of new jobs making, selling and servicing automobiles.”
“The rise in unemployment has raised some new alarms around an old scare word: automation,” wrote Time Magazine in 1961.
In 1964, US president Lyndon Johnson set up a “National Commission on Technology, Automation, and Economic Progress.” “If we understand it, if we plan for it, if we apply it well, automation will not be a job destroyer or a family displacer,” he said. “Instead, it can remove dullness from the work of man and provide him with more than man has ever had before.”
In 1992, the Miami Herald profiled a photo stripper who had become “a victim to modern technology.” Mortgage originators contemplated impending obsolescence as computer systems collected borrower data and offered guidance on underwriting, appraisal, and loan products. Newsweek summed up the general outlook with a conclusion that does not look so different from those being made two decades later: “The hot growth areas: health care and computer-related work,” it wrote. “Things look less rosy for bookkeepers, typists, copy-machine operators—and anyone whose job can be vaporized by automation.”
In 1995, Jeremy Rifkin published a bestselling book, The End of Work, in which he argued there was no reason humans would need to keep working as many hours in an automated future.
Businessweek put the future of work on its cover in 2007, and Time did the same in 2009. Roy Bahat, the head of Bloomberg’s work-focused venture arm, calls this the period of “the future of working for us.” Rather than technology replacing people, popular imagination was fascinated by remote work, videoconferencing, and collaboration software and the impact they could have on the structure of employment.
The Museum of Modern Art in New York hosted an exhibit with futuristic visions of what work might look like. It included a workstation in which every surface is a screen, and a pod, a “cushy cocoon” offering respite from the noise of an open office.
In 2013, researchers at Oxford published a study on “the future of employment” that predicted almost half of US occupations were at high risk of being automated. Three years later, the Obama administration, like the Hoover and Johnson administrations before it, published a report that detailed the possible impact of technology on jobs and the economy.
The current “future of work” debate has taken off, almost becoming a business in itself.
“There’s an unmistakable acceleration,” says Erik Brynjolfsson, an MIT professor and the co-author of several books about how technology impacts work and business. When he and his co-author, Andrew McAfee, launched their first book in 2014, he says, they were part of a small group that was talking about the topic. “Now it’s become more and more of a mainstream topic of discussion. A lot of it I think is that the evidence has been piling up more and more. And so you can’t deny it.”
“There’s the obvious evidence,” says McAfee, “and then the serious rigorous research about the hollowing out of the middle class, the polarization of the economy, the declines in entrepreneurship and mobility. We weren’t as aware of those things three and a half years ago as we are today.”
Many of the fears and reassurances prevalent in years long past remain today. “There will always be limits to how creative a computer can be,” read one HBR headline in 2017, much as the Metropolitan Record assured young women their jobs would not be replaced by the washing machine.
“Can self-driving cars ever really be safe?” AdAge asked earlier this year, echoing the concern that once surrounded automobiles. “Higher education must prepare for the rise of the machines,” urged Times Higher Education, among many others this year.
Elon Musk recently voiced the existential worry that a world without work could strip many people’s lives of meaning. “A lot of people derive meaning from employment,” the Tesla and SpaceX founder said when asked about “advice for the future” at the World Government Summit. “If there’s not a need for your labor, what is the meaning? Do you have meaning? Do you feel useless? That’s a much harder problem to deal with.”
Will the future of work look different this time?