The AI Revolution and The New Roaring '20s
Artificial intelligence will fundamentally reshape American life, work, and politics.
A new industrial revolution is on humanity’s horizon.
In 2023, predictions about artificial intelligence (A.I.) have spanned from utopian dreams to apocalyptic nightmares, while attempts to identify comparable moments in history stretch from the development of the atomic bomb to the ancient discovery of how to control fire.
The emergence of A.I. today appears reminiscent of the agricultural revolution 12,000 years ago. The agricultural revolution planted the seeds of human civilization, allowing for the emergence of cities, cultures, technological advancements and systems of government.
Agriculture revolutionized the hunter-gatherer way of life, drastically altering the human experience and the course of our history in ways that tribes of hunters and gatherers could have never imagined.
Today, A.I. sows the seeds for an equally monumental transformation of human society which we cannot yet fully grasp.
In the short term, A.I. has the potential to cause significant economic disruption and social unrest by displacing millions of jobs with automation.
The surge in popularity of A.I. tools like ChatGPT crystallized the extent of the looming economic transformation. Writers, accountants, software engineers, journalists, paralegals, graphic designers, customer service workers, teachers, artists and many more are questioning whether their jobs have expiration dates.
Some prominent voices in the field of A.I. downplay the risk of vast job loss, however, pointing to history to suggest that new jobs can be expected to replace the old ones.
Coupled with the potential for vast—though perhaps transitory—job loss is the emergence of convincing “deepfakes” which threaten to cloud our sense of reality and fuel political propaganda.
On April 25, the Republican National Committee released a 30-second advertisement crafted entirely using A.I. image and voice generation tools.
The R.N.C.’s advertisement featured fake reports of a Chinese attack on Taiwan, the collapse of hundreds of U.S. regional banks, the southern border being overrun by a surge of illegal immigrants, and the city of San Francisco being closed by officials due to rampant crime.
The advertisement disclosed its artificial origin with a small disclaimer in the top-left corner of the frame and an acknowledgement in the video’s description.
Media outlets and social media platforms may be tempted to stoke fear about A.I. technology in hopes of generating more advertising revenue, while politicians may use the technology themselves to create videos like the one above in hopes of generating campaign donations.
A.I. should not be a bogeyman. It carries the potential to substantially elevate living standards, accelerate economic productivity, advance scientific pursuits, and transform medical care.
Language models like ChatGPT that reduce the time we spend on mundane tasks ultimately benefit humanity.
On the other hand, significant economic upheaval coupled with a rise in persuasive political propaganda could throw the 2024 election into uncharted territory, particularly if the two major parties nominate the two candidates the American people have made abundantly clear they oppose: President Joe Biden and former President Donald Trump.
Harnessing the incredible potential of A.I. while safeguarding against the severe risks associated with it will be a delicate balancing act for humanity.
The real threats of job loss and democratic erosion loom large, as does the risk that fear of this technology could strangle innovation which would improve all of our lives.
Automation and Job Loss
A.I. optimists and pessimists alike agree that it is impossible to foresee what the world will look like on the other side of this industrial revolution.
History suggests that new jobs will replace the old ones, although many people remain unconvinced that all or most jobs replaced by automation are going to come back.
Senator Richard Blumenthal admitted during the Senate’s artificial intelligence hearing in May that his “biggest nightmare” is the “loss of huge numbers of jobs” as a result of automation.
In its March 2023 report on A.I., Goldman Sachs estimated that approximately two-thirds of jobs in the U.S. are “exposed to some degree of automation.” The report points to history as a solid indicator that lost jobs are likely to be replaced by new ones:
“Worker displacement from automation has historically been offset by the creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth.
The combination of significant labor cost savings, new job creation, and higher productivity for non-displaced workers raises the possibility of a productivity boom that raises economic growth substantially.” — Goldman Sachs Economics Research
Moreover, the report concluded that most jobs are only “partially exposed” to automation, meaning that many workers are likely to be “complemented rather than substituted” by A.I. technology.
New jobs, the report argues, will quickly emerge either directly in response to the emergence of A.I. or due to an overall increase in demand for labor generated by an anticipated boom in economic productivity.
OpenAI's CEO, Sam Altman, provided a similarly nuanced perspective during his Senate testimony:
“It’s important to understand and think about GPT-4 as a tool, not a creature … it’s a tool that people have a great deal of control over … GPT-4, and other systems like it, are good at doing tasks, not jobs.
GPT-4 will, I think, entirely automate away some jobs, and it will create new ones.”
Even temporary job losses can cause significant disruption to Americans’ livelihoods. Between 60 and 80 percent of Americans have reported living paycheck to paycheck in the past five years, suggesting that many Americans could face emergency situations soon after losing a job.
Gary Marcus, a professor emeritus of psychology and neural science at New York University and founder of machine learning company Geometric Intelligence, argued during the Senate’s hearing that history has limited value in predicting how revolutionary technology will transform our economy in the modern world:
“History is not a guarantee of the future. It has always been the case in the past that we have had more jobs … new jobs, new professions come in as new technologies come in.
I think this time is going to be different, and the real question is over what time scale? … I think in the long run, so-called ‘artificial general intelligence’ really will replace a large fraction of human jobs.”
The transformative nature of A.I. may call for a similar transformation in American politics.
Close to a century ago, President Franklin Roosevelt advocated his New Deal agenda as a complete overhaul of America’s economic system that was necessitated by the Great Depression. Many Americans consider automation to necessitate a similarly robust agenda of reforms today.
Universal basic income (UBI) has gained particular popularity amid the rise of automation. Several leading tech figures, including Sam Altman, Elon Musk (Tesla, SpaceX, and Twitter CEO), Mark Zuckerberg (Meta CEO), and Jack Dorsey (Bluesky co-founder and former Twitter CEO), have come out in support of the ambitious idea.
Andrew Yang, founder of Venture for America and the Forward Party, grounded his signature proposal during his 2020 presidential campaign—a UBI of $1,000 a month for every adult U.S. citizen—in a belief that millions of Americans would see their incomes vanish amid the spread of automation in the 2020s and beyond.
Yang cites manufacturing workers, truck drivers, retail workers, fast food workers and call center workers as examples of major sources of employment that industries are looking to automate in the near future.
Automation remains in its early stages of development and adoption. It is not going to eliminate millions of jobs next month, but its potential to reshape broad swaths of our economy could destabilize millions of Americans’ livelihoods in a relatively short period of time.
The first autonomous truck completed a cross-country delivery in December 2019, transporting 40,000 pounds of butter, and McDonald’s opened its first location equipped to serve customers without any human employees in December 2022.
A series of recent layoffs at leading U.S. tech companies has eliminated over 150,000 jobs, and the reversal of pandemic-era hiring sprees in the tech sector tells only a slice of the story: many companies have cited A.I. technology as a reason, major or minor, for the cuts.
Alphabet, the parent company of Google, announced a cut of 12,000 jobs in January in order to refocus on A.I. as a top priority. IBM announced earlier this month a pause on hiring for 7,800 positions that it expects A.I. to fill instead. Amazon—which uses hundreds of thousands of robots in its warehouses—announced plans to eliminate 18,000 jobs earlier this year. Dropbox announced the elimination of 500 jobs in order to keep the company at the “forefront of the A.I. era.”
While a tremendous productivity boom is on America's horizon, the question remains whether it will be a boom for all Americans or for a privileged few.
Like the agricultural revolution 12,000 years ago, automation is poised to upend and transform the nature of human work and economies.
Automation will ultimately free us from a significant amount of mundane or physically punishing work, but we venture into the unknown when considering the impact on Americans’ livelihoods and the new kinds of jobs which will emerge.
Supporters of UBI are convinced that guaranteed incomes will be necessary in the post-automation world, and that such an ambitious policy is vital to ensuring that the automation boom is enjoyed by all Americans.
A.I. and the 2024 Election
The echoes of the Roaring Twenties and the Great Depression a century ago remind us that rapid progress can lead to both prosperity and upheaval.
Two prominent facets of A.I. technology which now command public attention for their disruptive potential are the proliferation of unnervingly realistic deepfake videos and the capability of generative language models to manipulate user behavior, including voting behavior.
At the same time, an increase in A.I.-generated news risks pushing politicians to restrict freedom of speech in the name of protecting people from misinformation and manipulation. The unprecedented risks associated with A.I. technology could electrify forces of fear and polarization in a political climate already marked by distrust.
During the Senate hearing, a consensus emerged that A.I.-generated content should be required to carry “nutrition labels” which indicate its artificiality.
In principle, these labels would allow A.I.-created content, including creative and parody works, to be freely produced without further distorting our perception of reality beyond the ways in which social media already has.
A.I. is poised to unlock new creative avenues to a generation of young creators. The entrepreneurial opportunities presented by A.I., along with creators' freedom of expression, must not be lost to a reactionary wave of government restrictions.
A.I. is also poised to reshape our media environment in ways that are less concerning than deepfakes. Americans already broadly distrust traditional news sources today, and A.I. technology could ultimately revolutionize media just as the widespread adoption of the radio during the 1920s transformed politics and culture.
There was some agreement among senators and experts that an A.I. regulatory agency is necessary. Both Sam Altman and Gary Marcus advocated for companies to make a case for their product’s safety in order to obtain a license to operate, a license which the agency could revoke.
However, Christina Montgomery, IBM’s Chief Privacy and Trust Officer and Chair of its A.I. Ethics Board, questioned if establishing a new agency might complicate regulatory efforts.
"Precision regulation" is Montgomery's approach to A.I. regulation. She pushed for regulation that targets specific, high-risk A.I. use cases, rather than implementing sweeping regulations across the entire industry.
High-risk models—including those used for election information, medical information, and psychiatric advice—should, in her view, adhere to strict transparency requirements, including full disclosure of the data used to train the model.
Senator Josh Hawley voiced concerns during the hearing over the potential for generative language models to create misleading or manipulative election information, noting that voters often turn to Google to learn about candidates. Sam Altman’s response was blunt:
“It’s one of my areas of greatest concern, the … general ability of these models to manipulate, to persuade, and to provide sort of one-on-one, interactive disinformation.”
The information provided to a user by a generative language model, and the values inherent in the model's responses, depend heavily on the training data set. Transparency, therefore, became a key theme of the hearing, though it was also emphasized because of an immediate concern with a lack of transparency from OpenAI regarding GPT-4’s training data:
“One of the things that I’m most concerned about with GPT-4 is that we don’t know what it’s trained on, I guess Sam [Altman] knows but the rest of us do not, and what it is trained on has consequences for … the biases of the system.
It makes a difference if they’re trained on the Wall Street Journal as opposed to the New York Times, or Reddit.” — Professor Gary Marcus
Selective training data could theoretically be used to instill certain ideologies in generative language models, allowing for the creation of personalized echo chambers.
Generative language models can also be used to create fake articles and news. NewsGuard recently published a report identifying 49 amateur news websites “entirely or mostly generated” by A.I., all posting “vast amounts of clickbait articles.”
In the context of the 2024 presidential election, it is not difficult to imagine how a failure to impose some regulations on A.I. could saturate the democratic process with various kinds of political propaganda.
Nor is it difficult to imagine how regulatory overreach in response to A.I. could seize on public fear and erode our liberties. A.I. will offer independent journalists new tools to reach new audiences, tools which must not be restricted if freedom of speech is to survive and thrive.
Navigating the age of A.I. will have few simple answers. Risks appear to lie around every corner, and beyond all of those risks lies a transformed world that we cannot yet imagine.
The history of the atomic bomb offers a potent lesson for us today.
The U.S. and the Soviet Union faced the unimaginable fear of stepping to the precipice of nuclear war on a number of occasions.
One such instance occurred on a fall morning in 1983, when Soviet lieutenant colonel Stanislav Petrov’s computer system falsely alerted him that the U.S. had launched a nuclear strike. He had only minutes to decide whether to report the attack to his superiors or trust his instincts that it was a false alarm.
Petrov decided not to tell his superiors. Had he been blinded by fear in that instant and reported a nuclear attack, that day could have gone quite differently.
The emergence of A.I. puts all of us into Petrov’s shoes. Fear of the risks that come with A.I. can be blinding, and that blindness may be the only thing we truly need to fear.
“The only thing we have to fear is fear itself.” — President Franklin Roosevelt
Part II of Union Forward’s artificial intelligence newsletter will discuss ‘Democracy In The Age of A.I.’ and the broad questions that surround it — Coming soon.
Our free and independent work relies on your support.
Union Forward is not backed by major donors or ad revenue.
We’re backed by patriots who don’t recognize America in the 21st century, by Gen Z students who are distrustful of the society they grew up in, by parents who are concerned with the country they are leaving for their children.
Our work here is 100% funded by readers like you who subscribe and who share our stories with friends, family, and on social media.