[youtube https://youtu.be/Fqve-yuHbb4]
Will innovation in Artificial Intelligence improve the human condition, or will it permanently displace America’s labor force, destroy the dignity of hard work, and compromise societal ethics?
This was the subject of the fourth collegiate forum at the Nixon Presidential Library on April 9, co-presented by the Nixon Foundation and U.S. Vigilance, a non-partisan collaboration dedicated to reasoned debate and responsible citizenship.
Participating students represented the Claremont consortium of schools. They included Matthew Ludlam, a sophomore at Pomona College studying neuroscience; William Gu, also a sophomore at Pomona College studying economics and psychology; and John Church, a freshman at Claremont McKenna College studying philosophy and computer science. Robert Zwerling, principal of U.S. Vigilance and an engineer and business executive specializing in Artificial Intelligence innovation, moderated the discussion.
Overview of Artificial Intelligence
Artificial Intelligence, or AI, is defined by the Merriam-Webster dictionary as the capability of a machine to imitate intelligent human behavior. Its origins date as early as 1950, when computer scientist Alan Turing conceived the idea of a test for thinking machines. A decade later, General Motors introduced “Unimate,” which began replacing human beings on the assembly line.
After a thirty-year interlude, the technological boom of the late 1990s revived innovation. In 1997, IBM’s “Deep Blue,” a chess-playing computer, beat world champion Garry Kasparov. More than a decade later, the company’s Watson became a Jeopardy! champion.
Today, the world is experiencing a robotics revolution in engineering, finance, space, medicine, and transportation.
It has even branched into everyday consumer products and services. Users of Apple products can submit queries by voice to “Siri,” and in 2014 Amazon launched Alexa, a speaker-like device that can complete an enormous range of tasks, from playing music and reading the news to controlling home thermostats and security systems. Uber and Google are now experimenting with self-driving vehicles.
Will AI Replace Humans in White Collar Jobs?
While automation has taken over many grinding and dangerous factory jobs, AI is now taking aim at white-collar jobs, especially rules-based professions like accounting.
The students said they were tailoring their respective educational experiences to adapt to this long-term reality. They also hope to seize the opportunity.
“In my dad’s generation the dream was to become an astronaut and explore space,” said Church, who will join an AI project this summer that analyzes and predicts small-cap stocks. “Artificial intelligence is the greatest frontier of our generation.”
For Ludlam, who hopes to embark on a career in neurosurgery, AI will complement what only humans can do in the interactive part of surgery.
Ludlam emphasized that students should be constantly learning and making themselves valuable in the job market, so as not to be caught off guard by massive technological change.
Can Universal Basic Income (UBI) Help Humans Adapt to the AI Revolution?
Zwerling asked the panelists whether the job market should be protected from innovation in AI, and whether Universal Basic Income (UBI) was a viable policy to accommodate displaced workers.
UBI refers to an economic policy in which the government provides a minimum income to all of its citizens. In countries where it has been implemented, the rich are taxed on their wealth; the unemployed are often offered job-training programs; and when they do find work, their new income is added to what they are already receiving from the government.
In 1969, the Nixon administration devised a version of UBI which it called the Family Assistance Plan. Though it never passed Congress, the policy was aimed at the working poor. It would have given heads of households a Federal minimum income, which individual states could supplement. As workers earned more income, a negative income tax would take effect: a new worker could retain the first $60 per month without any reduction in benefits, and benefits would then be reduced by 50 cents for each additional dollar earned.
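To make the phase-out concrete, here is a minimal sketch in Python of the benefit arithmetic described above. The $60 monthly disregard and the 50-cent reduction per dollar come from the plan as summarized here; the base benefit amount and the earnings figure in the example are hypothetical placeholders.

```python
def fap_monthly_benefit(base_benefit: float, monthly_earnings: float) -> float:
    """Reduced monthly benefit under the plan's negative income tax schedule."""
    disregard = 60.0       # the first $60 earned each month is kept with no reduction
    phase_out_rate = 0.5   # benefits fall by 50 cents for each additional dollar earned
    countable_earnings = max(0.0, monthly_earnings - disregard)
    return max(0.0, base_benefit - phase_out_rate * countable_earnings)

# Hypothetical example: against an assumed $100 monthly base benefit, a worker
# earning $160 keeps the first $60 free; the remaining $100 cuts the benefit by
# $50, leaving a $50 benefit on top of the $160 in wages.
print(fap_monthly_benefit(base_benefit=100.0, monthly_earnings=160.0))  # 50.0
```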
Though it was projected to be more costly than traditional welfare systems, for Nixon, it was “designed to correct the condition it deals with” and provide “equality of treatment across the Nation, a work requirement, and a work incentive.”
The panel was divided on the concept.
Gu didn’t believe that it would provide a work incentive; most people, he argued, wouldn’t use it to enrich themselves, but would instead become lazy and simply get by on the minimum income.
“There is very little innovation in those countries [with UBI],” Gu maintained.
Church was more optimistic, contending that it provided people the security to work more. Furthermore, he asserted that if the unemployed had money in their pockets, their spending would be a boon to the economy.
Ludlam believed the country needed to be realistic about UBI. While it was interesting in theory, it shouldn’t be scaled up before AI becomes a significant factor in the labor market. He also warned that AI shouldn’t be subjected to overregulation, as innovation could lead to more jobs in the future.
Ethical Considerations and AI’s Future Impact on Labor
For Gu and Church, the optimistic view is that AI could give rise to a more leisurely society, like those portrayed in Jane Austen novels. People would be able to enjoy life more and pursue crafts in the arts and humanities.
While Ludlam agreed that a goal of life should be personal edification, he added that people have a natural inclination for work, and that their desire to feel useful needs to be satisfied.
He also said that AI would ultimately lead society toward a meritocratic system, and break down barriers of gender, class, and race. With the assistance of thinking machines, the objective would be to complete the task at hand, thus eliminating cultural biases.
Gu disagreed, contending that AI will only make work simpler.
“Why are you going to hire the best person if you can just hire an AI?” he asked.
Church added that AI wouldn’t ultimately eliminate cultural biases, as AI systems reflect the attitudes and morals of the people who program them.
A member of the audience raised the incident in Tempe, Arizona, where a driverless vehicle designed by Uber struck and killed a pedestrian, asking the panelists, “Who is liable, when no one is liable?”
Gu replied, “If an AI makes a mistake you sue the company, and that incentivizes them to build a better product.”
“In certain situations it’s somewhat morally ambiguous,” Church followed.
“None of us have the experience to answer that fully. It’s almost as if we could design the perfect person; how could we ever agree on what their morals should be?”