“Businesses are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated,” Public Citizen warns. “History offers no reason to believe that corporations can self-regulate away the known risks.”
“Until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause.”
So says a report on the dangers of artificial intelligence (AI) published Tuesday by Public Citizen. Titled Sorry in Advance! Rapid Rush to Deploy Generative AI Risks a Wide Array of Automated Harms, the analysis by researchers Rick Claypool and Cheyenne Hunt aims to “reframe the conversation around generative AI to ensure that the public and policymakers have a say in how these new technologies might upend our lives.”
Following the November release of OpenAI’s ChatGPT, generative AI tools have been receiving “a huge amount of buzz—especially among the Big Tech corporations best positioned to profit from them,” the report notes. “The most enthusiastic boosters say AI will change the world in ways that make everyone rich—and some detractors say it could kill us all. Separate from frightening threats that may materialize as the technology evolves are real-world harms the rush to release and monetize these tools can cause—and, in many cases, is already causing.”
Claypool and Hunt categorized these harms into “five broad areas of concern”:
- Damaging Democracy: Misinformation-spreading spambots aren’t new, but generative AI tools make it easy for bad actors to mass-produce deceptive political content. Increasingly powerful audio and video production AI tools are making authentic content harder to distinguish from synthetic content.
- Consumer Concerns: Businesses trying to maximize profits using generative AI are using these tools to gobble up user data, manipulate consumers, and concentrate advantages among the biggest corporations. Scammers are using them to engage in increasingly sophisticated rip-off schemes.
- Worsening Inequality: Generative AI tools risk perpetuating and exacerbating systemic biases such as racism and sexism. They give bullies and abusers new ways to harm victims and, if deployed widely, risk significantly accelerating economic inequality.
- Undermining Worker Rights: Companies developing AI tools use texts and images created by humans to train their models—and employ low-wage workers abroad to help filter out disturbing and offensive content. Automating media creation, as some AI does, risks deskilling and replacing media production work performed by humans.
- Environmental Concerns: Training and maintaining generative AI tools requires significant expansions in computing power—demand that is growing faster than developers’ ability to offset it with efficiency advances. Mass deployment is expected to require that some of the biggest tech companies increase their computing power—and, thus, their carbon footprints—by four or five times.
In a statement, Public Citizen warned that “businesses are deploying potentially dangerous AI tools faster than their harms can be understood or mitigated.”
“History offers no reason to believe that corporations can self-regulate away the known risks—especially since many of these risks are as much a part of generative AI as they are of corporate greed,” the statement continues. “Businesses rushing to introduce these new technologies are gambling with people’s lives and livelihoods, and arguably with the very foundations of a free society and livable world.”
On Thursday, April 27, Public Citizen is hosting a hybrid in-person/Zoom conference in Washington, D.C., during which U.S. Rep. Ted Lieu (D-Calif.) and 10 other panelists will discuss the threats posed by AI and how to rein in the rapidly growing yet virtually unregulated industry. People interested in participating must register by this Friday.
Demands to regulate AI are mounting. Last month, Geoffrey Hinton, considered the “godfather of artificial intelligence,” compared the quickly advancing technology’s potential impacts to “the Industrial Revolution, or electricity, or maybe the wheel.”
Asked by CBS News’ Brook Silva-Braga about the possibility of the technology “wiping out humanity,” Hinton warned that “it’s not inconceivable.”
That frightening potential doesn’t necessarily lie with existing AI tools such as ChatGPT, but rather with what is called “artificial general intelligence” (AGI), through which computers develop and act on their own ideas.
“Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI,” Hinton told CBS News. “Now I think it may be 20 years or less.” Eventually, Hinton admitted that he wouldn’t rule out the possibility of AGI arriving within five years—a major departure from a few years ago when he “would have said, ‘No way.’”
“We have to think hard about how to control that,” said Hinton. Asked by Silva-Braga if that’s possible, Hinton said, “We don’t know, we haven’t been there yet, but we can try.”
The AI pioneer is far from alone. In February, OpenAI CEO Sam Altman wrote in a company blog post: “The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world.”
More than 26,000 people have signed a recently published open letter that calls for a six-month moratorium on training AI systems beyond the level of OpenAI’s latest chatbot, GPT-4, although Altman is not among them.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” says the letter.
While AGI may still be a few years away, Public Citizen’s new report makes clear that existing AI tools—including chatbots spewing lies, face-swapping apps generating fake videos, and cloned voices committing fraud—are already causing or threatening to cause serious harm, including intensifying inequality, undermining democracy, displacing workers, preying on consumers, and exacerbating the climate crisis.
These threats “are all very real and highly likely to occur if corporations are permitted to deploy generative AI without enforceable guardrails,” Claypool and Hunt wrote. “But there is nothing inevitable about them.”
Government regulation can block companies from deploying the technologies too quickly (or block them altogether if they prove unsafe). It can set standards to protect people from the risks. It can impose duties on companies using generative AI to avoid identifiable harms, respect the interests of communities and creators, pretest their technologies, take responsibility, and accept liability if things go wrong. It can demand that equity be built into the technologies. And it can insist that if generative AI does, in fact, increase productivity and displace workers, the economic benefits be shared with those harmed and not concentrated among a small circle of companies, executives, and investors.
Amid “growing regulatory interest” in an AI “accountability mechanism,” the Biden administration announced last week that it is seeking public input on measures that could be implemented to ensure that “AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”
According to Axios, Senate Majority Leader Chuck Schumer (D-N.Y.) is “taking early steps toward legislation to regulate artificial intelligence technology.”
In the words of Claypool and Hunt: “We need strong safeguards and government regulation—and we need them in place before corporations disseminate AI technology widely. Until then, we need a pause.”
This work is licensed under Creative Commons (CC BY-NC-ND 3.0)