EU Enacts Landmark Social Media Law to End Self-Regulation by Big Tech

“As the U.S. agonizes over misinformation and hate speech on social media and the harm it does to democracy,” said one journalist, the European Union passed the Digital Services Act “to tackle the problem.”

By Kenny Stancil  Published 4-23-2022 by Common Dreams

Photo: Jason Howie/flickr/CC

The European Union on Saturday agreed to a landmark law that seeks to reduce social media’s harmful effects by requiring Big Tech corporations to quash disinformation and illicit content on their platforms or else face multibillion-dollar fines.

The Digital Services Act (DSA) would compel Facebook, YouTube, TikTok, Twitter, and other platforms “to set up new policies and procedures to remove flagged hate speech, terrorist propaganda, and other material defined as illegal by countries within the European Union,” the New York Times reported.

“The law aims to end an era of self-regulation in which tech companies set their own policies about what content could stay up or be taken down,” the newspaper noted. “It stands out from other regulatory attempts by addressing online speech, an area that is largely off-limits in the United States because of First Amendment protections.”

Calling the legislation a “major milestone for E.U. citizens,” Thierry Breton, the bloc’s internal market commissioner, said that “the time of big online platforms behaving like they are ‘too big to care’ is coming to an end.”

The text of the DSA is not expected to be finalized for several weeks. In addition, final votes must still be taken, though policymakers don’t anticipate significant changes to the deal agreed upon by E.U. member states, parliamentarians, and executive officials.

According to The Guardian, the groundbreaking rules, which would take effect by 2024 at the latest, include:

  • Banning advertising aimed at children or based on sensitive data such as religion, gender, race, and political opinions;
  • Allowing E.U. governments to request removal of illegal content, including material that promotes terrorism, child sexual abuse, hate speech, and commercial scams;
  • Forcing social media platforms to allow users to flag illegal content in an “easy and effective way” so that it can be swiftly removed; and
  • Requiring online marketplaces like Amazon to set up similar systems for suspect products, such as counterfeit sneakers or unsafe toys.

If Big Tech firms violate the new rules, they could face financial penalties amounting to 6% of their annual global revenue and be ordered to alter their practices.

Bloomberg reported that “platforms would have to adhere to a code of conduct, allow enforcement agencies to examine the algorithms that decide what users see and report back on how they’re dealing with harmful material. If it’s found they’re not doing enough, they could be told to alter the algorithms. Additional powers to combat disinformation could be triggered during a crisis such as a war or a pandemic.”

As the Times reported:

The Digital Services Act is part of a one-two punch by the European Union to address the societal and economic effects of the tech giants. Last month, the 27-nation bloc agreed to a different sweeping law, the Digital Markets Act [DMA], to counter what regulators see as anti-competitive behavior by the biggest tech firms, including their grip over app stores, online advertising, and internet shopping.

Together, the new laws underscore how Europe is setting the standard for tech regulation globally. Frustrated by anti-competitive behavior, social media’s effect on elections and privacy-invading business models, officials spent more than a year negotiating policies that give them broad new powers to crack down on tech giants that are worth trillions of dollars and that are used by billions of people for communication, entertainment, payments, and news.

According to a new report from Corporate Europe Observatory and Global Witness, which obtained lobbying documents via freedom of information requests, Big Tech firms have dumped tens of millions of dollars into lobbying E.U. lawmakers since the DSA and the DMA were proposed in December 2020, ramping up spending in recent months in a last-minute bid to reshape the laws.

The approval of the new social media rules in Brussels, which came after 16 hours of negotiations, stands in sharp contrast to the situation in Washington.

Legislative action in the U.S. remains stalled even after a sweeping congressional probe, federal antitrust cases, and damning testimony from Facebook whistleblower Frances Haugen have exposed the deleterious consequences of Big Tech’s profit-maximizing business model, which hinges on surveillance advertising and other manipulative techniques that increase the amount of time people spend consuming content—sometimes pushing people toward dangerous conspiracy theories in the process.

Margrethe Vestager, executive vice president of the European Commission, the E.U.’s executive branch, has led the bloc’s recent efforts to regulate Big Tech. Social media platforms, she said, should be “transparent about their content moderation decisions, prevent dangerous disinformation from going viral, and avoid unsafe products being offered on marketplaces.”

The DSA “will be a model,” according to Alexandra Geese, a German Green Party member of the European Parliament who helped write the law. Geese claimed to have already spoken with lawmakers in Japan, India, and other nations about the E.U.’s new legislation.

Although Haugen, whose recommendations helped shape the DSA, has said the law could represent a “global gold standard” for minimizing social media-induced harm, Bloomberg reported that “previous frustrated efforts” in the E.U. and elsewhere to regulate Big Tech “suggest the hardest work is still to come.”

The news outlet added:

The social media giants will no longer be left to police themselves, though much depends on what the E.U. decides is harmful and how rigorously the new rules are enforced. A lot of “fake news” and the misinformation flagged by Haugen isn’t illegal and can’t be taken down unless it violates the platforms’ terms and conditions. The alternative is to stop objectionable content appearing in feeds. But the algorithms that decide what users see are complex, and there’s little precedent to guide the E.U.’s researchers when they start their work.

The E.U. will need to find the funds to employ hundreds of people to monitor the DSA and the DMA. And even heavy fines might simply be shrugged off by the cash-rich tech giants. National regulators have never come close to applying the maximum fines allowed in the E.U.’s current data rules. There are also technical obstacles. For example, how do you know that someone is too young to be targeted with ads without collecting data on them in the first place? The way the DSA is implemented will be up to the E.U.’s 27 member states, which all have different legal regimes. Their varying interpretations of what constitutes illegal hate speech could mean a post is taken down in Germany but left up in Denmark.

“Effective enforcement is absolutely key to the success of these new rules,” Agustín Reyna, director of legal and economic affairs at the European Consumer Organization, told the Times. “Great power comes with greater responsibility to ensure the biggest companies in the world are not able to bypass their obligations.”

This work is licensed under Creative Commons (CC BY-NC-ND 3.0).