Australia is considering hefty fines and even jail time for executives at social media companies who fail to remove violent content quickly. The proposal is one of the most sweeping crackdowns on tech companies’ content moderation efforts that policymakers in a democratic government have ever considered.
The new legislation, to be introduced this week, would fine companies up to 10 percent of their annual revenue and impose up to three years in jail on executives. It comes as Australian officials slammed social media companies such as Facebook for failing to offer immediate solutions after violent videos of the New Zealand shooting proliferated online.
“It should not just be a matter of just doing the right thing. It should be the law,” Australian Prime Minister Scott Morrison said in a statement. “And that is what my Government will be doing next week to force social media companies to get their act together and work with law enforcement and intelligence agencies to defuse the threat their technologies can present to the safety of Australians.”
These kinds of penalties are far tougher than anything that Congress is currently considering in the United States. But the pressure from countries such as Australia — and others in Europe — could force a broader debate in the U.S. on whether the government needs to play a greater role in regulating how content is policed online.
For now, though, it's contributing to a growing patchwork of laws around the globe as different countries set their own benchmarks for acceptable behavior by these U.S.-headquartered but globally operating companies. And that in itself could prove difficult for the companies to navigate.
Though countries in Europe have considered fines for social media companies that fail to remove violent content, Australia is raising the stakes by putting jail time on the table. As the Wall Street Journal's Christopher Mims noted:
"Picture Australia requesting extradition of Mark Zuckerberg: The country is exploring possible jail time for executives of social media companies that fail to police content like the Christchurch shooter's live stream." — Christopher Mims (@mims), March 27, 2019
Companies are increasingly concerned about the fragmented landscape -- and are calling for an international consensus on privacy standards and removal of harmful content. In an op-ed published in The Washington Post this weekend, Facebook chief executive Mark Zuckerberg said the world needs a "globally harmonized framework" as the U.S. considers its own privacy legislation.
Nick Clegg, Facebook's head of global affairs, told Bloomberg News yesterday that global regulators need to prevent a “Balkanization of the internet."
These calls come as Facebook is making a broader effort to engage with policymakers around the world. Zuckerberg is currently in Germany to meet with policymakers and is expected to visit Washington this year, though no specific plans have been set.
Industry isn’t alone in calling for more international standards. New Zealand Prime Minister Jacinda Ardern said on Thursday that some international consensus is needed on addressing violent content.
“Ultimately, we can all promote good rules locally, but these platforms are global,” she said, according to the New York Times.
But it would be very challenging to translate such a call into action. As Politico's Mark Scott notes, it's hard to tell where global policymakers would even start in regulating these international businesses:
"So again Zuck is right to say that something needs to be done at a transnational level — and something quick. But where do you start? At the G7? G20? UN? That all comes with serious downsides, notably the role that China/Russia would (legitimately) play in setting global rules." — Mark Scott (@markscott82), March 31, 2019
Australia's move is an indicator that global regulators increasingly want to restrict technology platforms like they do more traditional media companies.
“Mainstream media that broadcast such material would be putting their licence at risk and there is no reason why social media platforms should be treated any differently,” the Australian attorney-general Christian Porter said in a statement announcing the legislation.
And Australia is not alone in its desire for fines: The United Kingdom is considering fines of up to 4 percent of global revenue for failure to remove harmful content, and Germany has a law on the books that allows the government to fine companies for hate speech and other problematic content.
The collective moves could increase pressure for lawmakers to reexamine Section 230 of the Communications Decency Act, a law that essentially creates a different legal standard for Internet companies than traditional media businesses because it says they cannot be held liable for content others post on their platforms.
BITS: In India, WhatsApp and the government are struggling to quell an onslaught of misinformation ahead of the country's elections, the Wall Street Journal's Newley Purnell reports.
The proliferation of fake news on the messaging service ahead of the election provides a preview of the problems that could arise around the world as Zuckerberg reorients the social network toward privacy and doubles down on encrypted messaging products.
The messaging service is already “the default town square” in India, Purnell notes. Research firm Counterpoint estimates the service has 300 million users in the country — making it bigger than Facebook in the nation where mobile Internet use has exploded in recent years. WhatsApp is also a preferred tool for the country's political parties to blast tailored messages to voters.
“India is now the world’s cheapest country to spread fake news,” Counterpoint analyst Tarun Pathak told the Wall Street Journal.
NIBBLES: Facebook Chief Operating Officer Sheryl Sandberg said the social network is “exploring restrictions” for live video in a letter published by the New Zealand Herald this weekend. The company is under fire for not doing enough to stop the spread of a broadcast of the New Zealand shootings last month — which was initially shared via Facebook Live.
“First, we are exploring restrictions on who can go Live depending on factors such as prior Community Standard violations,” Sandberg wrote in the letter.
In addition to potentially restricting who can use the live video broadcast feature, Sandberg said the company is investing in better technology to quickly identify and remove violent videos. Facebook is also making changes to its review processes to improve its response time.
Sandberg also wrote that Facebook would take stronger steps to remove hate speech on its platform. She also said the company is providing support to four mental health organizations in New Zealand to raise awareness of their offerings.
BYTES: Wall Street is betting billions that Lyft and Uber won't need drivers, my colleagues Faiz Siddiqui and Greg Bensinger report.
The companies currently lose money on each ride, in part because of what they pay drivers. With autonomous vehicles rather than human drivers, they could cut the cost of each ride by three-quarters — a savings that could make the companies profitable.
But experts tell my colleagues the widespread arrival of that technology is at least a decade away -- raising questions about whether the companies can survive with heavy losses until this technology arrives.
"Uber and Lyft are the latest Silicon Valley companies to sell a suspended belief vision of the future: The technology that could make them juggernauts doesn’t exist yet," my colleagues wrote.