This week, Senator Joe Manchin announced that he won’t support HR 1, the sweeping election reform legislation that passed the House and has been languishing in the Senate, effectively torpedoing its passage. But policymakers shouldn’t scrap the bill entirely. For legislators who are serious about expanding platform liability to fight online misinformation, a few provisions hidden deep in HR 1 provide one of the best options for reform.
Many of the legislators who have been hesitant to support HR 1, including Senator Manchin, have professed a strong desire to regulate online misinformation, specifically calling for reform of Section 230 to expand tech platform liability. Absent from the debate around HR 1 is the fact that provisions buried within hundreds of pages of the bill’s dense legislative language would make tech platforms liable for one key type of online misinformation: voter suppression. Out of the dozens of proposals to reform Section 230, this section of HR 1 is one of the most promising.
J. Scott Babwah Brennen is the senior research associate at the Center on Science and Technology Policy at Duke University. Matt Perault is the director of the Center on Science and Technology Policy and an associate professor of the practice at Duke’s Sanford School of Public Policy.
HR 1 would expand platform liability by criminalizing voter suppression. While Section 230 makes it difficult to hold platforms liable for content they host in cases brought under state law or federal civil law, it does not bar suits based on federal criminal law. Section 230 provides essentially no shield in any case that uses federal criminal law as the basis for liability.
HR 1 cobbles together several previously introduced bills that seek to reform the election process. One of them, the Deceptive Practices and Voter Intimidation Prevention Act, would make it a federal crime to make false statements concerning the “time, place, or manner” of an election, the “qualifications for or restrictions on voter eligibility,” or public endorsements. Currently, no federal law prohibits these practices.
The bill was introduced in 2007 by then-Senator Barack Obama. At the time, Obama noted that efforts to intimidate and mislead “usually target voters living in minority or low-income neighborhoods.” He claimed the legislation would “ensure that for the first time, these incidents are fully investigated and that those found guilty are punished.” (The bill stalled soon after Obama began his presidential campaign.)
Although the bill was unveiled a decade before Russia’s Internet Research Agency and Macedonian teenagers became a routine feature of news headlines, it anticipated some of the challenges in online communication that we face today. If passed, it would be the first US federal law to include criminal penalties for spreading misinformation online.
Criminalizing voter suppression wouldn’t just expand platform liability for voting misinformation. It would also likely deter some people from using online misinformation campaigns to try to suppress the vote, since prosecutors could pursue cases against perpetrators who engage in deceptive practices. It would also give platforms a basis for working with law enforcement in voter suppression cases. While platforms regularly provide data in response to law enforcement requests today, they do so only after receiving a lawful request. Without an applicable law, no federal law enforcement authority can issue a lawful request, and platforms don’t have a legal basis for providing data. With a new law on the books, the government could request relevant data held by platforms, and platforms could comply.
This solution isn’t perfect. Critics would likely challenge the constitutionality of the law under the First Amendment. In the past, the Supreme Court has been skeptical of laws restricting election speech, though it has upheld laws needed to “protect voters from confusion and undue influence” and to “ensur[e] that an individual’s right to vote is not undermined by fraud in the election process.”
Legal cases against platforms would also face serious challenges. For a platform to be found liable, a prosecutor would need to establish that a statement was “materially false,” that the platform knew the statement was false, and that it had the “intent to impede or prevent another person from exercising the right to vote.” Proving all this would be difficult, particularly in cases where platforms were merely hosting content posted by a user.
Changing the law might also not dramatically change platform policies or behavior, since several platforms already prohibit voter suppression. Twitter, for instance, forbids “posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process.”