Lawmakers from opposite sides of the aisle are looking to sunset Section 230 of the Communications Decency Act, arguing that it has “outlived its usefulness.” House Energy and Commerce Committee Chair Cathy McMorris Rodgers and ranking member Frank Pallone, Jr. have released bipartisan draft legislation that would render the provision ineffective after December 31, 2025. In an op-ed the lawmakers wrote for The Wall Street Journal, they acknowledged that Section 230 “helped shepherd the internet from the ‘you’ve got mail’ era into today’s global nexus of communication and commerce.” However, they said that big tech companies are now exploiting the same law to “shield them from any responsibility or accountability as their platforms inflict immense harm on Americans, especially children.”

They added that the lawmakers who previously tried to address issues with Section 230 didn’t succeed because tech companies refused any meaningful cooperation. Their bill would compel tech companies to work with government officials for 18 months to develop and enact a new legal framework to replace the current version of Section 230. The new law would still allow for free speech and innovation, but it would also encourage the companies “to be good stewards of their platforms.” Rodgers and Pallone said that their bill would give companies the choice between ensuring the internet is “a safe, healthy place for good” and losing their Section 230 protections altogether.

Section 230 shields online publishers from liability for content posted by their users. Companies like Meta and Google have repeatedly used it in the past to get lawsuits dismissed, but it has come under intense scrutiny in recent years. Last year, a bipartisan group of senators introduced a bill that would amend the section to require big platforms to pull down content within four days if a court deemed it illegal. Another bipartisan group also proposed a “No Section 230 Immunity for AI Act,” which seeks to hold companies like OpenAI liable for harmful content, such as deepfake images or audio created to ruin somebody’s reputation.
