For years, major social media companies have evaded responsibility for the harm caused on their platforms, especially harm driven by algorithms that promote dangerous content. That is why Senators John Curtis (R-UT) and Mark Kelly (D-AZ) have proposed the Algorithm Accountability Act, which would amend Section 230 to impose a “duty of care” on social media platforms. Section 230, added to U.S. Internet law through the Communications Decency Act of 1996, is precisely the shield behind which these companies protect themselves: it allows platforms like YouTube, owned by Google, and Facebook and Instagram, owned by Meta, to remain almost completely shielded from liability in these situations.
Algorithm Accountability Act
Faced with the impunity and protection that large social media companies like YouTube, Facebook, and Instagram enjoy, Senators John Curtis (R-UT) and Mark Kelly (D-AZ) have decided to put a stop to it. Last Wednesday, they introduced the Algorithm Accountability Act, which aims to amend Section 230 of the Communications Decency Act. In a joint interview, Kelly stated, “Too many families have been harmed by social media algorithms designed with a single goal: to make money by hooking people. Time and again, these companies refuse to take responsibility when their platforms contribute to violence, crime, or self-harm. We are going to change that.”
What changes are being proposed?
By modifying Section 230, the bill would impose a “duty of care” on platforms, requiring them to stop using algorithms that promote harmful or dangerous content. If a court determines that an algorithm pushed content that radicalized a person, leading them to injure themselves or even die, the company would be held liable if “a reasonable person would see it as foreseeable and attributable to the algorithm.”
The bill also proposes invalidating the pre-dispute arbitration agreements and class-action waivers included in social media platforms’ terms of use. According to Curtis, “What started as a common-sense protection for a fledgling industry has become a shield of immunity for some of the most powerful companies on the planet.”
Previous proposals
Both the Biden administration and Trump’s first administration tried, without success, to eliminate or modify this law. Early efforts sought to repeal it entirely, but more recent proposals, such as the SAFE TECH Act, introduced several times since 2020, and now the Algorithm Accountability Act, instead aim to narrow the protections enjoyed by large social media companies. In 2021, the Protecting Americans from Dangerous Algorithms Act was also introduced, but it never advanced. Although attempts keep being made, the truth is that reform is taking longer than it should, and large social media companies remain protected.
