A look at the landscape of online content, AI, usage and regulations

Project Liberty is an international impact organization founded by Frank McCourt to build a new civic architecture for a safer, healthier internet. They have a recent article on regulating technology in this new world of AI and concerns about social media. The topics have never seemed less futuristic than now, as I track bills in the Minnesota Legislature (MN HF3488, compensation for kids' content online, and MN4400, the Prohibiting Social Media Manipulation Act) and watch the federal courts (the Supreme Court looks at online speech). I thought I'd share some highlights from Project Liberty's recent newsletter.

Growing concern

From discriminatory algorithms, to AI-generated deepfakes and disinformation, to content on social media platforms that's harmful to children's mental health, there is a growing consciousness worldwide about the problems caused by today's technology.

  • Last week, a report issued by the US State Department concluded that the most advanced AI systems could “pose an extinction-level threat to the human species.”
  • Last month, Project Liberty Foundation released research finding that the majority of adults globally believe that social media companies bear “a great deal” of responsibility for making the internet safe.
  • Last year, a poll by Project Liberty Alliance member Issue One found that 87% of the US electorate wants government action to combat the harms caused by social media platforms.

Tech is fast, passing laws is slow

Lawmakers are beginning to take action.

While the US lags behind Europe in comprehensive tech regulation (the EU has been the world's leader in passing laws to regulate big tech for years), Europe's speed in passing laws has not translated into ease of enforcing them.

The era of DIY regulation

In the absence of comprehensive laws and sound enforcement, there’s a patchwork of solutions emerging at every level.

  • States: Filling the void left by inaction at the US federal level, US states are taking action. Nearly 200 bills aimed at regulating AI were introduced in state legislatures in 2023 (only 12 of which became law), and this year states across the US will debate over 400 AI-related bills. To limit the harms caused by social media, US states have taken a variety of approaches, leading to a lack of consistency and a patchwork of directives, according to a report by Brookings last year.

This entry was posted in New Media, Policy by Ann Treacy.

About Ann Treacy

Librarian who follows rural broadband in MN and good uses of new technology (blandinonbroadband.org), hosts a radio show on MN music (mostlyminnesota.com), supports people experiencing homelessness in Minnesota (elimstrongtowershelters.org) and helps with social justice issues through Women’s March MN.
