2024 Showed It Really Is Possible to Rein in AI


Almost all of the big AI news this year has been speculation about how fast the technology is advancing, the damage it’s causing, and how soon it will pass the point where humans can control it. But 2024 also saw governments make significant strides in regulating algorithmic systems. Here’s a breakdown of the past year’s most important AI legislation and regulatory efforts at the state, federal, and international levels.

State

US state lawmakers took the lead on AI regulation in 2024, introducing hundreds of bills — some aimed at creating study committees, others at imposing serious civil liability on AI developers whose creations cause catastrophic harm to society. The vast majority of these bills failed to pass, but several states enacted meaningful legislation that could serve as a model for other states or for Congress, should it decide to act.

As AI slop flooded social media ahead of the election, politicians from both parties got behind anti-deepfake legislation. More than 20 states now ban deceptive AI-generated political ads in the weeks immediately before an election. Bills aimed at curbing AI-generated pornography, particularly images of minors, also received strong bipartisan support in states including Alabama, California, Indiana, North Carolina, and South Dakota.

Unsurprisingly, some of the most ambitious AI proposals came out of California, the tech industry’s backyard. One high-profile bill would have forced AI developers to take safety precautions and held companies liable for catastrophic harms caused by their systems. That bill passed both houses of the legislature but, after a fierce lobbying effort, was ultimately vetoed by Governor Gavin Newsom.

Newsom, however, signed more than a dozen other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage determinations are fair and equitable. Another requires generative AI developers to create tools that label content as AI-generated. And a pair of bills bans the distribution of AI-generated likenesses of dead people without prior consent and mandates that contracts for AI-generated likenesses of living people clearly specify how the content will be used.

Colorado passed a first-of-its-kind law in the US requiring organizations that develop and use AI systems to take reasonable steps to ensure the tools are non-discriminatory. Consumer advocates have called the law an important baseline. Similar bills are likely to be hotly debated in other states in 2025.

And, in a middle finger to both our future robot overlords and the planet, Utah enacted a law prohibiting any government entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other non-human things.

Federal

Congress talked a lot about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulation. But when it came to actually passing laws, federal lawmakers did very little.

Federal agencies, on the other hand, were busy all year trying to meet the AI goals set out in President Joe Biden’s 2023 executive order. And several regulators, notably the Federal Trade Commission and the Department of Justice, cracked down on misleading and harmful AI systems.

The agencies’ work to carry out the AI executive order wasn’t particularly sexy or headline-grabbing, but it laid important groundwork for the future governance of public and private AI systems. For example, federal agencies went on an AI-talent hiring spree and created standards for responsible model development and harm mitigation.

And, in a major step toward increasing public understanding of how the government uses AI, the Office of Management and Budget (OMB) pushed federal agencies to disclose critical information about the AI systems they use that could affect people’s rights and safety.

On the enforcement side, the FTC launched Operation AI Comply, targeting companies that use AI in deceptive ways, such as writing fake reviews or offering legal advice, and it sued the AI gun-detection company Evolv for making misleading claims about what its product can do. The agency also settled an investigation into facial recognition company IntelliVision, which it alleged had falsely claimed its technology was free of racial and gender bias, and banned the pharmacy chain Rite Aid from using facial recognition for five years after an investigation determined the company had used the tools to discriminate against shoppers.

The DOJ, meanwhile, joined state attorneys general in a lawsuit accusing the real estate software company RealPage of running a massive algorithmic price-fixing scheme that has raised rents across the country. It also won several antitrust cases against Google, including one over the company’s monopoly on internet search, that could significantly shift the balance of power in the burgeoning AI search industry.

Global

The European Union’s AI Act went into force in August. The law, which is already serving as a model for other jurisdictions, requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to mitigate risk and meet certain standards around training data quality and human oversight. It bans the use of other AI systems outright, such as algorithms that could be used to assign a country’s residents social scores that are then used to deny them rights and privileges.

In September, China issued a major AI safety governance framework. Like a similar framework published by the US National Institute of Standards and Technology, it is not binding, but it creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.

One of the most interesting pieces of AI policy legislation comes from Brazil. In late 2024, the country’s Senate passed a comprehensive AI safety bill. It faces a challenging road ahead, but if enacted, it would create an unprecedented set of protections for copyrighted material commonly used to train generative AI systems. Developers would be required to disclose what copyrighted material is included in their training data, and creators would have the power to prohibit the use of their work to train AI systems or to negotiate compensation agreements based, in part, on the size of the AI developer and how the material would be used.

Like the EU’s AI Act, the proposed Brazilian law would also require high-risk AI systems to follow certain safety protocols.
