OpenAI presents its preferred version of AI regulation in a new ‘blueprint’


On Monday, OpenAI published what it's calling an "economic blueprint" for AI: a living document laying out the policies the company believes it can build on with the US government and its allies.

The blueprint, which includes a foreword by Chris Lehane, OpenAI's VP of Global Affairs, asserts that the United States must act to attract billions in funding for the chips, data, power, and talent needed to "win on AI."

“Today, while some countries are putting AI and its economic potential on the sidelines,” Lehane wrote, “the U.S. government can pave the way for its AI industry to continue the nation’s global leadership in innovation while protecting national security.”

OpenAI has repeatedly called on the US government to take more substantive action on AI and the infrastructure needed to support the technology's development. The federal government has largely left AI regulation to the states, an approach the blueprint describes as untenable.

In 2024 alone, state lawmakers introduced about 700 AI-related bills, some of which conflict with one another. Texas' Responsible AI Governance Act, for example, imposes strict liability requirements on developers of open source AI models.

OpenAI CEO Sam Altman has criticized existing federal laws on the books, such as the CHIPS Act, which aimed to revitalize the US semiconductor industry by attracting domestic investment from the world's top chipmakers. In a recent interview with Bloomberg, Altman said the CHIPS Act "[has not] been as effective as any of us could have hoped," and that he sees a "real opportunity" for the Trump administration to "do something better as a follow-up."

"That's something I really deeply agree with [Trump] on: it's wild how hard it's become to build things in the United States," Altman said in the interview. "Power plants, data centers, anything like that. I understand how bureaucratic cruft builds up, but it's not helpful to the country in general. It's especially not helpful when you think about what needs to happen for the US to lead AI. And the US really needs to lead AI."

To fuel the data centers needed to develop and run AI, OpenAI's blueprint recommends "dramatically" increasing federal spending on power and data transmission, along with the meaningful buildout of "new energy sources" such as solar, wind farms, and nuclear. OpenAI, along with its AI rivals, has thrown its support behind nuclear power projects in the past, arguing that nuclear is needed to meet the power demands of next-generation server farms.

Tech giants Meta and AWS have run into trouble with their nuclear efforts, though for reasons that have nothing to do with nuclear power itself.

In the near term, OpenAI's blueprint suggests that the government "develop best practices" for deploying models to protect against misuse, "streamline" the AI industry's engagement with national security agencies, and develop export controls that enable sharing models with allies while "limit[ing]" their export to "adversary nations." Additionally, the blueprint encourages the government to share certain national security-related information, such as briefings on threats to the AI industry, with vendors, and to help vendors secure the resources to evaluate their models for risks.

"The federal government's approach to frontier model safety and security should streamline requirements," the blueprint reads. "Responsibly exporting models to our allies and partners will help them build their own AI ecosystems, including their own developer communities innovating with AI and distributing its benefits, as well as building AI on US technology, not technology funded by the Chinese Communist Party."

OpenAI already counts several US government departments as partners, and — should its blueprint gain currency among policymakers — stands to add more. The company has contracts with the Pentagon for cybersecurity work and other related projects, and has teamed up with defense startup Anduril to supply its AI technology to systems the US military uses to counter drone attacks.

In its blueprint, OpenAI calls for the US private sector to draft standards that are "recognized and respected" by other countries and international organizations. But the company stops short of endorsing binding rules or mandates. "[The government can create] a defined, voluntary pathway for companies that develop [AI] to work with government to define model evaluations, test models, and exchange information to support the companies' safeguards," reads the blueprint.

The Biden administration took a similar approach with its AI executive order, which sought to enact several high-level, voluntary AI safety and security standards. The executive order established the US AI Safety Institute (AISI), a federal government body that studies risks in AI systems and has partnered with companies including OpenAI to evaluate model safety. But Trump and his allies have pledged to repeal Biden's executive order, putting its codification, and the AISI, at risk of being undone.

OpenAI's blueprint also addresses copyright as it relates to AI, a hot-button topic. The company makes the case that AI developers should be able to use "publicly available information," including copyrighted material, to build models.

OpenAI, like many other AI companies, trains its models on public data from across the web. The company has licensing agreements with a number of platforms and publishers, and offers limited means for creators to "opt out" of its model development. But OpenAI has said it would be "impossible" to train AI models without using copyrighted material, and a number of creators have filed suit accusing the company of training on their works without permission.

"[O]ther actors, including developers in other countries, make no effort to respect or engage with the owners of IP rights," the blueprint reads. "If the United States and like-minded countries don't address this imbalance with sensible measures that help advance AI for the long term, the same content will still be used for AI training elsewhere, but to the benefit of other economies. [The government should ensure] that AI has the ability to learn from public, publicly available information, just like humans do, while also protecting creators from unauthorized digital copying."

It remains to be seen which parts of OpenAI's blueprint, if any, make their way into law. But the proposals are a signal that OpenAI intends to remain a key player in the race to shape US AI policy.

In the first half of last year, OpenAI more than tripled its lobbying spending, laying out $800,000 versus $260,000 in all of 2023. The company has also brought former government leaders into its executive ranks, including former Defense Department official Sasha Baker, former NSA chief Paul Nakasone, and Aaron Chatterji, formerly the Commerce Department's chief economist under President Joe Biden.

As it hires and expands its global affairs division, OpenAI has become more vocal about which AI laws and regulations it prefers, for example throwing its weight behind a Senate bill that would establish a federal rulemaking body for AI and provide federal grants for AI R&D. The company has also opposed bills, notably California's SB 1047, arguing that it would stifle AI innovation and push out talent.
