Silicon Valley spooks the AI safety advocates


Silicon Valley leaders, including White House AI and crypto czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon, caused a stir online this week with their comments about groups promoting AI safety. In separate instances, they alleged that certain advocates of AI safety are not as virtuous as they appear, and are acting either in their own interest or on behalf of billionaire puppet masters behind the scenes.

AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley's latest attempt to intimidate its critics, and certainly not the first. In 2024, some venture capital firms spread rumors that SB 1047, a California AI safety bill, would send startup founders to jail. The Brookings Institution labeled the rumor one of many "misrepresentations" about the bill, but Gov. Gavin Newsom ultimately vetoed it anyway.

Whether or not Sacks and OpenAI intended to intimidate critics, their actions have sufficiently spooked several AI safety advocates. Many nonprofit leaders contacted by TechCrunch in the last week asked to speak on condition of anonymity to spare their groups from retaliation.

The controversy underscores Silicon Valley's growing tension between building AI responsibly and building it as a massive consumer product, a theme my colleagues Kirsten Korosec, Anthony Ha, and I unpacked on this week's Equity podcast. We also dug into a new AI safety law passed in California to regulate chatbots, and OpenAI's approach to erotica in ChatGPT.

On Tuesday, Sacks wrote a post on X alleging that Anthropic, which has raised concerns about AI's ability to contribute to unemployment, cyberattacks, and catastrophic harm to society, is simply fearmongering to get laws passed that benefit itself and drown smaller startups in paperwork. Anthropic was the only major AI lab to endorse California's Senate Bill 53 (SB 53), a bill that sets safety reporting requirements for large AI companies, which was signed into law last month.

Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears around AI. Clark delivered the essay as a keynote at the Curve AI safety conference in Berkeley a week ago. Sitting in the audience, it certainly felt like a genuine account of a technologist's reservations about his own products, but Sacks didn't see it that way.

Sacks said Anthropic is running a "sophisticated regulatory capture strategy," though it's worth noting that a truly sophisticated strategy probably wouldn't involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic has "consistently positioned itself as an enemy of the Trump administration."


Also this week, OpenAI Chief Strategy Officer Jason Kwon posted on X explaining why the company is sending subpoenas to AI safety nonprofits, such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI, over concerns that the ChatGPT maker has strayed from its nonprofit mission, OpenAI found it suspicious how several organizations also raised objections to its restructuring. Encode filed an amicus brief in support of Musk's lawsuit, and other nonprofits have spoken out publicly against OpenAI's restructuring.

“It raises questions of transparency about who is funding them and whether there was any coordination,” Kwon said.

NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that criticized the company, asking for their communications related to two of OpenAI's biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

A prominent AI safety leader told TechCrunch that there's a growing split between OpenAI's government affairs team and its research organization. While OpenAI's safety researchers frequently publish reports disclosing the risks of AI systems, OpenAI's policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.

OpenAI's head of mission alignment, Joshua Achiam, weighed in on his company sending subpoenas to nonprofits in a post on X this week.

“Perhaps at a risk to my entire career I would say: It doesn’t look great,” Achiam said.

Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. However, he argues this isn't the case, and that much of the AI safety community is quite critical of xAI's safety practices, or lack thereof.

"On OpenAI's part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same," said Steinhauser. "As for Sacks, I think he's concerned that [the AI safety] movement is growing and people want to hold these companies accountable."

Sriram Krishnan, the White House's senior policy adviser for AI and a former a16z general partner, chimed in this week with a social media post of his own, arguing that AI safety advocates are out of touch. He urged AI safety organizations to talk to "people in the real world using, selling, adopting AI in their homes and organizations."

A recent Pew study found that roughly half of Americans are more concerned than excited about AI, but it's unclear what exactly worries them. Another recent survey went into more detail and found that American voters care more about job losses and deepfakes than about the catastrophic risks the AI safety movement largely focuses on.

Addressing these safety concerns could come at the expense of the AI industry's rapid growth, a trade-off that worries many in Silicon Valley. With AI investment propping up much of America's economy, the fear of overregulation is understandable.

But after years of unchecked AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley's attempts to push back against safety-focused groups may be a sign that they're working.
