Irregular raises $80 million to secure frontier AI models

On Wednesday, AI security firm Irregular announced $80 million in new funding in a round led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. A source close to the deal said the round values Irregular at $450 million.

“Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction,” co-founder Dan Lahav told TechCrunch, “and that’s going to break the security stack along multiple points.”

Formerly known as Pattern Labs, Irregular is already a significant player in AI evaluations. The company’s work has been cited in security evaluations for Claude 3.7 Sonnet as well as OpenAI’s o3 and o4-mini models. More broadly, the company’s framework for scoring a model’s vulnerability-detection ability (dubbed SOLVE) is widely used across the industry.

While Irregular has done significant work on models’ existing risks, the company is raising funds with an eye toward something more ambitious: spotting emergent risks and behaviors before they surface in the wild. The company has built an elaborate system of simulated environments, enabling intensive testing of a model before it is released.

“We have complex network simulations where AI takes the role of both attacker and defender,” said co-founder Omer Nevo. “So when a new model comes out, we can see where the defenses hold up and where they don’t.”
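Irregular has not published the internals of these simulations, but the general shape of such a harness can be sketched. The toy Python below is purely illustrative and is not Irregular’s system: it assumes a hypothetical network of “patched” and “exposed” services, with a random attacker standing in for an LLM agent, and simply tallies where the defenses hold.

```python
# Purely illustrative sketch of an attacker/defender network simulation.
# All names (SERVICES, attacker_move, defender_holds) are hypothetical;
# a real harness would drive an LLM agent, not random choice.
import random

# Hypothetical simulated network: each service is either patched or exposed.
SERVICES = {"web": "exposed", "db": "patched", "auth": "patched"}

def attacker_move(services):
    """Pick a service to probe (a real harness would query a model here)."""
    return random.choice(list(services))

def defender_holds(services, target):
    """The defense holds if the probed service is patched."""
    return services[target] == "patched"

def run_simulation(rounds=10):
    """Run repeated attack rounds and tally where defenses hold or break."""
    results = {"held": 0, "breached": 0}
    for _ in range(rounds):
        target = attacker_move(SERVICES)
        if defender_holds(SERVICES, target):
            results["held"] += 1
        else:
            results["breached"] += 1
    return results

if __name__ == "__main__":
    print(run_simulation())
```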

Security has become a point of intense focus for the AI industry, as the potential risks posed by frontier models continue to mount. OpenAI overhauled its internal security measures this summer, with an eye toward potential corporate espionage.

At the same time, AI models are growing increasingly adept at finding software vulnerabilities, a capability with serious implications for both attackers and defenders.

For the Irregular founders, this is only the first of many security headaches created by the growing capabilities of large language models.

“If the goal of a frontier lab is to create increasingly sophisticated and capable models, our goal is to secure those models,” Lahav says. “But it’s a moving target, so inherently there’s much, much more work to do in the future.”
