Confident Security, ‘the Signal for AI,’ comes out of stealth with $4.2M


Consumers, businesses, and governments are flocking to cheap, fast, and seemingly magical AI tools, but one question keeps coming up along the way: how do I keep my data private?

Technology giants like OpenAI, Anthropic, xAI, Google, and others are quietly collecting and retaining user data to monitor their models for safety and security, even in some enterprise contexts where companies assume their data is off-limits. For highly regulated industries, or for enterprises building on the frontier, that gray area can be a dealbreaker. Fear over where data goes, who can see it, and how it might be used is slowing AI adoption in sectors like healthcare, finance, and government.

Enter San Francisco-based startup Confident Security, whose goal is to be "the Signal for AI." The company's product, CONFSEC, is an end-to-end encryption tool that wraps around foundational models, guaranteeing that prompts and metadata cannot be stored, seen, or used for AI training, even by the model provider or any third party.

"The second that you give away your data to someone else, you've essentially reduced your privacy," Confident Security founder and CEO Jonathan Mortensen told TechCrunch. "And the goal of our product is to remove that trade-off."

Confident Security has raised $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx, TechCrunch has learned exclusively. The company wants to serve as an intermediary between AI vendors and their customers, such as hyperscalers, governments, and enterprises.

Even AI companies see the value in offering Confident Security's tool to enterprise clients as a way to unlock that market, Mortensen said. He added that CONFSEC is also well-suited to the new AI browsers hitting the market, such as Perplexity's recently released Comet, to guarantee customers that their sensitive data isn't being stored on a server that companies or bad actors could access, or that prompts related to their work aren't "being used to train AI to do your job."

CONFSEC is modeled after Apple's Private Cloud Compute (PCC) architecture, which Mortensen says is "10x better than anything out there in terms of guaranteeing that Apple can't see your data" when it runs certain AI tasks securely in the cloud.


Like Apple's PCC, Confident Security's system works by first anonymizing data through encryption and routing it through services like Cloudflare or Fastly, so the servers never see the original source or content. Next, it uses advanced encryption that only allows decryption under strict conditions.

"So you can say you're only allowed to decrypt this if you are not going to log the data, you're not going to use it for training, and you're not going to let anyone see it," Mortensen said.

Finally, the software running the AI inference is publicly logged and open to review, so that experts can verify its guarantees.

"The future of AI depends on trust built into the infrastructure itself, and Confident Security is ahead of the curve," said Jess Leão, partner at Decibel, in a statement. "Without solutions like this, many enterprises simply can't move forward with AI."

It's still early days for the year-old company, but Mortensen said CONFSEC has been tested, externally audited, and is production-ready. The team is in talks with banks, browsers, and search engines to add CONFSEC to their infrastructure stacks.

"You bring the AI, we bring the privacy," Mortensen said.
