A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT

Spread the love

The latest generator AI models are not just alone Texter– Stated, they can easily be stuck in your data for giving personalized answers to your questions. OpenAI Chatzipt can be link In your Gmail Inbox, your GitHub code is allowed to visit or to look for appointments on your Microsoft Calendar. However, these connections are likely to abuse – and researchers have shown that it can only take a “poisonous” document to do this.

Security researchers Michael Bergury and Tamir’s Ishe Sharbat’s new inquiries today have been published at Black Hat Hacker Conference in Las Vegas, showing how a vulnerability of OpenA’s connectors allows sensitive information to get a sensitive information out of a Google Drive account from a Google Drive account to allow a Google Drive account to get out of a Google Drive account. Indirect injection attackThe In a demonstration of the attack, Dubbed agentflareThe bid shows how the developer privacy was possible to collect in the form of API keys, which was stored in a demonstration drive account.

Weakness highlights how to attach AI models to the external system and share more data throughout them increases the potential attack surface for contaminated hackers and increases the way the weaknesses can be introduced possible.

“The user does not need to do anything to compromise, and the user does not need to do anything to get out of the data,” City, CTO of the security agency Jenny, told Bergury Wird. “We have shown it completely zero-checn; we only need your email, we share the document with you and this is. So yes, it’s very bad,” says Bergury.

Opena did not respond to Ward’s request to comment on the weaknesses of the connectors. The company introduced connectors for ChatzPT as a beta feature early this year and it is Website list At least 17 different services that can be attached to its accounts. This is saying that the system allows you to “bring your tools and data to the Chatzipi” and “to look for search files, pull live data and direct the content in the chat”.

Burguri says he explored Open A initials this year and the company quickly introduced the strategy to prevent the technique he used to extract data through connectors. The way the attack works only means that limited amount of data can be extracted at once – full documents could not be removed as part of the attack.

“Although this problem is not precise with Google, it depicts why the strong protection against the immediate injection attack is important,” Senior Director of Google Workspace Product Management and Senior Director Andy Wayne points to the organization Recently developed the AI protection systemThe

Leave a Reply

Your email address will not be published. Required fields are marked *