Follow ZDNET: Add america arsenic a preferred source connected Google
ZDNET's cardinal takeaways
- Hackers usage punctual injection to bargain the backstage information you usage successful AI.
- ChatGPT's caller Lockdown Mode aims to forestall these attacks.
- Elevated Risk labels pass you of AI tools and contented that could beryllium risky.
Prompt injection attacks airs a serious threat to anyone who uses AI tools, but particularly to professionals who trust connected them astatine work. By exploiting a vulnerability that affects astir AIs, a hacker tin insert malicious codification into a substance prompt, which whitethorn past change the results oregon adjacent bargain confidential data.
Also: 5 customized ChatGPT instructions I usage to get amended AI results - faster
Now, OpenAI has introduced a diagnostic called Lockdown Mode to amended thwart these types of attacks.
Lockdown Mode
Lockdown Mode enhances the extortion against prompt injections and different advanced threats. With this mounting enabled, ChatGPT is constricted successful the ways it tin interact with outer systems and data, thereby restricting an attacker's quality to exfiltrate delicate files.
An optional information setting, Lockdown Mode isn't indispensable for astir ChatGPT users, OpenAI said successful a news merchandise connected Friday. Rather, the diagnostic is geared much toward security-minded users, specified arsenic executives oregon information pros astatine salient organizations. With that successful mind, Lockdown Mode is disposable for ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and ChatGPT for Teachers.
Also: These 4 captious AI vulnerabilities are being exploited faster than defenders tin respond
Lockdown Mode works by determining which tools and capabilities successful ChatGPT are astir astatine risk. The extremity is to restrict entree to immoderate delicate information successful a speech oregon from a connected app that could beryllium exploited done punctual injection.
(Disclosure: Ziff Davis, ZDNET's genitor company, filed an April 2025 suit against OpenAI, alleging it infringed Ziff Davis copyrights successful grooming and operating its AI systems.)
As 1 example, web browsing successful Lockdown Mode limits entree to cached contented truthful that nary unrecorded requests permission OpenAI's network. Other features are wholly disabled unless OpenAI tin corroborate that the information is safe. Here, the thought is to forestall an attacker from stealing information done web browsing.
ChatGPT concern plans already connection enterprise-level information protection, which administrators tin power via the Workspace settings. Lockdown Mode adds an other furniture of defense. Workspace admins tin besides take which apps and actions are controlled by Lockdown Mode.
Elevated Risk labels
But that's not all. OpenAI volition present besides show an Elevated Risk statement erstwhile you entree definite features that could beryllium risky. Accessible successful ChatGPT, the ChatGPT Atlas browser, and the Codex coding assistant, these labels are designed to springiness you intermission earlier you enactment with a instrumentality oregon contented that could beryllium exploited.
For example, developers who usage Codex tin springiness the instrumentality web entree truthful that it tin hunt the web for assistance. With this entree enabled, the Elevated Risk statement volition pass you of imaginable risks, changes that whitethorn occur, and erstwhile specified entree is warranted.
The Elevated Risk labels are designed arsenic a short-term solution to astatine slightest pass you of imaginable dangers. Looking to the future, OpenAI said it plans to adhd much information features crossed the committee to code further risks and threats, yet obviating the request for specified labels.

5 hours ago
6








English (US) ·