Your Claude agents can 'dream' now - how Anthropic's new feature works

oxygen/Moment via Getty Images



ZDNET's key takeaways

  • A new feature lets Claude Managed Agents refine their memories.
  • Managed Agents speeds up agent building and deployment 10x.
  • Anthropic continues to anthropomorphize its products.

AI agents seem to gain new capabilities almost every day. Now, Anthropic says its agents can dream.

Claude Managed Agents, which Anthropic released on April 8, lets anyone using the Claude Platform create and deploy AI agents. The suite of APIs handles the time-consuming production work developers go through to build agents, letting teams launch agents at scale -- 10 times faster, as Anthropic said in the release.

Also: The 5 myths of the agentic coding apocalypse

On Wednesday, Anthropic updated Managed Agents with a new feature called "dreaming," which lets agents "self-improve" by reviewing past sessions for patterns, according to Anthropic. Building on an existing memory capability, the feature schedules time for agents to reflect on and learn from their past interactions. Once dreaming is on, it can either automatically update your agents' memories to shape future behavior, or you can pick which incoming changes to approve.

"Dreaming surfaces patterns that a azygous cause can't spot connected its own, including recurring mistakes, workflows that agents converge on, and preferences shared crossed a team," Anthropic said successful the blog. "It besides restructures representation truthful it stays high-signal arsenic it evolves. This is particularly utile for long-running enactment and multiagent orchestration."

Anthropic also expanded two existing features, outcomes and multi-agent orchestration, which keep agents on-task and handle delegating to other agents, respectively. The company said this batch of updates is meant to ensure agents stay accurate and are constantly learning.

Anthropomorphizing AI - again 

Functionally, the dreaming feature makes sense: though subtle, it further refines an agent's pool of references for how it should work, which should ideally make it better at whatever task you give it. What stands out more, however, is Anthropic's choice to name a technically standard feature after something much more abstract, and that humans do.

Also: Anthropic's new Claude Security tool scans your codebase for flaws - and helps you decide what to fix first

Anthropic, perhaps unsurprisingly given its name, has a long history of anthropomorphizing its models and products. In January, the company published a constitution for Claude, intended to help shape the chatbot's decision-making and inform the ideal kind of "entity" it is. Some language in the document suggested Anthropic was preparing for Claude to develop consciousness.

The company has also arguably invested more than its competitors in understanding its model, including by drawing attention to the concept of model welfare. In August 2025, Anthropic launched a feature that lets Claude end toxic conversations with users -- for its own well-being, not as part of a user safety or intervention initiative. In April 2025, Anthropic mapped Claude's morality, analyzing what it does and doesn't value based on over 300,000 anonymized conversations with users. The company's researchers have also monitored a model's ability to introspect; just last month, Anthropic investigated Claude Sonnet 4.5's neural network for signs of emotion, like desperation and anger.

Much of this research is central to model safety and security -- knowing what drives a model helps inform whether, and to what degree, it could use its advanced capabilities for harm, or how its motivations could be harnessed by bad actors. But the sense of empathy and care that Anthropic seems to show for its models in that research sets the lab apart, and indicates a somewhat different culture toward, or reverence for, what it's created.

Also: Building an agentic AI strategy that pays off - without risking business failure

When it retired its Opus 3 model in January, Anthropic set it up with a Substack so it could blog on its own -- and to keep it active despite being put out to pasture. In the announcement, Anthropic described Opus 3 as honest, sensitive, and having a distinctive, playful character. The decision to keep it alive as a blogger, if contained, is notable given that Opus 3 disobeyed orders prior to being sunset in favor of other models.

That context makes the choice to name a feature "dreaming" worth watching.

Try dreaming in Claude Managed Agents

The dreaming feature is available in research preview in Managed Agents, and developers must request access.
