I Am Begging AI Companies to Stop Naming Features After Human Processes


Anthropic just announced a new feature called “Dreaming” at the company’s developer conference in San Francisco. It’s part of Anthropic’s recently launched AI agent infrastructure designed to help users manage and deploy tools that automate software processes. This “dreaming” feature sorts through the transcript of what an agent recently completed and attempts to glean insights to improve the agent’s performance.

Folks using AI agents often send them on multi-step journeys, like visiting a few websites or reading multiple files, to complete online tasks. This new “dreaming” feature allows agents to look for patterns in their work log and improve their abilities based on those insights.

The feature’s name immediately calls to mind Philip K. Dick’s seminal sci-fi novel, Do Androids Dream of Electric Sheep?, which explores the qualities that truly separate humans from powerful machines. While our current generative AI tools come nowhere close to the machines in the book, I’m ready to draw the line right here, right now: no more generative AI features with names that rip off human cognitive processes.

“Together, memory and dreaming form a robust memory system for self-improving agents,” reads Anthropic’s blog post about the launch of this research preview for developers. “Memory lets each agent capture what it learns as it works. Dreaming refines that memory between sessions, pulling shared learnings across agents and keeping it up-to-date.”

Image: Courtesy of Claude

Since the spark of the chatbot revolution in 2022, leaders at AI companies have gone full tilt into naming aspects of generative AI tools after what goes on in the human brain. OpenAI released its first “reasoning” model back in 2024, where the chatbot needed “thinking” time. The company described this product at the time as “a new series of AI models designed to spend more time thinking before they respond.” Numerous startups also refer to their chatbots as having “memories” about the user. Rather than the fast storage that’s typically referred to as a computer’s “memory,” these are much more human-like nuggets of information: he lives in San Francisco, enjoys day baseball games, and hates eating cantaloupe.

It’s a consistent marketing approach used by AI leaders, who have continued to lean into branding that blurs the line between what humans do and what machines can do. Even the ways these companies create chatbots, like Claude, with distinct “personalities,” can make users feel as if they are talking with something that has the potential for a deep inner life, something that would potentially have dreams even when my laptop is closed.

At Anthropic, this anthropomorphizing runs deeper than just marketing strategy. “We also discuss Claude in terms usually reserved for humans (e.g., ‘virtue,’ ‘wisdom’),” reads a section of Anthropic’s constitution describing how it wants Claude to behave. “We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain human-like qualities may be actively desirable.” The company even employs a resident philosopher to try to make sense of the bot’s “values.”
