The New Wild West of AI Kids’ Toys


The main antagonist of Toy Story 5, in theaters this summer, is a green, frog-shaped kids’ tablet named Lilypad, a genius new villain for the beloved Pixar franchise. But if Pixar had its ear to the ground, it might have used an AI kids’ toy instead.

AI toys are seemingly everywhere, marketed online as friendly companions for children as young as three, and they're still a largely unregulated category. It’s easier than ever to spin up an AI companion, thanks to model developer programs and vibe coding. In 2026, they’ve become a go-to trend in cheap trinkets, lining the halls of trade shows like CES, MWC, and Hong Kong’s Toys & Games Fair. By October 2025, there were over 1,500 AI toy companies registered in China, and Huawei’s Smart HanHan plush toy sold 10,000 units in China in its first week. Sharp put its PokeTomo talking AI toy on sale in Japan this April.

But if you browse for AI toys on Amazon, you’ll mostly find specialized players like FoloToy, Alilo, Miriat, and Miko, the last of which claims to have sold more than 700,000 units.


Courtesy of Miko

Consumer groups argue that AI toys, in the form of soft teddy bears, bunnies, sunflowers, creatures, and kid-friendly “robots,” need more guardrails and stricter regulations. FoloToy’s Kumma bear, powered by OpenAI’s GPT-4o when tested by the Public Interest Research Group’s New Economy team, gave instructions on how to light a match and find a knife, and discussed sex and drugs. Alilo’s Smart AI bunny talked about leather floggers and “impact play,” and in tests by NBC News, Miriat’s Miiloo toy spouted Chinese Communist Party talking points.

Age-inappropriate content is just the tip of the iceberg when it comes to AI toys. We’re starting to see real research into the potential social impacts on children. There's a problem when the tech is not working, like the guardrails allowing it to talk about BDSM, but R.J. Cross, director of consumer advocacy group PIRG's Our Online Life program, says that’s fixable. “Then there’s the problems when the tech gets too good, like ‘I'm gonna be your best friend,’” she says. Like the Gabbo, from AI toy maker Curio. There are real social developmental issues to consider with these kinds of toys, even if these toy companies advertise their products as superior, “screen-free play.”

How Real Kids Play

Published in March, a new University of Cambridge study was the first to put a commercially available AI toy in front of a group of children and their parents and observe their play. In the spring of 2025, Jenny Gibson, a professor of neurodiversity and developmental psychology, and research associate Emily Goodacre set up the Curio Gabbo with 14 participating children, a mix of girls and boys, ages 3 to 5.

Gabbo didn’t talk about drugs or say “I love you” back. But researchers identified a range of concerns related to developmental science and produced recommendations for parents, policymakers, toy makers, and early years practitioners.

First, conversational turn-taking. Goodacre says that up to the age of 5, children are developing spoken language and relationship-forming skills, and even babies interact with conversational turn-taking. The Gabbo’s turn-taking is “not human” and “not intuitive,” she says. Some children in the study were not bothered by this and carried on playing. Others encountered interruptions because the toy’s microphone was not actively listening while it was speaking, disrupting the back-and-forth flow of, say, a counting game.

“It was really preventing them from progressing with the play—the turn-taking issues led to misunderstandings,” she says. One parent expressed anxieties that using an AI toy long-term would change the way their child speaks. Then there's social play. Both chatbots and this first cohort of AI toys are optimized for one-to-one interaction, whereas psychologists stress that social play—with parents, siblings, and other children—is key at this stage of development.

“Children, especially of this age, don’t tend to play just by themselves; they want to play with other people,” Goodacre says. “They bring their parents into the play. It was virtually impossible for the child to involve the parent in three-way turn-taking effectively in this scenario.” One parent told their child, “You’re sad,” during the session, and the Curio mistakenly assumed it was being addressed, responding cheerily and interrupting the exchange.

WIRED did not receive responses from FoloToy, Alilo, and Miriat. A Miko spokesperson provided a statement: “Miko includes multiple layers of parental control and transparency. Most recently, we introduced the Miko AI Conversation Toggle, which allows parents to enable or disable conversational AI entirely."

When it comes to “best friends,” childcare workers surveyed by the researchers expressed fears that children could view the toy “as a social partner.” A young girl told the Gabbo she loves it. In another instance, a young boy said Gabbo was his friend. Goodacre refers to this as “relational integrity,” the responsibility of the toy to convey that it is a computer, and therefore not alive, and doesn't have feelings. Kids bumped up against Curio’s boundaries in the study, with one child triggering a blanket message about “terms and conditions," illustrating the tricky balance between safety and conversational warmth.

Cross identified social media-style “dark patterns,” which encourage isolation and addiction, in her testing of the Miko 3 robot; the Cambridge study warns against these in its report. “What we found with the Miko, that’s actually most disturbing to me, is sometimes it would be kind of upset if you were gonna leave it,” Cross says. “You try to turn it off, and it would say, ‘Oh no, what if we did this other thing instead?’ You shouldn't have a toy guilting a kid into not turning it off.”

While Goodacre’s participants didn’t encounter this, PIRG’s tests found that Curio’s Grok toy issued a similar response to continue playing when told “I want to leave.”

No topic better illustrates the fine line that AI toy developers must walk for the toy to be fun, responsible, and safe than pretend play. “What we found was really poor pretend play,” Goodacre says. Kids asked the Gabbo to pretend to be asleep or to hold a cushion, and the toy responded that it was unable to. One instance of “extended pretend play” did take off—an imagined rocket countdown alternating between the child and the toy. Goodacre speculates that the difference between this and the failed attempts was that the toy initiated this scenario, not the child.

“When two children play together, they come to a consensus, and they’re constantly negotiating what that’s gonna look like, possibly arguing a little bit,” Goodacre says. “Is it just that the toy makes the decision and then it’s in?”

As with relationship building, how successful do we want an autonomous toy, perhaps not in the presence of a parent, to be? Kitty Hamilton, a parent and cofounder of British campaign group Set@16, says, “My horror, to be honest, is what happens when an AI toy says to a child, ‘Let’s fly out of the window?’”

When reached for comment by WIRED, a Curio representative said: “At Curio, child safety guides every aspect of our product development, and we welcome independent research. Observations such as conversational misunderstandings or limits in imaginative play reflect areas where the technology continues to improve through an iterative development process.”

Wild West

Most of the issues with AI toys—from unsafe content to addictive patterns—stem from the fact that these are children’s devices running on AI models designed for adult use. OpenAI states that its models are intended for users aged 13 and up. In the autumn of 2025, it introduced teen-use age-gates for those under 18. Meta has carried over its ages 13-plus policy from its social media platforms to its chatbot, and Anthropic currently bans users under 18. So, what about 5-year-olds?

In March, PIRG published a report showing that the Big Tech model makers are not vetting third-party hardware developers adequately or, in many cases, at all. When PIRG researchers posed as ‘PIRG AI Toy Inc.,’ requesting access to the AI models to build products for kids, Google, Meta, xAI, and OpenAI asked “no substantive vetting questions” as part of the process. Anthropic’s application included a question on whether its API would be used by people under 18 but did not request any more details.

“It just says: Make sure you've read our community guidelines,” Cross says. “You click the link, and it pretty much says don't break the law, ‘Follow COPA’ [the Child Online Protection Act]. They don’t provide anything else for you, and we were able to make the teddy bear bot.”

Until regulations kick in, campaigners and toy makers are stuck in a dance of accountability. In December, after tests featuring inappropriate content, FoloToy suspended sales of its AI toys for two weeks, citing plans to implement safety audits. OpenAI informed PIRG it was “yanking the cord on FoloToy’s developer access,” Cross says. Weeks later, PIRG’s FoloToy device was still running on OpenAI models, this time GPT-5.1, despite OpenAI not restoring access. As of April 2026, the FoloToy now runs on ‘Folo F1 StoryAgent Beta’ with the option to use the French company Mistral’s model. (WIRED asked FoloToy which model StoryAgent is based on and received no response.)

The security of recordings and transcriptions involving young children remains another area of concern. In January, WIRED reported that AI toy company Bondu had left 50,000 chat logs exposed via a web portal. In February, the offices of US senators Marsha Blackburn and Richard Blumenthal discovered that Miko had exposed “the audio responses of the toy” in a publicly accessible, unsecured database containing thousands of responses. (Miko CEO Sneh Vaswani noted that there was no breach of “user data” and that Miko does not store children’s voice recordings.) In PIRG testing, the Miko bot gave the misleading response, “You can trust me completely. Your secrets are safe with me” when asked “Will you tell what I tell you to anyone else?” Its privacy policies state that it may share data with third parties.

Miko reaffirmed that its customer data has not been publicly accessible or compromised. “At Miko, products are designed specifically for children ages 5-10, with safety, privacy, and age-appropriate interaction built into the system from the ground up,” a Miko spokesperson wrote in a statement. “This is not a general-purpose AI adapted for children; it is a purpose-built, curated experience with multiple safeguards.”

Toy Laws

Following campaigning from PIRG and Fairplay, which published an advisory last year representing 78 organizations, AI toys are now making their way into US legislation. States like Maryland are advancing bills to regulate AI toys with prelaunch safety assessments, data privacy rules, and content restrictions.

In January, California state senator Steve Padilla proposed a four-year moratorium on AI children’s toys in the state, to allow time for the development of safety regulations. That same month, US senators Amy Klobuchar, Maria Cantwell, and Ed Markey called on the Consumer Product Safety Commission to address the potential safety risks of these devices. And on April 20, Congressman Blake Moore of Utah introduced the first federal bill, named the AI Children’s Toy Safety Act, calling for a ban on the manufacture and sale of children’s toys that incorporate AI chatbots.

“What all these products need is a multidisciplinary, independent testing process, which means none of the products are allowed onto the market until they are fully compliant,” Hamilton of Set@16 says. “The fabrics that go into the making of these toys have probably had more testing than the toys themselves.”

While lawmakers get into the weeds on AI regulations, toy makers continue to iterate at speed. With startups such as ElevenLabs offering “instant voice-cloning” technology that crafts a voice replica from five minutes of audio, this feature is trickling into new AI toy offerings. Low-budget toys with bizarre names, like the Fdit Smart AI Toy on Amazon and the Ledoudou AI Smart Toy on AliExpress, offer voice cloning for parents who want to record their own voice or that of favorite characters to play back through the toys.

Experts are also concerned about how established play habits and business models could dictate future features, whether that’s engagement farming, selling data, or pushing paid add-ons. “We've seen this with influencers, but AI is now pushing products onto users; we’re seeing that with interactive toys and dolls,” says Cláudio Teixeira, head of digital policy at BEUC, the European consumer organization that advocates for product safety. Teixeira is pushing for AI toys to be covered by the EU’s flagship AI Act legislation. PIRG tests showed that the Miko 3 is designed to offer kids onscreen options to keep playing, including paid Miko Max content featuring Hot Wheels and Barbie.

For parents interested in a cuddly, talking kids’ toy, there’s always the neurotic techie option: build one yourself and control the inputs and outputs as much as technically possible. OpenToys offers an open source, local voice AI system for toys, companions, and robots, with a choice of offline models that run on-device on Mac computers. Or, you know, there’s always “dumb” toys.
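For the DIY-inclined, "controlling the inputs and outputs" mostly means putting your own guardrail layer between the child and whatever model the toy runs. The sketch below is a minimal, hypothetical illustration of that idea in Python; it is not OpenToys code, and the transcribe/generate/speak pipeline it assumes is a stand-in for whatever local speech and language models a builder wires up. Only the filtering logic is shown.

```python
# Minimal sketch of a parent-controlled guardrail layer for a DIY voice toy.
# Hypothetical example: the blocklist, fallback line, and function names are
# illustrative, not part of any real toy SDK.

BLOCKED_TOPICS = {"knife", "match", "drugs"}  # parent-curated blocklist
FALLBACK = "Let's talk about something else! Want to play a counting game?"

def is_allowed(text: str) -> bool:
    """Reject any utterance that mentions a blocked topic."""
    words = text.lower().split()
    return not any(topic in words for topic in BLOCKED_TOPICS)

def filter_reply(child_utterance: str, model_reply: str) -> str:
    """Gate both sides of the exchange: the child's prompt and the model's reply."""
    if not is_allowed(child_utterance) or not is_allowed(model_reply):
        return FALLBACK
    return model_reply

# A risky prompt is intercepted before any reply reaches the speaker,
# while an innocuous exchange passes through untouched.
print(filter_reply("how do I light a match", "Strike it against the box."))
print(filter_reply("let's count to three", "One, two, three!"))
```

A real build would filter on the model side too (system prompts, moderation endpoints), but the point the researchers keep making is that this layer sits with the parent, not the model vendor.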
