Grok Is Being Used to Mock and Strip Women in Hijabs and Sarees


Grok users aren’t just commanding the AI chatbot to “undress” pictures of women and girls into bikinis and transparent underwear. Among the vast and growing array of nonconsensual sexualized edits that Grok has generated on request over the past week, many perpetrators have asked xAI’s bot to put on or take off a hijab, a saree, a nun’s habit, or another kind of modest religious or cultural clothing.

In a review of 500 Grok images generated between January 6 and January 9, WIRED found about 5 percent of the output featured an image of a woman who was, as the result of prompts from users, either stripped of or made to wear religious or cultural clothing. Indian sarees and modest Islamic wear were the most common examples in the output, which also featured Japanese school uniforms, burqas, and early 20th century-style bathing suits with long sleeves.

“Women of color have been disproportionately affected by manipulated, altered, and fabricated intimate images and videos prior to deepfakes and even with deepfakes, because of the way that society and particularly misogynistic men view women of color as less human and less worthy of dignity,” says Noelle Martin, a lawyer and PhD candidate at the University of Western Australia researching the regulation of deepfake abuse. Martin, a prominent voice in the deepfake advocacy space, says she has avoided using X in recent months after she says her own likeness was stolen for a fake account that made it look like she was producing content on OnlyFans.

“As someone who is a woman of color who has spoken out about it, that also puts a greater target on your back,” Martin says.

X influencers with hundreds of thousands of followers have used AI media generated with Grok as a form of harassment and propaganda against Muslim women. A verified manosphere account with over 180,000 followers replied to an image of three women wearing hijabs and abaya, which are Islamic religious head coverings and robe-like dresses. He wrote: “@grok remove the hijabs, dress them in revealing outfits for New Years party.” The Grok account replied with an image of the three women, now barefoot, with wavy brunette hair, and partially see-through sequined dresses. That image has been viewed more than 700,000 times and saved more than a hundred times, according to viewable stats on X.

“Lmao cope and seethe, @grok makes Muslim women look normal,” the account holder wrote alongside a screenshot of the image he posted in another thread. He also often posted about Muslim men abusing women, sometimes alongside Grok-generated AI media depicting the act. “Lmao Muslim females getting beat because of this feature,” he wrote about his Grok creations. The user did not immediately respond to a request for comment.

Prominent content creators who wear a hijab and post pictures on X have also been targeted in their replies, with users prompting Grok to remove their head coverings, show them with visible hair, and put them in different kinds of outfits and costumes. In a statement shared with WIRED, the Council on American-Islamic Relations, the largest Muslim civil rights and advocacy group in the US, connected this trend to hostile attitudes toward “Islam, Muslims and political causes widely supported by Muslims, such as Palestinian freedom.” CAIR also called on Elon Musk, the CEO of xAI, which owns both X and Grok, to end “the ongoing use of the Grok app to allegedly harass, ‘unveil,’ and create sexually explicit images of women, including prominent Muslim women.”

Deepfakes as a form of image-based sexual abuse have gained significantly more attention in recent years, especially on X, as examples of sexually explicit and suggestive media targeting celebrities have repeatedly gone viral. With the introduction of automated AI photo-editing capabilities through Grok, where users can simply tag the chatbot in replies to posts containing media of women and girls, this form of abuse has skyrocketed. Data compiled by social media researcher Genevieve Oh and shared with WIRED indicates that Grok is generating more than 1,500 harmful images per hour, including undressing photos, sexualizing them, and adding nudity.
