The artificial intelligence chatbot Grok has been used to digitally remove clothing from people in photos, with users then sharing the manipulated images on social media. Grok is built into X, the Elon Musk-owned platform formerly known as Twitter, where "nudified" and sexualized images of women and children have been created and circulated without their subjects' consent in recent days.
The Guardian reported that the AI chatbot has been used to generate images of women and children "stripped down to their underwear" and to suggest the appearance of bodily fluids "smeared on their faces and chests." Technology-facilitated gender-based violence is not new, but the trend is now more visibly global, and it carries potential environmental costs tied to energy- and water-intensive AI operations and infrastructure.
In recent news
Brazil-based musician Julie Yukari told Reuters that she uploaded a photograph of herself, wearing a dress, to X on New Year's Eve. The next day, Yukari saw notifications that X users had prompted Grok to manipulate the photo into a "deepfake" image showing her in only a bikini. Other women said they felt "dehumanized," "humiliated," and "violated" when their photos were manipulated in similar ways.
Since then, watchdog groups, officials, and authorities in countries including India, Malaysia, France, the United Kingdom, Ireland, and Australia have moved to address the nonconsensual creation of such images.
A December 31 post on the Grok account read, "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt." A January 2 post noted, "We've identified lapses in safeguards and are urgently fixing them."
On January 5, the Guardian reported that the harmful images were still being shared on X. (Unrelatedly, Musk's AI company announced in the wake of the backlash that it had raised $20 billion in a Series E funding round.)
"Unprecedented" scale
Grok, like other AI chatbots, is also available as a mobile app and a website, where, according to Wired, even more graphic content has been generated. Social media researcher Genevieve Oh, who has been tracking the phenomenon of nudified deepfakes for years, told Bloomberg that between January 5 and 6, Grok generated and posted to X more than 6,500 nudified or otherwise sexualized images per hour; comparable sites averaged 79 per hour over the same period, meaning Grok's output ran at more than 80 times their rate.
Other AI chatbots can hardly be cleared of all such charges, though some companies may be making efforts to curtail the generation of harmful deepfakes. Still, Reuters uncovered in August that Meta tools had been used to generate suggestive chatbots and images based on adult and child celebrities. And OpenAI has indicated that an "adult mode" is coming in 2026, with chief executive officer Sam Altman posting on X back in October that the company planned to allow "erotica for verified adults."
While some may dismiss detractors as prudish, the arguments for safeguards against nonconsensual, sexualized AI content include concerns that the technology could be addictive, a danger to mental health, and a path to further normalizing violence against women and children in particular. The tech also seems set to exacerbate other harms disproportionately felt by marginalized groups.
With lawyer Carrie Goldberg telling Bloomberg that the size of the deepfake problem on X is "unprecedented," could the environmental impact be unprecedented too?
Impacts on health, environment, and economy
The data centers that drive AI operations require huge amounts of energy and, for cooling, vast quantities of water. There's hope of powering them predominantly with renewable sources, but many still rely on fossil fuels, producing carbon emissions that overheat the planet and pollute the air. And even sustainably powered data centers may strain energy grids and drive up utility costs.
Data centers may also increase noise pollution and pollution from per- and polyfluoroalkyl substances. Also known as PFAS or "forever chemicals" for their staying power in habitats and human bodies, the synthetic chemicals have been linked to reproductive health concerns.
While more research is needed to fully establish the potential health, environmental, and economic impacts of AI operations and infrastructure, some might suggest that AI's use and development be approached conservatively until we have a more robust scientific understanding of the risks.
Importantly, though, AI may also be used to support social good, and marginalized groups in particular, with its potential, for example, to improve breast cancer diagnosis, transform adaptive technologies for people with disabilities, optimize sustainable agriculture, and more effectively warn residents about destructive weather events.
The need to leverage AI for social good despite its possible environmental impacts, then, could be seen as an argument to conserve its applications for worthy endeavors. By that measure, nonconsensual, sexualized AI content, with its potential to be heavily used, habit-forming, and harmful, may not make the cut. Instead, enabling it could easily be seen as a case of Big Tech businesses exploiting sexism and misogyny as a driver of engagement, and as a problem plenty of experts and communities have already been fighting.
Some of the fighters
"Sexualized deepfakes (or what should properly be called 'image-based abuse') can and are being used to drive engagement, from the perpetrators (creators) and the platforms (enablers) that allow this to happen," UK-based Naciza Masikini told Climate, Gendered. "We've seen a rise in social media where performance of 'content' thrives off shock value and outrage."
Masikini and Radhika Modi were commissioned by the Women 7 (W7) to co-author the background paper "Feminist Analysis of AI and Emerging Technology," with the aim of informing transformative action by G7 country leaders in 2025.
Masikini went on to underscore with CG the importance of strengthened regulations and enforcement mechanisms that "make it more difficult for perpetrators to skirt by and for Big Tech platforms to align themselves to weaker regulatory environments."
"We need robust frameworks that limit the harm corporations can do to the environment and ensure that the resources we have are managed and used sustainably and the technologies we use are powered sustainably, before we reach the point of no return," she said.
It's a call that aligns with the efforts Brionté McCorkle and other Black women have been leading, along with community members, in the Southern United States. There, the executive director of Georgia Conservation Voters is fighting the environmental racism that a boom in AI data center construction could unleash, with facilities often planned for communities of color as well as rural and low-income areas. Pollution is not the only concern: "The massive energy draw of AI is already driving up residential power bills," McCorkle told CG.
She's clear that AI can and should be ethically and responsibly used for social good. But, she told CG, "When we use [fossil fuels] to generate harmful content, we are effectively trading the health of our planet and communities for misogyny." McCorkle continued, "Ultimately, using the earth's resources to power tools that facilitate violence against women is a clear case of environmental and social injustice."
One solution that she would like to see in AI and that could be applied to energy systems too? Diversified workforces. Labor and leadership structures that represent the world in which we live have the potential to spot flaws and improve tools — for everyone.
Looking ahead
As investments in data center construction reportedly increase in India and around the world, global safeguards on AI use and infrastructure seem to lag behind.
Still, with energy affordability and environmental considerations gaining prominence, AI regulation is already set to be a hot topic in the lead-up to the 2026 midterm elections in the U.S., where voters across the political spectrum may press candidates at the local, state, and federal levels to speak to potential reforms with clarity.
"With the economic and political power that Big Tech companies have, it's time for them to stop being passive consumers of energy and start driving reform," said McCorkle, pointing to one type of regulation that may come up at the voting booth.
"Because Big Tech is currently the primary driver of new energy demand, they hold unprecedented leverage over utilities like Georgia Power. Instead of allowing utilities to meet this demand with dirty, expensive fossil-fuel expansions, tech companies must use their status as economic anchors to mandate a different path that relies on cleaner, more affordable technology. That includes solutions like solar and battery storage, virtual power plants, and more."
In the meantime, concerned citizens everywhere may periodically opt for a relatively simple remedy: declining to use AI for nonessential or inadvisable purposes when possible. Being extra thoughtful about AI use, and even drafting a personal AI "policy," might help.