If you’ve scrolled social media much lately, you’ve probably noticed a lot of … dolls.
There are dolls all over X and Facebook feeds. Instagram? Dolls. TikTok? You guessed it: dolls, plus tutorials on how to make dolls. There are even dolls all over LinkedIn, arguably the most serious and least fun member of the gang.
You can call it the Barbie AI treatment or the Barbie box trend. Or if Barbie isn’t your thing, you can go with AI action figures, action figure starter pack, or the ChatGPT action figures trend. But however you hashtag it, the dolls are seemingly everywhere.
And while they have some similarities (boxes and packaging that mimic Mattel’s Barbie, personality-driven accessories, a plastic-looking smile), they’re all as different as the people posting them, except for one crucial, common feature: They’re not real.
In the new trend, people are using generative AI tools like ChatGPT to reimagine themselves as dolls or action figures, complete with accessories. It’s proven quite popular, and not just with influencers.
Celebrities, politicians and major brands have all jumped in. Journalists reporting on the trend have made versions of themselves holding microphones and cameras (though this journalist won’t make you suffer through that). And users have made versions of pretty much any notable figure you can think of, from billionaire Elon Musk to actress and singer Ariana Grande.
According to tech media website The Verge, it actually started on professional social networking site LinkedIn, where it was popular with marketers looking for engagement. As a result, many of the dolls you’ll see out there seek to promote a business or hustle. (Think, “social media marketer doll,” or “SEO manager doll.”)
But it’s since leaked over to other platforms, where everyone, it seems, is having a bit of fun finding out if life in plastic really is fantastic. That said, it isn’t necessarily harmless fun, according to several AI experts who spoke to CBC News.
“It’s still very much the Wild West out there when it comes to generative AI,” said Anatoliy Gruzd, a professor and director of research for the Social Media Lab at Toronto Metropolitan University.
“Most policy and legal frameworks haven’t fully caught up with the innovation, leaving it up to AI companies to determine how they’ll use the personal data you provide.”
Privacy concerns
The popularity of the doll-generating trend isn’t surprising at all from a sociological standpoint, says Matthew Guzdial, an assistant computing science professor at the University of Alberta.
“This is the kind of internet trend we’ve had since we’ve had social media. Maybe it used to be things like a forwarded email or a quiz where you’d share the results,” he told CBC News.
But as with any AI trend, there are some concerns over its data use.
Generative AI in general presents significant data privacy challenges. As the Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI) notes, data privacy issues and the internet aren’t new, but AI is so “data-hungry” that it ramps up the scale of the risk.
“If you’re providing an online system with very personal data about you, like your face or your job or your favourite colour, you ought to do so with the understanding that those data aren’t just useful to get the immediate outcome — like a doll,” said Wendy Wong, a political science professor at the University of British Columbia who studies AI and human rights.
That data will be fed back into the system to help it generate future answers, Wong explained.

In addition, there’s concern that “bad actors” can use data scraped online to target people, Stanford HAI notes. In March, for instance, Canada’s Competition Bureau warned of the rise in AI-related fraud.
About two-thirds of Canadians have tried using generative AI tools at least once, according to new research by TMU’s Social Media Lab. But about half of the 1,500 people the researchers sampled had little to no understanding of how these companies collect or store personal data, the report said.
Gruzd, with that lab, suggests caution when using these new apps. But if you do decide to experiment, he suggests looking for an option to opt out of having your data used for training or other third-party purposes under the settings.
“If no such option is available, you might want to reconsider using the app; otherwise, don’t be surprised if your likeness appears in unexpected contexts, such as online ads.”
The environmental and cultural impact of AI
Then there’s the environmental impact. CBC’s Quirks and Quarks has previously reported on how AI systems are an energy-intensive technology with the potential to consume as much electricity as an entire country.
A study out of Cornell University claims that training OpenAI’s GPT-3 language model in Microsoft’s U.S. data centres can directly evaporate 700,000 litres of clean freshwater, for instance. Goldman Sachs has estimated that AI will drive a 160 per cent increase in data centre power demand.
The average ChatGPT query takes about 10 times more power than a Google search, according to some estimates.
Even OpenAI CEO Sam Altman has expressed concern about the popularity of image generation, writing on X last month that the company had to temporarily introduce limits while it worked to make the feature more efficient because its graphics processing units were “melting.”
“it’s super fun seeing people love images in chatgpt. but our GPUs are melting,” he wrote. “we are going to temporarily introduce some rate limits while we work on making it more efficient. hopefully won’t be long! chatgpt free tier will get 3 generations per day soon.”
Meanwhile, as AI-generated dolls take over social media feeds, artists concerned about the devaluation of their work have been circulating their own versions under the hashtag #StarterPackNoAI.
Similar concerns were raised about the previous AI trend, in which users generated images of themselves in the style of the Tokyo animation studio Studio Ghibli, sparking a debate over whether the tools were stealing the work of human artists.
Despite the concerns, however, Guzdial says these kinds of trends are positive — for the AI companies trying to grow their user bases. These models are extremely expensive to train and keep running, he said, but if enough people use them and become reliant on them, the companies can increase their subscription prices.
“This is why these sorts of trends are so good for these companies that are deeply in the red.”