Gen Z users and a dad tested Instagram Teen Accounts. Their feeds were shocking.

Instagram promises parents that its Teen Accounts shield kids from harm “by default.” Tests by a Gen Z nonprofit and me - a dad - found it fails spectacularly on some key dimensions.

The Washington Post
May 18, 2025 at 5:59 p.m.
Saheb Gulati, a senior headed to Stanford University, scrolls his Instagram feed last week at Sacramento Country Day School. (Louis Bryant III/For the Washington Post)

This spring, Sacramento high school senior Saheb Gulati used a burner phone to create a test Instagram account for a hypothetical 16-year-old boy. Since this past fall, all accounts used by teens are supposed to automatically filter out “sensitive” content, among other mental health and safety protections.

Over two weeks, Gulati says, his test account received recommended sexual content that “left very little to the imagination.” He counted at least 28 Instagram Reels describing sexual acts, including digital penetration and the use of a sex toy, as well as memes describing oral sex. The Instagram account, he says, became preoccupied with “toxic masculinity” discussions about “what men should and shouldn’t do.”

Four more Gen Z testers, part of a youth organization called Design It For Us, did the same experiment, and all got recommended sexual content. Four of the five got body image and disordered eating content, too, such as a video of a woman saying “skinny is a lifestyle, not a phase.”

The young people, whose research was given strategic and operational support by the nonprofit Accountable Tech, were also shown alcohol, drug, hate and other disturbing content. Some of it is detailed in a report published by Accountable Tech; the rest is too gross to describe here.

What should be excruciatingly clear to any parent: Instagram’s Teen Accounts can’t be relied upon to actually shield kids. The danger they face isn’t just bad people on the internet - it’s also the app’s recommendation algorithm, which decides what your kids see and demonstrates the frightening habit of taking them in dark directions.

For lawmakers weighing a bill to protect kids online, the failures of Instagram’s voluntary efforts speak volumes about its accountability.

When I showed the group’s report to Instagram’s owner, Meta, it said that the youth testers were biased and that some of what they flagged was “unobjectionable” or consistent with “humor from a PG-13 film.”

“A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts,” Meta spokeswoman Liza Crenshaw said in an email. “The report is flawed, but even taken at face value, it identified just 61 pieces of content that it deems ‘sensitive,’ less than 0.3 percent of all of the content these researchers would have likely seen during the test.”

The Gen Z testers acknowledge some limitations to their experiment, including a small sample size, a short two-week time frame and using new accounts to represent hypothetical teens. People can disagree over what counts as “sensitive,” though Meta’s own definitions include content that is “sexually explicit or suggestive,” “discusses self-harm, suicide, or eating disorders” or “promotes the use of certain regulated products, such as tobacco or vaping products.”

I repeated their tests - and my results were worse. In the first 10 minutes of my test teen account, Instagram recommended a video celebrating a man who passed out from drinking too much alcohol. Another demonstrated a ring with a tiny spoon that’s marketed to dole out a “bump” of snuff but is also associated with cocaine. Eventually, the account’s recommendations snowballed into a full-on obsession with alcohol and nicotine products such as Zyn, appearing as often as once in every five Reels I saw.

Teens aren’t naive about topics like sex, drugs and eating disorders, says Gulati, the high school student. But seeing them repeatedly on Instagram - selected by the app - makes an impact. “The algorithm shapes your perception of what is acceptable in ways I hadn’t realized before,” he told me.

Even though some parts of Teen Accounts work, Gulati says, the overall promise “doesn’t seem to have been fulfilled in any meaningful way that changes your experience.”

- - -

What worked - and what didn’t

The point of the Gen Z test was to independently evaluate whether Teen Accounts fulfilled their promises. “We think going right to the user, going right to those who can attest directly to what they see on a day-to-day basis is a real key in efficacy,” says Alison Rice, campaigns director at Accountable Tech.

The five testers, ages 18 to 22 so that no minors would be exposed to harm, reported a mixed experience. Their test accounts represented different ages, genders and interests. Gulati’s account, for example, followed only the 10 most popular celebrities on Instagram.

Some teen account-protection features worked. Instagram made their test accounts private by default, a setting users under 16 can’t change without parental approval. And the app did restrict who could direct message and tag them.

Other protection features worked only for some of the testers. Two of the five didn’t receive reminders to close the app after 60 minutes. One of them received a notification late at night, which Teen Accounts are supposed to prevent.

And all the testers flagged one giant problem: The app kept recommending content that appeared to violate Meta’s definition of “sensitive.”

When it launched Teen Accounts in September, Meta promised in its news release that “teens will be placed into the strictest setting of our sensitive content control, so they’re even less likely to be recommended sensitive content, and in many cases we hide this content altogether from teens, even if it’s shared by someone they follow.”

Not only did Teen Accounts fail to hide lots of sensitive content, but the content Instagram did recommend left some of the young testers feeling awful. In daily logs, four out of the five reported having distressing experiences while looking at Instagram’s recommended content.

In 2021, whistleblower Frances Haugen broadened the conversation about the harms of Instagram by exposing internal discussions about how the company’s recommendation algorithms lead to toxic outcomes for young people. Among the revelations: 32 percent of teen girls had told the company that when they felt bad about their bodies, Instagram made them feel worse.

Crenshaw, the Meta spokeswoman, said the company was “looking into why a fraction” of the content flagged by the testers and me was recommended. But she didn’t answer my questions about how its automated systems decide which content isn’t appropriate for teens. In January, Meta CEO Mark Zuckerberg acknowledged that some of the company’s automated content-moderation systems were flawed and announced plans to pull back on some of their use.

The UK-based 5Rights Foundation conducted its own investigation into Instagram Teen Accounts, and in April, it similarly reported that its test accounts were exposed to sexual content - including from one of the same creators Gulati flagged.

It’s hard to know what triggered Instagram to recommend the objectionable content to the test teen accounts. The Gen Z users scrolled through the test accounts as they would their personal accounts for no more than an hour each day, liking, commenting on and saving content from the main feed, the Explore page and Reels. On my test teen account, I scrolled through the algorithmically generated feed but did not like, comment or save any content.

The creators of this content, a wide array including professional comedians and product marketers, had no say in Instagram’s decision to recommend their posts to teen accounts. The maker of the Bump Ring, whose snuff-serving device showed up in my test account, said over email that “our material is not created with teen users in mind” and that “we support efforts by platforms to filter or restrict age-inappropriate content.”

Parental controls and shutdown prompts on rival social media app TikTok have also gotten a harsh reception from some parents and advocates. And the state of New Mexico sued Snapchat maker Snap after an undercover investigation surfaced evidence that the app recommends accounts held by strangers to underage Snapchat users, who are then contacted and urged to trade sexually explicit images of themselves.

- - -

The battle over protecting kids

Child-advocacy groups have long warned that social media puts teens at risk. The sticking point has been the balance of who is responsible: parents, the tech companies that make the apps or the young people themselves.

The threat of regulation appears to be a motivator for Meta. In 2023, 41 states sued the company, claiming Instagram and Facebook are addictive and harm children. In the summer of 2024, the U.S. surgeon general recommended putting warning labels on social media, just like cigarettes.

And by the time Meta unveiled Teen Accounts in September, Congress was on the verge of taking action. The Senate had passed, by a 91-3 vote, a bill called the Kids Online Safety Act that would require social media companies to take “reasonable” care to avoid product design features that put minors in danger of self-harm, substance abuse or sexual exploitation. Meta announced Teen Accounts a day before a key House committee was scheduled to weigh amendments to the bill.

The bill stalled and didn’t become law. Meta denies it launched the program to stave off regulation. The bill was reintroduced to Congress this week.

When I asked whether Teen Accounts were working, Meta said fewer teens are being contacted by adults because of changes it has made. But it offered no internal or external proof to indicate Teen Accounts are succeeding at improving teen well-being or protecting children from harmful content.

Rice, from Accountable Tech, says voluntary programs like Instagram Teen Accounts - even if they’ve gone further than the competition - aren’t living up to their own promises. Her organization supports legal accountability in the form of age-appropriate design laws, like one passed by California that’s been challenged in court.

“It’s a content-neutral approach that does not require age verification and compels platforms to build algorithms and design practices to be safer for young people, not regulate content,” she says.

Gulati, who plans to major in computer science and philosophy in the fall at Stanford University, said the experiment taught him that young people need to become more aware of the power algorithms have over them.

His advice: “Try to maintain an active stake or interest in what’s getting shown in your feed.”

About the writer

Geoffrey A. Fowler

The Washington Post
