Social media age verification is full of risks and unclear rewards

Politicians across the aisle want social media users to submit to biometric scans and increased account monitoring, and to share their IDs — all in the name of children’s safety online. Bipartisan bills in Congress, like the Kids Online Safety Act (KOSA), and laws across the country are attempting to bar children from social media by requiring age verification for many, if not all, users.

Social media apps like Instagram and TikTok have long come under fire for potential harms to the children and teens using their platforms. A first-of-its-kind lawsuit against Meta and Google, parent companies of Instagram and YouTube, went to trial in Los Angeles Feb. 9, with the plaintiff’s attorney claiming the apps have “engineered addiction in children’s brains.”

However, there isn’t conclusive evidence that using these platforms alone negatively impacts children’s mental health, as it can be hard to separate the effects of social media from the background of other social ills, like climate change and rising authoritarianism. Nevertheless, a public and private push to curb or even ban teen social media use is underway, largely focused on age verification measures.


Some apps already employ some level of age verification, but on Monday, the video game-centered app Discord announced it would roll out age verification globally in March. The platform, which hosts public and private chat servers, will require users to submit to a biometric face scan or send a picture of a government ID card along with a selfie to prove their age. If age isn’t verified, users will get a “teen-by-default” experience that blocks access to age-restricted channels and imposes other safeguards.

According to Discord officials, some users won’t have to submit images or IDs, with their age instead determined through an “age inference model [that] uses account information such as account tenure, device and activity data, and aggregated, high-level patterns across Discord communities.” In other words, the platform will use the trove of data it already has on users to estimate age with predictive models, similar to an AI age verification system rolled out by YouTube last year.
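Discord hasn’t published how its model works, but a system like the one it describes could, in principle, score account signals with a simple statistical model. The sketch below is purely illustrative: every feature name, weight and threshold is invented for the example, not taken from Discord or YouTube.

```python
import math

# Hypothetical illustration only: Discord has not published its model.
# An age-inference system might combine coarse account signals with
# weights learned from labeled accounts, as in a logistic regression.

def infer_adult_probability(account_tenure_days: float,
                            adult_community_ratio: float,
                            late_night_activity_ratio: float) -> float:
    """Estimate P(user is an adult) from account signals.

    All features and weights here are invented for illustration.
    """
    z = (-2.0
         + 0.004 * account_tenure_days        # older accounts skew adult
         + 3.0 * adult_community_ratio        # share of time in 18+ spaces
         + 1.0 * late_night_activity_ratio)   # activity-pattern signal
    return 1.0 / (1.0 + math.exp(-z))         # logistic squash to [0, 1]

# A platform might only fall back to a face scan or ID check when the
# model is uncertain, i.e. the probability lands in a middle band:
p = infer_adult_probability(1500, 0.6, 0.4)
needs_id_check = 0.2 < p < 0.8
```

In this toy setup, a long-tenured account active in adult communities scores near 1.0 and would skip the ID check, while an ambiguous account would be routed to direct verification — which is roughly the two-tier behavior Discord and YouTube describe.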


Discord provided multiple assurances about the safety of users’ data in the age verification process, but just four months ago a massive hack of a third-party vendor compromised the data of 70,000 Discord users, exposing government ID photos and other personal information. The platform noted that the hacked customer service vendor is not part of the new rollout, but many users remain concerned about privacy.


“I would argue there are other ways that we could accomplish children’s safety goals online that don’t require scraping as much personal data from people and putting up as many barriers to adults who want to access those websites,” Ash Johnson, a senior policy manager at the Information Technology and Innovation Foundation, told Salon.

Meanwhile, the nonprofit Mental Health Coalition unveiled its Safe Online Standards (SOS) Tuesday, aiming to create “standards and ratings for kids’ mental health and social media to catalyze healthful engagement for kids online.” Major social media companies including Meta Platforms, Snapchat and TikTok have agreed to share information for evaluation. The goal of SOS is to impose standards on the social media industry much as the video game, music and television industries rely on independent rating systems that indicate the age appropriateness of content.

Dr. Dan Reidenberg, the founder and director of SOS, hopes these standards will help social media companies tailor policies to internet safety for teens and provide families relevant information to decide if their children should use these platforms.




The standards aim to “show how well social media and technology companies protect mental health, support well-being, and handle suicide-related content.” They were developed by an independent committee, without government involvement or input from tech companies, according to Reidenberg.


“We feel strongly that there needs to be some standards that tech is held accountable to,” he told Salon.

When it comes to the age verification issue, Reidenberg said “there are a lot of other concerns around that,” both with the accuracy of the technology and the safety of user data. However, he didn’t condemn the practice writ large.

Under the standards for digital literacy and well-being, SOS states social media companies should provide “‘on-ramps’ for developmentally informed, age-appropriate use.” The standards seem to recommend age verification through “quizzes and tasks” instead of direct proof through face scans or IDs.


Reidenberg said initial ratings should be available to the public by fall 2026, with a blue shield indicating the best score, a yellow caution for companies meeting partial standards and a red hand for apps that do not meet the standards.

As these tech companies begin evaluations under these private standards, governments across the world are already passing bans. Last week, Spain announced it was joining Australia with laws that prohibit social media use for anyone younger than 16. Greece, Turkey, France and about a dozen other European countries are eyeing similar bans — and the U.S. is not far behind, with some states already taking the initiative.


In 2025, the Electronic Frontier Foundation (EFF) filed nine friend-of-the-court briefs against various states — including California, Texas and Florida — imposing age restrictions for social media companies. The nonprofit, which advocates for “civil liberties in the digital world,” argues that these laws and similar laws proposed in Congress violate young people’s First Amendment rights, burden adults’ rights and jeopardize all users’ privacy and data security.

The largest justification for age restrictions and verification requirements on social media apps is that they harm children’s mental health. Erica Portnoy, a senior staff technologist at EFF, argues that “social media is just today’s satanic panic.”

“You’ll find that the greatest proponents of age verification are those whose greatest concern is control,” she said in a statement to Salon. “Age verification lets adults control the narratives that young people see — it’s not subtle.”

Portnoy cited 2023 comments from KOSA sponsor Sen. Marsha Blackburn, R-Tenn., who said a top priority was “protecting minor children from the transgender in this culture.” Blackburn cited the bill, which was originally co-sponsored by Senate Minority Leader Chuck Schumer, D-N.Y., as a solution.

“The internet lets young people connect outside of the controlled spheres of their parents’ worldview — and that’s the danger that proponents of age verification are terrified of,” Portnoy said.

An April 2025 Pew Research Center survey found 74 percent of teens said social platforms make them feel more connected to their friends and 63 percent said they give them a place to show off their creative side. The subheading for the survey results says “roughly 1 in 5” teens report that social media hurts their mental health, but Portnoy pointed out the actual figure shown lower in the results is 14 percent, which “is literally less than 1 in 7.”


Age verification requirements put more than just personal data at risk, with critics concerned these tools can prevent young people from anonymously seeking information about controversial topics like abortion access and LGBTQ+ resources. Making identity documentation a prerequisite for internet access also harms those who lack it, like immigrants or citizens without IDs, Johnson of the Information Technology and Innovation Foundation said.

“There are a lot of different ways to increase safety for kids and give them and their parents more tools to limit their online behavior and encourage more healthy use of the internet, without having to turn to things like age verification, [which] are a huge privacy and security and free speech risk,” she said.

“There are a lot of factors in modern society that can be detrimental to kids’ mental health,” Johnson added. “It’s not as simple as, if we take care of this one thing, kids are not gonna spend too much time online and they’re all gonna go back outside and start playing with each other again.”

