[Photo caption: Children's playground miniatures are seen in front of a displayed YouTube logo in this illustration taken April 4, 2023. REUTERS/Dado Ruvic/Illustration/File Photo]

On Wednesday, YouTube will begin testing a new age-verification system powered by an artificial intelligence (AI) model, with the goal of ensuring younger audiences don't view material intended for older audiences.

In a press release issued on July 29, the video platform explained its motive for the policy and how it plans to implement it.

“Starting August 13, 2025, we’ll begin rolling out an age estimation model to determine if a US-based user is under the age of 18. This will happen regardless of the birthdate you entered when creating your account. We’ll then use that to extend age-appropriate product experiences and protections to more teens (like enabling digital wellbeing tools and only showing non-personalized ads).”

The platform will use various criteria to determine whether a user is under 18, while allowing users whose age the model has incorrectly estimated to confirm it through "government ID, selfie, or a credit card."

“The age estimation model uses a variety of signals such as YouTube activity and longevity of the account,” the press release continued. “If we determine you’re under 18, you’ll be notified. As always, you’ll have the option to verify your age (through government ID, selfie, or a credit card) if you believe our age estimation model is incorrect.”

YouTube also detailed the changes viewers will see depending on whether they are identified as adults or minors.

“If we determine that a user is under 18, standard protections for teen accounts on YouTube will automatically be enabled. These protections are already applied for users who tell us they are under 18 when making their account,” the platform said.

Of note, YouTube will enable "digital wellbeing tools by default," including "take a break" and "bedtime" reminders. The release did not state whether dismissing these reminders will prevent the viewer from continuing to use YouTube.

The platform will also minimize “recommendations of videos with content that could be problematic if viewed in repetition.” Certain safeguards in this area will prevent the following types of videos from appearing on an underage user’s feed:

  • “Idealizing specific fitness levels or body weights
  • Featuring real-world social aggression (non-contact fights and intimidation)
  • Portrayal of delinquency or negative behaviors, such as cheating on a test, lying for personal benefit or participating in public pranks and stunts that negatively impact others.”