The world is getting tougher on kids’ online safety in 2026

The new year is a time for resolutions. This year, governments, platforms, and campaigners all seem to have hit on the same ones: Children should spend less time online, and companies should know exactly how old their users are.

From TikTok’s infinite scroll to chatbots like xAI’s Grok that can spin up uncensored answers to almost any question in seconds, addictive and inappropriate online options leave legislators and regulators worried. The result is a new kind of arms race: Lawmakers, often spooked by headlines about mental health, extremism, or sexual exploitation, are turning to age gates, usage caps, and outright bans as solutions to social media’s problems. Just in the past week, Grok has become Exhibit A in the debate over harmful content after being used to digitally undress people in images, while states consider or enact bans, blocks, and time limits on tech use.

“Right now, the regulatory debate seems to exclusively focus on how certain internet services are net negatives, and banning access to minors to such services,” says Catalina Goanta, associate professor in private law and technology at Utrecht University in the Netherlands. That black-and-white approach is easy for politicians to parse, but doesn’t necessarily capture the nuance involved in tech and its potential for good. “The scientific debate shows us a much more nuanced landscape of what can be harmful to minors, and that will depend on so many more aspects than just a child having a phone in their hands,” says Goanta.

Legislators are moving quickly to throw a protective shield around younger users. A Texas law set to take effect in December 2025 would have required Apple and Google to verify users’ ages and obtain parental consent for minors’ app downloads, but it was blocked in court just before Christmas.

Meanwhile, as outright bans are being blocked, states are pushing forward with rules that cap social media access. Virginia’s default one-hour daily cap for under-16s was launched with a requirement for “commercially reasonable” age checks. However, it has already been challenged in court by a lawsuit filed by NetChoice, an association that seeks to “make the Internet safe for free enterprise and free expression.” The group, which counts Amazon, Google, Meta, and OpenAI among its members, says imposing a time block on social media is like limiting the ability to read books or watch documentaries.

“All of the laws have been challenged, and the court’s ruling on the Texas law doesn’t bode well for the other state laws,” says Adam Kovacevich, founder and CEO of the Chamber of Progress, which he describes as “a center-left tech industry policy coalition.”

But some of this tough talk, he says, has been helped along by big tech firms themselves: “It’s important to keep in mind that the app store age verification bills have been written and advanced by Meta, largely as a way of getting themselves from defense onto offense.”

The Texas law is just one of many cropping up around the United States—and around the world. Across the Atlantic, France is pursuing an Australia-style ban on social media for under-15s this year, while the U.K.’s official opposition party, the Conservatives, has also backed a social media ban for under-16s.

That court challenge augurs what’s to come in 2026, reckons Kovacevich. “Legislators keep pushing and pushing with age verification mandates, warning labels, and design mandates, and they keep running into the same two buzzsaws again and again,” he says: “Users’ privacy rights and the First Amendment.”

The legislative surge is part of a broader tech temperance movement aimed at social media, apps, and AI. In the U.K., the Online Safety Act’s child-safety provisions came into practical effect in July 2025, requiring platforms likely to be accessed by children to implement “highly effective” age-assurance measures and shield young users from content promoting self-harm, suicide, violence, and pornography.

Grok presents the law’s first big test for the body in charge of enforcing it, communications regulator Ofcom. Across the European Union, the Digital Services Act’s rules on minors’ data and recommender systems are also tightening. The question now is whether courts—and users—will tolerate the friction these laws create.

“Regulators have to resolve an inherent tension,” says Goanta. “Do we want children to have agency over their access to and conduct on the internet (the children’s rights narrative)? Or do we consider that they have limited capacity because they are not yet fully developed, and their guardians get to make decisions for them?” She points out that there are plenty of possible solutions between those extremes. “But the resulting spectrum should be the focus of debates, and not moral panics.”


source https://www.fastcompany.com/91474179/the-world-is-getting-tougher-on-kids-online-safety-in-2026


Published by Veterans Support Syndicate

Veterans Support Syndicate is a partner-centric organization that unites with diverse networks to elevate the quality of life for U.S. veterans nationwide. Leveraging deep collaborative efforts, they drive impact through Zen Force, a holistic virtual team providing mental health advocacy and resources. They also champion economic independence via VetBiz Resources, supporting veteran entrepreneurs through launch and growth. Together, they ensure those who served receive the support they deserve.
