Better by design
How digital environments can support children's safety, agency, and well-being
Report on digital safety for children and youths
Recent US rulings against Meta and YouTube marked a turning point in how we think about digital child safety. For the first time, US courts held platforms liable not for the content that appeared on their services, but for how those services were built: design choices that made harm to children foreseeable. Courts penalized features seen as addictive, unsafe, or insufficiently protective. The legal rationale is significant: if harmful design can be punished, better design is possible. But what would better platform design actually look like?
That question is at the heart of a new paper published in Science by Sandra Cortesi and Urs Gasser, TUM professors and lab PIs at the TUM Think Tank. Drawing on findings from the year-long Frontiers in Digital Child Safety expert group, the paper identifies four design approaches that support children's rights, agency, and well-being.
Beyond the ban debate
"Our argument is not against regulation. Legal requirements are indispensable. However, we believe that policymakers should do more than just establish red lines. Rather, they should require providers to design their platforms and products in a child-friendly manner. That is more demanding than a blanket ban, but also more promising. After all, what we really want is for children and youths to be able to learn how to use media autonomously and in a way that has a positive impact on them."
- Urs Gasser, Co-author
Research across psychology, education, and computer science shows that blanket restrictions are unlikely to be effective, often erode children's trust, and do little to prepare young people for digital life. Broad prohibitions also flatten important differences across age groups, developmental stages, and types of risk. A stronger approach combines legal and regulatory pressure with design strategies that improve the environments children inhabit.
Four design approaches
Cortesi and Gasser draw on the year-long Frontiers in Digital Child Safety expert process, which brought together more than 40 researchers, child rights advocates, and practitioners and was coordinated by the TUM Think Tank, Harvard's Berkman Klein Center, and the University of Zurich. From this work, they identify four concrete and complementary design approaches.
Designing for trust and gradual autonomy
One of the clearest lessons from the evidence is that trust provides a more durable foundation for child safety than control. Surveillance tools and unilateral restrictions may reassure adults, but they erode children's trust and drive technology use underground. Collaborative approaches, where caregivers and children jointly adjust privacy and usage settings, foster open communication and greater adherence. And as children mature, systems should expand responsibility and freedom rather than impose the same restrictions across all ages.
Creating pathways for help-seeking and reporting
Even when children face serious risks in the digital environment, from cyberbullying to grooming, many stay silent. Fear of punishment, shame, or not being believed often keeps them from reaching out to adults when support is most needed. Anonymous reporting tools increase disclosure, and simple, multilingual interfaces outperform complex ones. And when peers seek help, others are more likely to follow. Design can make help-seeking a routine response rather than a last resort.
On-device supports: real-time guardrails and nudges
Risks often surface in the moment, whether unsolicited content, peer pressure, or predatory contact. On-device approaches provide real-time supports that respond as risks occur. Guardrails and nudges, such as a prompt before sharing an image or a reminder to take a break, can encourage healthier digital habits without undermining autonomy. Children are also more likely to accept these supports because they preserve choice rather than impose restrictions. AI-based detection tools can flag grooming or distress in real time, but they must be designed with safeguards against false positives, bias, and overreach.
Resilience through education, participation, and user-interface design
Restrictions limit access, but they do not build resilience. Safety is most effective when it is woven into everyday learning rather than treated as a one-off awareness exercise. And when children help co-design the tools they use, the results are often more relevant, engaging, and widely adopted. This approach recognizes children as capable partners whose resilience grows through preparation and participation.
Design and accountability together
According to Cortesi and Gasser, these design approaches will not take hold without stronger accountability structures. They point to structural obstacles including business incentives that favor engagement over safety, collective action problems that penalize first movers, and regulatory capacity gaps that leave safety claims untested. In response, they call for upstream policy interventions, including liability rules, transparency requirements with independent auditing, and regulatory sandboxes for testing safety approaches before full deployment. The authors argue that this combination of design and accountability is more demanding than blanket bans, but ultimately more promising because it addresses root causes, builds capacity, and can support digital environments that are not only safer, but richer and more empowering for children.
About the research
The paper, "Digital child safety at the frontier: From evidence to action", is published in Science. It draws on the work of the Frontiers in Digital Child Safety report, co-convened by the TUM Think Tank at the Munich School of Politics and Public Policy, Harvard's Berkman Klein Center for Internet & Society, and the University of Zurich.