The EU's Digital Services Act offers a legal framework for addressing social media's mental health impact, but the gap between the tools the law provides and the transparency needed to use them raises a hard question: can regulation actually work when platforms control the data needed to enforce it?
The EU passed the Digital Services Act into law in 2022. Back then, the idea that a legal framework could hold social media platforms accountable for psychological damage felt almost theoretical. Today, that framework exists on paper, but the gap between what the law promises and what platforms actually reveal is becoming impossible to ignore.
The DSA's Mental Health Toolkit
The DSA's most demanding obligations fall on very large online platforms, those with more than 45 million monthly active users in the EU. For those platforms, the law imposes systemic risk assessment and mitigation obligations. Researchers Przemysław Pałka and Ewa Ilczuk argue these obligations can be understood through three 'mental goods': individual mental well-being, mental health as a component of public health, and the fundamental right to mental integrity.
This framing matters. It gives regulators a legal vocabulary for psychological damage without requiring a brand new statute. Numerous empirical studies already show that social media use is correlated with, and may in some cases cause, mental harms such as addiction, anxiety, depression, and diminished cognitive ability. The theoretical tools, in other words, have a real evidence base behind them.
The Transparency Problem
But here is where it gets complicated. You cannot regulate what you cannot see.
In August 2024, Meta closed down CrowdTangle, a tool used by researchers and journalists to trace the spread of information on Facebook and Instagram. That move directly undermined the ability of outside observers to study how content travels on the platforms the DSA is supposed to oversee. When the data goes dark, enforcement gets harder.
Who Watches the Watchers?
DSA enforcement is split between national Digital Services Coordinators and the European Commission, which oversees the largest platforms directly. The Commission can fine platforms up to 6% of their global annual turnover for breaking the rules. Those are serious penalties on paper.
Yet without independent researchers able to verify platform behavior, regulators are left relying heavily on what companies choose to disclose. Transparency gaps, broken ad repositories, and superficial systemic risk reports all weaken the enforcement chain. The law gives the Commission a stick. The platforms get to decide how much of the picture the Commission actually sees.
What Comes Next
In 2023, the European Parliament called on the European Commission to introduce new rules to combat social media-related mental harms. But Pałka and Ilczuk point out that it could take years before any new laws following that call become applicable. Other countries are moving too: Canada's Bill C-63, the Online Harms Act, was tabled in the House of Commons on May 30, 2024, though it focuses more on legal compliance than on measuring mental health outcomes.
The uncomfortable reality is that the DSA may already contain the legal architecture needed to address mental harm. The problem is not the absence of law. The problem is that platforms can limit what regulators and researchers know about the harm happening on their services.
So the real question is not whether the legal framework exists. It does. The question is whether any regulation can function when the companies being regulated hold the keys to the evidence. What would it actually take to force genuine transparency from platforms that profit from keeping their algorithms opaque?