How should we approach the regulation of social media and the risk of harm to children?

In the UK legal system, there are the same two main standards of proof: “beyond reasonable doubt” in criminal cases, and “on the balance of probabilities” in civil cases.

However, there are also many examples of modified standards of proof where a specific situation requires one. For example, Fairchild relaxes the usual test for causation in negligence claims where medical science cannot determine which of several exposures to a harmful agent actually caused a disease (such as mesothelioma caused by multiple exposures to asbestos). It allows a claimant to establish causation simply by demonstrating that a defendant's actions materially increased the risk of developing the disease.

Equally, in UK health and safety law, whilst the burden is on the prosecution to prove that an employee and/or a contractor was exposed to a material risk of harm, once they have done so, the burden then shifts to the employer to prove that it took all reasonably practicable steps to reduce the risk. That involves balancing the risk of harm against the cost, time, and effort (the burden) required to eliminate or reduce that risk.

So how then should one approach the regulation of social media and the risk of harm to children?

In my view, we should not even be talking about having to prove causation to the lower civil standard. If there is even a material risk of harm to children, and the cost and burden of reducing it are minimal when compared to the possible benefits to the children (not to the media companies), then the platforms should be required to act positively to reduce that risk, starting with changing their inherently addictive design.

What might reasonably practicable steps to reduce the risk look like?

⁃ Age verification for setting up and logging onto social media accounts;

⁃ Designing these platforms for kids, not for harvesting their data and their attention;

⁃ Time prompts and/or automatic log-out once a set amount of time has been spent on a platform (a rough sketch of what this might look like follows this list);

⁃ Human (IRL) content moderators for content aimed at under-18 accounts, and automatic removal of harmful content;

⁃ Warning labels on “news items” stating that “this has not been fact-checked”;

⁃ Warning labels on “videos” stating that “this content may be AI generated”.
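By way of illustration only, here is a minimal sketch of what an automatic time-prompt and log-out check for an under-18 account might look like. The thresholds, function name, and logic are assumptions made for the example, not any platform's actual implementation.

```python
# Purely illustrative sketch: a hypothetical session time-limit check for a minor's account.
# SESSION_LIMIT, PROMPT_AT, and check_session are invented names, not a real platform API.
from datetime import datetime, timedelta

SESSION_LIMIT = timedelta(minutes=60)  # assumed hard cap before automatic log-out
PROMPT_AT = timedelta(minutes=45)      # assumed point at which a time prompt is shown

def check_session(session_started_at: datetime, now: datetime) -> str:
    """Return the action the platform should take for this session."""
    elapsed = now - session_started_at
    if elapsed >= SESSION_LIMIT:
        return "log_out"            # automatic log-out once the limit is reached
    if elapsed >= PROMPT_AT:
        return "show_time_prompt"   # gentle nudge before the hard limit
    return "continue"

# Example: a session that started 50 minutes ago would trigger "show_time_prompt".
```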

I’m sure there are many more.

Would this “solve” all the problems associated with social media and children? No. But would it be a really good start, shifting the burden of regulation from the parents and the children back onto Big Tech to do (much) better? Yes.

And yes, I’m well aware that this is “supposed” to be what the Online Safety Act will do… but find me someone who actually thinks it will achieve this…

For far superior insights on all things digital wellbeing, check out the blog posts from our Gen-Z FlippGen Digital Rebels here: https://www.flippgen.com/blog
