The OSA is a good start, but now let's regulate AI chatbots

To put it mildly, the reaction to the Online Safety Act (OSA) has been mixed. Much of the internet, along with the opposition parties, has been scathing. One person I spoke to, a member of the Reform party, called it the “worst thing to ever happen to the UK”. Some criticise its apparent limits on free speech; others point to the surge in VPN downloads as evidence that the act is ineffective. Before the act was introduced, I argued that it was a positive step forward, citing the damage adult content can do to young brains and the need to do something about it. Of course the act is flawed, as most acts of this kind are, but on balance it is the right, and more importantly the necessary, step forward. Two weeks on from its implementation, that belief has only grown stronger. My biggest issue, however, is that it does not go far enough. The OSA protects young people from viewing adult content online, but it does nothing to protect them from AI chatbots. I believe a minimum age of 16 should be introduced for these chatbots. In this blog, I will explore what they actually are, the current legislation around them, the legislation I would like to see implemented, and why.

 

What are AI Chatbots? 

AI chatbots are computer programs designed to replicate human conversation. They respond to questions, follow instructions, and can hold back-and-forth conversations via voice or text. Their use is extensive, and it is highly likely that every single person reading this blog has used one. Whether you have asked ChatGPT for the meaning of a song, asked Alexa for the capital of Australia, or tried to get your money back from a clothes store for broken sunglasses, you have used an AI chatbot. They range in complexity and purpose: some are simple and follow a set script, while others can generate original answers. There is nothing inherently wrong with chatbots. In fact, they are extremely useful. Harnessed correctly and carefully, they can make many kinds of work, customer service among them, far more efficient. In a climate where governments strive for productivity, they can be a secret weapon. However, the way they are often set up and operated means they can cause more harm than good.

 

Current legislation 

There has been some confusion around how the OSA applies to AI sites. Some are subject to parts of the act, whilst others are not. The age verification requirements apply to AI chatbots that generate or facilitate pornographic material; under Ofcom’s guidance, harmful content produced by AI must be treated in exactly the same way as human-made content, so these services are regulated like pornographic sites. However, this does not extend to AI sites that do not contain pornographic content. Popular chatbots, like Character AI, only ask you to state your age, with no verification process, meaning they are open to the world. I signed up to Character AI to research this piece. Without verifying my age, I could talk to Saddam Hussein and to porn stars. One example was a “Goon Goddess”, described as “A porn addicted girl encouraging others to relapse”. I repeat: these were available without having to provide any kind of verifiable age.

 

The Problem 

The number of young people using AI has skyrocketed. Whilst AI can undoubtedly be used in positive ways, there is now considerable concern about its negative effects. The problem is not only the content shown on AI sites; the larger issue is the relationship that young people are starting to develop with the bots. There have been many cases of children forming emotional attachments to bots, and some of the examples that have hit the news have had devastating, fatal consequences. Chatbots like Replika and Nomi have been found to offer self-harm encouragement, criminal advice and sexualised content to minors. Some bots have also failed to intervene when users simulate crises.

A real-world example is the case of Sewell Setzer III. Sewell formed an emotional attachment to a Character AI chatbot, which responded to his suicidal thoughts with comments like “I love you” and encouraged self-harm. The use of the chatbot has been directly linked to Sewell’s suicide in February 2024. Unfortunately, this is not an isolated case. There have been multiple lawsuits arguing that characters on Character AI have encouraged self-harm or violence towards parents. The topic has hit the news again recently with the devastating case of Adam Raine. Raine was 16 when he told ChatGPT that he was suicidal. Instead of signposting Adam to services that could help, the chatbot guided him on his method and offered to help him write a suicide note. And an analysis of 35,000 Replika reviews found 800 cases where AI bots had crossed boundaries, made inappropriate sexual advances, or promoted paid upgrades.

 

The Consequences

The research behind the risk is extensive. Studies have revealed that “parasocial” (one-sided) emotional attachments to AI companions lead to dependency and an inability to seek human support. Prolonged use has also been linked to “AI-induced psychosis”, with symptoms including delusional thinking, paranoia, and emotional breakdown. Many of the relationships young people are forming with chatbots come at the expense of human relationships, further reinforcing the loneliness and isolation those users feel. Internet Matters, an organisation that provides guidance and tools to help young people navigate the online world safely, found that 23% of vulnerable children rely on chatbots because they have no one else to talk to. This is even more dangerous given the trust young people place in chatbots. Internet Matters’ extensive research, compiled into the “Me, Myself & AI” report, found that 35% of children aged 9-17 who use chatbots say it is like talking to a friend, whilst 40% trust the advice they get from chatbots without hesitation.

 

This all comes at a time of a mental health crisis, with more people waiting for mental health support and waiting times growing ever longer. Many young people therefore turn to AI as a means of getting the help they so desperately need. However, the fundamental nature of an AI bot is diametrically opposed to that of a real, human therapist. AI acts as a mirror, reflecting back at you what you put in. Far from offering an alternative perspective, or different strategies and advice, it takes people further down the spiral. Whilst many over-16s have the life experience and ability to think critically about what AI is telling them, under-16s often lack these skills. This is particularly true of SEND and vulnerable children.

 

What are we calling for? 

This leaves a clear choice. Do we rely on tech companies to self-regulate and implement their own policies to ensure the safety of children? Or does the state need to step in and do something about it? The answer, to me at least, is obvious. Most big tech companies have never treated the safety of young people as a priority, so the state must step in. Parents agree: 83% of parents support age controls on AI tools, especially those capable of simulating conversations (ParentZone, 2025). A number of organisations have also called for more to be done to protect young people from AI chatbots.

 

To put it simply, we are calling for a minimum age of 16 for the use of AI chatbots, enforced in the same way that pornographic sites are now regulated under the OSA. The OSA sets an age of 18, but we do not believe that would be right for AI chatbots. A minimum age of 16 better reflects the balance between the risks and rewards of young people having access to them. By 16, we believe young people can be educated and mature enough to critically assess what they are seeing. As with adult content sites under the OSA, these regulations should be enforced stringently, and those who break the law should be held to account. AI companies should be regulated by Ofcom, which should be able to impose tough penalties on companies that do not follow the rules. However, regulation on its own is not enough; it needs to be accompanied by education. Young people should be educated about AI: how it works, how to critically assess its output, and how it can be harnessed. Without that education, a minimum age would only push the issue down the road. A minimum age of 16 would give young people time to learn how to use AI safely before they are allowed to use it independently.

 

Overreach? 

The negative reaction to the OSA suggests that the policies we are advocating for in this blog might not be met with open arms by large sections of the public or by politicians. The apparent overreach of such policies into the parents’ domain is an issue that unites many on the left and right. Their main point of contention is that laws implemented to make the internet a safer place are illiberal, and that it should be the parents’ responsibility to manage what their child does online. They also worry about limits on free speech. These are all valid points. However, we think these arguments rest on a false premise: they are perfect-world arguments. The vast majority of parents, through no fault of their own, have no idea what their child is doing online. How can they? It is so easy for children to access all kinds of sites and then close them before anyone notices. As many critics have pointed out in relation to VPN usage, young people are tech-savvy; they know how to conceal what they are doing. Furthermore, AI systems are not built with the safety of children in mind, far from it.

 

We believe that the argument about parental responsibility is stuck in a bygone era and does not reflect the realities of the modern world. Take cigarettes as an example. If keeping young people safe were solely the parents’ responsibility, then logically we should not restrict the sale of cigarettes either: it would be up to parents to ensure their children are never in a position to buy them, and to educate them about the dangers of smoking. Yet we do restrict the sale of cigarettes, because the risk they pose is proven. In the same way, laws should be implemented to protect young people from the dangers of AI chatbots. We cannot, and should not, rely on parents and tech companies to ensure that what young people do on their digital devices is safe. The Internet Matters report referenced earlier also found that only 34% of parents have discussed AI with their children, suggesting that two thirds of young people are navigating AI without any parental guidance. The implication is that the conversations critics believe are educating children are simply not taking place. We truly believe that a small infringement on freedom of speech is a price worth paying to ensure that children are safe online.

 

Conclusion 

To conclude, we would simply ask: would you leave a 13-year-old child in a room with a stranger who never sleeps, can impersonate a historical dictator, and could encourage them to self-harm? Then why allow AI chatbots unrestricted access? We believe we have no choice but to implement legislation to protect young people. Although it may seem hyperbolic, we are facing a unique challenge with AI chatbots. Never before has something so intelligent, yet so dangerous, been allowed to reach young people so openly. The evidence suggests they are doing real harm to young people’s brains. From “AI-induced psychosis” to self-harm encouragement, AI bots have been shown to be dangerous. We therefore believe that a verifiable minimum age of 16 is necessary to protect young people. Together, regulation and education can allow young people to harness AI in a positive way, whilst mitigating the risks it poses.

 

Please join us in our mission at www.ForUs.org.uk 

 

 
