Viewing Generative AI and children’s safety in the round
Research and recommendations for Government and technology companies
Children and young people are often the first to start using new technologies, including Generative (Gen) Artificial Intelligence (AI).1 Generative AI is a form of AI that produces (generates) new content, such as images and text.2 This report explores how Gen AI is impacting children’s safety and wellbeing online and offline.
Findings and recommendations in the report are drawn from:
- an analysis of research on Gen AI commissioned by the NSPCC and conducted by legal and technology consultancy AWO. The research included two phases of qualitative interviews and a workshop, capturing a wide range of professional voices from sectors including child safety, Gen AI development and policymaking.
- a consultation with a panel of 11 young people aged 13-16 from the NSPCC’s Voice of Online Youth.
- insights from Childline about children’s experiences with AI.
This research aimed to understand the range of harms that children experience from Gen AI, identify solutions to these harms and recommend necessary policy responses.
References
1. Ofcom (2023) Online nation 2023 report (PDF). London: Ofcom.
2. Cabinet Office, Central Digital and Data Office (2024) Generative AI framework for HMG. [Accessed 21/01/2025].
Key findings
Generative AI technology poses a variety of risks to children
While many are aware of the harm caused by AI-generated child sexual abuse material (AI-CSAM), Gen AI is also being used to bully, sexually harass, groom, extort, and mislead children.
NSPCC research has identified 27 solutions that address different aspects of these risks
These solutions span technical, educational, legislative, and policy changes that could be implemented to make Gen AI safer. They represent what is currently available; we want these to be taken as a baseline, and for more and better solutions to be developed in the future.
Recommendations
Companies must adopt a duty of care for children’s safety
Gen AI companies must prioritise the safety and rights of children in product design and development, focusing on risk assessments and identifying effective solutions.
Child protection needs to be central to AI legislation
The Government must pass legislation that holds Gen AI companies accountable for the safety of children and empowers regulatory bodies to enforce child protection measures.
Children should be at the heart of Gen AI decisions
Children’s needs must be central to the design, development, and deployment of Gen AI technologies. Decision-makers should engage with children to develop educational resources and guidance on safe Gen AI use.
The research and evidence base around Gen AI and child safety should be developed and promoted
The Government and regulatory bodies should invest in studies to better understand the impact of Gen AI and support the development of evidence-based policies.
Citation
Please cite as: NSPCC (2025) Viewing Generative AI and children’s safety in the round. London: NSPCC.