Our consultation response to DCMS’s policy paper on ‘a pro-innovation approach to regulating AI’

By Sonia Livingstone, Kruakae Pothong and Ayça Atabey

In its recently published policy paper, ‘Establishing a pro-innovation approach to regulating AI’, the UK government proposes a pro-innovation framework underpinned by a set of cross-sectoral principles tailored to the specific characteristics of artificial intelligence (AI). Responding with our Media@LSE hats on, we advocated a child rights approach that highlights seven key points.

1. A missing link: embed children’s rights in the regulation of AI

Although we see the potential of the UK government’s cross-sectoral pro-innovation approach, we propose a rights-based framework to address the structural challenges that arise. Building on our earlier response to “Data: a new direction”, we argue that the cross-sectoral principles for regulating AI should be grounded in human rights, including children’s rights. They should be aligned with the UNCRC’s General Comment 25, the Council of Europe’s Guidelines to Respect, Protect and Fulfil the Rights of the Child in the Digital Environment and Guidelines on Children’s Data Protection in an Education Setting, the OECD’s Recommendation on Children in the Digital Environment, and UNICEF’s Manifesto on Good Data Governance for Children.

We believe that aligning the cross-cutting principles for regulating AI with internationally accepted child rights (on which the Age Appropriate Design Code (AADC) is based) would open broader markets to UK businesses beyond the UK’s borders.

2. Special protection for children

The government’s proposals lack clarity on how vulnerable groups, such as children, would receive special protection. Yet it is widely recognised by different jurisdictions, international organisations, and research centres that children need additional safeguards in their interaction with AI-driven technologies.

We recommend requiring businesses to conduct impact assessments based on 5Rights Foundation’s framework for algorithmic oversight, and to deploy the widely endorsed method of Child Rights Impact Assessment (CRIA), wherever AI-driven applications are likely to be accessed by children or collect and/or process data about children. Importantly, we advocate pathways to correct input data and to contest decisions made by AI about people. Redress must also be made easily accessible to those affected, including children.

3. Context matters

We agree in principle that AI should be regulated “based on its use and the impact it has on individuals, groups and businesses within a particular context.” But we do not agree with delegating responsibility for designing and implementing proportionate regulatory responses to sector-specific regulators. Such delegation risks further confusion over the interpretation and application of the regulatory principles.

This confusion is already observed in the contested interpretation of the applicable scope of the AADC in the education context, putting UK children’s rights at risk, as documented in the Digital Futures Commission’s recent EdTech report on Google Classroom and ClassDojo. The report flagged pressing problems of data governance in UK schools, noting that other countries have already acted to restrict certain uses of EdTech or to renegotiate data processing with EdTech companies. In relation to EdTech, therefore, AI should be regulated by a regulator with expertise specific to the education sector.

4. Cross-sectoral principles

The proposed cross-sectoral principles acknowledge certain elements of child rights, for example, non-discrimination and fairness. But the principles as they stand omit others, such as protection from the risk of commercial exploitation posed by AI-driven applications, because they are not underpinned by a holistic framework that includes practical ways of balancing different elements of human and child rights. We therefore strongly recommend that the government realign its cross-sectoral principles with existing rights-based principles such as those of the AADC.

5. Implementation of the UK government’s approach to regulating AI

There are still many unknowns about the short- and long-term impacts AI can have on individuals and about the scale of the risks. The government should therefore take an ex-ante approach to AI regulation, grounded in rights-based principles. While evidence of benefits to individuals, particularly to vulnerable groups including children, remains thin, it would be reckless to regulate AI applications only ex post. An ex-ante approach should prioritise embedding rights-based principles into AI applications by design.

6. Challenges for businesses operating on a global scale

In today’s data-driven world, cross-border trade and international cooperation are enabled by common frameworks and standards that promote the same level of protection (e.g., cross-border data transfer frameworks). The proposed approach has no statutory footing and thus lacks regulatory certainty. It also deviates from global trends in AI regulation, which in turn undermines UK businesses’ market opportunities beyond the UK’s borders.

Several jurisdictions and international frameworks have started to regulate AI and to provide standards and guidance (e.g., the Council of Europe’s framework on AI, intended for ratification in 2024, and initiatives in Canada, among others). These frameworks already contribute to a cross-border understanding of a global approach to regulating AI and generally put a strong emphasis on the potential impacts of AI on human rights. In comparison, the UK government’s approach is weak: it fails to address the most debated aspects of AI (namely, mitigating risks to human rights and freedoms, including children’s rights) and lacks a legislative basis for its focus on responsible innovation in AI.

7. Monitoring the effectiveness of AI governance

Finally, we refer to the 5Rights Foundation report ‘Shedding light on AI – A framework for algorithmic oversight’, which sets out four steps of AI oversight (the ‘4 Is’) to help businesses across different sectors mitigate the harmful impacts of AI systems on children. The framework also gives regulators a clear way to inquire, analyse and assess whether a system conforms to standards, and can help them develop practical insights to support monitoring.

On 4 October, the White House Office of Science and Technology Policy released its Blueprint for an AI Bill of Rights to help guide the design, development, and deployment of automated systems. It is designed for use by a range of actors, including parents, who could “use the framework as a set of questions to ask school administrators about what protections exist for their children”. This goal reflects the needs of today’s digital world, in which parents and other actors in the school ecosystem are left to navigate complex data governance arrangements in schools.

On 17 October, the Council of Europe published its report on AI and education, highlighting the need to ensure that “AI empowers and not overpowers educators and learners, and that future developments and practices are genuinely for the common good.”

Given the growing calls for AI regulation that empowers people by protecting their rights, we call on the UK government to ensure children’s best interests and rights are carefully considered in any efforts to establish frameworks to regulate AI.

This blog is part of the Guidance for Innovators series.