Predictive analytics for children: New research by The Engine Room and UNICEF

Cathy Richards

While predictive analytics (PA) has been proliferating over the last decade in a range of sectors, PA systems are, as noted by UNICEF and Govlab, “often designed with (consenting) adults in mind, without a focus on the unique needs and vulnerabilities of children.” 

In 2020 and 2021, The Engine Room conducted research in partnership with UNICEF to look into predictive analytics specifically in relation to children. Among other questions, we asked: 

  • How is PA currently being used or proposed in contexts that impact or will impact children? 
  • What are the potential benefits and risks involved?
  • What should be taken into account when deciding on how – if at all – to implement PA systems in these contexts?  

We looked at how PA has been or is being used or planned in a number of relevant sectors, including health, education, social welfare and child protection. 

Read our research report

We’re excited to be able to share our findings from this research, which can now be found here: Predictive Analytics for Children: an assessment of ethical considerations, risks and benefits [PDF]. You can also access the research brief here.

Watch our webinar

Research findings: Where predictive analytics might impact children, special caution is needed

Our research posits that it is crucial to carefully consider the ethical implications of predictive analytics specifically as they relate to children, who as a demographic may have distinctly different needs (as reflected in the Convention on the Rights of the Child). Since machine learning algorithms are typically trained on the data of adults, assumptions based on the behaviour of adults are baked into these models – meaning that their outputs may well be inappropriate for children. 

Our research found that, despite limited evidence of PA’s effectiveness in real-world implementation, it does have the potential to offer benefits in certain contexts – if responsibly deployed in a way that builds upon contextual knowledge and local expertise, and if used as just one stream of information alongside those other sources. 

Potential benefits might include, for example, improving humanitarian crisis management or enabling more efficient resource allocation. These benefits best reveal themselves at the population level – that is, when predictive analytics is used for population-based outcomes where children are only indirectly impacted. It’s worth noting, however, that while increased efficiency is often touted as a benefit of PA, the evidence supporting this claim is not comprehensive. 

When PA is used to generate predictions that are targeted to individual children, the potential risks are significant. Our research showed that using PA in this context requires comprehensive risk mitigation measures to avoid causing harm; these are outlined in the report. In this context in particular, predictive analytics should only be understood as part of a much broader assessment to support decision-making – that is, PA should never be relied upon as the sole source of information for making decisions about children’s lives. 

We hope our findings serve to support both decision-makers and those already working with PA systems, and ultimately contribute towards ensuring the protection of children’s rights throughout. 

Read the full findings here. We are also always happy to hear feedback! Please send any thoughts or questions to research[at]theengineroom.org.

Read some of our favourite pieces on this topic

We’re grateful for the existing work that we built upon. Below is a limited list of the literature that was particularly influential to us in our research.  

  • Clayton, Vicky, and Michael Sanders, ‘Machine Learning in Children’s Services’, What Works for Children’s Social Care, September 2020.
  • Eubanks, Virginia, ‘Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor’, St. Martin’s Press, New York, 2017. 
