While predictive analytics (PA) has been proliferating over the last decade in a range of sectors, PA systems are, as noted by UNICEF and The GovLab, “often designed with (consenting) adults in mind, without a focus on the unique needs and vulnerabilities of children.”
In 2020 and 2021, The Engine Room conducted research in partnership with UNICEF to look into predictive analytics specifically in relation to children. Among other questions, we asked:
- How is PA currently being used or proposed in contexts that impact or will impact children?
- What are the potential benefits and risks involved?
- What should be taken into account when deciding on how – if at all – to implement PA systems in these contexts?
We looked at how PA has been or is being used or planned in a number of relevant sectors, including health, education, social welfare and child protection.
Read our research report
We’re excited to be able to share our findings from this research, which can now be found here: Predictive Analytics for Children: an assessment of ethical considerations, risks and benefits [PDF]. You can also access the research brief here.
Join our upcoming webinar with UNICEF
Together with UNICEF, we’ll be hosting two webinars on December 2, 2021, at 10am and 4pm CET. We’d love you to join us! Anyone with an interest in the topic is welcome.
Please register for your preferred time slot via the sign-up links below (the content of each webinar will be the same).
10am CET/4am EST: Registration Link
4pm CET/10am EST: Registration Link
Research findings: Where predictive analytics might impact children, special caution is needed
Our research posits that it is crucial to carefully consider the ethical implications of predictive analytics as they relate specifically to children, a demographic with distinct needs (as reflected in the Convention on the Rights of the Child). Since machine learning algorithms are typically trained on the data of adults, assumptions based on the behaviour of adults are baked into these models – meaning that their outputs may well be inappropriate for children.
Our research found that, despite limited evidence of PA’s effectiveness in real-world implementation, it does have the potential to offer benefits in certain contexts – if responsibly deployed in a way that builds upon contextual knowledge and local expertise, and if used as just one stream of information alongside these other sources.
Potential benefits might include, for example, improving humanitarian crisis management or enabling more efficient resource allocation. These benefits are most apparent at the population level – that is, when predictive analytics is applied to population-based outcomes where children are only indirectly impacted. It’s worth noting, however, that although PA is often touted as increasing efficiency, the evidence for this claim is not comprehensive.
When PA is used to generate predictions that are targeted to individual children, the potential risks are significant. Our research showed that using PA in this context requires comprehensive risk mitigation measures to avoid causing harm; these are outlined in the report. In this context in particular, predictive analytics should only be understood as part of a much broader assessment to support decision-making – that is, PA should never be relied upon as the sole source of information for making decisions about children’s lives.
We hope our findings serve to support both decision-makers and those already working with PA systems, and ultimately contribute to protecting children’s rights throughout the process.
Read some of our favourite pieces on this topic
We’re grateful for the existing work that we built upon. Below is a limited list of the literature that was particularly influential to us in our research.
- Berman, Gabrielle, and Kerry Albright, ‘Children and the Data Cycle: Rights and Ethics in a Big Data World’, Innocenti Working Papers, June 2017.
- Byrne, Jasmina, Emma Day, and Linda Raftree, ‘The Case for Better Governance of Children’s Data: A Manifesto’, UNICEF, May 2021.
- Clayton, Vicky, and Michael Sanders, ‘Machine Learning in Children’s Services’, What Works for Children’s Social Care, September 2020.
- Eubanks, Virginia, ‘Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor’, St. Martin’s Press, New York, 2017.
- Glaberson, Stephanie K., ‘Coding Over the Cracks: Predictive Analytics and Child Protection’, Fordham Urban Law Journal, vol. 46, no. 2, 2019.
- Leslie, David, Lisa Holmes, Christina Hitrova, and Ellie Ott, ‘Ethics Review of Machine Learning in Children’s Social Care’, What Works for Children’s Social Care, January 2020.