This research article proposes a solution for the regulation of machine learning algorithms (MLAs) in public sector decision-making. The use of MLAs across government is on the rise and has already been documented in benefit calculation assessments, visa applications, child welfare services, and policing. At their best, MLAs can be an innovative tool for more effective public service: cutting costs, raising standards, saving time, and, in theory, improving the quality of decisions made. However, their use poses new challenges for justice, in the form of bias, faulty cross-correlation, inaccuracy, rigidity, automation bias, and ‘black box’ opacity.
This article comprises three parts. In the first, I outline the regulatory gaps that arise when common law judicial review grounds are mapped onto MLA decision-making. In the second, I critically review the statutory protections offered by the Data Protection Act 2018, the Human Rights Act 1998, and the Equality Act 2010, concluding that they offer only a piecemeal approach to targeting the risks posed by MLAs. In the third, I propose that the case law on the Carltona principle, which regulates the extent to which a duty or power may be devolved within a minister’s department, offers a promising solution for guarding against the risks of MLA decision-making holistically.