
Letter to the Editor

Exploring Bias and Accountability in Military Artificial Intelligence

Author:

Philip Alexander
West Bengal National University of Juridical Sciences, IN

Philip Alexander is a law student at the West Bengal National University of Juridical Sciences.

Abstract

With the evolution of machine learning, predictive risk assessment will establish itself as the standard rather than the exception. However, excessive reliance on computer software can be detrimental to a state’s international human rights obligations, undermining foundational principles of customary international law. This letter highlights the concerns with algorithmic detention during armed conflict, a procedure in which an algorithm assigns the detainee a recidivist score based on the degree of threat they pose to national security. On that basis, the algorithm determines whom to detain and for how long. The primary concern with predictive detention is the instability of data collection during hostilities, which renders the entire procedure manifestly arbitrary. Furthermore, algorithms are pre-set with data inputted by humans, so there will always be room for human error or discriminatory biases that affect their decision-making. These concerns require immediate redress before military-owned algorithms for the purpose of security detention enter the mainstream.
How to Cite: Alexander, P., 2022. Exploring Bias and Accountability in Military Artificial Intelligence. LSE Law Review, 7(3), pp.396–405.
Published on 16 Mar 2022.
Peer Reviewed
