Algorithmic justice: Algorithms and big data in criminal justice settings

European Journal of Criminology, 2021, Vol. 18(5) 623–642
© The Author(s) 2019

Aleš Završnik
University of Ljubljana, Slovenia
Abstract
The article focuses on big data, algorithmic analytics and machine learning in criminal justice
settings, where mathematics is offering a new language for understanding and responding to crime.
It shows how these new tools are blurring contemporary regulatory boundaries, undercutting the
safeguards built into regulatory regimes, and abolishing subjectivity and case-specific narratives.
After presenting the context for ‘algorithmic justice’ and existing research, the article shows
how specific uses of big data and algorithms change knowledge production regarding crime. It
then examines how a specific understanding of crime and acting upon such knowledge violates
established criminal procedure rules. It concludes with a discussion of the socio-political context
of algorithmic justice.
Keywords
Algorithm, bias, criminal justice, machine learning, sentencing
Corresponding author:
Aleš Završnik, Institute of Criminology at the Faculty of Law, University of Ljubljana, Poljanski nasip 2,
Ljubljana, SI-1000, Slovenia.
Email: ales.zavrsnik@pf.uni-lj.si
elections, with ‘political contagion’, similar to the ‘emotional contagion’ of the infamous
Facebook experiment (Kramer et al., 2014) involving hundreds of millions of individuals
for various political ends, as revealed by the Cambridge Analytica whistle-blowers in
2018 (Lewis and Hilder, 2018).
This trend is a part of ‘algorithmic governmentality’ (Rouvroy and Berns, 2013) and
the increased influence of mathematics on all spheres of our lives (O’Neil, 2016). It is a
part of ‘solutionism’, whereby tech companies offer technical solutions to all social prob-
lems, including crime (Morozov, 2013). Despite the strong influence of mathematics and
statistical modelling on all spheres of life, the question of ‘what, then, do we talk about
when we talk about “governing algorithms”?’ (Barocas et al., 2013) remains largely unan-
swered in the criminal justice domain. How does the justice sector reflect the trend of the
‘algorithmization’ of society and what are the risks and perils of this? The importance of
this issue has triggered an emerging new field of enquiry, epitomized by critical algorithm
studies. They analyse algorithmic biases, filter bubbles and other aspects of how society
is affected by algorithms. Discrimination in social service programmes (Eubanks, 2018)
and discrimination in search engines, where they have been called ‘algorithms of oppres-
sion’ (Noble, 2018), are but some examples of the concerns that algorithms, big data and
machine learning trigger in the social realm. Predictive policing and algorithmic justice
are part of the larger shift towards ‘algorithmic governance’.
Big data, coupled with algorithms and machine learning, has become a central theme
of intelligence, security, defence, anti-terrorist and crime policy efforts, as computers
help the military find its targets and intelligence agencies justify carrying out massive
pre-emptive surveillance of public telecommunications networks. Several actors in the
‘crime and security domain’ are using the new tools:1 (1) intelligence agencies (see, for
example, the judgment of the European Court of Human Rights in Zakharov v. Russia in
2015, No. 47143/06, or the revelations of Edward Snowden in 2013); (2) law enforce-
ment agencies, which are increasingly using crime prediction software such as PredPol
(Santa Cruz, California), HunchLab (Philadelphia), Precobs (Zürich, Munich) and
Maprevelation (France) (see Egbert, 2018; Ferguson, 2017; Wilson, 2018); and (3) crim-
inal courts and probation commissions (see Harcourt, 2015a; Kehl and Kessler, 2017).
The use of big data and algorithms for intelligence agencies’ ‘dragnet’ investigations
raised considerable concern after Snowden’s revelations regarding the National Security
Agency’s access to the content and traffic data of Internet users. Predictive policing has
attracted an equal level of concern among scholars, who have addressed ‘the rise of pre-
dictive policing’ (Ferguson, 2017) and the ‘algorithmic patrol’ (Wilson, 2018) as the new
predominant method of policing, which thus impacts other methods of policing. Country-
specific studies of predictive policing exist in Germany (Egbert, 2018), France (Polloni,
2015), Switzerland (Aebi, 2015) and the UK (Stanier, 2016). A common concern is the
predictive policing allure of objectivity, and the creative role police still have in creating
inputs for automated calculations of future crime: ‘Their choices, priorities, and even
omissions become the inputs algorithms use to forecast crime’ (Joh, 2017a). Scholars
have shown how public concerns are superseded by the market-oriented motivations and
aspirations of companies that produce the new tools (Joh, 2017b). Human rights advo-
cates have raised numerous concerns regarding predictive policing (see Robinson and
Koepke, 2016). For instance, a coalition of 17 civil rights organizations has listed several
risks, such as a lack of transparency, ignoring community needs, failing to monitor the
racial impact of predictive policing and the use of predictive policing primarily to inten-
sify enforcement rather than to meet human needs (ACLU, 2016).
In Anglo-American legal systems, tools for assessing criminality have been used in
criminal justice settings for a few decades. But now these tools are being enhanced with
machine learning and AI (Harcourt, 2015b), and other European countries are also looking
at these systems for several reasons, such as shrinking budgets, declining legitimacy and the overload of cases. The trend to ‘algorithmize’ everything (Steiner,
2012) has thus raised the interest of policy-makers. The European Union Fundamental
Rights Agency warns that discrimination in data-supported decision-making is ‘a funda-
mental area particularly affected by technological development’ (European Union Agency
for Fundamental Rights, 2018). The Council of Europe’s European Commission for the
Efficiency of Justice adopted a specific ‘European Charter on the Use of AI in Judicial
Systems’ in 2018 to mitigate the above-mentioned risks specifically in the justice sector.
In general, courts use such systems to assess the likelihood of the recidivism or flight
of those awaiting trial or offenders in bail and parole procedures. For instance, the well-
known Arnold Foundation algorithm, which is being rolled out in 21 jurisdictions in the
USA (Dewan, 2015), uses 1.5 million criminal cases to predict defendants’ behaviour in
the pre-trial phase. Similarly, Florida uses machine learning algorithms to set bail
amounts (Eckhouse, 2017). These systems are also used to ascertain the criminogenic
needs of offenders, which could be changed through treatment, and to monitor interven-
tions in sentencing procedures (Kehl and Kessler, 2017). Some scholars are even dis-
cussing the possibility of using AI to address the solitary confinement crisis in the USA
by employing smart assistants, similar to Amazon’s Alexa, as a form of ‘confinement
companion’ for prisoners. Although at least some of the proposed uses seem outrageous
and directly dangerous, such as inferring criminality from face images, the successes of
other cases in the criminal justice system seem harder to dispute or debunk. For instance,
in a study of 1.36 million pre-trial detention cases, scholars showed that a computer
could predict whether a suspect would flee or re-offend better than a human judge
(Kleinberg et al., 2017).
The purpose of this article is two-fold: first, it examines the more fundamental changes
in knowledge production in criminal justice settings occurring due to over-reliance on
the new epistemological transition – on knowledge supposedly generated without a pri-
ori theory. Second, it shows why automated predictive decision-making tools are often at
variance with fundamental liberties and also with the established legal doctrines and
concepts of criminal procedure law. Such tools discriminate against persons in lower
income strata and less empowered sectors of the population (Eubanks, 2018; Harcourt,
2015a; Noble, 2018; O’Neil, 2016). The following sections of this article map the nov-
elties of automated decision-making tools – defined as tools utilizing advances in big
data, algorithms, machine learning and ‘AI’ – in criminal justice settings.
scoring, Citron and Pasquale (2014) identified the opacity of such automation, arbitrary
assessments and disparate impacts. Computer scientists have warned against the reductionist pitfalls of big data and ‘exascale’ computers: ‘computers cannot replace arche-
typal forms of thinking founded on Philosophy and Mathematics’ (Koumoutsakos,
2017). Research on the role of automation in exacerbating discrimination in search
engines (Noble, 2018) and social security systems (Eubanks, 2018) showed how the new
tools are dividing societies along racial lines. However, the impact of automated decision-making systems on fundamental liberties in the justice sector has not yet received sufficient critical attention.
Legal analyses of the use of algorithms and AI in the criminal justice sector have
focused on the scant case law on the topic, especially on the landmark decision in
Loomis v. Wisconsin (2016) in the USA, in which the Supreme Court of Wisconsin
considered the legality of using the COMPAS (Correctional Offender Management
Profiling for Alternative Sanctions) risk assessment software in criminal sentencing.
The Loomis case raised two related legal issues, that is, due process and equal protec-
tion (Kehl and Kessler, 2017). In the same year, ProPublica’s report on ‘Machine bias’
(Angwin et al., 2016) triggered fierce debates on how algorithms embed existing
biases and perpetuate discrimination. By analysing the diverging assessments of the
outcomes of the COMPAS algorithm – by ProPublica, on the one hand, and the algo-
rithm producer Northpointe, on the other – scholars have identified a more fundamen-
tal difference in the perception of what it means for an algorithm to be ‘successful’
(Flores et al., 2016). Advocates and critics of probation algorithms are both correct on their own terms, claims the mathematician Chouldechova (2016), since they pursue competing notions of fairness.
Along with these concerns, scholars have dubbed the trend of using automated decision-making tools in the justice sector ‘automated justice’ (Marks et al., 2015), a trend that causes discrimination and infringes the due process of law guaranteed by constitutions and
criminal procedure codes. Similarly, Hannah-Moffat (2019) warns that big data tech-
nologies can violate constitutional protections, produce a false sense of security and be
exploited for commercial gain. ‘There can and will be unintended effects, and more
research will be needed to explore unresolved questions about privacy, data ownership,
implementation capacities, fairness, ethics, applications of algorithms’ (Hannah-
Moffat, 2019: 466).
According to Robinson (2018), three structural challenges can arise whenever a law
or public policy contemplates adopting predictive analytics in criminal justice: (1) what
matters versus what the data measure; (2) current goals versus historical patterns; and (3)
public authority versus private expertise.
Based on research on the automation of traffic fines, O’Malley (2010) showed how a
new form of ‘simulated justice’ has emerged. By complying with automated fining
regimes – whereby traffic regulation violations are automatically detected and processed
and fines enforced – we buy our freedom from the disciplinary apparatuses but we suc-
cumb to its ‘simulated’ counterpart at the same time, claims O’Malley. The judicial-dis-
ciplinary apparatus still exists as a sector of justice, but a large portion of justice is
– concerning the ‘volumes of governance’ – ‘simulated’: the majority of justice is delivered by machines as ‘a matter of one machine “talking” to another’ (O’Malley, 2010).
Scholars are also critical of the purposes of the use of machine learning in the criminal
justice domain. Barabas et al. (2017) claim that the use of actuarial risk assessments is
not a neutral way to counteract the implicit bias and to increase the fairness of decisions
in the criminal justice system. The core ethical concern is not the fact that the statistical
techniques underlying actuarial risk assessments might reproduce existing patterns of
discrimination and the historical biases that the data reflect, but rather the purpose of risk
assessments (Barabas et al., 2017).
One strand of critique of automated decision-making tools emphasizes the limited
role of risk management in the multiple purposes of sentencing. Namely, minimizing the risk of recidivism is only one of several goals in determining a sentence, along with
equally important considerations such as ensuring accountability for past criminal behav-
iour. The role of automated risk assessment in the criminal justice system should not be
central (Kehl and Kessler, 2017). It is not even clear whether longer sentences for riskier
prisoners might decrease the risk of recidivism. On the contrary, prison leads to a slight
increase in recidivism owing to the conditions in prisons, such as the harsh emotional
climate and psychologically destructive nature of prisonization (Gendreau et al., 1999).
Although cataloguing risks and alluding to abstract notions of ‘fairness’ and ‘trans-
parency’ by advocating an ‘ethics-compliant approach’ is an essential first step in
addressing the new challenges of big data and algorithms, there are more fundamental
shifts in criminal justice systems that cannot be reduced to mere infringements of indi-
vidual rights. What do big data, algorithms and machine learning promise to deliver in
criminal justice and how do they challenge the existing knowledge about crime and ideas
as to the appropriate response to criminality?
Constitutional Court, which limited the possibility of such screening (4 April 2006) by
claiming that the general threat situation that existed after the terrorist attacks on the
World Trade Center on 9/11 was not sufficient to justify that the authorities start such
investigations (Decision 1BvR 518/02). The legislation blurred the start and finish of
the criminal procedure process. The start of a criminal procedure is the focal point
when a person becomes a suspect, when a suspect is granted rights, for example
Miranda rights, that are aimed at remedying the inherent imbalances of power between
the suspect and the state. The start of the criminal procedure became indefinite and
indistinct (see Marks et al., 2015) and it was no longer clear when a person ‘trans-
formed’ into a suspect with all the attendant rights.
Similarly, the notion of a ‘person of interest’ blurs the existing concept of a ‘suspect’.
In Slovenia, the police created a Twitter analysis tool during the so-called ‘Occupy
Movement’ to track ‘persons of interest’ and for sentiment analysis. The details of the
operation remain a secret because the police argued that the activity falls under the pub-
licly inaccessible regime of ‘police tactics and methods’. With this reorientation, the police were chasing an imagined, indefinable future. They were operating in a scenario
that concerned not only what the person might have committed, but what the person
might become (see Lyon, 2014) – the focus changed from past actions to potential future
personal states and circumstances.
The ‘smart city’ paradigm is generating similar inventions. In the Eindhoven
Living Lab, consumers are tracked for the purpose of monitoring and learning about
their spending patterns. The insights are then used to nudge people into spending
more (Galič, 2018). The Living Lab works on the idea of adjusting the settings of the
immediate sensory environment, such as changing the music or the street lighting.
Although security is not the primary objective of the Living Lab, it is a by-product
of totalizing consumer surveillance. The Lab treats consumers like Pavlov’s dogs: it checks for ‘escalated behaviour’ to arrive at big-data-generated
determinations of when to take action. The notion of escalated behaviour includes
yelling, verbal or otherwise aggressive conduct or signs that a person has lost self-
control (de Kort, 2014). Such behaviour is targeted in order for it to be defused
(Galič, 2018). The goals of the intervention are then to induce subtle changes, such
as ‘a decrease in the excitation level’. This case shows how the threshold for inter-
vention in the name of security drops: interventions are made as soon as the behav-
iour is considered ‘escalated’.
To conclude, the novel concepts described above, such as the ‘terrorist sleeper’, are
producing new knowledge about crime and providing new thresholds and justifications
to act upon this knowledge. The new mathematical language serves security purposes
well, writes Amoore (2014). However, the new concepts, such as ‘meaning extraction’,
‘sentiment analysis’ and ‘opinion mining’, are blurring the boundaries in the security and
crime control domain. The concepts of a ‘suspect’, ‘accused’ or ‘convicted’ person serve
as regulators of state powers. Similarly, the existing standards of proof serve as thresh-
olds for ever more interference in a person’s liberties. But these new concepts and math-
ematical language no longer sufficiently confine agencies or prevent abuses of power.
The new mathematical language is helping to tear down the hitherto respected walls of
criminal procedure rules.
(Chouldechova, 2016). These then are the fairness trade-offs: ‘Predictive parity, equal
false-positive error rates, and equal false-negative error rates are all ways of being “fair”,
but are statistically impossible to reconcile if there are differences across two groups –
such as the rates at which white and black people are being rearrested’ (Courtland, 2018).
There are competing notions of fairness and predictive accuracy, which means that a simple transposition of methods from one domain to another (for example, from earthquake prediction to sentencing) can lead to distorted views of what algorithms calculate. The current transfer of
statistical modelling in the criminal justice domain neglects the balance between compet-
ing values or, at best, re-negotiates competing values without adequate social consensus.
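The arithmetic behind these incompatible fairness criteria can be made concrete with a minimal sketch. The following Python fragment uses invented base rates and error rates rather than any real recidivism data; it merely computes the false-positive rate that predictive parity forces upon each group once their base rates differ (the function name and the chosen figures are illustrative assumptions, not part of any cited study).

```python
# A minimal numerical sketch (hypothetical base rates, not real COMPAS data)
# of the trade-off quoted above: if two groups have different re-arrest base
# rates, a risk tool that keeps predictive parity (equal precision) and equal
# true-positive rates is forced to have unequal false-positive rates.

def fpr_for_predictive_parity(base_rate: float, tpr: float, precision: float) -> float:
    """False-positive rate implied for a group once its base rate, the
    true-positive rate and the target precision (positive predictive value)
    are fixed: precision = (tpr * p) / (tpr * p + fpr * (1 - p)), solved for fpr.
    """
    p = base_rate
    return tpr * p * (1 - precision) / (precision * (1 - p))

TPR = 0.60          # same 'hit rate' imposed on both groups
PRECISION = 0.667   # same share of 'high risk' labels that turn out correct

for name, base_rate in [("Group A", 0.40), ("Group B", 0.20)]:
    fpr = fpr_for_predictive_parity(base_rate, TPR, PRECISION)
    print(f"{name}: base rate {base_rate:.0%} -> required false-positive rate {fpr:.1%}")

# Prints roughly 20% for Group A and 7.5% for Group B: members of the group
# with the higher base rate are wrongly labelled 'high risk' far more often,
# even though the tool remains 'fair' in the predictive-parity sense.
```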
A similar refocusing was carried out by Barabas et al. (2017), who argued ‘that a core
ethical debate surrounding the use of regression in risk assessments is not simply one of
bias or accuracy. Rather, it’s one of purpose.’ They are critical of the use of machine
learning for predicting crime; instead of risk scores, they recommend ‘risk mitigation’ to
understand the social, structural and psychological drivers of crime.
Building databases
Databases and algorithms are human artefacts. ‘Models are opinions embedded in math-
ematics [and] reflect goals and ideology’ (O’Neil, 2016: 21). In creating a statistical
model, choices need to be made as to what is essential (and what is not) because the
model cannot include all the nuances of the human condition. There are trustworthy models, but they depend on high-quality data being fed into the model continuously as conditions change. Constant feedback and the testing of millions of samples prevent the model from becoming self-perpetuating.
Defining the goals of the problem and deciding what training data to collect and how
to label the data, among other matters, are often done with the best intentions but with
little awareness of their potentially harmful downstream effects. Such awareness is even
more needed in the crime control domain, where the ‘training data’ on human behaviour
used to train algorithms are of varying quality.
On a general level, there are at least two challenges in algorithmic design and appli-
cation. First, compiling a database and creating algorithms for prediction always
require decisions that are made by humans. ‘It is a human process that includes several
stages, involving decisions by developers and managers. The statistical method is only
part of the process for developing the final rules used for prediction, classification or
decisions’ (European Union Agency for Fundamental Rights, 2018). Second, algo-
rithms may also take an unpredictable path in reaching their objectives despite the
good intentions of their creators.
Part of the first question relates to the data and part to the algorithms. In relation to
data, the question concerns how data are collected, cleaned and prepared. There is an
abundance of data in one part of the criminal justice sector; for example, the police col-
lect vast amounts of data of varying quality. These data depend on victims reporting
incidents, but these reports can be more or less reliable and sometimes they do not even
exist. For some types of crime, such as cybercrime, the denial of victimization is often
the norm (Wall, 2007). Financial and banking crime remains underreported, and is even less often prosecuted or concluded with a court decision. There is a base-rate problem in such types
of crime, which hinders good modelling. The other problem with white-collar crime is
that it flies below the radar of predictive policing software.
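The base-rate problem can be made concrete with a short, back-of-the-envelope calculation. The Python sketch below applies Bayes’ rule to invented figures for the reporting and detection of banking fraud; the numbers and the function are illustrative assumptions rather than estimates from any real data set.

```python
# A back-of-the-envelope illustration (invented figures) of the base-rate
# problem described above: when a crime type is rarely reported, even a
# seemingly accurate detector flags mostly innocent cases.

def positive_predictive_value(base_rate: float, sensitivity: float, specificity: float) -> float:
    """Share of flagged cases that are true positives (Bayes' rule)."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Assume only 1 in 1,000 screened transactions involves banking fraud that is
# ever reported, and the detector catches 90% of those while wrongly flagging
# 5% of legitimate transactions.
ppv = positive_predictive_value(base_rate=0.001, sensitivity=0.90, specificity=0.95)
print(f"Probability that a flagged transaction is actually fraud: {ppv:.1%}")  # ~1.8%
```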
Furthermore, in the case of poor data, feeding the data-crunching machine more of them does not help: if the data are of poor quality, the result is ‘garbage in, garbage out’.
The process of preparing data in criminal justice is inherently political – someone
needs to generate, protect and interpret data (Gitelman, 2013).
Building algorithms
The second part of the first question relates to building algorithms. If ‘models are opin-
ions embedded in mathematics’ (O’Neil, 2016: 21), legal professionals working in crimi-
nal justice systems need to be more technically literate and able to pose relevant questions
to excavate values embedded in calculations. Although scholars (Pasquale, 2015) and
policy-makers alike (European Union Agency for Fundamental Rights, 2018) have been
vocal in calling for the transparency of algorithms, the goal of enabling detection and the
possibility of rectifying discriminatory applications misses the crucial point of machine
learning and the methods of neural networks. These methods are black-boxed by default.
Demanding transparency without the possibility of explainability remains a shallow
demand (Veale and Edwards, 2018). However, third-party audits of algorithms may underpin certification schemes and increase trust in the fairness of algorithms.
Moreover, building better algorithms is not the solution – thinking about problems
only in a narrow mathematical framework is reductionist. A tool might be good at pre-
dicting who will fail to appear in court, for example, but it might be better to ask why
people do not appear and, perhaps, to devise interventions, such as text reminders or
transportation assistance, that might improve appearance rates. ‘What these tools often
do is help us tinker around the edges, but what we need is wholesale change’, claims
Vincent Southerland (Courtland, 2018). Or, as Princeton computer scientist Narayanan
argues, ‘It’s not enough to ask if code executes correctly. We also need to ask if it makes
society better or worse’ (Hulette, 2017).
Runaway algorithms
The second question deals with the ways algorithms take unpredictable paths in reaching
their objectives. The functioning of an algorithm is not neutral, but instead reflects
choices about data, connections, inferences, interpretations and inclusion thresholds that
advance a specific purpose (Dwork and Mulligan, 2013). In criminal justice settings,
algorithms calculate predictions via various proxies. For instance, it is not possible to
calculate re-offending rates directly, but it is possible through proxies such as age and
prior convictions. Probation algorithms can rely only on measurable proxies, such as the
frequency of being arrested. In doing so, algorithms must not include prohibited criteria,
such as race, ethnic background or sexual preference, and actuarial instruments have
evolved away from race as an explicit predictor. However, the analysis of existing proba-
tion algorithms shows how prohibited criteria are not directly part of calculations but are
still encompassed through proxies (Harcourt, 2015b). But, even in cases of such
‘runaway algorithms’, the person who is to bear the effects of a particular decision will
often not even be aware of such discrimination. Harcourt shows how two trends in
assessing future risk have a significant race-related impact: (1) a general reduction in the
number of predictive factors used, and (2) an increased focus on prior criminal history,
which forms part of the sentencing guidelines of jurisdictions in the USA (Harcourt,
2015b). Criminal history is among the strongest predictors of arrest and a proxy for race
(Harcourt, 2015b). In other words, heavy reliance on the offender’s criminal history in
sentencing contributes more to racial disparities in incarceration than dependence on
other robust risk factors less bound to race. For instance, the weight of prior conviction
records is apparent in the case of Minnesota, which has one of the highest black/white
incarceration ratios. As Frase (2009) explains, disparities are substantially greater in
prison sentences imposed and prison populations than regarding arrest and conviction.
The primary reason is the heavy weight that sentencing guidelines place on offenders’
prior conviction records.
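The mechanism by which a facially neutral factor imports a prohibited one can be illustrated with a toy simulation. In the Python sketch below, the groups, arrest intensities and score weights are fabricated assumptions rather than estimates from any real jurisdiction or instrument; the point is only that a score that never receives group membership as an input still diverges across groups through the prior-arrest proxy.

```python
# A toy simulation (all parameters fabricated for illustration) of the proxy
# effect described above: a score that never receives group membership as an
# input still diverges across groups, because recorded prior arrests (the
# facially neutral factor) already reflect unequal past policing.

import random

random.seed(0)

def simulate_person(group: str) -> dict:
    # Assumption of the sketch: group B is policed twice as intensively, so its
    # members accumulate more recorded prior arrests for comparable behaviour.
    arrest_intensity = 1.0 if group == "A" else 2.0
    priors = sum(random.random() < 0.15 * arrest_intensity for _ in range(10))
    return {"group": group, "priors": priors}

def risk_score(person: dict) -> float:
    # The 'actuarial' rule uses only the prior-arrest count, never the group.
    return min(1.0, 0.1 + 0.15 * person["priors"])

population = [simulate_person(g) for g in ("A", "B") for _ in range(5000)]

for g in ("A", "B"):
    scores = [risk_score(p) for p in population if p["group"] == g]
    print(f"Group {g}: mean risk score {sum(scores) / len(scores):.2f}")

# Mean scores come out around 0.33 for Group A and 0.55 for Group B: the
# prohibited attribute is never an input, yet the proxy imports the disparity
# generated upstream in policing practice.
```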
culture and even biases. They may work against or in favour of a given suspect. Human
empathy and other personal qualities that form part of flourishing human societies are in
fact types of bias that overreach statistically measurable ‘equality’. Such decision-mak-
ing can be more tailored to the needs and expectations of the parties to a particular judi-
cial case (Plesničar and Šugman Stubbs, 2018). Should not such personal traits and skills
or emotional intelligence be actively supported and nurtured in judicial settings? Bernard
Harcourt (2006) explains how contemporary American politics has continually privi-
leged policing and punishment while marginalizing the welfare state and its support for
the arts and the commons. In the light of such developments, increasing the quality of
human over computerized judgement should be the desired goal.
pool of knowledge that needed some form of intervention by the decision-makers, for
example so as to at least include it in the context of a specific legal procedure, automated
decision-making tools adopt a decision independently. The narrative, typically of a risky
individual, is not to be trusted and only the body itself can provide a reliable confession.
For example, a DNA sample, rather than the refugee’s own account of their origin, is trusted to reveal their home country. Today, moreover, even decision-makers are not to be
believed. Harcourt (2015b) succinctly comments that, for judicial practitioners, the use
of automated recommendation tools is acceptable despite the curtailment of the discre-
tion of the practitioners, whose judgement is often viewed negatively and associated with
errors. Relying on automated systems lends an aura of objectivity to the final decision
and diffuses the burden of responsibility for their choices (Harcourt, 2015a).
In the transition towards the complete de-subjectivation in the decision-making pro-
cess, a sort of erasure of subjectivity is at work (see Marks et al., 2015). The participants
in legal proceedings are perceived as the problem and technology as the solution in the
post-human quest: ‘The very essence of what it means to be human is treated less as a
feature than a bug’ (Rushkoff, 2018). It is ‘a quest to transcend all that is human: the
body, interdependence, compassion, vulnerability, and complexity.’ The transhumanist
vision too easily reduces all of reality to data, concluding that ‘humans are nothing but
information-processing objects’ (Spiekermann et al., 2017). If humans are perceived in such a manner, they can easily be replaced by machines, which can then be tuned to serve the powerful and uphold the status quo in the distribution of social power.
Psychological research into human–computer interfaces also demonstrates that auto-
mated devices can fundamentally change how people approach their work, which in turn
leads to new kinds of errors (Parasuraman and Riley, 1997). A belief in the superior
judgement of automated aids leads to errors of commission: people follow an automated directive despite contradictory information from other, more reliable sources, either because they fail to check that information or because they discount it (Skitka et al., 2000). It is a new type of automation bias, where automated decisions are rated
more positively than neutral (Dzindolet et al., 2003). Even in the existing stage of non-
binding semi-automated decision-making systems, the process of arriving at a decision
changes. The perception of accountability for the final decision changes too. The deci-
sion-makers will be inclined to tweak their own estimates of risk to match the model’s
(see Eubanks, 2018).
effect in the crime control domain (Ferguson, 2017). However, under-policing is even
more critical. First, the police do not scrutinize some areas as much as others, which
leads to a disproportionate feeling and experience of justice. Second, some types of
crime are more likely to be prosecuted than others. The central principle of legality – the
requirement that all crimes be prosecuted ex officio, as opposed to the principle of oppor-
tunity – is thus not respected. The crimes more likely committed by the middle or upper
classes then fly under the radar of criminal justice systems. Such crimes often become
‘re-labelled’ as ‘political scandals’ or merely occupational ‘risk’. Greater criminality in
the banking system as a whole has been ignored (Monaghan and O’Flynn, 2013) and is
increasingly regarded as a ‘normal’ part of financialized societies. This is not to
claim that there are no options for the progressive use of big data and AI in the ‘fight’
against ‘crimes of the powerful’. For instance, the Slovenian Ministry of Finance’s finan-
cial administration is using machine learning to detect tax-evasion schemes and tax fraud
(Spielkamp, 2019), which could be directed towards the most powerful multinational enterprises and their transfer-pricing manipulations. However, predictive policing
software has not been used to such ends to the same extent as for policing ‘the poor’.
Algorithmic decision-making systems may collide with several other fundamental lib-
erties. Similar to ‘redlining’, the ‘sleeping terrorist’ concept (as discussed above) infringes
upon the presumption of innocence. The mere probability of a match between the attributes
of known terrorists and a ‘sleeping’ one directs the watchful eye of the state to the indi-
vidual. Similarly, there is a collision with the principle of legality, that is lex certa, which
requires the legislature to define a criminal offence in a substantially specific manner.
Standards of proof are thresholds for state interventions into individual rights.
However, the new language of mathematics, which helps define new categories such as
a ‘person of interest’, re-directs law enforcement agency activities towards individuals
not yet considered a ‘suspect’. The new notions being invented contravene the estab-
lished standards of proof in criminal procedure.
In the specific context of a criminal trial, several rights pertain to the principle of the
equality of arms in judicial proceedings and the right to a fair trial. Derivative procedural
rights, such as the right to cross-examine witnesses, should be interpreted so as to also
encompass the right to examine the underlying rules of the risk-scoring methodology. In
probation procedures, this right should entail ensuring that a convicted person has the
possibility to question the modelling applied – from the data fed into the algorithm to the
overall model design (see the Loomis case).
Systems of (semi-)automated decision-making should also respect a set of rights pertaining to tribunals. These include the principle of the natural court, which requires that the criteria determining which court (or which judge within it) is competent to hear the case be clearly established in advance (the rules governing the allocation of cases to a particular judge within the competent court, thus preventing forum shopping), as well as the right to an independent and impartial tribunal.
Conclusion
Automated analysis of judicial decisions using AI is supposed to boost efficiency in the
climate of the neoliberal turn and political pressure ‘to achieve more with less’. In such
with rich hypertextual potential’, claims Birkhold (2018). Also, we want empathetic
judges (Plesničar and Šugman Stubbs, 2018) and not executory ‘cold-blooded’ machines.
Criminal procedure has several conflicting goals. If the end goal of the criminal proce-
dure is to find the historical truth, then Miranda rights and exclusionary rules are an absolute obstacle to the truth-discovering process. If its goal is to reconcile the parties and resolve a dispute between an accused and the state, then these rules have an important place in criminal procedure: they help the process generate meaning and other values, such as respect for the rule of law.
Finally, we must acknowledge that, in criminal justice systems, some procedures should not be subjected to automation (similarly, in the context of policing, see Oswald et al., 2018). Their impact upon society and upon the human rights of individuals is simply too great for them to be shaped by a reduced human agency relegated to machines.
Acknowledgements
I am grateful to the reviewers for their comments. Special thanks are also owed to the participants
at the colloquium ‘Automated Justice: Algorithms, Big Data and Criminal Justice Systems’, held at
the Collegium Helveticum in Zürich, 20 April 2018, for their helpful comments and suggestions.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/
or publication of this article: The research leading to this article has received funding from the
EURIAS Fellowship Programme (the European Commission Marie Skłodowska-Curie Actions –
COFUND Programme – FP7) and the Slovenian Research Agency, research project ‘Automated
Justice: Social, Ethical and Legal Implications’, no. J5-9347.
ORCID iD
Aleš Završnik https://orcid.org/0000-0002-4531-2740
Note
1. For the purposes of this article, the notions ‘new tools’ and ‘automated decision-making
tools’ are used to encapsulate the terms ‘big data’, ‘algorithms’, ‘machine learning’ and ‘arti-
ficial intelligence’. Each of these notions is highly contested and deserves its own discussion,
but a detailed analysis of these tools is not central to identifying the social and legal implica-
tions of their use in criminal justice systems.
References
ACLU (American Civil Liberties Union) (2016) Statement of Concern About Predictive Policing
by ACLU and 16 Civil Rights Privacy, Racial Justice, and Technology Organizations, 31
August. URL (accessed 4 September 2019): https://www.aclu.org/other/statement-concern-
about-predictive-policing-aclu-and-16-civil-rights-privacy-racial-justice.
Adler F, Mueller GOW and Laufer WS (1998) Criminology, 3rd edn. Boston, MA: McGraw-Hill.
Aebi C (2015) Evaluation du système de prédiction de cambriolages résidentiels PRECOBS. MA
thesis, École des Sciences Criminelles, Université de Lausanne, Switzerland.
Ambasna-Jones M (2015) The smart home and a data underclass. The Guardian, 3 August.
Amoore L (2014) Security and the incalculable. Security Dialogue 45(5): 423–439.
Angwin J, Larson J, Mattu S and Kirchner L (2016) Machine bias: There’s software used across
the country to predict future criminals. And it’s biased against blacks. ProPublica, 23 May.
URL (accessed 4 September 2019): https://www.propublica.org/article/machine-bias-risk-
assessments-in-criminal-sentencing.
Barabas C, Dinakar K, Ito J, Virza M and Zittrain J (2017) Interventions over predictions: Reframing
the ethical debate for actuarial risk assessment. Cornell University. arXiv:1712.08238 [cs,
stat].
Barocas S, Hood S and Ziewitz M (2013) Governing algorithms: A provocation piece, 29 March.
URL (accessed 4 September 2019): https://ssrn.com/abstract=2245322.
Beccaria CB ([1764] 1872) An Essay on Crimes and Punishments. With a Commentary by M. de
Voltaire. A New Edition Corrected. Albany: W.C. Little & Co. URL (accessed 4 September
2019): https://oll.libertyfund.org/titles/2193.
Benbouzid B (2016) Who benefits from the crime? Books&Ideas, 31 October.
Birkhold MH (2018) Why do so many judges cite Jane Austen in legal decisions? Electric
Literature, 24 April.
Brkan M (2017) Do algorithms rule the world? Algorithmic decision-making in the framework of
the GDPR and beyond. SSRN Scholarly Paper, 1 August.
Caliskan A, Bryson JJ and Narayanan A (2017) Semantics derived automatically from language
corpora contain human-like biases. Science 356(6334): 183–186.
Chouldechova A (2016) Fair prediction with disparate impact: A study of bias in recidivism pre-
diction instruments. Cornell University. arXiv:1610.07524 [cs, stat].
Citron DK and Pasquale FA (2014) The scored society: Due process for automated predictions.
Washington Law Review 89. University of Maryland Legal Studies Research Paper No.
2014–8: 1–34.
Cohen JE, Hoofnagle CJ, McGeveran W, Ohm P, Reidenberg JR, Richards NM, Thaw D and Willis
LE (2015) Information privacy law scholars’ brief in Spokeo, Inc. v. Robins, 4 September
2015. URL (accessed 4 September 2019): https://ssrn.com/abstract=2656482.
Courtland R (2018) Bias detectives: The researchers striving to make algorithms fair. Nature
558(7710): 357–360.
Danziger S, Levav J and Avnaim-Pesso L (2011) Extraneous factors in judicial decisions.
Proceedings of the National Academy of Sciences 108(17): 6889–6892.
De Kort Y (2014) Spotlight on aggression. Intelligent Lighting Institute, Technische Universiteit
Eindhoven 1: 10–11.
Desrosières A (2002) The Politics of Large Numbers: A History of Statistical Reasoning.
Cambridge, MA: Harvard University Press.
Dewan S (2015) Judges replacing conjecture with formula for bail. New York Times, 26 June.
Dwork C and Mulligan DK (2013) It’s not privacy, and it’s not fair. Stanford Law Review 66(35):
35–40.
Dzindolet MT, Peterson SA, Pomranky RA, Pierce LG and Beck HP (2003) The role of trust in
automation reliance. International Journal of Human-Computer Studies 58(6): 697–718.
Eckhouse L (2017) Opinion | Big data may be reinforcing racial bias in the criminal justice system.
Washington Post, 2 October.
Egbert S (2018) About discursive storylines and techno-fixes: The political framing of the implemen-
tation of predictive policing in Germany. European Journal for Security Research 3(2): 95–114.
Ekowo M and Palmer I (2016) The Promise and Peril of Predictive Analytics in Higher Education.
New America, Policy Paper, 24 October.
Eubanks V (2018) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the
Poor. New York: St Martin’s Press.
European Union Agency for Fundamental Rights (2018) #BigData: Discrimination in data-sup-
ported decision making. FRA Focus, 29 May.
Ferguson AG (2017) The Rise of Big Data Policing: Surveillance, Race, and the Future of Law
Enforcement. New York: NYU Press.
Flores AW, Bechtel K and Lowenkamp CT (2016) False positives, false negatives, and false analy-
ses: A rejoinder to ‘Machine bias: There’s software used across the country to predict future
criminals. And it’s biased against blacks’. Federal Probation Journal 80(2): 9.
Franko Aas K (2005) Sentencing in the Age of Information: From Faust to Macintosh. London:
Routledge-Cavendish.
Franko Aas K (2006) ‘The body does not lie’: Identity, risk and trust in technoculture. Crime,
Media, Culture 2(2): 143–158.
Frase RS (2009) What explains persistent racial disproportionality in Minnesota’s prison and jail
populations? Crime and Justice 38(1): 201–280.
Galič M (2018) Živeči laboratoriji in veliko podatkovje v praksi: Stratumseind 2.0 – diskusija
živečega laboratorija na Nizozemskem. In: Završnik A (ed.) Pravo v dobi velikega podatko-
vja. Ljubljana: Institute of Criminology at the Faculty of Law.
Gefferie D (2018) The algorithmization of payments. Towards Data Science, 27 February. URL
(accessed 4 September 2019): https://towardsdatascience.com/the-algorithmization-of-pay-
ments-how-algorithms-are-going-to-change-the-payments-industry-5dd3f266d4c3.
Gendreau P, Goggin C and Cullen FT (1999) The effects of prison sentences on recidivism. User
Report: 1999-3. Department of the Solicitor General Canada, Ottawa, Ontario.
Gitelman L (ed.) (2013) ‘Raw Data’ Is an Oxymoron. Cambridge, MA: MIT Press.
Hannah-Moffat K (2019) Algorithmic risk governance: Big data analytics, race and information
activism in criminal justice debates. Theoretical Criminology 23(4): 453–470.
Harcourt BE (2006) Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age.
Reprint edition. Chicago: University of Chicago Press.
Harcourt BE (2015a) Exposed: Desire and Disobedience in the Digital Age. Cambridge, MA:
Harvard University Press.
Harcourt BE (2015b) Risk as a proxy for race. Federal Sentencing Reporter 27(4): 237–243.
Hulette D (2017) Patrolling the intersection of computers and people. Princeton University,
Department of Computer Science, News, 2 October. URL (accessed 4 September 2019): https://
www.cs.princeton.edu/news/patrolling-intersection-computers-and-people.
Joh EE (2017a) Feeding the machine: Policing, crime data, & algorithms. William & Mary Bill of
Rights Journal 26(2): 287–302.
Joh EE (2017b) The undue influence of surveillance technology companies on policing. New York
University Law Review 92: 101–130.
Kehl DL and Kessler SA (2017) Algorithms in the criminal justice system: Assessing the use of
risk assessments in sentencing. URL (accessed 4 September 2019): http://nrs.harvard.edu/
urn-3:HUL.InstRepos:33746041.
Kerr I and Earle J (2013) Prediction, preemption, presumption: How big data threatens big picture
privacy. Stanford Law Review Online 66(65): 65–72.
Kleinberg J, Lakkaraju H, Leskovec J, Ludwig J and Mullainathan S (2017) Human Decisions
and Machine Predictions. Working Paper 23180, National Bureau of Economic Research,
Cambridge, MA.
Koumoutsakos P (2017) Computing . Data .. Science . . . Society: On connecting the dots. Talk
at the Collegium Helveticum, Zurich, 9 March. URL (accessed 4 September 2019): https://
www.cse-lab.ethz.ch/computing-data-science-society-on-connecting-the-dots/.
Kramer ADI, Guillory JE and Hancock JT (2014) Experimental evidence of massive-scale emo-
tional contagion through social networks. Proceedings of the National Academy of Sciences
111(24): 8788–8790.
Lewis P and Hilder P (2018) Leaked: Cambridge Analytica’s blueprint for Trump victory. The
Guardian, 23 March.
Lyon D (2014) Surveillance, Snowden, and big data: Capacities, consequences, critique. Big Data
& Society 1(2):1–13.
McGee S (2016) Rise of the billionaire robots: How algorithms have redefined hedge funds. The
Guardian, 15 May.
Marchant R (2018) Bayesian techniques for modelling and decision-making in criminology and social
sciences. Presentation at the conference ‘Automated Justice: Algorithms, Big Data and Criminal
Justice Systems’, Collegium Helveticum, Zürich, 20 April. URL (accessed 4 September 2019):
https://collegium.ethz.ch/wp-content/uploads/2018/01/180420_automated_justice.pdf.
Marks A, Bowling B and Keenan C (2015) Automatic justice? Technology, crime and social con-
trol. SSRN Scholarly Paper, 19 October. In: Brownsword R, Scotford E and Yeung K (eds)
The Oxford Handbook of the Law and Regulation of Technology. Oxford University Press,
forthcoming. Queen Mary School of Law Legal Studies Research Paper No. 211/2015; TLI
Think! Paper 01/2015. URL (accessed 4 September 2019): https://ssrn.com/abstract=2676154.
Meek A (2015) Data could be the real draw of the internet of things – but for whom? The Guardian,
14 September.
Monaghan LF and O’Flynn M (2013) The Madoffization of society: A corrosive process in an age
of fictitious capital. Critical Sociology 39(6): 869–887.
Morozov E (2013) To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix
Problems that Don’t Exist. London: Allen Lane.
Noble SU (2018) Algorithms of Oppression: How Search Engines Reinforce Racism, 1st edn. New
York: NYU Press.
O’Hara D and Mason LR (2012) How bots are taking over the world. The Guardian, 30 March.
O’Malley P (2010) Simulated justice: Risk, money and telemetric policing. British Journal of
Criminology 50(5): 795–807.
O’Neil C (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens
Democracy. New York: Crown.
Oswald M, Grace J, Urwin S, et al. (2018) Algorithmic risk assessment policing models:
Lessons from the Durham HART model and ‘experimental’ proportionality. Information &
Communications Technology Law 27(2): 223–250.
Parasuraman R and Riley V (1997) Humans and automation: Use, misuse, disuse, abuse. Human
Factors 39(2): 230–253.
Pasquale F (2015) The Black Box Society: The Secret Algorithms That Control Money and
Information. Cambridge, MA: Harvard University Press.
Plesničar MM and Šugman Stubbs K (2018) Subjectivity, algorithms and the courtroom. In:
Završnik A (ed.) Big Data, Crime and Social Control. London: Routledge, Taylor & Francis
Group, 154–176.
Polloni C (2015) Police prédictive: la tentation de ‘dire quel sera le crime de demain’. Rue89, 27 May.
Reynolds M (2017) AI learns to write its own code by stealing from other programs. New Scientist,
25 February.
Robinson D and Koepke L (2016) Stuck in a pattern. Early evidence on ‘predictive policing’ and
civil rights. Upturn.
Robinson DG (2018) The challenges of prediction: Lessons from criminal justice. I/S: A Journal of Law and Policy for the Information Society 14: 151. URL (accessed 4 September
2019): https://ssrn.com/abstract=3054115.
Rosenberg J (2016) Only humans, not computers, can learn or predict. TechCrunch, 5 May.
Rouvroy A and Berns T (2013) Algorithmic governmentality and prospects of emancipation.
Réseaux No. 177(1): 163–196.
Rushkoff D (2018) Future human: Survival of the richest. OneZero, Medium, 5 July.
Selingo J (2017) How colleges use big data to target the students they want. The Atlantic, 4 November.
Skitka LJ, Mosier K and Burdick MD (2000) Accountability and automation bias. International
Journal of Human-Computer Studies 52(4): 701–717.
Spiekermann S, Hampson P, Ess C, et al. (2017) The ghost of transhumanism & the sentience
of existence. URL (accessed 4 September 2019): http://privacysurgeon.org/blog/wp-content/
uploads/2017/07/Human-manifesto_26_short-1.pdf.
Spielkamp M (2017) Inspecting algorithms for bias. MIT Technology Review, 6 December.
Spielkamp M (ed.) (2019) Automating Society. Berlin: AW AlgorithmWatch.
Stanier I (2016) Enhancing intelligence-led policing: Law enforcement’s big data revolution.
In: Bunnik A, Cawley A, Mulqueen M and Zwitter A (eds) Big Data Challenges: Society,
Security, Innovation and Ethics. London: Palgrave Macmillan, 97–113.
Steiner C (2012) Automate This. New York: Penguin.
Veale M and Edwards L (2018) Clarity, surprises, and further questions in the Article 29 Working
Party draft guidance on automated decision-making and profiling. Computer Law and
Security Review 34(2): 398–404.
Wall DS (2007) Cybercrime: The Transformation of Crime in the Information Age. Cambridge, UK;
Malden, MA: Polity Press.
Webb A (2013) Why data is the secret to successful dating. Visualised. The Guardian, 28 January.
Wilson D (2018) Algorithmic patrol: The futures of predictive policing. In: Završnik A (ed.) Big
Data, Crime and Social Control. London, New York: Routledge, Taylor & Francis Group,
108–128.