Over the last few years, legal scholars, policy-makers, activists and others have generated a vast and rapidly expanding literature concerning the ethical ramifications of using artificial intelligence, machine learning, big data and predictive software in criminal justice contexts. These concerns can be clustered under the headings of fairness, accountability and transparency. First, can we trust technology to be fair, especially given that the data on which the technology is based are biased in various ways? Second, whom can we blame if the technology goes wrong, as it inevitably will on occasion? Finally, does it matter if we do not know how an algorithm works or, relatedly, cannot understand how it reached its decision? I argue that, while these are serious concerns, they are not irresolvable. More importantly, the very same concerns of fairness, accountability and transparency apply, with even greater urgency, to existing modes of decision-making in criminal justice. The question, hence, is comparative: can algorithmic modes of decision-making improve upon the status quo in criminal justice? There is unlikely to be a categorical answer to this question, although there are some reasons for cautious optimism.