Taking Algorithms To Court
While debates over AI systems in warfare and facial recognition technology continue to dominate headlines, advocates and experts are increasingly concerned about the rapid introduction of algorithmic systems in government services. Every day, more computational and predictive technologies are being introduced into these services.
Policy makers are finally beginning to take action. For example, New York City recently established the first city-wide Automated Decision System Task Force to study and recommend guidance on the use of such systems across all of its public agencies. But the challenges involved are daunting: algorithmic systems are spreading rapidly throughout areas of government, straining existing mechanisms for fairness and due process, and posing risks to the welfare of all.
So how do we begin to understand the changes taking place? One method is to look to the courtroom, where evidence, expert testimony, and judicial scrutiny often reveal new insights into the current state of these systems. Recently, AI Now partnered with NYU Law’s Center on Race, Inequality and the Law and the Electronic Frontier Foundation to examine current United States litigation in which government use of algorithms was central to the rights and liberties at issue.
Our goal was to convene legal, scientific, and technical advocates who have gained experience litigating algorithmic decision-making in various areas (from employment to social benefits to criminal justice). Coming from a wide range of backgrounds, these experts discussed the challenges across five areas of government where algorithmic decision-making is already prevalent:
- Medicaid and Government Benefits
- Public Teacher Employment Evaluations
- The Role of Social Science and Technical Experts
- Criminal Risk Assessment
- Criminal DNA Analysis
The takeaways were sobering. We heard many examples where governments routinely adopted these systems to produce “cost savings” or to streamline work, without any assessment of how they might disproportionately harm the most vulnerable populations they are meant to serve — populations who have little recourse, or even knowledge, of the systems deeply affecting their lives. We also learned that most government agencies invest very little to ensure that fairness and due process protections remain in place when switching from human-driven decisions to algorithmically driven ones. However, we also heard about several early victories in these cases based, in part, on constitutional and administrative due process claims. And we saw first-hand how important multidisciplinary approaches are to winning in court when algorithms are involved.
While the playbook for litigating algorithms is still being written, we found these conversations extremely helpful in thinking about long-term solutions and protections that can become part of the legal and regulatory landscape. As evidence about the risks and harms of these systems grows, we’re hopeful we’ll see greater support for assessing and overseeing the use of algorithms in government. And further, that by continuing to convene experts from across disciplines to understand the challenges and discuss solutions, we’ll build a groundswell of strategies and best practices to protect fundamental rights and ensure that algorithmic decision-making actually benefits the people it is meant to serve.
To learn more about the takeaways from the workshop, read the full report.