D.C. Judge’s Thoughts on Use of AI by Judges

From D.C. Court of Appeals Judge John Howard’s concurrence last month in Ross v. U.S., about the possible upsides and downsides of judges using AI (entirely apart from whether they use AI results as arguments in their opinions):

To be clear, I cast no aspersion on the use of AI by my colleagues. I find it interesting. AI tools are proliferating and we ignore them at our own peril. Not only for the concerning capabilities they now give parties with ill intent, but for the great utility such tools could potentially provide in easing the strain on our increasingly overburdened courts.

AI tools are more than a gimmick; they are coming to courts in various ways, and judges will have to develop competency in this technology, even if the judge wishes to avoid using it. Courts, however, must and are approaching the use of such technology cautiously. Specific use cases are being considered and we must always keep in mind the limits of different AI tools in how and when we use them, particularly with regard to security, privacy, reliability, and bias, to ensure ethical use.

Broadly, an AI system can be susceptible to bias at multiple points in its execution. Model Code of Judicial Conduct Rules 2.2 and 2.3, dealing with impartiality and fairness and bias, prejudice, and harassment, are potentially implicated in reliance on a system infected with bias. Ignorance of the technology seems like little defense in consideration of the duty of competence in Rule 2.5.

Other issues abound, but security and confidentiality of court information are particular concerns. Accordingly, before using an AI tool a judicial officer or staff member should understand, among many other things, what data the AI tool collects and what the tool does with their data.

The quote has many attributions that “if it is free, you are the product.” Many AI tools benefit from what we feed into them, documents, prompts, etc., virtually every aspect of our interaction trains and hones such tools. That is part of the early-mover advantage of ChatGPT in particular, which blew away previous records to reach one million users in five days—and 100 million within two months of going live. As of January 30, 2025, it was estimated to have approximately 300 million weekly users. It is hard to imagine a company that could afford to pay that many people to test and develop their model. However, such a system raises serious practical and ethical issues for a court. Security is a preeminent concern. I briefly look at a few hypotheticals in the context of this court to illustrate.

First, take the use case of a judge utilizing an AI tool to summarize briefs filed with the court well in advance of oral argument—a practice, along with summarizing voluminous records, that some AI tools appear to be quite adept at. It is the practice of this court to announce the members of a particular panel of judges the week before an oral argument. Should a judge be using an AI tool that trains on the data they submitted, they have now surrendered data which includes—at bare minimum—the submitted data, i.e. the briefs of the parties, and potentially personally identifying data, i.e. a username, IP address, and email address. Data which, reviewed together, could expose the judge’s involvement on the panel to individuals and systems with access to that data before that information is public.

Next, fast-forward past argument and assume our hypothetical technophile jurist decides they will have the AI tool aid them in the preparation of a decision. AI tools offer many potential use cases here. For one, perhaps with careful prompting, detailing the types of facts or story that is desired, the AI tool could be used to pull from the record and produce a first draft of the factual rendition section of the decision. It could develop an initial statement of the standard of review and controlling law. In varying degrees of quality, depending on the tool and inputs, it could formulate a first take at some analysis.

However, again, should the AI tool be training itself on the data, someone with access to the data would have access to judicial deliberative information and potentially personally identifying login/user information that could identify the judge as well. Of even more concern, as the data trains the tool, another user could stumble upon it or some aspects of it regurgitated by the AI tool. Even if the odds are miniscule, confidential judicial deliberative information has potentially leaked out ahead of a decision in this scenario.

Consider further the scenario that any of the material used in either prior hypothetical contained sensitive information that would otherwise be subject to redaction, i.e. social security numbers, account numbers, minor’s names, etc. If unredacted briefs or records were loaded into the AI tool, it would be an instant failure of the court’s duty to protect such information. Three hundred million users, in the scenario of ChatGPT, described above, would potentially have access.

I pause briefly here to note that such concern does not appear to arise from the use of AI in this decision. The dissent’s generalized hypothetical questioning, without more, does not strike me as remotely unique to this case in a way that could even inadvertently expose deliberative information. The majority’s use of ChatGPT provides comparison by prompting the tool against the facts of a previous case for analysis. It strikes me that the thoughtful use employed by both of my colleagues are good examples of judicial AI tool use for many reasons—including the consideration of the relative value of the results—but especially because it is clear that this was no delegation of decision-making, but instead the use of a tool to aid the judicial mind in carefully considering the problems of the case more deeply. Interesting indeed.

The previous examples that I described as potential improper use of an AI tool, however, could be accomplished with the use of an AI tool with robust security and privacy protections. Even more exciting, AI companies have begun to announce the release of government oriented tools which promise to provide such protections and allow for such potential use cases.

As state courts across the country cautiously consider these issues, the National Conference of State Courts has taken a lead in coordinating efforts. It has put together an AI Rapid Response Team and created a policy consortium, constantly updating resources. And the D.C. Courts have not stood idly by, creating our D.C. Courts AI Task Force and partnering with the National Conference of State Courts. As the use of AI begins to appear at the D.C. Courts, litigants and the citizens of the District can be assured that cautious and proactive thought is being directed by our judges and D.C. Courts team members, toward the beneficial, secure, and safe use of AI technology.
