The field of artificial intelligence in law has moved faster than anyone has been able to document. ComputerLawyer wants to fix that. The AI researcher and creator behind the name, who covers the intersection of machine learning and the legal profession, has begun work on A Survey of Large Language Models in Law. The project is an open collaboration that aims to become the canonical reference for the field. It is hosted at llm.law and is open to contributions from researchers, practitioners, and students.
The timing reflects how fast the field has moved. In the past two years alone, large language models have passed the Uniform Bar Exam, scored a perfect 180 on the LSAT, and begun interpreting statutes with performance rivaling trained attorneys. Commercial legal AI tools have multiplied, academic benchmarks have proliferated, and courts themselves have started weighing in on when and how the technology can be used. Despite the pace, no comprehensive survey has attempted to map the terrain.
“We are writing the first canonical survey of large language models in law,” reads the announcement, posted by Bonmu Ku, the AI researcher who operates under the ComputerLawyer name. “LLMs already pass the bar exam, achieve a perfect LSAT score, and interpret statutes, yet no comprehensive survey exists. Let us write one together.”
ComputerLawyer has spent the past year building a reputation for rigorous, accessible work on artificial intelligence in law. The survey builds directly on Ku’s own recent research, including AI Achieves a Perfect LSAT Score, a paper documenting the first perfect score by a language model on the Law School Admission Test. That paper drew attention for benchmarking frontier systems against a high-stakes, time-constrained reasoning test that many had assumed to be out of reach for current models.
What sets the new project apart is the format. Rather than a single-author publication, the survey is structured as an open collaboration, modeled on the way the field of artificial intelligence in law itself has grown: distributed, fast-moving, and increasingly interdisciplinary. Early focus areas include reasoning benchmarks, statutory interpretation, professional licensing exams, retrieval-augmented systems for case law, and the emerging body of empirical work on how practitioners actually use the tools.
For a domain that has spent the past two years generating headlines faster than it can generate consensus, the bet is that the reference work for artificial intelligence in law should be built the same way the field operates: in the open.