Every day, courts and tribunals produce millions of words of text, most of which is read once before being filed away in the archive, never to be read again.
This is the “great unread” of the law, and it is full of promise for the social scientist: every document is the evidentiary residue of some legal experience.
But we do not read the unread. The reason is obvious: you and I can read only so much in a day. But if we are so limited, must a computer be?
My research is computational: how can computers give legal researchers purchase on the “great unread” of legal text? Using algorithmic techniques and massive datasets of legal text and information, I build models of legal experience.
When I am not working with computers, I write award-winning legal histories and large-scale empirical accounts of the law-in-action.
Google Scholar / CV / Obiter.AI / SSRN