Samuel Dahan.

How Samuel Dahan is democratizing AI for legal professionals

As Samuel Dahan sees it, most lawyers have a big toolbox of skills. Many are good at analyzing huge amounts of information, for instance. And getting stuff done.  

“And fighting – we’re really good at that,” he says with a smile.  

But “building tech tools” isn’t usually found in that lawyer toolbox.            

It is for Dahan, however.  

The Queen’s associate professor of law is the founder and director of the Conflict Analytics Lab at Queen’s. It’s a research consortium made up of more than a dozen academic and industry partners that develops AI applications to help lawyers, compliance professionals, and non-lawyers solve their legal problems.

“I really like the building part of this,” says Dahan. “Especially building products that can improve access to justice.”  

The main product his team is building right now is called OpenJustice. It’s an open-access, open-source AI platform that helps legal professionals get the legal information they need. It can even prepare legal documents. Right now, it handles about 20,000 queries per month from users.

Conflict Analytics Lab team.

The current version of OpenJustice works like ChatGPT in that users ask what they want – such as an opinion on a specific landlord-tenant dispute – and then it provides a comprehensive legal analysis.  

But one of the big differences between OpenJustice and an AI chatbot like ChatGPT is that OpenJustice is specifically trained on several different legal systems, including those in Canada, the U.S., and France. The team is also collaborating with international partners to soon integrate legal frameworks from Australia, the Netherlands, and Switzerland.

Dahan, who did a PhD in empirical legal science, started seeing the need for a sophisticated tool like this when he worked at the European Commission and the European Court of Justice about a decade ago.  

“I found there were a lot of inconsistencies in the case law,” he says. “And this was usually because judges didn’t have access to the right insights at the right time.”  

That need continues, he says, even with the arrival of ChatGPT and similar tech.  

 “That generic AI has really been underperforming in law,” says Dahan. “There are a lot of errors, and that’s creating a lot of problems, including lawyers citing fake cases in court.”  

In response, some large companies have been jumping into this space recently, including Thomson Reuters with its AI assistant CoCounsel. But CoCounsel has also shown serious limitations, with reported error rates as high as 88 per cent.

Another problem is that some of these new AI assistants can cost tens of thousands of dollars, says Dahan. “And that’s definitely out of reach for most lawyers, particularly pro bono lawyers as well as medium and small practices. And they are the ones who probably need this the most because they are overwhelmed with a lot of requests.”  

This is why the Conflict Analytics Lab is focusing on developing AI that can be especially helpful to access-to-justice organizations.  

Take one access-to-justice organization, for instance, which receives thousands of phone calls per day from Ontarians wanting free legal help. Dahan and his team have been building a system using the OpenJustice framework that can transcribe all of those calls into text, prioritize them, and even pre-draft responses.
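To picture what such an intake-triage pipeline might involve, here is a minimal sketch in Python. The urgency keywords, scoring rule, and response template are illustrative assumptions, not the lab’s actual system; in practice the transcription and drafting steps would be handled by dedicated speech-to-text and language models.

```python
from dataclasses import dataclass

# Hypothetical terms that might raise a request's priority; a real triage
# system would more likely use a trained classifier than a keyword list.
URGENT_TERMS = {"eviction", "deadline", "hearing", "arrest"}

@dataclass
class IntakeCall:
    caller_id: str
    transcript: str  # assumed output of an upstream speech-to-text step

def priority_score(call: IntakeCall) -> int:
    """Crude urgency score: count urgent terms appearing in the transcript."""
    text = call.transcript.lower()
    return sum(term in text for term in URGENT_TERMS)

def pre_draft_response(call: IntakeCall) -> str:
    """Placeholder draft; a production system would call a legal-domain model here."""
    return (
        f"Dear caller {call.caller_id},\n"
        "Thank you for contacting us. A caseworker will review your request:\n"
        f"{call.transcript[:200]}"
    )

if __name__ == "__main__":
    calls = [
        IntakeCall("A-102", "My landlord gave me an eviction notice and the hearing is next week."),
        IntakeCall("A-103", "I have a question about updating my small business contract."),
    ]
    # Triage: handle the most urgent requests first.
    for call in sorted(calls, key=priority_score, reverse=True):
        print(priority_score(call), pre_draft_response(call), sep="\n", end="\n\n")
```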

“So the whole question here is: Can we use AI to help more people?” says Dahan. “Can we integrate it into workflows so that it actually makes legal professionals more efficient and productive?”  

The short answer is yes, says Dahan. But the OpenJustice platform needs improvement, he adds. And this is where Queen’s alumni and the Queen’s community as a whole can come in.

The Conflict Analytics Lab has already released versions one and two of OpenJustice. Version two adds retrieval-augmented generation (RAG), which connects language models with legal data so the platform functions like a search engine – but one that generates tailored responses rather than just listing links.
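In outline, retrieval-augmented generation means fetching the most relevant passages first and then asking the model to answer from them. The sketch below uses a toy bag-of-words similarity and a made-up document list; the article does not describe OpenJustice’s actual retrieval stack, so treat this as a generic illustration of the idea.

```python
from collections import Counter
from math import sqrt

# Toy corpus standing in for indexed legal materials (illustrative only).
DOCUMENTS = [
    "A landlord must give written notice before ending a tenancy.",
    "Employees dismissed without cause may be entitled to reasonable notice.",
    "Small claims court handles civil disputes below a monetary threshold.",
]

def bow(text: str) -> Counter:
    """Bag-of-words vector; real systems would use learned embeddings instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Retrieved passages go into the prompt so the model answers from them."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("Does my landlord have to give notice before eviction?"))
```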

This summer, the lab is set to unveil OpenJustice version three, a creator interface that empowers legal professionals to embed their own legal knowledge and reasoning into any language model of their choice, whether it’s GPT, Claude, or another specialized system. Designed with the unique challenges of pro bono practitioners and small firms in mind – who often face tedious administrative tasks without the support of large teams – this new version allows lawyers to customize AI directly to their workflows without needing to write code.
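One way to imagine what such a no-code creator interface might produce under the hood is a structured profile of a practitioner’s knowledge that gets compiled into instructions for whichever model they choose. The profile fields and compile step below are hypothetical; the article does not describe OpenJustice’s internal format.

```python
# Hypothetical creator profile a lawyer might fill in through a form,
# later compiled into system instructions for GPT, Claude, or another model.
creator_profile = {
    "jurisdiction": "Ontario, Canada",
    "practice_area": "residential tenancy",
    "preferred_sources": ["Residential Tenancies Act, 2006"],
    "workflow_rules": [
        "Always cite the statute section relied on.",
        "Flag any deadline within 14 days as urgent.",
    ],
}

def compile_system_prompt(profile: dict) -> str:
    """Turn the form-style profile into plain-language model instructions."""
    rules = "\n".join(f"- {r}" for r in profile["workflow_rules"])
    return (
        f"You assist a {profile['practice_area']} practice in {profile['jurisdiction']}.\n"
        f"Prefer these sources: {', '.join(profile['preferred_sources'])}.\n"
        f"Follow these rules:\n{rules}"
    )

if __name__ == "__main__":
    print(compile_system_prompt(creator_profile))
```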

In a major step toward democratizing legal AI, the team open-sourced this technology on Feb. 16 at Queen’s during the inaugural Canadian Legal AI Hackathon, hosted in collaboration with Stanford University.

“So, we’d love to hear from alumni and anyone else in the legal community who have specific projects or needs in mind that OpenJustice might help with,” says Dahan. “That could really help us refine this next version.”

For more information, head to the Conflict Analytics Lab and OpenJustice websites.