Let us know who you are, what you do, how you got interested in Rules as Code, and at least one thing that you would be happy to teach, or learn!
Hey, everyone. I’m Jason Morris. I’m one of the admins here at talk.rulesascode.com.
A long time ago I was a self-taught programmer and database analyst. I returned to school and ended up with a law degree. I practiced at my own firm, Round Table Law, for almost a decade.
In 2016 I was introduced to the topic of Computational Law, and never looked back. I went back to school for an LLM in Computational Law, which I completed in 2020. From the summer of 2020 to the summer of 2021, I was the Senior Research Engineer, Symbolic Artificial Intelligence with the Centre for Computational Law at Singapore Management University.
I am currently CEO of Lexpedite Legal Technology Ltd., and am working full-time on contract with the Government of Canada, where my role is Rules as Code Director inside Service Canada.
I am also the author of Blawx, a web-based, user-friendly tool for encoding legislation in declarative logic languages and making those encodings available online via API. I also wrote L4-docassemble, the first open-source legal expert system tool with defeasibility and natural-language explanations.
I am opinionated about Rules as Code, and often in the minority. So I look forward to debates. But mostly I’m excited to put talk.rulesascode.com out into the world, and help build a community that can change the world for the better.
I hope you find some value here, and please hit up the Site Feedback category if you have any ideas for how to make it better.
Hello! I’m Jameson. I am a volunteer global director for Legal Hackers, a CodeX fellow, and a tech policy attorney. I’m excited to be here with you all.
Hi, I’m Matt Carey. I’ve written some Python packages for caselaw analysis: AuthoritySpoke, Legislice, Justopinion, Nettlesome, and Anchorpoint. I’m working on new features for my legal data API at authorityspoke.com. And I blog about legal tech at pythonforlaw.com.
Jason, the company name “Lexpedite” is absolutely perfect. Good luck with it!
Welcome, @MattCarey ! Thanks for joining us.
And thanks for the compliment on the business name. Glad you like it.
Please add some of your many projects to our knowledge base!
Hi everyone, I’m Heidi Richards from Sydney, Australia (originally from Boston). I’m a semi-retired regulator (Fed, US Treasury, Reserve Bank of Australia and APRA) now advising on regulatory matters in the fintech industry. I love data analysis and did tiny bits of coding way back when, but I don’t have modern coding skills (open to learning, though!).
When I headed policy teams I was on the lookout for tools to make drafting and presenting regulations and standards better, and to allow more automation within regulated industries, so I started researching Rules as Code. I’m very much aligned with the NZ Better Rules philosophy of treating rule-making as a process. Would love to get involved in an advisory capacity on any RaC projects!
I have some project ideas of my own, but I want to find the right tools first (or help build them). Warning: as a former policy-maker I have a very pragmatic outlook. I’m looking for actual usable solutions that could scale to an entire rule book, based on how the humans who draft rules actually operate.
Welcome, Heidi! Thanks so much for joining us.
Hi, I’m Mikael, working as a development specialist at the Finnish Tax Administration. I’ve been there for ages, at least since 1996. Lawyer by education, but certainly not actively practising, not even tax law. I’m mainly occupied at the governmental level in different working groups and programs, dealing with “digitalization enablers” like new technologies (AI, DLT/blockchain, Trust over IP…) and my favourite subject, interoperability. Semantic interoperability, to be precise, although the other layers in the European Interoperability Framework developed by the EU are needed as well.
At present, one of my active projects is trying to develop a way to automate production of law-derived code directly from the legal drafts written by people participating in the drafting process in various departments (ministries). Our approach is based on the “rules as code” part of the Better Rules experiment/concept (NZ, 2018):
- 1) produce human-readable text with all relevant terms linked to a national SKOS ontology (“business glossary”) > we created an XML-based tool for this in 2020
- 2) automated transformation of 1) into pseudocode, identifying statements and conditions in the human-readable texts > this was… well, we got started in spring 2021
- 3) parsing the pseudocode into Python and executing the rules in a rules engine > same thing here as in 2), but due to unfortunate events some of the results are missing :-/
Now, parts 2) and 3) are things that I don’t control (or even understand) technically, being completely dependent on the developers I can recruit to the project > this means that I’d appreciate some peer review of our doings if anyone is interested.
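For readers trying to picture what steps 2) and 3) might look like, here is a minimal, entirely invented sketch in Python: one pseudocode rule of the kind step 2) might emit, parsed and evaluated against a case record. The rule text, field names, and IF/THEN grammar are all my assumptions, not the project’s actual design, and `eval` stands in for a real rules engine (it would not be safe on untrusted input).

```python
import re

# Hypothetical output of step 2): one statement with one condition,
# both referring to SKOS-linked terms ("marital_status", "joint_filing").
PSEUDOCODE = "IF marital_status == 'married' THEN joint_filing = True"

def parse_rule(text):
    """Split an IF/THEN pseudocode line into (condition, field, value)."""
    m = re.match(r"IF (.+) THEN (\w+) = (.+)", text)
    if not m:
        raise ValueError(f"not a recognised rule: {text!r}")
    return m.group(1), m.group(2), m.group(3)

def apply_rule(rule_text, case):
    """Step 3) in miniature: if the condition holds for the case,
    apply the consequence. eval() stands in for a real rules engine."""
    condition, field, value_expr = parse_rule(rule_text)
    if eval(condition, {}, dict(case)):
        case[field] = eval(value_expr, {}, {})
    return case

case = {"marital_status": "married"}
apply_rule(PSEUDOCODE, case)
# case["joint_filing"] is now True
```

The interesting (and hard) part of the real project is of course step 2), getting from free-ish legal prose to anything this regular; this sketch only shows why having that pseudocode layer makes step 3) comparatively mechanical.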
Mikael (can be pronounced as “Michael” in an international context)
Hey, Mikael: Welcome! That sounds like a very ambitious project. I would love to hear more about it, and take a look at whatever you’re in a position to share! I’ve never seen anyone generate usable code from natural language text that wasn’t strictly controlled natural language. Really fascinating.
It could be that we’re just too optimistic about the outcomes, but a number of technically skilled people encouraged me to at least try this approach.
Hi, I’m really interested in the ontology aspect of this, and in how the ontology works within the XML tool. I don’t know too much about XML or ontologies yet, unfortunately, but I’m doing some reading! It seems to me you need a context-specific legal/regulatory ontology that regulatory drafters can work within. Then going from step 1 to step 2 is tricky - I think this is combining the ontology concepts with logic statements? And it probably shows where your drafting needs work. A tool that helps do this would be really awesome.
I’ve discovered there is a whole community of people who just do ontologies. There is a Financial Industry Business Ontology which seems to be gaining traction, and a Financial Regulatory Ontology that seems to have been developed and then abandoned. These seem mainly to be used for data modelling within organisations and for some regulatory reporting; they are very detailed (maybe too detailed), but presumably they already define all of the concepts I would want to use in financial regulation. I just don’t have the tools/skills to access them.
I share your fascination with this stuff, @heidir. Haven’t had the chance to use it, myself, but it seems like an obvious ultimate destination for this sort of work.
The best tools that I have seen for building ontologies are the Protege tools from Stanford. “Best” in this case does not mean “simple.” The usual ontology reasoning tools are not really fit for purpose, I don’t think. But Flora-2 and SWI-Prolog and others have the ability to import data from RDF and do reasoning on it.
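To make the “import RDF and reason over it” idea concrete for anyone new to it, here is a toy sketch in Python rather than Prolog: RDF-style triples as plain tuples, and a small forward-chaining loop applying two RDFS rules. The FIBO-flavoured class names and data are invented; a real pipeline would load actual RDF with a library like rdflib and use a proper reasoner.

```python
# Invented RDF-style facts, written as (subject, predicate, object) tuples.
triples = {
    ("FIBO:Bank", "rdfs:subClassOf", "FIBO:FinancialInstitution"),
    ("FIBO:FinancialInstitution", "rdfs:subClassOf", "FIBO:Organization"),
    ("ex:AcmeBank", "rdf:type", "FIBO:Bank"),
}

def infer(triples):
    """Forward-chain two RDFS rules to a fixed point:
    subclass transitivity, and type propagation along rdfs:subClassOf."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, p1, b) in facts:
            for (c, p2, d) in facts:
                if p1 == "rdfs:subClassOf" == p2 and b == c:
                    new.add((a, "rdfs:subClassOf", d))
                if p1 == "rdf:type" and p2 == "rdfs:subClassOf" and b == c:
                    new.add((a, "rdf:type", d))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

facts = infer(triples)
# ("ex:AcmeBank", "rdf:type", "FIBO:Organization") is now derivable
```

This is exactly the kind of inference an off-the-shelf RDFS/OWL reasoner does for you, which is why the ontologies Heidi mentions are useful beyond data modelling: once the class hierarchy is encoded, conclusions like “a bank is an organisation” come for free.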
Hi Heidi, our starting point in our series of experimental projects is indeed the existence of a “context-specific legal/regulatory ontology”, which can be created on the national Interoperability Platform. The main tool, https://sanastot.suomi.fi/, enables subject-matter experts to define a SKOS ontology of relevant concepts, which can then be linked to from our Law Editor (https://lakieditori3.azurewebsites.net/ - only in Finnish… but you can log in as “testi” / “testaaja” and check some of the legal drafts). The SKOS ontologies are linked to SHACL shape data models published with the “Data Vocabularies” tool, https://tietomallit.suomi.fi/ > for example, if you define the concepts “person” and “marital status”, you can create a Class:Person and Property:MaritalStatus that are linked to the SKOS terminology concepts “person” and “marital status”. What we don’t know is whether this is of any help for the creation of “machine-consumable legislation”, but we feel it’s worth looking into.
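My rough mental model of the linkage Mikael describes, sketched in Python: a SKOS concept from the terminology service, and a data-model class and property that each point back to their concept. The URIs, labels, and class shapes here are all my own invention for illustration, not the platform’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class SkosConcept:
    uri: str          # e.g. a sanastot.suomi.fi concept URI (invented here)
    pref_label: str   # the human-readable preferred label

@dataclass
class ModelProperty:
    name: str
    concept: SkosConcept  # link from the data model back to the terminology

@dataclass
class ModelClass:
    name: str
    concept: SkosConcept
    properties: list = field(default_factory=list)

person_concept = SkosConcept("urn:example:concept/person", "person")
marital_concept = SkosConcept("urn:example:concept/marital-status", "marital status")

# The Class:Person / Property:MaritalStatus pair from Mikael's example,
# each carrying a pointer to its SKOS concept.
Person = ModelClass("Person", person_concept,
                    [ModelProperty("maritalStatus", marital_concept)])
```

If I have understood correctly, the payoff is traceability: a term in the legal draft, a concept in the glossary, and a field in the data model all resolve to the same thing, which seems like a prerequisite for any machine-consumable version of the rules.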
This spring we started the first of our RaC projects with the hypothesis that it could be… well, not easy, but anyway feasible to create a prototype of a tool that would translate human-readable text into some kind of pseudocode and then parse this into real code like Python… Unfortunately we had some bad luck with the choice of developer, and the promising results turned out to be, well, not exactly what he promised… We started a second project a couple of weeks ago that tries to approach the issue in an alternative way - not too much to tell about that yet, I’m afraid, but I’ll give you updates once we have something to show. Needless to say, we’ll publish any results openly for everyone.
Far more than “worth looking into.” Clear steps in the direction of where the technology ultimately needs to go.
I see that the terminology tool is open source… where can I find the source code? And is LakiEditori also open source? I’d LOVE to contribute a pull request for an English translation.
Also, please consider adding all of those tools to the Knowledge Base. I would do it myself, but I worry that something important would be lost in translation.
Also very excited to hear about your new project, if this is what the previous projects have produced!
Everything is open source > for Lakieditori, send any more technical questions to email@example.com and firstname.lastname@example.org (Jussi was the previous developer in our project, Teemu is the present one > the Lakieditori is, however, completely Jussi’s work)
Regarding the Terminology tool (and the “platform” as a whole) > information in English: Material in English - Yhteentoimivuusalustan julkinen dokumentaatio - DVV external Confluence
IF the agency responsible for the maintenance of the platform doesn’t answer your questions (has happened before :-() > please contact Ms Suvi Remes at the Ministry of Finance directly > email@example.com
Thanks Mikael, this indeed seems very cool. Any chance you would have time for a quick call to sort of demo it for us (in English preferably haha)?
Also I have no idea what SHACL is and the W3 page on this has not made me any the wiser!
I’m gathering that RDF is perfectly happy to let you say whatever you want in triples, and SHACL is a way to express rules about what constitutes a valid database, like “a person can only have one birthdate.” So RDF plus SHACL together get closer to what OWL does all at once, I think. Just a first-blush guess.
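To test that guess on the birthdate example: here is a tiny, made-up illustration in plain Python of what a SHACL-style maxCount constraint does. RDF happily stores the contradictory triples; the shape check then flags them. Real SHACL shapes would be written in Turtle and checked with a validator such as pySHACL; the data and the `check_max_count` helper here are my own inventions.

```python
from collections import Counter

# RDF will accept all of these assertions without complaint,
# including Alice's two contradictory birthdates.
data = [
    ("ex:Alice", "ex:birthDate", "1980-01-01"),
    ("ex:Alice", "ex:birthDate", "1979-12-31"),  # second, conflicting value
    ("ex:Bob", "ex:birthDate", "1990-06-15"),
]

def check_max_count(triples, predicate, max_count=1):
    """SHACL-flavoured sh:maxCount check: return the subjects that
    assert more than max_count values for the given predicate."""
    counts = Counter(s for (s, p, o) in triples if p == predicate)
    return sorted(s for s, n in counts.items() if n > max_count)

violations = check_max_count(data, "ex:birthDate")
# violations == ["ex:Alice"]
```

That division of labour (RDF says anything, SHACL says what counts as valid) matches your reading, as far as I can tell; OWL sits a bit differently, since its axioms are used to infer new facts rather than to reject invalid data.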