Someone Is Writing AI Rules Without You. Here's How to Change That.
damon.houf
1w ago (edited)
A roadmap for getting your feedback adopted — and what I wish I'd raised.
Every jurisdiction is publishing guides on how lawyers should use AI. These guides get cited by courts, by regulators, by disciplinary bodies. And right now, not one of them accounts for lawyers who build their own tools.
One Submission, Two Recommendations Adopted
Last September, Singapore's Ministry of Law consulted on its draft Guide for Using Generative AI in the Legal Sector. The consultation was sure to attract responses from the big firms in Singapore. I had written a post about the draft and shared it on LinkedIn, and several people recommended I send in my feedback. But a guy nobody had heard of, sending in a repackaged blog post? I figured it would be politely received and quietly filed. I almost didn't submit. On the night before the consultation closed, I emailed in a four-page submission anyway.
Six months later, the final guide landed. Both of my core recommendations — making AI literacy an explicit professional competency, and providing concrete protocols for consumer tools like ChatGPT and Claude — were adopted. In some places, near-verbatim. The draft went from 26 pages to 48. The contributors list names roughly 40 organisations and three individuals. I'm one of them.
These processes are real. They produce the documents that regulators, disciplinary bodies, and courts actually cite. When your bar association or ministry publishes a guide, it shapes how the wider legal community understands what's possible and what's acceptable. That's why who contributes matters — and why the absence of certain perspectives has consequences.
Here's what I mean concretely. The original draft was, if I'm being blunt, a guide to buying Harvey for your firm and making it work. Its mental model was enterprise procurement: select a vendor, review their data policies, deploy to your team. Consumer tools — the ChatGPTs and Claudes that most practitioners actually use — got three cautionary sentences, all warning you away from them. No practical protocols. No acknowledgement that for many lawyers, these are the only option.
By naming that gap and proposing specific protocols, I got the final guide to treat consumer tools as something to be governed responsibly — not just avoided. The act of naming something in a submission forces the framework to account for it. And how you frame it shapes whether the framework treats it as a risk to be managed or a practice to be enabled.
A Roadmap: What Made the Difference
Having compared my submission against both the draft and the final guide line by line, here's what I think worked. This isn't a formula — but it's a pattern that legal quants can repeat.
1. Propose insertable text, not abstract critique
I didn't write "the guide should address AI literacy." I wrote the actual paragraph, keyed to the actual paragraph number. I proposed a new paragraph 17(d) with five specific competencies. The final guide's paragraph 20(a) adopted all five, in substantially the same structure.
Drafting committees work under time pressure. Language they can insert with minor edits reduces their workload. A vague suggestion adds to it.
2. Identify gaps, not disagreements
I praised the existing guide and asked for additions. The draft's approach to professional judgment was "appropriate" — it just had "limitations." The consumer tool guidance was fine — it just needed to be "expanded."
A drafting committee that receives 20 responses isn't looking for reasons to rewrite from scratch. They're looking for improvements they can incorporate without invalidating work already done. Additive suggestions clear that bar easily. Oppositional ones often don't.
3. Ground everything in lived experience
"I write as a solo in-house counsel with experience deploying AI tools in resource-constrained legal practice." That single sentence established that I wasn't theorising — I was reporting from the field.
Legal quants have a massive advantage here. You've seen what happens when a prompt goes wrong not because you read about it, but because you debugged it. No committee member has that perspective.
4. Mirror the guide's own logic back at it
The draft said lawyers should "craft precise prompts." I asked: how, without understanding prompt engineering? It said lawyers should check whether AI output "comprehensively cover[ed] all aspects." I asked: how, without understanding the tool's capabilities?
The gap becomes self-evident. You're not arguing from outside — you're showing that the guide's own recommendations require a foundation it hasn't provided.
5. Address the consultation questions directly
Most consultations publish specific questions they want feedback on. I included a section mapping my recommendations to each question. The drafting team needs to report how feedback addressed each published question. Making that easy for them makes adoption easier.
6. Frame expansion as inclusion
"These recommendations would make Singapore's AI governance framework truly inclusive across the legal sector's diversity." I wasn't opposing the guide — I was helping it achieve its stated purpose more fully.
7. Be brief and focused
Four pages. Two themes. No sprawl.
What None of Us Have Raised Yet
Every AI guide worldwide — every single one I've checked — operates on a consumer model: lawyer uses tool, lawyer verifies output, lawyer remains responsible. Not one contemplates the lawyer who builds.
I didn't raise this in my submission either. I was building and deploying my own tools at the time, but I didn't think it was a shared concern. Then the LegalQuants community formed, and it turned out a lot of people were doing the same thing quietly.
Naming consumer tools in my submission forced the guide to account for them. The same thing would have happened if I'd named builder-lawyers.
If even one paragraph of Singapore's final guide had acknowledged the possibility that ordinary lawyers — not just legaltech vendors — can build their own tools using their domain expertise, it would have shifted perception. Not because one paragraph changes the world, but because these guides are what law firms read when they're deciding what's acceptable. They're what clients read when they're assessing whether to trust a tool built by their own lawyer. An official acknowledgement would have made it easier for every legal quant to have that conversation with their firm, their client, their regulator.
Instead, the guide — like every other guide — assumes that if you're not buying from a vendor, you're using ChatGPT with the privacy settings toggled. The entire space of lawyer-built, domain-specific tools doesn't exist in the regulatory imagination yet.
The questions that flow from this are ones this community knows well. When does a lawyer-built tool cross from personal productivity aid into something requiring governance as a software product? What are proportionate testing standards for a bespoke tool versus an enterprise platform? How do you disclose to clients that the analysis was performed by something you vibe-coded yourself? What's the duty of competence when you're not just using AI but architecting how it processes legal information?
If those frameworks are written without us, they'll default to one of two extremes: ignoring builder-lawyers entirely, or imposing enterprise-grade procurement requirements that make vibe-coding impractical. Neither outcome serves clients. Neither is inevitable.
The next consultation in your jurisdiction is the first real opportunity to raise this. Now we know it's not just us.
