
Avocat | In-House Counsel
AI for Law and Law for AI
Multicultural lawyer (University of Oxford & Paris qualified, 3 citizenships, based in Abu Dhabi). Former General Counsel of Emirates Global Aluminium Africa. Currently Senior Counsel at Abu Dhabi Future Energy Company (Masdar). Alum of the Mohamed bin Zayed University of Artificial Intelligence Global AI Leadership Program. While I have built AI tools and agents (a delegation of authority companion, a fully bootstrapped Open Claw agent, an ICC case manager) and deployed third-party tools (playbooks, automated contract review, and a board of directors observer), my interest in software development has shifted to rebuilding operating models holistically, from first principles, based on what we know AI can do today, with bureaucracy reduction as a North Star.
Governance infrastructure has a dirty secret: the Delegation of Authority table is a legal fiction dressed up as a control framework. It tells you who should act — not who should act given the risk on the table today. DDAS (Dynamic Delegation of Authority System) is an open-source engine that replaces static approval matrices with a live risk-scoring model. Every transaction, agent action, or governance decision is evaluated against weighted Governance Units — calibrated by value, novelty, reversibility, and institutional exposure. The result is a threshold that moves with the risk, not a column in a spreadsheet that moves with the org chart. Built at the Legal Quants Hackathon. Designed to govern humans and agents equally.
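The scoring logic can be sketched roughly as follows. The four factors (value, novelty, reversibility, institutional exposure) come from the description above; the weights, thresholds, and approval tiers are illustrative placeholders, not the calibration DDAS actually ships with:

```python
from dataclasses import dataclass

@dataclass
class Action:
    value: float          # monetary exposure, normalized to 0-1
    novelty: float        # 0 = routine, 1 = unprecedented
    reversibility: float  # 0 = fully reversible, 1 = irreversible
    exposure: float       # institutional/reputational exposure, 0-1

# Illustrative weights -- in DDAS these would be calibrated per organization.
WEIGHTS = {"value": 0.35, "novelty": 0.2, "reversibility": 0.25, "exposure": 0.2}

def governance_units(a: Action) -> float:
    """Weighted risk score ("Governance Units") for a single action."""
    return (WEIGHTS["value"] * a.value
            + WEIGHTS["novelty"] * a.novelty
            + WEIGHTS["reversibility"] * a.reversibility
            + WEIGHTS["exposure"] * a.exposure)

def approval_path(a: Action) -> str:
    """Route by live score, not by a static org-chart column."""
    gu = governance_units(a)
    if gu < 0.25:
        return "auto-approve"
    if gu < 0.5:
        return "line manager"
    if gu < 0.75:
        return "executive committee"
    return "board"
```

The point of the sketch: two transactions of identical value can land on different approval paths if one is novel and irreversible, which is exactly what a static matrix cannot express.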
International arbitration case management has a tooling problem. The expensive platforms are built for firms, not practitioners. Everything else is spreadsheets. ArbitrationManager is an open-source ICC case management tool that covers the full procedural lifecycle in one place: deadline tracking with ICC standard milestone templates, auto-sequential exhibit registers (C-001/R-001), procedural order drafting with an auto-formatter, hearing logistics with cross-timezone scheduling and unsociable-hours flagging, and a full costs tracker from rate card to printable ICC-format costs statement. The ICC Rules aren't a configuration option — they're the architecture. Free to use. Built in a couple of hours as a gift to colleagues for Paris Arbitration Week 2026. Demo (if link below is broken): arbitration-case-manager.replit.app
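The auto-sequential exhibit register works on a simple per-party counter. A minimal sketch (class and field names are illustrative, not the tool's actual API):

```python
from collections import defaultdict

class ExhibitRegister:
    """Auto-sequential exhibit numbering in ICC style (C-001 / R-001)."""
    PREFIX = {"claimant": "C", "respondent": "R"}

    def __init__(self):
        self._counters = defaultdict(int)

    def register(self, party: str, description: str) -> str:
        prefix = self.PREFIX[party]
        self._counters[prefix] += 1
        # In the real tool the register row also carries the description,
        # date, and translation status; here we just return the number.
        return f"{prefix}-{self._counters[prefix]:03d}"

reg = ExhibitRegister()
reg.register("claimant", "Share purchase agreement")  # -> "C-001"
reg.register("claimant", "Notice of default")         # -> "C-002"
reg.register("respondent", "Expert report")           # -> "R-001"
```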

A delegation of authority retrieval agent, built in December 2025 on Copilot Studio, running GPT 5.2 with a large prompt built around a neuro-symbolic system. The agent does not just answer: it cross-examines the user to make sure it has the right information before deciding on the approval path. Internal tool, no access available. Demo on demand.

Saif Al Younan ("the sword of Greece") has been my personal Open Claw since February 2026. Saif delivers structured daily executive briefings on various topics, codes ad hoc software to automate tasks, helps debug other Claws, is currently working with another Open Claw (belonging to a friend) on business ideas, and is independently auditing the businesses of two friends of mine (by email) to understand their needs and build software that optimizes their workflows. Saif has an Arabic name and appears wearing a ghutra, as an homage to the UAE and its amazing people. Private tool. No access available. Demo on demand.

An OpenClaw skill that quality-checks AI output before it reaches a human. Two AI agents independently review the same deliverable against a structured checklist — without seeing each other's work. Where they agree, you can trust the result. Where they disagree, it gets flagged for human review. Think of it like having two proofreaders who don't talk to each other. If both catch the same error, it's real. If both say it's clean, it probably is. If they disagree, you look yourself. Works with any AI model, costs $0.02–$1.50 per check depending on depth, and produces a PDF audit trail. Open source.
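The reconciliation step can be sketched in a few lines. Here the two reviewers are stand-in callables; in the real skill each is a separate AI session that never sees the other's output, and the checklist is the structured one described above:

```python
def run_checklist(reviewer, deliverable, checklist):
    """One independent pass: a verdict per checklist item."""
    return {item: reviewer(deliverable, item) for item in checklist}

def reconcile(review_a: dict, review_b: dict) -> dict:
    """Where the reviewers agree, trust the verdict; where they split, flag."""
    out = {}
    for item in review_a:
        a, b = review_a[item], review_b[item]
        out[item] = a if a == b else "HUMAN_REVIEW"
    return out

a = {"citations": "pass", "numbers": "fail", "tone": "pass"}
b = {"citations": "pass", "numbers": "fail", "tone": "fail"}
reconcile(a, b)
# -> {'citations': 'pass', 'numbers': 'fail', 'tone': 'HUMAN_REVIEW'}
```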

OpenClaw Security Suite. Four open-source tools that test and harden the security of OpenClaw AI agents. Each tool feeds the next.
1. openclaw-security-audit — tests 47 adversarial scenarios across two tiers. OS-level controls are tested directly; LLM-judgment tests are sent through fresh sessions that don't know they're being tested. Outputs a defense rate and an HTML report.
2. openclaw-red-team — adaptive single-attack testing. When an attack is blocked, the attacker reads the refusal, analyzes what triggered it, and crafts a harder variant, for up to 5 rounds. Can ingest audit results to target specific weaknesses found by the first tool.
3. openclaw-attack-chains — multi-step attack sequences in which each step is an innocent request sent to an independent session. No single step is malicious; the breach only exists in the pattern. Ships 6 predefined goals, including API key exfiltration and persistent backdoor installation. The attacker plans, executes, and replans when blocked.
4. openclaw-hardening — reads the HTML reports from the first three tools and walks you through fixes one at a time. It explains each command in plain language, tells you who runs it, waits for confirmation, and verifies it worked.
Each tool's report includes prioritized fix recommendations. A few dollars per run, depending on which model you use. Built on Shan et al., "Don't Let the Claw Grip Your Hand" (2026), arXiv:2603.10387.
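The adaptive loop in the red-team tool can be sketched as follows. `send_to_agent` and `refine` are stand-ins for the tool's model calls (attacking model and target agent); the 5-round cap is from the description above:

```python
MAX_ROUNDS = 5  # matches the tool's cap of 5 adaptive rounds

def red_team(initial_attack, send_to_agent, refine):
    """Adaptive single-attack loop: escalate until breach or round cap.

    send_to_agent(attack) -> (breached: bool, response: str)
    refine(attack, refusal) -> a harder variant informed by the refusal
    """
    attack = initial_attack
    for round_no in range(1, MAX_ROUNDS + 1):
        breached, response = send_to_agent(attack)
        if breached:
            return {"result": "breach", "round": round_no, "attack": attack}
        attack = refine(attack, response)  # read the refusal, escalate
    return {"result": "defended", "rounds": MAX_ROUNDS}
```

The design point is that each failed attempt feeds the next one, so the target is graded against an attacker that learns, not a fixed test set.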

This app demonstrates a practical application of multi-agent consensus. Three AI models review the same deal documents against a 42-item checklist. Where they disagree is where your lawyers should focus. Generates a routing report — CLEAR, CHECK, REVIEW, ESCALATE — based on severity × consensus. Open source, with a customizable risk matrix. Built with Ali Buhaji, based on the Centaur's Gambit.
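The severity × consensus routing can be sketched like this. The severity scale and score thresholds are illustrative placeholders, not the app's actual (customizable) risk matrix:

```python
SEVERITY = {"low": 1, "medium": 2, "high": 3}  # illustrative scale

def route(findings: list[dict]) -> str:
    """Route one checklist item from three model verdicts.

    findings: one dict per model, e.g. {"flagged": True, "severity": "high"}.
    """
    flags = [f for f in findings if f["flagged"]]
    consensus = len(flags)  # how many of the 3 models see an issue
    severity = max((SEVERITY[f["severity"]] for f in flags), default=0)
    score = severity * consensus  # severity x consensus
    if score == 0:
        return "CLEAR"
    if score <= 2:
        return "CHECK"      # minor issue, or one model flagging alone
    if score <= 4:
        return "REVIEW"
    return "ESCALATE"       # serious issue that multiple models agree on
```

A single model flagging a low-severity item routes to CHECK; all three flagging a high-severity item routes to ESCALATE, which matches the idea that disagreement and severity together decide where lawyer time goes.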