AI is widening the gap between lawyers
Two lawyers can be given the same task, with access to the same tools, and produce work that looks more or less the same on first reading. Both drafts may be well written. Both may appear complete. But when the work is tested, whether by a sceptical client, an opposing party, or a judge, the difference becomes clear very quickly. One holds. The other does not. The difference goes beyond intelligence, effort, or experience: it is also a difference in method.
What is emerging in the profession is not just a divide between technical and non-technical lawyers, nor between those who use AI and those who do not. The more meaningful distinction, at least in my experience, is between lawyers who know how to structure and test their reasoning, including the reasoning that AI produces for them, and those who rely on outputs that read persuasively but have not been properly interrogated.
The clearest way to see this is in how lawyers actually use AI day to day. One approach is essentially linear: a question is asked, an output is produced, and that output is refined. The tool is treated as a source of answers, and the goal is to quickly produce something that reads well. Speed is the headline metric.
The alternative is more structured. The problem is broken down before any answer is accepted. What, precisely, is the question? What information do I have? What is missing? What is being assumed? AI is then used across each of these stages: to extract, to organise, to test, and only then to draft. The output is treated as provisional. It is not the answer; it is the start of the analysis.
Both approaches can produce text that sounds convincing. But only one produces work that holds up to scrutiny.
The difference is most visible in the structure of arguments. It is now common to see arguments that are well written, logically ordered, and apparently persuasive, but which depend on an assumption that has not been proved. AI is very good at filling gaps in reasoning with plausible language. The transitions are smooth. The tone is confident. The argument feels complete because nothing in it sounds tentative. Challenge the underlying assumption, however, and the argument collapses.
In traditional workflows, this kind of weakness is often caught through experience or adversarial review. The instinct to ask “what am I actually relying on here?” tends to develop with time. When AI is introduced without a corresponding shift in method, that instinct can quietly atrophy. The output looks finished and feels authoritative. Unless the lawyer is actively looking for what has not been proved, or what has been inferred rather than established, the weakness passes unnoticed until somebody else finds it.
This is not a failure of the technology per se; rather, it is a failure to use technology well as a lawyer. This can be seen in R (Ayinde) v London Borough of Haringey; Al-Haroun v Qatar National Bank QPSC [2025] EWHC 1383 (Admin). Both cases were heard together under the Hamid jurisdiction. In Ayinde, a barrister had cited five non-existent authorities in grounds for judicial review; in Al-Haroun, eighteen out of forty-five authorities cited in correspondence and witness statements were fake. Wasted costs orders were made against both the barrister and Haringey Law Centre, and both were referred to their regulators.
The judgment is being read, rightly, as a warning about citation. The deeper point is that the duty to verify does not change with the tool. A submission is not less the lawyer’s responsibility because a model wrote it. The professional consequences of treating fluent output as reliable output have moved from theoretical to documented.
AI, in this sense, is not creating a new problem in legal work; rather, it is amplifying an old one. The profession has always lived with a distinction between clarity and correctness. A well-written argument is not necessarily a sound one; a confident answer is not necessarily a reliable one. What AI changes is the ease and the volume with which fluent, coherent output can be produced. Where that fluency is mistaken for accuracy, the quality of work degrades, often in ways that are not obvious until the work is challenged. Where the fluency is treated as a starting point and properly interrogated, AI becomes a genuine multiplier, both of speed and of quality.
While AI reduces the cost of producing legal text, it does not reduce the cost of verifying it. Verification still requires the same legal knowledge, the same thoroughness, and the same willingness to test an assumption against the actual authority. The gap between the cost of producing a draft and the cost of standing behind it is therefore widening. Lawyers who recognise this, and who invest the time saved on drafting back into verification and structure, capture the productivity gain. Lawyers who simply ship the draft inherit the risk.
This is also why the gap between practitioners is widening more quickly than it used to. The competence baseline has not moved very much; it remains possible to produce serviceable work without engaging deeply with these tools. The upper end, however, has shifted. Lawyers who can structure their thinking, decompose problems, and test AI outputs with care now produce work that is faster, more reliable, and more adaptable than was previously possible at the same cost. The difference may not always be visible in the final draft, but it shows clearly in how the argument performs when it is actually challenged.
There is a further point that is rarely made, but which I think matters more over time: the two approaches do not just produce different work; they produce different lawyers.
The lawyer who treats AI as a source of answers is not building anything that compounds. Each matter starts more or less from scratch. Meanwhile, the lawyer who treats AI as a tool inside a structured method is building something durable: a way of breaking down problems, a library of prompts and templates, a sense of where the model is reliable and where it is not, a memory of the assumptions that tend to be smuggled in. That capability accumulates, and it transfers from matter to matter. Over a year or two, it pulls the two lawyers apart at a rate that has very little to do with the tool itself.
This is also where the reflex to think of the relevant skill as “learning a particular AI tool” starts to mislead. Knowing how to prompt, how to summarise quickly, how to generate a usable first draft: these are useful skills, but they are not the skills that make the difference. What matters is the older, less fashionable discipline of taking a vague instruction and converting it into a clear set of questions; of separating what is known from what has been assumed; of recognising when an answer rests on a premise that has not been established; and, when using AI, of treating the output as a draft to be tested rather than a conclusion to be polished.
These are not new skills. Good lawyers have always done this. What is changing is the cost of not doing it. Looking forward, this distinction is likely to become increasingly pronounced.
As models improve, the fluency of their outputs will continue to increase. The distance between something that sounds right and something that is right will become harder to detect without deliberate effort. At the same time, the discipline of structuring and testing work compounds over time. Lawyers who develop these habits will build workflows that are both efficient and resilient, with AI sitting inside a method rather than substituting for one. Lawyers who do not may find themselves producing work that is superficially strong but increasingly fragile, and doing so in a regulatory environment that is no longer prepared to treat that fragility as an excuse.
This is not, in the end, a question of technology. It is a question of method, and of professional responsibility. Knowledge of the law remains necessary, and will remain so. The differentiator is becoming how that knowledge is used: how problems are structured, how outputs are interrogated, and how confidently a lawyer can move from a fluent answer to a reliable one. That is where the real divide is emerging.
