“Learn AI” is not a strategy – the technical skills young and aspiring lawyers actually need to know

Larissa Meredith-Flister
I have been mentoring a student through City St George’s, University of London, and in one of our recent conversations we were discussing what technical skills she should learn as an aspiring solicitor.
The instinctive answer, “learn AI,” is already incomplete. What I am increasingly seeing, both with students and junior lawyers, is not a lack of access to tools. It is a misunderstanding of how to approach them. People tend to fall into one of two camps: those who do not trust AI at all, avoiding it or dismissing it without serious engagement; and those who trust it far too much, accepting outputs at face value because they sound convincing.
Both approaches miss the point, though in different ways.
The two mistakes
The first is refusing to engage. I understand the instinct. The technology feels uncertain, imperfect, sometimes unreliable. But these tools are now part of how legal work gets done. Refusing to use them is not a principled position; it is a practical disadvantage. It is a bit like a lawyer deciding not to use Word. You can hold that view, but it will limit your usefulness and your employability.
The point is not that you ought to trust AI. It is that you ought to understand how to use it properly. The lawyer remains responsible for the output, just as she would be if a trainee had produced a first draft. That responsibility does not disappear because the tool is new.
The second mistake is the opposite, and in some respects more dangerous: over-trusting it.
AI is very good at sounding right. It produces clean, confident text that feels complete. If you do not understand, even at a basic level, how it generates that text, it is remarkably easy to mistake fluency for accuracy. That is a human problem, not a technical one. We are wired to trust things that sound authoritative, and large language models have, if nothing else, mastered the register of authority.
In practice, this is no different from relying blindly on a junior team member. No competent lawyer does that. But equally, no competent lawyer refuses to work with her team at all. The skill, as with most things, is in the middle: knowing when to use AI and how to verify its output.
So what should you actually learn?
If “learn AI” is not sufficient, what is?
From what I see in practice, the distinguishing skills are not about knowing more tools. They are about using them in a way that produces measurably better work:
Using AI across a task – The most important, to my mind, is learning how to use AI across a task rather than at a single step. Most people treat it like a search engine: one question, one answer. That is low-value work. The more productive approach is to break a piece of work into stages: extract key facts from documents, organise them into a timeline, identify the issues, generate a draft, then refine and verify. Using AI across those stages, rather than at one isolated point, is where real gains emerge. You do not need to be technical to do this. But you do need to think more deliberately about how you approach a piece of work, and that deliberateness is itself a skill that takes practice.
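For the curious, the staged approach can be sketched in a few lines of Python. The ask_ai function below is a hypothetical stand-in for whatever assistant you use (it is stubbed here, not a real API); the point is the structure of the work, not the tool.

```python
def ask_ai(instruction: str, material: str) -> str:
    """Hypothetical stand-in for an AI assistant call (stubbed for illustration)."""
    return f"[AI output for: {instruction}]"

def staged_review(documents: str) -> dict:
    """One piece of work broken into stages, each a separate checkpoint."""
    facts = ask_ai("Extract the key facts, with sources", documents)
    timeline = ask_ai("Organise these facts into a dated timeline", facts)
    issues = ask_ai("Identify the legal issues raised", timeline)
    draft = ask_ai("Draft an advice note covering these issues", issues)
    # The lawyer, not the tool, verifies each stage before moving to the next.
    return {"facts": facts, "timeline": timeline, "issues": issues, "draft": draft}

result = staged_review("...client documents...")
print(list(result.keys()))
```

Each stage produces something you can inspect and correct before it feeds the next, which is precisely where the verification the tool cannot do for you belongs.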
Properly structuring your work – Closely related is the ability to structure your work before you begin it. AI works best when the task is clear. Legal work rarely is. One of the most underrated professional skills is being able to take a vague instruction and turn it into something structured: what is the actual question, what information do I have, what is missing, what are the steps. Get that right and AI becomes a genuinely useful tool. Get it wrong and it amplifies confusion, producing confident nonsense with impressive speed.
Building templates and frameworks – Then there is the question of building repeatable ways of working. A great deal of legal work follows patterns: similar letters, similar arguments, similar structures. Most junior lawyers draft from scratch every time, which is slow and introduces unnecessary risk. I would encourage anyone starting out to build their own templates and frameworks: standard structures for common documents, reusable phrasing that you know is correct, checklists for recurring tasks. It is unglamorous work. It is also one of the fastest ways to improve both speed and accuracy, and it compounds over time in a way that sporadic effort does not.
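To make "templates" concrete, here is a toy Python sketch of a reusable letter skeleton with named placeholders. Every name, date, and phrase in it is invented for illustration; the idea is that vetted structure and phrasing are written once and reused.

```python
# A reusable letter skeleton: the structure and standard wording live in one
# place, and only the matter-specific details change from letter to letter.
LETTER_TEMPLATE = """Dear {recipient},

We act for {client} in relation to {matter}.

{body}

We look forward to hearing from you by {deadline}.

Yours faithfully,
{firm}
"""

letter = LETTER_TEMPLATE.format(
    recipient="Ms Smith",
    client="Acme Ltd",
    matter="the supply agreement dated 1 March 2024",
    body="Our client considers that the delivery obligations have not been met.",
    deadline="28 June 2024",
    firm="Example LLP",
)
print(letter)
```

The same principle applies whether the template lives in a Python script, a Word document with fields, or a precedent bank; the gain comes from reuse, not the technology.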
Learning basic data skills – I would also place basic data skills higher than most people expect. A surprising amount of legal work is really about organising information: timelines, disclosure exercises, damages calculations, transaction records. If you can structure and analyse that information competently, even just in Excel, you will have a practical advantage over colleagues who cannot. This does not get discussed much in the profession, perhaps because it feels insufficiently “legal” (and I have heard a number of lawyers say they are “not good with numbers”), but it makes a noticeable difference in practice.
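As a small illustration of what "structuring information" means in practice, here is a toy chronology built with nothing beyond standard Python. The events are invented; the point is that once facts are data rather than prose, sorting and filtering become trivial.

```python
from datetime import date

# Invented example entries: each event is structured data, not a sentence.
events = [
    {"date": date(2024, 3, 1), "event": "Contract signed", "source": "Exhibit A"},
    {"date": date(2023, 11, 15), "event": "First negotiation call", "source": "Email 12"},
    {"date": date(2024, 6, 20), "event": "Alleged breach", "source": "Letter 4"},
]

# Once structured, producing a chronology is a one-line sort.
timeline = sorted(events, key=lambda e: e["date"])
for e in timeline:
    print(f'{e["date"].isoformat()}  {e["event"]}  ({e["source"]})')
```

Exactly the same thinking applies in Excel: one row per event, one column per attribute, and the sorting and analysis come free.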
Knowing how to do repetitive tasks more efficiently – Finally, and this is more a habit of mind than a discrete skill: learn to notice when something should not be manual. There is still a great deal of repetition in legal work. Formatting documents. Copying information between systems. Reorganising data. You do not need to become a developer. But you ought to start asking, each time you find yourself doing something tediously repetitive, whether it could be done more efficiently. Sometimes the answer is simple automation; sometimes it is a better use of AI; sometimes it is just rethinking your approach to the task. Over time, that instinct matters more than any particular tool.
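As an example of the kind of small automation I mean, here is a Python sketch that standardises file names so documents sort chronologically. The naming convention shown is invented; your own files will differ, and that is rather the point: notice the pattern in your repetition, then encode it once.

```python
import re

# Invented convention: turn "Letter 4 - 20 June 2024.txt"
# into "2024-06-20 Letter 4.txt" so files sort by date.
MONTHS = {m: i for i, m in enumerate(
    ["January", "February", "March", "April", "May", "June", "July",
     "August", "September", "October", "November", "December"], start=1)}

def standardise(name: str) -> str:
    m = re.match(r"(.+) - (\d{1,2}) (\w+) (\d{4})(\.\w+)$", name)
    if not m:
        return name  # leave anything unexpected untouched
    title, day, month, year, ext = m.groups()
    return f"{year}-{MONTHS[month]:02d}-{int(day):02d} {title}{ext}"

print(standardise("Letter 4 - 20 June 2024.txt"))  # 2024-06-20 Letter 4.txt
```

Notice the defensive default: anything the script does not recognise is left alone, which is the right instinct when automating over real documents.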
Do you need to learn to code?
Not in any serious depth. But having a basic understanding helps: knowing how data is structured, having a sense of how simple scripts work, perhaps experimenting with Python or no-code tools. The goal is not to become technical per se, but to internalise the insight that technical literacy teaches: that legal work, like most knowledge work, can be broken down, structured, and improved.
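If you want a feel for what "how data is structured" means, here is a minimal Python example. The matter details are invented; the contrast to notice is between this and the same information buried in a paragraph of prose.

```python
import json

# Invented example: a matter record as structured data. Every field can be
# read, checked, and reused by a machine (or an AI tool) without ambiguity.
matter = {
    "client": "Acme Ltd",
    "matter_type": "breach of contract",
    "key_dates": {"contract_signed": "2024-03-01", "alleged_breach": "2024-06-20"},
    "documents": ["Exhibit A", "Email 12", "Letter 4"],
}

print(len(matter["documents"]))      # how many documents, without re-reading
print(json.dumps(matter, indent=2))  # the same record, exportable anywhere
```

Once you see information this way, the habit transfers: disclosure lists, chronologies, and damages schedules all become data problems with well-worn solutions.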
One may object that this has always been true, that good lawyers have always structured their work and built efficient systems. That is fair. But the tools available now mean that the gap between those who do this well and those who do not is widening faster than it used to. The floor has not moved much. The ceiling has risen considerably.
What this looks like in practice
The differences between people are rarely dramatic. They are small things, done consistently.
One person avoids AI or uses it sporadically, drafts everything from scratch, and works linearly through each task. Another uses AI across different stages, structures the task before starting, reuses and refines previous work, and avoids repeating manual steps. The final output might look similar. But one person arrives there faster, with fewer errors, and produces something that others can pick up and use immediately.
That difference becomes visible very quickly. And once it is visible, it tends to determine who gets trusted with more responsibility and who does not.
The real division
I do not think the profession is dividing into “technical” and “non-technical” lawyers. The division is simpler and, in a way, less forgiving. It is between lawyers who know how to use these tools properly and lawyers who do not.
If you want a clearer sense of where the profession is heading, Richard Susskind’s Tomorrow’s Lawyers is one of the best starting points.
But the practical point is this: knowing the law is necessary. It is no longer sufficient. You need to know how to work with it, using the tools available, in a way that is structured, efficient, and reliable. The lawyers who work that out now will not simply be more productive. They will be the ones the profession is built around in the 2030s.