The most terrifying moment of the week? Realising a bot had made a better summary of a lease agreement than a junior solicitor. We’re not in the “AI might one day change law” phase anymore. We’re in the “AI is already drafting your terms and conditions” phase.

AI in law isn’t just about time-saving anymore—it’s about judgment. Predictive coding in disclosure. Automated legal advice chatbots. Risk assessment tools for litigation. Some of it’s genuinely impressive. Some of it’s dangerously overconfident.

The problem isn’t that AI gets things wrong. It’s that it gets things wrong with total confidence. You need a lawyer’s brain to spot when a “95% accuracy” clause is not just misleading but legally disastrous.

And let’s talk about bias. AI is only as good as the data it learns from—and if you train it on biased case law or flawed enforcement data, it’s going to make biased predictions. That’s not a tech problem. That’s a legal problem.

What worries me isn’t that AI will replace lawyers. It’s that it will replace junior lawyers, hollowing out the future of the profession and leaving behind an AI-flavoured senior class that no longer remembers how to proofread.

But there’s good here, too. AI can make justice more accessible, automate tedium, and democratise legal knowledge. We just need to wield it carefully. Like a scalpel. Not a sledgehammer.

I use AI—but I argue back. That, to me, is the line between using technology and being used by it.