# The Gap Nobody's Talking About: AI's Promise vs. the Work to Get There

A practical look at the gap between AI’s hype and the real work required to deploy it in legal teams—data readiness, workflow design, governance, and change management.

- Category: Product
- Published: 16 Apr 2026
- Authors: Lauren Ziegelaar
- Canonical URL: /updates/ai-promise-vs-work-to-get-there

## Content

Hey! Are you making an AI investment for your law firm, in-house legal team, or similar? That is really exciting and great to see.
What are the drivers behind this investment? Is it time savings? Cost reduction? The ability to service new or broader markets? Have you thought about how you are going to realise that impact?
Leaders across many sectors are making bold calls on AI, big investments, and in some cases, significant workforce decisions, based on the promise of what AI could achieve. But something I don’t hear enough about is what does it actually take from an operational perspective to realise that impact? I describe this gap as leaders overestimating the impact of AI, but underestimating the work required to realise that impact.
## The plug-and-play illusion
Many of the AI tools making their way into law firms and in-house legal teams are positioned as democratising solutions. Wrap a large language model in a legal interface, roll it out across your team, and watch the efficiencies flow. The reality is quite different.
The operational lift required to make these tools deliver impact is significant. Where we see teams do this well, they are investing in dedicated people and teams to administer and maintain the tools they have invested in. This is no different to any other technology tool, but our experience is that AI solutions take around three times the operational support to maintain. That is driven by a range of factors, including the frequent updates to these tools and their broader risk profile. This is the reality of what good implementation looks like, but it's rarely part of the conversation when the investment decision is made.
There's also a ceiling on what most of these tools can do out of the box. Low-code and no-code functionality gets you started, but as soon as your team wants something that reflects how they actually work, you quickly find you need someone with a technical background who understands the intent of what you're trying to achieve (i.e. the service, the process, the legal-specific task) to configure the tool properly. That configuration work is where much of the real value lives, because it's where you inject your team's unique insights and ways of working into the tool, and that is what differentiates you from any other firm using the same product.
## Individual tinkering isn't the same as scale
It is really exciting to see lawyers building their own workflows and experimenting with what these tools can do. I love hearing how people are using AI in creative, unexpected ways. But individual tinkering and large-scale organisational impact are two very different things.
Real operational improvement is built on standardisation. It's not glamorous, but it's where the compounding benefits actually come from: consistent outputs, predictable processes, and the ability to measure what's working and improve it over time. That kind of standardisation doesn't happen when every person in a team is designing their own version of a workflow from their desk. It comes from thoughtful consideration and exploration of what is possible, facilitated by dedicated teams with the skillsets and capacity to take a systems-thinking approach, redesign processes, and change people's behaviour.
That doesn't mean individual experimentation has no value; it does, particularly for tasks that are genuinely individual in nature. But if you're trying to move the needle on how your team operates and delivers services, you need someone steering the ship. You need people thinking not just about how to use the tool, but about how to weave it into the fabric of how your team works. And you need the infrastructure around it, like change management, maintenance, and continuous testing, to make sure what gets built keeps working as intended.
## Keep going, but go in with clear eyes
None of this is an argument for slowing down. I think it's genuinely important that firms and teams keep investing in these tools and keep experimenting with them, even when they're not perfect yet. A rising tide lifts all boats, and the more firms engage seriously with AI, the better the ecosystem gets for everyone.
But if a tool isn't delivering what you hoped, it's worth asking whether the tool is really the problem, or whether there is a gap in the support infrastructure around it. The firms I've seen do this well aren't just the ones with the best technology. They're the ones with strong operational backing: product owners, process experts, change management programs, and people who are genuinely thinking about where AI can redesign how they deliver services, not just what tasks it might be able to do.
This is why AI adoption, like any technology adoption, requires a holistic approach. That means thinking carefully about who's around you as you go on this journey. Your relationships with your tech vendors and implementation partners matter a lot; they should be people who understand your context and are invested in making it work for you, not just getting the tool over the line. But it also means thinking about who's going to help you make it a success on the ground, day to day. Who's doing the change management? Who's maintaining what's been built? Who's keeping an eye on whether it's still working as intended six months from now?
If you don't have that resourcing within your team, that's okay; it doesn't have to be all in-house. There are people and partners you can bring in to help fill those gaps. The important thing is that you're thinking about it deliberately, not hoping the tool will take care of itself. It won't. But with the right support around it, it might just do everything you hoped it would.