The Hidden Lever for Multilingual AI: The Middle Layer
Tech, localization, and global strategy - decoded.
Electrical Prisms by Sonia Delaunay, 1914
*All images in today’s article feature works by various artists that prompted me to reflect on different types of system designs*
Everyone loves to talk about models. Most conversations about multilingual AI get stuck on the same question: “Which model should we use?”
In my work as a product manager building AI systems, I'm finding that this is the wrong question (though still an important one).
The model isn't what will make or break your localization strategy. (In fact, you may even be locked into one model, or a few, due to company contracts or stakeholder pressure.)
The real breakthrough for me, the difference between AI experiments that fizzle out and AI strategies that actually scale, has been what I’m calling “the middle layer.”
The Spiderweb Problem
Today, AI adoption in localization often looks like a spiderweb. Teams build their own workflows, plug directly into translation APIs, and then bring outputs to localization for validation. On the surface this seems agile, but at scale it unravels quickly: files are passed back and forth manually, and quality standards become impossible to enforce across a patchwork of irregular processes.
Human Rhizome by Chiharu Shiota, 2023
At the same time, localization teams themselves are scrambling, cobbling together increasingly complex systems to "keep up" with demands from every direction. Instead of a single scalable foundation, organizations end up with a fragile tangle of connectors, tools, and manual workarounds. The risks compound: inconsistent definitions of quality create confusion, duplicate vendor contracts inflate costs, valuable training data gets fragmented instead of feeding back into a unified loop, and compliance becomes harder to manage as sensitive content is routed through unapproved tools. What works for a few markets collapses under the weight of dozens, institutional knowledge gets trapped in silos, and trust in AI translation erodes when stakeholders experience inconsistent results.
The Standardization Trap
The Water by Minjung Kim, 2022
The natural reaction is to swing to the opposite extreme: mandate a rigid, one-size-fits-all workflow or force everyone through a single vendor. On paper, that looks efficient. In practice, it fails too, stripping away the flexibility different teams need to succeed. Different teams have different priorities:
Product teams care about speed.
Marketing cares about nuance (ahhh they care so so much about nuance!).
Creators care about flexibility.
Force everyone into one rigid flow and you don't get alignment; you get workarounds. You get duplicate shadow systems. You definitely don't get adoption.
The Solution: Centralized and Modular
The more sustainable approach to multilingual AI isn't rigid standardization; it's centralization as a platform, not a prescription. Rather than forcing every team into the same narrow workflow, the goal is to create a strong middle layer that unifies the infrastructure while still respecting the different needs of product, engineering, marketing, and creative teams.
Think of it like an operating system: different applications can run on top, each with their own requirements, but they all rely on the same stable foundation. In localization, this could look like layers that provide routing logic that automatically selects the right engine based on cost, quality, and content type. It could look like offering prompt templates that are centrally maintained for consistency but flexible enough for teams to adapt. It could look like modular options for quality checks based on content type, language pair, or stakeholder needs.
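To make the routing idea concrete, here is a minimal sketch of what a middle-layer router could look like. Every name here (the engines, the content types, the `Job` fields) is hypothetical, an illustration of the pattern rather than any particular product's implementation:

```python
# Illustrative sketch of a middle-layer router; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Job:
    content_type: str   # e.g. "ui_string", "marketing", "legal"
    language_pair: str  # e.g. "en->ja"
    word_count: int

# Hypothetical routing table: content type -> (engine, needs_human_review).
# Centrally maintained, so every team shares the same routing logic.
ROUTES = {
    "ui_string": ("fast_mt_engine", False),
    "marketing": ("llm_engine", True),
    "legal":     ("approved_vendor", True),
}

def route(job: Job) -> dict:
    """Pick an engine and review policy in one central place,
    falling back to a safe default (cheap engine + human review)."""
    engine, review = ROUTES.get(job.content_type, ("fast_mt_engine", True))
    return {"engine": engine, "human_review": review, "pair": job.language_pair}

print(route(Job("marketing", "en->ja", 120)))
```

The point of the sketch is where the logic lives: teams submit jobs, but the routing table is owned centrally, so cost/quality trade-offs are made once instead of being re-invented in every workflow.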
The Ten Largest, Group IV, No. 3, Youth, by Hilma af Klint, 1907
Shifting to this type of modular thinking delivers the benefits of centralization (efficiency, scalability, and trust) without sacrificing the flexibility that different teams need to thrive. In practice, this approach not only prevents the spiderweb chaos of ad hoc tools but also avoids the rigidity of one-size-fits-all processes.
Why the Middle Layer Matters
Think of the middle layer as the infrastructure glue between people, processes, and engines.
Without it, every team hacks together its own workflows, chooses its own vendors, and measures quality differently. Often they bypass localization entirely. The result is fragmentation, duplicated effort, and a product experience that feels wildly inconsistent from one market to the next.
Section of Raise Your Cup of Tea by Eva Camacho
With a middle layer system, teams can:
Route content to the right engine based on cost, quality, and content type.
Standardize evaluation while still letting teams customize prompts.
Automate quality checks that scale without requiring armies of reviewers (while still deploying human reviewers where they count).
All while still meeting stakeholder demands for AI + Speed + Efficiency (which seems like such an impossible ask).
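The "standardize evaluation while letting teams customize prompts" point can be sketched the same way: one centrally owned template with a team-specific extension point. Everything named here (the template text, the `team_notes` hook) is an illustrative assumption, not a prescribed format:

```python
# Hypothetical sketch: a centrally maintained prompt template with a
# modular hook for team-specific guidance. All names are illustrative.
BASE_TEMPLATE = (
    "Translate the following {content_type} text from {src} to {tgt}.\n"
    "Follow the approved glossary and style guide.\n"
    "{team_notes}"
    "Text: {text}"
)

def build_prompt(text: str, src: str, tgt: str,
                 content_type: str = "product", team_notes: str = "") -> str:
    # team_notes is the extension point: marketing can add tone guidance,
    # product can add length limits -- the base instructions stay central.
    notes = f"Team guidance: {team_notes}\n" if team_notes else ""
    return BASE_TEMPLATE.format(
        content_type=content_type, src=src, tgt=tgt,
        team_notes=notes, text=text,
    )

print(build_prompt("Save changes?", "English", "Japanese",
                   team_notes="Keep UI strings under 20 characters."))
```

Because the base instructions live in one place, an update to the glossary rule propagates to every team's prompts at once, while each team keeps its own hook.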
The paradox is that the middle layer is invisible when it works but painful when it doesn’t.
The Payoff of the Middle Layer
When companies get the middle layer right, the payoff is immediate: adoption soars because teams actually want to use the system, duplication drops as translation pipelines aren’t rebuilt over and over, and time to market shrinks without compromising quality.
In short, localization shifts from bottleneck to multiplier. And that’s the real takeaway:
the future of multilingual AI won’t be decided by who picks the “best” model, but by who builds the smartest middle layer. Centralized enough to give the company leverage, modular enough to serve every team, and invisible enough that localization just works.
That’s the shift the industry needs to start talking about!
Connectedness by Damian Ardestani
If you’ve ever been stuck in a meeting where everyone argued about translation engines, forward this to them. The model isn’t the game. The middle layer is!
More Ideas on Shaping the Future of Our Industry
If this sparked your curiosity, here are a few more articles I’ve written exploring other ways I see the future of localization unfolding: