
How to Structure an AI Team Without Creating a Silo

The way you structure your AI team determines whether AI becomes a strategic capability or an expensive research lab. Here's what I've seen work.

Mal Wanstall


I’ve restructured AI and data teams four times in my career. Each time I thought I’d finally found the right model. Each time I learned something that made me change my mind. The uncomfortable truth is that there is no universally correct way to structure an AI team. But there are patterns that fail predictably, and if you avoid those, you’re already ahead of most organisations.

The Three Models and Their Failure Modes

Centralised AI team. One team owns all AI work across the organisation. This is where most companies start, and it makes sense early on. You concentrate scarce AI talent in one place. You build shared infrastructure. You maintain consistent standards.

The failure mode: the team becomes a bottleneck. Every business unit has to compete for the central team’s time. Prioritisation becomes political. The backlog grows. Business teams quietly start hiring their own AI people. After about 18 months, the central team is overwhelmed and the business is frustrated.

I’ve lived this. At Westpac, our central analytics team had a six-month backlog. Business unit leaders would come to me quarterly asking why their project hadn’t started yet. Some of them just went and hired contractors to do the work outside our standards. You can’t blame them.

Fully embedded model. AI engineers sit inside business units, reporting to business leaders. They’re close to the problems. They ship fast. They understand domain context deeply.

The failure mode: fragmentation. Each team builds its own infrastructure. They make different technology choices. There’s no shared learning. Someone in the marketing team spends weeks on a customer segmentation model, not knowing the risk team solved a near-identical problem six months earlier. Worse, there’s no consistency in how models are governed, monitored, or documented.

Hub and spoke. A central team provides infrastructure, standards, and specialised expertise. Embedded team members sit with business units for day-to-day work but maintain a reporting line or strong connection to the centre. This is the model I’ve seen work most often, but it has its own failure modes.

The failure mode: confusion about who has authority. The embedded person’s business leader wants them to ship a feature fast. The central team wants them to follow the standard architecture. Neither side has clear authority. The embedded person gets squeezed. If you don’t resolve the governance question clearly, hub and spoke creates role confusion and burnout.

What I’ve Landed On

After trying variations of all three models, here’s the structure that’s worked best for my context. Your context may be different.

flowchart TB
    Gov["Governance & Strategy\n2-3 people"]
    Platform["Platform Team\n6-10 engineers\nML pipeline, feature store, deployment"]
    Gov --- Platform
    Platform --- D1["Domain Team A\nEmbedded in BU"]
    Platform --- D2["Domain Team B\nEmbedded in BU"]
    Platform --- D3["Domain Team C\nEmbedded in BU"]
    Gov -.- D1
    Gov -.- D2
    Gov -.- D3

A platform team at the centre. Six to ten engineers who build and maintain the shared data and AI infrastructure. They own the ML pipeline, the feature store, the model registry, the deployment tooling, and the monitoring stack. They don’t build models. They build the platform that other teams use to build models.

This team is critical and chronically underinvested in. When I say “platform,” I don’t mean they build a fancy internal product. I mean they create paved paths: templates, patterns, and tooling that make it fast and safe for other teams to ship AI products. The test of a good platform team is whether a domain data scientist can go from idea to production model in days, not months.

Domain data teams embedded in business units. Data scientists, data engineers, and ML engineers who report to business unit leaders and work on business-specific problems. They use the platform team’s tools and follow central standards, but they own their models and their roadmap.

The size of each domain team depends on the AI intensity of the business unit. Some units have eight people. Some have two. We don’t force a standard team size because different parts of the business have different needs.

A small governance and strategy function. Two or three people who set standards, coordinate cross-team learning, and manage the overall AI portfolio. They run a monthly AI review where domain teams share what they’re building, what they’ve learned, and where they’re stuck. This prevents the fragmentation problem without creating a bottleneck.

This function also maintains our AI registry, which is literally a spreadsheet listing every AI system in production or development, who owns it, what risk tier it falls into, and when it was last reviewed. Nothing fancy. Very useful.
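The registry described above is just tabular data, so even a few lines of code can answer the question the governance function cares about most: which systems are overdue for review. This is an illustrative sketch, not the author's actual spreadsheet; the field names (`system`, `owner`, `risk_tier`, `last_reviewed`), the example entries, and the 180-day review window are all assumptions for the example.

```python
from datetime import date

# Hypothetical registry rows: one entry per AI system, mirroring the
# columns described in the text (system, owner, risk tier, last review).
# All names and dates here are made up for illustration.
registry = [
    {"system": "churn-model", "owner": "Domain Team A",
     "risk_tier": "medium", "last_reviewed": date(2024, 1, 15)},
    {"system": "fraud-scoring", "owner": "Domain Team B",
     "risk_tier": "high", "last_reviewed": date(2023, 6, 1)},
]

def overdue(entries, as_of, max_age_days=180):
    """Return the systems whose last review is older than max_age_days."""
    return [e["system"] for e in entries
            if (as_of - e["last_reviewed"]).days > max_age_days]

print(overdue(registry, date(2024, 3, 1)))  # → ['fraud-scoring']
```

The point isn’t the tooling; a spreadsheet filter does the same job. What matters is that the review-age question can be answered mechanically rather than by memory.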

The Reporting Line Question

This is where I’ve changed my mind the most. I used to believe data and AI teams should report to a central data leader, like me. I now think the embedded teams should report to business unit leaders, with a strong dotted line to the centre.

The reason: business unit leaders care more about outcomes when they’re accountable for the team’s performance. When the data scientist reports to a central function, the business leader treats them as a shared resource. When the data scientist reports to the business leader, that leader is invested in making them successful.

The dotted line to the centre matters though. Without it, you lose coordination, standards, and career development pathways. Data scientists need a community of peers and a career path that doesn’t require them to become general managers. The central function provides that.

Hiring for This Structure

The hardest roles to fill are the “bridge” roles: people who are technically strong enough to be credible with engineers and business-savvy enough to be useful in strategy conversations. These people are rare and they’re the ones who make the hub-and-spoke model work.

For the platform team, I hire for depth. I want people who’ve built and operated data infrastructure at scale, who understand reliability engineering, and who care about developer experience.

For embedded domain teams, I hire for breadth. I want people who can do data engineering, build models, create analyses, and present to business stakeholders. They don’t need to be world-class at any one of these. They need to be good enough at all of them to operate semi-independently.

For the governance function, I hire for judgement. People who can write a clear standard, know when to enforce it strictly and when to allow exceptions, and can facilitate productive conversations between teams with competing priorities.

Mistakes I’ve Made

I once hired a brilliant ML researcher for an embedded role. Outstanding technical skills. Could not translate their work into business terms. Could not scope a project to a reasonable timeline. Would disappear into a research rabbit hole for weeks. I’d hired for the wrong profile. That person would have thrived on a central research team. In an embedded role, they were miserable and the business was frustrated.

I’ve also made the mistake of understaffing the platform team. We had two people maintaining infrastructure used by 30 data practitioners. Response times on platform issues were measured in days. Domain teams started building their own workarounds, which created exactly the fragmentation the platform was supposed to prevent.

Get the platform team right first. If you have to choose between hiring another data scientist or another platform engineer, pick the platform engineer. One good platform engineer can make ten data scientists more productive.

How to Know If Your Structure Is Working

I look at three signals.

Time to production. How long does it take a domain team to go from idea to a model serving production traffic? If it’s more than three months for a straightforward use case, the structure or the platform has problems.

Cross-team learning. Are teams aware of what other teams are building? Are they reusing approaches and sharing lessons? If teams are solving the same problems independently, coordination is failing.

Talent retention. Are your best AI people staying? If they’re leaving because they feel isolated, under-resourced, or stuck in a role with no growth path, your structure isn’t serving them.
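The first of these signals is the easiest to instrument. A minimal sketch, assuming you log two dates per delivery (idea approved, model live in production); the model names, dates, and the `median_days_to_production` helper are hypothetical, invented for the example.

```python
from datetime import date
from statistics import median

# Hypothetical delivery log: (model, idea approved, live in production).
deliveries = [
    ("propensity-v1", date(2024, 1, 10), date(2024, 2, 20)),
    ("segmentation-v2", date(2024, 2, 1), date(2024, 5, 15)),
    ("pricing-v1", date(2024, 3, 5), date(2024, 4, 1)),
]

def median_days_to_production(rows):
    """Median elapsed days from idea to production across deliveries."""
    return median((live - idea).days for _, idea, live in rows)

days = median_days_to_production(deliveries)
# Per the three-month rule of thumb above, flag if this creeps past ~90.
print(days)  # → 41
```

Using the median rather than the mean keeps one stalled research project from masking how fast the typical delivery actually is.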

No structure solves every problem. But the right structure, adjusted for your specific organisation, makes good outcomes more likely and bad outcomes less costly. Start with something reasonable, measure these signals, and adjust.


Data Leadership · Enterprise AI
Mal Wanstall
AI & Innovation Strategist

15+ years shipping AI products and scaling teams across financial services, NFP, and medical technology.
