Building a Data Domain: Return on Experience
2 leaders, 2 different timelines and 1 return on experience
We close this Domain Driven Design series with a return on experience from two people who held the same responsibilities in the same company, two years apart. We, Rose and Gaëlle, have several things in common: we both love research and home renovation, and, most importantly for this article, we both kick-started a Data Domain in the same company. Here are our stories and common takeaways.
The initial mandate
Both of us took over data domains when our organisation needed to deal with major leadership and organisational changes across its data ecosystem. Change is rooted in the company culture, but transversality was a new concept, one that prompted us to rethink how we operate.
Rose became a data leader in 2020, for the emerging “Client” domain. She quickly realised that she needed to define the domain and how it would operate. A single, centralised team was governing and serving all data needs across the company, resulting in delays, frustrations, and poor SLA adherence. The ubiquitous language was weak, and business users were in the dark without a common interpretation of the data. The long delivery chain caused multiple duplications between systems, and neither completeness nor consistency was ensured.
Gaëlle became a data leader in 2023, responsible for improving the maturity of the data product teams and, in particular, in charge of the “Product” domain, here referring to sports articles. The notion of “Product” was deemed complex and not shared across teams. Its boundaries, components, and definition were unclear to data producers and data consumers alike. In a retrospective facilitated across all teams producing product data, participants compared product data to “The Never Ending Story”.
For both of us, our goal was to:
Build accountability over data
Improve collaboration between business, product, tech and analytics teams
Enable the retirement of legacy systems and the creation of a scalable, evolvable architecture
Dealing with the Legacy
Every delivery becomes tomorrow’s legacy. Managing legacy does not only mean delivering something new. It means understanding the organisational, political, functional, and technical complexity that led to it, and creating something that solves the problems discovered along the way.
The Customer Legacy
In the Customer domain:
Multiple customer account creation paths produced inconsistent experiences.
Backend systems didn’t align, causing duplicate accounts, compliance issues, and broken data.
The central monolith was 20 years old, fragile, and reliant on one person’s knowledge.
Two critical use cases, GDPR compliance on account deletion and the detection of customers’ favourite store, were only possible via reverse ETL from the central data lake.
The Product Legacy
In the Product domain:
Operational data distribution was scattered across CSVs and APIs.
Analytics product vision was supply-chain-centric and ungoverned.
Analytics teams resorted to scraping websites for fresh product data.
As a result, no unified product catalog existed within the company, and there was no shared understanding of a product’s main attributes. This was a huge problem for any cross-functional analysis needing basic product information, which was in fact most of them.
Being part of architecture conversations matters
Solving those business problems required refactoring both transversal data ownership and the architecture.
Rose: At first, our presence in architecture discussions was tolerated rather than welcomed. We were invited because of perceived risks, not because data and analytics were valued in their own right. Yet listening in, offering a transversal perspective, and connecting domains turned out to be critical. We identified solutions that cut across silos and demonstrated how data could actively de-risk the business. That work built credibility and positioned us as connectors in conversations where data had previously been invisible.
In the Customer domain, governance followed operational re-architecture. Core services like identity, consent, communication preferences, and profile management were carved out of the monolith and rebuilt as independent sub-domains. Each came with its own Product Manager, responsible not only for APIs and operational databases, but also for data products feeding the lake.
In Product, the sequence was reversed.
Gaëlle: We began by dismantling the existing analytical foundations and re-engineering the existing analytical models. For example, a monolithic product-quality dataset was re-architected into five smaller data products, distributing data to the whole organisation and maintainable over time. That enabled us to create a wish list for every data source we needed. For each source, we explained the business purpose, clarified the source team’s responsibilities for the data they stored in the lake, and identified what we could deliver for them in the meantime, before they restructured their own models. We described the governance they could adopt tomorrow and what transformations we could provide in the short term. This process forced conversations that had never happened before: what does “owning” a dataset really mean, and how does it fit into an organisation-wide vision for Product data?
This difference in sequencing shaped the culture. In Customer, operational teams were in the driver’s seat and data felt like a natural extension of their work. In Product, the Core Data team initiated the transformation, bringing source teams into the process one by one. That meant we were the ones sparking architectural conversations that simply hadn’t happened before.
Aligning using Domain-Driven Design is powerful
In Customer, Rose worked with a consulting partner well-versed in DDD and Data Mesh, adopting bounded contexts, ubiquitous language, and domain models. We were speaking the same language.
Key concepts we adopted:
Domain Model: Business-meaningful objects and rules
Ubiquitous Language: Shared language between tech and business
Bounded Contexts: Clear boundaries between subdomains
As a domain, we redefined the monolith as a set of distributed services:
Identity
Consent
Communication preferences
Profile info, etc.
Each subdomain had a Product Manager, responsible for defining the vision, migrating legacy data, and exposing clean APIs and databases.
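To make these concepts concrete, here is a minimal Python sketch, with hypothetical names and fields, of how the subdomains listed above can each keep their own model and ubiquitous language; the shared customer identifier is the only coupling point between bounded contexts.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical illustration: each bounded context owns its own view of the
# customer, expressed in that context's ubiquitous language. Identity cares
# about who the person is; Consent only cares about what they agreed to.

@dataclass(frozen=True)
class Identity:
    """Identity bounded context: the customer as a verified person."""
    customer_id: str
    email: str
    birth_date: date | None  # may be missing; completeness is a data-quality concern

@dataclass(frozen=True)
class ConsentRecord:
    """Consent bounded context: the customer as a set of agreements."""
    customer_id: str  # shared identifier, the only link between contexts
    purpose: str      # e.g. "marketing_email"
    granted: bool
    recorded_on: date

@dataclass(frozen=True)
class CommunicationPreferences:
    """Communication-preferences bounded context: how the customer wants to be reached."""
    customer_id: str
    channels: tuple[str, ...]  # e.g. ("email", "push")
```

Each of these models maps to one subdomain with its own Product Manager, APIs, and data products; none of them needs to know how the others represent the customer.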
In Product, we began with re-modelling: proposing new entities and testing them with stakeholders. Allies emerged within the organisation as people realised that the approach aligned naturally with DDD software principles, and we formalised the concepts from there. This gradual adoption helped smooth the cultural gap between the operational and analytical worlds, which had often functioned like parallel universes.
At that stage, Data Product Managers were, and still are, heavily involved in modelling, working closely with Product Managers to ensure that both operational and analytical needs are served, no matter who distributes the data to the organisation.
With different approaches, we both pushed Product Managers to take over the construction of source data products exposed to the rest of the organisation.
Smoothing the difference between the operational and analytical planes is essential
Bridging the gap between the operational and analytical planes proved essential. As James March argues, organisations must balance exploitation (improving current operations) and exploration (generating new insights and opportunities). In our context, this balance meant ensuring that operational data could serve operational needs while also fuelling analytics and forecasting.
Both Customer and Product teams had to realign around a shared understanding: operational data is our analytical fuel. Every logging decision, every naming convention, every boundary choice shapes what analytics can or cannot deliver. The way we structure operational systems embeds decisions that reverberate into reporting, regulation, and even machine learning.
The data team’s role was therefore not just technical, but translational. We mapped each source along three dimensions (see the sketch after this list):
current uses (operational reporting, ML models, regulatory obligations),
estimated value (rough € or clear business impact),
future needs (latent opportunities identified with teams).
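As a rough illustration, this mapping can be captured as one simple record per source; the field names and example values below are hypothetical, not our actual inventory.

```python
from dataclasses import dataclass, field

@dataclass
class SourceMapping:
    """One entry of the source inventory, covering the three dimensions above."""
    source_name: str
    current_uses: list[str] = field(default_factory=list)   # reporting, ML, regulatory...
    estimated_value: str = "unknown"                         # rough € figure or qualitative impact
    future_needs: list[str] = field(default_factory=list)    # latent opportunities spotted with teams

# Illustrative entry only; names and values are made up for the example.
stock_movements = SourceMapping(
    source_name="stock_movements",
    current_uses=["operational reporting", "demand forecasting model"],
    estimated_value="rough € estimate of avoided stock-outs",
    future_needs=["store-level replenishment alerts"],
)
```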
Once Product Managers saw that their operational choices constrained or enabled downstream analytics, and conversely, that analytics could improve their product outcomes, they began to integrate these considerations into their own roadmaps. This resonates with Donald Schön’s idea of the reflective practitioner: learning loops were created, where action and reflection fed each other.
What began as a data initiative ended up shifting mental models: operations and analytics were no longer two disconnected planes, but two sides of the same system.
Building a shared culture is necessary
The “BI is not our problem” mindset hurts everyone. Treating analytics as an external responsibility created blind spots and, ultimately, mistrust. Culture change has to be hands-on. In both domains, the only way to build a shared culture was to make the operational impact of data tangible.
In Customer, we ran workshops where Data Analysts showed Product Managers how their data decisions shaped downstream analytics. One pivotal moment came when the Identity team discovered, through an analysis of a campaign with a well-known influencer, how incomplete age data was undermining both targeting and measurement. Beyond workshops, we held bi-monthly training sessions: three-hour immersions where all members of the domain learned to query datasets in Redshift, measure data quality, and explore Member data in the lake. When PMs and engineers experienced the same frustrations with missing values or inconsistent formats, it created a common language and a shared sense of responsibility.
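The sketch below, in Python with pandas, shows the kind of completeness check participants ran during those sessions, here on the incomplete age data mentioned above; the column names and values are hypothetical, and in practice the queries ran against Redshift.

```python
import pandas as pd

# Toy sample: a mix of valid dates, missing values, and inconsistent formats,
# the exact frustrations PMs and engineers discovered hands-on.
members = pd.DataFrame({
    "member_id": ["a1", "a2", "a3", "a4"],
    "birth_date": ["1990-04-12", None, "12/04/1991", ""],
})

# Only values matching the expected format count as usable.
parsed = pd.to_datetime(members["birth_date"], format="%Y-%m-%d", errors="coerce")
completeness = parsed.notna().mean()

print(f"Usable birth dates: {completeness:.0%} of members")  # 25% in this toy sample
```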
In Product, we experimented with different rituals: retrospectives and weekly updates that brought source, governance, and analytics teams together. These sessions laid bare the uncomfortable truth that analytics teams were spending a disproportionate amount of time cleaning up upstream data issues. They also highlighted the analytical cost of misalignment between source systems and the need to treat operational data as a product.
Across both domains, this combination of hands-on learning and shared visibility became the foundation of a genuine cultural shift.
Redefining the role of Product Managers
The cultural shift is accompanied by a deeper structural one: the evolution of certain Product Manager roles into Data Product Managers (DPMs).
In the Customer domain, this change began with the migration away from legacy systems. Teams were no longer accountable only for APIs; they also had to take ownership of the data those APIs generated, the quality of streaming pipelines, and the reliability of deliveries into the lake. The scope of Product Managers expanded considerably. Business definitions had to be captured in Collibra, data quality rules and monitoring became part of their daily jobs, and data itself had to be treated as a product rather than as a byproduct of software delivery. Each domain was now expected to deliver source-aligned data products consistently exposed via APIs, streams, and the lake.
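As an illustration of what “data as a product” meant in practice, here is a hypothetical sketch of a source-aligned data product descriptor; every name, path, and threshold below is made up, and the real definitions and quality rules lived in Collibra and the platform tooling.

```python
# Hypothetical descriptor for a source-aligned data product owned by one subdomain.
consent_events_product = {
    "name": "consent_events",
    "owner": "identity-and-consent-team",
    "business_definition": "One record per consent decision made by a customer",
    "output_ports": {
        "api": "/v1/consents/{customer_id}",       # operational access
        "stream": "consent-events-topic",          # near-real-time consumers
        "lake": "s3://datalake/consent_events/",   # analytical consumers
    },
    "quality_rules": [
        "customer_id is never null",
        "granted is a boolean",
        "recorded_on is never in the future",
    ],
    "sla": {"freshness": "15 minutes", "availability": "99.5%"},
}
```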
But this transformation of the Product Manager role was not politically neutral. In Product, the introduction of the DPM role sparked resistance. Why couldn’t existing PMs absorb these responsibilities? What exactly differentiated the roles? Was this an unnecessary complication or, worse, a threat to individual career progression? These debates revealed a reality often missing from data mesh playbooks: redefining roles inevitably reshapes power dynamics, and those dynamics are always personal.
To make the evolution workable, we leaned on hybrid engineers and architects, people fluent in both software delivery and data engineering, who immediately saw the value of bridging the two worlds. Their presence helped make the benefits visible: that data and software deliverables are not competing outputs, but complementary ones. Framing the work in terms of complementarity rather than competition proved to be the most convincing argument. It not only encouraged cooperation but also motivated some team members to learn new skills, gradually evolving their profiles into more specialised hybrid roles.
The good, the bad and what’s left to create
Looking back, it is clear that these transformations were never just technical projects. They were organisational redesigns. The real shift happened in how people collaborated, how roles were redefined, and how decisions about data became part of the company’s operating model.
Several elements worked particularly well. Clarifying domain boundaries, and having the discipline to focus on one domain at a time, prevented scope creep and created clear wins. Establishing shared ownership and a ubiquitous language gave teams a common foundation, reducing friction around meaning. By securing operational buy-in for analytics, we rooted data in business needs rather than abstract models. Most importantly, we witnessed a cultural shift: data literacy moved from being a niche skill to a shared competence, enabling conversations that previously weren’t possible.
Yet the challenges are equally instructive. Product Managers still hesitate to see data as a feature that requires real prioritisation. The economic value of data remains difficult to measure, making it harder to defend sustained investment. Cross-functional engineering talent is scarce, slowing progress where breadth and depth must coexist. Expanding beyond a single domain continues to expose business fragilities: as Greg Parks puts it, “scaling is a privilege”.
From these experiences, several principles emerge:
Data domains are governance units, not technical boundaries. They are living systems requiring constant negotiation. A static governance model is obsolete by design.
The order of operations shapes culture. Starting with a domain should be an executive decision aligned with your company’s needs and strategy. Sequencing defines adoption paths and resistance points.
The Data Product Manager role is politically sensitive. It challenges territories and power structures. Success depends on coalitions, not just clear job descriptions.
Operational and analytical planes must be co-stewarded. True value emerges only when both perspectives are jointly owned, not merely “aligned.”
Leadership and executive literacy is a bottleneck to scale. Complexity grows faster than our ability to understand and champion it. Training at the top is as strategic as any technical investment.
The future is polycentric governance. Multiple autonomous governance nodes, bound by negotiation and trust rather than centralised control, are the only sustainable path forward.
In the end, what we built was trust. That trust is the fragile but essential foundation for what comes next: scaling beyond a single domain and embracing a model of governance that matches the complexity of the organisation.



