
Last week, we looked at why adoption is the real KPI of MDM. Clean records, governance models, and matching logic do not mean much if the business does not trust the output or use it in daily work. Adoption is what turns master data from a technical effort into something that changes how the organization operates.

This week, we shift that idea into one of the hardest environments for master data: mergers and acquisitions. M&A puts pressure on definitions, ownership, data quality, and trust all at once. Two companies may come together on paper in a single transaction, but their customer, product, supplier, and hierarchy data rarely line up that cleanly. This week’s article walks through the key master data challenges that show up during M&A and offers a practical playbook for integration.

Managing Master Data in Mergers and Acquisitions

Most mergers sound convincing before the real work starts.

The deck says the combined company will be stronger. Bigger market. Broader product set. Better leverage with suppliers. Cleaner operations over time. Everyone in the room can see the upside.

Then someone asks a basic question after the deal closes.

How many customers do we have now?

That should be easy to answer. It usually is not.

Finance gives one number. Sales gives another. Operations has a third version in a spreadsheet no one mentioned during diligence. Product teams cannot agree on which hierarchy should drive reporting. The systems are still standing, but the business has already started tripping over the data.

That is usually the first real sign of trouble.

In mergers and acquisitions, master data carries far more weight than most integration plans admit. It defines the shared business entities that everything else depends on. Customers. Products. Suppliers. Locations. Employees. If those records are inconsistent, duplicated, incomplete, or modeled differently across both companies, the new organization starts operating with two vocabularies and one growing pile of reconciliation work.

That is where the friction starts. Reports do not line up. Teams build local workarounds. Trust in the shared view drops early. And once people stop trusting the integrated data, they start keeping side lists. At that point, the merger may be legally complete, but operationally, the business is still split.

Handled well, master data gives the new company a common language. It helps teams agree on what they are looking at, which system should lead, and how records should behave when the two worlds meet. It does not make integration easy, but it makes it possible.

Why master data becomes a breaking point in M&A

Master data is supposed to be stable. Not static, but stable enough that multiple systems can depend on it. A customer record can move through billing, CRM, service, reporting, and analytics without changing its identity every time it crosses a boundary. The same goes for products, suppliers, and locations.

That works inside one company, at least up to a point. An acquisition changes the conditions.

Now two organizations have to decide whether their shared terms actually mean the same thing. Usually, they do not.

A customer in one company may be the legal entity that receives the invoice. In the other, it may be the sales account tied to pipeline ownership. One side may manage products at the SKU level. The other may group them by family and let another system handle the detail. A region in one business may reflect financial reporting. In the other, it may exist only to support territory management.

All of those choices may have made sense before the merger. The problem is that they were made in separate operating environments. Once the data starts moving between them, those decisions collide.

That collision does not stay in architecture diagrams. It shows up in billing, forecasting, procurement, CRM workflows, product reporting, customer support, and executive dashboards. In some environments, the first visible failure is a report that no longer balances. In others, it is slower. A pricing workflow breaks. A hierarchy rollup looks wrong. The same supplier gets paid twice under slightly different names. None of that feels abstract when it lands on a business team.

And there is another issue that matters just as much. Trust erodes fast in post-merger integration. Once teams see two sets of records that should match but do not, they stop assuming the shared view is reliable. People become cautious. Then skeptical. Then they go back to the extracts they trust.

That is how a master data problem turns into an adoption problem.

The biggest master data challenges in mergers and acquisitions

The specific details vary, but the failure patterns are familiar. Most organizations run into some version of the same set of issues.

1. Conflicting definitions of the same entity

This is where the trouble often begins.

The new enterprise keeps using the same words, but the words are doing different jobs. Customer, product, vendor, region, business unit, even location can mean different things depending on which company you are talking to.

A simple comparison makes the problem clear:

Term | Company A | Company B
Customer | Legal entity that receives invoices | Sales account tied to opportunity ownership
Product | Sellable SKU | Product family with variants managed elsewhere
Region | Financial reporting rollup | Sales territory structure

This is not a semantic side issue. Definitions drive process, ownership, and reporting. If the merged business never settles what these entities mean, technical integration becomes a long series of local compromises.

2. Duplicate and fragmented records

Dupes are expected in M&A. Fragmentation is usually the bigger mess.

The same customer may exist in two CRMs, an ERP, a billing platform, and a support system. One record has the clean legal name. Another has the right contact data. A third holds the only valid tax attribute. Parent-child relationships may be complete in one source and missing in the other. Addresses are formatted differently. Legacy IDs do not line up.

So the question is not just, “Which record is the duplicate?”

It is also, “Which source knows what?”

That is a harder problem to solve because the business truth is scattered. Useful data lives in pieces across systems that were never designed to agree with one another.
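To make the fragmentation problem concrete, here is a minimal sketch in Python of linking the same customer across two sources. The record layouts, system names, and the `normalize_name` helper are illustrative assumptions, not a real matching engine; production matching involves far more attributes, fuzziness, and review steps.

```python
import re

def normalize_name(name):
    """Crude normalization: lowercase, strip punctuation and common legal suffixes."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    name = re.sub(r"\b(inc|llc|ltd|gmbh|corp)\b", "", name)
    return " ".join(name.split())

# Toy records from two source systems; each knows different attributes
crm_records = [
    {"id": "CRM-001", "name": "Acme Corp.", "contact": "jane@acme.com"},
]
erp_records = [
    {"id": "ERP-884", "name": "ACME, Inc.", "tax_id": "12-3456789"},
]

# Link records whose normalized names agree. Note that neither record is
# simply "the duplicate": the CRM knows the contact, the ERP knows the tax ID.
matches = []
for c in crm_records:
    for e in erp_records:
        if normalize_name(c["name"]) == normalize_name(e["name"]):
            matches.append({"crm_id": c["id"], "erp_id": e["id"]})

print(matches)  # [{'crm_id': 'CRM-001', 'erp_id': 'ERP-884'}]
```

Even this toy version shows why "which source knows what?" is the real question: the match links the records, but deciding which attributes survive is a separate step.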

3. Incompatible data models

This one creates pain quietly at first.

Both companies may manage the same domain, but they structure it differently. One makes a field mandatory. The other does not. One stores status as controlled values. The other allows local variation. One uses a true hierarchy. The other uses parent IDs inconsistently and fills the gaps with manual process.

The mapping effort starts, and it looks manageable. Then the exceptions start piling up.

A field exists in one source but has no real counterpart in the other. A status code seems close enough until someone asks what it means operationally. History is stored differently. Product variants roll up differently. The future-state model starts filling with translation logic and special handling.

That is when teams begin saying things like, “We can fix that later.”

Sometimes they can. Often they just teach the new model to carry the same old ambiguity in a slightly more modern form.
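The status-code case above can be sketched in a few lines. This is a hypothetical mapping table, assuming one side uses free-form text and the other uses controlled values; the point is that unmapped values get surfaced for review instead of being silently defaulted, which is how the old ambiguity sneaks into the new model.

```python
# Hypothetical mapping from Company B's free-form status values to
# Company A's controlled vocabulary (values are illustrative).
STATUS_MAP = {
    "active": "ACTIVE",
    "current": "ACTIVE",
    "on hold": "SUSPENDED",
    "closed": "INACTIVE",
}

def map_status(source_value):
    key = source_value.strip().lower()
    if key in STATUS_MAP:
        return STATUS_MAP[key], None
    # No real counterpart: flag the exception rather than guessing
    return None, f"unmapped status: {source_value!r}"

print(map_status("Current"))  # ('ACTIVE', None)
print(map_status("Dormant"))  # (None, "unmapped status: 'Dormant'")
```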

4. Too many systems and no obvious source of record

Post-merger environments tend to be crowded.

You may inherit:

  • two ERPs
  • two or more CRMs
  • regional or business-unit tools with partial overlap
  • local databases that were never meant to survive this long
  • manually maintained spreadsheets that still drive real work
  • reporting layers no one included in the formal system inventory

Then comes the obvious question: which system leads?

There is rarely a clean answer. One platform may be stronger for finance. Another may be the operational leader. A third may hold the cleanest customer data for one region and the weakest product data overall. Teams often go into integration hoping a single source of truth will reveal itself once they get far enough into discovery.

It usually does not.

More often, the outcome is a domain-by-domain decision model. That is the realistic version, even if it is less satisfying.

5. Data quality problems that get worse during integration

If both companies have data quality debt, the merger brings it to the surface.

Common examples include:

  • nulls in important fields
  • inconsistent naming
  • weak validation at entry
  • free-form text where controlled values should exist
  • incomplete hierarchies
  • stale reference values
  • missing ownership
  • broken or unreliable identifiers

These problems stack on top of one another. Nulls weaken matching. Naming variation creates false duplicates. Weak hierarchies break reporting. Missing ownership delays resolution. By the time the integration team is trying to stand up a shared view, they are dealing with structural differences and quality problems at the same time.

That combination slows everything down.

6. Ownership confusion

This shows up quickly and wastes a lot of time.

Who owns the customer domain after the merger? Who decides on supplier classification rules? Who approves hierarchy changes? Who settles disputes when legacy definitions conflict?

Sometimes no one knows. Sometimes too many people think they know.

Neither situation works.

Without clear decision rights, small issues turn into meetings. With overlapping ownership, the same issue can circle for weeks because no one wants to make a call that will affect another function.

Most teams do not describe this as a master data problem at first. They describe it as stalled integration. It is often the same thing.

7. Pressure to move faster than the data can support

Business leaders want speed. That is normal. M&A value is tied to integration timing, and no one wants to explain why the combined company still operates like two separate businesses six months after close.

The problem is that data work does not compress neatly.

You can move quickly in some areas. You can defer some harmonization decisions. You can stage coexistence instead of pushing full centralization too early. What you cannot do is skip the work of understanding the data and expect the downstream consequences to stay small.

Teams try anyway.

They migrate before they profile. They map before they define. They choose a target model before they understand the shape of what they are moving. It can look like progress for a while. Then the defects begin surfacing in reporting, billing, contract management, or service operations, usually in places that are expensive to fix under deadline.

What organizations often get wrong

A lot of integration work goes sideways before anyone writes a bad rule or picks the wrong platform. The problem starts earlier, with how the work is framed.

The most common mistake is treating M&A integration like a system consolidation exercise instead of a data meaning exercise. Systems matter, but systems carry business meaning through data. If the business has not agreed on what a customer, product, or supplier represents in the new enterprise, technical alignment only hides the disagreement for a while.

A few patterns show up again and again.

The first is rushing into consolidation before the sources are understood. Teams start building mappings, target structures, and migration plans before they have profiled the data or documented the differences that matter.

The second is assuming the acquiring company’s model should win by default. That may be practical sometimes. It is not always wise. Acquirers can bring weak hierarchies, overloaded entities, or poor validation just as easily as the company being acquired.

The third is postponing governance. That sounds efficient until the first meaningful dispute appears and nobody has the authority to settle it.

The fourth is expecting a tool to solve a business definition problem. MDM platforms can support matching, survivorship, stewardship, and workflow. They cannot decide what the merged business means by customer.

The fifth is trying to harmonize everything at once. Some domains need deep alignment early. Others can coexist for a time if the controls are clear. Treating all domains the same usually creates unnecessary drag.

A practical playbook for master data integration in M&A

There is no universal script for this work. Industry, deal structure, regulatory pressure, operating model, and system sprawl all matter. Even so, the organizations that handle master data well in M&A usually follow a similar path.

1. Start with the domains that actually matter

Do not begin with an abstract goal like “harmonize enterprise data.” That sounds strategic and helps no one.

Start with the domains tied to the biggest business impact. In many mergers, that means customer, product, supplier, location, and employee. In some deals, only two or three of those domains need deep work right away. That is fine. The point is to choose based on operational consequence, not aesthetics.

A customer domain tied to revenue recognition deserves attention earlier than a lower-impact administrative list. A supplier hierarchy that drives procurement controls matters more than a nice-to-have catalog cleanup.

Pick the domains that can hurt the business if left unresolved.

2. Profile the data before you design the future state

This should happen earlier than most teams want it to.

Before you build mappings or argue over the target model, understand what is actually in the source systems. Profile for duplicates, null rates, invalid values, hierarchy gaps, broken identifiers, conflicting attributes, and missing required fields.

That fact base changes the conversation. Without it, teams speak from memory, intuition, or loyalty to the current process. With it, the integration team can point to real patterns.

This does not have to become an endless audit exercise. It just needs to be good enough to expose the shape of the problem.
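A basic profile is simple to produce. The sketch below, assuming a toy Python extract with invented records, shows the kind of fact base meant here: null rates in critical fields and potential duplicates by name. Real profiling would cover many more checks, but even this level moves the conversation from memory to evidence.

```python
from collections import Counter

# Toy customer extract; None marks a missing value (illustrative only)
rows = [
    {"id": "C1", "name": "Acme", "tax_id": "12-3456789"},
    {"id": "C2", "name": "Acme", "tax_id": None},
    {"id": "C3", "name": "Globex", "tax_id": None},
]

def null_rate(rows, field):
    """Share of records with a missing value in the given field."""
    missing = sum(1 for r in rows if r[field] is None)
    return missing / len(rows)

def duplicate_names(rows):
    """Names that appear on more than one record: duplicate candidates."""
    counts = Counter(r["name"] for r in rows)
    return [name for name, n in counts.items() if n > 1]

print(round(null_rate(rows, "tax_id"), 2))  # 0.67
print(duplicate_names(rows))                # ['Acme']
```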

3. Define common business meanings

This is the part many teams try to speed through. It is also the part that tends to decide whether the future-state model becomes usable or just formally documented.

The merged company needs shared answers to questions like:

  • What is a customer?
  • What counts as a product in the master domain versus somewhere else?
  • Which attributes are required to operate?
  • Which hierarchies matter?
  • Which differences are real business needs and which are legacy leftovers?

Write these definitions in business language. Not vendor language. Not platform terms. Not whatever the source schema happens to say.

A merged enterprise cannot govern what it has not defined. That principle holds here just as much as it does anywhere else in MDM.

4. Set matching and survivorship rules early

Once duplicate and fragmented records are visible, the team has to decide how records will be linked and which values should survive when sources disagree.

A simple working table helps:

Attribute | Preferred Source | Fallback Source | Steward Review Needed
Legal Name | ERP | CRM | Yes, if mismatch
Billing Address | ERP | Legacy billing system | Yes
Sales Contact | CRM | Support platform | No
Product Category | Product hub | ERP | Yes, if missing

The value of a table like this is not that it solves the problem by itself. It forces the real questions into the open.

Who trusts which source for which field? Under what conditions? When does human review matter? Those choices determine whether the merged record is something the business can live with.
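A rulebook like the table above translates almost directly into code. This is a minimal sketch, assuming Python and simplified attribute names; a real survivorship engine would handle many sources, trust scores, and recency, but the shape of the decision is the same: preferred source, fallback, and a review flag when trusted sources disagree.

```python
# Survivorship rules keyed by attribute: preferred source, fallback
# source, and whether a mismatch needs steward review (illustrative).
RULES = {
    "legal_name":      {"preferred": "erp", "fallback": "crm", "review_on_mismatch": True},
    "billing_address": {"preferred": "erp", "fallback": "billing", "review_on_mismatch": True},
    "sales_contact":   {"preferred": "crm", "fallback": "support", "review_on_mismatch": False},
}

def survive(attribute, sources):
    """Pick the surviving value; return (value, needs_review)."""
    rule = RULES[attribute]
    preferred = sources.get(rule["preferred"])
    fallback = sources.get(rule["fallback"])
    value = preferred if preferred is not None else fallback
    needs_review = (
        rule["review_on_mismatch"]
        and preferred is not None
        and fallback is not None
        and preferred != fallback
    )
    return value, needs_review

# Two sources disagree on the legal name: ERP wins, a steward reviews.
print(survive("legal_name", {"erp": "Acme Corporation", "crm": "Acme Corp"}))
# ('Acme Corporation', True)
```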

5. Choose an integration pattern that fits the situation

Not every merger should force centralization on day one.

Some deals need a registry-style approach first, just to create visibility across systems. Others need a consolidation layer for reporting and analytics while the operational sources continue to coexist. Some can move toward coexistence or a more centralized pattern if the business is ready for it.

This is one place where architecture decisions have to respect operating reality. A pattern that demands more behavioral change than the organization can absorb will create resistance even if the design looks clean on paper.

Hybrid states are common in M&A. They are not a sign of failure. They are often the honest middle ground between speed and control.
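The registry-style approach mentioned above can be reduced to a very small idea: nothing moves, the registry just cross-references each system's local identifier to one master ID. A minimal sketch, with hypothetical IDs for illustration:

```python
# Registry-style cross-reference: master_id -> {source_system: local_id}
registry = {}

def register(master_id, source, local_id):
    """Link a source-local identifier to a master ID."""
    registry.setdefault(master_id, {})[source] = local_id

register("MDM-1001", "erp_a", "ERP-884")
register("MDM-1001", "crm_b", "CRM-001")

def lookup(source, local_id):
    """Find the master ID behind a source-local identifier, if any."""
    for master_id, links in registry.items():
        if links.get(source) == local_id:
            return master_id
    return None

print(lookup("crm_b", "CRM-001"))  # MDM-1001
```

Visibility of this kind is often enough for a first phase: the operational systems keep running as they are, while the business gains one answer to "is this the same customer?"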

6. Put lightweight governance in place before the first major conflict

This does not mean creating a giant governance machine during the first month of integration. It means putting enough structure around the work so that disputes do not turn into endless escalation loops.

At minimum, define:

  • domain owners
  • stewards
  • escalation paths
  • approval rules for shared definitions
  • change control for critical attributes

The role of governance here is practical. It should help the business make decisions, not bury the integration under ceremony. Done well, it creates movement. Done badly, it becomes another source of delay.

7. Integrate in phases

Big-bang harmonization sounds decisive. It also tends to produce a wider blast radius when assumptions are wrong.

A phased approach gives the organization room to learn. One domain exposes problems the team did not see in discovery. Another reveals that regional variation matters more than expected. Another shows that the current source everyone trusted is weaker than assumed.

That is normal.

Phased integration lets the team absorb those lessons and apply them to the next wave instead of relearning them under pressure in every domain at once.

It also helps with trust. Users are more likely to adopt an integrated model that improves in visible steps than one that promises total alignment and arrives with new surprises.

8. Keep watching the data after go-live

Cutover is not the end of master data work.

The merged organization should keep tracking duplicate rates, null rates in critical fields, stewardship queue volume, hierarchy completeness, unresolved conflicts, complaints from downstream data consumers, and the manual workarounds that people still rely on when the official model does not give them what they need.

If those patterns do not improve over time, the integration is not really settling. It is just stabilizing around a flawed operating state.

That is a different thing.
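Post-go-live tracking does not need a heavy platform to start. A minimal sketch, assuming weekly snapshots of a few metrics with invented numbers: flag any metric whose latest value is no better than where it started.

```python
# Weekly snapshots of post-go-live health metrics (illustrative numbers).
snapshots = {
    "duplicate_rate":   [0.12, 0.10, 0.09],
    "tax_id_null_rate": [0.30, 0.31, 0.33],
}

def not_improving(series):
    """True when the latest value is no better than the first."""
    return series[-1] >= series[0]

# Metrics that are stabilizing around a flawed state rather than settling
flags = [metric for metric, series in snapshots.items() if not_improving(series)]
print(flags)  # ['tax_id_null_rate']
```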

What success actually looks like

Success in post-merger master data is usually described too neatly.

It is not perfect uniformity. It is not one magical golden record that satisfies every function at once. It is not total platform consolidation in a single wave.

A more honest definition is less dramatic and more useful.

Success means the merged business agrees on core entity definitions. Important records can be matched and governed consistently. Critical reports stop contradicting one another. People know who owns key decisions. Integration follows a repeatable pattern instead of improvisation. Manual reconciliation starts shrinking instead of spreading.

That may not sound glamorous. It does sound real.

And real matters more than elegant in M&A.

Final thought

Mergers and acquisitions expose data weaknesses quickly. Faster than most other business events.

They force hidden assumptions into the open. They reveal weak ownership, overloaded models, brittle hierarchies, and business definitions that only seemed stable because they had never been tested against another company’s version of the same concept. They also create a narrow window in which the combined business has to decide how it wants to operate going forward.

That is why master data matters so much in M&A.

Treat it like a secondary issue, and the merged company pays for that choice through rework, mistrust, local workarounds, and slow integration. Treat it like a core integration layer, and the business has a much better chance of becoming one enterprise instead of two legacy environments held together by temporary fixes.

The teams that do this well usually are not the ones with the cleanest slide decks. They are the ones willing to define terms early, profile before guessing, assign ownership before conflicts pile up, and build the future state in stages the business can actually absorb.

That is not flashy work.

It is the kind that holds.