Why Your PPDM Implementation Failed (and How to Try Again)

We just got back from the PPDM Energy Data Convention in Houston, where (unsurprisingly) PPDM was on a lot of minds.

One conversation we keep having with operators starts the same way.

“We tried to do PPDM a few years ago. It didn’t really work.”

The specifics vary every time. The pattern is always the same. PPDM gets adopted, work gets done, and then somewhere along the way the project quietly stops being a project and starts being a dead weight on the technical organization.

This isn’t a rare story. The PPDM model is sound, the people who built it know what they’re doing, and the operators who pursued it had real reasons to. So why does this keep happening?

PPDM doesn’t fail. PPDM projects do.

Most failed PPDM implementations fail for the same handful of reasons. None of them are about the model itself.


Reason 1: It was framed as a technology project

The most common failure mode. Someone in IT (or worse, a vendor’s salesperson) pitched PPDM as a technology adoption. Stand up the schema, migrate the data, and the business value will follow.

It doesn’t. PPDM is a data model, not a product. Adopting it is more like adopting an accounting standard than installing software. There’s no “go-live” moment where the project is done and the value starts. The value comes from running the business on top of the modeled data, which requires every team that touches that data to actually use it.

A PPDM project without a business sponsor who has a specific outcome they care about will fail. Not because the technical work was bad, but because nobody had a reason to make it real.

The fix. Tie the implementation to a specific business problem. Production reconciliation that takes too long. Reserves reporting that requires a manual scramble every quarter. Diligence requests that take weeks to answer. Anchor the PPDM work to solving that. The model is the foundation, but the foundation has to support something specific.


Reason 2: The team tried to implement everything at once

This one is almost universal. The team looked at the PPDM model, mapped it against their existing systems, and built an implementation plan that touched every domain.

Two years later they have half-finished tables for wells, production, land, completions, geoscience, and facilities. None of it is fully populated. None of it is being used in production. And the people who started the project have moved on, leaving the new team with a partial implementation and no clear way to finish it.

We covered this trap in What the PPDM Model Actually Gives You (and What It Doesn’t). The model is large, and you don’t have to implement all of it.

The fix. Subset hard. Pick the smallest part of PPDM that solves your current problem, build it end to end, put it in production, and use it for real before adding the next piece. If you can’t articulate why a particular table is in scope, leave it out for now. You can always add it later.


Reason 3: Nobody owned the data

This is the failure that hides in plain sight. The PPDM tables got built. The pipelines got written. The data landed where it was supposed to. But two months in, the production volumes in the PPDM table didn’t match the production volumes in accounting, and nobody was responsible for resolving it.

When the data starts disagreeing across systems, somebody has to make a judgment call. If there’s no owner of the well master, the working interest register, or the production allocation methodology, every disagreement becomes a meeting, a debate, and eventually a decision to stop trusting the new system and go back to the old spreadsheet.
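Disagreements like this can at least be surfaced automatically, even though resolving them still takes a named owner. Here is a minimal sketch of a cross-system reconciliation check; the inputs (volumes keyed by well id and month) and the tolerance are illustrative assumptions, not anything prescribed by PPDM.

```python
def reconcile_volumes(ppdm_volumes, accounting_volumes, tolerance=0.01):
    """Compare monthly production volumes from two systems.

    Each input maps (well_id, month) -> volume. Returns the discrepancies
    that exceed the tolerance, for a named owner to resolve.
    """
    discrepancies = []
    for key in sorted(set(ppdm_volumes) | set(accounting_volumes)):
        a = ppdm_volumes.get(key)
        b = accounting_volumes.get(key)
        if a is None or b is None:
            discrepancies.append({"key": key, "ppdm": a, "accounting": b,
                                  "issue": "missing in one system"})
        elif abs(a - b) > tolerance * max(abs(a), abs(b), 1.0):
            discrepancies.append({"key": key, "ppdm": a, "accounting": b,
                                  "issue": "volumes disagree"})
    return discrepancies

# One matching month, plus a record missing on each side.
issues = reconcile_volumes(
    {("35-109-12345", "2024-01"): 1500.0, ("35-109-12345", "2024-02"): 1400.0},
    {("35-109-12345", "2024-01"): 1500.0, ("35-109-12345", "2024-03"): 1300.0},
)
```

The check only produces a worklist. Without an owner who has the authority to resolve each item, the worklist just becomes another artifact nobody trusts.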

We wrote about this organizational problem in more detail in Data Quality in Upstream Oil and Gas: What Goes Wrong and Where to Start. The technical work is necessary. It’s not sufficient.

The fix. Before you start the technical work, name the owners. Who owns the well master record? Who owns the working interest register? Who has the authority to declare which version of a production volume is the official one? These are business decisions, and they need to be made by the business, not by the data team.


Reason 4: The migration plan ignored the legacy data

A lot of failed implementations had a clean PPDM target and a clean ingestion plan for new data. What they didn’t have was a plan for the twenty years of historical data sitting in legacy systems, spreadsheets, and PDFs.

The result was a PPDM environment with current data only, which the business couldn’t trust because it couldn’t be reconciled against the historical record. So the historical record stayed in the legacy system, and now the company had two sources of truth instead of one. Worse than where they started.

The fix. Be deliberate about historical data. Decide upfront whether you’re going to backfill it, what the cutoff is, and how you’ll handle the inevitable cases where the historical data doesn’t fit the model cleanly. There’s no universally right answer. There’s a wrong answer, which is to ignore the question and hope it sorts itself out.
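The triage decision is simple enough to sketch. The cutoff date and the definition of a record that "fits the model" (here: has a well id and a date) are illustrative choices for this example, not recommendations.

```python
from datetime import date

CUTOFF = date(2015, 1, 1)  # hypothetical backfill cutoff

def triage(record):
    """Decide where a historical record goes: load, archive, or exceptions."""
    if record.get("date") is None or record.get("well_id") is None:
        return "exceptions"   # doesn't fit the model cleanly; needs a human
    if record["date"] < CUTOFF:
        return "archive"      # kept queryable, but not backfilled
    return "load"             # backfilled into the PPDM-aligned tables

records = [
    {"well_id": "W1", "date": date(2020, 3, 1), "volume": 100.0},
    {"well_id": "W2", "date": date(2010, 6, 1), "volume": 80.0},
    {"well_id": None, "date": date(2021, 1, 1), "volume": 50.0},
]
routed = {r["well_id"]: triage(r) for r in records}
```

The point of writing the rule down, whatever the rule is, is that the exceptions path exists on purpose instead of being discovered mid-migration.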


Reason 5: The implementation tried to live in the application layer

This one shows up in shops where the existing systems were already complicated, and somebody decided to layer PPDM on top instead of underneath.

The result was an extra translation layer between the existing systems and the new model, with all the complexity that implies. Every data flow had to be mapped twice. Every change in the source system required an update to the mapping. The PPDM tables were always slightly behind, slightly inconsistent, and slightly hard to trust.

The fix. PPDM should be where the data lives, not where it gets republished. The ingestion pipelines should pull from source systems and land directly into PPDM-aligned tables, with the canonical master records living there. The legacy systems become source feeds, not parallel copies of the truth. We walked through what that looks like for OCC data specifically in OCC Data Ingestion: Automating What Most Companies Still Do by Hand.
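The pattern can be sketched in a few lines. The source field names, the mapping, and the in-memory stand-in for a WELL-like table keyed by UWI are all simplifications; a real pipeline would write to the database tables.

```python
def normalize(source_row):
    """Map one source-system row onto PPDM-style column names."""
    return {
        "UWI": source_row["api_number"].replace("-", ""),
        "WELL_NAME": source_row["well_name"].strip().upper(),
        "SPUD_DATE": source_row.get("spud"),
    }

def land(feed_rows, well_table):
    """Upsert normalized rows into the canonical well table, keyed by UWI."""
    for row in feed_rows:
        rec = normalize(row)
        # The source feed updates the master record directly; there is no
        # parallel copy of the truth to keep in sync.
        well_table[rec["UWI"]] = rec
    return well_table

wells = land(
    [{"api_number": "35-109-12345", "well_name": " smith 1-1 ", "spud": "2019-04-02"}],
    {},
)
```

The design choice that matters is the direction of flow: source systems feed the PPDM-aligned tables, and everything downstream reads from there.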


Reason 6: The team didn’t have upstream domain expertise

This is the failure that nobody wants to admit. The implementation team was full of competent data engineers and database designers, but nobody on the team had spent meaningful time in the upstream business.

So they built a PPDM implementation that technically worked. It just didn’t reflect how the business actually operated. Wells got modeled in ways that didn’t account for recompletions. Working interests got stored without the historical changes that mattered for revenue accounting. Production allocations got captured without the methodology that produced them.
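The working interest problem above is the kind of thing domain expertise catches early: interests need effective date ranges, so revenue for any month can be allocated against the interest in force at the time. A minimal sketch, with illustrative structures rather than the actual PPDM tables:

```python
from datetime import date

interests = [
    # (owner, working interest fraction, effective from, effective to)
    ("Acme Oil", 0.50, date(2015, 1, 1), date(2019, 12, 31)),
    ("Acme Oil", 0.35, date(2020, 1, 1), None),   # partial divestiture
    ("Bravo LP", 0.15, date(2020, 1, 1), None),
]

def interest_as_of(owner, on):
    """Return the owner's working interest in force on a given date."""
    for name, wi, start, end in interests:
        if name == owner and start <= on and (end is None or on <= end):
            return wi
    return 0.0
```

A model that stores only the current 0.35 for Acme Oil technically works, right up until revenue accounting asks about 2018.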

You cannot model upstream data correctly without somebody on the team who understands upstream. The data model is industry-specific because the industry is specific.

The fix. Get domain expertise on the team. Either hire it, contract it, or pair your data engineers with people from the business who can sanity check the design as you go. We’ve written before about why this matters in Why Oklahoma Energy Companies Can’t Afford to Ignore Data Engineering.


How to try again

If the last attempt failed, the instinct is to either start completely over or to try to rescue the existing implementation through brute force. Neither tends to work.

The approach that does work looks like this.

Audit what you have. Don’t throw it out. Some of what was built is probably fine. Identify the parts that are usable, the parts that need to be replaced, and the parts that should be quietly retired. You’re not starting from zero, even if it feels like you should be.

Pick one business outcome to anchor the restart. Same advice as before, but it’s worth repeating. The reason this matters is that an outcome gives you a way to scope the rescue. You’re not “fixing the PPDM implementation.” You’re “making the monthly production reconciliation work in three days instead of eight, using PPDM as the foundation.”

Decide what’s in and out of scope. Be ruthless. Most failed implementations had scopes that were too broad. The restart should be smaller, more focused, and more honest about what it’s not trying to do.

Get the right people involved. Business owners, domain experts, and the technical team all need to be in the same conversation. If the original failure was driven by IT going it alone, fix that this time.

Run it in production for real. No parallel runs that go on forever. Pick a date, switch the official source of truth to the new system, and commit to making it work from there. Parallel runs are how implementations stay half-finished.


Why it’s worth trying again

A failed implementation feels like a sunk cost, and there’s a temptation to just leave it. We talk to operators all the time who have decided that the PPDM thing didn’t work out, and they’re not going back to it.

The problem is that the underlying issue (scattered data across systems with no shared model) doesn’t go away. It just stops being addressed. The cost of that, in slow decisions and manual reconciliation and lost analyst hours, keeps compounding. Every quarter that passes makes the gap larger.

The companies that get this right tend to share a few things. They treat PPDM as a means to an end, not an end in itself. They scope tightly, deliver something that works, and expand from there. They put domain expertise on the team. And they’re honest about the parts that didn’t work, instead of papering over them.

The migration story we sketched in From Spreadsheets to a Real Data Stack: A Realistic Migration Path for Mid-Size Operators applies just as much to a restart as to a from-scratch project. Pick the highest-pain workflow, solve it properly with PPDM as the target, and use that win as the basis for what comes next.

If you tried PPDM and it didn’t take, that’s not the end of the story. It’s information about what to do differently. The model still does what it’s supposed to do. The implementation just has to be set up to actually use it.


We Were Just at PPDM 2026

We spent April 27 through 29 at the PPDM Energy Data Convention in Houston. If you were there and we didn’t get a chance to connect, or if you weren’t there and want to talk about a stalled implementation, we’d love to hear what you’re working on.
