The Defense Contract Management Agency's 14-point schedule assessment is, without exaggeration, the most widely adopted schedule quality framework in North American construction. Nearly every scheduling software vendor advertises DCMA-14 compliance. Nearly every public-sector RFP for schedule services references it. Nearly every third-party schedule auditor includes it as a baseline deliverable.
The framework is useful. It catches real problems. It provides a shared vocabulary for discussing schedule health, which is no small thing on a project with fifteen stakeholders who would otherwise each define "schedule quality" differently.
But here's the quiet problem: a schedule can pass all 14 DCMA criteria and still be analytically worthless. I've reviewed schedules that scored 93% on DCMA-14 and had five or six fundamental problems that no DCMA check would ever flag. The assessment's silence on certain categories of failure is not a flaw in the framework — it's simply the nature of any finite checklist. Fourteen checks can't cover every way a schedule can be wrong.
The problem isn't DCMA-14 itself. The problem is treating it as sufficient instead of as a starting point.
What follows is a practical look at what the DCMA-14 framework does well, where it's silent, and the six additional quality checks that serious peer reviews run on top of it to surface the issues the standard framework misses.
Quick Refresher: What DCMA-14 Actually Checks
For readers who work with it less frequently, the 14 DCMA checks cover:
- Logic (missing predecessors/successors)
- Leads (negative lag)
- Lags (excessive positive lag)
- Relationship types (overuse of FF, SS, SF)
- Hard constraints (date constraints that override logic)
- High float (activities with total float over 44 working days)
- Negative float (activities forecast to finish later than their required dates allow)
- High duration (activities with durations over 44 working days)
- Invalid dates (actual dates in the future, forecast dates in the past)
- Resources (activities without resource loading, if required)
- Missed tasks (activities that should have been completed by data date but weren't)
- Critical path test (does the network still produce a valid critical path when an activity slips?)
- Critical path length index (CPLI: critical-path length plus total float, divided by critical-path length)
- Baseline execution index (ratio of tasks completed to tasks planned to be completed)
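The two ratio checks at the end of the list are simple arithmetic. A minimal sketch, using illustrative numbers and the commonly cited DCMA target of 0.95 for both indices:

```python
# CPLI = (critical path length + total float on that path) / critical path length.
# BEI  = tasks actually completed / tasks planned to be complete by the data date.
# The numbers below are illustrative, not from any real schedule.

def cpli(critical_path_length_days: float, total_float_days: float) -> float:
    """Critical Path Length Index; values below ~0.95 suggest the plan is tight."""
    return (critical_path_length_days + total_float_days) / critical_path_length_days

def bei(tasks_completed: int, tasks_planned_complete: int) -> float:
    """Baseline Execution Index; values below ~0.95 suggest execution is lagging."""
    return tasks_completed / tasks_planned_complete

print(round(cpli(200, -10), 3))  # negative float drags CPLI below 1.0 → 0.95
print(round(bei(90, 100), 2))    # → 0.9
```

Note that negative total float pulls CPLI below 1.0, which is why the negative-float and CPLI checks tend to fail together.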
Run these 14 checks against a schedule and you'll catch a lot of the obvious methodology failures. Missing predecessors, excessive constraints, implausible float — all caught.
What DCMA-14 doesn't catch is the subtler category of failure: schedules where the structure is clean but the meaning is wrong. A perfectly logical, constraint-free, properly resourced schedule can still misrepresent how the project will actually be built.
That's where the additional checks come in.
The Six Additional Checks Worth Running
1. Procurement-to-installation logic verification
DCMA-14 doesn't specifically examine whether procurement activities are logically tied to their corresponding installation activities. A schedule can have all its procurement activities, all its installation activities, and pass every logic check — while having no actual logic tie between the two.
The consequence: procurement can slip weeks without the installation date moving in the schedule. The critical path looks stable. The project hits the installation activity and discovers the equipment hasn't arrived. By then, the schedule has been wrong for months and nobody knew.
The additional check: explicit verification that every long-lead equipment item has a finish-to-start relationship between its "procurement complete" milestone and its "installation start" activity. No soft-logic workarounds. No reliance on shared successor chains that happen to align.
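This check is mechanical once the schedule is exported. A minimal sketch, assuming the data has been pulled into simple Python structures (a map of long-lead items to their milestone/activity ID pairs, plus relationship tuples); all IDs and link-type codes are illustrative, not from any specific scheduling tool:

```python
# Flag long-lead items whose "procurement complete" milestone has no direct
# finish-to-start tie to the matching "installation start" activity.

def missing_procurement_ties(long_lead_pairs, relationships):
    """Return the long-lead items lacking a direct FS link from procurement
    milestone to installation activity."""
    fs_links = {(pred, succ) for pred, succ, link_type in relationships
                if link_type == "FS"}
    return [item for item, (proc_id, install_id) in long_lead_pairs.items()
            if (proc_id, install_id) not in fs_links]

long_lead = {
    "Generator":  ("PROC-GEN-DONE", "INST-GEN-START"),
    "Switchgear": ("PROC-SWG-DONE", "INST-SWG-START"),
}
rels = [
    ("PROC-GEN-DONE", "INST-GEN-START", "FS"),  # properly tied
    ("PROC-SWG-DONE", "SITE-MILESTONE", "FS"),  # soft workaround, not a direct tie
]
print(missing_procurement_ties(long_lead, rels))  # → ['Switchgear']
```

The point of requiring a *direct* link is exactly the one above: shared successor chains that happen to align today will silently stop aligning when dates move.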
2. Commissioning density analysis
The DCMA high-duration check flags activities over 44 working days. It says nothing about the inverse problem — dozens of short-duration activities crammed into a time window that cannot physically accommodate them.
The typical failure pattern: the last 60 days of a data center schedule show 200 activities in a single mechanical room, requiring an average of six simultaneous trades working in a space that can reasonably accommodate two. The schedule passes every DCMA check. The work is physically impossible.
The additional check: for any project phase with high activity density, calculate concurrent-activity counts against physical work area and confirm they're achievable. This is particularly critical in commissioning windows for data centers, OR suites in hospitals, and anywhere else multiple trades share tight physical space.
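The density calculation itself is straightforward. A sketch under simple assumptions: each activity carries a work area and integer start/finish day offsets, and each area has a reviewer-supplied crew capacity (all values illustrative):

```python
from collections import defaultdict

def overloaded_days(activities, area_capacity):
    """Count concurrent activities per area per day; return (area, day, count)
    tuples wherever the count exceeds what the space can accommodate."""
    load = defaultdict(int)
    for area, start, finish in activities:
        for day in range(start, finish + 1):  # inclusive day range
            load[(area, day)] += 1
    return sorted((area, day, n) for (area, day), n in load.items()
                  if n > area_capacity.get(area, float("inf")))

acts = [("Mech Room 1", 1, 3), ("Mech Room 1", 2, 4), ("Mech Room 1", 2, 2)]
print(overloaded_days(acts, {"Mech Room 1": 2}))
# day 2 puts three crews in a room that holds two
```

The hard part is not the counting but the capacity numbers, which require a walkthrough or layout review; the code only makes the overload visible once those numbers exist.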
3. Calendar consistency at handoff points
DCMA-14 does not examine whether calendar assignments across sequential activities are consistent. A schedule can have 15 different calendars — one for each trade, one for the owner's weather assumptions, one for site access restrictions, one for each inspection authority — and pass every DCMA check.
The problem: when an activity on a 6-day calendar hands off to an activity on a 5-day calendar, the date math gets subtle. Float calculations on either side of the handoff can drift. A 10-day Saturday-included activity that finishes on a Friday hands off to a 5-day-calendar activity that picks up on the following Monday, losing the weekend. Over dozens of handoffs, the accumulated drift can meaningfully distort the forecast.
The additional check: systematic review of calendar assignments at every critical-path handoff, with explicit justification for any calendar transitions and quantification of the resulting math drift.
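A minimal sketch of that scan, assuming each activity record exposes a calendar name and a critical-path flag, and relationships are predecessor/successor pairs (field names are illustrative):

```python
def calendar_transitions(activities, relationships):
    """Return critical-path handoffs where the calendar changes, so each one
    can be justified (or its date drift quantified) in the review narrative."""
    cal = {a["id"]: a["calendar"] for a in activities}
    critical = {a["id"] for a in activities if a["critical"]}
    return [(p, s, cal[p], cal[s]) for p, s in relationships
            if p in critical and s in critical and cal[p] != cal[s]]

acts = [
    {"id": "A", "calendar": "6-day", "critical": True},
    {"id": "B", "calendar": "5-day", "critical": True},
    {"id": "C", "calendar": "5-day", "critical": True},
]
print(calendar_transitions(acts, [("A", "B"), ("B", "C")]))
# flags the 6-day → 5-day handoff from A to B; B → C is consistent
```

This produces the worklist; quantifying the drift at each flagged handoff still requires running the dates through both calendars.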
4. Critical path density and diversification
DCMA-14's critical path length index measures the ratio of critical path duration to remaining project duration. It says nothing about how many activities actually sit on the critical or near-critical path.
A 10,000-activity schedule with 200 critical-path activities is healthy. A 10,000-activity schedule with 1,800 critical and near-critical activities is a tripwire — a project where variance anywhere moves the finish. Both pass DCMA-14's critical path checks.
The additional check: count critical and near-critical activities (typically defined as activities with total float under 10 days), calculate them as a percentage of total activities, and flag schedules with an excessive near-critical concentration. On a project with too many paths running close to critical, any of the standard delay analyses will struggle to identify what actually drove the delay.
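The concentration metric can be sketched in a few lines, assuming each activity exposes its total float in days. The 10-day threshold matches the definition above; the 15% flag level is an illustrative assumption, not a published standard:

```python
def near_critical_share(float_days, threshold=10):
    """Fraction of activities whose total float is under the threshold."""
    near = sum(1 for f in float_days if f < threshold)
    return near / len(float_days)

floats = [0, 2, 5, 8, 12, 30, 45, 60, 90, 120]
share = near_critical_share(floats)
print(f"{share:.0%} of activities are near-critical")  # 40% here
if share > 0.15:  # illustrative flag level
    print("Flag: excessive near-critical concentration")
```

On a real schedule the float values come straight from the tool's activity export, so this runs against every monthly update at no extra cost.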
5. Milestone-to-activity logic integrity
Milestones are supposed to be logical representations of key project events — contract start, construction complete, RFS, substantial completion. DCMA-14 doesn't verify that summary milestones are actually driven by the activities that should drive them.
In practice, milestones frequently end up as decorative markers rather than logically tied events. A "Substantial Completion" milestone should be driven by the late finish of every activity required for substantial completion. When it's driven by only 4 of the 12 activities that should feed into it, the milestone date can appear stable while actual substantial completion is slipping.
The additional check: for every key project milestone, trace the logic backward and confirm every activity that physically drives that milestone is logically tied to it. Identify orphan activities that should feed into a milestone but don't.
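The backward trace is a graph walk. A sketch, assuming a mapping of activity to direct predecessors and a reviewer-supplied list of activities that should drive the milestone (all IDs illustrative):

```python
def orphan_drivers(milestone, predecessors, should_drive):
    """Walk the logic backward from the milestone and report the activities
    that physically drive it but are not reachable in its predecessor chain."""
    reachable, stack = set(), [milestone]
    while stack:
        node = stack.pop()
        for pred in predecessors.get(node, []):
            if pred not in reachable:
                reachable.add(pred)
                stack.append(pred)
    return sorted(set(should_drive) - reachable)

preds = {"SUBSTANTIAL-COMP": ["PUNCH", "LIFE-SAFETY"], "PUNCH": ["FINISHES"]}
print(orphan_drivers("SUBSTANTIAL-COMP", preds,
                     ["PUNCH", "LIFE-SAFETY", "ELEV-INSPECT", "FINISHES"]))
# → ['ELEV-INSPECT']  (should drive the milestone but is not tied to it)
```

The `should_drive` list is the judgment-based input: the tool can only confirm reachability, not decide which activities belong on that list.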
6. Baseline integrity over time
DCMA-14 examines whether the current schedule is internally consistent. It does not examine whether the baseline has been silently adjusted, rolled forward, or re-established without formal change control.
This is the check that most often catches the worst schedule abuses. A baseline that was legitimately set in Month 1 can quietly be replaced in Month 8 by a mid-project revision labeled as "the baseline" — making historical variance disappear. A formally re-baselined schedule is fine, assuming the re-baselining was documented and owner-approved. A silently updated baseline is a serious problem.
The additional check: comparison of the current stated baseline against historical baselines captured in prior monthly updates. Flag any instances where baseline dates for specific activities have shifted between reporting periods without corresponding change-order documentation.
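A sketch of the period-over-period comparison, assuming each monthly update's baseline finishes were captured as an ID-to-date snapshot and approved changes are tracked by activity ID (dates shown as ISO strings; all values illustrative):

```python
def silent_baseline_shifts(prior, current, approved_changes):
    """Return activities whose baseline finish moved between reporting periods
    without a corresponding change-order record."""
    return sorted(
        (aid, prior[aid], current[aid])
        for aid in prior
        if aid in current
        and prior[aid] != current[aid]
        and aid not in approved_changes
    )

month7 = {"A100": "2025-03-01", "A200": "2025-04-15", "A300": "2025-05-01"}
month8 = {"A100": "2025-03-01", "A200": "2025-05-20", "A300": "2025-05-10"}
print(silent_baseline_shifts(month7, month8, approved_changes={"A300"}))
# → [('A200', '2025-04-15', '2025-05-20')]  A300 moved too, but was approved
```

The precondition is that someone actually archives each monthly submittal; the check is only as good as the snapshot history behind it.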
Why This Matters for Owners and OPMs
The practical point isn't that DCMA-14 is bad. It's good, and it should continue to be used. The practical point is that DCMA-14 compliance is a necessary condition for schedule health but not a sufficient one.
When an owner or OPM reviews a DCMA-14 report showing 94% compliance, the correct mental response is "this is the schedule's floor, not its ceiling." The schedule has passed the table-stakes check. The question of whether the schedule actually represents a buildable project with honest risk exposure is still open.
In my experience, roughly 60-70% of schedules that score above 90% on DCMA-14 still have at least one serious issue in the six additional categories. Not because the framework is failing — it's doing its job well — but because those issues simply aren't within its scope.
The "Passes DCMA" Trap
There's a specific failure mode worth flagging directly. It goes roughly like this:
A contractor's scheduling team runs DCMA-14 on every monthly update. The reports consistently score above 90%. This becomes the basis for the contractor's assertion — reinforced at every monthly owner meeting — that "the schedule is healthy." The owner, lacking a reason to disagree, accepts this framing. Six months later, when the project starts missing dates, the contractor points to the DCMA history as evidence that the schedule was always sound and the delays must be due to other causes.
The argument is structurally weak, because DCMA-14 compliance never actually represented schedule soundness. But it's rhetorically powerful, because everyone has been trained to treat DCMA scores as the quality standard.
The remedy isn't to abandon DCMA-14. It's to complement it with the additional checks that address what the framework leaves silent — and to be explicit about doing so in contracts, monthly reporting, and peer review engagements.
What This Looks Like in a Real Peer Review
A serious schedule peer review typically runs DCMA-14 as the first automated pass — maybe 20-30% of the total analytical effort. The findings of the DCMA pass identify where to look for deeper issues, but they don't constitute the review itself.
The remaining 70-80% of the analytical effort goes into the additional checks above, plus judgment-based review of critical path content, procurement integration, resource realism, commissioning density, and baseline history. This is where the value of an experienced reviewer shows up — because automated tools can't make the judgment calls about whether a resource loading is realistic for the contractor's actual workforce, or whether commissioning density is achievable given the specific equipment arrangement.
Automated tools tell you the schedule is consistent. Human review tells you whether the schedule is true.
A Recommendation for Owners and OPMs
If you're an owner or owner's representative who receives contractor schedule submittals, consider adopting the following simple enhancement to your review standard:
Require DCMA-14 compliance as a baseline. Your contract likely already does this. Keep it.
Additionally require the contractor's scheduling team to address, in their baseline submittal narrative, the six categories above. Not in DCMA-style pass/fail terms, but in narrative terms: "Here's our procurement-to-installation logic. Here's our commissioning density analysis. Here's our calendar integrity plan. Here's our baseline change-control procedure."
This single procedural change often surfaces issues that no automated assessment would catch. It also signals to the contractor's scheduling team that the owner is a sophisticated reviewer, which tends to improve the quality of the submittal considerably.
For contractors: requiring this of yourselves before submittal costs very little and eliminates a category of avoidable rework on baseline acceptance. The best scheduling teams already do most of these checks informally. Formalizing them makes the work defensible.
The Honest Framing
DCMA-14 is a floor, not a ceiling. It catches the bottom 30% of schedule quality problems. The top 70% of problems — the ones that actually drive major claims — live in the spaces the framework doesn't examine.
A schedule that scores 94% on DCMA-14 and passes the six additional checks above is genuinely defensible. A schedule that scores 94% on DCMA-14 and has never been interrogated on the additional categories is an unknown. The score tells you something important, but not enough.
When an owner's rep or claim consultant asks "is this schedule defensible?" — the right answer isn't "it passed DCMA-14." The right answer is "it passed DCMA-14 plus here's what our additional quality review found."
The difference between those two answers is often the difference between a clean project and a claim.
Schedule Peer Review, Done Thoroughly
Our schedule peer reviews include DCMA-14 analysis plus the six additional quality checks described in this article, supplemented by judgment-based review of critical path content, procurement integration, resource realism, and baseline integrity. Written reports with specific findings. Three-to-five-day turnaround on baselines up to 5,000 activities.