The default metric, in every review meeting where audiovisual production comes up, is the number of videos per month. It sits next to the engagement rate, the follower count, the cost per lead. It gets discussed as if it spoke for itself.
It doesn't. Three reasons, and any one of them is enough to disqualify it as a steering tool.
The first is format heterogeneity. A fifteen-second story shot on a phone and posted right away, and a two-minute corporate video with an interview, b-roll, color grading and light motion design, have nothing in common. They don't consume the same effort, don't mobilize the same profiles, don't go through the same review loops. "We shipped twelve videos this month" mixes those two objects and every variant in between. It says nothing useful about what the team actually produced.
The second is seasonality. For an internal team covering several brands or entities, the calendar is never flat. There are peak weeks where requests pile up, and empty weeks where nothing filmable is happening. The monthly average flattens out exactly the wave structure you need to see when planning. A team that ships eight videos per month on average can perfectly well ship eighteen in September and zero in August. Steering happens on the waves, not on the smoothed average.
The third is the most structural, and the one people comment on least: shoot batching. In practice, you group several videos onto a single shooting day, between three and ten depending on the case. A day spent on a site can produce the raw material for eight deliverables that will then ship spread out over two months. The month you shot in and the month you publish in have nothing to do with each other. The monthly count of published videos therefore measures a delayed, sometimes distant trace of the work actually done.
Add those three biases together, and "videos per month" becomes a dial you stare at without knowing how to read it.
The right counter is the person-day of post-production
If you want a cadence metric that resists those three biases, you have to look for the one activity that doesn't get batched, doesn't jump with seasonality, and aggregates cleanly regardless of format: post-production.
Shooting compresses, as we just covered. Distribution shifts, also covered. The validation and feedback phase depends on the schedules of the people commissioning the work, so it's erratic by nature. Scripting, briefing and prep happen intermittently, in a few hours stolen between other tasks.
Post-production, by contrast, occupies its people full-time. It doesn't get grouped, it doesn't skip, it runs linearly, video after video. It's also, in raw hours, the heaviest phase of the whole process. Each day of post is a unit comparable to every other, regardless of what came before or what comes after.
So that's the natural unit of cadence. Not the published video, the person-day of post-production.
What this changes in steering
Once that unit is set, the useful coefficient becomes the ratio between videos delivered and post-production person-days consumed over the period. A team that delivered 220 videos against 275 post person-days runs at 0.8 videos per post-production day. Another, with the same volume of deliverables but 165 days, runs at 1.3.
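As a minimal sketch, the whole computation is a single division; the figures below mirror the example above.

```python
# Minimal sketch: the coefficient is videos delivered divided by
# post-production person-days consumed over the same period.
videos_delivered = 220
post_person_days = 275

coefficient = videos_delivered / post_person_days
print(f"{coefficient:.2f} videos per post-production day")  # -> 0.80
```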
The coefficient isn't meant to compare two teams against each other. It depends on average format, scope of responsibilities, tools, finish level. It's meant to compare you to yourself over time, and to turn a volume commitment into a person-day commitment.
In practice, when someone commissions twenty videos by the end of the quarter, the useful question becomes: how many post-production days does that represent at our current coefficient, and do we have them in reserve before the deadline. Twenty videos at 0.8 per day means twenty-five post person-days to fit in. If a month of runway is left, two people on post represent roughly forty working days; with fifteen days each already committed elsewhere, ten remain against the twenty-five needed. The answer is no, and it's a number, not a gut feeling.
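Here is the same check as a sketch; the runway and commitment figures are the ones from the example, to be swapped for your own calendar.

```python
# Hypothetical capacity check: turn a volume commitment into post
# person-days and compare it to what's actually free before the deadline.
def post_days_needed(videos: int, coefficient: float) -> float:
    """Post person-days required to ship `videos` at the current coefficient."""
    return videos / coefficient

commitment = 20            # videos commissioned
coefficient = 0.8          # videos per post person-day, from last quarter
people_on_post = 2
runway_days = 20           # working days left per person before the deadline
committed_elsewhere = 15   # days per person already booked on other work

needed = post_days_needed(commitment, coefficient)                # 25.0
available = people_on_post * (runway_days - committed_elsewhere)  # 10
verdict = "yes" if available >= needed else "no"
print(f"need {needed:.0f} post days, have {available}: {verdict}")
```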
That's the main benefit of the counter. It turns a debate about perceived workload into an arithmetic problem, and shifts the conversation to a variable you can actually steer: post time.
Why the coefficient moves, and that's a good thing
Once you measure, you quickly find the number isn't stable. From one quarter to the next, it shifts noticeably, sometimes by 30 to 40%, without any change in the quality of deliverables.
Three variables explain most of the variation. The format mix: a quarter dominated by short verticals doesn't produce the same coefficient as a quarter packed with longer pieces and heavy color grading. Team composition: an apprentice ramping up, a freelancer being onboarded, a departure — those changes shift average velocity. And task scope: an editor who also takes on project tracking, briefs, and client feedback won't run at the same number as the same editor working only at their station.
That's exactly why you recompute the coefficient every three months and never try to lock it in. Its value isn't that of a norm, it's that of a thermometer. A coefficient that drops sharply is signaling something: a format that has gotten more complex, an editor overloaded with other tasks, a tool that has regressed, a diffuse loss of efficiency you wouldn't have seen otherwise.
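Reading the thermometer can be as simple as a trend check. A hypothetical sketch, where the 30% threshold and the sample values are assumptions to tune:

```python
# Hypothetical trend check: flag a sharp quarter-over-quarter drop
# in the coefficient so someone goes looking for the cause.
def dropped_sharply(history: list[float], threshold: float = 0.30) -> bool:
    """True if the latest coefficient fell by more than `threshold` versus the previous quarter."""
    if len(history) < 2:
        return False
    previous, latest = history[-2], history[-1]
    return (previous - latest) / previous > threshold

quarterly = [0.8, 0.85, 0.5]  # videos per post person-day, oldest first
if dropped_sharply(quarterly):
    print("coefficient dropped sharply: check format mix, task scope, tooling")
```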
If you're steering, you're already measuring
The above sounds like an instrumentation project. It isn't.
Any team that actually steers its production already has a trace, somewhere, of when each video enters post and when it leaves. A column in a shared sheet, a status in Trello, transition dates in a project tool, a task closed in Asana. The raw material exists. What's missing, in most cases, is the aggregation: nobody has sat down to add up the person-days for the quarter and divide by the number of deliverables.
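A sketch of that missing aggregation, assuming the trace is exported as a CSV with one row per video and the dates it entered and left post; the column names and file name are hypothetical.

```python
# Sketch: add up post person-days for the quarter and divide by deliverables.
import csv
from datetime import date

def quarter_coefficient(path: str) -> float:
    videos = 0
    person_days = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            entered = date.fromisoformat(row["post_entered"])
            left = date.fromisoformat(row["post_left"])
            # Crude proxy: calendar days between entry and exit. This
            # overcounts if a video sits idle in post; swap in logged
            # effort or working-day logic if your tool records it.
            person_days += (left - entered).days or 1
            videos += 1
    return videos / person_days

print(f"{quarter_coefficient('post_log_q3.csv'):.2f} videos per post person-day")
```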
That said, a coefficient is not a verdict. A team going from 0.8 to 1.2 videos per post day hasn't necessarily improved. It may simply have simplified its formats, lowered the editing bar, skipped a validation round, accepted deliverables it would have rejected six months earlier. Velocity, alone, says nothing about quality. It just says how fast you're producing what you're producing.
That's why the coefficient is never read on its own. It sits next to a qualitative read the numbers don't capture: are the videos we're shipping still at the level we set, are the people commissioning the work happy with them for the right reasons, is the team proud of what it delivers. If any of those three answers gets fuzzy, a rising coefficient is actually a warning sign, not good news. Conversely, a coefficient that drops during a quarter when the editorial bar has been raised can be perfectly healthy.
The number takes the debate out of mood. It doesn't take the debate out of standards. It's a useful dial reading, as long as you keep the other eye on the work itself.
What's left is figuring out what the number lets you decide. Two questions open up immediately. The first is how you compress shooting so that post becomes the only bottleneck that matters. The second is how many channels a given team can serve before the frequency per channel collapses. Each is a full article in itself.
