The plant manager at a mid-sized contract manufacturer walks into the Monday meeting with a binder. Inside are fifteen pages of KPIs. Weekly output, overall equipment effectiveness broken down by shift and machine, scrap rate by line, first-pass yield, inventory value at cost, labor hours per unit, on-time-in-full percentage, a cycle-time heat map, and a chart showing supplier defect rates trending sideways for the third quarter in a row. Everyone nods. Nobody changes anything. The meeting ends, the binder goes on a shelf, and two days later a production order slips because a raw material ran out at the wrong location. None of the fifteen KPIs predicted it, and none of them would have. This is the dirty secret of the modern manufacturing KPI dashboard. Most of the numbers on it are there because they were easy to compute, not because they ever changed a decision. The metrics that actually move operations are different in kind, not just in detail. They point at a specific action, at a specific location, with a specific deadline.

The Difference Between a Decision Metric and a Decoration

A useful metric answers a question that has a corresponding action. "Do we need to move material from Site A to Site B today?" is a question. "What is our total inventory value?" is not, at least not on a Monday morning. Total inventory value is an accounting figure. It matters quarterly, to the CFO, in the context of working capital. It does not tell a plant manager what to do before lunch.

This is the filter worth applying to every number on a manufacturing KPI dashboard. If the metric moves, what changes? If the answer is "nothing, we note it," the metric is decoration. The vanity metrics manufacturing teams track fall into a predictable set. Total units produced without context on demand. Aggregate OEE without a per-line drilldown. Year-over-year output compared to year-over-year headcount. These are reporting figures. They belong in board decks, not on operational screens.

The metrics that earn a spot on an ops dashboard worth using tend to share three traits. They are tied to a specific asset (an item, a location, a production order). They have a threshold that triggers action (critical, low, shortfall, overdue). And they carry enough context that the person seeing them can act without a follow-up query. "Palm Oil at Site B is critical, enough for 3 days of production, nearest surplus is at Site A" is a decision. "Inventory health: 87%" is not.
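
One way to picture those three traits is as a record shape. Here is a minimal sketch; the field names and severity levels are illustrative, not from any particular system:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    OK = "ok"
    LOW = "low"
    CRITICAL = "critical"


@dataclass
class DecisionMetric:
    """A metric carrying enough context to act on without a follow-up query."""
    item: str             # the specific asset ("Palm Oil")
    location: str         # where the gap lives ("Site B")
    severity: Severity    # the threshold state that triggers action
    days_of_cover: float  # how long before the line stops
    suggestion: str       # the action, not just the number


metric = DecisionMetric(
    item="Palm Oil",
    location="Site B",
    severity=Severity.CRITICAL,
    days_of_cover=3.0,
    suggestion="nearest surplus is at Site A",
)

# Renders as a decision, not a decoration:
print(f"{metric.item} at {metric.location} is {metric.severity.value}, "
      f"enough for {metric.days_of_cover:.0f} days of production, {metric.suggestion}")
```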

Production Health Metrics That Move the Line

Start with availability. How many units of each product can actually ship or start production right now, given current material availability? This is available-to-promise (ATP), and it is the single most load-bearing production health metric a manufacturer can surface. When ATP on a finished good drops to zero, the system should not just say "out of stock." It should name the bottleneck material, the quantity short, and the location where the gap lives. That is the difference between a dashboard figure and an actionable KPI.
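
At its core, that calculation is a minimum over BOM lines. A sketch, assuming a flat BOM and per-material stock figures (all names and quantities hypothetical):

```python
def available_to_promise(bom, on_hand):
    """Units of a finished good buildable from current stock, plus the bottleneck.

    bom: {material: qty_per_unit}, on_hand: {material: qty_in_stock}.
    Returns (units, bottleneck_material, qty_short_for_one_more_unit).
    """
    units = min(int(on_hand.get(m, 0) // qty) for m, qty in bom.items())
    # The bottleneck is the material that caps the build count.
    bottleneck = min(bom, key=lambda m: on_hand.get(m, 0) // bom[m])
    shortfall = bom[bottleneck] * (units + 1) - on_hand.get(bottleneck, 0)
    return units, bottleneck, shortfall


bom = {"palm_oil_kg": 2.5, "lye_kg": 0.4, "fragrance_ml": 10}
on_hand = {"palm_oil_kg": 7.5, "lye_kg": 40, "fragrance_ml": 500}
units, bottleneck, short = available_to_promise(bom, on_hand)
print(f"ATP: {units} units; bottleneck: {bottleneck}, short {short:.1f} for the next unit")
```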

Variance is the second load-bearing metric, specifically run-level variance between expected and actual consumption. The BOM says the line should consume 100 kg of a material for a run. The operator actually consumed 108. That 8% delta is either a waste problem, a measurement problem, a quality problem, or a theft problem, but it is always something. Aggregated daily, this metric hides. Captured per run, per operator, per material, it becomes one of the most honest signals on the floor.
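
The calculation itself is trivial; the value lives entirely in the grain. A sketch with hypothetical field names and a tolerance that would need tuning per material:

```python
def run_variance(expected_kg, actual_kg):
    """Percent delta between BOM-expected and actual consumption for one run."""
    return (actual_kg - expected_kg) / expected_kg * 100


# One record per run, per material, per operator: the grain where variance is honest.
runs = [
    {"run": "R-1041", "material": "palm_oil", "operator": "OP-7", "expected": 100, "actual": 108},
    {"run": "R-1042", "material": "palm_oil", "operator": "OP-3", "expected": 100, "actual": 101},
]
for r in runs:
    delta = run_variance(r["expected"], r["actual"])
    if abs(delta) > 5:  # hypothetical tolerance; tune per material
        print(f"{r['run']} {r['material']} {r['operator']}: {delta:+.1f}% variance")
```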

The third decision-moving metric is location health. Every stock record belongs to a location, every location has a health state derived from the items it holds, and the organization inherits its health from its locations. Cascading location health turns a multi-site operation into a single glanceable view. A manager sees "Site B: low" and drills in to find two items at critical. The dashboard does not demand analysis. It demands a click.
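
The cascade itself can be a worst-of rollup: an item's state sets its location's state, and the worst location sets the organization's. A minimal sketch assuming that model (real systems may weight or threshold differently):

```python
# Severity ordering is an assumption: the higher number wins the rollup.
SEVERITY = {"ok": 0, "low": 1, "critical": 2}


def rollup(states):
    """Worst state wins: a location is only as healthy as its worst item."""
    return max(states, key=SEVERITY.get, default="ok")


sites = {
    "Site A": {"palm_oil": "ok", "lye": "ok"},
    "Site B": {"palm_oil": "critical", "lye": "low"},
}
site_health = {site: rollup(items.values()) for site, items in sites.items()}
org_health = rollup(site_health.values())
print(site_health)         # {'Site A': 'ok', 'Site B': 'critical'}
print("Org:", org_health)  # Org: critical
```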

Inventory KPI Signals That Predict Problems

Inventory KPI work falls into two camps. Point-in-time snapshots (how much stock is on hand) and flow metrics (how stock is moving through the system). Snapshots are easy and mostly unhelpful for daily decisions. Flow metrics are harder to compute and almost always the ones worth watching.

Consumption rate is the first flow metric that earns its screen space. For each item, what is the rolling daily usage, and how does today compare to the baseline? Two standard deviations above or below the trend is the threshold worth flagging, because that deviation is where the interesting questions live. A raw material that suddenly gets consumed twice as fast is either a BOM change, a production scaling event, or a leak. An item that suddenly stops moving is either a substitution that nobody documented or a line that has gone quiet. Neither situation belongs in a weekly report. Both belong in an alert.
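
The two-sigma flag is a few lines over a rolling window. A sketch using the standard library; the window length and threshold are assumptions to tune per item velocity:

```python
import statistics


def consumption_anomaly(history, today, sigma=2.0):
    """Flag today's usage if it falls outside `sigma` standard deviations
    of the rolling baseline. `history` is daily usage, oldest first."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean  # flat baseline: any movement is a deviation
    return abs(today - mean) > sigma * stdev


# A ten-day window here for brevity; a 30-day rolling window is more typical.
daily_usage = [98, 102, 101, 97, 100, 99, 103, 101, 100, 98]
print(consumption_anomaly(daily_usage, today=210))  # True: consumption doubled
print(consumption_anomaly(daily_usage, today=0))    # True: item stopped moving
print(consumption_anomaly(daily_usage, today=101))  # False: within trend
```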

Days-to-stockout is the second flow metric with real operational power. Based on the actual consumption rate (not a static assumption), how many days of cover does this item have? Items with a week or more of movement history can be projected with reasonable confidence. The result is a forward-looking list: here are the items that will run out within the planning horizon if nothing changes. Pair that list with the MRP-generated purchase recommendations and a team can walk into Monday knowing exactly what to order and when. This is closer in spirit to the kind of planning discussed in the piece on MRP planning horizons, where deterministic demand calculation replaces guesswork.
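
Days-to-stockout falls straight out of the consumption rate. A sketch with hypothetical field names; the seven-day minimum mirrors the week of movement history mentioned above:

```python
def days_to_stockout(on_hand, daily_usage_history, min_history=7):
    """Days of cover from actual consumption, or None if history is too thin
    to project with confidence."""
    if len(daily_usage_history) < min_history:
        return None
    rate = sum(daily_usage_history) / len(daily_usage_history)
    if rate <= 0:
        return float("inf")  # item is not moving; no stockout projected
    return on_hand / rate


items = {
    "palm_oil": (250, [95, 100, 110, 98, 102, 99, 101]),
    "lye": (400, [10, 12, 9, 11, 10, 10, 11]),
}
horizon = 14  # planning horizon in days, an assumption
for name, (qty, hist) in items.items():
    days = days_to_stockout(qty, hist)
    if days is not None and days < horizon:
        print(f"{name}: ~{days:.1f} days of cover -- order now")
```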

Variance tables deserve their own mention. Not the overall waste rate, but the breakdown: waste by product, by material, by operator, over time. One operator whose consumption variance runs consistently higher than their peers' is a training question, not a discipline question, but that pattern cannot surface from an aggregate number.
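
That breakdown is a grouping exercise, not a statistics exercise. A sketch with the standard library, record fields hypothetical:

```python
from collections import defaultdict

# One variance record per run: the grain that aggregate waste rates destroy.
records = [
    {"operator": "OP-7", "material": "palm_oil", "variance_pct": 8.0},
    {"operator": "OP-3", "material": "palm_oil", "variance_pct": 1.0},
    {"operator": "OP-7", "material": "lye", "variance_pct": 6.5},
    {"operator": "OP-3", "material": "lye", "variance_pct": 0.5},
]

by_operator = defaultdict(list)
for r in records:
    by_operator[r["operator"]].append(r["variance_pct"])

# An operator consistently above peers surfaces only at this grain.
for op, deltas in sorted(by_operator.items()):
    print(f"{op}: mean variance {sum(deltas) / len(deltas):+.1f}% over {len(deltas)} runs")
```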

Alert Volume as a Meta-Signal

Here is a metric almost no dashboard tracks, and almost every operation should. How many alerts fired this week, in which categories, and how many auto-resolved versus how many required human intervention?

Alert volume is a meta-signal. A sudden spike in critical stock alerts at one location suggests a consumption shift or a transfer pattern change. A rising count of transfer quantity discrepancies points to either a receiving problem or a dispatch problem. Consumption anomaly alerts clustering around a specific product line hint at a BOM that no longer matches reality. The alerts themselves solve individual problems. The pattern across alerts tells a structural story that no single KPI can.

A well-designed system deduplicates its alerts so the same situation does not spawn ten warnings, and it auto-resolves alerts when conditions clear. The remaining signal (the alerts that fire, persist, and require someone to do something) is one of the cleanest indicators of where operational attention is actually going. If a plant has 200 active alerts and half of them are the same three items stuck in the same three states, the right response is not better alerting. It is a process change.
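
Deduplication usually hangs on a natural key, and auto-resolution on re-checking the condition. A minimal sketch of both, with the key shape and resolution states as assumptions:

```python
active_alerts = {}    # keyed by (category, item, location): one alert per situation
resolved_alerts = []  # history feeding the meta-signal: auto vs. human counts


def raise_alert(category, item, location, message):
    """Dedupe on a natural key so the same situation never spawns ten warnings."""
    active_alerts.setdefault((category, item, location), message)


def resolve(key, how):
    """Move an alert to history, recording whether it cleared itself or needed a person."""
    if key in active_alerts:
        resolved_alerts.append({"key": key, "resolved_by": how})
        del active_alerts[key]


raise_alert("critical_stock", "palm_oil", "Site B", "3 days of cover")
raise_alert("critical_stock", "palm_oil", "Site B", "2.9 days of cover")  # deduped
resolve(("critical_stock", "palm_oil", "Site B"), how="auto")  # condition cleared

auto = sum(1 for a in resolved_alerts if a["resolved_by"] == "auto")
print(f"{len(resolved_alerts)} resolved, {auto} auto-resolved, {len(active_alerts)} still active")
```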

The Metrics That Look Important But Rarely Move Anything

Several KPIs show up on nearly every manufacturing KPI dashboard and rarely drive a decision on their own. Overall equipment effectiveness at the plant level is one of them. Aggregate OEE smooths out exactly the information that matters: which line, which shift, which changeover. The plant-level number changes by tenths of a percent week to week and means almost nothing in isolation. The per-line, per-shift version is useful. The headline figure is decoration.
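
For reference, OEE is the product of three ratios, and nothing stops it being computed at the grain that matters. A sketch at the line-and-shift grain; all numbers are illustrative:

```python
def oee(runtime_min, planned_min, actual_units, ideal_units, good_units):
    """OEE = availability x performance x quality, computed per line, per shift."""
    availability = runtime_min / planned_min
    performance = actual_units / ideal_units  # ideal = rated output for the runtime
    quality = good_units / actual_units
    return availability * performance * quality


# Per-line, per-shift records: the grain the plant-level average smooths away.
shifts = [
    ("Line 1", "day", oee(430, 480, 840, 900, 830)),
    ("Line 1", "night", oee(300, 480, 560, 900, 520)),  # the problem the aggregate hides
]
for line, shift, score in shifts:
    print(f"{line} / {shift}: OEE {score:.0%}")
```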

Inventory turns at the organization level is another. Turns are meaningful at the SKU level, where they say something about velocity and working capital deployed on a specific item. At the aggregate level, they average together fast movers and dead stock into a number that nobody can act on. The same critique applies to generic "accuracy" metrics. Inventory accuracy reported as "99.2%" across an entire network does not tell a warehouse manager which count to run next. The inventory-to-book variance on specific items, sorted by absolute value, does.
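
That sorted variance list is one line of code away from the raw counts. A sketch, field names hypothetical:

```python
counts = [
    {"item": "palm_oil", "book_qty": 250, "counted_qty": 231},
    {"item": "lye", "book_qty": 400, "counted_qty": 399},
    {"item": "fragrance", "book_qty": 120, "counted_qty": 152},
]

# Sort by absolute variance: the top of this list is the next count to run.
worklist = sorted(counts, key=lambda c: abs(c["counted_qty"] - c["book_qty"]), reverse=True)
for c in worklist:
    delta = c["counted_qty"] - c["book_qty"]
    print(f"{c['item']}: {delta:+d} units vs. book")
```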

The underlying problem is that summary metrics compress information that only becomes useful at a specific granularity. A good ops dashboard design pushes against that compression. It shows totals for context, but the interactive surface (the place a manager clicks) is always at the level where a decision can be made. The spreadsheet-era pattern of rolling up everything into organization-wide averages gets discussed in the piece on why spreadsheet inventory fails at scale, and the same critique applies to dashboards that inherit spreadsheet thinking.

Building a Dashboard That Earns Its Screen

A working production health metrics surface usually fits on one screen. It names the handful of items or products in trouble right now, shows the state of pending transfers and production orders, lists the active alerts grouped by type, and makes every number a link to a place where action can be taken. That is the whole brief.

Everything else (historical trends, compliance reporting, financial rollups) belongs on a separate view. Not because those numbers are useless, but because they operate on a different clock and serve a different audience. Mixing daily operational signals with monthly financial rollups trains people to ignore both.

The best test for any metric on an ops dashboard is the "what do I do now" test. Read the number, picture the action, name the person who takes it. If the action is "note it for later" or "discuss at the quarterly review," the metric does not belong on the operational surface. It belongs in a report. Dashboards earn their real estate by driving behavior in the next hour, not by summarizing the last one.

Rethinking the Weekly Meeting

The Monday binder is a symptom. It grew because every new system added its own metrics without anybody auditing what the old ones were still doing there. The fix is not another metric. It is an annual cull. Walk every number on every dashboard and ask: when this number changes, what changes? If nothing, cut it. If the answer is "we have a conversation," push it to a weekly or monthly report and off the operational screen. What remains should be short, urgent, and specific.

Manufacturers that make this shift tend to notice the same thing. Decisions happen faster, not because there is more data, but because the relevant data finally has room to breathe. The questions in the Monday meeting change too. Instead of "what is our OEE trend," it becomes "what did we do about the three critical items flagged Wednesday." The metrics start driving the conversation instead of decorating it.


FalOrb helps manufacturers build operational dashboards that drive decisions, from cascading location health to run-level variance and alert volume analytics. Book a 30-minute walkthrough or email us at [email protected] to see how it applies to your operation. More at falorb.com.