The metrics conversations hotel sales managers actually have are messier than the metrics articles online suggest. The questions aren't "what is RevPAR" but rather "why does the number look fine on this report and bad on that one" and "what do I tell ownership when pace dropped 8% this week." This post answers those harder questions.
Q&A format, organized around the kinds of operational questions that come up in real meetings.
Which metrics actually matter to ownership and asset management?
Five, in roughly priority order: pace versus prior year, segment mix, account-level production, lead conversion by source, and lead response time. The first two are what ownership reviews quarterly. The last three inform the asset manager's questions about why pace looks the way it does.
The 7-essential-metrics piece covers the broader frame.
We're tracking 15 metrics. How do we cut the list?
Cut anything that doesn't have an action attached. For each metric, ask: when this number moves up or down, what do we do differently? Metrics without an action path are decoration. The five-to-seven that survive this filter are the working scoreboard.
Why does our group ADR look healthy when revenue is down?
Probably mix shift. Group ADR going up while group share of room nights drops produces a "healthy ADR, lower revenue" pattern. Always look at ADR change and segment-mix change together. A 6% ADR gain paired with a 25% volume drop is a revenue loss disguised as a rate gain. Historical rate analysis covers more.
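The arithmetic is worth one explicit pass, because the report headline hides it. A minimal sketch, with illustrative numbers (not benchmarks):

```python
# Revenue = ADR x room nights, so a rate gain can't be read
# without the volume change next to it. Numbers are illustrative.
prior_adr, prior_room_nights = 210.00, 1_000
current_adr, current_room_nights = 222.60, 750   # +6% ADR, -25% volume

prior_revenue = prior_adr * prior_room_nights        # 210,000
current_revenue = current_adr * current_room_nights  # 166,950

adr_change = current_adr / prior_adr - 1
volume_change = current_room_nights / prior_room_nights - 1
revenue_change = current_revenue / prior_revenue - 1

print(f"ADR {adr_change:+.1%}, volume {volume_change:+.1%}, "
      f"revenue {revenue_change:+.1%}")
# ADR +6.0%, volume -25.0%, revenue -20.5%
```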
Should we track lead response time? Doesn't the team already know to respond fast?
Yes, track it, and no, the team doesn't actually do it. The industry median lead response time is 48 hours; the top quartile is under 12. Most teams "know" to respond fast and discover, when the metric is tracked, that their actual median is somewhere between 8 and 36 hours. Lead response time as a metric covers why this is the cheapest leverage point in most hotel sales operations.
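Measuring it is cheap. A minimal sketch, assuming you can export lead-creation and first-response timestamps from whatever system holds the leads (the timestamps below are illustrative):

```python
# Median hours from lead creation to first response: the single
# number to put on the weekly scoreboard. Data is illustrative.
from datetime import datetime
from statistics import median

leads = [  # (created, first_response)
    (datetime(2024, 5, 6, 9, 0),  datetime(2024, 5, 6, 11, 30)),
    (datetime(2024, 5, 6, 14, 0), datetime(2024, 5, 7, 16, 0)),
    (datetime(2024, 5, 7, 8, 0),  datetime(2024, 5, 8, 9, 0)),
]

hours = [(resp - created).total_seconds() / 3600 for created, resp in leads]
print(f"median response time: {median(hours):.1f}h")  # 25.0h
```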
Pace dropped 8% this week. What do we tell the asset manager?
Investigate first. Three causes account for most weekly pace drops.
Comp set rate moves. If the comp set raised rates while you held, your relative position improved but absolute pace softened. STR rate index data clarifies this in 10 minutes.
Source mix shift. If a high-converting source (repeat clients, brand referrals) dried up that week, aggregate pace drops while the actual sales process is unchanged. A per-source decomposition, sketched below, surfaces this in minutes.
A specific account or group event. If a major BT account paused for a week or a specific group block firmed differently than expected, the pace move is event-driven, not trend-driven.
The right answer to ownership is the diagnosis, not just the number. "Pace dropped 8% this week. STR shows the comp set raised rates, our index actually improved, and we expect normalization within two weeks" is a useful update.
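For the source-mix case, the decomposition shows which source drove the headline number. A minimal sketch, assuming you can pull room nights on the books by source for the same stay window this year and last (source names and figures are illustrative):

```python
# Attribute the aggregate pace change to individual sources. The
# per-source contributions sum to the headline pace number.
this_year = {"repeat": 180, "brand_referral": 90, "rfp": 140, "direct": 110}
last_year = {"repeat": 250, "brand_referral": 95, "rfp": 130, "direct": 90}

total_last = sum(last_year.values())
for source in last_year:
    contribution = (this_year[source] - last_year[source]) / total_last
    print(f"{source:15s} {contribution:+.1%}")
print(f"{'aggregate':15s} {sum(this_year.values()) / total_last - 1:+.1%}")
# repeat          -12.4%   <- the drop lives in one source,
# brand_referral  -0.9%       not across the funnel
# rfp             +1.8%
# direct          +3.5%
# aggregate       -8.0%
```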
How often should we review metrics?
Four layers.
Daily, by exception. Stuck-opportunity flags, lead-response laggards, anomaly alerts. The DOSM scans these in 5-10 minutes each morning.
Weekly, full review. 30 minutes, every Tuesday or Wednesday, with the team.
Monthly trend overlay. 60 minutes, end of month. Look at all weekly metrics on a rolling 12-week trend.
Quarterly strategic review with asset management. The output of the monthly trends gets framed for ownership.
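Of the four layers, only the daily scan needs tooling. A minimal sketch of the exception flags, assuming an export of open opportunities and unanswered leads; the field names and thresholds are assumptions to tune per property:

```python
# Daily exception scan: stuck opportunities and lead-response laggards.
from datetime import datetime, timedelta

STUCK_AFTER = timedelta(days=14)   # no stage movement in two weeks
RESPONSE_SLA = timedelta(hours=4)  # unanswered-lead threshold

now = datetime.now()
opportunities = [  # illustrative export rows
    {"name": "Acme board retreat", "last_activity": now - timedelta(days=21)},
    {"name": "Q3 sales kickoff",   "last_activity": now - timedelta(days=3)},
]
unanswered_leads = [
    {"name": "Wedding RFP", "created": now - timedelta(hours=9)},
]

stuck = [o["name"] for o in opportunities
         if now - o["last_activity"] > STUCK_AFTER]
laggards = [l["name"] for l in unanswered_leads
            if now - l["created"] > RESPONSE_SLA]

print("stuck:", stuck)        # ['Acme board retreat']
print("laggards:", laggards)  # ['Wedding RFP']
```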
Which metrics should we report to ownership and which should we keep operational?
Ownership-facing: pace versus prior year, segment mix, account production trend, group block value, year-over-year RevPAR index against comp set. Five numbers that fit on one slide.
Operational-only: stuck-opportunity count, lead-by-source breakdowns, salesperson activity, response time medians, follow-up cadence adherence. These inform the team's working week and aren't useful at the ownership level.
How do we benchmark our metrics?
Three benchmark sources: STR for occupancy/ADR/RevPAR (industry standard), annual brand reports for chain-specific benchmarks, and internal year-over-year and rolling-12-week trends. External benchmarks tell you where you stand against the industry; internal trends tell you whether you're improving.
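The internal-trend cut is the easiest to automate. A minimal sketch using pandas, assuming a weekly series for any metric on the working scoreboard (the series is illustrative):

```python
# Rolling 12-week view of a weekly metric, the cut used in the monthly
# trend overlay. The series is an illustrative pace index vs prior year.
import pandas as pd

weekly = pd.Series(
    [0.92, 0.95, 0.91, 0.97, 1.01, 0.99, 1.04, 1.02, 1.06, 1.03, 1.08,
     1.10, 1.07, 1.12],
    index=pd.date_range("2024-01-07", periods=14, freq="W"),
    name="pace_index",
)

trend = weekly.rolling(12).mean()
print(trend.dropna().round(3))  # first value appears once 12 weeks exist
```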
Is forecasting accuracy a metric we should track?
Yes, indirectly. Track forecast vs. actual error over rolling quarters. If your forecast is consistently too optimistic by 8%, that's a forecasting calibration issue. If your forecast is sometimes 5% off and sometimes 15% off, that's a forecasting reliability issue with a different fix.
What not to do. Don't track forecast accuracy at the salesperson level as a performance metric. Tie it to performance evaluation and you get sandbagging: salespeople forecast low so they can beat their own number.
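Separating the two failure modes takes a few lines. A minimal sketch, assuming a history of quarterly forecast-versus-actual revenue (numbers are illustrative):

```python
# Bias (mean signed error) diagnoses calibration; spread (std of the
# error) diagnoses reliability. The two call for different fixes.
from statistics import mean, stdev

pairs = [  # (forecast, actual) by quarter
    (1_080_000, 1_000_000),
    (1_150_000, 1_070_000),
    (1_020_000,   940_000),
    (1_200_000, 1_110_000),
]

errors = [(f - a) / a for f, a in pairs]
bias, spread = mean(errors), stdev(errors)
print(f"bias {bias:+.1%}, spread {spread:.1%}")
# bias +8.0%, spread 0.4% -> a calibration problem, not a reliability one
```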
Should we measure salesperson activity?
Carefully. Activity counts (calls, emails, meetings) are inputs; outcomes (stage progression, conversions, account development) are outputs. Tracking inputs without outcomes produces performative activity. Tracking outcomes without inputs misses the leading indicators.
The working pattern: track outputs as the primary performance metric, with activity as context. A salesperson with low activity but high outcomes is fine. A salesperson with high activity and low outcomes needs coaching, not more activity.
Do AI-generated metrics dashboards work?
The dashboards work; the AI-generated parts are usually a layer on top of standard reporting. The value shows up in anomaly detection (flagging unusual movements in otherwise stable patterns), not in the metrics themselves. Predictive analytics and forecasting covers more on where AI does add value.
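For a sense of what that layer actually does, here is a minimal sketch: a trailing-window z-score on a weekly metric, a simple stand-in for the kind of flag these tools raise (the series is illustrative):

```python
# Flag the latest week if it sits far outside the trailing window.
from statistics import mean, stdev

weekly_pace = [0.98, 1.01, 0.99, 1.02, 1.00, 1.03, 0.97, 1.01, 0.99, 0.87]
window, latest = weekly_pace[:-1], weekly_pace[-1]

z = (latest - mean(window)) / stdev(window)
if abs(z) > 2:  # two standard deviations: worth a human look
    print(f"flag: latest week is {z:+.1f} sigma vs the trailing window")
# flag: latest week is -6.7 sigma vs the trailing window
```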
Where Matrix fits
Matrix ships the core working metrics (pace, segment mix, account production, lead conversion by source, response time) on the standard sales dashboard with appropriate cuts for the DOSM, sales manager, GM, and asset manager. The weekly Sales Readout pulls the ownership-facing subset automatically.
The thing we get right operationally: making the right cuts the default. The salesperson sees their own source conversion; the DOSM sees the team-level view; the asset manager sees the portfolio rollup.
The bottom line
The metrics questions hotel sales managers actually ask are about interpretation and action, not definition. Keeping the metrics list short, attaching an action to each, segmenting properly, and reviewing on a fixed cadence is what separates teams that use metrics from teams that report them.