SFA Data - What to Measure and Why

Every SFA implementation produces data. The question is not whether the system collects information - all of them do - but whether the information being collected is the kind that drives decisions, or the kind that makes dashboards look busy. The distinction between activity data and outcome data is the most important conceptual line in SFA measurement, and most organisations draw it in the wrong place.

Activity data records what reps did: check-ins at outlets, number of calls made, samples distributed, visit duration, promotional materials handed out, photos taken. It is input data. It tells you that effort was expended.

Outcome data records what resulted: orders placed, secondary sales recorded, new outlets activated, numeric distribution coverage achieved, strike rate by territory. It is output data. It tells you whether the effort produced a commercial result.

Both types are generated automatically by a well-configured SFA system. The problem is not availability - it is which type gets attention in management reporting.

Activity data is structurally easier to collect. A check-in is a single button tap. A visit duration is recorded passively by GPS. The system captures it without rep effort and without judgement about quality.

It also looks impressive. A dashboard showing 4,200 outlet check-ins last week, 840 call reports filed, and 98% beat compliance is a satisfying report to receive and to present upward. The numbers are large. The compliance looks high. Nothing in that report signals a problem.

But none of those numbers answer the only question that matters commercially: did the field team generate revenue? A rep can check in at 30 outlets and take zero orders. High activity, zero output. The dashboard looks identical whether the rep sold effectively or spent the day socialising at outlets and tapping a button on the way out.

Over-indexing on activity data creates a specific failure mode: managers reward visible effort rather than commercial output, and reps optimise for the metrics being measured. If check-in count is what gets reviewed in the performance conversation, reps will maximise check-ins. If visit duration is tracked, reps will manage their time at each outlet to hit the target duration. Neither behaviour necessarily produces orders.

Field sales studies show this pattern consistently. Organisations that measure activity as a proxy for performance tend to see activity metrics improve while revenue metrics stagnate - because the field team is optimising for the proxy, not the outcome. This is Goodhart’s Law applied to sales management: when a measure becomes a target, it ceases to be a good measure.

The dangerous version is when this pattern persists for 12–18 months without being identified, because the dashboards always show green. By the time declining revenue forces a reassessment, the measurement culture is entrenched and difficult to change.

A well-designed SFA measurement framework arranges data in three tiers:

Tier 1: Outcome Metrics

These are the metrics that measure commercial results directly. They are the primary management focus.

  • Secondary sales per visit: units sold by the outlet or ordered from the distributor per rep visit. The clearest measure of whether a visit produced a transaction.
  • Strike rate: orders placed as a percentage of outlets visited. A rep visiting 25 outlets and placing 18 orders has a 72% strike rate. This is the core measure of visit quality (see the sketch after this list).
  • Numeric distribution: the percentage of outlets in a territory stocking a given SKU. This measures whether the field team is building genuine market presence, not just visiting existing accounts.
  • Coverage rate: the percentage of the defined outlet universe visited within the beat cycle. Coverage that doesn’t exist can’t generate revenue.
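
The arithmetic behind these outcome metrics is simple enough to sketch in a few lines. The Python below is a minimal illustration, assuming a hypothetical visit record with an outlet ID and an order quantity - it is not a reference to any particular SFA system's schema.

```python
from dataclasses import dataclass

@dataclass
class Visit:
    outlet_id: str
    units_ordered: int  # 0 means the visit produced no order

def strike_rate(visits: list[Visit]) -> float:
    """Orders placed as a percentage of outlets visited."""
    if not visits:
        return 0.0
    orders = sum(1 for v in visits if v.units_ordered > 0)
    return 100.0 * orders / len(visits)

def secondary_sales_per_visit(visits: list[Visit]) -> float:
    """Units sold or ordered per rep visit."""
    if not visits:
        return 0.0
    return sum(v.units_ordered for v in visits) / len(visits)

def coverage_rate(visited_ids: set[str], outlet_universe: set[str]) -> float:
    """Percentage of the defined outlet universe visited in the beat cycle."""
    return 100.0 * len(visited_ids & outlet_universe) / len(outlet_universe)

# The worked example from the text: 25 outlets visited, 18 orders -> 72%.
visits = [Visit(f"O{i:03d}", 12 if i < 18 else 0) for i in range(25)]
print(strike_rate(visits))                # 72.0
print(secondary_sales_per_visit(visits))  # (18 * 12) / 25 = 8.64
```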

Tier 2: Leading Indicators

These metrics don’t measure outcome directly but are reliably predictive of outcome when used correctly.

  • Visit frequency by outlet tier: whether A, B, and C-class outlets are being visited at their prescribed frequency. Declining visit frequency for A-class outlets is an early warning of future revenue decline (see the sketch after this list).
  • Time on territory: hours spent in active territory coverage versus travel and administration. Low time on territory is a structural constraint on outcome regardless of rep quality.
  • Outlet coverage trends: whether the active outlet universe is growing, stable, or contracting. A shrinking covered outlet base means shrinking revenue opportunity.
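
As a rough sketch of how a tier-frequency check might work, the Python below assumes hypothetical prescribed frequencies per tier and a simple visit log; real beat norms and data models will differ by organisation and channel.

```python
from collections import Counter

# Hypothetical prescribed visits per beat cycle by outlet tier.
PRESCRIBED = {"A": 4, "B": 2, "C": 1}

def frequency_exceptions(visit_log: list[tuple[str, str]]) -> list[str]:
    """Flag outlets visited below their tier's prescribed frequency.

    visit_log holds one (outlet_id, tier) pair per visit in the cycle.
    Outlets with zero visits never appear in the log, so a real check
    would also join against the outlet master.
    """
    counts = Counter(outlet for outlet, _ in visit_log)
    tiers = dict(visit_log)
    return [o for o, t in tiers.items() if counts[o] < PRESCRIBED[t]]

log = [("O001", "A"), ("O001", "A"), ("O002", "C"), ("O003", "B")]
print(frequency_exceptions(log))  # ['O001', 'O003'] - both under-visited
```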

Tier 3: Activity Data

Activity data belongs at the bottom of the hierarchy - useful for exception detection and compliance verification, but never as a primary performance measure.

  • Check-ins and call reports: verify presence, not performance. Use for beat compliance monitoring, not output assessment.
  • Samples distributed: useful for compliance and budget tracking in regulated categories; meaningless as a sales performance metric.
  • Photos and survey responses: operational verification data. Good for retail execution audits; irrelevant to revenue management.

A common assumption in early-stage SFA deployments is that more data is better. It isn’t. Data volume without data quality creates noise that is worse than silence, because it creates false confidence.

Fifty accurate, current outlet records - with correct contact details, validated GPS coordinates, accurate tier classification, and recent purchase history - are more operationally valuable than 500 records that are partially duplicate, geographically mislocated, or years out of date. The 500-record database looks more impressive. The 50-record database drives better decisions.

Data quality has four dimensions (see the sketch after this list):

  • Accuracy: does the record reflect reality? Is the outlet still open, at that location, purchasing in that category?
  • Freshness: when was the record last validated? Outlet data degrades at approximately 15–20% per year in active trade channels as outlets open, close, relocate, and change ownership.
  • Completeness: are the fields that matter for decision-making populated? An outlet record without a tier classification cannot be used for beat planning.
  • Consistency: is the same outlet represented once in the system, consistently named, with a history that can be traced over time?
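
Two of these dimensions - completeness and freshness - lend themselves to automated row-level checks; accuracy and consistency need ground-truth audits and duplicate detection across the database. A minimal sketch of the row-level checks, with illustrative field names:

```python
from datetime import date

# A hypothetical outlet record; field names are illustrative only.
record = {
    "outlet_id": "O001",
    "tier": "A",
    "gps": (6.5244, 3.3792),
    "last_validated": date(2024, 1, 15),
}

def quality_flags(rec: dict, max_age_days: int = 90) -> list[str]:
    """Return the quality dimensions a single record fails."""
    flags = []
    # Completeness: the fields beat planning depends on must be populated.
    for field in ("tier", "gps", "last_validated"):
        if not rec.get(field):
            flags.append(f"incomplete: {field}")
    # Freshness: stale records need revalidation against ground truth.
    validated = rec.get("last_validated")
    if validated and (date.today() - validated).days > max_age_days:
        flags.append("stale: past freshness standard")
    return flags

print(quality_flags(record))  # flags the record once 90 days have passed
```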

Organisations that treat data quality as an IT problem rather than a commercial operations problem will have low-quality data regardless of which system they use.

The most common misuse of SFA data is at the manager level. A sales manager who opens a dashboard every morning, reviews the activity metrics, notes that check-in compliance is 94%, and closes the laptop has not used SFA. They have confirmed that their reps are pressing a button.

Genuine use of SFA data at the management level means:

  • Reviewing strike rate by rep and by territory weekly, and investigating when it drops
  • Identifying outlets that haven’t placed an order in 30 days and directing rep attention proactively (see the sketch after this list)
  • Comparing secondary sales per visit across reps working similar territories to identify coaching opportunities
  • Monitoring coverage rate trends to catch territory gaps before they become revenue gaps
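
The 30-day lapsed-outlet check in particular is mechanical enough to sketch. The Python below assumes a hypothetical mapping of outlet IDs to last order dates:

```python
from datetime import date, timedelta

def lapsed_outlets(last_order: dict[str, date],
                   as_of: date, window_days: int = 30) -> list[str]:
    """Outlets with no order inside the window, oldest lapse first."""
    cutoff = as_of - timedelta(days=window_days)
    lapsed = {o: d for o, d in last_order.items() if d < cutoff}
    return sorted(lapsed, key=lapsed.get)

last_order = {
    "O001": date(2024, 4, 20),
    "O002": date(2024, 3, 2),  # longest lapsed - first in line for attention
    "O003": date(2024, 6, 1),  # ordered recently - not flagged
}
print(lapsed_outlets(last_order, as_of=date(2024, 6, 10)))  # ['O002', 'O001']
```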

The difference is not in the data available - both the button-checker and the performance manager have access to the same system. The difference is in which questions they ask of it.

Data governance is the operational discipline that keeps SFA data accurate and useful over time. It requires:

Ownership: every outlet record, every beat plan, every tier classification has a named owner who is accountable for its accuracy. Unowned data degrades unchallenged.

Freshness standards: defined maximum ages for key data types. Outlet GPS coordinates reviewed quarterly. Tier classifications reviewed at territory redesign cycles (typically annually). Purchase history reviewed continuously.

Audit cycles: periodic sampling reviews where a manager or operations analyst validates a random sample of outlet records against ground truth. This catches systematic data entry errors before they propagate.

Exception alerting: automated flags when outlet records haven’t been updated within the freshness standard, or when secondary sales data hasn’t been captured for an outlet that should have been visited.
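
A minimal sketch of what such an alerting pass might look like, assuming hypothetical field names and freshness standards:

```python
from datetime import date

# Hypothetical freshness standards in days per data type; the text
# suggests, for example, quarterly review of GPS coordinates.
FRESHNESS_STANDARDS = {"gps": 90, "tier": 365}

def exception_alerts(outlets: list[dict], as_of: date) -> list[str]:
    """Emit one alert line per governance exception found."""
    alerts = []
    for o in outlets:
        for field, max_age in FRESHNESS_STANDARDS.items():
            checked = o.get(f"{field}_checked")
            if checked is None or (as_of - checked).days > max_age:
                alerts.append(f"{o['outlet_id']}: {field} past freshness standard")
        # Visited per the beat plan but no secondary sales captured:
        # either a data capture failure or a zero-order visit to review.
        if o.get("visited") and not o.get("sales_captured"):
            alerts.append(f"{o['outlet_id']}: visited, no sales data captured")
    return alerts

outlets = [{"outlet_id": "O001", "gps_checked": date(2023, 1, 1),
            "tier_checked": date(2024, 1, 1), "visited": True,
            "sales_captured": False}]
for line in exception_alerts(outlets, as_of=date(2024, 6, 10)):
    print(line)  # gps stale; visited with no sales data captured
```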

When outcome data is reliable and current, it changes the quality of decisions made above the field level:

Territory design: secondary sales per outlet, coverage rates by zone, and rep productivity data inform territory boundary decisions that are grounded in actual commercial activity rather than geographic intuition.

SKU ranging: if secondary sales data shows which SKUs are consistently ordered across high-performing outlets and which are chronically underordered, ranging decisions can be made from evidence. Products that aren’t selling through in field channels can be identified and addressed before they become a distributor stock problem.

Distributor performance review: comparing secondary off-take data across distributors covering comparable territory profiles produces an objective performance ranking. Underperforming distributors can be identified, investigated, and managed with data rather than with perception.

Promotional effectiveness: if SFA captures whether promotional materials were deployed at the outlet and records the purchase in the same visit, the system has the data to measure whether the promotion drove incremental orders. This closes a measurement loop that is almost impossible to close without field-level data collection.
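
As a sketch of closing that loop, the Python below compares order rates between visits with and without promotional deployment. The record layout is hypothetical, and the comparison is naive - outlets receiving promotions may differ systematically from those that don’t - so the result is directional evidence, not a controlled measurement.

```python
def order_rate(group: list[dict]) -> float:
    """Share of visits in the group that produced an order, as a percentage."""
    return 100.0 * sum(v["ordered"] for v in group) / len(group) if group else 0.0

def promo_lift(visits: list[dict]) -> float:
    """Percentage-point gap in order rate: promo visits vs non-promo visits."""
    with_promo = [v for v in visits if v["promo"]]
    without_promo = [v for v in visits if not v["promo"]]
    return order_rate(with_promo) - order_rate(without_promo)

# 20 promo visits (14 orders) vs 20 non-promo visits (10 orders).
visits = ([{"promo": True, "ordered": True}] * 14
          + [{"promo": True, "ordered": False}] * 6
          + [{"promo": False, "ordered": True}] * 10
          + [{"promo": False, "ordered": False}] * 10)
print(promo_lift(visits))  # 70.0 - 50.0 = 20.0 point lift
```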

The consistent thread is that SFA data is only as useful as the decisions it informs. Measuring more things is not the goal. Measuring the right things accurately, and building the management habits to act on them, is what determines whether an SFA investment generates a return.