SFA Vendor Evaluation - What to Test in a Demo

A standard product demo tells you what a vendor wants you to know about their software. It is conducted in a clean sandbox environment, follows a rehearsed script, showcases the features that look best on screen, and avoids the scenarios where the product struggles. If the evaluation process consists only of vendor-led demos, the resulting selection decision will reflect the quality of the vendor’s sales team, not the quality of the software.

A structured evaluation framework changes this dynamic. By defining specific test scenarios before any demo begins and requiring every vendor to demonstrate the same scenarios, you can compare platforms on the dimensions that actually matter for your operations.

Vendors design demos around their product's strengths. Features that look impressive in a presentation but create friction in daily field use - complex navigation, multi-step order capture, dashboards that require export before they are useful - will be minimized or skipped entirely. The demo rep controls the pacing, knows which screens to avoid, and will redirect when a scenario exposes a weakness.

A custom evaluation scorecard inverts this. You define the scenarios, you score each platform against the same criteria, and you require the vendor to demonstrate your workflows rather than theirs.

Start by documenting the five to eight workflows that represent the daily reality of your field team. These should come from your sales managers and reps, not from IT or procurement. Common workflows to include:

  • Daily rep beat execution from login to last visit log-out
  • Order capture for a product with minimum order quantities and promotional pricing
  • Manager exception review: identifying reps who have not met beat compliance thresholds
  • A visit to an outlet with no connectivity

Score each vendor on a consistent scale (1-5 or pass/fail) for each scenario, and define the scoring criteria in advance so that every evaluator applies the same standard.
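
To keep scoring mechanical once the demos start, the scorecard can be as simple as a small data structure pairing each scenario with its pre-agreed criteria and an optional weight. The sketch below is one illustration, assuming the 1-5 scale mentioned above; the scenario name, weight, and vendor name are invented examples, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    criteria: str        # scoring criteria agreed before any demo begins
    weight: float = 1.0  # optional weighting if some workflows matter more

@dataclass
class VendorScore:
    vendor: str
    scores: dict = field(default_factory=dict)  # scenario name -> (score, weight)

    def add(self, scenario: Scenario, score: int) -> None:
        if not 1 <= score <= 5:
            raise ValueError("score must be on the agreed 1-5 scale")
        self.scores[scenario.name] = (score, scenario.weight)

    def weighted_total(self) -> float:
        total = sum(s * w for s, w in self.scores.values())
        weights = sum(w for _, w in self.scores.values())
        return total / weights if weights else 0.0

# Illustrative usage with an invented scenario and vendor
offline = Scenario("Offline visit and sync", "Full visit completed with radios off", weight=2.0)
vendor_a = VendorScore("Vendor A")
vendor_a.add(offline, 4)
print(vendor_a.weighted_total())  # 4.0
```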

The Scenarios Every SFA Evaluation Should Test

1. Offline Visit Execution and Sync

This is the single most important scenario to test for organizations operating in markets with variable connectivity. The evaluator should disable internet access on the test device before the scenario begins and complete a full visit workflow - check in, complete required tasks, capture an order, check out - entirely offline. Then reconnect and verify that the data syncs accurately.

What to watch for: does the offline experience match the online experience, or is it a degraded fallback? Can orders be captured offline with the full product catalogue and pricing, or does the catalogue require connectivity to load? How long does sync take, and are conflicts flagged when multiple reps sync simultaneously?
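
To make the expected offline behaviour concrete, the sketch below models the outbox pattern common in offline-first clients: every action is queued locally under a client-generated ID and replayed on reconnect, and anything the server rejects is kept for review rather than discarded. This is a simplified illustration of what to verify during the test, not any vendor's implementation; all names here are invented.

```python
import uuid
from datetime import datetime, timezone

class OfflineOutbox:
    """Queues records captured offline and replays them on reconnect."""

    def __init__(self):
        self.pending = []

    def capture(self, record_type: str, payload: dict) -> None:
        self.pending.append({
            "id": str(uuid.uuid4()),  # client-generated so offline records never collide
            "type": record_type,
            "payload": payload,
            "captured_at": datetime.now(timezone.utc).isoformat(),
        })

    def sync(self, server) -> list:
        # Replay queued records; keep anything the server rejects
        # (e.g. a version conflict) for manual review.
        conflicts = []
        for record in list(self.pending):
            if server.accept(record):
                self.pending.remove(record)
            else:
                conflicts.append(record)
        return conflicts

class FakeServer:
    def accept(self, record) -> bool:
        return True  # a real server would detect conflicting edits here

outbox = OfflineOutbox()
outbox.capture("visit_checkin", {"outlet": "O-1"})
outbox.capture("order", {"outlet": "O-1", "lines": [("SKU-001", 24)]})
unresolved = outbox.sync(FakeServer())  # reconnect and replay
assert not outbox.pending and not unresolved
```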

2. Daily Beat Execution

Start the scenario from the beginning of a rep's working day. Ask the system to show the rep's planned beat for the day, navigate to the first outlet, complete a structured visit including required tasks, and move to the next outlet. Repeat for three to four outlets.

What to watch for: how many taps does it take to log a visit? Can the rep see their progress against the day’s plan? What happens if the rep visits an outlet not on the scheduled beat? Is the beat sequence visible without connectivity?

3. Order Capture with MOQs and Pricing Tiers

Configure a scenario where an outlet places an order that would violate a minimum order quantity and another that qualifies for a promotional scheme. The rep should attempt to enter both and the system should respond appropriately.

What to watch for: how the system handles MOQ enforcement - does it block the order, warn the rep, or silently allow it? How are promotional price lists applied? Can a rep see order history for the outlet before entering a new order?
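
The sketch below pins down the two rules this scenario exercises - MOQ enforcement and a quantity-triggered promotional price - so every platform's behaviour can be compared against the same expectation. The catalogue data, thresholds, and the choice to hard-reject a below-MOQ line are invented test fixtures, not a recommendation.

```python
# Invented test fixtures for the order-capture scenario
CATALOGUE = {
    "SKU-001": {"moq": 12, "unit_price": 10.0},
}
PROMOS = {
    "SKU-001": {"min_qty": 48, "discount": 0.10},  # 10% off at 48+ units
}

def validate_line(sku: str, qty: int) -> dict:
    product = CATALOGUE[sku]
    if qty < product["moq"]:
        # The behaviour to observe in the demo: block, warn, or silently allow?
        return {"status": "rejected", "reason": f"below MOQ of {product['moq']}"}
    price = product["unit_price"]
    promo = PROMOS.get(sku)
    if promo and qty >= promo["min_qty"]:
        price *= 1 - promo["discount"]
    return {"status": "accepted", "unit_price": round(price, 2),
            "total": round(price * qty, 2)}

print(validate_line("SKU-001", 6))   # violates the MOQ
print(validate_line("SKU-001", 48))  # qualifies for the promotional scheme
```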

4. Manager Dashboard and Exception Reporting

Log in as a sales manager and navigate to the standard daily dashboard. From the dashboard, identify: which reps have not started their beat today, which outlets have not been visited in the last 30 days across the territory, and which reps have strike rates below the target threshold.

What to watch for: how many clicks to reach actionable information? Does the dashboard require export to a spreadsheet for any meaningful analysis? Can the manager drill down from a territory-level metric to an individual outlet record without leaving the system?
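
Each of those three questions reduces to a simple filter over visit and order data, which is a useful baseline when judging how much clicking a dashboard demands. The toy example below shows the underlying queries; the field names and the strike-rate threshold are assumptions, not a real schema.

```python
from datetime import date, timedelta

today = date(2024, 6, 10)  # invented sample data
reps = [
    {"rep": "A", "beat_started": True,  "visits": 9, "orders": 6},
    {"rep": "B", "beat_started": False, "visits": 0, "orders": 0},
]
outlets = [
    {"outlet": "O-1", "last_visit": today - timedelta(days=3)},
    {"outlet": "O-2", "last_visit": today - timedelta(days=41)},
]

not_started = [r["rep"] for r in reps if not r["beat_started"]]
stale = [o["outlet"] for o in outlets if (today - o["last_visit"]).days > 30]
STRIKE_TARGET = 0.70  # orders per visit; threshold is illustrative
low_strike = [r["rep"] for r in reps
              if r["visits"] and r["orders"] / r["visits"] < STRIKE_TARGET]

print(not_started, stale, low_strike)  # ['B'] ['O-2'] ['A']
```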

5. ERP and DMS Integration

If integration with an ERP or distributor management system is in scope, require the vendor to demonstrate a live or near-live integration scenario. An order captured in SFA should be visible in the connected system within a defined timeframe.

What to watch for: is the integration pre-built or custom? What data flows in both directions? Who owns the integration maintenance? What happens to field orders when the ERP is temporarily unavailable?
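
One resilience pattern worth asking about explicitly is a durable outbound queue with retries, so that field orders survive an ERP outage instead of disappearing. The sketch below illustrates the idea; the post_fn callable, retry policy, and backoff are all assumptions, not any specific product's design.

```python
import queue
import time

class ErpBridge:
    """Buffers outbound orders so an ERP outage does not lose them."""

    def __init__(self, post_fn, max_retries: int = 5):
        self.outbound = queue.Queue()
        self.post_fn = post_fn  # e.g. an HTTP POST to the ERP endpoint
        self.max_retries = max_retries

    def submit(self, order: dict) -> None:
        self.outbound.put((order, 0))

    def drain(self) -> None:
        # Replay queued orders; back off and requeue on failure.
        while not self.outbound.empty():
            order, attempts = self.outbound.get()
            try:
                self.post_fn(order)
            except ConnectionError:
                if attempts + 1 < self.max_retries:
                    time.sleep(2 ** attempts)  # exponential backoff
                    self.outbound.put((order, attempts + 1))
                else:
                    # A real bridge should dead-letter and alert here,
                    # never drop the order silently.
                    raise
```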

6. Coverage Reporting and Export

Run a standard coverage report for a territory and export it. Then attempt to filter the same report by outlet tier and re-export. Ask about custom report building capabilities and whether it requires IT involvement or can be done by a business administrator.
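
As a baseline for what "filter and re-export" should feel like, the short pandas sketch below is roughly the ceiling on effort; the column names are assumptions about how coverage data might be shaped.

```python
import pandas as pd

coverage = pd.DataFrame({  # invented sample coverage data
    "outlet": ["O-1", "O-2", "O-3"],
    "tier": ["A", "B", "A"],
    "visited_this_cycle": [True, False, True],
})
coverage.to_csv("coverage_full.csv", index=False)
coverage[coverage["tier"] == "A"].to_csv("coverage_tier_a.csv", index=False)
```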

7. Adding a New Outlet from the Field

A rep encounters an outlet that is not in the system. The evaluator attempts to add the outlet from the mobile device - with and without connectivity.

8. Administration and Maintenance

Log in as a system administrator and demonstrate: adding a new rep, assigning the rep to a territory, adjusting a beat plan, and adding a new product to the catalogue. This tests implementation and maintenance overhead.

Mobile Usability in the Field

A feature that exists but cannot be used by a rep with moderate smartphone literacy in a fast-paced field environment provides no practical value. Evaluate the mobile interface specifically for:

  • Number of taps required to complete a standard visit
  • Screen readability in bright outdoor light
  • Whether the interface works in the language and script used by reps in the field
  • Behaviour when the device is low on battery or storage

If possible, put the mobile app in front of two or three actual field reps - not tech-comfortable early adopters - and watch them attempt a standard visit without instruction. The points where they hesitate or fail reveal UX friction that no screenshot will.

Reference Checks

Request references from deployments that are similar to yours in scale, market type, and integration complexity. Ask references:

  • What problems did you encounter in the first 90 days that you did not anticipate?
  • What feature did not work as demonstrated during the evaluation?
  • How responsive is vendor support when something breaks in the field?
  • Would you choose this vendor again, and why?
Red Flags

Treat any of the following as a warning sign during the evaluation:

  • The vendor declines to demonstrate offline mode on an actual device with connectivity disabled
  • The manager dashboard requires data export to be useful for daily decisions
  • The vendor cannot provide a reference in a market similar to yours
  • Integration is described as “straightforward” without a specific technical explanation
  • The evaluation team cannot get access to a sandbox environment to test independently

A structured evaluation is more work than watching a demo. It is also the difference between selecting a system that works in your field environment and selecting one that looked good in a conference room.