This page is written for the person actually signing the contract: typically a marketing leader or a procurement partner running point on a six-figure annual purchase. The category is young enough that contract terms are still negotiable. Most buyers do not realise this and leave value on the table.
Must-haves
If the tool does not do these, walk.
- Coverage of all five major engines, with documented sampling cadence.
- Per-prompt citation log exportable to CSV.
- Prompt-set portability: your prompts, in and out.
- Locale support that matches your markets (engine × locale × prompt; the sketch after this list shows the grid).
- A documented methodology for citation extraction and deduplication.
- A named customer success contact, not a shared inbox.
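The engine × locale × prompt requirement is easiest to enforce if you treat coverage as a grid and diff it against the vendor's export each period. A minimal sketch, assuming the export can be reduced to (engine, locale, prompt) tuples; every name below is a placeholder, not a real vendor field:

```python
from itertools import product

# Placeholder inputs: substitute your own engines, markets, and prompt set.
ENGINES = ["engine_a", "engine_b", "engine_c"]
LOCALES = ["en-US", "en-GB", "de-DE"]
PROMPTS = ["best crm for smb", "crm pricing comparison"]

# Every cell you are paying to have sampled.
REQUIRED = set(product(ENGINES, LOCALES, PROMPTS))

def coverage_gaps(sampled):
    """Return required cells with no sample this period.

    `sampled` is assumed to be an iterable of (engine, locale, prompt)
    tuples derived from the vendor's export.
    """
    return REQUIRED - set(sampled)
```

Any non-empty result is a conversation with your customer success contact, not a dashboard curiosity.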
Should-haves
Strong indicators of a serious tool, but acceptable to defer to year two.
- Integration with your CMS or content briefing tool.
- Sentiment / answer-content analysis, not only citation presence.
- API access at a tier you can actually afford.
- A pricing model that does not punish you for tracking more prompts.
Don’t-haves
Marketing red flags.
- “Real-time” claims with no sampling cap disclosed.
- A custom proprietary metric that does not roll up to citation rate or answer share (both defined in the sketch after this list).
- A single-engine focus, sold as the whole category.
- Pricing tied to “queries” without a per-query unit cost.
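For reference, the two base metrics are simple enough to pin down, which is why a proprietary metric that cannot be expressed in their terms deserves suspicion. A sketch under one common set of definitions; vendors vary on the details, so confirm theirs in writing:

```python
def citation_rate(answers, brand_domain):
    """Share of sampled answers that cite the brand at least once.

    `answers` is assumed to be a list of answers, each represented as
    the list of domains cited in that answer.
    """
    cited = sum(1 for citations in answers if brand_domain in citations)
    return cited / len(answers) if answers else 0.0

def answer_share(answers, brand_domain):
    """Brand citations as a share of all citations across sampled answers."""
    total = sum(len(citations) for citations in answers)
    ours = sum(citations.count(brand_domain) for citations in answers)
    return ours / total if total else 0.0
```

If a vendor's headline metric cannot be decomposed into something like these two quantities, ask what it is actually measuring.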
Five clauses worth negotiating
Most vendors will concede on these to close. Most buyers do not ask.
- Data ownership and portability. Your prompts, your citation log, exportable on demand and on churn. Get this in writing.
- Minimum-quantity (MoQ) flexibility. Default contracts assume a fixed prompt count. Negotiate a quarterly true-up against actual usage.
- Methodology change notice. If the vendor changes how they extract or dedup citations, you should be notified 30 days in advance and given a re-baseline option. Without this clause, your historical numbers can be silently invalidated.
- Engine-coverage SLA. A new engine that crosses 5% of category usage should be added within a defined window (90 days is reasonable). The tool you buy in Q1 is not the tool the market needs in Q4.
- Termination for cause. If citation accuracy falls below an agreed threshold (you will need a sampling test; see the sketch after this list), termination should be a remedy. Vendors will resist; press anyway, because once accuracy goes you are paying for noise.
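On the sampling test: it does not need to be elaborate to be enforceable. A minimal sketch of a spot-check, assuming both sides can be reduced to dicts keyed by (prompt, engine, locale) with the citation set as the value; the data shapes here are assumptions, not any vendor's actual export format:

```python
import math
import random

def sample_accuracy(tool_records, ground_truth, n=200, seed=0):
    """Estimate citation-log accuracy by manually verifying n random records.

    `tool_records`: the vendor's export. `ground_truth`: what you observed
    by re-running the same prompts yourself (both shapes assumed).
    """
    if not tool_records:
        raise ValueError("nothing to sample")
    keys = random.Random(seed).sample(sorted(tool_records), min(n, len(tool_records)))
    hits = sum(1 for k in keys if tool_records[k] == ground_truth.get(k))
    p = hits / len(keys)
    # Normal-approximation 95% interval; coarse, but fine for a contract test.
    hw = 1.96 * math.sqrt(p * (1 - p) / len(keys))
    return p, (max(0.0, p - hw), min(1.0, p + hw))
```

Agree in the contract on the sample size, the verification procedure, and whether the threshold applies to the point estimate or the interval's lower bound.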
Sample RFP questions
Drop these into your RFP verbatim:
- List the engines and locales you currently sample. State the cadence and any rate-limit caveats.
- Describe how citations are extracted from each engine’s response, including how you handle ambiguous, embedded, or footnote-style citations.
- How do you handle URL canonicalisation and deduplication? At what level: domain, eTLD+1, or exact URL? (A worked example follows below.)
- How do you handle a prompt where the answer has changed mid-day? Do we get one record or two?
- What is your re-baseline policy if your extraction or dedup logic changes?
- How does the tool integrate with our content briefing or CMS workflow? Provide three customer references currently using this integration in production.
- What is the export format for the citation log, and is it self-service or vendor-mediated?
- What is the delete policy for our prompt set after termination?
A vendor who balks at these questions is telling you something useful.
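On the canonicalisation question specifically: the three levels produce materially different numbers, so the answer changes what your citation rate means. A sketch of the distinction, using the third-party tldextract library for the eTLD+1 case; this illustrates the levels, it is not any vendor's pipeline:

```python
from urllib.parse import urlsplit
import tldextract  # third-party; fetches the public suffix list on first use

def canonical(url, level="etld+1"):
    """Reduce a cited URL to the chosen deduplication level."""
    parts = urlsplit(url)
    host = (parts.hostname or "").lower()
    if level == "exact":
        return host + parts.path   # scheme dropped; query and fragment ignored
    if level == "domain":
        return host                # full hostname, subdomains kept distinct
    return tldextract.extract(host).registered_domain  # blog.x.co.uk -> x.co.uk

def dedup(citations, level="etld+1"):
    """Collapse a citation list to its unique canonical forms."""
    return sorted({canonical(u, level) for u in citations})
```

At eTLD+1, blog.vendor.co.uk and www.vendor.co.uk count as one source; at exact-URL level they stay distinct. Whichever level the vendor uses directly changes your counts, so get it stated in the RFP response.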
A note on price
Mid-market tooling sits in the $30k–$120k annual range as of early 2026. Below $30k you are buying a hobbyist tool that will not survive a procurement review. Above $120k you should be buying a configurable platform, not a SaaS dashboard, and you should expect to negotiate aggressively.
Adjacent reading
- For the artefact set the tool should maintain, see brand vault.
- For mapping vendors to buyer profiles, see vendor matrix.
- For the live tool ranking, see /rankings/ai-visibility-tools.