12 Questions to Ask Before You Buy

The best time to ask hard questions about an AI tool is before you've signed up, integrated your data, and built workflows around it. After that, switching costs make you a captive customer.
Here's a checklist of 12 questions that actually matter. Run every serious purchase through it.
Why Most Evaluations Fail
Most people evaluate AI tools on one dimension: can it do the thing in the demo?
That's necessary but not sufficient. You also need to know:
- What happens when it fails?
- What does it cost at scale?
- How hard is it to leave?
- Who owns the outputs it generates?
These questions don't get asked because the vendor doesn't volunteer them — and buyers are caught up in the excitement of what the tool can do.
The 12 Questions
1. What specific problem does this solve, and what does that problem cost me today?
Write down, in one sentence, what this tool does for you. Then calculate your current cost or effort for that same task without the tool. If you can't articulate the before/after clearly, you're not ready to buy. This isn't a vendor question — it's a homework question. Do it before the first demo.
2. What does onboarding actually involve, and who does the work?
Ask for the full onboarding timeline: kickoff call, data connection, configuration, testing, go-live. Who does the work — you, their team, or a partner? How long does it realistically take to get value? 'Up in 5 minutes' is usually accurate for the login screen. Getting it to work well takes longer.
3. What does this cost at 3x and 10x my usage?
AI pricing often scales aggressively. A tool that costs $99/month at your current usage can balloon to $800/month if you actually rely on it. Ask for the pricing table at 3x and 10x your projected volume. Ask what triggers a tier upgrade and whether there are usage caps or throttling below the limit.
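Projecting the 3x and 10x figures is back-of-the-envelope arithmetic worth doing before the sales call. The sketch below assumes a hypothetical tiered price list (the tier boundaries, overage rates, and the $99 base fee are made-up illustrative numbers, not any real vendor's pricing):

```python
# Hypothetical tiered pricing, for illustration only:
# the base fee covers the first 1,000 units/month,
# then overage is metered at a per-unit rate per tier.
BASE_FEE = 99.00
TIERS = [
    (1_000, 0.00),         # included in the base fee
    (5_000, 0.05),         # $0.05/unit for units 1,001-5,000
    (float("inf"), 0.08),  # $0.08/unit beyond 5,000
]

def monthly_cost(units: int) -> float:
    """Base fee plus metered overage accumulated across tiers."""
    cost, prev_cap = BASE_FEE, 0
    for cap, rate in TIERS:
        in_tier = max(0, min(units, cap) - prev_cap)
        cost += in_tier * rate
        prev_cap = cap
        if units <= cap:
            break
    return round(cost, 2)

current = 1_000  # your projected monthly volume (assumed)
for multiple in (1, 3, 10):
    units = current * multiple
    print(f"{multiple:>2}x usage ({units:>6} units): ${monthly_cost(units):,.2f}/month")
```

Swap in the vendor's actual tier table and your own volume projections; the point is to see the curve, not the exact dollars.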
4. What happens when it's wrong?
Every AI tool makes mistakes. Ask: how do errors surface (do you see them, or do they silently fail)? What's the process to report and fix errors? Is there a human review option? Does the tool include confidence scores so you know when to verify outputs? The vendor's answer here tells you a lot about how honest they are about their own product.
5. Who owns the outputs?
If the tool writes copy, creates designs, generates code, or produces content — who owns that output? Most standard ToS give you ownership of outputs, but some terms are murkier. Check especially if you're in a regulated industry or if intellectual property rights matter for your use case. This is a ToS review question, not a sales question.
6. Is my data used to train your models?
The honest answer from most enterprise AI vendors is 'no, with a signed DPA.' The honest answer from most consumer or prosumer AI tools is 'yes, by default unless you opt out' — and the opt-out may not be retroactive. Check the privacy policy and the Data Processing Agreement. If either document is missing or inaccessible, assume your data trains the model.
7. How do I cancel, and what happens to my data afterward?
Ask before you start: how do I cancel? How much notice is required? What happens to my data after I cancel? How long is it retained? Can I export everything? If the cancellation path is unclear or convoluted, assume it's designed to be. The best vendors make cancellation as easy as signup — because they're confident you'll stay.
8. What is your uptime history, and what does the SLA guarantee?
If this tool becomes part of your workflow, its downtime is your downtime. Ask for their status page URL and look at the last 90 days of uptime data. Ask what the SLA is on their paid plan and what compensation you get for missed SLAs. No published uptime history = they're not confident in their reliability.
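It helps to translate SLA percentages into concrete downtime before deciding whether a number is acceptable. A minimal sketch (99.0%, 99.9%, and 99.99% are common SLA tiers used here for illustration, not any specific vendor's commitment):

```python
def allowed_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of downtime a given uptime SLA permits over a billing period."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return round(total_minutes * (1 - sla_percent / 100), 1)

for sla in (99.0, 99.9, 99.99):
    mins = allowed_downtime_minutes(sla)
    print(f"{sla}% uptime allows ~{mins} min of downtime per 30-day month")
```

"Three nines" sounds impressive until you see it permits roughly 43 minutes of outage a month, possibly during your busiest hour.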
9. Can I talk to a reference customer with a use case like mine?
Not the polished case studies on their website — actual customers who use the tool for something close to your use case. Ask them: what doesn't work well? What took longer than expected? Would you buy it again? Vendors who won't connect you with references are hiding something. Vendors who can't find references for your use case may not have solved your problem yet.
10. Can you show me the integration working live, with real data?
Don't ask 'do you integrate with [Tool X]?' — ask to see the integration work in a live demo with real data. Ask: is it bidirectional? Does it handle errors and retries? Is it maintained by the vendor or by a third-party connector service? An integration that breaks silently can corrupt your data without you knowing.
11. Which roadmap features are committed, and which are aspirational?
If you're buying partly based on upcoming features, separate the roadmap into two buckets: committed (contractually guaranteed by a specific date) and aspirational (planned but subject to change). Don't pay today for aspirational features. If a feature is truly critical to your use case, get a contractual commitment or wait until it ships.
12. Why do implementations fail for customers like me?
This question makes bad vendors uncomfortable and good vendors appreciative. Ask: what are the most common reasons implementations fail with customers like me? What would you do differently with a customer in my situation? What percentage of customers who sign up don't achieve the outcomes they expected? Vendors who can answer this honestly have learned from failures — that's a green flag.
The Pre-Buy Checklist (Print This)
Before signing any AI tool contract, confirm you've answered:
- I can state the specific problem this solves in one sentence
- I know the full cost at 3x my current usage
- I've seen it run on my actual messy data, not just demo data
- I know exactly how to cancel and export my data
- I've checked: is my data used for model training?
- I've spoken with at least one reference customer in a similar situation
- I know what "wrong" looks like and how errors surface
- I've reviewed the ToS for output ownership
- I know the onboarding timeline and who does the work
- Any roadmap features I'm depending on are contractually committed
If you can't check all 10 boxes, you're not done evaluating yet.
Key Takeaway
The point of these questions isn't to find a reason to say no. It's to make sure that when you say yes, you've said yes to the tool you'll actually be using — not the one that looked good in the demo.