
How to Choose an AI Consulting Firm: 10 Questions Every CEO Should Ask

Enterprise AI consulting is now a multibillion-dollar market. Most of that spend is wasted on the wrong firm. These ten questions will help you tell the difference.

By Breyon Bradford

Co-Founder & CEO, SynthesisArc

From

SynthesisArc Strategy

April 12, 2026 · 11 min read

Enterprise AI consulting has grown into a multibillion-dollar market in a handful of years. [1] Most of that money is not buying results. It is buying presentations, methodology decks, and implementation timelines that slip while the engagement scope quietly expands.

Here is the uncomfortable truth about the AI consulting market: most firms are paid for time and deliverables, not outcomes. A firm that solves your problem in three months makes less money than one that takes twelve. Think about that incentive structure for a moment. The system rewards slowness. Your results are a side effect, not the goal.

The good firms exist. They are not always the biggest or the most famous. They are the ones that can answer these ten questions with specificity and without hesitation. If a firm hedges on more than two of them, walk away.

The 10 Questions

Question 1: Can You Show Me a Deployment You Built That Is Still Running Two Years Later?

Anyone can ship a pilot. The hard part is building something that survives contact with production, scales to real volume, and keeps running after your team leaves. Ask for references from clients where the deployment is live today, not from pilots that concluded six months ago.

Listen for specificity. Real answers include system names, production metrics, and the specific technical decisions that made it maintainable. Vague answers about successful engagements that delivered significant value should make you skeptical.

Question 2: What Does Your Governance Layer Look Like, and Who Owns It After You Leave?

AI without governance is liability without protection. Every serious AI consulting firm has a governance methodology. More importantly, they have a plan for transferring governance ownership to your team when the engagement ends.

If the answer involves a retainer to maintain the governance system, that is a dependency, not a solution. The governance layer should be yours to operate within 90 days of deployment.

Question 3: How Do You Measure ROI, and What Happens If You Miss It?

Any firm can promise ROI in a proposal. Ask what the measurement methodology is, what the specific metrics are, what timeline the ROI is expected by, and what the firm will do if those numbers are not met. [2]

Firms that are confident in their work will negotiate outcome-linked fees or offer extensions at reduced rates if milestones are missed. Firms that insist on fixed-fee-regardless-of-outcome arrangements are telling you something about their confidence in their own results.

Question 4: Which Parts of This Will My Team Be Able to Run Without You in Six Months?

This is the dependency question, and it is the most revealing one. The answer tells you whether the firm is building toward your independence or against it.

Excellent firms answer with a specific capability transfer plan: which team members will be trained, on which systems, to what level of proficiency, by what date. They want you to be able to run it because that is what makes their deployment successful.

Firms that cannot answer this question clearly are building a retainer, not a solution. Your dependency is their recurring revenue.

Question 5: What Is Your Position on Generative vs. Deterministic AI for Regulated Decisions?

This question tests technical depth. Any firm advising enterprises on AI should have a clear, defensible position on when to use generative AI and when to use deterministic AI. The EU AI Act makes this question regulatory, not just architectural.

If the firm advocates generative AI for high-risk decisions without a clear governance and auditability plan, they are either uninformed about the regulatory landscape or optimizing for what is easiest to build rather than what is right for your organization.

"The most expensive AI consulting mistake is hiring a firm that builds what they are best at rather than what you need. The diagnostic question is simple: do they ask about your workflows before they mention their platform?"

- SynthesisArc, Strategy practice

Question 6: What Is Your Firm's Relationship With the AI Vendors You Recommend?

Consulting firms earn referral fees, implementation partner bonuses, and co-marketing arrangements from AI vendors. These relationships are not inherently corrupt, but they create conflicts of interest that should be disclosed.

Ask directly: do you receive any financial benefit from recommending specific AI platforms? A trustworthy firm will disclose this without being defensive about it. They should also be able to explain why the platform they are recommending is right for your specific situation, not just right for their partnership arrangement.

Question 7: How Do You Handle It When the Right Answer Is to Not Use AI?

The best AI consulting firms sometimes recommend not using AI. Some processes are better served by better software, better data, or better hiring. A firm that always recommends AI is not advising you. They are selling you.

Ask for an example of a situation where they recommended against AI deployment, or recommended a smaller deployment than the client initially wanted, and what the outcome was. If they cannot name one, they have never put a client's interest ahead of their revenue.

Question 8: What Is Your Methodology for Assessing Organizational Readiness Before Deployment?

MIT's 95% pilot failure rate is not a technology problem. [3] It is a readiness problem. Firms that skip the readiness assessment to get to the billable implementation work are optimizing for their engagement revenue, not your outcomes.

Look for firms that have a formal readiness assessment methodology covering data infrastructure, process clarity, technical capability, governance posture, and change management capacity. If the readiness assessment is a two-hour workshop before the proposal, that is not an assessment. That is a sales call with a different name.

Question 9: How Do You Ensure Our Data and Models Remain Ours?

AI sovereignty is a strategic business issue, not just a legal one. The consulting firm's answer to this question reveals their philosophy about client independence.

Look for: explicit contractual language vesting all AI artifacts, trained models, and derived data insights in your organization; architecture choices that enable vendor portability; and a clear position on data residency and model exportability.

Firms that build on proprietary platforms without portability provisions are not acting in your long-term interest. The lock-in may not feel painful in year one. It becomes very painful in year three when the vendor raises prices or the platform does not evolve in the direction you need.

Question 10: What Will This Engagement Cost, What Will It Deliver, and How Will I Know?

This sounds basic. Yet most AI consulting firms cannot answer it clearly at the proposal stage. The cost is clear. The deliverables are vague. The measurement criteria are absent.

Push for: a fixed or capped cost for a defined scope, specific deliverables with acceptance criteria, business metrics that will be tracked (not just deployment milestones), and a timeline with milestones, not just a start and end date.

Red flags in the proposal

Time-and-materials pricing with no cap, deliverables described as "strategy and recommendations" rather than working systems, success metrics that are measured by the consulting firm rather than by your business results, and a Phase 2 that is referenced in the Phase 1 proposal are all signs of an engagement designed to extend rather than resolve.

What a Good Answer Looks Like

Real names. Real numbers. Real timelines. That is the only kind of answer worth trusting from a firm asking for six or seven figures of your budget.

Good answers to these ten questions share a few characteristics. They are specific. They involve real client names, real numbers, real timelines. They acknowledge tradeoffs rather than claiming pure upsides. They include examples of things that did not go as planned and what was learned.

Good answers also reveal a consistent orientation: the firm is building toward your independence, not their ongoing dependency. They are measuring success by your business outcomes, not by their deployment milestones. They are advising you toward what is right, not toward what they are best positioned to build.

The Scorecard

After your conversations with candidate firms, score each one: 1 (poor answer), 2 (acceptable but vague), or 3 (specific, credible, confident) on each of the ten questions. Maximum score is 30.

  • Score 25 to 30: Strong candidate. Do due diligence on references before signing.
  • Score 18 to 24: Proceed with caution. Negotiate specific outcome commitments into the contract.
  • Score 12 to 17: Significant concerns. Consider whether the firm is the right fit or whether the gaps reflect fundamental philosophy differences.
  • Score below 12: Walk away. The warning signs are too significant regardless of the pitch quality.
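The rubric above is simple enough to total by hand, but for teams evaluating several firms it can be sketched as a small helper. This is an illustrative sketch: the function name and band labels are ours, while the per-question scale (1 to 3), the ten-question count, and the thresholds come directly from the scorecard.

```python
# Illustrative scorecard helper. Only the 1-3 scale, the ten-question
# count, and the score bands (25-30, 18-24, 12-17, below 12) come from
# the article; the function name and verdict wording are hypothetical.

def score_firm(scores):
    """Total a firm's ten question scores and map the total
    to the article's recommendation bands (max score 30)."""
    if len(scores) != 10 or any(s not in (1, 2, 3) for s in scores):
        raise ValueError("Expected ten scores, each 1, 2, or 3")
    total = sum(scores)
    if total >= 25:
        verdict = "Strong candidate: do reference due diligence before signing"
    elif total >= 18:
        verdict = "Proceed with caution: negotiate outcome commitments"
    elif total >= 12:
        verdict = "Significant concerns: re-examine fit"
    else:
        verdict = "Walk away"
    return total, verdict

# Example: a firm that answers most questions with specificity
total, verdict = score_firm([3, 3, 2, 3, 2, 3, 2, 3, 3, 2])
print(total, "-", verdict)  # 26 - Strong candidate: ...
```

Scoring several candidate firms with the same helper makes the gaps between them explicit rather than impressionistic.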

How SynthesisArc Answers These Questions

We built this list around the questions we would want answered before hiring us. They are also the questions we ask ourselves before accepting an engagement, because we only take projects where we can deliver against them.

Our engagements are scoped to 90-day delivery cycles with defined business metrics. Our governance frameworks are designed to be operated by your team within that window. Our deployments run on architectures that you own, with models you can export, on infrastructure you control.

We sometimes recommend not using AI. We sometimes recommend smaller projects than clients initially want. That orientation is what makes our results trustworthy.

Ready to ask us the ten questions? We will answer every one with specificity, or tell you honestly why we are not the right fit.

Ask Us the 10 Questions

The Bottom Line

AI consulting is mature enough that the right firm exists for almost every enterprise use case. The challenge is distinguishing it from the dozens of firms that have added AI to their service menu without the depth to deliver.

These ten questions do not require technical expertise to evaluate. They require judgment about specificity, accountability, and orientation. A firm that answers them confidently and specifically has done this before and has thought hard about what makes it work.

A firm that hedges, generalizes, or gets defensive has not. Trust that signal. [4]

References

  [1] IDC. Worldwide AI and Generative AI Spending Guide. Market sizing of the enterprise AI services and consulting segment. IDC, 2025.
  [2] Harvard Business Review. Research and analysis on consulting engagement structures and client outcome alignment. HBR, 2025.
  [3] MIT NANDA Initiative. "The GenAI Divide: The State of AI in Business 2025." 95% of generative AI pilots fail to reach production; organizational readiness as primary cause. MIT, 2025.
  [4] Harvard Business Review. Research on AI program failures and the role of organizational readiness. HBR, 2025.
  [5] Forrester Research. Research on AI services vendor evaluation and outcome accountability frameworks. Forrester, 2025.
  [6] McKinsey & Company. "The State of AI." Documents characteristics of high-performing AI engagements versus median outcomes. McKinsey, November 2025.
  [7] Gartner. Research on AI consulting services evaluation frameworks and delivery model assessment. Gartner, 2025.

Published by

SynthesisArc Strategy

Our strategy division publishes executive-level analysis on AI markets, competitive positioning, and the economics of AI transformation.

Enterprise AI strategy for the C-suite.
