When AI Starts Doing What Consultants Billed You $1 Million to Deliver

By M. Mahmood | Strategist & Consultant | mmmahmood.com

TL;DR/Summary

Every executive reading this has an exploratory or renewal conversation coming with a major consulting firm. Before that meeting happens, you need to understand what has changed in the last twelve months, because the leverage in that room has shifted permanently.

AI agents are now doing a significant share of the analytical and synthesis work that consulting firms have billed at partner rates for decades. That is not a prediction. It is a documented operational reality at the firms themselves. The question you face is not whether AI is disrupting consulting. The question is whether you are capturing the margin it frees up, or whether your consulting vendor is.


The Consulting Industry Is Running the Same Play You Are

McKinsey deployed its internal generative AI platform, Lilli, firm-wide in July 2023. By the firm's own telemetry, 72 percent of its 45,000 professionals now use Lilli monthly, generating over 500,000 prompts. The platform saves an estimated 50,000 consulting labor hours every month, and decks that once required two days of junior analyst work now emerge in under three hours. McKinsey's CEO has stated publicly that the firm saved 1.5 million hours in a single year through AI-assisted search and synthesis alone.


That is not augmentation theater; it is a structural reduction in the cost of producing the work you pay for. And McKinsey is not alone. Bloomberg reported on May 1, 2026 that the firm now plans to use AI agents to staff client engagements, supplementing the professional development employees who have matched consultants to projects for decades. EY launched enterprise-scale agentic AI across 160,000 audit engagements and more than 130,000 staff globally, with full end-to-end audit support expected by 2028. Accenture laid off over 11,000 employees in an AI-focused restructuring while simultaneously growing its AI and data specialist headcount to 77,000 and tying promotions directly to AI tool adoption.

These firms are building AI-powered delivery engines while enterprise buyers are still paying headcount-based rates for the output.


The Market Is Starting to See It

The enterprise buyer is catching up. HFS Research surveyed 1,002 senior executives across 16 industries and 14 countries and found that 65 percent of enterprises say traditional consulting models fail to deliver real value, and 83 percent say AI-powered consulting delivers greater value than the traditional approach. Yet while 49 percent of consulting contracts today are still structured around headcount, only 16 percent of leaders expect to use that model within two years.

Gartner's consulting segment revenue declined approximately 13 percent in a single quarter, and the firm forecast total 2026 revenue below analyst expectations. Its stock dropped more than 22 percent on the news. The declared reason was that enterprises are building internal capability and using AI to handle planning and performance evaluation work that once went to external advisors.

Forbes covered the structural mechanics directly: McKinsey CEO Bob Sternfels confirmed plans to cut 25 percent of non-client-facing staff while boosting client-facing headcount by 25 percent, under a "25 squared" plan. As of January 2026, McKinsey employed approximately 40,000 humans and 25,000 AI agents, and it expects the number of AI agents to reach parity with human employees by the end of 2026. Despite the reduction in non-client staff, the firm reported that productivity on that side actually increased by about 10 percent. Deloitte, EY, and KPMG have made similar structural moves. The business model is repricing itself in real time, yet the next statement of work (SOW) on your desk almost certainly does not reflect that.


What AI Can and Cannot Replace in a Consulting Engagement

I spent years on the buyer side of large enterprise AI and technology decisions. I have reviewed vendor proposals, run internal AI assessments, and watched consulting teams bill six figures to produce outputs that an AI agent could now draft in a fraction of the time. I have also watched those same AI-generated outputs get the context wrong in ways that cost more to fix than the original engagement would have.

The honest picture looks like this.

What AI agents now do credibly:

  • Research synthesis and literature review at institutional depth
  • Benchmarking and competitive landscape assembly
  • First-pass financial modeling and scenario framing
  • Slide deck generation from source data
  • RFP and proposal drafting
  • Process documentation and gap analysis
  • Stakeholder interview synthesis

What AI agents cannot yet replace:

  • Navigating the internal politics that kill good recommendations
  • Coaching a board through a decision they have already emotionally made
  • Building cross-functional trust during a restructuring
  • Bringing liability and reputational accountability to a recommendation
  • Translating context-specific institutional history into strategy

BCG published research in March 2026 estimating that 50 to 55 percent of US jobs will be reshaped by AI over the next two to three years, with junior consulting roles particularly exposed as AI absorbs the execution-heavy analytical work that historically justified large entry-level cohorts.

The implication for enterprise buyers is precise: you should not be paying partner rates for work that a well-configured AI agent now produces in hours. You should be paying premium rates for the work that requires judgment, accountability, and trusted human relationships. Those are genuinely different things, and most contracts do not separate them.


The Real Disruption Is Not AI Replacing Consultants — It Is AI Replacing the Need for Consensus Theater

Here is the thing that no executive will say out loud, and no consulting firm will volunteer.

A significant share of large consulting engagements exists not because the enterprise does not know the answer, but because the answer is politically inconvenient, and an outside firm provides cover for a decision leadership has already made. An AI agent cannot do that. It cannot absorb accountability for a restructuring recommendation. It cannot provide the reputational legitimacy a board needs when approving a $300 million transformation.

As Fortune reported in March 2026, boards are still more willing to listen to advice from McKinsey than from an AI model, and they still want a named firm to implement AI agents. That social function does not compress easily. But the analytical infrastructure underneath that social function is being hollowed out. Business Insider reported in March 2026 that developers are building open-source AI agent skills explicitly modeled on McKinsey consultant workflows, available directly in developer toolchains. PromptQL, built on open-source unicorn Hasura, already deploys AI analysts that perform tasks traditionally handled by data scientists and consultants, continuously learning and adapting as they run.

The consensus floor on what justifies a consulting engagement is dropping. What required a team six months ago now requires a prompt and a workflow. That means the work you should still pay consulting rates for is narrower, more specific, and more accountable than most engagement letters reflect.


Why the Big Four Are Betting on the Alliance Model

OpenAI announced its Frontier Alliances in February 2026, entering multi-year partnerships with McKinsey, BCG, Accenture, and Capgemini to deploy its enterprise agent platform. CNBC reported that the consulting firms will help clients redesign workflows, integrate AI agents, and manage change. BCG and McKinsey are positioned as strategy and operating model partners. Accenture and Capgemini take the systems integration and data architecture roles. Anthropic has formed equivalent deals with Deloitte, Accenture, and Cognizant, and per Fortune is reportedly in talks with Blackstone to implement Claude-based solutions across portfolio companies.

This matters because enterprise buyers are now navigating a market where consulting firms are simultaneously your advisor on AI strategy and a distribution channel for AI vendors who want access to your infrastructure and data. That is a conflict of interest worth naming explicitly in the next engagement scoping conversation.

If your consulting partner is helping you deploy a vendor platform they are formally partnered to sell, you need outcome-based contractual protections that separate advisory independence from implementation incentives. The AI vendor evaluation framework developed here gives you the specific questions to ask before any vendor-adjacent advisory engagement is signed.


What This Means for CFOs Managing AI and Consulting Spend Together

The structural shift creates a specific CFO problem that most finance organizations are not yet framing correctly: AI spend and consulting spend are now the same budget conversation. A Futurum survey of 830 IT decision-makers in early 2026 found that direct financial impact nearly doubled as the primary AI ROI metric, while productivity as the primary justification collapsed 5.8 percentage points. Enterprise buyers have matured: they want P&L proof, not hours saved.

That maturity should extend to consulting contracts. The CFOs who win this negotiation are not the ones who cut consulting spend; they are the ones who restructure it around measurable outcomes rather than hours and headcount.

Three principles apply:

  • Separate analytical work from judgment work in every SOW. Analytical deliverables now have a computable AI-equivalent cost. If a firm is charging senior rates for a market benchmarking study that an AI platform can produce for under $5,000, that line item should be renegotiated or removed. The judgment, context-setting, and implementation accountability work has no comparable AI floor yet.
  • Build outcome milestones with teeth. KPMG research found that 65 percent of UK organizations say they would continue AI investment regardless of tangible ROI. That tolerance for non-accountability should not extend to your consulting spend. Milestone payments, IP transfer clauses, and defined exit conditions are now standard practice in sophisticated engagements. They should be non-negotiable when the firm is using AI to produce deliverables at a fraction of historical cost.
  • Require disclosure of AI tool usage in delivery. If a firm is using AI to produce work you are paying human rates for without disclosing that, you are subsidizing their margin expansion. Some firms have already faced scrutiny for undisclosed AI use in deliverables. A simple clause requiring disclosure of material AI tool use in any deliverable is reasonable and overdue.

The AI cost allocation framework published here applies directly to this problem: if consulting spend is not producing measurable outcomes at cost-equivalent value, it belongs in the same rationalization exercise as any other AI-era expense.


The Firms That Lose First

Junior consultants and associate-level analysts are the first category to absorb displacement. The research and synthesis functions that justify large associate cohorts are exactly what McKinsey's Lilli, Accenture's internal platforms, and open-source agent toolkits already perform. BCG's own March 2026 research acknowledges that short-term entry-level hiring volumes will decrease as AI absorbs execution-heavy tasks.

Firms that built their leverage model on the ratio of junior to senior staff will see margins compress as that ratio collapses. The pyramid pricing architecture, in which firms charge senior rates for strategy and subsidize delivery with high-volume junior work, is mathematically unsustainable when AI removes the junior volume.

The second category to lose is pure-research advisory firms. Gartner's consulting revenue fell 13 percent in a single quarter and the stock dropped 22 percent on the guidance revision; the declared driver was enterprises using AI to internalize the research and benchmarking function. As one analyst noted, when clients can get 80 percent of a firm's output from an AI agent at 5 percent of the cost, the subscription research model faces a structural challenge that a product pivot cannot fully solve.

The third category is mid-market generalist consultancies with no proprietary data, no technology platform, and no industry-specific accountability model. They occupy the space where AI agents are most competitively priced and most capable.


The 90–180 Day Playbook: What to Do Before the Next Engagement

This is a sequenced set of actions by role. Run them before you sign or renew any consulting contract of material size, especially anything above the typical $250,000 mark.

CFO (Days 0–30): Audit the last three engagements:

Pull the last three consulting SOWs. For each deliverable, ask whether an AI agent could produce a comparable first draft at current capability levels. If the answer is yes for more than 40 percent of line items, request itemized repricing before the next renewal conversation. Use your AI vendor consolidation framework logic to apply the same rationalization discipline to consulting spend that you would to any software vendor.
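The audit above reduces to a simple screening exercise. Here is a minimal sketch of it in Python; the deliverable names, fees, and AI-replicable flags are hypothetical inputs you would supply from your own SOW review, and the 40 percent threshold comes from the playbook step above, not from any standard tool.

```python
# Illustrative SOW audit: flag engagements where too many deliverables
# have a credible AI-drafted equivalent. All figures below are hypothetical.

REPRICE_THRESHOLD = 0.40  # playbook rule: >40% AI-replicable line items

def audit_sow(deliverables):
    """deliverables: list of (name, fee_usd, ai_replicable) tuples."""
    replicable_count = sum(1 for _, _, flag in deliverables if flag)
    share = replicable_count / len(deliverables)
    exposed_fees = sum(fee for _, fee, flag in deliverables if flag)
    return {
        "ai_replicable_share": round(share, 2),
        "exposed_fees_usd": exposed_fees,
        "request_repricing": share > REPRICE_THRESHOLD,
    }

# Hypothetical engagement: three of five line items are AI-replicable.
sow = [
    ("Market benchmarking study", 180_000, True),
    ("Competitive landscape deck", 120_000, True),
    ("Stakeholder interview synthesis", 90_000, True),
    ("Board workshop facilitation", 150_000, False),
    ("Implementation accountability / PMO", 210_000, False),
]

result = audit_sow(sow)
print(result)
# 60% of line items are AI-replicable, so repricing is requested before renewal.
```

The output of the audit is deliberately two numbers, not one: the share of exposed line items triggers the renegotiation, while the exposed fee total tells you how much is actually on the table.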

CIO / CAIO (Days 15–60): Run an internal capability gap assessment:

Identify the workflows your consulting partners address most frequently. Map each to internal AI agent capacity. For workflows where internal agents are within 80 percent of consulting output quality, the engagement scope should shrink and the savings should be renegotiated explicitly. For the remainder, the consulting engagement should be restructured around judgment and accountability, not analytical production. Reference your AI workforce manager playbook to understand how to redesign those workflows correctly.
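The capability-gap mapping can be sketched the same way. In this hypothetical example, the workflow names and quality scores are placeholders for your own internal assessment; only the 80 percent quality threshold comes from the step above.

```python
# Illustrative capability-gap map: compare internal AI agent output quality
# to consulting output quality per workflow. Scores (0..1) are hypothetical.

QUALITY_THRESHOLD = 0.80  # playbook rule: within 80% of consulting quality

def scope_decision(internal_quality, consulting_quality):
    """Decide whether a workflow stays in the consulting engagement."""
    if internal_quality >= QUALITY_THRESHOLD * consulting_quality:
        return "in-source: shrink engagement scope"
    return "retain: restructure around judgment and accountability"

workflows = {
    "market research synthesis": (0.85, 0.90),
    "financial scenario modeling": (0.70, 0.95),
    "board-level change management": (0.30, 0.90),
}

for name, (internal_q, consulting_q) in workflows.items():
    print(f"{name}: {scope_decision(internal_q, consulting_q)}")
```

Note that the decision is relative, not absolute: an internal agent at 0.85 beats the threshold against consulting quality of 0.90, while 0.70 against 0.95 does not, so that workflow stays with the firm but gets restructured.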

Head of Procurement / CPO (Days 30–90): Rewrite the SOW template:

Add three clauses that do not exist in most enterprise consulting contracts today. First, a material AI use disclosure requirement. Second, milestone-based payment tied to defined business outcomes, not deliverable completion. Third, IP ownership language covering any AI-generated artifacts, data, or models produced during the engagement. The standard contract language was written for a world where consulting output was purely human-generated. It no longer reflects delivery reality.

Board / Governance Committee (Days 60–180): Frame consulting spend as an AI governance question.
If your consulting partners are simultaneously your AI strategy advisors and your AI vendor implementation partners, the conflict of interest belongs in the board's AI governance framework. The AI governance framework for boards provides the checklist. Add one line item: consulting partner AI vendor relationships must be disclosed and reviewed annually against independence standards.


Frequently Asked Questions (FAQ)

Is AI actually replacing management consultants in 2026?

Not fully, and not uniformly. AI is replacing the analytical and synthesis work that junior and mid-level consultants have historically delivered. Senior judgment, stakeholder management, and accountability-bearing advice remain human functions. But the internal cost structure at every major firm has shifted materially, and most enterprise contracts have not yet reflected that shift. Buyers should demand repricing for AI-compressible work while preserving engagement scope for judgment-dependent work.

What should enterprise buyers demand in consulting contracts today?

Three things that are still rare in standard SOWs. First, material disclosure of any AI tool used to produce a deliverable. Second, milestone payments tied to defined business outcomes rather than time and deliverable completion. Third, explicit IP ownership of any AI-generated work product, data model, or prompt architecture created during the engagement.

How do I calculate whether a consulting engagement is worth the spend in the AI era?

Start with this test: for each deliverable in the SOW, estimate the cost of producing a comparable first draft using an AI agent or platform your enterprise already has. If the AI equivalent costs less than 20 percent of the consulting fee for that deliverable, the gap needs to be justified by accountability, not analytical complexity. If it cannot be justified that way, negotiate the fee down or remove the deliverable scope and run it internally.
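That test is plain arithmetic, and it is worth making explicit. A minimal sketch, with hypothetical fee and cost figures; the 20 percent ratio is the threshold stated in the answer above.

```python
# Per-deliverable value test from the FAQ: if an AI-drafted equivalent costs
# less than 20% of the consulting fee, the gap must be justified by
# accountability, not analytical complexity. Figures are hypothetical.

AI_COST_RATIO_THRESHOLD = 0.20

def needs_accountability_justification(consulting_fee_usd, ai_equivalent_cost_usd):
    """True when the fee gap can only be defended on accountability grounds."""
    return ai_equivalent_cost_usd < AI_COST_RATIO_THRESHOLD * consulting_fee_usd

# A $150,000 benchmarking study with a ~$4,000 AI-drafted equivalent:
print(needs_accountability_justification(150_000, 4_000))   # True: renegotiate or in-source
# A $150,000 advisory workstream with a $60,000 AI-assisted cost floor:
print(needs_accountability_justification(150_000, 60_000))  # False: gap is not AI-compressible
```

Run this line by line down the SOW: every deliverable that returns True either gets an explicit accountability justification in writing, a reduced fee, or removal from scope.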


Additional Readings

If you are building an AI strategy inside an enterprise: the practitioner playbook behind this site is AI Strategy: A Practitioner's Guide.

If you are advising clients or building a consulting practice: the complementary resource is The Entrepreneurship Playbook.

Working through an AI consulting selection or governance challenge yourself? MD-Konsult works with executive teams on exactly these decisions.