AI in Business in 2025


State of AI in Business in 2025: The GenAI Divide


Source: MIT NANDA

Aditya Challapally, Chris Pease, Ramesh Raskar, Pradyumna Chari

July 2025

NOTES

Preliminary Findings from AI Implementation Research from Project NANDA

  • Reviewers: Pradyumna Chari, Project NANDA

  • Research Period: January – June 2025

  • Methodology: This report is based on a multi-method research design that includes a systematic review of over 300 publicly disclosed AI initiatives, structured interviews with representatives from 52 organizations, and survey responses from 153 senior leaders collected across four major industry conferences.

  • Disclaimer: The views expressed in this report are solely those of the authors and reviewers and do not reflect the positions of any affiliated employers.

  • Confidentiality Note: All company-specific data and quotes have been anonymized to maintain compliance with corporate disclosure policies and confidentiality agreements, ensure neutrality, and prevent any perception of commercial advancement or opinion.

EXECUTIVE SUMMARY

Despite $30-40 billion in enterprise investment into GenAI, this report uncovers a surprising result: 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide appears to be driven not by model quality or regulation, but by approach.

Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance.

Meanwhile, enterprise-grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.

From our interviews, surveys, and analysis of 300 public implementations, four patterns emerged that define the GenAI Divide:

  • Limited disruption: Only 2 of 8 major sectors show meaningful structural change.

  • Enterprise paradox: Big firms lead in pilot volume but lag in scale-up.

  • Investment bias: Budgets favor visible, top-line functions over high-ROI back office.

  • Implementation advantage: External partnerships see twice the success rate of internal builds.

The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.

A small group of vendors and buyers are achieving faster progress by addressing these limitations directly. Buyers who succeed demand process-specific customization and evaluate tools based on business outcomes rather than software benchmarks. They expect systems that integrate with existing processes and improve over time. Vendors meeting these expectations are securing multi-million-dollar deployments within months.

While most implementations don’t drive headcount reduction, organizations that have crossed the GenAI Divide are beginning to see selective workforce impacts in customer support, software engineering, and administrative functions. In addition, the highest-performing organizations report measurable savings from reduced BPO spending and external agency use, particularly in back-office operations. Others cite improved customer retention and sales conversion through automated outreach and intelligent follow-up systems. These early results suggest that learning-capable systems, when targeted at specific processes, can deliver real value, even without major organizational restructuring.


THE WRONG SIDE OF THE GENAI DIVIDE: HIGH ADOPTION, LOW TRANSFORMATION

Takeaway: Most organizations fall on the wrong side of the GenAI Divide; adoption is high, but disruption is low. Six of the eight sectors we scored show little structural change. Enterprises are piloting GenAI tools, but very few reach deployment. Generic tools like ChatGPT are widely used, but custom solutions stall due to integration complexity and lack of fit with existing workflows.

The GenAI Divide is most visible when examining industry-level transformation patterns. Despite high-profile investment and widespread pilot activity, only a small fraction of organizations have moved beyond experimentation to achieve meaningful business transformation.

3.1 The Disruption Reality Behind the Divide

Takeaway: The GenAI Divide manifests clearly at the industry level. Despite GenAI’s visibility, only two industries (Tech and Media) show clear signs of structural disruption, while the six others remain on the wrong side of transformation.

To better quantify the state of disruption, we developed a composite AI Market Disruption Index. Each industry was scored from 0 to 5 based on five observable indicators:

  1. Market share volatility among top incumbents (2022 to 2025)

  2. Revenue growth of AI-native firms founded after 2020

  3. Emergence of new AI-driven business models

  4. Changes in user behavior attributable to GenAI

  5. Frequency of executive org changes attributed to AI tooling

Exhibit: GenAI Disruption Varies Sharply by Industry

Index Score (0-5)

  • Technology: 3.5

  • Media & Telecom: 2.0

  • Professional Services: 1.5

  • Healthcare & Pharma: 0.5

  • Consumer & Retail: 0.5

  • Financial Services: 0.5

  • Advanced Industries: 0.5

  • Energy & Materials: 0.5

Exhibit: Description of GenAI Disruption

| Industry | Key Signals |
| --- | --- |
| Technology | New challengers gaining ground (e.g., Cursor vs Copilot); shifts in workflows. |
| Media & Telecom | Rise of AI-native content; shifting ad dynamics; incumbents still growing. |
| Professional Services | Efficiency gains; client delivery remains largely unchanged. |
| Healthcare & Pharma | Documentation/transcription pilots; clinical models unchanged. |
| Consumer & Retail | Support automation; limited impact on loyalty or leaders. |
| Financial Services | Backend automation; customer relationships stable. |
| Advanced Industries | Maintenance pilots; no major supply chain shifts. |
| Energy & Materials | Near-zero adoption; minimal experimentation. |

Sensitivity Analysis: We tested alternative weightings for the five disruption indicators. Technology and Media & Telecom maintained top rankings across all reasonable weighting schemes, while Healthcare and Energy remained consistently low.
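The composite scoring and the sensitivity check described above can be sketched in a few lines of Python. This is an illustrative sketch only: the per-indicator values below are hypothetical stand-ins for four of the eight industries, not the report's underlying data.

```python
import itertools

# Hypothetical indicator values (each 0-5) for four of the eight industries.
# These are illustrative stand-ins, not the report's underlying data.
indicators = {
    "Technology":          [4, 4, 4, 3, 3],
    "Media & Telecom":     [2, 3, 2, 2, 1],
    "Healthcare & Pharma": [1, 0, 1, 0, 0],
    "Energy & Materials":  [0, 0, 0, 0, 0],
}

def composite(scores, weights):
    # Weighted average of the five indicator scores, staying on the 0-5 scale.
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def rank(weights):
    # Industries ordered from most to least disrupted under these weights.
    return sorted(indicators, reverse=True,
                  key=lambda ind: composite(indicators[ind], weights))

# Sensitivity check: the top and bottom of the ranking should survive any
# reasonable re-weighting of the five indicators.
baseline = rank([1, 1, 1, 1, 1])
for weights in set(itertools.permutations([3, 2, 1, 1, 1])):
    assert rank(weights)[0] == baseline[0]      # Technology stays on top
    assert rank(weights)[-1] == baseline[-1]    # Energy & Materials stays last
```

Because Technology leads on every individual indicator in this toy data, its top ranking is robust to any positive weighting, which mirrors the report's finding that the top and bottom of the index are stable across weighting schemes.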

One mid-market manufacturing COO summarized the prevailing sentiment:

“The hype on LinkedIn says everything has changed, but in our operations, nothing fundamental has shifted. We’re processing some contracts faster, but that’s all that has changed.”

3.2 The Pilot-to-Production Chasm

Takeaway: The GenAI Divide is starkest in deployment rates. Only 5% of custom enterprise AI tools reach production. Chatbots succeed because they’re easy to try and flexible, but fail in critical workflows due to lack of memory and customization. This fundamental gap explains why most organizations remain on the wrong side of the divide.

Exhibit: The Steep Drop from Pilots to Production for Task-Specific GenAI Tools

| Tool Type | Investigated | Piloted | Successfully Implemented |
| --- | --- | --- | --- |
| General-Purpose LLMs | 80% | 50% | 40% |
| Embedded/Task-Specific | 60% | 20% | 5% |

Research Note: We define “successfully implemented” for task-specific GenAI tools as tools that users or executives report as having a marked and sustained productivity and/or P&L impact.
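The funnel in the exhibit above can also be read as conversion rates between stages, which makes the chasm easier to see. A minimal sketch, using the percentages from the exhibit:

```python
# Funnel percentages from the exhibit above: share of organizations
# reaching each stage, per tool type.
funnel = {
    "General-Purpose LLMs":   {"investigated": 80, "piloted": 50, "implemented": 40},
    "Embedded/Task-Specific": {"investigated": 60, "piloted": 20, "implemented": 5},
}

for tool, f in funnel.items():
    pilot_rate = f["piloted"] / f["investigated"]    # investigation -> pilot
    deploy_rate = f["implemented"] / f["piloted"]    # pilot -> production
    print(f"{tool}: {pilot_rate:.0%} of investigations reach pilot, "
          f"{deploy_rate:.0%} of pilots reach production")
```

Read this way, the gap is concentrated in the pilot-to-production step: 80% of general-purpose pilots convert (40/50), versus only 25% of task-specific pilots (5/20).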

As one CIO put it:

“We’ve seen dozens of demos this year. Maybe one or two are genuinely useful. The rest are wrappers or science projects.”

Five Myths About GenAI in the Enterprise

  1. AI Will Replace Most Jobs in the Next Few Years → Research found limited layoffs from GenAI, and only in industries that are already significantly affected by AI. There is no consensus among executives as to hiring levels over the next 3-5 years.

  2. Generative AI is Transforming Business → Adoption is high, but transformation is rare. Only 5% of enterprises have AI tools integrated in workflows at scale, and 6 of 8 sectors show no real structural change.

  3. Enterprises are slow to adopt new tech → Enterprises are in fact extremely eager to adopt AI; 90% have seriously explored buying an AI solution.

  4. The biggest thing holding back AI is model quality, legal, data, risk → What’s really holding it back is that most AI tools don’t learn and don’t integrate well into workflows.

  5. The best enterprises are building their own tools → Internal builds fail twice as often.

3.3 The Shadow AI Economy: A Bridge Across the Divide

Takeaway: While official enterprise initiatives remain stuck on the wrong side of the GenAI Divide, employees are already crossing it through personal AI tools. This “shadow AI” often delivers better ROI than formal initiatives and reveals what actually works for bridging the divide.

Behind the disappointing enterprise deployment numbers lies a surprising reality: AI is already transforming work, just not through official channels. Our research uncovered a thriving “shadow AI economy” where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.

Exhibit: The Shadow AI Economy – Employee Usage Far Outpaces Official Adoption

  • Companies that have purchased an LLM subscription: 40%

  • Employees who use LLMs regularly for work: 90%

This shadow economy demonstrates that individuals can successfully cross the GenAI Divide when given access to flexible, responsive tools.

3.4 Investment Patterns Reflect the Divide

Takeaway: Investment allocation reveals the GenAI Divide in action. 70% of GenAI budgets go to sales and marketing, but back-office automation often yields better ROI. This bias reflects easier metric attribution, not actual value, and keeps organizations focused on the wrong priorities.

Exhibit: GenAI Investment Distribution by Function

| Function | Sample Use Cases |
| --- | --- |
| Sales & Marketing | AI-generated outbound emails, personalized campaign content, AI-based competitor analysis, smart lead scoring, follow-up automation, social sentiment analysis |
| Operations | Internal workflow orchestration, dynamic resource allocation, document summarization |
| Customer Service | Call summarization and routing, AI-powered chatbots, smart ticket routing |
| Finance & Procurement | Contract classification, process compliance monitoring, supplier risk alerts, lagging AP/AR automation |

A VP of Procurement at a Fortune 1000 pharmaceutical company expressed this challenge clearly:

“If I buy a tool to help my team work faster, how do I quantify that impact? How do I justify it to my CEO when it won’t directly move revenue or decrease measurable costs?”

WHY PILOTS STALL: THE LEARNING GAP BEHIND THE DIVIDE

The primary factor keeping organizations on the wrong side of the GenAI Divide is the learning gap: tools that don’t learn, integrate poorly, or fail to match workflows. Users prefer ChatGPT for simple tasks but abandon it for mission-critical work because it lacks memory. What’s missing are systems that adapt, remember, and evolve: the capabilities that define the difference between the two sides of the divide.

4.1 The Barriers Keeping Organizations Trapped

Takeaway: The top barriers reflect the fundamental learning gap that defines the GenAI Divide: users resist tools that don’t adapt, model quality fails without context, and UX suffers when systems can’t remember.

Exhibit: Why GenAI Pilots Fail: Top Barriers to Scaling AI

(Rated on a 1-10 frequency scale)

  • Unwillingness to adopt new tools: 8.5

  • Model output quality concerns: 8.0

  • Challenging change management: 6.5

  • Poor user experience: 5.0

  • Lack of executive sponsorship: 3.0

This paradox illustrates the GenAI Divide at the user level: the same professionals who use ChatGPT daily for personal tasks demand learning and memory capabilities before they will trust a tool with enterprise work.

4.2 Why Generic Tools Win, and Lose

Takeaway: The GenAI Divide manifests in user preferences: ChatGPT beats enterprise tools because it’s better, faster, and more familiar, even when both use similar models.

Exhibit: User Preference Drivers (Generic LLM Interface vs. Integrated Tool)

  • “The answers are better”: 85%

  • “Already familiar with the interface”: 60%

  • “Trust it more”: 45%

A corporate lawyer at a mid-sized firm exemplified this dynamic:

“Our purchased AI tool provided rigid summaries with limited customization options. With ChatGPT, I can guide the conversation and iterate until I get exactly what I need. The fundamental quality difference is noticeable.”

4.3 The Learning Gap That Defines the Divide

Takeaway: ChatGPT’s very limitations reveal the core issue behind the GenAI Divide: it forgets context, doesn’t learn, and can’t evolve. For mission-critical work, 90% of users prefer humans. The gap is structural: GenAI lacks memory and adaptability.

Exhibit: Barriers to Core Workflow Integration

  • “It doesn’t learn from our feedback.”: 65%

  • “Too much manual context required each time.”: 55%

  • “Can’t customize it to our specific workflows.”: 40%

  • “Breaks in edge cases and doesn’t adapt.”: 30%

When we asked enterprise users to rate different options for high-stakes work, the preference hierarchy became clear.

Exhibit: Perceived Fitness for High-Stakes Work

“Would you assign this task to AI or a junior colleague?”

| Task Type | Human Preferred | AI Preferred |
| --- | --- | --- |
| Complex Projects | 90% | 10% |
| Quick Tasks (emails, summaries) | 30% | 70% |

The dividing line isn’t intelligence; it’s memory, adaptability, and learning capability—the exact characteristics that separate the two sides of the GenAI Divide.

Exhibit: Positioning GenAI Tools by Customization and Learning Capability

|  | Low Memory / Learning | High Memory / Learning |
| --- | --- | --- |
| Low Customization | Copilot, GPT wrappers | ChatGPT w/ memory (beta) |
| High Customization | Internal builds (fragile) | Agentic workflows, vertical SaaS |

CROSSING THE GENAI DIVIDE: HOW THE BEST BUILDERS SUCCEED

Organizations on the right side of the GenAI Divide share a common approach: they build adaptive, embedded systems that learn from feedback. The best startups crossing the divide focus on narrow but high-value use cases, integrate deeply into workflows, and scale through continuous learning rather than broad feature sets.

5.1 What Enterprises Actually Want: The Bridge Across the Divide

When evaluating AI tools, buyers consistently emphasized a specific set of priorities.

Exhibit: How Executives Select GenAI Vendors

(% of interviews where theme was a top-3 priority)

  • The ability to improve over time: 66%

  • Deep understanding of our workflow: 63%

  • A vendor we trust: 55%

  • Flexibility when things change: 48%

  • Clear data boundaries: 40%

  • Minimal disruption to current tools: 35%

Exhibit: Direct Quotes on Executive Vendor Selection

| What They Want | Direct Quotes |
| --- | --- |
| A vendor we trust | “We’re more likely to wait for our existing partner to add AI than gamble on a startup.” |
| Deep workflow understanding | “Most vendors don’t get how our approvals or data flows work.” |
| Minimal disruption | “If it doesn’t plug into Salesforce or our internal systems, no one’s going to use it.” |
| The ability to improve over time | “It’s useful the first week, but then it just repeats the same mistakes. Why would I use that?” |
| Flexibility when things change | “Our process evolves every quarter. If the AI can’t adapt, we’re back to spreadsheets.” |

5.2 The Winning Playbook for Crossing the Divide

Takeaway: Startups that successfully cross the GenAI Divide land small, visible wins in narrow workflows, then expand. Tools with low setup burden and fast time-to-value outperform heavy enterprise builds.

The most successful startups execute two strategies:

  1. Customizing for specific workflows: Embedding in non-critical processes with significant customization, demonstrating clear value, then scaling into core workflows.

  2. Leveraging referral networks: Using channel partnerships, board member referrals, and familiar enterprise marketplaces to overcome trust barriers.

Exhibit: How Leaders Discover GenAI Solutions

  • Existing Vendor Relationships: 35%

    • Existing vendor partnerships: 20%

    • New integrations / partner referrals: 15%

  • Peer Networks: 23%

    • Informal peer recommendations: 13%

    • Board member or advisor referral: 10%

  • Events & Media: 15%

    • Conference demos or panels: 9%

    • Industry publications or webinars: 6%

  • Internal Processes: 15%

  • Cold Inbound Offer: 12%

CROSSING THE GENAI DIVIDE: HOW THE BEST BUYERS SUCCEED

Organizations that successfully cross the GenAI Divide approach AI procurement differently: they act like BPO clients, not SaaS customers. They demand deep customization, drive adoption from the front lines, and hold vendors accountable to business metrics.

6.1 Organizational Design for Crossing the Divide

Takeaway: The right organizational structure is critical for crossing the GenAI Divide. Strategic partnerships are twice as likely to succeed as internal builds.

Exhibit: Team Structures for GenAI Implementation

| Approach | % of Successful Deployments | Description |
| --- | --- | --- |
| Strategic Partnerships (Buy) | 66% | Procure external tools, co-develop with vendors |
| Internal Development (Build) | 33% | Build and maintain GenAI tools fully in-house |
| Hybrid (Build-Buy) | Insufficient data | Internal team co-develops with an external vendor |

Pilots built via strategic partnerships were 2x as likely to reach full deployment as those built internally.

6.2 Buyer Practices That Cross the Divide

Top buyers treated AI startups less like software vendors and more like business service providers. These organizations:

  • Demanded deep customization aligned to internal processes.

  • Benchmarked tools on operational outcomes, not model benchmarks.

  • Partnered through early-stage failures, treating deployment as co-evolution.

  • Sourced AI initiatives from frontline managers, not central labs.

6.3 Where the Real ROI Lives: Beyond the Divide

Takeaway: Organizations that cross the GenAI Divide discover that ROI is often highest in ignored functions like operations and finance. Real gains come from replacing BPOs and external agencies, not cutting internal staff.

Best-in-class organizations are generating measurable value across both front-office and back-office functions:

  • Front-office wins:

    • Lead qualification speed: 40% faster

    • Customer retention: 10% improvement through AI-powered follow-ups

  • Back-office wins:

    • BPO elimination: $2-10M annually in customer service and document processing

    • Agency spend reduction: 30% decrease in external creative and content costs

    • Risk checks for financial services: $1M saved annually on outsourced risk management

CONCLUSION: BRIDGING THE GENAI DIVIDE

Organizations that successfully cross the GenAI Divide do three things differently: they buy rather than build, empower line managers rather than central labs, and select tools that integrate deeply while adapting over time.

The most forward-thinking organizations are already experimenting with agentic systems that can learn, remember, and act autonomously. This transition marks the emergence of an Agentic Web: a persistent, interconnected layer of learning systems that collaborate across vendors, domains, and interfaces.

As enterprises begin locking in vendor relationships and feedback loops through 2026, the window to cross the GenAI Divide is rapidly narrowing. The next wave of adoption will be won not by the flashiest models, but by the systems that learn and remember.

For organizations currently trapped on the wrong side, the path forward is clear: stop investing in static tools, start partnering with vendors who offer custom systems, and focus on workflow integration over flashy demos. The GenAI Divide is not permanent, but crossing it requires fundamentally different choices about technology, partnerships, and organizational design.

APPENDIX

8.1 Acknowledgments

Produced in collaboration with Project NANDA out of MIT. We acknowledge the generous participation of executives who shared their implementation experiences and insights.

8.2 Research Methodology and Limitations

  • Methodology: 52 structured interviews, systematic analysis of 300+ public AI initiatives, and surveys with 153 leaders.

  • Sample Limitations: Our sample may not fully represent all enterprise segments or geographic regions. Selection bias is possible in organizations willing to participate.

  • Methodological Constraints: Industry disruption scores reflect publicly observable patterns. Build vs. buy percentages are based on interview responses. ROI measurements are complicated by concurrent operational improvements.

8.3 Research Instruments

8.3.1 Executive Interview Questionnaire
  1. Strategy and Budget:

    • Has your organization allocated a dedicated budget for GenAI initiatives?

    • Which business functions are currently prioritized?

  2. Buy vs. Build:

    • Do you primarily build internally, partner externally, or take a hybrid approach?

    • What drives that decision?

  3. Pilot to Scale:

    • How many GenAI pilots have been launched since Jan 2024? How many are now deployed?

    • What were the major barriers that stalled scale-up?

  4. Procurement and Evaluation:

    • How do you evaluate potential GenAI vendors or partners?

    • What are the most important selection criteria?

  5. ROI and Outcomes:

    • Have you observed measurable ROI from any GenAI deployment?

    • Which metrics were used?

  6. Workforce and Governance:

    • Have you reduced headcount due to GenAI?

    • Who leads implementation efforts?

8.3.2 Functional Leader / User Interview Questionnaire
  1. Personal Use and Preferences:

    • Do you personally use GenAI tools like ChatGPT? For what tasks?

    • How do they compare to internal GenAI tools?

  2. Enterprise Tool Experience:

    • What GenAI tools have been introduced by your organization?

    • How frequently do you use them? What’s working well? What’s frustrating?

  3. Workflow Fit:

    • Do these tools integrate with your core systems?

    • Do they adapt to your workflow over time or feel static?

  4. Task Type Preferences:

    • For a given use case, would you prefer AI or a human colleague?

    • What kinds of tasks do you trust AI with? What kinds do you avoid?

  5. Adoption Barriers:

    • What stops you or your colleagues from using these tools more often?
