You’ve probably seen the headlines from Fortune, Yahoo Finance, and others: “MIT Study Shows 95% of AI Pilots Fail.” It’s been making the rounds across tech media, spooking investors, and reinforcing every AI skeptic’s worldview. But when you dig into the actual research from MIT’s NANDA project, a different story emerges—one that reveals more about sensationalist journalism than AI failure rates.
When you’re reading something (yes, even this post), you should always question the sources. Who is saying what? What is their agenda? For example, if an oil company is releasing their research on climate impact, you should be skeptical, right?
“I completely trust BP Oil’s research on the impact of their 87-day crude oil spill in the Gulf of Mexico.”
— No One Ever
Who or What is NANDA?
NANDA is a research project at MIT focused on building decentralized AI infrastructure. NANDA stands for Networked Agents and Decentralized AI, which I believe should have the acronym NADA (I’m just being a punk). This is actually important technology that’s going to be essential for the future of agentic AI. Think of it as DNS for AI agents—NANDA is working on what they call the “NANDA Index Quilt.”
What is the NANDA Index Quilt?
“…agents, resources, and tools across platforms, organizations and protocols. Through such an approach, we allow for global interoperability, discoverability, and flexible governance of agents.”
— p.2, Beyond DNS: Unlocking the Internet of AI Agents via the NANDA Index and Verified AgentFacts
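To make the DNS analogy concrete, here’s a minimal sketch of what resolving an agent through an index like this might look like. To be clear: the record fields, names, and resolve function below are my own illustration, not NANDA’s actual schema or API.

```python
from dataclasses import dataclass

# Hypothetical agent record, loosely inspired by the paper's "AgentFacts"
# idea. Every field name here is illustrative, not NANDA's real schema.
@dataclass
class AgentRecord:
    agent_id: str        # stable name, like a domain name
    endpoint: str        # where the agent can be reached right now
    capabilities: list   # what the agent claims it can do
    attestation: str     # signature showing the facts were verified

# A toy in-memory index standing in for a decentralized registry.
INDEX = {
    "translator.example": AgentRecord(
        agent_id="translator.example",
        endpoint="https://agents.example.com/translate",
        capabilities=["translate", "summarize"],
        attestation="sig:abc123",
    ),
}

def resolve(agent_id: str) -> AgentRecord:
    """Look up an agent by name, the way DNS resolves a hostname to an IP."""
    record = INDEX.get(agent_id)
    if record is None:
        raise LookupError(f"No agent registered under {agent_id!r}")
    return record

print(resolve("translator.example").endpoint)
# https://agents.example.com/translate
```

The point of the analogy: your code asks for a stable name and gets back a verified, current location and capability list, regardless of which platform the agent lives on.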
The Methodology Raises Questions
When NANDA’s actual report surfaced, some issues became apparent:
Small Sample Size: Despite claims of analyzing “300 public implementations,” the real methodology reveals just 52 interviews and 153 survey responses. That’s not exactly a comprehensive industry survey (see the back-of-envelope math after this list).
Strict Success Definition: They define “failure” as lacking “rapid revenue acceleration” or “measurable P&L impact.” This excludes efficiency gains, process improvements, cost savings, and capability building—basically any outcome that isn’t immediate revenue growth. For example, I helped a Fortune 100 company with an audit issue. What normally took four people 2-3 months can now be done in under ten minutes with a customized AI agent. Those four people are now free to work on more important things, but apparently that doesn’t count as “success.”
Selective Focus: Their own data shows a 67% success rate for purchased AI solutions and documents companies achieving “$2-10M annually” in savings. Yet somehow this becomes a “95% failure” narrative in the headlines.
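To put that sample size in perspective, here’s a quick back-of-envelope margin-of-error calculation, assuming (generously) a simple random sample and a 95% confidence level:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error for a proportion at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (52, 153, 52 + 153):
    print(f"n = {n:>3}: +/- {margin_of_error(n):.1%}")
# n =  52: +/- 13.6%
# n = 153: +/- 7.9%
# n = 205: +/- 6.8%
```

In other words, a headline quoted to the single percentage point (“95%”) implies a precision that samples this small can’t support—and that’s before you even get to selection bias or the strict success definition.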
What Other Research Is Showing
While NANDA’s small sample painted a mixed picture, larger, more comprehensive studies tell a different story:
- Deloitte’s State of GenAI Report
- McKinsey’s Global AI Survey
- Boston Consulting Group’s study
These studies, with much larger sample sizes and longer observation periods, suggest the reality is far more positive than the clickbait headlines imply.
What the Data Actually Shows
Here’s what NANDA’s research actually reveals:
- 90% of employees regularly use AI tools for work (the “shadow AI economy”)
- External partnerships achieve 67% success rates vs 33% for internal builds
- Multiple documented cases of companies achieving millions in measurable savings
- Main barriers are organizational, not technological
This paints a picture of widespread AI adoption with clear best practices emerging, not the catastrophic failure narrative that made headlines.
To be honest, most of what I initially thought was bias was actually something simpler: lazy journalism. NANDA’s research, while flawed, isn’t portraying a catastrophe. They’re documenting what they call the “GenAI Divide,” where some organizations succeed and most struggle with implementation.
The catastrophe narrative came from media outlets that grabbed one statistic (95% don’t show rapid revenue acceleration) and turned it into “AI is failing everywhere.” That’s not what NANDA said, but it’s what gets clicks.
The Bigger Picture
This bothers me for two reasons:
- How quickly media turned nuanced research into clickbait headlines
- How a small, obviously limited sample got extrapolated to industry-wide conclusions
Do you know how many companies are using AI right now? All of them. A study built on 52 interviews doesn’t represent that reality.
As someone who’s spent years advocating for democratized technology, I find that sloppy research and sensationalist reporting undermine trust in both academia and the technology industry.
The Bottom Line
Before you make any strategic decisions based on headlines about “95% failure,” consider reading the actual source. Just because Fortune regurgitated an article doesn’t mean their clickbait headline reflects reality.
The real story is more nuanced: AI adoption is massive, external partnerships work better than internal builds, and organizations are achieving meaningful value when they approach implementation strategically. That’s not as dramatic as “95% failure,” but it’s the truth and a lot more useful for making actual business decisions. If someone tells you “it’s just plug ‘n play”, run.
Do your own research. Look at the methodology. And remember that clickbait headlines are often just that: headlines, not truth.
This analysis is based on NANDA’s own published research and publicly available information about their institutional partnerships and commercial interests.
I read a number of other articles to write this post (which is why it took me a week to post it!).
Vatché
Tinker, Thinker, AI Builder. Writing helps me formulate my thoughts and opinions on various topics. This blog’s focus is AI and emerging tech, but it may stray from time to time into philosophy and ethics.