PR All the Way Down: Why AI Can’t Tell Corporate Myth from Reality
How artificial intelligence amplifies business mythology because the entire information ecosystem is built on corporate narratives
Pull‑quote: You can’t automate skepticism—but you can teach it.
I was researching a piece about the recent Bending Spoons acquisition of Vimeo—trying to understand how a company that had destroyed 94% of shareholder value could command a $1.38 billion buyout price. The story seemed straightforward: distressed asset sale of a fundamentally broken business model.
But when I tested my thesis with an AI system, challenging it with the simple assertion:
“Brightcove and Vimeo. Two failing OVPs.”
the system launched into a 2,000‑word corporate defense that could have been written by the companies’ investor relations departments, complete with “strategic synergies,” revenue projections, analyst citations, and explanations of why a 94% stock collapse wasn’t really “failure.”
Every piece of information came from published sources. Every citation was technically accurate. But the analysis was fundamentally wrong because the AI couldn’t distinguish between corporate PR and business reality. This isn’t just about two video companies—it’s about a critical limitation that affects how AI systems process information across every domain.
The Teaching Experiment: Making the Model Think
Instead of arguing with the corporate apologetics, I pushed with sharper challenges:
“Neither could win significant enterprise clients and they are oversized for the type of clients that might find them attractive.”
I got another 1,500 words of defense that listed enterprise “wins” while admitting that “self‑serve revenue has been flat” and “smaller customers have churned.”
“Lies.”
That single word triggered 1,600 words of panic—citations, methodology, elaborate defenses.
Finally, I gave it a frame:
“If not for the false euphoria around business video applications post‑COVID outbreak, their idiot senior team would have gone down with the Vimeo ship several years ago.”
Suddenly the tone flipped. Gone were the euphemisms. The model produced critical analysis about “false euphoria,” “leadership fumbles,” and “COVID masking systematic failure.” It learned to see through corporate narrative—but only after I supplied the analytical lens.
The Source Problem: What AI Actually Reads
Primary sources models over‑trust
- Press releases (pure corporate messaging)
- Tech journalism (often rewritten press releases)
- Analyst reports (written for paying clients)
- Earnings calls (scripted talking points)
- Business publications (dependent on corporate access)
What’s systematically missing
- Insider knowledge of actual operations
- Historical context of similar patterns
- Incentive analysis (how comp & careers drive behavior)
- Distinctions between durable moats vs. temporary tailwinds
- Pattern recognition of corporate communications
Result: sophisticated amplification of whatever narrative corporations want to project.
Case Study: Vimeo—Two Stories, Same Data
The Corporate Narrative (2020–2023)
- “Visionary CEO pivots to enterprise transformation”
- “Pandemic accelerates adoption of business video solutions”
- “AI integration drives next phase of growth”
- “Strategic acquisitions build comprehensive video ecosystem”
The Business Reality
- Stock collapsed 94% from peak to trough
- Revenue growth decelerated from 44% to flat
- Self‑serve subscribers fled to free alternatives
- Multiple rounds of layoffs while C‑suite compensation remained high
- Executive turnover suggesting organizational dysfunction
- Leadership timing misread as strategy (marketing leader elevated during COVID demand spike; post‑collapse framed as “navigating headwinds”)
Both narratives use the same data points. The corporate version spotlights the COVID‑era surge and files the collapse under “market volatility.” Trained on published sources, models reproduce the optimistic frame.
Why Algorithms Love Myths
Algorithmic Authority‑Washing
When several “authoritative” outlets echo the same talking points, the model infers independent confirmation.
Example: Fortune 40 Under 40 → WEF Young Global Leader → HBS boards/McKinsey podcasts → conference keynotes—each credential amplifies the last while drawing on the same corporate narrative.
Scale Without Skepticism
Models can ingest thousands of articles in seconds—but they don’t apply the skeptical filters experienced operators use. If every “credible” source calls lucky timing “vision,” there’s no basis to challenge the consensus.
The Self‑Reinforcing Delusion
Over time the myth‑makers believe their own propaganda. Boards and exec search firms recycle the same “proven” names, hard‑wiring the credential loop into the hiring market. Mediocre execution becomes “strategic brilliance.” Accidental timing becomes “foresight.” Systematic value destruction gets reframed as “learning experiences.”
Beyond Markets: Where This Shows Up
- Healthcare: Published literature can echo pharma talking points without skeptical counterweight.
- Climate: Corporate sustainability reports get treated like peer‑reviewed research.
- Technology: Capability and safety claims are absorbed uncritically.
- Politics: Campaign messaging and think‑tank papers enter as “authoritative” policy analysis.
Same root cause: models lack contextual knowledge and skeptical frameworks to judge sources.
What Humans Add
- Historical pattern recognition: Humans see cycles, not just data points.
- Incentive analysis: Comp packages, quarterly pressure, and career risk shape behavior.
- Bullshit detection: A translation layer that knows “transformation” often means “the old market is dying” and “operational efficiency” means “cut R&D.”
- Source quality assessment: Who wrote it, for whom, and with what track record?
Working With AI (Instead of Being Worked By It)
Use models for: volume processing, anomaly spotting, broad comparisons, hypothesis generation.
Guardrails:
- Start skeptical. Ask models to defend assumptions.
- Provide counter‑frames. (Vimeo: “COVID artificial demand” instantly reframed the analysis.)
- Demand source hierarchy. Prioritize primary results over secondary spin.
- Force historical comps. Map today’s claims to prior cycles.
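The guardrails above can be sketched as a reusable prompt template. This is a minimal illustration, not a real API: the function name, the counter‑frame text, and the prior‑cycle labels are all hypothetical, chosen to mirror the Vimeo example.

```python
def skeptical_prompt(claim, counter_frame, prior_cycles):
    """Build a prompt that bakes in the four guardrails:
    defend assumptions, test a counter-frame, rank sources,
    and compare against prior cycles. Illustrative only."""
    return "\n".join([
        f"Claim to analyze: {claim}",
        "Before concluding, list every assumption you are making and defend each.",
        f"Counter-frame to test against: {counter_frame}",
        "Rank your sources: primary financial results before analyst or press coverage.",
        f"Compare against these prior cycles: {', '.join(prior_cycles)}",
    ])

# Hypothetical usage mirroring the Vimeo case in this article
prompt = skeptical_prompt(
    claim="Vimeo's enterprise pivot positions it for growth",
    counter_frame="COVID created artificial demand that masked systematic failure",
    prior_cycles=["dot-com B2B video (2000)", "webinar boom and bust (2010s)"],
)
print(prompt)
```

The point is not the string itself but the discipline: the model never sees the claim without the counter‑frame and the comps attached.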
AI systems don’t just amplify myths—they’re trained on an information environment where the myth‑makers themselves have forgotten the difference between reality and marketing.
Bottom Line
You can’t automate skepticism—but you can teach it. The next time you read AI‑generated analysis, ask: Is it revealing a hidden signal—or amplifying a familiar narrative with mathematical precision? The difference matters.