The Ghibli Move, Leveled Up
Why Sora’s Magic Won’t Survive Its Lawsuits
When OpenAI’s Sora app hit #1 on the App Store last week, it looked like another inevitability—the moment AI video became mainstream. An invite-only app, a vertical feed, a flood of surreal mashups: Wednesday chatting with Peter Griffin, a spectral Drake wandering through a watercolor forest. It was engineered wonder, perfectly timed for maximum virality.
But Sora isn’t just a tool; it’s a distribution system. The feed is the feature. It borrows the grammar of TikTok—vertical scroll, endless discovery—but replaces human production with model generation. Every frame looks like content; none of it is owned.
If that sounds familiar, it should. OpenAI has run this play before. The “Studio Ghibli–style” craze that flooded the internet last year was a stress test: could aesthetic nostalgia override ethical discomfort? It could, easily. The images were beautiful, the resemblance undeniable, and the source—unlicensed. Now the same logic scales from style to specifics: from tranquil Ghibli forests to branded, recognizable characters. From imitation to identity.
That’s why Sora feels both breathtaking and doomed. The app’s magic depends on frictionless generation—of anyone, from anywhere, doing anything. But those same freedoms make it legally radioactive. You can’t monetize a feed filled with borrowed IP, deepfaked celebrities, and unapproved digital resurrections. You can’t sell ads against “Family Guy x Wednesday x Drake in Ghibli style” when none of those entities have cleared the rights. The moment someone tries to turn the view count into dollars, the lawyers will arrive faster than the downloads.
This isn’t a new story; it’s the oldest one in the modern internet. YouTube did it in 2006: launch first, litigate later. TikTok repeated the trick in 2018 with music licensing. Each time, the platform bets that user behavior will solidify faster than law can adapt. And each time, the cleanup—automated takedowns, rev-share systems, corporate partnerships—replaces spontaneity with compliance.
OpenAI’s bet is just bigger. Sora collapses the gap between creation and consumption. It’s not “upload your video”—it’s “generate the feed.” The platform becomes the studio, the model becomes the labor, and the user becomes the director. It’s intoxicating, but also unsustainable. The same realism that draws audiences also collapses boundaries that used to protect art, identity, and consent.
The early weeks of Sora have already surfaced every unresolved question in AI media:
- Who owns likeness once it’s data?
- What happens when the dead go viral again?
- How do you enforce copyright when the file itself is new, but the DNA inside it is stolen?
Regulators are moving quickly. Japan’s digital ministry has already asked OpenAI to flip its default from opt-out to opt-in for copyrighted material. Major talent agencies are warning clients to lock down their estates. And OpenAI, facing blowback for unapproved use of Martin Luther King Jr.’s likeness, has paused certain categories of generation entirely.
None of that stops the videos from spreading. Platforms move faster than policy because they’re built to. Every model release is a cultural zero-day exploit. The result is what I’ve called unplanned infrastructure—systems that shape behavior long before we agree on rules for them. That’s the heart of No One Planned This: we don’t design these platforms as much as they design us. Sora is only the latest expression of that feedback loop.
The deeper problem isn’t legal; it’s structural. Sora’s entire viral appeal is built on borrowed novelty. People aren’t watching because the videos are good—they’re watching because the combinations feel illicit. It’s Family Guy in Ghibli colors, Elsa singing a Frank Ocean song, a deepfaked celebrity cameo that shouldn’t exist. Take away that transgression and what remains? A slick interface generating safe, licensed content. The feed keeps scrolling, but the thrill is gone.
Altman has raised the stakes on his original Ghibli gambit. Last time, OpenAI let the world flood social media with imitations of Miyazaki’s art. This time, they’ve moved from style to substance: from reverence to replication. It’s a bet that awe still beats authority—that we’ll keep watching even as the legal foundation crumbles beneath it.
But there’s a limit to how long you can run on wonder alone. The moment rights holders synchronize their defenses—opt-in frameworks, model filters, watermarks—the behavior collapses back into something familiar: a catalog of cleared content. The system survives, but the spontaneity dies. The future of AI video isn’t lawless; it’s licensed.
Which means Sora’s real legacy may be less about what it creates and more about what it accelerates: the rapid enclosure of AI creativity into corporate fences. The cycle is predictable now—open, explode, litigate, contain. We call it innovation, but it’s really pattern recognition with a shorter half-life.
The awe was real. The ambition is, too. But wonder built on borrowed likeness is a sugar high. Once the world’s IP lawyers catch up, Sora will still exist—it’ll just feel a lot more like Netflix than magic.
