In the chaotic hours after Hurricane Melissa slammed into Jamaica as a Category 5 storm, a video went viral: four sharks circling in a hotel pool, supposedly swept inland by catastrophic flooding. The clip racked up millions of views on TikTok, X, and Instagram. It was terrifying, spectacular, and completely fake.
The sharks never existed. Neither did the destroyed airport in another viral video, nor the crowds swimming through floodwaters “like they were at a resort,” as one expert noted. These weren’t disinformation campaigns. They were clickbait, turbocharged by AI.
Hurricane Melissa was the first major natural disaster since OpenAI released Sora 2 just weeks earlier. What once required technical skill can now be conjured in seconds. Type a prompt, get a convincing fake airport. The barrier to creating compelling fake content has hit zero.
AI didn’t break our information ecosystem; it exposed how broken it already was. For a decade, social media platforms have optimized for engagement over everything else. A fake shark video will always outperform actual flood damage.
The people creating these videos weren’t propagandists. As AI expert Henry Ajder noted, most were just “trying to get engagement, to try and get clicks,” making fake content for financial reward and social validation. AI turned disinformation into a participatory sport. The consequences are real. The New Zealand Herald published one AI-generated image before catching it. Jamaica’s Education Minister pleaded with citizens: “Many of them are fake. Please listen to the official channels.” When officials must compete with AI disaster porn during an actual crisis, something has gone fundamentally wrong.
Newsrooms can’t fix this. For twenty years, we’ve watched them get hollowed out, thousands of journalists laid off, local papers shuttered or stripped for parts by hedge funds. Survivors tried subscriptions, paywalls, pivoting to video. They still lost ground because the economic foundation of journalism collapsed when platforms decided they could distribute everyone’s content, keep the ad revenue, and take zero responsibility for what they amplified.
The platforms control distribution. They decide what billions see. They spent two decades optimizing algorithms to reward engagement over accuracy because that drives ad revenue. A reporter in Jamaica doing verification work never stood a chance. Now AI has made it exponentially worse.
Facebook, X, TikTok, and Instagram built empires on a simple equation: more content equals more engagement equals more money. That model was problematic but functional because humans could only create so much content. AI shattered those limits. Content is now infinite and free to produce. When anyone can generate a hundred disaster videos in an hour, “more content” stops being valuable. The approach that made these companies billions becomes actively destructive when 90% of content is synthetic garbage gaming engagement metrics.
These companies face a choice: shift from maximizing volume to maximizing quality, or drown in AI slop until users can’t find anything real. This means redesigning algorithms that have been tuned for engagement for fifteen years. Deprioritizing AI content during crises. Creating friction for synthetic media instead of amplifying it. Boosting verified sources even when they’re less engaging than fake sharks.
None of this is technically impossible. Companies that serve personalized ads to billions in milliseconds can identify and deprioritize AI content. They don’t, because it’s expensive and cuts directly against the business model that made them trillions. But an ecosystem so polluted that users can’t tell what’s real doesn’t just harm society; it destroys the product. When platforms become synonymous with AI slop, people leave.
And they are. Reddit reported 21% growth in daily users in 2025, with analysts pointing to a simple reason: people are desperate for real human perspectives in a feed full of synthetic garbage. Users are spending more time on Reddit than on any other social platform specifically because it hasn’t been overrun with AI-generated content. That’s strong evidence that deprioritizing algorithmic engagement farming can work as a business model. The platforms drowning in fake shark videos should be paying attention.
The next hurricane is coming, and so are the fake videos. The platforms created the attention economy that made this possible. Now they have the opportunity, and I would argue the obligation, to be part of the solution. The technology to prioritize quality over virality exists. The question is whether they’ll deploy it before the information ecosystem they built becomes impossible to trust.