In August 2025, Instagram introduced a repost feature that lets users reshare their favorite posts to their followers. The change was framed as a way to surface content that one’s friends had liked and engaged with, rather than relying entirely on algorithmic recommendations. 

At face value, this repost feature looks like a move toward a more human-centered feed. But it reveals a deeper problem: as AI-generated (or synthetic) content grows, human (or authentic) content rarely surfaces on its own. Instagram has, in effect, admitted that it must manually engineer human connection.

Researchers have begun to call this material “AI slop”: mass-produced, low-quality synthetic content designed to fill feed space and capture attention quickly.

A feed increasingly filled with AI-generated content does more than degrade user experience — it erodes a basic sense of what’s real and who is actually speaking online. As authentic people and content become harder to find, data show that users are starting to leave.

This erosion of visible human presence is the real crisis of today’s social media. Addressing it will require platforms to rebuild around a simple premise: making real human presence visible again.

The Collapse of Authentic Engagement

In 2025, average daily social media use fell to approximately one hour and twenty minutes, about 10% below the 2022 peak. Instagram engagement dropped 24% year-over-year. Posts on X and Facebook averaged just 0.15% engagement, a fraction of their historic interaction rates.

Yet the amount of digital content continues to rise. 

This paradox resolves itself when you examine the composition of the feed. An estimated 64% of X accounts are now bots, with 76% of peak traffic automated. On Instagram, approximately 95 million accounts are fake or bot-operated, comprising at least 14.1% of all followers.

Platforms rely so heavily on synthetic content because they have discovered that it scales effortlessly: a single prompt can produce thousands of variations, and AI models can tailor content to every demographic slice of a user base. Algorithms reward this material because it drives short-term engagement: users see more, faster; platforms capture more data; advertisers spend more.

What platforms haven’t realized is that this strategy is now backfiring.

Flooded with this slop, social media today no longer serves the purpose it once did: connecting real people with each other. Users, especially younger ones, are leaving the platforms.

Why Detection and Regulation Won’t Solve This

The typical response to problems on social media is more detection and regulation: Label AI content, require watermarks, improve media literacy. These solutions assume the problem is telling real from fake — and that better detection will fix it.

This detection-first framework misdiagnoses the situation entirely.

The fundamental issue is not that people cannot identify AI content. It’s that there’s too much of it. If 90% of the content in your feed is AI-generated or algorithmically amplified, checking what’s real stops being worth the effort. People don’t become more skeptical — they become apathetic. 

The policy fragmentation now unfolding in the United States compounds this problem. Twenty-six states have passed deepfake legislation, with twenty-three requiring AI-content disclosure and three imposing outright bans. California’s SB 942, for example, will mandate AI-content detection tools and watermarking when it takes effect in August 2026.

Yet simultaneously, the Trump administration’s December 2025 executive order on artificial intelligence threatens to weaken coordinated oversight by pulling against state-level regulation rather than aligning with it. The result is neither a coherent national policy nor a functional state-level response. Instead, platforms face a patchwork of requirements that is difficult to enforce consistently across jurisdictions.

More fundamentally, all of these regulatory approaches address the supply side of the problem. They ask: How do we prevent AI content from being created or distributed? 

They do not ask the more urgent demand-side question that now matters more: Why would anyone participate in a platform where authentic human presence has become statistically rare?

What Platforms Could Become Instead

A new model of social media is emerging. 

Smaller platforms that prioritize real users over scale are gaining traction. Spaces like Bluesky, Reddit communities, and private group chats work not because they are perfect, but because they still make it possible to find authentic human connection.

This is not a return to the early internet of web forums and webrings, though the comparison is tempting. Instead, platforms are evolving to prioritize real interaction over engagement metrics.

Researchers and designers are already exploring models that reflect this shift. 

One approach is building what we might call “authenticity infrastructure”: systems that verify human authorship. Consider the approach taken by the Congressionally Directed Medical Research Programs and disease-specific patient organizations. They verify participants through lived experience and credential checks before granting them platform status. A social platform could extend this model by implementing tiered authentication that gives verified human users greater visibility and trust without requiring full public identification.
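
To make the idea concrete, here is a minimal sketch, in Python, of how tiered authentication could feed into ranking. The tier names, multiplier values, and functions below are illustrative assumptions, not any platform’s actual system.

```python
from dataclasses import dataclass
from enum import Enum


class TrustTier(Enum):
    """Illustrative verification tiers; stronger checks earn more visibility."""
    UNVERIFIED = 0            # no human verification
    DEVICE_VERIFIED = 1       # e.g., proof of one account per real device
    CREDENTIAL_VERIFIED = 2   # e.g., lived-experience or credential check


@dataclass
class Account:
    handle: str
    tier: TrustTier
    public_identity_disclosed: bool = False  # verification without public ID


# Assumed weights; a real platform would tune these empirically.
VISIBILITY_MULTIPLIER = {
    TrustTier.UNVERIFIED: 0.25,
    TrustTier.DEVICE_VERIFIED: 1.0,
    TrustTier.CREDENTIAL_VERIFIED: 1.5,
}


def ranking_score(base_score: float, author: Account) -> float:
    """Scale a post's feed ranking by its author's verification tier.

    Unverified (possibly synthetic) accounts are not banned, only
    de-amplified; verified humans surface more often.
    """
    return base_score * VISIBILITY_MULTIPLIER[author.tier]
```

The design point is the separation of concerns: verification raises visibility while public identity can stay undisclosed, so trust does not require surrendering pseudonymity.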

A second approach is decentralization. Federated platforms like Bluesky let users choose their own communities and move between them, making it easier to avoid spaces overrun by bots. This enables competition based on authenticity rather than engagement, and builds governance directly into the system.
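
Below is a rough sketch of the portability mechanism that makes this possible, assuming a deliberately simplified model in which identity is decoupled from any single server. The real AT Protocol underlying Bluesky is far more involved; every name in this sketch is illustrative.

```python
from dataclasses import dataclass, field


@dataclass
class PortableIdentity:
    """A stable identifier that outlives any single host server."""
    did: str          # decentralized-ID-style key, e.g. "did:example:alice"
    home_server: str  # where the account currently lives


@dataclass
class SocialGraph:
    """Follows keyed by stable DIDs, not by server-specific handles."""
    follows: dict[str, set[str]] = field(default_factory=dict)

    def follow(self, follower: PortableIdentity, target: PortableIdentity) -> None:
        self.follows.setdefault(follower.did, set()).add(target.did)


def migrate(identity: PortableIdentity, new_server: str) -> None:
    """Move an account to a new server; the DID, and therefore the
    social graph, comes along unchanged."""
    identity.home_server = new_server


graph = SocialGraph()
alice = PortableIdentity(did="did:example:alice", home_server="big.example")
bob = PortableIdentity(did="did:example:bob", home_server="big.example")
graph.follow(bob, alice)

# Alice leaves a server overrun by bots; Bob still follows her.
migrate(alice, "quiet.example")
assert "did:example:alice" in graph.follows["did:example:bob"]
```

Because follows attach to the stable identifier rather than the server, leaving a bot-ridden space costs a user nothing, and that exit option is what makes competition on authenticity possible.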

Third, platforms need new metrics. Current platforms measure success by engagement: likes, shares, and comments. An alternative framework would measure success through meaningful interaction by users: depth of engagement and sustained connection to real online communities. These are harder to monetize, but they are what last.
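
The contrast can be made concrete with a toy scoring function that discounts one-tap reactions and rewards reciprocal, sustained interaction. The signals and weights here are assumptions chosen for illustration, not a validated metric.

```python
from dataclasses import dataclass


@dataclass
class InteractionStats:
    likes: int                # one-tap reactions
    reply_threads: int        # conversations the post started
    avg_thread_depth: float   # how many turns those conversations ran
    returning_users: int      # people who came back to the community


def engagement_score(s: InteractionStats) -> float:
    """What platforms optimize today: raw volume of reactions."""
    return s.likes + s.reply_threads


def meaningful_interaction_score(s: InteractionStats) -> float:
    """Illustrative alternative: weight depth and return visits over volume.

    The weights are assumptions. The point is that a bot farm can
    inflate likes cheaply, but sustained reciprocal conversation and
    returning members are costly to fake.
    """
    return (
        0.1 * s.likes
        + 2.0 * s.reply_threads * s.avg_thread_depth
        + 3.0 * s.returning_users
    )


viral = InteractionStats(likes=10_000, reply_threads=20,
                         avg_thread_depth=1.2, returning_users=15)
niche = InteractionStats(likes=300, reply_threads=80,
                         avg_thread_depth=6.0, returning_users=120)

# The viral post wins on today's metric; the small community wins on the new one.
assert engagement_score(viral) > engagement_score(niche)
assert meaningful_interaction_score(niche) > meaningful_interaction_score(viral)
```

On a metric like this, a bot-inflated viral post loses to a small community that keeps people talking and coming back.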

Reconnection as Democratic Infrastructure

This is not merely a platform problem. It is a democratic one. 

Early social media succeeded because it accidentally created a shared information environment. Imperfect though they were, Facebook and Twitter became the spaces where citizens encountered a broad cross-section of other citizens and other viewpoints. Today, under the weight of synthetic content, that space has fragmented.

Democratic participation requires a shared understanding of what’s happening and what counts as evidence. A society in which the majority of voices online are AI-generated cannot maintain this baseline. Preserving that baseline requires redesigning the fundamental architecture of how platforms operate — treating authentic human connection not as a byproduct of engagement metrics but as the core good that platforms should optimize for. 

This will mean slower growth, smaller audiences, and lower ad revenue for platforms. But what made social media valuable in the first place was not the technology. It was the humans.

Social media remains worth saving only if platforms make those humans visible to one another again.

James Wang is a graduate candidate at Georgetown University's School of Foreign Service, where he concentrates in Science, Technology, and International Affairs, and an incoming Schwarzman Scholar at Tsinghua University. His work focuses on US-China technology competition, semiconductor export controls, and AI governance. Previously, he studied Political Science and Philosophy at the University of Toronto. Having lived and studied in China and Singapore, his research examines how emerging technologies reshape international security and economic statecraft.