Link Copied!

에이전트 SEO 붕괴: 2025년 4분기 추천 트래픽 60% 감소

새로운 2025년 4분기 데이터에 따르면 "응답 엔진"의 부상으로 인해 기술 게시자의 추천 트래픽이 무려 60% 감소했습니다. 개방형 웹의 경제 모델이 실시간으로 붕괴되고 있습니다.

🌐
언어 참고

이 기사는 영어로 작성되었습니다. 제목과 설명은 편의를 위해 자동으로 번역되었습니다.

웹사이트가 이진 코드로 분해되어 단일형 검은색 AI 서버로 흘러 들어가는 개념적 그림

The Silence of the Servers

It didn’t happen with a bang, but with a politely phrased summary.

For two decades, the “Grand Bargain” of the internet was simple: Publishers created content, Google indexed it, and in exchange for scraping that data, Google sent users back to the source. It was an imperfect ecosystem, but it paid the bills.

In Q4 2025, that bargain officially died.

New data released this week confirms what every media executive has been whispering about in panicked Board meetings. Referral traffic to technology and news publishers has collapsed by an estimated 60% year-over-year. This isn’t a fluctuation; it’s an extinction event. The culprit isn’t a change in the algorithm, but a change in the user.

Users are no longer searching; they are asking. And the agents are answering, without ever letting them leave.

The Data: The “Zero-Click” Cliff

The numbers are brutal. According to widespread industry analysis of Q4 traffic patterns, the combined impact of “Answer Engines” (like SearchGPT, Perplexity, and Google’s own fully agentic overviews) has decimated the click-through rate (CTR) for informational queries.

In 2023, if a user searched “how to fix a patchy beard,” they clicked on three blogs. In 2025, their agent reads those three blogs in 400 milliseconds, synthesizes a 3-step guide, and presents it to them. The user gets the value; the publisher gets a server bill.

This phenomenon has birthed a new, terrifying metric for the digital economy: ATR (Answer-Through Rate).

ATR=Queries Answered entirely by AITotal Search Queries\text{ATR} = \frac{\text{Queries Answered entirely by AI}}{\text{Total Search Queries}}

For informational verticals like tech support, coding, and history, the ATR has officially crossed roughly 80%. That means 4 out of 5 users never visit a website. They simply consume the hallucinated or summarized ghost of the website’s content.

The Attribution Crisis

This connects directly to the security nightmare of the Agentic Breach, but the economic damage is even faster than the security risk.

The core problem is the “Instruction vs. Data Paradox.” Agents treat high-quality journalism not as a product to be distributed, but as raw training data to be mined. When an agent reads an article to summarize it, it extracts 100% of the informational value while providing 0% of the economic value (ad impressions or subscription conversions).

It is the ultimate vampire economy. The host is being drained dry to feed the parasite, but unlike biological parasites, these digital ones don’t seem to care if the host dies.

The “Ghost Traffic” Phenomenon

Publishers are seeing a weird anomaly in their logs: “Ghost Traffic.” This is non-human traffic where an agent visits a page, scrapes the content, and leaves in 0.5 seconds.

It drives up server costs (bandwidth is not free) but counts for nothing in ad revenue because no ads are loaded or viewed by human eyes. The industry is essentially subsidizing the very machines that are putting it out of business.

The Revenue Model Break

Let’s look at the math of a standard independent tech blog in 2025.

The Old Model (2024):

  • Writer spends 4 hours researching a Deep Dive.
  • Article ranks #1 on Google.
  • 10,000 people visit.
  • 2% click an affiliate link or see a CPM ad.
  • Revenue: $200. Cost: $150. Profit: $50.

The Agentic Model (2025):

  • Writer spends 4 hours researching a Deep Dive.
  • Article is indexed by an Agent.
  • 10,000 people ask the Agent a question related to the topic.
  • The Agent summarizes the article 10,000 times.
  • 0 people visit the site.
  • Revenue: $0. Cost: $150. Profit: -$150.

This is unsustainable. An information economy cannot survive where the cost of production is borne by one party and the value of consumption is captured entirely by another.

The Reaction: The “Dark Forest” Internet

So, what happens next? The industry is already seeing the response, and it’s ugly. The internet is going dark.

1. The Paywall Curtain

If public data is scraped for free, data will no longer be public. The web is moving toward a world where all high-quality information is locked behind aggressive paywalls or login screens that agents (theoretically) cannot breach. The open web will become a wasteland of AI slop talking to other AI slop, while human insight retreats to gated communities.

2. “Agent-Traps” and Poisoning

Some publishers are fighting back with “poison pills”—injecting hidden text that confuses summarization models or malicious instructions that break the agent’s output context. It’s a digital arms race. Publishers are effectively laying landmines in their own libraries to keep the robots out.

3. The Return to Personality

The only thing an AI cannot steal (yet) is voice. The 60% drop is largely in “howto” and “informational” content. Personality-driven content, opinion pieces (like this one), and community-focused journalism are resilient. Humans still want to hear from humans, not a sanitized synthesis of the average of all human thought.

The Google Dilemma: Cannibalizing the Golden Goose

Perhaps the most ironic casualty of this shift is Google itself. For twenty years, Google made money by sending people away. Their entire business model relied on the “ten blue links”, a directory to the open web.

By shifting to AI overviews, Google has effectively nuked its own ecosystem. If users don’t click links, they don’t visit publisher sites using AdSense. If publishers die, the content pool dries up. If the content pool dries up, the AI has nothing to summarize.

Google is currently eating its own tail. The Q4 earnings call alluded to this “transition pain,” but the reality is sharper: they are destroying the village to save it from OpenAI, but in the process, they are starving the villagers.

Conclusion: Adapt or Evaporate

The 60% drop in Q4 2025 is a lagging indicator. The real damage is yet to come. The era of “Passive SEO” (writing good content and waiting for traffic) is dead.

Unless the major AI providers (Google, OpenAI, Anthropic) agree to a standardized Content Licensing Protocol (essentially “Spotify for the Web,” where publishers are paid per-token generated from their work), the professional web publishing industry will cease to exist as currently constituted by 2027.

For now, the advice to creators is stark: Stop writing for robots. Stop optimizing for keywords. The robots have won that game. Start building a tribe that cares enough to type your URL directly into the bar. Because getting “found” is no longer enough when the finder keeps what it finds.

The traffic isn’t coming back. The only metric that matters now is loyalty. If you don’t own your audience, you don’t have a business—you just have a dataset waiting to be ingested.

Sources

🦋 Discussion on Bluesky

Discuss on Bluesky

Searching for posts...