We cannot conclude llms.txt is 100% unnecessary. Google did officially say it's not needed — but that statement applies only to its own generative AI search (AI Overviews / AI Mode). No other AI vendor (OpenAI, Anthropic, Perplexity, etc.) has issued an equivalent statement.
Why 'llms.txt Was Unnecessary' Is Spreading
Since early 2026, claims that 'llms.txt turned out to be unnecessary' have been spreading rapidly on social media (especially X) and tech publications. The trigger is clear: Google's 'AI optimization guide,' published in 2026, explicitly named llms.txt as something you don't need to create for generative AI search.
John Mueller stated at Google Search Central Live: 'No AI system currently uses llms.txt.' SE Ranking's 300K-domain study found no statistical correlation between llms.txt presence and citation rates. Evidence supporting 'it doesn't work' continues to accumulate.
The Question
But many statements casually generalize 'Google said it's unnecessary' into 'llms.txt is completely unnecessary.' Is this generalization actually valid? It's worth carefully re-reading the subject of Google's official statement.
What Google Actually Said — and Didn't Say
Let's verify the exact passage from Google's 'AI optimization guide' in the original text.
Google Official Guide — Original Text
“You don't need to create new machine readable files, AI text files, markup, or Markdown to appear in generative AI search.”
Source: developers.google.com/search/docs/fundamentals/ai-optimization-guide
Note the trailing clause 'to appear in generative AI search.' That qualifier critically scopes what Google is calling unnecessary. Strip it away and quote only 'You don't need to create...', and it reads as a blanket dismissal. Read the full sentence, however, and the scope narrows to 'appearing in Google's generative AI search (AI Overviews / AI Mode).'
| What Google Says | What Google Does NOT Say |
|---|---|
| **Scoped:** You don't need to create llms.txt or other machine-readable files to appear in Google's AI Overviews / AI Mode | **Not stated:** Other AI search systems (ChatGPT, Perplexity, Claude, etc.) don't read or use llms.txt |
| **Stated:** You don't need to rewrite content specifically for generative AI search | **Not stated:** Sites deploying llms.txt will permanently remain ineligible for evaluation |
| **Principle:** 'Optimizing for generative AI search is optimizing for the search experience — and thus still SEO' | **Not stated:** You should delete llms.txt now, or keeping it is detrimental |
In short, Google officially says 'llms.txt isn't needed to appear in Google's generative AI search' — but never says 'llms.txt has no value for any AI in the world.' This distinction is the central weakness of the 'unnecessary' argument.
Other AI Vendors' Official Stance (Primary Sources)
How do other major AI vendors (OpenAI / Anthropic / Perplexity) treat llms.txt? Verifying primary sources publicly available as of May 2026 reveals a clear contrast with Google's position.
Fact 1: All three vendors publish their own llms.txt on developer docs
- OpenAI: platform.openai.com/docs/llms.txt
- Anthropic: docs.anthropic.com/llms.txt / docs.claude.com/llms-full.txt
- Perplexity: docs.perplexity.ai/llms-full.txt
As detailed below under 'Contradiction 1,' even Google — the vendor that officially called the file 'unnecessary' — deploys llms.txt on its own docs. OpenAI, Anthropic, and Perplexity all do the same, without exception. It is hard to call a file 'unnecessary' when the vendors themselves publish it.
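These deployments are easy to verify yourself: each file lives at a well-known path on the docs host. The sketch below (standard library only; the helper names and the `User-Agent` string are illustrative, not any vendor's API) builds the candidate URLs and issues HEAD requests to see which files respond:

```python
import urllib.request

# Well-known paths where vendors publish their llms.txt variants
CANDIDATE_PATHS = ["/llms.txt", "/llms-full.txt"]

def llms_txt_urls(host: str) -> list[str]:
    """Build the candidate llms.txt URLs for a documentation host."""
    return [f"https://{host}{path}" for path in CANDIDATE_PATHS]

def check_deployment(host: str, timeout: float = 10.0) -> dict[str, bool]:
    """HEAD-request each candidate URL; True means the file is reachable."""
    results = {}
    for url in llms_txt_urls(host):
        req = urllib.request.Request(
            url, method="HEAD",
            headers={"User-Agent": "llms-txt-check/0.1"},  # illustrative UA
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[url] = resp.status == 200
        except Exception:
            results[url] = False
    return results
```

Running `check_deployment("platform.openai.com")` or `check_deployment("docs.perplexity.ai")` lets you confirm the list above against the live sites at any time.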
Fact 2: None of the three has issued an official 'llms.txt is unnecessary' statement
Cross-checking the official blogs, documentation, and verified social accounts of OpenAI, Anthropic, and Perplexity, no statement dismissing llms.txt has been published as of May 2026. Multiple third-party industry analyses describe their stance consistently as 'strategic silence.'
Third-party analysis references:
- The Current Consensus on llms.txt (TheSEOCommunity) — summary of major AI vendors' stances
- llms.txt Zero Usage: AI Bots Ignore It (AEO Engine) — explicitly notes 'no official statement from OpenAI / Anthropic / Perplexity'
- The llms.txt is dead (Kai Spriestersbach, Medium) — even this leading 'unnecessary' argument cites only Google (Mueller) as an official source
The accurate situation as of May 2026: Google has officially called it unnecessary; no other AI vendor has issued any equivalent statement. Generalizations that present 'other AI vendors also say it's unnecessary' do not hold up when primary sources are checked.
Three Logical Gaps in the Unnecessary Argument
Subject Substitution: Expanding 'Google' to 'All AI'
The inference 'Google said it's unnecessary → therefore no AI needs it' quietly expands its subject. Google trails ChatGPT in generative AI market share (per Mercator research), and the space also includes Perplexity, Claude, Gemini, and others. Treating Google's verdict as equivalent to 'the verdict of all AI' is not logically valid.
→ Correct subject: 'Google's generative AI search (AI Overviews / AI Mode)'
Ignoring Observed Facts (ChatGPT Actually Reads It)
Mintlify's and Cloudflare's CDN log analyses quantitatively confirm that ChatGPT (OAI-SearchBot) accesses llms.txt. Mintlify's 25-company CDN audit shows llms-full.txt receives roughly 5.6x more requests than llms.txt. We've also directly observed ChatGPT crawler access on our own site (seo.codequest.work) after deployment.
Whether 'being read' translates to 'being used in answers' is a separate question. However, claiming 'it's not even being read' contradicts the observed data. Dismissing the rationality of why a crawler is fetching the file — based solely on the absence of an official statement — is premature.
→ Fact: ChatGPT access is observed in CDN logs
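Anyone running a site can check their own access logs for the same pattern. A minimal sketch, assuming Apache/Nginx combined log format — the sample user agent string and regex are illustrative, not OpenAI's exact UA:

```python
import re

# Combined log format: IP, identd, user, [timestamp],
# "METHOD path HTTP/x", status, size, "referer", "user-agent"
LOG_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"'
)

def count_llms_txt_hits(lines, bot_marker="OAI-SearchBot"):
    """Count requests for /llms.txt or /llms-full.txt made by a given bot."""
    hits = 0
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        if (bot_marker in m.group("agent")
                and m.group("path") in ("/llms.txt", "/llms-full.txt")):
            hits += 1
    return hits
```

Feeding your raw access log through `count_llms_txt_hits` (e.g. once per day) gives a first-hand read on whether AI crawlers are fetching the file from your site, independent of anyone's official statements.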
Ignoring Asymmetric Cost (Removing ≠ Keeping)
The deployment cost of llms.txt is a single text file; maintenance cost is near zero. Meanwhile, if llms.txt becomes a standardized AI search specification in the future, undeployed sites will need to catch up. The decisions to 'remove because effectiveness is unproven' and to 'keep despite unproven effectiveness' do not have symmetric cost/risk structures.
In IT projects, the cost of 'not doing it' often exceeds the cost of 'doing it.' At this level of near-zero cost, 'keeping it' is rationally the default unless there's a strong reason to remove it.
→ Cost structure: Deployment cost is near zero; the cost of non-deployment surfaces if standards shift
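For a sense of how small that deployment cost is: a complete llms.txt under the llmstxt.org proposal is just a few lines of Markdown — an H1 title, a blockquote summary, and H2 sections of links. The site and URLs below are hypothetical:

```markdown
# Example Docs

> Developer documentation for the hypothetical Example API.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): Setup and first request
- [API Reference](https://docs.example.com/api.md): Endpoints and parameters

## Optional

- [Changelog](https://docs.example.com/changelog.md)
```

A file this size is written once and updated only when the documentation structure changes, which is what makes the cost asymmetry above so stark.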
Three Facts That Contradict the Unnecessary Argument
What most strongly challenges the unnecessary argument is the observed reality. The following three facts don't align with 'unnecessary.'
Contradiction 1: Google itself deploys llms.txt on its own documentation sites
Documentation sites like developers.google.com, cloud.google.com, and firebase.google.com all deploy llms.txt files. The entity that officially called the file 'unnecessary' has implemented it on its own platforms — the contradiction hardest to explain away. Possible explanations include a gap between Mueller's public stance and the documentation teams' decisions, or CMS auto-generation as a side effect, but the situation is clearly not 'we don't deploy it, but you don't need to either.'
Contradiction 2: ChatGPT crawler actually fetches llms.txt
Multiple organizations have observed OpenAI's OAI-SearchBot fetching llms.txt and llms-full.txt at the CDN log level. A crawler routinely fetching files it 'doesn't read' is implausible from a cost-efficiency standpoint. OpenAI's silence on official support is a separate matter — at the implementation level, the crawler is reading them.
Contradiction 3: Developer tools (Cursor / Claude Code / Windsurf) actively utilize llms.txt
Outside the context of AI search engines' autonomous crawling, coding assistant tools implement llms.txt-loading via explicit user commands. Cursor, Claude Code, Windsurf, and other developer tools treat llms.txt as a documentation retrieval mechanism. MCP (Model Context Protocol)-compatible agents are increasingly referencing llms.txt as well. It looks unnecessary if you focus only on 'AI search,' but it's already being utilized across the broader AI ecosystem.
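The 'documentation retrieval mechanism' is concrete: a tool fetches llms.txt and walks its Markdown link list to decide which pages to load into context. A minimal sketch of that parsing step — the link syntax follows the llmstxt.org proposal; the function name and regex are illustrative, not any tool's actual implementation:

```python
import re

# llms.txt lists resources as Markdown links: "- [Title](url): optional note"
LINK_RE = re.compile(
    r'-\s*\[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)(?::\s*(?P<note>.*))?'
)

def parse_llms_txt(text):
    """Extract (title, url, note) entries from llms.txt content."""
    entries = []
    for line in text.splitlines():
        m = LINK_RE.match(line.strip())
        if m:
            entries.append((m.group("title"), m.group("url"), m.group("note") or ""))
    return entries
```

An agent or coding assistant can then fetch each extracted URL on demand, which is exactly the usage pattern that makes llms.txt useful to these tools regardless of what AI search engines do with it.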
Why We Keep It as a Check Item
CodeQuest.work SEO includes llms.txt as a technical SEO check item: 1 point on the Basic plan, up to 3 points on Pro. The scoring is intentionally modest, and having weighed this debate, we are keeping it. Our reasoning is as follows.
| Item | Score | Rationale |
|---|---|---|
| Rich Results eligibility | 20 pts | CTR improvement is proven |
| llms.txt deployment | 1–3 pts | Access is observed, but citation impact is unproven |
The scoring gap reflects the confidence gap. Rich Results has proven impact, so high weight. llms.txt has uncertainty, so modest weight. 'Wait until effectiveness is confirmed' means users are always behind. For tactics with near-zero deployment cost, preparing proactively has more value.
Triggers for Score Revision
- If AI vendors (OpenAI, Anthropic, etc.) officially declare llms.txt support → increase score
- If the llmstxt.org specification is standardized through the IETF or similar → increase score
- If large-scale studies confirm causal impact on citations → increase score
- If AI vendors explicitly state 'we don't read llms.txt' → remove the item
Summary: Where 'Unnecessary' Applies — and Where It Doesn't
To appear in Google AI Overviews / AI Mode → Confirmed unnecessary
Officially stated by Google. Focusing on SEO fundamentals (content quality, structured data, E-E-A-T) is more efficient.
Other AI vendors (ChatGPT / Perplexity / Claude) → Cannot conclude unnecessary
No official statement. ChatGPT crawler access is observed. Google itself deploys it on documentation sites. Insufficient grounds to declare it unnecessary at this point.
Developer tools / MCP ecosystem → Already in active use
Cursor, Claude Code, Windsurf, and MCP-compatible agents already use llms.txt as a documentation retrieval mechanism. There's already utility outside the AI search context.
The correct answer lies between '100% unnecessary' and '100% essential.'
Don't extrapolate Google's statement to all AI, don't ignore observed facts, and correctly evaluate the asymmetric cost of deployment. This is the most logically defensible stance on llms.txt at this point in time.
