
AI Discovery in 2026: The llms.txt Standard

How to make your website readable for LLMs and AI-driven discovery tools.

AI · LLM · Web Standards

The web is no longer just for humans. AI agents, powered by Large Language Models (LLMs), are now primary consumers of web content. The llms.txt standard is the industry's response to this shift.

What is llms.txt?

The llms.txt file is a Markdown file served from the root of your website (i.e., at /llms.txt). It provides a concise, high-level summary of your site's content, formatted for easy parsing by AI models such as Claude, ChatGPT, and Perplexity.
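Following the shape of the proposed llms.txt specification (an H1 title, a blockquote summary, then H2 sections of links), a minimal file might look like this; the site name, summary, and links are placeholders:

```markdown
# My Site

> Personal site of a web developer: projects, writing, and contact details.

## Projects

- [Portfolio](https://example.com/projects): Selected work with short descriptions.

## About

- [Bio](https://example.com/about): Background, skills, and how to get in touch.
```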

Why Does It Matter?

Standard web pages are often filled with UI noise: navbars, footers, popups, and complex HTML layouts. Humans filter this out instinctively, but an LLM must spend tokens processing all of it. By providing a clean, text-first summary, you:

  1. Improve Accuracy: The AI doesn't have to "guess" your site's purpose.
  2. Reduce Token Cost: Clean text is cheaper and faster for models to process.
  3. Enhance Discovery: AI-driven search engines can represent your work more accurately to users.

The Standard: llms.txt vs. llms-full.txt

The community has converged on a two-tier approach:

  • llms.txt: A brief summary with links to key sections. It's the "elevator pitch."
  • llms-full.txt: A comprehensive document containing the actual content of your site (projects, bio details, descriptions). This is the "reference manual."
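A sketch of how llms-full.txt could be generated from your site's content, assuming that content is available as (title, body) pairs; `build_llms_full` and its inputs are illustrative helpers, not part of any standard tooling:

```python
def build_llms_full(pages, site_name, summary):
    """Concatenate page content into a single llms-full.txt document.

    pages: list of (title, body) tuples, one per section of the site.
    Returns a Markdown string: H1 title, blockquote summary, then one
    H2 section per page.
    """
    parts = [f"# {site_name}", "", f"> {summary}", ""]
    for title, body in pages:
        parts += [f"## {title}", "", body.strip(), ""]
    return "\n".join(parts)


if __name__ == "__main__":
    doc = build_llms_full(
        [("About", "I build web apps."), ("Projects", "A portfolio of side projects.")],
        "My Site",
        "Personal site of a developer.",
    )
    print(doc)
```

In a Next.js project this could run as a build step that writes the result to public/llms-full.txt, so the file stays in sync with the site's content.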

Where to Place It?

Both files should live at your domain's root, so they are served at /llms.txt and /llms-full.txt. In a Next.js project, this means placing them in the public/ directory:

  • /public/llms.txt
  • /public/llms-full.txt
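Once the files are in place, a quick sanity check can confirm they follow the basic shape of the proposed spec (an H1 title and a blockquote summary); `validate_llms_txt` is a hypothetical helper, not an official validator:

```python
def validate_llms_txt(text: str) -> list[str]:
    """Return a list of problems found in an llms.txt document (empty = OK)."""
    problems = []
    lines = [line for line in text.splitlines() if line.strip()]
    # The proposed spec opens with an H1 naming the site.
    if not lines or not lines[0].startswith("# "):
        problems.append("first line should be an H1 title (`# Site Name`)")
    # A blockquote summary gives models the one-line pitch.
    if not any(line.startswith("> ") for line in lines):
        problems.append("missing a blockquote summary (`> ...`)")
    return problems


if __name__ == "__main__":
    sample = open("public/llms.txt").read()
    for problem in validate_llms_txt(sample):
        print("warning:", problem)
```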

Conclusion

Just as robots.txt has guided search-engine crawlers since the 1990s, llms.txt guides AI in 2026. It's a small change with an outsized impact on how the next generation of web crawlers represents your work.