How to Add llms.txt to Next.js and React Apps (Complete Guide)

If you're building modern web applications with Next.js or React, making your content accessible to AI systems should be on your radar. The llms.txt standard provides a clean way for AI assistants like Claude and ChatGPT to understand your application's structure and content.
The implementation approach differs significantly between Next.js and standard React apps. Next.js offers built-in routing and API capabilities that make serving llms.txt straightforward, while React applications typically require different strategies depending on your build setup and hosting environment.
In this guide, I'll walk you through the most effective ways to add llms.txt to your JavaScript applications, from simple static file serving to dynamic content generation. Whether you're working with Next.js App Router, Pages Router, Create React App, or Vite, you'll find a solution that fits your project.
Understanding the Challenge
Before we jump into implementation details, it's worth understanding why JavaScript frameworks require specific approaches for serving plain text files. Traditional static sites can simply drop an llms.txt file in the root directory and call it done. Modern JavaScript applications work differently.
React apps are typically single-page applications that handle routing on the client side. When a user visits your site, they're actually receiving an HTML shell that loads your JavaScript bundle, which then renders the appropriate content. This architecture means a plain text file like llms.txt needs special handling to be served correctly at the root URL without being processed by your JavaScript router.
Next.js bridges the gap between traditional server-rendered sites and modern React applications. With built-in server-side capabilities, it can serve static files and dynamic routes natively. This makes implementing llms.txt more straightforward, though the exact approach varies depending on whether you're using the newer App Router or the traditional Pages Router.
The key difference comes down to control. With Next.js, you have server-side functionality at your fingertips. With standard React apps, you're often working within the constraints of your build tool and hosting platform. Both scenarios have elegant solutions.
Next.js App Router Implementation
The App Router, introduced in Next.js 13 and now the recommended approach for new projects, provides the cleanest way to serve llms.txt files. You'll create a route handler that responds with plain text when someone requests your llms.txt file.
Start by creating a new file in your app directory at app/llms.txt/route.ts (or route.js if you're not using TypeScript). This location tells Next.js to respond to requests for /llms.txt with whatever this route handler returns.
export async function GET() {
  const content = `# My Next.js Application

> A modern web application built with Next.js and React

## Documentation

- [Getting Started](https://yoursite.com/docs/getting-started): Learn how to use our platform
- [API Reference](https://yoursite.com/docs/api): Complete API documentation
- [Tutorials](https://yoursite.com/tutorials): Step-by-step guides

## Features

- [Dashboard](https://yoursite.com/dashboard): Main application interface
- [Analytics](https://yoursite.com/analytics): Track your metrics
- [Settings](https://yoursite.com/settings): Configure your account

## Optional

- [Changelog](https://yoursite.com/changelog): Recent updates and improvements
- [Blog](https://yoursite.com/blog): Articles and insights
`;

  return new Response(content, {
    headers: {
      'Content-Type': 'text/plain; charset=utf-8',
    },
  });
}
This approach gives you complete control over the content while keeping everything type-safe and integrated with your Next.js application. The route handler runs on the server, which means you can fetch data from databases, read from your file system, or call external APIs to build your llms.txt content dynamically.
For sites with frequently changing content, you might want to build the file content at request time. Perhaps you're pulling blog posts from a CMS or generating documentation from your codebase. The beauty of this approach is that you can do all of that inside your GET function.
export async function GET() {
  // Fetch your latest blog posts from your CMS
  const posts = await fetch('https://your-cms.com/api/posts').then(r => r.json());

  let content = `# My Application\n\n> Your site description\n\n## Blog Posts\n`;
  posts.forEach(post => {
    content += `- [${post.title}](https://yoursite.com/blog/${post.slug}): ${post.excerpt}\n`;
  });

  return new Response(content, {
    headers: {
      'Content-Type': 'text/plain; charset=utf-8',
    },
  });
}
If you prefer to generate the file at build time rather than on every request, Next.js supports that too. Add export const dynamic = 'force-static' to your route file, and Next.js will generate the file once during your build process. This is perfect for content that doesn't change frequently.
export const dynamic = 'force-static';

export async function GET() {
  // This runs once at build time; generateLLMSContent() is
  // whatever helper you write to assemble the file body
  const content = generateLLMSContent();
  return new Response(content, {
    headers: {
      'Content-Type': 'text/plain; charset=utf-8',
    },
  });
}
The trade-off here is freshness versus performance. Build-time generation means faster responses since the content is pre-rendered, but updates require a new deployment. Runtime generation means your content is always current, but each request does a bit more work. For most applications, especially those deployed on platforms like Vercel, the performance difference is negligible.
Next.js Pages Router Implementation
If your project uses the older Pages Router (which is still fully supported and widely used), the implementation looks a bit different. Instead of creating a route handler in the app directory, you'll work with the pages/api directory.
Create a file at pages/api/llms.txt.ts and export an API handler function.
import type { NextApiRequest, NextApiResponse } from 'next';

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  const content = `# My Application

> Brief description of what your app does

## Key Pages

- [Home](https://yoursite.com/): Welcome page
- [Features](https://yoursite.com/features): What we offer
- [Pricing](https://yoursite.com/pricing): Our plans

## Documentation

- [Quick Start](https://yoursite.com/docs/quickstart): Get started in 5 minutes
- [API Docs](https://yoursite.com/docs/api): Technical reference
`;

  res.setHeader('Content-Type', 'text/plain; charset=utf-8');
  res.status(200).send(content);
}
This handler works reliably, but it serves the file from /api/llms.txt rather than the root path where AI systems expect to find it. You can fix the URL with a Next.js rewrite.
Open your next.config.js file and add a rewrite rule that maps /llms.txt to your API route.
module.exports = {
  async rewrites() {
    return [
      {
        source: '/llms.txt',
        destination: '/api/llms.txt',
      },
    ];
  },
};
Now when someone visits yoursite.com/llms.txt, Next.js silently serves your API route while keeping the clean URL. This configuration works on any platform that runs the Next.js server; note that rewrites aren't applied in a purely static export.
The Static File Approach for Next.js
Sometimes the simplest solution is the best one. Both App Router and Pages Router applications can serve static files from the public directory. If your llms.txt content doesn't need to be generated dynamically, you can simply create a file at public/llms.txt with your content.
Next.js automatically serves everything in the public directory at the root level. A file at public/llms.txt becomes accessible at yoursite.com/llms.txt. No configuration needed, no code to write, no routes to create.
This works perfectly for applications where the llms.txt content is curated manually or generated by a build script. You might have a Node script that runs during your build process, pulls content from various sources, and writes out the llms.txt file to your public directory before Next.js compiles everything.
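Wiring that up takes nothing more than a prebuild hook in package.json, which npm runs automatically before build. A minimal sketch, assuming the script lives at scripts/generate-llms-txt.js (the same kind of script shown in the React section below):

{
  "scripts": {
    "prebuild": "node scripts/generate-llms-txt.js",
    "build": "next build",
    "start": "next start"
  }
}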
The downside is that updates require code changes and redeployment. You can't pull in fresh content at runtime or serve different versions based on user context. But for many applications, especially those with relatively stable content structures, this simplicity is exactly what you want.
Standard React Applications (CRA, Vite, etc.)
React applications built with tools like Create React App or Vite don't have server-side capabilities built in. These are purely client-side applications that need different approaches for serving static files.
For Create React App, the public folder works similarly to Next.js. Place your llms.txt file in the public directory, and the build process will copy it to the root of your build output. When deployed, it's accessible at the root URL without any additional configuration.
my-react-app/
├── public/
│   ├── llms.txt   ← Your file goes here
│   └── favicon.ico
└── src/
    ├── App.tsx
    └── index.tsx
Vite follows the same pattern. The public directory in a Vite project serves static assets that are copied to the build output without processing. Create your llms.txt file there, and it'll be available at the root path after deployment.
The catch with this approach is that your content is completely static. There's no way to generate it dynamically based on your React components, routes, or application data. For applications with complex content structures that change frequently, you'll need a build-time script.
Build-Time Generation for React Apps
Many React applications benefit from generating their llms.txt file during the build process. This lets you analyze your application structure, enumerate your routes, and create a comprehensive file that stays in sync with your actual application.
You can add a Node script to your package.json that runs before your build command. This script can read your route configuration, scan your content files, or even make API calls to external services.
// scripts/generate-llms-txt.js
const fs = require('fs');
const path = require('path');

function generateLLMSTxt() {
  // Read your route configuration or scan content
  const routes = [
    { path: '/', title: 'Home', description: 'Welcome to our app' },
    { path: '/about', title: 'About', description: 'Learn about us' },
    { path: '/contact', title: 'Contact', description: 'Get in touch' },
  ];

  let content = `# My React Application\n\n`;
  content += `> A modern single-page application\n\n`;
  content += `## Pages\n`;
  routes.forEach(route => {
    content += `- [${route.title}](https://yoursite.com${route.path}): ${route.description}\n`;
  });

  // Write to the public directory so the build copies it to the output root
  fs.writeFileSync(
    path.join(__dirname, '../public/llms.txt'),
    content,
    'utf-8'
  );
  console.log('llms.txt generated successfully');
}

generateLLMSTxt();
Update your package.json to run this script before building.
{
  "scripts": {
    "prebuild": "node scripts/generate-llms-txt.js",
    "build": "react-scripts build",
    "start": "react-scripts start"
  }
}
Now every time you run npm run build, your llms.txt file gets regenerated based on your current application structure. This keeps your AI-readable content in perfect sync with your actual application without manual updates.
Deployment Platform Considerations
Where you deploy your application affects how you might implement llms.txt. Different platforms have different strengths and quirks.
Vercel Deployment
Vercel handles Next.js deployments beautifully since they're the creators of the framework. Both App Router route handlers and Pages Router API routes work perfectly. Static files in the public directory are served with excellent caching and CDN distribution.
If you want to add custom headers to your llms.txt file served from the public directory, create a vercel.json configuration file in your project root.
{
  "headers": [
    {
      "source": "/llms.txt",
      "headers": [
        {
          "key": "Content-Type",
          "value": "text/plain; charset=utf-8"
        },
        {
          "key": "Cache-Control",
          "value": "public, max-age=3600, must-revalidate"
        }
      ]
    }
  ]
}
This ensures your file is served with the correct MIME type and sensible caching directives. The must-revalidate directive tells caches that once the copy is older than max-age (an hour here), they must fetch a fresh version from the origin rather than continuing to serve the stale one.
Netlify Deployment
Netlify works great for both Next.js and standard React applications. For static files, the public directory approach works seamlessly. If you want custom headers, create a _headers file in your public directory.
/llms.txt
  Content-Type: text/plain; charset=utf-8
  Cache-Control: public, max-age=3600
Netlify also supports redirects and rewrites through a _redirects file, which can be useful if you need to map your llms.txt file to a different internal path.
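For example, a single rule in _redirects can serve /llms.txt from a Netlify Function; the destination path below is a hypothetical example:

/llms.txt  /.netlify/functions/llms-content  200

The 200 status code makes this a rewrite rather than a redirect, so the browser URL stays at /llms.txt while Netlify serves the function's response.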
Other Platforms
Most modern hosting platforms support serving static files from a designated directory. Cloudflare Pages, AWS Amplify, and Firebase Hosting all handle files in your build output directory correctly. The key is ensuring your build process outputs the llms.txt file to the right location.
Dynamic Content Generation Strategies
One of the most powerful aspects of implementing llms.txt in Next.js is the ability to generate content dynamically. Let's look at practical patterns for common scenarios.
Pulling Content from a CMS
Many Next.js applications use headless CMS platforms like Contentful, Sanity, or Strapi. You can query your CMS during llms.txt generation to ensure your file always reflects your published content.
// app/llms.txt/route.ts
export const revalidate = 3600; // Revalidate every hour

export async function GET() {
  // Fetch from your CMS
  const response = await fetch('https://your-cms.com/api/content', {
    headers: {
      'Authorization': `Bearer ${process.env.CMS_API_KEY}`
    }
  });
  const data = await response.json();

  let content = `# ${data.siteName}\n\n> ${data.siteDescription}\n\n`;

  // Add blog posts
  content += `## Blog\n`;
  data.posts.forEach(post => {
    content += `- [${post.title}](${post.url}): ${post.excerpt}\n`;
  });

  // Add documentation
  content += `\n## Documentation\n`;
  data.docs.forEach(doc => {
    content += `- [${doc.title}](${doc.url}): ${doc.description}\n`;
  });

  return new Response(content, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' }
  });
}
The revalidate export tells Next.js to cache the generated content for an hour before regenerating it. This balances freshness with performance, ensuring you're not hammering your CMS with requests on every llms.txt access.
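If you'd rather scope the caching to the CMS call itself instead of the whole route, the App Router also accepts a revalidation window on individual fetch calls. A minimal sketch of the same request using that option:

export async function GET() {
  // Cache this specific fetch for an hour; the rest of the
  // handler works exactly as in the example above
  const response = await fetch('https://your-cms.com/api/content', {
    headers: { 'Authorization': `Bearer ${process.env.CMS_API_KEY}` },
    next: { revalidate: 3600 }
  });
  const data = await response.json();

  const content = `# ${data.siteName}\n\n> ${data.siteDescription}\n`;
  return new Response(content, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' }
  });
}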
Reading from Your File System
For documentation sites or blogs that store content as Markdown files in your repository, you can read directly from the file system. This is particularly elegant because your llms.txt automatically stays synchronized with your actual content files.
import fs from 'fs';
import path from 'path';
import matter from 'gray-matter';

export const dynamic = 'force-static';

export async function GET() {
  const docsDirectory = path.join(process.cwd(), 'content/docs');
  const filenames = fs.readdirSync(docsDirectory).filter(name => name.endsWith('.md'));

  let content = `# Documentation\n\n> Comprehensive guides and references\n\n## Guides\n`;
  filenames.forEach(filename => {
    const filePath = path.join(docsDirectory, filename);
    const fileContents = fs.readFileSync(filePath, 'utf8');
    const { data } = matter(fileContents); // Parse the frontmatter
    const slug = filename.replace(/\.md$/, '');
    content += `- [${data.title}](https://yoursite.com/docs/${slug}): ${data.description}\n`;
  });

  return new Response(content, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' }
  });
}
This pattern works beautifully for sites built with Contentlayer, next-mdx-remote, or similar Markdown-based content solutions.
Creating llms-full.txt Alongside Your Index
Many organizations provide both the standard llms.txt index and an llms-full.txt file containing complete content. The implementation follows the same patterns we've covered, just with different content generation logic.
For Next.js App Router, create app/llms-full.txt/route.ts alongside your llms.txt route. The full version should include actual page content rather than just descriptions.
export async function GET() {
  let content = `# Documentation - Full Content\n\n`;

  // getAllDocs() is your own content loader; it should return
  // each document's title and complete body text
  const docs = await getAllDocs();
  docs.forEach(doc => {
    content += `## ${doc.title}\n\n`;
    content += `${doc.content}\n\n`;
    content += `---\n\n`;
  });

  return new Response(content, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' }
  });
}
The full file can get quite large, so consider implementing compression or chunking strategies if you're serving extensive documentation. Most AI systems can handle large context windows, but there are practical limits to file size over HTTP.
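Since you control the route handler, one lightweight safeguard is to log the generated size so you notice when the file grows past a comfortable threshold. A minimal sketch to place before the Response is returned; the 1 MB limit is an arbitrary example:

// Inside your GET handler, before returning the response
const sizeInBytes = Buffer.byteLength(content, 'utf8');
if (sizeInBytes > 1_000_000) {
  console.warn(`llms-full.txt is ${sizeInBytes} bytes; consider splitting or trimming it`);
}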
Testing Your Implementation
Once you've implemented llms.txt in your Next.js or React application, thorough testing ensures everything works as expected in development, staging, and production environments.
Start by running your application locally and visiting http://localhost:3000/llms.txt in your browser. You should see plain text formatted according to the standard, not JSON, HTML, or a 404 error. Check your browser's network inspector to verify the Content-Type header is set to text/plain.
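If you want to automate that check, a small Node script (18+, where fetch is global) can verify the status, header, and basic shape of the response. This is a hypothetical helper, not part of any framework:

// scripts/check-llms-txt.mjs — hypothetical verification helper
async function checkLlmsTxt(baseUrl) {
  const res = await fetch(`${baseUrl}/llms.txt`);
  if (!res.ok) {
    throw new Error(`Unexpected status: ${res.status}`);
  }
  const type = res.headers.get('content-type') ?? '';
  if (!type.includes('text/plain')) {
    throw new Error(`Unexpected Content-Type: ${type}`);
  }
  const body = await res.text();
  if (!body.trimStart().startsWith('# ')) {
    console.warn('Warning: file does not start with an H1 title');
  }
  console.log('llms.txt looks good');
}

// Point this at localhost, staging, or production
await checkLlmsTxt(process.argv[2] ?? 'http://localhost:3000');

Run it with node scripts/check-llms-txt.mjs https://staging.yoursite.com to test each environment as you deploy.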
Deploy to a staging environment and test again. Sometimes local development and production builds behave differently, especially with static file handling and API routes. Verify the file is accessible at your production domain root.
Try uploading your llms.txt file to an AI assistant like Claude or ChatGPT to see how well it understands your content structure. Ask questions about your site that the file should help answer. This real-world testing often reveals areas where descriptions could be clearer or where important content is missing.
Use online validators if available to check your file's format against the standard. While the specification is relatively forgiving, following best practices helps ensure consistent behavior across different AI systems.
Maintaining Your llms.txt Over Time
The initial implementation is just the beginning. Keeping your llms.txt file current and accurate requires ongoing attention, though the amount of maintenance depends on your implementation approach.
If you've implemented dynamic generation that pulls from your CMS or file system, maintenance is largely automatic. Your file updates as your content changes. The main consideration is periodically reviewing the generated output to ensure it accurately represents your site's structure and that descriptions remain clear and helpful.
For static implementations or build-time generation, establish a review process. Perhaps every time you add a major new section to your site, you update your generation script or static file accordingly. Some teams add llms.txt updates to their definition of done for new features.
Watch for broken links as your site evolves. Routes change, pages get reorganized, and old content gets archived. Your llms.txt should reflect these changes. If you're generating content dynamically, add checks that validate URLs are still accessible.
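One way to automate that is a small script that downloads your live llms.txt, extracts the Markdown links, and HEAD-requests each one. A minimal sketch, assuming your links use standard Markdown syntax and Node 18+ for the global fetch:

// scripts/check-links.mjs — hypothetical link checker
const url = process.argv[2] ?? 'http://localhost:3000/llms.txt';
const content = await (await fetch(url)).text();
const links = [...content.matchAll(/\]\((https?:\/\/[^)\s]+)\)/g)].map(m => m[1]);

for (const link of links) {
  const res = await fetch(link, { method: 'HEAD' });
  if (!res.ok) {
    console.warn(`Broken link (${res.status}): ${link}`);
  }
}
console.log(`Checked ${links.length} links`);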
Consider how your content organization serves AI systems. As you learn more about how successful companies structure their llms.txt files, you might find better ways to organize your own content for maximum clarity.
Looking Forward
As AI systems become more sophisticated in how they discover and use web content, the implementation patterns for llms.txt in JavaScript applications will likely evolve. Next.js and other frameworks might introduce built-in support for these files, similar to how they handle sitemaps and robots.txt.
The fundamental patterns we've covered will remain relevant regardless of framework updates. Whether you're serving static files, generating content dynamically, or pulling from external sources, these approaches give you the flexibility to maintain accurate AI-readable content as your application grows.
For now, implementing llms.txt in your Next.js or React application is straightforward with the patterns we've covered. Choose the approach that matches your content management strategy and technical constraints, and you'll have a robust solution that helps AI systems understand and represent your application accurately.
Ready to get started? Our tool can help you generate a properly formatted llms.txt file that you can drop into your Next.js or React project.
Questions about implementation? Check our FAQ or get in touch with our team. We're here to help you make your JavaScript applications AI-friendly.