Robots.txt Generator
Build robots.txt files to control how search engines crawl your site
Frequently Asked Questions
What is a robots.txt file?
A robots.txt file is a plain text file placed at the root of your website that tells search engine crawlers which pages or sections of your site they are allowed or not allowed to access. It follows the Robots Exclusion Standard.
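A minimal robots.txt might look like the sketch below. The `/admin/` path is just an illustrative example, not a required rule:

```
# Apply these rules to all crawlers
User-agent: *
# Block crawling of the (hypothetical) admin section
Disallow: /admin/
```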
Where should I place my robots.txt file?
The robots.txt file must be placed at the root of your domain. For example, if your site is https://example.com, the file should be accessible at https://example.com/robots.txt.
Can robots.txt block all search engines?
Yes. Setting User-agent: * with Disallow: / will instruct all well-behaved crawlers to avoid your entire site. However, malicious bots may ignore robots.txt, so it should not be relied upon for security.
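The block-everything rule is just two lines:

```
# Match every crawler and disallow the entire site
User-agent: *
Disallow: /
```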
What is the difference between Allow and Disallow?
Disallow tells crawlers not to access a specific path. Allow explicitly permits access to a path, which is useful for overriding a broader Disallow rule. For example, you can disallow /private/ but allow /private/public-page.
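That override pattern looks like this (the `/private/` paths are illustrative):

```
User-agent: *
# Block the whole /private/ section...
Disallow: /private/
# ...but explicitly permit this one page inside it
Allow: /private/public-page
```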
Should I include my sitemap URL in robots.txt?
Yes, it is a best practice to include a Sitemap directive in your robots.txt file. This helps search engines find and index your sitemap more efficiently, especially if it has not been submitted through other means such as a search console.
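The Sitemap directive takes a full absolute URL and can appear anywhere in the file, independent of any User-agent group:

```
User-agent: *
Disallow:

# Full URL, not a relative path; multiple Sitemap lines are allowed
Sitemap: https://example.com/sitemap.xml
```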