Easily generate robots.txt files for search engine crawlers
By default, the generated robots.txt allows all crawlers to access the entire site:

```
# robots.txt generated by Blog Tools
# https://example.com/robots.txt
User-agent: *
Allow: /
```

When AI crawler blocking is enabled, known AI crawlers such as GPTBot and ChatGPT-User will be blocked.
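As a sketch of what those blocking rules could look like (the exact list of user agents depends on the tool's configuration; GPTBot and ChatGPT-User are OpenAI's documented crawler user agents):

```
# Block AI crawlers (illustrative list, not the tool's exact output)
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

# All other crawlers may access everything
User-agent: *
Allow: /
```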
robots.txt is a plain-text file, served from the root of your site, that tells search engine crawlers which parts of your website they may crawl. A properly configured robots.txt is fundamental to SEO.
A well-configured robots.txt lets you:

- Control which pages search engines can access.
- Save server resources by preventing unnecessary crawling.
- Block paths you don't want exposed, such as admin pages (see the sketch after this list).
- Control access from AI crawlers like GPTBot and ChatGPT.
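For example, a minimal sketch that blocks a private area while keeping the rest of the site crawlable (the /admin/ path is a hypothetical placeholder; adjust it to your site's actual paths):

```
User-agent: *
# Hypothetical private area you don't want indexed
Disallow: /admin/
# Everything else stays crawlable
Allow: /
```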
robots.txt is only a recommendation: well-behaved crawlers follow it, but malicious crawlers may ignore it. Protect sensitive information with proper authentication, not robots.txt alone.
| Directive | Description | Example |
|---|---|---|
| User-agent | The crawler the following rules apply to | `User-agent: Googlebot` |
| Disallow | A path crawlers are not allowed to access | `Disallow: /admin/` |
| Allow | A path crawlers may access, overriding a broader Disallow | `Allow: /public/` |
| Sitemap | The location of your sitemap | `Sitemap: https://example.com/sitemap.xml` |
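Putting the directives together, a sketch of a per-crawler configuration (the paths and sitemap URL are placeholders). Crawlers follow the most specific User-agent group that matches them, so Googlebot uses only its own group here:

```
# Rules for Google's crawler only
User-agent: Googlebot
Disallow: /admin/
Allow: /public/

# Default rules for every other crawler
User-agent: *
Allow: /

# Sitemap location applies site-wide, independent of any group
Sitemap: https://example.com/sitemap.xml
```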