Create properly formatted robots.txt files to control how search engines crawl your website. 100% free and privacy-friendly - all processing happens in your browser.
Start with a pre-configured template
Select which bots to configure
Specify paths to block from crawling
Time to wait between requests, set via the Crawl-delay directive (not supported by all bots; see the example below)
Copy and save this file to your website root directory
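For reference, the crawl delay option above maps to the Crawl-delay directive. A minimal sketch (the 10-second value is only an example; some bots such as Bingbot honor the directive, while Googlebot ignores it):

User-agent: Bingbot
Crawl-delay: 10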
Click the "Download" button to save the robots.txt file.
Place the robots.txt file in the root directory of your website (e.g., https://example.com/robots.txt).
Use the robots.txt report in Google Search Console (the successor to the retired robots.txt Tester) to validate your file.
Check your server logs and Search Console to ensure bots respect your directives.
✓ Always include your sitemap URL
✓ Block admin areas and private directories
✓ Let crawlers access CSS/JS files (blocking them can hurt SEO; see the example after this list)
✓ Test changes before deploying
✓ Keep the file simple and well-commented
✗ Don't rely on robots.txt to protect sensitive data (the file is publicly readable and compliance is voluntary)
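A short fragment that follows these tips (the path and sitemap URL are placeholders for your own site):

# Block only the private admin area; CSS/JS stay crawlable because no rule covers them
User-agent: *
Disallow: /admin/
Sitemap: https://example.com/sitemap.xml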
A robots.txt file is a plain text file that website owners create to tell search engine robots (also known as crawlers or spiders) which parts of their website they may crawl. It's part of the Robots Exclusion Protocol (REP), a group of web standards that regulate how robots crawl the web. The directives are advisory: compliant crawlers honor them, but they do not enforce access control.
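At its simplest, the file pairs a User-agent line with one or more Disallow rules; for example (the path here is just a placeholder):

User-agent: *
Disallow: /private/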
Block /wp-admin/, /wp-includes/, and plugin directories while allowing admin-ajax.php for functionality.
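A typical WordPress snippet along those lines might look like this (paths assume a default WordPress install; adjust for your setup):

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Allow: /wp-admin/admin-ajax.php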
Prevent indexing of cart, checkout, and filtered/sorted product pages that create duplicate content.
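One possible sketch for a store (the exact paths and query parameters depend on your platform and are assumptions here):

User-agent: *
Disallow: /cart/
Disallow: /checkout/
Disallow: /*?sort=
Disallow: /*?filter=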
Block all crawlers from staging or development environments to prevent accidental indexing.
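Blocking everything takes just two lines; serve this file only on the staging domain, never on production:

User-agent: *
Disallow: /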
Block search result pages, user profiles, and other dynamically generated pages with thin content.
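A sketch for that case (the /search/ and /users/ paths and the ?s= parameter are illustrative assumptions):

User-agent: *
Disallow: /search/
Disallow: /users/
Disallow: /*?s=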
User-agent: * # Applies to all bots
Disallow: /admin/ # Block /admin/ directory
Allow: /admin/public/ # But allow this subdirectory
Sitemap: https://example.com/sitemap.xml # Sitemap location