Robots.txt Generator

    Create properly formatted robots.txt files to control how search engines crawl your website. 100% free and privacy-friendly - all processing happens in your browser.

    Quick Templates

    Start with a pre-configured template

    User Agents

    Select which bots to configure

    Disallowed Paths

    Specify paths to block from crawling

    Additional Settings

    Crawl-delay: time in seconds to wait between requests (not supported by all bots)
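
    For example, asking one specific bot to wait ten seconds between requests looks like this (the bot name and delay value are only illustrative):

    User-agent: Bingbot
    Crawl-delay: 10

    Bing and Yandex respect Crawl-delay, while Googlebot ignores it.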

    Generated robots.txt

    Copy and save this file to your website root directory

    How to Use

    1. Download the file

    Click the "Download" button to save the robots.txt file.

    2. Upload to your website

    Place the robots.txt file in the root directory of your website (e.g., https://example.com/robots.txt).

    3. Test your robots.txt

    Use Google Search Console's robots.txt report (which replaced the robots.txt Tester) to validate your file.

    4. Monitor crawling

    Check your server logs and Search Console to ensure bots respect your directives.

    Best Practices

    ✓ Always include your sitemap URL

    ✓ Block admin areas and private directories

    ✓ Don't block CSS/JS files (search engines need them to render pages, so blocking them can hurt SEO)

    ✓ Test changes before deploying

    ✓ Keep the file simple and well-commented (see the example after this list)

    ✗ Don't rely on robots.txt to protect sensitive data; the file is publicly readable and non-compliant bots can ignore it
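
    Putting these practices together, a minimal, commented file might look like this (the blocked path and sitemap URL are placeholders to adapt to your site):

    # Block the admin area for all crawlers
    User-agent: *
    Disallow: /admin/

    # Point crawlers at the sitemap
    Sitemap: https://example.com/sitemap.xml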

    What is a robots.txt File?

    A robots.txt file is a text file that website owners create to instruct search engine robots (also known as crawlers or spiders) how to crawl and index pages on their website. It's part of the Robots Exclusion Protocol (REP), a group of web standards that regulate how robots crawl the web.

    Why Do You Need a robots.txt File?

    • Control Crawl Budget: Prevent search engines from wasting resources on unimportant pages
    • Prevent Duplicate Content: Keep crawlers off duplicate or near-duplicate pages
    • Keep Private Sections Private: Discourage indexing of admin areas, staging sites, or internal search results
    • Manage Server Load: Control how frequently bots can crawl your site
    • Improve SEO: Direct crawlers to your most important content

    Common Use Cases

    WordPress Sites

    Block /wp-admin/, /wp-includes/, and plugin directories while allowing admin-ajax.php for functionality.
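
    A sketch of such a configuration, assuming a default WordPress layout (adjust the paths if your install differs):

    User-agent: *
    Disallow: /wp-admin/
    Disallow: /wp-includes/
    Disallow: /wp-content/plugins/
    Allow: /wp-admin/admin-ajax.php

    For crawlers such as Googlebot, the longest matching rule wins, so the specific Allow overrides the broader Disallow on /wp-admin/.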

    E-commerce Sites

    Prevent indexing of cart, checkout, and filtered/sorted product pages that create duplicate content.
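
    For instance, with placeholder cart and checkout paths and query parameters (the exact names depend on your platform; the * wildcard is supported by Google and Bing but not by every crawler):

    User-agent: *
    Disallow: /cart/
    Disallow: /checkout/
    Disallow: /*?sort=
    Disallow: /*?filter=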

    Development Sites

    Block all crawlers from staging or development environments to prevent accidental indexing.
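
    Blocking every compliant crawler from an entire staging site takes two lines:

    User-agent: *
    Disallow: /

    Remember that this only discourages well-behaved bots; use authentication if the environment must stay truly private.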

    Content Sites

    Block search result pages, user profiles, and other dynamically generated pages with thin content.
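
    For example, assuming internal search lives under /search/ and user profiles under /user/ (substitute your own URL structure):

    User-agent: *
    Disallow: /search/
    Disallow: /user/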

    robots.txt Syntax Guide

    User-agent: *                               # Applies to all bots
    Disallow: /admin/                           # Block /admin/ directory
    Allow: /admin/public/                       # But allow this subdirectory
    Sitemap: https://example.com/sitemap.xml    # Sitemap location