
Robots.txt Tester

Paste robots.txt rules, choose a user agent, and test whether a URL path is allowed or blocked.
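
For example, the sample result below assumes rules like these were pasted (the wildcard user-agent group is an assumption for illustration):

    User-agent: *
    Disallow: /private/
    Allow: /private/preview/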

Crawl result (sample)

Allowed: the most specific matching rule is an Allow.

Winning rule: Allow: /private/preview/

Matching rules:

  • Allow: /private/preview/
  • Disallow: /private/

The Allow directive wins because its rule path, /private/preview/, is longer and therefore more specific than /private/.


About robots.txt testing

A robots.txt file tells crawlers which URL paths they may request. Testing a path against its rules can surface overly broad blocks, user-agent-specific rules, and exceptions before search engine crawlers encounter them.

Common uses

  • Check if Googlebot can crawl a page
  • Review Allow and Disallow conflicts
  • Debug broad blocked directories
  • Test crawler rules before deployment

Notes

This tester resolves conflicts by applying the most specific matching rule, which is a practical model. Real search engine parsers may apply implementation details, such as * and $ wildcard matching, that go beyond this lightweight checker.
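
As an illustration of that model, here is a minimal Python sketch. It assumes plain prefix rules without * or $ wildcards, and the function name and rule format are illustrative, not this tool's actual code:

    def is_allowed(rules, path):
        """rules: list of (directive, rule_path) pairs, e.g. ("Disallow", "/private/").
        Returns True when path is allowed under the most-specific-rule model."""
        matches = [(d, p) for d, p in rules if path.startswith(p)]
        if not matches:
            return True  # no rule matches, so the path is allowed by default
        # Most specific = longest rule path; an Allow wins a length tie,
        # the least-restrictive tie-break described in RFC 9309.
        directive, _ = max(matches, key=lambda m: (len(m[1]), m[0] == "Allow"))
        return directive == "Allow"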

Examples

Check if Googlebot can crawl a path

Paste robots.txt, enter Googlebot as the user agent, and test a path such as /tools/json-formatter.
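
With the is_allowed sketch from the Notes section (and assuming the rules list is the group that applies to Googlebot), that check might look like this; the Disallow rule is an invented example:

    rules = [("Disallow", "/tools/"), ("Allow", "/tools/json-formatter")]
    print(is_allowed(rules, "/tools/json-formatter"))  # True: the Allow rule path is longer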

Debug broad disallow rules

Compare Allow and Disallow rules to see which matching directive controls a URL path.
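
Continuing the sketch, one way to inspect a conflict is to print every matching rule from most to least specific (the rules are the sample values from above; the tested path is an assumed example):

    rules = [("Allow", "/private/preview/"), ("Disallow", "/private/")]
    path = "/private/preview/page"
    matches = [(d, p) for d, p in rules if path.startswith(p)]
    for directive, rule_path in sorted(matches, key=lambda m: len(m[1]), reverse=True):
        print(f"{directive}: {rule_path}")
    # Allow: /private/preview/   (longest rule path, controls the URL)
    # Disallow: /private/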

FAQ

Does robots.txt prevent indexing by itself?

Robots.txt controls crawling, not indexing directly. A blocked URL can still appear in search if it is discovered elsewhere.
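
To keep a page out of the index, the standard mechanism is a noindex directive, which a crawler can only see if robots.txt does not block the page:

    <meta name="robots" content="noindex">

or, as an HTTP response header:

    X-Robots-Tag: noindex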

How are conflicting Allow and Disallow rules handled?

Search engines generally use the most specific matching rule. This tester follows that practical rule of thumb.

Can I test multiple user agents?

Yes. Change the user agent field and run the same path against another bot name.
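
For example, given groups like these (paths are illustrative), a tester following the usual group-matching convention applies only the Googlebot group to Googlebot and the * group to every other bot, so the same path can test differently per agent:

    User-agent: Googlebot
    Disallow: /drafts/

    User-agent: *
    Disallow: /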
