SEO fix guide
robots.txt Blocks Important Pages
Your robots.txt file contains Disallow rules that may prevent search engines from crawling important pages on your site.
Issue ID: CRAWL-ROBOTS-BLOCKS-001
Severity: major
Impact: High
Effort: S
Use this article when
- You need deeper remediation guidance than the issue card can show.
- You want CMS-specific steps before handing the fix to a developer.
- You want a repeatable re-check path after shipping the change.
What this issue is
Your robots.txt file contains Disallow rules whose path patterns match important pages on your site. Because compliant crawlers respect these rules, the affected pages will not be fetched, so search engines cannot read or evaluate their content.
Why it matters
Pages blocked by robots.txt cannot be crawled, so search engines cannot read their content. Blocked URLs may still appear in search results if other pages link to them, but usually without a useful title or description, and over time they can lose visibility entirely. Keeping important pages crawlable is a prerequisite for them to rank at all.
How we detect it
- FreeSiteAudit flags this issue when the rule for CRAWL-ROBOTS-BLOCKS-001 fails and the crawl evidence shows an affected URL matching a Disallow rule in your robots.txt file.
- You can usually confirm this by opening your site's /robots.txt in a browser and checking whether any Disallow rule matches the affected URLs; a scripted check is sketched below.
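The sketch below uses Python's standard-library urllib.robotparser to report whether a specific URL is blocked. It is a minimal illustration, not FreeSiteAudit's own detection logic; the domain, path, and user agent are placeholders.

```python
# Minimal sketch: check whether one URL is blocked by the live robots.txt.
# The domain, path, and user agent are placeholders; substitute your own.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

url = "https://www.example.com/products/blue-widget"
if parser.can_fetch("Googlebot", url):
    print("Allowed: Googlebot may crawl", url)
else:
    print("Blocked: a Disallow rule prevents Googlebot from crawling", url)
```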
Evidence examples
Fetch your site's robots.txt file directly (it lives at the root of the domain, e.g. https://www.example.com/robots.txt) and look for Disallow rules whose path prefixes match important pages.
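For illustration only, here is a hypothetical robots.txt whose rules are broader than intended; the paths are invented for this example:

```
# Hypothetical robots.txt with overly broad rules
User-agent: *
Disallow: /        # blocks the entire site for every compliant crawler
Disallow: /p       # meant for /private/, but also matches /products/ and /pricing/
```

Disallow values are path prefixes, so a short value can match far more URLs than the one you had in mind.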
How to fix it
1. Review robots.txt Disallow rules for overly broad patterns.
2. Ensure important content pages are not disallowed.
3. Test with the Google Search Console robots.txt tester.
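As a hypothetical before/after for steps 1 and 2, the overly broad rules shown earlier can be narrowed so that only genuinely private areas stay blocked; the paths are placeholders for your own site structure:

```
# Hypothetical corrected robots.txt
User-agent: *
Disallow: /private/   # keep truly private areas blocked
Disallow: /cart/      # non-content pages can stay disallowed
Sitemap: https://www.example.com/sitemap.xml
```

Anything not matched by a Disallow rule is crawlable by default, so you do not need explicit Allow lines for the content pages you are re-opening.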
How to re-check it
- Use the robots.txt tester in Google Search Console to verify important URLs are allowed
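If you maintain a list of must-crawl URLs, you can also re-check them in one pass with a short script. This is a sketch assuming Python's standard library; the URL list, domain, and user agent are placeholders:

```python
# Re-check a list of important URLs against the live robots.txt.
from urllib.robotparser import RobotFileParser

IMPORTANT_URLS = [
    "https://www.example.com/",
    "https://www.example.com/products/",
    "https://www.example.com/pricing/",
]

parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # fetch and parse the live file

blocked = [u for u in IMPORTANT_URLS if not parser.can_fetch("Googlebot", u)]
if blocked:
    print("Still blocked by robots.txt:")
    for u in blocked:
        print(" ", u)
else:
    print("All important URLs are crawlable.")
```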
Related tools
This issue is best verified with the full FreeSiteAudit crawl rather than a single-point mini tool.