
Pages Blocked by robots.txt (GSC Confirmed)

Google Search Console confirms that important pages are blocked by robots.txt rules, preventing crawling and indexing.

Issue ID: CRAWL-GSC-BLOCKED-ROBOTS-001
Severity: Major
Impact: High
Effort: S

Use this article when

  • You need deeper remediation guidance than the issue card can show.
  • You want CMS-specific steps before handing the fix to a developer.
  • You want a repeatable re-check path after shipping the change.

What this issue is

robots.txt tells crawlers which URL paths they may fetch. When a Disallow rule matches an important page, Googlebot cannot crawl it, and Google Search Console reports the page under "Blocked by robots.txt" in the Page indexing report. The rules usually exist for a good reason (staging areas, admin paths, faceted navigation), but an over-broad pattern can sweep in pages you want indexed, as in the example below.
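
As a minimal illustration (the paths here are hypothetical), a Disallow rule that is broader than intended can match public pages as well as the private ones it was written for:

    # Hypothetical robots.txt: the second rule blocks every URL under
    # /products/, including pages that should be crawled and ranked.
    User-agent: *
    Disallow: /admin/
    Disallow: /products/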

Why it matters

Pages Googlebot cannot crawl cannot have their content read or evaluated: titles, copy, structured data, and internal links on them are all invisible to Google. Blocked pages typically drop out of the index, or remain indexed without a snippet (GSC's "Indexed, though blocked by robots.txt" state), which hurts both rankings and click-through from search results.

How we detect it

  • FreeSiteAudit flags this issue when the rule for CRAWL-GSC-BLOCKED-ROBOTS-001 fails and the crawl evidence points to blocking rules in the site's robots.txt.
  • You can usually confirm this by fetching your site's /robots.txt and checking whether any Disallow rule matches the affected URLs, as in the sketch below.
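
A quick way to confirm the block outside GSC is Python's standard-library urllib.robotparser. The domain and page URL below are placeholders for your own; note that robotparser approximates Google's matcher but does not support all its extensions (wildcards, for example):

    from urllib.robotparser import RobotFileParser

    # Placeholder URLs: substitute your domain and one of the affected pages.
    ROBOTS_URL = "https://www.example.com/robots.txt"
    PAGE_URL = "https://www.example.com/products/widget"

    parser = RobotFileParser()
    parser.set_url(ROBOTS_URL)
    parser.read()  # fetch and parse the live robots.txt

    # False means the current rules block Googlebot from crawling this page.
    print("Googlebot allowed:", parser.can_fetch("Googlebot", PAGE_URL))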

Evidence examples

Fetch your site's /robots.txt and look for Disallow rules whose path prefix matches the affected URLs. In GSC, the affected pages are listed under "Blocked by robots.txt" in the Page indexing report, and URL Inspection shows "Crawl allowed? No" for each of them.

How to fix it

  1. Update robots.txt to allow crawling of the important pages: remove or narrow the matching Disallow rules, or add a more specific Allow rule.
  2. Test the changes before deploying, for example with a robots.txt testing tool or the local parser check sketched below; after deploying, GSC's robots.txt report shows which version Google has fetched.
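
Before deploying, you can also validate a proposed robots.txt against the affected URLs locally. This is a sketch, not FreeSiteAudit's own check: the rules and URLs are hypothetical, and the same urllib.robotparser caveat about Google's parser extensions applies:

    from urllib.robotparser import RobotFileParser

    # Hypothetical corrected rules: /admin/ stays blocked; the overly broad
    # "Disallow: /products/" is removed, with an explicit Allow for clarity.
    PROPOSED_ROBOTS = """\
    User-agent: *
    Disallow: /admin/
    Allow: /products/
    """

    # Hypothetical pages that GSC reported as blocked.
    AFFECTED_URLS = [
        "https://www.example.com/products/widget",
        "https://www.example.com/products/gadget",
    ]

    parser = RobotFileParser()
    parser.parse(PROPOSED_ROBOTS.splitlines())

    for url in AFFECTED_URLS:
        verdict = "allowed" if parser.can_fetch("Googlebot", url) else "BLOCKED"
        print(verdict, url)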

How to re-check it

  • Confirm affected pages show "Crawl allowed? Yes" in GSC URL Inspection.
  • In the Page indexing report, open the "Blocked by robots.txt" issue and click "Validate fix" so Google re-checks the affected URLs.

Related tools

This issue is best verified with the full FreeSiteAudit crawl rather than a single-point mini tool.