robots.txt Blocks Render-Critical Assets
The robots.txt file disallows crawling of CSS, JavaScript, or other assets required to render the page. This prevents search engines from seeing the page as users do, which can harm indexing and ranking.
Use this article when
- You need deeper remediation guidance than the issue card can show.
- You want CMS-specific steps before handing the fix to a developer.
- You want a repeatable re-check path after shipping the change.
What this issue is
When robots.txt disallows crawling of CSS, JavaScript, or other render-critical assets, Googlebot can fetch the page's HTML but cannot load the resources needed to render it. The version of the page that search engines render and index can then differ substantially from what users actually see.
Why it matters
Modern search engines render pages much like a browser does before indexing them. If render-critical assets are blocked, layout, JavaScript-injected content, and mobile-friendliness signals can all be invisible to the crawler. That weakens how clearly search engines understand the page, can harm indexing and ranking, and can make the result less persuasive in search listings.
How we detect it
- FreeSiteAudit flags this issue when the CRAWL-ROBOTS-RENDER-001 rule fails and the page evidence points to HTTP headers.
- You can usually confirm this by fetching /robots.txt on your site directly, or by reviewing the crawl and indexing settings inside your CMS.
Evidence examples
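A blocked-asset configuration often looks like the robots.txt excerpt below. The directory names here are illustrative, not taken from any specific site:

```
User-agent: *
Disallow: /assets/
Disallow: /wp-includes/
Disallow: /*.js$
```

Each of these rules can stop Googlebot from fetching stylesheets or scripts the page needs in order to render.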
How to fix it
1. Remove Disallow rules that block CSS and JS files needed for rendering.
2. Allow Googlebot access to all resources required for page rendering.
3. Use Google Search Console's URL Inspection tool to verify rendering is not blocked.
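After removing the blocking rules, a minimal robots.txt that keeps a private area disallowed while explicitly allowing render-critical assets might look like this sketch (all paths are illustrative):

```
User-agent: Googlebot
Allow: /assets/css/
Allow: /assets/js/
Disallow: /private/

User-agent: *
Disallow: /private/
```

Explicit Allow rules are only needed when a broader Disallow would otherwise match the asset paths; if nothing blocks those paths, no rule is required.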
How to re-check it
- Use GSC URL Inspection "View Tested Page" to confirm the rendered page matches the live page
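Alongside the GSC check, you can re-check the new rules locally. The sketch below uses Python's standard urllib.robotparser against an inline robots.txt; the domain, paths, and rules are illustrative assumptions, not your real configuration:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt after the fix: assets allowed, /private/ still blocked.
ROBOTS_TXT = """\
User-agent: *
Allow: /assets/
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Render-critical assets should now be fetchable by Googlebot...
print(parser.can_fetch("Googlebot", "https://example.com/assets/site.css"))  # True
# ...while the intentionally blocked section stays blocked.
print(parser.can_fetch("Googlebot", "https://example.com/private/page"))  # False
```

For the live site, replace the inline string with `parser.set_url("https://example.com/robots.txt")` followed by `parser.read()`.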
Related tools
This issue is best verified with the full FreeSiteAudit crawl rather than a single-point mini tool.