Resolving rogue robots directives

In this episode of SEO Fairy Tales, Jason Stevens, a Senior Performance Media Manager at Google, joins Martin Splitt to share how his team audited a site with bad search snippets and discovered that a robots.txt file was preventing the site from being crawled. Learn the steps you can take to investigate changes in website traffic or snippets using Search Console and Google’s suite of tools.
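For illustration only (the episode doesn’t reproduce the exact file), a single overly broad directive is all it takes to keep compliant crawlers off an entire site. A hypothetical robots.txt of the kind described:

    # Hypothetical robots.txt, e.g. left over from a staging setup
    # The blanket rule below tells every compliant crawler to fetch nothing
    User-agent: *
    Disallow: /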

Chapters
0:00 – Intro
0:27 – Bad snippets
1:18 – Researching the root cause
2:11 – Understanding the updated directive
3:30 – It isn’t always that simple
4:17 – What to look for?
5:36 – Other things to look out for
6:55 – Reports and tools
8:56 – Did it work?
9:43 – Setting up for sustainable success
11:05 – robots.txt – friend or foe?
14:01 – Wrap up

Watch more episodes of SEO Fairy Tales → https://goo.gle/SEOFairyTales
Subscribe to Google Search Central Channel → https://goo.gle/SearchCentral

#TechnicalSEO #Snippets #Crawling

4 thoughts on “Resolving rogue robots directives”

  1. I have a very awkward issue: Google is crawling a type of URL on our site, which is expected, but when I inspect those URLs, Search Console reports them as 'blocked by robots.txt'.

    What could be the reason here?
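One way to sanity-check a situation like this locally is Python’s standard-library robotparser; the domain and URLs below are placeholders, not the commenter’s site, and Python’s matcher doesn’t cover every extension Google supports, so confirm the result with the URL Inspection tool in Search Console:

    from urllib import robotparser

    # Sketch: see which URLs the live robots.txt blocks for Googlebot.
    # example.com and the sample paths are placeholders.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    for url in [
        "https://example.com/products/widget-123",
        "https://example.com/products/widget-123?sort=price",
    ]:
        allowed = rp.can_fetch("Googlebot", url)
        print(url, "allowed" if allowed else "blocked by a robots.txt rule")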
