In this episode of SEO Fairy Tales, Martin Splitt and Jamie Indigo, a Senior Technical SEO at Lumar (formerly DeepCrawl), chat in depth about a JavaScript technical SEO problem: how 3 million product page listings disappeared from Google's index. Find out how Jamie solved her client's problem step by step using Search Console's URL Inspection tool, the robots.txt Tester, and Chrome Developer Tools.
Chapters
0:00 – Introduction
1:23 – Product page problems
2:19 – Starting the investigation
3:04 – Searching for the soft 404 source
4:23 – Inspect URL clues
5:20 – CDN caching
6:31 – API calls breaking
7:25 – Block and load
8:46 – Fixing the issue
10:11 – Wrap up
Watch more episodes of SEO Fairy Tales → https://goo.gle/SEOFairyTales
Subscribe to Google Search Central Channel → https://goo.gle/SearchCentral
#SEO #JavaScript
Thank you so much for sharing your insight! Shine on Superstars!
I loved it and the way the story was told… it made me listen to it until the end.
6:42 What is a fallback error?
Well this was an amazing episode!
From what I can gather, improper use of robots.txt was blocking some scripts that were used to populate product pages, causing them to soft 404. Caching of the robots.txt made QA’ing the fix more difficult. Lesson: stop using robots.txt for crawler control, as it’s almost always a poor solution, and use noindex instead.
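To make that concrete, here is a minimal sketch (the paths are hypothetical, not the actual rules from the episode). A robots.txt block like this stops Googlebot from fetching the scripts and API endpoints the product pages need to render, so the rendered page comes back empty and gets reported as a soft 404:

    User-agent: *
    # Hypothetical rules: blocking the JS bundles and the API the pages render from
    Disallow: /assets/js/
    Disallow: /api/

Removing those Disallow rules lets the pages render again. If a specific page should stay out of the index, a robots meta tag on the page itself does that without blocking rendering:

    <meta name="robots" content="noindex">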
Jamie & Martin two of the nicest, coolest people around. So cool to see a show with them together!
Hello, is there a summary or something?
Any interesting JavaScript stories you’ve encountered? Let us know in the comments below!
Don't forget to subscribe → https://goo.gle/SearchCentral