The Past, Present, And Future Of Robots.txt

A robots.txt file tells search engine crawlers which URLs the crawler can access on a website. Mainly, this is done so that your site doesn't get overloaded with requests. What robots.txt doesn't do is keep a web page out of Google. To do that, you would block indexing with noindex or password-protect the page.
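As a sketch of how those rules are interpreted, Python's standard-library `urllib.robotparser` can evaluate a robots.txt file against specific URLs. The rules and URLs below are hypothetical examples, not from the episode:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block all crawlers from /private/, allow the rest
rules = """
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A compliant crawler would skip the disallowed path
print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("*", "https://example.com/index.html"))         # True
```

Note that this only governs crawling; a disallowed page can still appear in search results if other sites link to it, which is why noindex or password protection is the right tool for keeping pages out of Google.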

In this episode of Search Off the Record from Google Search Central, Gary, Lizzi, and Martin from the Google Search team discuss robots.txt with David Price, Product Counsel for Google.

They talk about human-initiated versus bot-initiated crawl requests, the robots.txt file itself, and more.

Scott Davenport
