Its role is to tell the bot which pages on the site it may access and which it may not, and it does this through the directives mentioned above. (There are also bots that don't care about these directives, but that's part two and a discussion for another time; Googlebot is well behaved and will always take the directives into account!) What does such a file look like? Look, for example, at the robots.txt from Fujix (image: robots txt at fujix optimization). What does this file do in this case?
The first line tells us that the Disallow and Allow directives that follow are addressed to all bots that crawl the site (although some, unfortunately, will not comply). The second line starts with a Disallow directive followed by the name of a folder, /wp-admin/; this means Googlebot will not enter that folder on the site. The third line contains an Allow directive followed by a more specific path inside that folder; this means Googlebot will still, with our consent, access /wp-admin/admin-ajax.php even though it sits inside the disallowed /wp-admin/ folder. And the last line contains the Sitemap directive, along with the address of our main sitemap; this gives Google the information it needs about which pages to crawl on our site (they are all listed in the sitemaps contained in the main sitemap in the image).
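Putting those four lines together, the file described above looks roughly like this; the sitemap URL below is a placeholder, not the actual Fujix address:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://www.example.com/sitemap_index.xml

The "User-agent: *" line is what makes the rules apply to every crawler; a block meant for a single bot would name it instead (for example, "User-agent: Googlebot").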

"OK, so if I put a Disallow in robots.txt, then Google will stop indexing the files (pages) in that folder?" Not really. Yes and no. If those pages have no SEO relevance and Google doesn't reach them from anywhere other than robots.txt, then there's a good chance they won't be indexed. If, however, those are SEO-important pages, such as the homepage, the chances are that they will still end up in the index, because a Disallow rule only blocks crawling, not indexing.
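To make the Disallow/Allow interaction above concrete, here is a small Python sketch of the precedence rule Google documents for robots.txt: the longest matching path wins, and Allow wins a tie. It is a deliberately simplified illustration (no wildcards, no per-bot groups), not Google's actual implementation, and the example paths are only assumptions based on the file described above.

# Simplified sketch of Googlebot-style robots.txt rule resolution:
# the longest (most specific) matching rule wins, and Allow wins ties.
# Wildcards and user-agent grouping are ignored on purpose.

RULES = [
    ("disallow", "/wp-admin/"),
    ("allow", "/wp-admin/admin-ajax.php"),
]

def is_allowed(path, rules=RULES):
    """Return True if `path` may be crawled under the given rules."""
    best_len = -1
    best_verdict = True  # a path matched by no rule is allowed by default
    for kind, rule_path in rules:
        if path.startswith(rule_path):
            more_specific = len(rule_path) > best_len
            tie_broken_by_allow = len(rule_path) == best_len and kind == "allow"
            if more_specific or tie_broken_by_allow:
                best_len = len(rule_path)
                best_verdict = (kind == "allow")
    return best_verdict

print(is_allowed("/wp-admin/options.php"))     # False: blocked by Disallow: /wp-admin/
print(is_allowed("/wp-admin/admin-ajax.php"))  # True: the longer Allow rule wins
print(is_allowed("/contact/"))                 # True: no rule matches at all

Note that Python's standard urllib.robotparser resolves rules in file order rather than by path length, so for a file like this one it can give a different answer than Googlebot would; that is why the sketch implements the length-based rule directly.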