Whenever I work on redesigning or altering existing websites, I am careful to preserve whatever SEO that site has already accumulated. After all, search rankings are valuable, fickle, and slow to build.
One way I protect against SEO degradation is to ensure that any draft site I deploy doesn’t pull traffic away from the established site. The easiest way to avoid this accidental siphoning is to tell search engines to keep the draft site out of search results entirely.
Fortunately, all that’s required is one line of HTML placed in the head of each page that should be omitted from search engine indexing:
<meta name="robots" content="noindex" />
This noindex rule tells search engines “do not include this page in search results.” You can think of it like a toggle switch: by default, search engines that crawl a page with no rule declared will index it, and including this line of code flips the page into stealth mode.
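For a fuller picture, here’s a minimal sketch of where that tag lives within a page (the title and body are placeholder content):

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <title>Draft Page</title>
  <!-- Keeps this page out of search results while it's a draft -->
  <meta name="robots" content="noindex" />
</head>
<body>
  <p>Work-in-progress content goes here.</p>
</body>
</html>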
Ideally, it’s best to include this line of code in the initial deployment to keep a draft site or page from being listed from the get-go.
That said, you can also add a noindex rule to a page that’s already been indexed, though it may take days or weeks for search engine crawlers to notice the change and update their indexes. If time is of the essence, you can submit a reindex request to Google through Search Console’s URL Inspection tool.
A related note on the topic of crawlers: for a noindex rule to register with search engines, crawlers have to be able to reach your page in the first place. If some other instruction (e.g. in robots.txt) blocks crawlers from ever fetching the page, they’ll never see the noindex directive, and the page will continue to appear in search results.
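For example, a robots.txt file like this hypothetical one (where /drafts/ is a placeholder path) stops crawlers from fetching those pages at all, so any noindex tag on them goes unseen:

# Hypothetical robots.txt: blocks crawlers from fetching anything
# under /drafts/, so a noindex tag on those pages is never read.
User-agent: *
Disallow: /drafts/

If you’re relying on noindex, it’s worth double-checking that a rule like this isn’t shielding the same pages from crawlers.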
One last thing: don’t forget to remove the noindex rule from the finished page so that search engines know to start including it in search results. Otherwise you’re undoing the work you just did to preserve that ever-precious SEO.
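Once the tag is gone, the default behavior (indexing allowed) takes over. If you like being explicit, you can state that default outright, though this line is optional:

<meta name="robots" content="index, follow" />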