Everything You Need To Know About The X-Robots-Tag HTTP Header

Search engine optimization, in its most fundamental sense, relies on one thing above all others: search engine crawlers crawling and indexing your site.

However, nearly every website has pages that you don't want included in this exploration.

For instance, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.

Luckily, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.

We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it's a plain text file that lives in your site's root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain directives for specific pages.

Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs search engines to follow the links on a page; nofollow, which tells them not to follow links; and a whole host of others.
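For reference, a meta robots tag goes in a page's <head> section. A minimal illustration combining two of those directives might look like this:

<meta name="robots" content="noindex, nofollow">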

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by crawlers. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as the specific elements on that page.
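In practice, it simply appears alongside the other HTTP response headers for a URL. A trimmed-down response might look something like this (an illustrative sketch, not output from any particular server):

HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow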

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."

While you can apply the same directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag in the HTTP response. The two most common are when:

  • You want to control how your non-HTML files are being crawled and indexed.
  • You want to serve directives site-wide instead of on a page level.

For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple tags within a single HTTP response, or to use a comma-separated list of directives.

Perhaps you don't want a certain page to be cached and want it to be unavailable after a particular date. You can use a combination of the "noarchive" and "unavailable_after" directives to instruct search engine bots to follow these instructions.
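In an Apache configuration or .htaccess file, for example, those two directives could be combined in a single header value (a sketch; the date below is purely a placeholder):

Header set X-Robots-Tag "noarchive, unavailable_after: 01 Jul 2025 00:00:00 GMT"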

Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as apply directives at a larger, global level.

To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?

Here's a handy cheat sheet to explain:

Crawler Directives:

  • Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed and not allowed to crawl.

Indexer Directives:

  • Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.
  • Nofollow – allows you to specify links that should not pass on authority or PageRank.
  • X-Robots-Tag – allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.

The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let's take a look.

Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
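And if you're running Nginx instead of Apache, a comparable location block (a sketch assuming the same image extensions as above) might look like this:

location ~* \.(png|jpe?g|gif)$ {
  add_header X-Robots-Tag "noindex";
}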

Please note that understanding how these directives work and the impact they have on one another is crucial.

For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?

If that URL is blocked from crawling via robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.

If indexing and serving directives are to be followed, then the URLs containing those directives cannot be disallowed from crawling.
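To make that concrete: with a robots.txt rule like the hypothetical one below, crawlers will never request anything under /downloads/, so they will never see an X-Robots-Tag: noindex header (or a meta robots tag) on those URLs, and the URLs can still end up indexed based on links alone:

User-agent: *
Disallow: /downloads/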

How To Check For An X-Robots-Tag

There are a few different methods that can be used to check for an X-Robots-Tag on a site.

The easiest way to check is to install a browser extension that will show you X-Robots-Tag information for the URL, such as the Robots Exclusion Checker.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
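If you're comfortable with the command line, you can also check a single URL by requesting only its headers with curl (the URL below is just a placeholder):

curl -I https://www.example.com/sample.pdf

If the header is set, the output will include a line such as X-Robots-Tag: noindex, nofollow.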

Another approach that can be used at scale, to identify issues on websites with a million pages, is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog report, X-Robots-Tag column, December 2022

Using X-Robots-Tags On Your Website

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It's not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your toolbox.