No Index – all important information about this instruction

Being findable in the search engines is a basic requirement for a successful online business. And yet there are situations where it is better to keep specific information out of the search results. In this article you can find out why that is, how exactly it works, and what the “No Index” command has to do with it.

  • What does No Index mean?
  • The procedure of the search engines: crawling and indexing
  • How to implement a no-index statement
  • Why Google no longer allows a No Index for robots.txt


What does No Index mean?

Today there is an incredible amount of information online. To deliver the best results for search queries, the search engines build an index of all relevant content. They store websites, their content, keywords and links, and in turn also remove pages and the like from the index. Incidentally, this is not only the case with Google, but also with other search engines such as Bing or Yahoo, each of which offers its own equivalent of the Search Console.

But there is also content that is irrelevant to the search engines and their queries, or that you simply do not want indexed at all. That is exactly what the noindex statement is for. It belongs to the so-called meta tags and helps you make your site more SEO-friendly: it avoids confusion and frustration among users and thus protects you from a poor SEO ranking. Ultimately, it is about the best possible Google positioning.

No indexing can be advantageous for the following content, for example:

  • Imprint (legal notice)
  • Privacy policy
  • Terms and conditions
  • Thank-you page after ordering
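
For such pages, the noindex statement is placed as a meta tag in the HTML head. A minimal sketch for a thank-you page (the page content itself is only a placeholder):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- Tells crawlers not to add this page to the search index -->
  <meta name="robots" content="noindex">
  <title>Thank you for your order</title>
</head>
<body>
  <p>Thank you! Your order has been received.</p>
</body>
</html>
```

The value name="robots" addresses all crawlers; to target a single search engine, its bot name can be used instead, for example name="googlebot".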

The procedure of the search engines: crawling and indexing

For the search engines to deliver the best and most up-to-date results, the index must of course be kept constantly up to date. Searching through this mass of information by hand is simply no longer possible, which is precisely why so-called robots or bots are used. These scour the internet independently and permanently, following the links and content on websites – a process known as crawling. Each bot can only take in a limited amount of information, which it then stores in the search engine’s register; that step is the indexing. So for your page to be found at all, it has to be indexed.

You can think of it as similar to a library. Gathering the information from the individual books for the relevant search terms corresponds to crawling. Entering the books into the catalog is the indexing, and the finished catalog is the search engine’s register. Only what has been cataloged – indexed – can be found.


Because of the bots’ limited storage capacity, it is to your advantage to make their work as easy as possible. You can do this, on the one hand, by keeping non-relevant pages out of the search, and on the other hand, by deliberately marking links that lead to such pages with a “do not follow” instruction – the “nofollow” command. Nofollow declutters your page for the bots, saves them unnecessary paths and lets the crawl data be used more sensibly, which in turn supports a better SEO rating for you. The combination of the two statements, noindex and nofollow, plays an important role here.
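
On an individual link, the instruction is set via the rel attribute. A short sketch (the target URL is only a placeholder):

```html
<!-- Asks the bot not to follow this link to a non-relevant page -->
<a href="/imprint" rel="nofollow">Imprint</a>
```

This only affects the single link it is attached to; the linked page itself can still be indexed if it is reachable elsewhere.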

For this reason, the bots always first check which rules apply to the respective website. And that is exactly where you place your noindex and nofollow tags.
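
Both statements can also be combined in a single robots meta tag in the page’s head:

```html
<!-- Do not index this page and do not follow any of its links -->
<meta name="robots" content="noindex, nofollow">
```

For non-HTML files such as PDFs, the same rules can be delivered via the X-Robots-Tag HTTP header instead; a noindex rule belongs in the page itself or its HTTP header, not in robots.txt.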


