Clicking on "Check URL blocking in robots.txt
Posted: Sun Jan 19, 2025 9:05 am
Clicking on an error opens a report with examples of pages the error applies to. The list also shows the date each page was last crawled. Clicking on a specific example opens details and options for solving the problem. In this case, you can check whether this URL is indexed and whether it is blocked in robots.txt.
" will open the robots.txt file checking tool of the old console version. belgium email list The directive prohibiting indexing of this URL will be highlighted in red. Since this is just a warning and not a critical error, you can leave it as is. But if you really need to block the search engine's access to the page, you can do the following: Add the meta tag "noindex,follow" to the page code.
Use the "Removal" tool in GSC, which we'll talk about a little later. If you look at the excluded pages (gray color on the graph), the reasons for their removal from the index can be very different. In my case, the picture is as follows: Let's take a closer look at some of them: Blocked in robots.txt file . This type of excluded pages is the largest, as all service pages, duplicates, category pages, etc.
" will open the robots.txt file checking tool of the old console version. belgium email list The directive prohibiting indexing of this URL will be highlighted in red. Since this is just a warning and not a critical error, you can leave it as is. But if you really need to block the search engine's access to the page, you can do the following: Add the meta tag "noindex,follow" to the page code.
Use the "Removal" tool in GSC, which we'll talk about a little later. If you look at the excluded pages (gray color on the graph), the reasons for their removal from the index can be very different. In my case, the picture is as follows: Let's take a closer look at some of them: Blocked in robots.txt file . This type of excluded pages is the largest, as all service pages, duplicates, category pages, etc.