About the SearchmetricsBot

The SearchmetricsBot is a web crawler (also known as a spider): a program that regularly queries websites and prepares the resulting data for analysis and evaluation.


User Agent

The SearchmetricsBot uses a user agent string such as “Mozilla/5.0 (compatible; SearchmetricsBot; http://www.searchmetrics.com/en/searchmetrics-bot/)”. This string makes the SearchmetricsBot identifiable in statistics and logfiles and is always sent when querying a website.
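Because the bot always announces itself this way, it can be spotted in an access log with a simple substring check. A minimal sketch in Python, assuming a hypothetical log line in the common Apache combined log format:

```python
# Hypothetical access-log line; only the user agent field at the end matters here.
log_line = (
    '203.0.113.5 - - [01/Jan/2024:12:00:00 +0000] "GET / HTTP/1.1" 200 1234 '
    '"-" "Mozilla/5.0 (compatible; SearchmetricsBot; '
    'http://www.searchmetrics.com/en/searchmetrics-bot/)"'
)

def is_searchmetrics_bot(line: str) -> bool:
    # The bot always identifies itself with this token in its user agent.
    return "SearchmetricsBot" in line

print(is_searchmetrics_bot(log_line))  # True
```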


Blocking content

It is practically impossible to keep web content secret without the use of access controls and passwords. The moment someone places a link to your site, it will be found by search engines and eventually by the SearchmetricsBot as well. If you don’t want the SearchmetricsBot to crawl your website or parts of it, please use the robots.txt file. Simply add a rule group for “User-agent: SearchmetricsBot” to your robots.txt to block the SearchmetricsBot. Additional links, examples and explanations concerning robots.txt can be found in Google’s Webmaster Tools under ‘Block or remove’ or at http://en.wikipedia.org/wiki/Robots_exclusion_standard
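Note that a “User-agent” line only names the bot a rule group applies to; to block the SearchmetricsBot from your entire site, pair it with a Disallow rule covering everything, for example:

```text
User-agent: SearchmetricsBot
Disallow: /
```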



If you don’t want the SearchmetricsBot to crawl an area of your site (e.g. ‘/hidden.html’), then enter the following rule into your robots.txt:
User-agent: SearchmetricsBot
Disallow: /hidden.html
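As a sketch of how such a rule behaves, Python’s standard urllib.robotparser module can evaluate it; the example.com URLs below are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Parse the rule from the example above (normally this would be fetched
# from your site's /robots.txt).
rp = RobotFileParser()
rp.parse([
    "User-agent: SearchmetricsBot",
    "Disallow: /hidden.html",
])

# The SearchmetricsBot is barred from /hidden.html but not from other pages.
print(rp.can_fetch("SearchmetricsBot", "http://example.com/hidden.html"))  # False
print(rp.can_fetch("SearchmetricsBot", "http://example.com/other.html"))   # True
```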


What do we crawl?

The SearchmetricsBot is not comparable with the crawlers of large search engines, which seek to access all of a website’s content. The SearchmetricsBot normally restricts itself to a selection of relevant pages and to assessing the structure of these pages.


Report problems

If you experience any problems with the SearchmetricsBot on your website, please let us know. We are constantly working to improve the quality of our crawlers and welcome any feedback.

For any queries or suggestions regarding the SearchmetricsBot you can contact us at bot@searchmetrics.com.