Site Owners


Background

Become.com uses automated crawling technology to identify shopping-related web sites for inclusion in our index. We are particularly interested in sites containing shopping-related research information such as buying guides, reviews, articles, specifications, and forums.

Maintaining the trust of our users is extremely important to us. We do not accept payment for improved ranking in our index. All advertising is clearly labeled on our site.

Site Submission Policy

Become.com crawls the web on a very frequent basis to locate new shopping-related web pages and to identify changes to existing pages. Submission of your web site to our search engine is not required. At this time, we do not accept submissions from webmasters.

The BecomeBot Crawler

BecomeBot is the user-agent for Become.com's web crawler.

Q: How can I prevent BecomeBot from crawling my site?

A: BecomeBot obeys the standard robots exclusion protocol (robots.txt) and will not crawl your site if the exclusion file disallows it. For example, to prevent BecomeBot from crawling everything in the /cgi-bin directory of your site, add these two lines to your robots.txt file:

User-agent: BecomeBot
Disallow: /cgi-bin/
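If you want to double-check how a rule like this will be interpreted, Python's standard library ships a robots.txt parser. The sketch below feeds it the two lines above; the sample URLs are hypothetical and only illustrate which paths the rule covers.

```python
# Sketch: checking the exclusion rule above with Python's standard
# library robots.txt parser. The rule lines are the ones shown in
# this document; the example.com URLs are hypothetical.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: BecomeBot",
    "Disallow: /cgi-bin/",
])

# BecomeBot may not fetch anything under /cgi-bin/ ...
blocked = rp.can_fetch("BecomeBot", "http://www.example.com/cgi-bin/search.cgi")
# ... but the rest of the site remains crawlable.
allowed = rp.can_fetch("BecomeBot", "http://www.example.com/reviews/index.html")
print(blocked, allowed)  # False True
```

Because the rule names BecomeBot specifically, other crawlers are unaffected unless you add their own User-agent sections.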

Q: How can I control the speed at which BecomeBot crawls my site?

A: BecomeBot uses a dynamic politeness policy to determine the speed at which individual sites are crawled. When first encountering a site, BecomeBot begins downloading web pages slowly. Based on such factors as bandwidth and number of web pages within the site, BecomeBot automatically adjusts the speed of its crawl.

You can control the rate at which your site is crawled by using the Crawl-Delay feature. The Crawl-Delay feature allows you to specify the number of seconds between visits to your site. Note that it may take quite a long time to crawl a site if there are many pages and the Crawl-Delay is set high. You could specify an interval of 30 seconds between requests with an entry like this:

User-agent: BecomeBot
Crawl-Delay: 30
Disallow: /cgi-bin
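The same standard-library parser can read the Crawl-Delay value back, which is a quick way to confirm your robots.txt says what you intend. A minimal sketch, using the entry shown above (directive names are matched case-insensitively, so "Crawl-Delay" is handled like "Crawl-delay"):

```python
# Sketch: reading the Crawl-Delay entry above with Python's standard
# library robots.txt parser.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: BecomeBot",
    "Crawl-Delay: 30",
    "Disallow: /cgi-bin",
])

# Number of seconds the crawler is asked to wait between requests.
delay = rp.crawl_delay("BecomeBot")
print(delay)  # 30
```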