
Robots.txt

Not all spiders are good (including those you see in basements).

Search engines crawl websites using programs generally referred to as spiders or robots. Beyond search engine spiders, many more crawling programs are created by individuals and companies to crawl websites for various purposes (gathering information, spying, and so on).

Your website may contain content that you do not want search engine spiders to crawl: information that changes constantly, or pages you would rather not see cached or listed in search results. In such situations, a robots.txt file is one of the available solutions. In other words, you can think of robots.txt as a traffic light that allows or stops search engine spiders from crawling all or part of your website.

For example, if you do not want Search Engine X to crawl the /images/ directory within your website, you can use:

User-agent: searchenginex # replace searchenginex with the name of Search Engine X's spider
Disallow: /images/

(Note that Disallow: / on its own would block the spider from the entire site, not just one directory.)

Similarly, if you do not want Search Engine X to crawl the /private/ subdirectory, you can use:

User-agent: searchenginex # replace searchenginex with the name of Search Engine X's spider
Disallow: /private/
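The examples above can be combined into a single robots.txt file, placed at the root of your website (e.g. http://www.example.com/robots.txt). The spider name and directory names here are placeholders; substitute your own:

User-agent: searchenginex # the spider you want to restrict
Disallow: /images/
Disallow: /private/

User-agent: * # every other spider
Disallow:

An empty Disallow line means "nothing is disallowed", so all other spiders may crawl the whole site.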

In short, the function of the robots.txt file is to guide and control spiders and crawlers on your website. Keep in mind that robots.txt is a request, not an enforcement mechanism: well-behaved spiders honour it voluntarily, but it will not stop a badly behaved crawler.
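If you want to check how a spider would interpret your rules before publishing them, Python's standard library includes a robots.txt parser. This is a quick sketch; the spider name searchenginex and the file paths are placeholders from the examples above:

```python
import urllib.robotparser

# Hypothetical robots.txt rules, matching the /private/ example above
rules = """
User-agent: searchenginex
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The restricted spider is blocked from /private/ but not elsewhere
print(rp.can_fetch("searchenginex", "/private/secret.html"))  # False
print(rp.can_fetch("searchenginex", "/public/page.html"))     # True

# A spider with no matching User-agent rule is not restricted
print(rp.can_fetch("otherbot", "/private/secret.html"))       # True
```

Running this lets you confirm that a rule blocks exactly the paths you intend, and nothing more.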
