What Is a Robots.txt File? And How to Use It

Robots.txt is a text file that webmasters, web developers, and anyone running a public website or app create to tell web crawlers (typically search engine robots) how to crawl the pages on their websites.

In its simplest form, the robots.txt file indicates whether specific "user agents" (web-crawling software or search engine bots) can or cannot crawl parts of a website.

These crawl instructions are specified by "disallowing" or "allowing" access for certain (or all) user agents.

What is a Robots.txt File Used For?

A robots.txt file plays a significant role in directing search engine crawlers and shaping how they interact with your website. This simple yet powerful tool serves three primary purposes:

Directing Search Engine Crawlers

First and foremost, a robots.txt file provides instructions to search engine crawlers, such as Googlebot, on how to navigate your website.

By including specific directives in the file, you can tell crawlers which parts of your site to access and which to avoid, ensuring that only relevant content is indexed and included in search results.

Preventing Indexing of Specific Pages or Sections

There might be certain pages or sections of your website that you don't want search engines to reach, such as private or sensitive content. A robots.txt file helps you achieve this by allowing you to specify the pages or folders that should not be crawled. However, remember that this method isn't foolproof; some crawlers might ignore the instructions.
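For instance, a minimal sketch of this (using a hypothetical /private/ folder as the section you want crawlers to skip) looks like this:

User-agent: *
Disallow: /private/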

Controlling Crawl Rate and Frequency

Another crucial function of a robots.txt file (and a less well-known one) is managing the crawl rate and frequency of search engine crawlers on your site.

This can be particularly important for large websites or sites with limited server resources, as excessive crawling can lead to slow loading times or server overload.

By setting a crawl-delay directive in your robots.txt file, you can control the time interval between requests from a crawler, ensuring your site remains functional and accessible to users.
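As a quick sketch, the directive is a single line inside a user-agent block; the 10-second value here is just an example you can tune to your server:

User-agent: *
Crawl-delay: 10

Keep in mind that not every crawler honors this directive, which we'll touch on again later.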

Utilizing this file effectively can optimize your website's visibility and enhance its SEO performance.

Understanding the Limitations of Robots.txt File

While a robots.txt file can significantly impact your website's SEO and web crawling behavior, remember that this tool has limitations.

One major one you need to be aware of is:

It DOES NOT Guarantee Non-Indexing

Although you can use a robots.txt file to tell search engine crawlers not to crawl specific pages or sections of your site, this method is not foolproof.

Some crawlers or other bots might ignore the instructions, and blocked content can still end up indexed through other means, such as external backlinks.

To maximize your control over content indexing, consider using other methods like meta robots tags or X-Robots-Tag HTTP headers in addition to a robots.txt file.

But don't let this scare you; the benefits still make it extremely worthwhile!

Creating or Updating a Robots.txt File

Now for the fun stuff.

A well-crafted robots.txt file is crucial for effectively managing search engine crawlers and optimizing your website's SEO.

To create or update your robots.txt file, consider the following key aspects:

Basic Syntax and Structure

At its core, a robots.txt file is a plain text file that can contain some or all of the following rules:

  • User-agent
  • Disallow
  • Allow
  • Crawl-delay
  • Sitemap

Let's look at an example of a robots.txt file with the three most common elements: the User-agent (which bots can crawl your site), Disallow (what they shouldn't crawl), and the Sitemap (the map of your site to aid the crawlers).

User-agent: *
Disallow: /private/

Sitemap: https://www.example.com/sitemap.xml

In this example, the user-agent "*" indicates that the rule applies to all crawlers, and the disallow directive prevents them from accessing your website's "/private/" section. The Sitemap line points crawlers to the location of your website's sitemap.

Other Common Directives and Pattern-Matching

Robots.txt files often include additional directives and employ pattern-matching to refine crawler behavior. Some common directives include:

  • Crawl-delay: Specifies the time interval between crawler requests, helping manage server resources and loading times. Not all crawlers understand this directive (Bing and Yandex respect it, but Google ignores it and manages its own crawl rate).

  • Sitemap: Provides the location of your website's sitemap, assisting crawlers in discovering and indexing your content more efficiently.

Pattern matching allows you to create more granular rules using wildcards and other special characters. For instance, the following rule would disallow crawlers from accessing any URL containing "example":

User-agent: *
Disallow: /*example*
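Most major crawlers that support wildcards (Google and Bing among them) also understand the "$" character, which anchors a pattern to the end of a URL. As a small sketch, this would block any URL ending in ".pdf":

User-agent: *
Disallow: /*.pdf$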

Example Robots.txt File

I think it will all start to click with a more comprehensive example. Now that you know the basics, you might be able to guess what each line does:

User-agent: *
Allow: /private/find-me
Disallow: /private/
Disallow: /temp/
Disallow: /*example
Crawl-delay: 10

Sitemap: https://www.example.com/sitemap.xml

Let's walk through what each of these options does:

User-agent The "*" means these rules apply to all crawlers. In a later section, we will look at allowing and disallowing specific agents.

Disallow In this example, the robots.txt file disallows crawlers from accessing the /private/ and /temp/ directories and any pages within them.

We also use a wildcard (*) to disallow any URL containing "example".

Allow With Allow: /private/find-me, we add an exception to the Disallow list. This means /private/admin would still be blocked, but /private/find-me/index.html could be crawled, because crawlers like Googlebot follow the most specific (longest) matching rule.

Crawl-delay This sets a delay of 10 seconds between crawl requests, preventing crawlers from sending so much traffic that they cause issues for your server.

Sitemap Lastly, the location of your site's sitemap, as explained in the previous section.

Where to Place Robots.txt File on a Website?

For a robots.txt file to be effectively recognized and utilized by search engine crawlers, it should be placed in your website's root directory.

This means that the file should be accessible at the top level of your domain, such as https://www.example.com/robots.txt.

This location makes it easy for crawlers to find and interpret the file, which helps ensure your website's visibility and SEO are optimized.

Correct placement of your robots.txt file is crucial for maintaining effective crawler access and management.

If the file is placed in a different directory or is not publicly accessible, search engine crawlers may be unable to find or process the directives, leading to suboptimal crawling behavior and potentially impacting your website's visibility in search results.

How to Allow/Disallow Agents

Let us extend our previous example with a common recent use case.

Let's say you want all agents except ChatGPT to be able to crawl your site.

We create a new block that starts with a new User-agent rule:

User-agent: *
Allow: /private/find-me
Disallow: /private/
Disallow: /temp/
Disallow: /*example
Crawl-delay: 10
Sitemap: https://www.example.com/sitemap.xml

User-agent: GPTBot
Disallow: /

You'll have to do some research to find the right names if there are specific agents you want to disallow, but I'll list the ones I could find next (and there's a short example of targeting several of them right after the list).

Common Agents Names

  • *: Applies rules to all crawlers or user agents if no specific rule is set for them.
  • AdsBot-Google: Controls how Google Ads crawls pages for ad quality assessment.
  • Alexa: User-agent for Alexa's internet crawlers.
  • Applebot: Used for Apple's search engine, such as Siri and Spotlight Suggestions.
  • archive.org_bot: Governs the Internet Archive’s Wayback Machine crawler.
  • Baiduspider: Controls the behavior of Baidu's search engine spider.
  • Bingbot: Specifies rules for Bing's web crawling bot.
  • DuckDuckBot: Used to control DuckDuckGo's crawler.
  • Exabot: User-agent for Exalead, a search engine company.
  • Facebookexternalhit: Governs how Facebook accesses and displays content.
  • Googlebot: The primary user-agent for Google's search crawler.
  • Googlebot-Image: Controls how Google crawls images on a site.
  • Googlebot-News: Used for crawling news content by Google.
  • Googlebot-Video: Specifies rules for crawling videos by Google.
  • GPTBot: OpenAI's web crawler, associated with ChatGPT.
  • LinkedInBot: User-agent for LinkedIn's crawler.
  • Mediapartners-Google: Related to Google AdSense for crawling and indexing content for ads.
  • MJ12bot: User-agent for Majestic, a UK-based search engine.
  • Msnbot: User-agent for Microsoft's MSN search crawler.
  • pinterest.com.bot: Specifies crawling behavior for Pinterest's bot.
  • SeznamBot: The user-agent for Seznam, a Czech search engine.
  • Slurp: The user-agent for Yahoo’s search crawler.
  • Sogou Spider: Governs the crawler for Sogou, a Chinese search engine.
  • Teoma: User-agent for Teoma's crawler, part of Ask.com.
  • Twitterbot: Controls how Twitter accesses and displays website content.
  • Yandex: User-agent for Russia's Yandex search engine crawler.
  • YandexImages: Specific to Yandex's image crawling and indexing.
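One handy trick: you can stack several User-agent lines on top of a single set of rules, and those rules will apply to every agent listed. As an illustrative sketch (pick whichever bots matter to you), this blocks both GPTBot and Bingbot from a /private/ section:

User-agent: GPTBot
User-agent: Bingbot
Disallow: /private/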

Robots.txt vs Meta Robots vs X-Robots

While the robots.txt file is a powerful tool for managing crawlers, other methods like meta robots and X-Robots-Tag can also be used to complement or refine your approach.

In this brief section, we'll discuss the differences and similarities between these methods and when to use each one for controlling crawlers so you can decide if you want to go further and learn more.

Each serves a unique purpose in guiding search engines on what they should and shouldn't do on your site.

Let's break down the differences:

  1. Robots.txt:

Hopefully, by this point in the article, you know that robots.txt tells search engine crawlers which parts of your site they're allowed to enter and which they need to stay away from.

But remember, robots.txt is a polite request rather than a strict rule. Good crawlers follow what it says, but it doesn’t hide pages from view or stop other crawlers that choose to ignore it.

  2. Meta Robots Tags

Meta robots tags are like instruction manuals found on individual pages of your website. They are snippets of code that tell search engines specific actions for that page, like whether to index it (include it in search results) or follow links. You can use them to say things like, “Hey Google, don’t show this page in your search results” or “Hey Bing, don’t follow any links on this page.”

Unlike robots.txt, which works more like a site-wide rule, meta robots tags give you page-level control.
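For example, to ask search engines not to index a page or follow its links, you would place something like this in the page's <head>:

<meta name="robots" content="noindex, nofollow">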

  3. X-Robots-Tag

The X-Robots-Tag is a bit more technical and powerful.

It's part of the HTTP headers of a page, which means it's not in the visible part of the page like a meta tag but in the response sent before the page loads. This tag is helpful because it can apply rules to non-HTML files like PDFs or images, which meta tags can't do. It can do almost everything a meta robots tag can, but it's used for more complex scenarios.
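For instance, a server can send this header alongside a PDF to keep it out of search results (how you configure it depends on your server, e.g. Apache or Nginx):

X-Robots-Tag: noindex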

Choosing the appropriate method or combination of methods for your needs ensures that your website remains visible, accessible, and optimized for search engine performance.


In this guide, we have explored the importance and functionality of a robots.txt file in managing search engine crawlers, optimizing website SEO, and controlling website visibility.

So, if you haven't already, take the time to create or update your robots.txt file and start getting the benefits!

If you have any questions or suggestions, comment below or jump into our Discord and chat with our community by signing up for a free Codú account today. ✌️

Written by Niall Maher

Founder of Codú - The web developer community! I've worked in nearly every corner of technology businesses: Lead Developer, Software Architect, Product Manager, CTO, and now happily a Founder.
