Rule n° 212 - The website’s root contains instructions for web crawlers.

Robots are computer programs that browse and analyse your pages. The best known are those of search engines, which index content. These robots can receive instructions to guide their indexing: simply place a file called robots.txt at the root of your site and follow its specific syntax.

#Server and performance #Development #SEO

Goal

  • Enable targeted referencing.
  • Improve guidance for search tools.
  • Reduce the energy impact of visits to the site.
  • Improve the way content is taken into account by search engines and indexing tools.

Implementation

To define non-indexable directories, files and file types, use the user-agent and disallow instructions in a single text file called robots.txt, located in the root directory of the website.
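For example, a minimal robots.txt could look like the following; the user agents and paths are illustrative, not a recommendation for any particular site:

```
User-agent: *
Disallow: /private/
Disallow: /tmp/

User-agent: BadBot
Disallow: /
```

Here all crawlers are asked to skip /private/ and /tmp/, while a crawler identifying itself as BadBot is asked to skip the whole site.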

Alternatively, at the level of a specific page, use the tag meta name="robots" content="attribute1, attribute2":

  • attribute1 can take the values index (index this page) or noindex (do not index this page);
  • attribute2 can take the values follow (follow the links in this page) or nofollow (do not follow the links in this page).
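For instance, a page that should not be indexed but whose links may still be followed would carry a tag combining the attribute values listed above:

```html
<meta name="robots" content="noindex, follow">
```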

Control

From the URL of your website:

  • First, access the robots.txt file at the root of the website by typing, for example, http://example.com/robots.txt in the browser's address bar;
  • Check that the robots.txt file is in the root directory of the site;
  • Check the validity of the syntax of the robots.txt file using the validation tools provided by search engines.

If there is no robots.txt file, check that the meta name="robots" content="attribute1, attribute2" tag is present and valid on each page.
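The robots.txt check can also be sketched with Python's standard library module urllib.robotparser, which parses the file and answers "may this agent fetch this URL?". The rules and URLs below are illustrative assumptions, not taken from a real site:

```python
# Sketch: validating robots.txt rules locally with Python's standard library.
# The rules and example.com URLs below are illustrative assumptions.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /tmp/
"""

parser = RobotFileParser()
# parse() accepts the file content as an iterable of lines.
parser.parse(robots_txt.splitlines())

# A generic crawler ("*") may fetch public pages...
print(parser.can_fetch("*", "https://example.com/index.html"))      # True
# ...but not URLs under a disallowed directory.
print(parser.can_fetch("*", "https://example.com/private/a.html"))  # False
```

In a real check you would point RobotFileParser at the live file with set_url() and read() instead of parsing a local string.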

By Opquast - Read the license

Business application and benefits

The rules should be applied to your projects from the design phase through to post-implementation, and they should be understood by all professionals with web and customer experience (CX) responsibilities: from strategy to operations, marketers to project managers, and editorial to technical staff. The benefits of using this ruleset are numerous, including improving customer satisfaction, web performance, and e-commerce, and expanding your client base, while also decreasing your errors and costs.

Multidisciplinary verticals - accessibility, SEO, e-commerce, ecodesign, etc. - starting from the foundational Opquast base.

The objective of these rules and the Opquast community mission is ‘making the web better’ for your customers and for everyone! Opquast rules cover the major areas of risk that can negatively affect website users, such as privacy, ecodesign, accessibility and security.

Opquast training has already allowed over 11,000 web professionals to have their skills certified. To train your teams or your students, contact us.