- Enable targeted indexing of content.
- Improve guidance for search tools.
- Reduce the energy impact of visits to the site.
- Improve the way content is taken into account by search engines and indexing tools.
To define directories, files, and file types that must not be indexed, use disallow instructions in a single text file named `robots.txt`, located in the root directory of the website (an example follows this paragraph). Alternatively, at the level of a specific page, use the `meta name="robots" content="attribute1, attribute2"` tag (an example follows the list below):
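For instance, a minimal `robots.txt` might look like the sketch below; the disallowed paths are hypothetical placeholders, not a recommendation:

```
# Block all crawlers from two hypothetical directories
User-agent: *
Disallow: /private/
Disallow: /drafts/
```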
- `attribute1` can take the values `index` (index this page) or `noindex` (do not index this page);
- `attribute2` can take the values `follow` (follow the links on this page) or `nofollow` (do not follow the links on this page).
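As an illustration, a page that should not be indexed but whose links may still be followed would carry a tag like this in its `<head>` (a minimal sketch):

```html
<head>
  <!-- Do not index this page, but do follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```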
From the URL of your website:
- First, access the `robots.txt` file at the root of the website by typing its address, for example http://example.com/robots.txt, in the browser's address bar;
- Check that the `robots.txt` file is in the root directory of the site;
- Check that the syntax of the `robots.txt` file is valid, using the guidance provided by the search engines.
If there is no `robots.txt` file, check that a `meta name="robots" content="attribute1, attribute2"` tag is present and valid on each page.
Business application and benefits
The rules should be applied to your projects from the design phase through to post-implementation, and they should be understood by all professionals with web and customer experience (CX) responsibilities: from strategy to operations, marketers to project managers, and editorial to technical staff. The benefits of applying this ruleset are numerous: improved customer satisfaction, web performance, and e-commerce results, a broader client base, and fewer errors at lower cost.
The objective of these rules and the Opquast community mission is ‘making the web better’ for your customers and for everyone! Opquast rules cover the major areas of risk that can negatively affect website users, such as privacy, ecodesign, accessibility, and security.