Goal
- Enable targeted referencing.
- Improve guidance for search tools.
- Reduce the energy impact of visiting the site.
- Improve the way content is taken into account by search engines and indexing tools.
Implementation
To define non-indexable directories, files and file types, use the User-agent and Disallow instructions in a single text file called robots.txt, located in the root directory of the website.
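As a sketch, a minimal robots.txt combining these instructions could look like the following (the directory names are hypothetical placeholders, not recommendations):

```
User-agent: *
Disallow: /private/
Disallow: /tmp/
```

Each Disallow line asks compliant crawlers not to index the matching path; an empty Disallow value allows everything.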
Alternatively, at a specific page level, use the tag meta name="robots" content="attribute1, attribute2":
- attribute1 can take the values index (index this page) or noindex (do not index this page);
- attribute2 can take the values follow (follow the links in this page) or nofollow (do not follow the links in this page).
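For example, a page that should be neither indexed nor crawled for links would carry the following tag in its head section:

```
<meta name="robots" content="noindex, nofollow">
```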
Control
From the URL of your website:
- First, access the robots.txt file at the root of the website by typing, for example, http://example.com/robots.txt in the browser's address bar;
- Check that the robots.txt file is in the root directory of the site;
- Check the validity of the syntax of the robots.txt file using the indications given by the search engines.
If there is no robots.txt file, check that the meta name="robots" content="attribute1, attribute2" tag is present and valid in each page.
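The robots.txt check can also be scripted. As a minimal sketch, Python's standard library includes a robots.txt parser; here the file content is parsed inline so the sketch runs offline (in practice you would point it at http://example.com/robots.txt with set_url() and read()). The domain and paths are hypothetical examples.

```python
# Sketch: validate robots.txt rules with Python's standard library parser.
import urllib.robotparser

# Hypothetical robots.txt content; normally fetched from the site root.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A disallowed path should be refused; other paths remain fetchable.
print(parser.can_fetch("*", "http://example.com/private/page.html"))  # False
print(parser.can_fetch("*", "http://example.com/index.html"))         # True
```

This confirms that the syntax is understood by a standards-based parser, which complements the validation tools offered by the search engines themselves.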
Discover Opquast training and certification
The objective of these rules and the Opquast community mission is ‘making the web better’ for your customers and for everyone! Opquast rules cover the major areas of risk that can negatively affect website users, such as privacy, ecodesign, accessibility and security.
Opquast training has already enabled over 19,000 web professionals to have their skills certified. To train your teams, contact us.
We offer a 1-hour free discovery module.