Outages are the worst thing that can happen to a website. Even worse are failures that the website owner never notices at all. Only slightly better are failures that are found and reported by a customer. To avoid ending up in this situation too often, your website must be continuously checked for availability.
## What is availability?
Availability, also called uptime, describes whether a web offering is reachable. In classic monitoring, this usually just means that the web server delivers content. For us, this definition does not go far enough, but more on that later.
A website is always unavailable if the user or potential customer cannot use it. From the operator's point of view, the definition is slightly different: a website is unavailable when it can no longer fulfill its business mission.
What do we mean by this? Let's take an editorial website like spiegel.de. The classic criterion for availability here would be:
- The server responds with HTTP status code 200 on the home page.
And yes, if this condition is met, there is a good chance that customers can read their articles and be happy. The value for the customer is therefore secured. The website's business performance, however, is not yet. From a business perspective, at least the following criteria should also be met:
- The website delivers advertising so that money can be earned.
- The advertising is measured to prove the reach to advertising partners.
The important point in this example is that it is sometimes not enough to "just" check whether the infrastructure basically works.
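A business-level criterion like "the website delivers advertising" can be checked with the same tooling as a status-code check. A minimal sketch in shell, assuming `curl` is available; the marker string `adserver.example` and the URLs are placeholders for whatever actually identifies your ad or analytics snippet:

```shell
#!/bin/sh
# Business-level check sketch: besides the status code, verify that the
# delivered HTML actually contains the ad integration.
# "adserver.example" below is a placeholder marker, not a real ad server.
contains_marker() {
  # Reads HTML on stdin and succeeds if the marker string is present.
  grep -q "$1"
}

# Example (needs network access):
#   if curl -s https://www.example.com | contains_marker "adserver.example"; then
#     echo "ads are being delivered"
#   fi
```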
## How is availability classically tested?
Classically, uptime monitoring checks whether the server on which the website or online store runs returns the correct HTTP status code. Normally this is 200 - OK. Status codes inform the browser that, from the server's point of view, no errors have occurred.
Of course, you should also check whether the server responds at all and you do not run into a timeout.
If you want to play around with HTTP status codes and HTTP headers in general, you can do so on the command line:

```shell
curl -I https://www.koality.io
```
The result of this call will look similar to this:

```
HTTP/2 200
date: Sat, 24 Oct 2020 07:59:48 GMT
content-type: text/html; charset=utf-8
content-length: 21362
vary: Accept-Encoding
```
As you can see, HTTP status code 200 is returned, so the page shows no basic errors. This way of monitoring availability belongs to the implicit methods.
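Putting the two checks together, status code plus timeout, a minimal availability check could look like the following shell sketch (assuming `curl` is installed; the URL is just an example):

```shell
#!/bin/sh
# Minimal availability check, assuming curl is installed.
# classify prints "up" for a 2xx/3xx status code and "down" for anything
# else, including "000", which curl reports on timeouts and connect errors.
classify() {
  case "$1" in
    2??|3??) echo "up" ;;
    *)       echo "down" ;;
  esac
}

# Fetch only the status code; --max-time guards against hanging requests,
# so a timeout also counts as "down".
check_url() {
  code=$(curl -o /dev/null -s -w '%{http_code}' --max-time 10 "$1")
  classify "$code"
}

# Example (needs network access):
#   check_url https://www.koality.io    # prints "up" or "down"
```

Whether a 3xx redirect counts as "up" is a design choice; here it does, since the server is clearly responding.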
## How often should you test your site?
There is no clear answer to this question. Of course, "as often as possible" is never wrong, but whether it is useful is open to question. So the answer should be: as often as it makes sense. The following factors come into play:
- Can the server handle the load of the monitoring? Let's assume that on average we want to monitor the 15 most important pages of a project, because that is where most of the activity happens. If we retrieve these pages every minute, we generate 21,600 requests per day. In other words, we permanently simulate up to 15 additional users. You have to be aware of that.
- How fast can I respond? Experience shows that the reaction time in agencies and with other developers is more in the range of 5-10 minutes. A 5-minute check interval therefore makes a lot of sense in most cases.
You don't really have to worry about the interval: everything between 1 and 5 minutes is in the reasonable range, and these are also the intervals that almost all uptime monitoring services provide.
When people talk about website availability, they usually express it as a percentage: our server had an uptime of 99.98 % last year. But you have to be careful here, because big numbers are often deceptive.
- 99 % uptime on an annual average means 3.65 days of downtime per year.
- 99.9 % uptime on an annual average means 0.365 days, i.e. almost 9 hours, of downtime per year.
- 99.99 % uptime on an annual average means 0.0365 days, i.e. just under one hour, of downtime per year.
Everything above 99.9 % is in the good range; there is nothing to worry about. In koality.io these numbers are calculated and visualized on a daily and monthly basis.
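The downtime values above follow from a simple formula: downtime hours per year = (100 - uptime) / 100 * 8760, since a year has 365 * 24 = 8760 hours. A small shell sketch:

```shell
#!/bin/sh
# Convert an uptime percentage into hours of downtime per year
# (365 days * 24 h = 8760 h). Uses awk for the floating-point math.
downtime_hours() {
  awk -v u="$1" 'BEGIN { printf "%.2f", (100 - u) / 100 * 365 * 24 }'
}

echo "99    % uptime -> $(downtime_hours 99) h/year"     # 87.60
echo "99.9  % uptime -> $(downtime_hours 99.9) h/year"   # 8.76
echo "99.99 % uptime -> $(downtime_hours 99.99) h/year"  # 0.88
```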
## Cache busting with timestamps
A particular challenge for monitoring website availability is caching. Professionally built websites often have a cache system installed in front of them, which means that requests from the monitoring solution first hit the cache. The cache will of course not tell the outside world that the server behind it is on fire; instead, it serves the last valid response until it is too old (stale-if-error).
There is a little trick here: as soon as you append a timestamp as a GET parameter to a URL, most caches treat it as a completely new URL that they have to fetch from the origin server. According to the HTTP standard, this is correct behavior.
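This trick can be sketched as a small shell helper. The parameter name `ts` is an arbitrary choice; the optional second argument only exists to make the function testable with a fixed timestamp:

```shell
#!/bin/sh
# Build a cache-busting URL by appending the current Unix timestamp as a
# GET parameter. The parameter name "ts" is arbitrary.
bust() {
  url=$1
  ts=${2:-$(date +%s)}                   # default to the current timestamp
  case "$url" in
    *\?*) echo "${url}&ts=${ts}" ;;      # URL already has a query string
    *)    echo "${url}?ts=${ts}" ;;
  esac
}

# Example (needs network access):
#   curl -I "$(bust https://www.koality.io)"
```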
Most web servers and frameworks, however, resolve this URL the same way as the one without the timestamp, so you get a real, uncached response. It can therefore pay off to append a timestamp to the monitored URL.

## Summary

That monitoring the availability of web pages is important should be clear: it gives you an implicit monitoring method that ensures the basic functioning of the website.

- Availability should be defined in such a way that the website retains its business value.
- Classic uptime monitoring is performed using the HTTP status code.
- All check intervals between 1 and 5 minutes are good.
- 99.9 % availability on an annual average is a good value.
- A simple timestamp at the end of the URL can improve the accuracy of the measurements.

<MagazineCtaNewsletter></MagazineCtaNewsletter>