
It is important to protect your online business from failures; that is probably clear to everyone. But what is the best way to do it? Monitoring is a simple and inexpensive answer. Since there are many ways to monitor, we want to introduce two of the most important levels.

Specifically, this article is about server monitoring and website monitoring, how they differ and what koality.io can do. Spoiler: quite a lot.

Server monitoring

Server monitoring is probably the most widespread form of monitoring on the market. This is mainly because professional hosting providers always monitor their servers so that they can react as quickly as possible to errors in the infrastructure and thus provide the best service for their customers. However, these are usually systems that are only used internally. If you want this kind of monitoring for your own servers, you normally have to set it up yourself.

But what does server monitoring actually monitor? In principle, it is about the servers and their components, such as hard disk, CPU or memory. As soon as one of these fails, it is usually no longer possible to deliver websites without errors.

Default metrics

By default, the most important vital signs of a server are monitored. These almost always include:

  • CPU / load: This measures the processor's utilization. If it reaches its limit, the server first becomes slow and then, in many cases, stops responding altogether, so that web pages are no longer delivered.
  • Memory: Many applications needed to deliver a website use memory to serve data as quickly as possible. Databases are the prime example: they can only answer within milliseconds because the most important data is already in memory and does not have to be fetched from the hard disk. If the memory is full and cannot hold any new information, the server starts swapping data to the hard disk. Because the disk is dramatically slower at reading and writing, the server quickly reaches its limits here as well and stops delivering web pages.
  • Hard disk (fill level): This one is relatively simple. If the disk is full, some things stop working; we know that from our computers at home. But why should a disk fill up? Modern web applications make massive use of caching, which means that pre-calculated elements are kept ready for when they are needed. This often happens in memory, but just as often on the hard disk. If nothing more can be written there, applications very frequently crash.
  • Network (I/O): Evaluating network traffic can be very useful, especially for servers running high-traffic websites. Although high traffic will not crash the server in most cases, websites still become very slow if the server can no longer get the data onto the wire, and thus to the user, fast enough.
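
To see how straightforward it is to read these four vital signs programmatically, here is a minimal sketch using the third-party psutil library (our own choice for illustration; any comparable library or parsing /proc directly would work just as well):

```python
# Minimal sketch: read the classic server vital signs with psutil.
# Assumes `pip install psutil`; values are printed once, not stored.
import psutil

def collect_vitals():
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),      # CPU utilization over 1 second
        "load_avg": psutil.getloadavg(),                     # 1/5/15 minute load averages
        "memory_percent": psutil.virtual_memory().percent,   # RAM in use
        "swap_percent": psutil.swap_memory().percent,        # swapping hints at memory pressure
        "disk_percent": psutil.disk_usage("/").percent,      # fill level of the root partition
        "net_io": psutil.net_io_counters()._asdict(),        # bytes/packets sent and received
    }

if __name__ == "__main__":
    for name, value in collect_vitals().items():
        print(f"{name}: {value}")
```

In a real setup you would run something like this on a schedule and raise an alert when a value crosses a threshold for several consecutive samples, rather than on a single reading.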

If you are interested in this data, but don't want to get into the monitoring business right away, we recommend htop. This is a "nice" variant of top that visualizes all important server metrics and also shows all currently running processes.

The tool is available for practically every Unix-based operating system and can be installed in a few moments via the package manager. If a server becomes slow, it is well worth taking a look here, because htop provides a lot of useful information for debugging.

Making predictions

There is one property of server metrics that makes them so useful: predictability. In many cases, the metrics recorded here develop almost linearly, meaning there are rarely big jumps. You can therefore predict where the values will be in the next few minutes and, if necessary, react ahead of time.

However, this also means that you should not only look at the absolute values, but also at trends. In concrete terms: we do not (only) monitor the fact that the hard disk is now 99% full; we already noticed that more and more has been written to the disk over the last few minutes and that, if this continues, the disk will be full in an hour.
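
As a rough illustration of such a trend check, here is a small sketch (our own illustration, not a koality.io feature) that fits a straight line to recent disk-usage samples and estimates how long it will take until the disk is full if the trend continues:

```python
# Trend sketch: fit a line to recent (timestamp, used_bytes) samples and
# estimate when the disk would be full if the growth continues.
# The sample data and capacity below are made up for illustration.

def seconds_until_full(samples, capacity_bytes):
    """samples: list of (unix_timestamp, used_bytes) tuples, oldest first."""
    n = len(samples)
    if n < 2:
        return None
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    # Least-squares slope: bytes written per second
    num = sum((t - mean_t) * (u - mean_u) for t, u in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    if den == 0:
        return None
    slope = num / den
    if slope <= 0:
        return None  # usage is flat or shrinking, nothing to predict
    latest_t, latest_u = samples[-1]
    return (capacity_bytes - latest_u) / slope

# Illustrative samples: one reading per minute, roughly 1 GB growth per minute
samples = [(60 * i, 400_000_000_000 + i * 1_000_000_000) for i in range(10)]
remaining = seconds_until_full(samples, capacity_bytes=450_000_000_000)
if remaining is not None and remaining < 3600:
    print(f"Warning: disk projected to be full in {remaining / 60:.0f} minutes")
```

In practice you would feed this with readings collected regularly (for example by the sketch above) and alert on the projection, not only on the current fill level.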

If you like this article, please subscribe to our newsletter. That way you won't miss any of our articles about monitoring and agencies.


Free tools

Besides predictability, there is another very positive point: there are many free server-monitoring tools on the market.

These free tools are installed directly on the server. This is usually done via the package manager of your choice and only takes a few minutes. If you take a closer look at the open-source tools, you can also include additional services, for example the utilization of databases, but this usually requires somewhat more detailed knowledge.

All these tools have one thing in common: by default, they store their data locally on the server on which they run. This also means that if a server goes down, you can no longer access the data to debug the error. Then usually the only thing that helps is a restart of the server, which makes the metrics available again and, in most cases, also fixes the error. This is a small problem, but it can be solved through configuration and a dedicated monitoring server.
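
To illustrate the idea of a dedicated monitoring server, here is a minimal sketch of an agent that pushes its readings to a separate collector instead of only storing them locally. The collector URL and payload format are purely hypothetical; real tools ship their own agents and protocols for this.

```python
# Push-agent sketch: send local readings to a separate monitoring host so the
# data survives even if this server goes down. The collector URL and payload
# format are made up for illustration.
import json
import socket
import time
import urllib.request

COLLECTOR_URL = "http://monitoring.example.com/metrics"  # hypothetical endpoint

def push_metrics(metrics: dict) -> None:
    payload = json.dumps({
        "host": socket.gethostname(),
        "timestamp": int(time.time()),
        "metrics": metrics,
    }).encode("utf-8")
    request = urllib.request.Request(
        COLLECTOR_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()  # the collector stores the data off-box

if __name__ == "__main__":
    # In practice you would reuse something like collect_vitals() from above
    push_metrics({"disk_percent": 87.5, "cpu_percent": 42.0})
```

The point of the design is simply that a copy of the data lives on a machine that does not share the monitored server's fate.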

NIXstats

We're a little biased here, since NIXstats belongs to the same [corporation](/en/magazin/en/articles/koality/new-home) as we do, but we'll just quickly forget about that.

So why do we mention NIXstats? It solves the problem of local installation. All you need to do is install an agent, which then sends all the important readings to the NIXstats cloud, where you can view them independently of the actual server. The first few systems you monitor are even free, so just have a look. For smaller websites, like small online stores, the free model is already sufficient.

Server or website monitoring?

The website monitoring that we offer with koality.io can be wonderfully complemented by monitoring the server and the associated hardware. That is why we recommend combining the two approaches to protect your website as thoroughly as possible.