Manually Monitoring Site Availability (Review of Select Tools)


Once you’ve acquired and set up your dedicated or virtual server, the next step is to establish a monitoring system. Monitoring keeps you up to date on your service’s status by regularly checking your site’s main subsystems.

Since a reliable site should enjoy uninterrupted uptime, its availability and functionality must be consistently monitored:

  • regular independent checks of your site using free tools
  • constant uptime monitoring, which is best performed hourly: most users will try to revisit a site within 1-2 hours, and more frequent checks don’t guarantee problems will be fixed any faster than within the hour
  • monitoring individual parts of the project and analyzing application metrics, including response times, component and service errors, database response times, slow queries, and queries that don’t use indexes
  • website performance monitoring: sluggish page loads can cost you clients; a monitoring system can alert you as soon as a problem is detected, letting you quickly resolve the issue and minimize the consequences
  • problem monitoring, meaning tracking multiple site parameters at least once a minute and from multiple geographic points, so that as many checks as possible fit in the interval and problems affecting users in different regions are caught

Among the different monitoring criteria, the following problems and problem areas stand out:

  • DNS servers (when a site’s address can’t be resolved for a period of time, even though the site itself is physically available)
  • long response times (when updating the cache or running “heavy” tasks on the server side)
  • scheduled tasks (which cause a site to be unavailable at certain times)
  • extended wait times for static files (due to the network infrastructure or problems with physical media)
  • problems connecting to databases

Many external services already provide detailed information about problems, including error logs on the client side (provided the service is properly configured and error logging is enabled on the server side). These methods are particularly useful when you have to catch a “floating” error: with detailed server-side logs of the error available, it can be effectively tracked down and eliminated.

Site Workloads on the Weekend/Off Season

Let’s say a website/server/service has to maintain uptime for several days without any human interference. What could go wrong?

Crashes creep up on their own from time to time. Your Tuesday night crash only gets resolved when a backup is restored Wednesday morning, and those weekend crashes usually last from Friday to Monday. The amount of time your site is down during off hours depends on how long the employee responsible is off the clock.

A site may generally not be in great shape, but problems get resolved quickly during the workweek. How long would it take to get a system back online during an extended holiday if there wasn’t any monitoring? It might take a few days instead of a few hours, and this happens more often than you’d think.

Don’t make any serious changes to your code before a long weekend. You have to thoroughly test a system that’s been changed to ensure everything works as it should. A standard recommendation is to put off any big changes until traffic is lighter than usual.

In addition to the usual array of problems, sites love getting up to no good when nobody’s watching, especially for extended periods of time. This may include domains and certificates expiring, databases filling up, or your domain winding up on a DNSBL (DNS blacklist) or being blocked.

DNS Blacklists

It’s important to be able to check if a domain is on a DNS blacklist. Blacklists are lists of domains/IP addresses that are associated with spamming.

Each list is maintained independently and built using its own algorithm. Harmless sites do sometimes end up on these lists: an IP address from your subnet may be used by spammers or hackers for malicious activity, and then the whole subnet ends up blocked.

How does this put you at risk? Emails won’t get through to clients, your site’s search engine ranking will start to drop, and so on. This is why blacklist monitoring and alert functions are in high demand.

Administrators can configure their mail server to reject messages from systems on these lists. This helps combat spam, the distribution of malware, DDoS attacks, and other hacker attacks.

Online DNSBL services, such as antispamsniper.com and dnsbl.info, let you filter spam by using DNS to query databases of spammers’ IP addresses.

To see if a particular IP address is on a blacklist, open antispamsniper.com, enter the IP address (your current IP address is given by default) and click Search.
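
Under the hood, a DNSBL lookup works by reversing the octets of an IP address and resolving the result inside the blacklist’s DNS zone; if the name resolves, the address is listed. The sketch below builds that query name in Python (the zone `zen.spamhaus.org` is just a well-known example, and the sketch handles IPv4 only):

```python
import ipaddress

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the hostname to look up in a DNSBL zone (IPv4 only).

    DNSBLs are queried by reversing an address's octets and appending
    the zone: 203.0.113.7 -> 7.113.0.203.zen.spamhaus.org.  If a DNS
    query for that name resolves, the address is on the list.
    """
    # reverse_pointer gives e.g. "7.113.0.203.in-addr.arpa"
    reversed_ptr = ipaddress.ip_address(ip).reverse_pointer
    return reversed_ptr.replace(".in-addr.arpa", "." + zone)
```

To perform the actual check, resolve the returned name (for example with `socket.gethostbyname`); a `socket.gaierror` means the address is not listed. That step requires network access, so it is left out of the sketch.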

DDoS Mitigation

If your profits depend on a website’s availability, it’s worth preparing for growing workloads (seasonal sales, Black Friday) as well as for potential attacks from competitors or hackers looking to drive up response times or render your site partially or completely unavailable.

To insure your site against downtime, you can use a traffic filtering service, such as our Anti DDoS service, so that you receive only clean incoming traffic. Internet traffic is forwarded to a secure address and passed through special scrubbing equipment, which removes all illegitimate traffic.

You can read more about how we clean traffic in our knowledge base.

Scheduled Maintenance

The server software your online resource is built on needs to be updated from time to time, and scheduled maintenance can help facilitate that process.

Scheduled maintenance does two things: it suppresses error notifications and excludes errors from your statistics for a given interval. Monitoring itself continues and properly logs everything, which is useful for administrators: the log lets you determine how long an update or reboot took and shows which errors were detected and which problems were observed during that time.

It’s recommended to perform scheduled maintenance during periods of noticeably low client traffic and away from bandwidth peaks.

Monitoring Domain and SSL Certificate Expiration Dates

Even major organizations have problems renewing domains and certificates. This is why reminders (sent by SMS or email) that a domain has to be renewed are imperative. HostTracker, for example, is a paid monitoring alert service for just that.

Checking Domain Expirations

You can check domain availability for free with services like nic.ru.

Domain expirations can be checked for free with a Whois service.
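
Raw whois output labels the expiration field differently from registry to registry (`Registry Expiry Date`, `Expiration Date`, `paid-till`, and so on), so a monitoring script has to match several spellings. A small, hedged sketch of that filtering step, operating on whois text you have already fetched:

```python
import re

# Common spellings of the expiration field across registries; this list
# is illustrative, not exhaustive.
_EXPIRY_PATTERN = re.compile(
    r"(registry expiry date|expiration date|paid-till|expiry)", re.I
)

def find_expiry_lines(whois_text):
    """Return the lines of raw whois output that look like expiration dates."""
    return [
        line.strip()
        for line in whois_text.splitlines()
        if _EXPIRY_PATTERN.search(line)
    ]
```

Feeding this the output of `whois site.com` would pull out just the renewal deadline, which can then be parsed and compared against today’s date.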

Checking SSL Certificate Expirations

You can find the expiration date of an active SSL certificate from the Linux command line using openssl:

$ echo | openssl s_client -servername NAME -connect HOST:PORT 2>/dev/null | openssl x509 -noout -dates

In addition to the expiration date, SSL certificates contain a lot of interesting information, including information about who it was issued by and to.

Openssl can be used to extract this data from a site’s SSL certificate, too.

To find who issued an SSL certificate:

$ echo | openssl s_client -servername site.com -connect site.com:443 2>/dev/null | openssl x509 -noout -issuer
issuer= /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3

To find who the SSL certificate was issued to:

$ echo | openssl s_client -servername site.com -connect site.com:443 2>/dev/null | openssl x509 -noout -subject
subject= /CN=www.site.com

To find an SSL’s expiration date:

$ echo | openssl s_client -servername site.com -connect site.com:443 2>/dev/null | openssl x509 -noout -dates
notBefore=Mar 18 10:55:00 2017 GMT
notAfter=Jun 16 10:55:00 2017 GMT

To print all of the abovementioned information in one command:

$ echo | openssl s_client -servername site.com -connect site.com:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates
issuer= /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
subject= /CN=www.site.com
notBefore=Mar 18 10:55:00 2017 GMT
notAfter=Jun 16 10:55:00 2017 GMT
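
For an automated check, the `notAfter=` value from the openssl output can be parsed and turned into a days-remaining figure, which is what an expiration alert would be based on. A minimal Python sketch of that calculation (the date format string mirrors openssl’s output as shown above):

```python
from datetime import datetime, timezone

# Format of openssl's notBefore=/notAfter= values, e.g. "Jun 16 10:55:00 2017 GMT"
OPENSSL_DATE_FMT = "%b %d %H:%M:%S %Y %Z"

def days_until_expiry(not_after, now=None):
    """Return whole days until an openssl notAfter= date; negative if expired."""
    expires = datetime.strptime(not_after, OPENSSL_DATE_FMT).replace(tzinfo=timezone.utc)
    if now is None:
        now = datetime.now(timezone.utc)
    return (expires - now).days
```

A monitoring script could run the openssl pipeline above, strip the `notAfter=` prefix, pass the rest to this function, and alert when the result drops below, say, 14 days.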

Monitoring Site Availability

For any website to work effectively, its content has to be available to visitors 24/7 and the project’s administrator has to have back-end access in order to make changes or perform other actions.

You can easily check a site’s availability from the Linux command line. The HTTP status can also easily be retrieved using utilities like telnet or curl.

Checking Site Availability with CURL

To check a site’s availability, run the following command, which will return a status code from the server:

$ curl -Is http://www.site.com | head -n 1
HTTP/1.1 200 OK

The status code ‘200 OK’ means the request was successfully processed and the site is available.

Below is an example of another code curl may return:

$ curl -Is http://site.com | head -n 1
HTTP/1.1 301 Moved Permanently

Using curl, we can also check the availability of a site’s individual pages:

$ curl -Is http://www.site.com/en/Bash-Colors | head -n 1
HTTP/1.1 200 OK

Checking Site Availability with TELNET

You can also check site availability with the telnet utility:

$ telnet www.site.com 80
Trying 91.206.200.119...
Connected to www.site.com.
Escape character is '^]'.
HEAD / HTTP/1.0
HOST: www.site.com
<PRESS ENTER>
<PRESS ENTER>

The output for an available site will look like this:

HTTP/1.1 200 OK
Server: nginx/1.1.10
Date: Fri, 26 May 2017 19:29:46 GMT
***

Checking Site Availability from Geographically Distributed Points

Selectel’s Monitoring service helps you check service availability and analyze a server or web service from different points around the world.

For detailed information on the kinds of metrics available, please visit our knowledge base.

Conclusion

You can always write your own availability-checking script in PHP or Perl, or make a Telegram bot (Russian) for sending alerts, but considering the daily profits a website can bring relative to the cost of monitoring, it’s usually cheaper to use a paid service like PagerDuty.

Below are some useful reviews of monitoring services.