Monitoring web server logs (Apache, NGINX) with Netdata using Go (update)

This is the seventh in a series of posts featuring protips, tricks, hacks, and secrets provided by Our Team 🙊. We want to share our top tips with the growing and thriving Linux community out there, because sometimes you need a little help...

The go.d web_log collector gathers web server and caching proxy metrics from access log files.


PS: we assume that you already have Netdata v1.34 installed, active, and running.

Web server logs have been around for years, recording every accessed page and API call in real time. Mostly, these access log files just fill your disks, rotating every night without ever being used. That's where the magic starts: Netdata turns this "useless" log file into a powerful performance and health monitoring tool, capable of detecting, in real time, the most common web server problems, such as too many internal server errors, too many bad requests, too many redirects, unreasonably slow responses, or too few successful responses, just to name a few.

As we saw in our previous post, Netdata has long been able to monitor customized web log files thanks to the web_log python.d module; the Netdata team has recently rewritten this module in Go, the parser is faster than ever, and that is what we cover here.
Note that, as of this writing, the go.d web_log collector supports NGINX and Apache.

Configure the Go-based web log collector


To use the Go version of this plugin, you need to explicitly enable it, and disable the deprecated Python version.
Edit the python.d.conf configuration file using edit-config from the Netdata config directory, which is usually located at /etc/netdata.

cd /etc/netdata    # Replace this path with your Netdata config directory, if yours is different

sudo ./edit-config python.d.conf

Find the web_log line, uncomment it, and set it to web_log: no.
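
After the edit, the relevant line in python.d.conf should end up looking like this (the surrounding lines on your install may differ):

web_log: no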

Next, open the go.d.conf file for editing.

cd /etc/netdata    # Replace this path with your Netdata config directory, if yours is different

sudo ./edit-config go.d.conf

Find the web_log line, uncomment it, and set it to web_log: yes.
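
Likewise, after this edit the relevant line in go.d.conf should read:

web_log: yes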

Restart the service: sudo systemctl restart netdata
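
If you want to double-check that the agent came back up cleanly, on systemd-based systems something like this will do:

sudo systemctl status netdata --no-pager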

Remember, Netdata ships a powerful web_log collector, capable of incrementally parsing any number of web server log files. It is started automatically with Netdata and comes pre-configured to find web server log files on popular distributions. Its configuration lives at /etc/netdata/go.d/web_log.conf and looks like this:

# [ JOBS ]
jobs:
# NGINX
# debian, arch
  - name: nginx
    path: /var/log/nginx/access.log

# APACHE
# debian
  - name: apache
    path: /var/log/apache2/access.log

# debian
  - name: apache_vhosts
    path: /var/log/apache2/other_vhosts_access.log
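
If you are not sure whether the collector can actually see your log files, you can run the go.d plugin by hand in debug mode and watch its output; on most installs the plugin binary lives under /usr/libexec/netdata/plugins.d/ (adjust the path if your install differs):

sudo -u netdata /usr/libexec/netdata/plugins.d/go.d.plugin -d -m web_log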

Want to monitor each access log separately?


Again, the idea here is to create a custom job for each log file: point the path parameter at the intended access log, give the job a name, and set log_type to auto, like this:

jobs:
  - name: Saturn
    path: '/var/log/nginx/saturn_access.log'
    log_type: auto

  - name: Uranus
    path: '/var/log/nginx/uranus_access.log'
    log_type: auto

  - name: Neptune
    path: '/var/log/nginx/neptune_access.log'
    log_type: auto

  - name: Pluto
    path: '/var/log/nginx/pluto_access.log'
    log_type: auto


Restart the service: sudo systemctl restart netdata


Important
Keep in mind that these configurations are written in YAML, which means indentation is key. Do not use tab characters for indentation; YAML only allows spaces, and tabs will break parsing.
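
A quick way to catch indentation slips before restarting is to feed the file to any YAML parser; for example, if Python with PyYAML happens to be installed (this is just a convenience check, not a Netdata tool, and you may need sudo if the file is not world-readable):

python3 -c "import yaml; yaml.safe_load(open('/etc/netdata/go.d/web_log.conf')); print('YAML OK')"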


Charts

Once all your log files are configured, Netdata should pick up each web server's access log and begin showing real-time charts for it.

(Screenshot: real-time Netdata charts built from the parsed access log metrics)
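
If you prefer the command line, you can also list the charts the collector created through Netdata's API; this assumes the agent listens on the default port 19999 and that the chart IDs contain "web_log" (adjust the grep pattern if yours are named differently):

curl -s http://localhost:19999/api/v1/charts | grep -o '"web_log[^"]*"' | sort -u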


That's it!
We hope you found this post as useful and informative as we did.
Keep on learning and sharing knowledge.