Personal Workflow Blog

Friday, 1 February 2013

Avoid those milli-hits in Munin

A recurring question on IRC is: why do I have 500 million hits/s in my graph?

It turns out that they are really seeing 500 m hits/s: the lower-case m means milli, not Mega, as specified in the metric system. This scaling is applied automatically by RRD.

To avoid it, you should just specify graph_scale no, as documented.
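
In a plugin you maintain, that is a single directive in the config output. A minimal sketch for a hypothetical hits-per-second plugin:

graph_title Hits
graph_vlabel hits/s
graph_scale no
hits.label hits/s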

Thursday, 12 July 2012

Waiting for Munin 2.0 - Break the 5-minute barrier!

Every piece of monitoring software has a polling rate. It is usually 5 minutes, because that's the sweet spot that enables frequent updates while still keeping the overhead low.

Munin is no different in that respect: its data-fetching routines have to be launched every 5 minutes, otherwise you'll face data loss. And this 5-minute period is deeply ingrained in the code, so changing it is possible, but very tedious and error-prone.

But sometimes we need a very fine sampling rate. Sampling every 10 seconds lets us track fast-changing metrics that would otherwise be averaged out. Yet changing the whole polling process to cope with a 10-second period is very hard on the hardware, since every update now has to finish within those 10 seconds.

This triggered an extension in the plugin protocol, commonly known as supersampling.

Supersampling

Overview

The basic idea is that fine precision should be reserved for selected plugins. It also cannot be triggered from the master, since the overhead would be far too big.

So we just let the plugin sample the values itself, at whatever rate it deems adequate. Then, on each polling round, the master fetches all the samples accumulated since the last poll.

This enables various constructions, mostly around streaming plugins, to achieve highly detailed sampling with a very small overhead.

Notes

This protocol is currently completely transparent to munin-node, which means it can be used even on older (1.x) nodes. Only a 2.0 master is required.

Protocol details

The protocol itself is derived from the spoolfetch extension.

Config

A new directive, update_rate, is used. It enables the master to create the RRD files with an adequate step.

Omitting it would lead RRD to average the supersampled values down to the default 5-minute rate, which means data loss.
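
For instance, a plugin sampling every 10 seconds would announce it in its config output (graph and field names here are illustrative):

graph_title Some fast-changing metric
update_rate 10
value.label value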

Notes

The heartbeat always has a size of 2 steps, so failing to send all the samples will result in unknown values, as expected.

The RRD file size stays the same in the default config, as all the RRAs are configured proportionally to the update_rate. This means that, since you keep as many data points as with the default, you keep them for a shorter time.

Fetch

When spoolfetching, the epoch is also sent in front of the value. Supersampling is then just a matter of sending multiple epoch/value lines, with monotonically increasing epochs. Note that since the epoch is an integer value for rrdtool, the smallest granularity is 1 second. For the time being, the protocol itself also mandates integers. One can easily imagine that, with another database as backend, an extension could be hacked together.
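
Concretely, a fetch for a hypothetical load field sampled every second could look like this (timestamps are illustrative):

load.value 1342087440:0.42
load.value 1342087441:0.45
load.value 1342087442:0.40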

Compatibility with 1.4

On older 1.4 masters, only the last sampled value makes it into the RRD file.

Sample implementation

The canonical sample implementation is multicpu1sec, a contrib plugin on GitHub. It is also a so-called streaming plugin.

Streaming plugins

These plugins fork a background process when called, which streams the output of a system tool into a spool file. In multicpu1sec, that tool is mpstat, with a period of 1 second.
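
To make the mechanics concrete, here is a heavily simplified, hypothetical shell sketch of such a plugin; the real multicpu1sec also takes care of daemonizing, locking and cleanup, and streams mpstat rather than the load average used here:

#!/bin/sh
SPOOL=/tmp/munin-load1sec.spool   # illustrative spool path

case $1 in
    acquire)
        # Background sampler: append one "field.value epoch:value" line per second.
        while sleep 1; do
            echo "load.value $(date +%s):$(cut -d' ' -f1 /proc/loadavg)"
        done >> "$SPOOL"
        ;;
    config)
        echo "graph_title Load, sampled every second"
        echo "update_rate 1"
        echo "load.label load"
        ;;
    *)
        # Fetch: emit everything spooled since the last poll, then truncate.
        # (Not concurrency-safe; a real plugin would also make sure the
        # "acquire" sampler is running in the background, e.g. via a pidfile.)
        cat "$SPOOL" 2>/dev/null
        : > "$SPOOL"
        ;;
esac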

Undersampling

Some plugins sit at the opposite end of the spectrum: they only need a lower precision.

It makes sense when:

  • data should be kept for a very long time;
  • data is very expensive to generate and doesn't vary fast.
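
Assuming the same update_rate mechanism works in the other direction, a hypothetical once-a-day plugin could simply announce a much larger step:

graph_title Backup size
update_rate 86400
size.label bytes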

Monday, 20 June 2011

Enhance RRD I/O performance in Munin 1.4 and Scale

As with most RRD-based monitoring software (Cacti, Ganglia, ...), Munin is quite difficult to scale.

The bad part is that updating lots of small RRD files looks like pure random I/O to the OS, as stated in the RRD documentation.

The good part is that we are not alone, and the RRD developers have tackled the issue with rrdcached. It spools the updates and flushes them to disk in batches, or when required by an RRD read command such as graphing. That's why it scales well when using CGI graphing; otherwise, munin-graph will read every RRD file, and therefore force a flush of the whole cache.

And the icing on the cake is that, although it is only fully integrated in Munin 2.0, you can use it right away with the 1.4.x series.

You only need to define the environment variable RRDCACHED_ADDRESS when running the scripts that access the RRDs.
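
A minimal sketch, run as the munin user, assuming rrdcached listens on a UNIX socket (all paths are illustrative and vary by distribution):

# start the daemon, with a journal for crash safety
rrdcached -l unix:/var/run/munin/rrdcached.sock \
          -j /var/lib/munin/rrdcached-journal -b /var/lib/munin

# every script touching the RRDs will now go through the cache
export RRDCACHED_ADDRESS=unix:/var/run/munin/rrdcached.sock
/usr/share/munin/munin-update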

Then you have to remove the munin-graph part from munin-cron and run it on its own schedule, usually only every hour or so, so that data accumulates in rrdcached before graphing flushes it all to disk.
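
A hypothetical crontab for the munin user along those lines (binary paths and timings are illustrative):

RRDCACHED_ADDRESS=unix:/var/run/munin/rrdcached.sock
# every 5 minutes: gather data and check limits, no graphing
*/5 * * * *  /usr/share/munin/munin-update && /usr/share/munin/munin-limits
# once an hour: graphing, which makes rrdcached flush what it needs
15 * * * *   /usr/share/munin/munin-graph && /usr/share/munin/munin-html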

Updating to 2.0 is also an option, to get real CGI support (CGI graphing exists in 1.4, but its performance is nowhere near decent).

Monday, 23 August 2010

Waiting for Munin 2.0 - Keep more data with custom data retention plans

RRD is Munin's backbone.

Munin keeps its data in RRD databases. RRD is a wonderful piece of software, designed for this very purpose: keeping a history of numeric data.

All you need is to tell RRD how long, and at which precision, you want to keep your data. RRD then manages all the underlying work: pruning old data, averaging to decrease precision when needed, and so on.

Munin automatically creates the RRD databases it needs.

1.2 - Only one set

In 1.2, every database was created with the same temporal & precision parameters. Since the output parameters were constant (day, week, month, and year graphs), there was little need for a different set of parameters.

1.4 - 2 sets : normal & huge

In 1.4, various users expressed the need for different graphing outputs and began to hack around Munin's fixed graphing. It rapidly became obvious that the 1.2 preset wasn't a fit for everyone.

Therefore a huge dataset was made available, extending the finest precision (5 min) to the whole Munin timeframe. This comes at a price though: more space is required, and graph generation is slower, especially for the yearly graph, since more data has to be read and analysed.

The switch is done for the whole Munin installation by changing the system-wide graph_data_size, although already-created RRD databases aren't changed. It is then even possible for a user to pre-customize the RRD files; Munin will happily use them transparently, thanks to the RRD layer.

Manual overriding

Altering RRD files after they are created is possible, but not as simple. A standard RRD export & import takes the structure along with it, so data has to be moved around with special tools. rrdmove is my attempt to create such a tool: it copies data between two already existing RRD files, even asking RRD to interpolate the data when needed.

2.0 - Full control

Starting with 2.0, the parameter graph_data_size is per service. It also has a special mode : custom. Its format is very simple :

 
graph_data_size custom FULL_NB, MULTIPLIER_1 MULTIPLIER_1_NB, ... MULTIPLIER_N MULTIPLIER_N_NB
graph_data_size custom 300, 15 1600, 30 3000

The first number is the number of data points kept at full resolution; the following pairs then define gradually decreasing resolutions. In the example above, 300 data points are kept at full resolution, then 1600 points averaged over 15 steps, then 3000 points averaged over 30 steps.

A decreasing resolution has two uses:

  • Limiting space consumption: keeping full resolution for the whole period (default: 5 min for 2 years) is sometimes more precise than needed.
  • Increasing performance: RRD will choose the best-fitting resolution to generate its graphs, and already-aggregated data is faster to read.
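
As a sketch, and assuming the directive is accepted at the service level in munin.conf, a per-service override could look like this (host and service names are illustrative):

[host.example.com]
    address host.example.com
    load.graph_data_size custom 300, 15 1600, 30 3000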

Monday, 12 July 2010

Waiting for Munin 2.0 - Native SSH transport

In the Munin architecture, the munin-master has to connect to the munin-node via a very simple protocol, over plain TCP.

This has several advantages :

  1. Very simple to manage & install
  2. Optional SSL (since 1.4), enabling secure communications
  3. Quite simple firewall rules.

It has also some disadvantages :

  1. A new listening service means a wider exposure
  2. The SSL option might add some administrative overhead (certificate management, ...)
  3. A native protocol isn't always covered by all firewall solutions
  4. Some organisations only authorize a few protocols to simplify audits (e.g. only SSH & HTTPS)

Native SSH

These downsides may be solved by encapsulation over SSH, but maintaining that can become a tedious task as the number of hosts increases.

Therefore 2.0 introduces the concept of a native SSH transport. Its usage is dead simple: just replace the address with an ssh:// URL-like one.

The node still has to be modified to communicate over stdin/stdout instead of a network socket. For now, only pmmn and munin-async are able to provide such a node.

Configuration

The URL is quite self-explanatory, as shown in the example below:

[old-style-host]
    address host.example.com

[new-style-host]
    address ssh://munin-node-user@host.example.com/path/to/stdio-enabled-node --params

Installation notes

Authentication should be done via SSH keys, not passwords. The connection goes from munin-user@host-munin to munin-node-user@remote-node.
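
A sketch of the key setup from the master side (user names, host and node path are illustrative):

# as the munin user on the master
ssh-keygen -f ~/.ssh/id_munin -N ''
ssh-copy-id -i ~/.ssh/id_munin munin-node-user@host.example.com

# sanity check: this should drop you into the node's command prompt
ssh -i ~/.ssh/id_munin munin-node-user@host.example.com /path/to/stdio-enabled-node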

If you use munin-async, the user on the remote node can be a read-only one, since it only needs to read the spooled data. This implies using --spoolfetch, and not --vectorfetch, which updates the spool repository.

Upcoming HTTP(S) transport in 3.0

And the sweetest part is that, since all the work to add another transport has been done, adding a CGI-based HTTP transport is possible (and therefore done) for 3.0.
