Personal Workflow Blog


Saturday, 26 June 2010

Waiting for Munin 2.0 - Performance - Asynchronous updates

munin-update is the fragile link in the munin architecture. A missed execution means that some data is lost.

The problem: updates are synchronous

In Munin 1.x, updates are synchronous: the value of each service[1] is the one that munin-update retrieves on each scheduled run.

The issue is that munin-update has to ask every service on every node for its values. Since the values are only computed when asked for, munin-update has to wait quite some time for each value.

This very simple design gives munin the simplest possible plugins: they are completely stateless. While this is one of munin's great strengths, it deals a severe blow to scalability: more plugins per node obviously means slower retrieval.
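To make the stateless design concrete, here is a minimal sketch of such a plugin (the "load" plugin is used as a hypothetical example; the config/fetch convention is the standard plugin protocol, but the exact fields here are illustrative):

```shell
#!/bin/sh
# Minimal stateless Munin plugin sketch. munin-node calls it with
# "config" to get the graph definition, and with no argument to
# fetch the values -- which are computed on the spot, every time.
plugin() {
    case "$1" in
    config)
        echo "graph_title Load average"
        echo "graph_vlabel load"
        echo "load.label load"
        ;;
    *)
        # computed right now, nothing is cached between runs
        # (falls back to 0 where /proc/loadavg does not exist)
        load=$(cut -d' ' -f1 /proc/loadavg 2>/dev/null) || load=0
        echo "load.value $load"
        ;;
    esac
}

plugin config
plugin
```

Every fetch pays the full computation cost, which is exactly why munin-update spends most of its time waiting.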

Evolving Solutions

1.4 : Parallel Fetching

1.4 addresses some of these scalability issues by implementing parallel fetching. It takes advantage of the fact that most of munin-update's execution time is spent waiting for replies. In 1.4, munin-update can query up to max_processes nodes in parallel.
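The knob lives in munin.conf on the master; a hypothetical excerpt (the value is an example, not a recommendation):

```
# /etc/munin/munin.conf -- illustrative excerpt
# poll up to 16 nodes in parallel instead of one after the other
max_processes 16
```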

Now the I/O part becomes the next limiting factor, since updating many RRDs in parallel amounts to random I/O access for the underlying munin-master OS. Serializing & grouping the updates will be possible with the new RRDp interface of rrdtool 1.4 and on-demand graphing. Tomas Zvala even offered a patch for 1.4 RRDp on the ML. It is very promising, but doesn't address the root defect of this design: a hard dependency on regular munin-update runs.

2.0 : Stateful plugins

2.0 provides a way for plugins to be stateful. They can schedule their polling themselves, so that when munin-update runs, it only collects already computed values. This way, a missed run isn't as dramatic as in the 1.x series, since data isn't lost. Data collection is also much faster, because the real computing is done ahead of time.
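The idea can be sketched in a few lines (all names here are hypothetical, not an actual 2.0 API): a collector, scheduled by the plugin itself, does the expensive work ahead of time, and the fetch triggered by munin-update only reads what is already there.

```shell
#!/bin/sh
# Sketch of a stateful plugin. A collector (cron job, small
# daemon...) computes the expensive value on its own schedule;
# the fetch side just reads the precomputed state.
STATE="${MUNIN_PLUGSTATE:-/tmp}/demo.state"

# --- collector side: runs ahead of time, does the real work ---
echo "expensive.value 42" > "$STATE"

# --- fetch side: what munin-update triggers; instantaneous ---
cat "$STATE"
```

The fetch is now a plain file read, so a slow computation no longer stretches the munin-update run.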

2.0 : Asynchronous proxy node

But changing plugins to be stateful and self-polled is difficult and tedious. It even works against one of the real strengths of munin: having simple & stateless plugins.

To address this concern, an experimental proxy node was created. For 2.0 it takes the form of a pair of processes: munin-async-server and munin-async-client.

The proxy node in detail (munin-async)


These 2 processes form an asynchronous proxy between munin-update and munin-node. This avoids the need to change the plugins or upgrade munin-node on all nodes.

munin-async-server should be installed on the same host as the proxied munin-node, in order to avoid any network issues. It is the process that regularly polls munin-node. munin-update's I/O issue is non-existent here, since munin-async stores all the values by simply appending them to a text file without any further processing. This file is later read by the client's munin-update and processed there.
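A rough sketch of what the server side does (the file layout and record format here are illustrative, not munin-async's real ones): the point is that storing a sample is a plain sequential append, with none of the random I/O that RRD updates cause.

```shell
#!/bin/sh
# Illustrative spooling: poll the local munin-node for a service
# and append the raw, timestamped answer to a plain text file.
SPOOL="${TMPDIR:-/tmp}/munin-async-demo"
mkdir -p "$SPOOL"

spool_once() {  # $1 = service name, plugin output on stdin
    { echo "timestamp $(date +%s)"; cat; } >> "$SPOOL/$1.spool"
}

# pretend this line came from asking the local munin-node for "load"
echo "load.value 0.42" | spool_once load

tail -n 1 "$SPOOL/load.spool"
```

All the expensive RRD processing happens later, on the master, when munin-update replays the spool.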

Specific update rates

Having one proxy per node makes it possible to poll each service there at its own update rate.

To achieve this, munin-async-server forks into multiple processes, one for each proxied service. This way each service is completely isolated from the others: it can have its own update rate, it is safe from other plugins' slowdowns, and the information gathering is even completely parallelized.

SSH transport

munin-async-client uses the new native SSH transport of 2.0. It makes installing the async proxy very simple.
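On the master side, the configuration for such a node could look roughly like this (host name hypothetical, and the exact syntax changed before the final 2.0 release, as the 2009 post below notes):

```
# munin.conf on the master -- rough sketch, syntax may differ
[mynode.example.org]
    address ssh://munin@mynode.example.org
```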


[1] in 1.2 it's the same as plugin, but since 1.4 and the introduction of multigraph, one plugin can provide multiple services.

Wednesday, 16 June 2010

Waiting for Munin 2.0 - Performance - FastCGI

1.2 has CGI: it is slow and unsupported, but it does exist.

1.4 even has an experimental FastCGI install mode.

Quoting from this page:

This is more a proof of concept than a recommended - it's slow. Also we do not test it before every release

In 2.0 a lot of work has been done to turn this experimental CGI mode into a supported one. It might even become the primary way of using munin since, once an install reaches a certain size, CGI becomes mandatory.

That's because munin-graph doesn't have time to finish its job before the next one is launched, and that new one doesn't run. It is not as dramatic as a missed munin-update execution, since the graphs will still be generated on a later round, but there will be random graph lag and quite some stress on the CPU & I/O subsystems. This also slows munin-update down, since it makes heavy use of the I/O subsystem too, and that is to be avoided at all costs.

Making CGI mainstream has some consequences:

  1. Only the FastCGI wrapper remains: the plain CGI one is dropped.
    • The CPAN module CGI::Fast is still compatible when launched as a normal CGI.
    • Almost all HTTP servers support plain CGI, and with the cgi-fcgi wrapper from the FastCGI devkit (Debian package libfcgi), you can have the best of both worlds (a custom HTTP server & FastCGI). I even posted about how to get a working thttpd with FastCGI.
  2. The old process limit mechanism is dropped as well. The FastCGI server configuration is a much better way to control this. The old code was based on System V semaphores and was not 100% reliable.
  3. A caching system has to be implemented, so that each graph is generated only once for its lifetime.
  4. The CGI process is launched as the HTTP server user. Since it no longer only reads, but also writes log files and image files, there is an extra step when installing it. But it's already described in the Munin CGI page mentioned previously.
  5. Since the process is launched only once, for now it reads the config only once. So if any part of the config changes, the FastCGI container MUST be restarted.

Some benchmarks

Now, the sweet part: some micro-benchmarks.

They should be taken with the caution every benchmark deserves, but I think they convey the general idea. For the sake of simplicity I'm only doing 1 request at a time and have disabled IMS caching.

Basic 1.2 CGI

$ httperf --num-conns 10  --add-header='Cache-Control: no-cache\n' \
    --uri  /cgi-bin/munin-cgi-graph/localdomain/localhost.localdomain/cpu-day.png

Total: connections 10 requests 10 replies 10 test-duration 27.939 s

Connection rate: 0.4 conn/s (2793.9 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 1653.9 avg 2793.9 max 5217.0 median 1912.5 stddev 1487.8
Connection time [ms]: connect 0.0
Connection length [replies/conn]: 1.000

Request rate: 0.4 req/s (2793.9 ms/req)
Request size [B]: 131.0

1.4 FastCGI

munin-fastcgi-graph is loaded only once, but munin-graph is reloaded on each request.

$ httperf --num-conns 10  --add-header='Cache-Control: no-cache\n' \
    --uri  /cgi-bin/munin-fastcgi-graph/localdomain/localhost.localdomain/cpu-day.png

Total: connections 10 requests 10 replies 10 test-duration 13.807 s

Connection rate: 0.7 conn/s (1380.7 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 1141.3 avg 1380.7 max 1636.1 median 1381.5 stddev 173.7
Connection time [ms]: connect 0.0
Connection length [replies/conn]: 1.000

Request rate: 0.7 req/s (1380.7 ms/req)

The response time is almost cut in half. That's expected, since only the top half of the processing is spared reloading.

2.0 FastCGI

Here everything is loaded once.

$ httperf --num-conns 10  --add-header='Cache-Control: no-cache\n' \
    --uri  /cgi-bin/munin-cgi-graph-2.0/localdomain/localhost.localdomain/cpu-day.png

Total: connections 10 requests 10 replies 10 test-duration 1.668 s

Connection rate: 6.0 conn/s (166.8 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 123.0 avg 166.8 max 513.4 median 127.5 stddev 121.9
Connection time [ms]: connect 0.0
Connection length [replies/conn]: 1.000

Request rate: 6.0 req/s (166.8 ms/req)

Now the response time is cut almost by a factor of ten! That's quite good news, since it is about 17 times faster than the original CGI (2793.9 ms vs 166.8 ms average).

Thursday, 10 June 2010

Waiting for Munin 2.0 - Performance - Architecture

A little intro/refresher on munin's architecture on the master

Munin has a very simple architecture on the master: munin-cron is launched via cron every 5 minutes. Its only job is to launch, in order, munin-update, munin-graph, munin-html & munin-limits.
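As a sketch, the whole master side boils down to a single crontab entry like this one (paths are the usual packaged ones, adjust to your install):

```
# /etc/cron.d/munin -- illustrative
# every 5 minutes, run the update/graph/html/limits chain
*/5 * * * *   munin   /usr/bin/munin-cron
```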

The various processes


munin-update

This process retrieves the values from the various nodes and updates the RRD files. It should never take more than 5 minutes to run, otherwise there will be gaps, since the next update will not be launched (runs are lockfile-protected).

This process stresses the I/O on the master and depends on the plugins' execution time on the various nodes. In 1.4 the retrieval is multi-threaded[1], so a slow node doesn't impact the whole process too much.

2.0 proposes asynchronous updates and vectorized updates.


munin-graph

This process generates all the image files from the RRD files.

It is usually quite CPU-bound, and it also generates a fair amount of I/O. Since 1.4, graph generation can be parallelized in order to take advantage of multiple CPUs / multiple I/O paths.

A simple optimization is to generate only the needed graphs instead of all of them each time. This leads to CGI generation of graphs. 1.2 & 1.4 took a first step in this direction, but it's quite a hack, since it's only a very basic script that calls munin-graph with the correct parameters.

A FastCGI port of the wrapper (munin-cgi-graph) removes the overhead of starting the wrapper for each call, but in 1.4 that code is quite experimental and has some serious bugs that would need extensive patching.

2.0 completes the integration of CGI graphing, removing the overhead of calling munin-graph, and includes that extensive patching to fix the bugs.


munin-html

This process generates all the HTML files from the RRD files. It is quite fast for now.


munin-limits

This process checks the limits to see if there is a warning/alert to send via mail or Nagios. It is also quite fast for now.


[1] more multi-process actually

Tuesday, 8 June 2010

Waiting for Munin 2.0 - Introduction

This is the first article of a series about the coming version 2.0 of Munin.

The idea came from the series Waiting for 8.5 about PostgreSQL.

The ironic part is that their 8.5 release became a 9.0, just like our 1.5 will become a 2.0.

I'll post several small articles about new or significantly enhanced features. They will all be tagged munin20.

Planned summary:

  1. Performance - Architecture context
  2. Performance - FastCGI
  3. Performance - Asynchronous updates
  4. Performance - Misc
  5. Native SSH transport
  6. Custom data retention plans (keep more data)
  7. Dynamic zooming

Thursday, 26 November 2009

Native SSH transport for Munin

Update: This page is obsolete, since this work has been merged into the 2.0 version of Munin. The syntax is a little different, but the idea is the same.

I blogged about the munin monitoring system a while ago.

The fact that the Munin team did quite a remarkable job cleaning up the 1.2 code for the 1.4 release enabled me to add a native SSH transport for Munin, and to get rid of all my SSH tunnels.

Continue reading...
