Personal Workflow Blog


Tag - sysadmin


Monday, 2 December 2013

Experimenting with a C munin node

Core plugins are designed for simplicity...

As I wrote earlier, Helmut rewrote some core plugins in C. This was mainly done with efficiency in mind.

Since those plugins only parse a single /proc file, there was no need to endure the many forks inherent in even trivial shell programming. It also reflects the principle that the measuring system should be as light as possible.
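
As a rough illustration of the approach, here is a minimal sketch of such a plugin in C, assuming the standard plugin protocol (print the graph configuration when called with the config argument, print field.value lines otherwise); the field name and labels below are only illustrative.

    /* Minimal sketch of a munin "load" plugin in C: no forks, just one
       read of /proc/loadavg. The field name and labels are illustrative. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        /* "config" run: describe the graph. */
        if (argc > 1 && strcmp(argv[1], "config") == 0) {
            puts("graph_title Load average");
            puts("graph_vlabel load");
            puts("graph_category system");
            puts("load.label load");
            return 0;
        }

        /* Normal run: emit the current value. */
        FILE *f = fopen("/proc/loadavg", "r");
        if (!f) {
            perror("/proc/loadavg");
            return 1;
        }

        double load1;
        if (fscanf(f, "%lf", &load1) == 1)
            printf("load.value %.2f\n", load1);
        fclose(f);

        return 0;
    }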

Munin plugins are highly driven towards simplicity, so having shell plugins is quite logical. It serves the educational purpose of giving users samples from which to write their own, while being quite easy to code and debug for the developers. Since their impact on current systems is very small, there is not much incentive to change.

... but efficiency is coming!

Nonetheless, monitored systems are now becoming quite small.

This is mostly thanks to embedded systems like the Raspberry Pi, which means that the available processing power is much lower than on regular nodes[1].

The embedded C approach for plugins therefore has a new rationale.

Notes

[1] Datacenter nodes are usually at the high end of the spectrum rather than the low end.

Saturday, 13 April 2013

Spinoffs in the munin ecosystem

KISS is the core design of Munin

Munin's greatest strength is its very KISS architecture. It therefore gets many things right, such as a high degree of modularity.

Each component (master/node/plugin) has a simple API to communicate with the others.

Spin-offs ...

I admit that the master, and even the node, have convoluted code. In fact, some rewrites already exist.

... are welcome ...

And they are a really good thing, as they enable rapid prototyping of things that stock munin (currently) has trouble doing.

The stock munin is a piece of software that many depend upon, so it has to move at a much slower pace than one would want, myself included. As much as I want to add many features to it, I have to take extra care not to break things, even the least-known features.

So I take munin spin-offs very seriously and offer as much help as I can for them to succeed.

... because they are very valuable in the long term

In my opinion, competition is only bad in the short term; in the long term it usually adds significant value to the whole ecosystem. That said, there's always a risk of slowly becoming irrelevant, but I think that's the real power of open source's evolutionary paradigm: embrace them or become obsolete and get replaced.

After all, if someone takes the time to author a competitor with real threat potential, it usually means that there's a real itch to scratch and that many things are to be learnt.

Different layers of spin-offs

The munin ecosystem is divided into 3 main categories, obviously related to the 3 main components of munin: master, node & plugin.

Plugins

That's the most obvious part, as custom plugins are the real bread and butter of munin.

Stock plugins are mostly written in Perl or POSIX shell, as Perl is munin's own language and POSIX shell is ubiquitous. This is reflected in the fact that core munin provides 2 libraries (Perl & shell) to help plugin authoring.

So it's quite natural that each mainstream language has grown its own plugin library. Some languages even have two of them.

C

Some plugins have even been rewritten in plain C, since shell plugins were shown to have a significant impact on very under-powered nodes, such as embedded routers.

Node

This component is very simple. Yet it has to run on every node one wants to monitor. It is currently written in Perl, and while that's not an issue on UNIX-like systems, it can be quite problematic on embedded ones.

Simple munin

The official package comes with a POSIX shell rewrite that has to be run from inetd. It is quite useful for embedded routers running OpenWRT, but still suffers from a hard dependency on POSIX shell and inetd.
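
For illustration, wiring such a node into inetd could look roughly like the sketch below; the script name and path are hypothetical placeholders, only the port (4949, munin's default) and the inetd.conf layout are standard.

    # /etc/services: munin's default port
    munin           4949/tcp

    # /etc/inetd.conf: spawn the shell node for each incoming connection
    # (script name and path are placeholders)
    munin  stream  tcp  nowait  nobody  /usr/sbin/munin-node-simple  munin-node-simple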

SNMP

SNMP is another way to monitor nodes. While it works really well, it mostly suffers from the fact that its configuration is quite different from the usual way, so I guess some things will change on that side.

Win32 ports

Win32 has long been a very difficult OS to monitor, as it doesn't offer many of the UNIX-esque features. Yet the number of win32 nodes one wants to monitor is quite high, and supporting them makes munin one of the few systems that can easily monitor heterogeneous environments.

Therefore, while you can install the stock munin-node, several dedicated projects have emerged. We decided to adopt munin-node-win32.

Android

There's also a dedicated node for Android. It makes sense, given that Android is Linux-derived but lacks Perl and is a mostly-Java platform. This node also has some basic capabilities for pushing data to the master instead of the usual polling.

This is especially interesting given that Android nodes are usually loosely connected, so the node spools values itself and pushes them when connectivity is recovered.

Note that this is precisely an aspect that munin currently lacks, and I'm planning to address it in the 2.1 series. So thanks to its author for showing a relevant use case.

C

That's my latest experiment. It started with a simple question: how difficult would it be to code a fairly portable version of the node?

It turned out that it wasn't that difficult. I'm even considering eventually replacing the win32-specific port with this one, as the code is much simpler. The win32 node has several plugins built in, mostly due to platform specifics; I still have to find a way to work around that, but it's in quite good shape.

This post was originally meant to promote it, but while writing it I realized that the ecosystem deserved a post of its own. So I'll write another one, specific to the C port of munin-node and its plugins.

Master

The master is the most complex component, so straight rewrites of it won't happen as such. They usually take the form of a bridge between the munin protocol and another graphing system, such as Graphite.

Clients

There are also client libraries that can directly query munin nodes, in order to reuse the vast plugin ecosystem. Languages vary, from the obvious Python and Ruby to a quite modern node.js one.
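
To give a feel for how simple that protocol is, here is a rough sketch in C that queries a node directly, assuming a node listening on the default port 4949 on localhost; it sends list followed by quit and prints whatever the node answers.

    /* Rough sketch of a direct munin-node query.
       Assumes a node on localhost:4949 (the default port);
       error handling is kept to a strict minimum. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4949);          /* default munin-node port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }

        /* Ask for the plugin list, then end the session. */
        const char *cmds = "list\nquit\n";
        if (write(fd, cmds, strlen(cmds)) < 0) { perror("write"); return 1; }

        /* Print everything the node sends back: its banner line and
           the space-separated plugin list. */
        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }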

Friday, 1 February 2013

Avoid those milli-hits in Munin

A recurring question on IRC is: "why do I have 500 million hits/s in my graph?".

It turns out that they are really seeing 500m hits/s, where the lower-case m means milli and not Mega, as specified by the metric system. This scaling is done automatically by RRD.

To avoid this, just specify graph_scale no, as documented.
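
For instance, it can go either in the plugin's config output or as an override in the master's munin.conf; the host and plugin names below are only examples.

    # In the plugin's "config" output:
    graph_scale no

    # Or as an override in munin.conf on the master:
    [host.example.com]
        apache_accesses.graph_scale no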

Monday, 23 August 2010

Waiting for Munin 2.0 - Keep more data with custom data retention plans

RRD is Munin's backbone.

Munin keeps its data in RRD databases. RRD is a wonderful piece of software, designed for this very purpose: keeping a history of numeric data.

All you need to do is tell RRD how long and at what precision you want to keep your data. RRD then manages all the underlying work: pruning old data, averaging to decrease precision when needed, and so on.

Munin automatically creates the RRD databases it needs.

1.2 - Only one set

In 1.2, every database was created with the same temporal & precision parameters. Since the output parameters were constant (day, week, month and year graphs), there was little need for a different set of parameters.

1.4 - 2 sets: normal & huge

In 1.4, various users expressed their need for different graphing outputs and began to hack around Munin's fixed graphing. It rapidly became obvious that the 1.2 preset wasn't a fit for everyone.

Therefore a huge dataset was made available, extending the finest precision (5 min) to the whole Munin timeframe. This comes at a price though: more space is required, and graph generation is slower, especially for the yearly graph, since more data has to be read and analysed.

The switch is done for the whole munin installation by changing the system-wide graph_data_size, although already-created RRD databases aren't changed. It is even possible for a user to pre-customize the RRD files; Munin will then happily use them transparently thanks to the RRD layer.
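
For instance, switching the whole installation to the huge preset is a single line in munin.conf:

    graph_data_size huge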

Manual overriding

Altering an RRD file after it is created is possible, but not as simple. RRD's standard export & import take the structure with them, so data has to be moved around with special tools. rrdmove is my attempt at such a tool: it copies data between 2 already-existing RRD files, even asking RRD to interpolate the data when needed.

2.0 - Full control

Starting with 2.0, the parameter graph_data_size is per service. It also has a special mode: custom. Its format is very simple:

 
graph_data_size custom FULL_NB, MULTIPLIER_1 MULTIPLIER_1_NB, ... MULTIPLIER_N MULTIPLIER_N_NB
graph_data_size custom 300, 15 1600, 30 3000

The first number is the number of data points kept at full resolution. It is usually followed by sets of gradually decreasing resolution.
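
As a worked example, assuming the default 5-minute update interval, the line above keeps roughly 300 × 5 min ≈ 25 hours of full-resolution data, then 1600 points at 75-minute resolution (15 × 5 min), about 83 days, and finally 3000 points at 150-minute resolution (30 × 5 min), about 312 days.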

A decreasing resolution has 2 uses:

  • Limit space consumption: keeping full resolution for the whole period (default: 5 min for 2 years) is sometimes too precise.
  • Increase performance: RRD will choose the best-fitting resolution to generate its graphs, and already-aggregated data is faster to process.

Monday, 12 July 2010

Waiting for Munin 2.0 - Native SSH transport

In the munin architecture, the munin-master has to connect to the munin-node via a very simple protocol over plain TCP.

This has several advantages:

  1. Very simple to manage & install
  2. Optional SSL since 1.4, enabling secure communications
  3. Quite simple firewall rules.

It also has some disadvantages:

  1. A new listening service means wider exposure
  2. The SSL option might add some administrative overhead (certificate management, ...)
  3. A native protocol isn't always covered by all firewall solutions
  4. Some organisations only authorize a few protocols to simplify audits (e.g. only SSH & HTTPS)

Native SSH

These downsides may be solved by tunnelling over SSH, but that can become tedious to maintain as the number of hosts grows.

Therefore 2.0 introduces the concept of a native SSH transport. Its usage is dead simple: replace the address with an ssh:// URL-like one.

The node still has to be modified to communicate via stdin/stdout instead of a network socket. For now, only pmmn and munin-async are able to provide such a node.

Configuration

The URL is quite self-explanatory, as shown in the example below:

[old-style-host]
    address host.example.com

[new-style-host]
    address ssh://munin-node-user@host.example.com/path/to/stdio-enabled-node --params

Installation notes

Authentication should be done via SSH keys rather than passwords. The connection goes from munin-user@host-munin to munin-node-user@remote-node.

If you use munin-async, the user on the remote node can be a read-only one, since it only needs to read spooled data. This implies using --spoolfetch and not --vectorfetch, which updates the spool repository.
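
As an illustration, a munin-async host entry on the master could look roughly like this; the binary path is a placeholder and depends on the installation.

    [async-host]
        address ssh://munin-node-user@host.example.com /usr/share/munin/munin-async --spoolfetch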

Upcoming HTTP(S) transport in 3.0

And the sweetest part is that, since all the work needed to add another transport has been done, adding a CGI-based HTTP transport is possible (and therefore done) for 3.0.