[icinga-users] organizing remote passive checks?

Michael Friedrich michael.friedrich at gmail.com
Mon Jan 21 21:43:36 CET 2013

On 21.01.2013 15:30, Simon Oosthoek wrote:
> I've been reading the documentation on passive checks (particularly
> distributed checks). (http://docs.icinga.org/latest/en/distributed.html)
> As I understand it, on the central server you define all hosts, and all
> service checks for the hosts. On the remote server you define only the
> hosts that will be checked from there and the services with their
> (active) check_commands.
> If the central server will not do active checks on a particular service
> or host you can use a dummy check that will show up when the freshness
> runs out on a result.
> Translating this to how I would administer this, I would apparently have
> to keep two separate configuration entries for the same host, one for
> the central server and one for the distributed check on the remote
> server. (If the central server _can_ do active checks, the configuration
> may not be different).
> How does one keep this consistent?

with different automated distribution methods (git, puppet, etc.) or by 
hand with one central config, but different master templates, if you 
require parts of your master server to actually run checks too.

if you do not want the master to run any checks at all, globally disable 
host and service check execution and only let freshness checks happen.
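
a minimal sketch of that passive-only master, assuming the standard 
check_dummy plugin from the nagios-plugins package and a generic-service 
template from the sample config (names are illustrative):

```
# icinga.cfg on the master: no active checks, but freshness checking on
execute_service_checks=0
execute_host_checks=0
check_service_freshness=1
check_host_freshness=1

# fired only when a passive result goes stale
define command {
    command_name    check_dummy_stale
    command_line    $USER1$/check_dummy 2 "no passive result received"
}

# service that only receives passive results from the satellite
define service {
    use                     generic-service
    host_name               remote-host1
    service_description     disk
    active_checks_enabled   0
    passive_checks_enabled  1
    check_freshness         1
    freshness_threshold     900
    check_command           check_dummy_stale
}
```

if no result arrives within freshness_threshold seconds, the master runs 
check_dummy once and the service turns critical.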
> Assuming a central repository for both types of servers, I'd imagine a
> different configuration directory for the remote server, as it contains
> less objects, than the central server. Keeping both configurations for
> the same object synchronised seems not entirely trivial to do? Any
> pointers to how this can be managed more easily?

with my netways hat on, i'd say lconf with lconf export for the slaves. 
but this is not so easy with 1.2, and 1.3 is still in rc mode, with some 
stuff to fix and documentation to update as well.

with my icinga hat on, i'd say use the same configuration with a master 
template which sets the active/passive flag (and command) on each 
host/service, if the master's icinga.cfg entries cannot be disabled 
entirely. that is actually the recommended way of doing this: it does 
not force you to keep 2 different configs, but lets you control the 
behaviour via core configuration instead [0].
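
as a sketch, the same service definition can inherit from a template 
that differs per instance - only the template file is swapped between 
master and satellite, while the object files stay identical (template 
and check names here are made up):

```
# template file shipped only to the master: results come in passively
define service {
    name                    dist-check-tmpl
    active_checks_enabled   0
    passive_checks_enabled  1
    check_freshness         1
    freshness_threshold     900
    register                0
}

# template file shipped only to the satellite: same name, active checks
define service {
    name                    dist-check-tmpl
    active_checks_enabled   1
    passive_checks_enabled  1
    check_freshness         0
    register                0
}

# identical object definition on both master and satellite
define service {
    use                     dist-check-tmpl
    host_name               remote-host1
    service_description     disk
    check_command           check_disk!20%!10%
}
```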

if you have more than one probe and want to distribute the checks, e.g. 
by splitting them up per location, it gets more tricky to push only the 
configuration matching each satellite. if you are using hostgroup tricks 
to assign services to hosts, you can re-use the hostgroups if you enable 
the allow_empty_hostgroup_assignment config option, which was added for 
exactly that purpose - to allow basic config template distribution among 
satellites - though not everyone uses it.
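a sketch of the hostgroup trick: the service references a hostgroup and 
can be shipped to every satellite unchanged; where the hostgroup has no 
members, allow_empty_hostgroup_assignment keeps config verification from 
failing (hostgroup and check names here are made up):

```
# icinga.cfg on each satellite
allow_empty_hostgroup_assignment=1

# shipped to all satellites unchanged; members differ per location
# and the group may be empty on some satellites
define hostgroup {
    hostgroup_name  linux-servers
}

# attached to whatever members the local hostgroup happens to have
define service {
    use                     generic-service
    hostgroup_name          linux-servers
    service_description     load
    check_command           check_load!5,4,3!10,8,6
}
```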

other than that, there is still lconf with the satellite setup and 
automated configuration split and distribution [1]. if you're brave, try 
the git master of lconf ;)

kind regards,

[0] http://docs.icinga.org/latest/en/distributed.html#centralconfig

> Cheers
> Simon
> _______________________________________________
> icinga-users mailing list
> icinga-users at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/icinga-users

DI (FH) Michael Friedrich

mail:     michael.friedrich at gmail.com
twitter:  https://twitter.com/dnsmichi
jabber:   dnsmichi at jabber.ccc.de
irc:      irc.freenode.net/icinga dnsmichi

icinga open source monitoring
position: lead core developer
url:      https://www.icinga.org
