[icinga-users] organizing remote passive checks?

Simon Oosthoek soosthoek at nieuwland.nl
Tue Jan 22 10:12:59 CET 2013


On 01/21/2013 09:43 PM, Michael Friedrich wrote:
> On 21.01.2013 15:30, Simon Oosthoek wrote:
>> Translating this to how I would administer this, I would apparently have
>> to keep two separate configuration entries for the same host, one for
>> the central server and one for the distributed check on the remote
>> server. (If the central server _can_ do active checks, the two
>> configurations may not need to differ.)
>>
>> How does one keep this consistent?
>
> with different automated distribution methods (git, puppet, etc) or by
> hand with one central config, but different master templates, if you
> require parts of your master server to actually run checks too.
>
> if you do not want to let the master run any checks, globally disable
> host and service check execution and only let freshness checks happen

In that case you would keep the service and host configurations the 
same, but globally disable the checks from being run? What happens with 
freshness checks?
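
If I read it right, on the master that would boil down to something 
like this in icinga.cfg (a sketch of my understanding, untested; these 
are the standard core directives):

# don't actively execute any checks on the master
execute_service_checks=0
execute_host_checks=0
# but keep accepting passive results from the satellites
accept_passive_service_checks=1
accept_passive_host_checks=1
# and verify that those results keep arriving
check_service_freshness=1
check_host_freshness=1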

>> Assuming a central repository for both types of servers, I'd imagine a
>> different configuration directory for the remote server, as it contains
>> less objects, than the central server. Keeping both configurations for
>> the same object synchronised seems not entirely trivial to do? Any
>> pointers to how this can be managed more easily?
>
> with my netways hat on, i'd say lconf with lconf export for the slaves.
> but this is not so easy with 1.2, and 1.3 is still in rc mode, with
> some stuff to fix and documentation to update as well.
>

lconf looks interesting (though I'd have to dig into LDAP as well). 
Thinking further after I sent the e-mail, I had a similar idea using 
git instead, a bit like you say here:

> with my icinga hat on, i'd say same configuration using a master
> template, which sets the active/passive flag (and command) on each
> host/service, if the master icinga.cfg entries are not allowed to be
> disabled entirely. though, that's the recommended way of doing this:
> not forcing you to keep 2 different configs, but only controlling that
> via the core configuration instead [0].
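
For that master-template approach, I picture something like this 
(sketch only, I haven't tried it; the template and command names are 
made up):

# master side: accept passive results, alarm when they go stale
define service {
    name                    remote-service
    active_checks_enabled   0
    passive_checks_enabled  1
    check_freshness         1
    freshness_threshold     900   ; seconds without a fresh result
    check_command           report-service-stale
    register                0
}

# satellite side: actually execute the check
define service {
    name                    remote-service
    active_checks_enabled   1
    passive_checks_enabled  0
    register                0
}

(As far as I understand, when a result goes stale the core forces 
check_command to run anyway, so report-service-stale would just be a 
plugin that returns CRITICAL "no data received".)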

I'd store a complete set of configs for the central and remote server in 
the same git repository and then run a script to turn active into 
passive checks for a list of services/hosts stored in a file in the 
repository. The script still has to be written of course ;-)
I figure it would be easiest to keep both the active and passive 
check_command rules in the repository and comment out the one that isn't 
needed on the server where the config is used.
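
As a first rough idea, the script could look something like this 
(completely untested sketch; the passive.list format and the 
one-directive-per-line layout are my own assumptions):

#!/usr/bin/env python
# Sketch: rewrite a config file so that services named in passive.list
# get active_checks_enabled 0 / passive_checks_enabled 1.
import re
import sys

def load_passive(path):
    # one service_description per line; '#' starts a comment
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.lstrip().startswith('#')}

def flip(block, passive):
    # leave anything that isn't a listed service definition alone
    m = re.search(r'^\s*service_description\s+(.+?)\s*$', block, re.M)
    if m is None or m.group(1) not in passive:
        return block
    block = re.sub(r'^(\s*active_checks_enabled\s+)\S+', r'\g<1>0',
                   block, flags=re.M)
    block = re.sub(r'^(\s*passive_checks_enabled\s+)\S+', r'\g<1>1',
                   block, flags=re.M)
    return block

def main(cfg, listfile):
    passive = load_passive(listfile)
    text = open(cfg).read()
    # naive split: keeps each 'define service { ... }' block intact,
    # assuming no nested braces (true for Icinga 1.x object syntax)
    parts = re.split(r'(define\s+service\s*\{[^}]*\})', text)
    sys.stdout.write(''.join(flip(p, passive) for p in parts))

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])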

I haven't dug into how to handle this kind of active/passive setting 
with host/service tricks using hostgroups yet.

>
> if you have more than one probe, and want to distribute the checks by
> splitting them up by location e.g., it will get more tricky to really
> push only the configuration matching each satellite. if you are
> using hostgroup tricks to assign services to hosts, you can re-use the
> hostgroups if you enable the empty_hostgroup_assignment config option,
> which has been added for that exact purpose - to allow basic config
> template distribution among satellites, though not everyone uses those.


I'm having a bit of trouble understanding this part. I can see the 
benefit of allowing empty hostgroups, as this allows you to add members 
from the host definitions instead of centrally. Sometimes this may 
result in empty hostgroups, especially if some groups don't occur at 
the remote server. Is this what you meant?
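
To check my understanding, a sketch of what I think that looks like 
(host and group names made up):

# icinga.cfg on each satellite
empty_hostgroup_assignment=1

# shared object config
define hostgroup {
    hostgroup_name  webservers
    alias           Web servers
    # no members here; hosts add themselves below
}

define host {
    use         generic-host
    host_name   web01.example.org
    hostgroups  webservers
}

define service {
    use                 generic-service
    hostgroup_name      webservers
    service_description HTTP
    check_command       check_http
}

A satellite that doesn't define any host in "webservers" could then 
still parse the shared service definition without an error, instead of 
choking on a service assigned to an empty hostgroup.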

Cheers

Simon



