[icinga-devel] Icinga redesign

Vitali Voroth vitalivoroth at web.de
Tue Jun 8 23:05:11 CEST 2010


Sorry, did I miss something?

So, you want to rewrite Icinga completely from scratch?
What about backward compatibility with Nagios?
Shouldn't switching from Nagios to Icinga be as easy as possible?
Shouldn't a Nagios user be able to (at least) reuse their old
configuration files?

On 08.06.2010 22:15, Hiren Patel wrote:
> posting this to devel for input, ideas, discussion, etc.
> 
>>> --------
>>> before we start any dev, perhaps a small outline of coding style we will use
>>>    
>> before we write even a single line of code, we need to agree on the exact coding style, e.g.
>>
>> if(condition) {
>>      test();
>> }
>>
> 
> cool, I'll put together a small list in the week to come.
> 
>> Moreover, the comments and (function) headers should be written in
>> Doxygen format.
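>>
>> For illustration, a function header in that style could look roughly like
>> this (the names and types are only placeholders, nothing final):
>>
>> /**
>>  * @brief Schedule the next check for a service.
>>  *
>>  * @param svc        pointer to the service object to schedule
>>  * @param check_time UNIX timestamp at which the check should run
>>  *
>>  * @return 0 on success, -1 on error
>>  */
>> int schedule_service_check(struct service *svc, time_t check_time);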
>>
>>
>>> my thought is to have lists, with each thread processing specific lists, locking where appropriate.
>>> e.g. a notification list: the notification thread will continuously watch for anything on this list,
>>> and if/when there are jobs on this list, the thread will lock, remove a job, unlock, and process the job,
>>> the job being a notification to send out; the list will point to an object with all the details.
>>> as such, each thread handles such a list, and others add onto it where needed.
>>>    
>>
>> Sounds good. We should talk in depth about what should be possible with
>> those lists, and whether we can build that simply in C, or whether we
>> should switch over to C++ in this regard. That's more or less the basic
>> question before we start coding.
>> I can see a small problem with C - we could end up looking at the old
>> code and just copy-pasting things. That's not the way it's meant to be.
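>>
>> Just to sketch what such a list could look like in plain C with pthreads
>> (all names here are purely illustrative, nothing final):
>>
>> #include <pthread.h>
>> #include <stdlib.h>
>>
>> /* one queued notification job */
>> typedef struct notification_job {
>>     void *object;                    /* host/service the notification is about */
>>     struct notification_job *next;
>> } notification_job;
>>
>> static notification_job *notification_list = NULL;
>> static pthread_mutex_t notification_lock = PTHREAD_MUTEX_INITIALIZER;
>> static pthread_cond_t  notification_cond = PTHREAD_COND_INITIALIZER;
>>
>> /* other threads append jobs: lock, add, signal the worker, unlock */
>> void add_notification_job(void *object) {
>>     notification_job *job = malloc(sizeof(*job));
>>     if (job == NULL)
>>         return;
>>     job->object = object;
>>     pthread_mutex_lock(&notification_lock);
>>     job->next = notification_list;
>>     notification_list = job;
>>     pthread_cond_signal(&notification_cond);
>>     pthread_mutex_unlock(&notification_lock);
>> }
>>
>> /* the notification thread: lock, remove a job, unlock, then process it */
>> void *notification_worker(void *arg) {
>>     (void)arg;
>>     for (;;) {
>>         pthread_mutex_lock(&notification_lock);
>>         while (notification_list == NULL)
>>             pthread_cond_wait(&notification_cond, &notification_lock);
>>         notification_job *job = notification_list;
>>         notification_list = job->next;
>>         pthread_mutex_unlock(&notification_lock);
>>
>>         /* send_notification(job->object);  processing happens unlocked */
>>         free(job);
>>     }
>>     return NULL;
>> }
>>
>> The processing itself happens outside the lock, so a slow notification
>> never blocks the threads that only want to queue new jobs.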
>>
> 
> I haven't really done C++, but I'm sure I could learn it in a few weeks.
> is there any real advantage to switching to C++ besides resisting copy/paste from the existing core?
> 
>>> I was thinking these lists will have global locks that threads would lock and unlock, and the main data structure storage
>>> could possibly have locks per object if feasible, so one thread wanting to update a service struct with new perf data,
>>> for example, and another wanting to read from a different service struct to populate macros, could do so at the same time.
>>>    
>>
>> Yep, of course. As a matter of fact, with only a single global lock you
>> would serialize everything and quickly run into deadlocks.
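>>
>> Roughly like this, with the lock living in the object itself (field names
>> invented, the mutex would be initialised when the object is created):
>>
>> #include <pthread.h>
>> #include <string.h>
>>
>> typedef struct service {
>>     char perf_data[1024];
>>     pthread_mutex_t lock;            /* per-object lock */
>> } service;
>>
>> /* writer: a check result thread updates one service with fresh perf data */
>> void update_perf_data(service *svc, const char *perf) {
>>     pthread_mutex_lock(&svc->lock);
>>     strncpy(svc->perf_data, perf, sizeof(svc->perf_data) - 1);
>>     svc->perf_data[sizeof(svc->perf_data) - 1] = '\0';
>>     pthread_mutex_unlock(&svc->lock);
>> }
>>
>> /* reader: another thread copies data out of a different service for macros */
>> void read_perf_data(service *svc, char *buf, size_t len) {
>>     pthread_mutex_lock(&svc->lock);
>>     strncpy(buf, svc->perf_data, len - 1);
>>     buf[len - 1] = '\0';
>>     pthread_mutex_unlock(&svc->lock);
>> }
>>
>> Two threads touching two different service structs never contend; they
>> only wait for each other when they really hit the same object.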
>>
>>> we do away with as many global variables as possible
>>>
>>> make macro functions reentrant, so that thread jobs can request macros etc. without conflict
>>>
>>> separate threads to:
>>> =======
>>> run host/service checks
>>>   with the ability to run active checks on slave nodes, like DNX does
>>>   possibly have a built-in ping module to thread on instead of fork (since ping is common)
>>> reschedule checks
>>> external command checks
>>> check reaper
>>> retention save
>>> notifications
>>> event handlers
>>> performance data processing
>>> module handling
>>> general tasks:
>>>   schedule downtime
>>>   freshness check
>>>   comment handling
>>>   flapping calc
>>>   stats handling including profiling
>>> general low pri tasks:
>>>   log rotation
>>>    
>> status api (like livestatus currently is)
>> event broker
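>>
>> On the "make macro functions reentrant" point from the list above: the key
>> is that nothing goes through static or global result buffers any more -
>> every caller passes its own. A tiny illustration (function name invented):
>>
>> #include <stdio.h>
>>
>> /* reentrant: the result lands in the caller's buffer, so any thread can
>>  * expand macros at any time without stepping on another thread's result */
>> int expand_hostname_macro(const char *hostname, char *out, size_t outlen) {
>>     int n = snprintf(out, outlen, "%s", hostname);
>>     return (n >= 0 && (size_t)n < outlen) ? 0 : -1;   /* -1 on truncation */
>> }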
>>
> 
> yep the module handling listed was meant to refer to event brokering.
> 
>>> do away with status.dat file:
>>> ========
>>> a fifo/socket to listen on, dumping live object data to
>>> anything that queries it
>>> dump all or parts of objects depending on the request etc.
>>>    
>> OK, that sounds even better. So in effect the current Livestatus
>> implementation handed directly into a core API. This should work in both
>> directions, so that we get:
>>
>> * answering queries for Livestatus data
>> * dumping all data like idomod does, based on different settings
>> (config, live, historical)
>> * adding commands not via the pipe but via this API
>> * securing all of that in every possible way with regard to performance
>> * adapting the API output to an easy format for the Icinga API, e.g. add
>> something like a JSON writer, or CouchDB (NoSQL DB) and Livestatus
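>>
>> To make the socket side concrete, the listener could start out about as
>> simple as this (unix socket, canned JSON answer; everything here is a
>> placeholder, not a proposal for the final protocol):
>>
>> #include <string.h>
>> #include <unistd.h>
>> #include <sys/socket.h>
>> #include <sys/un.h>
>>
>> /* run by its own thread: accept queries and answer with serialized objects */
>> int run_status_api(const char *path) {
>>     int srv = socket(AF_UNIX, SOCK_STREAM, 0);
>>     if (srv < 0)
>>         return -1;
>>
>>     struct sockaddr_un addr;
>>     memset(&addr, 0, sizeof(addr));
>>     addr.sun_family = AF_UNIX;
>>     strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
>>     unlink(path);
>>
>>     if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(srv, 8) < 0) {
>>         close(srv);
>>         return -1;
>>     }
>>
>>     for (;;) {
>>         int client = accept(srv, NULL, NULL);
>>         if (client < 0)
>>             continue;
>>
>>         char request[256] = {0};
>>         if (read(client, request, sizeof(request) - 1) > 0) {
>>             /* here the core would look up the requested objects and
>>              * serialize them; a canned answer stands in for that */
>>             const char *answer = "{\"hosts\": [], \"services\": []}\n";
>>             write(client, answer, strlen(answer));
>>         }
>>         close(client);
>>     }
>> }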
>>
> 
> sounds good to me.
> 
>>> do away with xdata:
>>> =========
>>>    
>> xdata is a horrible design. This needs to be reworked fully into the core.
>>
>>> one base/config/ dir with all the config handling routines
>>> read conf files, resolve inheritance etc., and add straight
>>> to data structures (if feasible)
>>>    
>> also handling modules, extra plugin configs (!), and so on.
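>>
>> For the "resolve inheritance and go straight into the data structures"
>> part, a heavily simplified version of the "use" resolution could look like
>> this (one stand-in attribute, no cycle detection - the real thing needs it):
>>
>> #include <string.h>
>>
>> typedef struct cfg_object {
>>     char *name;
>>     char *use;                 /* name of the template this object inherits from */
>>     char *check_command;       /* one stand-in attribute */
>>     struct cfg_object *next;
>> } cfg_object;
>>
>> static cfg_object *find_object(cfg_object *list, const char *name) {
>>     for (; list; list = list->next)
>>         if (strcmp(list->name, name) == 0)
>>             return list;
>>     return NULL;
>> }
>>
>> /* any attribute not set locally is copied from the resolved template */
>> void resolve_inheritance(cfg_object *list, cfg_object *obj) {
>>     if (obj->use == NULL)
>>         return;
>>     cfg_object *tmpl = find_object(list, obj->use);
>>     if (tmpl == NULL)
>>         return;
>>     resolve_inheritance(list, tmpl);   /* templates can themselves inherit */
>>     if (obj->check_command == NULL && tmpl->check_command != NULL)
>>         obj->check_command = strdup(tmpl->check_command);
>> }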
>>
>>> one base/retention/ dir with all the retention routines
>>> one base/perfdata/ dir with all perf data routines
>>>    
>>
>> OK, then a modules/ dir with all module routines (aka NEB init, callbacks, etc.)
>>
>> e.g.
>>
>> modules/broker/ dir with event broker stuff
>> modules/api/ dir with everything regarding a status and command api
>>
>> Hmm, and even more, the overall structure needs to be fully written down
>> before we start coding.
>>
>> But in fact, we should sum our mails up into a single one and drop that
>> onto the devel mailing list.
>>
> 
> doing so in this mail.
> any input and ideas are welcome.
> I'll start documenting a design as you suggest above, and we can take it from there.
> 