We have a large distributed Nagios 2.12 setup: 20 distributed servers sharing about 20,000 checks against 2,500 hosts, all reporting into multiple master Nagios servers through a modified OCP_daemon that handles multiple masters. Recently we nearly doubled our number of distributed servers. Our check count had grown to the point that some of our busiest distributed servers were only completing about 20-30% of their checks per minute; now we are completing about 90% per minute.
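For context, the fan-out idea in our modified OCP_daemon boils down to something like the sketch below. This is not our actual script; the master hostnames and paths are placeholders, and it assumes the stock tab-delimited send_nsca input format.

    #!/bin/sh
    # Sketch of the multi-master fan-out (NOT our actual modified
    # OCP_daemon; hostnames and paths are placeholders).
    # Args from the OCSP command: host, service, state code, plugin output.
    HOST="$1"; SVC="$2"; STATE="$3"; OUTPUT="$4"
    for MASTER in master1.example.com master2.example.com; do
        # send_nsca reads "host<TAB>service<TAB>state<TAB>output" on stdin
        printf '%s\t%s\t%s\t%s\n' "$HOST" "$SVC" "$STATE" "$OUTPUT" \
            | /usr/local/nagios/bin/send_nsca -H "$MASTER" \
                -c /usr/local/nagios/etc/send_nsca.cfg
    done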
Ever since we increased the check frequency, our oldest master server has started crashing at random. Nothing else has changed. Memory use goes through the roof until eventually there is no swap left and the server finally falls over and has to be rebooted. If we restart the Nagios service while memory usage is climbing, it drops back to normal for quite a while, but days later it happens again. I started restarting Nagios on that server once an hour, but it hasn't helped. We upgraded the server to 16 GB of RAM, which made this happen a bit less often, but it still happens.
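For what it's worth, the growth curve leading up to a crash can at least be captured with a cron-driven logger along these lines (a sketch; the pidfile path is a guess for a default source install, so adjust to taste):

    #!/bin/sh
    # nagios-memlog.sh -- append a timestamped RSS/VSZ snapshot of the
    # Nagios daemon; run every 5 minutes from cron. The pidfile path
    # below is an assumption, not necessarily where yours lives.
    PIDFILE=/usr/local/nagios/var/nagios.lock
    PID=$(cat "$PIDFILE") || exit 1
    echo "$(date '+%F %T') $(ps -o rss=,vsz= -p "$PID")" >> /var/log/nagios-mem.log

    # cron entry: */5 * * * * root /usr/local/sbin/nagios-memlog.sh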
We are using NPCD to graph the performance data from all of our checks, but all the graph .rrd files live on a dedicated partition, and the crashes happen even when we disable graphing completely and disk I/O is near 0% on both the system partitions and the graph partition.
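(If anyone wants to verify the same thing on their own box, per-device utilization can be watched live with something like the following; the %util column is the one to eyeball.)

    # print extended device stats every 5 seconds (requires sysstat);
    # %util sits near 0 on both the system disks and the RRD partition
    iostat -dx 5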
So I was wondering how I could go about figuring out why Nagios is freaking out on our older server (a Dell PowerEdge 1950). Our other master server (a Dell PowerEdge R710) receives all the same check results and handles them just fine, but it has much newer Xeon CPUs, faster memory, etc. The old server copes fine for days at a time until it randomly runs itself out of swap space and crashes.
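In case it helps frame answers, my first stab at clue-gathering would be snapshotting who actually holds the memory once it starts climbing, e.g.:

    # top memory consumers by resident size (is it the core nagios
    # daemon itself, or forked check/event-handler children piling up?)
    ps aux --sort=-rss | head -20
    # count nagios processes; a steadily growing count would point at
    # orphaned children rather than a leak inside the daemon
    ps -C nagios -o pid,ppid,rss,etime,args | wc -l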
I know I really should get around to upgrading to Nagios 3.x, but there's been no time for that yet, and upgrading them all at once without being blind for a little while is going to be a pain, so pretend Nagios 3.x isn't an option just yet.
Thanks for any insight!
Jeremy