<div class="Ih2E3d"><div class="gmail_quote">On Sun, Jan 25, 2009 at 2:00 PM, Eric Michaelis <span dir="ltr"><<a href="mailto:combinare@gmail.com" target="_blank">combinare@gmail.com</a>></span> wrote: <blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
I realize that each installation will be different based on the number<br>
of hosts/services that will be monitored, but I'm looking for something<br>
basic--something along the lines of "a base system with 512MB of RAM and<br>
10G disk space is a good starting point for monitoring X hosts running Y<br>
services, with a total of X*Y = Z services. Then, assume you need<br>
another N GB of ram for every M services you add."</blockquote></div><br></div>If
If we assume that many Nagios installations monitor modest-size
environments (fewer than 100 hosts and 500 services), then any modern
single-processor server with 1GB of memory and 10GB of disk should do
just fine in most cases.
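
Whatever you start with, it's easy to tell whether a box is keeping up.
The nagiostats utility that ships with Nagios prints check latency and
execution-time figures; active service check latency is the number to
watch, since it climbs when the scheduler falls behind. (The paths below
assume a default source install under /usr/local/nagios; adjust for your
layout.)

    /usr/local/nagios/bin/nagiostats -c /usr/local/nagios/etc/nagios.cfg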

Beyond those numbers, my philosophy has been that hardware is cheap, so
over-spec; if you still run into problems, it isn't too hard to scale up
anyway. Our current server has dual quad-core Xeons (8 cores total) at
2.5GHz with 4GB of memory. On it I monitor about 850 hosts and 2150
services, 100% active checks executed server-side (SNMP-based checks
instead of NRPE). We also collect performance data and hand it over to
Nagiosgraph. In addition, we run Cacti for about 200 devices, a few
lightweight network-related web apps, etc., and the server sits at about
10% CPU on average with a peak load average of 2.28. Nagios generates
1.5MB or less of logs per day, and other things like RRD files and
scratch files for service checks stay a constant size, so disk space
isn't really a concern at all.
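
For anyone curious what that looks like in config terms, here is a rough
sketch of the two pieces: a command definition for a server-side SNMP
check and the nagios.cfg directives that spool perfdata for Nagiosgraph.
The command name, thresholds, and file paths are illustrative
placeholders, not copied from our production config.

    # active, server-side SNMP check -- no NRPE agent on the host
    define command {
        command_name  check_snmp_generic
        command_line  $USER1$/check_snmp -H $HOSTADDRESS$ -C $ARG1$ -o $ARG2$ -w $ARG3$ -c $ARG4$
    }

    # nagios.cfg: write perfdata to a spool file that Nagiosgraph's
    # insert script reads; process-service-perfdata is a command object
    # defined elsewhere to call that script
    process_performance_data=1
    service_perfdata_file=/usr/local/nagios/var/perfdata.log
    service_perfdata_file_template=$LASTSERVICECHECK$||$HOSTNAME$||$SERVICEDESC$||$SERVICEOUTPUT$||$SERVICEPERFDATA$
    service_perfdata_file_mode=a
    service_perfdata_file_processing_interval=30
    service_perfdata_file_processing_command=process-service-perfdata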

--
Jake Paulus
JakePaulus@gmail.com