logfile scrubbing - solutions?
Carroll, Jim P [Contractor]
jcarro10 at sprintspectrum.com
Thu Nov 14 22:15:14 CET 2002
Is anyone else on the list doing logfile scrubbing? If so, which scrubber
are you using, the shell-based one, or the Perl-based one? If neither, what
approach are you using?
I've run into some inconsistencies with the Perl-based one on one of my
(Solaris8) hosts. Every time it sees a line in /var/adm/messages like:
Nov 14 13:27:39 itdmln14 uxwdog[9305]: [ID 248799 daemon.error] error
communicating with server (Broken pipe)
it seems to stumble over the line and ends up reporting it like this (via
e-mail):
(1): dog[14816]: [ID 248799 daemon.error] error communicating with server
(Broken pipe)
The odd thing is that the error received via e-mail wasn't even from today
(judging by the PID referenced); it was from yesterday. And yet the seek
value and the size of the logfile are consistent.
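For what it's worth, here's a hypothetical sketch (not the actual check_log3
code, and the offset value is made up) of how a seek-file-based checker
resumes reading: it stores a byte offset and seeks there on the next run. If
that stored offset ever lands mid-line, the first record read back is a
truncated fragment, which would turn "uxwdog[9305]" into "dog[9305]" just
like the report above:

```python
import io

# Simulate a log file and a stale seek offset that lands mid-token.
log = io.BytesIO(
    b"Nov 14 13:27:39 itdmln14 uxwdog[9305]: [ID 248799 daemon.error] "
    b"error communicating with server (Broken pipe)\n"
)
stale_offset = 28        # hypothetical: points into the middle of "uxwdog"
log.seek(stale_offset)
first_record = log.readline().decode()
print(first_record)      # starts with "dog[9305]:" -- a truncated fragment
```

A fragment like that would also defeat any exclusion pattern that expects to
see the intact daemon name.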
I should point out that the nrpe.cfg file entry looks like this:
command[check_log_err]=/home/nagios/libexec/check_log3 -l /var/adm/messages
-s /home/nagios/.messages_err.seek -p "ERR|Err|err|PANIC|Panic|panic|File
system full" -n " nrpe| sshd|httpd.conf| uxwdog|sprintnb tldd|sprintnb
ltid|sprintnb tldcd|sprintnb avrd"
so it shouldn't be reporting on uxwdog at all.
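As a quick sanity check (my own test, not from check_log3), the -n pattern
does match the intact line, but note the leading space in " uxwdog": if the
script ever hands the pattern a line that's been truncated to start at
"dog[...]", the exclusion silently stops matching:

```python
import re

# The -n exclusion pattern from nrpe.cfg, as an alternation regex.
exclude = re.compile(
    r" nrpe| sshd|httpd\.conf| uxwdog|sprintnb tldd|"
    r"sprintnb ltid|sprintnb tldcd|sprintnb avrd"
)

intact = ("Nov 14 13:27:39 itdmln14 uxwdog[9305]: "
          "[ID 248799 daemon.error] error communicating with server")
truncated = "dog[9305]: [ID 248799 daemon.error] error"

print(bool(exclude.search(intact)))     # True: line is excluded as intended
print(bool(exclude.search(truncated)))  # False: fragment slips past -n
```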
At some point in the recent past, the author of the Perl check_log script
told me there was some funkiness with Perl on Solaris8. However, we've
recently upgraded to Perl 5.8 (courtesy of Sunfreeware.com). The problem
persists.
I'm not sure that going to a centralized syslog host will solve this
problem.
Thoughts? Solutions? Random musings?
jc