Tuesday, May 12, 2009

Handling Large Log Files (3-5GB)

This is a summary of an interesting discussion on the Firewall Wizards mailing list, with several different recommendations:

- One suggestion was to collect logs with syslog-ng and explore them with Splunk, combined with SEC (the Simple Event Correlator) and Perl scripts for alert generation and log correlation. Others used OSSIM for correlation.
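To make the SEC part concrete, here is a minimal sketch of an SEC rule; the pattern, threshold values, and the alert script path are illustrative assumptions, not from the discussion:

```
# Hypothetical SEC rule: fire an alert when the same source IP is
# denied 10 times within 60 seconds. Adjust the regexp to your logs.
type=SingleWithThreshold
ptype=RegExp
pattern=DENY src=(\d+\.\d+\.\d+\.\d+)
desc=Repeated denies from $1
action=shellcmd /usr/local/bin/mail-alert.sh "$1 denied 10+ times in 60s"
window=60
thresh=10
```

SEC keeps a sliding window per `desc` value, so each offending source IP is counted and alerted on separately.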

- Writing logs to RAID 5/6 is slow, since every write pays a parity penalty, while reading from RAID 5/6 is fast. One suggestion was therefore to write and index on RAID 0, then copy the logs to a RAID 6 array that is kept read-only.
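One way to wire that up is a nightly cron job; the mount points and schedule below are illustrative assumptions:

```
# Hypothetical crontab entry: copy yesterday's logs from the fast
# RAID 0 write volume to the RAID 6 archive, then remount the
# archive read-only (paths are examples only):
30 2 * * * rsync -a /var/log/raid0/ /archive/raid6/ && mount -o remount,ro /archive/raid6
```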

- You should always fix the problems that generate the logs in the first place; many of them come from misconfigured machines, and fixing those reduces the log volume considerably.

- The coolest answer described a setup that handles 40-80GB of logs per day. A nightly process splits the large file into smaller files and summarizes them with Perl scripts into buckets, then generates a summary report for each bucket (such as the number of log entries in it).
One of the scripts uses 'sed' to filter out uninteresting details like timestamps and port numbers,
then pipes the result through sort | uniq -c | sort -rn to produce a report showing how many times each log message appeared.
Other scripts assemble e-mails from these reports containing only the most common items. The whole process takes about 3-6 hours to generate a report. Again, Splunk is used for deeper investigation of the logs; 'grep' can take days on a file that large.
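A minimal sketch of that summarization step, assuming a firewall-style log format; the sed patterns, sample messages, and file paths are illustrative and would need adjusting to the real logs:

```shell
#!/bin/sh
# Stand-in sample for one bucket split out of the nightly log
# (the real input would be a multi-gigabyte file):
cat > /tmp/bucket.log <<'EOF'
May 12 03:14:01 fw1 kernel: DROP src=10.0.0.5 dst=10.0.0.1 dpt=23
May 12 03:14:02 fw1 kernel: DROP src=10.0.0.5 dst=10.0.0.1 dpt=24
May 12 03:15:09 fw1 kernel: DROP src=10.0.0.9 dst=10.0.0.1 dpt=23
EOF

# Strip the varying details (timestamp, destination port) so that
# otherwise-identical messages collapse into one line, then count
# occurrences, most frequent first:
sed -e 's/^[A-Z][a-z][a-z] [ 0-9][0-9] [0-9:]\{8\} //' \
    -e 's/dpt=[0-9]*/dpt=N/' /tmp/bucket.log \
  | sort | uniq -c | sort -rn > /tmp/bucket-summary.txt

cat /tmp/bucket-summary.txt
```

The summary lists each distinct message pattern with its count, so the two DROPs from 10.0.0.5 collapse into a single line counted twice.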
- The syslog-ng database parser can be a huge added value: it parses the content of each log message and changes the destination DB table accordingly. Check here and here.
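A sketch of what that looks like in syslog-ng.conf, assuming syslog-ng OSE 3.x with the patterndb and SQL modules; the file paths, credentials, and table naming scheme are illustrative:

```
# Classify messages against a pattern database:
parser p_db { db-parser(file("/etc/syslog-ng/patterndb.xml")); };

destination d_sql {
  sql(type(mysql)
      host("localhost") username("syslog") password("secret")
      database("logs")
      # Route rows into a per-class table based on what the
      # pattern database classified each message as:
      table("logs_${.classifier.class}")
      columns("datetime", "host", "message")
      values("${R_ISODATE}", "${HOST}", "${MESSAGE}"));
};

log { source(s_net); parser(p_db); destination(d_sql); };
```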

In summary, a combination of syslog-ng, Splunk, Perl, SEC, OSSIM, and shell scripts is the answer for anyone dealing with large log files.

I would just like to add my own input: using syslog-ng, MySQL, and php-mysql-ng worked fine for me with 8GB of logs per day.
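For reference, the classic way to get syslog-ng feeding MySQL in that era was to pipe templated INSERT statements into the mysql client; this is a hedged sketch, with database name, credentials, and columns as assumptions:

```
destination d_mysql {
  # Caveat: $MSG is inserted unescaped here, so quotes in a log
  # message can break (or inject into) the statement -- fine for a
  # sketch, but sanitize before using anything like this for real.
  program("/usr/bin/mysql --user=syslog --password=secret syslog"
    template("INSERT INTO logs (host, facility, priority, msg) VALUES ('$HOST', '$FACILITY', '$PRIORITY', '$MSG');\n"));
};

log { source(s_net); destination(d_mysql); };
```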
