Saturday, May 30, 2009

Web Application Security Scanner Evaluation Criteria

A final draft of the Web Application Security Scanner Evaluation Criteria (WASSEC) is available here. It is a set of guidelines for evaluating web application scanners on their ability to effectively test web applications and identify vulnerabilities. It covers areas such as crawling, parsing, session handling, testing, and reporting.

Friday, May 29, 2009

l0phtcrack is alive, again!

l0phtcrack, one of the best password crackers ever, is back with version 6 and some new features.

Thursday, May 28, 2009

Anti-Virus Review for Corporate Products

AV-Comparatives released a comparison of AV products for corporate use. There are some points to note:
- McAfee did not participate in the test
- Symantec did very well in the anti-spam test
- Avira is the best in heuristics
- The report has a comprehensive feature list at the end
- Avira seems to be a very good choice for small and medium organizations

Wednesday, May 27, 2009

Malware Toolkits with new updates

Unique Pack Toolkit got a new update with more exploits. As usual, it uses dynamic JavaScript obfuscation to avoid anti-virus detection.

Another toolkit, "YES Exploit System", now has exploits for Linux and MacOS.

Tuesday, May 26, 2009

Detecting Packers in a Network Stream

This is a new way of detecting packers in a network stream without using Snort: a Python script (nPeID).
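nPeID itself relies on PEiD-style signatures, which are not reproduced here. A common complementary heuristic for spotting packed or encrypted payloads in a byte stream is Shannon entropy: packed executables tend toward near-random byte distributions. The sketch below is my own illustration of that heuristic, not nPeID's actual detection logic, and the threshold value is an assumption.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 - 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_packed(payload: bytes, threshold: float = 7.2) -> bool:
    """Packed/encrypted content sits near 8 bits/byte; plain code
    and text sit well below. The threshold is a rough assumption."""
    return shannon_entropy(payload) > threshold

# Low-entropy text vs. pseudo-random bytes
print(looks_packed(b"MZ" + b"This program cannot be run in DOS mode." * 50))  # False
print(looks_packed(os.urandom(4096)))  # True with overwhelming probability
```

In a network context the same function would be applied to reassembled payloads (e.g. an HTTP response body) rather than whole files.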

Tuesday, May 19, 2009

Data Loss DB

Do you want to check data loss incidents world-wide? Check this site.
While most of the data is US-based, it is nice to see some statistics about data loss.

Saturday, May 16, 2009

Single Packet Authorization and Port Knocking

I like the fwknop tool; a new version has been released recently.
The tool can be used in two modes, SPA or Port Knocking. SPA is a variant of port knocking that uses only one single knock.

- The authorization part is done using libpcap, so there is no service and no ports to listen on.
- Access to the protected service is only granted after receiving a single, non-replayed, encrypted packet from the fwknop client.
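To make the "single non-replayed packet" idea concrete, here is a simplified sketch of my own. It is not fwknop's actual protocol (fwknop uses Rijndael or GnuPG encryption and sniffs with libpcap); this version uses an HMAC over a random nonce plus a timestamp, with a replay cache, and all names and the packet format are invented for illustration.

```python
import hashlib
import hmac
import os
import time

SHARED_KEY = b"example-shared-secret"   # hypothetical pre-shared key
seen_nonces = set()                     # server-side replay cache

def build_knock(key: bytes = SHARED_KEY) -> bytes:
    """Client side: one self-authenticating packet payload.
    Layout: nonce(16) | timestamp(8) | HMAC-SHA256(32) = 56 bytes."""
    nonce = os.urandom(16)
    ts = int(time.time()).to_bytes(8, "big")
    tag = hmac.new(key, nonce + ts, hashlib.sha256).digest()
    return nonce + ts + tag

def verify_knock(packet: bytes, key: bytes = SHARED_KEY, max_age: int = 120) -> bool:
    """Server side: accept only fresh, authentic, never-before-seen packets."""
    if len(packet) != 56:
        return False
    nonce, ts_raw, tag = packet[:16], packet[16:24], packet[24:]
    if nonce in seen_nonces:            # replay protection
        return False
    expected = hmac.new(key, nonce + ts_raw, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False
    if abs(time.time() - int.from_bytes(ts_raw, "big")) > max_age:
        return False
    seen_nonces.add(nonce)
    return True

knock = build_knock()
print(verify_knock(knock))   # True  -> a firewall rule could be opened here
print(verify_knock(knock))   # False -> the replayed packet is rejected
```

In the real tool, a successful verification triggers a temporary firewall rule opening the protected port for the client's source address.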

Tuesday, May 12, 2009

Handling Large Log Files (3-5GB)

This is a summary of an interesting discussion, with different recommendations, on the Firewall Wizards mailing list:

- One of the suggestions was to use syslog-ng and visualize the logs with Splunk, in addition to SEC (Simple Event Correlator) and Perl scripts for alert generation and log correlation. Others used OSSIM for correlation.

- Writing logs to RAID 5/6 is slow, as it needs higher access time, while reading from RAID 5/6 is fast. So one suggestion was to write to RAID 0, index there, then copy the logs to a read-only RAID 6 volume.

- You should always fix the problems causing the logs, as many of them could be coming from misconfigured machines; this will reduce the log volume considerably.

- The coolest answer was about a setup that can handle 40-80GB of logs per day. The author uses a nightly process to split the large file into smaller files, summarize them with Perl scripts into buckets, then generate a summary report for each bucket (like the number of logs in each bucket). One of the scripts uses 'sed' to filter out uninteresting details like timestamps or port numbers, then pipes the result through sort|uniq -c|sort -rn to produce a report showing how many times the same log message appeared. Other scripts assemble e-mails from these reports with only the most common items. The whole process takes about 3-6 hours to generate a report. Again, Splunk is used for deeper investigation of the logs; 'grep' can take days on a file that large.
- The syslog-ng database parser can be a huge added value: it parses the content of the log and changes the destination DB table accordingly. Check here and here.
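The sed | sort | uniq -c | sort -rn step above can be sketched in Python as well. The log format and regex patterns below are invented for illustration; a real script would match the site's actual firewall log layout.

```python
import re
from collections import Counter

# Hypothetical firewall log lines (format invented for this example)
LOGS = """\
May 12 10:01:22 fw1 DROP TCP 10.0.0.5:51234 -> 192.0.2.9:445
May 12 10:01:25 fw1 DROP TCP 10.0.0.7:49152 -> 192.0.2.9:445
May 12 10:02:01 fw1 ACCEPT UDP 10.0.0.5:5353 -> 224.0.0.251:5353
May 12 10:02:44 fw1 DROP TCP 10.0.0.5:51240 -> 192.0.2.9:445
""".splitlines()

def normalize(line: str) -> str:
    """Strip the details that make otherwise-identical messages unique,
    mirroring the 'sed' step: timestamps and ephemeral port numbers."""
    line = re.sub(r"^\w{3} +\d+ \d\d:\d\d:\d\d ", "", line)  # timestamp
    line = re.sub(r":\d+", "", line)                         # port numbers
    return line

# Equivalent of: sed ... | sort | uniq -c | sort -rn
report = Counter(normalize(l) for l in LOGS).most_common()
for msg, count in report:
    print(f"{count:6d}  {msg}")
```

The repeated "DROP TCP 10.0.0.5 -> 192.0.2.9" lines collapse into a single entry with a count of 2, which is exactly the kind of summary the nightly reports described above contain.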

In summary, a combination of syslog-ng, Splunk, Perl, SEC, OSSIM, and shell scripts is the answer for anyone dealing with large log files.

Just to add my own input: syslog-ng, MySQL, and php-syslog-ng worked fine for me on 8GB of logs per day.

Sunday, May 10, 2009

New Browser Security Paper

The paper is about browsers' update features and evaluates the different update strategies of all major browsers. The researchers used Google's web server logs to compare and rank the browsers.

Tuesday, May 5, 2009

Torpig Botnet Takeover

Torpig, also known as Mebroot or Sinowal, was discovered in October 2008 by RSA after three years of successful operation without detection. At that time, the RSA researchers estimated that around 500,000 financial accounts had been compromised. The main focus of Torpig is the user's financial information.

Researchers from the University of California, Santa Barbara revealed details of taking over the Torpig botnet for 10 days in January/February 2009.

Takeover Process:
- Torpig uses domain flux, so sinkholing the connection from bots to the C&C server allowed the researchers to take over the botnet's C&C.
- In cooperation with the domain registrar, they managed to map the C&C domain to a machine under their control.

Botnet Operation Observation:
- Every 20 minutes, the infected machine sends the C&C server all captured information over HTTP, obfuscated with XOR and base64 encoding.
- The C&C server's reply can be a new configuration file with new communication parameters; these commands are obfuscated using XOR-11 encoding.
- Each bot uses a domain generation algorithm (DGA) to compute a list of domain names.
- The Torpig authors did not register all of the domains in advance, which allowed the researchers to take control of them.
- 22% of infected hosts are corporate machines.
- The botnet size is more than 180,000 machines.
- Torpig also operates SOCKS and HTTP proxies on the infected machine.
- The Torpig operators' profit is estimated at 83K-8M US$ in just 10 days of activity!
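Torpig's real DGA is documented in the researchers' paper; the sketch below is my own illustrative date-seeded DGA, not Torpig's actual algorithm. It shows why the takeover worked: every bot independently derives the same daily domain list, so a defender who knows the algorithm can compute upcoming domains and register one before the botmaster does.

```python
import hashlib
from datetime import date

def dga_domains(day: date, count: int = 5, tld: str = ".com") -> list:
    """Hypothetical date-seeded DGA (NOT Torpig's real algorithm):
    each bot hashes the current date plus an index, so all bots
    compute the same candidate rendezvous domains for that day."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        # Map the hex digest to a plausible-looking lowercase label
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + tld)
    return domains

# A defender who knows the algorithm can pre-compute a day's domains
# and, with a registrar's cooperation, point one at a sinkhole.
for d in dga_domains(date(2009, 1, 25)):
    print(d)
```

The bots try each generated domain in order until one answers, which is why registering an unclaimed domain high in the list is enough to sinkhole the traffic.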

More details can be found here.

Sunday, May 3, 2009

Hardening VMware

This blog entry is about hardening the .vmx file of a VMware virtual machine to lock down the communication between the host and the guest OS; the parameters are mainly for ESX hosts.
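For flavor, a fragment of the kind of .vmx settings such hardening guidance covers. These isolation parameters appear in VMware's hardening recommendations of this era, but verify the exact names and defaults against the linked entry and your ESX version before relying on them:

```
# Disable copy/paste and drag-and-drop between guest and host
isolation.tools.copy.disable = "TRUE"
isolation.tools.paste.disable = "TRUE"
isolation.tools.dnd.disable = "TRUE"

# Prevent the guest from shrinking or wiping its virtual disks
isolation.tools.diskShrink.disable = "TRUE"
isolation.tools.diskWiper.disable = "TRUE"

# Cap VM log growth so a chatty guest cannot fill the datastore
log.rotateSize = "100000"
log.keepOld = "10"
```

Each setting closes one host-guest communication channel that the VMware Tools backdoor interface would otherwise expose to the guest OS.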