J Wolfgang Goerlich's thoughts on Information Security
B-Sides Detroit overview

By wolfgang. 6 June 2011 12:46

Do information security conferences seem a tad corporate these days? Too staid? A little too serious? Maybe, maybe not.

Fresh from Source Boston, I definitely had an expectation of security conferences that the new BSides Detroit blew away. Forget vendors and booths. There were none. Forget nametags. How about a piece of tape with a Sharpie, eh? Forget AV equipment with wireless mics. Heck, forget even having a projector screen. Throw the slides up on an improvised canvas. Let's get a room full of tech people with a hacker bent, get them talking, get them thinking, and get them outside of the typical conference mindset.

Outside the norm: that defined the atmosphere this past weekend at the OmniCorpDetroit hackerspace, all raw, can-do creative energy. BSides Detroit was different, fun, and inspiring.

Highlights from the talks are below. I hear planning for a 2012 event is already underway, so more good content to come.

High-level talks:

Rafal Los: Ultimate Hack - Manipulating Layers 8+9 [Management & Budget] of the OSI Model. If you’ve been following Raf’s #SecBiz threads, you know he has been stirring the pot. Think social engineering meets Dilbert-style corporate America. For me, this was the talk that made the conference. I am hoping to get Rafal back into the area to give us an encore.

Chad Childers: Towards Data Centric, Technology Agnostic Security. I am sure there has been at least one point in your career when you thought, heck, the Bell-LaPadula and Biba security models should be good enough for anyone. No? Well, good, you have not been touched by the CISSP mindset. Chad broke down the classic models and argued for a data-centric model, possibly based on ccREL, S/MIME, virtual smart cards, and DRM.

Nuts-and-bolts talks:

Brett Cunningham, Jack Crook, Matt Sabourin: Intelligent Fuzzy Hashing for Malware Similarity and Attribution. We all know that regular hashing (MD5/SHA) works great for finding identical files. But how do we find similar files? Use fuzzy hashing with tools like ssdeep, which gives an indication of how similar (as a percentage) two or more files are. Possible use cases include forensics, plagiarism, malware analysis, and data loss prevention. The cool thing is that they are launching a project (www.allsum.org) to integrate the technique with intrusion detection and malicious code detection.
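Here is a rough sketch of the idea in Python, assuming the python-ssdeep bindings are installed (pip install ssdeep); the two "samples" are invented byte strings, not real malware:

# Rough sketch of fuzzy hashing with the python-ssdeep bindings.
# Assumes `pip install ssdeep`; the sample byte strings are made up.
import ssdeep

sample_a = b"MZ\x90\x00" + b"A" * 4096 + b"original config block"
sample_b = b"MZ\x90\x00" + b"A" * 4096 + b"slightly altered block"

hash_a = ssdeep.hash(sample_a)
hash_b = ssdeep.hash(sample_b)

# compare() returns 0 (no similarity) through 100 (near identical),
# the "how similar, as a percentage" figure mentioned above.
print("ssdeep hashes:", hash_a, hash_b)
print("similarity score:", ssdeep.compare(hash_a, hash_b))

Identical files would still be caught by MD5/SHA; the fuzzy score is what lets you cluster variants that differ by a few bytes.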

Mark Stanislav: It's Vulnerable... Now What?: Three Diverse Tales of Woe and Remediation. I am not a PHP programmer. I am less AppSec, and more NetSec. But none of that mattered. Mark’s common sense talk on PHP security was good fun. What’s more, his emphasis on vulnerability disclosure as a community responsibility spoke to me. Just as we would not walk by garbage on the street without addressing it, we cannot ignore garbage in the code. We have an obligation to help keep our Internet clean.

 

Tags:

Security | Security Information Management | Systems Engineering

Bypassing IDS/NSM detection

By wolfgang. 3 March 2011 09:17

There are a number of ways an attacker can circumvent the protection of network security monitoring. He can use evasion techniques to avoid detection, or diversion techniques to distract the defender. Here are a few methods.

Protocol misuse. NetFlow and layer 1/2/3 statistics track hardware addresses, IP addresses, and TCP/UDP ports. Application layer detail is generally not analyzed or tracked. Any packet sent over port 80 is assumed to be an HTTP packet, anything over port 53 a DNS packet, and so on. An attacker can send information over alternate ports to mask their activities. Alternatively, some protocols can be directly misused to carry out an attacker’s aims. For example, see the OzymanDNS app that tunnels SSH and transfers files over the standard DNS protocol. When application layer tracking is not enabled, an attacker has a blind spot that they can use. A toy sketch of that blind spot follows the reference below.

Kaminsky, D. (2004, July 29). Release!, from Dan Kaminsky's Blog: http://dankaminsky.com/2004/07/29/51/
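To illustrate the port-based blind spot, here is a contrived Python sketch: anything on port 53 gets labeled DNS, whether or not it actually is. The port map and flows are made up for the example.

# Contrived example: classifying flows by destination port alone,
# the way basic NetFlow/layer-3 statistics do. Ports and flows are invented.
PORT_MAP = {53: "dns", 80: "http", 443: "https"}

flows = [
    {"dst_port": 80, "bytes": 4_200},        # ordinary web browsing
    {"dst_port": 53, "bytes": 18_000_000},   # "DNS" -- actually a tunnel moving files
]

for flow in flows:
    label = PORT_MAP.get(flow["dst_port"], "unknown")
    print(f"port {flow['dst_port']}: labeled {label}, {flow['bytes']} bytes")

# Without application-layer inspection, the 18 MB "DNS" flow raises no alarm,
# even though legitimate DNS lookups are tiny. Flagging unusually large flows
# on chatty-but-small protocols is one simple countermeasure.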

Payload obfuscation. An attacker can also create a blind spot by obfuscating (or disguising) their application layer traffic. If application layer analysis is enabled, it likely relies on pattern matching. The attacker has to modify the packet or its payload enough that it no longer matches the pattern. Perhaps the simplest method is fragmentation, where the IP packet is broken into fragments. No single fragment matches the detection pattern. When the fragments reach the host computer, the host reassembles the packet, and the attacker’s payload is delivered undetected. A minimal fragmentation sketch follows the references below.

Schiffman, M. (2010, February 15). A Brief History of Malware Obfuscation, from Cisco: http://blogs.cisco.com/security/a_brief_history_of_malware_obfuscation_part_1_of_2/

Timm, K. (2002, May 05). IDS Evasion Techniques and Tactics, from Symantec: http://www.symantec.com/connect/articles/ids-evasion-techniques-and-tactics
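As a rough illustration of the fragmentation trick, here is a Scapy sketch (assuming Scapy is installed; the address and "signature" string are invented, and nothing is sent on the wire). It splits a packet so that no single fragment contains the full pattern a signature-based sensor would match on:

# Sketch of IP fragmentation as an evasion technique, using Scapy.
# Assumes `pip install scapy`; the address and "signature" are made up,
# and the packets are only built in memory, never transmitted.
from scapy.all import IP, TCP, Raw, fragment

SIGNATURE = b"GET /etc/passwd"          # pretend this is the IDS pattern

payload = b"GET /etc/passwd HTTP/1.0\r\n\r\n"
pkt = IP(dst="192.0.2.10") / TCP(dport=80) / Raw(load=payload)
print("whole packet matches signature:", SIGNATURE in bytes(pkt[Raw].load))

# Break the IP payload into 8-byte fragments. The receiving host reassembles
# them, but a sensor matching per fragment never sees the whole string.
for i, frag in enumerate(fragment(pkt, fragsize=8)):
    data = bytes(frag[Raw].load)
    print(f"fragment {i}: matches signature: {SIGNATURE in data}")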

Denial of Service. A solid NSM solution performs application layer analysis, checks for fragmentation, and negates common obfuscation techniques. Even then, an attacker has options. Think of smash-and-grab crimes, where the criminal gets in, gets what they can, and gets out quickly. The equivalent is the attacker who triggers the NSM in one area to create a distraction while attacking in another. For example, an attacker launches a Denial of Service attack on a network link unrelated to the real target. Alternatively, the DoS targets the NSM infrastructure itself. If the attack is a quick raid of the victim’s network, such methods may pay off.

In sum, attackers can hide in the blind spots, cover their tracks, or make diversions.

Tags:

Security | Security Information Management

Egypt up, Libya down

By wolfgang. 18 February 2011 14:03

Egypt has rejoined the Internet. But now the Libyan government has followed Egypt’s lead and gone dark. In Egypt’s case, it was DNS outages followed by physical disconnects. In Libya’s case, BGP is being used. In both cases, protests are being used as an excuse to unplug Internet access.

Tags:

Security Information Management

Netflows Simplified (Part 2)

By wolfgang. 12 January 2011 14:09

There are two drivers for NetFlow that might not be immediately obvious.

The first is that NetFlow requires significantly less throughput than full packet capture. For example, see this discussion on how much bandwidth is needed to capture all the packets in a typical business. Let's say a 100 Gbps span port for a typical SMB (100-1,000 employee) company. NetFlow takes less than 5% of the bandwidth that full packet capture requires, so you could get by with five 1 Gbps span ports, which is a huge cost savings on the switching side.
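A quick back-of-the-envelope check of that claim, treating the 5% ratio as a rough rule of thumb rather than a measured figure:

# Back-of-the-envelope math for NetFlow vs. full packet capture.
# The 5% ratio is an assumed rule of thumb, not a measured number.
full_capture_gbps = 100          # span traffic for the example SMB network
netflow_ratio = 0.05             # NetFlow assumed to need <5% of that

netflow_gbps = full_capture_gbps * netflow_ratio
span_ports_needed = int(netflow_gbps / 1)    # 1 Gbps mirror ports

print(f"NetFlow throughput: ~{netflow_gbps:.0f} Gbps")
print(f"1 Gbps span ports needed: {span_ports_needed}")   # -> 5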

The second driver is statistical analysis. Let's assume you do not want the complexity of five 1 Gbps mirror ports (and the corresponding grunt work on the NSM to avoid duplicate entries). If you only want one 1 Gbps port, you have two options. The first option is to watch one-fifth of your servers. This poses a problem if your infected computers are in the four-fifths of the LAN you are ignoring. The second option is to statistically select 1 in 5 packets from the stream. Your NSM will then have full coverage of the network, greatly improving your chances of identifying infections.
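A minimal sketch of the 1-in-N sampling idea (random sampling here; actual sampled NetFlow implementations vary, so treat this as illustrative only):

# Toy 1-in-N packet sampling, the idea behind sampled NetFlow.
# The "packets" are made-up dictionaries for illustration.
import random

def sample_packets(packets, rate=5):
    """Yield roughly 1 in `rate` packets, chosen at random,
    so every host on the LAN has the same chance of being observed."""
    for pkt in packets:
        if random.randrange(rate) == 0:
            yield pkt

packets = [{"src": f"10.0.0.{i % 4 + 1}", "seq": i} for i in range(20)]
for pkt in sample_packets(packets):
    print(pkt)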

Tags:

Security | Security Information Management

Netflows Simplified (Part 1)

By wolfgang. 5 January 2011 12:47

For any given office computer at any given time, numerous network communications are ongoing. There may be Kerberos authentication and ticketing occurring between the computer and the domain controller. Streaming audio from, say, Pandora generates many packets. ARP and DNS packets will be interspersed with application layer packets for files and websites. In sum, there are many thousands of packets to work through.

Like separating spaghetti strands from a bowl of pasta, separating packets into sessions allows us to isolate and study a communication end-to-end. Session data presents the packets for a single communication: that is, for a source host, a destination host, and a given application layer protocol. The InfoSec analyst can then follow the packet trail through the session to see what transpired and how.

Now there are two ways to read a session: in detail and in summary. The detail of a session includes all the bytes of the packets; this is available from a switch mirror port. The summary of a session includes the packet headers (IP, UDP, TCP). There are a few ways to get this information, including the Cisco NetFlow protocol. NetFlow can cover all packets or a statistically sampled subset of packets (sampled NetFlow).
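To make the detail-versus-summary distinction concrete, here is a toy Python sketch that rolls per-packet headers up into flow-style summaries keyed by source, destination, port, and protocol (field names and packets are invented for the example):

# Toy flow summarization: collapse per-packet headers into per-session
# summaries, the way NetFlow does. Field names and packets are made up.
from collections import defaultdict

packets = [
    {"src": "10.0.0.5", "dst": "198.51.100.7", "dport": 443, "proto": "tcp", "bytes": 1420},
    {"src": "10.0.0.5", "dst": "198.51.100.7", "dport": 443, "proto": "tcp", "bytes": 60},
    {"src": "10.0.0.5", "dst": "10.0.0.1",     "dport": 53,  "proto": "udp", "bytes": 78},
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for p in packets:
    key = (p["src"], p["dst"], p["dport"], p["proto"])
    flows[key]["packets"] += 1
    flows[key]["bytes"] += p["bytes"]

for key, stats in flows.items():
    print(key, stats)   # one summary line per strand of spaghetti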

Picture all network traffic for a given office computer as a big bowl of pasta. Pull out individual strands to get your session data. Keep statistics on the strands to get your NetFlow.

Tags:

Security | Security Information Management

Can you capture all the packets on your network?

By wolfgang. 12 December 2010 18:52

The simple answer is yes, you can capture all the traffic on your network. I do it all day, every day, with my network monitoring servers. But it is a little more complicated than that short answer suggests.

The first consideration is bandwidth. Let’s assume 200 client computers are attached to 50 servers. The clients are at 100 Mbps and the servers are at 1 Gbps. Quickly doing the math (200 x 100 Mbps + 50 x 1 Gbps), you can see that the maximum bandwidth is 70 Gbps. Each packet will be mirrored (or copied) to the network monitor port. To avoid missing packets, that port would need a 70 Gbps uplink. Such an uplink exceeds the budgets of SMB IT departments.

The second consideration is storage. Let’s assume that the throughput for client computers is, on average, 5% of the available bandwidth. For servers, we will use 25%. Given 3,600 seconds in an hour, do the math, and you’ll see we need 439.5 GB an hour for clients and 5.5 TB an hour for servers. Call that an even 6 TB an hour, 142 TB a day, 1 PB a week. Such disk storage costs exceed the budgets of SMB IT departments.
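Here is the arithmetic as a short Python sketch; the utilization percentages are the assumptions stated above, and whether you land on 439.5 GB or roughly 450 GB per hour depends on decimal versus binary units:

# Rough packet-capture storage math for the example network above.
# Utilization percentages are assumptions; decimal terabytes are used here.
clients, servers = 200, 50
client_mbps, server_mbps = 100, 1000          # link speeds in Mbps
client_util, server_util = 0.05, 0.25         # assumed average utilization

def tb_per_hour(count, mbps, util):
    bytes_per_sec = count * mbps * util * 1_000_000 / 8
    return bytes_per_sec * 3600 / 1e12         # decimal terabytes

client_tb = tb_per_hour(clients, client_mbps, client_util)   # ~0.45 TB/hr
server_tb = tb_per_hour(servers, server_mbps, server_util)   # ~5.6 TB/hr
total_tb = client_tb + server_tb

print(f"clients: {client_tb:.2f} TB/hr, servers: {server_tb:.2f} TB/hr")
print(f"total: ~{total_tb:.1f} TB/hr, ~{total_tb*24:.0f} TB/day, ~{total_tb*24*7/1000:.1f} PB/week")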

Given these numbers, how do I capture the packets that travel across my network? First, I use a 10 Gbps uplink to get the mirrored traffic. There are times when the traffic overwhelms the uplink and packets are lost. Second, I keep only a few hours of packets in storage. I maintain the packet summary (time, source IP and port, destination IP and port, byte count, application details) for a few weeks. The summary is significantly smaller than the actual traffic.

The more complex answer is yes and no. You can log all the packets. But even for relatively small networks, the hardware required for the resulting throughput and storage will be cost prohibitive.

In hindsight, maybe switching to NetFlows is not such a bad idea. 

Tags:

Security | Security Information Management

Penetration testing lab

By wolfgang. 4 June 2010 06:08

Security Information Management systems are meant to catch and report anything suspicious, right? So how do we test them? By creating a vulnerable network and exploiting it. The following tools can be used to create a testing lab to validate network security and web application security controls.


Attack systems:

Back|Track -- The most widely used and well developed penetration distro. The main disadvantage is bloat and lack of Hyper-V support. (Live disc; Slax; netsec)
http://www.backtrack-linux.org/

Matriux -- The new kid on the block, with a faster and leaner distro than Back|Track and native Hyper-V support. (Live disc, Hyper-V; Kubuntu; netsec)
http://www.matriux.com/

Neopwn -- A penetration testing distro created for smart phones. (Debian; netsec)
http://www.neopwn.com/

Pentoo -- Gentoo meets pentesting. (Live disc; Gentoo; netsec).
http://pentoo.ch/

Samurai Web Testing Framework -- Specifically targeted towards web application security testing. (Live disc, Ubuntu, appsec)
http://samurai.inguardians.com/


Target systems:

Damn Vulnerable Linux (DVL) -- The classic vulnerable Linux environment. (Live disc; netsec)
http://www.damnvulnerablelinux.org/

De-ICE -- A series of systems to provide real-world security challenges, used in training sessions. (Live disc; netsec)
http://heorot.net/livecds/

Metasploitable -- Metasploit’s answer to the question: now that I have Metasploit installed, what can I attack? (VMware; Ubuntu; netsec)
http://blog.metasploit.com/2010/05/introducing-metasploitable.html

Damn Vulnerable Web App (DVWA) -- A preconfigured web server hosting a LAMP stack (Linux, Apache, MySQL, PHP) with a series of common vulnerabilities. (Live disc; Ubuntu; appsec;)
http://www.dvwa.co.uk/

Moth -- From the people that brought you w3af, Moth is a preconfigured web server with vulnerable PHP scripts and PHP-IDS. (VMware; Ubuntu; appsec)
http://www.bonsai-sec.com/en/research/moth.php

Mutillidae -- An insecure PHP web app that implements the OWASP Top 10. (Installer; appsec)
http://www.irongeek.com/i.php?page=mutillidae/mutillidae-deliberately-vulnerable-php-owasp-top-10

WebGoat -- An insecure J2EE web app that OWASP uses for security training. (Installer; appsec)
http://www.owasp.org/index.php/Category:OWASP_WebGoat_Project

Tags:

Security | Security Information Management

Nessus Tip: auditing services on non-standard ports

By wolfgang. 9 April 2010 08:03

One security trick is to host network services on non-standard ports. For example, a web server may be on 8080 or a database server may be on 3333, instead of TCP 80 and 3306 respectively. This is also an operations trick for scenarios that may have port conflicts, such as clustering and NATing.

Non-standard TCP ports can cause vulnerabilities to be missed when scanning with Nessus. Nessus, by default, only checks known ports.

The workaround is to preload the plugins (for example, Apache and MySQL) and to set Nessus to check all ports. Under the scan policy preferences section, check “Probe services on every port” and “Thorough tests”. That will give you a more complete picture of the target’s security posture.
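As a quick illustration of why the port number alone cannot be trusted, here is a small Python sketch that connects to a non-standard port and grabs whatever banner the service offers. The host and port are placeholders, and this is not how Nessus probes services, just the underlying idea:

# Minimal banner grab: ask the service on a non-standard port to identify
# itself rather than trusting the port number. Host and port are placeholders.
import socket

def grab_banner(host, port, probe=b"HEAD / HTTP/1.0\r\n\r\n", timeout=3):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(probe)
        return s.recv(1024)

if __name__ == "__main__":
    # e.g. a web server hiding on 8080 still answers like a web server
    print(grab_banner("192.0.2.20", 8080))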

For more information, see:

Using Nessus Thorough Checks for In-depth Audits
http://blog.tenablesecurity.com/2010/03/using-nessus-thorough-checks-for-indepth-audits.html

Tags:

Security | Security Information Management

Risk Management is prevention and Security Information Management is detection

By wolfgang. 19 January 2009 05:45

Risk Management (RM) is comprised of asset management, threat management, and vulnerability management. Asset management includes tying IT equipment to business processes. Asset management also includes performing an impact analysis to determine the relative value of the equipment, based upon what the business would pay if the equipment were unavailable and what the business would earn if it were available. Threat management includes determining threat agents (the who) and threats (the what). For example, a disgruntled employee (threat agent) performs unauthorized physical access (threat 1) to sabotage equipment (threat 2). Vulnerability management is auditing, identifying, and remediating vulnerabilities in the IT hardware, software, and architecture. Risk management is tracking assets, threats, and vulnerabilities at a high level by scoring on priority (Risk = Asset * Threat * Vulnerability) and scoring on exposure (Risk = Likelihood * Impact).
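A toy worked example of the two scoring formulas, using invented 1-5 scales and made-up assets:

# Toy risk scoring on invented 1-5 scales, using the two formulas above.
assets = {
    "payroll server":  {"asset": 5, "threat": 3, "vulnerability": 4,
                        "likelihood": 3, "impact": 5},
    "lobby kiosk":     {"asset": 1, "threat": 4, "vulnerability": 4,
                        "likelihood": 4, "impact": 1},
}

for name, s in assets.items():
    priority = s["asset"] * s["threat"] * s["vulnerability"]   # Risk = A * T * V
    exposure = s["likelihood"] * s["impact"]                   # Risk = L * I
    print(f"{name}: priority={priority}, exposure={exposure}")

# The payroll server scores higher on both, so it gets the preventative
# controls first; the kiosk can lean on detective controls.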

Once prioritized, we can then move on to determining controls to reduce the risk. Controls can be divided into three broad methods: administrative (or management), operational, and technical. Preventative and detective are the two main forms of controls. Preventative controls stop the threat agent from taking advantage of the threat. In the above example, a preventative control would be a locked door. Detective controls track violations and provide a warning system. For the disgruntled employee entering an unauthorized area, a detective control would be something like a motion detector. The resulting control matrix includes management preventative controls, management detective controls, operational preventative and detective controls, and so on for technical controls.

Security Information Management (SIM) is a technical detective control that is comprised of event monitoring and pattern detection. Event monitoring shows what happened, when, and where, from both the network and the computer perspectives. Pattern detection is then applied to look for known attacks or unknown anomalies. The challenge an InfoSec guy faces is that there are just too many events and too many attacks to perform this analysis manually. The purpose of a SIM is to aggregate all the detective controls from various parts of the network, automate the analysis, and roll it up into one single console.

My approach to managing security for business networks is to use Risk Management for a top-down approach. This allows me to prioritize my efforts for preventative controls. My team and I can then dig deep into the security options and system parameters offered by the IT equipment that is driving the business. For all other systems, I rely on detective controls summarized by a Security Information Management tool.

In my network architecture, RM drives preventative controls and SIM drives detective controls.

Tags:

Risk Management | Security Information Management

Nmap output to XML and SQL

By wolfgang. 28 November 2008 10:39

The Nmap port scanner has a handful of output options. It has its own normal format (-oN). If you want to play with the data, you can use XML output (-oX) or grepable text files (-oG). The -oA option exports all three formats at once.

Why export to XML or grepable text? Typically, because you want to audit several IP hosts and store the results in a database.

A quicker method is to use the Nmap::Parser module with a Perl script. This method comes courtesy of Anthony Persaud. His Nmap-Parser automates reading the XML output and writing to SQL tables. MySQL and SQLite are both supported. Nmap-Parser is now up to version 1.19.
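Nmap::Parser is the Perl route; for readers who prefer Python, here is a rough standard-library sketch of the same idea, loading nmap -oX output into SQLite. The XML element names follow nmap's schema as best I recall, so treat it as a starting point rather than a finished tool:

# Rough sketch: load `nmap -oX scan.xml <targets>` output into SQLite
# using only the Python standard library. XML paths may need adjusting.
import sqlite3
import xml.etree.ElementTree as ET

def load_scan(xml_path, db_path="scans.db"):
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS ports
                  (host TEXT, port INTEGER, proto TEXT, state TEXT, service TEXT)""")
    root = ET.parse(xml_path).getroot()
    for host in root.findall("host"):
        addr = host.find("address").get("addr")
        for port in host.findall("./ports/port"):
            service = port.find("service")
            db.execute("INSERT INTO ports VALUES (?, ?, ?, ?, ?)",
                       (addr,
                        int(port.get("portid")),
                        port.get("protocol"),
                        port.find("state").get("state"),
                        service.get("name") if service is not None else None))
    db.commit()
    return db

if __name__ == "__main__":
    load_scan("scan.xml")   # e.g. after: nmap -oX scan.xml 192.0.2.0/24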

Use case: nightly IP scans of a subnet along with TCP scans of select hosts, as part of a security information management process.

Update: Paul Haas has a sample Perl script that uses Nmap::Parser and SQLite.

Tags:

Security | Security Information Management
