J Wolfgang Goerlich's thoughts on Information Security
Incog: past, present, and future

By wolfgang. 10 January 2013 12:30

I spent last summer tinkering with covert channels and steganography. It is one thing to read about a technique. It is quite another to build a tool that demonstrates a technique. To do the thing is to know the thing, as they say. It is like the art student who spends time duplicating the work of past masters.

And what did I duplicate? I started with the favorites: bitmap steganography and communication over ping packets. I did Windows-specific techniques such as NTFS ADS, shellcode injection via Kernel32.dll, mutexes, and RPC. I also replicated Dan Kaminsky’s Base32 over DNS. Then I tossed in a few evasion techniques like numbered sets and entropy masking.
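Kaminsky's trick works because DNS queries pass through most network perimeters unchallenged. As a rough sketch of the encoding side (the function and domain names here are my own illustration, not Incog's API), the payload is Base32-encoded, since DNS names are case-insensitive and limited to letters, digits, and hyphens, then split into labels of at most 63 bytes:

```python
import base64

MAX_LABEL = 63  # RFC 1035 limits each DNS label to 63 octets

def encode_dns_query(payload: bytes, domain: str = "example.com") -> str:
    # Base32 survives DNS because its alphabet is case-insensitive alphanumeric
    b32 = base64.b32encode(payload).decode("ascii").rstrip("=").lower()
    # Split the encoded payload into legal label-sized chunks
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join(labels + [domain])
```

A covert sender would then resolve each generated name against a DNS server it controls; the receiver decodes the labels back out of its query log.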

Incog is the result of this summer of fun. Incog is a C# library and a collection of demos which illustrate these basic techniques. I released the full source code last fall at GrrCon. You can download Incog from GitHub.

If you would like to see me present on Incog, including my latest work with new channels and full PowerShell integration, I am up for consideration for SOURCE Boston 2013.

Please vote here: https://www.surveymonkey.com/s/SRCBOS13VS

This year SOURCE Boston is opening up one session to voter choice. Please select the session you would like to see at SOURCE Boston 2013. Please only vote once (we will be checking) and vote for the session you would be the most interested in seeing. Voting will close on January 15th.

OPTION 5: Punch and Counter-punch with .Net Apps, J Wolfgang Goerlich. Alice wants to send a message to Bob. Not on our network, she won’t! Who are these people? Then Alice punches a hole in the OS to send the message using some .Net code. We punch back with Windows and .Net security configurations. Punch and counter-punch, breach and block, attack and defend, the attack goes on. With this as the back story, we will walk through sample .Net apps and Windows configurations that defenders use and attackers abuse. Short on slides and long on demo, this presentation will step through the latest in Microsoft .Net application security.


Cryptography | Out and About | Security

Not-so-secure implementations of SecureString

By wolfgang. 27 July 2012 09:48

Microsoft .Net has an object for safely and securely handling passwords: System.Security.SecureString. "The value of a SecureString object is automatically encrypted, can be modified until your application marks it as read-only, and can be deleted from computer memory by either your application or the .NET Framework garbage collector", according to the MSDN documentation. As with any security control, however, there are a few ways around it. Consider the following PowerShell and C# code samples.

# Some not-so-secure SecureString from a vendor whitepaper
$password = Read-Host -AsSecureString -Prompt "Please provide password"


// Some not-so-secure SecureString code I wrote by mistake
private void button1_Click(object sender, RoutedEventArgs e)
{
    secretString = new SecureString();
    // Each character sits in textBox1.Text as clear text before it is appended
    foreach (char c in textBox1.Text.ToCharArray()) { secretString.AppendChar(c); }
    textBox1.Text = "";
}


Try the samples above. Use something like Mandiant's Memoryze or AccessData FTK Imager to get a copy of your current Windows memory. Search the memory for your password and, sure enough, you will find it in clear text. Sometimes, as with the C# code, you will find your password several times over.

What happened? In both cases, the value was passed to the SecureString in clear text. The SecureString itself is encrypted; the original input, however, is not. That input value may stay in memory for a long time, depending on what the underlying Windows OS is doing.

Below are some examples of populating a SecureString in such a way that the password is not exposed in clear text. As the saying goes, trust but verify. In this case, trust the method but check using Memoryze or Imager to be certain.


# A secure SecureString implementation
$password = New-Object System.Security.SecureString
do {
  $key = $host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown")
  if ($key.VirtualKeyCode -eq 13) { break }
  $password.AppendChar($key.Character)
} while ($true)


// A more secure SecureString code example
private void button1_Click(object sender, RoutedEventArgs e)
{
    secretString = passwordBox.SecurePassword;
}



Cryptography | Security

Microsoft embraces and extends IPSec NULL

By wolfgang. 8 January 2010 06:53

IPsec provides authentication, integrity, and confidentiality. In IPv4, IPsec generates an AH (Authentication Header) that provides packet header integrity using a cryptographic hash. ESP (Encapsulating Security Payload) provides integrity using a hash and confidentiality using encryption. Both AH and ESP provide authentication through the Internet Key Exchange (IKE).

The hashing is typically done with MD5 or SHA and the encrypting is done with 3DES or AES. Known attacks exist for MD5 and 3DES that render them only slightly better than nothing. SHA-1 is in a similar state, and NIST is currently working on SHA-3. For now, the best choice is SHA-2 with a long key length and AES.

Interestingly, ESP can also be encrypted using NULL. (See RFC 2410: The NULL Encryption Algorithm and Its Use With IPsec). "NULL does nothing to alter plaintext data.  In fact, NULL, by itself, does nothing.  NULL provides the means for ESP to provide authentication and integrity without confidentiality." Put differently, ESP performs the key exchange and hashing only.
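As a rough illustration (this is the idea, not the actual ESP wire format or key exchange), an RFC-conformant ESP-NULL transform amounts to an identity cipher plus a keyed integrity check:

```python
import hashlib
import hmac

def null_encrypt(plaintext: bytes) -> bytes:
    # RFC 2410: "NULL does nothing to alter plaintext data"
    return plaintext

def integrity_tag(key: bytes, payload: bytes) -> bytes:
    # ESP still computes an integrity check value over the (unencrypted) payload
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(key: bytes, payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison of the received and recomputed tags
    return hmac.compare_digest(integrity_tag(key, payload), tag)
```

The receiver gets authenticity and tamper detection, while anyone on the wire can read the payload.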

Microsoft's version of IPsec NULL does not quite conform to the RFC. Rather than using a hashing algorithm in conjunction with NULL encryption, Windows 7 and Windows 2008 skip it altogether. According to Microsoft's IPsec setup guide, the NULL encapsulation "option specifies that no integrity protection is provided to each network packet in the connection. No AH or ESP header is used to encapsulate the data." Embraced? Yes. Extended? Not so much.


Cryptography | Encryption | Systems Engineering

Audit for SSL/TLS renegotiation

By wolfgang. 16 November 2009 14:43

An SSL/TLS renegotiation attack has been carried out against Twitter. The Register has some details on the Twitter attack, while Educated Guesswork has the technical details on the renegotiation vulnerability itself.


SSL/TLS renegotiation has been used to get a web server to downshift its cipher and key length before. The new angle is using renegotiation to cause both the web server and the browser to renegotiate and create a man-in-the-middle scenario. Once inserted in the middle, between web server and browser, the attacker can access the HTTP stream unencrypted.


Being an IT operations security guy, my focus is on auditing for and protecting against the weakness. The mitigation is simple: disable renegotiation. As for auditing, you can use openssl on any Linux OS to test.


sudo openssl s_client -connect www.yourhosthere.com:443


You will see the certificate chain, server certificate, SSL handshake, and SSL session details. The session is established when OpenSSL reports Verify return code: 0 (ok).


Now suppose OpenSSL reports verify error:num=20:unable to get local issuer certificate. I have seen this error on GoDaddy websites. To resolve, browse to the website with Firefox. Open the certificate viewer and click the details tab. There, below the details, click the Export button. Save the certificate file in the X.509 PEM format with a .pem extension (example: godaddy.pem). Then rerun OpenSSL and specify the certificate authority file.


sudo openssl s_client -connect www.yourhosthere.com:443 -CAfile godaddy.pem


Make an HTTP request and then request renegotiation. Within the s_client session, a line consisting of a single capital R triggers renegotiation:


GET / HTTP/1.0
R

The error ssl handshake failure indicates the web server is denying renegotiations. If OpenSSL renegotiates successfully, you will see a new certificate path and then read:errno=0. Contact your web server administrator if the server renegotiates.



(Update 2009-12-18: You can use the Matriux distro to perform the above steps.)


Apache | Cryptography | IIS | Security

Criminal Intent and Cryptography (IANAL)

By wolfgang. 28 February 2009 14:36

The question is back in the news: is using encryption a sign that you are a criminal?


In May of 2005, a Minnesota court filed a ruling that upheld a conviction in part based on the presence of encryption software (State v. Levie). The chilling sentence in the filing was: “We find that evidence of appellant’s internet use and the existence of an encryption program on his computer was at least somewhat relevant to the state’s case against him.” This was but one in a chain of legal cases involving cryptography. In fact, the software in question in the Levie case has, almost from its inception, been the subject of legal scrutiny. Yet the ruling set off a firestorm in part because it appeared to imply that encryption by itself was indicative of criminal activity.



The view was reinforced by Bruce Schneier, a prominent InfoSec analyst and cryptographer. “An appeals court in Minnesota has ruled that the presence of encryption software on a computer may be viewed as evidence of criminal intent.” As sometimes happens on the Internet, the resulting discussion involved many who did not read the ruling and many more who employed fallacy of extension arguments. For example, one commentator responded "Next I suppose they'll consider finding a knife in your kitchen is 'evidence' on criminal intent to commit some gruesome attack on an innocent bystander." 


The actual ruling was significantly more balanced than it appeared from Schneier’s summary. When placed into context, it is clear that the presence of cryptography, along with the existence of searches related to the crime, was introduced to demonstrate Levie’s state of mind. It was only in relation to the primary crime that they, in fact, became admissible. Other writers were quick to point this out. “The court did not hold that encryption is a signal of criminal activity. All it did was say that in one case, where a crucial witness testified about the presence of a computer file on a computer, that the presence of encryption software on the computer in early 2003 was 'at least somewhat relevant' to the question of whether the defendant was a skilled computer user who had intentionally removed any traces of that file from the hard drive” (Kerr, 2005).


The concern remained, however, that the ruling would be interpreted and used in future cases. This concern was best voiced by Jennifer Granick, the “hacker lawyer” and director of Stanford Law School's Center for Internet and Society. Granick repeated Kerr’s argument that the ruling suggested the presence of encryption software simply shows the defendant could have destroyed the evidence. She then repeated the argument that was most concerning: the ruling could demonstrate that encryption “suggests a consciousness of guilt.” That is, why encrypt if you have nothing to hide? While Granick is careful to say that both interpretations are valid, the primary concern should not be “what this opinion says or doesn't say, but how it could be used by courts looking at this issue in the future.”


Where do we stand today? The legal opinions are still a mixed bag. Because encryption is such a wide field, let us take another example that deals with PGP. A federal judge ruled in 2007 that a decryption passphrase was protected under the Fifth Amendment (United States v. Boucher). This was celebrated at the time, but the celebrations were short lived. In February of 2009, the court reversed its decision. At its heart was the definition of a PGP passphrase: was it speech or was it a key? The original ruling came down on the side of speech and thereby protected the passphrase. The reversal saw the passphrase like a key or combination, which, prior rulings had established, is not protected by the Fifth Amendment.


A person can be legally required to open a safe and reveal incriminating documents or books. Likewise, according to United States v. Boucher, a person can now be required to decrypt a folder to reveal digital documents. The issue is whether the presence of encryption software or, for that matter, illicit digital materials is relevant to prosecution.


While this topic has not been specifically taken up in law journals, the related topic of suspicious materials has been covered in depth. See, for example, Swiss Cheese That's All Hole: How Using Reading Material To Prove Criminal Intent Threatens The Propensity Rule (Murphy, May 2008). Murphy details the legal precedent and evidentiary rules that allow for reading materials to be used by the prosecution. Such materials can be submitted if they are relevant in demonstrating the defendant’s mental state or aptitudes. But books cannot be used to demonstrate motive or intent. By itself, a book cannot be used to demonstrate a defendant had the inclination towards criminal acts (this is the propensity rule).


The same legal structures likely apply to digital materials. In State v. Levie, illicit web searches were relevant to Levie’s mental state at the time of the crime. The presence of PGP was relevant to Levie’s ability to obfuscate or remove digital evidence. Yet neither the web searches nor PGP were used to demonstrate his propensity to perform the criminal act. The digital materials, just like written materials would have been, were not used to define motive or intent. That proof came from the primary evidence: eyewitness testimony. People can continue to safely use PGP and other encryption technologies. The propensity rule prevents these from being admitted as evidence of criminal intent.



2008 US Presidential Elections Predicted

By wolfgang. 10 January 2008 15:13

Hashing functions such as MD5 are susceptible to collisions. That is, someone can take two documents and tailor them such that the resulting MD5 hashes are identical. One group demonstrated this particularly well by crafting documents that predict who will win the US election: twelve PDF documents that all have the same MD5 fingerprint.
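Producing the colliding documents takes serious computation (the linked research used PlayStation 3 hardware), but verifying the result is trivial. A quick way to check fingerprints yourself; the file names below are hypothetical stand-ins for the prediction PDFs:

```python
import hashlib

def md5_fingerprint(data: bytes) -> str:
    # Hex digest of the MD5 hash; colliding documents yield identical strings
    return hashlib.md5(data).hexdigest()

# For the twelve prediction PDFs, every fingerprint would match even though
# the documents differ, e.g.:
#   md5_fingerprint(open("candidate_a.pdf", "rb").read()) ==
#   md5_fingerprint(open("candidate_b.pdf", "rb").read())
```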


Predicting the winner of the 2008 US Presidential Elections using a Sony PlayStation 3





Reading your SSL Web Traffic

By wolfgang. 10 January 2008 15:09

Consider SSL. The web client and web server exchange keys and establish an encrypted tunnel, over which they then communicate. The user sees the reassuring padlock and begins entering sensitive information such as credit cards and passwords.


Of course, the person could be sending quite a bit more through that tunnel and no one in the middle would be the wiser. This makes it difficult to protect incoming and outgoing traffic against threats such as drive-by downloads and data leakage. The question becomes how to read the SSL traffic as it crosses our Internet gateways.


I have been looking at Microsoft’s ISA server quite a bit recently. One feature they offer is SSL bridging. Now the web client negotiates an SSL tunnel with the ISA server. The ISA server then negotiates a separate tunnel between itself and the web server. Then ISA proxies web requests between these two tunnels. The web traffic is unencrypted on ISA itself and therefore can be monitored.


Of course, this means that people cannot trust their sensitive information is actually confidential. But, I am sure someone will say, we trust our network administrators. True, yet consider this: Akamai also does SSL bridging on a massive scale. This company handles web traffic for a third or more of all Internet sites. If you are hitting Akamai during an SSL session, someone at Akamai can read your unencrypted information.


