Posts Tagged microsoft

Updating the Updater

Professor John Frink Updates

Attacks against security components have been fairly common on server operating systems for decades; on PCs this wasn't always necessary because their security models resembled Swiss cheese. Since the beginning of the 21st century, Microsoft has been working diligently to close the obvious holes (for the most part). As a result, researchers have shifted their focus to the attack surface of security-centric code on PCs. Case in point: in the past several years we've seen loads of advisories released for vulnerabilities in anti-virus software. Read the Yankee Group's "Fear and Loathing in Las Vegas: The Hackers Turn Pro" for a more in-depth analysis of this trend. One area in particular where I feel PC protection is lacking is automated software security update mechanisms; there is a lot of room for improvement.

According to Hewlett-Packard, Digital Equipment Corporation was the first in the industry to perform patch delivery, in 1983. Prior to this, updates were commonly delivered on tape by private courier. At one of 2600's HOPE conferences, Kevin Mitnick spoke during the social engineering panel about an analog attack he had used to compromise this process. The gist of it was that he wore a UPS uniform (procured from a costume store) and delivered the "update" tape to his mark with a login trojan on it. Later, Mitnick became known for using SYN floods and TCP hijacking against Tsutomu Shimomura. Some sources even refer to this sort of digital man-in-the-middle as "The Mitnick Attack."

Many software update components don't use public key infrastructure to cryptographically verify the validity of the update server (e.g., SSL) or the update package (e.g., a digital signature). This is a problem. Impersonating the software update server is usually trivial: Wi-Fi access point impersonation, DNS cache poisoning, ARP spoofing, session hijacking, and compromising the legitimate update server are all possibilities.
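To illustrate the package-verification half of the problem, here is a minimal sketch (Python, stdlib only; `verify_package` and the pinned digest are my own hypothetical names, not any real updater's API) of a client refusing a package that fails an integrity check. Note the hedge in the comments: a bare hash check only helps if the expected digest is delivered over an authenticated channel, which is why production updaters use asymmetric signatures instead.

```python
import hashlib
import hmac

# Hypothetical pinned digest: in a real updater this value would itself be
# distributed out-of-band or, better, replaced by an asymmetric signature
# check, so a man-in-the-middle can't swap both the package and the digest.
EXPECTED_SHA256 = hashlib.sha256(b"update-package-contents").hexdigest()

def verify_package(data: bytes, expected_hex: str) -> bool:
    """Reject the downloaded update unless its digest matches the pinned value."""
    actual = hashlib.sha256(data).hexdigest()
    # compare_digest avoids leaking how many leading characters matched.
    return hmac.compare_digest(actual, expected_hex)

print(verify_package(b"update-package-contents", EXPECTED_SHA256))  # True
print(verify_package(b"tampered-package", EXPECTED_SHA256))         # False
```

The same shape applies to signature verification: download, verify against material that shipped with the client, and only then install.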

Some applications (I'm not going to name names) rely on HTTP (note that I didn't say HTTPS) for downloading packages after checking for updates, instead of using a separate file transfer manager program or an internal update component. This is much easier to reverse engineer than a custom update solution. Sometimes the attacker can let the real update server carry out most of the process and simply shoehorn malcode into the update session(s) once the initial preconditions are met.
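On the transport side, the cheapest defense is simply refusing to fetch packages over plaintext at all. A sketch of such a policy check (the function name and URLs are hypothetical; this is a gate, not a complete defense, since the connection still needs certificate validation and the package still needs a signature check):

```python
from urllib.parse import urlparse

def is_acceptable_update_url(url: str) -> bool:
    """Refuse to download update packages over anything but HTTPS."""
    parsed = urlparse(url)
    # Require an https scheme and a non-empty host; anything else is rejected.
    return parsed.scheme == "https" and bool(parsed.netloc)

print(is_acceptable_update_url("https://updates.example.com/pkg-1.2.zip"))  # True
print(is_acceptable_update_url("http://updates.example.com/pkg-1.2.zip"))   # False
```

A check like this also blocks the common downgrade trick where an update manifest fetched securely points the client at a plain-HTTP mirror for the payload.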

SSL won’t save the day either unless it’s implemented properly. I’ve seen plaintext updaters with digital signatures that are safer than some HTTPS updaters. Gentoo’s Portage Tree (emerge and ebuild) is a good example of an effective plaintext digital signature approach. See SECOBJADV-2008-01 (CVE-2008-3249) for a description of a software updater with an erroneous SSL implementation.

The issue is further complicated by the fact that software updaters themselves need to be updated in order to resolve such vulnerabilities, which typically requires a major architectural modification. What's worse, breaking the updater would force users to update manually. Hoyvin-Glayvin!


Dimes

Microsoft Security Bulletin

MS08-010 – Critical CVE-2008-0076

None of the flaws I've found on Microsoft platforms have ever been public (that is, they have all been derived from internal projects), and it's nice to see that in this round of fixes my bug scored a perfect 10.0 (a dime) on the bulletin. I did not test as many platforms and configurations as Microsoft did; for those of you who are unaware, bug regression and the overall triage process can become quite intensive. I knew that this vulnerability/flaw/bug/exploit/whatever had wide-reaching appeal, which is fairly easy to see from the fact that all architectures and versions, as far back as possible, are marked critical.

As with all doings in the security space, walking the line between disclosure and tight-lipped, mum's-the-word practice is not easy. So, what can be said here? Nothing? Something? I guess I have to write something; the marketoids wouldn't be happy if I did not.

Before I digress into any technical discussion, I'll take this opportunity to say something about the exploit sales "industry." In this world, everything and everybody has a place; that said, any individual who thinks exploits are worth real money has another think coming. Look at it this way: if you're in the business of purchasing information (exploits), then by definition you are unaware of the value of that information, and so you are inherently in a position to devalue the time and emotional investment that went into deriving it. Which means you're never going to get back enough cash to make up for your time, ever. Where I do see some value in exploit brokers is exclusively in their capacity to take on the burden of dealing with uninformed software vendors (the Microsoft/IBM/others process is fairly straightforward).

Now that that's out of the way: I don't really want to talk about the exploit, at least not until some poorly constructed version winds up in Metasploit. I will say, though, that the bulletin is correct in its description and synopsis.

The fact that no mitigating factors or workarounds are possible gives me some incentive, and some reassurance that the tools and methodologies we're building into our product offering work.

We're ramping up development for a big push this quarter and will be uploading more screenshots and related minutiae in the coming months.

In brief, our product is an automated tool for finding flaws in native applications. It dynamically assesses the integrity of a given binary application at runtime. The process then produces test cases and reproductions of what is necessary to trigger the flaw for a developer (reducing regression rates in bug fixes, since it's much easier to fix something you can interact with than to act on a simple warning message).

We're also working on a management interface (on top of the technical one) that will enable even a layperson to identify architectural problems in arbitrary software. This is actually quite simple (with the support of our engine): in essence, a landscape or tomography view is laid out before the user, with peaks and valleys that change over time (4D), representing the surface area of your application binary's response to input; that is, a dynamic environment rooted in a system-of-systems methodology. What becomes apparent (if you are in a position to fix these issues yourself) is that, as time goes on and you assign resources (people) to flatten the peaks into valleys, the rate at which the peaks (bugs) get pushed down is not constant across the application; some issues are harder to fix than others and persist. In this way, a self-relative understanding of where problem areas of code exist emerges: they reveal themselves as architectural flaws, and appropriate steps can be taken to drive the business case for a rewrite.

Whew, that’s a mouthful. Needless to say, we’re working to create the best platform around for software sovereignty.

