Posts Tagged programming

Tickle! See? Gee, I …

A montage of TCL and Tcl-related logos


Ah, TCL, the Tool Command Language. Based on research conducted by myself and my colleagues here at Security Objectives (most notably Shane Macaulay), we have concluded that Tcl has a multitude of security issues, especially when used in a network environment; and these days, network usage is almost unavoidable. In essence, we urge extreme caution in Tcl-based web development, whether Tcl is being used directly or indirectly. More generally, we also advise against using Tcl for any network application or protocol (not just HTTP). Security Objectives has published an in-depth analysis of practical Tcl vulnerabilities. The whitepaper, entitled “Tickling CGI Problems”, outlines the theoretical backbone of the phenomena in its first half and presents cases of real-world exploitation in its second half. Familiarity with that background theory, along with some general programming and Hypertext Transfer Protocol knowledge, is recommended in order to gain a firm understanding of the exploits themselves.
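As a minimal illustration of the sort of pitfall at issue (this example is mine, not drawn from the whitepaper), consider Tcl’s classic double-substitution hazard: an unbraced `expr` re-parses its argument, so attacker-controlled data can smuggle in command substitution. Python’s `tkinter` module embeds a real Tcl interpreter, which makes the behavior easy to demonstrate:

```python
from tkinter import Tcl  # tkinter bundles a genuine Tcl interpreter

interp = Tcl()
# An attacker-controlled value containing a Tcl command substitution;
# [string length AAAA] stands in for something nastier like [exec ...].
interp.eval('set userinput {[string length AAAA]}')

# Unbraced expr: the expression parser performs a second round of
# substitution, so the embedded [string length ...] command executes.
injected = interp.eval('expr $userinput')    # command runs, yields "4"

# Braced expr: $userinput is substituted exactly once and treated as a
# plain string operand; the embedded command is never executed.
literal = interp.eval('expr {$userinput}')   # '[string length AAAA]'

print(injected, literal)
```

The braced form is the standard Tcl advice for exactly this reason; the payload here is harmless, but in a CGI context the same mechanism executes arbitrary commands.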

This is not to say that Tcl should never be used; as a disclaimer, we are not advocating any programming language over another. Our position is that the traditional approach to web security with Tcl has much room for improvement. Like any other programming language, it works nicely in certain areas such as academic research, scientific computing, extensions, and software testing. With that being said, one project that comes to mind is regfuzz, a quite useful regular expression fuzzer written in Tcl. The distinction here is that regfuzz is not intended to be exposed to a public (or even a private) network. Surely, Safe-Tcl could successfully serve network clients in a hardened production environment, given that the assessed risks were rated low enough to be acceptable. The problem is that, as an overwhelming majority of cases show, that is not how Tcl is deployed in practice.
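regfuzz itself is written in Tcl, but the idea behind a regular-expression fuzzer is simple enough to sketch in a few lines of Python (a rough sketch of mine, not regfuzz’s actual design): generate random pattern strings, feed them to the regex engine, and watch for anything worse than a clean rejection.

```python
import random
import re
import string

def fuzz_regex(trials=200, seed=1):
    """Feed randomly generated patterns to the regex engine. A malformed
    pattern should raise re.error; anything else (a crash, a hang) would
    indicate a bug in the engine itself."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits + "()[]{}*+?|.\\^$-"
    rejected = 0
    for _ in range(trials):
        pattern = "".join(rng.choice(alphabet)
                          for _ in range(rng.randint(1, 12)))
        try:
            re.compile(pattern)
        except re.error:
            rejected += 1  # malformed pattern, handled gracefully
    return rejected

print(fuzz_regex())  # number of patterns the engine cleanly rejected
```

A real fuzzer would also bound execution time per pattern and exercise matching, not just compilation, but the core loop is this small.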

The vulnerabilities exposed by the whitepaper affect TclHttpd, Lyris List Manager, cgi.tcl (which also uses Expect), as well as the Tcl language itself and interpreters thereof. Some of the attack methodologies and vulnerabilities identified are new to the public. Others are similar to well-known attacks or simply subversions of previous security patches, e.g. CVE-2005-4147. As time unfolds, there will surely be a surge in publicized Tcl weaknesses due to the research elaborated on within the whitepaper. If you’re interested in discovering vulnerabilities in Tcl software yourself, there is a grand list of Tcl-related references available online, and there is also a USENET newsgroup dedicated to the language which is naturally called comp.lang.tcl.

For those of you attending CanSecWest 2011 in Vancouver, we are sponsoring the event. Professionals from Security Objectives will be in attendance to answer your queries regarding Tcl/Tk security or other areas of specialized research (information assurance, software assurance, cloud security, etc.) Of course, our professionals will also be available to field questions regarding Security Objectives’ product and service offerings as well. In addition, CanSecWest 2011 attendees receive special treatment when purchasing licenses for BlockWatch, the complete solution to total cloud security.


Why Dynamic Program Analysis is Superior.. Part Two: False Positives

Fool's GoldA few years ago I was making a living as a dedicated employee of a security consultancy whose name I won’t mention. For those of you who know me, I’ll give you three guesses and the last two don’t count. In any case, one day I was working at a (unnamed) client site and I noticed one of my fellow consultants running RATS, an antiquated static analysis tool for auditing source code. RATS stands for Rough Auditing Tool for Security and “rough” is a suitable description of it. Similar to another tool named flawfinder, RATS greps through source code for calls to functions that are considered unsafe. It prints out the source file and line number where the function call that is considered unsafe occurs. The person using RATS still has to review the programming language statements themselves to confirm that the alert displayed by RATS wasn’t a false positive.

So now I wonder.. Why even use such tools at all? Why not do a manual review of the code yourself, since you’re going to have to look at the code anyway? I have no doubt that the tool can grep faster than the human eye, but the trained eye can pick up things that the source code scanner can’t. Furthermore, a dynamic analyzer can detect things that both the static analyzer and the human won’t see. I don’t want anyone to think that I’m trying to malign my old co-worker’s noble effort to get his job done; that’s not my intention at all. I am simply looking towards the future and wondering how these tasks will be accomplished five or even ten years from now.

Software assurance suites that take advantage of the dynamic code analysis paradigm can achieve a zero false-positive rate. Static analyzers tend to ask the question: “What appears to be wrong with this code that I’m analyzing?” whereas dynamic analyzers phrase the question as: “Which input sets will yield unexpected and/or unintended program output?”

As a result of executing software directly (either natively or through emulation), the dynamic analysis approach to assuring software quality is much better poised to discover bugs without presenting a conundrum of false positives. A static analysis tool may appear to detect what is believed to be a critical security vulnerability; however, since the tool is not actually executing the program in question, what appears to be an insecurity can turn out to be a safe operation. Conversely, what appears to be a safe operation can in actuality be another vulnerability.
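The contrast can be made concrete with a toy example (my own sketch, not any particular product’s approach): a dynamic checker only reports inputs that actually made the program misbehave, so each finding is a witnessed failure rather than a suspicion.

```python
def parse_port(s):
    """Hypothetical function under test; fails on bad input."""
    port = int(s)  # raises ValueError on "" or non-numeric input
    if not 0 <= port <= 65535:
        raise ValueError("port out of range")
    return port

def dynamic_check(fn, inputs):
    """Execute fn on each input and report only concrete failures.
    Every finding corresponds to an actual failing run, so the report
    contains no false positives by construction."""
    failures = []
    for x in inputs:
        try:
            fn(x)
        except Exception as exc:
            failures.append((x, type(exc).__name__))
    return failures

print(dynamic_check(parse_port, ["80", "", "70000"]))
# [('', 'ValueError'), ('70000', 'ValueError')]
```

The trade-off, of course, is coverage: the dynamic checker can only vouch for the inputs it actually ran, which is why input generation is the hard part of the paradigm.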

