Tickle! See? Gee, I …

A montage of TCL and Tcl-related logos


Ah, TCL, the Tool Command Language. Based on the research conducted by myself and my colleagues here at Security Objectives (most notably Shane Macaulay), we have concluded that Tcl has a multitude of security issues, especially when used in a network environment; and contemporarily speaking, network usage is almost unavoidable. In essence, we are urging the use of extreme caution in Tcl-based web development, whether Tcl is being used directly or indirectly. More generally, we also advise against using Tcl for any network application or protocol (not just HTTP.) Security Objectives has published an in-depth analysis of practical Tcl vulnerabilities. The whitepaper, entitled "Tickling CGI Problems", outlines the theoretical backbone of the phenomena in its first half and presents cases of real-world exploitation in its second half. The background theory, along with some general programming and Hypertext Transfer Protocol knowledge, is recommended in order to gain a firm understanding of the exploits themselves.

This is not to say that Tcl should never be used; as a disclaimer, we are not advocating any programming language over another. Our position is that the traditional approach to web security with Tcl has much room for improvement. Like any other programming language, it works nicely in certain areas such as academic research, scientific computing, extensions, and software testing. With that being said, one project that comes to mind is regfuzz, a quite useful regular expression fuzzer written in Tcl. The distinction here is that regfuzz is not intended to be exposed to a public (or even a private) network. Surely, Safe-Tcl could successfully serve network clients in a hardened production environment, given that the assessed risks were rated low enough to be acceptable. The problem is that, as an overwhelming majority of cases demonstrate, that is not what happens in practice.

The vulnerabilities exposed by the whitepaper affect TclHttpd, Lyris List Manager, and cgi.tcl (which also uses Expect), as well as the Tcl language itself and interpreters thereof. Some of the attack methodologies and vulnerabilities identified are new to the public. Others are similar to well-known attacks or are simply subversions of previous security patches, e.g. CVE-2005-4147. As time unfolds, there will surely be a surge in publicized Tcl weaknesses due to the research elaborated on within the whitepaper. If you're interested in discovering vulnerabilities in Tcl software yourself, there's a grand list of references to Tcl-related things at http://www.tcl.tk/resource_dump.html. There is also a USENET newsgroup dedicated to the language, naturally called comp.lang.tcl.

For those of you attending CanSecWest 2011 in Vancouver, we are sponsoring the event. Professionals from Security Objectives will be in attendance to answer your queries regarding Tcl/Tk security or other areas of specialized research (information assurance, software assurance, cloud security, etc.) Of course, our professionals will also be available to field questions regarding Security Objectives’ product and service offerings as well. In addition, CanSecWest 2011 attendees receive special treatment when purchasing licenses for BlockWatch, the complete solution to total cloud security.


Change-Rooted Jail Hackery Part 2

It’s been a while, but that doesn’t necessarily mean it was vaporware! 😉 As promised in Part 1, the system call tool that mimics GNU fileutils commands is in the code listing below. Support for any additional commands is welcome; if anybody adds more, feel free to e-mail your source code. Extension should be fairly straightforward given the “if(){}else if{}else{}” template: simply add another else-if code block with appropriate command line argument parsing. It’s too bad you can’t really do closures in C, but a likely approach to increasing this tool’s modularity is the use of function pointers. Of course, new commands don’t have to come from GNU fileutils; mixing and matching Linux system calls in C has limitless possibilities.
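For what it’s worth, here’s a minimal sketch of what that function-pointer approach could look like (the handler signature and command table below are illustrative assumptions on my part, not part of syscaller itself):

/* A minimal sketch of table-driven command dispatch via function
 * pointers; adding a command becomes a one-line table entry instead
 * of another else-if block. */
#include <stdio.h>
#include <string.h>

typedef int (*cmd_handler)(int argc, char **argv);

static int do_chdir(int argc, char **argv)
{
  (void)argc; /* argument count is guarded by min_argc in the table */
  printf("would call SYS_chdir on %s\n", argv[2]);
  return 0;
}

static int do_rmdir(int argc, char **argv)
{
  (void)argc;
  printf("would call SYS_rmdir on %s\n", argv[2]);
  return 0;
}

static const struct cmd {
  const char *name;   /* command line verb */
  int min_argc;       /* required argument count */
  cmd_handler fn;     /* handler to invoke */
} cmds[] = {
  {"chdir", 3, do_chdir},
  {"cd",    3, do_chdir}, /* alias */
  {"rmdir", 3, do_rmdir}
};

int main(int argc, char **argv)
{
  size_t i;

  if(argc < 2)
    return 1;

  for(i = 0; i < sizeof(cmds) / sizeof(*cmds); ++i)
    if(!strcmp(argv[1], cmds[i].name) && argc >= cmds[i].min_argc)
      return cmds[i].fn(argc, argv);

  fprintf(stderr, "unknown command: %s\n", argv[1]);
  return 1;
}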

Speaking of GNU, I stumbled across an extremely useful GNU project called parallel. Essentially, it’s a multi-threaded version of xargs(1p). I’ve been including it in a lot of bash scripts I’ve written recently. It doesn’t seem to be part of the default install for any operating system distributions yet; maybe when it evolves into something even more awesome it’ll become mainstream. 🙂 Surprisingly, I was even able to compile it on SUA/Interix without any problems. The only complaint I have about it is the Perl source language (not that I have anything against Perl); I simply feel that the parallelization could be that much faster if written in C. Maybe I’ll perlcc(1) it or something. Okay then, without further ado, here’s the code for syscaller:

/* syscaller v0.8a - breaking out of chroot jails "ex nihilo"
 * by Derek Callaway <decal@security-objectives.com>
 *
 * Executes system calls instead of relying on programs from the
 * GNU/Linux binutils package. Can be useful for breaking out of
 * a chroot() jail.
 *
 * compile: gcc -O2 -o syscaller syscaller.c -Wall -ansi -pedantic
 * copy: cat syscaller | ssh user@host.dom 'cat > syscaller'
 *
 * If the cat binary isn't present in the jail, you'll have to be more
 * creative and use a shell builtin like echo (i.e. not the echo binary,
 * but bash's internal implementation of it.)
 *
 * Without any locally accessible file download programs such as:
 * scp, tftp, netcat, sftp, wget, curl, rz/sz, kermit, lynx, etc.
 * you'll have to create the binary on the target system manually,
 * i.e. by echo'ing hexadecimal bytecode. This is left as an exercise
 * to the reader.
 */

#define _GNU_SOURCE 1
#define _USE_MISC 1

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <pwd.h>
#include <grp.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>

int syscall(int number, ...);

/* This is for chdir() */
#define SHELL_PATHNAME "/bin/sh"

static void usage(char **argv)
{
  printf("usage: %s syscall arg1 [arg2 [...]]\n", *argv);
  printf("help:  %s help\n", *argv);
  exit(EXIT_FAILURE);
}

static void help(char **argv)
{
  puts("syscaller v0.8a");
  puts("chmod mode pathname");
  puts("chdir pathname");
  puts("chown user group pathname");
  puts("mkdir pathname mode");
  puts("rmdir pathname");
  puts("touch pathname mode");

  puts("Note: modes are in octal format (symbolic modes are unsupported)");
  puts("Note: some commands mask octal mode bits with the current umask value");
  puts("Note: creat is an alias for touch");
  puts("ls -a / (via brace/pathname expansion): echo /{.*,*}");

  exit(EXIT_SUCCESS);
}

int main(int argc, char *argv[])
{
  register char *p = 0;
  signed int r = 1;

  if(argc < 2)
    usage(argv);

  /* I prefer to avoid strcasecmp() since it's not really standard C. */
  for(p = argv[1]; *p; ++p)
    *p = tolower(*p);

  do
  {
    if(!strcmp(argv[1], "chmod") && argc >= 4)
    {
      /* parse octal mode string to integer */
      const mode_t m = strtol(argv[2], NULL, 8);

      r = syscall(SYS_chmod, argv[3], m);

#ifdef DEBUG
      fprintf(stderr, "syscall(%d, %s, %d) => %d\n", SYS_chmod, argv[3], m, r);
#endif
    }
    else if((!strcmp(argv[1], "chdir") || !strcmp(argv[1], "cd")) && argc >= 3)
    {
      static char *const av[] = {SHELL_PATHNAME, NULL};
      signed int r2 = 0;

      r = syscall(SYS_chdir, argv[2]);

#ifdef DEBUG
      fprintf(stderr, "syscall(%d, %s) => %d\n", SYS_chdir, argv[2], r);
#endif

      /* This is required because the new current working directory isn't
       * bound to the original login shell. */
      printf("[%s] exec'ing new shell in directory: %s\n", *argv, argv[2]);
      r2 = system(av[0]);
      printf("[%s] leaving shell in child process\n", *argv);

      if(r2 < 0)
        r = r2;
    }
    else if(!strcmp(argv[1], "chown") && argc >= 5)
    {
      struct passwd *u = NULL;
      struct group *g = NULL;

      if(!(u = getpwnam(argv[2])))
        break;

#ifdef DEBUG
      fprintf(stderr, "getpwnam(%s) => %s:%s:%d:%d:%s:%s:%s\n", argv[2],
              u->pw_name, u->pw_passwd, u->pw_uid, u->pw_gid, u->pw_gecos,
              u->pw_dir, u->pw_shell);
#endif

      if(!(g = getgrnam(argv[3])))
        break;

#ifdef DEBUG
      fprintf(stderr, "getgrnam(%s) => %s:%s:%d:", argv[3], g->gr_name,
              g->gr_passwd, g->gr_gid);

      {
        /* gr_mem is a NULL-terminated vector of member name strings */
        char **mp = g->gr_mem;

        while(mp && *mp)
        {
          fputs(*mp, stderr);

          if(*++mp)
            fputc(',', stderr);
        }

        fputc('\n', stderr);
      }
#endif

      r = syscall(SYS_chown, argv[4], u->pw_uid, g->gr_gid);

#ifdef DEBUG
      fprintf(stderr, "syscall(%d, %s, %d, %d) => %d\n", SYS_chown, argv[4],
              u->pw_uid, g->gr_gid, r);
#endif
    }
    else if((!strcmp(argv[1], "creat") || !strcmp(argv[1], "touch")) && argc >= 4)
    {
      const mode_t m = strtol(argv[3], NULL, 8);

      r = syscall(SYS_creat, argv[2], m);

#ifdef DEBUG
      fprintf(stderr, "syscall(%d, %s, %d) => %d\n", SYS_creat, argv[2], m, r);
#endif
    }
    else if(!strcmp(argv[1], "mkdir") && argc >= 4)
    {
      const mode_t m = strtol(argv[3], NULL, 8);

      r = syscall(SYS_mkdir, argv[2], m);

#ifdef DEBUG
      fprintf(stderr, "syscall(%d, %s, %d) => %d\n", SYS_mkdir, argv[2], m, r);
#endif
    }
    else if(!strcmp(argv[1], "rmdir") && argc >= 3)
    {
      r = syscall(SYS_rmdir, argv[2]);

#ifdef DEBUG
      fprintf(stderr, "syscall(%d, %s) => %d\n", SYS_rmdir, argv[2], r);
#endif
    }
    else if(!strcmp(argv[1], "help"))
      help(argv);
    else
      usage(argv);

    break;
  } while(1);

  exit(r);
}



Please note that some of the lines of code in this article are truncated due to how WordPress’s CSS renders the font text. However, you’ll still get every statement in its entirety when you copy it to your clipboard. The next specimen is similar to the netstat-emulating shell script from Part 1. It loops through the procfs PID directories and parses their contents so it looks like you’re running the actual /bin/ps, even though you’re inside a misconfigured root directory that doesn’t contain that binary. It also defines some useful aliases and a simple version of uptime(1).

# ps.bash by Derek Callaway decal@security-objectives.com
# Sun Sep  5 15:37:05 EDT 2010 DC/SO
# Source this into the jailed shell (e.g. `. ps.bash') so the aliases
# and functions take effect in the current environment.

alias uname='cat /proc/version' hostname='cat /proc/sys/kernel/hostname'
alias domainname='cat /proc/sys/kernel/domainname' vim='vi'

function uptime() {
  declare loadavg=$(cat /proc/loadavg | cut -d' ' -f1-3)
  let uptime=$(($(awk 'BEGIN {FS="."} {print $1}' /proc/uptime) / 60 / 60 / 24))
  echo "up $uptime day(s), load average: $loadavg"
}

function ps() {
  local file base pid state ppid uid user
  echo 'S USER     UID   PID  PPID CMD'
  for file in /proc/[0-9]*/status
  do
    base=${file%/status} pid=${base#/proc/}
    # State, PPid, and Uid lines appear in that order in /proc/PID/status.
    { read _ state _; read _ ppid; read _ uid _; } < <(egrep '^(State|PPid|Uid):' "$file")
    IFS=':' read user _ < <(getent passwd $uid) || user=$uid
    # /proc/PID/cmdline is NUL-delimited; translate separators to spaces.
    printf '%1s %-6s %5d %5d %5d %s\n' "$state" "$user" "$uid" "$pid" "$ppid" "$(tr '\0' ' ' < "$base/cmdline")"
  done
}


Jenny’s Got a Perfect Pair of..

binomial coefficients

..binomial coefficients?! That’s right. I’ve found the web site of a Mr. Bob Jenkins with an entire page dedicated to a pairwise covering array generator named jenny.c. I’m fairly sure that only the most hardcore of the software testing weenies have some notion of what those are, so for the sake of being succinct I’ll provide my own explanation here: a pairwise covering array generator is a program for silicon computing machines that deduces sequences of input value possibilities for the purposes of software testing. And yes, I did say silicon computers: since testing their software is really a question of the great Mr. Turing’s halting problem, the existence of a practical, affordable, and efficient nano/molecular computing device such as a DNA computer, Feynman machine, universal quantum computer, etc. would essentially predicate a swift solution to the problem of testing contemporary computer software in non-deterministic polynomial time. The only problem we would have then is how to test those fantastic, futuristic (seemingly science fictive), yet wondrous problem-solving inventions as they break through laborious barriers of algorithmic complexity that twentieth century computer scientists could have only dreamed about: PCP, #P, PSPACE-complete, 2-EXPTIME and beyond.. The stuff that dreams are made of.

Now, let’s return to Earth and learn about a few things that make Jenny so special. Computer scientists learned early on in their studies of software testing that pairwise test cases, i.e. those exercising two input values together, were the most likely to uncover erroneous programming or “bugs.” Forget the luxury of automation for a minute: old school programmers typed input pairs manually to test their own software. Code tested in that manner was most likely some sort of special-purpose console mode utility. (Celsius to Fahrenheit, anyone?) As the computing power of the desktop PC increased according to Moore’s law, it became time-effective to write a simple program to generate these input pairs instead of toiling over it yourself; I suppose not testing at all was another option. Even today, some software is released to market after only very minor functional and/or quality assurance testing. Regression, stress, security, and other forms of testing cost money and reduce time to market, but in reality the significant return on investment acts as a hedge against any losses incurred; even ephemeral losses justify the absolute necessity of these expenditures.
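For illustration’s sake, here’s a minimal sketch of the kind of “simple program” described above, brute-forcing every input pair for a hypothetical temperature-conversion routine (the parameter values are invented for the example):

/* A minimal sketch of exhaustive input-pair generation for two test
 * parameters; the values below are hypothetical boundary cases. */
#include <stdio.h>

int main(void)
{
  static const int temps[] = {-40, 0, 37, 100};
  static const char *const units[] = {"C", "F", "K"};
  size_t i, j;

  /* emit one test case per pair of parameter values */
  for(i = 0; i < sizeof(temps) / sizeof(*temps); ++i)
    for(j = 0; j < sizeof(units) / sizeof(*units); ++j)
      printf("convert(%d, %s)\n", temps[i], units[j]);

  return 0;
}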

A Jenny built in modern times undoubtedly has the power to deductively prove that a software product of the eighties is composed of components (or units) that are fundamentally error-free. However, the paradox remains that improvements in automated software testers share a linear relationship with improvements in software in general. Thus, pairwise has become “n-way,” which describes the process of utilizing greater multiples of input values in order to cover acceptable numbers of test cases. The number of value combinations to cover grows exponentially and can be calculated as a binomial coefficient (see formula below.)

(n choose r) in factorial terms: C(n, r) = n! / (r! (n - r)!)
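To put a number on that growth, here’s a minimal sketch that evaluates the coefficient without computing full factorials (the 20-parameter, 6-way figures in the comments are merely illustrative):

/* A minimal sketch: count the r-way parameter interactions among n
 * test parameters, i.e. the binomial coefficient C(n, r). The
 * multiplicative form avoids computing full factorials. */
#include <stdio.h>

static unsigned long long binomial(unsigned n, unsigned r)
{
  unsigned long long c = 1;
  unsigned i;

  if(r > n - r)
    r = n - r; /* C(n, r) == C(n, n - r); use the smaller loop bound */

  for(i = 1; i <= r; ++i)
    c = c * (n - r + i) / i; /* exact: each step yields C(n - r + i, i) */

  return c;
}

int main(void)
{
  /* e.g. 6-way coverage over 20 parameters means considering
   * C(20, 6) = 38760 parameter combinations */
  printf("C(20, 6) = %llu\n", binomial(20, 6));
  return 0;
}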

According to Paul Black, former SAMATE (Software Assurance Metrics and Tool Evaluation) project leader, researchers at NIST have pegged 6-way as the magic number for optimal fault interaction coverage (notably Rick Kuhn and Dolores Wallace.) This conclusion is based on hard evidence from studies on real-world software scenarios including medical devices and the aerospace industry. However, it would not surprise me to see this approximation rise significantly in the coming decades, just as the paradoxical relationship between general-purpose software and automated software testing programs shifts itself in accordance with Moore’s law. If not by Moore, then by some other axiom of metric progression such as Rogers’ bell curve of technological adoption.

I’ve also got a hunch that the tiny percentage of bugs in that “n is arbitrarily greater than 6” range are some of the most critical, powerfully impacting software vulnerabilities known to man. They lie on an attack surface that’s almost non-existent; this makes them, by definition, obscure, non-obvious, shadowy, and hidden. Vulnerabilities in this category are the most important by their very nature. Therefore, detecting vulnerabilities of this type will involve people and tools that are masters of marksmanship and artistic in their innovation. Research in this area is off to a steady start, especially within the realms of dynamic instrumentation or binary steering, active analysis, fault propagation, higher-order preconditions/dependencies, concurrency issues, race conditions, etc. I believe that combining the merits inherent in these various analysis techniques will lead to perfection in software testing.

For perfection in hashing, check out GNU’s gperf, read how Bob used a perfect hashing technique to augment Jenny’s n-tuples; then get ready for our Big ßeta release of the BlockWatch client software (just in time for the holiday season!)


The “X” Files


It’s been a little while since we last posted, so I wanted to get a blog post out there so everybody knows we’re still alive! We just finalized the XML schema for our soon-to-be-released BlockWatch product, so with all the XML tags, elements, attributes, and such running through my head I figured I’d blog about XML security. I’m sure the majority of penetration testers out there routinely test for the traditional web application vulnerabilities when looking at Web Services. The same old authentication/authorization weaknesses, faulty encoding/reencoding/redecoding, session management issues, et al. are still all there, and it’s not uncommon for a SOAP Web Service to hand off an attack string to some middleware app that forwards it on deep into the internal network for handling by the revered legacy mainframe. Some organizations process so much XML over HTTP that they place XML accelerator devices on their network perimeter. I have a feeling that this trend will increase the number of private-IP web servers that feel the effects of HTTP Request Smuggling.

Additionally, XML parsers that fetch external document references (e.g. remote URIs embedded in tag attributes) open themselves up to client-side offensives from evil web servers. Crafted file attachments can come in the form of a SOAP DIME element or the traditional multipart HTTP POST file upload. With those things in consideration, Philippe Lagadec’s ExeFilter talk from CanSecWest 2008 made some pretty good points on why verifying filename extensions and file header contents or magic numbers isn’t always good enough.

The new manifestations of these old problems should be cause for concern but I personally find the newer XML-specific bugs the most exciting. For example: XPath injection, infinitely nesting tags to cause resource exhaustion via a recursive-descent parser, XXE (XML eXternal Entity) attacks, etc.
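To make the XXE case concrete, here’s a minimal sketch using libxml2: the same document is parsed once with entity substitution switched on (the dangerous, opt-in configuration) and once with default options, which leave the entity reference unexpanded. The payload is the canonical external-entity example, so treat this as an illustration rather than a report on any particular product:

/* A minimal XXE sketch: parse the same document with entity
 * substitution enabled versus libxml2's default options.
 * compile: gcc -o xxe xxe.c $(xml2-config --cflags --libs)
 */
#include <stdio.h>
#include <string.h>
#include <libxml/parser.h>

static const char doc[] =
  "<?xml version=\"1.0\"?>"
  "<!DOCTYPE r [<!ENTITY xxe SYSTEM \"file:///etc/passwd\">]>"
  "<r>&xxe;</r>";

static void parse_doc(const char *label, int options)
{
  xmlDocPtr d = xmlReadMemory(doc, (int)strlen(doc), "mem.xml", NULL, options);

  if(d)
  {
    /* with substitution enabled, this content is the referenced file */
    xmlChar *s = xmlNodeGetContent(xmlDocGetRootElement(d));

    printf("%s:\n%s\n", label, s ? (char *)s : "(empty)");
    xmlFree(s);
    xmlFreeDoc(d);
  }
  else
    printf("%s: parse failed\n", label);
}

int main(void)
{
  /* XML_PARSE_NOENT substitutes entities; XML_PARSE_DTDLOAD loads
   * external DTD content; together, the vulnerable configuration */
  parse_doc("with entity substitution", XML_PARSE_NOENT | XML_PARSE_DTDLOAD);
  parse_doc("default options", 0);
  xmlCleanupParser();
  return 0;
}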

A single file format for everything is supposed to make things simpler, but the lion’s share of real-world implementations over-complicate the fleeting markup language tags to the point where they become a breeding ground for old school exploits and new attack techniques alike; we’re all familiar with the cliche regarding the failure of a “system of systems” with too many moving parts. I’ll touch on some more advanced XML attacks later in the post, but first let’s take a step back and remember XML when it still had a fresh beginning.

Towards the end of the twentieth century, when I first started taking notice of all the hype surrounding XML (the eXtensible Markup Language), I held a fairly skeptical attitude towards it, as I tend to do with many fledgling technologies/standards. Perhaps I’ve been over-analytical in that respect, but look how long it’s taken IPv6 to amass even a minuscule amount of usage! Admittedly, a formal data representation grammar certainly was needed in that “dot-bomb” era, a time when mash-up web applications were near impossible to maintain since consistently pattern matching off-site content demanded continuous tweaking of regular expressions, parsers, etc. The so-called browser war of Netscape Navigator vs. Internet Explorer couldn’t have helped things either. If that was a war, then we must be on the brink of browser Armageddon now that there’s Chromium, Firefox 3, IE8 RTM, Safari 4 Beta, Opera, Konqueror, Amaya, w3m, lynx, etc. The good news? We now have Safari for Win32. The bad news? Microsoft no longer supports IE for MacOS..bummer.

I think it’s fairly rational to forecast continued adoption of XML Encryption and WS-* standards for SOAP Web Services that handle business-to-business and other communications protocol vectors. If you’re bored of the same old Tomcat/Xerces, WebLogic/SAX, etc. deployments, then prepare for applications written with newer APIs to arrive soon; namely Microsoft WCF and Oslo, the Windows Communication Foundation API and a modeling platform with design tools (respectively.) From the surface of .NET promotional hype, it appears as if WCF and Oslo will be synthesized into a suite reminiscent of BizTalk Server’s visual process modeling approach. WCF has commissioned many Web Services standards including WS-Security, but of course not all major software vendors are participating in all of the standards. The crew in Redmond have committed to REST in WCF, and it wouldn’t surprise me to see innovative XML communications techniques arising from the combination of Silverlight 3 and .NET RIA Services; for those of you who still don’t know, RIA is an acronym for Rich Internet Applications! Microsoft is also leveraging the interoperability of this extensible markup language for the long-proprietary document formats of their Office product suite as part of their Open Specification Promise. Even the Microsoft Interactive Canvas, essentially a table that provides I/O through touch, uses a form of XML (XAML) for markup.

Blogosphereans, Security Twits, and other Netizens alike seem to take this Really Simple Syndication thing for granted. Over the past several years there’s been a trend of malicious payloads piggybacking on banner ads. Since RSS and Atom are capable of syndicating images as well, I’d like to see a case study detailing the impact of a shellcode-toting image referenced from within an XML-based syndication format. Obvious client-side effects that occur when the end user’s browser renders the image are to be expected (gdiplus.dll, anyone?) What else could be done? Web 2.0 search engines with blog and image search features often pre-process those images into a thumbnail as part of the indexing process. A little recon might discover the use of libMagick by one and libgd by another. Targeting one specific spiderbot over another could be done by testing the netmask of the source IP address making the TCP connection to the web server, or perhaps by something as simple as inspecting the User-Agent field in the HTTP request header. Crafting a payload that functions both before and after image resizing or other additional processing (ex. EXIF meta-data removal) would be quite an admirable feat. Notwithstanding, I was quite surprised by how much Referer traffic our blog got from images.google.com after Shane included a picture of the great Charlie Brown in his “Good Grief!” post…but I digress.

Several years ago, when I was still living in New York, I became fascinated with the subtle intricacies of XML-DSig while studying some WS-Security literature. XML Signature Validation in particular had attracted my attention in earnest. In addition to the characteristics of traditional digital signatures, XML Signatures exhibit additional idiosyncrasies that require a bit of pragmatism in order to be implemented properly, and therefore also to be verified properly (ex. by a network security analyst.) This is mainly because of the Transform and Reference elements nested within the Signature elements; References and Transforms govern the data to be provided as input to the DigestMethod, which produces the cryptic DigestValue string. A Reference element contains a URI attribute which represents the location of the data to be signed. Depending on the type of Transform element, data first dereferenced from the Reference URI is then transformed (i.e. via an XPath query) prior to signature calculation. That’s essentially how it works. Something that may seem awkward is that the XML being signed can remain exactly the same while the digital signature (e.g. the DigestValue element value) changes. I’ve decided to leave some strange conditions that often come about as an exercise for the reader:

What happens to an XML Digital Signature if … ?

  • No resource exists at the location referenced by the Reference element’s URI attribute value.
  • A circular reference is formed because a URI attribute value points to another Reference element whose URI attribute value is identical to the first.
  • The URI identifies the Signature element itself.
  • A resource exists at the URI, but it’s empty.
  • The Reference element has a URI attribute value which is an empty string, <Reference URI="">


The Philosophical Future of Digital Immunization

Usually it’s difficult for me to make a correlation between the two primary subjects that I studied in college: computer science and philosophy. The first few things that pop into mind when attempting to relate the two are typically artificial intelligence and ethics. Lately, intuition has caused me to ponder a direct link between modern philosophy and effective digital security.

More precisely, I’ve been applying the Hegelian dialectic to the contemporary signature-based approach to anti-virus while pontificating with my peers on immediate results; the extended repercussions of this application are even more fascinating. Some of my thoughts on this subject were inspired by assertions of Andrew Jacquith and Dr. Daniel Geer at the Source Boston 2008 security conference. Mr. Geer painted a beautiful analogy between the direction of digital security systems and the natural evolution of biological autoimmune systems during his keynote speech. Mr. Jacquith stated the current functional downfalls of major anti-virus offerings. These two notions became the catalysts for the theoretical reasoning and practical applications I’m about to describe.

Hegel’s dialectic is an explicit formulation of a pattern that tends to occur in progressive ideas. Now bear with me here: in essence, it states that for a given action, an inverse reaction will occur, and subsequently the favorable traits of both the action and reaction will be combined; then the process starts over. A shorter way to put it is: thesis, antithesis, synthesis. Note that an antithesis can follow a synthesis, and this is what creates the loop. This dialectic is a logical characterization of why great artists are eventually considered revolutionary despite initial ridicule for rebelling against the norm. When this dialectic is applied to anti-virus, we have: blacklist, whitelist, hybrid mixed-mode. Anti-virus signature databases are a form of blacklisting. Projects such as AFOSI md5deep, NIST NSRL, and Security Objectives Pass The Hash are all whitelisting technologies.

A successful hybrid application of these remains to be seen since the antithesis (whitelisting) is still a relatively new security technology that isn’t utilized as often as it should be. A black/white-list combo that utilizes chunking for both is the next logical step for future security software. When I say hybrid mixed-mode, I don’t mean running a whitelisting anti-malware tool and traditional anti-virus in tandem although that is an attractive option. A true synthesis would involve an entirely new solution that inherited the best of each parent approach, similar to a mule’s strength and size. The drawbacks of blacklists and whitelists are insecurity and inconvenience, respectively. These and other disadvantages are destined for mitigation with a hybridizing synthesis.
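To sketch the chunking idea concretely (and only as a sketch: the 4 KiB chunk size and the FNV-1a hash standing in for a cryptographic digest are assumptions for illustration, not a description of any product’s internals), per-chunk hashing looks something like this:

/* A minimal sketch of per-chunk hashing for black/white-list lookups.
 * FNV-1a is a stand-in; a real implementation would use a
 * cryptographic digest and consult actual white/black-list sets. */
#include <stdio.h>
#include <stdint.h>

#define CHUNK_SIZE 4096

static uint64_t fnv1a64(const unsigned char *buf, size_t len)
{
  uint64_t h = 14695981039346656037ULL; /* 64-bit offset basis */
  size_t i;

  for(i = 0; i < len; ++i)
    h = (h ^ buf[i]) * 1099511628211ULL; /* 64-bit FNV prime */

  return h;
}

/* Stub standing in for a white/black-list query. */
static const char *verdict(uint64_t h)
{
  (void)h;
  return "unknown";
}

int main(int argc, char **argv)
{
  unsigned char buf[CHUNK_SIZE];
  size_t n, off = 0;
  FILE *fp;

  if(argc < 2 || !(fp = fopen(argv[1], "rb")))
    return 1;

  while((n = fread(buf, 1, sizeof(buf), fp)) > 0)
  {
    uint64_t h = fnv1a64(buf, n);

    printf("chunk@%08lx len=%4lu hash=%016llx verdict=%s\n",
           (unsigned long)off, (unsigned long)n,
           (unsigned long long)h, verdict(h));
    off += n;
  }

  fclose(fp);
  return 0;
}

A file that comes back mostly whitelisted with a handful of unknown chunks is exactly the kind of middle-ground result that neither a pure blacklist nor a pure whitelist can express.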

The real problem with mainstream anti-virus software is that it’s not stopping all of the structural variations in malware. PCs continue to contract virii even when they’re loaded with all the latest anti-virus signatures. This is analogous to a biological virus that becomes resistant to a vaccine through mutation. Signature-based matching was effective for many years, but now the total set of malicious code far outweighs legitimate code. To compensate, contemporary anti-virus has been going against Ockham’s Razor by becoming too complex, compounding the problem as a result. It’s time for the security industry to make a long overdue about-face. Keep in mind that I’m not suggesting a wholesale abandonment of current anti-virus software; it does serve a purpose and will become part of the synthesis described above.

The fundamental change in motivation for digital offensive maneuvers, from hobbyist to monetary and geopolitical, warrants a paradigm shift in defensive countermeasure implementation. For what it’s worth, I am convinced that the aforementioned technique of whitelisting chunked hashes will be an invaluable force for securing the cloud. It will allow tailored information, metrics, and visualizations to be targeted towards various domain-specific applications and verticals, for example finance, energy, government, or law enforcement, as well as the associated software inventory and asset management tasks of each. Our Clone Wars presentation featuring Pass The Hash (PTH) at Source Boston and CanSecWest will elaborate on our past few blog posts and much more.. See you there!
