Posts Tagged binary

Changed-Rooted Jail Hackery Part 2

It’s been a while, but that doesn’t necessarily mean it was vaporware! 😉 As promised in Part 1, the system call tool that mimics GNU fileutils commands is in the code listing below. Support for additional commands is welcome; if anybody adds more, feel free to e-mail me your source code. Extension should be fairly straightforward given the “if(){} else if(){} else{}” template: simply add another else-if block with the appropriate command line argument parsing. It’s too bad you can’t really do closures in C, but a likely approach to increasing this tool’s modularity is the use of function pointers (a sketch follows below). Of course, new commands don’t have to come from GNU fileutils; mixing and matching Linux system calls in C has limitless possibilities.
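
Here’s a minimal sketch of that function-pointer idea (the handler names and table layout are my own illustration, not part of syscaller itself):

/* Hypothetical dispatch-table refactoring of syscaller's else-if chain.
 * The handlers below are stubs; each would wrap its syscall() invocation. */
#include<stdio.h>
#include<string.h>

typedef int (*handler_t)(int argc, char **argv);

static int do_chmod(int argc, char **argv) { (void)argc; (void)argv; return 0; }
static int do_mkdir(int argc, char **argv) { (void)argc; (void)argv; return 0; }

static const struct command {
  const char *name;  /* command-line verb */
  int min_argc;      /* minimum argc required by the verb */
  handler_t handler; /* function invoked for the verb */
} commands[] = {
  {"chmod", 4, do_chmod},
  {"mkdir", 4, do_mkdir}
};

static int dispatch(int argc, char **argv)
{
  size_t i;

  for(i = 0;i < sizeof commands / sizeof *commands;++i)
    if(!strcmp(argv[1], commands[i].name) && argc >= commands[i].min_argc)
      return commands[i].handler(argc, argv);

  return -1; /* unknown verb: caller falls back to usage() */
}

int main(int argc, char **argv)
{
  if(argc < 2 || dispatch(argc, argv) < 0)
    fprintf(stderr, "usage: %s command arg1 [arg2 [...]]\n", *argv);

  return 0;
}

Adding a new command then becomes one table row plus one handler, instead of yet another else-if block.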

Speaking of GNU, I stumbled across an extremely useful GNU project called parallel. Essentially, it’s a multi-threaded version of xargs(1p). I’ve been including it in a lot of bash scripts I’ve written recently. It doesn’t seem to be part of the default install for any operating system distribution yet; maybe when it evolves into something even more awesome it’ll become mainstream. 🙂 Surprisingly, I was even able to compile it on SUA/Interix without any problems. The only complaint I have about it is that it’s written in Perl (not that I have anything against Perl); I simply feel that the parallelization could be that much faster in C. Maybe I’ll perlcc(1) it or something. Okay, then: without further ado, here’s the code for syscaller:

/*
 * syscaller v0.8a - breaking out of chroot jails "ex nihilo"
 *
 * by Derek Callaway <decal@security-objectives.com>
 *
 *
 * Executes system calls instead of relying on programs from the
 * GNU fileutils package. Can be useful for breaking out of
 * a chroot() jail.
 *
 * compile: gcc -O2 -Wall -ansi -pedantic -o syscaller syscaller.c
 * copy: cat syscaller | ssh user@host.dom 'cat > syscaller'
 *
 * If the cat binary isn't present in the jail, you'll have to be more
 * creative and use a shell builtin like echo (i.e. not the echo binary,
 * but bash's internal implementation of it.)
 *
 * Without any locally accessible file download programs such as
 * scp, tftp, netcat, sftp, wget, curl, rz/sz, kermit, lynx, etc.,
 * you'll have to create the binary on the target system manually,
 * i.e. by echo'ing hexadecimal bytecode. This is left as an exercise
 * to the reader.
 *
 */

/* Feature test macros must precede all includes so that <unistd.h>
 * declares syscall(2). */
#define _GNU_SOURCE 1
#define _USE_MISC 1

#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#include<ctype.h>
#include<unistd.h>
#include<sys/syscall.h>
#include<sys/types.h>
#include<pwd.h>
#include<grp.h>

/* Shell spawned after chdir() so the new working directory sticks */
#define SHELL_PATHNAME "/bin/sh"

static void usage(char **argv)
{
  printf("usage: %s syscall arg1 [arg2 [...]]\n", *argv);
  printf("help:  %s help\n", *argv);
  exit(EXIT_FAILURE);
}

static void help(char **argv)
{
  puts("syscaller v0.8a");
  puts("=-=-=-=-=-=-=-=");
  puts("");
  puts("SYSCALLER COMMANDS");
  puts("");
  puts("chmod mode pathname");
  puts("chdir pathname");
  puts("chown user group pathname");
  puts("mkdir pathname mode");
  puts("rmdir pathname");
  puts("touch pathname mode");
  puts("");

  puts("Note: modes are in octal format (symbolic modes are unsupported)");
  puts("Note: some commands mask octal mode bits with the current umask value");
  puts("Note: creat is an alias for touch");
  puts("");
  puts("USEFUL SHELL BUILTINS");
  puts("");
  puts("ls -a / (via brace/pathname expansion): echo /{.*,*}");

  exit(EXIT_SUCCESS);
}

int main(int argc, char *argv[])
{
  register char *p = 0;
  signed auto int r = 1;

  if(argc < 2)
    usage(argv);

  /* I prefer to avoid strcasecmp() since it's not really standard C. */
  for(p = argv[1];*p;++p)
    *p = tolower((unsigned char)*p);

  do
  {
    if(!strcmp(argv[1], "chmod") && argc >= 4)
    {
      /* parse the octal mode string (e.g. 0755) into an integer */
      const mode_t m = strtol(argv[2], NULL, 8);

      r = syscall(SYS_chmod, argv[3], m);

#ifdef DEBUG
  fprintf(stderr, "syscall(%d, %s, %d) => %d\n", SYS_chmod, argv[3], m, r);
#endif
    }
    else if((!strcmp(argv[1], "chdir") || !strcmp(argv[1], "cd")) && argc >= 3)
    {
      static char *const av[] = {SHELL_PATHNAME, NULL};
      auto signed int r2 = 0;

      r = syscall(SYS_chdir, argv[2]);

#ifdef DEBUG
  fprintf(stderr, "syscall(%d, %s) => %d\n", SYS_chdir, argv[2], r);
#endif

      /* This is required because the new current working directory isn't
       * bound to the original login shell. */
      printf("[%s] exec'ing new shell in directory: %s\n", *argv, argv[2]);
      r2 = system(av[0]);
      printf("[%s] leaving shell in child process\n", *argv);

      if(r2 < 0)
        r = r2;
    }
    else if(!strcmp(argv[1], "chown") && argc >= 5)
    {
      struct passwd *u = NULL;
      struct group *g = NULL;

      if(!(u = getpwnam(argv[2])))
        break;

#ifdef DEBUG
  fprintf(stderr, "getpwnam(%s) => %s:%s:%d:%d:%s:%s:%s\n", argv[2], u->pw_name, u->pw_passwd, u->pw_uid, u->pw_gid, u->pw_gecos, u->pw_dir, u->pw_shell);
#endif

      if(!(g = getgrnam(argv[3])))
        break;

#ifdef DEBUG
  fprintf(stderr, "getgrnam(%s) => %s:%s:%s:%s:", argv[3], g->gr_nam, g->gr_passwd, g->gr_gid);

  if((p = g->gr_mem))
    while(*p)
    {
      fputs(p, stderr);

      p++;

      if(*p)
        fputc(',', stderr);
    }
#endif

      r = syscall(SYS_chown, argv[4], u->pw_uid, g->gr_gid);

#ifdef DEBUG
  fprintf(stderr, "syscall(%d, %s, %d, %d) => %d\n", SYS_chown, argv[4], u->pw_uid, g->gr_gid, r);
#endif
    }
    else if((!strcmp(argv[1], "creat") || !strcmp(argv[1], "touch")) && argc >= 4 )
    {
      const mode_t m = strtol(argv[3], NULL, 8);

      r = syscall(SYS_creat, argv[2], m);

#ifdef DEBUG
  fprintf(stderr, "syscall(%d, %S, %d) => %d\n", SYS_creat, argv[2], m, r);
#endif
    }
    else if(!strcmp(argv[1], "mkdir") && argc >= 4)
    {
      const mode_t m = strtol(argv[3], NULL, 8);

      r = syscall(SYS_mkdir, argv[2], m);

#ifdef DEBUG
  fprintf(stderr, "syscall(%d, %S, %d) => %d\n", SYS_mkdir, argv[2], m, r);
#endif
    }
    else if(!strcmp(argv[1], "rmdir") && argc >= 3)
    {
      r = syscall(SYS_rmdir, argv[2]);

#ifdef DEBUG
  fprintf(stderr, "syscall(%d, %S) => %d\n", SYS_rmdir, argv[2], r);
#endif
    }
    else if(!strcmp(argv[1], "help"))
      help(argv);
    else
      usage(argv);

    break;
  } while(1);

  if(r)
    perror(argv[1]);

  exit(r);
}

Please note that some of the lines of code in this article are truncated by how WordPress’s CSS renders the text; however, you’ll still get every statement in its entirety when you copy it to your clipboard. The next specimen is similar to the netstat-emulating shell script from Part 1. It loops through the procfs PID directories and parses their contents to make it look like you’re running the real /bin/ps, even though you’re inside a misconfigured root directory that doesn’t have that binary. It also defines some useful aliases and a simple version of uptime(1).

#!/bin/bash
# ps.bash by Derek Callaway decal@security-objectives.com
# Sun Sep  5 15:37:05 EDT 2010 DC/SO

alias uname='cat /proc/version' hostname='cat /proc/sys/kernel/hostname'
alias domainname='cat /proc/sys/kernel/domainname' vim='vi'

function uptime() {
  # first three fields of /proc/loadavg are the 1-, 5-, and 15-minute averages
  declare loadavg=$(cut -d' ' -f1-3 /proc/loadavg)
  # /proc/uptime counts seconds since boot; convert to whole days
  declare -i uptime=$(($(cut -d. -f1 /proc/uptime) / 60 / 60 / 24))
  echo "up $uptime day(s), load average: $loadavg"
}

function ps() {
    local file base pid state ppid uid
    echo 'S USER     UID   PID  PPID CMD'
    for file in /proc/[0-9]*/status
        do base=${file%/status} pid=${base#/proc/}
        # State, PPid, and Uid appear in that order in /proc/PID/status;
        # the fifth field of the Uid line read here is the filesystem UID
        { read _ st _; read _ ppid; read _ _ _ _ uid; } < <(egrep '^(State|PPid|Uid):' "$file")
        IFS=':' read user _ < <(getent passwd $uid) || user=$uid
        printf "%1s %-6s %5d %5d %5d %s\n" $st $user $uid $pid $ppid "$(tr \ \ <"$base/cmdline")"
    done
}

#EOF#


Jenny’s Got a Perfect Pair of..

[figure: binomial coefficients]

..binomial coefficients?! That’s right. I’ve found the web site of a Mr. Bob Jenkins with an entire page dedicated to a pairwise covering array generator named jenny.c. I’m fairly sure that only the most hardcore of the software testing weenies have some notion of what those are, so for the sake of being succinct I’ll provide my own explanation here: a pairwise covering array generator is a program for silicon computing machines that deduces sequences of input value possibilities for the purposes of software testing. And yes, I did say silicon computers: since testing their software is really a question of the great Mr. Turing’s halting problem, the existence of a practical, affordable, and efficient nano/molecular computing device such as a DNA computer, Feynman machine, universal quantum computer, etc. would essentially predicate a swift solution to the problem of testing contemporary computer software in non-deterministic polynomial time. The only problem we would have then is how to test those fantastic, futuristic, (seemingly science-fictive) yet wondrous problem-solving inventions as they break through laborious barriers of algorithmic complexity that twentieth-century computer scientists could have only dreamed about: PCP, #P, PSPACE-complete, 2-EXPTIME and beyond.. The stuff that dreams are made of.
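
To make the pairwise idea concrete, here’s a toy example of my own (this says nothing about jenny’s internals): three boolean parameters would need 2^3 = 8 exhaustive test cases, but four well-chosen rows already cover every value pair of every parameter pair. The short C program below verifies that claim by brute force:

/* Brute-force verification that four rows form a pairwise covering
 * array for three boolean parameters (versus 2^3 = 8 exhaustive tests). */
#include<stdio.h>

int main(void)
{
  static const int rows[4][3] = {{0,0,0}, {0,1,1}, {1,0,1}, {1,1,0}};
  int a = 0, b = 0, va = 0, vb = 0, r = 0, missing = 0;

  for(a = 0;a < 3;++a)        /* first parameter of the pair */
    for(b = a + 1;b < 3;++b)  /* second parameter of the pair */
      for(va = 0;va < 2;++va)
        for(vb = 0;vb < 2;++vb)
        {
          int covered = 0;

          for(r = 0;r < 4;++r)
            if(rows[r][a] == va && rows[r][b] == vb)
              covered = 1;

          if(!covered)
          {
            ++missing;
            printf("uncovered pair: param%d=%d param%d=%d\n", a, va, b, vb);
          }
        }

  puts(missing ? "not a covering array" : "all pairs covered by only 4 tests");

  return missing;
}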

Now, let’s return to Earth and learn about a few things that make Jenny so special. Computer scientists learned early on in their studies of software testing that pairwise test cases, i.e. those exercising two input values in combination, were the most likely to uncover erroneous programming or “bugs.” Forget the luxury of automation for a minute: old-school programmers typed input pairs manually to test their own software. Code tested in that manner was most likely some sort of special-purpose console-mode utility. (Celsius to Fahrenheit, anyone?) As the computing power of the desktop PC increased according to Moore’s law, it became time-effective to write a simple program to generate these input pairs instead of toiling over them yourself; I suppose not testing at all was another option. Even today, some software is released to market after only very minor functional and/or quality assurance testing. Regression, stress, security, and other forms of testing cost money and reduce time to market, but the significant return on investment acts as a hedge against losses incurred down the road; even ephemeral losses justify the absolute necessity of these expenditures.

A Jenny built in modern times undoubtedly has the power to deductively prove that a software product of the eighties is composed of components (or units) that are fundamentally error-free. However, the paradox remains that improvements in automated software testers share a linear relationship with improvements in software in general. Thus, pairwise has become “n-way,” which describes the process of utilizing greater multiples of input values in order to cover acceptable numbers of test cases. The number of n-way parameter combinations to cover grows rapidly and can be calculated as a binomial coefficient (see the formula below).

C(n, r) = n! / (r! (n - r)!)

[figure: (n choose r) in factorial terms]
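
For the arithmetic-inclined, a small C helper makes those combination counts tangible; the multiplicative form below avoids the overflow you’d hit computing n! directly, and the parameter counts in main() are arbitrary examples of mine:

/* Number of r-way parameter combinations among n parameters: C(n, r).
 * Computed multiplicatively so intermediate values stay small. */
#include<stdio.h>

static unsigned long long choose(unsigned int n, unsigned int r)
{
  unsigned long long c = 1;
  unsigned int i = 0;

  if(r > n)
    return 0;

  if(r > n - r)
    r = n - r; /* C(n, r) == C(n, n - r) */

  for(i = 1;i <= r;++i)
    c = c * (n - r + i) / i; /* exact: i consecutive factors divide by i! */

  return c;
}

int main(void)
{
  /* pairwise versus 6-way combination counts for 20 parameters */
  printf("C(20, 2) = %llu\n", choose(20, 2)); /* 190 */
  printf("C(20, 6) = %llu\n", choose(20, 6)); /* 38760 */

  return 0;
}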

According to Paul Black, former SAMATE (Software Assurance Metrics and Tool Evaluation) project leader, researchers at NIST (notably Rick Kuhn and Dolores Wallace) have pegged 6-way as the magic number for optimal fault interaction coverage. This conclusion is based on hard evidence from studies of real-world software scenarios, including medical devices and the aerospace industry. However, it would not surprise me to see this approximation rise significantly in the coming decades, just as the paradoxical relationship between general-purpose software and automated software testing programs shifts in accordance with Moore’s law. If not by Moore’s, then by some other axiom of metric progression, such as Rogers’ bell curve of technological adoption.

I’ve also got a hunch that the tiny percentage of bugs in that “n is arbitrarily greater than 6” range contains some of the most critical, powerfully impacting software vulnerabilities known to man. They lie on an attack surface that’s almost non-existent; this makes them, by definition, obscure, non-obvious, shadowy, and hidden. Vulnerabilities in this category are the most important by their very nature. Therefore, detecting them will require people and tools that are masters of marksmanship and artistic in their innovation. Research in this area is just getting underway in earnest, especially within the realms of dynamic instrumentation or binary steering, active analysis, fault propagation, higher-order preconditions/dependencies, concurrency issues, race conditions, etc. I believe that combining the merits inherent in these various analysis techniques will lead to perfection in software testing.

For perfection in hashing, check out GNU’s gperf and read how Bob used a perfect hashing technique to augment Jenny’s n-tuples; then get ready for our Big ßeta release of the BlockWatch client software (just in time for the holiday season!)


The Monster Mash


The buzzword “mashup” refers to the tying together of information and functionality from multiple third-party sources. Mashup projects are sure to become a monster of a security problem because of their very nature. That’s what John Sluiter of Capgemini predicted at the RSA Europe conference last week during his “Trust in Mashups, the Complex Key” session. This is the abstract:

“Mashups represent a different business model for on-line business and require a specific approach to trust. This session sets out why Mashups are different, describes how trust should be incorporated into the Mashup-based service using Jericho Forum models and presents three first steps for incorporating trust appropriately into new Mashup services.”

Jericho Forum is the international IT security association that published the COA (Collaboration Oriented Architectures) framework. COA advocates the de-perimeterisation approach to security and stresses the importance of protecting the data itself instead of relying on firewalls.

So what happens when data from various third-party sources traverses inharmonious networks, applications, and privilege levels? Inevitably, misidentifications occur: erroneous and/or malicious bytes pass through the perimeters. Sensitive data might be accessed by an unprivileged user, or attack strings could be received. A good example of such a vulnerability was in the Microsoft Windows Vista Sidebar: a malicious HTML tag gets rendered by the RSS gadget, and since it runs in the local zone, arbitrary JavaScript executes with full privileges (MS07-048).

New generations of automated tools will need to be created in order to test applications developed using the mashup approach. Vulnerability scanners like Nessus, Nikto, and WebInspect are best used to discover known weaknesses in input validation and faulty configurations. What they’re not very good at is pointing out errors in custom business logic and more sophisticated attack vectors; that’s where the value of hiring a consultant to perform manual testing comes in.

Whether it’s intentional or not, how can insecure data be prevented from getting sent to or received from a third-party source? A whitelist can be applied to data on its way in or out; this helps, but it can be difficult when multiple systems and data encodings are involved. There is also the problem of determining the presence of sensitive information.
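
As a trivial sketch of the whitelisting principle (character-level only; real mashup data demands encoding-aware, context-sensitive validation, and the allowed set below is just an example of mine):

/* Accept a field only if every byte belongs to an allowed set. */
#include<stdio.h>
#include<string.h>

static int whitelisted(const char *s, const char *allowed)
{
  /* strspn() returns the length of the initial run of allowed bytes,
   * so the field is clean iff that run reaches the terminating NUL */
  return s[strspn(s, allowed)] == '\0';
}

int main(void)
{
  static const char allowed[] = "abcdefghijklmnopqrstuvwxyz"
                                "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                                "0123456789._-";

  printf("%d\n", whitelisted("weather-feed.example", allowed));      /* 1: passes   */
  printf("%d\n", whitelisted("<script>alert(1)</script>", allowed)); /* 0: rejected */

  return 0;
}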

Detecting transmissions of insecure data can be accomplished with binary analyzers. However, static analyzers are at a big disadvantage because they lack execution context. Dynamic analysis is capable of providing more information for tainting data that comes from third-party sources, and dynamic analyzers are more adept at recognizing the unexpected execution paths that tainted data may take after being received from the network or shared code.
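
To illustrate the tainting concept, here’s a deliberately naive sketch of my own; real dynamic analyzers track taint at the instruction or byte level rather than with a per-value flag:

/* Toy taint propagation: values from third-party sources carry a flag
 * that follows them through derived values and is checked at the sink. */
#include<stdio.h>
#include<string.h>

struct tvalue {
  char data[128];
  int tainted; /* nonzero if derived from a third-party source */
};

static struct tvalue from_third_party(const char *s)
{
  struct tvalue v;

  strncpy(v.data, s, sizeof v.data - 1);
  v.data[sizeof v.data - 1] = '\0';
  v.tainted = 1; /* everything off the wire starts out tainted */

  return v;
}

static struct tvalue concat(const struct tvalue *a, const struct tvalue *b)
{
  struct tvalue v;

  snprintf(v.data, sizeof v.data, "%s%s", a->data, b->data);
  v.tainted = a->tainted || b->tainted; /* taint survives derivation */

  return v;
}

static void render(const struct tvalue *v)
{
  if(v->tainted) /* the sink refuses tainted data in a privileged context */
    puts("refusing to render tainted markup");
  else
    puts(v->data);
}

int main(void)
{
  struct tvalue label = {"RSS: ", 0};
  struct tvalue feed = from_third_party("<b onload=evil()>headline</b>");
  struct tvalue out = concat(&label, &feed);

  render(&out); /* taint reached the sink, so rendering is blocked */

  return 0;
}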


Dimes

Microsoft Security Bulletin

MS08-010 – Critical CVE-2008-0076

None of the flaws I’ve found on Microsoft platforms have ever been public (that is, they have all been derived from internal projects), and it’s nice to see that in this round of fixes my bug scored a perfect 10.0 (a dime) on the bulletin. I actually did not test as many platforms and configurations as Microsoft did. For those of you who are unaware, bug regression and the overall triage process can become quite intensive. I knew that this vulnerability/flaw/bug/exploit/whatever had wide-reaching impact; that’s fairly easy to see from the fact that all architectures and versions as far back as possible are marked critical.

As with all doings in the security space, walking the line between disclosure and tight-lipped mum’s-the-word practice is not easy. So, what can be said here? Nothing? Something? I guess I have to write something; the marketoids wouldn’t be happy if I did not.

Before I digress into any technical discussion, I’ll take this opportunity to say something about the exploit sales “industry.” In this world, everything and everybody has its place; that said, any individual who thinks exploits are worth real money has another thing coming. Look at it this way: if you’re in the business of purchasing information (exploits), then by definition you are unaware of the value of that information, and so you are inherently in a position to devalue the time and emotional investment that went into deriving that work. This means you’re never going to get back enough cash to make up for your time, EVER!! Where I do see some value in exploit brokers is exclusively in their capacity to shoulder the burden of dealing with uninformed software vendors (the Microsoft/IBM/others process is fairly straightforward).

Now that that’s all done with, I don’t really want to talk about the exploit, at least until some poorly constructed version winds up in Metasploit. I will say, though, that the bulletin is correct in its description and synopsis.

The fact that there are no mitigating factors or workarounds possible gives me some incentive and reassurance that the tools and methodologies we’re building into our product offering work.

We’re ramping up development for a big push this quarter and will be uploading some more screenshots and related minutia in the coming months.

Our product, in brief, is an automated tool for native application flaw finding. It can assess, at runtime and in a dynamic way, the integrity of a given binary application. The process produces test cases and reproductions of whatever is necessary to trigger the flaw for a developer, which reduces regression rates from bug fixes: it’s much easier to fix something you can interact with than to act on a simple warning message.

We’re also working on a management interface (on top of the technical one) that will enable the lay person to identify architectural problems in arbitrary software. This is actually quite simple (with the support of our engine): in essence, a landscape or tomography view is laid out before the user, with associated peaks and valleys, and it changes over time (4D), representing the surface area of your application binary’s response to input. That is, a dynamic environment rooted in a system-of-systems methodology. What becomes apparent (if you are in a capacity to fix these issues yourself) is that as time goes on and you assign various resources (people) to flatten the peaks into valleys, the rate at which you push down the peaks (bugs) across the application is not constant; some issues are harder to fix than others and persist. This way, a self-relative understanding of where problem areas of code exist poignantly reveals them as architectural flaws, and appropriate steps can be taken to drive the business case that will support a rewrite.

Whew, that’s a mouthful. Needless to say, we’re working to create the best platform around for software sovereignty.

