Posts Tagged vulnerability

The Monster Mash


The buzzword “mashup” refers to tying together information and functionality from multiple third-party sources. Mashup projects are sure to become a monster of a security problem because of their very nature. This is what John Sluiter of Capgemini predicted at the RSA Europe conference last week during his “Trust in Mashups, the Complex Key” session. This is the abstract:

“Mashups represent a different business model for on-line business and require a specific approach to trust. This session sets out why Mashups are different, describes how trust should be incorporated into the Mashup-based service using Jericho Forum models and presents three first steps for incorporating trust appropriately into new Mashup services.”

Jericho Forum is the international IT security association that published the COA (Collaboration Oriented Architectures) framework. COA advocates the de-perimeterisation approach to security and stresses the importance of protecting the data itself instead of relying on firewalls.

So what happens when data from various third-party sources traverses inharmonious networks, applications, and privilege levels? Inevitably, misidentifications occur: erroneous and/or malicious bytes pass through the perimeters, sensitive data might be accessed by an unprivileged user, or attack strings could be received. A good example of such a vulnerability was in the Microsoft Windows Vista Sidebar: a malicious HTML tag gets rendered by the RSS gadget, and since it runs in the local zone, arbitrary JavaScript is executed with full privileges (MS07-048).
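To make the pattern concrete, here is a minimal sketch in Python of the difference between dropping untrusted feed text straight into markup and escaping it first. The render functions and the gadget framing are purely illustrative, not the actual Sidebar code:

```python
import html

# A feed item as an attacker might craft it: the title carries markup that
# executes if the consumer renders it as trusted HTML in a privileged zone.
malicious_title = "<script>stealLocalFiles()</script>"  # hypothetical payload

def render_unsafely(title: str) -> str:
    # Vulnerable pattern: untrusted feed text dropped straight into markup.
    return f"<div class='headline'>{title}</div>"

def render_safely(title: str) -> str:
    # Escaping (or whitelisting tags) keeps the markup inert.
    return f"<div class='headline'>{html.escape(title)}</div>"

print(render_unsafely(malicious_title))  # the script tag survives intact
print(render_safely(malicious_title))    # &lt;script&gt;... is just text
```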

New generations of automated tools will need to be created to test applications developed with the mashup approach. Vulnerability scanners like Nessus, Nikto, and WebInspect are best used to discover known weaknesses in input validation and faulty configurations. What they’re not very good at is pointing out errors in custom business logic and more sophisticated attack vectors; that’s where the value of hiring a consultant to perform manual testing comes in.

Whether it’s intentional or not, how can insecure data be prevented from being sent to or received from a third-party source? A whitelist can be applied to data on its way in or out; this helps, but it can be difficult when multiple systems and data encodings are involved. There is also the problem of determining whether sensitive information is present at all.
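As a rough sketch of the whitelist idea, suppose the mashup only ever expects a partner to send an alphanumeric username and an ISO-formatted date; the field names and patterns below are invented for illustration:

```python
import re

# Illustrative whitelist: each field a partner is allowed to send, with the
# exact shape we will accept. Anything else is rejected outright.
FIELD_WHITELIST = {
    "username": re.compile(r"^[A-Za-z0-9_]{1,32}$"),
    "signup_date": re.compile(r"^\d{4}-\d{2}-\d{2}$"),
}

def validate_inbound(record: dict) -> dict:
    """Return only the fields that match the whitelist; raise on anything unexpected."""
    clean = {}
    for key, value in record.items():
        pattern = FIELD_WHITELIST.get(key)
        if pattern is None:
            raise ValueError(f"unexpected field from partner: {key!r}")
        if not isinstance(value, str) or not pattern.match(value):
            raise ValueError(f"field {key!r} failed whitelist check")
        clean[key] = value
    return clean

print(validate_inbound({"username": "alice_01", "signup_date": "2008-11-03"}))
```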

Detecting transmissions of insecure data can be accomplished with binary analyzers. However, static analyzers are at a big disadvantage because they lack execution context. Dynamic analysis can provide more information for tainting data that comes from third-party sources, and it is more adept at recognizing unexpected execution paths that tainted data may take after being received from the network or shared code.
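A toy illustration of the tainting idea, in Python: values that arrive from a third party are wrapped so the mark survives string operations, and a sensitive sink refuses anything still carrying it. The Tainted class and the sink are invented for this sketch; a real dynamic analyzer tracks taint at the instruction or bytecode level rather than through a wrapper type:

```python
class Tainted(str):
    """A string that remembers it came from an untrusted source."""
    def __add__(self, other):
        # Concatenation involving tainted data keeps the result tainted.
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def from_third_party(raw: str) -> Tainted:
    return Tainted(raw)

def execute_query(sql: str) -> None:
    # A sensitive sink: refuse anything whose taint was never cleared.
    if isinstance(sql, Tainted):
        raise RuntimeError("tainted data reached a SQL sink")
    print("executing:", sql)

name = from_third_party("bob'; DROP TABLE users;--")
query = "SELECT * FROM users WHERE name = '" + name + "'"
try:
    execute_query(query)
except RuntimeError as err:
    print("blocked:", err)
```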



Breaking Vegas Online

We recently published an advisory for PartyPoker, an online gambling site (SECOBJADV-2008-03). It was for a weakness in the client update process, a class of vulnerability that can affect many kinds of software. The past few years have seen some vulnerabilities that are specific to online gaming software. Statically seeded random number generators that allow prediction of forthcoming cards and of reel values on upcoming slot spins were researched in the early days of online gaming. Let’s take a look at some additional threats.
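As a quick refresher on that first class of flaw, here is a minimal Python sketch of why a statically seeded shuffle is exploitable: anyone who knows, or can guess, the seed reproduces the deck exactly. The deck representation is purely illustrative:

```python
import random

def shuffled_deck(seed: int) -> list:
    # A deck of 52 card indices, shuffled by a generator with a fixed seed.
    rng = random.Random(seed)
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

# The "server" uses a static seed, so the shuffle is fully reproducible.
server_deck = shuffled_deck(seed=1234)
attacker_deck = shuffled_deck(seed=1234)

print(server_deck[:5])
print(server_deck == attacker_deck)  # True: every forthcoming card is known
```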

Usually, forms of online cheating are pretty primitive. Justin Bonomo was exposed for using multiple accounts in a single tournament on PokerStars, and of course collusion between multiple players occurs as well. Absolute Poker’s reputation took a pretty big hit when players discovered that a site owner used a backdoor to view cards in play. Many private and public bots are also in use. However, a good human poker player will beat a bot, especially in no-limit, which is less mathematical than other variations of the game; bots are likely to be most useful in low-stakes fixed-limit games.

Earlier this year, a logic flaw was exploited on BetFair (oh, the pun!) because of a missing conditional check to test for chip stack equality when determining finishing positions. As a result, if multiple players with the same amount of chips were eliminated at the same time, they would all receive the payout for the highest position, instead of decrementing positions. For example, if there were three players that all had chip stacks of the same size and everyone went all-in, the winner of the hand would finish in first place and the other two players would both receive second place money. Interesting!
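A rough Python reconstruction of that kind of bug follows. The prize table and function names are invented, and the split-the-pool remedy is just one common way tournaments handle simultaneous eliminations; it is not necessarily how BetFair fixed it:

```python
PAYOUTS = {1: 1000, 2: 600, 3: 400}  # illustrative prize table

def flawed_payouts(eliminated_players):
    # Buggy logic: everyone eliminated on the same hand is paid for the best
    # remaining position, with no check for how many players tied.
    best_position = 2  # the hand's winner already holds position 1
    return {player: PAYOUTS[best_position] for player in eliminated_players}

def fixed_payouts(eliminated_players):
    # Tied players split the money for the positions they jointly occupy.
    positions = range(2, 2 + len(eliminated_players))
    pool = sum(PAYOUTS[p] for p in positions)
    share = pool / len(eliminated_players)
    return {player: share for player in eliminated_players}

tied = ["player_a", "player_b"]  # both all-in with equal chip stacks
print(flawed_payouts(tied))      # both paid full second-place money
print(fixed_payouts(tied))       # second- and third-place money split
```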


Why Dynamic Program Analysis is Superior.. Part Two: False Positives

A few years ago I was making a living as a dedicated employee of a security consultancy whose name I won’t mention. For those of you who know me, I’ll give you three guesses and the last two don’t count. In any case, one day I was working at an (unnamed) client site and I noticed one of my fellow consultants running RATS, an antiquated static analysis tool for auditing source code. RATS stands for Rough Auditing Tool for Security, and “rough” is a suitable description of it. Similar to another tool named flawfinder, RATS greps through source code for calls to functions that are considered unsafe, then prints the source file and line number where each suspect call occurs. The person using RATS still has to review the statements themselves to confirm that each alert isn’t a false positive.
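The core of such a tool fits in a few lines of Python. The function list and output format below are illustrative rather than what RATS actually ships with, but the approach of pattern-matching call names and reporting file and line is the same:

```python
import re
import sys

# Calls commonly flagged as risky in C code; merely seeing one proves
# nothing, which is exactly why every hit still needs a human review.
UNSAFE_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def scan(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as source:
        for number, line in enumerate(source, start=1):
            if UNSAFE_CALLS.search(line):
                print(f"{path}:{number}: {line.strip()}")

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        scan(filename)
```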

So now I wonder: why even use such tools at all? Why not do a manual review of the code yourself, since you’re going to have to look at the code anyway? I have no doubt that the tool can grep faster than the human eye, but the trained eye can pick up things that the source code scanner can’t. Furthermore, a dynamic analyzer can detect things that both the static analyzer and the human won’t see. I don’t want anyone to think that I’m trying to malign my old co-worker’s noble effort to get his job done; that’s not my intention at all. I am simply looking towards the future and wondering how these tasks will be accomplished five or even ten years from now.

Software assurance suites that take advantage of the dynamic code analysis paradigm can achieve a zero false-positive rate. Static analyzers tend to ask the question: “What appears to be wrong with this code that I’m analyzing?” whereas dynamic analyzers phrase the question as: “Which input sets will yield unexpected and/or unintended program output?”

As a result of executing software directly (either natively or through emulation), the dynamic analysis approach to assuring software quality is much better poised to discover bugs without presenting a conundrum of false positives. A static analysis tool may appear to detect what is believed to be a critical security vulnerability; however, since the tool is not actually executing the program in question, what appears to be an insecurity can turn out to be a safe operation. Conversely, what appears to be a safe operation can in actuality be another vulnerability.
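A toy illustration of why findings from the dynamic side carry no false positives: the harness below only reports an input after the target has actually misbehaved on it. The parse_age target and the candidate inputs are made up for this sketch:

```python
import random

def parse_age(text: str) -> int:
    """Deliberately fragile example target: crashes on non-numeric input."""
    return int(text)  # raises ValueError on input such as "abc" or ""

def dynamic_harness(trials: int = 1000) -> list:
    failures = []
    for _ in range(trials):
        # Exercise the target with a mix of well-formed and hostile inputs.
        candidate = random.choice(["42", "007", "", "abc", "-1", "9" * 40])
        try:
            parse_age(candidate)
        except Exception as err:
            # Only inputs that really broke the target are reported.
            failures.append((candidate, repr(err)))
    return failures

unique = {candidate for candidate, _ in dynamic_harness()}
print("inputs that actually caused a failure:", unique)
```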

