A few years ago I was making a living as a dedicated employee of a security consultancy whose name I won’t mention. For those of you who know me, I’ll give you three guesses and the last two don’t count. In any case, one day I was working at a (unnamed) client site and I noticed one of my fellow consultants running RATS, an antiquated static analysis tool for auditing source code. RATS stands for Rough Auditing Tool for Security, and “rough” is a suitable description of it. Similar to another tool named flawfinder, RATS greps through source code for calls to functions that are considered unsafe, printing the source file and line number where each offending call occurs. The person using RATS still has to review the flagged statements themselves to confirm that each alert isn’t a false positive.
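To make the grep-style approach concrete, here is a minimal sketch of how a tool in the RATS/flawfinder family operates. Everything here is hypothetical: the denylist is a tiny stand-in for the much larger rule databases real tools ship with, and the `scan` function and its output format are my own invention for illustration.

```python
import re

# Hypothetical denylist of C functions commonly flagged as risky.
# Real tools like RATS and flawfinder ship far larger rule sets.
UNSAFE_CALLS = {"strcpy", "strcat", "sprintf", "gets"}

def scan(filename, source):
    """Grep-style pass: report every call to a denylisted function."""
    pattern = re.compile(r"\b(" + "|".join(sorted(UNSAFE_CALLS)) + r")\s*\(")
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in pattern.finditer(line):
            hits.append((filename, lineno, match.group(1)))
    return hits

# A toy C file under audit, held as a string for the example:
c_source = """\
#include <string.h>
void copy(char *dst, const char *src) {
    strcpy(dst, src);  /* no bounds check */
}
"""

for filename, lineno, func in scan("copy.c", c_source):
    print(f"{filename}:{lineno}: call to {func}() may be unsafe")
```

Note that the scanner never interprets the C code; it only matches text, which is exactly why a human still has to read each flagged line.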
So now I wonder: why use such tools at all? Why not do a manual review of the code yourself, since you’re going to have to look at the code anyway? I have no doubt that the tool can grep faster than the human eye, but the trained eye can pick up things that the source code scanner can’t. Furthermore, a dynamic analyzer can detect things that both the static analyzer and the human won’t see. I don’t want anyone to think that I’m trying to malign my old co-worker’s noble effort to get his job done; that’s not my intention at all. I am simply looking towards the future and wondering how these tasks will be accomplished five or even ten years from now.
Software assurance suites that take advantage of the dynamic code analysis paradigm can, in principle, achieve a zero false positive rate. Static analyzers tend to ask the question: “What appears to be wrong with this code that I’m analyzing?” whereas dynamic analyzers phrase the question as: “Which input sets will yield unexpected and/or unintended program output?”
Because it executes the software directly (either natively or through emulation), the dynamic analysis approach to assuring software quality is much better positioned to discover bugs without presenting a conundrum of false positives. A static analysis tool may appear to detect what it believes is a critical security vulnerability. However, since the tool is not actually executing the program in question, what appears to be an insecurity can turn out to be a perfectly safe operation. Conversely, what appears to be a safe operation can in fact be another vulnerability.
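The false-positive problem is easy to demonstrate. In the hypothetical sketch below, the C code being audited is an invented example where `strcpy` is provably safe at runtime, since the source string is a compile-time constant shorter than the destination buffer, yet a text-level scanner flags it anyway, because it reasons about source text rather than execution.

```python
import re

# A strcpy() call that is safe at runtime: "hello" (6 bytes with the
# terminator) always fits in a 16-byte buffer. A grep-based scanner
# cannot see that bound, so it alerts regardless.
safe_c_source = """\
#include <string.h>
void greet(void) {
    char buf[16];
    strcpy(buf, "hello");  /* bounded in practice */
}
"""

pattern = re.compile(r"\bstrcpy\s*\(")
false_positives = [
    lineno
    for lineno, line in enumerate(safe_c_source.splitlines(), start=1)
    if pattern.search(line)
]
print(false_positives)  # the safe strcpy line is still reported
```

A dynamic analyzer, by contrast, would only complain if some actual input drove the program into misbehavior, which is exactly why it sidesteps this class of spurious alerts.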