A few weeks ago, while on vacation in the Outer Banks of North Carolina, I was browsing through the media archives for DEF CON 15, since I missed the conference this year (I did make it out to Las Vegas, but not until September for the SANS Institute’s Network Security event). While paging through the PDF-formatted slides for the presentations I missed, one in particular immediately caught my eye: it was entitled “How I Learned to Stop Fuzzing and Find More Bugs.” Essentially, the presenter (Jacob West of Fortify Software) was playing on the fact that most (if not all) publicly available fuzzing utilities achieve severely inadequate path coverage. Personally, I agree with that assertion. I also believe, however, that Jacob’s claims were somewhat slanted, given his employment at a software company that offers a static analysis product. Although I have not yet seen a practical solution, I do believe it is possible to attain an optimal level of path coverage using dynamic analysis techniques.
The qualm I have with static analysis is its very nature: by definition, it doesn’t execute the program being scrutinized. Okay, fine. Static analyzers have their place. Maybe the tester is caught in a situation where he or she doesn’t have permission to execute the program. Still, I feel such a predicament is rare, and if the tester is capable of executing the program, why not do so? Why not explore all avenues of possibility? In addition to code execution, dynamic analysis can reap all the benefits of static analysis as well. Static analysis is restricted to read-only access; this is what makes it an inferior approach to software assurance. Dynamic analyzers get the best of both worlds. They have their cake and eat it, too!
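To make the path-coverage point concrete, here is a minimal sketch of what coverage feedback buys you during dynamic analysis. The target function (`buggy_parse`) and its planted bug are hypothetical, invented for illustration; the fuzzer uses Python’s `sys.settrace` to record which lines actually execute, and only keeps mutated inputs that reach new code. A blind fuzzer has to guess a three-character magic value all at once; the coverage-guided loop climbs toward it one branch at a time, which is exactly the kind of path exploration you can only do by running the program.

```python
import random
import sys

def buggy_parse(data: str) -> None:
    # Hypothetical target: crashes only on a deeply nested condition
    # that blind (coverage-unaware) fuzzing is unlikely to reach.
    if len(data) > 2 and data[0] == "F":
        if data[1] == "U":
            if data[2] == "Z":
                raise ValueError("bug reached")

def run_traced(fn, arg):
    """Execute fn(arg) under a line tracer; return (lines executed, crashed?)."""
    lines = set()
    def tracer(frame, event, _):
        if event == "line":
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        fn(arg)
        crashed = False
    except ValueError:
        crashed = True
    finally:
        sys.settrace(None)
    return lines, crashed

def fuzz(seed="AAA", rounds=5000):
    random.seed(0)
    corpus = [seed]        # inputs worth mutating further
    seen = set()           # union of all lines covered so far
    for _ in range(rounds):
        parent = random.choice(corpus)
        i = random.randrange(len(parent))
        child = parent[:i] + random.choice("FUZA") + parent[i + 1:]
        covered, crashed = run_traced(buggy_parse, child)
        if crashed:
            return child               # found the crashing input
        if not covered <= seen:        # new line reached: keep this input
            seen |= covered
            corpus.append(child)
    return None

print(fuzz())
```

The key line is the `covered <= seen` check: an input that exercises a previously unseen branch is promoted into the corpus, so later mutations start from partial progress (`"FAA"`, then `"FUA"`) instead of from scratch. That feedback loop is unavailable to any tool that never runs the code.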
What’s in a nutshell? The kernel, of course... but you won’t get inside the kernel if you just stare at the shell.
Not just a cliché: an analogy that sums up the dynamic-versus-static debate.