Last month, I attended the U.S. Department of Homeland Security (DHS)/Department of Defense (DoD) Software Assurance (SwA) Forum at the Hilton in McLean, Virginia. One of the presenters, Rick Kuhn of NIST, outlined a technique for maximizing path coverage with dynamic analysis, dubbed “6-way interactions.” Naturally, I was skeptical, because “fuzzing,” as dynamic analysis is sometimes called in the security industry, isn’t well known for its path coverage. After dinner that night I printed and read Rick’s paper, “Pseudo-Exhaustive Testing for Software.” It is “pseudo-exhaustive” because the combinatorics are used to shrink the input space enough to make dynamic analysis a feasible approach to the software assurance problem: the input set required to exhaustively test a piece of modern software dynamically would be so large that the testing process would never complete. Keep in mind that there can be a many-to-one ratio between inputs and execution paths. Rick’s paper extends earlier research on pairwise (or “2-way”) input set generation algorithms; it shows that covering all 6-way interactions among inputs increases path coverage while still keeping the computational load on the dynamic analyzer manageable.
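To make the combinatorial idea concrete, here is a minimal sketch of greedy pairwise (“2-way”) test-set generation, the precursor technique that Rick’s paper extends to 6-way. This is my own illustration, not the algorithm from the paper: the parameter names and values are made up, and the greedy heuristic is just one simple way to build such a set.

```python
from itertools import combinations, product

def pairwise_tests(params):
    """Greedily pick test cases until every pair of values across every
    pair of parameters appears together in at least one test case."""
    names = list(params)
    # Every ((param, value), (param, value)) pair that must be covered.
    uncovered = {
        ((i, vi), (j, vj))
        for i, j in combinations(range(len(names)), 2)
        for vi in params[names[i]]
        for vj in params[names[j]]
    }
    tests = []
    while uncovered:
        # Choose the candidate covering the most still-uncovered pairs.
        best, best_covered = None, set()
        for cand in product(*(params[n] for n in names)):
            covered = {
                ((i, cand[i]), (j, cand[j]))
                for i, j in combinations(range(len(names)), 2)
            } & uncovered
            if len(covered) > len(best_covered):
                best, best_covered = cand, covered
        tests.append(dict(zip(names, best)))
        uncovered -= best_covered
    return tests

# Hypothetical configuration space for illustration.
params = {"os": ["linux", "windows"],
          "browser": ["ff", "ie"],
          "proto": ["http", "https"]}
tests = pairwise_tests(params)
```

Exhaustive testing of this toy space needs 2 × 2 × 2 = 8 test cases; the pairwise set is smaller while still exercising every value pair. The savings grow dramatically with more parameters, which is what makes the pseudo-exhaustive approach tractable.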
It seems that the latest research on dynamic analysis is putting it ahead of the static paradigm in terms of “coverage.” Static analysis lacks execution context: it covers code but not runtime execution paths, and I feel that path coverage is more assuring than code coverage. Furthermore, static analysis slows the SDL (software/systems/security development lifecycle).
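A contrived example (my own, not from Rick’s paper) shows why path coverage is the stronger metric. Two test calls below execute every line of the function, yet one of the four execution paths is never run:

```python
def withdraw(balance, amount, overdraft_ok):
    """Toy function with two branch points, hence four execution paths."""
    if overdraft_ok:
        fee = 10   # executed by the first test call
    else:
        fee = 0    # executed by the second test call
    if balance - amount < 0:
        amount += fee   # executed by the first test call
    return balance - amount

# These two calls achieve 100% line coverage...
withdraw(100, 150, True)   # path: overdraft_ok branch + overdrawn branch
withdraw(100, 50, False)   # path: no-overdraft branch, not overdrawn
# ...but the path where overdraft_ok is False AND the account is
# overdrawn is never exercised, so a bug there would go undetected.
```

A line-coverage report would score these tests at 100% and give a false sense of assurance; a path-coverage metric would flag the gap.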
Static analyzers are often run by trusted insiders, since companies are so paranoid about who gets to see their precious source code. Why have an outside group perform a code review when you can have a static analyzer do the legwork on the inside? This may help managers and executives sleep at night with thoughts of source code safety, but it also helps perpetuate bad practices: insiders have a very narrow view of the code they write themselves. I think Dave G put it best in his Merits of Threat Modeling post on Matasano’s Chargen: “My code is perfectly secure until someone reports a vulnerability in it, at which point I will fix it and my code will be secure again.” My other favorite is “We have a policy that we won’t get hacked.” Coders and testers must have conflicting interests; they’re two different mindsets, and hackers don’t follow policies.
Productivity is greatly reduced when developers test their own code: they could be writing new code instead of improperly testing code they’ve already written. Worse, over time there will be a tendency for non-obvious back doors to creep into the product’s source code, because developers learn to write code in whatever way keeps the static analysis tool from producing warnings.