Archive for Software Assurance

Tickle! See? Gee, I …

A montage of TCL and Tcl-related logos

Tcl/Tk

Ah, TCL, the Tool Command Language. Based on the research conducted by myself and my colleagues here at Security Objectives (most notably Shane Macaulay), we have concluded that Tcl has a multitude of security issues, especially when it is used in a network environment; and these days, network usage is almost unavoidable. In essence, we are urging extreme caution in Tcl-based web development, whether Tcl is being used directly or indirectly. To generalize, we also advise against using Tcl for any network application or protocol (not just HTTP). Security Objectives has published an in-depth analysis of practical Tcl vulnerabilities. The whitepaper, entitled “Tickling CGI Problems”, outlines the theoretical backbone of the phenomena in its first half and presents cases of real-world exploitation in the second. That background theory, along with some general programming and Hypertext Transfer Protocol knowledge, is recommended in order to gain a firm understanding of the exploits themselves.

This is not to say that Tcl should never be used; as a disclaimer, we are not advocating any programming language over another. Our position is that the traditional approach to web security with Tcl has much room for improvement. Like any other programming language, it works nicely in certain areas such as academic research, scientific computing, extensions, and software testing. With that being said, one project that comes to mind is regfuzz, a quite useful regular expression fuzzer written in Tcl. The distinction here is that regfuzz is not intended to be exposed to a public (or even a private) network. Surely, Safe-Tcl could successfully serve network clients in a hardened production environment, given that the assessed risks were rated low enough to be acceptable. The problem is that this is not how Tcl is deployed in practice, as evidenced by an overwhelming majority of cases.

The vulnerabilities exposed by the whitepaper affect TclHttpd, Lyris List Manager, cgi.tcl (which also uses Expect), as well as the Tcl language itself and interpreters thereof. Some of the attack methodologies and vulnerabilities identified are new to the public. Others are similar to well-known attacks or are simply subversions of previous security patches, e.g. CVE-2005-4147. As time unfolds, there will surely be a surge in publicized Tcl weaknesses due to the research elaborated on within the whitepaper. If you’re interested in discovering vulnerabilities in Tcl software yourself, there’s a comprehensive list of Tcl-related resources at http://www.tcl.tk/resource_dump.html. There is also a USENET newsgroup dedicated to the language, naturally called comp.lang.tcl.

For those of you attending CanSecWest 2011 in Vancouver, we are sponsoring the event. Professionals from Security Objectives will be in attendance to answer your queries regarding Tcl/Tk security or other areas of specialized research (information assurance, software assurance, cloud security, etc.). Of course, our professionals will also be available to field questions regarding Security Objectives’ product and service offerings. In addition, CanSecWest 2011 attendees receive special treatment when purchasing licenses for BlockWatch, the complete solution to total cloud security.

Comments (2)

Jenny’s Got a Perfect Pair of..

binomial coefficients

..binomial coefficients?! That’s right. I’ve found the web site of a Mr. Bob Jenkins with an entire page dedicated to a pairwise covering array generator named jenny.c. I’m fairly sure that only the most hardcore of the software testing weenies have some notion of what those are, so for the sake of being succinct I’ll provide my own explanation here: a pairwise covering array generator is a program for silicon computing machines that deduces sequences of input value possibilities for the purposes of software testing. And yes, I did say silicon computers. Since testing their software is really a question of the great Mr. Turing’s halting problem, the existence of a practical, affordable, and efficient nano/molecular computing device such as a DNA computer, Feynman machine, or universal quantum computer would essentially precipitate a swift solution to the problem of testing contemporary computer software in non-deterministic polynomial time. The only problem we would have then is how to test those fantastic, futuristic, (seemingly science-fictional) yet wondrous problem-solving inventions as they break through laborious barriers of algorithmic complexities that twentieth-century computer scientists could have only dreamed about: PCP, #P, PSPACE-complete, 2-EXPTIME and beyond.. The stuff that dreams are made of.

Now, let’s return to Earth and learn about a few things that make Jenny so special. Computer scientists learned early on in their studies of software testing that pairwise test cases, those combining two input values, were the most likely to uncover erroneous programming or “bugs.” Forget the luxury of automation for a minute: old-school programmers typed input pairs manually to test their own software. Code tested in that manner was most likely some sort of special-purpose console-mode utility. (Celsius to Fahrenheit, anyone?) As the computing power of the desktop PC increased according to Moore’s law, it became time-effective to write a simple program to generate these input pairs instead of toiling over them yourself; I suppose not testing at all was another option. Even today, some software is released to market after only very minor functional and/or quality assurance testing. Regression, stress, security, and other forms of testing cost money and reduce time to market, but in reality the significant return on investment acts as a hedge against any losses incurred; even ephemeral losses justify the absolute necessity of these expenditures.
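
To make the pairwise idea concrete, here is a minimal Python sketch that simply enumerates the value pairs a 2-way test suite must cover; the parameter names and values are hypothetical, and this is not jenny.c’s greedy covering-array algorithm, which packs many of these pairs into each generated test case:

    # Illustrative only: enumerate every (parameter, value) pair that 2-way
    # coverage demands. jenny.c's job is to cover all of these pairs with as
    # few test cases as possible; this sketch just lists what must be covered.
    from itertools import combinations, product

    parameters = {                      # hypothetical inputs to a program under test
        "os": ["windows", "linux", "macos"],
        "browser": ["firefox", "chrome"],
        "protocol": ["http", "https"],
    }

    def required_pairs(params):
        """Yield each pair of (parameter, value) settings needed for 2-way coverage."""
        for (name_a, vals_a), (name_b, vals_b) in combinations(params.items(), 2):
            for va, vb in product(vals_a, vals_b):
                yield (name_a, va), (name_b, vb)

    pairs = list(required_pairs(parameters))
    print(len(pairs), "pairs must be covered")    # 3*2 + 3*2 + 2*2 = 16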

A Jenny built in modern times undoubtedly has the power to deductively prove that a software product of the eighties is composed of components (or units) that are fundamentally error-free. However, the paradox remains that improvements in automated software testers share a linear relationship with improvements in software in general. Thus, pairwise has become “n-way,” which describes the process of utilizing greater multiples of input values in order to cover acceptable numbers of test cases. The number of input combinations that must be covered in this fashion grows exponentially and can be calculated as a binomial coefficient (see the formula below).

\binom{n}{r} = \frac{n!}{r!\,(n-r)!}

(n choose r) in factorial terms
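
For a feel of the numbers, here is a quick back-of-the-envelope calculation (the count of 20 input parameters is a made-up assumption) showing how fast the coverage requirement grows as pairwise becomes n-way:

    # Count how many r-way parameter combinations exist among n parameters,
    # i.e. the binomial coefficient C(n, r) = n! / (r! * (n - r)!).
    from math import comb

    n_parameters = 20                  # hypothetical number of input parameters
    for r in (2, 3, 6):                # pairwise, 3-way, and the 6-way figure cited below
        print(f"{r}-way: {comb(n_parameters, r)} parameter combinations")
    # 2-way: 190, 3-way: 1140, 6-way: 38760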

According to Paul Black, former SAMATE (Software Assurance Metrics and Tool Evaluation) project leader, researchers at NIST have pegged 6-way as the magic number for optimal fault interaction coverage (notably Rick Kuhn and Dolores Wallace.) This conclusion is based on hard evidence from studies on real-world software scenarios including medical devices and the aerospace industry. However, it would not surprise me to see this approximation rise significantly in the coming decades, just as the paradoxical relationship between general-purpose software and automated software testing programs shifts itself in accordance with Moore’s law. If not by Moore, then by some other axiom of metric progression such as Rogers’ bell curve of technological adoption.

I’ve also got a hunch that the tiny percentage of bugs in that “n is arbitrarily greater than 6” range are some of the most critical, powerfully impacting software vulnerabilities known to man. They lie on an attack surface that’s almost non-existent; this makes them, by definition, obscure, non-obvious, shadowy, and hidden. Vulnerabilities in this category are the most important by their very nature. Therefore, detecting vulnerabilities of this type will involve people and tools that are masters of marksmanship and artistic in their innovation. Research in this area is off to a steady start, especially within the realms of dynamic instrumentation or binary steering, active analysis, fault propagation, higher-order preconditions/dependencies, concurrency issues, race conditions, etc. I believe that combining the merits inherent in various analysis techniques will lead to perfection in software testing.

For perfection in hashing, check out GNU’s gperf, read how Bob used a perfect hashing technique to augment Jenny’s n-tuples; then get ready for our Big ßeta release of the BlockWatch client software (just in time for the holiday season!)

Leave a Comment

The Philosophical Future of Digital Immunization

Usually it’s difficult for me to make a correlation between the two primary subjects that I studied in college–computer science and philosophy. The first few things that pop into mind when attempting to relate the two are typically artificial intelligence and ethics. Lately, intuition has caused me to ponder over a direct link between modern philosophy and effective digital security.

More precisely, I’ve been applying the Hegelian dialectic to the contemporary signature-based approach to anti-virus while pontificating with my peers on the immediate results; the extended repercussions of this application are even more fascinating. Some of my thoughts on this subject were inspired by assertions of Andrew Jaquith and Dr. Daniel Geer at the Source Boston 2008 security conference. Dr. Geer painted a beautiful analogy between the direction of digital security systems and the natural evolution of biological immune systems during his keynote speech. Mr. Jaquith described the current functional downfalls of major anti-virus offerings. These two notions became the catalysts for the theoretical reasoning and practical applications I’m about to describe.

Hegel’s dialectic is an explicit formulation of a pattern that tends to occur in progressive ideas. Now bear with me here. In essence, it states that for a given action, an inverse reaction will occur, and subsequently the favorable traits of both the action and the reaction will be combined; then the process starts over. A shorter way to put it is: thesis, antithesis, synthesis. Note that an antithesis can follow a synthesis, and this is what creates the loop. This dialectic is a logical characterization of why great artists are eventually considered revolutionary despite initial ridicule for rebelling against the norm. When this dialectic is applied to anti-virus, we have: blacklist, whitelist, hybrid mixed-mode. Anti-virus signature databases are a form of blacklisting. Projects such as AFOSI md5deep, NIST NSRL, and Security Objectives Pass The Hash are all whitelisting technologies.

A successful hybrid application of these remains to be seen since the antithesis (whitelisting) is still a relatively new security technology that isn’t utilized as often as it should be. A black/white-list combo that utilizes chunking for both is the next logical step for future security software. When I say hybrid mixed-mode, I don’t mean running a whitelisting anti-malware tool and traditional anti-virus in tandem although that is an attractive option. A true synthesis would involve an entirely new solution that inherited the best of each parent approach, similar to a mule’s strength and size. The drawbacks of blacklists and whitelists are insecurity and inconvenience, respectively. These and other disadvantages are destined for mitigation with a hybridizing synthesis.
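
To make the chunking idea concrete, here is a minimal, hypothetical sketch (not the actual design of BlockWatch or Pass The Hash): hash a file in fixed-size chunks and check each chunk digest against a known-good set, so a single modified region stands out without condemning the whole file.

    # Hypothetical illustration of chunk-level whitelisting. The chunk size and
    # file names are assumptions for the example; a real system would build its
    # known-good set from a trusted corpus (NSRL-style data, golden images, etc.).
    import hashlib

    CHUNK_SIZE = 4096

    def chunk_digests(path, chunk_size=CHUNK_SIZE):
        """Yield (offset, sha256 hex digest) for each fixed-size chunk of a file."""
        with open(path, "rb") as f:
            offset = 0
            while chunk := f.read(chunk_size):
                yield offset, hashlib.sha256(chunk).hexdigest()
                offset += len(chunk)

    def unknown_chunks(path, whitelist):
        """Return offsets of chunks whose digests are absent from the known-good set."""
        return [off for off, digest in chunk_digests(path) if digest not in whitelist]

    known_good = {digest for _, digest in chunk_digests("clean_library.dll")}
    print(unknown_chunks("suspect_library.dll", known_good))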

The real problem with mainstream anti-virus software is that it’s not stopping all of the structural variations in malware. PCs continue to contract virii even when they’re loaded with all the latest anti-virus signatures. This is analogous to a biological virus that becomes resistant to a vaccine through mutation. Signature-based matching was effective for many years, but now the total set of malicious code far outweighs legitimate code. To compensate, contemporary anti-virus has been going against Ockham’s Razor by becoming too complex, compounding the problem as a result. It’s time for the security industry to make a long overdue about-face. Keep in mind that I’m not suggesting a defection from current anti-virus software. It does serve a purpose and will become part of the synthesis I described above.

The fundamental change in motivation for digital offensive maneuvers, from hobbyist to monetary and geopolitical, warrants a paradigm shift in defensive countermeasure implementation. For what it’s worth, I am convinced that the aforementioned technique of whitelisting chunked hashes will be an invaluable force for securing the cloud. It will allow tailored information, metrics, and visualizations to be targeted towards various domain-specific applications and verticals. For example: finance, energy, government, or law enforcement, as well as the associated software inventory and asset management tasks of each. Our Clone Wars presentation featuring Pass The Hash (PTH) at Source Boston and CanSecWest will elaborate on our past few blog posts and much more.. See you there!

Leave a Comment

Short-Term Memory

Sometimes I get the feeling that too many Internet users (especially the younger generation) view 1995, or the beginning of commercialized Internet as the start of time itself. More specifically, I notice how people tend to have a short-term memory when it comes to security issues. A recent example of this was all the creative network exploitation scenarios that arose from the great DNS cache poisoning scare of 2008: intercepting e-mails destined for the MX of users who didn’t really click on “Forgot Password,” pushing out phony updates, innovative twists on spear phishing, etc. The fact of the matter is that man-in-the-middle attacks were always a problem; cache poisoning makes them easier but their feasibility has always been within reason. My point is that vendors should address such weaknesses before the proverbial fertilizer hits the windmill.

Too often, short-term memory is the catalyst for reoccurring breaches of information. Sometimes I wonder what (if anything) goes through the mind of one of those celebrities that just got their cell phone hacked for the third time. Maybe it’s something like, “Oh.. those silly hackers! They’ve probably gotten bored by now and they’ll just go away.” Then I wonder how often similar thoughts enter corporate security (in)decision–which is likely to be why cellular carriers neglect to shield their clientele’s voicemail from caller ID spoofing and other shenanigans. Nonetheless, the amusing charade that 2600 pulled on the Obama campaign for April Fool’s Day was simply a case of people believing everything they read on the Internet.

Don’t get me wrong. I’ve seen some major improvements in how larger software vendors are dealing with vulnerabilities, but an overwhelming majority of their security processes are still not up to par. Short-term memory is one of those cases where wetware is the weakest link in the system.

The idea of the digital security industry using long-term memory to become more like insurance companies and less like firefighters is quite intriguing. Putting protective forethought into the equation dramatically changes the playing field. Imagine an SDLC where programmers don’t have to know how to write secure code, or even patch vulnerable code for that matter. I can say for sure that such a proposition will become reality in the not too distant future. Stay tuned…

Leave a Comment

The Monster Mash

mashed-potatoes

The buzzword “mashup” refers to the tying together of information and functionality from multiple third-party sources. Mashup projects are sure to become a monster of a security problem because of their very nature. This is what John Sluiter of Capgemini predicted at the RSA Europe conference last week during his “Trust in Mashups, the Complex Key” session. This is the abstract:

“Mashups represent a different business model for on-line business and require a specific approach to trust. This session sets out why Mashups are different, describes how trust should be incorporated into the Mashup-based service using Jericho Forum models and presents three first steps for incorporating trust appropriately into new Mashup services.”

Jericho Forum is the international IT security association that published the COA (Collaboration Oriented Architectures) framework. COA advocates the de-perimeterisation approach to security and stresses the importance of protecting data instead of relying on firewalls.

So what happens when data from various third-party sources traverses inharmonious networks, applications, and privilege levels? Inevitably, misidentifications occur; erroneous and/or malicious bytes pass through the perimeters. Sensitive data might be accessed by an unprivileged user or attack strings could be received. A good example of such a vulnerability was in the Microsoft Windows Vista Sidebar; a malicious HTML tag gets rendered by the RSS gadget and since it’s in the local zone, arbitrary JavaScript is executed with full privileges (MS07-048.)

New generations of automated tools will need to be created in order to test applications developed using the mashup approach. Vulnerability scanners like Nessus, Nikto, and WebInspect are best used to discover known weaknesses in input validation and faulty configurations. What they’re not very good at is pointing out errors in custom business logic and more sophisticated attack vectors; that’s where the value of hiring a consultant to perform manual testing comes in.

Whether it’s intentional or not, how can insecure data be prevented from getting sent to or received from a third-party source? A whitelist can be applied to data that is on its way in or out—this helps, but it can be difficult when there are multiple systems and data encodings involved. There is also the problem of determining the presence of sensitive information.
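
As a toy illustration of whitelisting data at a mashup boundary (the field names and patterns below are made up for the example, and a real deployment would also have to handle character encodings and context-aware output encoding):

    # Toy example: only known fields whose values match an allowlisted pattern
    # are passed across the boundary; anything else is silently dropped.
    import re

    ALLOWED_FIELDS = {
        "username": re.compile(r"[A-Za-z0-9_]{1,32}"),
        "zip_code": re.compile(r"\d{5}"),
    }

    def filter_inbound(record):
        """Keep only whitelisted fields whose string values fully match their pattern."""
        clean = {}
        for field, pattern in ALLOWED_FIELDS.items():
            value = record.get(field, "")
            if isinstance(value, str) and pattern.fullmatch(value):
                clean[field] = value
        return clean

    print(filter_inbound({"username": "alice", "zip_code": "021<script>"}))
    # {'username': 'alice'} -- the malformed zip_code never crosses the perimeter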

Detecting transmissions of insecure data can be accomplished with binary analyzers. However, static analyzers are at a big disadvantage because they lack execution context. Dynamic analysis is capable of providing more information by tainting data that comes from third-party sources; dynamic analyzers are more adept at recognizing the unexpected execution paths that tainted data may take after being received from the network or shared code.
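
A bare-bones sketch of what dynamic tainting means (real dynamic analyzers track taint at the instruction or interpreter level rather than with a wrapper class; this is just to illustrate the concept):

    # Values arriving from third-party sources are wrapped as "tainted"; the
    # taint survives concatenation, and a dangerous sink refuses tainted input.
    class Tainted(str):
        """A string subclass marking data that came from an untrusted source."""
        def __add__(self, other):
            return Tainted(str.__add__(self, other))   # concatenation keeps the taint

    def from_third_party(value):
        return Tainted(value)

    def render_html(fragment):
        if isinstance(fragment, Tainted):
            raise ValueError("tainted data reached an output sink unescaped")
        print(fragment)

    snippet = from_third_party("<script>alert(1)</script>") + " via RSS feed"
    render_html(snippet)   # raises: the taint propagated through the concatenation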

Leave a Comment

Breaking Vegas Online

We recently published an advisory for PartyPoker, an online gambling site (SECOBJADV-2008-03.) It was for a weakness in the client update process, a class of vulnerability that can affect various kinds of software. The past few years have seen some vulnerabilities that are specific to online gaming software. Statically seeded random number generators that allow prediction of forthcoming cards and reel values on upcoming slot spins were researched in the early days of online gaming–let’s take a look at some additional threats.
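
To see why a statically seeded generator is fatal, consider this generic sketch (it is not the code of any particular site; the seed and deck model are invented for illustration):

    # If the shuffle seed is fixed or predictable, anyone who can guess it
    # reproduces the entire deck order offline and knows every forthcoming card.
    import random

    def shuffled_deck(seed):
        rng = random.Random(seed)                  # deterministic PRNG
        deck = [rank + suit for suit in "shdc"
                for rank in "AKQJT98765432"]
        rng.shuffle(deck)
        return deck

    house = shuffled_deck(seed=12345)              # server uses a static seed
    attacker = shuffled_deck(seed=12345)           # attacker replays the same seed
    assert house == attacker                       # every card is now predictable
    print(attacker[:4])                            # "predict" the first cards dealt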

Usually, forms of online cheating are pretty primitive. Justin Bonomo was exposed for using multiple accounts in a single tournament on PokerStars and of course collusion between multiple players occurs as well. Absolute Poker’s reputation took a pretty big hit when players discovered that a site owner used a backdoor to view cards in play. Many private and public bots are also in use. However, a good human poker player will beat a bot, especially in no-limit which is less mathematical than other variations of the game; bots are likely to be most useful in low-stakes fixed-limit games.

Earlier this year, a logic flaw was exploited on BetFair (oh, the pun!) because of a missing conditional check to test for chip stack equality when determining finishing positions. As a result, if multiple players with the same amount of chips were eliminated at the same time, they would all receive the payout for the highest position, instead of decrementing positions. For example, if there were three players that all had chip stacks of the same size and everyone went all-in, the winner of the hand would finish in first place and the other two players would both receive second place money. Interesting!
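
Here is a hedged sketch of the kind of check that was evidently missing; the real BetFair logic is not public, so this just shows one reasonable way to rank simultaneous eliminations, with genuine ties sharing a position instead of everyone inheriting the best one:

    # Rank simultaneously eliminated players by the chip stack each held going
    # into the hand; players with exactly equal stacks tie for a position (and
    # in practice would split the combined payouts for the places they span).
    def finishing_positions(eliminated, next_position):
        """eliminated: list of (player, chips_before_hand);
        next_position: the best finishing place still open."""
        ranked = sorted(eliminated, key=lambda pc: pc[1], reverse=True)
        positions, i = {}, 0
        while i < len(ranked):
            tied = [p for p, c in ranked if c == ranked[i][1]]   # equal stacks: a true tie
            for player in tied:
                positions[player] = next_position + i            # tied players share a place
            i += len(tied)
        return positions

    busts = [("alice", 5000), ("bob", 5000), ("carol", 2000)]
    print(finishing_positions(busts, next_position=2))
    # {'alice': 2, 'bob': 2, 'carol': 4} -- carol no longer gets second-place money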

Comments (1)

Ignorance is Bliss

When you think about it, time really is all we have. It’s what you have at your disposal, to do anything and everything. It seems that we’re better off not knowing when it comes to security–for our own good. Can it really be so utilitarian?

To anybody out there writing exploits: make sure you’re doing it just for fun. Currently, there are no outlets for any financial gain that will accurately measure your time investment or fairly compensate your hard work.

Security Objectives’ own Shane Macaulay “owned” Vista SP1 in the PWN2OWN contest at CanSecWest 2008 by exploiting a bug in Adobe Flash. As a result of the contest’s categorization of the bug as third-party, the exploit was grossly under-appraised (especially when considering cross-platform targets and the fact that it would work well into the future with Vista’s new Service Pack.) Sure, it technically was a bug in a third-party application, but this particular third-party application happens to be installed on just about every Internet-enabled PC. According to Adobe, “Adobe® Flash® Player is the world’s most pervasive software platform, used by over 2 million professionals and reaching over 98% of Internet-enabled desktops in mature markets as well as a wide range of devices.”

Even if Shane was unfairly compensated, it doesn’t matter because at least he used “responsible disclosure” — or does it? I highly doubt that the people in charge of the companies writing buggy software and brokering bug information have any idea about the amount of work and skill that goes into discovering an exploitable bug, let alone writing a proof-of-concept for it. As it stands, software companies are setting themselves up for a black market in digital weapons trading of unprecedented proportions.

Here’s something else to think about.. I expect Adobe to patch this one rather quickly given all the publicity. How long does it take for a vendor to fix a given vulnerability when it is reported to them directly? Even some of the brokered “upcoming advisories” on 3Com’s ZDI site are many months or even years stale. This “patchtile dysfunction” will increase the value of a 0-day exploit exponentially.

Time is money, and to make up for lost time, Mr. Macaulay decided to sell the laptop he had won on eBay. An innocent bystander at the contest dubbed this decision “from pwn to pawn.” So why not? Laptops get sold on eBay every day–but not this one. It wasn’t long before eBay pulled Mr. Macaulay’s item from auction on the first of April, ostensibly as an April Fool’s shenanigan. This came as a surprise to me. Things to consider here:

  • The laptop may or may not have had forensic evidence of the controlled attack that occurred during the contest.
  • Even so, Mr. Macaulay is a responsible discloser and would not have shipped the laptop until the bug was patched.
  • Mr. Macaulay’s and Mr. Sotirov’s autographs should have increased the laptop value, regardless.

This incident, in a way, reminded me of eBay’s great fearwall debacle from a few years ago (CVE-2005-4131.) In that case, there were several key differences: an information broker such as ZDI was not involved, a pseudonym was being used, the code statements where the memory corruption occurred were disclosed, and no computer hardware was for sale. Nevertheless, I respect eBay’s decision to discontinue the auction as this is obviously a very controversial issue.

Brokering information? How can you do it? From experience, the idea of using an escrow service and 3rd party verification is largely ineffective. It would appear that ZDI is the only show in town. Of course there’s that auction service, but you have to send them your exploit first so how does that work? It appears that they’re still trying to do business by the way, despite alleged legal troubles. I’m subscribed to their mailing list and they send out an e-mail every time new information goes up for auction; they put up a dozen or so new exploits last week but it would appear that few if any were sold. Where do we go from here? Is brokering information even possible?

Imagine for a moment a scenario where a dozen or so exploits of critical severity related to a single software company are posted to Full Disclosure with rumors of many more circulating in the underground and exploits actively being carried out in the wild. Now imagine shareholders shorting that company’s stock. I suppose that the vulnerability information might be more realistically valued in a situation such as this. Anyone have any other ideas?

Comments (1)

Good grief!

Having just caught up on some of the conference “Source Boston”, I can’t help but call out some of the musings of Andrew Jaquith. Something of a more technical abstract can be read in the Code Project article by Jeffrey Walton (pay special attention to Robin Hood and Friar Tuck). If anybody doubts the current trend of sophistication in malware, I’m sure it is somebody who is currently penetrated. I’ve had the opportunity to devote specific analysis on occasion over the years to MAL code and its impact on the enterprise. I know FOR SURE the level of sophistication is on the rise. One thing I had to deal with recently: the capabilities afforded by most desktop OSs are now so advanced that the majority of functionality desired by MAL code is already pre-deployed. Unfortunately, this paves the way for configuration viruses and their ability to remain undetected, in that all they are is an elaborate set of configuration settings. You can imagine, a configuration virus has the entire ability of your OS at its disposal: VPN/IPSEC, self-(un)healing, remote administration, etc… The issue then is how you determine whether that configuration is of MAL intent; it’s surely there for a reason and valid in many deployments. It is only when the host is connected to a larger entity/botnet that the harm begins. Some random points to add, hard-learned through experience:

  • Use a native execution environment
    • VMware prevents the load or typical operation of many MAL code variants
      • I guess VM vendors have a big win here for a while, until the majority of targets are VM hosts.
  • Have an easily duplicated disk strategy
    • Mac systems are great for forensics; target disk mode and ubiquitous FireWire allow for live memory dumps and ease of off-line disk analysis (without a drive carrier).
    • I’m planning a hash-tree based system to provision arbitrarily sized block checksums of clean/good files, useful for diff’ing out the noise on arbitrary media (memory, disk, flash); a minimal sketch follows this list.
  • Install a Chinese translator locally
    • As you browse Chinese hack sites (I think all the Russian sites are so quiet these days because they are financially driven, while the Chinese are currently motivated by nationalism), you need to translate locally. Using a .com translation service is detected and false content is rendered; translate locally to avoid that problem.
      • Also, keep notes on lingo.. there are no translation-hack dictionaries yet. (I guess “code pigeon” refers to a homing pigeon; naturally, “horse/wood code” is a Trojan.)
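
As promised above, here is a minimal, hypothetical sketch of the hash-tree/block-checksum idea (the block size, fan-out, and file names are assumptions; it is not a description of any shipping tool):

    # Hash a medium (disk image, memory dump, flash) in fixed-size blocks, build
    # a hash tree over the block digests, and compare a known-good tree against a
    # suspect one: matching roots mean nothing changed, and mismatched leaves
    # localize the "noise" that actually deserves analyst attention.
    import hashlib

    BLOCK = 4096        # assumed leaf block size
    FANOUT = 16         # assumed children per interior node

    def leaf_hashes(data, block=BLOCK):
        return [hashlib.sha256(data[i:i + block]).digest()
                for i in range(0, len(data), block)]

    def build_tree(hashes, fanout=FANOUT):
        """Return tree levels from the leaves up to the single root hash."""
        levels = [hashes]
        while len(levels[-1]) > 1:
            prev = levels[-1]
            levels.append([hashlib.sha256(b"".join(prev[i:i + fanout])).digest()
                           for i in range(0, len(prev), fanout)])
        return levels

    clean = build_tree(leaf_hashes(open("known_good.img", "rb").read()))
    suspect = build_tree(leaf_hashes(open("suspect.img", "rb").read()))
    if clean[-1] != suspect[-1]:                   # roots differ: something changed
        diffs = [i for i, (a, b) in enumerate(zip(clean[0], suspect[0])) if a != b]
        print("blocks worth a closer look:", diffs)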

Unfortunately, part of the attacker advantage is the relatively uncoordinated fashion in which defenders operate; not being able to trust or vet your allies in order to compare notes can be a real pain. One interesting aspect of a MAL system recently analyzed was the fact that it had no persistent signature. Its net force mobility was so complete that the totality of its functionality could shift boot-to-boot; so long as it compromised a boot-up driver, it would rise again. The exalted C. Brown put it best: “Good grief!”

http://www.codeproject.com/KB/cpp/VirusProtect.aspx
http://www.sourceboston.com/blog/?p=25

Comments (12)

Dimes

Microsoft Security Bulletin

MS08-010 – Critical CVE-2008-0076

None of the flaws I’ve found on Microsoft platforms have ever been public (that is, they have all been derived from internal projects), and it’s nice to see, at least in this round of fixes, that my bug scored a perfect 10.0 (a dime) on the bulletin. I actually did not test as many platforms and configurations as Microsoft. For those of you who are unaware, bug regression and the overall triage process can become quite intensive. I knew that this vulnerability/flaw/bug/exploit/whatever had wide-reaching appeal; that is fairly easy to see from the fact that all architectures and versions as far back as possible are marked critical.

As with all doings in the security space, walking a line between disclosure and tight-lipped, mum’s-the-word practice is not easy. So, what can be said here? Nothing? Something? I guess I have to write something; the marketoids wouldn’t be happy if I did not.

Before I digress into any technical discussion, I will take this opportunity to say something about the exploit sales “industry.” In this world, everything and everybody has their place; that said, any individual who thinks exploits are worth any money has another thing coming. Look at it this way: if you’re in the business of purchasing information (exploits), by definition you are unaware of the value of that information, and thereby you are inherently in a position to devalue the time and emotional investment that went into deriving that work. So this means you’re never going to get back enough cash to make up for your time, EVER!! Where I do see some value in exploit brokers is exclusively in the capacity of having them take on the burden of dealing with uninformed software vendors (the Microsoft/IBM/others process is fairly straightforward).

Now that that’s all done with, I don’t really want to talk about the exploit, at least until some poorly constructed version winds up in Metasploit. I will say, though, that the bulletin is correct in its description and synopsis.

The fact that there are no mitigating factors or workarounds possible gives me some incentive and reassurance that the tools and methodologies we’re building into our product offering work.

We’re ramping up development for a big push this quarter and will be uploading some more screenshots and related minutia in the coming months.

Our product, in brief, is an automated tool for native application flaw finding. It can assess, at runtime and in a dynamic way, the integrity of a given binary application. This process then produces test cases and reproductions of what is necessary to trigger the flaw for a developer (reducing regression rates due to bug fixes, as it’s much easier to fix something when you can interact with it as opposed to a simple warning message).

We’re working on a management interface (on top of the technical one) that will also enable the lay person to identify architectural problems in arbitrary software. This is actually quite simple with the support of our engine: in essence, a landscape or tomography view is laid out before the user, with associated peaks and valleys, and it changes over time (4D), representing the surface area of your application binary’s response to input. That is, a dynamic environment rooted in a system-of-systems methodology. What becomes apparent, if you are in a capacity to fix these issues yourself, is that as time goes on and you assign various resources (people) to fix the peaks and turn them into valleys, the rate at which you push down the peaks (bugs) across the application is not constant; some issues are harder to fix than others and persist. In this way, a self-relative understanding of where problem areas of code exist reveals them as architectural flaws, and appropriate steps can be taken to drive the business case that will support a rewrite.

Whew, that’s a mouthful. Needless to say, we’re working to create the best platform around for software sovereignty.

Comments (3)

Reducing the Cost of Software Regression

A widely held notion among computer scientists is that 80% of a programmer’s time is occupied maintaining code while the other 20% is spent actually writing the software. This inefficient allocation of effort was the subject of a master’s thesis at the Lund Institute of Technology called “Formalizing Use Cases with Message Sequence Charts.” According to a 2002 NIST study entitled “The Economic Impacts of Inadequate Infrastructure for Software Testing”, the annual cost of testing and fixing buggy software in the U.S. is estimated to be between $22.2 and $59.5 billion. What are the root causes of this costly inefficiency?

Unfortunately, corporate culture is naturally a contributing factor to this problem. Companies that produce commercial software are in business to make a profit first and foremost. Release schedules are expedited so the program can be released to market quickly and copies are sold sooner rather than later; making code work as expected for baseline use cases takes priority over regression testing. Any aspect of the software development process that does not appear to be fully in line with company interests is considered a waste. Usually, far too much emphasis is put on project progress, and the overall progress is only perceived progress because unforeseen problems will inevitably pop up later on. Bugs end up costing much more to fix down the road, and the entire business apparatus pays the cost of maintenance. Some secondary losses are customer support costs, internal communications, and a negative impact on the company’s public image, all of which can be preempted by proper preliminary work.

Since there’s so much emphasis on a speedy release, code writing starts as soon as possible, often with disregard for planning ahead beforehand and ensuring quality afterwards. Furthermore, programmers tend to implement the items that they are most familiar with first because it seems like the easy way to do things. In most cases, the most difficult parts of a task are best handled first, not last. Handling the difficult items first allows more time and attention for the important stuff; it also allows the developer to recognize how the simpler pieces will fit into the grand scheme of things.

Planning ahead is essential during the inception phase of a software project. Appropriately analyzing the problem and carefully designing the solutions will minimize the accumulation of technical hardships in the future. In fact, by taking advantage of the popular Unified Process for software development and diagramming specifications in UML, the need for writing code almost disappears. UML CASE tools such as IBM’s Rational Rose will translate UML diagrams into program source code (typically in an object-oriented language such as C++ or Java). Of course, creating detailed requirements and specifications is still extremely helpful even if UML is not practical for the task at hand. Writing code early on gives the illusion that progress is being made, but in reality it is a recipe for disaster. No code should be written until all implementation issues have been resolved.

Threat modeling and secure design principles need to be key focal points during the initial phases of a software project. After the code has been written, security issues will also comprise a large chunk of ongoing maintenance work. When software developers handle security fixes they have to stop what they’re doing, modify or maybe even rewrite the offending code, test the new code, report their progress, etc. Since developers are rarely security specialists, they tend to write fixes in such a way that the security hole is not closed completely. As a result, the vulnerability persists and leads to more code rewriting. This cycle of stopping, rewriting, and continuing severely detracts from programming productivity. The majority of a developer’s time should be spent doing what they do best–adding new features to software.

Removing developers from the patch creation process significantly increases their utilization. Dedicated security experts are most fit to create patches and conduct regression testing. A dynamic program analysis tool accelerates the regression testing process. Building security in by default from the ground-up will minimize software bugs along with subsequent patching and testing. In conclusion, proper planning, resource allocation, and testing procedures can greatly reduce the costs associated with software regression.

Leave a Comment
