Posts Tagged whitelist

The Philosophical Future of Digital Immunization

Usually it’s difficult for me to make a correlation between the two primary subjects I studied in college: computer science and philosophy. The first things that come to mind when attempting to relate the two are typically artificial intelligence and ethics. Lately, though, intuition has led me to ponder a direct link between modern philosophy and effective digital security.

More precisely, I’ve been applying the Hegelian dialectic to the contemporary signature-based approach to anti-virus and debating the immediate results with my peers; the longer-term repercussions of this application are even more fascinating. Some of my thoughts on the subject were inspired by assertions made by Andrew Jacquith and Dr. Daniel Geer at the Source Boston 2008 security conference. In his keynote, Dr. Geer drew a beautiful analogy between the direction of digital security systems and the natural evolution of biological immune systems. Mr. Jacquith outlined the functional shortcomings of today’s major anti-virus offerings. These two notions became the catalysts for the theoretical reasoning and practical applications I’m about to describe.

Hegel’s dialectic is an explicit formulation of a pattern that tends to occur in progressive ideas. Now bear with me here: in essence, it states that for a given action an inverse reaction will occur, and subsequently the favorable traits of both the action and the reaction will be combined; then the process starts over. A shorter way to put it is: thesis, antithesis, synthesis. Note that an antithesis can follow a synthesis, and this is what creates the loop. This dialectic is a logical characterization of why great artists are eventually considered revolutionary despite initial ridicule for rebelling against the norm. When this dialectic is applied to anti-virus, we have: blacklist, whitelist, hybrid mixed-mode. Anti-virus signature databases are a form of blacklisting. Projects such as AFOSI md5deep, the NIST NSRL, and Security Objectives’ Pass The Hash are all whitelisting technologies.
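
To make the whitelisting half of that dialectic concrete, here is a minimal sketch in Python of whole-file hash whitelisting in the spirit of md5deep and the NSRL. The hash set and file path are hypothetical placeholders, not any product’s actual database.

```python
# Minimal sketch of whole-file hash whitelisting (md5deep/NSRL style).
# The known_good set and the example path are hypothetical placeholders.
import hashlib

def file_digest(path: str, algo: str = "sha256") -> str:
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def is_whitelisted(path: str, known_good: set[str]) -> bool:
    # A file is trusted only if its digest appears in the known-good set.
    return file_digest(path) in known_good

# Example usage with a toy whitelist:
# known_good = {"ab12...", "cd34..."}
# print(is_whitelisted("notepad.exe", known_good))
```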

A successful hybrid application of these remains to be seen, since the antithesis (whitelisting) is still a relatively new security technology that isn’t utilized as often as it should be. A black/white-list combination that uses chunking for both lists is the next logical step for security software; a sketch of the idea follows below. When I say hybrid mixed-mode, I don’t mean running a whitelisting anti-malware tool and a traditional anti-virus in tandem, although that is an attractive option. A true synthesis would be an entirely new solution that inherits the best of each parent approach, much as a mule inherits strength and size. The drawbacks of blacklists and whitelists are insecurity and inconvenience, respectively; a hybrid synthesis is well placed to mitigate both, along with other disadvantages.
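
Purely as an illustration, the following sketch shows one way a chunk-based hybrid might classify a file: every fixed-size chunk is hashed and checked against both lists, so a single known-bad chunk condemns the file, while anything not provably good is flagged for closer inspection. The chunk size and the verdict policy are assumptions for the example, not a description of Pass The Hash itself.

```python
# Hypothetical sketch of a hybrid, chunk-based classifier: each fixed-size
# chunk is hashed and checked against both a blacklist and a whitelist.
import hashlib
from typing import BinaryIO

CHUNK_SIZE = 4096  # illustrative chunk size

def classify_stream(data: BinaryIO, whitelist: set[str], blacklist: set[str]) -> str:
    unknown = 0
    while True:
        chunk = data.read(CHUNK_SIZE)
        if not chunk:
            break
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in blacklist:
            return "malicious"        # any known-bad chunk condemns the file
        if digest not in whitelist:
            unknown += 1              # not proven good either
    return "trusted" if unknown == 0 else "suspect"

# Example usage:
# import io
# verdict = classify_stream(io.BytesIO(file_bytes), whitelist, blacklist)
```

One appeal of chunk-level hashing is graceful degradation: a small mutation only invalidates the chunks it touches, whereas a whole-file signature fails outright.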

The real problem with mainstream anti-virus software is that it isn’t stopping all of the structural variations in malware. PCs continue to contract viruses even when they’re loaded with all the latest anti-virus signatures. This is analogous to a biological virus that becomes resistant to a vaccine through mutation. Signature-based matching was effective for many years, but the set of unique malicious samples now far outweighs legitimate code. To compensate, contemporary anti-virus has been going against Ockham’s Razor, growing ever more complex and compounding the problem as a result. It’s time for the security industry to make a long-overdue about-face. Keep in mind that I’m not suggesting current anti-virus software be abandoned; it still serves a purpose and will become part of the synthesis described above.

The fundamental change in the motivation behind digital offense, from hobbyism to monetary and geopolitical gain, warrants a paradigm shift in how defensive countermeasures are implemented. For what it’s worth, I am convinced that the aforementioned technique of whitelisting chunked hashes will be an invaluable force for securing the cloud. It will allow tailored information, metrics, and visualizations to be targeted at domain-specific applications and verticals such as finance, energy, government, and law enforcement, as well as the software inventory and asset management tasks associated with each. Our Clone Wars presentation featuring Pass The Hash (PTH) at Source Boston and CanSecWest will elaborate on our past few blog posts and much more. See you there!


The Monster Mash

The buzzword “mashup” refers to the tying together of information and functionality from multiple third-party sources. Mashup projects are sure to become a monster of a security problem because of their very nature. This is what John Sluiter of Capgemini predicted at the RSA Europe conference last week during his “Trust in Mashups, the Complex Key” session. This is the abstract:

“Mashups represent a different business model for on-line business and require a specific approach to trust. This session sets out why Mashups are different, describes how trust should be incorporated into the Mashup-based service using Jericho Forum models and presents three first steps for incorporating trust appropriately into new Mashup services.”

Jericho Forum is the international IT security association that published the COA (Collaboration Oriented Architectures) framework. COA advocates the de-perimeterisation approach to security and stresses the importance of protecting data instead of relying on firewalls.

So what happens when data from various third-party sources traverses inharmonious networks, applications, and privilege levels? Inevitably, misidentifications occur; erroneous and/or malicious bytes pass through the perimeters. Sensitive data might be accessed by an unprivileged user, or attack strings could be received. A good example of such a vulnerability was in the Microsoft Windows Vista Sidebar: a malicious HTML tag rendered by the RSS gadget runs in the local zone, so arbitrary JavaScript executes with full privileges (MS07-048).
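
The general mitigation for that class of bug is to treat third-party feed content strictly as data before it reaches a privileged rendering context. Here is a minimal sketch of the idea in Python (the actual gadget was HTML/JavaScript; the function and feed fields below are illustrative assumptions):

```python
# Minimal sketch of the mitigation: escape feed content so it is treated as
# data, not markup, before it reaches a privileged rendering context.
from html import escape

def render_feed_item(title: str, link: str) -> str:
    # Escaping neutralizes tags like <script> embedded in a hostile feed title.
    return f'<a href="{escape(link, quote=True)}">{escape(title)}</a>'

# A hostile title such as '<script>stealFiles()</script>' is rendered inert.
```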

New generations of automated tools will need to be created in order to test applications developed using the mashup approach. Vulnerability scanners like Nessus, Nikto, and WebInspect are best used to discover known weaknesses in input validation and faulty configurations. What they’re not very good at is pointing out errors in custom business logic and more sophisticated attack vectors; that’s where the value of hiring a consultant to perform manual testing comes in.

Whether it’s intentional or not, how can insecure data be prevented from being sent to or received from a third-party source? A whitelist can be applied to data on its way in or out; this helps, but it becomes difficult when multiple systems and data encodings are involved. There is also the problem of determining whether sensitive information is present at all.
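
One workable pattern is field-level whitelisting at the trust boundary: every field crossing it must match an explicitly allowed shape, and anything else is rejected. The field names and patterns in the sketch below are assumptions made purely for illustration.

```python
# A sketch of field-level whitelisting for data crossing a trust boundary.
# The field names and patterns are illustrative, not a standard.
import re

FIELD_PATTERNS = {
    "account_id": re.compile(r"^[0-9]{6,12}$"),
    "email":      re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+$"),
}

def validate_outbound(record: dict[str, str]) -> bool:
    # Reject the record unless every field matches its allowed pattern.
    return all(
        field in FIELD_PATTERNS and FIELD_PATTERNS[field].fullmatch(value)
        for field, value in record.items()
    )
```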

Detecting transmissions of insecure data can be accomplished with binary analyzers. However, static analyzers are at a big disadvantage because they lack execution context. Dynamic analysis can provide more information for tainting data that comes from third-party sources; dynamic analyzers are more adept at recognizing the unexpected execution paths that tainted data may take after being received from the network or shared code.
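
To show what taint propagation means in miniature, the toy sketch below marks values originating from third-party sources and refuses to let them reach a sensitive sink. Real dynamic analyzers track taint at the instruction or bytecode level; this wrapper is only a conceptual model.

```python
# Toy illustration of dynamic taint tracking: values from third-party sources
# carry a taint flag, and sensitive sinks refuse tainted input.
class Tainted(str):
    """A string whose origin is an untrusted third-party source."""

def from_third_party(raw: str) -> Tainted:
    return Tainted(raw)

def execute_query(sql: str) -> None:
    if isinstance(sql, Tainted):
        raise ValueError("tainted data reached a sensitive sink")
    print("executing:", sql)

# Example:
# query = from_third_party("1; DROP TABLE users")
# execute_query(query)  # raises ValueError
```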
