In our world of malicious hackers, software security is a primary concern. Any program that has active software interfaces, whether to the file system, a network connection, the Internet, or some other piece of software, is potentially vulnerable to attack. And unfortunately, some attackers devote significant resources to mounting those attacks. Since those attacks can lead to denial of service, program crashes, and destruction or theft of critical data, enterprise software consumers must make every effort to minimize their exposure.
The first step in minimizing exposure to attack is to use software that is secure in the first place. While this is obvious, it’s also probably impossible. Any complex software system will include bugs; it’s the nature of the beast. And some of those bugs are likely to take the form of security vulnerabilities. So where does that leave us?
Historically, the approach taken by software teams, whether using open or closed source development methodologies, has been primarily reactive. A very simplified, general view of software development, both pre- and post-release, looks something like this:
- Implement the software.
- Test the software for bugs, including security vulnerabilities.
- Fix the important bugs found.
- Release the software.
- As security vulnerabilities are discovered in the field, fix them in the code and release patches that repair the installed base.
This gets the job done, in that over time, the released software gets progressively more secure, or rather, less vulnerable. However, the security vulnerabilities discovered in the field are often discovered because they’ve been exploited by malicious hackers. That means the process of making the code more secure includes damage to users of that software. It can be likened to shipping burning coals to your customers, then sending out fire-hose patches to put them out. In the meantime, however, customers have been burned.
In his blog entry “Why Windows is less secure than Linux,” Richard Stiennon claims Linux is more secure than Windows simply because it is less complex. He uses the example of exploiting buffer overrun vulnerabilities: “A system call is an opportunity to address memory.” Since Windows makes more system calls per transaction than Linux, the argument goes, it is less secure because it exposes more potential vulnerabilities; fewer system calls mean a less vulnerable program. This is no lightweight claim. Mr. Stiennon was a VP at Gartner, where one of the areas he covered was security, and that is just one of the security-related positions he has held. (See a more complete bio here.) However, his analysis implies that all potential vulnerabilities are equal. That may be true if everyone stays in a reactive mode, as has traditionally been the case. But what about proactivity?
As a simple case in point, consider buffer overrun vulnerabilities. In the past, as I’ve pointed out in a previous blog post, lots of potentially vulnerable APIs like sprintf() have been used in commercial software. Replacing those API calls with calls to newer APIs that don’t expose the same potential lowers the potential vulnerability of the program as a whole. If you extend that paradigm to other types of vulnerabilities, you can theoretically reduce your vulnerability across the board. Taking Mr. Stiennon’s analysis, consider a hypothetical case in which the system call infrastructure has been hardened to significantly reduce the possibility that it can be exploited. In that case, the security of each system call may matter more than the number of system calls. More is only more dangerous if you’re comparing tigers to tigers. If you’re comparing tigers to mice, the equation is weighted differently.
So in the area of program vulnerability, how does a project team put proactivity into practice? I think the answer is to take everything they know about vulnerabilities and develop SDLC processes that explicitly address the potential for those vulnerabilities to end up in a released product. This is what Microsoft claims to have done. You can read about the details of their Security Development Lifecycle here. So does it work?
Let’s take IIS as a test case. IIS 5 was released in 2000. Since 2003, it has received 10 security advisories, as documented by secunia.com here. Apache 2.0 was released in 2002. Since 2003, it has received 31 security advisories, documented here. Both of these products were developed and released using a reactive approach to limiting vulnerabilities. (Don’t misunderstand: I’m sure both project teams took care to write secure code, but based on the number of vulnerabilities found, they depended on the reactive mechanism of post-release patches as a major component of driving vulnerability down.)
Between IIS 5 and IIS 6, Microsoft introduced their SDL process to proactively limit vulnerability. IIS 6 was released in 2003. Since then (the same span of time as in the previous two examples), there have been 3 security advisories, documented here.
This is a single case, but it is nonetheless intriguing. It suggests that the total vulnerability of a program can be significantly reduced by applying proactive, explicit, security-related processes to an existing SDLC. I suspect that this is the way software will be developed in the future, since the hackers can only get your goat if you’ve got a goat to get.