The Security of Open versus Closed Software
• This post is an excerpt from Security Now episode 245 by Steve Gibson.
There are very important differences between open and closed code on the one hand, and open and closed execution platforms on the other.
First, looking at the code and development methodology side, I'm not at all clear that open source code is inherently more or less - either way - secure than closed source code. I don't see anything other than personal policy bias, or commercial interest, to recommend one over the other from a security standpoint, based upon all of the history we have with the demonstrated security arising from either approach.
We have a great pair of examples in Microsoft's Internet Explorer (which is closed) and Mozilla's Firefox (which is open). The first is as closed as it could be, and the second is wide open to the world. Yet over the past few years, as Firefox has become more prevalent, we haven't seen significantly fewer problems on that platform. We may see more active exploits of IE due to the fact that it still retains a huge majority of web browser market share, the fact that it's also always installed on every machine, and the fact that it's the more cautious, security-aware users who have taken the trouble to run Firefox and then add on the additional controls for cookie and scripting management. But overall, looking at the logs of problems being fixed by the now continual flow of Firefox patches, recently more than weekly, we're not seeing anything that says open source means more security.
In this weekly podcast I am forced to skip over reporting on the mass of security problems continually being discovered in open source software, because otherwise we would never have time to talk about anything else, because those problems are generally spread out among a great many different pieces of software rather than concentrated in a few, and because each one tends to affect only a small fraction of our listeners.
From all of the wealth of evidence we have seen, I think that under either model, open or closed, it is the delivered product, pounded and pounded upon, and never needlessly altered except to carefully fix small implementation errors, that over time earns the right to be called "secure."
Our listeners may recall my annoyance at Microsoft's Steve Ballmer declaring, at launch, that Windows XP was the most secure operating system they had ever created. And my comment back then was, "That's not something that anyone can simply pronounce. Security can only be proven over time through being tested."
The security of any given implementation of a system can only be earned. Security cannot be "declared," it must be proven. And establishing that "proof" requires time spent in the line of fire. And this is easily demonstrated by the simple fact that it's entirely possible to create vulnerable and exploitable code under either type of source code model, either open or closed. The whole nature of "debugging" is such that programmers will miss both their own and others' coding errors since they get sucked into making the same assumptions.
And it really doesn't matter how many eyeballs examine the code. Closed source economics at Microsoft can certainly afford to employ just as many eyeballs looking at code as does the volunteer open source model. Anyone who has ever actually been involved in open source projects knows that, in reality, only a very few (and oftentimes just one) developer is actually doing most of the heavy lifting on major parts of a project. So I don't buy for a moment that there's any intrinsic benefit either way in open vs. closed source. Either can be buggy as hell, or stable and solid as a rock.
And it's worth noting that, in the case of closed-source code, commercial interests work both for and against its security. On the one hand, commercial interests need a reputation for security to help bolster sales; and, on the other hand, commercial interests are motivated to keep adding features and revising perfectly good and previously proven code in order to keep their users upgrading and revenue flowing. No, mistakes can be made in either development environment, so I see no benefit there, either way.
But what really has a profound effect on security is policy. And this is where massive differences in security results can be obtained. Years ago, how many times did our listeners hear me grumble, not about the fact that a defect was found in Microsoft's Outlook that allowed email to take over the user's machine, but about the fact that Microsoft continued, year after year, its policy of enabling any scripting at all in email. Mistakes are going to happen, but policy is deliberate. And policy is directly reflected in design.
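The idea that "policy is directly reflected in design" can be made concrete. Here is a minimal sketch, using only Python's standard-library `html.parser`, of a toy email renderer whose deliberate policy is to never retain script content or inline event handlers. This is an invented illustration of the principle, not a description of how Outlook or any real mail client works, and every name in it is hypothetical.

```python
from html.parser import HTMLParser

class ScriptStrippingRenderer(HTMLParser):
    """Toy HTML email renderer. Policy: active content never survives,
    regardless of whether the rendering engine has bugs elsewhere."""

    BLOCKED = {"script", "object", "embed"}  # tags dropped with their contents

    def __init__(self):
        super().__init__()
        self.output = []
        self._suppress_depth = 0  # >0 while inside a blocked tag

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCKED:
            self._suppress_depth += 1
        elif self._suppress_depth == 0:
            # Strip inline event handlers (onclick=, onload=, ...)
            safe = [(k, v) for k, v in attrs if not k.startswith("on")]
            attr_text = "".join(f' {k}="{v}"' for k, v in safe)
            self.output.append(f"<{tag}{attr_text}>")

    def handle_endtag(self, tag):
        if tag in self.BLOCKED:
            self._suppress_depth = max(0, self._suppress_depth - 1)
        elif self._suppress_depth == 0:
            self.output.append(f"</{tag}>")

    def handle_data(self, data):
        if self._suppress_depth == 0:
            self.output.append(data)

def render_email(html: str) -> str:
    renderer = ScriptStrippingRenderer()
    renderer.feed(html)
    return "".join(renderer.output)
```

The point of the sketch is that the decision lives in the design, not in bug-fixing: even if a parsing bug is later found, the deny-active-content policy bounds what a malicious email can do.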
Which brings us to the question of open versus closed platforms, an entirely different animal from open versus closed software development. Compared to an inherently closed platform, which deliberately restricts what software is allowed to run on it and what freedom permitted applications have, an inherently open platform that permissively allows anything to be developed for it and run on it is going to be, by design and by definition, significantly less secure. The more "freedom" applications have within a platform, the less secure it will be, since freedom is so easily abused. And the more inter-application interaction the platform allows, the less secure that platform will be, since so many games can be played between applications.
A platform whose design fundamentally adopts an untrusted application model, assuming that applications may misbehave and strictly limiting their freedom by design policy - as, for example, the iPhone OS does with its application sandbox - will be inherently more secure by design than a platform whose design encourages a community of assumed mutually trusting applications, as Windows, Unix, Mac OS X, and other "traditional" open platforms do.
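The contrast between the two trust models above can be sketched in a few lines. This is an invented illustration, assuming a simple capability-set design; it is not the iPhone OS sandbox API or any real platform's mechanism, and all names are hypothetical.

```python
class TrustingPlatform:
    """'Traditional' open model: every installed application is assumed
    to be well-behaved, so any request from any app is allowed."""

    def request(self, app: str, capability: str) -> bool:
        return True  # mutual trust: allow by default

class SandboxedPlatform:
    """Untrusted-application model: deny by default. An application can
    use only the capabilities explicitly granted at install time."""

    def __init__(self):
        self._grants = {}  # app name -> frozenset of granted capabilities

    def install(self, app: str, granted: set) -> None:
        self._grants[app] = frozenset(granted)

    def request(self, app: str, capability: str) -> bool:
        return capability in self._grants.get(app, frozenset())

# A hypothetical weather app is granted network access and nothing else:
sandbox = SandboxedPlatform()
sandbox.install("weather", {"network"})
```

Under the sandboxed model, `sandbox.request("weather", "network")` succeeds while `sandbox.request("weather", "read_contacts")` fails; under the trusting model, both would succeed. The security difference is the default, decided by policy before any application code runs.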
Finally, the more "locked down" the application screening process is, with each application examined by the platform's administrator for the use of undocumented APIs and any other possible misbehavior, and the more difficult it therefore is for applications to be approved, the more inherently secure that locked-down platform will be.