It is not just open-source projects developed on shoestring budgets that have bugs; well-funded commercial software can have them too. While we are talking about SSL, it is worth noting that Apple also suffered the ignominy of a serious security flaw.
Coverity covers some OSS code vs. some Enterprise code (according to the report).
Clearly Coverity did not cover the OpenSSL code. Read the LibreSSL change log; it is a horror show of bad code.
I work for MS, and have on occasion written code for the OS. Just over 10 years ago Windows went through a massive and painful reset, where for the best part of a year the main activity was simply cleaning up the code base. Now, this was not just inspecting it and adding some comments (though that basic stuff happened too). MS Research built program-verification tools (you can look up the publications; Coverity probably learned from MSR, which started on those tools in the '90s), and the coding standards included stringent annotations to enhance the capability of the automatic checking. The sort of mistake that LibreSSL is grumbling about simply can't be checked in to the source tree.
Now, I'm not claiming there are no bugs. Millions of lines of code are a complexity that cannot be made perfect by humans, even with the aid of verification tools, and failure modes are still being discovered that the tools do not yet check for. But there are commercial vendors who take this stuff very seriously, and who long ago built the tools and practices to avoid simple problems like buffer overruns, out-of-bounds reads, and many other risk factors.
OSS code has its advantages. We use it, and we contribute to it. But inspection by human eyes is not all you need, and tools like Coverity are limited unless you are willing to change your coding practices strictly enough to improve automated reasoning and coverage. If you really want to build secure, critical code, deep investment in the practice and the tooling is a good idea.
I am not speaking for my employer here, just adding some perspective to this discussion about the nature of modern software engineering on proprietary software. I assume that many of our competitors have similar practices for critical code.
Go to Google and search for "Open source software quality" without the quotes.
Pick the first article in the results referencing Coverity.
I'm loath to post a link to a competitor of EE Times directly on their pages. Just doesn't seem fair.
Coverity has been doing ongoing research into OSS quality for quite a while, and their numbers match my personal experience in the industry. I've been a Linux user since nearly day one (version 0.12) and have seen it grow into the true de facto Internet OS. This occurred through natural-selection processes as much as anything; the original "The Cathedral and the Bazaar" essay explained it best, in my mind. Proprietary software is at a huge disadvantage because the people working on open source typically do it simply for the love of it!
The one OSS project I've been personally affiliated with (Icarus Verilog) has been ongoing for something like 14 years. The handful of people who contribute to Icarus have done so with very little in the way of financial reward, often because it solves their personal problems. They give a damn. So the results are simply borne out by the stats.
The corporate user is responsible for keeping the information safe, so it is his responsibility to test. This is no different if the software is proprietary, unless the vendor indemnifies the corporate user (not likely).