The facts are that open source software is generally of higher quality than proprietary software. In an article published on 4/16/14, Coverity reported that the defect density for open source code was 0.59, while for proprietary code it was 0.72.
There was lots of other useful information in the article, such as the fact that 17 times as many defects were fixed in proprietary code as in the open source code.
So - stop with the soul searching and just get on with it.
Go to Google and search for "Open source software quality" without the quotes.
Pick off the first article in the search referencing Coverity.
I'm loath to post a link to a competitor of EE Times directly on their pages. Just doesn't seem fair.
Coverity has been doing ongoing research into OSS quality for quite a while, and their numbers match my personal experience in the industry. I've been a Linux user since nearly day one (version 0.12) and have seen it grow into the true de facto Internet OS. This occurred through natural-selection processes as much as anything. The original "Cathedral and the Bazaar" essay explained it best, to my mind. Proprietary software is at a huge disadvantage because the people working on open source typically do it simply for the love of it!
The one OSS project I've been personally affiliated with (Icarus Verilog) has been ongoing for something like 14 years. The handful of people who contribute to Icarus have done so with very little in the way of financial reward, often because it solves their personal problems. They give a damn. So the results are simply borne out by the stats.
Coverity covers some OSS code vs. some Enterprise code (according to the report).
Clearly Coverity did not cover the OpenSSL code. Read the LibreSSL change log; it was a horror show of bad code.
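To make the class of mistake concrete, here is a deliberately simplified sketch of the Heartbleed-style bug in Python. This is not OpenSSL's actual code (which is C); the function names, offsets, and buffer contents are invented for illustration. The essential error is trusting the attacker-supplied length field instead of the actual payload size:

```python
# Simplified model of a Heartbleed-class bug: the server echoes back
# "claimed_len" bytes, trusting the length field in the request rather
# than the actual payload size. Names and data are hypothetical.

def heartbeat_vulnerable(memory: bytes, payload_offset: int,
                         payload_len: int, claimed_len: int) -> bytes:
    # Trusts the attacker-supplied claimed_len: can read past the payload.
    return memory[payload_offset:payload_offset + claimed_len]

def heartbeat_fixed(memory: bytes, payload_offset: int,
                    payload_len: int, claimed_len: int) -> bytes:
    # Bounds check first: never echo more than was actually sent.
    if claimed_len > payload_len:
        raise ValueError("claimed length exceeds actual payload")
    return memory[payload_offset:payload_offset + claimed_len]

# Process memory: a 4-byte payload followed by unrelated secrets.
memory = b"PING" + b"secret-session-key"
leak = heartbeat_vulnerable(memory, 0, 4, 22)
print(leak)  # b'PINGsecret-session-key' -- adjacent secrets leak out
```

The fix is a one-line bounds check, which is exactly why the bug's survival in such widely used code fuels this debate.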
I work for MS, and have on occasion written code for the OS. Just over 10 years ago Windows went through a massive and painful reset, where for the best part of a year the main activity was simply cleaning up the code base. Now, this was not just inspecting it and adding some comments (though that basic stuff happened). They built program verification tools in MS Research (you can look up the publications; Coverity probably learned from MSR, which started on those tools in the '90s), and the coding standards included stringent annotations to enhance the capability of the automatic checking. The sort of mistake that LibreSSL is grumbling about simply can't be checked into the source tree.
Now, I'm not claiming there are no bugs. Millions of lines of code are a complexity which cannot be made perfect by humans, even with the aid of verification tools. There are modes of failure discovered which the tools do not yet check for. But there are commercial vendors who take this stuff very seriously, and who long ago built the tools and practices to avoid simple problems like buffer overruns or reading out of bounds, and many other risk factors.
OSS code has its advantages. We use it, and we contribute to it. But, inspection by human eyes is not all you need, and tools like Coverity are limited unless you are willing to strictly change your coding practices to improve automated reasoning and coverage. If you really want to build secure and critical code, deep investment in the practice and the tooling is a good idea.
I am not speaking for my employer here, just adding some perspective to this discussion about the nature of modern software engineering on proprietary software. I assume that many of our competitors have similar practice on critical code.
hasenmi hits the nail on the head. If this occurred in proprietary code, we'd never know about it.
The true value of free is not in the dollars spent, but in the openness. When there is a problem it is there for everyone, and being open there are many more eyes looking for the problem. The openness ensures there is rapid pressure to fix the problem.
By comparison, for a proprietary tool such as the Microsoft Crypto API, we have to rely on Microsoft assuring us it does not have such a bug. And just Microsoft engineers looking for bugs. And Microsoft's professionalism in fixing and publicizing any bugs they do find.
Many free and open source projects are very well resourced - think of Linux, Apache and the GNU Compiler Collection. All developed by very large communities of well paid full-time professionals. There is plenty of commercial incentive to resource these projects properly.
But even smaller projects actually look quite good compared to much proprietary software. For example, a lot of proprietary EDA tools are provided by small companies or small departments of larger companies, where the R&D team numbers fewer than 10 and is doubling up on second-line support. By comparison, even small-team free software looks pretty good, and you still have the freedom to fix it yourself: no dependency on the supplier here.
The article is correct that we have to look at the cost of free software. But when we look at all the costs, it often turns out that freedom can deliver much better value than proprietary software.
Open source software is quite expensive to produce. Just integrate the person-hours that go into something like the Linux kernel or OpenSSL and then multiply by the loaded overhead of an average programmer. You will be shocked at the amount of time-is-money being contributed to OSS efforts.
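The arithmetic being suggested is straightforward. All of the numbers below are hypothetical placeholders, not measurements of any real project, but they show how quickly contributed effort turns into serious money:

```python
# Back-of-the-envelope valuation of contributed OSS effort.
# Every figure here is an assumed placeholder, not real project data.
person_years = 100       # assumed total contributed effort
hours_per_year = 2000    # a standard full-time work year
loaded_rate = 150        # assumed fully loaded cost per hour, in USD

contributed_value = person_years * hours_per_year * loaded_rate
print(f"${contributed_value:,}")  # $30,000,000
```

Even at these modest assumed figures, a hundred person-years of donated engineering is a tens-of-millions-of-dollars subsidy.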
Yet each contribution is frequently a point patch. They typically solve a problem the patch submitter has. While it is the "job" of the core committers to rationalize all of the patches and move the technology forward in a coherent fashion, there is both little reward for doing so and sometimes deep push back from the user community.
Take, for example, the Python language transition between v2 and v3. There is a deep architectural flaw in how Python v2 misuses strings as byte arrays. We learned this lesson from the C language: null-terminated strings are a bad idea, and misusing pointers is also the source of many security flaws. Guido and the core committers have tried to fix this flaw in version 3. There is a remarkable amount of pushback from the community. They think their code isn't broken. (It is, but they don't run into the problems with international languages that Python v3 fixed.) Python v3 is a better language by almost every measure, yet after 5 years it only has about 10% uptake. The v2 stubborn mules are making a lot of noise about how Python v3 has failed. I don't believe it has failed. It has started a transition of a 20+ year old technology onto a new path. That it has taken 5 years is a testament to the quality and maturity of the engineering team and their respect for the value of legacy code written in Python.
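The string/bytes split being described is easy to see in a few lines of Python 3, where text and raw bytes are distinct types rather than one conflated `str`:

```python
# Python 3 separates text (str) from raw bytes (bytes); Python 2's str
# doubled as a byte array, which is the architectural flaw at issue.
text = "naïve"                # str: a sequence of Unicode code points
data = text.encode("utf-8")   # bytes: one particular encoded form

print(len(text))   # 5 code points
print(len(data))   # 6 bytes: 'ï' encodes to two bytes in UTF-8
print(data.decode("utf-8") == text)  # True: an explicit round-trip

# Mixing the two is now a loud error instead of silent corruption:
try:
    "abc" + b"def"
except TypeError as err:
    print("TypeError:", err)
```

In Python 2 that last line would have "worked" by implicitly coercing between the types, which is exactly the class of international-text bug v3 was designed to eliminate.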
By almost any measure of engineering quality, the Python transition is the model of how architectural flaws should be addressed in open source. It is a testament to the leadership of Guido's team that they persevere. From the independent analyses I've found, it doesn't appear that the OpenSSL project has a similar ability to improve its code base. Yet OpenSSL, as the foundation of e-commerce, is a much more economically important technology than Python.
But in the game of software development, which more frequently resembles a game of cut-throat, he who invests the least and gets to market the quickest wins. There is little time to justify reengineering "free" software. Just "bolt it on and hope she holds together" is the operating maxim. Well, OpenSSL isn't holding together.
OpenSSL could do with a little bit of corporate stewardship. Google found the problem. (I suspect they found it due to a likely internal audit of every security protocol resulting from NSA having pwned them and everyone else.) Google obviously depends upon OpenSSL for their business. How many Google engineers are in the OpenSSL core committing team? In contrast, how many are in the Linux core committing team?
I hope more companies start funding the development of these critical technologies more directly. Their businesses depend upon it.
People and companies will pick what is best by their own criteria, whether that's software or washing powder.
Cost isn't always going to be the deciding factor; take a look at the web server share going to Apache, or the use of free languages like Scala in big corporates like Twitter and various banks.
I don't see myself adding a bit of code for an OS like Windows. Why should I, why would MS let me? At the high end, this isn't about cheap, it's about leveraging the available talent pool at many different companies, institutions, and other individual developers where all can contribute for the greater good.
As one data point: in our latest project, a cloud-based commercial offering, there is no closed or paid-for code at all. That's because the free options beat everything else hollow: technically, in published literature, in user base, etc.
On the other hand, if I play a computer game, it'll be produced commercially, because they are better currently.
In the end we'll find out what the market thinks on a case by case basis.
I have often wondered about the risks of open source software with regard to security for the Internet. In particular, the Internet of Things.
I have scheduled a live chat on this topic for Wednesday the 23rd at 8am Pacific time. Like EETimes, you have to be registered on the IoT World site to post, but you can come "listen in" without registration as well.
As a special guest, Dave Hughes of HCC, a security middleware provider, will join in to talk about the role of commercial software in security for the IoT.
If this problem had happened in a closed source library, it would have taken a report to the vendor, and time for them to prioritize the problem, debug it, run it through channels, figure out how to spin the problem to not be their fault or to minimize it, and then schedule a release to contain the fix.
Since the problem was open source, the code was looked at, and someone other than the vendor was able to find the problem, and the fix was made available. No spin, no wait for the release, just the fix.
I like that the author is asking questions. I think we need to remember not to generalize about Open or Closed Source too much, and being bigoted won't help us.
Lots of Open Source doesn't mature to anything useful. Have you perused SourceForge or the like for something new and useful? It's often an exercise in frustration: searching through countless dead, half-baked projects. Quality of code varies in successful and unsuccessful projects alike.
There are an amazing number of very successful Open Source projects. I include OpenSSL in this pool. It's amazing that we haven't experienced more flaws in OpenSSL in the _many_ years we've all been using it. Compare its record to any Open or Closed technology and it will fare well.
I've used OpenSSL to build many commercial products as a software developer, and the quality of the OpenSSL code was arguably better than much of the surrounding proprietary code.
I don't believe in perfect code. High quality code does exist. Maybe this incident will reinvigorate us to invest the time and money to make OpenSSL even better than it is today? There aren't many projects of equal importance in the Open or Closed Source ecosystem. It's a fundamental building block of our Internet.
The foundation of the Internet is built on open source. The OS is Linux. The web server is Apache or another open source web server. The database engine is MySQL or another NoSQL DB. gcc is free. PHP is free. Perl and Python are free. The core of today's networking, the TCP/IP stack including DNS, all comes from open source. I have no doubt that we would never be where we are today without the contribution of open source. Making an open source project successful and widely adopted isn't a trivial task. The OpenSSL instance is just one small bug that caused a huge problem. I agree with a lot of other people that, given the age of OpenSSL, the developers have done an amazing job of keeping it relatively free of bugs.
Confusion over what open software does and does not do well is widespread and seems to be related to what the word "open" means.
1. Looking at open software, it is possible to see "the tragedy of the commons." This is an economic theory stating that individuals, acting independently and rationally according to their own self-interest, behave contrary to the whole group's long-term best interests by depleting some common resource ( http://en.wikipedia.org/wiki/Tragedy_of_the_commons ). This, as I understand it, is what Rick Merrett alludes to. In the Heartbleed case, this economic theory applies more to the users of open (as in free) software than to the developers.
2. The "best" security system is the one still providing protection after the strongest attacks. The assumption here is that software attacks are stronger when the source code is known. I don't have a reference for this aphorism other than common sense. In the case of security, the subject of other comments in support of Open SSL, it seems clear that open source software (a different type of open) is an advantage not a disadvantage.
To sum this up: an economic approach (not yet discovered) that allowed the profitable sale of open source software might be beneficial. One alternative to such an economic approach is taxation, which is how the police (another form of security) are usually financed. Interestingly, the open software environment might be the place to attempt a commercial form of taxation. This suggests a new form of software license administered by an "open software group".
The discussion about open vs. proprietary is important, but too general for the Heartbleed bug.
Security software is a special case. Its requirements are different. Its testing is (or should be!) different.
Most bugs are failures that get in the way of the end-user; they get reported and fixed as a consequence. The end-users have become beta testers (for good or for ill, but that's a separate discussion).
But security software has a second set of requirements: no leak of information. How is this going to be tested? The "end-users" doing this kind of "testing" have no incentive to come forward. Programmers using the library will not even notice unless the interface is broken.
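A concrete illustration of why functional testing cannot catch information leaks: the two comparison functions below (a hypothetical sketch, not drawn from any particular library) return identical results on every input, so every ordinary test passes, yet the naive one leaks timing information by exiting at the first mismatching byte:

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:       # early exit: running time depends on the secret
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # Python's stdlib constant-time comparison for secrets/MACs.
    return hmac.compare_digest(a, b)

secret = b"correct-mac-value"
print(naive_compare(secret, b"correct-mac-value"))          # True
print(constant_time_compare(secret, b"correct-mac-value"))  # True
# Identical outputs: the leak lives in the timing, invisible to this test.
```

This is the second set of requirements in miniature: correctness tests say both functions are fine, and only an adversarial measurement of execution time reveals the difference.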
A vendor of proprietary software has a clearer incentive to spend programmer cycles attacking his own software. In addition, the visibility of bugs is lower, and their scope (who is using the buggy software) is generally more restricted.
Who has the incentive to do the "black-hat" testing of security software? Open source does not provide a crisp answer to this question. IMO, that's where the discussion should focus.
The corporate user is responsible for keeping the information safe so it is his responsibility to test. However this is no different if the software is proprietary, unless the vendor indemnifies the corporate user (not likely).
The open source community was originally started by academics in reaction against the price hikes and constant charging of paid software; the original aim was to provide the minimal functionality of the software at no cost to the user. But the paid alternatives kept adding features and functionality and continued attracting users, and the corporate world was ready to pay, because users were in constant need of feature-rich software. It is hard for the open source community to survive while providing constant improvements, which led to shareware, and in turn to malware bundled along with it. There are many groups trying to make money by selling ads via open source software, and this has brought down the quality of open source software.
The statement "Most of the world's best newspapers and magazines struggle to survive while we swim in a sea of free news of questionable quality." is very revealing about the whole situation.
First, it's wishful thinking. The reason those papers struggle to survive is that they forgot the difference between a real magazine and a tabloid, and sacrificed quality factual reporting in favor of slanted sob stories that pull on the heartstrings of the illiterate. The demand for alternative news grew from the vacuum they left, not from the difference in subscription rates.
Likewise, open source software isn't created by a demand for cheap software. Developers who work on these projects have valuable skills and it would cost far less for them to purchase commercial software -- if it met their needs, that is. The closed source companies created the need for alternatives by overselling and underdelivering, failure to listen to customers, and charging early adopters for the privilege of beta testing products released too early.
When people with the technical wherewithal to do it themselves decided that they had no choice but to do so, the natural thing to do would have been to recoup their investment by commercializing their competing solution. But they identified the commercial interests as the cause of the poor quality they were trying to overcome. That's why open source grew.
Now, of course, there's a lot of peer pressure toward open source and it no longer is solely the work of experts creating high quality products (although some still is).
But to blame open source for the struggles of "the world's best software vendors" is quite ridiculous. They caused their own struggles, by making building new wheels a better value proposition than buying them.
SSL has a good "OPEN SOURCE" license. The license guarantees continuity of the product, and we contribute because the product will always be available to us. In contrast, we may contribute to proprietary software and have our ideas capitalized on and possessed, with no guarantee of access to the software or our input ideas in the future (read the "input agreement" when you talk to the vendor).
The GPL and other good licenses tend to protect us from arbitrary decisions by software companies. Such decisions can include complete cancellation of the product or no future bug fixes. So the real cost of committing to a software product tends to favor open source.
It is not just open source developed on shoestring budgets but also well-funded software that can have bugs. While we are talking about SSL, it may be worthwhile to note that Apple also suffered the ignominy of a serious security flaw.