@Rick, I was struck by the number of Linux variants in the group. Add them up (arguably, even including Android) and it becomes a very large percentage. I would also be curious as to how many are using buildroot, Yocto, or the other Linux kernel personalization systems to create tailored distributions. That used to be the realm of the specialists, but now they are very mature and approachable.
FreeRTOS is not an RTOS; it is only a small kernel that runs on virtually all mid-size and larger MCU devices. Written in C, it is very portable, and because it has no I/O, higher-level protocols, or significant demos, there is not much else to know. SafeRTOS was the 12-function-call version of this the last time I looked.
I think that the popularity of this reflects the general lack of business training in the community with respect to time to market, software reuse and software engineering economics.
FreeRTOS is really a build-your-own RTOS: a free kernel plus months of effort on I/O models, I/O drivers, testing, integration, and documentation before you start your actual development. Engineers do like to build their own solutions, but this is definitely a money-losing strategy unless you see engineering time as very low cost.
For a few thousand dollars, you can purchase quality offerings which provide standard APIs, connectivity of your choice, documentation, testing and integration of modules, and support for the entire offering. What is the benefit of spending tens of thousands more and months of development time to enable FreeRTOS, or worse still SafeRTOS, development with a poorer-quality alternative?
If you know anything about FreeRTOS, this is what you should know and understand: free is definitely very, very expensive once you understand the costs involved.
I should mention that I work for a company that sells complete RTOS solutions. The facts are the same for a FreeRTOS vs an actual RTOS comparison regardless of which RTOS you choose.
One of the biggest hurdles in building an embedded system is device drivers. I wonder how well FreeRTOS supports various devices. Probably not well, unless it is one of the Linux varieties.
I wonder how popular OpenWRT is. There is a wide variety of device driver support for the platform. In addition, you can pretty much turn your home WiFi router into an OpenWRT-enabled router in a couple of simple steps, and information is widely available online. It is one of the Linux varieties, so any Linux guru can get around the device and start adding personal touches to the router. GNU gcc is well supported, and as for popular open-source applications, you name it: you can build it by simply choosing an option under 'make menuconfig'.
OpenWRT is just a cut-down version of Linux; it has no overlap with FreeRTOS. The latter is for resource-limited chips such as MCUs, where Linux does not fit at all. I'm curious about Contiki OS, which is even smaller than FreeRTOS but a better fit for extremely resource-limited devices such as IoT sensors. While Contiki is tiny, it may win in quantity.
I was shocked when I saw this data at the ESC session. The reasons behind this were also interesting, so I hope we'll have a follow-up 'blog on the FPGA results alone.
I do FPGA design professionally, and what I find is that a lot of design organizations are scared of FPGAs and don't have people who know how to design with them. In a way that's good for a free-lancer like moi, but cuts down the overall volume of work. One effect I think is happening is that IDEs and development boards with good communities are making it easier for people to use embedded CPUs, but the same is not happening with FPGAs. My own experience is that each new release of FPGA tools is harder to use and runs slower. Perhaps that's only because it does more, but if my impression is true it could be one of the reasons you're not seeing as many FPGAs getting designed in.
It could also be that what people are designing these days has shifted away from FPGAs. For example, all these Internet of Digital Things products just need a standard SoC with standard interfaces. It's not like a data comm box that has to speak strange protocols or have extremely high data channel density, which is an excellent fit for an FPGA.
So tell me, did the FPGA vendors find out about this result ahead of time and decide not to exhibit at the show? :-)
rick merritt wrote: Your comment on the FPGA tools is interesting...can you document or is it a subjective impression?
Mostly subjective, though I could probably document it. My recollection is that Xilinx ISE 10.1 (I think) took almost twice as long to synthesize as 5.2, but I'd have to run a test to be sure. I didn't find it that surprising, because logic synthesis is similar to an optimizing compiler, and each new technique (such as being more clever at finding common sub-expressions) adds time.
When I write Verilog, I try to anticipate what the synthesizer is going to do and write my code to make synthesis easier. So my logic tends to produce a good result even if the synthesizer isn't using the most advanced techniques. This means the longer run times of the later version don't produce improved results.
I like to synthesize a lot so that I can immediately see if a logic change I made requires an unexpectedly large number of look-up tables. If I make a bunch of changes and the LUT count jumps, I don't know which change(s) are responsible. This is another problem I have with the tools -- I don't think there's a practical way to review what the synthesizer did. With CPLDs, the tools show you the synthesized logic equations. With FPGAs, they give you a set of automatically-generated schematics which I find useless once the logic reaches a modest level of complexity.
I don't know if FPGA tools are doing a good job with incremental techniques yet, i.e., keeping as much of the previous result as possible so they don't have to repeat unnecessary work. This has the potential for huge speed-ups, particularly in place and route. However, since I still hear of hours-long place-and-route runs on complex FPGAs, I assume this speedup hasn't happened yet. I'll be glad to hear comments from others.
betajet: What about C-to-gates tools? What's your opinion on them? And are they catching on with FPGA pros and non-pros?
And maybe the reason it takes more time to synthesize is that the chips and designs are more complex, so synthesis time should grow exponentially with this complexity, but smarter synthesis algorithms reduce it to 2x?
I haven't ever tried them myself. "C-to-gates" looks like it could do a good job with certain kinds of FPGA applications, specifically implementing high-performance DSP algorithms.
From the little I've seen, it seems to me that they usually convert C to Verilog or VHDL and then use the vendor's tool chain. If this is true, "C-to-gates" doesn't help me, since it adds an additional tool to the design loop and makes it even harder for me to see how my source code maps into actual logic.
When I design an FPGA, I have my hardware designer hat on and I have a very clear idea what hardware I want to end up with. Then I have to figure out which Verilog templates to use so that the synthesizer generates the hardware I've already visualized. I'm perfectly happy to have the synthesizer do logic minimization for me, since that's tedious and error-prone to do manually. But I want to have control over when I use LUTs as RAMs or shift registers, and it's annoying to keep checking the synthesis report to determine that it did what I wanted.
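The control I'm describing usually comes down to picking the right coding template. A sketch of the kind of templates I mean (inference rules and attribute names vary by vendor and tool version, so treat these as illustrative, not definitive):

```verilog
// Template most synthesizers infer as a LUT-based shift register (SRL):
module lut_shift #(parameter DEPTH = 16) (
    input  wire clk, en, din,
    output wire dout
);
    reg [DEPTH-1:0] sr;
    always @(posedge clk)
        if (en) sr <= {sr[DEPTH-2:0], din};
    assign dout = sr[DEPTH-1];
endmodule

// Template inferred as a simple single-port RAM (distributed LUT RAM or
// block RAM, depending on depth and the vendor's defaults):
module lut_ram #(parameter AW = 5, DW = 8) (
    input  wire          clk, we,
    input  wire [AW-1:0] addr,
    input  wire [DW-1:0] din,
    output reg  [DW-1:0] dout
);
    reg [DW-1:0] mem [0:(1<<AW)-1];
    always @(posedge clk) begin
        if (we) mem[addr] <= din;
        dout <= mem[addr];
    end
endmodule
```

Even with templates like these, I still end up checking the synthesis report to confirm the tool inferred what I intended.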
While there may be an additional tool in the flow (in this case HLS, high-level synthesis, i.e., C to gates), being able to work at the C (C/C++, SystemC) level, at a higher level of abstraction, will speed up your design process and allow you to address larger-scale problems. And you shouldn't limit yourself to hardware: some functions (if you decompose your application into multiple tasks) may be better off as software, now that there are FPGAs with embedded processors (Xilinx Zynq, Altera SoC FPGA). Just find the optimal hardware/software partitioning for the job.
Please take advantage of our Resources page at Space Codesign, with links to information on the subject (ESL, SystemC, HLS, HW/SW co-design) including some case studies that we have presented at recent conferences. (usual disclaimers)
I see an interesting correlation in the trends. It seems to me that the increase in the use of out-house operating systems correlates with the increase in project lateness. Personally, I prefer in-house because if there's a problem or a necessary change, you're familiar with the code so you can get in there and make the change with minimal impact elsewhere in the code. With out-house, it might be very difficult even to find where to make the change, and the side-effects of the change could be hard to predict. It seems to me that estimating how much effort is involved would be quite difficult, making it hard to maintain the schedule.
I think that it is certainly easy to change something you know and understand. But for sourced operating systems, high-quality technical support fully addresses (or should fully address) any feature additions and changes. We certainly make sure this happens, and happens in a timely fashion. It is easy for an RTOS vendor to make these changes, and that leads directly to a path for changing the OS for a customer.
The real cost of in-house is the delay and the expense of lost time to market. The market share that is lost costs far more than the purchase price of the RTOS, so you immediately end up losing money doing your own RTOS before you have even started development.
RTOS vendors spend millions on product development, even the smaller vendors. How can an in-house effort possibly do better unless they spend similar amounts? Yes, with in-house you can change it easily, but you don't have to if you have purchased a quality product with support.
Think about the cost of documentation and testing as well as the lost time if you throw away the documentation because your project is taking too long. You need to think about how much better your product will be if you focus on your value added application and use the rich feature set that you can purchase.
The economics clearly are not there for a build-your-own OS except in very specialized cases.
RoweBots1 wrote: RTOS vendors spend millions on product development, even smaller vendors. How can an in-house effort possibly do better unless they spend similar amounts?
An RTOS vendor has to write an OS for a general customer base. An in-house OS only has to support the features needed for the product. An in-house OS may be as simple as a single main thread plus interrupt service routines, with no preëmptive multi-tasking and little or no memory management.
RoweBots1 also wrote: Think about the cost of documentation and testing as well as the lost time if you throw away the documentation because your project is taking too long.
Funny you should mention documentation. My main experience with a vendor RTOS was for a datacom product that originally had its own "OS": single thread plus ISRs. All the code was developed in house. We needed to add a vendor stack for an optional feature, and that stack required an RTOS. However, the RTOS had a per-unit royalty, so we only wanted to ship the RTOS if the unit used the optional comm feature. The solution was easy: if we wanted the RTOS present, we simply ran our main thread as one of the OS tasks.
Our product was originally based on a Motorola (now Freescale) 68360 QUICC. The RTOS had excellent documentation for 68K and it was easy to link in the main thread and ISRs because the ASM-level ISR conventions were clearly documented.
A couple of years later we switched to PowerQUICC. Well, for PowerPC the documentation was terrible -- no documentation of the ASM-level ISR conventions. The vendor wanted you to use one of their "board support packages" and we weren't on the list. Plus, the vendor wanted you to buy their C compiler.
So I ended up creating my own "documentation". Specifically, I had to disassemble the task switch machine code to see how they were saving registers and all. Once I had this information, it was straightforward to get it working. But I learned to get out the salt shaker when an RTOS vendor tells me that the RTOS will save time to market.
Yes, I hear you; there have been lots of horror stories in the embedded software world, along with clever workarounds.
" An in-house OS only has to support the features needed for the product. An in-house OS may be as simple as a single main thread plus interrupt service routines, with no preëmptive multi-tasking and little or no memory management."
I think this is a bit myopic. Rarely are all features known up front for the life of the product. Customer requirements change. That is why you want something that can grow with you as your product evolves and eliminate lots of expensive development and maintenance. Often developers think they can do this themselves, but almost all post-mortems will show that this was not the best overall choice.
The other thing is that with the right software and the right APIs, you make your application portable to a variety of OS platforms. Now you can develop a better, richer application which offers customers more and is easily maintainable. These are hidden benefits; after all, the project manager for the first build is rarely the manager for the later versions, and he or she is not evaluated on the lack of maintenance costs or on foresight about portability.
Personally I am for the out-house operating system. That way the application code does not meddle with the OS code for simple workarounds; otherwise it all becomes spaghetti, with fixes and workarounds in the OS code to take care of application idiosyncrasies.
Technically speaking, WiFi and BLE are not competing technologies. They serve different purposes. WiFi serves high-bandwidth applications such as video streaming and online gaming; BLE serves the accessory connectivity market, such as earpieces and speakers. The bandwidth difference is hundreds of Mbps vs. 1 Mbps. Although they can be put into the same chart to show how well each penetrates the market, they shouldn't be compared directly.
Having said that, I believe the growth of BLE will be a lot more substantial in the coming years. On average, every household of 4 people needs 1 WiFi router, plus 4 smartphones, 4 tablets, and 4 laptops. Those same 4 people may have 1 Fitbit each, 1 or 2 earpieces, and 1 or 2 portable speakers. In addition, the household may have 1 or 2 remote controls, console game controllers, a smart smoke detector, and a smart thermostat. As IoT grows, the demand for BLE solutions is going to grow.
People should always consider the old but true saying, "you get what you pay for." For small jobs where your budget is very small, these "free" software bundles make sense. As products and projects get more expensive, support becomes increasingly important for controlling schedule. Great: you got the software and found a bug; now YOU need to spend money (time) to find out what's wrong rather than the vendor. When the vendor does it, your schedule isn't hit so hard.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with a few developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.