One reason to use an 8051 is that the C compilers available for the 8051 have had the snot beaten out of them from an optimization point of view. Franklin & Keil have really done a lot with their 8051 compiler, so embedding an 8051 core into, say, a Xilinx FPGA is much more attractive than using the Blaze core.
Synthesizable doesn't mean optimized, verified, and supported.
What seems to be freely available is based on the original Intel 8051, which used 12 clocks per machine cycle and had a single UART as its transmission link. Nowadays, professional designers are more demanding. A present-day commercial 8051 can be highly optimized for performance: 1 clock per machine cycle, more registers (e.g. 8 DPTRs), a range of peripherals (I2C, UART, SPI), sophisticated IRQ controllers, a tweaked ALU, or RAM/ROM extended to megabytes instead of 64 KB. When you realize that such a design, i.e. an optimized 8051 together with peripherals and a SW debugger, is already verified and integrated, it's really worth paying for the core to offload your design team and speed up product development. In addition, there is the small matter that this vastly improved design can make your end product much more competitive.
You've already mentioned support; it's very, very important to have assurance that you can write an email or make a call and get an answer within 24 hours. Time, or rather its lack, is what moves all of us forward ;)
It's hard to imagine that a company working on an ASIC design would take the risk of using a free core, even if it's synthesizable.
There are alternative 8051 synthesizable IP cores available. The Digital Blocks DB8051C has the smallest VLSI footprint, with a minimum CPI of 3. Most "small" 8051 implementations require 20% more logic resources and start at a CPI of 4.
We do offer full support, and we are committed to our customers' success.
The way I heard it, this junior engineer and his boss went to the initial kickoff meeting for what was to become the 8051 -- they only went because there were going to be free sandwiches and food and stuff.
At the meeting everyone discussed what they thought the 8051 should be capable of doing. Afterwards, the guy I'm talking about went off and sketched out a block diagram of an architecture that would address all of the requirements -- showed it to his boss -- it went up the line, and this was the architecture that was used.
I'm not saying that it wasn't based on an earlier device, but there was some new and innovative stuff there, as evidenced by the fact that the 8051 and its derivatives have sold more units than any other microcontroller in history...
Whenever I see "8051" I think "I know the guy who designed the architecture for the 8051" ... it's a funny story, because he only went to the original meeting for a free sandwich ... maybe one day I will tell it to you ...
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of them need to be smart enough to act on data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.