I took a look at the datasheet for the NXP LPC4300, a ~200MHz Cortex M4F that also includes an M0 coprocessor. With the M0 in reset and all peripherals turned off, that device consumes 81.5mA at 3.3V, which is about 0.27W. The Quark datasheet gives no direct information on this; the only figures listed are the maximum allowables the circuitry can handle, not actual consumption. So I derive my numbers from the statements made at IDF. It was stated that the Quark is 1/5 the size and 1/10th the power of the Atom. The highest power consumption I could find for an Atom was 6W with all peripherals enabled, and that was for a multi-core server device. Even so, this would put the Quark's power consumption around 0.6W. The number may climb once you enable certain peripherals, especially the DDR interface and PCIe. If you were to double the clock of the LPC4300, it would land in the same power consumption range. I am guessing that my Quark estimates are conservative, so the Quark chip is not all that bad. I will reach out to Intel to see if I can get some actual numbers and report back.
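The arithmetic behind those estimates is simple enough to write down. A minimal sketch; the Quark figure is my assumption derived from Intel's "1/10th the power of Atom" statement, not a measured value:

```python
# Back-of-the-envelope power comparison. The LPC4300-class figure comes
# from its datasheet (81.5 mA at 3.3 V, M0 in reset, peripherals off);
# the Quark number is an *estimate* from the "1/10th of Atom" claim.

lpc_power_w = 3.3 * 0.0815
print(f"LPC4300-class idle power: {lpc_power_w:.3f} W")  # 0.269 W

# Highest Atom figure I could find was 6 W; Quark claimed at 1/10th.
quark_power_w = 6.0 / 10
print(f"Quark estimate: {quark_power_w:.1f} W")  # 0.6 W

# Crude assumption: doubling the clock roughly doubles dynamic power.
print(f"LPC at 2x clock (rough): {2 * lpc_power_w:.3f} W")  # 0.538 W
```

This is deliberately rough: it ignores static leakage and assumes dynamic power scales linearly with clock, but it is enough to show the two parts end up in the same ballpark.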
There are a couple of things that make this device unique. The first, while it has nothing to do with performance, is that it is made by Intel. Intel may control the market for low volume, high margin processors, but as for overall devices shipped, Intel only has 2-3%; the other 90+% is embedded controllers. Intel is starting to make a move for this space, and it is a big strategic move. The impetus may be to increase the volume through the fabs in order to get more of a return on their machines.
The next thing that makes this unique is that, from what I can tell, it is being made on a 32nm process node. This is smaller than anything else in this market segment. I believe the smallest process node applied to Cortex M devices is 65nm (TI is producing their Tiva line at that node), and the low end Cortex A devices are, I think, on a 45nm node. Obviously the higher end Cortex A devices are on smaller nodes, but that is not the space the Quark is competing in. This should give it an inherent power advantage, and Intel has indicated that the device would easily transition to 22nm. All of this means the Quark should be able to provide advanced computation at lower power than a competing device. It would enable high speed FFT and filtering calculations right at the sensor instead of offloading that work to post processing, which would further enable software defined radios and other complex devices while consuming little power.
As for the Galileo dev board itself, you can now do some pretty advanced signal processing that was previously not available on other Arduino compatible boards. I would imagine you should be able to do a 1024-point FFT in under 1ms, perhaps much faster in the 0.1ms range (this would depend on how well someone could write an optimized FFT routine in Arduino). Another thing: the board is compatible with Arduino, but it is also stated that it can be programmed with an open source version of C. I do not have many details on this, but to me that is even more exciting.
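A quick sanity check on that FFT claim: a radix-2 complex FFT takes about (N/2)·log2(N) butterfly operations. A minimal sketch, where the 20 cycles per butterfly is purely my guess for straightforward non-vectorized C on the Quark's 400MHz core (the function name and constants are illustrative, not from any datasheet):

```python
import math

def fft_time_estimate(n_points, clock_hz, cycles_per_butterfly):
    """Rough run time of a radix-2 complex FFT.

    A radix-2 FFT of N points performs (N/2) * log2(N) butterflies;
    multiply by a guessed cycle cost per butterfly and divide by clock.
    """
    butterflies = (n_points // 2) * int(math.log2(n_points))
    return butterflies * cycles_per_butterfly / clock_hz

# 1024 points at 400 MHz, assuming 20 cycles per butterfly:
t = fft_time_estimate(1024, 400e6, 20)
print(f"Estimated 1024-point FFT time: {t * 1e3:.3f} ms")  # 0.256 ms
```

Even with a pessimistic cycle count that estimate lands well under 1ms, which is consistent with the guess above; a hand-optimized routine could plausibly reach the 0.1ms range.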
As I understand it, a few boards were handed out. I tried contacting Intel to see if I could get my hands on one early, but that does not seem to be currently possible. They will first distribute them to universities, and then begin selling them to the public on Nov 29. The Intel rep told me they would notify me if they decide to distribute any before their initial sale date.
Their datasheets are very different from the datasheets I am used to reading. You can tell this device comes from noble ancestry; interestingly, there was absolutely no reference to the term PWM anywhere in the datasheet. It was called a square wave output. Even beyond that, the terminology differed from other datasheets. It will be interesting to see if Intel pushes further down into lower clock speed devices. If the Quark is successful, I could see them going after the Cortex M0+ market. I also asked the rep whether any partners would soon have hardware coming out based on the Quark core, but I did not get a response to that question.
As to the number of layers, I think that with BGA devices at this pitch, you can get the first two rows of balls out on the first layer, and then you add a layer for every extra row. I have not yet had the opportunity to do anything with a BGA device, though I am really itching to use the Freescale KL02 in a project. I think I might be able to cheat the design guidelines at OSH Park enough to route it on a single layer.
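That rule of thumb is easy to express directly. A minimal sketch of my reading of it (the function name is mine, and real escape routing also depends on ball pitch, via size, and the fab's trace/space rules, none of which this accounts for):

```python
def escape_layers(ball_rows):
    """Rule of thumb: the outer two rows of a full BGA grid escape on
    the first routing layer, then one extra layer per additional row.

    ball_rows is the number of rows from the package edge to its center.
    """
    return 1 + max(0, ball_rows - 2)

# A 10x10 full-grid BGA has 5 rows from edge to center:
print(escape_layers(5))  # 4

# A small part like a 4-wide CSP has only 2 rows from edge to center,
# which is why a single routing layer might be enough for it:
print(escape_layers(2))  # 1
```

The single-layer hope for a small CSP like the KL02 follows directly: with only two ball rows to the center, the rule says everything escapes on layer one, provided the fab's design rules allow a trace between adjacent pads.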
With these dual chip Arduino compatible boards, I need to look into whether the Arduino interface runs on the Cortex A chip or on the other, lower performance 32-bit chip. The new Arduino board seems interesting in that you can create your sketches in Linux running on the Cortex A chip.
The new Sitara-based Arduino will actually consist of an AM335x Sitara running Linux plus an AVR8 for shields and such -- very similar in concept to the Udoo (except the Udoo uses an i.MX6 plus an Arduino Due compatible SAM3X).
In fact, Udoo probably takes the crown for fastest Arduino, since you can get it with a quad core 1GHz i.MX6.
The Galileo board doesn't use a co-processor, so maybe it's the fastest Arduino board that doesn't use a co-processor.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with some developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests discuss sensors, security, and lessons from IoT deployments.