Interesting that the grid is designed to prevent the PUF from being read externally, and also that, as I hypothesized, the noise changes over time. Obviously that will be an issue if the algorithm can tolerate 30% variation and the chip ends up at 80% variability.
@DrQuine: I see three potential issues with the physically unclonable function (PUF)...
At startup some bits do vary across voltage and temperature -- also the number of bits that are "noisy" increases over the life of the chip -- I think they start around 15% and can increase to 80% -- the function/algorithm is tolerant up to 30%.
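For readers wondering what "tolerant up to 30%" might mean in practice, here is a minimal sketch, assuming a simple fractional-Hamming-distance comparison against the enrolled response. Real SRAM-PUF products typically use a fuzzy extractor with error-correcting codes; nothing below reflects the vendor's actual algorithm.

```python
def fractional_hamming_distance(enrolled: bytes, measured: bytes) -> float:
    """Fraction of bits that differ between two equal-length PUF readouts."""
    assert len(enrolled) == len(measured)
    differing = sum(bin(a ^ b).count("1") for a, b in zip(enrolled, measured))
    return differing / (len(enrolled) * 8)

NOISE_TOLERANCE = 0.30  # "tolerant up to 30%", per the comment above (assumed meaning)

def puf_matches(enrolled: bytes, measured: bytes) -> bool:
    """Accept a startup readout if it falls within the noise budget."""
    return fractional_hamming_distance(enrolled, measured) <= NOISE_TOLERANCE
```

On this reading, a chip whose noisy-bit fraction drifts from 15% toward 80% over its life would eventually fail the check, which is exactly the aging concern raised above.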
Re monitoring the results of the PUF -- there's no way for someone outside the chip to see the contents of the PUF RAM -- there are also other protections, some of which they won't talk about, but one is a grid in the upper metal layers -- if anyone tries to insert a physical probe, the grid detects it and the PUF shuts down.
I see three potential issues with the physically unclonable function (PUF). First, is the result at startup consistent across voltages? I'd predict that a fresh battery (or a device connected to a charger) would provide a slightly higher voltage, so bits that were "on the fence" might start on, whereas a very low battery might cause them to start off. Second, if the device experiences trauma, flexing or compression of the chip might slightly shift the geometry and bias of the bits; can we depend on the PUF remaining constant through time and use? Finally, couldn't somebody monitor the results of a PUF and then create an emulation device preprogrammed with that bit pattern? It might always boot with the exact same pattern, but it would be within the expected range and therefore pass the security check.
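On that third point, here is a purely hypothetical sketch of why a naive closeness check would indeed pass a replayed pattern, plus one assumed countermeasure: since a genuine SRAM PUF is never bit-for-bit identical across power cycles, a verifier could treat a zero-noise response as suspicious. The thresholds below are illustrative assumptions, not anything a vendor has confirmed.

```python
def plausible_puf_response(enrolled: bytes, measured: bytes,
                           max_noise: float = 0.30,   # assumed error-tolerance budget
                           min_noise: float = 0.01) -> bool:
    """Accept readouts close to enrollment but reject impossibly perfect ones."""
    differing = sum(bin(a ^ b).count("1") for a, b in zip(enrolled, measured))
    distance = differing / (len(enrolled) * 8)
    return min_noise <= distance <= max_noise

# An emulator replaying the exact enrolled pattern shows zero noise:
enrolled = bytes([0b10110010] * 32)
print(plausible_puf_response(enrolled, enrolled))  # False -- too perfect to be real silicon
```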
Most secure chips are used as the root of trust for the rest of the system, for example to implement a secure boot process. If the secure chip were replaced, the system wouldn't boot up, since the security key required for secure boot would be missing.
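As a rough sketch of that root-of-trust idea, assume a hypothetical read_puf_key() that reconstructs the device-unique key from the PUF at power-up. Real secure-boot chains usually verify an asymmetric signature over the firmware; an HMAC stands in here for brevity.

```python
import hmac
import hashlib

def read_puf_key() -> bytes:
    """Hypothetical: reconstruct the device-unique key from the PUF."""
    raise NotImplementedError("device-specific")

def verify_firmware(image: bytes, expected_tag: bytes, key: bytes) -> bool:
    """Recompute the firmware's MAC with the PUF-derived key and compare."""
    tag = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected_tag)

def boot(image: bytes, expected_tag: bytes) -> None:
    # A swapped chip yields the wrong key, the tag check fails, and boot halts.
    if not verify_firmware(image, expected_tag, read_puf_key()):
        raise SystemExit("secure boot: firmware verification failed")
    # ...transfer control to the verified image...
```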
What a great idea - special secured chips from the same fab that makes the counterfeits.
The nonsense in all of this is the management of the security.
Say my US distribution company were to pull apart the unit, pop the "secure" chip off the PCB and replace it with the same part that someone else had programmed, put the unit back together, and on-sell/deliver it. Who would find out? No-one.
Even if the chip were programmed to frequently broadcast its "secured" status, who would listen? What would they do about it? Stop the launch, hold the plane, not fire the missile, switch off the pacemaker? How easy would it be to spoof or obfuscate such a status message?
The real issue is not the reconfigurable silicon, but rather the programming tools. How do I know that my own (or my contractor's) tools are not compromised with some nasty little timer?
Sure, the chipmakers say every little intrinsic security improvement is helpful. Is it, or is it just window-dressing to distract us from a more likely reality?