I assume the "MTBF formula" is known.
Then the list of physical parameters to check and control is long:
1- Tau: you already talked about it.
2- The danger window during a clock cycle. This depends on the quality of the synchronizer, which is another reason a dedicated synchronizer flop is designed with specific characteristics.
3- The frequency of the clock to synchronize to. Not much to do about it; this speed is driven by the project's needs.
4- The frequency of changes of the input to synchronize. Not much to do about it; it is what the functionality needs.
***** Now, from this point, what is relatively ignored:
5- The input is assumed to be fully asynchronous, i.e. random with respect to the synchronizer clock: its transitions are evenly distributed over the clock period of the synchronizing clock. In many applications, however, the different clocks are related to each other. The consequences of this are very important, but I don't have the space here to develop them.
6- The input signal is assumed to transition instantaneously, which is not the case, and that effectively widens the danger window of point 2. A very important point to check in the design is that the last stage driving the signal to be synchronized is a good, powerful driver: no long interconnect and no slow ramp transition.
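For reference, the "MTBF formula" assumed above is usually written MTBF = e^(t_r/tau) / (T_W * f_clk * f_data), where t_r is the resolution time available, tau is the resolution time constant (point 1), T_W the danger window (point 2), and f_clk, f_data the two frequencies (points 3 and 4). A minimal sketch, with all numeric values made up purely for illustration:

```python
import math

def mtbf_seconds(t_resolve, tau, t_window, f_clk, f_data):
    """Classic synchronizer MTBF = exp(t_resolve / tau) / (T_W * f_clk * f_data)."""
    return math.exp(t_resolve / tau) / (t_window * f_clk * f_data)

# Illustrative, made-up numbers: 100 MHz sampling clock, 10 MHz input
# toggle rate, tau = 50 ps, danger window T_W = 20 ps, setup time 100 ps.
f_clk, f_data = 100e6, 10e6
tau, t_window, t_su = 50e-12, 20e-12, 100e-12

# In a two-flop synchronizer the first flop has roughly one clock
# period (minus the second flop's setup time) to resolve.
t_resolve = 1.0 / f_clk - t_su
print(mtbf_seconds(t_resolve, tau, t_window, f_clk, f_data))
```

With a healthy tau the exponential dominates and the result is astronomically large; points 5 and 6 above are about the cases where the formula's assumptions quietly stop holding.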
If you are interested in the topic, join us at:
CDC - clock domain crossing in ASIC/SOC
It's purely a statistical MTBF thing. Each additional flip flop stage increases the MTBF exponentially.
The classic equations for metastability show an inverse relationship between MTBF and clock frequency. This intuitively makes sense -- the faster the clock, the more likely the flip flop will fail to resolve a metastable condition in the required time, i.e., before the next clock edge.
As you add additional flip flop stages, the probabilities of synchronizer failure multiply and the MTBF goes up exponentially.
It does seem odd that we have to be comfortable with circuits that "probably" won't fail, but it's easier to get comfortable when the probabilities become absurd. The often-used 2 flip flop resynchronizer can result in MTBFs of thousands of years for many systems. If that's not good enough, add another stage or two and pretty soon you're looking at MTBFs exceeding the age of the universe.
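To put rough numbers on the exponential claim: in the standard model MTBF = e^(t_r/tau) / (T_W * f_clk * f_data), and each extra stage adds roughly one clock period to the available resolution time t_r, so the MTBF multiplies by about e^(T/tau) per stage. A sketch with made-up figures, deliberately choosing a tau that is large relative to the clock period (the regime where extra stages actually matter):

```python
import math

def mtbf(n_stages, f_clk, f_data, tau, t_window, t_su):
    """Rough model: an n-stage synchronizer gives the metastable flop
    about (n - 1) clock periods, minus setup time, to resolve."""
    t_resolve = (n_stages - 1) / f_clk - t_su
    return math.exp(t_resolve / tau) / (t_window * f_clk * f_data)

# Made-up numbers: 100 MHz clock, 10 MHz data rate, tau = 2 ns
# (large relative to the 10 ns period), T_W = 20 ps, setup 100 ps.
args = (100e6, 10e6, 2e-9, 20e-12, 100e-12)
for n in (2, 3, 4):
    print(n, "stages:", mtbf(n, *args), "seconds")
# Each extra stage multiplies the MTBF by exp(T / tau) = exp(10ns / 2ns),
# about 148x per stage in this example.
```

The per-stage multiplier e^(T/tau) is why a marginal two-flop design can often be rescued by one more stage.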
The classic two flop synchronizer works on the principle that, even if the first flop becomes metastable, there will be enough time for it to recover and meet the setup requirement for the second flop.
Here and there (most recently at a talk given at Mentor's U2U conference) I have heard mention of synchronizers with three and even four stages. According to the speaker, these extra stages are necessary when the metastability recovery time constant (he called it “tau”) is excessive relative to the clock period. This problem occurs when low-voltage flops need to operate at high clock frequencies.
Unfortunately, I did not get the opportunity to ask how the extra stages were supposed to help. It seems to me that if there is not enough time for the first flop to stabilize before feeding flop #2, then there won't be enough time between flops #2 and #3 either. And the same between #3 and #4, or between any two adjacent flops in the chain, no matter how long you make it.
Is it simply statistical? The Xilinx speaker at U2U did say that “tau” was not strictly deterministic. Perhaps the chance that all three leading flops will experience long metastability recovery times is remote enough that the problem can be considered solved. I'm not sure I'm comfortable with having circuits that “probably” won't fail feeding circuits that assume their inputs never fail.
It occurs to me that a safer solution might be to divide the clock by two and drive the clock inputs of a conventional two-stage synchronizer with this half-rate but still synchronous clock. The first flop now has twice the time to recover, at the cost of additional latency. It even occurred to me that Xilinx could be using this method, but the speaker was quite explicit about four flops.
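For what it's worth, the divide-by-two idea can be compared to the four-flop chain numerically under the textbook model MTBF = e^(t_r/tau) / (T_W * f_sample * f_data), assuming (and this is only a rough model, not Xilinx's actual analysis) that an n-stage full-rate chain accumulates about (n - 1) clock periods of resolution time, while the half-rate two-flop gets one half-rate period, i.e. 2T. All figures below are made up for illustration:

```python
import math

def mtbf(t_resolve, f_sample, f_data, tau=2e-9, t_window=20e-12):
    """MTBF = exp(t_resolve / tau) / (T_W * f_sample * f_data)."""
    return math.exp(t_resolve / tau) / (t_window * f_sample * f_data)

f = 100e6          # made-up full-rate clock
T = 1.0 / f
t_su = 100e-12     # made-up setup time
f_data = 10e6      # made-up input toggle rate

# Four flops at full rate: roughly (4 - 1) = 3 clock periods to resolve.
full_rate_4flop = mtbf(3 * T - t_su, f, f_data)

# Two flops on a divided-by-two clock: one half-rate period = 2T to
# resolve, with metastable events sampled at half the rate.
half_rate_2flop = mtbf(2 * T - t_su, f / 2, f_data)

print(full_rate_4flop, half_rate_2flop)
```

Under these assumptions the four-flop chain comes out ahead by a factor of about e^(T/tau)/2, because it accumulates one more period of resolution time while both options add the same four cycles of latency at the original clock rate; that may be why the speaker preferred it to clock division.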