I think you can believe them as much as you believe anything--that is, find out what they're actually doing and then decide for yourself if it makes sense. Compare results across platforms, and if the numbers agree within 10%, consider them consistent.
You mentioned that the BERT is the standard for measuring TJ. The BERT from company "A" also gives RJ and DJ measurements, and company "T" offers an even deeper decomposition of the jitter components. Can I believe these numbers, or should I only trust the TJ measurement?
You don't want to get me started on test patterns. Emerging specs require PRBS31, which is a stupid choice because it's so long that no equipment (not even BERTs) can measure TJ with appreciable statistical significance on that pattern.
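To put rough numbers on why PRBS31's length hurts, here's a back-of-envelope sketch. The 10 Gb/s line rate and the 3/BER confidence rule are my own assumptions for illustration, not figures from the discussion:

```python
# Back-of-envelope: PRBS31 length vs. BER-1e-12 statistics.
# Assumed: 10 Gb/s line rate; "3/BER" rule for ~95% confidence
# of BER below target with zero observed errors.
prbs31 = 2**31 - 1            # bits in one PRBS31 repetition
rate = 10e9                   # assumed line rate, bits/s
ber = 1e-12                   # target BER

bits_for_ber = 3.0 / ber      # bits needed, zero errors, ~95% CL
reps = bits_for_ber / prbs31  # full pattern repetitions in that run

print(f"One PRBS31 period: {prbs31:.2e} bits "
      f"({prbs31 / rate * 1e3:.0f} ms at 10 Gb/s)")
print(f"Bits for BER 1e-12 at 95% CL: {bits_for_ber:.1e} "
      f"({bits_for_ber / rate / 60:.0f} min of acquisition)")
print(f"Each unique pattern bit is exercised only ~{reps:.0f} times")
```

So even a five-minute error-free run visits each of the ~2.1 billion distinct bits only about 1,400 times, which is why per-edge statistics on PRBS31 are so thin.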
Check the DesignCon site for a paper last year by Marty Miller on test pattern lengths.
I wrote one for Tek, but the link seems to have gone away. Send me a note and I'll get it to you: firstname.lastname@example.org
Thanks Ransom for another great presentation. I was wondering if you have compiled or know of some interesting article that covers the different jitter signatures (PDF terms or other) including compound effects of more than one noise source?
The other thing to remember is that *there does not exist* equipment that can measure TJ(BER) with accuracy better than 10% or so. Equipment may be repeatable to a few percent, but it won't get you closer than about 10% to the "true" value, so don't work late trying to get below 10%; you can't.
What qualifies as a suitable test pattern for BER testing? A worst-case one that stimulates all the negative effects, or a "real world" one that reflects the statistical occurrence of the negative contributors?
The question involves differences in left and right sigma affecting the TJ result.
The answer: it can get ugly. For electrical systems, when they differ widely, look deeper; something is probably wrong with the transmitter, or you need to fit farther and farther down the tails.
For optical systems it's more complicated and you see left-right asymmetries more often.
My prejudice, especially since I don't have to pay for test equipment, is to do the full measurement with a BERT and see what's really going on. If there really are asymmetries, you'll have to check the accuracy of your dual-Dirac estimate against the BERT measurement.
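For readers following the dual-Dirac math: in the linear model TJ(BER) = DJ(dd) + Q(BER)*(sigma_L + sigma_R), so averaging the two sigmas leaves TJ itself unchanged, but the asymmetry still moves the optimal sampling point. A small sketch of this, with all numbers made up for illustration:

```python
import math

def q_ber(ber):
    """Q such that the one-sided Gaussian tail 0.5*erfc(Q/sqrt(2)) = ber.
    Found by bisection; the tail probability decreases as Q grows."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

dj = 0.20                      # DJ(delta-delta) in UI (assumed)
sig_l, sig_r = 0.008, 0.015    # asymmetric RJ sigmas in UI (assumed)
q = q_ber(1e-12)               # ~7.03 for BER = 1e-12

tj_separate = dj + q * (sig_l + sig_r)
tj_averaged = dj + 2 * q * (sig_l + sig_r) / 2  # averaged-sigma model

print(f"Q(1e-12)              ~ {q:.2f}")
print(f"TJ, separate sigmas   : {tj_separate:.4f} UI")
print(f"TJ, averaged sigma    : {tj_averaged:.4f} UI")
# The two TJ values agree by construction, but the bathtub walls
# close unequally, shifting the best sampling point by roughly:
print(f"Sampling-point shift  ~ {q * (sig_r - sig_l) / 2:.4f} UI")
```

So the averaged model's risk isn't in the TJ number; it's in where you sample and in how each bathtub wall is predicted, which is where the BERT comparison Ransom suggests earns its keep.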
>> How does acquisition noise influence the BER/TJ measurements - is there an established way to calculate those effects out?
Acquisition noise is always tricky. In my experience it's worse on real-time scopes than equiv-time scopes and BERTs. To subtract it out you have to make assumptions about how it's distributed. If you assume the noise is Gaussian you can come up with techniques to remove it, though it's never clear how that affects your systematic uncertainty.
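One common technique under that Gaussian assumption is quadrature subtraction: if the instrument noise is independent of the signal's jitter, the variances add under convolution, so the instrument's contribution can be subtracted from the measured variance. A hedged sketch (the example sigmas are invented):

```python
import math

def subtract_noise_sigma(sigma_measured, sigma_instrument):
    """Remove an instrument noise floor assumed Gaussian and
    independent of the signal: variances add under convolution,
    so subtract them in quadrature."""
    if sigma_instrument >= sigma_measured:
        raise ValueError("instrument noise dominates; result undefined")
    return math.sqrt(sigma_measured**2 - sigma_instrument**2)

# Assumed numbers: 1.00 ps measured RJ sigma, 0.40 ps scope noise floor
rj_corrected = subtract_noise_sigma(1.00, 0.40)
print(f"corrected RJ sigma: {rj_corrected:.3f} ps")  # ~0.917 ps
```

Note this only removes the Gaussian part; any non-Gaussian instrument noise, and the uncertainty in your estimate of the noise floor itself, land in your systematic error, which is Ransom's caveat.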
I didn't hear back from Ransom about the reasoning behind his statement that FEXT was more important than NEXT. It is commonly accepted that FEXT is less important than NEXT, especially for embedded striplines.
Thanks for your great presentation. I prefer Rastaplot and will be using that terminology from now on. ;) Slide 23: Is there a case where sigma R and sigma L are so different that averaging them would cause major inaccuracies in the model?
I'll dig up a couple of references for you in a few minutes. Or if you want to scan through my web page www.ransomsnotes.com, there's a list of white papers if you scroll down that includes dual-Dirac references.
Hi everyone. If you are new to the course...The streaming audio player will appear on this web page when the show starts at 12:00 pm Eastern today. Note however that some companies block live audio streams. If when the show starts you don't hear any audio, try refreshing your browser.