Are we using the debugging resources available in so many MCUs in the most effective manner?
In the olden days, processors just didn't have any on-board debugging features. This was a boon for tool makers, and kept me fed for many years as my company made in-circuit emulators. Early ICEs offered little functionality beyond a handful of instruction breakpoints, but over time they sprouted a wealth of features like data breakpoints, trace, profiling, and much more. The 80s and 90s were halcyon days for the ICE industry.
That business is all but gone. Sure, pockets still exist. Microchip's REAL ICE and a handful of other products still keep a flicker of life in the emulator business. But very high bus speeds, tiny all-but-unprobeable packages, and the staggering array of on-chip debugging features hollowed out that industry.
Multiple hardware breakpoints and watchpoints are now common on-chip, as are trace and much more. In the ARM market, of course, vendors are free to pick and choose (for a fee) from a variety of debug modules, or to have none at all.
How many of these resources do you typically use at any one time while debugging? I bet the answer is generally no more than a few.
I'd like the IDE vendors to offer a mode that automatically enables these resources to capture common problems. For instance, wouldn't it be nice if the tools always monitored stacks? Or watched large data structures for buffer overruns? Or captured null pointer dereferences by watching for accesses through location zero?
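To make that last idea concrete, here's a minimal sketch of how a tool (or a few lines of startup code) could park a data watchpoint on address zero using the DWT unit found in many Cortex-M3/M4 parts. The register and bit names are the standard CMSIS-Core ones; the device header name and the function name trap_null_derefs are placeholders I've assumed for the example, and you'd want to confirm your part actually implements DWT comparators before counting on it.

/* Sketch: trap null-pointer dereferences on a Cortex-M3/M4 by pointing
 * DWT comparator 0 at address zero.  CMSIS-Core register names; the
 * device header and function names are placeholders for illustration.  */
#include "device.h"                    /* your part's CMSIS device header */

void trap_null_derefs(void)
{
    /* Bail out if this part implements no DWT comparators (CTRL[31:28]) */
    if (((DWT->CTRL >> 28) & 0xFu) == 0u)
        return;

    /* Enable the DWT block and the DebugMonitor exception so the
     * watchpoint fires even with no external debugger attached.         */
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk |
                        CoreDebug_DEMCR_MON_EN_Msk;

    DWT->COMP0     = 0x00000000u;      /* compare data addresses with 0   */
    DWT->MASK0     = 2u;               /* ignore low 2 bits: covers 0-3   */
    DWT->FUNCTION0 = 0x7u;             /* 0b0111: watch reads and writes  */
}

/* Any load or store through a null pointer now lands here (or simply
 * halts the core if a debugger happens to be connected).                */
void DebugMon_Handler(void)
{
    for (;;) { }                       /* log, assert, or reset here      */
}

The same comparator trick covers the other cases: aim one at the last word of a stack, or at the word just past a big buffer, and the DebugMonitor exception tells you the instant something scribbles where it shouldn't. One caveat: on parts whose vector table sits at address zero, legitimate reads of that region may trip the comparator, so it may be safer to watch writes only (FUNCTION0 = 0x6) or to relocate the table via VTOR first.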
Jack Ganssle's blog continues on Embedded.com. He will be speaking at EE Live! this week.