Have you ever used your microcontroller's intelligence to self-modify its operation?
In a column on the Microcontroller Central website (long gone now), Duane Benson discussed simplifying the wiring on a PCB and then using the versatility of the microcontroller to adjust the software to match the outputs. This concept extends well beyond Duane's original musings: the microcontroller can dynamically learn about and adapt to its environment.
I suppose there are still mimic panels in use. You sometimes see them in the movies, where a process is described pictorially on a large panel -- something that might resemble a map of the New York subway, for example. The movie The Taking of Pelham One Two Three (the original with Robert Shaw and Walter Matthau) springs to mind. A mimic panel is made up of many tiles mounted in a large matrix with the graphic engraved on the front. At each point of interest an LED is inserted through the tile, and there are normally a large number of them.
I worked on just such a project in a railway marshalling yard where they assemble those 100-plus carriage trains. On each signal post was a telephone handset. When a handset was raised, the microcomputer (an 8085) had to connect the call to the central console and turn on an LED on the mimic panel to indicate where the call was coming from. There were about five hundred LEDs installed in the panel. Each LED had a pair of wires that had to be fed back to the rack containing the microcomputer and its I/O drivers. Obviously, the program had to know what each LED indicated, and the first approach was to associate each I/O point with a particular LED. Wiring this would have proved rather difficult for the electricians on site, since they would have to know which wire went with which LED across the several meters between the panel and the rack.
The development central rack controller containing the microprocessor (top left, as can be seen by the connected emulator), memory, display (LED alphanumeric), keyboard, and some I/O drivers.
The solution was to wire each LED pair to any convenient output, without regard to order. The software had a "learn" feature that allowed an authorized person to illuminate each output individually, in sequence. When the desired LED lit, it was associated with that particular output function. The table of associations was stored in non-volatile memory. This not only made the wiring much simpler, it allowed us to generalize our approach for use in other installations.
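The heart of that learn feature is just an association table. A minimal sketch in C might look like the following; the names (`led_map`, `record_pairing`, `output_for_led`) are my own for illustration, not the original 8085 code, and the non-volatile store and output drivers are only indicated in comments:

```c
#include <stdint.h>

#define NUM_LEDS 500  /* roughly the count in the marshalling-yard panel */

/* Association table: index = logical LED (the signal post it represents),
   value = the physical output channel it happens to be wired to.
   After the learn pass, this table is written to non-volatile memory. */
static uint16_t led_map[NUM_LEDS];

/* Called from learn mode: the software drives the outputs one at a time,
   the authorized person notes which panel LED lit and keys in its ID,
   and the pairing is recorded. */
void record_pairing(uint16_t led_id, uint16_t output)
{
    led_map[led_id] = output;
    /* save_map_to_nvram(led_map, sizeof led_map);  -- hypothetical */
}

/* Runtime lookup: which output channel drives this LED? The application
   never needs to know how the electricians wired the panel. */
uint16_t output_for_led(uint16_t led_id)
{
    return led_map[led_id];
}
```

At runtime the application code deals only in logical LED identities; the table absorbs whatever wiring order the installers chose, which is exactly what made the scheme portable to other sites.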
Of course, calibration of analog inputs and outputs is yet another form of environmental adjustment. Since the gain in any analog chain is subject to component tolerances, instead of using expensive potentiometers and time-consuming iterative procedures, you can apply a calibrated input and use the ADC reading as a point on the curve. A second point is all that is necessary to define a linear input. A similar process can be used to calibrate an analog output, adjusting the DAC setting until the output reaches the target; a second measurement at a different point and the calibration is complete. I described the calibration process in a two-part column on Planet Analog (see Part 1 and Part 2).
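For the input side, the two-point idea reduces to fitting a straight line through the two (ADC count, reference value) pairs. Here is a minimal sketch, assuming a linear chain; the struct and function names are mine, not from the Planet Analog columns:

```c
#include <stdint.h>

/* Two-point linear calibration. (raw1, val1) and (raw2, val2) are the
   ADC counts recorded while known reference inputs val1 and val2 were
   applied to the analog chain. In practice the resulting gain/offset
   pair would be stored in non-volatile memory. */
typedef struct {
    float gain;    /* engineering units per ADC count */
    float offset;  /* units at a reading of zero */
} cal_t;

cal_t calibrate(int32_t raw1, float val1, int32_t raw2, float val2)
{
    cal_t c;
    c.gain = (val2 - val1) / (float)(raw2 - raw1);
    c.offset = val1 - c.gain * (float)raw1;
    return c;
}

/* Convert a raw ADC count to engineering units using the stored pair. */
float adc_to_units(cal_t c, int32_t raw)
{
    return c.gain * (float)raw + c.offset;
}
```

Picking the two reference points near the ends of the input range keeps the interpolation error small; the same gain/offset arithmetic, run in reverse, gives the DAC setting for a desired output.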
Today we take this for granted, but in the early days this was not quite so obvious. Have you ever used your microcontroller's intelligence to self-modify its operation?