We started an internal project to evaluate and exercise a variety of commercial products, using industry protocols to link products from multiple manufacturers. One of our objectives was to have a couple of RS232 ports talk to the machines to retrieve status and the results of our testing. Using a PIC32 processor with six serial ports plus USB, SPI, and Ethernet, we went on our merry way to accomplish this relatively simple engineering task.
We set up one serial port and enabled interrupts to send a message and read back the results from the machine. The simple read intermittently failed, which started a chain of events that lasted days and had us blaming everything and everyone in sight. Every few messages, the read from the buffer would return improper data starting with the sixth byte of the packet. We attached a serial port analyzer to a PC and found that the analyzer received the entire data packet, with the proper data, every time.
Thus we started looking at the PIC32 firmware. It looked fine, but were we missing something? After rewriting the firmware at least five times, the testing would always produce the same intermittent results. At this point, we looked for any errata sheets for the processor; surely the manufacturer would not have put out a product with serial port hardware issues. Of course, no luck: the hardware worked properly according to the manufacturer. We tried rewiring the prototype to use the other available on-board serial modules, with the same results. At this point everyone, including management, was frustrated with the serial port that would not receive all of the data properly.
We needed to step back, brainstorm, and look at this problem from a different angle. We needed to find out why an analyzer running on a PC worked but the PIC32 failed. Using C#, we quickly wrote a PC program to send hundreds of bytes of data to the PIC32, and of course the PIC32 read every byte properly. It had to be the device under test, not the PIC32; but why? Looking at the proprietary serial protocol, there was an 'optional' parity bit that could be sent with the first byte of a message only. This bit, however, was not a true parity bit guarding data integrity; it was overloaded to function as a wakeup bit. The device under test set this 'optional parity' bit most, but not all, of the time when sending out status messages. We had followed the protocol specification (8 data bits, 1 parity bit, 1 stop bit), but it turned out there was more to the total equation.
At this point, we finally understood the issue we were encountering and reconfigured the port to not check parity, since parity checking tests data integrity, and enabling it was exactly what we had done previously; that was our problem. Instead, we rewrote the program to read 9-bit data rather than 8-bit data and interpret the ninth bit ourselves. Problem solved: we were able to read all messages and their entire contents properly. The PC serial port had recognized the framing inconsistencies and handled them in all tested cases, which is why the PC returned the proper data bytes; the PIC32 UARTs were also working properly, but they did not transparently handle the framing inconsistencies.
The moral of the story: software engineers blame hardware engineers, and hardware engineers blame software engineers. In truth, all engineers must fully understand the entire system specification in order to make the products under development communicate properly with the system. That includes the workarounds and extra features written into the legacy protocol twenty-five years ago.
Eugene Zeldin holds a Bachelor’s degree in electrical engineering from the University of Illinois at Urbana-Champaign. With many different engineering interests, Eugene has held a variety of positions, including manufacturing engineer, project engineer, hardware engineer, software engineer, technical lead engineer, lab engineer, and consultant on various projects in many fields: consumer (Motorola, Lexmark), industrial (GBC), medical (Baxter Healthcare, Jeron), and government.