The same issues apply to writing RTL code for hardware design. As others have said, when you are writing code, you're focused on getting the functionality you need and meeting the schedule.
Most engineers will adhere to the coding standards they believe are important, and will comment their code appropriately so that they themselves can maintain it and re-use it on a later project. Total compliance with a set of standards designed to enable re-use by other engineers who were not part of the original development is not usually a luxury that the tight schedule affords us.
I always run the lint checker on all the RTL, but the fundamental problem is the "trivial rules" that FergusB mentioned. It takes only minutes to run the linting tool, but it can take hours to sift through the reports it generates and weed out the coding-standard violations that really don't matter -- which is typically at least 90% of them!
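That weeding-out step can itself be automated, at least crudely. Here is a minimal sketch, assuming a hypothetical lint report with one `file:line: [RULE-ID] message` finding per line and a hypothetical blocklist of trivial rule IDs -- the report shape and rule names are illustrative, not from any particular tool:

```python
# Sketch: suppress known-trivial lint rules so reviewers only see the
# findings that matter. Report format and rule IDs are hypothetical.
TRIVIAL_RULES = {"NAMING-001", "LINE-LENGTH", "TAB-USAGE"}

def important_findings(report_lines):
    """Keep only findings whose rule ID is not on the trivial list."""
    kept = []
    for line in report_lines:
        # Expected shape: "fifo.v:42: [RULE-ID] message text"
        try:
            rule_id = line.split("[", 1)[1].split("]", 1)[0]
        except IndexError:
            continue  # skip lines that are not findings
        if rule_id not in TRIVIAL_RULES:
            kept.append(line)
    return kept

report = [
    "fifo.v:42: [NAMING-001] signal name violates prefix convention",
    "fifo.v:57: [CLOCK-CROSS] unsynchronized clock-domain crossing",
    "fifo.v:88: [LINE-LENGTH] line exceeds 80 characters",
]
print(important_findings(report))
# Only the CLOCK-CROSS finding survives the filter.
```

The point is that the 90% of noise follows a small number of predictable rule IDs, so a one-page script can hide them once and for all instead of an engineer re-triaging them on every run.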
KB, you hit it on the head. When you are creating code, you are focused on the task of making things work. However, a good editor always helps. It's not burdensome to format code as you are writing it. Writing detailed text comments describing variables, procedures, and error processing is what seems most burdensome.
Are there any easy ways to use programs that analyze the code and quickly help engineers follow the standards? I have always tried to create my own tools to help me avoid errors in my code. I first learned this with the unix pretty command, which I used to format my C code.
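In that same spirit, the purely textual rules are exactly the ones a program can enforce for you. This is a toy sketch, not any real pretty-printer: a tiny normalizer that fixes two illustrative rules (tabs and trailing whitespace) so those violations never reach the lint report at all:

```python
# Toy sketch of mechanical standards enforcement: normalize the purely
# textual rules (tabs, trailing whitespace) before the code is ever
# checked. The two rules here are chosen for illustration only.
def normalize(source, tab_width=4):
    """Return source with tabs expanded and trailing whitespace removed."""
    fixed = []
    for line in source.splitlines():
        line = line.expandtabs(tab_width)  # tabs -> spaces
        fixed.append(line.rstrip())        # drop trailing whitespace
    return "\n".join(fixed) + "\n"

code = "if (x) {\n\treturn y;   \n}\n"
print(normalize(code))
```

Run as a save hook or a pre-check-in step, a script like this makes a whole class of trivial violations impossible rather than merely detected.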
As CTO of Programming Research, I agree that the majority favour good coding practices in principle but suffer from severe implementation hurdles.
Here are two things we focus on at PRL:
a) avoid trivial rules and only focus on the really important *language-based* issues.
b) treat (the majority of) legacy code more leniently than new code. Do this right down to the individual-line level, so that what you're adding or changing is made perfect. We call this "baseline analysis".
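The line-level idea above can be sketched very simply. Assume the lint tool's findings have been parsed into (line number, message) pairs, and that a diff against the baseline has yielded the set of line numbers touched by the current change -- both inputs are hypothetical here, standing in for whatever the real tool and version-control system provide:

```python
# Sketch of line-level baseline analysis: report a finding only if it
# falls on a line that was added or changed, so untouched legacy code
# is left alone. Findings and changed-line set are hypothetical inputs.
def baseline_filter(findings, changed_lines):
    """findings: list of (line_number, message); changed_lines: set of ints."""
    return [(n, msg) for n, msg in findings if n in changed_lines]

findings = [
    (10, "old violation in untouched legacy code"),
    (42, "violation on a line touched in this change"),
]
changed = {40, 41, 42, 43}
print(baseline_filter(findings, changed))
# Only the finding on changed line 42 is reported.
```

The effect is that the standard is enforced fully on everything you write today, while the thousands of pre-existing findings in legacy code stop drowning out the report.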
Unless and until companies see the benefit of coding-standard checking tools, they will not adopt them. It is a simple dollars-and-cents (or euros) matter: tools cost money, running them takes time (also money), and it seems that the value of tool-based code-standards checking is not recognized. Is there a study that can show the cost savings (not from the tool vendor but from an independent verification source)?
I agree that most software engineers believe that standards do help the quality and maintainability of the code. It is up to the company to put the procedures in place to enforce the standards that it wants. The problem is that most software projects do not have the time or funding to add this step. Manual checks do not help much because they tend to be glossed over without an in-depth review.