I remember my first year of programming: FORTRAN on punch cards on a mainframe. We submitted our programs as batch jobs and got a printout back hours later.
Due to the time involved, you would be very careful about what you submitted, painstakingly checking the code before cutting cards (cards == money) and then checking the cards again before submitting.
More than once, a 200-line program ran correctly the first time.
Frugality was important too. I wrote a 56-line Fortran program to print out wall calendars. Worked great, but was pretty hard to understand.
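That kind of frugal-but-cryptic code is easy to illustrate. As a sketch (not the original program), here is Zeller's congruence in C++ - one line of arithmetic that finds the day of the week for any Gregorian date, the sort of thing a compact calendar printer would be built around:

```cpp
// Zeller's congruence: returns the day of the week for a Gregorian date,
// with 0 = Saturday, 1 = Sunday, ..., 6 = Friday. Compact and correct,
// but the month shift and magic constants make it hard to read.
int day_of_week(int year, int month, int day) {
    if (month < 3) {          // treat Jan/Feb as months 13/14 of the prior year
        month += 12;
        year -= 1;
    }
    int k = year % 100;       // year within the century
    int j = year / 100;       // zero-based century
    return (day + 13 * (month + 1) / 5 + k + k / 4 + j / 4 + 5 * j) % 7;
}
```

For example, `day_of_week(2000, 1, 1)` returns 0 - January 1, 2000 was a Saturday. It works, but six months later even the author has to re-derive why.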
Now editing and compilation are cheap, and I don't worry so much about getting it perfect the first time.
Forgot a variable name? So what? Just compile and see what errors the compiler throws up.
Would I trade back for the Good Old Days? No thank you! I also remember dropping the 2000 punch-card source code for a compiler when I tripped down some stairs. It took quite a while to get them back in order and re-punch the damaged cards.
It's not just the "do it fast/do it over" mentality that causes me to sometimes question all of the "advances." The proliferation of options is also staggering. If you look at all of the library functions available in .NET and other tool chains, you can spend a lot of time just searching for the right library function to use. On the one hand, that frees up a lot of time from rote tasks, but it can also cause delays. There are so many that you can't keep them in your head, so you likely have to research each one every time you need to use it.
That's what a lot of folks say about C++ - you get 'choice/decision paralysis' because there are too many ways to code something, and it's not at all obvious which will be best. A great language would point obviously towards the best approach.
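As a small, hypothetical illustration of that paralysis: even summing a vector of ints can be written at least three reasonable ways in C++, and nothing in the language points you to the "best" one:

```cpp
#include <cstddef>   // std::size_t
#include <numeric>   // std::accumulate
#include <vector>

// Three equivalent ways to sum a vector of ints. All are idiomatic C++;
// none is obviously "the" right choice.
int sum_indexed(const std::vector<int>& v) {
    int s = 0;
    for (std::size_t i = 0; i < v.size(); ++i) s += v[i];  // classic indexed loop
    return s;
}

int sum_range_for(const std::vector<int>& v) {
    int s = 0;
    for (int x : v) s += x;                                // range-based for (C++11)
    return s;
}

int sum_accumulate(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0);         // <numeric> algorithm
}
```

And the menu keeps growing: C++17 added `std::reduce` and C++20 added ranges, so the same one-line task now has even more competing spellings.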
It almost seems like even the people at Microsoft designing the Windows libraries forget that a library function for some task already exists, and just write a new one. Or maybe it's just easier to create a new one than to update an old one and go through all of the compatibility testing that would be required.
New tools would probably be great if only the documentation kept pace. While trying a new software IDE for a microcontroller a few days ago, I inadvertently closed two windows on the display. I couldn't figure out how to get them back. The docs explained what those windows would show but had no information about how to open them. Sad.
Don't I know it? A lot of the work that I do is troubleshooting serious design problems in different systems. It has to be right because of the usual critical nature of the issues.
My real projects get delayed, but no one pushes me.
I've gotten to be the slowest designer in the company, but stuff works when I'm done.
The person sitting in front of their computer appearing to be doing nothing may actually be thinking about how to solve a problem or how to best organize a process. On the other hand, the person frantically hacking away at their keyboard may be making a host of mistakes and bad, quick decisions that will affect a product for a long time down the road.
Bill, are you describing SCRUM design methodology? Oh, that's Schwaber, not Schweber! ;-)
Good point and this is not just a time issue, either.
I admit, I sometimes embrace the dark side and just tweak and throw the code at the FPGA compiler again. This only works, though, if the basic architecture is sound, and it's a double-edged sword if it's meant to save time. What if there is another error behind the error? And one behind that? Time to stop and think!
If I jump in without proper thinking up front and hope to do the checks later, the architecture will resemble the proverbial dog's breakfast. And when the design finally appears to work, the obvious temptation is not to look too hard for possible errors still hiding, especially with increasing time pressure on the project. It's just human nature.
I had a junior engineer wait a month until he had the latest and greatest RF design software before he would even attempt a prototype of a new power amp design.
He wanted to do a harmonic balance analysis, along with several other competing analyses, before he would even attempt a bench prototype.
The only software we had was an older RF package that was simple, easy to use, and just plain worked, albeit with a bit of bench tweaking to fine-tune the circuit - but it never let us down.
It was that minimal bench tweaking this new grad wanted to eliminate; bench tweaking was out of the question. His thought was: push a button and have a working design come out the other end, without any of the knowledge of what could go wrong that you only gain from actual bench experience.
Unfortunately, when you spend $60,000 to replace your existing EDA software and the new software requires a couple of months to learn, you also lose valuable time in which competitors get their product to market.
You can train a pilot to take off and land in a simulator, but you sure as heck are not going to give him or her a pilot's license to fly a commercial airliner until they take the seat of something simple and basic to start out on.
"More haste, less speed."
"Months of coding can save hours of planning.".
Yes, I have seen contract programmers display both of the above. They write code, do some testing, get paid, then move on. Management pats itself on the back, thinking the job's done.
Only later, when a new software feature has to be added, is the true nature of the beast revealed. Suddenly it becomes clear that coding and documentation standards were not followed. Management was too hasty in accepting the "product," or the person allocated the task of checking "product compliance" was not up to the task, or was put under extreme pressure not to hold the project up. The software contract has no clause for redress. The original coder is nowhere to be found.
But that is months down the track, and becomes somebody else's problem... usually mine.