The last upgrade seems a bit wasteful. Despite being one of the longest gaps between upgrades, the benefit was not huge: in some things it is an improvement, in others not so much. And my application requirements are stagnant, so it only matters for compiling code and the like, though I can appreciate the difference. But I had the upgrade itch and could afford it ;) In the 2002-2015 era I kept a lot of the HW from machine to machine, so it was mostly the motherboard, CPU, RAM and occasionally the GFX card I upgraded.
The next worthwhile upgrade seems to be higher core counts, like AMD Ryzen. Unfortunately I am still early into the 6700K, so I will sit that one out. I expect to upgrade to either 6 or 8 cores in 1-2 years, depending on what offerings there are at the time. Could be AMD's next-gen Ryzen or maybe Intel's IceLake.
Win 3.1 / 3.11 were multitasking, but it was cooperative multitasking, meaning a program had to yield control of the CPU to Windows. Windows would never context switch by itself (it would handle interrupts, but would not switch between programs based on e.g. a timer interrupt). Yielding comes mostly naturally in the Windows programming model without much programmer intervention, since control is automatically yielded in the message loop (when the program calls GetMessage()). But if a program started a huge calculation while processing a message, it would lock up the system; even the clock wouldn't update. This required the programmer to manually split the calculation into small chunks and set up a timer message to trigger the calculation of each individual chunk. Well-behaved programs could multitask easily.

It was actually the Win9x line (i.e. the non-NT consumer line) that introduced pre-emptive multitasking and could switch to other programs even if a program didn't yield. The scheduler doesn't seem to have been too aggressive/sophisticated about it (probably due to the limited resources available), so in practice a program that didn't yield could significantly affect responsiveness, though it usually wouldn't crash the system. WinNT and its successors refined the scheduling and priority algorithms further and further, and combined with much greater HW resources things are much smoother today. But it was Win95/98 that brought true pre-emptive multitasking to mainstream Windows.
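To make that "chunking" pattern concrete, here is a minimal sketch in Win32-flavoured C (the Win16 idiom was essentially the same; IDT_CHUNK, TOTAL_WORK and CHUNK_SIZE are made-up names and sizes for illustration). Instead of doing the whole calculation inside one message handler and freezing the system, each WM_TIMER tick does a small slice, so control keeps returning to GetMessage():

```c
#include <windows.h>

#define IDT_CHUNK   1
#define TOTAL_WORK  100000000UL
#define CHUNK_SIZE  1000000UL    /* work units per timer tick (illustrative) */

static unsigned long g_done = 0; /* progress of the "huge calculation" */

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_CREATE:
        /* Fire a timer roughly every 10 ms; each tick processes one chunk. */
        SetTimer(hwnd, IDT_CHUNK, 10, NULL);
        return 0;
    case WM_TIMER:
        if (wParam == IDT_CHUNK) {
            unsigned long i, end = g_done + CHUNK_SIZE;
            if (end > TOTAL_WORK) end = TOTAL_WORK;
            for (i = g_done; i < end; i++) {
                /* ...one small slice of the big calculation... */
            }
            g_done = end;
            if (g_done >= TOTAL_WORK)
                KillTimer(hwnd, IDT_CHUNK); /* finished, stop ticking */
        }
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcA(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int nShow)
{
    WNDCLASSA wc = {0};
    MSG msg;
    HWND hwnd;

    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = "ChunkDemo";
    RegisterClassA(&wc);
    hwnd = CreateWindowA("ChunkDemo", "Cooperative chunking",
                         WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                         320, 200, NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, nShow);

    /* The classic message loop: the program yields to Windows inside
       GetMessage(), which is what kept cooperative multitasking alive. */
    while (GetMessageA(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    return (int)msg.wParam;
}
```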
My father used the IBM 1620 in college. Somewhere, I still have a 1620 manual (I'm not getting rid of it).
My brother designed and built an 8080 system in the 1980s, hand-assembled the assembly language, and then entered it into the EPROM using DIP switches (he made his own EPROM programmer) - and only made a few mistakes!
My first computer was the Atari 520ST. I still have it lying around - I need to fire it up and show my kids; I think they'd enjoy some of the old games (I had a lot of fun playing Rampage -- need to find a couple of joysticks for it).
I hardly used pre-NT Windows, and I never used pre-OSX Macintoshes, so I thank those who corrected my recollections. Both NT and OSX (which is Unix at its core) are true pre-emptive multitasking systems, and complete rewrites of their predecessors.
@Elizabeth "We had a slightly different system..."
Did the administration use the system as a case study in the behaviour of systems with multiple queues? It's actually surprising how many things can be modelled that way, especially in OSs.
(And as a footnote, did you notice that "queueing" has five vowels in succession? There can't be many words with that count or higher.)
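To show how little it takes to model something that way, here is a hedged sketch of the simplest case, an M/M/1 queue, simulated directly in C (compile with -lm). The rates lambda and mu are made-up numbers; the measured mean time in the system should approach the textbook value 1/(mu - lambda):

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Exponentially distributed random variate with the given rate. */
static double exp_rand(double rate)
{
    double u = (rand() + 1.0) / (RAND_MAX + 2.0); /* avoid log(0) */
    return -log(u) / rate;
}

int main(void)
{
    const double lambda = 0.8, mu = 1.0;   /* illustrative arrival/service rates */
    const long n_jobs = 1000000;
    double arrival = 0.0, server_free = 0.0, total_time = 0.0;

    srand(42);
    for (long i = 0; i < n_jobs; i++) {
        arrival += exp_rand(lambda);               /* Poisson arrivals */
        double start = arrival > server_free ? arrival : server_free;
        double service = exp_rand(mu);
        server_free = start + service;
        total_time += server_free - arrival;       /* time spent in the system */
    }
    printf("simulated mean time in system: %f\n", total_time / n_jobs);
    printf("theoretical 1/(mu-lambda):     %f\n", 1.0 / (mu - lambda));
    return 0;
}
```

The same skeleton, with more servers or more queues, approximates OS run queues, disk queues and lock queues.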
Karnaugh maps and Boolean algebra still have a role if you're trying to get at the essentials of logic. Even when circuits are cheap and microscopic, and hence not worth optimising for their own sake, it's worthwhile to boil a problem down to its simplest form.
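As a concrete illustration (the function is just a textbook example, not anything from the discussion above): f(a,b,c) with minterms {0,2,4,5,6} reduces on a K-map to c' + ab', and a brute-force check over all eight inputs confirms the simplification:

```c
#include <stdio.h>

int main(void)
{
    /* Minterm set {0,2,4,5,6} encoded as a bitmask over the inputs abc. */
    const unsigned minterms = (1u << 0) | (1u << 2) | (1u << 4)
                            | (1u << 5) | (1u << 6);
    for (unsigned i = 0; i < 8; i++) {
        int a = (i >> 2) & 1, b = (i >> 1) & 1, c = i & 1;
        int full = (minterms >> i) & 1;       /* original truth table */
        int simplified = !c || (a && !b);     /* K-map result: c' + ab' */
        printf("a=%d b=%d c=%d  full=%d simplified=%d %s\n",
               a, b, c, full, simplified,
               full == simplified ? "ok" : "MISMATCH");
    }
    return 0;
}
```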