@MAX: "Although the paper tape reader appears to work fine, the carriage-return / line-feed needs some attention."
If it's a problem when the print head returns to the left, there is an adjustment.
Under the lid, on the left side, where the print head returns, is a cylinder called the dash-pot. The print head has a sort of piston with a rubber ring (or something) that is supposed to go into the cylinder. A lever on the outside head of the cylinder can be moved over a hole in the end of the cylinder to set the amount of air allowed to escape and fine-tune the cushioning effect. As I recall there are a couple of screws on top to loosen and adjust the cylinder. You'll get a nice little "pop" sound when it's adjusted correctly. I had to do that on an ASR-33 when I had a part-time job teaching programming to 4th, 5th, and 6th graders back in 1973-1974.
If it's not returning, I'm not sure what the problem is. Some have an autoreturn set so the print head automatically returns after 72 or 80 characters. Sometimes the autoreturn is not set and it just keeps piling up characters in the last position.
But, if it's doing a line feed and printing a character in the middle of the line, you need to send two carriage returns and a line feed to the printer. This was the standard -- it's a matter of timing. I've also seen specs for other printers that call for varying numbers of carriage returns or nulls, depending on the bit rate and how far over to the right the print head is (it takes longer to return from 132 characters than from 80, and no printer in the old days could print 960 chars/sec, which is what 9600 bps at 10 bits/char works out to).
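The padding arithmetic can be sketched in a few lines of Python. The timing figures here are illustrative assumptions (a 110 bps line, 10 bits per character, 200 ms for a full-width carriage return), not any printer's actual spec:

```python
import math

def cr_padding(column, bits_per_char=10, bps=110,
               return_time_full=0.2, line_width=72):
    """Estimate how many filler characters (extra CRs or NULs) to send
    after a carriage return so the head is home before printing resumes.
    All timing constants are hypothetical, for illustration only."""
    char_time = bits_per_char / bps                         # seconds per character on the wire
    travel_time = return_time_full * (column / line_width)  # time for the head to fly back
    return math.ceil(travel_time / char_time)
```

A return from column 72 needs more padding than one from mid-line, which is why the old specs varied the count with head position and bit rate.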
There once was a Star Trek program that would print out S T A R T R K in the middle of the line, space out to 80 characters, and then print a carriage return with no line feed followed by the E, which would print out between the R and the K.
I was in a Pascal class for a day or two before I was pulled out to troubleshoot a hardware problem with the new equipment. I do remember having trouble remembering when a semicolon was required and when it was not. The instructor said there was sometimes a certain feel to when it was needed. Not my kind of language ... Another time he pointed out that you could stomp on the first byte of a string to change its length, which was quicker than using the built-in function. When I asked why bother to use the function, he said anyone on his staff would be fired for doing that (I guess because it was not as obvious as using the function). Finally, when I asked why the printer diagnostic was so slow printing a sliding alpha test, he said the programmer was recreating the line each time, incrementing the initial character. When I asked why he didn't just create the full set of characters twice and step across it with a substring function, he replied it was written by a trainee. I thought to myself, "Why is Uncle Sucker paying to train your people?"
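The doubled-string trick described above is easy to sketch; here's a Python version (line width and character set are arbitrary choices for the example): build the pattern once at double length, then take one substring per line instead of rebuilding the line each time.

```python
import string

def sliding_alpha(lines=5, width=10, charset=string.ascii_uppercase):
    """Sliding alphabet printer test: each line starts one character
    further into the set.  Building the doubled string once and slicing
    avoids recreating the line on every iteration."""
    doubled = charset + charset   # lets the pattern wrap without modular arithmetic
    return [doubled[i:i + width] for i in range(lines)]
```

Each call to `doubled[i:i+width]` is the substring step the poster suggested; the trainee's version rebuilt the whole line per iteration instead.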
Pascal was "Wirth" so much he had to write Modula-2. ;-)
betajet: "I blame the Model 026 keypunch, which influenced Fortran. The creators of Fortran were mathematicians, so I'm sure they would have loved to use the proper mathematical symbols instead of .GE. and .LT. for ">=" and "<". I'm sure glad we've left those silly notations behind! [What? You say they're still around in HTML? 'Strewth!]"
There were only so many keys on the keyboard of a teletype or keypunch, and only so many characters in the character set (7 bits plus parity). Even IBM mainframe printers didn't always have a full character set. Our college used Assembler, RPG, Fortran and PL/1. PL/1 used a semicolon to end a statement, but our print train didn't have the character. The IBM PL/1 compiler had an option which we set as default so we could terminate lines with a comma and a period (",."), as if the semicolon fell over on its side.
Knowing how old FORTRAN is, it may be that they didn't have the ">" and "<" symbols available on printers. The IBM 029 Keypunch did have the > and < characters on the keyboard.
By contrast, IBM's EBCDIC code had a cent sign ("¢") which their mainframe terminals had but which teletypes and ASCII-based equipment didn't (we Americans are so jingoistic!).
On the PC one can use CHARMAP.EXE to find it or type Alt-0162 (using the numbers on the Numeric Keypad) to enter it (Like this: ¢).
> And if you're really a glutton for mathematical notation, there's always APL
I think glutton is too mild of a term. Masochist comes to mind ... but if you want arcane notation, try FORTH (which I did play with on my TI-99).
@stargzer...thanks for all that.. I did know what BASIC stood for but not about its origins. And I didn't use any of the very early ones much, though I do remember working on something that needed LET for assignments. (Maybe that would have made more sense to your dad, @Chesler?)
> The original BASIC didn't have a DO WHILE
Just looked it up in GW-BASIC's manual and it doesn't have it either. I was probably thinking of QBASIC, which I used a lot. Never used DO WHILE much; I preferred the other conditionals.
Gotta love the bazillions of lights (and sometimes switches) on the old computers...
I liked FORTRAN at the time, but I'd be hard pressed to remember any of it now :-)
The name BASIC came from Beginner's All-purpose Symbolic Instruction Code and it was developed at Dartmouth (not Microsoft!). It really was pretty basic at first.
The original BASIC didn't have a DO WHILE construct, only FOR...NEXT loops and GOTO and GOSUB statements. I don't remember if the version I used had the ON...GOTO/GOSUB statement. In the earlier versions every statement had to have a keyword at the beginning, so to assign a value it was LET Y=3 or LET X=X+1. It was great day for typists when BASIC interpreters finally made the LET part of an assignment statement optional!
I seem to recall that Dartmouth BASIC had built-in functions to perform matrix manipulations (MAT INV to invert, something I never understood or used; math was not a strong point). After all, it was invented by two Math professors using student labor. The professors (Kemeny and Kurtz) wrote a book called "BASIC Programming" that gave examples of how to use the language in various academic disciplines.
As for piles of paper ...
The control panel on the IBM 360 Model 25 had half a bazillion lights, but two of them were Wait and Sys. When something got farbled and the program just sat there, it was called a Hard Wait. When a student's program started looping somewhere in core, the Sys light came on and stayed on, so we nicknamed it a Hard Sys.
A good source of large piles of paper occurred when a FORTRAN student tried to print to a channel that wasn't punched on the printer's control tape (IBM 1403 printer). Then it was a contest to see if the operator could cancel the program before the printer slewed through a whole box of paper! At least we could recover the paper in that instance. If their program got into a loop (but not a hard Sys) and printed one character and a page feed continuously, well, there would be a pile of scrap paper.
Flurmy wrote: Think about the C syntax: it's made almost unreadable just to save some bytes IN THE SOURCE CODE!
If you did your editing using ed on a 10 char/sec teletype, you'd want concise source code too :-)
Personally, I mostly like C's syntax, especially assignment operators like "+=" and the conditional expression "a ? b : c". However, chacun à son goût (YMMV). As Flurmy said, there's always Cobol if you really like to type.
@Chesler....I'm glad Betajet came back to you and advised about the ASRs, as I know next to nothing about them.
As Betajet also remarked, the tape has even parity, i.e. bit 8 is always set such that you have an even number of 1 bits. So only bits 1-7 carry data; as you say, true 7-bit ASCII.
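The even-parity rule is simple enough to sketch: count the 1 bits in the 7-bit code, and set bit 8 only if that count is odd, so the total always comes out even.

```python
def with_even_parity(byte7):
    """Given a 7-bit ASCII code, return the 8-bit value with the parity
    bit (bit 8, value 0x80) set so the total count of 1 bits is even."""
    ones = bin(byte7 & 0x7F).count("1")
    return byte7 | 0x80 if ones % 2 else byte7 & 0x7F
```

For example, 'A' (0x41) already has an even number of 1 bits, so its parity bit stays clear; 'C' (0x43) has three, so the tape punches it as 0xC3.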
When I was playing with GWBasic I noticed that it always displays commands in uppercase. If you look at a program file in (eg) Notepad, it looks like gobbledegook - I suspect they tokenised everything to save memory.
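The space-saving idea behind that tokenisation can be shown with a toy sketch. The token values below are invented for illustration and are not GW-BASIC's actual codes:

```python
# Hypothetical one-byte token codes -- NOT GW-BASIC's real values.
TOKENS = {"PRINT": 0x91, "FOR": 0x82, "NEXT": 0x83, "GOTO": 0x89}

def tokenize(line):
    """Replace recognized keywords with single-byte tokens; everything
    else is stored as plain ASCII.  A multi-byte keyword shrinks to one
    byte, which is why saved program files look like gobbledegook."""
    out = bytearray()
    for word in line.split(" "):
        if word.upper() in TOKENS:
            out.append(TOKENS[word.upper()])
        else:
            out += word.encode("ascii")
        out += b" "
    return bytes(out[:-1])
```

Viewing the tokenized bytes in a text editor shows ordinary characters for variables and literals but unprintable bytes where the keywords used to be, which matches what Notepad displays.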
> "...codes 0141 to 0172..."
I take it those are octal? It's a long time since I've seen that. Reminds me of what I think is the first story I ever wrote for EE Times:
I tried to teach my father what I was doing (payback for lots of math he'd taught me and many others). He had tried to learn some programming in the 1960s, but it didn't go anywhere.
I remember he couldn't get past "N = N - 1". "How can N be equal to N minus 1?" "No Dad, that means 'Set N to be equal to...' "
And because Factorial was the first program I learned (computers were taught out of the math department) every time he asked me or my brother to show him programming, we had to start with that. He could learn videogames and the Web, but through generations of programmable calculators, pocket computers, and PCs that we brought home, he never learned to write a program.
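For the record, the factorial exercise mentioned above fits in a few lines, and it even uses the very statement that stumped him; "N = N - 1" is an instruction ("set N to N minus 1"), not a mathematical claim:

```python
def factorial(n):
    """Iterative factorial, in the 'N = N - 1' style under discussion."""
    result = 1
    while n > 1:
        result = result * n
        n = n - 1    # set N to N minus 1 -- an assignment, not an equation
    return result
```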
The ASR-33 is UPPERCASE ONLY. If you want full ASCII, you need a Model 37 or Model 38. I used a Model 38 quite a bit. It also had a dual-color ribbon, with escape sequences to switch ribbon color. Great fun.
I used the Model 38 to write PDP-11 assembly language with lower-case comments. This was considered heresy at the time and made me a pariah among pariahs -- kind of like an EE student with a circular slide rule.
There's a nice photo of Teletype output on the familiar yellow paper at Wikipedia.
Pedantry: Pascal uses "<>" (less than or greater than) for "not equal to". Pascal also uses the Algol ":=" for assignment, so that you can use "=" for equality. This avoids the need for C's "==" equality operator.
I blame the Model 026 keypunch, which influenced Fortran. The creators of Fortran were mathematicians, so I'm sure they would have loved to use the proper mathematical symbols instead of .GE. and .LT. for ">=" and "<". I'm sure glad we've left those silly notations behind! [What? You say they're still around in HTML? 'Strewth!]
Of course, now that we have full graphics displays and the Symbol font, we can fully expect languages to take advantage of these advances and not have syntax designed for a Model 38 teletype, right?
As for proper printing of ">=", "<=", and "!=", you can of course use overstrikes on a teletype or line printer. And if you're really a glutton for mathematical notation, there's always APL :-)
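An overstrike is just character, backspace, second character; here's a tiny sketch (the composite glyph of course only appears on a hardcopy terminal, not on a screen):

```python
BS = "\b"   # ASCII backspace, code 0x08

def overstrike(a, b):
    """Emit a, backspace, b so a hardcopy terminal prints both characters
    in the same column: '>' struck over '=' approximates a mathematical >=."""
    return a + BS + b

greater_or_equal = overstrike(">", "=")   # three bytes on the wire, one column on paper
```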