Hello all,
I'm looking to compile a list of historic "wars" in computing, just for
personal interest because they're fun to read about (I'll also put the list
up online for others' reference).
To explain what I mean, take, for example, The Tcl War
<https://vanderburg.org/old_pages/Tcl/war/>. Other examples:
- TCP/IP wars (BBN vs Berkeley, story by Kirk McKusick
<https://www.youtube.com/watch?v=DEEr6dT-4uQ>)
- The Tanenbaum-Torvalds Debate
<https://www.oreilly.com/openbook/opensources/book/appa.html> (it doesn't
have 'war(s)' in the name but it still counts IMO, could be called The
Microkernel Wars)
- UNIX Wars <https://en.wikipedia.org/wiki/Unix_wars>
Stuff like "vi vs. emacs" counts too I think, though I'm looking more for
historical significance, so maybe "spaces vs. tabs" isn't interesting
enough.
Thanks, any help is appreciated : )
Josh
On Thu, Feb 4, 2021 at 9:57 AM John Cowan <cowan(a)ccil.org> wrote:
>
> On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
>
>> The x86 stuff is about as far away from PDP-11 as you can get. Required
>> to know it, but so unpleasant.
>>
>
> Required? Ghu forbid. After doing a bunch of PDP-11 assembler work, I
> found out that the Vax had 256 opcodes and foreswore assembly thereafter.
> Still, that was nothing compared to the 1500+ opcodes of x86*. I think I
> dodged a bullet.
>
IMHO: the Vax instruction set was the assembler guys (like Cutler) trying
to delay the future and keep assembler as king of the hill. That said,
Dave Presotto, Scotty Baden, and I used to fight with Patterson in his
architecture seminar (during the writing of the RISC papers). DEC hit a
grand slam with that machine. Between the Vax and x86, plus being part of
Alpha, I have come to realize that ISA has nothing to do with success (i.e. my
previous comments about economics vs. architecture).
Funny thing, Dave Cane was the lead HW guy on the 750, worked on the 780 HW
team, and led the Masscomp HW group. Dave used to say "Cutler got his
way" whenever we talked about the VAX instruction set. It was supposed to
be the world's greatest assembler machine. The funny part is that DEC had
already started to transition to BLISS by then in the applications teams.
But Cutler was (is) an OS weenie and he famously hated BLISS. On the
other hand, Cutler (together with Dick Hustvedt and Peter Lipman) got the
SW out on that system (Starlet - *a.k.a.* VMS) quickly and it worked really
well, pretty much as advertised. [Knowing all of them, I suspect having
Roger Gourd as their boss helped a good bit also.]
Clem
On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> I have to admit that I haven't looked at ARM assembler, the M1 is making
> me rethink that. Anyone have an opinion on where ARM lies in the pleasant
> to unpleasant scale?
>
Redirecting to "COFF" as this is drifting away from Unix.
I have a soft spot for ARM, but I wonder if I should. At first blush, it's
a pleasant RISC-ish design: loads and stores for dealing with memory,
arithmetic and logic instructions work on registers and/or immediate
operands, etc. As others have mentioned, there's an inline barrel shifter
in the ALU that a lot of instructions can take advantage of in their second
operand; you can rotate, shift, etc, an immediate or register operand while
executing an instruction: here's code for setting up page table entries for
an identity mapping for the low part of the physical address space (the
root page table pointer is at phys 0x40000000):
MOV r1, #0x0000            @ low half of the table base
MOVT r1, #0x4000           @ r1 = 0x40000000, phys addr of the root table
MOV r0, #0                 @ r0 = section index
.Lpti: MOV r2, r0, LSL #20 @ r2 = section base address (index * 1MiB)
ORR r2, r2, r3             @ merge in the section flags prepared earlier in r3
STR r2, [r1], #4           @ store descriptor, post-increment the table pointer
ADD r0, r0, #1
CMP r0, #2048              @ 2048 sections = identity-map the low 2GiB
BNE .Lpti
(Note the `LSL #20` in the `MOV` instruction.)
32-bit ARM also has some niceness for conditionally executing instructions
based on currently set condition codes in the PSW, so you might see
something like:
1: CMP r0, #0
ADDNE r1, r1, #1
SUBNE r0, r0, #1
BNE 1b
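A rough C equivalent of that sequence, just to make the conditional suffixes
concrete (a paraphrase, not a literal decompilation):

/* What the CMP/ADDNE/SUBNE/BNE loop above computes: count r0 down to
 * zero while adding the same number of steps into r1. */
unsigned int drain(unsigned int r0, unsigned int r1)
{
    while (r0 != 0) {
        r1 += 1;
        r0 -= 1;
    }
    return r1;
}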
The architecture tends to map nicely to C and similar languages (e.g.
Rust). There is a rich set of instructions for various kinds of arithmetic;
for instance, they support saturating instructions for DSP-style code. You
can push multiple registers onto the stack at once, which is a little odd
for a RISC ISA, but works ok in practice.
The supervisor instruction set is pretty nice. IO is memory-mapped, etc.
There's a co-processor interface for working with MMUs and things like it.
Memory mapping is a little weird, in that the first-level page table isn't
structured the same way as the second-level tables: the first-level table
maps the 32-bit address space into 1MiB "sections", each of which is
described by a 32-bit section descriptor; thus, to map the entire 4GiB
space, you need 4096 of those in 16KiB of physically contiguous RAM. The
second-level tables map pages into a 1MiB section at different
granularities; I think the smallest is 1KiB (thus, you need 1024 32-bit
entries). To map a 4KiB virtual page to a 4KiB PFN, you repeat the relevant
entry 4 times in the second-level table. It ends up being kind of annoying.
I did a little toy kernel for ARM32 and ended up deciding to use 16KiB pages
(basically, I map 4x4KiB contiguous pages) so I could allocate a
single-sized structure for the page tables themselves.
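To make the entry repetition concrete, here is an added sketch (not from the
original post) of filling one second-level table at the finest granularity
described above; the constants and flag values are purely illustrative:

#include <stdint.h>

#define L2_ENTRIES 1024u              /* 1MiB section / 1KiB granule */
#define PTE_FLAGS  0x00000ff2u        /* illustrative AP bits + "small page" type */

/* Fill a fine-granularity second-level table so that each 4KiB physical
 * frame is described by four consecutive 1KiB-granule entries. */
static void fill_l2(uint32_t *l2, uint32_t phys_base)
{
    for (uint32_t i = 0; i < L2_ENTRIES; i++)
        l2[i] = (phys_base + (i / 4u) * 4096u) | PTE_FLAGS;
}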
Starting with the ARMv8 architecture, it's been split into 32-bit aarch32
(basically the above) and 64-bit aarch64; the latter has expanded the
number and width of general-purpose registers; one of them is a zero register in
some contexts (and I think a stack pointer in others? I forget the
details). I haven't played around with it too much, but looked at it when
it came out and thought "this is reasonable, with some concessions for
backwards compatibility." They cleaned up the paging weirdness mentioned
above. The multiple push instruction has been retired and replaced with a
"push a pair of adjacent registers" instruction; I viewed that as a
concession between code size and instruction set orthogonality.
So... overall quite pleasant, and far better than x86_64, but with some
oddities.
- Dan C.
I will ask Warren's indulgence here, as this probably should be continued
in COFF, which I have CC'ed; but since it was asked on TUHS I will answer.
On Wed, Feb 3, 2021 at 6:28 AM Peter Jeremy via TUHS <tuhs(a)minnie.tuhs.org>
wrote:
> I'm not sure that 16 (or any other 2^n) bits is that obvious up front.
> Does anyone know why the computer industry wound up standardising on
> 8-bit bytes?
>
Well, 'standardizing' is a little strong. Check out my Quora answers: How
many bits are there in a byte
<https://www.quora.com/How-many-bits-are-there-in-a-byte/answer/Clem-Cole>
and What is a bit? Why are 8 bits considered as 1 byte? Why not 7 bit or 9
bit?
<https://www.quora.com/What-is-a-bit-Why-are-8-bits-considered-as-1-byte-Why…>
for my details, but the 8-bit part of the tale is here (cribbed from those
posts):
The industry followed IBM with the S/360. The story of why a byte is 8 bits
on the S/360 is one of my favorites, since the number of bits in a byte is
defined for each computer architecture. Simply put, Fred Brooks (who led
the IBM System 360 project) overruled the chief hardware designer, Gene
Amdahl, and told him to make things a power of two to make it easier on the
SW writers. Amdahl famously thought it was a waste of hardware, but Brooks
had the final authority.
My friend Russ Robeleon, who was the lead HW guy on the 360/50 and later
the ASP (*a.k.a.* Project X), and who was in the room as it were, tells the
yarn this way: you need to remember that the 360 was designed to be IBM's
first *ASCII machine* (not EBCDIC as it ended up - a different story)[1].
Amdahl was planning for the word size to be 24 bits and the byte size to be
7 bits for cost reasons. Fred kept throwing him out of his office, telling
him not to come back "until a byte and word are powers of two, as we just
don't know how to program it otherwise."
Brooks would eventually relent in that the original pointer on the System
360 became 24 bits, as long as it was stored in a 32-bit "word".[2] As a
result (and to answer your original question), the byte first became widely
8 bits with IBM's System 360.
It should be noted that it still took some time before the 8-bit byte
appeared more widely and in almost all systems as we see it today. Many
systems, like the DEC PDP-6/10, used five 7-bit bytes packed into a
36-bit word (with a single bit left over) for a long time. I believe that
the real widespread use of the 8-bit byte did not occur until the rise of
the minis such as the PDP-11 and the DG Nova in the late 1960s/early 1970s,
and eventually the mid-1970s microprocessors such as the 8080/Z80/6502.
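As a concrete illustration of that 36-bit packing (an added sketch, not part
of the original post), holding the word in the low 36 bits of a uint64_t:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const char *s = "HELLO";
    uint64_t word = 0;
    int i;

    /* Pack: leftmost character goes in the most significant 7-bit field;
     * 5 x 7 = 35 bits, so the low-order bit of the 36-bit word is left over. */
    for (i = 0; i < 5; i++)
        word |= (uint64_t)(s[i] & 0x7f) << (29 - 7 * i);

    /* Unpack the five characters and print them back out. */
    for (i = 0; i < 5; i++)
        putchar((int)((word >> (29 - 7 * i)) & 0x7f));
    putchar('\n');
    return 0;
}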
Clem
[1] IBM did lead the effort to create ASCII, and the System 360 actually
supported ASCII in hardware, but because the software was so late, IBM
marketing decided not to switch from BCD and instead used EBCDIC (their
own code). Most IBM software was released using that code for the System
360/370 over the years. It was not until IBM released their Series/1
<https://en.wikipedia.org/wiki/IBM_Series/1> minicomputer in the late 1970s
that IBM finally supported an ASCII-based system as the natural code for
the software, although it had a lot of support for EBCDIC, as they were
selling them to interface to their 'mainframe' products.
[2] Gordon Bell would later observe that those two choices (32-bit word and
8-bit byte) were what made the IBM System 360 architecture last in the
market, as neither would have been ‘fixable’ later.
Migration to COFF, methinks
On 30/01/2021 18:20, John Cowan wrote:
> Those were just examples. The hard part is parsing schemas,
> especially if you're writing in C and don't know about yacc and lex.
> That code tends to be horribly buggy.
True, but tools such as the commercial ASN.1 -> C translators are fairly
good and even asn1c has come a long way in the past few decades.
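For what it's worth, "processing it the way you would JSON", as John puts it
below, really is only a few dozen lines for the basic encodings. A minimal
hand-rolled sketch of walking BER tag-length-value triples (illustrative
names; single-byte tags and definite lengths only, with no hardening):

#include <stddef.h>
#include <stdint.h>

struct tlv {
    uint8_t        tag;
    size_t         len;
    const uint8_t *val;
};

/* Returns bytes consumed, or 0 on malformed/truncated input. */
static size_t ber_next(const uint8_t *p, size_t avail, struct tlv *out)
{
    size_t i = 0, len = 0;

    if (avail < 2)
        return 0;
    out->tag = p[i++];

    if (p[i] < 0x80) {                   /* short-form length */
        len = p[i++];
    } else {                             /* long form: next n bytes hold the length */
        size_t n = p[i++] & 0x7f;
        if (n == 0 || n > sizeof(size_t) || i + n > avail)
            return 0;
        while (n--)
            len = (len << 8) | p[i++];
    }
    if (i + len > avail)
        return 0;

    out->len = len;
    out->val = p + i;
    return i + len;
}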
N.
>
> But unless you need to support PER (which outright requires the
> schema) or unless you are trying to map ASN.1 compound objects to C
> structs or the equivalent, you can just process the whole thing in the
> same way you would JSON, except that it's binary and there are more
> types. Easy-peasy, especially in a dynamically typed language.
>
> Once there was a person on the xml-dev mailing list who kept repeating
> himself, insisting on the superiority of ASN.1 to XML. Finally I told
> him privately that his emails could be encoded in PER by using 0x01 to
> represent him (as the value of the author field) and allowing the
> recipients to reconstruct the message from that! He took it in good part.
>
>
>
> John Cowan http://vrici.lojban.org/~cowan
> <http://vrici.lojban.org/%7Ecowan> cowan(a)ccil.org <mailto:cowan@ccil.org>
> Don't be so humble. You're not that great.
> --Golda Meir
>
>
> On Fri, Jan 29, 2021 at 10:52 PM Richard Salz <rich.salz(a)gmail.com
> <mailto:rich.salz@gmail.com>> wrote:
>
> PER is not the reason for the hatred of ASN.1, it's more that the
> specs were created by a pay-to-play organization that fought
> against TCP/IP, the specs were not freely available for long
> years, BER was too flexible, and the DER rules were almost too
> hard to get right. Just a terse summary because this is probably
> off-topic for TUHS.
>
Born on this day in 1925, Doug Engelbart was a pioneer in human/computer
interaction, and invented the mouse; it wasn't exactly ergonomic, being just
a square box with a button.
-- Dave
Howdy,
Perhaps this is off topic for this list; if so, apologies in advance.
Or perhaps this will be of interest to some who do not follow yet
another mailing list out there :-).
Two days ago I received a notice that bugtraq would be terminated, and the
archive shut down on the 31st of this month. Only then did I realize
(looking at the archive also helped a bit in this) that the last post to
bugtraq happened in the last days of Feb 2020. After that, eleven months of
nothing, and a shutdown notice. It certainly was not because the list was
being shunned, because I have seen posters on other lists cc-ing to
bt, yet their posts never went that route (apparently), and I suppose
they were not held back either. If they had been, I would now be getting
eleven months' worth of them. But no.
Too bad. I liked bt, even if I had not followed every post.
Today, a notice that they would not terminate bt after all (on second
thought, as they wrote). And a fresh post from yesterday.
But what could possibly explain an almost year-long gap? Their
computers changed owners last year; maybe someone flipped the switch off,
got fired, and nobody switched it on again? Or something else?
Just wondering.
--
Regards,
Tomasz Rola
--
** A C programmer asked whether computer had Buddha's nature. **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened... **
** **
** Tomasz Rola mailto:tomasz_rola@bigfoot.com **
[moved to COFF]
On Monday, 18 January 2021 at 15:47:48 -0500, Steve Nickolas wrote:
> On Mon, 18 Jan 2021, John Cowan wrote:
>
>> (When I met my future wife I was 21, and she wanted me to grow a beard, so
>> I did. Since then I have occasionally asked cow orkers who have complained
>> about shaving why *they* don't grow beards: the most common answer is "My
>> wife doesn't want me to." *My* wife doesn't believe this story.)
>
> I actually had to shave for a while specifically because of my
> then-girlfriend, so... ;p I can see that.
Early on I made a decision that no woman could make me shave my
beard, and I stuck to it. Not that the beard was more important, but
if she wanted it gone, she was looking at the wrong things.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
>
> When I met my future wife I was 21, and she wanted me to grow a beard, so
> I did. Since then I have occasionally asked coworkers who have complained
> about shaving why *they* don't grow beards: the most common answer is "My
> wife doesn't want me to."
>
Moved to COFF ... while bearded UNIX folks do seem to be a common thread, I
think we are stretching Warren's patience a tad. So ... I have sort of a
different story.
I had shaved it off and on during college and in the first few years I was
working, but had grown it back before grad school. I still was not sure I
liked having it, and as I got close to finishing, I mentioned to my
officemates at UCB that I'd shave it when Newton (our advisor) signed my
thesis as a signal to everyone I was done.
So the day I came into the office clean-shaven, Peter Moore looked up and
remarked, 'now I know why you wore one.'
So, I showed up at Masscomp without it and was quickly ostracized; since so
many of the SW team had some sort of facial hair, I quickly grew it back.
Roll forward 20ish years and my wife egged me into shaving it off one
summer weekend. Our then 5-year-old daughter cried -- she wanted her
Daddy back. I've had it ever since.
That said, 20 years later she and her mother both claim I would look
younger if I shaved it. But at this point, I kinda like not having to
shave my neck and lower chin every day if I don't want to; so I have
ignored them.
Redirecting to COFF. COBOL has really nothing to do with Unix.
On Thursday, 7 January 2021 at 20:25:56 -0500, Nemo Nusquam wrote:
> On 01/07/21 17:56, Stuart Remphrey wrote (in part):
>>> Dave, who's kept his COBOL knowledge a secret in every job
>>
>> Indeed! [...]; but especially COBOL: apart from everything else, too
>> much like writing a novel to get anything done.
>
> As long as we are bashing COBOL, I recall that someone -- name forgotten
> -- wrote a parody that contained statements such as "Verily, let the
> noble variable N be assigned the value known as one".
Heh. In 1973 I was once required to abandon assembler, the language
of Real Programmers, and write a program in COBOL (in fact, a database
front end to COBOL). I took revenge in the label names. From
http://www.lemis.com/grog/src/GOPU
INVOKE SCHEMA KVDMS COPYING COMMON ALL
RECORD COMMON DELIVERY-AREA IS PUFFER
OVERLAY PUFFER WITH ALL
ERROR RECOVERY IS HELL
ROLLBACK IS IMPOSSIBLE.
...
MAKE-GOPU. IF ERROR-STATUS IS NOT EQUAL TO '000307', GO TO
HELL.
Admire that manifest constant.
And yes, this program went into production.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
We lost Rear Admiral "Amazing" Grace Hopper on this day in 1992; amongst
other things she gave us COBOL and Ada, and was allegedly responsible for
the term "debugging" when she removed a moth from a relay on the Harvard
Mk II and taped it to the log.
-- Dave
Moving to COFF since this is really not UNIX as much as programming
philosophy.
On Thu, Dec 17, 2020 at 9:36 AM Larry McVoy <lm(a)mcvoy.com> wrote:
> So the C version was easier for me to understand. But it sort of
> lost something, I didn't really understand Steve's version, not at any
> deep level. But it made more sense, somehow, than the C version did.
>
I'm not too hard on Steve, as herein lies the dichotomy that we call
programming. Looking back, the BourneGOL macros were clearly convenient
for him as the original author and allowed him to express the ideas he had
well in his source. They helped him to create the original and were
comforting in the way he was used to. Plus, as Larry notes, the act of
transpiling loses that (BTW - look some time at the comments in the C
version of advent and you can still see vestiges of the original Fortran).
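For those who have never seen them, the BourneGOL macros looked roughly like
this (reconstructed from memory of the shell's mac.h, so treat the exact
list as approximate):

/* An Algol-68 skin over C, roughly as Bourne's mac.h defined it.
 * Reconstructed from memory; the real header had more of these. */
#define IF      if(
#define THEN    ){
#define ELSE    } else {
#define ELIF    } else if (
#define FI      ;}
#define BEGIN   {
#define END     }
#define LOOP    for(;;){
#define POOL    }

/* With those in scope, shell source read more like Algol than C:
 *
 *     IF match(s, pat)
 *     THEN    hits++;
 *     ELSE    misses++;
 *     FI
 */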
But the problem is that when we create a new program, we can easily forget
that it might live forever[1] - particularly if you are a researcher trying
to advance and explore a set of ideas (which of course is what Steve was at
the time). And as has been noted in many other essays, the true cost of SW
is in the maintenance of it, not the original creation. So making
something easy to understand, particularly in the future without the
context, starts to become extremely attractive - particularly when it has a
long life and, frankly, an impact beyond what it was originally intended for.
It's funny, coming across BourneGOL helped to validate/teach/glue into me an
important concept when programming for real: the idea of "least
astonishment" or "social acceptance" of your work. Just because you
understand it and like it does not mean the same will be true for your
sisters and brothers in the community. There is no such thing as a private
program. The moment a program leaves your desk/terminal, it will be
considered and analyzed by others.
So back to that time, and seeing BourneGOL for the first time: please
consider that in the mid-70s I was coming to C from BLISS, SAIL, and Algol-W
as my HLLs, so I was used to BEGIN/END style programming, with the brackets
lining up 4 spaces under the next line and B/E in the same column. The
White Book did not yet exist, but what would become the 'one true bracing
style' later described in K&R was used in the code base for the Fifth and
Sixth Editions. When I first saw that, it just looked wrong to me. But I was
coming from a different social setting and was using a different set of
social norms to evaluate this new language and the code written in it.
At some point I took CMU's SW engineering course, where we had to swap code
3 different times with other groups for the team projects, and I came to
realize how important it was for things to be understood by the next team.
So, I quickly learned to accept K&R style and, like Ron and Larry, cursed
Steve a little. And while I admire Steve for his work, and both ADB and the
Bourne Shell were tools I loved and used daily, when I tried to maintain
them I wished that Steve had thought about those who would come after
- but I do accept that was not on his radar.
That lesson has served me well for many years as a professional, and it's a
lesson I try to teach my younger engineers in particular. It's not
about being 100% easy for you now; it is about being easy for someone other
than you who has to understand your code in the future. Simply use the
social norms of the environment you live and work in ("do as the Romans" if
you will). Even if it is a little harder now, learn the community norms,
and use them.
FWIW: you can actually date some of my learning with fsck (where we
did not apply this rule). Ted and I had come from MTS and
TSS respectively (*i.e.* IBM 360), which, as you may remember, is why the
first few versions had all errors in UPPER CASE (we kept that style from the
IBM systems - not the traditional UNIX style). For many years after its
success, and the program spreading like wildfire within the UNIX community,
I would run it on a system and be reminded that I had not yet learned that
lesson.
Clem
[1] BTW: the corollary to living forever, is that the worst hacks you do
seem to be the ones that live the longest.
https://www.youtube.com/watch?v=GWr4iQfc0uw
Abstract of the talk @ ICFP 2020
Programming language implementations have features such as threads, memory management, type safety, and REPLs that duplicate some of the work done by the underlying operating system. The objective of hosting a language on bare metal is to unify the language implementation and operating system to have a lean system with no redundancy for running applications.
This seems to be the OS:
https://github.com/udem-dlteam/mimosa
The Mimosa operating system consists of a minimal kernel built on C++ and Scheme. It contains a Scheme implementation of a hard drive (ATA) driver, keyboard (PS2), serial (8250 UART), FAT32 filesystem and a small real time clock manager. The project was built to experiment with the development of an operating system using a high-level functional language, to study the development process and the use of Scheme to build a fairly complex system.
On Dec 16, 2020, at 8:08 PM, John Cowan <cowan(a)ccil.org> wrote:
>
> Sometimes I wonder what would have happened if A68 had become the medium-level language of Unix, and Pascal had become the language of non-Unix, instead of both of them using C.
Funny how we seem to rehash the same things over the years!
In a 1988 comp.lang.misc thread when I expressed hope that "a major
subset of Algol 68 with a new and concise syntax (sort of like C's)
can make a very elegant, type safe and well rounded language.", Piet
van Oostrum[1] commented that the combination of dynamic arrays *and*
unions forced the use of GC in Algol68; either feature by itself
wouldn't have required GC! The larger point being that compiler
complexity is "almost exponential" (his words) in the number of
added features. Piet and others also wrote that both Pascal and C
had left out a lot of the hard things in A68. So I doubt A68 or a
subset would have replaced C or Pascal in the 70s-80s.
[My exposure to Algol68 was when I had stumbled upon Brailsford and
Walker's wonderful "Introductory Algol 68 programming" @ USC. After
having used PL/I, Pascal & Fortran the regularity of A68 was quite
enticing but AFAIK no one used A68 at USC. I must admit I still like
it more than modern languages like Java, Go, Rust, C++, ...]
[1] Piet had implemented major parts of both A68 and A60.
Sorta relevant to both groups...
Augusta Ada King-Noel, Countess of Lovelace (and daughter of Lord Byron),
was born on this day in 1815; arguably the world's first computer
programmer and a highly independent woman, she saw the potential in
Charles Babbage's new-fangled invention.
J.F.Ossanna was given unto us on this day in 1928; a prolific programmer,
he not only had a hand in developing Unix but also gave us the ROFF
series.
Who'd've thought that two computer greats would share the same birthday?
-- Dave
I like a challenge, although this wasn't really much of one. A simple "arpa
imp" in Yahoo spilled the beans :-)
"The Interface Message Processor (IMP) was the packet switching node
used to interconnect participant networks to the ARPANET from the late
1960s to 1989. It was the first generation of gateways, which are
known today as routers.[1][2][3] An IMP was a ruggedized Honeywell
DDP-516 minicomputer with special-purpose interfaces and software.[4]
In later years the IMPs were made from the non-ruggedized Honeywell
316 which could handle two-thirds of the communication traffic at
approximately one-half the cost.[5] An IMP requires the connection to
a host computer via a special bit-serial interface, defined in BBN
Report 1822. The IMP software and the ARPA network communications
protocol running on the IMPs was discussed in RFC 1, the first of a
series of standardization documents published by the Internet
Engineering Task Force (IETF)."
https://en.wikipedia.org/wiki/Interface_Message_Processor
Cheers,
uncle rubl
From: Dave Horsfall <dave(a)horsfall.org>
To: Computer Old Farts Followers <coff(a)tuhs.org>
Date: Wed, 9 Dec 2020 13:41:11 +1100 (EST)
Subject: Re: [COFF] ARPAnet now 4 nodes
On Sat, 5 Dec 2020, Noel Chiappa wrote:
> The ARPAnet reached four nodes on this day in 1969 .. the nodes were
> UCSB, UCLA, SRI, and Utah.
> Yeah; see the first map here:
> http://www.chiappa.net/~jnc/tech/arpageo.html
Yep; I know that first map well :-) For the newbies here, the ARPAnet
was the predecessor of the Internet (no, it didn't spring from the
brow of Zeus, nor Billy Gates), and what we now call "routers" were
then IMPs (look it up).
> Missing maps gratefully received!
Indeed; history needs to be kept alive, lest it die.
-- Dave
> The ARPAnet reached four nodes on this day in 1969 ..
> the nodes were UCSB, UCLA, SRI, and Utah.
Yeah; see the first map here:
http://www.chiappa.net/~jnc/tech/arpageo.html
Missing maps gratefully received!
Noel
The ARPAnet reached four nodes on this day in 1969; at least one "history"
site reckoned the third node was connected in 1977 (and I'm still waiting
for a reply to my correction). Well, I can believe that perhaps there
were only three left by then...
According to my notes, the nodes were UCSB, UCLA, SRI, and Utah.
-- Dave
Dan Cross wrote in
<CAEoi9W63J0HKbWUk8wrGSkCdyzzaV-F6km-q+K-H2+kvURWWdQ(a)mail.gmail.com>:
|On Tue, Dec 1, 2020 at 3:40 PM Bakul Shah <bakul(a)iitbombay.org> wrote:
|
|> On Dec 1, 2020, at 12:20 PM, Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
|>> Never without my goto:, and if it is only to break to error
|>> handling and/or staged destruction of local variables after
|>> initialization failures. Traumatic school impression, finding
|>> yourself locked in some PASCAL if condition, and no way to go to.
|>
|> Pascal had goto.
Hm, i did not receive Bakul's mail. Well, i did not use it long
enough. I think this came up in the past already; it could have
been that it was a mutilated version, but there definitely was no goto in
this DOS-looking UI with menu bar, with menu entries for
compilation, help screen etc. etc. Borland Pascal, Borland
dBASE it must have been then. Didn't i say "maybe the teacher had
an option to turn it on" or something :) Yeah, i do not know, but
there was no goto, definitely.
|Pascal also had to go. (Thanks...I'm here all week.)
Ah, and all the many-page program listings in Delphi, what a waste
of paper. Whether anyone really typed them out - not me.
|You can even do a non-local goto!
Help.
|> In Go you don't need goto for the sort of thing you and McVoy
|> talked about due to its defer statement and GC. Now granted
|> GC may be too big of a hammer for C/C++ but a future C/C++
|> add defer gainfully as the defer pattern is pretty common.
|> For example, mutex lock and unlock.
Terrible, just like pthread_cleanup_push/pop, and that can be
entirely local to a scope. Terrible even if there were
"closure"s that could be used as arguments instead of a function
pointer. gcc supports/ed computed gotos, which would also be
nice in that respect. And some kind of ISO _Xy() which could be
used in conditionals depending on whether the argument is
a computed goto, a "closure" or a function pointer (or a member
function pointer).
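(For readers who have not met it, the goto-to-cleanup idiom under discussion
looks roughly like this minimal sketch; all names are made up:)

/* A minimal sketch of the goto-to-cleanup idiom: staged teardown after
 * initialization failures.  All names here are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

int process(const char *path)
{
    int rc = -1;
    char *buf = NULL;
    FILE *fp = fopen(path, "rb");

    if (fp == NULL)
        goto out;                  /* nothing allocated yet */
    buf = malloc(4096);
    if (buf == NULL)
        goto out_close;            /* undo only what succeeded */

    /* ... the real work would happen here ... */
    rc = 0;

    free(buf);
out_close:
    fclose(fp);
out:
    return rc;
}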
I always hated that C++ is not ISO C plus extensions, so your
"C/C++" has not been true for a long time...
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
Is it just me, or did console messages really wake up the screen saver on
BSDi (aka BSD/OS)? That old box has long since gone to $HEAVEN (along
with the company itself; thank you, Wind River) but I'm getting annoyed at
having to tap a key on FreeBSD to see the console, which I don't recall
having to do on BSDi.
-- Dave
The world's first computer programmer (and a mathematician, when that was
deemed unseemly for a mere woman): we lost Ada Lovelace in 1852 from uterine
cancer.
-- Dave
[Redirecting to COFF]
On Monday, 23 November 2020 at 8:42:34 -0500, Noel Chiappa wrote:
>> On Mon, Nov 23, 2020 at 12:28 PM Erik E. Fair <fair-tuhs(a)netbsd.org> wrote:
>
>> The Honeywell DDP-516 was the computer (running specialized software
>> written by Bolt, Bernanek & Newman (BBN)) which was the initial model of
>> the ARPANET Interface Message Processors (IMP).
>
> The IMPs had a lot of custom interface hardware; sui generis serial
> interlocked host interfaces (so-called 1822), and also the high-speed modem
> interfaces. I think there was also a watchdog timer, IIRC (this is all from
> memory, but the ARPANET papers from JCC cover it all).
I worked with a DDP-516 at DFVLR 46 years ago. My understanding was
that the standard equipment included two different channel interfaces.
One, the DMC (Direct Multiplexer Control, I think) proved to be just
what I needed for my program, a relatively simple tape copy program.
The input tape was analogue, unbuffered, and couldn't be stopped, so
it was imperative to accept all data as it came in from the ADC.
But the program didn't work. According to the docco, the DMC should
have reset when the transfer was complete (maybe depending on
configuration parameters), but it didn't. We called in Honeywell
support, who scratched their heads and went away, only to come back
later and say that it couldn't be fixed.
I worked around the problem in software by continually checking the
transfer count and restarting when the count reached 0. So the
program worked, but I was left wondering whether this was a design
problem or a support failure. Has anybody else worked with this
feature?
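Schematically - in C rather than the original DDP-516 assembler, and with
invented names for the device access - the workaround amounted to this:

/* Poll the channel's transfer count and re-arm the DMC by hand when it
 * reaches zero, since the channel never reset itself as documented.
 * The dmc_* and hand_off_buffer functions are invented stand-ins. */
extern unsigned dmc_transfer_count(void);
extern void     dmc_start(void *buf, unsigned words);
extern void     hand_off_buffer(void *buf);

void copy_loop(void *buf, unsigned words)
{
    dmc_start(buf, words);
    for (;;) {
        if (dmc_transfer_count() == 0) {   /* transfer done, no auto-reset */
            hand_off_buffer(buf);          /* queue data for the output tape */
            dmc_start(buf, words);         /* restart the transfer manually */
        }
    }
}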
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
I'm currently reviewing a paper about Unix and Linux, and I made the
comment that in the olden days the normal way to build an OS image for
a big computer was from source. Now I've been asked for a reference,
and I can't find one! Can anybody help?
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
On 2020-Nov-06 10:07:21 -0500, Clem Cole <clemc(a)ccc.com> wrote:
>Will, I still do the same thing, but the reason 72 is the limit for email is
>still card-based. In FORTRAN the first column defines whether the card
>is new (a blank) or a comment (a capital C), with a non-zero marking a
>'continuation' of the last card. But columns 73-80 were 'special' and used
>to store sequence #s (this was handy when you dropped your card deck: card
>sorters could put it back into canonical order).
Since no-one has mentioned it, the reason why Fortran and Cobol ignore
columns 73-80 goes back to the IBM 711 card reader - which could read any
(but usually configured for the first) 72 columns into pairs of 36-bit words
in an IBM 701.
--
Peter Jeremy