On Fri, Feb 05, 2021 at 01:16:08PM +1100, Dave Horsfall wrote:
> [ Directing to COFF, where it likely belongs ]
>
> On Thu, 4 Feb 2021, Arthur Krewat wrote:
>
> >>-- Dave, wondering whether anyone has ever used every VAX instruction
> >
> >Or every VMS call, for that matter. ;)
>
> Urk... I stayed away from VMS as much as possible (I had a network of
> PDP-11s to play with), although I did do a device driver course; dunno why.
Me too, though I did use Eunice. It was a lonely place: it did not let
me see who was on VMS, so I was the only one. A far cry from BSD, where
wall went to everyone and talk got you a screen where you talked.
Hello all,
I'm looking to compile a list of historic "wars" in computing, just for
personal interest because they're fun to read about (I'll also put the list
up online for others' reference).
To explain what I mean, take, for example, The Tcl War
<https://vanderburg.org/old_pages/Tcl/war/>. Other examples:
- TCP/IP wars (BBN vs Berkeley, story by Kirk McKusick
<https://www.youtube.com/watch?v=DEEr6dT-4uQ>)
- The Tanenbaum-Torvalds Debate
<https://www.oreilly.com/openbook/opensources/book/appa.html> (it doesn't
have 'war(s)' in the name but it still counts IMO, could be called The
Microkernel Wars)
- UNIX Wars <https://en.wikipedia.org/wiki/Unix_wars>
Stuff like "vi vs. emacs" counts too I think, though I'm looking more for
historical significance, so maybe "spaces vs. tabs" isn't interesting
enough.
Thanks, any help is appreciated : )
Josh
On Thu, Feb 4, 2021 at 9:57 AM John Cowan <cowan(a)ccil.org> wrote:
>
> On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
>
>> The x86 stuff is about as far away from PDP-11 as you can get. Required
>> to know it, but so unpleasant.
>>
>
> Required? Ghu forbid. After doing a bunch of PDP-11 assembler work, I
> found out that the Vax had 256 opcodes and foreswore assembly thereafter.
> Still, that was nothing compared to the 1500+ opcodes of x86*. I think I
> dodged a bullet.
>
IMHO: the Vax instruction set was the assembler guys (like Cutler) trying
to delay the future and keep assembler as king of the hill. That said,
Dave Presotto, Scotty Baden, and I used to fight with Patterson in his
architecture seminar (during the writing of the RISC papers). DEC hit a
grand slam with that machine. Between the Vax and x86, plus being part of
Alpha, I have come to realize that ISA has nothing to do with success (i.e. my
previous comments about economics vs. architecture).
Funny thing, Dave Cane was the lead HW guy on the 750, worked on the 780 HW
team, and led the Masscomp HW group. Dave used to say "Cutler got his
way" whenever we talked about the VAX instruction set. It was supposed to
be the world's greatest assembler machine. The funny part is that DEC had
already started to transition to BLISS by then in the applications teams.
But Cutler was (is) an OS weenie, and he famously hated BLISS. On the
other hand, Cutler (together with Dick Hustvedt and Peter Lipman) got the
SW out on that system (Starlet - *a.k.a.* VMS) quickly, and it worked really
well/pretty much as advertised. [Knowing all of them, I suspect having
Roger Gourd as their boss helped a good bit also.]
Clem
On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> I have to admit that I haven't looked at ARM assembler, the M1 is making
> me rethink that. Anyone have an opinion on where ARM lies in the pleasant
> to unpleasant scale?
>
Redirecting to "COFF" as this is drifting away from Unix.
I have a soft spot for ARM, but I wonder if I should. At first blush, it's
a pleasant RISC-ish design: loads and stores for dealing with memory,
arithmetic and logic instructions work on registers and/or immediate
operands, etc. As others have mentioned, there's an inline barrel shifter
in the ALU that a lot of instructions can take advantage of in their second
operand; you can rotate, shift, etc., an immediate or register operand while
executing an instruction: here's code for setting up page table entries for
an identity mapping for the low part of the physical address space (the
root page table pointer is at phys 0x40000000):
        MOV     r1, #0x0000        @ r1 = 0x40000000, the root page table's
        MOVT    r1, #0x4000        @ physical address (low half, then high half)
        MOV     r0, #0             @ r0 = section index
.Lpti:  MOV     r2, r0, LSL #20    @ section base address = index << 20 (1MiB sections)
        ORR     r2, r2, r3         @ r3 presumably holds the section attribute bits
        STR     r2, [r1], #4       @ store descriptor, post-increment table pointer
        ADD     r0, r0, #1
        CMP     r0, #2048          @ 2048 sections = the low 2GiB, identity mapped
        BNE     .Lpti
(Note the `LSL #20` in the `MOV` instruction.)
32-bit ARM also has some niceness for conditionally executing instructions
based on currently set condition codes in the PSW, so you might see
something like:
1:      CMP     r0, #0             @ set condition codes from r0
        ADDNE   r1, r1, #1         @ executed only while r0 != 0
        SUBNE   r0, r0, #1
        BNE     1b
The architecture tends to map nicely to C and similar languages (e.g.
Rust). There is a rich set of instructions for various kinds of arithmetic;
for instance, they support saturating instructions for DSP-style code. You
can push multiple registers onto the stack at once, which is a little odd
for a RISC ISA, but works ok in practice.
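For instance, a minimal (hypothetical) function prologue/epilogue using the
load/store-multiple forms might look like this; the label and register list
are just illustrative, not from any particular codebase:

        @ Save some callee-saved registers plus the return address in one go,
        @ then restore them, loading the saved LR straight into PC to return.
func:   STMFD   sp!, {r4-r7, lr}   @ i.e. PUSH {r4-r7, lr}
        @ ... body ...
        LDMFD   sp!, {r4-r7, pc}   @ i.e. POP {r4-r7, pc}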
The supervisor instruction set is pretty nice. IO is memory-mapped, etc.
There's a co-processor interface for working with MMUs and things like it.
Memory mapping is a little weird, in that the first-level page table isn't
the same as the second-level tables: the first-level page table maps the 32-bit
address space into 1MiB "sections", each of which is described by a 32-bit
section descriptor; thus, to map the entire 4GiB space, you need 4096 of
those in 16KiB of physically contiguous RAM. At the second level, tables map
pages into the 1MiB section at different granularities; I think the smallest
is 1KiB (thus, you need 1024 32-bit entries). To map a 4KiB virtual page to a
4KiB PFN, you repeat the relevant entry 4 times in the second-level table. It
ends up being kind of annoying. I did a little toy kernel for ARM32 and ended
up deciding to use 16KiB pages (basically, I map 4x4KiB contiguous pages) so I
could allocate a single-sized structure for the page tables themselves.
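A minimal sketch of that "repeat the entry" dance, assuming the pre-ARMv6
"fine" second-level format (1KiB granularity, 1024 entries per table); the
register assignments and descriptor bits here are illustrative, not from the
toy kernel mentioned above:

        @ r1 = address of the first of the four 1KiB-granularity slots covering
        @      the 4KiB virtual page
        @ r2 = small-page descriptor (PFN plus AP/cacheability bits, type bits 0b10)
        MOV     r0, #4             @ a 4KiB page spans four 1KiB slots
.Lrpt:  STR     r2, [r1], #4       @ write the same descriptor into each slot
        SUBS    r0, r0, #1
        BNE     .Lrpt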
Starting with the ARMv8 architecture, it's been split into 32-bit aarch32
(basically the above) and 64-bit aarch64; the latter has expanded the number
and width of the general-purpose registers; one is a zero register in some
contexts (and I think a stack pointer in others? I forget the details). I
haven't played around with it too much, but looked at it when
it came out and thought "this is reasonable, with some concessions for
backwards compatibility." They cleaned up the paging weirdness mentioned
above. The multiple push instruction has been retired and replaced with a
"push a pair of adjacent registers" instruction; I viewed that as a
concession between code size and instruction set orthogonality.
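For comparison, a typical AArch64 prologue/epilogue built on that pair
instruction might look something like this (again just a sketch; the
registers and offsets are illustrative):

        // Save the frame pointer and link register as a pair, pre-decrementing SP,
        // then restore them and return.
func:   STP     x29, x30, [sp, #-16]!
        MOV     x29, sp
        // ... body ...
        LDP     x29, x30, [sp], #16
        RET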
So... overall, quite pleasant, and far better than x86_64, but with some
oddities.
- Dan C.
I will ask Warren's indulgence here, as this probably should be continued
in COFF, which I have CC'ed; but since it was asked on TUHS, I will answer.
On Wed, Feb 3, 2021 at 6:28 AM Peter Jeremy via TUHS <tuhs(a)minnie.tuhs.org>
wrote:
> I'm not sure that 16 (or any other 2^n) bits is that obvious up front.
> Does anyone know why the computer industry wound up standardising on
> 8-bit bytes?
>
Well, 'standardizing' is a little strong. Check out my Quora answers: How
many bits are there in a byte
<https://www.quora.com/How-many-bits-are-there-in-a-byte/answer/Clem-Cole>
and What is a bit? Why are 8 bits considered as 1 byte? Why not 7 bit or 9
bit?
<https://www.quora.com/What-is-a-bit-Why-are-8-bits-considered-as-1-byte-Why…>
for the details, but the 8-bit part of the tale is here (cribbed from those
posts):
The industry followed IBM with the S/360. The story of why a byte is 8 bits
for the S/360 is one of my favorites, since the number of bits in a byte is
defined for each computer architecture. Simply put, Fred Brooks (who led
the IBM System 360 project) overruled the chief hardware designer, Gene
Amdahl, and told him to make things a power of two to make it easier on the
SW writers. Amdahl famously thought it was a waste of hardware, but Brooks
had the final authority.
My friend Russ Robeleon, who was the lead HW guy on the 360/50 and later
the ASP (*a.k.a.* project X), and who was in the room as it were, tells his
yarn this way: You need to remember that the 360 was designed to be IBM's
first *ASCII machine* (not EBCDIC as it ended up - a different story)[1].
Amdahl was planning for the word size to be 24 bits and the byte size to be
7 bits for cost reasons. Fred kept throwing him out of his office and told
him not to come back “until a byte and word are powers of two, as we just
don’t know how to program it otherwise.”
Brooks would eventually relent on the original pointer size: on the System
360 it became 24 bits, as long as it was stored in a 32-bit “word”.[2] As a
result (and to answer your original question), a byte first widely became
8 bits with IBM’s System 360.
It should be noted that it still took some time before the 8-bit byte
occurred more widely and in almost all systems as we see it today. Many
systems, like the DEC PDP-6/10, used five 7-bit bytes packed into a
36-bit word (with a single bit left over) for a long time. I believe that
the real widespread use of the 8-bit byte did not occur until the
rise of the minis such as the PDP-11 and the DG Nova in the late
1960s/early 1970s, and eventually the mid-1970s’ microprocessors such as the
8080/Z80/6502.
Clem
[1] While IBM did lead the effort to create ASCII, and the System 360 actually
supported ASCII in hardware, because the software was so late, IBM
marketing decided not to switch from BCD and instead used EBCDIC (their
own code). Most IBM software was released using that code for the System
360/370 over the years. It was not until IBM released their Series 1
<https://en.wikipedia.org/wiki/IBM_Series/1> minicomputer in the late 1970s
that IBM finally supported an ASCII-based system as the natural code for
the software, although it had a lot of support for EBCDIC, as they were
selling them to interface to their ‘Mainframe’ products.
[2] Gordon Bell would later observe that those two choices (32-bit word and
8-bit byte) were what made the IBM System 360 architecture last in the
market, as neither would have been ‘fixable’ later.