On 2021-Feb-03 09:58:37 -0500, Clem Cole <clemc(a)ccc.com> wrote:
>but the original released (distributed - MC68000) part was binned at 8 and
>10
There was also a 4MHz version. I had one in my MEX68KECB but I'm not
sure if they were ever sold separately. ISTR I got the impression
that it was a different (early) mask or microcode variant because some
of the interface timings weren't consistent with the 8/10 MHz versions
(something like one of the bus timings was a clock-cycle slower).
>as were the later versions with the updated paging microcode called the
>MC68010 a year later. When the 68020 was released Moto got the speeds up
>to 16MHz and later 20. By the '040 I think they were running at 50MHz
I also really liked the M68k architecture. Unfortunately, as with the
M6800, Motorola lost out to Intel's inferior excuse for an architecture.
Moving more off-topic, M68k got RISC-ified as the ColdFire MCF5206.
That seemed (to me) to combine the feel of the M68k with the
clock/power gains from RISC. Unfortunately, it didn't take off.
--
Peter Jeremy
On 2021-Feb-03 17:33:56 -0800, Larry McVoy <lm(a)mcvoy.com> wrote:
>The x86 stuff is about as far away from PDP-11 as you can get. Required
>to know it, but so unpleasant.
Warts upon warts upon warts. The complete opposite of orthogonal.
>I have to admit that I haven't looked at ARM assembler, the M1 is making
>me rethink that. Anyone have an opinion on where ARM lies in the pleasant
>to unpleasant scale?
I haven't spent enough time with ARM assembler to form an opinion but
there are a number of interesting interviews with the designers on YT:
* A set of videos by Sophie Wilson (original designer) starting at
https://www.youtube.com/watch?v=jhwwrSaHdh8
* A history of ARM by Dave Jaggar (redesigner) at
https://www.youtube.com/watch?v=_6sh097Dk5k
If you don't want to pony up for a M1, there are a wide range of ARM
SBCs that you could experiment with.
--
Peter Jeremy
On Fri, Feb 05, 2021 at 01:16:08PM +1100, Dave Horsfall wrote:
> [ Directing to COFF, where it likely belongs ]
>
> On Thu, 4 Feb 2021, Arthur Krewat wrote:
>
> >>-- Dave, wondering whether anyone has ever used every VAX instruction
> >
> >Or every VMS call, for that matter. ;)
>
> Urk... I stayed away from VMS as much as possible (I had a network of
> PDP-11s to play with), although I did do a device driver course; dunno why.
Me too, though I did use Eunice. It was a lonely place: it did not let
me see who was on VMS, and I was the only one. A far cry from BSD, where
wall went to everyone and talk got you a screen where you talked.
Hello all,
I'm looking to compile a list of historic "wars" in computing, just for
personal interest cause they're fun to read about (I'll also put the list
up online for others' reference).
To explain what I mean, take for example, The Tcl War
<https://vanderburg.org/old_pages/Tcl/war/>. Other examples:
- TCP/IP wars (BBN vs Berkeley, story by Kirk McKusick
<https://www.youtube.com/watch?v=DEEr6dT-4uQ>)
- The Tanenbaum-Torvalds Debate
<https://www.oreilly.com/openbook/opensources/book/appa.html> (it doesn't
have 'war(s)' in the name but it still counts IMO, could be called The
Microkernel Wars)
- UNIX Wars <https://en.wikipedia.org/wiki/Unix_wars>
Stuff like "vi vs. emacs" counts too I think, though I'm looking more for
historical significance, so maybe "spaces vs. tabs" isn't interesting
enough.
Thanks, any help is appreciated : )
Josh
On Thu, Feb 4, 2021 at 9:57 AM John Cowan <cowan(a)ccil.org> wrote:
>
> On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
>
>> The x86 stuff is about as far away from PDP-11 as you can get. Required
>> to know it, but so unpleasant.
>>
>
> Required? Ghu forbid. After doing a bunch of PDP-11 assembler work, I
> found out that the Vax had 256 opcodes and foreswore assembly thereafter.
> Still, that was nothing compared to the 1500+ opcodes of x86*. I think I
> dodged a bullet.
>
IMHO: the VAX instruction set was the assembler guys (like Cutler) trying
to delay the future and keep assembler as king of the hill. That said,
Dave Presotto, Scotty Baden, and I used to fight with Patterson in his
architecture seminar (during the writing of the RISC papers). DEC hit a
grand slam with that machine. Between the VAX and x86, plus being part of
Alpha, I have realized ISA has nothing to do with success (i.e. my previous
comments about economics vs. architecture).
Funny thing: Dave Cane was the lead HW guy on the 750, worked on the 780 HW
team, and led the Masscomp HW group. Dave used to say "Cutler got his
way" whenever we talked about the VAX instruction set. It was supposed to
be the world's greatest assembler machine. The funny part is that DEC had
already started to transition to BLISS by then in the applications teams.
But Cutler was (is) an OS weenie, and he famously hated BLISS. On the
other hand, Cutler (together with Dick Hustvedt and Peter Lipman) got the
SW out on that system (Starlet - *a.k.a.* VMS) quickly, and it worked really
well/pretty much as advertised. [Knowing all of them, I suspect having
Roger Gourd as their boss helped a good bit also.]
Clem
On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> I have to admit that I haven't looked at ARM assembler, the M1 is making
> me rethink that. Anyone have an opinion on where ARM lies in the pleasant
> to unpleasant scale?
>
Redirecting to "COFF" as this is drifting away from Unix.
I have a soft spot for ARM, but I wonder if I should. At first blush, it's
a pleasant RISC-ish design: loads and stores for dealing with memory,
arithmetic and logic instructions work on registers and/or immediate
operands, etc. As others have mentioned, there's an inline barrel shifter
in the ALU that a lot of instructions can take advantage of in their second
operand; you can rotate, shift, etc, an immediate or register operand while
executing an instruction: here's code for setting up page table entries for
an identity mapping for the low part of the physical address space (the
root page table pointer is at phys 0x40000000):
        MOV     r1, #0x0000     @ r1 = 0x40000000, the root page table
        MOVT    r1, #0x4000
        MOV     r0, #0          @ section index
.Lpti:  MOV     r2, r0, LSL #20 @ section base address = index * 1MiB
        ORR     r2, r2, r3      @ r3 holds the section attribute bits (set up earlier)
        STR     r2, [r1], #4    @ store descriptor, post-increment table pointer
        ADD     r0, r0, #1
        CMP     r0, #2048       @ 2048 sections cover the low 2GiB
        BNE     .Lpti
(Note the `LSL #20` in the `MOV` instruction.)
32-bit ARM also has some niceness for conditionally executing instructions
based on currently set condition codes in the PSW, so you might see
something like:
1: CMP r0, #0
ADDNE r1, r1, #1
SUBNE r0, r0, #1
BNE 1b
The architecture tends to map nicely to C and similar languages (e.g.
Rust). There is a rich set of instructions for various kinds of arithmetic;
for instance, they support saturating instructions for DSP-style code. You
can push multiple registers onto the stack at once, which is a little odd
for a RISC ISA, but works ok in practice.
The supervisor instruction set is pretty nice. IO is memory-mapped, etc.
There's a co-processor interface for working with MMUs and things like it.
Memory mapping is a little weird, in that the first-level page table isn't
the same as the second-level tables: the first-level page table maps the
32-bit address space into 1MiB "sections", each of which is described by a
32-bit section descriptor; thus, to map the entire 4GiB space, you need
4096 of those in 16KiB of physically contiguous RAM. At the second level,
4KiB page frames map pages into the 1MiB section at different
granularities; I think the smallest is 1KiB (thus, you need 1024 32-bit
entries). To map a 4KiB
virtual page to a 4KiB PFN, you repeat the relevant entry 4 times in the
second-level page. It ends up being kind of annoying. I did a little toy
kernel for ARM32 and ended up deciding to use 16KiB pages (basically, I map
4x4KiB contiguous pages) so I could allocate a single sized structure for
the page tables themselves.
Starting with the ARMv8 architecture, it's been split into 32-bit aarch32
(basically the above) and 64-bit aarch64; the latter expanded the
number and width of the general-purpose registers; one is a zero register in
some contexts (and I think a stack pointer in others? I forget the
details). I haven't played around with it too much, but looked at it when
it came out and thought "this is reasonable, with some concessions for
backwards compatibility." They cleaned up the paging weirdness mentioned
above. The multiple push instruction has been retired and replaced with a
"push a pair of adjacent registers" instruction; I viewed that as a
concession between code size and instruction set orthogonality.
So... overall quite pleasant, and far better than x86_64, but with some
oddities.
- Dan C.
I will ask Warren's indulgence here - as this probably should be continued
in COFF, which I have CC'ed but since was asked in TUHS I will answer
On Wed, Feb 3, 2021 at 6:28 AM Peter Jeremy via TUHS <tuhs(a)minnie.tuhs.org>
wrote:
> I'm not sure that 16 (or any other 2^n) bits is that obvious up front.
> Does anyone know why the computer industry wound up standardising on
> 8-bit bytes?
>
Well, 'standardizing' is a little strong. Check out my QUORA answer: How
many bits are there in a byte
<https://www.quora.com/How-many-bits-are-there-in-a-byte/answer/Clem-Cole>
and What is a bit? Why are 8 bits considered as 1 byte? Why not 7 bit or 9
bit?
<https://www.quora.com/What-is-a-bit-Why-are-8-bits-considered-as-1-byte-Why…>
for my details, but the 8-bit part of the tale is here (cribbed from those
posts):
The industry followed IBM with the S/360. The story of why a byte is 8 bits
on the S/360 is one of my favorites, since the number of bits in a byte is
defined for each computer architecture. Simply put, Fred Brooks (who led
the IBM System/360 project) overruled the chief hardware designer, Gene
Amdahl, and told him to make things powers of two to make it easier on the
SW writers. Amdahl famously thought it was a waste of hardware, but Brooks
had the final authority.
My friend Russ Robeleon, who was the lead HW guy on the 360/50 and later
the ASP (*a.k.a.* Project X), and who was in the room as it were, tells his
yarn this way: You need to remember that the 360 was designed to be IBM's
first *ASCII machine* (not EBCDIC, as it ended up - a different story).[1]
Amdahl was planning for the word size to be 24 bits and the byte size to be
7 bits for cost reasons. Fred kept throwing him out of his office and told
him not to come back “until a byte and word are powers of two, as we just
don’t know how to program it otherwise.”
Brooks would eventually relent: the original pointer on the System/360
became 24 bits, as long as it was stored in a 32-bit “word”.[2] As a
result (and to answer your original question), a byte first widely became
8 bits with IBM’s System/360.
It should be noted that it still took some time before the 8-bit byte
occurred more widely, in almost all systems as we see it today. Many
systems, like the DEC PDP-6/10, used five 7-bit bytes packed into a
36-bit word (with a single bit left over) for a long time. I believe that
the real widespread use of the 8-bit byte did not really occur until the
rise of the minis such as the PDP-11 and the DG Nova in the late
1960s/early 1970s and eventually the mid-1970s’ microprocessors such as
8080/Z80/6502.
Clem
[1] While IBM did lead the effort to create ASCII, and the System/360
actually supported ASCII in hardware, because the software was so late, IBM
marketing decided not to switch from BCD and instead used EBCDIC (their
own code). Most IBM software was released using that code for the System
360/370 over the years. It was not until IBM released their Series/1
<https://en.wikipedia.org/wiki/IBM_Series/1> minicomputer in the late 1970s
that IBM finally supported an ASCII-based system as the natural code for
the software, although it had a lot of support for EBCDIC, as they were
selling them to interface to their ‘Mainframe’ products.
[2] Gordon Bell would later observe that those two choices (32-bit word and
8-bit byte) were what made the IBM System 360 architecture last in the
market, as neither would have been ‘fixable’ later.
Migration to COFF, methinks
On 30/01/2021 18:20, John Cowan wrote:
> Those were just examples. The hard part is parsing schemas,
> especially if you're writing in C and don't know about yacc and lex.
> That code tends to be horribly buggy.
True but tools such as the commercial ASN.1 -> C translators are fairly
good and even asn1c has come a long way in the past few decades.
N.
>
> But unless you need to support PER (which outright requires the
> schema) or unless you are trying to map ASN.1 compound objects to C
> structs or the equivalent, you can just process the whole thing in the
> same way you would JSON, except that it's binary and there are more
> types. Easy-peasy, especially in a dynamically typed language.
>
> Once there was a person on the xml-dev mailing list who kept repeating
> himself, insisting on the superiority of ASN.1 to XML. Finally I told
> him privately that his emails could be encoded in PER by using 0x01 to
> represent him (as the value of the author field) and allowing the
> recipients to reconstruct the message from that! He took it in good part.
>
>
>
> John Cowan http://vrici.lojban.org/~cowan
> <http://vrici.lojban.org/%7Ecowan> cowan(a)ccil.org <mailto:cowan@ccil.org>
> Don't be so humble. You're not that great.
> --Golda Meir
>
>
> On Fri, Jan 29, 2021 at 10:52 PM Richard Salz <rich.salz(a)gmail.com
> <mailto:rich.salz@gmail.com>> wrote:
>
> PER is not the reason for the hatred of ASN.1, it's more that the
> specs were created by a pay-to-play organization that fought
> against TCP/IP, the specs were not freely available for long
> years, BER was too flexible, and the DER rules were almost too
> hard to get right. Just a terse summary because this is probably
> off-topic for TUHS.
>
Born on this day in 1925, he was a pioneer in human/computer interaction,
and invented the mouse; it wasn't exactly ergonomic, being just a square
box with a button.
-- Dave
Howdy,
Perhaps this is off topic for this list, if so, apologies in advance.
Or perhaps this will be of interest to some who do not trace yet
another mailing list out there :-).
Two days ago I received a notice that bugtraq would be terminated, and
archive shut down on 31st this month. Only then I realized (looking at
the archive also helped a bit in this) that last post to bugtraq
happened in the last days of Feb 2020. After that, eleven months of
nothing, and shutdown notice. It certainly was not because of list
being shunned, because I have seen posters on other lists cc-ing to
bt, yet their posts never went that route (apparently) and I suppose
they were not postponed either. If they were, I would now get an
eleven months worth of it. But no.
Too bad. I liked bt, even if I had not followed every post.
Today, a notice that they would not terminate bt (after second
thought, as they wrote). And a fresh post from yesterday.
But what could possibly explain an almost year long gap? Their
computers changed owners last year, and maybe someone switched the
flip, were fired, nobody switched it on again? Or something else?
Just wondering.
--
Regards,
Tomasz Rola
--
** A C programmer asked whether computer had Buddha's nature. **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened... **
** **
** Tomasz Rola mailto:tomasz_rola@bigfoot.com **