<moving to coff, less unix heritage content here>
On 2021-02-07 23:29, Doug McIntyre wrote:
> On Sun, Feb 07, 2021 at 04:32:56PM -0500, Nemo Nusquam wrote:
>> My Sun UNIX layout keyboards (and mice) work quite well with my Macs.
>> I share your sentiments.
>
> Most of the bespoke mechanical keyboard makers will offer a dipswitch
> for what happens to the left of the A, and with an option to print the
> right value there, my keyboards work quite well the right way.
I've been using the CODE[0] keyboard with 'clear' switches for the past
few years and have been very happy with it. It has dipswitches for
swapping CTRL/CAPS and Meta/Alt, probably others as well.
When I don't have a hardware solution for this, most modern OSes let you
remap keys in software. Being a GNU screen user, having CTRL and A right
next to each other makes life easier.
I've used enough keyboards over the years that didn't even have an ESC
key (Mac Plus, the Commodore 64, the keyboard on my Samsung tablet,
probably a few others) that I got in the habit of using CTRL-[ to
generate an ESC, and I still do that most of the time rather than
reaching for the ESC up there in the corner.
> I did use the Sun Type5 USB Unix layout for quite some years, but I
> always found it a bit mushy, and liked it better switching back to
> mechanical keyboards with the proper layout.
Before I got this keyboard, I used a Sun Type 7 keyboard (USB with the
UNIX layout). It had the CTRL and ESC keys in the "right" places (as
noted above, ESC location doesn't bother me as much), but yeah, they're
mushy, and big. Much happier with the mechanical keyboard for my daily
driver.
I've been eyeballing the TEX Shinobi[1], a mechanical keyboard with the
ThinkPad-style TrackPoint, which would cut down even more on reasons for
my fingers to leave the keyboard.
--
Michael Parson
Pflugerville, TX
[0] http://codekeyboards.com/
[1] https://tex.com.tw/products/shinobi?variant=16969884106842
I have to agree with Clem here. Mind you, I still mourn the demise of
Alpha and even Itanium, but then I never had to pay for those systems.
I only make sure they run properly so the customer can enjoy their
applications.
My 32-1/2 cents (inflation adjusted).
Take care and stay as healthy as some of my 25 year old servers :-)
Cheers,
uncle rubl
>From: Clem Cole <clemc(a)ccc.com>
>To: Larry McVoy <lm(a)mcvoy.com>
>Cc: COFF <coff(a)minnie.tuhs.org>
>Bcc:
>Date: Fri, 5 Feb 2021 09:36:20 -0500
>Subject: Re: [COFF] Architectures -- was [TUHS] 68k prototypes & microcode
<snip>
> BTW: Once again we 100% agree on the architecture part of the
> discussion. And frankly, before the 386 days, I could not see how
> anyone would come up with it. As computer architecture it is terrible;
> how did so many smart people come up with such a thing? It defies
> everything we are taught about 'good' computer architectural design.
> But after all of the issues with the ISAs of the VAX and the
> x86/INTEL*64 vs. Alpha, I came to the conclusion that architecture
> does not matter nearly as much as economics, and we need to get over
> it and stop whining. Or in Christensen's view, a new growing market
> is often made from a product that is technically not as good as the
> one in the original mainstream market but has some value to the new
> group of people.
<snip>
--
The more I learn the better I understand I know nothing.
Moved to COFF - and I should prefix this note with a strongly worded
disclaimer: these are my own views and do not necessarily reflect my
employer's (and often have not, as some of you who have worked with me in
the past know).
On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> The x86 stuff is about as far away from PDP-11 as you can get. Required
> to know it, but so unpleasant.
>
BTW: Once again we 100% agree *on the architecture part of the discussion*.
And frankly, before the 386 days, I could not see how anyone would come up
with it. As computer architecture it is terrible; how did so many smart
people come up with such a thing? It defies everything we are taught about
'good' computer architectural design. But after all of the issues with the
ISAs of the VAX and the x86/INTEL*64 *vs.* Alpha, I came to the conclusion
that *architecture does not matter nearly as much as economics, and we
need to get over it and stop whining.* Or in Christensen's view, a new
growing market is often made from a product that is technically not as
good as the one in the original mainstream market but has some value to
the new group of people.
With x86 (in particular once the 386 added linear 32-bit addressing), even
though DOS/Windows sucked compared to SunOS (or whatever), the job (work)
that the users needed done was performed to the customer's satisfaction
*and for a lot less.* The ISVs could run their codes there, and >>they<<
could sell more copies of their code, which is what they care about. The
end users really just care about getting a job done.
What was worse at the time was that the ISVs kept their own prices higher
on the 'high-value platforms', which made the cost of those platforms ever
higher. During the Unix wars, this fact was a huge issue. The same piece
of SW for a Masscomp would cost 5-10x more than for a Sun, because we were
considered a minicomputer and Sun was a workstation. Same 10MHz 68000
inside (we had a better compiler, so we ran 20% faster). This was because
the ISVs considered Masscomp's competition to be the VAX 8600, not Sun and
Apollo -- sigh.
In the end, the combination of x86 and MSFT did it to Sun. For example,
for my college roommates (who were trained on the first $100K
architecture/drawing 3D systems developed at CMU on PDP-11/Unix and the
Triple Drip Graphic's Wonder), Wintel running a 'boxed' AutoCAD was way
more economical than a Sun box with a custom architecture package --
economics won, even though the solution was technically not as good.
Another form of the same issue: did you ever try to write a technical
>>publication<< with Word (not a letter or a memo)? It sucks. The pros
liked FrameMaker and other 'authoring tools' (hey, I think even LaTeX and
troff are much 'better' for the author), but Frame costs way more than
Word, so what do the publishers want -- ugh, Word DOC format [ask
Steinhart about this issue, he lived it a year ago].
In the case of Arm, Intel #$%^'ed up 10-15 yrs ago when Jobs said he
wanted a $20 processor for what would become the iPhone and our execs told
him to take a hike (we were making too much money with high-margin
Windows boxes). At the time, Arm was 'not as good', but it had two
properties Jobs cared about: better power (although at the time Arm was
actually not much better than the laptop x86s) and price -- Apple got
Samsung to make/sell parts at less than $20 -- i.e. economics.
Again, I'm not a college professor. I'm an engineer who builds real
computer systems that sometimes people (even ones like the folks that read
this list) want to/have wanted to buy. As much as I like to use solid
architecture principles to guide things, the difference is I know to be
careful. Economics is really the high-order bit. What the VAX engineers
at DEC did, or what the current INTEL*64 folks (like myself) do, was/is
not what some of the same engineers did with Alpha -- today, we have to
try to figure out how to make the current product continue to be
economically attractive [hence the bazillion new instructions that folks
like Paul W in the compiler team figure out how to exploit, so the ISVs'
codes run better and they can sell more copies and we sell more chips to
our customers to sell to end users].
But as in the Jobs story, DEC management got caught up in the high-margin
game and ignored the low end (I left Compaq after I managed to build the
$1K Alpha, which management blew off, even though it could be sold at 45%
margins like the Alpha TurboLaser or 4x00 series). Funny, one of the last
things I had proposed at Masscomp in the early 80s, before I went to
Stellar, was a low-end system (also < $1K), and Masscomp management wanted
no part of it -- it would have meant competing with Sun and eventually
the PC.
FWIW: Intel >>does<< know how to make a $20 SOC, but the margins will
suck. The question is: will management want to? I really don't know.
So far, we have liked the server-chip margins (don't forget Intel made
more $s last year than it ever has -- even in the pandemic).
I feel a little like Dr Seuss' 'Onceler' in the Lorax story ... if Arm can
go upscale from the phone platform who knows what will happen - Bell's Law
predicts Arm displaces INTEL*64:
“Approximately every decade a new computer class forms as a new “minimal”
computer either through using fewer components or use of a small fractional
part of the state-of-the-art chips.”
FWIW: Bell basically has claimed a technical point, based on Christensen's
observation, that the 'lesser' technology will displace the 'better' one.
Or as I say it, sophisticated architecture always loses to better
economics.
On 2021-Feb-03 09:58:37 -0500, Clem Cole <clemc(a)ccc.com> wrote:
>but the original released (distributed - MC68000) part was binned at 8 and
>10
There was also a 4MHz version. I had one in my MEX68KECB but I'm not
sure if they were ever sold separately. ISTR it was a different (early)
mask or microcode variant because some
of the interface timings weren't consistent with the 8/10 MHz versions
(something like one of the bus timings was a clock-cycle slower).
>as were the later versions with the updated paging microcode called the
>MC68010 a year later. When the 68020 was released Moto got the speeds up
>to 16Mhz and later 20. By the '040 I think they were running at 50MHz
I also really liked the M68k architecture. Unfortunately, as with the
M6800, Motorola lost out to Intel's inferior excuse for an architecture.
Moving more off-topic, M68k got RISC-ified as the ColdFire MCF5206.
That seemed (to me) to combine the feel of the M68k with the
clock/power gains from RISC. Unfortunately, it didn't take off.
--
Peter Jeremy
On 2021-Feb-03 17:33:56 -0800, Larry McVoy <lm(a)mcvoy.com> wrote:
>The x86 stuff is about as far away from PDP-11 as you can get. Required
>to know it, but so unpleasant.
Warts upon warts upon warts. The complete opposite of orthogonal.
>I have to admit that I haven't looked at ARM assembler, the M1 is making
>me rethink that. Anyone have an opinion on where ARM lies in the pleasant
>to unpleasant scale?
I haven't spent enough time with ARM assembler to form an opinion but
there are a number of interesting interviews with the designers on YT:
* A set of videos by Sophie Wilson (original designer) starting at
https://www.youtube.com/watch?v=jhwwrSaHdh8
* A history of ARM by Dave Jaggar (redesigner) at
https://www.youtube.com/watch?v=_6sh097Dk5k
If you don't want to pony up for an M1, there is a wide range of ARM
SBCs that you could experiment with.
--
Peter Jeremy
On Fri, Feb 05, 2021 at 01:16:08PM +1100, Dave Horsfall wrote:
> [ Directing to COFF, where it likely belongs ]
>
> On Thu, 4 Feb 2021, Arthur Krewat wrote:
>
> >>-- Dave, wondering whether anyone has ever used every VAX instruction
> >
> >Or every VMS call, for that matter. ;)
>
> Urk... I stayed away from VMS as much as possible (I had a network of
> PDP-11s to play with), although I did do a device driver course; dunno why.
Me too, though I did use Eunice. It was a lonely place; it did not let
me see who was on VMS, and I was the only one. A far cry from BSD, where
wall went to everyone and talk got you a screen where you talked.
Hello all,
I'm looking to compile a list of historic "wars" in computing, just for
personal interest because they're fun to read about (I'll also put the
list up online for others' reference).
To explain what I mean, take for example, The Tcl War
<https://vanderburg.org/old_pages/Tcl/war/>. Other examples:
- TCP/IP wars (BBN vs Berkeley, story by Kirk McKusick
<https://www.youtube.com/watch?v=DEEr6dT-4uQ>)
- The Tanenbaum-Torvalds Debate
<https://www.oreilly.com/openbook/opensources/book/appa.html> (it doesn't
have 'war(s)' in the name but it still counts IMO, could be called The
Microkernel Wars)
- UNIX Wars <https://en.wikipedia.org/wiki/Unix_wars>
Stuff like "vi vs. emacs" counts too I think, though I'm looking more for
historical significance, so maybe "spaces vs. tabs" isn't interesting
enough.
Thanks, any help is appreciated : )
Josh
On Thu, Feb 4, 2021 at 9:57 AM John Cowan <cowan(a)ccil.org> wrote:
>
> On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
>
>> The x86 stuff is about as far away from PDP-11 as you can get. Required
>> to know it, but so unpleasant.
>>
>
> Required? Ghu forbid. After doing a bunch of PDP-11 assembler work, I
> found out that the Vax had 256 opcodes and foreswore assembly thereafter.
> Still, that was nothing compared to the 1500+ opcodes of x86*. I think I
> dodged a bullet.
>
IMHO: the Vax instruction set was the assembler guys (like Cutler) trying
to delay the future and keep assembler as king of the hill. That said,
Dave Presotto, Scotty Baden, and I used to fight with Patterson in his
architecture seminar (during the writing of the RISC papers). DEC hit a
grand slam with that machine. Between the Vax and x86, plus being part of
Alpha, I have realized ISA has nothing to do with success (i.e. my
previous comments about economics vs. architecture).
Funny thing, Dave Cane was the lead HW guy on the 750, worked on the 780
HW team, and led the Masscomp HW group. Dave used to say "Cutler got his
way" whenever we talked about the VAX instruction set. It was supposed to
be the world's greatest assembler machine. The funny part is that DEC had
already started to transition to BLISS by then in the applications teams.
But Cutler was (is) an OS weenie, and he famously hated BLISS. On the
other hand, Cutler (together with Dick Hustvedt and Peter Lipman) got the
SW out on that system (Starlet - *a.k.a.* VMS) quickly, and it worked
really well/pretty much as advertised. [Knowing all of them, I suspect
having Roger Gourd as their boss helped a good bit also.]
Clem
On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> I have to admit that I haven't looked at ARM assembler, the M1 is making
> me rethink that. Anyone have an opinion on where ARM lies in the pleasant
> to unpleasant scale?
>
Redirecting to "COFF" as this is drifting away from Unix.
I have a soft spot for ARM, but I wonder if I should. At first blush, it's
a pleasant RISC-ish design: loads and stores for dealing with memory,
arithmetic and logic instructions work on registers and/or immediate
operands, etc. As others have mentioned, there's an inline barrel shifter
in the ALU that a lot of instructions can take advantage of in their second
operand; you can rotate, shift, etc., an immediate or register operand while
executing an instruction: here's code for setting up page table entries for
an identity mapping for the low part of the physical address space (the
root page table pointer is at phys 0x40000000):
MOV   r1, #0x0000           @ r1 low half: root page table is at phys 0x40000000
MOVT  r1, #0x4000           @ r1 high half: r1 = 0x40000000
MOV   r0, #0                @ r0 = section index
.Lpti: MOV r2, r0, LSL #20  @ r2 = section base address (index * 1MiB)
ORR   r2, r2, r3            @ merge in section attribute bits (assumed set up in r3)
STR   r2, [r1], #4          @ store descriptor, post-incrementing the table pointer
ADD   r0, r0, #1            @ next section
CMP   r0, #2048             @ 2048 x 1MiB sections = the low 2GiB
BNE   .Lpti
(Note the `LSL #20` in the `MOV` instruction.)
32-bit ARM also has some niceness for conditionally executing instructions
based on currently set condition codes in the PSW, so you might see
something like:
1: CMP r0, #0               @ set condition codes from r0
ADDNE r1, r1, #1            @ executed only while r0 != 0
SUBNE r0, r0, #1            @ executed only while r0 != 0
BNE 1b                      @ roughly: while (r0 != 0) { r1++; r0--; }
The architecture tends to map nicely to C and similar languages (e.g.
Rust). There is a rich set of instructions for various kinds of arithmetic;
for instance, they support saturating instructions for DSP-style code. You
can push multiple registers onto the stack at once, which is a little odd
for a RISC ISA, but works ok in practice.
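For a flavor of both (a minimal sketch, not from the original post; QADD
is the saturating signed add from the DSP extensions, and PUSH/POP are
the multi-register forms):
QADD  r0, r1, r2            @ r0 = sat32(r1 + r2), clamped instead of wrapping
PUSH  {r4-r7, lr}           @ store five registers with a single instruction
POP   {r4-r7, pc}           @ restore them and return by popping into pc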
The supervisor instruction set is pretty nice. IO is memory-mapped, etc.
There's a co-processor interface for working with MMUs and things like it.
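As a rough sketch of that interface (assuming the usual ARMv7 CP15
encodings; the choice of r0/r1 here is mine):
MCR   p15, 0, r1, c2, c0, 0   @ write r1 (root table base) into TTBR0
MCR   p15, 0, r0, c8, c7, 0   @ TLBIALL: invalidate the unified TLB
MRC   p15, 0, r0, c0, c0, 0   @ read the CPU ID register (MIDR) into r0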
Memory mapping is a little weird, in that the first-level page table isn't
the same as the second-level tables: the first-level page table maps the
32-bit address space into 1MiB "sections", each of which is described by a
32-bit section descriptor; thus, to map the entire 4GiB space, you need
4096 of those in 16KiB of physically contiguous RAM. At the second level,
page tables map pages into the 1MiB section at different granularities; I
think the smallest is 1KiB (thus, you need 1024 32-bit entries). To map a
4KiB virtual page to a 4KiB PFN, you repeat the relevant entry 4 times in
the second-level table. It ends up being kind of annoying. I did a little
toy kernel for ARM32 and ended up deciding to use 16KiB pages (basically,
I map 4x4KiB contiguous pages) so I could allocate a single sized
structure for the page tables themselves.
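To make the first-level arithmetic above concrete, a hypothetical walk
(the register assignments here are mine):
LSR   r2, r0, #20             @ first-level index = VA >> 20 (which 1MiB section)
LDR   r3, [r1, r2, LSL #2]    @ fetch its 32-bit descriptor from the table at r1
                              @ (4096 entries x 4 bytes = the 16KiB table)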
Starting with the ARMv8 architecture, it's been split into 32-bit aarch32
(basically the above) and 64-bit aarch64; the latter expanded the number
and width of the general-purpose registers; one is a zero register in
some contexts (and I think a stack pointer in others? I forget the
details). I haven't played around with it too much, but looked at it when
it came out and thought "this is reasonable, with some concessions for
backwards compatibility." They cleaned up the paging weirdness mentioned
above. The multiple push instruction has been retired and replaced with a
"push a pair of adjacent registers" instruction; I viewed that as a
concession between code size and instruction set orthogonality.
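The pair form looks something like this (a sketch of the common A64
prologue/epilogue idiom):
STP   x29, x30, [sp, #-16]!   @ push frame pointer and link register as a pair
LDP   x29, x30, [sp], #16     @ pop them back on the way out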
So... overall quite pleasant, and far better than x86_64, but with some
oddities.
- Dan C.
I will ask Warren's indulgence here, as this probably should be continued
in COFF, which I have CC'ed, but since it was asked on TUHS I will answer.
On Wed, Feb 3, 2021 at 6:28 AM Peter Jeremy via TUHS <tuhs(a)minnie.tuhs.org>
wrote:
> I'm not sure that 16 (or any other 2^n) bits is that obvious up front.
> Does anyone know why the computer industry wound up standardising on
> 8-bit bytes?
>
Well, 'standardizing' is a little strong. Check out my QUORA answer: How
many bits are there in a byte
<https://www.quora.com/How-many-bits-are-there-in-a-byte/answer/Clem-Cole>
and What is a bit? Why are 8 bits considered as 1 byte? Why not 7 bit or 9
bit?
<https://www.quora.com/What-is-a-bit-Why-are-8-bits-considered-as-1-byte-Why…>
for my details but the 8-bit part of the tail is here (cribbed from those
posts):
The industry followed IBM with the S/360. The story of why a byte is 8
bits for the S/360 is one of my favorites, since the number of bits in a
byte is defined for each computer architecture. Simply put, Fred Brooks
(who led the IBM System 360 project) overruled the chief hardware
designer, Gene Amdahl, and told him to make things powers of two to make
it easier on the SW writers. Amdahl famously thought it was a waste of
hardware, but Brooks had the final authority.
My friend Russ Robeleon, who was the lead HW guy on the 360/50 and later
the ASP (*a.k.a.* project X), and who was in the room as it were, tells
his yarn this way: You need to remember that the 360 was designed to be
IBM's first *ASCII machine* (not EBCDIC, as it ended up - a different
story)[1]. Amdahl was planning for a 24-bit word and a 7-bit byte for
cost reasons. Fred kept throwing him out of his office and told him not
to come back “until a byte and word are powers of two, as we just don’t
know how to program it otherwise.”
Brooks would eventually relent: the original pointer on the System 360
became 24 bits, as long as it was stored in a 32-bit “word”.[2] As a
result (and to answer your original question), a byte first widely became
8 bits with IBM’s System 360.
It should be noted that it still took some time before the 8-bit byte
appeared more widely and in almost all systems as we see it today. Many
systems, like the DEC PDP-6/10s, used five 7-bit bytes packed into a
36-bit word (with a single bit left over) for a long time. I believe
that the real widespread use of the 8-bit byte did not occur until the
rise of the minis such as the PDP-11 and the DG Nova in the late
1960s/early 1970s, and eventually the mid-1970s’ microprocessors such as
the 8080/Z80/6502.
Clem
[1] While IBM did lead the effort to create ASCII, and the System 360
actually supported ASCII in hardware, because the software was so late,
IBM marketing decided not to switch from BCD and instead used EBCDIC
(their own code). Most IBM software was released using that code for the
System 360/370 over the years. It was not until IBM released their
Series/1 <https://en.wikipedia.org/wiki/IBM_Series/1> minicomputer in the
late 1970s that IBM finally supported an ASCII-based system as the
natural code for the software, although it had a lot of support for
EBCDIC as they were selling them to interface to their ‘Mainframe’
products.
[2] Gordon Bell would later observe that those two choices (32-bit word and
8-bit byte) were what made the IBM System 360 architecture last in the
market, as neither would have been ‘fixable’ later.