The EFF just published an article on the rise and fall of Gopher on
their Deeplinks blog.
"Gopher: When Adversarial Interoperability Burrowed Under the
Gatekeepers' Fortresses"
https://www.eff.org/deeplinks/2020/02/gopher-when-adversarial-interoperabil…
I thought it might be of interest to people here.
--
Michael Kjörling • https://michael.kjorling.se • michael(a)kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
A bit of history: on this day in 1941, Konrad Zuse presented the Z3, the
world's first working programmable, fully automatic computer, in Berlin.
Pity it got destroyed when the joint was bombed...
-- Dave
[ COFF not TUHS ]
Clem Cole <clemc(a)ccc.com> wrote:
> On Sun, May 9, 2021 at 3:58 PM Larry McVoy <lm(a)mcvoy.com> wrote:
>
> > National couldn't get it together to produce bug free chips or maybe
> > we'd all be running that, pretty nice architecture (in theory).
>
> I've always wondered if a Nat Semi NS32016 based system running in a PC/AT
> form factor had appeared that was priced like a PC/AT if that might have
> had a chance.
Acorn Computers made an odd machine consisting of a BBC micro with a 32016
second processor in a box. (It didn't run a unix-like OS, I'm afraid.) The
32016 was one of the CPUs that inspired the ARM, because its performance
was so terrible: it was not able to make good use of the available memory
bandwidth. (There wasn't a 68000 second processor because its interrupt
latency was too bad to drive the "tube" interface.)
http://chrisacorns.computinghistory.org.uk/Computers/ACW.html
https://en.wikipedia.org/wiki/Acorn_Computers#New_RISC_architecture
Tony.
--
f.anthony.n.finch <dot(a)dotat.at> https://dotat.at/
Malin, Southeast Hebrides: Cyclonic 4 to 6. Slight or moderate in
southeast, moderate or rough in northwest. Showers. Good, occasionally
poor.
I've got a number of DEC terminals, ranging from the VT220 to the VT520
(sadly, I got rid of my VT100 and VT102 many years ago, before I started
collecting DEC equipment instead of just using it), and some of them
have one or more burned out serial ports. Before I start taking them
apart to find out what chips were used, I figured I'd check if any of
you folks happen to know. I'd like to order a stash of replacements,
and it would be nice to have them handy before I clear the work bench to
start dismantling terminals...
Oh, and for the record: the Q-bus PDP-11/23 uses 9636ACP and 9637ACP for
output and input, respectively, while the VAX-11/630 substitutes a
9639ATC optocoupler for the 9637ACP differential receiver. (I have a
couple of spare CPU boards with damaged ports, as well, so these are all
on my shopping list already.)
-tih
--
Most people who graduate with CS degrees don't understand the significance
of Lisp. Lisp is the most important idea in computer science. --Alan Kay
Re: [COFF] Happy birthday, the Internet!
> From: Jim Carpenter
> But even that isn't really correct, as the Internet is a network of
> networks and Arpanet was all alone.
Correct: the ARPANET was merely an ancestor (albeit an important one) of the
Internet. (The most important, in terms of technical influence, was CYCLADES,
"the key intermediate technical step between the ARPANET and the Internet".)
The ARPANET was later sort of subsumed into the Internet, as its original
long-haul backbone ("sort of" because the ARPANET's main protocol was
discarded in doing so), but that's not too important.
If you want to select _a_ birthday for the Internet, I'd pick the day they
settled on the IPv4 packet format; we know when that was, it was the second
day of the 15/16 June, 1978 meeting (see IEN-68). I'm not wedded to that
date; if someone has a better suggestion (e.g. the first PRNET to ARPANET test),
I'm open to hearing why the alternative's preferable.
Noel
Born on this day in 1969 with the publication of RFC-1 "Host Software" by
Steve Crocker, it basically specified the ARPAnet and the IMPs.
Oh, and it really peeves me when the stupid media call it "the internet";
it's a proper noun, hence is capitalised.
-- Dave
On Fri, Apr 2, 2021 at 1:50 PM Theodore Ts'o <tytso(a)mit.edu> wrote:
> Out of curiosity, how was TCF different or similar to Mosix?
>
Many similar ideas. TCF was basically the commercial implementation of
Locus, which Jerry Popek and his students built at UCLA (on 11/70s
originally). I want to say the Locus papers are in some early SOSPs.
MOSIX was its own Unix-like OS, as was Locus [and some of this was in
Sprite too, BTW]. TCF was a huge number of rewrites to BSD, and it was UNIX.
The local/remote restructuring was ad hoc. By the time Roman and I led
TNC, we had created a formal VPROC layer as an analog to the VFS layer
(more in a minute). TNC was to be the guts of Intel's Paragon, using OSF/1
as the base OS.
The basic idea of all of them is that the cluster looks like a single
protection domain, with nodes contributing resources. As Larry says, a ps
is cluster-wide. TCF had the idea of features that each node provides (ISA,
floating-point unit, AP, *etc.*) so if a process needed specific
resources, it would only run on a node that had those resources. But it
also meant that processes could only be migrated to a node that had the
same resources.
One of the coolest demos I ever saw was when we took a new unconfigured
PS/2 at a trade show, connected it to the trade show ethernet, and put in
a boot floppy. We dialed back into a system at LCC, filled in some
security things, details like the IP address of the new system, and soon
it booted and joined the cluster. It immediately started to add services
to the cluster; we walked away, and overnight the system had set up the
hard disk and started caching locally the things that were needed for
speed. Later I was editing a file and, from another screen, migrated the
process around the cluster while the edit was active.
The problem with VPROC (like VFS) is it takes surgery all over the kernel.
In fact, for the Linux 2.x kernel, the OpenSSI
<https://sourceforge.net/projects/ssic-linux/> folks did all the kernel
work to virtualize the concept of process, which sadly never got picked up
as the kernel.org folks did not like it (a real shame IMO). BTW, one of
the neat side effects of a layer like VPROC is things like
checkpoint/restart are free -- you are just migrating a process to the
storage instead of an active processor.
Anyway, Mosix has a lot of the same types of ideas. I have much less
experience with it directly.
On Fri, Apr 02, 2021 at 09:11:47AM -0700, Larry McVoy wrote:
> > Long before Linus released Linux into the wild in 1990 for the >>386<< much
> > less any other ISA, IBM had been shipping as a product AIX/370 (and AIX/PS2
> > for the 386); which we developed at Locus for them. The user-space was
> > mostly System V, the kernel was based on BSD (4.1 originally) plus a great
> > deal of customization, including of course the Locus OS work, which IBM
> > called TCF - the transparent computing facility. It was very cool you
> > could cluster 370s and PS/2 and from >>any<< node run a program of either
> > ISA. It has been well discussed in this forum, previously.
>
> It's really a shame that TCF didn't get more widespread usage/traction.
> That's exactly what BitMover wanted to do, I wanted to scale small cheap
> SMPs in a cluster with a TCF layer on it. I gave some talks about it,
> it obviously went nowhere but might have if we had TCF as a starting
> point. TCF was cool.
(Moving this to COFF...)
Out of curiosity, how was TCF different or similar to Mosix?
- Ted
The tab/detab horse was still twitching, so I decided to beat it a little
more.
Doug's claim that tabs saving space was an urban legend didn't ring true,
but betting against Doug is a good way to get poor quick. So I tossed
together a perl script (version run through col -x is at the end of this
note) to measure savings. A simpler script just counted tabs,
distinguishing leading tabs, which I expected to be very common, from
embedded tabs, which I expected to be rare. In retrospect, embedded tabs
are common in (my) C code, separating structure types from the element
names and trailing comments. As Norman pointed out, genuine tabs often
preserve line to line alignment in the presence of small changes. So the
fancier script distinguishes between leading tabs and embedded tabs for
various possible tab stops. Small tab stops keep heavily indented code
lines short, large tab stops can save more space when tabbing past leading
blanks. My coding style uses a "shiftwidth" of 4, which vi turns into spaces
or tabs, with "standard" tabs every 8 columns. My code therefore benefits
most with tabstops every 4 columns. A lot of code is indented 4 spaces,
which saves 3 bytes when replaced by a tab, but there is no saving with
tabstops at 8. Here's the output when run on itself (before it was
detabbed) and on a largish C program:
/home/jpl/bin/tabsave.pl /home/jpl/bin/tabsave.pl rsort.c
/home/jpl/bin/tabsave.pl, size 1876
2: Leading 202, Embedded 3, Total 205
4: Leading 303, Embedded 4, Total 307
8: Leading 238, Embedded 5, Total 243
rsort.c, size 209597
2: Leading 13186, Embedded 4219, Total 17405
4: Leading 19776, Embedded 5990, Total 25766
8: Leading 16506, Embedded 6800, Total 23306
The bytes saved by using tabs compared to the (detabbed) original size are
not chump change, with 2, 4 or 8 column tabstops. On ordinary text, savings
are totally unimpressive, usually 0. Your savings may vary. I think the
horse is now officially deceased. -- jpl
===
#!/usr/bin/perl -w
use strict;

my @Tab_stops = ( 2, 4, 8 );

sub check_stop {
    my ($line, $stop_at) = @_;
    my $pos = length($line);
    my ($leading, $embedded) = (0,0);
    while ($pos >= $stop_at) {
        $pos -= ($pos % $stop_at);      # Get to previous tab stop
        my $blanks = 0;
        while ((--$pos >= 0) && (substr($line, $pos, 1) eq ' ')) {
            ++$blanks;
        }
        if ($blanks > 1) {
            my $full = int($blanks/$stop_at);
            my $partial = $blanks - $full * $stop_at;
            my $savings = (--$partial > 0) ? $partial : 0;
            $savings += $full * ($stop_at - 1);
            if ($pos < 0) {
                $leading += $savings;
            } else {
                $embedded += $savings;
            }
        }
    }
    return ($leading, $embedded);
}

sub dofile {
    my $file = shift;
    my $command = "col -x < $file";
    my $notabsfh;
    unless (open($notabsfh, "-|", $command)) {
        printf STDERR ("Open failed on '$command': $!");
        return;
    }
    my $size = 0;
    my ($leading, $embedded) = (0,0);
    my @savings;
    for (my $i = 0; $i < @Tab_stops; ++$i) { $savings[$i] = [0,0]; }
    while (my $line = <$notabsfh>) {
        my $n = length($line);
        $size += $n;
        $line =~ s/(\s*)$//;
        for (my $i = 0; $i < @Tab_stops; ++$i) {
            my @l_e = check_stop($line, $Tab_stops[$i]);
            for (my $j = 0; $j < @l_e; ++$j) {
                $savings[$i][$j] += $l_e[$j];
            }
        }
    }
    print("$file, size $size\n");
    for (my $i = 0; $i < @Tab_stops; ++$i) {
        print(" $Tab_stops[$i]: ");
        my $l = $savings[$i][0];
        my $e = $savings[$i][1];
        my $t = $l + $e;
        print("Leading $l, Embedded $e, Total $t\n");
    }
    print("\n");
}

sub main {
    for my $file (@ARGV) {
        dofile($file);
    }
}
main();
On Mar 11, 2021, at 10:08 AM, Warner Losh <imp(a)bsdimp.com> wrote:
>
> On Thu, Mar 11, 2021 at 10:40 AM Bakul Shah <bakul(a)iitbombay.org> wrote:
>> From https://www.freebsd.org/cgi/man.cgi?hosts(5)
>> For each host a single line should be present with the following information:
>> Internet address
>> official host name
>> aliases
>> HISTORY
>> The hosts file format appeared in 4.2BSD.
>
> While this is true wrt the history of FreeBSD/Unix, I'm almost positive that BSD didn't invent it. I'm pretty sure it was picked up from the existing host file that was published by sri-nic.arpa before DNS.
A different and more verbose format. See RFCs 810 & 952. Possibly because it had to serve more purposes?
> Warner
>
>>> On Mar 11, 2021, at 9:14 AM, Grant Taylor via TUHS <tuhs(a)minnie.tuhs.org> wrote:
>>> Hi,
>>>
>>> I'm not sure where this message best fits; TUHS, COFF, or Internet History, so please forgive me if this list is not the best location.
>>>
>>> I'm discussing the hosts file with someone and was wondering if there's any historical documentation around its format and what should and should not be entered in the file.
>>>
>>> I've read the current man page on Gentoo Linux, but suspect that it's far from authoritative. I'm hoping that someone can point me to something more authoritative to the hosts file's format, guidelines around entering data, and how it's supposed to function.
>>>
>>> A couple of sticking points in the other discussion revolve around how many entries a host is supposed to have in the hosts file and any ramifications for having a host appear as an alias on multiple lines / entries. To wit, how correct / incorrect is the following:
>>>
>>> 192.0.2.1 host.example.net host
>>> 127.0.0.1 localhost host.example.net host
>>>
>>>
>>>
>>> --
>>> Grant. . . .
>>> unix || die
>> _______________________________________________
>> COFF mailing list
>> COFF(a)minnie.tuhs.org
>> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
Hi,
I'm not sure where this message best fits; TUHS, COFF, or Internet
History, so please forgive me if this list is not the best location.
I'm discussing the hosts file with someone and was wondering if there's
any historical documentation around its format and what should and
should not be entered in the file.
I've read the current man page on Gentoo Linux, but suspect that it's
far from authoritative. I'm hoping that someone can point me to
something more authoritative to the hosts file's format, guidelines
around entering data, and how it's supposed to function.
A couple of sticking points in the other discussion revolve around how
many entries a host is supposed to have in the hosts file and any
ramifications for having a host appear as an alias on multiple lines /
entries. To wit, how correct / incorrect is the following:
192.0.2.1 host.example.net host
127.0.0.1 localhost host.example.net host
--
Grant. . . .
unix || die
I am currently reading "Memoirs of a Computer Pioneer" by Maurice
Wilkes, MIT press. The following text from p. 145 may amuse readers.
[p. 145] By June 1949 people had begun to realize that it was not so
easy to get a program right as had at one time appeared. I well
remember when this realization first came on me with full force. The
EDSAC was on the top floor of the building and the tape-punching and
editing equipment one floor below [...]. I was trying to get working my
first non-trivial program, which was one for the numerical integration
of Airy's differential equation. It was on one of my journeys between
the EDSAC room and the punching equipment that "hesitating at the angles
of stairs" the realization came over me with full force that a good part
of the remainder of my life was going to be spent in finding errors in my
own programs.
N.
=> COFF since it's left Unix history behind.
On 2021-Feb-16 21:08:15 -0700, Grant Taylor via TUHS <tuhs(a)minnie.tuhs.org> wrote:
>I like SQLite and Berkeley DB in that they don't require a full RDBMS
>running. Instead, an application can load what it needs and access the
>DB itself.
I also like SQLite and use it quite a lot. It is a full RDBMS, it
just runs inside the client instead of being a separate backend
server. (BDB is a straight key:value store).
>I don't remember how many files SQLite uses to store a DB.
One file. I often ship SQLite DB files between systems for various
reasons and agree that the "one file" is much easier than a typical RDBMS.
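That in-process, one-file property is easy to demonstrate. A minimal
sketch in Python using the standard sqlite3 module (the file and table
names here are made up for illustration):

```python
import sqlite3
import os
import tempfile

# The whole database -- tables, indexes, everything -- lives in one file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

con = sqlite3.connect(path)   # no server process to start; just open the file
con.execute("CREATE TABLE hosts (addr TEXT, name TEXT)")
con.execute("INSERT INTO hosts VALUES ('192.0.2.1', 'host.example.net')")
con.commit()
con.close()

# Shipping the DB to another system is just copying the file; reopening
# it gives you the full SQL engine again, in-process.
con = sqlite3.connect(path)
name = con.execute("SELECT name FROM hosts WHERE addr = '192.0.2.1'").fetchone()[0]
print(name)   # prints "host.example.net"
```

The whole "backup" or "transfer" story reduces to cp/scp of a single file.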
--
Peter Jeremy
I've been maintaining a customer's application which uses C-ISAM as the
database interface on SCO UNIX, TRU64 and HP-UX. Simple and, above all,
in 'C'! As the wiki says
"IBM still recommends the use of the Informix Standard Engine for
embedded applications"
https://en.wikipedia.org/wiki/IBM_Informix_C-ISAM
Mind you, the page hasn't seen significant changes since 2006 :-)
Of course, not having C-ISAM as a shared library can make executables a
bit big, unless it is included in a custom-made shared library, which I
never really tried on SCO UNIX but did on the newer UNIXes. A diskette
used on SCO UNIX for certain offline salvage actions just isn't 'spacious'
enough.
Cheers,
uncle rubl
.
>From: Grant Taylor <gtaylor(a)tnetconsulting.net>
>To: coff(a)minnie.tuhs.org
>Cc:
>Bcc:
>Date: Sat, 20 Feb 2021 10:23:35 -0700
>Subject: Re: [COFF] [TUHS] cut, paste, join, etc.
>On 2/18/21 12:32 AM, Peter Jeremy via COFF wrote:
>>I also like SQLite and use it quite a lot. It is a full RDBMS, it just runs inside the client instead of being a separate backend server. (BDB is a straight key:value store).
>
>Fair enough.
>
>I was referring to an external and independent daemon with its own needs for care & feeding.
>>One file. I often ship SQLite DB files between systems for various reasons and agree that the "one file" is much easier than a typical RDBMS.
>
>*nod*
>
>--
>Grant. . . .
>unix || die
--
The more I learn the better I understand I know nothing.
The COFF folks may have a bead on this if no one on TUHS does.
---------- Forwarded message ---------
From: ron minnich <rminnich(a)gmail.com>
Date: Wed, Feb 10, 2021, 9:33 AM
Subject: [TUHS] nothing to do with unix, everything to do with history
To: TUHS main list <tuhs(a)minnie.tuhs.org>
There's so much experience here, I thought someone might know:
"Our goal is to develop an emulator for the Burroughs B6700 system. We
need help to find a complete release of MCP software for the Burroughs
B6700.
If you have old magnetic tapes (magtapes) in any format, or computer
printer listings of software or micro-fiche, micro-film, punched-card
decks for any Burroughs B6000 or Burroughs B7000 systems we would like
to hear from you.
Email nw(a)retroComputingTasmania.com"
<moving to coff, less unix heritage content here>
On 2021-02-07 23:29, Doug McIntyre wrote:
> On Sun, Feb 07, 2021 at 04:32:56PM -0500, Nemo Nusquam wrote:
>> My Sun UNIX layout keyboards (and mice) work quite well with my Macs.
>> I share your sentiments.
>
> Most of the bespoke mechanical keyboard makers will offer a dipswitch
> for what happens to the left of the A, and with an option to print the
> right value there, my keyboards work quite well the right way.
I've been using the CODE[0] keyboard with 'clear' switches for the past
few years and have been very happy with it. Has the dipswitches for
swapping around CTRL/CAPS and the meta/Alt, probably others as well.
When I don't have hardware solutions to this, most modern OSes let you
remap keys in software. Being a gnu screen user, CTRL & A being right
next to each other makes life easier.
I've used enough keyboards over the years that didn't even have an ESC
key (Mac Plus, the Commodore 64, the keyboard on my Samsung tablet,
probably a few others), that I got in the habit of using CTRL-[ to
generate an ESC and still do that most of the time rather than reaching
for the ESC up there in the corner.
> I did use the Sun Type5 USB Unix layout for quite some years, but I
> always found it a bit mushy, and liked it better switching back to
> mechanical keyboards with the proper layout.
Before I got this keyboard, I used a Sun Type 7 keyboard (USB with the
UNIX layout). It had the CTRL and ESC keys in the "right" places (as
noted above, ESC location doesn't bother me as much), but yeah, they're
mushy, and big. Much happier with the mechanical keyboard for my daily
driver.
I've been eyeballing the TEX Shinobi[1], a mechanical keyboard with the
ThinkPad-type TrackPoint, to cut down even more on reasons for my fingers
to leave the keyboard.
--
Michael Parson
Pflugerville, TX
[0] http://codekeyboards.com/
[1] https://tex.com.tw/products/shinobi?variant=16969884106842
I have to agree with Clem here. Mind you I still mourn the demise of
Alpha and even Itanium but then I never had to pay for those systems.
I only make sure they run properly so the customer can enjoy their
applications.
My 32-1/2 cents (inflation adjusted).
Take care and stay as healthy as some of my 25 year old servers :-)
Cheers,
uncle rubl
>From: Clem Cole <clemc(a)ccc.com>
>To: Larry McVoy <lm(a)mcvoy.com>
>Cc: COFF <coff(a)minnie.tuhs.org>
>Bcc:
>Date: Fri, 5 Feb 2021 09:36:20 -0500
>Subject: Re: [COFF] Architectures -- was [TUHS] 68k prototypes & microcode
<snip>
>BTW: Once again we 100% agree on the architecture part of the discussion.
>And frankly pre-386 days, I could not think how anyone would come up with
>it. As computer architecture it is terrible, how did so many smart people
>come up with such? It defies everything we are taught about 'good'
>computer architectural design. But .... after all of the issues with the
>ISA's of Vax and the x86/INTEL*64 vs. Alpha --- is how I came to the
>conclusion, architecture does not matter nearly as much as economics and
>we need to get over it and stop whining. Or in Christensen's view, a new
>growing market is often made from a product that is technically not as
>good as the one in the original mainstream market but has some value to
>the new group of people.
<snip>
--
The more I learn the better I understand I know nothing.
Moved to COFF - and I should prefix this note with a strongly worded --
these are my own views and do not necessarily follow my employers (and
often have not, as some of you know that have worked with me in the past).
On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> The x86 stuff is about as far away from PDP-11 as you can get. Required
> to know it, but so unpleasant.
>
BTW: Once again we 100% agree *on the architecture part of* *the discussion*.
And frankly pre-386 days, I could not think how anyone would come up with
it. As computer architecture it is terrible, how did so many smart people
come up with such? It defies everything we are taught about 'good'
computer architectural design. But .... after all of the issues with the
ISA's of Vax and the x86/INTEL*64 *vs.* Alpha --- is how I came to the
conclusion, *architecture does not matter nearly as much as economics and
we need to get over it and stop whining. * Or in Christensen's view, a new
growing market is often made from a product that is technically not as
good as the one in the original mainstream market but has some value to the
new group of people.
x86 (and in particular once the 386 added linear 32 bit addressing), even
though DOS/Windows sucked compared to SunOS (or whatever), the job (work)
that the users needed to do was performed to the customer's satisfaction *and
for a lot less.* The ISVs could run their codes there and >>they<< sell
more copies of their code which is what they care about. The end-users,
really just care about getting a job done.
What was worse was that, at the time, the ISVs kept their own prices
higher on the 'high-value platform' -- which made the cost of those
platforms ever higher. During the Unix wars, this fact was a huge issue.
The same piece of SW for a Masscomp would cost 5-10x more than for a Sun --
because we were considered a minicomputer and Sun was a workstation. Same
10MHz 68000 inside (we had a better compiler, so we ran 20% faster). This
was because the ISVs considered Masscomp's competition to be the Vax
8600, not Sun and Apollo -- sigh.
In the end, the combination of x86 and MSFT did it to Sun. For example,
my college roommates (who were trained on the first $100K
architecture/drawing 3D systems developed at CMU on PDP-11/Unix and Triple
Drip Graphic's Wonder) Wintel running a 'boxed' AutoCAD was way more
economical than a Sun box with a custom architecture package -- economics
won, even though the solution was technically not as good. Another form
of the same issue: did you ever try to write a technical >>publication<<
with Word (not a letter or a memo)? It sucks. The pros liked FrameMaker
and other 'authoring tools' (hey, I think even LaTeX and troff are much
'better' for the author) -- but Frame costs way more than Word, so what do
the publishers want? Ugh, Word DOC format [ask Steinhart about this issue,
he lived it a year ago].
In the case of the Arm, Intel #$%^'ed up 10-15 yrs ago when Jobs said he
wanted a $20 processor for what would become the iPhone and our execs told
him to take a hike (we were making too much money with high-margin
Windows boxes). At the time, Arm was 'not as good' -- but it had two
properties Jobs cared about (better power -- although at the time Arm was
actually not much better than the laptop x86s -- and price: Apple got
Samsung to make/sell parts at less than $20 -- i.e. economics).
Again, I'm not a college professor. I'm an engineer that builds real
computer systems that sometimes people (even ones like the folks that read
this list) want to/have wanted to buy. As much as I like to use solid
architecture principles to guide things, the difference is I know to be
careful. Economics is really the higher bit. What the VAX engineers at
DEC did, or the current INTEL*64 folks (like myself) do, was/is not what
some of the same engineers did with Alpha -- today, we have to try to
figure out how to make the current product continue to be economically
attractive [hence the bazillion new instructions that folks like Paul W
in the compiler team figure out how to exploit, so the ISV's codes run
better, they can sell more copies, and we sell more chips to our
customers to sell to end users].
But like Jobs's, DEC management got caught up in the high margin game, and
ignored the low end (I left Compaq after I managed to build the $1K Alpha
which management blew off -- it could be sold at 45% margins like the Alpha
TurboLaser or 4x00 series). Funny, one of the last things I had proposed
at Masscomp in the early 80s before I went to Stellar, was a low-end system
(also < $1K) and Masscomp management wanted no part of it -- it would have
meant competing with Sun and eventually the PC.
FWIW: Intel >>does<< know how to make a $20 SOC, but the margins will
suck. The question is what will management want to do? I really don't
know. So far, we have liked the server chip margins (don't forget Intel
made more $s last year than it ever has - even in the pandemic).
I feel a little like Dr Seuss' 'Onceler' in the Lorax story ... if Arm can
go upscale from the phone platform who knows what will happen - Bell's Law
predicts Arm displaces INTEL*64:
“Approximately every decade a new computer class forms as a new “minimal”
computer either through using fewer components or use of a small fractional
part of the state-of-the-art chips.”
FWIW: Bell basically has claimed a technical point, based on Christensen's
observation; the 'lesser' technology will displace the 'better' one. Or,
as I say it, sophisticated architecture always loses to better economics.
On 2021-Feb-03 09:58:37 -0500, Clem Cole <clemc(a)ccc.com> wrote:
>but the original released (distributed - MC68000) part was binned at 8 and
>10
There was also a 4MHz version. I had one in my MEX68KECB but I'm not
sure if they were ever sold separately. ISTR I got the impression
that it was a different (early) mask or microcode variant because some
of the interface timings weren't consistent with the 8/10 MHz versions
(something like one of the bus timings was a clock-cycle slower).
>as were the later versions with the updated paging microcode called the
>MC68010 a year later. When the 68020 was released Moto got the speeds up
>to 16Mhz and later 20. By the '040 I think they were running at 50MHz
I also really liked the M68k architecture. Unfortunately, as with the
M6800, Motorola lost out to Intel's inferior excuse for an architecture.
Moving more off-topic, M68k got RISC-ified as the ColdFire MCF5206.
That seemed (to me) to combine the feel of the M68k with the
clock/power gains from RISC. Unfortunately, it didn't take off.
--
Peter Jeremy
On 2021-Feb-03 17:33:56 -0800, Larry McVoy <lm(a)mcvoy.com> wrote:
>The x86 stuff is about as far away from PDP-11 as you can get. Required
>to know it, but so unpleasant.
Warts upon warts upon warts. The complete opposite of orthogonal.
>I have to admit that I haven't looked at ARM assembler, the M1 is making
>me rethink that. Anyone have an opinion on where ARM lies in the pleasant
>to unpleasant scale?
I haven't spent enough time with ARM assembler to form an opinion but
there are a number of interesting interviews with the designers on YT:
* A set of videos by Sophie Wilson (original designer) starting at
https://www.youtube.com/watch?v=jhwwrSaHdh8
* A history of ARM by Dave Jaggar (redesigner) at
https://www.youtube.com/watch?v=_6sh097Dk5k
If you don't want to pony up for a M1, there are a wide range of ARM
SBCs that you could experiment with.
--
Peter Jeremy
On Fri, Feb 05, 2021 at 01:16:08PM +1100, Dave Horsfall wrote:
> [ Directing to COFF, where it likely belongs ]
>
> On Thu, 4 Feb 2021, Arthur Krewat wrote:
>
> >>-- Dave, wondering whether anyone has ever used every VAX instruction
> >
> >Or every VMS call, for that matter. ;)
>
> Urk... I stayed away from VMS as much as possible (I had a network of
> PDP-11s to play with), although I did do a device driver course; dunno why.
Me too, though I did use Eunice; it was a lonely place, as it did not let
me see who was on VMS. I was the only one. A far cry from BSD, where
wall went to everyone and talk got you a screen where you talked.