The Internet was born on this day in 1969, with the publication of RFC-1
("Host Software") by Steve Crocker; it basically specified the ARPAnet and the IMPs.
Oh, and it really peeves me when the stupid media call it "the internet";
it's a proper noun, hence is capitalised.
-- Dave
On Fri, Apr 2, 2021 at 1:50 PM Theodore Ts'o <tytso(a)mit.edu> wrote:
> Out of curiosity, how was TCF different or similar to Mosix?
>
Many similar ideas. TCF was basically the commercial implementation of
Locus, which Jerry and his students built at UCLA (originally on an 11/70). I
want to say the Locus papers are in some early SOSPs.
MOSIX was its own Unix-like OS, as was Locus [and some of this was in
Sprite too, BTW]. TCF was a huge number of rewrites to BSD, and it was UNIX.
The local/remote restructuring was ad hoc. By the time Roman and I led
TNC, we had created a formal VPROC layer as an analog to the VFS layer
(more in a minute). TNC was to be the guts of Intel's Paragon, using OSF/1
as the base OS.
The basic idea of all of them is that the cluster looks like a single
protection domain with nodes contributing resources. As Larry says, a ps is
cluster-wide. TCF had the idea of features that each node provides (ISA,
floating-point unit, AP, *etc.*), so if a process needed specific
resources, it would only run on a node that had those resources. But it
also meant that processes could only be migrated to a node that had the
same resources.
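To make the feature idea concrete, here is a toy sketch in C -- the names
and the bitmask encoding are invented for illustration; the real TCF surely
represented node capabilities differently:

#define F_ISA_370   (1u << 0)   /* node can execute S/370 binaries */
#define F_ISA_386   (1u << 1)   /* node can execute 386 binaries */
#define F_FPU       (1u << 2)   /* node has hardware floating point */

/* A process may run on (or migrate to) a node only if the node
 * provides every feature the process needs. */
static int node_can_run(unsigned proc_needs, unsigned node_has)
{
        return (proc_needs & node_has) == proc_needs;
}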
One of the coolest demos I ever saw: we took a new, unconfigured PS/2 at
a trade show, connected its ethernet to the trade-show network, and put in
a boot floppy. We dialed back into a system at LCC, filled in some
security things, details like the IP address of the new system, and soon
it booted and joined the cluster. It immediately started to add services
to the cluster; we walked away, and overnight the system had set up the
hard disk and started caching locally the things that were needed for
speed. Later I was editing a file and, from another screen, migrated the
process around the cluster while the edit was active.
The problem with VPROC (like VFS) is that it takes surgery all over the
kernel. In fact, for the Linux 2.x kernel the OpenSSI
<https://sourceforge.net/projects/ssic-linux/> folks did all the kernel
work to virtualize the concept of a process, which sadly never got picked
up, as the kernel.org folks did not like it (a real shame IMO). BTW, one
of the neat side effects of a layer like VPROC is that things like
checkpoint/restart are free -- you are just migrating a process to storage
instead of to an active processor.
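To make the VFS analogy concrete, here is a minimal sketch of what a VPROC
operations vector might look like -- every name here is hypothetical (I
don't have the TCF/TNC sources), but it shows why migration and
checkpoint/restart fall out of the design: you just swap the vector:

struct vproc;

struct vproc_ops {
        int (*vpo_signal)(struct vproc *vp, int sig);    /* deliver a signal */
        int (*vpo_wait)(struct vproc *vp, int *status);  /* reap exit status */
        int (*vpo_migrate)(struct vproc *vp, int node);  /* move to another node */
        int (*vpo_getinfo)(struct vproc *vp, void *buf); /* e.g. cluster-wide ps */
};

struct vproc {
        int vp_pid;                /* cluster-wide pid */
        int vp_node;               /* node currently hosting the process */
        struct vproc_ops *vp_ops;  /* local, remote, or on-storage implementation */
};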
Anyway, Mosix has a lot of the same types of ideas. I have much less
experience with it directly.
On Fri, Apr 02, 2021 at 09:11:47AM -0700, Larry McVoy wrote:
> > Long before Linus released Linux into the wild in 1991 for the >>386<< much
> > less any other ISA, IBM had been shipping as a product AIX/370 (and AIX/PS2
> > for the 386), which we developed at Locus for them. The user-space was
> > mostly System V; the kernel was based on BSD (4.1 originally) plus a great
> > deal of customization, including of course the Locus OS work, which IBM
> > called TCF - the Transparent Computing Facility. It was very cool: you
> > could cluster 370s and PS/2s and, from >>any<< node, run a program for
> > either ISA. It has been well discussed in this forum previously.
>
> It's really a shame that TCF didn't get more widespread usage/traction.
> That's exactly what BitMover wanted to do: I wanted to scale small, cheap
> SMPs in a cluster with a TCF layer on it. I gave some talks about it;
> it obviously went nowhere, but might have if we had had TCF as a starting
> point. TCF was cool.
(Moving this to COFF...)
Out of curiosity, how was TCF different or similar to Mosix?
- Ted
The tab/detab horse was still twitching, so I decided to beat it a little
more.
Doug's claim that tabs saving space was an urban legend didn't ring true,
but betting against Doug is a good way to get poor quick. So I tossed
together a perl script (the version run through col -x is at the end of
this note) to measure savings. A simpler script just counted tabs,
distinguishing leading tabs, which I expected to be very common, from
embedded tabs, which I expected to be rare. In retrospect, embedded tabs
are common in (my) C code, separating structure types from the element
names and trailing comments. As Norman pointed out, genuine tabs often
preserve line to line alignment in the presence of small changes. So the
fancier script distinguishes between leading tabs and embedded tabs for
various possible tab stops. Small tab stops keep heavily indented code
lines short, large tab stops can save more space when tabbing past leading
blanks. My coding style uses a "shiftwidth" of 4, which vi turns into spaces
or tabs, with "standard" tabs every 8 columns. My code therefore benefits
most with tabstops every 4 columns. A lot of code is indented 4 spaces,
which saves 3 bytes when replaced by a tab, but there is no saving with
tabstops at 8. Here's the output when run on itself (before it was
detabbed) and on a largish C program:
/home/jpl/bin/tabsave.pl /home/jpl/bin/tabsave.pl rsort.c
/home/jpl/bin/tabsave.pl, size 1876
2: Leading 202, Embedded 3, Total 205
4: Leading 303, Embedded 4, Total 307
8: Leading 238, Embedded 5, Total 243
rsort.c, size 209597
2: Leading 13186, Embedded 4219, Total 17405
4: Leading 19776, Embedded 5990, Total 25766
8: Leading 16506, Embedded 6800, Total 23306
The bytes saved by using tabs compared to the (detabbed) original size are
not chump change, with 2, 4 or 8 column tabstops. On ordinary text, savings
are totally unimpressive, usually 0. Your savings may vary. I think the
horse is now officially deceased. -- jpl
===
#!/usr/bin/perl -w
use strict;

my @Tab_stops = ( 2, 4, 8 );

sub check_stop {
    my ($line, $stop_at) = @_;
    my $pos = length($line);
    my ($leading, $embedded) = (0, 0);
    while ($pos >= $stop_at) {
        $pos -= ($pos % $stop_at);    # Get to previous tab stop
        my $blanks = 0;
        while ((--$pos >= 0) && (substr($line, $pos, 1) eq ' ')) {
            ++$blanks;
        }
        if ($blanks > 1) {
            my $full = int($blanks / $stop_at);
            my $partial = $blanks - $full * $stop_at;
            my $savings = (--$partial > 0) ? $partial : 0;
            $savings += $full * ($stop_at - 1);
            if ($pos < 0) {
                $leading += $savings;
            } else {
                $embedded += $savings;
            }
        }
    }
    return ($leading, $embedded);
}

sub dofile {
    my $file = shift;
    my $command = "col -x < $file";
    my $notabsfh;
    unless (open($notabsfh, "-|", $command)) {
        printf STDERR ("Open failed on '$command': $!");
        return;
    }
    my $size = 0;
    my @savings;
    for (my $i = 0; $i < @Tab_stops; ++$i) { $savings[$i] = [0, 0]; }
    while (my $line = <$notabsfh>) {
        my $n = length($line);
        $size += $n;
        $line =~ s/(\s*)$//;
        for (my $i = 0; $i < @Tab_stops; ++$i) {
            my @l_e = check_stop($line, $Tab_stops[$i]);
            for (my $j = 0; $j < @l_e; ++$j) {
                $savings[$i][$j] += $l_e[$j];
            }
        }
    }
    print("$file, size $size\n");
    for (my $i = 0; $i < @Tab_stops; ++$i) {
        print(" $Tab_stops[$i]: ");
        my $l = $savings[$i][0];
        my $e = $savings[$i][1];
        my $t = $l + $e;
        print("Leading $l, Embedded $e, Total $t\n");
    }
    print("\n");
}

sub main {
    for my $file (@ARGV) {
        dofile($file);
    }
}

main();
On Mar 11, 2021, at 10:08 AM, Warner Losh <imp(a)bsdimp.com> wrote:
>
> On Thu, Mar 11, 2021 at 10:40 AM Bakul Shah <bakul(a)iitbombay.org> wrote:
>> From https://www.freebsd.org/cgi/man.cgi?hosts(5)
>> For each host a single line should be present with the following information:
>> Internet address
>> official host name
>> aliases
>> HISTORY
>> The hosts file format appeared in 4.2BSD.
>
> While this is true wrt the history of FreeBSD/Unix, I'm almost positive that BSD didn't invent it. I'm pretty sure it was picked up from the existing host file that was published by sri-nic.arpa before DNS.
A different and more verbose format. See RFCs 810 & 952. Possibly because it had to serve more purposes?
> Warner
>
>>> On Mar 11, 2021, at 9:14 AM, Grant Taylor via TUHS <tuhs(a)minnie.tuhs.org> wrote:
>>> Hi,
>>>
>>> I'm not sure where this message best fits; TUHS, COFF, or Internet History, so please forgive me if this list is not the best location.
>>>
>>> I'm discussing the hosts file with someone and was wondering if there's any historical documentation around its format and what should and should not be entered in the file.
>>>
>>> I've read the current man page on Gentoo Linux, but suspect that it's far from authoritative. I'm hoping that someone can point me to something more authoritative about the hosts file's format, guidelines around entering data, and how it's supposed to function.
>>>
>>> A couple of sticking points in the other discussion revolve around how many entries a host is supposed to have in the hosts file and any ramifications for having a host appear as an alias on multiple lines / entries. To wit, how correct / incorrect is the following:
>>>
>>> 192.0.2.1 host.example.net host
>>> 127.0.0.1 localhost host.example.net host
>>>
>>>
>>>
>>> --
>>> Grant. . . .
>>> unix || die
I am currently reading "Memoirs of a Computer Pioneer" by Maurice
Wilkes, MIT press. The following text from p. 145 may amuse readers.
[p. 145] By June 1949 people had begun to realize that it was not so
easy to get a program right as had at one time appeared. I well
remember when this realization first came on me with full force. The
EDSAC was on the top floor of the building and the tape-punching and
editing equipment one floor below [...]. I was trying to get working my
first non-trivial program, which was one for the numerical integration
of Airy's differential equation. It was on one of my journeys between
the EDSAC room and the punching equipment that "hesitating at the angles
of stairs" the realization came over me with full force that a good part
of the remainder of my life was going to be spent in finding errors in my
own programs.
N.
=> COFF since it's left Unix history behind.
On 2021-Feb-16 21:08:15 -0700, Grant Taylor via TUHS <tuhs(a)minnie.tuhs.org> wrote:
>I like SQLite and Berkeley DB in that they don't require a full RDBMS
>running. Instead, an application can load what it needs and access the
>DB itself.
I also like SQLite and use it quite a lot. It is a full RDBMS, it
just runs inside the client instead of being a separate backend
server. (BDB is a straight key:value store).
>I don't remember how many files SQLite uses to store a DB.
One file. I often ship SQLite DB files between systems for various
reasons and agree that the "one file" is much easier than a typical RDBMS.
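A minimal C sketch of that in-process model (the file and table names here
are made up; compile against the real sqlite3 library with -lsqlite3):

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
        sqlite3 *db;

        /* The whole "server" is a library call against one ordinary file. */
        if (sqlite3_open("example.db", &db) != SQLITE_OK) {
                fprintf(stderr, "open: %s\n", sqlite3_errmsg(db));
                return 1;
        }
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t (k TEXT, v TEXT)",
            NULL, NULL, NULL);
        sqlite3_close(db);      /* everything now sits in example.db */
        return 0;
}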
--
Peter Jeremy
I've been maintaining a customer's application which uses C-ISAM as the
database interface on SCO UNIX, Tru64 and HP-UX. Simple and, above all,
in 'C'! As the wiki says:
"IBM still recommends the use of the Informix Standard Engine for
embedded applications"
https://en.wikipedia.org/wiki/IBM_Informix_C-ISAM
Mind you, the page hasn't seen significant changes since 2006 :-)
Of course, not having C-ISAM as a shared library can make executables a
bit big, unless it is included in a custom-made shared library, which I
never really tried on SCO UNIX but did on the newer UNIXes. A diskette
used on SCO UNIX for certain offline salvage actions just isn't 'spacious'
enough.
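For anyone who hasn't seen it, the flavour of the C-ISAM interface is
roughly this -- written from (fallible) memory, so check the mode flags and
signatures against isam.h before trusting it:

#include <isam.h>   /* C-ISAM header shipped with the product */

int dump_first(char *file, char *record)
{
        int fd = isopen(file, ISINPUT + ISMANULOCK);
        if (fd < 0)
                return iserrno;     /* C-ISAM's global error, like errno */
        if (isread(fd, record, ISFIRST) < 0) {
                isclose(fd);
                return iserrno;
        }
        isclose(fd);
        return 0;
}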
Cheers,
uncle rubl
.
>From: Grant Taylor <gtaylor(a)tnetconsulting.net>
>To: coff(a)minnie.tuhs.org
>Cc:
>Bcc:
>Date: Sat, 20 Feb 2021 10:23:35 -0700
>Subject: Re: [COFF] [TUHS] cut, paste, join, etc.
>On 2/18/21 12:32 AM, Peter Jeremy via COFF wrote:
>>I also like SQLite and use it quite a lot. It is a full RDBMS, it just runs
>>inside the client instead of being a separate backend server. (BDB is a
>>straight key:value store).
>
>Fair enough.
>
>I was referring to an external and independent daemon with its own needs
>for care & feeding.
>.
>>One file. I often ship SQLite DB files between systems for various reasons
>>and agree that the "one file" is much easier than a typical RDBMS.
>
>*nod*
>
>--
>Grant. . . .
>unix || die
--
The more I learn the better I understand I know nothing.
The COFF folks may have a bead on this if no one on TUHS does.
---------- Forwarded message ---------
From: ron minnich <rminnich(a)gmail.com>
Date: Wed, Feb 10, 2021, 9:33 AM
Subject: [TUHS] nothing to do with unix, everything to do with history
To: TUHS main list <tuhs(a)minnie.tuhs.org>
There's so much experience here, I thought someone might know:
"Our goal is to develop an emulator for the Burroughs B6700 system. We
need help to find a complete release of MCP software for the Burroughs
B6700.
If you have old magnetic tapes (magtapes) in any format, or computer
printer listings of software or micro-fiche, micro-film, punched-card
decks for any Burroughs B6000 or Burroughs B7000 systems we would like
to hear from you.
Email nw(a)retroComputingTasmania.com"
<moving to coff, less unix heritage content here>
On 2021-02-07 23:29, Doug McIntyre wrote:
> On Sun, Feb 07, 2021 at 04:32:56PM -0500, Nemo Nusquam wrote:
>> My Sun UNIX layout keyboards (and mice) work quite well with my Macs.
>> I share your sentiments.
>
> Most of the bespoke mechanical keyboard makers will offer a dipswitch
> for what happens to the left of the A, and with an option to print the
> right value there, my keyboards work quite well the right way.
I've been using the CODE[0] keyboard with 'clear' switches for the past
few years and have been very happy with it. Has the dipswitches for
swapping around CTRL/CAPS and the meta/Alt, probably others as well.
When I don't have hardware solutions to this, most modern OSes let you
remap keys in software. Being a gnu screen user, CTRL & A being right
next to each other makes life easier.
I've used enough keyboards over the years that didn't even have an ESC
key (Mac Plus, the Commodore 64, the keyboard on my Samsung tablet,
probably a few others), that I got in the habit of using CTRL-[ to
generate an ESC and still do that most of the time rather than reaching
for the ESC up there in the corner.
> I did use the Sun Type5 USB Unix layout for quite some years, but I
> always found it a bit mushy, and liked it better switching back to
> mechanical keyboards with the proper layout.
Before I got this keyboard, I used a Sun Type 7 keyboard (USB with the
UNIX layout). It had the CTRL and ESC keys in the "right" places (as
noted above, ESC location doesn't bother me as much), but yeah, they're
mushy, and big. Much happier with the mechanical keyboard for my daily
driver.
I've been eyeballing the TEX Shinobi[1], a mechanical keyboard with the
ThinkPad type TrackPoint, cut down on reasons for my fingers to leave
the keyboard even more.
--
Michael Parson
Pflugerville, TX
[0] http://codekeyboards.com/
[1] https://tex.com.tw/products/shinobi?variant=16969884106842
I have to agree with Clem here. Mind you I still mourn the demise of
Alpha and even Itanium but then I never had to pay for those systems.
I only make sure they run properly so the customer can enjoy their
applications.
My 32-1/2 cents (inflation adjusted).
Take care and stay as healthy as some of my 25 year old servers :-)
Cheers,
uncle rubl
>From: Clem Cole <clemc(a)ccc.com>
>To: Larry McVoy <lm(a)mcvoy.com>
>Cc: COFF <coff(a)minnie.tuhs.org>
>Bcc:
>Date: Fri, 5 Feb 2021 09:36:20 -0500
>Subject: Re: [COFF] Architectures -- was [TUHS] 68k prototypes & microcode
<snip>
>BTW: Once again we 100% agree on the architecture part of the discussion. And
>frankly, pre-386 days, I could not think how anyone would come up with it. As
>computer architecture it is terrible; how did so many smart people come up
>with such? It defies everything we are taught about 'good' computer
>architectural design. But .... after all of the issues with the ISAs of the
>Vax and the x86/INTEL*64 vs. Alpha --- this is how I came to the conclusion
>that architecture does not matter nearly as much as economics, and we need to
>get over it and stop whining. Or in Christensen's view, a new growing market
>is often made from a product that is technically not as good as the one in
>the original mainstream market but has some value to the new group of people.
<snip>
--
The more I learn the better I understand I know nothing.
Moved to COFF - and I should prefix this note with a strongly worded
disclaimer: these are my own views and do not necessarily reflect my
employer's (and often have not, as some of you who have worked with me in
the past know).
On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> The x86 stuff is about as far away from PDP-11 as you can get. Required
> to know it, but so unpleasant.
>
BTW: Once again we 100% agree *on the architecture part of* *the discussion*.
And frankly, pre-386 days, I could not think how anyone would come up with
it. As computer architecture it is terrible; how did so many smart people
come up with such? It defies everything we are taught about 'good'
computer architectural design. But .... after all of the issues with the
ISAs of the Vax and the x86/INTEL*64 *vs.* Alpha --- this is how I came to
the conclusion that *architecture does not matter nearly as much as
economics, and we need to get over it and stop whining.* Or in
Christensen's view, a new growing market is often made from a product that
is technically not as good as the one in the original mainstream market
but has some value to the new group of people.
x86 (and in particular once the 386 added linear 32-bit addressing), even
though DOS/Windows sucked compared to SunOS (or whatever), performed the
job (work) that the users needed to do, to the customer's satisfaction *and
for a lot less.* The ISVs could run their codes there, and >>they<< could
sell more copies of their code, which is what they care about. The
end-users really just care about getting a job done.
What was worse at the time was that the ISVs kept their own prices higher
on the 'high-value platform' - which made the cost of those platforms ever
higher. During the Unix wars, this fact was a huge issue. The same piece
of SW for a Masscomp would cost 5-10x more than for a Sun -- because we
were considered a minicomputer and Sun was a workstation. Same 10MHz
68000 inside (we had a better compiler, so we ran 20% faster). This was
because the ISVs considered Masscomp's competition to be the Vax 8600, not
Sun and Apollo -- sigh.
In the end, the combination of x86 and MSFT did it to Sun. For example,
for my college roommates (who were trained on the first $100K
architecture/drawing 3D systems developed at CMU on PDP-11/Unix and the
Triple Drip Graphics Wonder), Wintel running a 'boxed' AutoCAD was way
more economical than a Sun box with a custom architecture package --
economics won, even though the solution was technically not as good.
Another form of the same issue: did you ever try to write a
technical >>publication<< with Word (not a letter or a memo)? It sucks.
The pros liked FrameMaker and other 'authoring tools' (hey, I think even
LaTeX and troff are much 'better' for the author) -- but Frame costs way
more than Word, so what do the publishers want? Ugh, Word DOC format [ask
Steinhart about this issue, he lived it a year ago].
In the case of the Arm, Intel #$%^'ed up 10-15 yrs ago when Jobs said he
wanted a $20 processor for what would become the iPhone and our execs told
him to take a hike (we were making too much money with high-margin
Windows boxes). At the time, Arm was 'not as good' - but it had two
properties Jobs cared about: better power (although at the time Arm was
actually not much better than the laptop x86s) and price (Apple got
Samsung to make/sell parts at less than $20 -- i.e. economics).
Again, I'm not a college professor. I'm an engineer that builds real
computer systems that sometimes people (even ones like the folks that read
this list) want to/have wanted to buy. As much as I like to use solid
architecture principles to guide things, the difference is I know to be
careful. Economics is really the higher bit. What the VAX engineers at
DEC did, or what the current INTEL*64 folks (like myself) do, was/is not
what some of the same engineers did with Alpha --- today, we have to try
to figure out how to make the current product continue to be economically
attractive [hence the bazillion new instructions that folks like Paul W in
the compiler team figure out how to exploit, so the ISVs' codes run better
and they can sell more copies and we sell more chips to our customers to
sell to end users].
But, as with Jobs, DEC management got caught up in the high-margin game,
and ignored the low end (I left Compaq after I managed to build the $1K
Alpha, which management blew off -- it could be sold at 45% margins like
the Alpha TurboLaser or 4x00 series). Funny, one of the last things I had
proposed at Masscomp in the early 80s, before I went to Stellar, was a
low-end system (also < $1K), and Masscomp management wanted no part of it
-- it would have meant competing with Sun and eventually the PC.
FWIW: Intel >>does<< know how to make a $20 SOC, but the margins will
suck. The question is: will management want to? I really don't know. So
far, we have liked the server-chip margins (don't forget Intel made more
$s last year than it ever has - even in the pandemic).
I feel a little like Dr Seuss' 'Onceler' in the Lorax story ... if Arm can
go upscale from the phone platform who knows what will happen - Bell's Law
predicts Arm displaces INTEL*64:
“Approximately every decade a new computer class forms as a new “minimal”
computer either through using fewer components or use of a small fractional
part of the state-of-the-art chips.”
FWIW: Bell basically has claimed a technical point, based on Christensen's
observation: the 'lesser' technology will displace the 'better' one. Or
as I say it, sophisticated architecture always loses to better economics.
On 2021-Feb-03 09:58:37 -0500, Clem Cole <clemc(a)ccc.com> wrote:
>but the original released (distributed - MC68000) part was binned at 8 and
>10
There was also a 4MHz version. I had one in my MEX68KECB but I'm not
sure if they were ever sold separately. ISTR I got the impression
that it was a different (early) mask or microcode variant because some
of the interface timings weren't consistent with the 8/10 MHz versions
(something like one of the bus timings was a clock-cycle slower).
>as were the later versions with the updated paging microcode called the
>MC68010 a year later. When the 68020 was released Moto got the speeds up
>to 16MHz and later 20. By the '040 I think they were running at 50MHz
I also really liked the M68k architecture. Unfortunately, as with the
M6800, Motorola lost out to Intel's inferior excuse for an architecture.
Moving more off-topic, M68k got RISC-ified as the ColdFire MCF5206.
That seemed (to me) to combine the feel of the M68k with the
clock/power gains from RISC. Unfortunately, it didn't take off.
--
Peter Jeremy
On 2021-Feb-03 17:33:56 -0800, Larry McVoy <lm(a)mcvoy.com> wrote:
>The x86 stuff is about as far away from PDP-11 as you can get. Required
>to know it, but so unpleasant.
Warts upon warts upon warts. The complete opposite of orthogonal.
>I have to admit that I haven't looked at ARM assembler, the M1 is making
>me rethink that. Anyone have an opinion on where ARM lies in the pleasant
>to unpleasant scale?
I haven't spent enough time with ARM assembler to form an opinion but
there are a number of interesting interviews with the designers on YT:
* A set of videos by Sophie Wilson (original designer) starting at
https://www.youtube.com/watch?v=jhwwrSaHdh8
* A history of ARM by Dave Jaggar (redesigner) at
https://www.youtube.com/watch?v=_6sh097Dk5k
If you don't want to pony up for a M1, there are a wide range of ARM
SBCs that you could experiment with.
--
Peter Jeremy
On Fri, Feb 05, 2021 at 01:16:08PM +1100, Dave Horsfall wrote:
> [ Directing to COFF, where it likely belongs ]
>
> On Thu, 4 Feb 2021, Arthur Krewat wrote:
>
> >>-- Dave, wondering whether anyone has ever used every VAX instruction
> >
> >Or every VMS call, for that matter. ;)
>
> Urk... I stayed away from VMS as much as possible (I had a network of
> PDP-11s to play with), although I did do a device driver course; dunno why.
Me too, though I did use Eunice; it was a lonely place, as it did not let
me see who was on VMS. I was the only one. A far cry from BSD, where
wall went to everyone and talk got you a screen where you talked.
Hello all,
I'm looking to compile a list of historic "wars" in computing, just for
personal interest cause they're fun to read about (I'll also put the list
up online for others' reference).
To explain what I mean, take for example, The Tcl War
<https://vanderburg.org/old_pages/Tcl/war/>. Other examples:
- TCP/IP wars (BBN vs Berkeley, story by Kirk McKusick
<https://www.youtube.com/watch?v=DEEr6dT-4uQ>)
- The Tanenbaum-Torvalds Debate
<https://www.oreilly.com/openbook/opensources/book/appa.html> (it doesn't
have 'war(s)' in the name but it still counts IMO, could be called The
Microkernel Wars)
- UNIX Wars <https://en.wikipedia.org/wiki/Unix_wars>
Stuff like "vi vs. emacs" counts too I think, though I'm looking more for
historical significance, so maybe "spaces vs. tabs" isn't interesting
enough.
Thanks, any help is appreciated : )
Josh
On Thu, Feb 4, 2021 at 9:57 AM John Cowan <cowan(a)ccil.org> wrote:
>
> On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
>
>> The x86 stuff is about as far away from PDP-11 as you can get. Required
>> to know it, but so unpleasant.
>>
>
> Required? Ghu forbid. After doing a bunch of PDP-11 assembler work, I
> found out that the Vax had 256 opcodes and foreswore assembly thereafter.
> Still, that was nothing compared to the 1500+ opcodes of x86*. I think I
> dodged a bullet.
>
IMHO: the Vax instruction set was the assembler guys (like Cutler) trying
to delay the future and keep assembler as king of the hill. That said,
Dave Presotto, Scotty Baden, and I used to fight with Patterson in his
architecture seminar (during the writing of the RISC papers). DEC hit a
grand slam with that machine. Between the Vax and x86, plus being part of
Alpha, I have realized ISA has nothing to do with success (i.e. my
previous comments about economics vs. architecture).
Funny thing: Dave Cane was the lead HW guy on the 750, worked on the 780 HW
team, and led the Masscomp HW group. Dave used to say "Cutler got his
way" whenever we talked about the VAX instruction set. It was supposed to
be the world's greatest assembler machine. The funny part is that DEC had
already started to transition to BLISS by then in the applications teams.
But Cutler was (is) an OS weenie, and he famously hated BLISS. On the
other hand, Cutler (together with Dick Hustvedt and Peter Lipman) got the
SW out on that system (Starlet - *a.k.a.* VMS) quickly, and it worked
really well/pretty much as advertised. [Knowing all of them, I suspect
having Roger Gourd as their boss helped a good bit also.]
Clem
On Wed, Feb 3, 2021 at 8:34 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> I have to admit that I haven't looked at ARM assembler, the M1 is making
> me rethink that. Anyone have an opinion on where ARM lies in the pleasant
> to unpleasant scale?
>
Redirecting to "COFF" as this is drifting away from Unix.
I have a soft spot for ARM, but I wonder if I should. At first blush, it's
a pleasant RISC-ish design: loads and stores for dealing with memory,
arithmetic and logic instructions work on registers and/or immediate
operands, etc. As others have mentioned, there's an inline barrel shifter
in the ALU that a lot of instructions can take advantage of in their second
operand; you can rotate, shift, etc, an immediate or register operand while
executing an instruction: here's code for setting up page table entries for
an identity mapping for the low part of the physical address space (the
root page table pointer is at phys 0x40000000):
        MOV     r1, #0x0000
        MOVT    r1, #0x4000
        MOV     r0, #0
.Lpti:  MOV     r2, r0, LSL #20
        ORR     r2, r2, r3
        STR     r2, [r1], #4
        ADD     r0, r0, #1
        CMP     r0, #2048
        BNE     .Lpti
(Note the `LSL #20` in the `MOV` instruction.)
32-bit ARM also has some niceness for conditionally executing instructions
based on currently set condition codes in the PSW, so you might see
something like:
1:      CMP     r0, #0
        ADDNE   r1, r1, #1
        SUBNE   r0, r0, #1
        BNE     1b
The architecture tends to map nicely to C and similar languages (e.g.
Rust). There is a rich set of instructions for various kinds of arithmetic;
for instance, they support saturating instructions for DSP-style code. You
can push multiple registers onto the stack at once, which is a little odd
for a RISC ISA, but works ok in practice.
The supervisor instruction set is pretty nice. IO is memory-mapped, etc.
There's a co-processor interface for working with MMUs and things like it.
Memory mapping is a little weird, in that the first-level page table isn't
the same as the second-level tables: the first-level page table maps the
32-bit address space into 1MiB "sections", each of which is described by a
32-bit section descriptor; thus, to map the entire 4GiB space, you need
4096 of those in 16KiB of physically contiguous RAM. At the second level,
page tables map page frames into the 1MiB section at different
granularities; I think the smallest is 1KiB (thus, you need 1024 32-bit
entries). To map a 4KiB virtual page to a 4KiB PFN, you repeat the
relevant entry 4 times in the second-level table. It ends up being kind of
annoying. I did a little toy kernel for ARM32 and ended up deciding to use
16KiB pages (basically, I map 4x4KiB contiguous pages) so I could allocate
a single-sized structure for the page tables themselves.
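A sketch of the entry-repetition trick described above, assuming a "fine"
second-level table (1024 entries at 1KiB granularity); the field layout is
from memory and should be checked against the ARM ARM before use:

#include <stdint.h>

void map_4k_page(uint32_t *l2, uint32_t va, uint32_t pa, uint32_t flags)
{
        uint32_t idx  = (va >> 10) & 0x3ff;        /* fine-table index: va[19:10] */
        uint32_t desc = (pa & 0xfffff000) | flags; /* 4KiB-aligned frame + attrs */

        for (int i = 0; i < 4; i++)   /* four 1KiB slots cover one 4KiB page */
                l2[idx + i] = desc;
}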
Starting with the ARMv8 architecture, it's been split into 32-bit aarch32
(basically the above) and 64-bit aarch64; the latter has expanded the
number and width of general purpose registers, one is a zero register in
some contexts (and I think a stack pointer in others? I forget the
details). I haven't played around with it too much, but looked at it when
it came out and thought "this is reasonable, with some concessions for
backwards compatibility." They cleaned up the paging weirdness mentioned
above. The multiple push instruction has been retired and replaced with a
"push a pair of adjacent registers" instruction; I viewed that as a
concession between code size and instruction set orthogonality.
So...Overall quite pleasant, and far better than x86_64, but with some
oddities.
- Dan C.
I will ask Warren's indulgence here - as this probably should be continued
in COFF, which I have CC'ed - but since it was asked on TUHS I will answer.
On Wed, Feb 3, 2021 at 6:28 AM Peter Jeremy via TUHS <tuhs(a)minnie.tuhs.org>
wrote:
> I'm not sure that 16 (or any other 2^n) bits is that obvious up front.
> Does anyone know why the computer industry wound up standardising on
> 8-bit bytes?
>
Well, 'standardizing' is a little strong. Check out my QUORA answer: How
many bits are there in a byte
<https://www.quora.com/How-many-bits-are-there-in-a-byte/answer/Clem-Cole>
and What is a bit? Why are 8 bits considered as 1 byte? Why not 7 bit or 9
bit?
<https://www.quora.com/What-is-a-bit-Why-are-8-bits-considered-as-1-byte-Why…>
for the details, but the 8-bit part of the tale is here (cribbed from those
posts):
The industry followed IBM with the S/360. The story of why a byte is 8
bits for the S/360 is one of my favorites, since the number of bits in a
byte is defined for each computer architecture. Simply put, Fred Brooks
(who led the IBM System 360 project) overruled the chief hardware
designer, Gene Amdahl, and told him to make things powers of two to make
it easier on the SW writers. Amdahl famously thought it was a waste of
hardware, but Brooks had the final authority.
My friend Russ Robeleon, who was the lead HW guy on the 360/50 and later
the ASP (*a.k.a.* Project X), and who was in the room as it were, tells
his yarn this way: you need to remember that the 360 was designed to be
IBM's first *ASCII machine* (not EBCDIC, as it ended up - a different
story)[1]. Amdahl was planning for the word size to be 24 bits and the
byte size to be 7 bits for cost reasons. Fred kept throwing him out of
his office and told him not to come back “until a byte and word are powers
of two, as we just don’t know how to program it otherwise.”
Brooks would eventually relent in that the original pointer on the System
360 became 24 bits, as long as it was stored in a 32-bit “word”.[2] As a
result (and to answer your original question), a byte first widely became
8 bits with IBM's System 360.
It should be noted that it still took some time before the 8-bit byte
occurred more widely, in almost all systems, as we see it today. Many
systems, like the DEC PDP-6/10, used five 7-bit bytes packed into a 36-bit
word (with a single bit left over) for a long time. I believe that the
real widespread use of the 8-bit byte did not occur until the rise of the
minis such as the PDP-11 and the DG Nova in the late 1960s/early 1970s,
and eventually the mid-1970s' microprocessors such as the 8080/Z80/6502.
Clem
[1] While IBM did lead the effort to create ASCII, and the System 360
actually supported ASCII in hardware, because the software was so late,
IBM marketing decided not to switch from BCD and instead used EBCDIC
(their own code). Most IBM software was released using that code for the
System 360/370 over the years. It was not until IBM released their Series/1
<https://en.wikipedia.org/wiki/IBM_Series/1> minicomputer in the late 1970s
that IBM finally supported an ASCII-based system as the natural code for
the software, although it had a lot of support for EBCDIC, as they were
selling them to interface to their 'Mainframe' products.
[2] Gordon Bell would later observe that those two choices (32-bit word and
8-bit byte) were what made the IBM System 360 architecture last in the
market, as neither would have been ‘fixable’ later.
Migration to COFF, methinks
On 30/01/2021 18:20, John Cowan wrote:
> Those were just examples. The hard part is parsing schemas,
> especially if you're writing in C and don't know about yacc and lex.
> That code tends to be horribly buggy.
True, but tools such as the commercial ASN.1 -> C translators are fairly
good, and even asn1c has come a long way in the past few decades.
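For anyone who hasn't seen one, here is a toy ASN.1 module of the sort
asn1c consumes (the module and field names are invented for illustration);
the compiler turns each type into a C struct plus encode/decode routines:

ToyModule DEFINITIONS ::= BEGIN

    Message ::= SEQUENCE {
        author  INTEGER,
        body    UTF8String
    }

END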
N.
>
> But unless you need to support PER (which outright requires the
> schema) or unless you are trying to map ASN.1 compound objects to C
> structs or the equivalent, you can just process the whole thing in the
> same way you would JSON, except that it's binary and there are more
> types. Easy-peasy, especially in a dynamically typed language.
>
> Once there was a person on the xml-dev mailing list who kept repeating
> himself, insisting on the superiority of ASN.1 to XML. Finally I told
> him privately that his emails could be encoded in PER by using 0x01 to
> represent him (as the value of the author field) and allowing the
> recipients to reconstruct the message from that! He took it in good part.
>
>
>
> John Cowan http://vrici.lojban.org/~cowan
> <http://vrici.lojban.org/%7Ecowan> cowan(a)ccil.org <mailto:cowan@ccil.org>
> Don't be so humble. You're not that great.
> --Golda Meir
>
>
> On Fri, Jan 29, 2021 at 10:52 PM Richard Salz <rich.salz(a)gmail.com
> <mailto:rich.salz@gmail.com>> wrote:
>
> PER is not the reason for the hatred of ASN.1, it's more that the
> specs were created by a pay-to-play organization that fought
> against TCP/IP, the specs were not freely available for long
> years, BER was too flexible, and the DER rules were almost too
> hard to get right. Just a terse summary because this is probably
> off-topic for TUHS.
>
Born on this day in 1925, Doug Engelbart was a pioneer in human/computer
interaction and invented the mouse; it wasn't exactly ergonomic, being
just a square box with a button.
-- Dave
Howdy,
Perhaps this is off topic for this list, if so, apologies in advance.
Or perhaps this will be of interest to some who do not track yet
another mailing list out there :-).
Two days ago I received a notice that bugtraq would be terminated, and
its archive shut down on the 31st of this month. Only then did I realize
(looking at the archive also helped a bit in this) that the last post to
bugtraq happened in the last days of Feb 2020. After that, eleven months
of nothing, and a shutdown notice. It certainly was not because the list
was shunned: I have seen posters on other lists cc-ing to bt, yet their
posts never went that route (apparently), and I suppose they were not
postponed either. If they were, I would now get eleven months' worth of
them. But no.
Too bad. I liked bt, even if I had not followed every post.
Today, a notice that they would not terminate bt (after second
thoughts, as they wrote). And a fresh post from yesterday.
But what could possibly explain an almost year-long gap? Their
computers changed owners last year; maybe someone flipped the switch,
was fired, and nobody switched it on again? Or something else?
Just wondering.
--
Regards,
Tomasz Rola
--
** A C programmer asked whether computer had Buddha's nature. **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened... **
** **
** Tomasz Rola mailto:tomasz_rola@bigfoot.com **
[moved to COFF]
On Monday, 18 January 2021 at 15:47:48 -0500, Steve Nickolas wrote:
> On Mon, 18 Jan 2021, John Cowan wrote:
>
>> (When I met my future wife I was 21, and she wanted me to grow a beard, so
>> I did. Since then I have occasionally asked cow orkers who have complained
>> about shaving why *they* don't grow beards: the most common answer is "My
>> wife doesn't want me to." *My* wife doesn't believe this story.)
>
> I actually had to shave for a while specifically because of my
> then-girlfriend, so... ;p I can see that.
Early on I made a decision that no woman could make me shave my
beard, and I stuck to it. Not that the beard was more important, but
if she wanted it gone, she was looking at the wrong things.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA