On 1/17/20, Rob Pike <robpike(a)gmail.com> wrote:
> I am convinced that large-scale modern compute centers would be run very
> differently, with fewer or at least lesser problems, if they were treated
> as a single system rather than as a bunch of single-user computers ssh'ed
> together.
>
> But history had other ideas.
>
[moving to COFF since this isn't specific to historic Unix]
For applications (or groups of related applications) that are already
distributed across multiple machines I'd say "cluster as a single
system" definitely makes sense, but I still stand by what I said
earlier about it not being relevant for things like workstations, or
for server applications that are run on a single machine. I think
clustering should be an optional subsystem, rather than something that
is deeply integrated into the core of an OS. With an OS that has
enough extensibility, it should be possible to have an optional
clustering subsystem without making it feel like an afterthought.
That is what I am planning to do in UX/RT, the OS that I am writing.
The "core supervisor" (seL4 microkernel + process server + low-level
system library) will lack any built-in network support and will just
have support for local file servers using microkernel IPC. The network
stack, 9P client filesystem, 9P server, and clustering subsystem will
all be separate regular processes. The 9P server will use regular
read()/write()-like APIs rather than any special hooks (there will be
read()/write()-like APIs that expose the message registers and shared
buffer to make this more efficient), and similarly the 9P client
filesystem will use the normal API for local filesystem servers (which
will also use read()/write() to send messages). The clustering
subsystem will work by intercepting process API calls and forwarding
them to either the local process server or to a remote instance as
appropriate. Since UX/RT will go even further than Plan 9 with its
file-oriented architecture and make process APIs file-oriented, this
will be transparent to applications. Basically, the way that the
file-oriented process API will work is that every process will have a
special "process server connection" file descriptor that carries all
process server API calls over a minimalist form of RPC, and it will be
possible to redirect this to an intermediary at process startup (of
course, this redirection will be inherited by child processes
automatically).
Originally, I meant to reply to the Linux-origins thread by pointing to
AST's take on the matter but I failed to find it. So, instead, here is
something to warm the cockles of troff users:
From https://www.cs.vu.nl/~ast/home/faq.html
Q: What typesetting system do you use?
A: All my typesetting is done using troff. I don't have any need to see
what the output will look like. I am quite convinced that troff will
follow my instructions dutifully. If I give it the macro to insert a
second-level heading, it will do that in the correct font and size, with
the correct spacing, adding extra space to align facing pages down to
the pixel if need be. Why should I worry about that? WYSIWYG is a step
backwards. Human labor is used to do that which the computer can do
better. Also, using troff means that the text is in ASCII, and I have a
bunch of shell scripts that operate on files (whole chapters) to do
things like produce a histogram by year of all the references. That
would be much harder and slower if the text were kept in some
manufacturer's proprietary format.
Q: What's wrong with LaTeX?
A: Nothing, but real authors use troff.
N.
From the museum pages, via the KG-84 picture, to the wiki. Reading a bit
on crypto devices, I stumbled over the M-209 and
"US researcher Dennis Ritchie has described a 1970s collaboration with
James Reeds and Robert Morris on a ciphertext-only attack on the M-209
that could solve messages of at least 2000–2500 letters.[3] Ritchie
relates that, after discussions with the NSA, the authors decided not
to publish it, as they were told the principle was applicable to
machines then still in use by foreign governments.[3]"
https://en.wikipedia.org/wiki/M-209
The paper
https://cryptome.org/2015/12/ReedsTheHagelinCipherBellLabs1978.pdf
ends with
"The program takes about two minutes to produce a solution on a DEC PDP-11/70."
No info on the program coding.
More info on the story from Ritchie himself
https://www.bell-labs.com/usr/dmr/www/crypt.html
A post in a Facebook IBM retirees group about an IBM PC museum (in
Germany?) made me follow up with a reference to Glenn's museum. Glenn
showed me around the museum last time I saw him at Centaur, but I did
not know until today about the Web presence at http://www.glennsmuseum.com/.
Rise of the Centaur (https://www.imdb.com/title/tt5690958/) includes
Glenn discussing some of the museum.
Charlie
--
voice: +1.512.784.7526 e-mail: sauer(a)technologists.com
fax: +1.512.346.5240 Web: https://technologists.com/sauer/
Facebook/Google/Skype/Twitter: CharlesHSauer
-TUHS, +COFF, in line with Warren's wishes.
On Sun, Jan 12, 2020 at 7:36 PM Bakul Shah <bakul(a)bitblocks.com> wrote:
> There is similar code in FreeBSD kernel. Embedding head and next ptrs
> reduces
> memory allocation and improves cache locality somewhat. Since C doesn't
> have
> generics, they try to gain the same functionality with macros. See
>
> https://github.com/freebsd/freebsd/blob/master/sys/sys/queue.h
>
> Not that this is the same as what Linux does (which I haven't dug into) but
> I suspect they may have had similar motivation.
>
I was actually going to say, "blame Berkeley." As I understand it, this
code originated in BSD, and the Linux implementation is at least inspired
by the BSD code. There was code for singly and doubly linked lists, queues,
FIFOs, etc.
I can actually understand the motivation: lists, etc, are all over the
place in a number of kernels. The code to remove an element from a list is
trivial, but also tedious and repetitive: if it can be wrapped up into a
macro, why not do so? It's one less thing to mess up.
I agree it's gone off the rails, however.
- Dan C.
All, can we move this not-really-Unix discussion to COFF?
Thanks, Warren
P.S. A bit more self-regulation too, please. You shouldn't need me to point
out when the topic has drifted so far :-)
Noel Chiappa writes:
> The code is still extant, in 'SYSTEM; TV >'. It only worked (I think)
> from Knight TV keyboards
(This isn't TUHS material, but I can't resist. CC to COFF.)
There is also a Chaosnet service to call for the elevator or open the
door, so it can be used remotely. The ITS program ESCE uses this
service. I suppose there must have been something for the Lisp machines
too, maybe even a Super-Hyper-Cokebottle keyboard command.
Steffen - I think your reply just proved my point. I am asking people
what they really value/what they really need. Clearly, you and I value
different things. And that is fine. You can disagree with my choices, but
accept it is what *I value*, and I understand *you value something
different*.
FWIW: I will not identify who sent it because he sent it to me
privately, but someone else commented to me on my post that I had nailed
it. He observed "I see this sort of not respecting other people's choices
as a general trend these days. We want tolerance for our *views* but at the
same time, we are becoming more intolerant of *other* people's views!"
For a smile, this was yesterday's comic and expresses the issue well:
https://www.comicskingdom.com/brilliant-mind-of-edison-lee/2020-01-08
Answering, but CCing COFF if folks want to continue. This is less about
UNIX and more about how we all got to where we are.
On Wed, Jan 8, 2020 at 11:24 PM Jon Steinhart <jon(a)fourwinds.com> wrote:
> Clem, this seems like an unusual position for you to take. vim is
> backwards
> compatible with vi (and also ed), so it added to an existing ecosystem.
>
No, it's not really unusual when you think about it. vim is backward compatible
except when it's not (as Bakul points out) - which is my complaint. It's
*almost* compatible and those small differences are really annoying when
you expect one thing and get something else (*i.e.* the least astonishment
principle).
The key point here is that for *some people*, those few differences are not an
issue and they are not astonished by them. But for *some of the rest of us*
(probably people like me that have used the program since PDP-11 days) that
only really care about the original parts, the new stuff is of little value
and so the small differences are astonishing. Which comes back to the
question of good and best. It all depends on what you value/where you
put the high-order bit. I'm not willing to "pay" for it, as it gives
me little value.
Doug started this thread with his observation that ex/vi was huge compared
to other editors. *i.e.* value: small, simple, easy to understand (Rob's old
"*cat -v considered harmful*" argument if you will). The BSD argument had
always been: "the new stuff is handy." The emacs crew tends to take a
similar stand. I probably don't go quite as far as Rob, but I certainly
lean in that direction. I generally would rather have something small and new
that solves a different (set of) problem(s), than adding yet another wart
on to an older program, *particularly when you change the base
functionality* - which is my vi *vs.* vim complaint. [i.e. 'partial
credit' does not cut it].
To me, another good example is 'more', 'less' and 'pg'. Eric Schienbrood
wrote the original more(ucb) to try to duplicate the ITS functionality (he
wrote it for the PDP-11/70 in Cory Hall BTW - Ernie did not exist and
4.1BSD was a few years in the future - so small and simple was of huge
value). It went out in the BSD tapes, people loved it and were happy. It
solved a problem as we had it. Life was good. Frankly, other than NIH,
I'm not sure why the folks at AT&T decided to create pg a few years later
since more was already in the wild, but at least it was a different program
(Mary Ann's story of vi *vs*. se is probably in the same vein). But
because of that behavior, if someone like me came to an AT&T-based system
with only pg installed, those of us that liked/were used to more(ucb)
could install it and life was good. Note pg was/is different in
functionality; it's similar, but not finger-compatible.
But other folks seem to have thought neither was 'good enough' -- thus
later less(gnu) was created adding a ton of new functionality to Eric's
program. The facts are clear: some (nay, many) people >>love<< that new
functionality, like going backward. I >>personally<< rarely care for/need
it; Eric's program was (is) good enough for me. Like Doug's observation
of ed *vs.* ex/vi; less is huge compared to the original more (or pg for
that matter). But if you value the new features, I suspect you might
think that's not an issue. Thanks to Moore's law, the size in this case
probably does not matter too much (other than introducing new bugs). At
least, when folks wrote GNU's less, the basic more(ucb) behavior was
left alone, and if you set PAGER=more, less(gnu) pretty much works as I
expect it to. So I now don't bring Eric's program with me, the same way
Bakul describes installing nvi on new systems (an activity I also do).
Back to vi *vs.* nvi *vs.* vim *et al.* Frankly, in my own case, I do
>>occasionally<< use split screens, but frankly, I can get most of the same
from having a window manager, different iTerm2 windows and cut/paste. So
even that extension to nvi is of limited value to me. vim just keeps
adding more and more cruft and it's even bigger. I personally don't care
for the new functionality, and the size of it all is worrisome. What am I
buying? That said, if the new features do not hurt me, then I don't really
care. I might even use some of the new functionality - hey I run mac OS
not v7 or BSD 4.x for my day to day work and I do use the mac window
manager, the browser *et al*, but as I type this message I have 6 other
iterm2 windows open with work I am doing in other areas.
Let me take a look at this issue in a different way. I have long been a
'car guy' and, like many, in my youth I spent time and money
playing/racing, etc. I've always thought electric was a great idea, but there
has been nothing for me. Note: As many of you know, my work in computers has
been in HPC, and I've been lucky to spend a lot of time with my customers,
in the auto and aerospace industry (*i.e.* the current Audi A6 was designed
on one of my supercomputer systems). The key point is I have tended to
follow technology in their area and tend to be "in tune" with a lot of
developments. The result is, except for my wife's minivan (that she preferred
in the years when our kids were small), I've always been a
die-hard German-engineered/performance car person. But when Elon announced
the Model 3 (like 1/2 the techie world), I put down a deposit and waited.
Well, while I was waiting, my techie daughter (who also loves cars) got a
chance to drive one. She predicted I would hate it!!! So when my ticket
finally came up, I went to drive them. She was right!!! With the Model 3,
you get a cool car, but it's about the size of a Corolla. Coming from
German cars for the last 35 years, the concept of spending $60K US in
practice for a Corolla just did not do it for me. I ended up ordering the
current Unixmobile, my beloved Tesla Model S/P100D.
The truth is, I paid a lot of money for it but I *value *what I got for my
money. A number of people don't think it's worth it. I get that, but I'm
still happy with what I have. Will there someday be a $20K electric car
like my Model S? While I think electric cars will get there (I point to
the same price curve on technology such as microwave ovens from the 1970s to
today), I actually doubt that there will be a $20K electric vehicle
like my Model S.
The reason is that this car has to be expensive for
technology-based reasons, so Tesla had to add a lot of 'luxury' features
like other cars in the class, other sports cars, Mercedes, *et al*. As
they removed them (*i.e.* the Model 3) you still get a cool car, but it's
not at all the same as the Model S. So the point is, if I wanted an
electric car, I had to choose between a performance/luxury *vs*.
size/functionality. I realized I valued the former (and still do), but I
understand not everyone does or will.
Coming back to our topic, I really don't think this is a 'get off my lawn'
issue as much as asking someone what they really value/what they really
need. If you place a high value on something, you will argue that it's
best; if it has little value, you will not.
below... -- warning veering a little from pure UNIX history, but trying to
clarify what I can and then moving to COFF for follow up.
On Wed, Jan 8, 2020 at 12:23 AM Brian Walden <tuhs(a)cuzuco.com> wrote:
> ....
>
> - CMU's ALGOL68S from 1978 list all these ways --
> co comment
> comment comment
> pr pragmat
> pragmat pragmat
> # (comment symbol) comment
> :: (pragmat symbol) pragmat
> (its for UNIX v6 or v7 so not surprising # is a comment)
> http://www.softwarepreservation.org/projects/ALGOL/manual/a68s.txt/view
Be careful of overthinking here. The comment in that note says it was
for *PDP-11s* and lists V6 and V7 as *possible targets*, but it did not
say it was one. Also, the Speech and Vision PDP-11/40e-based systems ran a
very hacked V6 (with a special C compiler that supported CMU's csv/cret
instructions in the microcode), which would have been the target systems.
[1]
To my knowledge/memory, the CMU Algol68 compiler never ran anywhere but
Hydra (and also used custom microcode). IIRC there was some talk of moving
it to *OS (StarOS for Cm*). I've sent a note to dvk to see if he remembers
otherwise. I also asked Liebensperger what he remembers; he was hacking on
*OS in those days. Again, IIRC Prof. Peter Hibbard was the mastermind
behind the CMU Algol68 system. He was a Brit from Cambridge (and taught
the parallel computing course which I took from him at the time).
FWIW: I also don't think the CMU Algol68 compiler was ever completely
self-hosting, and like BLISS, required the PDP-10 to support it. As to why
it was not moved to the Vax, I was leaving/had left by that time, but I
suspect the students involved graduated and by then the Perq's had become
the hot machine for language types, and Ada would start being what the
government would give research $s to.
>
>
> ...
>
> But look! The very first line of that file! It is a single # sitting all
> by itself. Why? you ask. Well this is a hold over from when the C
> preprocessor was new. C originally did not have one; it was added later.
> PL/I had a %INCLUDE so Ritchie eventually made a #include -- but pre-7th
> Edition the C preprocessor would not be invoked unless the very first
> character of the C source file was a #
>
That was true of V7 and Typesetter C too. It was a separate program
(/lib/cpp) that the cc command called if needed.
> Since V7 the preprocessor has always run on it. The first C preprocessor was
> Ritchie's work with no nested includes and no macros. v7's was by John
> Reiser which added those parts.
>
Right, this is what I was referring to last night in reference to Sean's
comments. As I said, the /bin/cc command was a shell script and it peeked
at the first character to see if it was #. I still find myself starting
all C programs with a # on a line by itself ;-)
Note that the Ritchie cpp was influenced by Brian's Ratfor work, so using #
is not surprising.
This leads to a question/thought for this group, although I think it needs
to move to COFF (which I have CC'ed for follow up).
I have often contended that one of the reasons why C, Fortran, and PL/1
were so popular as commercial production languages was that they could
be preprocessed. For a commercial place where lots of different targets are
possible, that was hugely important. Pascal, for instance, has semantics
that makes writing a preprocessor like cpp or Ratfor difficult (which was
one of the things Brian talks about in his "*Why Pascal is not my favorite
Programming Language <http://www.lysator.liu.se/c/bwk-on-pascal.html>*"
paper). [2]
So, if you went to commercial ISVs and looked at what they wrote in, it
was usually some sort of preprocessed language. Some used Ratfor, like a
number of commercial HPC apps vendors; Tektronix wrote PLOT10 in MORTRAN.
I believe it was Morgan Stanley that had a front-end for PL/1, whose name
I can not recall. But you get the point ... if you had to target different
runtime environments, it was best for your base code to not be specific.
However ... as C became the system programming language, the preprocessor
was important. In fact, it even gave birth to other tools like autoconf
to help control it. Simply, the idiom:
#ifdef SYSTEMX
#define SOME_VAR (1)
... do something specific
#endif /* SYSTEMX */
While loathsome to read, it actually worked well in practice.
The fact is I hate the preprocessor in many ways but love it
for the freedom it actually gave us to move code. Having programmed since
the 1960s, I remember how hard it was to move things, even if the language
was the same.
Today, modern languages try to forego the preprocessor. C++'s solution is
to throw the kitchen sink into the language and have 'frameworks', none of
which work together. Java and its family try to control it with the
JVM. Go is a little too new to see if it's going to work (I don't see a lot
of production ISV code in it yet).
Note: A difference between then and now is that 1) we have fewer target
architectures, 2) we have fewer target operating environments, and 3) ISVs
don't like multiple different versions of their SW; they much prefer very
few, for maintenance reasons, so they like #1 and #2 [i.e. Cole's law of
economics in operation here].
So ... my question, particularly for those like Doug who have programmed
as long as or longer than I, what do you think? You lived the same
time I did and know the difficulties we faced. Is the loss of a
preprocessor good or bad?
Clem
[1] Historical footnote about CMU. I was the person that brought V7 into
CMU and I never updated the Speech or Vision systems and I don't think
anyone did after I left. We ran a CMU V7 variant mostly on the 11/34s (and
later on a couple of 11/44s I believe) that had started to pop up.
Although later if it was a DEC system, CS was moving to Vaxen when they
could get the $s (but the Alto's and Perq's had become popular with the CMU
SPICE proposal). Departments like bio-engineering, mech ee, ran the
cheaper systems on-site and then networked over the Computer Center's Vaxen
and PDP-20's when they needed address space.
[2] Note: Knuth wrote "WEB" to handle a number of the issues Kernighan
talks about - but he had to use an extended Pascal superset, and his program
was notable for not being portable (he wrote it for the PDP-10
Pascal). [BTW: Ward Cunningham, TW Cook and I once counted over 8
different 'Tek Pascal' variants and 14 different 'HP Basics'].