We lost Robert Taylor, computer scientist and Internet pioneer, on this
day in 2017. Amongst other things, he helped invent the mouse, pioneered
computer communications leading up to ARPAnet, developed the computer
science lab at Xerox...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Today I reached a minor milestone with my 'Venix restoration project' that
I talked about months ago. I ran a Venix 86 binary (sync) to successful
completion on my FreeBSD/amd64 box (though none of the code should be too
FreeBSD specific).
So, I hacked https://github.com/tkchia/reenigne.git to remove the DOS
loader and emulator and to add a Venix system call loader and emulator, or
at least the start of one. I've also added FP instruction parsing, but it's
100% wrong (it just parses the instructions and does nothing else to decode
or implement them). With this, I'm able to load OMAGIC binaries from the
extant Venix 86 distributions and run them. The only one that runs
successfully is sync() since I've not yet implemented argument passing or
any of the other 58 system calls :). NMAGIC should be pretty quick after
this.
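For illustration only, here is a minimal sketch (not the code in the repository) of what loading such an OMAGIC image involves; it assumes the classic 16-bit a.out layout, with field names as in the v7 <a.out.h>, rather than Venix's actual headers:

/*
 * Hypothetical sketch of an OMAGIC (0407) loader: the header fields
 * follow the classic 16-bit a.out layout; text and data are read
 * contiguously into a flat memory image and bss is zeroed after them.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct exec {
    uint16_t a_magic;   /* 0407 = OMAGIC */
    uint16_t a_text;    /* size of text segment */
    uint16_t a_data;    /* size of initialised data */
    uint16_t a_bss;     /* size of uninitialised data */
    uint16_t a_syms;    /* size of symbol table */
    uint16_t a_entry;   /* entry point */
    uint16_t a_unused;
    uint16_t a_flag;    /* relocation info stripped */
};

int load_omagic(const char *path, uint8_t *mem, size_t memsz, uint16_t *entry)
{
    struct exec hdr;
    FILE *f = fopen(path, "rb");

    if (f == NULL || fread(&hdr, sizeof hdr, 1, f) != 1)
        goto fail;
    if (hdr.a_magic != 0407)                    /* OMAGIC only, for now */
        goto fail;

    size_t load = (size_t)hdr.a_text + hdr.a_data;
    if (load + hdr.a_bss > memsz || fread(mem, 1, load, f) != load)
        goto fail;
    memset(mem + load, 0, hdr.a_bss);           /* clear bss */

    *entry = hdr.a_entry;
    fclose(f);
    return 0;
fail:
    if (f)
        fclose(f);
    return -1;
}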
This is but a step on the road to getting the Venix compiler running so I
can see how much of the system I can recreate from the v7 and other sources
that are on TUHS.
Not sure who, if anybody, cares about this stuff. I thought people here
might be interested. I've pushed the results to
https://github.com/bsdimp/venix if you care. This program is in the
tools/86sim directory. There's also a doc directory where I document the
Venix 86 ABI, as well as doing a very deep-dive into a disassembled
/bin/sync to discover what I can from it (turns out, it's quite a lot).
So, I thought I'd share this here. Don't know if anybody else is
interested, but you never know until you tell people about stuff...
Warner
I was sure that I'd read a paper on the legal history of Unix. So I did a
Google search for it, and found a link to the PDF. The linked PDF was on
the TUHS website :-)
http://wiki.tuhs.org/lib/exe/fetch.php?media=publications:theses:gmp_thesis…
I'd better do a backup of my brain, as I've got a few flakey DRAM chips.
Cheers, Warren
> From: Clem Cole
> first of Jan 83 was the day the Arpanet was supposed to be turned off
Err, NCP, not the ARPANet. The latter kept running for quite some time,
serving as the Internet's wide-area backbone, and was only slowly turned off
(IMP by IMP) in the late 80's, with the very last remnants finally being
turned off in 1990.
> The truth is, it did not happen, there were a few exceptions granted for
> some sites that were not quite ready (I've forgotten now).
A few, yes, but NCP was indeed turned off for most hosts on January 1, 1983.
> From: "Erik E. Fair"
> as of the advent of TCP/IP, all those Ethernet and Chaosnet connected
> workstations became first class hosts on the Internet, which they
> could not be before.
Huh? As I just pointed out, TCP/IP (and the Internet) was a going concern well
_before_ January 1, 1983 - and one can confidently say that even had NCP _not_
been turned off, history would have proceeded much as it actually did, since
all the machines not on the ARPANET would have wanted to be connected to the
Internet.
(Also, to be technical, I'm not sure if TCP/IP ever really ran on CHAOSNet
hardware - I know I did a spec for it, and the C Gateway implemented it, and
there was a Unix machine at EECS that tried to use it, but it was not a great
success. Workstations connected to the CHAOSNet as of that date - AFAIK, just
LISP Machines - could only get access to the Internet via service gateways,
since at that point they all only implemented the CHAOS protocols; Symbolics
did TCP/IP somewhat later, IIRC, although I don't know the exact date.)
Noel
> I rewrote the article on the Software Tools project
An excellent job, Deborah.
> the Software Tools movement established one of the earliest traditions of open source
Would you be open to saying "reestablished"? Open source (not so called,
and in no way portable) was very much a tradition of SHARE in the late
1950s. Portability, as exemplified in ACM's collected algorithms, came
in at the same time that industry moved to a model of trade secrets and
intellectual property. Open source went into eclipse.
Doug
I rewrote the article on the Software Tools project and, thanks to Bruce
Borden's efforts to upload, they accepted it within 1 day. You can see
it here: https://en.wikipedia.org/wiki/Software_tools_users_group
The Usenix article in Wiki is pretty thin, in case anyone would like to
spiffy it up.
Deborah
> From: Steve Nickolas
> I thought the epoch of the Internet was January 1, 1983.
Turning off NCP was a significant step, but not that big a deal in terms of
its actual effects, really.
For those of us already on the Internet before that date, it didn't produce
any significant change: the universe of machines we could talk to didn't
change (since we could only talk to ARPANet-connected machines with TCP), etc.
(And there were quite a few of us: since the number of ARPANet ports was
severely limited, an Internet connection was very valuable to the many
non-ARPANet-connected machines - which were almost all time-sharing systems
at that point, so lots of actual users.)
And for ARPANET-connected machines, there too, things didn't change much - the
services available (remote login, email, etc) remained the same - it was just
carried over TCP, not NCP.
I guess in some sense it marked 'coming of age' for TCP/IP, but I'd analogize
it to that, rather than a 'birth' date.
Noel
> From: Clem Cole
> Katie Hafner's: Where Wizards Stay Up Late: The Origins Of The Internet
> ...
> It's a great read
Yes, she did a great deal of careful research, and it's quite accurate.
It _is_ pointed toward a general readership, not a technical one, so it's not
necessarily the best _technical_ history (which she had the material at hand
to produce, had she wanted to - but did not). Still, very worthwhile.
Noel
A nerdy group on an Aussie list are discussing old Unix cracks, and the
infamous "SPL 0" brick-that-box came up. I first saw it in ";login:" (I
think), and, err, tried it (as did others)...
Can anyone reproduce the code? It went something like:
> [ SPL 0 ]
>
> I only did that once (and you should've heard what he said to me)...
> I'm still trying to find the source for it (it was published in a
> ";login:" journal) to see if SIMH is vulnerable.
The concept was simple enough - fill your entire memory space with an uninterruptible instruction. It would have gone something like:
opc = 000230 ; 000230 is the opcode for SPL 0
sys brk, -1 ; or whatever value got you all 64k of address space
mov #place, sp
jmp place
. = opc - 2 ; the -2 is to allow for the PC increment on an instruction fetch, which I believe happens before any execution
place:
jsr pc, -(pc)
Ring any bells, anyone?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Dave Horsfall
> The Internet ... was born on this day in 1969, when RFC-1 got published
I have this vague memory that the Internet-History list decided that the
appropriate day was actually the day the format of the v4 headers was set,
i.e. 16 June, 1978. (See IEN-68, pg. 12, top.)
Picking the date of RFC-1 seems a little odd. Why not the day the first packet
was sent over a deployed IMP, or the day the RFP was sent out, or the contract
let? And the ARPANet was just one predecessor; one might equally have picked a
CYCLADES date...
> (spelled with a capital "I", please, as it is a proper noun) ... As I
> said at a club lecture once, there are many internets, but only one
> Internet.
I myself prefer the formulation 'there are many white houses, but only one
White House'! :-)
Noel
J. Presper Eckert was born on this day in 1919; along with John Mauchly,
he was a co-designer of ENIAC, one of the world's first programmable
electronic computers. Yes, there is a long-running dispute over whether
ENIAC or Colossus was first; being a Pommie, I'm siding with Colossus :-)
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Steve Johnson:
But in this case, part of the requirement was to pass some standard
simulation tests (in FORTRAN, of course). He was complaining that
these programs had bugs and didn't give the right answer.
====
This reminds me of an episode during my time at Bell Labs.
The System V folks wanted to make pipes that were streams;
our experience in Research showed that that was useful. We'd
done it just by making pipe(2) create a stream. This caused
some subtle differences in semantics (pipes became full-duplex;
writing to a pipe put a delimiter in the stream, so that a
corresponding read on the other end would stop at the delimiter;
write(pipefd, "", 0) therefore generated something that would
make read(pipeotherfd, buf, len) return 0). We'd been running
all our systems that way for a while, and had uncovered no
serious problems.
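(A quick way to see which behaviour a given system provides - my sketch, not anything from the original exchange - is to try the "backwards" direction and see whether it works:)

/*
 * Probe whether pipe(2) gives a full-duplex channel, as Research Unix
 * stream pipes and SVR4 did. On SVR4/Solaris the "backwards" write
 * succeeds; on Linux and most BSDs pipes are one-way and it fails.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char c;

    if (pipe(fd) < 0) {
        perror("pipe");
        return 1;
    }

    /* Conventional direction: fd[0] is the read end, fd[1] the write end. */
    write(fd[1], "x", 1);
    read(fd[0], &c, 1);

    /* The other direction only works if the pipe is full duplex. */
    if (write(fd[0], "y", 1) == 1 && read(fd[1], &c, 1) == 1)
        printf("full-duplex pipe\n");
    else
        printf("one-way pipe\n");
    return 0;
}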
But the System V folks were very nervous about it anyway, and
wrote a planning document in which they proposed to create a
new, different system call to make stream pipes. pipe(2) would
make an old-fashioned pipe; spipe(2) (or whatever it was called,
I forget the name) had to be called to get a stream. The document
didn't really explain the justification for this. To us in
Research it just sounded crazy.
Someone else was going to attend a meeting with the developers,
but at the last minute he had a conflict, so he drafted me to
go. Although I can be pretty blunt in writing, I try not to be
so much so in person when dealing with people I don't know; so
rather than asking how they could be so crazy as to add a new
kind of pipe, I asked why they really thought it necessary.
It took a little probing, but the answer turned out to be that
their management insisted that everything pass an official
verification suite to prove compliance with the System V,
Consider It Standard; and said verification suite didn't just
check that the first file descriptor returned by pipe(2) could
be read and the second written, it insisted that the first could
not be written and the second not read. Full-duplex pipes didn't
meet the standard, it was claimed.
I asked what exactly is the standard? The SVID, I was told.
What does the SVID really say, I wondered? We got a copy and
looked up pipe(2). According to the official standard, the
first file descriptor must be readable and the second writeable,
but there was no statement that it couldn't work the other way too.
Full-duplex pipes did in fact meet the standard; it was the
verification suite that, in an excess of zeal, didn't conform.
The developers were absolutely delighted with this. They too
thought it was stupid to have two different kinds of pipes,
particularly given our experience that full-duplex delimited
pipes didn't break anything. They were very happy to have
Research not just yell at them for doing things differently
from us, but help them figure out how to justify doing things
right.
I don't know just how they took this further with management,
but as it came out in SVr4, pipe(2) returned a full-duplex
stream. This is still true even unto Solaris 10, where I just
tested it.
I made friends that day. That developer group kept in touch
with me as they did further work on pipes, the terminal driver,
pseudo-ttys, and other things. I didn't agree with everything
they did, but we were able to discuss it all cordially.
Sometimes the verification program just needs to be fixed.
And sometimes the developers that seem set on doing the wrong
thing really want help in subverting whatever is forcing that
on them, because they really do know what the right thing is.
Norman Wilson
Toronto ON
Just had a look at RFC-1, my first look ever. First thing I noticed is
the enormous amount of abbreviations one is assumed to be able to
instantly place :-)
So looking up IMP for instance the wiki page gives me this funny titbit
"When Massachusetts Senator Edward Kennedy learned of BBN's
accomplishment in signing this million-dollar agreement, he sent a
telegram congratulating the company for being contracted to build the
"Interfaith Message Processor"."
https://en.wikipedia.org/wiki/Interface_Message_Processor
> Shortly after I arrived, the comp center announced a
> brand-new feature -- permanent disc storage! (actually, I think it
> was a drum...)
Irrelevant to the story (or Unix), but it was indeed a disc drive--much
more storage per unit volume than drums, which date to the 1940s, if
not before. Exact opposite of current technology: super heavy and
rigid combs banged in and out of the disk stack. The washing-machine
sized machine could be driven to walk across the floor. It would not
be nice to be caught in its path. (Fortunately ordinary work loads
did not have such an effect.) Vic Vyssotsky calculated that with only
10 times its 10MB capacity, we could have kept the entire printed
output since the advent of computers at the Labs on line.
Doug
The Internet (spelled with a capital "I", please, as it is a proper noun)
was born on this day in 1969, when RFC-1 got published; it described the
IMP and ARPAnet.
As I said at a club lecture once, there are many internets, but only one
Internet.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> Date: Fri, 30 Mar 2018 00:28:13 -0400
> From: Clem cole <clemc(a)ccc.com>
>
> Also, joy / BSD 4.2 was heavily influenced by Accent (and RIG )and the Mach memory system would eventually go back into BSD (4.3 IIRC) - which we have talked about before wrt to sockets and Accent/Mach’s port concept.
From an "outsider looking in" perspective I’m not sure I recognise such heavy influence in the sockets code base. Of course, suitability for distributed systems was an important part of the 4.2BSD design brief and Rick Rashid was one of the steering committee members; that is all agreed.
However, in the code evolution for sockets I only see two influences that do not seem to be direct continuations from earlier ARPANET Unices and that have possible Accent origins:
- Addition of sendto()/recvfrom() to the API. Earlier code had poor support for UDP and was forced through the TCP-focused APIs, with fixed endpoint addresses. It could be Accent inspired; it could also be a natural solution for sending datagrams. For example, Jack Haverty had hacked a “raw datagram” facility into UoI Arpanet Unix around ’79 (it’s in the Unix Tree).
- Addition of a facility to pass file descriptors using sendmsg()/recvmsg() in the local domain (a sketch of that mechanism follows below). This facility was only added at the very last moment (it was not in 4.1c, only in 4.2). I’m told that the CSRG team procrastinated on this because they did not see much use for it — and indeed it was mostly ignored in the field for the next two decades. Joy had left by that time, so perhaps the dynamics had changed.
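For reference, a minimal sketch of that descriptor-passing facility in its modern form (using the POSIX msghdr/cmsg interface with SCM_RIGHTS; the 4.2BSD original used the msg_accrights field instead) might look like this:

/*
 * Send an open file descriptor over an AF_UNIX socket. One dummy data
 * byte is carried along with a control message holding the descriptor.
 */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int sock, int fd)
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg = { 0 };

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;        /* pass access rights (an fd) */
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}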
Earlier I thought that the select() call may have come from Accent, but it was inspired by the Ada select statement. Conceptually, it was preceded on Unix by Haverty’s await() call (also ’79).
For clarity: I wasn’t there, just commenting on what I see in the code.
Paul
The recent discussion of long-lived applications, and backwards
compatibility in Unix, got me thinking about the history of shared
objects. My own experience with Linux and MacOS is that
statically-linked applications tend to continue working from release
to release, but shared objects provided by the OS tend not to be
backwards compatible, and one often has to carry around with your
application the exact C runtime and other shared objects your program
was linked against. This is in big contrast to shared libraries on
VMS, where great care is taken to maintain strict backward
compatibility release to release.
What is the history of shared objects on Unix? When did they first
appear, and with what object/executable file format? The a.out ZMAGIC
format doesn't seem to support them. I don't recall if COFF does.
Mach-O, at least the MacOS dialect of it, supports dynamic libraries.
ELF supports them.
Also, when was symbol preemption invented? Traditional shared library
designs such as in IBM System/370, VMS, and Windows NT don't have
it. As one who worked on optimizations in compilers, I came to hate
symbol preemption because it prohibits many useful optimizations. ELF
does provide a way to turn it off, but it's on by default--you have to
explicitly declare symbols as protected or hidden via source language
pragmas to get rid of it.
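As an illustration of what turning preemption off looks like in practice (my example, using the GCC/Clang visibility attribute rather than any particular vendor's pragma):

/*
 * Illustrative only. In ELF, a symbol with default visibility can be
 * preempted (interposed on) by a definition elsewhere, so calls to it
 * from within its own shared library must stay indirect. Declaring it
 * "protected" (or "hidden") turns preemption off and lets the compiler
 * bind - and even inline - the call locally.
 * Build with: cc -O2 -fPIC -shared example.c -o libexample.so
 */

/* Preemptible: another DSO or the main program may override this. */
int scale(int x) { return 2 * x; }

/* Not preemptible: internal references always bind to this definition.
 * The same effect can come from -fvisibility=protected or
 * #pragma GCC visibility push(protected). */
__attribute__((visibility("protected")))
int scale_local(int x) { return 2 * x; }

int sum_scaled(int a, int b)
{
    return scale(a) + scale_local(b);  /* first call via PLT, second direct */
}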
-Paul W.
Time for another hand-grenade in the duck pond :-) Or as we call it
down-under, "stirring the possum".
On this day in 2010, it was found unanimously that Novell, not SCO, owned
"Unix". SCO appealed later, and it was dismissed "with prejudice"; SCO
shares plummeted as a result.
As an aside, this was the first and only time that I was on IBM's side,
and I still wonder whether M$ was bankrolling SCO in an effort to wipe
Linux off the map; what sort of an idiot would take on IBM?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
On this day in 1778 businessman Oliver Pollock created the "$" sign, and
now we see it everywhere: shell prompts and variables, macro strings, Perl
variables, system references (SYS$BLAH, SWAP$SYS, etc), etc; where would
we be without it?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
[TUHS] long lived programs (was Re: RIP John Backus)
> Every year someone takes some young hotshot and points them at some
"impossible" thing and one of them makes it work. I don't see that
changing.
Case in point.
We hired Tom Killian, a young high-energy physicist disenchanted
with contributing to hundred-author papers. He'd done plenty of
instrument programming, but no operating systems. So, high-energy
as he was, he cooked up an exercise to get his feet wet.
The result: /proc
Doug
On 3/17/2018 12:22 PM, Arthur Krewat <krewat(a)kilonet.net> wrote:
> Leave it to IBM to do something backwards.
>
> Of course, that was in 1954, so I can't complain, it was 11 years before
> I was born. But that's ... odd.
>
> Was subtraction easier than addition with digital electronics back then?
> I would think that they were both the same level of effort (clock
> cycles) so why do something obviously backwards logically?
Subtraction was done by taking the two's complement and adding. I
suspect the CPU architect (Gene Amdahl -- not exactly a dullard)
intended programmers to store array elements at increasing memory
addresses, and reference an array element relative to the address of the
last element plus one. This would allow a single index register (and
there were only three) to be used as the index and the (decreasing)
count. See the example on page 97 of:
James A. Saxon
Programming the IBM 7090: A Self-Instructional Programmed Manual
Prentice-Hall, 1963
http://www.bitsavers.org/pdf/ibm/7090/books/Saxon_Programming_the_IBM_7090_…
The Fortran compiler writers decided to reverse the layout of array
elements so a Fortran subscript could be used directly in an index register.
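The addressing trick is easier to see in a small sketch (mine, not from the manual): with the effective address being the instruction's address minus the index register, one decreasing counter serves as both the loop count and the subscript.

/*
 * Sketch of 7090-style indexing: "base" points just past the last
 * element, and the single decreasing counter ix is both the remaining
 * count and the index, since the effective address is base - ix.
 */
#include <stdio.h>

int main(void)
{
    int a[5] = {10, 20, 30, 40, 50};
    int *base = a + 5;                  /* address of last element plus one */

    for (int ix = 5; ix > 0; ix--)      /* ix is both count and index */
        printf("%d\n", *(base - ix));   /* walks a[0] .. a[4] in order */

    return 0;
}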
Hi,
The Hacker's Dictionary says that daemons were so named in CTSS. I'm
guessing then that Ken Thompson brought them into Unix? I've noticed that
more recent implementations of init have shunned the traditional
terminology in favor of the more prosaic word "services". For example,
Solaris now has SMF, the Service Management Facility, and systemd, the
Linux replacement for init, has services as well. It makes me a little sad,
because it feels like some of the imaginativeness, fancifulness, and
playfulness that imbue the Unix spirit are being lost.
[try-II]
On Fri, Mar 23, 2018 at 6:43 AM, Tim Bradshaw <tfb(a)tfeb.org> wrote:
> On 22 Mar 2018, at 21:05, Bakul Shah <bakul(a)bitblocks.com> wrote:
>
>
> I was thinking about a similar issue after reading Bradshaw's
> message about FORTRAN programs being critical to his country's
> security. What happens in 50-100 years when such programs have
> been in use for a long time but none of the original authors
> may be alive? The world may have moved on to newer languages
> and there may be very few people who study "ancient" computer
> languages and even they won't have in-depth experience to
> understand the nuances & traps of these languages well enough.
> No guarantee that FORTRAN will be in much use then! Will it be
> like in science fiction where ancient spaceships continue
> working but no one knows what to do when they break?
>
>
> My experience of large systems like this is that this isn't how they work
> at all. The program I deal with (which is around 5 million lines naively
> (counting a lot of stuff which probably is not source but is in the source
> tree)) is looked after by probably several hundred people. It's been
> through several major changes in its essential guts and in the next ten
> years or so it will be entirely replaced by a new version of itself to deal
> with scaling problems inherent in the current implementation. We get a new
> machine every few years onto which it needs to be ported, and those
> machines have not all been just faster versions of the previous one, and
> will probably never be so.
>
> What it doesn't do is to just sit there as some sacred artifact which
> no-one understands, and it's unlikely ever to do so. The requirements for
> it to become like that would be at least that the technology of large-scale
> computers was entirely stable, compilers, libraries and operating systems
> had become entirely stable and people had stopped caring about making it do
> what it does better. None of those things seems very likely to me.
>
> (Just to be clear: this thing isn't simulating bombs: it's forecasting the
> weather.)
>
+1 - exactly my point.

We have drifted a bit from pure UNIX, but I actually do think this is
relevant to UNIX history. Once UNIX started to run on systems targeting
HPC loads where Fortran was the dominant programming language, UNIX quickly
displaced custom OSs and became the dominant target, even if at the
beginning of that transition the 'flavor' of UNIX did vary (we probably can
and should discuss how that happened and why independently -- although I
will point out the UNIX/Linux implementation running at say LLNL != the
version running at say NASA Moffett). And the truth is today, for small
experiments you probably run Fortran on Windows on your desktop. But for
'production' the primary OS for Fortran is a UNIX flavor of some type, and
has been that way since the mid-1980s - really starting with the UNIX wars
of that time.

As I also have said here and elsewhere, HPC and very much its lubricant,
Fortran, are not something 'academic CS types' like to study these days -
even though Fortran (HPC) pays my and many of our salaries. Yet it runs on
the system that those same academic types all prefer - *i.e.* Ken and
Dennis' ideas. The primary difference is the type of program the users are
running. But Ken and Dennis' ideas work well for almost all users and span
specific application markets.

Here is a picture I did a few years ago for a number of Intel execs. At
the time I was trying to explain to them that HPC is not a single style of
application, and also help them understand that there are two types of
value - the code itself and the data. Some markets (*e.g.* Financial) use
public data but the methods they use to crunch it (*i.e.* the codes) are
private, while other market segments might have private data (*e.g.* oil
and gas) but different customers use the same or similar codes to crunch it.

For this discussion, think about how much of the code I show below is
complex arithmetic - while much of it is searching, Google style, a lot is
just plain nasty math. The 'nasty math' has not changed over the years,
and thus those codes are dominated by Fortran. [Note Steve has pointed out
that with AI maybe the math could change in the future - but certainly so
far, the history of these markets is basically differential equation
solvers.]

As Tim says, I really can not see that changing, and the reason (I believe)
is I do not see any compelling economic reason to do so.

Clem