> Shortly after I arrived, the comp center announced a
> brand-new feature -- permanent disc storage! (actually, I think it
> was a drum...)
Irrelevant to the story (or Unix), but it was indeed a disc drive--much
more storage per unit volume than drums, which date to the 1940s, if
not before. Exact opposite of current technology: super heavy and
rigid combs banged in and out of the disk stack. The washing-machine
sized machine could be driven to walk across the floor. It would not
be nice to be caught in its path. (Fortunately ordinary work loads
did not have such an effect.) Vic Vyssotsky calculated that with only
10 times its 10MB capacity, we could have kept the entire printed
output since the advent of computers at the Labs on line.
Doug
The Internet (spelled with a capital "I", please, as it is a proper noun)
was born on this day in 1969, when RFC-1 got published; it described the
IMP and ARPAnet.
As I said at a club lecture once, there are many internets, but only one
Internet.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> Date: Fri, 30 Mar 2018 00:28:13 -0400
> From: Clem cole <clemc(a)ccc.com>
>
> Also, joy / BSD 4.2 was heavily influenced by Accent (and RIG )and the Mach memory system would eventually go back into BSD (4.3 IIRC) - which we have talked about before wrt to sockets and Accent/Mach’s port concept.
From an “outsider looking in” perspective I’m not sure I recognise such heavy influence in the sockets code base. Of course, suitability for distributed systems was an important part of the 4.2BSD design brief and Rick Rashid was one of the steering committee members; all of that is agreed.
However, in the code evolution for sockets I only see two influences that do not seem to be direct continuations from earlier Arpanet Unices and that have possible Accent origins:
- Addition of sendto()/recvfrom() to the API. Earlier code had poor support for UDP and was forced through the TCP-focused APIs, with fixed endpoint addresses. It could be Accent inspired, but it could equally be a natural solution for sending datagrams. For example, Jack Haverty had hacked a “raw datagram” facility into UoI Arpanet Unix around ’79 (it’s in the Unix Tree).
- Addition of a facility to pass file descriptors using sendmsg()/recvmsg() in the local domain (a present-day sketch of this facility appears below). This facility was only added at the very last moment (it was not in 4.1c, only in 4.2). I’m told that the CSRG team procrastinated on this because they did not see much use for it -- and indeed it was mostly ignored in the field for the next two decades. Joy had left by that time, so perhaps the dynamics had changed.
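To make that facility concrete, here is a minimal sketch of the sending side, written against the modern POSIX form of the interface rather than the 4.2BSD original; the helper name send_fd is invented purely for illustration:

    /*
     * Sketch: pass an open file descriptor to a peer over an AF_UNIX socket.
     * The descriptor travels as SCM_RIGHTS ancillary data on a sendmsg()
     * call; the peer collects it with recvmsg() and receives a new
     * descriptor referring to the same open file.
     */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    ssize_t send_fd(int sock, int fd_to_pass)
    {
        char dummy = 'x';                       /* at least one byte of real data */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        union {                                 /* properly aligned ancillary buffer */
            struct cmsghdr hdr;
            char buf[CMSG_SPACE(sizeof(int))];
        } control;
        memset(&control, 0, sizeof(control));

        struct msghdr msg = {
            .msg_iov        = &iov,
            .msg_iovlen     = 1,
            .msg_control    = control.buf,
            .msg_controllen = sizeof(control.buf),
        };

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;          /* "these bytes are descriptors" */
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

        return sendmsg(sock, &msg, 0);
    }

The receiving side is the mirror image: recvmsg() with an ancillary buffer of the same size, and the new descriptor is read out of the SCM_RIGHTS control message.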
Earlier I thought that the select() call may have come from Accent, but it was inspired by the Ada select statement. Conceptually, it was preceded on Unix by Haverty’s await() call (also ’79).
For clarity: I wasn’t there, just commenting on what I see in the code.
Paul
The recent discussion of long-lived applications, and backwards
compatibility in Unix, got me thinking about the history of shared
objects. My own experience with Linux and MacOS is that
statically-linked applications tend to continue working from release
to release, but shared objects provided by the OS tend not to be
backwards compatible, and you often have to carry around with your
application the exact C runtime and other shared objects your program
was linked against. This is in sharp contrast to shared libraries on
VMS, where great care is taken to maintain strict backward
compatibility release to release.
What is the history of shared objects on Unix? When did they first
appear, and with what object/executable file format? The a.out ZMAGIC
format doesn't seem to support them. I don't recall if COFF does.
Mach-O, at least the MacOS dialect of it, supports dynamic libraries.
ELF supports them.
Also, when was symbol preemption invented? Traditional shared library
designs such as those in IBM System/370, VMS, and Windows NT don't have
it. As one who worked on optimizations in compilers, I came to hate
symbol preemption because it prohibits many useful optimizations. ELF
does provide a way to turn it off, but it's on by default--you have to
explicitly declare symbols as protected or hidden via source language
pragmas to get rid of it.
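To illustrate what preemption costs, a small sketch (GCC on an ELF platform assumed; the file and function names are invented):

    /* libdemo.c -- build as a shared object, e.g.
     *     gcc -O2 -fPIC -shared -o libdemo.so libdemo.c
     *
     * With default ELF visibility, square() can be preempted at run time by
     * a definition in the executable or in an LD_PRELOADed object, so GCC
     * will not inline the call in report() and routes it through the PLT.
     */
    #include <stdio.h>

    int square(int x) { return x * x; }        /* preemptible by default */

    void report(int x)
    {
        /* might end up calling somebody else's square() */
        printf("%d^2 = %d\n", x, square(x));
    }

    /* Protected (or hidden) visibility promises that references from inside
     * this object always bind to this definition, so the compiler is free
     * to inline cube() into report_cube().
     */
    #pragma GCC visibility push(protected)
    int cube(int x) { return x * x * x; }
    #pragma GCC visibility pop

    void report_cube(int x)
    {
        printf("%d^3 = %d\n", x, cube(x));
    }

The same effect is available per symbol with __attribute__((visibility("protected"))) or program-wide with -fvisibility=hidden.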
-Paul W.
Time for another hand-grenade in the duck pond :-) Or as we call it
down-under, "stirring the possum".
On this day in 2010, it was found unanimously that Novell, not SCO, owned
"Unix". SCO appealed later, and it was dismissed "with prejudice"; SCO
shares plummeted as a result.
As an aside, this was the first and only time that I was on IBM's side,
and I still wonder whether M$ was bankrolling SCO in an effort to wipe
Linux off the map; what sort of an idiot would take on IBM?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
On this day in 1778 businessman Oliver Pollock created the "$" sign, and
now we see it everywhere: shell prompts and variables, macro strings, Perl
variables, system references (SYS$BLAH, SWAP$SYS, etc), etc; where would
we be without it?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
[TUHS] long lived programs (was Re: RIP John Backus)
> Every year someone takes some young hotshot and points them at some
"impossible" thing and one of them makes it work. I don't see that
changing.
Case in point.
We hired Tom Killian, a young high-energy physicist disenchanted
with contributing to hundred-author papers. He'd done plenty of
instrument programming, but no operating systems. So, high-energy
as he was, he cooked up an exercise to get his feet wet.
The result: /proc
Doug
On 3/17/2018 12:22 PM, Arthur Krewat <krewat(a)kilonet.net> wrote:
> Leave it to IBM to do something backwards.
>
> Of course, that was in 1954, so I can't complain, it was 11 years before
> I was born. But that's ... odd.
>
> Was subtraction easier than addition with digital electronics back then?
> I would think that they were both the same level of effort (clock
> cycles) so why do something obviously backwards logically?
Subtraction was done by taking the two's complement and adding. I
suspect the CPU architect (Gene Amdahl -- not exactly a dullard)
intended programmers to store array elements at increasing memory
addresses, and reference an array element relative to the address of the
last element plus one. This would allow a single index register (and
there were only three) to be used as the index and the (decreasing)
count. See the example on page 97 of:
James A. Saxon
Programming the IBM 7090: A Self-Instructional Programmed Manual
Prentice-Hall, 1963
http://www.bitsavers.org/pdf/ibm/7090/books/Saxon_Programming_the_IBM_7090_…
The Fortran compiler writers decided to reverse the layout of array
elements so a Fortran subscript could be used directly in an index register.
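A sketch in C of the addressing scheme being described (not 7090 assembly; the array contents are invented): the effective address is the instruction's address field minus the index register, so pointing the address field at "last element plus one" lets a single decreasing register serve as both the subscript and the loop count.

    #include <stdio.h>

    #define N 5

    int main(void)
    {
        int a[N] = {10, 20, 30, 40, 50};   /* stored at increasing addresses */
        int *field = a + N;                /* "address of the last element plus one" */

        /* One register does double duty as loop count and index, because
         * the machine subtracts the index register from the address field.
         */
        for (int xr = N; xr >= 1; xr--)
            printf("a[%d] = %d\n", N - xr, *(field - xr));   /* a[0], a[1], ... a[N-1] */

        return 0;
    }

Fortran's reversed layout moves that subtraction into the storage order, so an ordinary subscript can be loaded into the index register unchanged.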