Has anyone other than the owner of m88k.com preserved Motorola System V/68? Besides that, there’s the SVR1 (v2.8) for the Motorola VME/10 on Bitsavers and that’s about it.
I’m especially curious as to whether anyone has preserved the SVR2 1.1 binary+sources distribution, since there might be useful information in it—or derivable from it—about much of the early VME hardware.
— Chris
Sent from my iPad
Noel wrote:
>> 2. Note that the BBN TCP worked over NCP as its primary transport.
>
> Your terminology is confused. TCP _never_ ran 'on' NCP; they were
> _alternative_ protocol stacks on top of IHP (on the ARPANET). No
> AHHP, no NCP.
Yes, of course you are right. I meant BBN TCP used *Arpanet* as its primary transport and hence has drivers for the IMP interface hardware.
Lars wrote:
> Here's the rub. Some hosts may have jumped the TCP/IP gun ahead of the
> 1/1 1983 flag day. The host tables don't say. Could it be that all
> those VAXen were running experimental TCP/IP in January 1982?
From Mike Muuss’ TCP digest mailing list and a mail conversation with Vint Cerf a few years ago I understood the following. “Flag day” wasn’t as black and white as we remember it now. During 1982 there was a continuous push to move systems to TCP, and over the year more and more systems became dual protocol capable and later even TCP only. Because all TCP traffic used the same, dedicated Arpanet link number, BBN’s network control team could monitor the level of usage. From memory, in the Summer of 1982 traffic was about 50% TCP and by October 70%. Presumably it reached 80-90% by the end of the year.
On three occasions during 1982, network control activated a feature in the IMPs that refused traffic on link #0, which NCP used to negotiate connections. This caused NCP applications to stop working. Again from memory, the first outage lasted a few hours, the second a day, and the third, late in 1982, two days. This highlighted systems that still needed conversion and helped motivate higher-ups to approve conversion resources. It seems that making the switch often involved upgrading from a PDP11 to a VAX.
From what I can tell flag day went well, although there were issues with mail gateways that lasted for several weeks.
At the start of 1982 there was no (usable) VAX Unix TCP code that I am aware of. There were several options for the PDP11, but of those I think only the 3COM code worked well. Around March/April there was code from BBN (see TUHS 4.1BSD) and from CSRG (4.1a BSD). A special build of PDP11 2.8BSD with TCP arrived somewhat later. My impression is that this was still the state of play on flag day, with 4.1cBSD only arriving well into 1983.
> I have searched the TUHS archive and elsewhere, but all I
> find for Unix is a copy of the PDP-11 Unix V6 NCP from Illinois.
>
> Has any other NCP implementation for Unix survived? From old host
> tables I think there may have been some VAXen online before the switch
> to TCP/IP.
Lars,
You may want to look at the 4 surviving BBN tapes on Kirk McKusick’s DVD software collection. A small part of that is on the TUHS Unix tree page - see the 4.1BSD entry.
1. A history of NCP on the VAX at BBN can be found in the change log:
https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/history
In short, they started with 32V in the Fall of 1979 and ported UIUC’s NCP code to it in May 1980. They then moved to 4.1BSD in August and ported yet again. It would seem that the ports were fairly straightforward. Coding for TCP began in January 1981.
2. Note that the BBN TCP worked over NCP as its primary transport. The driver is still there if you look through the surviving BBN tapes. Part of that code is on TUHS:
https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/dev/acc.c
https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/bbnnet-oct82/imp_io.c
It will take some effort, but probably the NCP VAX code can be reconstructed from the surviving PDP11 UIUC code and these BBN tapes (the file names in the change log match).
3. The BBN tapes also have some user level software: telnet, ftp, mtp. This code consists of straight NCP to TCP conversions and the source code has #ifdef’s for NCP and TCP. An example is here:
https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/src/telnet/netser.c
Hope this helps.
Paul
PS - Info on the DVD is here (bottom of the page):
https://www.mckusick.com/csrg/
Hello,
I'm working on setting up an emulated ARPANET using the original IMP
software recovered some years ago. It turns out, the greatest challenge
is finding the NCP software on the host side that implements the ARPANET
protocols. I have searched the TUHS archive and elsewhere, but all I
find for Unix is a copy of the PDP-11 Unix V6 NCP from Illinois.
Has any other NCP implementation for Unix survived? From old host
tables I think there may have been some VAXen online before the switch
to TCP/IP.
Best regards,
Lars Brinkhoff
Hello all,
I am a new Unix user, so please excuse my ignorance.
I am trying to set up Unix V7 with the simh PDP-11 emulator. The guide I am
following is by Will Senn (in PDF form). I have been able to successfully
get the machine to boot into Unix and log in as root. What I am having
problems with is getting telnet access via dci to work. When I
follow the guide and do the following:
# cd /usr/sys/conf
# rm l.o c.o
# cp hptmconf myconfnf
# echo 4dc >> myconf
# mkconf < myconf
# make
as - -o l.o l.s
cc -c c.c
ld -o unix -X -i l.o mch.o c.o ../sys/LIB1 ../dev/LIB2
# sum unix
10314
106
# ls -l unix
-rwxrwxr-x 1 root
54122 Dec 31 19:09 unix
etc...
When I issue the mkconf < myconf command, I get a bunch of text printed
out, but with a 'root device not found' message. The sum unix value is different,
as is the file size shown by ls -l unix. Now when I try booting it
with the newly created mboot.ini file (as per the guide), I go to start up
the system with 'hp(0,0)munix' and it starts but hangs with the text 'fault
devtab'.
What am I doing wrong?
regards,
Joseph Turco
I’ve gotten Minix 1.5 up and running on Hatari, the Atari ST emulator, and I’d like to update it to the latest in the 1.5 series (1.5.10.7).
The patch sets used to be quite readily available, but the only patch sets I’ve been able to find have been the 1.5.10.3 to 1.5.10.4 patches posted to Usenet (via minnie, thanks!) which won’t apply cleanly to my sources because I’m only running 1.5.
(I know about the 1.6.25-on-Atari efforts, I’m trying to do something different and also fill in some git history…)
— Chris
On Wed, Sep 29, 2021 at 09:40:23AM -0700, Greg A. Woods wrote:
> I think perhaps the problem was that mmap() came too soon in a narrow
> sub-set of the Unix implementations that were around at the time, when
> many couldn't support it well (especially on 32-bit systems -- it really
> only becomes universally useful with either segments or 64-bit and
> larger address spaces). The fracturing of "unix" standards at the time
> didn't help either.
>
> Perhaps these "add-on hack" problems are the reason so many people think
> fondly of the good old Unix versions where everything was still coming
> from a few good minds that could work together to build a cohesive
> design. The add-ons were poorly done, not widely implemented, and
> usually incompatible with each other when they were adopted by
> additional implementations.
mmap() did come from those days and minds.
The first appearance of mmap() was in 32V R3, done by John Reiser in 1981. This is the version of 32V with full demand paging; it implemented a unified buffer cache. According to John, that version of mmap() already had the modern six-argument API. John added mmap() because he had worked with Tenex a lot during his PhD days and missed PMAP. He needed some six months to design, implement and debug this version of 32V as a skunkworks project.
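Those six arguments are essentially what POSIX mmap() still takes today. As a point of reference only (this is the modern call, not necessarily the exact 32V R3 form, and the file name is made up), a minimal sketch:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* "data.bin" is a placeholder; it must exist and be at least a page long. */
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* The six arguments: address hint, length, protection, flags, fd, offset. */
    void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    write(STDOUT_FILENO, p, 16);   /* the mapping is used like ordinary memory */

    munmap(p, 4096);
    close(fd);
    return 0;
}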
I am trying to revert early VAX SVr1/r2 code to get a better view of what 32V R3 looked like, but unfortunately I have not had much time for this effort in the last couple of months. It would seem that 32V R3 assumed that disk blocks and memory pages were the same size (true on a 1980 VAX), and with that assumption a unified buffer cache comes naturally in this code base.
For 4.2BSD, Joy and colleagues initially had a different approach to memory-mapped files in mind (see the 1981 tech report #4 from CSRG). By the time of 4.2BSD’s release the manual defined an mmap() system call, but it was not implemented, and it appears to have been largely forgotten until SunOS 4 and dynamic libraries six years later.
In the SysV lineage it is less clear. Certainly mmap() is not there, but the first implementation of the shmem IPC feature might derive from the 32V R3 code. On the inside, SVr2 virtual memory appears to implement the segments (now called regions) that Joy envisaged for 4.2BSD but did not implement.
CB Unix had a precursor to shmem as well, where a portion of system core was reserved for shared memory purposes and could be accessed either via the /dev/mem device or could be mapped into the PDP-11 address space (using 1 of the 8 segment registers for each map). Here too the device and the map were unified.
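For readers who never used it, the System V shared memory interface as it eventually shipped looks roughly like this (a minimal sketch of the modern shmget/shmat calls, not the SVr1 or CB Unix code itself):

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* Create an anonymous 4 KB segment; a real application would use a key. */
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (id < 0) { perror("shmget"); return 1; }

    char *p = shmat(id, NULL, 0);          /* map the segment into our address space */
    if (p == (char *)-1) { perror("shmat"); return 1; }

    strcpy(p, "hello");                    /* any process that attaches the id sees this */
    printf("%s\n", p);

    shmdt(p);
    shmctl(id, IPC_RMID, NULL);            /* mark the segment for removal */
    return 0;
}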
So far, I have not come across any shared library implementations or precursors in early Unix prior to SunOS 4.
Paul
Dear TUHS members,
The IEEE Annals on the History of Computing magazine, the primary
publication for recording, analyzing, and debating the history of
computing, is seeking a new Editor in Chief [2]. The EiC term begins on
January 1st 2022 and is for three years, renewable for two years. The
application deadline is October 31st 2021.
It would be valuable for those of us interested in our discipline's history to
serve in the publication's leadership. You can contact Carrie Clark at
c.clark(a)computer.org to submit an application. Alternatively, if you
have connections in the community and some time to spare to head the EiC
selection committee, please drop me a note.
[1] https://www.computer.org/csdl/magazine/an
[2]
https://www.computer.org/press-room/2021-news/ieee-cs-publications-seek-app…
Kind regards,
Diomidis Spinellis
I have received a message from his family that Jörg Schilling passed
away from complications related to kidney cancer this Sunday
around noon (CEST).
He will be remembered for his open source projects including
- cdrtools, the first portable CD burning program
- star, a powerful and fast tar implementation, the first to
use two processes with a shared ring buffer for better
performance.
- smake, a make implementation with autoconf features
- sformat, a versatile SCSI disk formatting program
- SING, an autoconf fork with a comprehensive set of libc
shims, providing a uniform API across operating systems
- ved, an early visual editor for the UNOS operating system (I believe)
- bosh, a carefully maintained fork of the Bourne shell
- sccs, a carefully maintained fork of SCCS. His attempts
to teach it projects and networking will remain unfinished.
- libfind, an implementation of find(1) as a library for
integration into other software.
- libxtermcap, an extended termcap library
- libscg, an early portable SCSI driver and library
He is also remembered for his commitment to open source, portability,
and his work on POSIX. He was working on adapting his software to
Z/OS and introducing message catalogues just weeks before his death.
Jörg worked for the Berthold typesetting company, one of the first
European customers of Sun Microsystems. It was there that his love
for UNIX, and SunOS in particular, was kindled. [1]
His interest in SunOS culminated in SchilliX, one of the first
open source Solaris distributions.
We will of course also remember him for his flames.
[1]: https://web.archive.org/web/20061201103910/http://www.opensolaris.org/os/ar…
May his software immortalise him.
Robert Clausecker
--
() ascii ribbon campaign - for an 8-bit clean world
/\ - against html email - against proprietary attachments
John Cowan:
"Between each" has been part of Standard English for a thousand years, and
still is today.
====
As in between each pair of elements, or between each element?
The latter strikes me rather like the currently-in-vogue phrase
`one of the only': it may have a defined meaning, but it sure
sounds distractingly stupid. (If it's one of the group at all,
it's by definition one of the only members; if what is meant is
one of the few, then say so, dammit.)
It's rather like obfuscated C, or nearly any use of Perl: sure,
you can write it to require extra mental effort to make sense of
it, but there are simpler ways to be rude.
Norman Wilson
Toronto ON
Please, sir, I'd like to join The Few.
I'm sorry, there are far too many.
Apropos of "finding the right exposition", consider the cited wiki article:
Separator: There is a symbol between each element.
The more carefully you read this the more it becomes nonsense.
1, "Each element" is an individual. You can't put something between an
individual.
2 The defining sentence states a property of a representation of a
sequence. It fails to indicate that "separator" is the symbol's role.
In fact what's being defined is "separator notation", not the bare
word "separator". This usage appears only later in the article. It
should be employed throughout--most importantly in the title and the
definition. The same goes for "terminator".
Doug
> Doug, if you insist on applying your superb editing skills on wiki material, we will never hear from you again!
Thanks, Bill, for the wise advice. If I'm putting out stuff like this
you shouldn't hear from me again.
Apologies for again(!) posting to the wrong mailing list.
Doug
Hello All,
I am attempting to restore 4.3BSD-Tahoe to a usable state on the VAX. It
appears, based on the work that I have done already, that this is
possible. Starting with stock 4.3BSD I got a source tree into /usr/tahoe
and using it I replaced all of /usr/include and /sys, recompiled and
replaced /bin, /lib, and /etc, recompiled a GENERIC kernel, and from there
I was able to successfully reboot using the new kernel. As far as I can
tell (fingers crossed!) the hardest part is over and I'm in the process of
working on /usr.
My question is: how was this sort of thing done in the real world? If I
was a site running stock 4.3BSD, would I have received (or been able to
request) updated tapes at regular intervals? The replacement process that
I have been using is fairly labor intensive and on a real VAX would have
been very time intensive too. Fortunately two to three years' worth of
changes were not so drastic that I ever found myself in a position where
the existing tools were not able to compile pieces of Tahoe that I needed
to proceed, but I could easily imagine finding myself in such a place.
(This was, by the way, what I ran into when attempting to upgrade from
2.9BSD to 2.10BSD, despite a fully documented contemporary upgrade
procedure).
-Henry
I can't speak to the evolution and use of specific
groups; I suspect it was all ad-hoc early on.
Groups appeared surprisingly late (given how familiar
they seem now): they don't show up in the manual
until the Sixth Edition. Before that, chown took
only two arguments (filename and owner), and
permission modes had three bits fewer.
I forget how it came up, but the late Lee McMahon
once told me an amusing story about their origin:
Ken announced that he was adding groups.
Lee asked what they were for.
Ken replied with a shrug and `I dunno.'
Norman Wilson
Toronto ON
Groups appeared surprisingly late (given how familiar
they seem now): they don't show up in the manual
until the Sixth Edition.
Mea culpa; read too hastily. The change actually
came with the Fourth Edition, at the same time as
several other landmark system changes:
-- Time changing from a 32-bit count of 60Hz clock
ticks (which ran out so quickly that its epoch kept
moving) to the modern 32 bits of whole seconds, based
at 1970-01-01T00:00:00 GMT (which takes much longer
to run out, though the horizon is now visible).
-- The modern super-block. In 4/e, the super-block
began at block 0, not 1 (so bootstrapping was rather
more complicated); the free list was a bitmap rather
than the later list of blocks containing lists of
free block numbers.
-- The super-block contained a bitmap of free
i-numbers too. All told, the free block and free
i-node map made up a 1024-byte super-block.
-- I-numbers 1-40 were device files; the root
directory was i-number 41. The only file-type
indication in the mode word was a single bit to
denote directory.
It was already clear that the lifetime of the
bitmaps was running out: the BUGS section says
two blocks isn't enough for the bitmaps for a
20-megabyte RP02.
Norman Wilson
Toronto ON
Hi all,
I was reading a recent thread over on the FreeBSD forums about groups that quickly devolved into a discussion of the origin of the operator group:
https://forums.freebsd.org/threads/groups-overview.82303/
I thought y’all would be the best place to ask the questions that arose in me during my read of the thread.
Here they are in no special order:
1. Where did operator come from and what was it intended to solve?
2. How has it evolved?
3. What's a good place or reference to read about groups, generally?
I liked one respondent’s answer about using find, hier(7), and the files themselves to learn about the groups being used in a running system, paying attention to the owner, group, etc. along the way, and that is how I do it now, but this approach doesn’t account for the history and evolution.
Thanks!
Willu
Sent from my iPhone
Greg wrote:
> I guess pipe(2) kind of started this mess, [...] Maybe I'm
> going to far with thinking pipe() could/should have just been a library
> call that used open(2) internally, perhaps connecting the descriptors by
> opening some kind of "cloning" device in the filesystem.
At times I’ve been pondering this as well. All of creat/open/pipe could have been rolled into just open(). It is not clear to me why this synthesis did not happen around the time of the 7th edition, although it seems the creat/open merger happened in BSD around that time.
As to pipe(), the very first implementation returned just a single fd where writes echoed to reads. It was backed by a single disk buffer, so could only hold ~500 bytes, which was probably not enough in practice. Then it was reimplemented using an anonymous file as backing store and got the modern two fd system call. The latter probably arose as a convenient hack to store the two file pointers needed.
It would have been possible to implement the anonymous file solution still using a single fd, storing the second file pointer in the inode. Maybe this felt like a worse hack at the time (the conceptual split into vnode/inode was still a decade in the future).
With a single fd, it would also have been possible to have a cloning device for pipes as you suggest (e.g. /dev/pipe, somewhat analogous to the implementation of /dev/stdin in 10th edition). Arguably, in total code/data size this would not have been much different from pipe().
My guess is that from a 1975 perspective, creat/open/pipe was not perceived as something that needed fixing.
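For contrast with that hypothetical single-fd /dev/pipe, here is the two-fd interface as it has come down to us, in a minimal modern sketch:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                          /* the two file descriptors discussed above */
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                   /* child: keep only the write end */
        close(fds[0]);
        const char *msg = "hello through the pipe\n";
        write(fds[1], msg, strlen(msg));
        _exit(0);
    }

    close(fds[1]);                       /* parent: keep only the read end */
    char buf[64];
    ssize_t n = read(fds[0], buf, sizeof buf);
    if (n > 0) write(STDOUT_FILENO, buf, (size_t)n);
    close(fds[0]);
    wait(NULL);
    return 0;
}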
Dan wrote:
> 3BSD and I think 4.1BSD had vread() and vwrite(), which looked like
> regular read() and write() but accessed pages only on demand. I was a
> grad student at Berkeley at the time and remember their genesis. Bill
> and I were eating lunch from Top Dog on the Etcheverry Hall plaza, and
> were talking about memory-mapped I/O. I remember suggesting the actual
> names, perhaps as a parallel to vfork(). I had used both TENEX and
> Multics, which both had page mapping. Multics' memory-mapped segments
> were quite fundamental, of course. I think we were looking for something
> vaguely upward compatible from the existing system calls. We did not
> leap to an mmap() right away just because it would have been a more
> radical shift than continuing the stream orientation of UNIX. I did not
> implement any of this: it was just a brainstorming session.
Thank you for reminding me of these.
On a substrate with a unified buffer cache and copy-on-write, vread/vwrite would have been very close to regular read/write and maybe could have been subsumed into them, using flags to open() as the differentiator. The user discernible effect would have been the alignment requirement on the buffer argument.
John Reiser wrote that he "fretted” over adding a 6 argument system call. Perhaps he was considering something like the above as the alternative, I never asked.
I looked at the archives and vread/vwrite were introduced with 3BSD, present in 4BSD but marked deprecated, and absent from 4.1BSD. This short lifetime suggests that using vread and vwrite wasn’t entirely satisfactory in 1980/81 practice. Maybe the issue was that there was no good way to deallocate the buffer after use.
Hello,
I've recently started to implement a set of helper functions and
procedures for parsing Unix-like command-line interfaces (i.e., POSIX +
GNU-style long options, in this case) in Ada.
While doing that, I learned that there is a better way to approach
this problem – beyond using getopt(s) (which never really made sense to
me) and having to write case statements in loops every time: Define a
grammar, let a pre-built parser do the work, and have the parser
provide the results to the program.
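To be concrete about the pattern I mean, the conventional getopt-style loop looks like this (a minimal C sketch; the option names and defaults are made up):

#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int verbose = 0;
    const char *output = NULL;

    static const struct option longopts[] = {
        { "verbose", no_argument,       NULL, 'v' },
        { "output",  required_argument, NULL, 'o' },
        { NULL, 0, NULL, 0 }
    };

    /* The usual "case statements in a loop" that a grammar-driven parser
       would generate for you instead. */
    int c;
    while ((c = getopt_long(argc, argv, "vo:", longopts, NULL)) != -1) {
        switch (c) {
        case 'v': verbose = 1; break;
        case 'o': output = optarg; break;
        default:
            fprintf(stderr, "usage: %s [-v|--verbose] [-o|--output file] [args]\n", argv[0]);
            exit(1);
        }
    }
    printf("verbose=%d output=%s operands=%d\n",
           verbose, output ? output : "(none)", argc - optind);
    return 0;
}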
Now, defining such a grammar requires a thoroughly systematic approach
to the design of command-line interfaces. One problem with that is
whether that grammar should allow for sub-commands. And that leads to
the question of how task-specific tool sets should be designed. These
seem to be a relatively new phenomenon in Unix-like systems that POSIX
doesn't say anything about, as far as I can see.
So, I've prepared a bit of a write-up pondering the pros and cons
of two different ways of having task-specific tool sets
(non-hierarchical command sets vs. sub-commands) that is available at
https://www.msiism.org/files/doc/unix-like_command-line_interfaces.html
I tend to think the sub-command approach is better. But I'm neither a UI
nor a Unix expert and have no formal training in computer things. So, I
thought this would be a good place to ask for comment (and get some
historical perspective).
This is all just my pro-hobbyist attempt to make some people's lives
easier, especially mine. I mean, currently, the "Unix" command line is
quite a zoo, and not in a positive sense. Also, the number of
well-thought-out command-line interfaces doesn't seem to be a growing
one. But I guess that could be changed by providing truly easy ways to
make good interfaces.
--
Michael
> one other thing that SLS breaks, for data files, is the whole Unix 'pipe'
> abstraction, which is at the heart of the whole Unix tools paradigm.
Multics had an IO system with an inherent notion of redirectable data
streams. Pipes could have--and eventually did (circa 1987)--fit into
that framework. I presume a pipe DIM (device interface manager)
was not hard to build once it was proposed and accepted.
Doug
> From: Larry McVoy
> If you read(2) a page and mmap()ed it and then did a write(2) to the
> page, the mapped page is the same physical memory as the write()ed
> page. Zero coherency issues.
Now I'm confused; read() and write() semantically include a copy operation
(so there are then two copies of that data chunk, and possible consistency
issues between them), and the copied item is not necessarily page-sized (so
you can't ensure consistency between the original+copy by mapping it in). So
when one does a read(file, &buffer, 1), one gets a _copy of just that byte_
in the process' address space (and similar for write()).
Yes, there's no coherency issue between the contents of an mmap()'d page, and
the system's idea of what's in that page of the file, but that's a
_different_ coherency issue.
Or am I confused?
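A small sketch of what I mean, assuming a modern system with a unified buffer cache (the file name is made up and must already exist with at least one byte): the write() becomes visible through the mapping, but the earlier read() copy is of course unchanged.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("demo.dat", O_RDWR);     /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    char copy;
    pread(fd, &copy, 1, 0);                /* a private copy of byte 0 */

    char *map = mmap(NULL, 1, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    char newbyte = 'X';
    pwrite(fd, &newbyte, 1, 0);            /* update the file via the write path */

    /* The mapping reflects the write; the read() copy still holds the old byte. */
    printf("read() copy=%c  mapped=%c\n", copy, map[0]);

    munmap(map, 1);
    close(fd);
    return 0;
}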
PS:
> From: "Greg A. Woods"
> I now struggle with liking the the Unix concept of "everything is a
> file" -- especially with respect to actual data files. Multics also got
> it right to use single-level storage -- that's the right abstraction
Oh, one other thing that SLS breaks, for data files, is the whole Unix 'pipe'
abstraction, which is at the heart of the whole Unix tools paradigm. So no
more 'cmd | wc' et al. And since SLS doesn't have the 'make a copy'
semantics of pipe output, it would be hard to trivially work around it.
Yes, one could build up a similar framework, but each command would have to
specify an input file and an output file (no more 'standard in' and 'out'),
and then the command interpreter would have to i) take command A's output file
and feed it to command B, and ii) delete A's output file when the whole works
was done. Yes, the user could do it manually, but compare:
cmd aaa | wc
and
cmd aaa bbb
wc bbb
rm bbb
If bbb is huge, one might run out of room, but with today's 'light my cigar
with disk blocks' life, not a problem - but it would involve more disk
traffic, as bbb would have to be written out in its entirety, not just have a
small piece kept in the disk cache as with a pipe.
Noel
> From: "Greg A. Woods"
> the elegance of fork() is incredible!
That's because in PDP-11 Unix, they didn't have the _room_ to create a huge
mess. Try reading the exec() code in V6 or so.
(I'm in a bit of a foul mood today; my laptop sorta locked up when a _single_
Edge browser window process grew to almost _2GB_ in size. Are you effing
kidding me? If I had any idea what today would look like, back when I was 20 -
especially the massive excrement pile that the Internet has turned into - I
never would have gone into computers - cabinetwork, or something, would have
been an infinitely superior career choice.)
> I now struggle with liking the the Unix concept of "everything is a
> file" -- especially with respect to actual data files. Multics also got
> it right to use single-level storage -- that's the right abstraction
Well, files a la Unix, instead of the SLS, are OK for a _lot_ of data storage
- pretty much everything except less-common cases like concurrent access to a
shared database, etc.
Where the SLS really shines is _code_ - being able to just do a subroutine
call to interact with something else has incredible bang/buck ratio - although
I concede doing it all securely is hard (although they did make a lot of
progress there).
Noel
>> > It's part of my academic project to work on provable compiler security.
>> > I tried to do it according to the "Reflections on Trusting Trust" by Ken
>> > Thompson, not only to show a compiler Trojan horse but also to prove that
>> > we can discover it.
>>
>> Of course it can be discovered if you look for it. What was impressive about
>> the folks who got Thompson's compiler at PWB is that they found the horse
>> even though they weren't looking for it.
> I had not heard this story. Can you elaborate, please? My impression from having
> read the paper (a long time ago now) is that Ken did the experiment locally only.
Ken did it locally, but a vigilant person at PWB noticed there was an
experimental compiler on the research machine and grabbed it. While they
weren't looking for hidden stuff, they probably were trying to find what
was new in the compiler. Ken may know details about what they had in the
way of source and binary.
Doug
> It's part of my academic project to work on provable compiler security.
> I tried to do it according to the "Reflections on Trusting Trust" by Ken
> Thompson, not only to show a compiler Trojan horse but also to prove that
> we can discover it.
Of course it can be discovered if you look for it. What was impressive about
the folks who got Thompson's compiler at PWB is that they found the horse
even though they weren't looking for it.
Then there was the first time Jim Reeds and I turned on integrity control
in IX, our multilevel-security version of Research Unix. When it reported
a security violation during startup we were sure it was a bug. But no, it
had snagged Tom Duff's virus in the act of replication. It surprised Tom
as much as it did us, because he thought he'd eradicated it.
Doug