> On 6/6/20, Paul Ruizendaal <pnr at planet.nl> wrote:
> >
> > In my view, exposing the host names through integration in the Unix file
> > name space makes a lot of conceptual sense, but it unfortunately falls down
> > on the practicalities, with the host name set being hard to enumerate (it is
> > large, distributed and not stable - even back then).
> >
> With a proper dynamic VFS architecture, there is no reason why a
> resolver with a filesystem API has to bother supporting enumeration at
> all. All it needs to be able to do is respond to open() and stat()
> calls, returning ENOENT when resolution fails.
That is an intriguing thought. In Research Unix terms it would be a virtual directory that was not readable or writable, but still explorable (i.e. only the x bit set).
Maybe enumeration is only impractical for the networks that were designed to be ‘large’, such as Arpanet and Datakit. It would have been feasible for contemporary networks that were designed to be local only, such as Chaosnet or ARCnet.
A half-way house would be to enumerate only the local network and leave everything else merely explorable. That is conceptually very messy, though.
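To make the idea concrete, here is a minimal user-space sketch of such a "resolve-only" directory, written against the FUSE 2.x API rather than any historical VFS; the "hostfs" name and the choice to expose each host as an empty read-only file are my own illustrative assumptions, not anything from the original discussion. It answers lookups (and hence stat()/open() on names) but deliberately provides no readdir(), so the directory is explorable without ever being enumerable:

    /* hostfs.c - sketch of a resolve-only host directory (FUSE 2.x).
     * build: cc hostfs.c $(pkg-config --cflags --libs fuse) -o hostfs */
    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <string.h>
    #include <errno.h>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <sys/stat.h>

    /* getattr: "/" is the explorable-only directory (x bits only);
     * any other path component is treated as a host name to resolve. */
    static int hostfs_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0111;   /* explorable, not listable */
            st->st_nlink = 2;
            return 0;
        }
        struct addrinfo *res;
        if (getaddrinfo(path + 1, NULL, NULL, &res) != 0)
            return -ENOENT;                 /* resolution failed */
        freeaddrinfo(res);
        st->st_mode = S_IFREG | 0444;       /* host exists: show a placeholder */
        st->st_nlink = 1;
        return 0;
    }

    /* No readdir at all: enumeration simply isn't supported. */
    static struct fuse_operations hostfs_ops = {
        .getattr = hostfs_getattr,
    };

    int main(int argc, char *argv[])
    {
        return fuse_main(argc, argv, &hostfs_ops, NULL);
    }

With something like this, stat("/hosts/foo.example.org") succeeds or fails according to the resolver, while ls on the directory has nothing to enumerate - which is the point.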
> From: Peter Jeremy <peter(a)rulingia.com>
> My view may be unpopular but I've always been disappointed that Unix
> implemented blocking I/O only and then had to add various hacks to cover
> up for the lack of asynchronous I/O. It's trivial to build blocking I/O
> operations on top of asynchronous I/O operations. It's impossible to do
> the opposite without additional functionality.
Back when I started working on networks, I looked at other kinds of systems
to see what general lessons I could learn about the evolution of systems, which
might apply to the networks we were building. (I should have written all that up,
never did, sigh.)
One major one was that a system, when small, often collapses multiple needs
onto one mechanism. Only as the system grows in size do scaling effects, etc.
necessitate breaking them up into separate mechanisms. (There are some good
examples in file systems, for example.)
I/O is a perfect example of this; a small system can get away with only one
kind; it's only when the system grows that one benefits from having both
synchronous and asynchronous. Since the latter is more complicated, _both_ in
the system and in the applications which use it, it's no surprise that
synchronous was the pick.
The reasons why synchronous is simpler in applications have a nice
illustration in operating systems, which inevitably support both blocking
(i.e. implied process switching) and non-blocking 'operation initiation' and
'operation completed notification' mechanisms. (The 'timeout/callout'
mechanism in Unix is an example of the latter, albeit specialized to timers.)
Prior to the Master Control Program in the Burroughs B5000 (there may be older
examples, but I don't know of them - I would be more than pleased to be
informed of any such, if there are), the technique of having a per-process
_kernel_ stack, and on a process block (and implied switch), switching stacks,
was not used. This idea was picked up for Jerry Saltzer's PhD thesis, used in
Multics, and then copied by almost every other OS since (including Unix).
The advantage is fairly obvious: if one is deep in some call stack, one can
just wait there until the thing one needs is done, and then resume without
having to work one's way back to that spot - which will inevitably be
complicated (perhaps more in the need to _return_ through all the places that
called down - although the code to handle a 'not yet' return through all those
places, after the initial call down, will not be inconsiderable either).
Exactly the same reasoning applies to blocking I/O; one can sit where one is,
waiting for the I/O to be done, without having to work one's way back there
later. (Examples are legion, e.g. in recursive descent parsers - and can make
the code _much_ simpler.)
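As a toy illustration of that "stay where you are and resume there" property, here is a user-space sketch using the <ucontext.h> interface (removed from recent POSIX but still widely available) to give one "process" its own stack; it blocks three calls deep and later resumes at exactly that spot. This is only an analogy for the per-process kernel stack idea, not how any of the historical kernels coded it:

    /* ctxdemo.c - per-"process" stack, block deep in a call chain, resume there */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t sched_ctx, proc_ctx;

    static void block(void)                 /* like sleep(): give up the CPU */
    {
        swapcontext(&proc_ctx, &sched_ctx); /* save our stack, run the "scheduler" */
    }

    static void level3(void) { puts("level3: blocking"); block(); puts("level3: resumed here"); }
    static void level2(void) { level3(); }
    static void level1(void) { level2(); }

    static void proc_main(void) { level1(); puts("proc: done"); }

    int main(void)
    {
        static char stack[64 * 1024];       /* the "process" gets its own stack */

        getcontext(&proc_ctx);
        proc_ctx.uc_stack.ss_sp = stack;
        proc_ctx.uc_stack.ss_size = sizeof stack;
        proc_ctx.uc_link = &sched_ctx;      /* where to go when proc_main returns */
        makecontext(&proc_ctx, proc_main, 0);

        swapcontext(&sched_ctx, &proc_ctx); /* run until it blocks in level3() */
        puts("sched: process blocked, doing other work, now waking it");
        swapcontext(&sched_ctx, &proc_ctx); /* resume deep in its call stack */
        return 0;
    }

No state machine, no "return through all the places that called down": the whole call stack is simply parked and picked up again.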
It's only when one _can't_ wait for the I/O to complete (e.g. for a packet to
arrive - although others have mentioned other examples in this thread, such as
'having other stuff to do in the meanwhile') that having only blocking I/O
becomes a problem...
In cases where blocking would be better, one can always build a 'blocking' I/O
subsystem on top of asynchronous I/O primitives.
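For what it's worth, that layering is easy to sketch with today's interfaces. Assuming POSIX AIO is available (link with -lrt on glibc), a blocking read is just "start the operation, then wait for it"; this is only an illustration of the layering argument, not how Unix actually implements read(2):

    /* blocking_read.c - a blocking read built on asynchronous primitives */
    #include <aio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    ssize_t blocking_read(int fd, void *buf, size_t n, off_t off)
    {
        struct aiocb cb;
        const struct aiocb *list[1] = { &cb };

        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = n;
        cb.aio_offset = off;

        if (aio_read(&cb) == -1)            /* start the asynchronous operation */
            return -1;

        while (aio_error(&cb) == EINPROGRESS)
            aio_suspend(list, 1, NULL);     /* ...and simply wait for completion */

        return aio_return(&cb);             /* bytes read, or -1 */
    }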
However, in a _tiny_ system (remember my -11/40 which ran Unix on a system
with _48KB_ of main memory _total_ - i.e. OS and application together had to be
less than 48KB - no virtual memory on that machine :-), building blocking I/O
on top of asynchronous I/O, for those very few cases which need it, may not be
the best use of very limited space - although I agree that it's the way to go,
overall.
Noel
>> On 2020, Jun 3, at 1:38 AM, Lars Brinkhoff <lars at nocrew.org> wrote:
>>
>> Lawrence Stewart wrote:
>>> I remember working on getting Arpanet access on an 11/34 running V7
>>> around 1978 or 1979. (SU-ISL). We used an 11/23 as a front end to run
>>> NCP, using a variation of Rand’s code. I wrote some sort of bisync
>>> driver for packet communications between the /23 and the /34, and I
>>> think added an IOCTL or some hack to ask if there was a message ready.
>>> So a polling variety of non-blocking I/O :)
>>
>> Has this, or Rand's code, been preserved?
>>
>> I'm only aware of one Arpanet NCP implementation for Unix that is
>> online, the one from University of Illinois.
> Alas I do not know. I may have some old emails from that era. I will check.
I have not so far come across an NCP Unix that was not based on the UoI code base.
At the time there was a “Rand Unix” that combined the UoI code with a few other
extensions. A 1978 document describes the kernel as:
"UNIX version 6 with modifications some of which are:
- Accounting
- A system call to write end of files on pipes
- A system call which checks whether a process would sleep upon doing a read on either a pipe or tty
- Pseudo teletypes
- The network control program NCP written at the University of Illinois
- A new method of interprocess communication called ports. Ports are similar to pipes but have names and can be accessed by any process.”
The system call to write EOF on pipes is eofp() and the blocking read test is empty(). Both are available in the NCP Unix source code as included on the TUHS Unix Tree page. It could well be that the hack Lawrence refers to above was a modification to empty() to also cover his bisync driver.
I guess empty() is the first step on the path that includes capac()/await() and later select()/poll() as the preferred mechanism to prevent blocking on IPC files (incl. tty’s).
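For readers who want to see the end point of that lineage, here is a small sketch of the "would this read block?" question asked through poll(2). I have not checked empty()'s actual argument list in the NCP Unix sources, so this is only the equivalent query in the later, general mechanism, not a reproduction of the original call:

    /* wouldblock.c - the empty()-style question, asked via poll(2) */
    #include <poll.h>

    /* Returns 1 if a read on fd would block right now, 0 if data (or EOF)
     * is ready, -1 on error - roughly the question empty() answered for
     * pipes and ttys. */
    int would_block(int fd)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int r = poll(&pfd, 1, 0);           /* timeout 0: just ask, don't wait */
        if (r < 0)
            return -1;
        return r == 0;                      /* nothing readable yet */
    }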
OK. Must be off my game... I forgot to tell people about my BSDcan talk
earlier today. It was streamed live, and will be online in a week or
three...
It's another similar to the last two. I've uploaded a version to youtube
until the conference has theirs ready. It's a private link, but should work
for anybody that has it. Now that I've given my talk it's cool to share
more widely... https://www.youtube.com/watch?v=NRq8xEvFS_g
The link at the end is wrong. https://github.com/bsdimp/bsdcan2020-demos is
the proper link.
Please let me know what you think.
Warner
Hi All.
I've put Clem's file up at http://www.skeeve.com/text-processing.tar.bz2.
It's a little over 16 meg and includes more than just the vi stuff.
Enjoy,
Arnold
Michael Stiller <mstiller(a)icloud.com> wrote:
> Hi Clem,
>
> why offline? Other people are also interested. :-)
>
> Cheers,
>
> Michael
>
>
> > On 3. Jun 2020, at 14:33, Clem Cole <clemc(a)ccc.com> wrote:
> >
> > I think so... I'll send you a URL off line.
> >
> > Clem
> >
> > On Wed, Jun 3, 2020 at 1:14 AM <arnold(a)skeeve.com> wrote:
> > Hi.
> >
> > Does anyone have a mirror of the files that once upon a time were
> > at http://alf.uib.no/pub/vi? They were mirrored at
> > ftp://ftp.uu.net/pub/text-processing/vi also.
> >
> > If so, can you please send me a tarball off list (or tell me where
> > I can download a copy from)?
> >
> > Thanks!
> >
> > Arnold
> From: Paul Winalski
> I'm curious as to what the rationale was for Unix to have been designed
> with basic I/O being blocking rather than asynchronous.
It's a combination of two factors, I reckon. One, which is better depends a
lot on the type of thing you're trying to do. For many typical things (e.g.
'ls'), blocking is a good fit. And, as Arnold says, asynchronous I/O is
more complicated, and Unix was (well, back then at least) all about getting
the most bang for the least bucks.
More complicated things do sometimes benefit from asynchronous I/O, but
complicated things weren't Unix's 'target market'. E.g. even though pipes
post-date the I/O decision, they too are a better match to blocking I/O.
> From: Arnold Skeeve
> the early Unixes were on smaller -11s, not the /45 or /70 with split I&D
> space and the ability to address lots more RAM.
Ahem. Lots more _core_. People keep forgetting that we're looking at
decisions made at a time when each bit in main memory was stored in a
physically separate storage device, and having tons of memory was a dream of
the future.
E.g. the -11/40 I first ran Unix on had _48 KB_ of core memory - total!
And that had to hold the resident OS, plus the application! It's no
surprise that Unix was so focused on small size - and as a corollary, on
high bang/buck ratio.
But even in this age of lighting one's cigars with gigabytes of main memory
(literally), small is still beautiful, because it's easier to understand, and
complexity is bad. So it's too bad Unix has lost that extreme parsimony.
> From: Dan Cross
> question whether asynchrony itself remains untamed, as Doug put it, or
> if rather it has proved difficult to retrofit asynchrony onto a system
> designed around fundamentally synchronous primitives?
I'm not sure it's 'either or'; I reckon they are both true.
Noel
OK. I've written a script to take the 2.11BSD pl 195 tape and reverse apply
all the patches that we have to get back to 2.11BSD original.
There are some problems. The biggest one is that ld.c was rewritten during
this series and what it replaced is lost. And the 2.10.1 ld.c isn't what's
in 2.11BSD and the patches to get from 2.10.1 to 2.11 don't seem to be out
there. And the other patches in the series make it clear that patches are
missing. 2.11BSD likely will work with 2.10.1's ld, so this isn't fatal.
There are 122 other files that I recovered from 2.10.1. Almost all of those
are a good guess, if not what's actually in 2.11. Many of these can be
verified via other means. Some of these can be snagged from comp.bugs.2bsd
(that's how as was recovered).
There's a number of small cosmetic changes made via copying, some to get
rid of redundant files, etc. I think these don't matter for the function of
the system, but are small differences from the actual tape that shipped.
I still need to create a tape and try to compile, and then forward-apply
all the patches and create a git repo from it.
All things considered 99% of the files have been recovered at this point.
It's early days, but I've pushed this to github for comments.
http://github.com/bsdimp/mk211bsd
It just assumes you have the Tuhs archive (including the new Usenet
section) and that apout works on your computer. It works on FreeBSD for
sure. No clue about anything else...
Warner
> At around that point in time (I don't have the very _earliest_ code, to get an exact date, but the oldest traces I see [in mapalloc(), below] are from September '78), the CSR group at MIT-LCS (which were the people in LCS doing networking) was doing a lot with asynchronous I/O (when you're working below the reliable stream level, you can't just do a blocking 'read' for a packet; it pretty much has to be asynchronous). I was working in Unix V6 - we were building an experimental 1Mbit/second ring - and there was work in Multics as well.
> I don't think the wider Unix community heard about the Unix work, but our group regularly filed updates on our work for the 'Internet Monthly Reports', which was distributed to the whole TCP/IP experimental community. If you can find an archive of early issues (I'm too lazy to go look for one), we should be in there (although our report will also cover the Multics TCP/IP work, and maybe some other stuff too).
Sounds very interesting!
Looked around a bit, but I did not find a source for the “Internet Monthly Reports” for the late 70’s (rfc-editor.org/museum/ has them for the 1990’s).
In the 1970’s era, it seems that NCP Unix went in another direction, using newly built message and event facilities to prevent blocking. This is described in "CAC Technical Memorandum No. 84, Illinois Inter-Process Communication Facility for Unix.” - but that document appears lost as well.
Ah, well, topics for another day.
It is too big to attach to email.
On Wed, Jun 3, 2020 at 8:39 AM Michael Stiller <mstiller(a)icloud.com> wrote:
> Hi Clem,
>
> why offline? Other people are also interested. :-)
>
> Cheers,
>
> Michael
>
>
> > On 3. Jun 2020, at 14:33, Clem Cole <clemc(a)ccc.com> wrote:
> >
> > I think so... I'll send you a URL off line.
> >
> > Clem
> >
> > On Wed, Jun 3, 2020 at 1:14 AM <arnold(a)skeeve.com> wrote:
> > Hi.
> >
> > Does anyone have a mirror of the files that once upon a time were
> > at http://alf.uib.no/pub/vi? They were mirrored at
> > ftp://ftp.uu.net/pub/text-processing/vi also.
> >
> > If so, can you please send me a tarball off list (or tell me where
> > I can download a copy from)?
> >
> > Thanks!
> >
> > Arnold
>
>
Hi.
Does anyone have a mirror of the files that once upon a time were
at http://alf.uib.no/pub/vi? They were mirrored at
ftp://ftp.uu.net/pub/text-processing/vi also.
If so, can you please send me a tarball off list (or tell me where
I can download a copy from)?
Thanks!
Arnold