> From: Clem Cole
> Eric Schienbrood
> .. Noel might remember his MIT moniker
No, alas; and I tried 'finger Schienbrood@lcs.mit.edu' and got no result.
Maybe he was in some other part of MIT, not Tech Sq?
> From: Arnold Skeeve
> Here too I think stuff written at ATT got out through Berkeley. (SCCS)
That happened at MIT, too - we had SCCS quite early (my MIT V6 manual has
it), plus all sorts of other stuff (e.g. TROFF).
I think some of it may have come through Jon Sieber, who, while he was in high
school, had been part of (IIRC) a Scout troop which had some association with
Bell Labs, and continued to have contacts there after he became an MIT
undergrad.
Noel
> From: Peter Jeremy <peter@rulingia.com>
> Why were the original read(2) and write(2) system calls written to
> offer synchronous I/O only?
A very interesting question (to me particularly; see below). I don't think
any of the Unix papers answer this question, do they?
> It's relatively easy to create synchronous I/O functions given
> asynchronous I/O primitives but it's impossible to do the opposite.
Indeed, and I've seen operating systems (e.g. a real-time PDP-11 OS I worked
with a lot called MOS) that did that.
I actually did add asynchronous I/O to V6 UNIX, for use with very early
Internet networking software being done at MIT (in a user process). Actually,
it wasn't just asynchronous, it was _raw_ asynchronous I/O! (The networking
device was DMA, and the s/w did DMA directly into the user process' memory.)
The code allowed more than one outstanding I/O request, too. (So the
input could be re-enabled on the device ASAP, without having to wake up a
process, have it run, do a new read call, etc.)
We didn't redo the whole Unix I/O system to support/use async I/O throughout,
though; I just kind of warted it onto the side. (IIRC, it notified the user
process via a signal that the I/O had completed; the user software then had
to do an sgtty() call to get the transfer status, size, etc.)
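For flavour, here's a rough modern analogue of that notify-then-query
pattern, using the much later O_ASYNC/SIGIO machinery; note that SIGIO
signals readiness rather than DMA completion, so this is only a sketch
of the shape, not our actual interface:

    #define _DEFAULT_SOURCE              /* for O_ASYNC on glibc */
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t io_ready;

    static void on_sigio(int sig) { (void)sig; io_ready = 1; }

    int main(void)
    {
        char buf[512];
        struct sigaction sa;
        sigset_t block, empty;

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigio;
        sigaction(SIGIO, &sa, NULL);

        /* Ask the kernel to send SIGIO when fd 0 has data. */
        fcntl(0, F_SETOWN, getpid());
        fcntl(0, F_SETFL, fcntl(0, F_GETFL) | O_ASYNC | O_NONBLOCK);

        /* Sleep, race-free, until the signal arrives. */
        sigemptyset(&block);
        sigaddset(&block, SIGIO);
        sigprocmask(SIG_BLOCK, &block, NULL);
        sigemptyset(&empty);
        while (!io_ready)
            sigsuspend(&empty);

        /* Like the follow-up sgtty()-style call: fetch data/status now. */
        ssize_t n = read(0, buf, sizeof buf);
        printf("read %zd bytes after SIGIO\n", n);
        return 0;
    }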
Anyway, back to the original topic: I don't want to speculate (although I
could :-); perhaps someone who was around 'back then' can offer some insight?
If not, time for speculation! :-)
Noel
Why were the original read(2) and write(2) system calls written to offer
synchronous I/O only? It's relatively easy to create synchronous I/O
functions given asynchronous I/O primitives but it's impossible to do the
opposite.
Multics (at least) supported asynchronous I/O, so the concept wasn't novel.
And any multi-tasking kernel has to support asynchronous I/O internally, so
suitable code exists in the kernel.
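To make the first point concrete, here is a sketch of a synchronous
read built from asynchronous primitives, using the modern POSIX AIO
interface purely as an illustration:

    #include <aio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    /* Synchronous read from asynchronous parts: start the transfer,
     * then block until it finishes.  The reverse construction is
     * impossible: once read(2) blocks, the caller has no thread of
     * control left with which to do anything else. */
    ssize_t sync_read(int fd, void *buf, size_t n, off_t off)
    {
        struct aiocb cb;
        const struct aiocb *list[1] = { &cb };

        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = n;
        cb.aio_offset = off;

        if (aio_read(&cb) == -1)              /* queue the request */
            return -1;
        while (aio_error(&cb) == EINPROGRESS) /* wait for completion */
            aio_suspend(list, 1, NULL);
        return aio_return(&cb);               /* fetch transfer status */
    }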
--
Peter Jeremy
As I was dropping off to sleep last night, I wondered why the superuser
account on Unix is called root.
There's a hierarchy of directories and files beginning at the tree root /.
There's a hierarchy of processes rooted with init. But there's no hierarchy
of users, so why the moniker "root"?
Any ideas?
Cheers, Warren
> Did any Unix or Unix like OS ever zero fill on realloc?
> On zero fill, I doubt many did that. Maybe really early on, when memory
> was small.
This sparks reminiscence. When I wrote an allocation strategy somewhat
more sophisticated than the original alloc(), I introduced realloc() and
changed the error return from -1 to the honest pointer value 0. The
latter change compelled a new name; "malloc" has been with us ever since.
To keep the per-byte cost of allocation low, malloc stuck with alloc's
nonzeroing policy. The minimal extra code to handle calls that triggered
sbrk had the startling property that five passes through the arena might
be required in some cases--not exactly scalable to giant virtual address
spaces!
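With 0 rather than -1 as the error return, the defensive idiom for
growing a buffer looks like this (a minimal sketch; the helper name is
invented here):

    #include <stdlib.h>

    /* realloc returns 0 on failure but leaves the old block intact,
     * so keep the old pointer until the call is known to succeed. */
    char *grow(char *p, size_t newsize)
    {
        char *q = realloc(p, newsize);
        if (q == 0)
            free(p);    /* this sketch abandons the data on failure */
        return q;
    }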
It's odd that the later introduction of calloc() as a zeroing malloc()
has never been complemented by a similar variant of realloc().
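A user-level approximation exists only if the caller supplies the old
size, since malloc gives no portable way to recover it. A hypothetical
sketch (name and interface invented here):

    #include <stdlib.h>
    #include <string.h>

    /* Zeroing counterpart to realloc(): newly grown bytes are cleared,
     * just as calloc() clears a fresh allocation. */
    void *rezalloc(void *p, size_t oldsize, size_t newsize)
    {
        void *q = realloc(p, newsize);
        if (q != 0 && newsize > oldsize)
            memset((char *)q + oldsize, 0, newsize - oldsize);
        return q;
    }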
> Am I the only one that remembers realloc() being buggy on some systems?
I've never met a particular realloc() bug, but realloc does inherit the
portability bug that Posix baked into malloc(). Rob Pike and I
requested that malloc(0) be required to return a pointer distinct from
any live pointer. Posix instead allowed an undefined choice between
that behavior and an error return, confounding it with the out-of-memory
indication. Maybe it's time to right the wrong and retire "malloc".
The name "alloc" might be recycled for it. It could also clear memory
and obsolete calloc().
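The confound is easy to exhibit; under the Posix wording this test
cannot tell a legitimate zero-length allocation from exhaustion:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *p = malloc(0);
        /* Posix permits either result, so 0 is ambiguous here:
         * out of memory, or merely an empty object? */
        if (p == 0)
            printf("malloc(0) returned 0 -- error or empty?\n");
        else
            printf("malloc(0) returned distinct pointer %p\n", p);
        free(p);
        return 0;
    }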
Doug
Dave Horsfall:
Today is The Day of the Programmer, being the 0x100'th day of the year.
===
Are you sure you want to use that radix as your standard?
You risk putting a hex on our profession.
Norman Wilson
Toronto ON
> Today is The Day of the Programmer, being the 0x100'th day of the year.
Still further off topic, but it reminds me of a Y2K incident circa 1960.
Our IBM 7090 had been fitted with a homegrown time-of-day clock (no, big
blue did not build such into their machines back then). The most significant
bits of the clock registered the day of the year. On day 0x100 the clock
went negative and the system went wild.
Doug
Hi there,
we just restored our PDP-11/23+, rebuilding a new PSU around a
normal PC PSU and adding the real-time clock that some
operating systems need.
we're wondering which UNIX versions might run on it :)
http://museo.freaknet.org/en/restauro-pdp1123plus/
bye,
Gabriele
--
[ ::::::::: 73 de IW9HGS : http://freaknet.org/asbesto ::::::::::: ]
[ Freaknet Medialab :: Poetry Hacklab : Dyne.Org :: Radio Cybernet ]
[ DO NOT WRITE TO ME USING ACCENTED LETTERS - DO NOT SEND ATTACHMENTS ]
[ *I DELETE* EMAIL > 100K, ATTACHMENTS, HTML, M$-WORD DOC and SPAM ]
Today is The Day of the Programmer, being the 0x100'th day of the year.
Take a bow, all programmers...
Did you know that it's an official professional holiday in Russia?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I'll support shark-culling when they have been observed walking on dry land.