All, today after some heroic efforts by numerous people over many years,
Nokia-Alcatel has issued this statement at
https://media-bell-labs-com.s3.amazonaws.com/pages/20170327_1602/statement%…
Statement Regarding Research Unix Editions 8, 9, and 10
Alcatel-Lucent USA Inc. (“ALU-USA”), on behalf of itself and Nokia
Bell Laboratories, agrees, to the extent of its ability to do so, that it
will not assert its copyright rights with respect to any non-commercial
copying, distribution, performance, display or creation of derivative
works of Research Unix® Editions 8, 9, and 10. The foregoing does not
(i) transfer ownership of, or relinquish any, intellectual property rights
(including patent rights) of Nokia Corporation, ALU-USA or any of their
affiliates, (ii) grant a license to any patent, patent application,
or trademark of Nokia Corporation, ALU-USA or any of their affiliates,
(iii) grant any third-party rights or licenses, or (iv) grant any rights
for commercial purposes. Neither ALU-USA nor Nokia Bell Laboratories will
furnish or provide support for Research Unix Editions 8, 9, and 10, and
they make no warranties or representations hereunder, including but not limited
to any warranty or representation that Research Unix Editions 8, 9, and
10 do not infringe any third party intellectual property rights or that
Research Unix Editions 8, 9, and 10 are fit for any particular purpose.
There are some issues around the copyright of third party material in
8th, 9th and 10th Editions Unix, but I'm going to bite the bullet and
make them available in the Unix Archive. I'll post details later today.
Cheers, Warren
> From: Tim Newsham
> Would be great if someone scripted it up to make it dog-simple.
But if people just have to press a button (basically), they won't learn
anything. I guess I'm not understanding the point of the exercise? To say they
have V6 running? So what? All they did was press a button. If it's to
experience a retro-computing environment, well, a person who's never used one
of these older systems is going to be kind of lost - what are they going to
do, type 'ls -ls' and look at the output? Not very illuminating. (On V6,
without learning 'ed', they can't even type in a small C program, and compile
and run it.) Sorry, I don't mean to be cranky, but I'm not understanding the
point.
Noel
> From: Grant Taylor
> However, I've had to teach enough people to know that they need a way to
> boot strap themselves into an environment to start learning.
Right, but wouldn't they learn more from a clear and concise hand-holding
which explains what they are doing and why - 'do this which does that to get
this'?
There is no more a royal road to knowing a system than there is to
mathematics.
> I do consider what (I believe) Warren put together for the UUCP project
> to be a very good start. Simple how to style directions that are easy
> to follow that yield a functional system.
Exactly....
Noel
On 2017-03-28 09:37, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
>
> > From: Johnny Billquist
>
> > the PDP-11 have the means of doing this as well.... If anyone ever
> > wondered about the strangeness of the JSR instruction of the PDP-11, it
> > is precisely because of this.
> > ...
> > I doubt Unix ever used this, but maybe someone know of some obscure
> > inner kernel code that do. :-)
>
> Actually, Unix uses JSR with a non-PC register to hold the return address
> very extensively; it also uses the 'saved PC points to the argument'
> technique, although only in a limited way. (Well, there may have been some
> user-mode commands that were not in C that used it; I don't know about that.)
>
> First, the 'PC points to arguments' style: the device interrupts use that. All
> device interrupt vectors point to code that looks like:
>
> jsr r0, _call
> _iservice
>
> where iservice() is the interrupt service routine. call: is a common
> assembler-language routine that calls iservice(); the return from there goes
> to a point further down in call:, which does the return from interrupt.
Ah. Thanks for that. I hadn't dug into those parts, but that's the kind
of place where I would have suspected it might be, if anywhere.
> A non-PC return-address register is used in every C routine: to save
> space, there is only one copy of the linkage code that sets up the stack
> frame. PDP-11 C, by convention, uses R5 for the frame pointer, so that common
> code (csv) is called with:
>
> jsr r5, csv
>
> which saves the old FP on the stack; CSV does the rest of the work, and jumps
> back to the calling routine, at the address in R5 when csv: is entered. (There's
> a similar routine, cret:, to discard the frame, but it's 'called' with a plain
> jmp.)
Hah! Thinking about it, I actually knew that calling style, but didn't
reflect on it, as you're not passing any arguments in the instruction
stream in that situation.
But it's indeed not using the PC as the register in the call, so I guess
it should count in some way. :-)
Johnny
In some sense the "command subcommand" syntax dates from ar in v1,
though option flags were catenated with the mandatory subcommand.
The revolutionary notion that flags/subcommands might be denoted
by more than one letter originated at PWB (in "find", IIRC).
Doug
> From: Johnny Billquist
> the PDP-11 have the means of doing this as well.... If anyone ever
> wondered about the strangeness of the JSR instruction of the PDP-11, it
> is precisely because of this.
> ...
> I doubt Unix ever used this, but maybe someone know of some obscure
> inner kernel code that do. :-)
Actually, Unix uses JSR with a non-PC register to hold the return address
very extensively; it also uses the 'saved PC points to the argument'
technique, although only in a limited way. (Well, there may have been some
user-mode commands that were not in C that used it; I don't know about that.)
First, the 'PC points to arguments' style: the device interrupts use that. All
device interrupt vectors point to code that looks like:
jsr r0, _call
_iservice
where iservice() is the interrupt service routine. call: is a common
assembler-language routine that calls iservice(); the return from there goes
to a point further down in call:, which does the return from interrupt.
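To make that concrete, the core of call: is schematically something like
this (a from-memory sketch, not the verbatim V6 code, which also has to
deal with priority levels, rescheduling, and so on):

	call:
		mov	r1,-(sp)	/ the jsr already pushed r0; also save
					/ r1, the other register C may clobber
		jsr	pc,*(r0)+	/ call iservice(): the saved PC in r0
					/ points at the _iservice word that
					/ follows the 'jsr r0, _call'
		mov	(sp)+,r1	/ restore the interrupted registers
		mov	(sp)+,r0
		rtt			/ return from interrupt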
A non-PC return-address register is used in every C routine: to save
space, there is only one copy of the linkage code that sets up the stack
frame. PDP-11 C, by convention, uses R5 for the frame pointer, so that common
code (csv) is called with:
jsr r5, csv
which saves the old FP on the stack; CSV does the rest of the work, and jumps
back to the calling routine, at the address in R5 when csv: is entered. (There's
a similar routine, cret:, to discard the frame, but it's 'called' with a plain
jmp.)
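For reference, csv and cret look roughly like this (quoted from memory, so
treat it as a sketch rather than the exact library source):

	csv:
		mov	r5,r0		/ r0 = resume point in the caller
		mov	sp,r5		/ new frame pointer -> the old r5
					/ that the jsr pushed
		mov	r4,-(sp)	/ save the register variables
		mov	r3,-(sp)
		mov	r2,-(sp)
		jsr	pc,(r0)		/ re-enter the caller's body; the
					/ pushed word keeps the frame layout
					/ uniform

	cret:				/ reached by a plain 'jmp cret'
		mov	r5,r1
		mov	-(r1),r4	/ restore the register variables
		mov	-(r1),r3
		mov	-(r1),r2
		mov	r5,sp		/ discard the frame
		mov	(sp)+,r5	/ restore the caller's frame pointer
		rts	pc		/ return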
Noel
Lots of tools now seem to use this strategy: there's some kind of wrapper which has its own set of commands (which in turn might have further subcommands). So for instance
git remote add ...
is a two layer thing.
Without getting into an argument about whether that's a reasonable or ideologically-correct approach, I was wondering what the early examples of this kind of wrapper-command approach were. I think the first time I noticed it was CVS, which made you say `cvs co ...` where RCS & SCCS had a bunch of individual commands (actually: did SCCS?). But I think it's possible to argue that ifconfig was an earlier example of the same thing. I was thinking about dd as well, but I don't think that's the same: they're really options, not commands, I think.
Relatedly, does this style originate on some other OS?
--tim
(I realise that in the case of many of these things, particularly git, the wrapper is just dispatching to other tools that do the work: it's the command style I'm interested in, not how it's implemented.)
On 2017-03-27 04:00, Greg 'groggy' Lehey <grog(a)lemis.com> wrote:
> On Monday, 27 March 2017 at 6:49:30 +1100, Dave Horsfall wrote:
>> And as for subroutine calls on the -8, let's not go there... As I dimly
>> recall, it planted the return address into the first word of the called
>> routine and jumped to the second instruction; to return, you did an
>> indirect jump to the first word. Recursion? What was that?
> This was fairly typical of the day. I've used other machines (UNIVAC,
> Control Data) that did the same. Later models added a second call
> method that stored the return address in a register instead, only
> marginally easier for recursion.
>
> At Uni I was given a relatively simple task to do in PDP-8 assembler:
> a triple precision routine (36 bits!) to clip a value to ensure it
> stayed between two limits. Simple, eh? Not on the PDP-8. Three
> parameters, each three words long. Only one register, no index
> registers. I didn't finish it. Revisiting now, I still don't know
> how to do it elegantly. How *did* the PDP-8 pass parameters?
This is probably extremely off-topic, so I'll keep it short.
This is actually very simple and straightforward on a PDP-8, but it
might seem strange to people used to today's computers.
Essentially, you pass parameters in memory, as a part of the code
stream. Also, the PDP-8 certainly does have index registers.
The first thing one must do is stop thinking of the AC as a register.
The accumulator is the accumulator. Memory is registers.
Some memory locations autoincrement when used indirectly; they are
called index registers.
That said, then. A simple example of a routine passing two parameters
(well, three):
First, the calling sequence:

	CLA
	TAD (42		/ Setup AC with the value 42.
	JMS COUNT
	BUFPTR		/ Address of the buffer.
	BUFSIZ		/ Size of the buffer.
	.		/ Next instruction executed, with AC holding the
	.		/ number of matching words in the buffer.
Now, this routine is expected to count the number of occurrences of a
specific word in a memory buffer of a specific size.
At the call, AC contains the word to search for, the word following the
JMS holds the buffer address, and the word after that holds the buffer
size.
The routine:

COUNT,	0
	CIA
	DCA CHR		/ Save the negative of the word to search for.
	CMA
	TAD I COUNT	/ Get the buffer address argument.
	DCA PTR		/ Setup pointer to the address before the buffer.
	ISZ COUNT	/ Point to next argument.
	TAD I COUNT	/ Get the buffer size argument.
	CIA
	DCA CNT		/ Save negative value of size.
	ISZ COUNT	/ Point the return address past the arguments.
	DCA RESULT	/ Clear out result counter.
LOOP,	TAD I PTR	/ Get next word in buffer.
	TAD CHR		/ Compare to searched-for word.
	SNA CLA		/ Skip if they are not equal; clear AC either way.
	ISZ RESULT	/ Equal. Increment result counter.
	ISZ CNT		/ Increment loop counter.
	JMP LOOP	/ Repeat unless end of buffer.
	CLA		/ All done. Get result.
	TAD RESULT
	JMP I COUNT	/ Done.

PTR=10
CNT=20
CHR=21
RESULT=22
Addresses 10-17 are the index registers, so the TAD I PTR instruction
will autoincrement the pointer every time; the increment happens
before the defer, which is why the initial value should be one less than
the buffer pointer.
Hopefully this gives enough of an idea, but unless you know the PDP-8
well, you might be a little confused by the mnemonics.
As you can see, the return address at the start is used for more than
just doing a return. It's also your argument pointer.
Johnny
> From: Doug McIlroy
> As Erdos would say of a particularly elegant proof, "It comes from the
> Book," i.e. had divine inspiration.
Just to clarify, Erdos felt that a deity (whom he referred to as the 'Supreme
Fascist') was unlikely to exist; his use of such concepts was just a figure of
speech. 'The Book' was sort of a Platonic Ideal. Nice concept, though!
Noel
> From: Dave Horsfall
> And as for subroutine calls on the -8, let's not go there... As I dimly
> recall, it planted the return address into the first word of the called
> routine and jumped to the second instruction; to return, you did an
> indirect jump to the first word.
That do be correct.
That style of subroutine call goes back a _long_ way. IIRC, Whirlwind used
that kind of linkage (alas, I've misplaced my copy of the Whirlwind
instruction manual, sigh - a real treasure).
ISTR there was something about the way Whirlwind did it that made it clear
how it came to be the way it was - IIRC, the last instruction in the
subroutine was normally a 'jump to literal' (i.e. a constant, in the
instruction), and the Whirlwind 'jump to subroutine' stored the return address
in a register; there was a special instruction (normally the first one in any
subroutine) that stored the low-order N bits of that register in the literal
field of the last instruction: i.e. self-modifying code.
The PDP-6 (of which the PDP-10 was a clone) was on the border of that period;
it had both types of subroutine linkage (store the return in the destination,
and jump to dest+1; and also push the return on the stack).
Noel
> From: Arthur Krewat
>> The PDP-6 (of which the PDP-10 was a clone)
> Um. More like a natural progression.
> Like 8086->80186->80286->80386->80486->...
No, the PDP-6 and PDP-10 have identical instruction sets, and in general, a
program that will run on one will run on the other. See the "decsystem10 System
Reference Manual" (DEC-10-XSRMA-A-D), pg. 2-72, which provides a 7-instruction
code fragment that allows a program to work out whether it's running on a PDP-6, a
KA10, or a KI10.
The KA10 is a re-implementation (using mostly B-series Flip Chips) of the
PDP-6 (which was built out of System Modules - the predecessor to Flip Chips).
Noel
Nudging the thread back toward Unix history:
> I really
> like how blindingly obvious a lot of the original Unix code was. Not saying
> it was all that way, but a ton of it was sort of what you would imagine it
> to be before you saw it. Which means I understood it and could bugfix it.
That's a facet of Thompson's genius--code so clean and right that,
having seen it, one cannot imagine it otherwise. Odds are, though,
that the same program from another hand would not have the same
aura of inevitability. As Erdos would say of a particularly elegant
proof, "It comes from the Book," i.e. had divine inspiration.
Doug
OT, but of interest to a few people here :-)
The venerable PDP-8 was introduced on this day in 1965 (or tomorrow if you're
on the wrong side of the date line). It was the first computer I ever
used, back around 1970 (I think I'd just left school and was checking out
the local University's computer department, with a view to majoring in
Computer Science (which I did)).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I think Noel put it very well; when I saw the read()/write() vs
recvfrom()/sendto() stuff mentioned earlier, I was going to say that part of
the contract of read()/write() is that they are stream oriented thus things
like short reads and writes should behave in a predictable way and have the
expected recovery semantics. So having it be boundary preserving or having
a header on the data would in my view make it not read()/write() even if
the new call can be overlaid on read()/write() at the ABI level. I think
the contract of a syscall is an important part of its semantics even if it
may be an "unwritten rule". However, Noel said it better: If it's not
file-like then a file-like interface may not be appropriate.
Having said that, Kurt raises an equally valid point which is that the
"every file is on a fast locally attached harddisk" traditional unix
approach has serious problems. Not that I mind the simplicity: contemporary
systems had seriously arcane file abstractions that made file I/O useless
to all but the most experienced developer. I come from a microcomputer
background and I am thinking of Apple II DOS 3.3, CP/M 2.2 and its FCB
based interface and MSDOS 1.x (same). When MSDOS 2.x came along with its
Xenix-subset file API it was seriously a revelation to me and others.
Microcomputers aside, my understanding is IBM 360/370 and contemporary DEC
OS's were also complicated to use, with record based I/O etc.
So it's hard to criticize unix's model, but on the other hand the lack of
any nonblocking file I/O and the contract against short reads/writes (but
only for actual files) and the lack of proper error reporting or recovery
due to the ASSUMPTION of a write-back cache, whether or not one is actually
used in practice... makes the system seriously unscalable; in particular,
as Kurt points out, the system is forced to try to hide the network
socket-like characteristics of files that are either on slow DMA or
PIO/interrupt based devices (think about a harddisk attached by serial
port, something that I actually encountered on a certain cash register
model and had to develop for back in the day), or an NFS based file, or
maybe a file on a SAN accessed by iSCSI, etc.
Luckily I think there is an easy fix: have the OS export a more socket-like
interface to files and provide a userspace compatibility library to do
things like detecting short reads/writes and retrying them, and/or blocking
while a slow read or write executes. It would be slightly tricky getting
the EINTR semantics correct if the second or subsequent call of a multipart
read or write was interrupted, but I think possible.
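Concretely, the read half of such a wrapper might look something like this
(full_read is a made-up name, and a full_write would be symmetrical):

	#include <errno.h>
	#include <sys/types.h>
	#include <unistd.h>

	/* Read exactly n bytes unless EOF or a hard error intervenes,
	   retrying short reads and restarting after EINTR. */
	ssize_t full_read(int fd, void *buf, size_t n)
	{
		size_t done = 0;

		while (done < n) {
			ssize_t r = read(fd, (char *)buf + done, n - done);

			if (r < 0) {
				if (errno == EINTR)
					continue;	/* interrupted: retry */
				return done ? (ssize_t)done : -1;
			}
			if (r == 0)
				break;			/* EOF */
			done += r;
		}
		return (ssize_t)done;
	}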
On the other hand I would not want to change the sockets API one bit, it is
perfect. (Controversial I know, I discussed this in detail in a recent
post).
Nick
On Mar 26, 2017 12:46 PM, "Kurt H Maier" <khm(a)sciops.net> wrote:
On Sat, Mar 25, 2017 at 09:35:20PM -0400, Noel Chiappa wrote:
>
> For instance, files, as far as I know, generally don't have timeout
> semantics. Can the average application that deals with a file, deal
> reasonably with the fact that sometimes one gets half-way through the
> 'file' - and things stop working? And that's a _simple_ one.
The unwillingness to even attempt to solve problems like these in a
generalized and consistent manner is a source of constant annoyance to
me. Of course it's easier to pretend that never happens on a "real"
file, since disks never, ever break.
Of course, there are parts of the world that don't make these
assumptions, and that's where I've put my career, but the wider IT
industry still likes to pretend that storage and networking are
unrelated concepts. I've never understood why.
khm
Hi all, I don't mind a bit of topic drift but getting into generic OS
design is a bit too far off-topic for a list based on Unix Heritage.
So, back on topic please!
Cheers, Warren
> From: Ron Minnich
> There was no shortage of people at the time who were struggling to find
> a way to make the Unix model work for networking ... It didn't quite
> work out
For good reason.
It's only useful to have a file-name for a thing if... the thing acts
like a file - i.e. you can plug that file-name into other places you might use
a file-name (e.g. '> {foo}' or 'ed <foo>', etc, etc).
There is very little in the world of networking that acts like a file. Yes,
you can go all hammer-nail, and use read() and write() to get data back and
forth, and think that makes it a file - but it's not.
For instance, files, as far as I know, generally don't have timeout
semantics. Can the average application that deals with a file, deal reasonably
with the fact that sometimes one gets half-way through the 'file' - and things
stop working? And that's a _simple_ one. How does a file abstraction match
to a multi-cast lossy (i.e. packets may be lost) datagram group?
For another major point (and the list goes on, I just can't be bothered to go
through it all), there's usually all sorts of other higher-level protocol in
there, so only specialized applications can make use of it anyway. Look at
HTTP: there's specialized syntax one has to spit out to say what file you
want, and the HTML files you get back from that can generally only be usefully
used in a browser.
Etc, etc.
Noel
Maybe of interest, here is something spacewarish..
|I was at Berkeley until July 1981.
I had to face a radio program which purported that..
Alles, was die digitale Kultur dominiert, haben wir den Hippies
zu verdanken.
Everything that dominates digital culture, we owe to the
Hippies
This "We owe it all to the Hippies" as well as "The real legacy of
the 60s generation is the Computer Revolution" actually in English
on [1] (talking about the beginning of the actual broadcast), the
rest in German. But it also lead(s) (me) to an article of the
Rolling Stone, December 1972, on "Fanatic Life and Symbolic Death
Among the Computer Bums"[2].
[1] http://www.swr.de/swr2/programm/sendungen/essay/swr2-essay-flowerpowerdaten…
[2] http://www.wheels.org/spacewar/stone/rolling_stone.html
That makes me quite jealous of your long hair, i see manes
streaming in the warm wind of a golden Californian sunset.
I don't think the assessment is right, though, i rather think it
is a continuous progress of science and knowledge, then maybe also
pushed by on-all-fronts efforts like "bringing a man to the moon
by the end of the decade", and, sigh, SDI, massive engineer and
science power concentration, etc. And crumbs thereof approaching
the general public, because of increased knowledge in the
industry. (E.g., the director of the new experimental German
fusion reactor that finally sprang into existence claimed
something like "next time it will not be that expensive due to
what everybody has learned".) And hunger for money, of course,
already in the 70s we had game consoles en masse in Italy, for
example, with Pacman, Donkey Kong and later then with Dragon's Lair
or whatever its name was ("beautiful cool graphics!" i recall, though
on the street there were Italian peacocks, and meaning the
birds!).
--steffen
> From: Random832
> Does readlink need to exist as a system call? Maybe it should be
> possible to open and read a symbolic link - using a flag passed to open
What difference does it make? The semantics are the same, only the details of the
syntax are different.
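To make that concrete (O_SYMLINK below is hypothetical, just standing in
for whatever the flag would be called):

	#include <fcntl.h>
	#include <limits.h>
	#include <unistd.h>

	char buf[PATH_MAX];
	ssize_t n;

	/* today's system call: */
	n = readlink("/tmp/link", buf, sizeof buf - 1);

	/* the suggested alternative: */
	int fd = open("/tmp/link", O_RDONLY | O_SYMLINK);
	n = read(fd, buf, sizeof buf - 1);
	close(fd);

Either way the kernel hands back the link's target; only the way you ask
for it differs.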
Noel
On 3/23/17, Larry McVoy <lm(a)mcvoy.com> wrote:
>
> I only know a tiny amount about Plan 9, never dove into deeply. QNX,
> on the other hand, I knew quite well. I was pretty good friends with
> one of the 3 or 4 people who were allowed to work on the microkernel, Dan
> Hildebrandt. He died in 1998, but before then he and I used to call each
> other pretty often to talk about kernel stuff, how stuff ought to be done.
> The calls weren't jerk off sessions, they were pretty deep conversations,
> we challenged each other. I was very skeptical about microkernels,
> I'd seen Mach and found it lacking, seen Minix and found it lacking
> (though that's a little unfair to Minix compared to Mach). Dan brought
> me around to believing in the microkernel but only if the kernel was
> actually a kernel. Their kernel didn't use up a 4K instruction cache,
> it left room for the apps and the processes that did the rest. That's
> why only a few people were allowed to work on the kernel, they counted
> every single instruction cache footprint.
>
> So tell me more about your OS, I'm interested.
Where do I start? I've got much of the design planned out. I've
thought about the design for many years now and changed my plans on
some things quite a few times. Currently the only code I have is a
bootloader that I've been working on somewhat intermittently for a
while now, as well as a completely untested patch for seL4 to add
support for QNX-ish single image boot. I am planning to borrow Linux
and BSD code where it makes sense, so I have less work to do.
It will be called UX/RT (for Universally eXtensible Real Time
operating system, although it's also kind of a nod to QNX originally
standing for Quick uNiX), and it will be primarily intended as a
workstation and embedded OS (although it would also be good for
servers where security is more important than I/O throughput, and also
for HPC). It will be a microkernel-based multi-server OS.
Like I said before, it will superficially resemble QNX in its general
architecture (for example, much like QNX, the lowest-level user-mode
component will be a single root server called "proc", which will
handle process management and filesystem namespace management),
although the specifics of the IPC model will be somewhat different. I
will be using seL4 and/or Rux as the microkernel (so unlike that of
QNX, UX/RT's process server will be a completely separate program).
proc and most other first-party components will be mostly written in
Rust rather than C, although there will still be a lot of third-party
C code. The network stack, disk filesystems, and graphics drivers will
be based on the NetBSD rump kernel and/or LKL (which is a
"librarified" version of the Linux kernel; I'll probably provide both
Linux and NetBSD implementations and allow switching between them and
possibly combining them) with Rust glue layers on top. For
performance, the disk drivers and disk filesystems will run within the
same server (although there will be one disk server per host adapter),
and the network stack will also be a single server, much like in QNX
(like QNX, UX/RT will mostly avoid intermediary servers and will
usually follow a process-per-subsystem architecture rather than a
process-per-component one like a lot of other multi-server OSes, since
intermediary servers can hurt performance).
As I said before, UX/RT will take file-oriented architecture even
further than Plan 9 does. fork() and some thread-related APIs will be
pretty much the only APIs implemented as non-file-based primitives.
Pretty much the entire POSIX/Linux API will be implemented although
most non-file-based system calls will have file-based implementations
underneath (for example, getpid() will do a readlink() of /proc/self
to get the PID). Even process memory like the process heap and stack
will be implemented as files in a per-process memory filesystem (a bit
like in Multics) rather than being anonymous like on most other OSes.
ioctl() will be a pure library function for compatibility purposes,
and will be implemented in terms of read() and write() on a secondary
file.
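To illustrate the getpid() case (purely notional, since none of this
exists yet):

	#include <stdlib.h>
	#include <sys/types.h>
	#include <unistd.h>

	/* getpid() as a pure library call: /proc/self is a symlink
	   whose target is the current PID in decimal. */
	pid_t getpid(void)
	{
		char buf[16];
		ssize_t n = readlink("/proc/self", buf, sizeof buf - 1);

		if (n < 0)
			return (pid_t)-1;
		buf[n] = '\0';
		return (pid_t)atol(buf);
	}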
Unlike other microkernel-based OSes, UX/RT won't provide any way to
use message passing outside of the file-based API. read() and write()
will use kernel calls to communicate directly with the process on the
other end (unlike some microkernel Unices in which they go through an
intermediary server). There will be APIs that expose the underlying
transport (message registers for short messages and a shared per-FD
buffer for long ones), although they will still operate on file
descriptors, and read()/write() and the low-level messaging APIs will
all use the same header format so that processes don't have to care
which API the process on the other
end uses (unlike on QNX where there are a few different incompatible
messaging APIs). There will be a new "message special" file type that
will preserve message boundaries, similar to SEQPACKET Unix-domain
sockets or SCTP (these will be ideal for RPC-type APIs).
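For comparison, the closest existing analogue with today's API:

	#include <sys/socket.h>

	int sv[2];

	/* An AF_UNIX SOCK_SEQPACKET pair preserves message boundaries:
	   a 100-byte write() on sv[0] is read() from sv[1] as exactly
	   one 100-byte message, never split or coalesced. */
	socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv);

UX/RT's message-special files are meant to behave like that, but as
ordinary names in the filesystem.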
File descriptors will be implemented as sets of kernel capabilities,
meaning that servers won't have to check permissions like they do on
QNX. The API for servers will be somewhat socket-like. Each server
will listen on a "port" file in a special filesystem internal to the
process server, sort of like /mnt on Plan 9 (although it will be
possible for servers to export ports within their own filesystems as
well). Reads from the port will produce control messages which may
transfer file descriptors. Each client file descriptor will have a
corresponding file descriptor on the server side, and servers will use
a superset of the regular file API to transfer data. Device numbers as
such will not exist (the device number field in the stat structure
will be a port ID that isn't split into major and minor numbers), and
device files will normally be exported directly by their server,
rather than residing on a filesystem exported by one driver but being
serviced by another as in conventional Unix. However, there will be a
sort of similar mechanism, allowing a server to export "firm links"
that are like cross-filesystem hard links (similar to in QNX).
Per-process namespaces like in Plan 9 will be supported. Unlike in
Plan 9, it will be possible for processes with sufficient privileges
to mount filesystems in the namespaces of other processes (to allow
more flexible scoping of mount points). Multiple mounts on one
directory will produce a union like in both QNX and Plan 9. Binding
directories as in Plan 9 will also be supported. In addition to the
per-process root on / there will also be a global root directory on //
into which filesystems are mounted. The per-process name spaces will
be constructed by binding directories from // into the / of the
process (neither direct mounts under / nor bindings in // will be
supported, but bindings between parts of / will of course be
supported).
The security model will be based on a per-process default-deny ACL
(which will be purely in memory; persisting process ACLs will be
implemented with an external daemon). It will be possible for ACL
entries to explicitly specify permissions, or to use the permissions
from the filesystem with (basically) normal Unix semantics. It will
also be possible for an entry to be a wildcard allowing access to all
files in a directory. Unlike in conventional Unix, there will be no
root-only system calls (anything security-sensitive will have a
separate device file to which access can be controlled through process
ACLs), and running as root will not automatically grant a process full
privileges. The suid and sgid bits will have no effect on executables
(the ACL management daemon will handle privilege escalation instead).
The native API will be mostly compatible with that of Linux, and a
purely library-based Linux compatibility layer will be available. The
only major thing the Linux compatibility layer specifically won't
support will be stuff dealing with logging users in (since it won't be
possible to revert to traditional Unix security, and utmp/wtmp won't
exist). The package manager will be based on dpkg and apt with hooks
to make them work in a functional way somewhat like Nix but using
per-process bindings, allowing for multiple environments or universes
consisting of different sets of packages (environments will be able to
be either native UX/RT or Linux compatibility environments, and it
will be possible to create Linux environments that aren't managed by
the UX/RT package manager to allow compatibility environments for
non-dpkg Linux distributions).
The init system will have some features in common with SMF and
systemd, but unlike those two, it will be modular, flexible, and
lightweight. System initialization such as checking/mounting
filesystems and bringing up network interfaces will be script-driven
like in traditional init systems, whereas starting daemons will be
done with declarative unit files that will be able to call (mostly
package-independent) hook scripts to set up the environment for the
daemon.
Initially I will use X11 as the window system, but I will replace it
with a lightweight compositing window server that will export
directories providing a low-level DRI-like interface per window.
Unlike a lot of other compositing window systems, UX/RT's window
system will use X11-style central client-side window management. I was
thinking of using a default desktop environment based on GNUstep
originally but I'm not completely sure if I'll still do that (a long
time ago I had wanted to put together a Mac OS X-like or NeXTStep-like
Linux distribution using a GNUstep-based desktop, a DPS-based window
server, and something like a fork of mkLinux with a stable module ABI
for a kernel, but soon decided I wanted to write a QNX-like OS
instead).
>
> Sockets are awesome but I have to agree with you, they don't "fit".
> Seems like they could have been plumbed into the file system somehow.
>
Yeah, the almost-but-not-quite-file-based nature of the socket API is
my biggest complaint about it. UX/RT will support the socket API but
it will be implemented on top of the normal file system.
> Can't speak to your plan 9 comments other than to say that the fact
> that you are looking for provided value gives me hope that you'll get
> somewhere. No disrespect to plan 9 intended, but I never saw why it
> was important that I moved to plan 9. If you do something and there
> is a reason to move there, you could be worse but still go farther.
I'd say a major problem with Plan 9 is the way it changes things in
incompatible ways that provide little advantage over the traditional
Unix way of doing things (for example, instead of errno, libc
functions set a pointer to an error string, which I don't
think provides enough of a benefit to break compatibility). Another
problem is that some facilities Plan 9 provides aren't general enough
(e.g. the heavy focus on SSI clustering, which never really was widely
adopted, or the rather 80s every-window-is-a-terminal window system).
UX/RT will try to be compatible with conventional Unices and
especially Linux wherever it is reasonable, since lack of applications
would significantly hold it back. It will also try to be as general as
possible without overcomplicating things.
All, I'm setting up a uucp site 'tektronix'. When I send e-mail, I'm seeing
this error:
ASSERT ERROR (uux) pid: 235 (3/24-00:09) CAN'T OPEN D.tektronX00D0 (0)
Something seems to be trimming the hostname to seven chars. If I do:
# hostname
tektronix
Thanks, Warren
On Fri, Mar 24, 2017 at 4:08 PM, Andy Kosela <andy.kosela(a)gmail.com> wrote:
>
> [snip]
> Dan, that was an excellent post.
>
Thanks! I thought it was rather too long/wordy, but I lacked the time to
make it shorter.
I always admired the elegance and simplicity of Plan 9 model which indeed
> seem to be more UNIX like than todays BSDs and Linux world.
>
> The question though remains -- why has it not been more successful? The
> adoption of Plan 9 in the real world is practically zero nowadays and even
> its creators like Rob Pike moved on to concentrate on other things, like
> golang.
>
I think two big reasons and one little one.
1. It wasn't backwards compatible with the rest of the world and forced you
to jump headlong into embracing a new toolset. That is, there was no
particularly elegant way to move gradually to Plan 9: you had to adopt it
all from day one or not at all. That was a bridge too far for most. (Yeah,
there were some shims, but it was all rather hacky.)
2. Relatedly, it wasn't enough of an improvement over its predecessor to
pull people into its orbit. Are acme or sam really all that much better
than vi or emacs? Wait, don't answer that...but the reality is that people
didn't care enough whether they were or not. The "everything is a
file(system)" idea is pretty cool, but we've already had tools that worked
then and work now. Ultimately, few people care how elegant the abstractions
are or how well the kernel is implemented.
And the minor issue: The implementation. Plan 9 was amazing for when it was
written, but now? Not so much.
I work on two related kernels: one that is directly descended from Plan 9
(Harvey, for those interested) and one that's borrowed a lot of the code
(Akaros) and in both we've found major, sometimes decades old bugs. There
are no tests, and there are subtle race conditions or rarely tickled bugs
lurking in odd places. Since the system is used so little, these don't
really get shaken out the way they do in Linux (or to a lesser extent the
BSDs or commercial Unixes). In short, some code is better than other code
and while I'd argue that the median quality of the implementation is
probably higher than that of Linux or *BSD in terms of elegance and
understandability, it's not much higher and it's certainly buggier.
And the big implementation issue is lack of hardware support. I stood up
two Plan 9 networks at work for Akaros development and we ran into major
driver issues with the ethernets that took a week or two to sort out. On
the other hand, Linux just worked.
Eventually, one of those networks got replaced with Linux and the other is
probably headed that way. In fairness, this has to do with the fact that no
one besides Ron and I was interested in using them or learning how they
work: people *want* Linux and the idea that there's this neat system out
there for them to explore and learn about the cool concepts it introduced
just isn't a draw. I gave a presentation on Plan 9 concepts to the Akaros
team a year and a half or so ago and a well-known figure in the Linux
community who was working with us at the time had only to say, "the user
interface looks like it's from 1991." The rest of it didn't interest him
at all: the CPU command usually kind of blows people's minds, but after I
went through the innards of it the response was, "why not just use SSH?"
I've had engineers ask me why Plan 9 used venti and didn't "just use git"
(git hadn't been invented yet). It's somewhat lamentable, but it's also
reality.
- Dan C.