> My experience is that the problems involved in making a program faster are
> often quite interesting and fun to work on. But the problems making things
> fit in a small space are, IMHO, really deadly.
First make things "as simple as possible, but no simpler" (Einstein). Ken and
Dennis not only cut out fat, they also found generalizations that combined
traditionally disparate features, so the new whole was smaller (and more
comprehensible) than the sum of the old parts. The going gets tough in the
presence of constraints on space or time. Steve's perception, I think, is
colored by the experience of facing hard limits on space, but not on time.
Describing one complication of hard time constraints, John Kelly used to
say that the Packard Bell 250 was "the only machine I ever used where
you transfer to a time of day rather than a memory location". (The delay-
line memory had two instruction formats: one was operation + address-of-
next-instruction, the other was just the operation--the next instruction
being whatever came out of the delay line when the operation ended. The
latter mode minimized both execution time and code space, but the attention
one had to pay to time was, to borrow Steve's phrase, "really deadly".)
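A toy C model makes the pain concrete (the word count and operation times below are
invented purely for illustration; they are not the PB 250's real parameters): the
successor of every instruction is determined by when the operation finishes, so one
lays out code in time as much as in space.

	#include <stdio.h>

	/* Invented parameters, for illustration only: a delay line that
	   circulates LINE_WORDS words, one word emerging per time unit. */
	#define LINE_WORDS 256

	/* In the implicit-next format the machine takes whatever word emerges
	   from the line when the current operation finishes, so the programmer
	   must have parked the next instruction in exactly that slot. */
	static int next_slot(int slot, int op_duration)
	{
		return (slot + op_duration) % LINE_WORDS;
	}

	int main(void)
	{
		int durations[] = { 3, 17, 5 };		/* made-up operation times */
		int slot = 0;
		int i;

		for (i = 0; i < 3; i++) {
			printf("op at slot %3d takes %2d units; its successor must sit at slot %3d\n",
			    slot, durations[i], next_slot(slot, durations[i]));
			slot = next_slot(slot, durations[i]);
		}
		return 0;
	}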
Design tradeoffs for efficiency pose an almost moral conundrum:
whether to make things fast or make them easy. For example, the
classic Unix kernel typically did table lookup by linear search,
whereas Linux (when I last looked) typically used binary search.
The price of Linux's choice is that one must take care to keep
the tables sorted. Heavy discipline has to be imposed on making
entries and deletions.
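To make the trade concrete, here is an illustrative C sketch (not code from either
kernel; the table and names are made up): the linear scan tolerates a table in any
order, while binary search is only correct if every insertion and deletion keeps the
table sorted.

	#define NTAB 64

	struct entry {
		int key;
		int value;
	};

	static struct entry tab[NTAB];
	static int ntab;

	/* Classic-kernel style: a linear scan; the table may be in any order. */
	struct entry *lookup_linear(int key)
	{
		for (int i = 0; i < ntab; i++)
			if (tab[i].key == key)
				return &tab[i];
		return 0;
	}

	/* Binary search is faster, but only correct on a sorted table. */
	struct entry *lookup_binary(int key)
	{
		int lo = 0, hi = ntab - 1;

		while (lo <= hi) {
			int mid = (lo + hi) / 2;
			if (tab[mid].key == key)
				return &tab[mid];
			if (tab[mid].key < key)
				lo = mid + 1;
			else
				hi = mid - 1;
		}
		return 0;
	}

	/* ...which is where the heavy discipline comes in: every insertion
	   (and deletion) must preserve the sorted order. */
	int insert_sorted(int key, int value)
	{
		if (ntab == NTAB)
			return -1;
		int i = ntab;
		while (i > 0 && tab[i - 1].key > key) {
			tab[i] = tab[i - 1];	/* shift to make room */
			i--;
		}
		tab[i].key = key;
		tab[i].value = value;
		ntab++;
		return 0;
	}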
Doug
What do you mean plan9 is dead? It's even possible to make a Facebook post
from inside plan9 nowadays!
https://twitter.com/bigatojj/status/949838932841201664
See vmx(1)
On 3 Feb 2018 23:25, "Steve Simon" <steve(a)quintile.net> wrote:
Hi,
I have to take issue with the “plan9 is dead” statement,
I agree it seems to be on a downward spiral, but there are a few who fight on.
[sent from my plan9]
-Steve
You can use make as much as you like; Go just doesn't need it. You can use
Go to fetch code from the internet if you like, or you can do it yourself if
you prefer.
Regarding the "hardwired" directories, you can change it through an
environment variable.
On 3 Feb 2018 23:20, "Lyndon Nerenberg" <lyndon(a)orthanc.ca> wrote:
That's the second endorsement I've seen for Go; I guess I should learn it.
In the scheme of "current" languages, Go is pretty good. With two major
caveats, IMO:
1) The build system. It doesn't work with make(1). That makes it a
non-starter for anything other than trivial projects at $WORK. While I
appreciate the arguments for the apparent simplicity of the "go" command,
that doesn't work for us. Which would have been fine, but for the entirely
antagonistic bent they have taken against being able to build Go programs
with make(1). Our build environment entirely precludes Go's promiscuous
insistence on unfettered internet access, and hardwired directory paths.
2) Hardwired directory paths for the development/build environment (see
above).
It seems they have unlearned all the UNIX lessons. Sad, really. I would
love to toss out Python, Ruby, PHP, Perl, et al. And could make the
argument for it, I think. But the build environment will never work in our
shop, therefore Go won't either.
And that ... sucks.
Hi,
Interested in some perspective here since the list has influential Linux
people like Ted Ts'o.
Linux has been described as influenced by Minix and System V. The Minix
connection is well discussed. The SysV connection is vaguer: the story is
that Linus had access to a spec manual. But I’d guess reality was more
gradual: new contributors who liked CSRG BSD would have mostly gravitated
to the continuations in 386/BSDi/Net/Free that were concurrent with early and
formative Linux development, so there’d be an implicit vacuum of BSD
people for Linux development.
What I am curious about is the continuing ignorance of BSD ideas. Linux
isn’t exactly insular; a lot of critical people and components came much
later on from other SysV flavors (lvm, jfs, xfs, RCU).
The kinds of BSD things I am talking about are ufs, kqueue, jails, pf, and
Capsicum. Linux has grown alternatives, but sometimes with willful
ignorance of the other technology. It seems clear epoll was not a good design
from the start. Despite jails not being taken to the logical conclusion of
modern containers, as zones were, the architecture is fundamentally more
closely aligned with how people want to use containers securely than
namespaces and cgroups are. And Google ported Capsicum to Linux, but it has
basically been ignored in favor of nebulous concepts like seccomp. And then
there seems to be outright hostility toward other platforms from the
postmodern generation, with things like systemd.
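To make the kqueue/epoll comparison concrete, here is a minimal sketch of
registering interest in a readable descriptor under each interface; it only
shows the shape of the two APIs, and proves nothing about the larger design
question by itself.

	#include <stdio.h>

	#if defined(__linux__)
	#include <sys/epoll.h>

	/* Linux: create an epoll instance, register the fd, then wait. */
	static int wait_readable(int fd)
	{
		struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
		struct epoll_event out;
		int ep = epoll_create1(0);

		if (ep < 0 || epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev) < 0)
			return -1;
		return epoll_wait(ep, &out, 1, -1);	/* block until fd is readable */
	}

	#else	/* the BSDs, macOS, etc. */
	#include <sys/types.h>
	#include <sys/event.h>
	#include <sys/time.h>

	/* BSD: one kevent() call both registers the filter and waits for it. */
	static int wait_readable(int fd)
	{
		struct kevent change, result;
		int kq = kqueue();

		if (kq < 0)
			return -1;
		EV_SET(&change, fd, EVFILT_READ, EV_ADD, 0, 0, 0);
		return kevent(kq, &change, 1, &result, 1, NULL);
	}
	#endif

	int main(void)
	{
		/* trivial demo: wait until standard input has data */
		if (wait_readable(0) > 0)
			printf("stdin is readable\n");
		return 0;
	}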
This seems strange to me, as BSD people are generally open to other /ideas/;
we have to be careful with Linux code due to license incompatibility, but
the converse does not seem true, either in interest in other ideas or in
licensing hampering the flow of code.
The history of UNIX is spectacularly successful because different groups
got together at the table and agreed on the ideas. Is there room for that
in the modern era, where Linux is the monopoly OS? The Austin Group is
still a thing, but it’s not clear people in any of the Freenix communities
really care about evolving the standards. I get that, but not so much
completely burying one’s head in the sand to what other OSes are doing. Is
there any future for UNIX as an “open system” in this climate, or are people
going to go their separate ways?
Regards,
Kevin
We gained John von Neumann on this day in 1903, and if you haven't heard
of him then you are barely human... As computer science goes, he's right
up there with Alan Turing. There is speculation that he knew of Babbage's
work; see
https://cstheory.stackexchange.com/questions/10828/the-relation-between-bab… .
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Sergio Pedraja
> I am very interested in your work exactly for the reasons you explain
We will generally be sending messages about it to the CCTalk list (for
collectors of 'classic' computers), so if you want to be notified about it,
sign up there.
We currently have only wire-wrap prototypes (the FPGA is on an off-the-shelf
daughter-card); we're hoping to produce PCB production versions 'soon'. We do
have the technology for getting PCB's, which we have used for doing 'indicator
panels':
http://ana-3.lcs.mit.edu/~jnc/tech/DECIndicatorPanels.html
We have those working (they've been a big help in debugging, actually :-), and
they'll be available too. I don't personally have one of them yet; I'm
_really_ looking forward to watching the lights blink as UNIX boots... :-)
> The only difference is that my PDP-11 is one 23/PLUS. There are some
> differences between this one and the PDP-11/23 and perhaps your emulator
> wouldn't work in this PDP-11 model.
No, AFAIK the /23 and /23-PLUS are effectively identical, as far as the QBUS
goes. (Obviously the -PLUS has devices, etc., that the plain /23 doesn't, but they
don't enter into whether our board will work with one.)
We also plan a UNIBUS version; the two busses are similar enough that we'll
probably start with a PCB version for that one.
> I will test the working state of my PDP. If all is fine perhaps I could
> help doing with some tests on it.
Sure; use CCTalk.
Noel
As I mentioned recently, Dave Bridgham and I are doing an RK11 emulator (and
soon an RP11, which will just take editing the Verilog a tiny bit) using an
SD card for storage, for heritage computer collectors who want to run their
PDP-11's but don't want to risk a head crash (since replacement heads are now
un-obtainium) - and also for people who have an -11, but no mass storage.
So the last stage was bringing up V6 on a hardware PDP-11/23, using the
emulated RK11, and we were having an issue (which turned out to be partial
block reads/writes not working properly; this has since been fixed, and we
have successfully booted V6 on the -11/23 with only an SD card).
In the process of debugging that issue, I added logging to the RK driver, and
noticed that in the process of attempting to boot single-user, the system was
trying to do some swapping - two writes, and two reads - before it even tried
doing the fork() to create the shell process, followed by opening /dev/tty8
and doing the exec() of /bin/sh.
This confused me, since both newproc() and expand() have code which uses a
memory-memory copy if there is enough free memory - and on first booting,
there definitely was. Why was it swapping?
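For anyone reading along without the source handy, the relevant logic is the V6
expand() in ken/slp.c; this is quoted roughly from memory, with comments added, so
the real listing is the authority:

	expand(newsize)
	{
		int i, n;
		register *p, a1, a2;

		p = u.u_procp;
		n = p->p_size;
		p->p_size = newsize;
		a1 = p->p_addr;
		if(n >= newsize) {
			mfree(coremap, n-newsize, a1+newsize);	/* shrinking: just free the tail */
			return;
		}
		savu(u.u_rsav);
		a2 = malloc(coremap, newsize);	/* is there a big enough hole in core? */
		if(a2 == NULL) {
			savu(u.u_ssav);
			xswap(p, 1, n);		/* no: swap out; come back in at the new size */
			p->p_flag =| SSWAP;
			swtch();
			/* no return */
		}
		p->p_addr = a2;
		for(i=0; i<n; i++)
			copyseg(a1+i, a2++);	/* yes: plain memory-to-memory copy */
		mfree(coremap, n, a1);
		retu(p->p_addr);
		sureg();
	}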
So I added some more logging, and the cause turned out to be that at one
point, I had re-compiled /etc/init - and without thinking about it too hard,
had made it a pure-text program.
And that's what did it: in exec() (called when process 1 tries to exec()
/etc/init), it calls xalloc(), which i) if the text was not already
available, gets together the pure text, and puts a copy on the swap device,
and ii) if the text is not already in core, swaps the rest of the process
out, because, as the code explains:
* if the calling process
* is misplaced in core the text image might not fit.
* Quite possibly the code after "out:" could check to
* see if the text does fit and simply swap it in.
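For reference, here is the shape of xalloc() (ken/text.c), heavily abridged and
quoted from memory, with steps i) and ii) marked; again, the real listing is the
authority:

	xalloc(ip)
	int *ip;
	{
		register struct text *xp;
		register ts;

		/* ... search the text table: if another process already has this
		 * text, bump x_count and goto out; otherwise xp is left pointing
		 * at a free slot ... */

		/* i) the text was not already available: size it, reserve swap
		 * space for it, read it in from the inode, and write a copy out
		 * to the swap device */
		ts = ((u.u_arg[1]+63)>>6) & 01777;	/* text size, bytes -> clicks */
		xp->x_size = ts;
		if((xp->x_daddr = malloc(swapmap, (ts+7)>>3)) == NULL)
			panic("out of swap space");
		expand(USIZE+ts);			/* grow to U area + text */
		estabur(0, ts, 0, 0);
		/* ... readi(ip) pulls the text into user space ... */
		swap(xp->x_daddr, u.u_procp->p_addr+USIZE, ts, 0);  /* write it to swap */
		expand(USIZE);				/* back down to just the U area */

	out:
		/* ii) the text is not already in core: swap what is left of the
		 * process (only the U area at this point) out, so it can later be
		 * brought back in alongside an in-core copy of the text */
		if(xp->x_ccount == 0) {
			xswap(u.u_procp, 1, 0);
			u.u_procp->p_flag =| SSWAP;
			swtch();
			/* no return */
		}
		xp->x_ccount++;
	}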
If you're looking at the code, the process doesn't have a data segment at
that point, just the U area; the data will be set up later in the exec()
code, after the call to xalloc() returns, after the process is swapped back
in.
In fact, the process will _never_ have anything other than a U area when in
this code; the only call to xalloc() in the system is immediately preceded
by an "expand(USIZE)" which throws away everything except the U area.
So I'm contemplating doing what the comment suggests - adding code to check
to see if there's enough memory available to hold the pure text, and if so,
avoiding the swap-out/in of the process' U-only data segment.
If done 'right', it should also be possible to allow avoiding the reading-in
of the text from the swap device, when first exec'ing a pure text...
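Concretely, the sort of thing I'm contemplating at "out:" would look roughly like
this - purely an untested sketch, glossing over SLOCK and the exact bookkeeping
("a" would be an extra register variable in xalloc()):

	out:
		if(xp->x_ccount == 0) {
			/* is there a hole in core big enough for the text? */
			if((a = malloc(coremap, xp->x_size)) != NULL) {
				/* yes: claim it and read the text straight in,
				 * instead of swapping our U-area-only image out
				 * and back in again */
				xp->x_caddr = a;
				swap(xp->x_daddr, a, xp->x_size, B_READ);
			} else {
				/* no room: fall back to the existing path */
				xswap(u.u_procp, 1, 0);
				u.u_procp->p_flag =| SSWAP;
				swtch();
				/* no return */
			}
		}
		xp->x_ccount++;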
Probably not as interesting a project as doing the splice() system call was,
but no doubt it will teach me about a corner of the system I don't know very
well (back in the day, doing networking stuff, I was mostly looking at I/O
code).
Although there is potentially a certain amount of amusement in an additional
enhancement, which is trying to make sure there's enough room for both the
text and the data; immediately upon the return from xalloc(), the process is
expanded to have room for the data. Which might involve _another_ swap-out/in
sequence... So perhaps the U area should simply be moved out of the way via a
direct memory-memory copy, rather than swapping it out/in, just before doing
it again!
You are not expected to understand this... :-)
I guess I should first take a look at the PWB1 pure text code, which is
heavily modified from the stock V6, to see what it does - this all may have
already been done there.
Noel