hi all,
For those who have been following my 2.11BSD conversion: I was working
on this in about 2005 and I might have posted about it then. After that
nothing much happened while I did a university degree and so on, but
recently I picked it up again. When I left it, I was partway through
an ambitious conversion of the BSD build system to my own design (a
file called "defs.mk" was included in all Makefiles), and I
threw this out because it was too much work upfront. The important
build tools like "cc" were working, but I have since reviewed all
changes and done things differently. The result is that I can now build
the C and kernel libraries and a kernel, and they work OK.
This seems like a pretty good milestone so I'm releasing the code on bitbucket.
See https://bitbucket.org/nick_d2/ for the list of my repositories,
there is another one there called "uzi" which is a related project,
but not yet documented, so I will write about it later; in the meantime
anyone is welcome to view its source and changelogs.
The 2.11BSD repository is at the following link:
https://bitbucket.org/nick_d2/211bsd
There is a detailed readme.txt in the root of the repository which
explains exactly how I approached the conversion and gives build
instructions, caveats and so forth. To avoid duplication I won't post
it to the list, but I suggest people read it as if it were a post, since
it's extremely interesting to see all the porting issues laid out together.
See
https://bitbucket.org/nick_d2/211bsd/src/27343e0e0b273c2df1de958db2ef5528cc…
Happy browsing :)
cheers, Nick
> From: Clem Cole
> You might say something like: Pipes were developed in a 3rd edition
> kernel, where there is evidence of a nascent idea (it has a name and
> there are subs for it), but the code to fully support it is lacking in
> the 3rd release. Pipes became a completed feature in the 4th edition.
To add to what others have pointed out (about the assembler and C kernels),
let me add one more data-bit. In the Unix oral histories done by Michael S.
Mahoney, there's this:
McIlroy: .. And one day I came up with a syntax for the shell that went
along with the piping, and Ken said, "I'm going to do it!" He was tired of
hearing all this stuff, and that was - you've read about it several times,
I'm sure - that was absolutely a fabulous day the next day. He said, "I'm
going to do it." He didn't do exactly what I had proposed for the pipe
system call; he invented a slightly better one that finally got changed
once more to what we have today. He did use my clumsy syntax.
He put pipes into Unix, he put this notation [Here McIlroy pointed to the
board, where he had written f > g > c] into shell, all in one night. The next
morning, we had this - people came in, and we had - oh, and he also changed
a lot of - most of the programs up to that time couldn't take standard
input, because there wasn't the real need. So they all had file arguments;
grep had a file argument, and cat had a file argument, and Thompson saw
that that wasn't going to fit with this scheme of things and he went in and
changed all those programs in the same night. I don't know how ... And the
next morning we had this orgy of one-liners.
So I don't think the suggested text, that it was added slowly, is
appropriate. If this account is correct, it was pretty atomic.
It sounds more like the correct answer to the stuff in the source is the
one proposed: that it got added to the assembler version of the system
before it was done in the C version.
Noel
Warren:
Can anybody help explain the "not in assembler" comment?
====
I think it means `as(1) has predefined symbols with the
numbers of many system calls, but not this one.'
Norman Wilson
Toronto ON
I recall reading a long time ago a sentence in a paper Dennis wrote which
went something like "Unix is profligate with processes". The word
profligate sticks in my mind. This is a 30+-year-old memory of a probably
35+-year-old paper, from back in the day when running a shell as a user
level process was very controversial. I've scanned the papers (and BSTJ) I
can find but can't find that quote. Geez, is my memory that bad? Don't
answer that!
Rob Pike did a talk in the early 90s about right and wrong ways to expose
the network stack in a synthetic file system. I'd like to find those
slides, because people keep implementing synthetics for network stacks and
they always look like the "wrong" version from Rob's slides. I've asked him
but he can't find it. I've long since lost the email with the slides,
several jobs back ...
thanks
ron
> In one of his books, Wirth laments about programmers proudly
> showing him terrible code written in Pascal
For your amusement, here's Wirth himself committing that sin:
http://www.cs.dartmouth.edu/~doug/wirth.pdf
> From: Nick Downing
> way overcomplicated and using a very awkward structure of millions of
> interdependent C++ templates and what-have-you.
> ...
> the normal standard use cases that his group have tested and made to
> work by extensive layers of band-aid fixes, leaving the code in an
> incomprehensible state.
Which just goes to provide support for my long-term contention, that language
features can't help a bad programmer, or prevent them from writing garbage.
Sure, you can take away 'goto' and other dangerous things, and add a lot of
things that _can_ be used to write good code (e.g. complete typing and type
checking), but that doesn't mean that a user _will_ write good code.
I once did a lot of work with an OS written in a macro assembler, done by
someone really good. (He'd even created macros to do structure declarations!)
It was a joy to work with (very clean and simple), totally bug-free; and very
easy to change/modify, while retaining those characteristics. (I modified the
I/O system to use upcalls to signal asynchronous I/O completion, instead of
IPC messages, and it was like falling off a log.)
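The upcall approach Noel describes can be sketched in C; the types and names below are my own illustration of the general technique, not anything from the OS in question:

```c
/* Upcall style: the driver calls straight back into the requester when
 * the transfer finishes, instead of queueing an IPC message for the
 * requester to pick up later. */
typedef void (*io_done_fn)(void *arg, int status);

struct io_request {
    io_done_fn done;   /* upcall invoked at completion */
    void      *arg;    /* requester's context, passed back to it */
};

/* Called by the (hypothetical) driver when the I/O is done. */
static void complete_request(struct io_request *req, int status)
{
    req->done(req->arg, status);   /* no message queue, no extra wakeup */
}

static void record_status(void *arg, int status)
{
    *(int *)arg = status;
}

/* Drive one request through the upcall path; returns the status the
 * callback observed. */
static int demo_upcall(void)
{
    int observed = -1;
    struct io_request req = { record_status, &observed };
    complete_request(&req, 42);
    return observed;
}
```

With an IPC-message design, completion would instead be posted to a queue and collected on the requester's next receive; the upcall removes that round trip entirely.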
Thinking we can provide programming tools/languages which will make good
programmers is like thinking we can provide sculpting equipment which will
make good sculptors.
I don't, alas, have any suggestions for what we _can_ do to make good
programmers. It may be impossible (like making good sculptors - they are born,
not made).
I do recall talking to Jerry Saltzer about system architects, and he said
something to the effect of 'we can run this stuff past students, and some of
them get it, and some don't, and that's about all we can do'.
Noel
I have an ImageMagic CD that I got back in 1994 and found in my
garage. It has a bunch of versions of Linux that aren't on kernel.org.
The 0.99 series, the 0.98 series and what looks like 1.0 alpha pl14
and pl15.
Is anybody here interested in them?
I have fallen out of contact with the Linux folks, so don't know if
anybody on kernel.org would be interested in these. Does anybody care?
Warner
On 2016-12-29 03:00, Nick Downing <downing.nick(a)gmail.com> wrote:
> I will let you know when I get
> it working :) It's not a current focus, but I will return to it someday.
> In the meantime, I'm putting it on bitbucket, so others will be able to
> pick it up if they wish. However, this also isn't my current focus, it's
> there, but it's not documented.
>
> The IAR compiler on the Z180 supports a
> memory model similar to the old "medium" memory model that we used to
> use with Microsoft or Turbo C on DOS machines, that is, multiple code
> segments with a single data segment. Yes, the Z180 compiled C code is
> larger than the PDP-11 compiled C code, but luckily you can have
> multiple code segments, which you cannot (easily) have on the PDP-11.
>
> Unfortunately code and data segments share the same 64 kbyte logical
> address space, so what I did was to partition the address space into 4
> kbytes (always mapped, used for interrupt handlers, bank switching
> routines, IAR compiler helper routines, etc), 56 kbytes (kernel or
> current process data and stack) and 4 kbytes (currently executing
> function). The currently executing function couldn't be more than 4
> kbytes and couldn't cross a physical 4 kbyte boundary due to the
> hardware mapping granularity, but this was acceptable in practice.
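[The partitioning and the 4 kbyte function constraint described above can be sketched as follows; the constants and names are illustrative only, not taken from Nick's code:]

```c
#include <stdint.h>

/* Sketch of the 64 KB logical address space partition described above:
 *   0x0000-0x0FFF  always mapped: interrupt handlers, bank switching
 *                  routines, compiler helper routines
 *   0x1000-0xEFFF  kernel or current-process data and stack (56 KB)
 *   0xF000-0xFFFF  window for the currently executing function (4 KB)
 */
#define COMMON_BASE       0x0000u
#define DATA_BASE         0x1000u
#define FUNC_WINDOW       0xF000u
#define FUNC_WINDOW_SIZE  0x1000u   /* 4 KB */
#define PAGE_SIZE         0x1000u   /* hardware mapping granularity */

/* A function can be banked into the 4 KB window only if it is no larger
 * than 4 KB and does not cross a physical 4 KB boundary, since the
 * hardware can only remap in whole 4 KB pages. */
static int fits_in_window(uint32_t phys_start, uint32_t size)
{
    if (size == 0 || size > FUNC_WINDOW_SIZE)
        return 0;
    /* first and last byte must lie in the same physical 4 KB page */
    return (phys_start / PAGE_SIZE) == ((phys_start + size - 1) / PAGE_SIZE);
}
```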
>
> I got
> the Unix V7 clone working OK under this model and then added the
> networking, so although it was a bit of a dog's breakfast, it proves the
> concept works. My memory management left a fair bit to be desired (too
> much work to fix), however I think porting 2.11BSD would solve this
> problem since it works on the PDP-11 under split I/D, which has similar
> constraints except for the 4 kbyte code constraint. My understanding is
> 2.11BSD is actually a cut-down 4.3BSD running on the HAL from 2.xxBSD; I
> would like to audit each change from 4.3BSD to make sure I agree with
> it, so essentially my project would be porting 4.3BSD rather than
> 2.11BSD. But I'd take the networking stack and possibly a lot more code
> from 2.11BSD, since it is simplified; for instance the networking stack
> does not use SYN cookies. cheers, Nick
Having written quite some code on the Z180, as well as god knows how
much code on the PDP-11, I'm going to agree with Peter Jeremy in that I
do not believe 2.11BSD can be made to run on a Z180. (Well, of course,
anything is possible; you could just write a 68000 emulator, for
example, but natively... no.)
Unix V7 is miles from 2.11BSD. Unix V7 can run on very modest PDP-11
models. 2.11BSD cannot be made to run on a PDP-11 without split I/D
space, which effectively gives you 128K of address space to play with,
in addition to the overlaying done with the MMU remappings.
The MMU remappings might be possible to emulate enough with the segment
registers of the Z180 for the Unix needs, but the split I/D space just
won't happen.
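For reference, the Z180 segment registers work roughly like this, as a simplified model of the documented CBAR/BBR/CBR behaviour (a sketch, not cycle-accurate): CBAR's low nibble gives the start of the Bank Area and its high nibble the start of Common Area 1, both in 4 KB units, and BBR/CBR supply the physical base for those two areas.

```c
#include <stdint.h>

/* Simplified model of Z180 MMU translation: 64 KB logical space split
 * into Common Area 0 (mapped 1:1), Bank Area (offset by BBR) and
 * Common Area 1 (offset by CBR), with 4 KB granularity throughout. */
static uint32_t z180_translate(uint16_t logical, uint8_t cbar,
                               uint8_t bbr, uint8_t cbr)
{
    uint8_t page          = logical >> 12;  /* 4 KB logical page number */
    uint8_t bank_start    = cbar & 0x0F;    /* Bank Area start page     */
    uint8_t common1_start = cbar >> 4;      /* Common Area 1 start page */
    uint32_t phys;

    if (page >= common1_start)
        phys = (uint32_t)logical + ((uint32_t)cbr << 12);  /* Common Area 1 */
    else if (page >= bank_start)
        phys = (uint32_t)logical + ((uint32_t)bbr << 12);  /* Bank Area */
    else
        phys = logical;                                    /* Common Area 0 */

    return phys & 0xFFFFFu;  /* physical address bus is 20 bits (1 MB) */
}
```

The 4 KB granularity here is also why, in Nick's scheme, the currently executing function cannot cross a physical 4 KB boundary: the hardware can only slide the mapping in whole-page steps.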
2.9BSD was the last version (I believe) which ran on non split-I/D machines.
Johnny
>
> On Wed, Dec 28, 2016 at 6:14 PM, Peter Jeremy <peter(a)rulingia.com> wrote:
>> On 2016-Dec-25 17:21:31 -0500, Steve Nickolas <usotsuki(a)buric.co> wrote:
>>> On Mon, 26 Dec 2016, Nick Downing wrote:
>>>> I became frustrated with the limitations of both UZI and NOS and decided to
>>>> port 2.11BSD to the cash register as the next step, my goal was (a) make it
>>>> cross compile from Linux to PDP-11, (b) check it can build an identical
>>>> release tape through cross compilation, (c) port it to Z80 using my
>>>> existing cross compiler.
>>> A Z180 is powerful enough to run 2.11BSD? o.o;
>> I suspect shoe-horning 2.11BSD onto a Z180 would be difficult - 2.11BSD
>> on a PDP-11 requires split I+D and has kernel and userland in separate
>> address spaces. Even with that, keeping the non-overlay part of the
>> kernel in 64KB is difficult. Equivalent Z180 code is going to be much
>> larger than PDP-11 code.
>>
>> I'd be happy to be proved wrong.
>>
>> --
>> Peter Jeremy
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol