Today I came across an article about the MGR window system for Unix:
https://hack.org/mc/mgr/
One thing that interested me was a note that some versions worked on the
Macintosh:
> The window system ran on many different hardware platforms, at least
> these: Sun 3/xx workstations running SunOS, which was the original
> development platform, Sun SPARCstations (SunOS and then ported by me to
> Solaris), Intel x86 based PCs (Coherent, Minix, FreeBSD or Linux),
> Atari ST (under MiNT), AT&T UnixPC (SysV) and the Macintosh.
As the owner of a Macintosh Plus, I think it would be a very interesting
thing to experiment with, but I haven't had much luck finding any more
information about it.
Does anyone know more about MGR, particularly on the Mac? That page has
the source for MGR 0.69, but there's no mention of the Macintosh in it
(aside from comments about how it was supported on older versions...)
John
I know something!
On Fri, Jul 01, 2022 at 04:05:30PM +0300, Ori Idan wrote:
> > o why CTRL/S and CTRL/Q are used for flow control in a shell command
> > line session
> >
> Also would be happy to know.
https://en.wikipedia.org/wiki/Software_flow_control
But I don't know the answer for Ctrl-D. :( Nor for the bus error,
and maybe not the segmentation fault either, if it doesn't have to
do with a segment register.
Matthias
--
When You Find Out Your Normal Daily Lifestyle Is Called Quarantine
> I heard that the IBM 709
> series had 36 bit words because Arthur Samuel,
> then at IBM, needed 32 bits to identify the playable squares on a
> checkerboard, plus some bits for color and kinged
To be precise, Samuel's checkers program was written for
the 701, which originated the architecture that the 709 inherited.
Note that IBM punched cards had 72 data columns plus 8
columns typically dedicated to sequence numbers. 700-series
machines supported binary IO encoded two words per row, 12
rows per card--a perfect fit to established technology. (I do
not know whether the fit was deliberate or accidental.)
As to where the byte came from, it was christened for the IBM
Stretch, aka the 7030. The machine was bit-addressed and the width
of a byte was variable. Multidimensional arrays of packed bytes
could be streamed at blinding speeds. Eight bits, which synced
well with the 7030's 64-bit words, was standardized in the 360
series. The term "byte" was not used in connection with
700-series machines.
Doug
Hello,
I have on my hands many images of tapes that seem to have been written
by various implementations of dump. I see the magic numbers 60011 and
60012 in little and big endian at offsets 18 (16-bit version?) and 24
(32-bit version?). I don't know the dating of the tapes, but around
1980 would be a reasonable guess.
Are there some easy to use (ready to run on a modern Unix) tools to
extract files from such tape files?
I'm not looking to restore a file system on disk, just extract the
files.
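As a first step before reaching for a real restore tool, the magic numbers can be probed directly. A hedged Python sketch (offsets and octal values as described above; nothing beyond those magics is assumed about the real dump header layout):

```python
import struct

OLD_DUMP_MAGIC = 0o60011   # 16-bit dump format (per the message above)
NEW_DUMP_MAGIC = 0o60012   # 32-bit dump format

def probe(image):
    """Report which dump magic, if any, appears at the offsets
    mentioned above, trying both byte orders."""
    hits = []
    for offset in (18, 24):
        word = image[offset:offset + 2]
        if len(word) < 2:
            continue
        for fmt, order in (('<H', 'little'), ('>H', 'big')):
            value = struct.unpack(fmt, word)[0]
            if value in (OLD_DUMP_MAGIC, NEW_DUMP_MAGIC):
                hits.append((offset, order, oct(value)))
    return hits

# Fake header: new-format magic, little-endian, at offset 24.
fake = bytes(24) + struct.pack('<H', NEW_DUMP_MAGIC) + bytes(6)
print(probe(fake))  # [(24, 'little', '0o60012')]
```

Once the variant and byte order are known, it is easier to decide which era of restore(8) (or a hand-written extractor) to aim at.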
Hello all,
I've recently been improving the AT&T/Teletype DMD 5620 simulator I wrote a few years ago. It can now run either the 8;7;3 or 8;7;5 firmware. It also now supports executing a local shell or connecting directly to a physical or virtual tty device. It runs natively on Linux or macOS with X11 or Wayland, but I would love help creating a Windows version if you're a Windows programmer (I am an occasional Windows user, but I am not at all knowledgeable about Windows programming).
Full details are available here: https://loomcom.com/3b2/dmd5620_emulator.html
The source code is here: https://github.com/sethm/dmd_gtk
Many thanks go to my friend Sark (@crtdude on Twitter) for tracking down the 8;7;3 firmware and dumping it for me. I'd also like to thank Mike Haertel for helping find bugs, providing feedback, and inspiring me to get it working with Research Unix in addition to SVR3.
Feedback, bug reports, and pull requests are all welcome!
-Seth
--
Seth Morabito
Poulsbo, WA
web(a)loomcom.com
Anecdote prompted by the advent of Burroughs in this thread:
At the 1968 NATO conference on Software Engineering, the discussion
turned to language design strategies. I noted that the design of Algol
68, for example, presupposed a word-based machine, whereupon Burroughs
architect Bob Barton brought the house down with the remark, "In the
beginning was the Word, all right--but it was not a fixed number of
bits!"
[Algol 68's presupposition is visible in declarations like "long long
long ... int". An implementation need support only a limited number of
"longs", but each supported variety must have a definite maximum
value, which is returned by an "environment enquiry" function. For
amusement, consider the natural idea of implementing the longest
variety with bignums.]
Doug
The error was introduced on 13 September 2005, by an anonymous user from an IP address allocated to Web Perception, a Californian ISP, and (currently) geolocated to Sonoma. The change comment was:
Changes - 386BSD factual errors corrected, potentially libelous statements removed, links updated, refocus on 386BSD history, authority-386BSD authors, published works, DMR refs
The same IP address was used for a series of edits over 2005-2006, to topics including 386BSD, Lynne Jolitz, William Jolitz, and Radiocarbon Dating.
I imagine it was simply a mistake.
d
> On 10 Sep 2022, at 12:26, Grant Taylor via COFF <coff(a)tuhs.org> wrote:
>
> On 9/9/22 8:05 PM, Greg 'groggy' Lehey wrote:
>> Done.
>
> Thank you!
>
>> Do you have an idea about how this error crept in?
>
> No, I do not.
>
> I came to this article after reading about the DDJ DVD archive on the geeks mailing list. I was sensitive to the emails about DDJ because I've been looking to acquire the issues (or at least articles) with the Porting Unix to the 386 articles in them.
>
> Now I have them! :-D
>
>
>
> --
> Grant. . . .
> unix || die
>
https://www.timeanddate.com/on-this-day/september/9
``Unix time or Unix epoch, POSIX time or Unix timestamp, is a time system
that measures the number of seconds since midnight UTC of January 1, 1970,
not counting leap seconds. At 01:46:40 UTC on September 9, 2001, Unix time
reached the billionth second timestamp.''
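The quoted moment is easy to re-derive (a quick Python check, not part of the quoted text):

```python
from datetime import datetime, timezone

# The billionth second of the Unix epoch, ignoring leap seconds
# (as Unix time does by definition).
billionth = datetime.fromtimestamp(10**9, tz=timezone.utc)
print(billionth.isoformat())  # 2001-09-09T01:46:40+00:00
```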
Hard to believe that it was that long ago...
-- Dave
Paul Winalski and Bakul Shah commented on bit addressable machines
on the TUHS list recently. From Blaauw and Brooks' excellent
Computer Architecture book
http://www.math.utah.edu/pub/tex/bib/master.html#Blaauw:1997:CAC
on page 98, I find
>> ...
>> The earliest computer with bit resolution is the [IBM 7030] Stretch.
>> The Burroughs B1700 (1972) and CDC STAR100 (1973) are later examples.
>>
>> Bit resolution is costly in format space, since it uses a maximum
>> number of bits for address and length specification. Sharpening
>> resolution from the byte to the bit costs the same as increasing
>> address-space size eight-fold.
>>
>> Since almost all storage realizations are organized as matrices,
>> bit resolution is also expensive in time or equipment.
>> ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Doug McIlroy:
Bit-addressing is very helpful for manipulating characters
in a word-organized memory. The central idea of my ancient
(patented!) string macros that underlay SNOBOL was that it's
more efficient to refer to 6-bit characters as living at
bits 0,6,12,... of a 36-bit word than as being characters
0,1,2,... of the word. I've heard that this convention was
supported in hardware on the PDP-10.
====
Indeed it was. The DEC-10 had `byte pointers' as well as
(36-bit) word addresses. A byte pointer comprised an address,
a starting bit within the addressed word, and a length.
There were instructions to load and store an addressed byte
to or from a register, and to do same while incrementing
the pointer to the next byte, wrapping to the start of the next
word if the remainder of the current word was too small.
(Bytes couldn't span word boundaries.)
Byte pointers were used routinely to process text. ASCII
text was conventionally stored as five 7-bit bytes packed
into each 36-bit word. The leftover bit was used by some
programs as a flag to mean these five characters (usually
the first of a line) were special, e.g. represented a
five-decimal-digit line number.
Byte pointers were used to access Sixbit characters as
well (each character six bits, so six to the word,
character set comprising the 64-character subset of
ASCII starting with 0 == space).
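The five-characters-per-word packing Norman describes is easy to model in modern terms. A Python sketch of just the arithmetic (not PDP-10 code; leftmost character in the high bits and the leftover low bit as the flag, as described above):

```python
def pack5(chars, flag=0):
    """Pack five 7-bit ASCII codes into a 36-bit word, leftmost
    character in the high bits, low bit left over as a flag."""
    assert len(chars) == 5
    word = 0
    for ch in chars:
        word = (word << 7) | (ord(ch) & 0x7F)
    return (word << 1) | (flag & 1)

def unpack5(word):
    """Recover the five characters and the flag bit."""
    flag = word & 1
    chars = [chr((word >> shift) & 0x7F) for shift in (29, 22, 15, 8, 1)]
    return ''.join(chars), flag

w = pack5("HELLO", flag=1)
print(unpack5(w))  # ('HELLO', 1)
```

The byte-pointer instructions did in hardware what the shifts and masks above do in software, which is exactly why they paid off for text processing.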
Norman Wilson
Toronto ON
(spent about four years playing with TOPS-10 before
growing up to play with UNIX)
Andrew Hume:
if i recall correctly, V1 of Unix had time measured in milliseconds.
were folks that sure that this would change before wrap-around?
====
Not milliseconds (which were infinitesimally small to the
computers of 1969!) but clock ticks, 60 per second.
Initially such times were stored in a pair of 18-bit PDP-7
words, giving a lifetime of about 36 years, so not so bad.
The PDP-11's 16-bit words made that a 32-bit representation,
or about two and a quarter years before overflow. Which
explains why the time base was updated a few times in early
days, then the representation changed to whole seconds, which
in 32 bits would last about as long as 36 bits of 60 Hz ticks.
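The lifetimes quoted above check out. A quick back-of-envelope (my arithmetic, using a 365.25-day year):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# 36 bits of 60 Hz ticks (a pair of 18-bit PDP-7 words):
print(2**36 / 60 / SECONDS_PER_YEAR)   # ~36 years

# 32 bits of 60 Hz ticks (a pair of 16-bit PDP-11 words):
print(2**32 / 60 / SECONDS_PER_YEAR)   # ~2.27 years

# 32 bits of whole seconds:
print(2**32 / SECONDS_PER_YEAR)        # ~136 years (68 if signed)
```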
The PDP-7 convention is documented only in the source code,
so far as I know. The evolution of time on the PDP-11 can
be tracked in time(II) in old manuals; the whole-seconds
representation first appears in the Fourth Edition.
Norman Wilson
Toronto ON
Not that old a timer, but once looked into old time
> From: Jim Capp
> See "The Preparation of Programs for an Electronic Digital Computer",
> by Maurice V. Wilkes, David J. Wheeler, and Stanley Gill
Blast! I looked in my copy (ex the Caltech CS Dept Library :-),
but didn't find 'word' in the index!
Looking a little further, Turing's ACE Report, from 1946, uses the term
(section 4, pg. 25; "minor cycle, or word"). My copy, the one edited by
Carpenter and Doran, has a note #1 by them, "Turing seems to be the first
user of 'word' with this meaning." I have Brian's email, I can ask him how
they came to that determination, if you'd like.
There aren't many things older than that! I looked quickly through the "First
Draft on the EDVAC", 1945 (re-printed in "From ENIAC to UNIVAC", by Stein),
but did not see word there. It does use the term "minor cycle", though.
Other places worth checking are the IBM/Harvard Mark I, the ENIAC and ...
I guess there's not much else! Oh, there was a relay machine at Bell, too.
The Atanasoff-Berry computer?
> From: "John P. Linderman"
> He claims that if you wanted to do decimal arithmetic on a binary
> machine, you'd want to have 10 digits of accuracy to capture the 10
> digit log tables that were then popular.
The EDVAC draft talks about needing 8 decimal digits (Appendix A, pg.190);
apparently von Neumann knew that that's how many digits one needed for
reasonable accuracy in differential equations. That is 27 "binary digits"
(apparently 'bit' hadn't been coined yet).
Noel
> Doug or anyone, why do bit pointers make sense? Why?
Bit-addressing is very helpful for manipulating characters
in a word-organized memory. The central idea of my ancient
(patented!) string macros that underlay SNOBOL was that it's
more efficient to refer to 6-bit characters as living at
bits 0,6,12,... of a 36-bit word than as being characters
0,1,2,... of the word. I've heard that this convention was
supported in hardware on the PDP-10.
In the IBM 7030 floats and ints were word-addressed. But
those addresses could be extended past the "decimal point"
to refer to bits. Bits were important. The computer was designed
in parallel with the Harvest streaming "attachment" for
NSA. Harvest was basically intended to gather statistics useful
in code-breaking, such as frequency counts and autocorrelations,
for data typically encoded in packed 5- to 8-bit characters. It
was controlled by a 20-word "setup" that specified operations on
rectangular and triangular indexing patterns in multidimensional
arrays. Going beyond statistics, one of the operations was SQML
(sequential multiple lookup) where each character was looked
up in a table that specified a replacement and a next table--a
spec for an arbitrary Turing machine that moved its tape at
byte-streaming speed!
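The SQML operation as described -- each character indexing a table that yields a replacement and the next table -- is a table-driven finite-state transducer. A toy Python sketch of the idea (illustrative only; nothing like Harvest's actual setup format):

```python
def sqml(data, tables, start=0):
    """Run bytes through chained lookup tables: each table entry
    gives (replacement_byte, next_table_index)."""
    out = bytearray()
    t = start
    for b in data:
        repl, t = tables[t][b]
        out.append(repl)
    return bytes(out)

# Two toy tables that alternate: table 0 upper-cases one byte and
# chains to table 1; table 1 passes a byte through and chains back.
upper = [(bytes([b]).upper()[0], 1) for b in range(256)]
ident = [(b, 0) for b in range(256)]
print(sqml(b"abcd", [upper, ident]))  # b'AbCd'
```

Since the next-table field can encode arbitrary state transitions, chaining lookups like this really does give you a Turing-machine-style transition function, just as Doug says -- the hardware merely ran it at streaming speed.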
Doug
> Well, you can imagine what happened when the leading digit changed
> from an ASCII "9" to an ASCII "1". Oops.
I first saw a time-overflow bug more than 60 years ago. Accounting
went haywire in the Bell Labs' comp center on day 256 of the year,
when the encoded output of a new time clock reached the sign bit.
Doug
On Thu, Sep 8, 2022 at 12:51 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> Both Coherent and 4.4BSD have stuck out to me as examples of
> not-quite-so-clean-room implementations that did well enough (more than
> enough for BSD) and didn't die a fiery death in litigation (as much as USL
> tried...).
Be careful with that statement; both parts of it are not wholly on target.
In the first, AT&T chose not to litigate against Coherent fully. As was
pointed out, Dennis and the team that examined the code base determined it
was 'clean enough.' If I recall, his comment was something like "It was
clear they had seen and had access to the AT&T IP at some point, most
likely at University (IIRC many of the founders were ex-University
Waterloo), but they did not find evidence of direct copying of files."
BSDi/UCB *vs. *USL was a different kettle of fish altogether. As has been
discussed here extensively (and need not be repeated), that suit was
about *Trade Secrets and >>ideas<< that make up what we call UNIX.* The
real interesting thing about that case is that had USL/AT&T won, the
repercussions for the industry would have been way beyond just BSDi - *but
all of the UNIX clones* and many of us on this list who had been "mentally
contaminated" with AT&T's ideas (I still have my 'mental contamination'
button somewhere in my archives).
The good news is that the US courts had the good sense to realize that
from the moment the US Gov put the consent decree down in 1956 and
required that AT&T make its IP available, AT&T had its people writing
about their work in the open literature (in UNIX's case the original
CACM paper, but continuing with all the books, follow-on papers, etc.),
and was such a wonderfully active participant in the research community
at large, that it could not be called a secret.
> What I find interesting is that in this day and age, it seems there is
> almost a requirement for true "clean-room" implementation if something is
> going to be replicated, which I understand to mean the team developing the
> new implementation can't be the same team that performed a detailed
> analysis of the product being reimplemented, but the latter team can
> produce a white paper/writeup/memorandum describing the results of their
> analysis and the development team can then use that so long as it doesn't
> contain any implementation details.
>
It's not "day and age"; it's from the original case law -- the term was
coined by the late Arthur Kahn, Esquire, who was the lead attorney for
Franklin Computers, Inc in the Franklin *vs.* Apple Case - which he
originally won and ultimately lost on appeal [Good guy BTW, particularly
for a non-technically trained person - he 'got it']. The concept is that
one group is in a dirty room and the other in a clean room. Information is
unidirectional. The dirty room can read published documentation, probe,
and test the device/implementation using standard programming techniques.
And then write a new document that describes the functionality of the
device in question. Then hand it to the folks in the clean room who can
reimplement a device to that new specification.
The point of contention in the case was whether *the original
documentation for the device*, in this case the Apple assembler listing
for Woz's Apple-II ROMs, was protected by copyright once it had been
transformed from its printed form in Apple's red books into binary and
stored in the ROMs themselves.
Franklin's 'dirty room' ultimately wrote a series of test programs that
demonstrated each of the externally available locations and entries in the
ROMs. This they documented, and their new clean-room programmers wrote a
new set of ROMs that duplicated the functionality. IIRC the story is that
Franklin's ROMs were a few bytes smaller in the end. Compaq would later use
the same scheme for the IBM PC.
> I would assume the current definition of a clean-room implementation only
> requires that the developers/implementors don't have access to the code of
> the parent product (source or reverse engineered), but could read manuals,
> study behavior in-situ, and use that level of "reverse engineering" to
> extract the design from the implementation, so not knowing the gritty
> details, Coherent could be a true clean-room.
>
Be careful here. I used to work for a firm that did a lot of work for
different vendors that would build some of these clean-room sub-systems (in
fact for some of the folks -- at least one -- of the readers of this
list). We were always careful that the clean-room people were ones we
were fairly sure had not seen that customer's product previously. I was
almost always on the 'dirty' team in many of those projects because I was
so contaminated with the IP of so many of our customers' work. Because we
worked for IBM, Sun, DEC, HP, DG, AT&T, *etc*. all at the same time and had
their IP in-house, we had very strict rules about how things were handled.
Even what sites and what sub-nets data could be on -- which system admins
had the passwords. No one person had access to all of the passwords. We
had a locked safe for each customer with secure things like passwords
(really) and rooms with locks and videos, and access doors. It was really
serious stuff.
Frankly, I think part of why we got some of the "work for hires" tasks was
because those firms trusted us to handle their IP properly. No way did we
want to contaminate something accidentally. Some projects like our big TNC
[Transparent Network Computing] system we were working on for all of IBM,
DEC, HP, and Intel simultaneously -- 4 different teams. The architects
could talk to each other, and we talked about common issues, but it was
different code for each team. I know we implemented things a couple of
times - although
we got smarter. For instance, the original RPC marshaling was done for
IBM with 'the awk script from hell' which later became an interface
generator that all four teams used.
>
> BSD is a different beast, as they were literally replacing the AT&T source
> code before their eyes, so there isn't much argument that can be made for
> 4.4BSD being a "clean-room" implementation of UNIX.
It was not a clean-room as Arthur defined it. It was rewritten over time,
replacing AT&T's implementation, which is all that was ever claimed.
> Given that, that's one of the more surprising things to me about 4.4BSD
> prevailing in the lawsuit, because while Berkeley could easily prove that
> they had replaced most of AT&T's code, there's still the fact that their
> team did have complete and unfettered access to Bell UNIX code at least
> circa 32V.
I expect this is because you don't quite understand what happened.
> but I remember reading somewhere that CSRG students and faculty avoided
> commercial UNIX like the plague,
Hmmm, I read it on the Internet -- it must be true ;-)
CSRG had Ultrix/VAX, SunOS/3, and I believe HP-UX/PA sources. They shipped
several DEC-developed drivers in 4.1A/4.1B/4.1C -- Sam, Bill Shannon, and I
tested a couple of them on one of my machines in Cory Hall, as DEC had
donated one of the 3 CAD machines [UCBCAD - a.k.a. 'coke' ], and it was the
only 'pure' DEC machine on campus - without any 3rd party HW in it. After
I graduated, I suspect Sam continued the relationship with Tom Quarles, so
4.2BSD was likely tested on that system too. But I know the RH-based TAPES
and DISKs were all straight from Shannon's SCCS Ultrix repo as he sent them
to me to try before I gave them to Sam.
> Does anyone know if there was a "formal" PDP-11 and/or VAX disassembler
> produced by Bell?
Most of the compiler kits have disassemblers, as do many debuggers -- what
are you asking?
> saying something to the effect "Rumor has it there is a PDP-11
> disassembler" but I'm curious if such tools were ever provided in any sort
> of official capacity.
>
In the mid/late-70s (*i.e.* V6/V7 time frame) there were a couple of them --
where to start -- V7 has one inside of adb, and if I recall, later versions
of PCC2 have one. But if you look in the USENIX tapes you can find a couple
of pretty well-adorned ones. There was one that IIRC was done by ??Cooper
Union?? guys that spit out DEC MACRO-11 syntax for the Harvard assembler.
That should be on the TUHS archives. Thinking about it, Phil Karn had
one too that did some interesting label patch-up IIRC - which I think he
brought with him to CMU from somewhere -- but I may be misremembering
that.
Hi,
The following comment was made on the geeks mailing list and I figured
it was worth cross posting to the TUHS mailing list. -- I'm BCCing the
original poster so that they are aware of the cross post and in case
they want to say more.
--8<--
In related news that might entertain and inform, there are some
interesting old-timey UNIXes out there that I've come across recently
though:
XV6:
https://pdos.csail.mit.edu/6.828/2012/xv6.html
OMU:
http://www.pix.net/mirrored/discordia.org.uk/~steve/omu.html
V7/x86:
https://www.nordier.com/
RUnix and Singlix:
https://www.singlix.com/runix/
-->8--
I don't know if any of it should be included in the TUHS archives or
not. -- I figure discussing it on TUHS is the best way to find out.
P.S. Re-sending to the correct TUHS email address. Somehow I had
something on file reflecting the old server. :-/
--
Grant. . . .
unix || die
> It was used, in the modern sense, in "Planning a Computer System",
> Buchholz, 1962.
Also in the IBM "650 Manual of Operation", June, 1955. (Before I was
born! :-)
Noel
> On Sep 8, 2022, at 9:51 AM, Jon Steinhart <jon(a)fourwinds.com> wrote:
> One of those questions for which there is no search engine incantation.
Whatever it is, it's really old. I found it used, not quite in the modern
sense, in "High-Speed Computing Devices", by ERA, 1950. It was used, in the
modern sense, in "Planning a Computer System", Buchholz, 1962.
Noel
> (Research) Unix ... 'shipped' with zero known bugs.
It wasn't a Utopia. Right from the start man pages reported BUGS,
though many were infelicities, not implementation errors.
Dennis once ran a demo of a ubiquitous bug: buffer overflow. He fed a
2000-character line on stdin to every program in /bin. Many crashed.
Nobody was surprised; and nobody was moved to fix the offenders. The
misdesign principle that "no real-life input looks like that" fell
into disrepute, but the bad stuff lived on. Some years down the road a
paper appeared (in CACM?) that repeated Dennis's exercise.
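Dennis's exercise is easy to reproduce in spirit on a modern system. A hedged Python sketch (my reconstruction, not the original test):

```python
import subprocess

def overflow_probe(program, length=2000, timeout=5):
    """Feed one very long line on stdin and report the exit status;
    a negative returncode means the program died on a signal
    (e.g. SIGSEGV), which is what the buffer-overflow demo exposed."""
    line = b"a" * length + b"\n"
    try:
        proc = subprocess.run([program], input=line,
                              capture_output=True, timeout=timeout)
        return proc.returncode
    except subprocess.TimeoutExpired:
        return None  # hung rather than crashed

print(overflow_probe("cat"))  # 0: cat copes with long lines
```

The 1990 Miller et al. "fuzz" study discussed just below generalized this single-shot test to streams of random input.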
> An emergent property is "Good Security”
Actually security (or at least securability) was a conscious theme
from the start to which Ken, Bob Morris, and Fred Grampp gave serious
attention. Networking brought insecurity, especially to Berkeley Unix.
But research was not immune; remote execution via uucp caused much
angst, but not enough to kill it.
In regards to the basic question. To oversimplify: Theme 1. Unix
facilities encouraged what Brian recognized and proselytized as
software tools. Theme 2. OS portability was new and extraordinarily
final. Subsequent OS's were all portable and were all Unix.
Doug
Does anybody out there have a copy of the old AT&T Toolchest "dmd-pgmg"
package?
This apparently includes a SysV port of Sam for the 5620/630 as well
as other programs for the AT&T windowing terminals.
I’ve been looking at this question for a time and thought it could’ve appeared on the TUHS list - but don’t have an idea of the search terms to use on the list.
Perhaps someone suggest some to me.
As a starting point, below is what John Lions wrote on a similar topic in 1978. Conspicuously, “Security” is missing, though “Reliability & Maintenance” would encompass the idea.
With hindsight, I’d suggest (Research) Unix took a very strong stance on “Technical Debt” - it was small, clean & efficient, even elegant. And ‘shipped' with zero known bugs.
It didn’t just bring the Unix kernel to many architectures, the same tools were applied to create what we now call “Open Source” in User land:
- Multi-platform / portable
- the very act of porting software to diverse architectures uncovered new classes of bugs and implicit assumptions. Big- & Little-endian were irrelevant or unknown Before Unix.
- full source
- compatibility layers via
- written in common, well-known, well-supported languages [ solving the maintenance & update problem ]
- standard, portable “toolchains”
- shell, make, compiler, library tools for system linker, documentation & doc reading tools
- distribution systems including test builds, issue / fault reporting & tracking
An emergent property is "Good Security”, both by Design and by (mostly) error-free implementations.
In the Epoch Before Unix (which started when exactly?), there was a lot of Shared Software, but very little that could be mechanically ported to another architecture.
Tools like QED and ROFF were reimplemented on multiple platforms, not ‘ported’ in current lingo.
There are still large, complex FORTRAN libraries shared as source.
There’s an important distinction between “Open” and “Free” : cost & availability.
We’ve gone on to have broadband near universally available, with easy to use Internet collaboration tools - e.g. “git”, “mercurial” and “Subversion”, successors to CVS.
The Unix-created Open Source concept broke Vendor Lock-in & erased most “Silos”.
The BSD TCP/IP stack, and Berkeley sockets library, were sponsored by DARPA, and made freely available to vendors as source code.
Similarly, important tools for SMTP and DNS were freely available as Source Code, both speeding the implementation of Internet services and providing “out of the box” protocol / function compatibility.
The best tools, or even just adequate, became only a download & install away for all coding shops, showing up a lot of poor code developed by in-house “experts” and radically trimming many project schedules.
While the Unix “Software Tools” approach - mediated by the STDOUT / STDIN interface, not API’s - was new & radical, and for many classes of problems, provided a definitive solution,
I’d not include it in a list of “Open Source” features.
It assumes a “command line” and process pipelines, which aren’t relevant to very large post-Unix program classes: Graphical Apps and Web / Internet services.
regards
steve jenkin
==============
Lions, J., "An operating system case study" ACM SIGOPS Operating Systems Review, July 1978, ACM SIGOPS Oper. Syst. Rev. 12(3): 46-53 (1978)
2. Some Comments on UNIX
------------------------
There is no space here to describe the technical features of UNIX in detail (see Ritchie and Thompson, 1974 ; also Kernighan and Plauger, 1976),
nor to document its performance characteristics, which we have found to be very satisfactory.
The following general comments do bear upon the present discussion:
(a) Cost.
UNIX is distributed for "academic and educational purposes" to educational institutions by the Western Electric Company for only a nominal fee,
and may be implemented effectively on hardware configurations costing less than $50,000.
(b) Reliability and Maintenance.
Since no support of any kind is provided by Western Electric,
each installation is potentially on its own for software maintenance.
UNIX would not have prospered if it were not almost completely error-free and easy to use.
There are few disappointments and no unpleasant surprises.
(c) Conciseness.
The PDP-11 architecture places a strong limitation on the size of the resident operating system nucleus.
As Ritchie and Thompson (1974) observe,
"the size constraint has encouraged not only economy but a certain elegance of design".
The nucleus provides support services and basic management of processes, files and other resources.
Many important system functions are carried out by utility programs.
Perhaps the most important of these is the command language interpreter, known as the "shell".
(Modification of this program could alter, even drastically, the interface between the system and the user.)
(d) Source Code.
UNIX is written almost entirely in a high level language called "C" which is derived from BCPL and which is well matched to the PDP-11.
It provides record and pointer types,
has well developed control structures,
and is consistent with modern ideas on structured Programming.
(For the curious, the paper by Kernighan (1975) indirectly indicates the flavour of "C"
and exemplifies one type of utility program contained in UNIX.)
Something less than 10,000 lines of code are needed to describe the resident nucleus.
pg 47
(e) Amenability.
Changes can be made to UNIX with little difficulty.
A new system can be instituted by recompiling one or more files (at an average of 20 to 30 seconds per file),
relinking the file containing the nucleus (another 30 seconds or so),
and rebooting using the new file.
In simple cases the whole process need take no more than a few minutes.
(f) Intrinsic Interest.
UNIX contains a number of features which make it interesting in its own right:
the run-time support for the general tree structured file system is particularly efficient;
the use of a reserved set of file names smooths the concepts of device independence;
multiple processes (three or four per user is average) are used in a way which in most systems is regarded as totally extravagant
(this leads to considerable simplification of the system/user interface);
and the interactive intent of the system has resulted in an unusually rich set of text editing and formatting programs.
(g) Limitations.
There are few limitations which are of concern to us.
The PDP-11 architecture limits program size, and this for example frustrated an initial attempt to transfer Pascal P onto the 11/40.
Perhaps the greatest weakness of UNIX as it is presently distributed (and this is not fundamental!)
is in the area where other systems usually claim to be strong:
support for "bread and butter" items such as Fortran and Basic.
(h) Documentation.
The entire official UNIX documentation, including tutorial material, runs to less than 500 pages.
By some standards this is incredibly meagre,
but it does mean that a student can carry his own copy in his brief case.
Features of the documentation include:
- an unconventional arrangement of material (unsettling at first, but really very convenient);
- a terse, enigmatic style, with much information conveyed by innuendo;
- a permuted KWIC index.
Most importantly perhaps UNIX encourages the programmer to document his work.
There is a very full set of programs for editing and formatting text.
The extent to which this has been developed can be gauged from the paper by Kernighan and Cherry (1975).
==============
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
>> a paper appeared (in CACM?) that repeated Dennis's exercise.
> Maybe this one?
> B.P. Miller, L. Fredriksen, and B. So, "An Empirical Study of the Reliability
> of UNIX Utilities", Communications of the ACM 33, 12 (December 1990).
> http://www.paradyn.org/papers/fuzz.pdf
Probably. I had forgotten that the later effort was considerably more
elaborate than Dennis's. It created multiple random inputs that might
stumble on other things besides buffer overflow. I see a Unix parable
in the remarkable efficacy of Dennis's single-shot test.
Doug
I added i386 binary compiled from 4.4BSD-Alpha source.
http://www.netside.co.jp/~mochid/comp/bsd44-build/
Boot with bochs works rather well. qemu-system-i386 also boots, and the
NIC (NE2000 ne0) works well, but the kernel prints many "ISA strayintr" messages.
I got much useful information from the two sites below:
"Fun with virtualization" https://virtuallyfun.com/ 386bsd bochs qemu
"Computer History Wiki!" https://gunkies.org/wiki/Main_Page
Installing 386BSD on BOCHS
At first, I tried to compile for i386 using the final 4.4BSD (1995) source,
patching many, many pieces from 386BSD, NetBSD, and elsewhere..
but then, I felt "Well, we have BSD/OS 2.0, NetBSD 1.0, and FreeBSD 2.0
those are full of good improvements.."
So, I changed target, and remembered Pace Willisson's memo in 4.4BSD
(and in 4.4BSD-Lite2 also) sys/i386/i386/README:
"4.4BSD-alpha 80386/80486 Status" June 20, 1992
That file says it "can be compiled into a fairly usable system".
Yeah, the changes needed were not so small, though.
-mochid
Hi All.
Thanks to Jeremy C. Reed, whose email to the maintainer got the PCC Revived
website and CVS back up.
Thanks to everyone who let me know that it's back up. :-)
My github mirror is https://github.com/arnoldrobbins/pcc-revived and there
are links there to the website etc.
My repo has a branch 'ubuntu18' with diffs for running PCC on Ubuntu,
if that interests anyone.
Enjoy,
Arnold