> From: John Floren
> Can anyone on the list point me to either an existing archive where
> these exist
The canonical repository for historic documentation online is BitSavers.
It has an almost-complete set of DEC stuff (both manuals and prints). QBUS
devices are at:
http://www.bitsavers.org/pdf/dec/qbus/
QBUS CPUs will be in the relevant model directory, e.g.:
http://www.bitsavers.org/pdf/dec/pdp11/1123/
and disk drives are in:
http://www.bitsavers.org/pdf/dec/disc/
I haven't checked your list, but I suspect most of them are there; I think the
ADV11-A prints are missing, though. You can either send the originals to Al
Kossow, or scan them for him; but check with him first, to make sure he doesn't
already have them and just hasn't got around to posting them yet.
There's another site which indexes DEC online documentation:
https://manx-docs.org/
There are a very few things which aren't in BitSavers but can be found there.
> KFD11-A cpu
I assume that's a typo for 'KDF11-A'?
Noel
I've been hauling around a pile of DEC Field Maintenance Print Sets
for PDP-11 components for over a decade now, intending to see if
they're worth having scanned or if there are digital versions out
there already. Can anyone on the list point me to either an existing
archive where these exist, or an archivist who would be interested in
scanning them? They're full of exploded diagrams, schematics, and
assembly listings.
Here's the list of what I have:
Field Maintenance Print Set (17" wide, 11" high):
RLV11 disk controller
RL01-AK disk drive
ADV-11A (??)
Field Maintenance Print Set (14" wide, 8.5" high):
RL01 disk drive
DLV11-J serial line controller
RLV11 disk controller
KFD11-A cpu
KEF11-A floating point processor
PDP11/23
PDP11/03-L
Thanks,
John Floren
I could chip in with my own strong opinions about code formatting,
but I think others have already posted plenty of such boring off-topic
fluff.
A straight answer to Will's original question might be interesting,
though:
The oldest extant UNIX code samples I know are those in the TUHS archive,
in Distributions/Research/Dennis_v3/nsys.tar.gz; they're a very old
kernel source tree. There are plenty of tabs there.
This matches my memories of the V7-era source code, and of what I saw
people like ken and dmr and rob and bwk and pjw typing in the not-
so-early days of the 1980s when I worked with them.
Tabs were generally eight spaces apart. In code, nobody worried about
the effects on long lines, because the coding style was spare and
didn't run to many deeply-nested constructs, so lines didn't get that
long. (Maybe it was considered a feature rather than a bug that
deep nesting and deep indentation looked messy, because it wasn't
usually a good idea anyway.)
I can't speak to original motivations, but I suspect my own reasons
for using tabs aren't too different:
-- It's quicker to type than multiple spaces
-- When not at the start of the line, tabs more often preserve
alignment when earlier parts of the line are edited
-- Back when terminals were connected at speeds like 110 or 300 bps
(I am old enough to have experienced that, especially when working
from home), letting the system send a tab and the local terminal
expand it was a lot faster, especially when reading code (more likely
to have lots of indentation than prose). Not every device supported
tabs, but enough did that it made a real difference.
UNIX didn't originate any of this. I used tabs when writing in FORTRAN
and ALGOL/SIMULA and MACRO-10 on the TOPS-10 system I used before I
encountered UNIX. So did all the other hackers I knew in the terminal
room where we all hung out.
I don't know the history of entab/detab. Neither appears to have
been around in the Research systems; they're not in V7 and they're
not in V10, though V10 does have expand.
As an aside, the V10 manual has a single manual page for col, [23456],
mc, fold, and expand. It's a wonderful example of how gracefully
Doug assembled collections of related small programs onto a single
page to keep the manual size down. Also of his gift for concise
prose: the first sentence is
These programs rearrange files for appearance's sake.
which is a spot-on but non-stodgy summary. I wish I could write
half as well as Doug can.
And as an almost-joke, it's a wonder those programs haven't all been
made into options to cat in modern systems.
Norman Wilson
Toronto ON
For sure, I've seen at least two interesting changes:
- market forces have pushed fast iteration and fast prototyping into the
mainstream in the form of Silicon Valley "fail fast" culture and the
"agile" culture. This, over the disastrous "waterfall" style, has led to a
momentous improvement in overall productivity.
- As coders get pulled away from the machine and performance is less and
less in coders' hands, engineers aren't sucked into (premature) optimization
as much.
Tyler
On Sat, Jan 30, 2021 at 6:10 AM M Douglas McIlroy <
m.douglas.mcilroy(a)dartmouth.edu> wrote:
> Have you spotted an evolutionary trend toward better, more productive
> programmers? Or has programmer productivity risen across the board due to
> better tools? Arguably what's happened is that principle has been
> self-obsoleting, for we have cut back on the demand for unskilled (i.e.
> less capable) programmers. A broad moral principle may be in play:
> programmers should work to put themselves out of business, i.e. it is wrong
> to be doing the same kind of work (or working in the same way) tomorrow as
> yesterday.
>
> Doug
>
>
> On Tue, Jan 26, 2021 at 5:23 AM Tyler Adams <coppero1237(a)gmail.com> wrote:
>
>> Looking at the 1978 list, the last one really stands out:
>>
>> "Use tools in preference to unskilled help to lighten a programming task"
>> -- The concept of unskilled help for a programming task...doesn't really
>> exist in 2020. The only special case is doing unskilled labor yourself.
>> What unskilled tasks did people used to do back in the day?
>>
>> Tyler
>>
>>
>> On Tue, Jan 26, 2021 at 4:07 AM M Douglas McIlroy <
>> m.douglas.mcilroy(a)dartmouth.edu> wrote:
>>
>>> It might be interesting to compare your final list with the two lists in
>>> the 1978 special issue of the BSTJ--one in the Foreword, the other in the
>>> revised version of the Ritchie/Thompson article from the CACM. How have
>>> perceptions or values changed over time?
>>>
>>> Doug
>>>
>>>
>>> On Mon, Jan 25, 2021 at 7:32 AM Steve Nickolas <usotsuki(a)buric.co>
>>> wrote:
>>>
>>>> On Mon, 25 Jan 2021, Tyler Adams wrote:
>>>>
>>>> > I'm writing about my 5 favorite unix design principles on my blog this
>>>> > week, and it got me wondering what others' favorite unix design
>>>> principles
>>>> > are? For reference, mine are:
>>>> >
>>>> > - Rule of Separation (from TAOUP <
>>>> http://catb.org/~esr/writings/taoup/html/>
>>>> > )
>>>> > - Let the Machine Do the Dirty Work (from Elements of Programming
>>>> Style)
>>>> > - Rule of Silence (from TAOUP <
>>>> http://catb.org/~esr/writings/taoup/html/>)
>>>> > - Data Dominates (Rob Pike #5)
>>>> > - The SPOT (Single Point of Truth) Rule (from TAOUP
>>>> > <http://catb.org/~esr/writings/taoup/html/>)
>>>> >
>>>> > Tyler
>>>> >
>>>>
>>>> 1. Pipes
>>>> 2. Text as the preferred format for input and output
>>>> 3. 'Most everything as a file
>>>> 4. The idea of simple tools that are optimized for a single task
>>>> 5. A powerful scripting language built into the system that, combined
>>>> with
>>>> 1-4, makes writing new tools heaps easier.
>>>>
>>>> -uso.
>>>>
>>>
> - separation of code and data using read-only and read/write file systems
I'll bite. How do you install code in a read-only file system? And
where does a.out go?
My guess is that /bin is in a file system of its own. Executables from
/etc and /lib are probably there too. On the other hand, I guess
users' personal code is still read/write.
I agree that such an arrangement is prudent. I don't see a way,
though, to update /bin without disrupting most running programs.
Doug
All,
I was introduced to Unix in the mid-1990s through my wife's VMS account
at UT Arlington, where they had a portal to the WWW. I was able to
download Slackware with the 0.9 kernel on 11 floppies including X11. I
installed this on my system at the time - either a DEC Rainbow 100B? or
a hand-me-down generic PC. A few years later at Western Illinois
University - they had some Sun Workstations there and I loved working
with them. It would be several years later, though, that I would
actually use unix in a work setting - 1998. I don't even remember what
brand of unix, but I think it was, again, Sun, though with no GUI, so not as
much love. Still, I was able to use rcs, and when my Windows-bound
buddies lost a week's work because of some snafu with their backups, I
didn't lose anything - jackflash was the name of the server - good
memories :). However, after this it was all DOS and Windows until 2005.
I'd been eyeing Macs for some time. I like the visual aesthetics and
obvious design considerations. But, in 2005, I finally had a bonus big
enough to actually buy one. I bought a G5 24" iMac and fell in love with
Mac. Next, it was a 15" G4 Powerbook. I loved those Macs until Intel
came around and then it was game over, no more PCs in my life (not
really, but emotionally, this was how I felt). With Mac going Intel, I
could dual boot into Windows, triple boot into Linux, and quadruple boot
into FreeBSD, and I could ditch Fink and finally manage my unix tools
properly (arguable, I know) with Homebrew or MacPorts (lately, I've gone
back to MacPorts due to Homebrew's lack of support for older OS
versions, and for MacPorts' seeming rationality).
Anyhow, I have thoroughly enjoyed the Mac ride, but with Catalina, the
ride got really bumpy (too much phoning home, no more 32-bit programs, and
since Adobe Acrobat X, which I own outright, isn't 64-bit, among other
apps, this just is not an option for me), and with Big Sur, it's gotten
worse, potholes, sinkholes, and suchlike, and the interface is downright
patronizing (remember Microsoft Bob?). So, here I am, Mr.
Run-Any-Cutting-Edge-OS anytime guy, hanging on tooth and nail to Mac OS
Mojave where I still have a modicum of control over my environment.
My thought for the day and question for the group is... It seems that
the options for a free operating system (free as in freedom) are
becoming ever more limited - Microsoft, this week, announced that their
Edge update will remove Edge Legacy and IE while doing the update -
nuts; Mac's desktop is turning into iOS - ew, ick; and Linux is the Wild
West meets dictatorship, and major corporations are moving in to set
their direction (Microsoft, Oracle, IBM, etc.). FreeBSD we've beat to
death over the last couple of weeks, so I'll leave it out of the mix for
now. What in our unix past speaks to the current circumstance and what
do those of you who lived those events see as possibilities for the next
revolution - and, will unix be part of it?
And a bonus question, why, oh why, can't we have a contained kernel that
provides minimal functionality (dare I say microkernel), that is
securable, and layers above it that other stuff (everything else) can
run on with auditing and suchlike for traceability?
Hi,
As I find myself starting yet another project that wants to use
ANSI control sequences for colorization of text, I find myself -- yet
again -- wondering if there is a better way to generate the output from
the code in a way that respects TERMinal capabilities.
Is there a better / different control sequence that I can ~> should use
for colorizing / stylizing output that will account for the differences
in capabilities between a VT100 and XTerm?
Can I wrap things that I output so that I don't send color control
sequences to a TERMinal that doesn't support them?
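One way to do that is to consult the terminfo database for $TERM instead
of hard-coding escape sequences. A minimal sh sketch using tput(1) -- the
capability names ("colors", "setaf", "sgr0") are standard terminfo, but
the script itself is just an illustration, not a drop-in solution:

  # Only emit color if terminfo says this terminal supports it.
  # "tput colors" reports -1 (or fails) on monochrome terminals
  # such as a real VT100.
  colors=$(tput colors 2>/dev/null || echo -1)
  if [ "$colors" -ge 8 ]; then
      red=$(tput setaf 1)       # set foreground to color 1 (red)
      reset=$(tput sgr0)        # return to default attributes
  else
      red='' reset=''           # degrade to plain text
  fi
  printf '%swarning:%s something happened\n' "$red" "$reset"

From C, the same checks are available via setupterm(), tigetnum("colors"),
and tigetstr("setaf") in the curses/terminfo library, which is what tput
uses underneath.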
--
Grant. . . .
unix || die
The recent discussions on the TUHS list of whether /bin and /usr/bin
are different, or symlinked, brought to mind the limited disk and tape
sizes of the 1970s and 1980s. Especially the lower-cost tape
technologies had issues with correct recognition of an end-of-tape
condition, making it hard to span a dump across tape volumes, and
strongly suggesting that directory tree sizes be limited to what could
fit on a single tape.
I made an experiment today across a broad range of operating systems
(many with multiple versions in our test farm), and produced these two
tables, where version numbers are included only if the O/S changed
practices:
------------------------------------------------------------------------
Systems with /bin a symlink to /usr/bin (or both to yet another common
directory) [42 major variants]:
ArchLinux       Kali              RedHat 8
Arco            Kubuntu 19, 20    Q4OS
Bitrig          Lite              ScientificLinux 7
CentOS 7, 8     Lubuntu 19        Septor
ClearLinux      Mabox             Solaris 10, 11
Debian 10, 11   Mageia            Solydk
Deepin          Manjaro           Sparky
DilOS           Mint 20           Springdale
Dyson           MXLinux 19        Ubuntu 19, 20, 21
Fedora          Neptune           UCS
Gnuinos         Netrunner         Ultimate
Gobolinux       Oracle Linux      Unleashed
Hefftor         Parrot 4.7        Void
IRIX            PureOS            Xubuntu 19, 20
------------------------------------------------------------------------
Systems with separate /bin and /usr/bin [60 major variants]:
Alpine          Hipster           OS108
AltLinux        KaOS              Ovios
Antix           KFreeBSD          PacBSD
Bitrig          Kubuntu 18        Parrot 4.5
Bodhi           LibertyBSD        PCBSD
CentOS 5, 6     LMDE              PCLinuxOS
ClonOS          Lubuntu 17        Peppermint
Debian 7--10    LXLE              Salix
DesktopBSD      macOS             ScientificLinux 6
Devuan          MidnightBSD       SlackEX
DragonFlyBSD    Mint 18--20       Slackware
ElementaryOS    MirBSD            Solus
FreeBSD 9--13   MXLinux 17, 18    T2
FuryBSD         NetBSD 6--10      Trident
Gecko           NomadBSD          Trisquel
Gentoo          OmniOS            TrueOS
GhostBSD        OmniTribblix      Ubuntu 14--18
GNU/Hurd        OpenBSD           Xubuntu 18
HardenedBSD     OpenMandriva      Zenwalk
Helium          openSUSE          Zorinos
------------------------------------------------------------------------
Some names appear in both tables, indicating a transition from
separate directories to symlinked directories in more recent O/S
releases.
Many of these system names are spelled in mixed lettercase, and if
I've botched some of them, I extend my apologies to their authors.
Some of those systems run on multiple CPU architectures, and our test
farm exploits that; however, I found no instance of the CPU type
changing the separation or symbolic linking of /bin and /usr/bin.
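For anyone who wants to reproduce the experiment, the per-system probe is
simple. A sketch in sh -- a hypothetical reconstruction, not Beebe's actual
test script -- for systems that have readlink(1):

  # Classify one system. This does not catch the subtler case where
  # /bin and /usr/bin are both links to a third common directory.
  if [ -L /bin ]; then
      echo "/bin is a symlink to $(readlink /bin)"
  else
      echo "/bin and /usr/bin are separate directories"
  fi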
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
To fill out the historical record, the earliest doctype I know of
was a shell (not rc) script. From my basement heater that happens
to run 10/e:
b$ man doctype | uniq
DOCTYPE(1) DOCTYPE(1)
NAME
doctype - guess command line for formatting a document
SYNOPSIS
doctype [ option ... ] [ file ]
DESCRIPTION
Doctype guesses and prints on the standard output the com-
mand line for printing a document that uses troff(1),
related preprocessors like eqn(1), and the ms(6) and mm
macro packages.
Option -n invokes nroff instead of troff. Other options are
passed to troff.
EXAMPLES
eval `doctype chapter.?` | apsend
Typeset files named chapter.0, chapter.1, ...
SEE ALSO
troff(1), eqn(1), tbl(1), refer(1), prefer(1), pic(1),
ideal(1), grap(1), ped(9.1), mcs(6), ms(6), man(6)
BUGS
It's pretty dumb about guessing the proper macro package.
Page 1 Tenth Edition (printed 2/24/2021)
doctype(1) is in the 8/e manual, so it existed in early 1985;
I bet it's actually older than that. The manual page is on
the V8 tape, but, oddly, not the program; neither is it in
the V10 pseudo-tape I cobbled together for Warren long ago.
I'm not sure why not.
The version in rc is, of course, a B-movie remake of the
original.
Norman Wilson
Toronto ON
Lately, I've been playing around in v6 unix and mini-unix with a goal of
better understanding how things work and maybe doing a little hacking.
As my fooling around progressed, it became clear that moving files into
and out of the v6 unix world was a bit tedious. So it occurred to me
that having a way to mount a v6 filesystem under linux or another modern
unix would be kind of ideal. At the same time it also occurred to me
that writing such a tool would be a great way to sink my teeth into the
details of old Unix code.
I am aware of Amit Singh's ancientfs tool for osxfuse, which implements
a user-space v6 filesystem (among other things) for MacOS. However,
being read-only, it's not particularly useful for my problem. So I set
out to create my own FUSE-based filesystem capable of both reading and
writing v6 disk images. The result is a project I call retro-fuse,
which is now up on github for anyone to enjoy
(https://github.com/jaylogue/retro-fuse)
A novel (or perhaps just peculiar) feature of retro-fuse is that, rather
than being a wholesale re-implementation of the v6 filesystem, it
incorporates the actual v6 kernel code itself, "lightly" modernized to
work with current compilers, and reconfigured to run as a Unix process.
Most of the file-handling code of the kernel is there, down to a trivial
block device driver that reflects I/O into the host OS. There's also a
filesystem initialization feature that incorporates code from the
original mkfs tool.
Currently, retro-fuse only works on linux. But once I get access to my
mac again in a couple weeks, I'll port it to MacOS as well. I also hope
to expand it to support other filesystems, such as v7 or the
early BSDs, but we'll see when that happens.
As I expected, this was a fun and very educational project to work on.
It forced me to really understand what was going on in the kernel (and to
really pay attention to what Lions was saying). It also gave me a
little view into what it was like to work on Unix back in the day.
Hopefully someone else will find my little self-education project useful
as well.
--Jay
Some additions:
Systems with /bin a symlink to /usr/bin
Digital UNIX 4.0
Tru64 UNIX 5.0 to 5.1B
HP-UX 11i 11.23 and 11.31
Systems with separate /bin and /usr/bin
SCO UNIX 3.2 V4.0 to V4.2
--
The more I learn the better I understand I know nothing.
> I can imagine a simple perl (or python or whatever) script that would run
> through groff input [and] determine which preprocessors are actually
> needed ...
Brian imagined such and implemented it way back when. Though I used
it, I've forgotten its name. One probably could have fooled it by
tricks like calling pic only in a .so file and perhaps renaming .so.
But I never heard of it failing in real life. It does impose an extra
pass over the input, but may well save a pass compared to the
defensive groff -pet that I often use or to the rerun necessary when I
forget to mention some or all of the filters.
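The heart of such a tool is just a scan for the preprocessors' begin-macros,
in the spirit of the doctype(1) page quoted earlier. A hypothetical sh
sketch -- not Brian's program, and with exactly the .so blind spot noted
above:

  # Guess a formatting pipeline from the begin-macros in the input.
  f=${1:?usage: guesstype file}
  pipe=""
  grep -q '^\.PS' "$f" && pipe="pic | "
  grep -q '^\.TS' "$f" && pipe="${pipe}tbl | "
  grep -q '^\.EQ' "$f" && pipe="${pipe}eqn | "
  echo "cat $f | ${pipe}troff"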
All,
So, we've been talking low-level design for a while. I thought I would
ask a fundamental question. In days of old, we built small
single-purpose utilities and used pipes to pipeline the data and
transformations. Even back in the day, it seemed that there was tension
to add yet another option to every utility. Today, as I was marveling at
groff's abilities with regard to printing my man pages directly to my
printer in 2021, I read the groff(1) page:
example here: https://linux.die.net/man/1/groff
What struck me (the wrong way) was the second paragraph of the description:
The groff program allows to control the whole groff system by command
line options. This is a great simplification in comparison to the
classical case (which uses pipes only).
Here is the current plethora of options:
groff [-abcegilpstzCEGNRSUVXZ] [-d cs] [-f fam] [-F dir] [-I dir] [-L
arg] [-m name] [-M dir] [-n num] [-o list] [-P arg] [-r cn] [-T dev] [-w
name] [-W name] [file ...]
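For concreteness, the "classical case (which uses pipes only)" and the groff
equivalent look roughly like this -- the postprocessor and spooler names
(dpost, lp) vary by system:

  pic paper.ms | tbl | eqn | troff -ms | dpost | lp    # classical pipeline
  groff -p -t -e -ms -Tps paper.ms | lp                # one command, many flags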
Now, I appreciate groff, don't get me wrong, but my sensibilities were
offended by the idea that a kazillion options was in any way simpler
than pipelining single-purpose utilities. What say you? Is this the
perfected logical extension of the unix pioneers' work, or have we gone
horribly off the trail?
Regards,
Will
Will Senn wrote,
> join seems like part of an aborted (aka never fully realized) attempt at a text based rdb to me
As the original author of join, I can attest that there was no thought
of parlaying join into a database system. It was inspired by
databases, but liberated from them, much as grep was liberated from an
editor.
Doug
Hi:
I've been following the discussion on abstractions and the recent
messages have been talking about a ei200 batch driver (ei.c:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=AUSAM/sys/dmr/ei.c) I
have access to DtCyber (CDC Cyber emulator) that runs all/most of the
CDC operating system. I'm toying with the idea of getting ei200 running.
In looking at things, I ran across the following in
https://minnie.tuhs.org/cgi-bin/utree.pl?file=AUSAM/READ_ME
> The UNSW batch system has not been provided with this
> distribution, because of its limited appeal.
> If you are unfortunate enough to have a CYBER to talk to,
> please contact us and we will forward it to you.
Does anyone happen to know if the batch system is still around?
thanks
-ron
To quote from Jon’s post:
> There have been heated discussions on this list about kernel API bloat. In my
> opinion, these discussions have mainly been people grumbling about what they
> don't like. I'd like to flip the discussion around to what we would like.
> Ken and Dennis did a great job with initial abstractions. Some on this list
> have claimed that these abstractions weren't sufficient for modern times.
> Now that we have new information from modern use cases, how would we rethink
> the basic abstractions?
I’d like to add the constraint of things that would have been implementable
on the hardware of the late 1970s, let's say a PDP11/70 with Datakit or
3Mbps Ethernet or Arpanet; maybe also Apple 2 class bitmap graphics.
And quote some other posts:
> Because it's easy pickings, I would claim that the socket system call is out
> of line with the UNIX abstractions; it exists because of practical political
> considerations, not because it's needed. I think that it would have fit
> better folded into the open system call.
>>
>> Somebody once suggested a filesystem interface (it certainly fits the Unix
>> philosophy); I don't recall the exact details.
>
> And it was done, over 30 years ago; see Plan 9 from Bell Labs....
I would argue that quite a bit of that was implementable as early as 6th
Edition. I was researching that very topic last Spring [1] and backported
Peter Weinberger’s File System Switch (FSS) from 8th to 6th Edition; the
switch itself bloats the kernel by about half a kilobyte. I think it may be
one of the few imaginable extensions that do not dilute the incredible
bang/buck ratio of the V6 kernel.
With that change in place a lot of other things become possible:
- a Kilian style procfs
- a Weinberger style network FS
- a text/file based ioctl
- a clean approach to named pipes
- a different starting point to sockets
Each of these would add to kernel size of course, hence I’m thinking about
a split I/D kernel.
To some extent it is surprising that the FSS did not happen around 1975, as
many ideas around it were 'in the air' at the time (Heinz Lycklama’s peripheral
Unix, the Spider network Filestore, Rand ports, Arpanet Unix, etc). With the
benefit of hindsight, it isn’t a great code leap from the cdev switch to the
FSS - but probably the ex ante conceptual leap was just too big at the time.
Paul
[1] Code diffs here:
https://1587660.websites.xs4all.nl/cgi-bin/9995/vdiff?from=fab15b88a6a0f36b…
The last group I was on before I left the Labs in 1992 was the
POST team.
pq stood for "post query," but POST consisted of -
- mailx: (from SVR3.1) as the mail user agent
- UPAS: (from research UNIX) as the mail delivery agent
- pq: the program to query the database
- EV: (pronounced like the biblical name) the database (and the
genesis program to create indices)
- post: program to combine all the above to read email and to send mail via queries
pq by default would look up people
pq lastname: find all people with lastname, same as pq last=lastname
pq first.last: find all people with first last, same as pq first=first/last=last
pq first.m.last: find all people with first m last, same as pq first=first/middle=m/last=last
this is how email to dennis.m.ritchie @ att.com worked to send it on to research!dmr
you could send mail to a whole department via /org=45267 or the whole division
via /org=45 or a whole location via /loc=mh or just the two people in a specific
office via /loc=mh/room=2f-164
these are "AND"s an "OR" is just another query after it on the same line
There were some special extensions -
- prefix, e.g. pq mackin* got all mackin, mackintosh, mackinson, etc
- soundex, e.g. pq mackin~ got all with a last name sounding like mackin,
so names such as mackin, mckinney, mckinnie, mickin, mikami, etc
(mackintosh and mackinson did not match the soundex, therefore not included)
The EV database was general and fairly simple. It was a directory with
files called "Data" and "Proto" in it.
"Data" was plain text, pipe delineated fields, newline separated records -
123456|ritchie|dennis|m||r320|research!dmr|11273|mh|2c-517|908|582|3770
(example uses data preserved at https://www.bell-labs.com/usr/dmr/www/)
"Proto" defined the fields in a record (I didn't remember exact syntax anymore) -
id n i
last a i
first a i
middle a -
suffix a -
soundex a i
email a i
org n i
loc a i
room a i
area n i
exch n i
ext n i
"n" means a number so 00001 was the same as 1, and "a" means alpha, the "i" or "-"
told genesis if an index should be generated or not. I think it had more but
that has faded with the years.
If indices were generated, they would point to the block number in Data, so an lseek(2)
could get to the record quickly. I believe there were two levels of block-pointing indices.
(sort of like inode block pointers had direct and indirect blocks)
So every time you added records to Data you had to regenerate all the indices, which was
very time consuming.
The nice thing about the text Data was that grep(1) worked just fine, as did cut -d'|' and awk -F'|',
but pq was much faster with a large number of records.
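For instance, an unindexed equivalent of "pq dennis.ritchie" is a one-line
scan -- field positions as in the Proto above; this is a sketch of the idea,
without pq's indexed lseek(2) speed:

  awk -F'|' '$3 == "dennis" && $2 == "ritchie"' Data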
-Brian
Dan Cross <crossd at gmail.com> wrote:
> It seems that Andrew has addressed Daytona, but there was a small database
> package called `pq` that shipped with plan9 at one point that I believe
> started life on Unix. It was based on "flat" text files as the underlying
> data source, and one would describe relations internally using some
> mechanism (almost certainly another special file). An interesting feature
> was that it was "implicitly relational": you specified the data you wanted
> and it constructed and executed a query internally: no need to "JOIN"
> tables on attributes and so forth. I believe it supported indices that were
> created via a special command. I think it was used as the data source for
> the AT&T internal "POST" system. A big downside was that you could not add
> records to the database in real time.
>
> It was taken to Cibernet Inc (they did billing reconciliation for wireless
> carriers. That is, you have an AT&T phone but make a call that's picked up
> by T-Mobile's tower: T-Mobile lets you make the call but AT&T has to pay
> them for the service. I contracted for them for a short time when I got out
> of the Marine Corps---the first time) and enhanced and renamed "Eteron" and
> the record append issue was, I believe, solved. Sadly, I think that
> technology was lost when Cibernet was acquired. It was kind of cool.
>
> - Dan C.
>
Hello All.
Many of you may remember the AT&T UNIX PC and 3B1. These systems
were built by Convergent Technologies and sold by AT&T. They had an
MC68010 processor, up to 4 Meg RAM and up to 67 Meg disk. The OS
was System V Release 2 vintage. There was a built-in 1200 baud modem,
and a primitive windowing system with mouse.
I had a 3B1 as my first personal system and spent many happy hours writing
code and documentation on it.
There is an emulator for it that recently became pretty stable. The original
software floppy images are available as well. You can bring up a fairly
functional system without much difficulty.
The emulator is at https://github.com/philpem/freebee. You can install up
to two 175 Meg hard drives - a lot of space for the time.
The emulator's README.md there has links to lots of other interesting
3B1 bits, both installable software and Linux tools for exporting the
file system from disk image so it can be mounted under Linux and
importing it back. Included is an updated 'sysv' Linux kernel module
that can handle the byte-swapped file system.
I have made a pre-installed disk image available with a fair amount
of software, see https://www.skeeve.com/3b1/.
The emulator runs great under Linux; not so sure about MacOS or Windows. :-)
So, anyone wishing to journey back to 1987, have fun!
Arnold
FYI, interesting.
---------- Forwarded message ---------
From: Tom Van Vleck <thvv(a)multicians.org>
Date: Sun, Feb 14, 2021, 12:35 PM
Subject: Re: [multicians] History of C (with Multics reference)
To: <multicians(a)groups.io>
Remember the story that Ken Thompson had written a language called "Bon"
which was one of the forerunners of "B" which then led to "new B" and then
to "C"?
I just found Ken Thompson's "Bon Users Manual" dated Feb 1, 1969, as told
to M. D. McIlroy and R. Morris
in Jerry Saltzer's files online at MIT.
http://people.csail.mit.edu/saltzer/Multics/MHP-Saltzer-060508/filedrawers/…
Was thinking about our recent discussion about system call bloat and such.
Seemed to me that there was some argument that it was needed in order to
support modern needs. As I tried to say, I think that a good part of the
bloat stemmed from we-need-to-add-this-to-support-that thinking instead
of what's-the-best-way-to-extend-the-system-to-support-this-need thinking.
So if y'all are up for it, I'd like to have a discussion on what abstractions
would be appropriate in order to meet modern needs. Any takers?
Jon
I was lucky enough to actually have a chance to use wm at Carnegie Mellon before it was fully retired in favor of X11 on the systems in public clusters; it made a monochrome DECstation 3100 with 8MB much more livable.
When it was retired, it was still usable for a while because the CMU Computer Club maintained an enhanced version (wmc) that everyone had access to, and Club members got access to its sources.
Did anyone happen to preserve the wm or wmc codebase? There's some documentation in the papers that were published about the wm and Andrew API but no code.
-- Chris
I've been writing about unix design principles recently and tried
explaining "The Rule of Silence" by imagining unix as a restaurant
<http://codefaster.substack.com/p/rule-of-silence>. Do you agree with how I
presented it? Would you do it differently?
Tyler
All,
I'm tooling along during our newfangled rolling blackouts and frigid
temperatures (in Texas!) and reading some good old unix books. I keep
coming across the commands cut and paste and join and suchlike. I use
cut all the time for stuff like:
ls -l | tr -s ' '| cut -f1,4,9 -d \
...
-rw-r--r-- staff main.rs
and
who | grep wsenn | cut -c 1-8,10-17
wsenn console
wsenn ttys000
but that's just cuz it's convenient and useful.
To my knowledge, I've never used paste or join outside of initially
coming across them. But, they seem to 'fit' with cut. My question for
y'all is, was there a subset of related utilities that these were part
of that served some common purpose? On a related note, join seems like
part of an aborted (aka never fully realized) attempt at a text based
rdb to me...
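For what it's worth, here is a tiny demonstration of how the three compose,
using two hypothetical files keyed on field 1:

  $ cat logins                 # key and login
  1 wsenn
  2 dmr
  $ cat offices                # key and office
  1 fw-101
  2 mh-2c517
  $ join logins offices        # relational: match rows on the key field
  1 wsenn fw-101
  2 dmr mh-2c517
  $ cut -d' ' -f2 logins | paste - offices   # positional: glue lines side by side
  wsenn   1 fw-101
  dmr     2 mh-2c517

join is the genuinely relational one of the trio, which may be why it smells
like a fragment of an rdb; cut and paste are purely positional.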
What say you?
Will
Rich Morin <rdm(a)cfcl.com> wrote:
> PTF was inspired, in large part, by the volunteer work that produced the
> Sun User Group (SUG) tapes. Because most of the original volunteers had
> other fish to fry, I decided to broaden the focus and attempt a
> (somewhat) commercial venture. PTF, for better or worse, was the
> result.
>
> So, I should also relate some stories about running for and serving on
> the SUG board, hassling with AT&T and Sun's lawyers, assembling
> SUGtapes, etc. My copies of the SUGtapes are (probably) long gone, but
> John Gilmore (if nobody else :-) probably has the tapes and/or their
> included bits.
While I was involved, the Sun User Group made three tapes of freely
available software, in 1985, 1987, and 1989. The 1989 tape includes
both of the earlier ones, as well as new material.
Copies of both the 1987 tape and the 1989 tape are here:
http://www.toad.com/SunUserGroupTape-Rel-1987.1.0.tar.gz
http://www.toad.com/SunUserGroupTape-Rel-1989.tar
http://www.toad.com/
I'll have to do a bit more digging to turn up more than vague memories
about our dealings with the lawyers...
John