> Then one day a couple of them ‘fell off a truck’ and my Dad just happened to be there to pick them up and bring them home.
Wonderful story. It reminded me of the charming book, "Five-Finger Discount"
by Helene Stapinski, whose father brought home truckfall steaks.
Thanks for sharing the tale.
Doug
Is it worth putting a copy of this mailing list into the Unix Archive?
I don't want to dump the mbox in, as it has all our e-mail addresses:
spam etc. I could symlink in the monthly text archives, e.g.
http://minnie.tuhs.org/pipermail/tuhs/2016-December.txt.gz
What do you think? Perhaps in Documentation/TUHS_Mail?
Warren
It’s embarrassing to mention this, but I thought I’d share.
I’ve always wondered what on earth a TAHOE was, as they disappeared just about as quickly as they came out. As we all know, they were instrumental in separating the VAX-dependent code out of 4.3BSD in the official CSRG source. I was looking through old usenet stuff when I typed in something wrong, and came across people looking for GCC for the Tahoe running BSD. (http://altavista.superglobalmegacorp.com/usenet/b128/comp/sys/tahoe/79.txt)
In article <2287(a)trantor.harris-atd.com>, bbadger@x102c (Badger BA 64810) writes:
`We have a Harris HCX-9 computer, also known as a Tahoe, and we'd like to
`get gcc and g++ up and running. I haven't seen anything refering to
`the HCX or any Tahoe machines in the gcc distribution. Anyone have it?
`Working on it? Pointers to who might? Know if Berkely cc/ld/asm is PD?
Turns out they were using Harris minis called the HCX-9. That’s when I went back to the source and saw this:
#
# GENERIC POWER 6/32 (HCX9)
#
machine tahoe
cpu "TAHOE"
ident GENERIC
So if anyone else is wondering what a Tahoe was, did it exist, were there actual sales, are there pictures of it, etc., the answer is yes: it was a real machine, it was sold, and there are even print ads in Computerworld.
I thought it was interesting though.
Sent from Mail for Windows 10
Since the X86 discussions seem to have focused on BSD & Linux, I thought I
should offer another perspective.
TLDR: I worked on System V based UNIX on PCs from 1982 to 1993. IMO,
excessive royalties & the difficulty of providing support for diverse
hardware doomed (USL) UNIX on x86. It didn't help that SCO was entrenched in
the PC market and slow to adopt new UNIX versions.
Longer Summary:
From 1975-82 at IBM Research and UT-Austin C.S. dept, I tried to get access
to UNIX but couldn't.
At IBM Austin from '82 to '89, I worked on AIX and was involved with IBM's
BSD for RT/PC.
Starting in '89, I was the executive responsible for Dell UNIX
(https://notes.technologists.com/notes/2008/01/10/a-brief-history-of-dell-un…)
for most of its existence.
The royalties Dell paid for SVR4 plus addons were hard to bear. Those
royalties were at least an order of magnitude greater than what we paid to
Microsoft.
We couldn't support all of the devices Dell supplied to customers, and we
certainly couldn't afford to support hardware supplied only by other PC vendors.
SCO had dominant marketplace success with Xenix and SVRx products, seemingly
primarily using PCs with multiport serial cards to enable traditional
timesharing applications. Many at Dell preferred that we emphasize SCO over
Dell SVR4.
When I joined my first Internet startup in 1996 and had to decide what OS to
use for hosting, I was pretty cognizant of all the options. I had no hands-on
Linux experience but thought Linux the likely choice. A Linux advocate
friend recommended I choose between Debian and Red Hat. I chose Red Hat and
have mostly used Red Hat & Fedora for my *IX needs since then.
Today, Linux device support is comprehensive, but still not as complete as
with Windows. I installed Fedora 24 on some 9 and 15 year old machines last
week. The graphics hardware is nothing fancy, a low end NVIDIA card in the
older one, just what Intel supplied on their OEM circuit boards in the newer
one. Windows (XP/7/10) on those machines gets 1080p without downloading
extra drivers. (Without extra effort??) Fedora 24 won't do more than
1024x768 on one and 1280x1024 with the other.
Charlie
(somewhat long story)
After reading all the stories about how Unix source was protected and hard to access, I’ve got to say that my experience was a little different.
I was at UCSD from 76-80 when UCSD got a VAX and I think it was running 32V at the time. Well being a CS student didn’t get you access to that machine, it was for the grad students and others doing all the real work.
I became friends with the admin of the system (sdcsvax) and he mentioned one day that the thing he wanted more than anything else was more disks. He had a bunch of the removable disk packs and wanted a couple more to swap out to do things like change the OS quickly etc.
My dad worked for CDC at the time, and he was making removable media of the same type that the VAX was using. My luck. I asked him about getting a disk pack, or two. He said that these things cost thousands and he couldn’t just pick them up and bring them home.
Then one day a couple of them ‘fell off a truck’ and my Dad just happened to be there to pick them up and bring them home. You know, so the kids could see what he did for a job.
I took them into the lab and gave them to the admin, who looked at the disks, then at me, and asked what I wanted in exchange. I asked for a seat at the VAX, with full access.
Since then I’ve had a ucsd email account, and been a dyed in the wool Unix guy.
David
All, thanks to the hard effort of Noel Chiappa and Paul Ruizendaal,
we now have a couple of early Unix systems with networking modifications.
They can be found in:
http://www.tuhs.org/Archive/Distributions/Early_Networking/
I'll get them added to the Unix Tree soon.
Cheers, Warren
On Tue, Feb 21, 2017 at 9:25 PM, Steve Nickolas <usotsuki(a)buric.co> wrote:
> I started screwing around with Linux in the late 90s, and it would be many
> years before any sort of real Unix (of the AT&T variety), in any form, was
> readily available to me - that being Solaris when Sun started offering it
> for free download.
See my comment to Dan. I fear you may not have known where to look, or whom
to ask. As I asked Dan, were you not at a university at the time? Or were
you running a Sun or the like -- i.e. working with real UNIX, but working
for someone with a binary license, not sources from AT&T (and UCB)?
I really am curious, because I have heard this comment before and never
really understood it: the sources really were pretty much available
to anyone that asked. Most professionals and almost any/all
university students did have source access if they asked for it. That is
part of why AT&T lost the case. The trade secret was out, by definition.
AT&T was required by the 1956 consent decree to make the trade secrets
available. A couple of my European university folks have answered that their
schools kept the sources really locked down. I believe you, but I never saw
that at places like Cambridge, Oxford, Edinburgh, Darmstadt or other places I
visited in those days in Europe. The same was true of CMU, MIT, UCB et al.
where I had been in the USA, so my experience was different.
The key is that, by definition, UNIX was available, and there were already
versions, from AT&T or not, "in the wild." You just needed to know where to
look and whom to ask. The truth is that the UCB/BSDi version of UNIX was
based on the AT&T trade secret, as were Linux, Minix, Coherent and all of
the other "clones" -- aka look-alikes -- and many of those sources were
pretty available too (just as Minix was to Linus, and 386BSD was to him also,
but he did not know where/whom to ask).
So a few years later, when the judge said these N files might be tainted
by AT&T IP but that AT&T couldn't claim anything more, the game was over. The
problem was that when the case started, techies (like me, and I'm guessing
Larry, Ron and other ex-BSD hackers that "switched") went to Linux and
started making it better, because we thought we were going to lose BSD.
The fact is, if we had lost BSD, we legally would have lost Linux too; but we
did not know that until after the dust settled. By that time, many
hackers had said it was good enough and made it work for everyone.
As you and Dan have pointed out, many non-hackers did not know that UNIX
really was available, so they went with Linux because they thought they had
no other choice, when in fact they actually did, and that to me was the sad
part of the AT&T case.
A whole generation never knew, and by the time they did have a choice, a
few religions had begun and new wars could be fought.
Anyway - that's my thinking/answer to Noel's original question
of why Linux won out over the PC/UNIX strains... I think we all agree that
one of the PC/UNIX variants was going to be the winner; the question really
is why Linux and not a BSD flavor?
Tonal languages are real fun. I'm living and working in Bangkok,
Thailand, and being slightly tone deaf I am still struggling.
Which reminds me, regarding binary there are 10 types of people, those
who understand and those who don't :-)
Cheers,
rudi
Noel:
Instead, you have to modify the arguments so that the re-tried call takes up
where it left off - in the example above, tries to read 5 characters, starting
5 bytes into the buffer). The hard part is that the return value (of the
number of characters actually read) has to count the 5 already read! Without
the proper design of the system call interface, this can be hard - how does
the system distinguish between the _first_ attempt at a system call (in which
the 'already done' count is 0), and a _later_ attempt? If the user passes in
the 'already done' count, it's pretty straightforward - otherwise, not so
much!
====
Sometime in the latter days of the Research system (somewhere
between when the 9/e and 10/e manuals were published), I had
an inspiration about that, and changed things as follows:
When a system call like read is interrupted by a signal:
-- If no characters have been copied into the user's
buffer yet, return -1 and set errno to EINTR (as would
always have been done in Heritage UNIX).
-- If some data has already been copied out, return the
number of characters copied.
So no data would be lost. Programs that wanted to keep
reading into the same buffer (presumably until a certain
terminator character is encountered or the buffer is full
or EOF) would have to loop, but a program that didn't loop
in that case was broken anyway: it probably wouldn't work
right were its input coming from a pipe or a network connection.
I don't remember any programs breaking when I made that change,
but since it's approaching 30 years since I did it, I don't
know whether I can trust my memory. Others on this list may
have clearer memories.
All this was a reaction to the messy (both in semantics and
in implementation) compromise that had come from BSD, to
have separate `restart the system call' and `interrupt the
system call' states. I could see why they did it, but was
never satisfied with the result. If only I'd had my inspiration
some years earlier, when there was a chance of it getting out
into the real world and influencing POSIX and so on. Oh, well.
Norman Wilson
Toronto ON
>On Tue, 21 Feb 2017 19:08:33 -0800 Cory Smelosky wrote:
>
>>On Tue, Feb 21, 2017, at 17:22, Rudi Blom wrote:
>> Probably my (misplaced?) sense of humour, but I can't help it.
>>
>> After reading all comment I feel I have to mention I had a look at
>> freeDOS :-).
>>
>> Cheers,
>> rudi
>
>Do I need to pull out TOPS-10 filesystem code now, too? ;)
In 1967 I was 12 and had probably barely discovered science fiction
novels and computers.
I just quickly downloaded a TOPS-10 OS Commands Manual (from 1988), but
found no mention of the Level-D filesystem.
Probably my (misplaced?) sense of humour, but I can't help it.
After reading all comment I feel I have to mention I had a look at freeDOS :-)
Cheers,
rudi
All, after getting your feedback, I've reorganised the Unix Archive at
http://www.tuhs.org/Archive/
I'm sure there will be some rough edges, let me know if there is anything
glaringly obvious.
I'd certainly like a few helpers to take over responsibility for specific
sections, e.g. UCB, DEC.
Cheers all, Warren
P.S. It will take a while for the mirrors to pick this up.
> 2) **Most** Operating systems do not support /dev/* based access to SCSI.
> This includes a POSIX certified system like Mac OS X.
>
> 3) **Most** Operating systems do not even support a file descriptor based
> interface to SCSI commands.
> This includes a POSIX certified system like Mac OS X.
Had Ken thought that way, Unix's universal byte-addressable file format
would never have happened; this mailing list would not exist; and we
all might still be fluent in dialects of JCL. dd was sufficient glue
to bridge the gap between Unix and **Most** Operating Systems.
Meanwhile everyday use of Unix was freed from the majority's folly.
Doug
> From: Larry McVoy
> The DOS file system, while stupid, was very robust in the face of
> crashes
I'm not sure it's so much the file system (in the sense of the on-disk
format), as how the system _used_ it (although I suppose one could consider
that part of the FS too).
The duplicated FAT, and the way file sectors are linked using it, is I suppose
a little more robust than other designs (e.g. the way early Unixes did it,
with indirect blocks and free lists), but I think a lot of it must have been
that DOS wrote stuff out quickly (as opposed to e.g. the delayed writes on
early Unix FS's, etc). That probably appoximated the write-ordering of more
designed-robust FS's.
Noel
> From: Diomidis Spinelli
> Arguably, the same can also be claimed for the networking system calls.
Well, it depends on exactly what you mean by "networking system calls". If
you mean networking a la BSD, perhaps.
However, I can state (from personal experience :-) that the I/O architecture
circa V6/V7 was not very suitable for TCP/IP internetworking (with its
emphasis on an un-reliable network, and smart endpoints). The reason is that
such networking doesn't really fit well into the 'start one I/O operation and
then block the process until it completes' model.
Yes, if you have an application running on top of a reliable stream, you
might be able to coerce that into the 'uni-directional, blocking' I/O model
(if the reliable stream implementation is in, or routed through, the kernel),
but lots of other things don't work so well. (Think, e.g., of an interface
with asynchronous, un-predictable RPC calls in both directions.)
Noel
> Linus had the qualities of being a good programmer, a good architect,
> and a good manager. I've never seen all 3 in a person before or since.
No comment about Linus, but Vic Vyssotsky is my pick for the title.
He created the first dataflow language (in 1960!). He invented
bit-parallel flow analysis and put it into Fortran 2 years later.
He was one of the technical triumvirs for Multics. Ran several
big development groups at Bell Labs, and was 2 levels up from
the Unix team in Research. I could go on and on. What he
didn't do was publish; he got ahead on pure innate ability
and brilliant insight--a profound influence on almost all
the original Unix crowd.
Doug
I don't know if it's worth even trying to find and mirror pre-1993 (i.e. when cheap CD-ROM mastering became possible) GNU software.
Things like binutils, gas, and GCC can be tremendously useful, along with binaries for long-"dead" platforms.
I know that I've always been super thankful to the GNAT people for having some pre-compiled versions of the Ada translator, which would also include GCC. Sometimes having some kind of native toolset is a big positive when you don't have anything, especially with earlier versions that have issues with cross or Canadian-cross compiling.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
OMG. I don't know how many times I've consulted the Unix
Tree and blissfully ignored the cross-links that come at
the top of every file--I'm so intent on the content.
Apologies for cluttering the mailing list about a solved topic.
Doug
>Date: Sun, 19 Feb 2017 20:58:59 -0500
>From: Clem Cole <clemc(a)ccc.com>
>To: Nick Downing <downing.nick(a)gmail.com>
>Cc: Jason Stevens <jsteve(a)superglobalmegacorp.com>,
> "tuhs(a)minnie.tuhs.org" <tuhs(a)minnie.tuhs.org>
>Subject: Re: [TUHS] Mach for i386 / Mt Xinu or other
>Message-ID: <CAC20D2NM_oyDz0tAM2o5_vJ8Ky_3fHoAmPHn8+DOqNwKoMyqfQ(a)mail.gmail.com>
>
>On Sun, Feb 19, 2017 at 7:29 PM, Nick Downing <downing.nick(a)gmail.com> wrote:
...
>Anyway, Tru64 is based on OSF/1 but also has a lot of DEC proprietary
>things (like TruClusters and anything Alpha-specific) that go beyond the
>base OSF license, so you need HP clearance before any of that can be
>made available [the same is true for HP/UX of course]. To my knowledge,
>DEC/Compaq/HP never released the sources to Tru64 (or HP/UX) to the world
>the way Sun did for Solaris, which in the case of Tru64 is sort of a shame.
>There is some very good stuff in there, like the file systems, the lock
>managers, cluster scaling, messaging, etc., which would be nice to compare
>to today's solutions. Since HP did have a bought-out AT&T license, they
>clearly could have done so, but I do not think anyone is left there to make
>that decision - sigh.
As far as I know, only the Tru64 Advanced File System (aka AdvFS) has
been released to the open-source community, in 2008. Its current status is
unknown (to me).
See also
. http://advfs.sourceforge.net
. https://www.cyberciti.biz/tips/download-tru64-unix-advanced-filesystem-advf…
Cheers,
rudi
Wow, that'd be incredible!
I'd love to see how Mach 2.5/4.3BSD compared to the Mach 3.0/Lites 1.1 that
is as close as I've been able to find... I know about the NeXT stuff, as I
have NS 3.3 installed, although running it on 'white' hardware gets harder
and harder as PCs get newer: the IDE controllers are just too feature-ful,
and too new, for NS to deal with, and beyond that it can only use 2GB disks
properly. Obviously, with no source or any way to get in to write drivers or
update the FFS on NeXTSTEP, it's basically stuck on those P1-era machines, or
in emulation. There is even a previous 68030/68040 cube-based emulator for
running all the 'native' versions.
Archive what you can; I can only contribute minor things I stumble upon,
mostly by accident.
> ----------
> From: Atindra Chaturvedi
> Reply To: Atindra Chaturvedi
> Sent: Friday, February 17, 2017 11:47 PM
> To: jsteve(a)superglobalmegacorp.com; tuhs(a)minnie.tuhs.org
> Subject: Re: [TUHS] Mach for i386 / Mt Xinu or other
>
> Amazing - brings back memories. I was a Unix "enterprise IT user" not a
> "kernel developer guru" back in the day working at a pharmaceutical
> company and was responsible for moving the company off IBM 3090 and SNA to
> Unix and TCP/IP.
>
> Used to buy the new Unix-like releases as they were available to stay
> current - including the Mt. Xinu Mach 386 distro. I still have it and will
> happily send it to the archives - if I can be guided a bit.
>
> Ran the Mt. Xinu for many years as my home machine - it is pre-SCSI for
> booting ( needs ESDI disks ) but was very stable. So will need tweaking to
> boot/install.
>
> Happy to have worked in the mid-70 - 80's era when there were huge changes
> in computer hardware and software technology. I have my books and the
> software for all the cool stuff as it came out in those days - some day I
> will compile it and send it to where it can be better used or archived as
> history.
>
> Atindra.
>
>
>
> -----Original Message-----
> From: jsteve(a)superglobalmegacorp.com
> Sent: Feb 17, 2017 6:30 AM
> To: "tuhs(a)minnie.tuhs.org"
> Subject: [TUHS] Mach for i386 / Mt Xinu or other
>
>
>
> While testing a crazy project I wanted to get working I came across
> this ancient link:
>
>
>
>
> http://altavista.superglobalmegacorp.com/usenet/b182/comp/os/mach/542.txt
>
>
>
> --------8<--------8<--------8<--------8<
>
>
>
> Newsgroups: comp.os.mach
>
> Subject: Mach for i386 - want to beta?
>
> Message-ID: <1364(a)mtxinu.UUCP>
>
> Date: 2 Oct 90 17:12:19 GMT
>
> Reply-To: scherrer(a)mtxinu.COM (Deborah Scherrer)
>
> Organization: mt Xinu, Berkeley
>
> Lines: 24
>
>
>
> Mt Xinu is currently finishing up its release of 2.6 MSD for the
> i386.
>
> 2.6 MSD is a CMU-funded standard distribution of the Mach kernel,
>
> release-engineered with the following:
>
> 2.5 Mach kernel, with NFS & BSD-tahoe enhancements
>
> Transarc's AFS
>
> X11R4
>
> most of the 4.3-tahoe BSD release
>
> Andrew Tool Kit
>
> Camelot transaction processing system
>
> Cornell's ISIS distributed programming environment
>
> most of the FSF utilities
>
> a few other nifty things
>
>
>
> --------8<--------8<--------8<--------8<
>
>
>
> Was any of this stuff ever saved? I know on the CSRG CD there is
> some buried source for Mach 2.5, although I haven't seen anything on where
> to even start to compile it, or even how to boot it... I know Mach is
> certainly not fast, nor all that 'small' but it'd be interesting to see a
> 4.3BSD on a PC!
>
>
> That's what the Unix Tree is for!
Yes, but it doesn't have cross-links as far as I know.
What I have in mind is effectively one more entry in
the root. Call it "union" perhaps. In a leaf of that
tree, say /union/usr/src/cmd/find, will be a page that
links to all the "find" sources in the other systems.
I don't know the range of topologies in the Unix Tree.
For example, some systems may have /src while others
have /usr/src. That could be hidden completely by
simply not revealing the path names. Alternatively
every level in the union tree could record its cousins
in the various systems, as well as its children
in the union system.
Doug
If things are filed by provenance, one useful kind of
cross-linking would be a generic tree whose "leaves"
link to all the versions of the "same" file. All the
better if it could also indicate the degree of
relatedness of the versions--perhaps an inferred
evolutionary tree or a shaded grid, where the
intensity of grid point x,y shows the relatedness
of x and y.
doug
Hi all, I think the current layout of the Unix Archive at
http://www.tuhs.org/Archive/ is starting to show its limitations as we get
more systems and artifacts that are not specifically PDP-11 and Vax.
I'm after some suggestions on how to reorganise the archive. Obviously
there are many ways to do this. Just off the top of my head, top level:
- Applications: things which run at user level
- Systems: things which have a kernel
- Documentation
- Tools: tools which can be used to deal with systems and files
Under Applications, several directories which hold specific things. If
useful, perhaps directories that collect similar things.
Under Systems, a set of directories for specific organisations (e.g. Research,
USL, BSD, Sun, DEC etc.). In each of these, directories for each system.
Under Documentation, several directories which hold specific things. If
useful, perhaps directories that collect similar things.
Under Tools, subdirectories for Disk, Tape, Emulators etc., then subdirs
for the specific tools.
Does this sound OK? Any refinements or alternate suggestions?
Cheers, Warren