Ooo. Fun. We're talking PDP-10s on a Unix list... :-)
On 2017-02-27 16:13, Arthur Krewat <krewat(a)kilonet.net> wrote:
> In TOPS-10, you could detach from your current job, login again, and
> keep going. Then, attach to the previous job, and go back and forth
> endlessly.
Right. But that is a different thing. Each terminal session only has
one job. The fact that you can detach that, and log in as a new session,
is a different concept.
> As for keeping memory around, it was very common on TOPS-10 to put code
> in a "hiseg" that would stick around, and was shareable between "jobs".
Yes. Again, that is a different thing as well. Hisegs are more related
to shared memory.
I assume you know all this, so I'm not going to go into details.
But having the memory around for a program, even if it is not running,
is actually sometimes very useful. If ITS could handle that, while
treating them as separate processes, all associated with one terminal, and
let you select which one you were currently fooling around in, while the
others stayed around, that is something I don't think I've seen elsewhere.
> For something like EMACS, it would be very efficient to have the first
> person run it "compile" all the LISP, leave it in the hiseg, and other
> jobs can then run that code.
That would work, but it would then require that all other users be
suspended until the first user actually completes the initialization,
and after that, all the memory must be readonly.
> Not knowing anything about EMACS, I'm not sure that compiled code was
> actually shareable if it was customized, just thinking out loud.
You can certainly customize and save your own image. But the general
bootstrapping of Emacs consists of starting up the core system, and then
loading a whole bunch of modules and configurations. All that loading
and parsing of those files into data structures in memory is quite cpu
intensive.
Once all that processing is finished, you can start editing.
Each person essentially wants all that work done, no matter what they'd
like to do later. So, Emacs does it once, and then saves the state at
the point where you can start editing.
But it does not mean that the memory is shareable. It's full of various
data structures, and code, and that will change as you go along editing
things as well.
> But even without leveraging the hiseg capability, it was relatively easy
> to save an entire core image back to a .SAV or .LOW or later a .EXE. I
> don't remember how easy it was to do that programmatically, but it was
> easy from the terminal and if it saves a lot of processor time (and
> elapsed time) people would have been happy to do it manually.
Indeed. Like I said, Tops-10 has the same concept as Emacs does today.
But there it was essentially what you always did.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I ported GNU Emacs to the Celerity product line mostly because most of the programmers there wanted it over vi. Not me, I’m a vi guy.
I remember that GNU Emacs launched the first time and then dumped itself out as a core file. Each subsequent launch would then ‘undump’ itself back into memory. All this because launching emacs the first time required compiling all that lisp code.
Does anyone else remember this?
David
On 26 February 2017 at 12:28, Andy Kosela <andy.kosela(a)gmail.com> wrote:
[...]
> Are you sure it was emacs? Most probably it was pico, which was the default
> editor for pine. We used pine/pico for all email at our university in the
> 90's. It was wildly popular.
Ah well, I am not sure -- that betrayed my emacs bias. I saw ^X^C and
assumed emacs.
N.
> From: Deborah Scherrer
> On 2/25/17 11:25 AM, Cory Smelosky wrote:
>> MtXinu is something I really want.
> I worked there for 10 years (eventually becoming President). I'll try
> to dig up a tape.
Say what you will about RMS, but he really did change the world of software.
Most code (except for very specialized applications) just isn't worth much
anymore (because of competition from open source) - which is part of why all
these old code packages are now freely available.
Although I suppose the development of portability - which really took off with
C and Unix, although it had existed to some degree before that, q.v. the tools
collection in FORTRAN we just mentioned - was also a factor, it made it
possible to amortize code writing over a number of different types of
machines.
There were in theory portable languages beforehand (e.g. PL/1), but I think it
probably over-specified things - e.g. it would be impossible to port Multics
to another architecture without almost completely re-writing it from scratch,
the code is shot through with "fixed bin(18)"'s every other line...
Noel
On 26 February 2017 at 07:46, Michael Kjörling <michael(a)kjorling.se> wrote:
> On 26 Feb 2017 07:39 -0500, from jnc(a)mercury.lcs.mit.edu (Noel Chiappa):
>> I was never happy with the size of EMACS, and it had nothing to do with the
>> amount of memory resources used. That big a binary implies a very large amount
>> of source, and the more lines of code, the more places for bugs...
>
> But remember; without Emacs, we might never have had _The Cuckoo's
> Egg_. Imagine the terror of that loss.
Hhhmmm.... I must dig my copy out of storage because I do not remember
emacs in there.
As for emacs use, my wife was on (non-CS) staff at a local college
affiliated with U of T. At the time, DOS boxes sat on staff desks and
email was via a telnet connection to an SGI box somewhere on campus.
A BATch file connected and ran pine but shelled out to an external
editor. What was the editor? Well, I saw her composing a message
once and ending the editor session by ^X^C.
N.
Wasn't the default FS type S51K? Limitations like 14-character file
names only. No symbolic links?
>Date: Sun, 26 Feb 2017 11:13:25 -0500
>From: Arthur Krewat <krewat(a)kilonet.net>
>To: Cory Smelosky <b4(a)gewt.net>, Jason Stevens
> <jsteve(a)superglobalmegacorp.com>, tuhs(a)minnie.tuhs.org
>Subject: Re: [TUHS] SCO OpenDesktop 386 2.0.0
>Message-ID: <f5a1d513-3cc1-6a4d-64a3-669b49d7226f(a)kilonet.net>
>Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
>What filesystem type does it use for root/boot/whatever?
>
>Install operating system "X" that supports that filesystem type in the
>virtual guest, create a new disk, newfs/mkfs it, arrange the bits from
>the tape, take the newly-assembled disk and move to another VM and try
>to boot it.
>
>Not remembering anything about how SVR3.2 boots (I think that's what
>Opendesktop is?) that's the end of my help on the subject :)
Hey,
Does anyone have any of the floppies for OpenDesktop 2.0.0? Mine got
damaged in a dehumidifier failure before they got to California. The
only survivor was of all things...the QIC-24 tape (which I have read
fine)
sco-tape> tar tf file0 | more
./tmp/_lbl/prd=odtps/typ=u386/rel=2.0.0a/vol=1
Anyone know a good starting point for attempting to install it in to a
VM? ;)
--
Cory Smelosky
b4(a)gewt.net
> On 26 Feb 2017 07:39 -0500, from jnc(a)mercury.lcs.mit.edu (Noel Chiappa):
>> I was never happy with the size of EMACS, and it had nothing to do with
>> the amount of memory resources used. That big a binary implies a very
>> large amount of source, and the more lines of code, the more places for
>> bugs...
GNU Emacs 26.0.50 (GTK+ Version 3.22.8) of 2017-02-25 (Fedora 25, Kernel 4.9.11):
Virtual: 794.6
Resident: 36.8
> From: Joerg Schilling
> He is a person with a strong ego and this may have helped to spread
> Linux.
Well, I wasn't there, and I don't know much about the early Linux versus
UNIX-derivative contest, but from personal experience in a similar contest
(the TCP/IP versus ISO stack), I doubt such personal attributes had _that_
much weight in deciding the winner.
The maximum might have been that it enabled him to keep the Linux kernel
project unified and heading in one direction. Not inconsiderable, perhaps, if
there's confusion on the other side...
So there is a question here, though, and I'm curious to see what others who
were closer to the action think. Why _did_ Linux succeed, and not a Unix
derivative? (Is there any work which looks at this question? Some Linux
history? If not, there should be.)
It seems to me that the key battleground must have been the IBM PC-compatible
world - Linux is where it is now because of its success there. So why did
Linux succeed there?
Was it that it was open-source, and the competitor(s) all had licensing
issues? (I'm not saying they did, I just don't know.) Was it that Linux worked
better on that platform? (Again, don't know, only asking.) Perhaps there was
an early stage where it was the only good option for that platform, and that's
how it got going? Was it that there were too many Unix-derived alternatives,
so there was no clarity as to what the alternatives were?
Some combination of all of the above (perhaps with different ones playing a key
role at different points in time)?
Noel
All,
I'm dumping as much BSD/OS stuff as I can tonight. This includes: SPARC,
sources, and betas.
Unable to dump any floppies, however.
--
Cory Smelosky
b4(a)gewt.net
> Then one day a couple of them ‘fell off a truck’ and my Dad just happened to be there to pick them up and bring them home.
Wonderful story. It reminded me of the charming book, "Five-Finger Discount"
by Helene Stapinski, whose father brought home truckfall steaks.
Thanks for sharing the tale.
Doug
Is it worth putting a copy of this mailing list into the Unix Archive?
I don't want to dump the mbox in, as it has all our e-mail addresses:
spam etc. I could symlink in the monthly text archives, e.g.
http://minnie.tuhs.org/pipermail/tuhs/2016-December.txt.gz
What do you think? Perhaps in Documentation/TUHS_Mail?
Warren
It’s embarrassing to mention this, but I thought I’d share.
I’ve always wondered what on earth a TAHOE was, as they disappeared just about as quickly as they came out. As we all know, they were instrumental in separating out the VAX code from 4.3BSD in the official CSRG source. I was looking through old usenet stuff when I typed in something wrong, and came across people looking for GCC for the Tahoe running BSD. (http://altavista.superglobalmegacorp.com/usenet/b128/comp/sys/tahoe/79.txt)
In article <2287(a)trantor.harris-atd.com>, bbadger@x102c (Badger BA 64810) writes:
`We have a Harris HCX-9 computer, also known as a Tahoe, and we'd like to
`get gcc and g++ up and running. I haven't seen anything refering to
`the HCX or any Tahoe machines in the gcc distribution. Anyone have it?
`Working on it? Pointers to who might? Know if Berkely cc/ld/asm is PD?
Turns out they were using Harris mini’s called the HCX-9. That’s when I went back to the source and saw this:
#
# GENERIC POWER 6/32 (HCX9)
#
machine tahoe
cpu "TAHOE"
ident GENERIC
So if anyone else is wondering what a Tahoe was, did it exist, were there actual sales, are there pictures of it, etc., the answer is yes: it was a real machine, it was sold, and there are even print ads in Computerworld.
I thought it was interesting though.
Sent from Mail for Windows 10
Since the X86 discussions seem to have focused on BSD & Linux, I thought I
should offer another perspective.
TLDR: I worked on System V based UNIX on PCs from 1982 to 1993. IMO,
excessive royalties & the difficulty of providing support for diverse
hardware doomed (USL) UNIX on x86. It didn't help that SCO was entrenched in
the PC market and slow to adopt new UNIX versions.
Longer Summary:
From 1975-82 at IBM Research and the UT-Austin C.S. dept, I tried to get access
to UNIX but couldn't.
At IBM Austin from '82 to '89, I worked on AIX and was involved with IBM's
BSD for RT/PC.
Starting in '89, I was the executive responsible for Dell UNIX
(https://notes.technologists.com/notes/2008/01/10/a-brief-history-of-dell-un…)
for most of its existence.
The royalties Dell paid for SVR4 plus addons were hard to bear. Those
royalties were at least an order of magnitude greater than what we paid to
Microsoft.
We couldn't support all of the devices Dell supplied to customers, certainly
couldn't afford to support hardware only supplied by other PC vendors.
SCO had dominant marketplace success with Xenix and SVRx products, seemingly
primarily using PCs with multiport serial cards to enable traditional
timesharing applications. Many at Dell preferred that we emphasize SCO over
Dell SVR4.
When I joined my first Internet startup in 1996 and had to decide what OS to
use for hosting, I was pretty cognizant of all the options. I had no hands
on Linux experience but thought Linux the likely choice. A Linux advocate
friend recommended I choose between Debian and Red Hat. I chose Red Hat and
have mostly used Red Hat & Fedora for my *IX needs since then.
Today, Linux device support is comprehensive, but still not as complete as
with Windows. I installed Fedora 24 on some 9 and 15 year old machines last
week. The graphics hardware is nothing fancy, a low end NVIDIA card in the
older one, just what Intel supplied on their OEM circuit boards in the newer
one. Windows (XP/7/10) on those machines gets 1080p without downloading
extra drivers. (Without extra effort??) Fedora 24 won't do more than
1024x768 on one and 1280x1024 with the other.
Charlie
(somewhat long story)
After reading all the stories about how Unix source was protected and hard to access, I’ve got to say that my experience was a little different.
I was at UCSD from 76-80, when UCSD got a VAX, and I think it was running 32V at the time. Well, being a CS student didn’t get you access to that machine; it was for the grad students and others doing all the real work.
I became friends with the admin of the system (sdcsvax) and he mentioned one day that the thing he wanted more than anything else was more disks. He had a bunch of the removable disk packs and wanted a couple more to swap out to do things like change the OS quickly etc.
My dad worked for CDC at the time, and he was making removable media of the same type that the VAX was using. My luck. I asked him about getting a disk pack, or two. He said that these things cost thousands and he couldn’t just pick them up and bring them home.
Then one day a couple of them ‘fell off a truck’ and my Dad just happened to be there to pick them up and bring them home. You know, so the kids could see what he did for a job.
I took them into the lab and gave them to the admin, who looked at the disks, then at me, and asked what I wanted in exchange. I asked for a seat at the VAX, with full access.
Since then I’ve had a ucsd email account, and been a dyed in the wool Unix guy.
David
All, thanks to the hard effort of Noel Chiappa and Paul Ruizendaal,
we now have a couple of early Unix systems with networking modifications.
They can be found in:
http://www.tuhs.org/Archive/Distributions/Early_Networking/
I'll get them added to the Unix Tree soon.
Cheers, Warren
On Tue, Feb 21, 2017 at 9:25 PM, Steve Nickolas <usotsuki(a)buric.co> wrote:
> I started screwing around with Linux in the late 90s, and it would be many
> years before any sort of real Unix (of the AT&T variety), in any form, was
> readily available to me - that being Solaris when Sun started offering it
> for free download.
See my comment to Dan. I fear you may not have known where to look, or whom
to ask. As I asked Dan, were you not at a university at the time? Or were
you running a Sun or the like -- i.e. working with real UNIX, but working
for someone with a binary license, not sources from AT&T (and UCB)?
I really am curious, because I have heard this comment before and never
really understood it; the sources really were pretty much available
to anyone that asked. Most professionals and almost all
university students did have source access if they asked for it. That is
part of why AT&T lost the case. The trade secret was out, by definition.
They were required by the 1956 consent decree to make the trade secrets
available. A couple of my European university folks have answered that their
schools kept the sources really locked down. I believe you, but I never saw
that at places like Cambridge, Oxford, Edinburgh, Darmstadt or other places I
visited in those days in Europe. The same was true of CMU, MIT, UCB et al.
where I had been in the USA, so my experience was different.
The key is that, by definition, UNIX was available, and there were already
versions, from AT&T or not, "in the wild." You just needed to know where to
look and whom to ask. The truth is that the UCB/BSDi version of UNIX was
based on the AT&T trade secret, as was Linux, Minix, Coherent and all of
the other "clones" -- aka look-alikes -- and many of those sources were
pretty available too (just as Minix was to Linus, and 386BSD was to him
also, but he did not know where/whom to ask).
So a few years later, when the judge said these N files might be tainted
by AT&T IP but that nothing more could be claimed, the game was over. The
problem was that when the case started, techies (like me, and I'm guessing
Larry, Ron and other ex-BSD hackers that "switched") went to Linux and
started making it better, because we thought we were going to lose BSD.
The fact is, if we had lost BSD, we would legally have lost Linux too; but
we did not know that until after the dust settled. By that time, many
hackers had said it was good enough, and made it work for everyone.
As you and Dan have pointed out, many non-hackers did not know that UNIX
really was available, so they went with Linux because they thought they had
no other choice, when in fact you actually did; and that to me was the sad
part of the AT&T case.
A whole generation never knew, and by the time they did have a choice, a
few religions had begun and new wars could be fought.
Anyway - that's my thinking/answer to Noel's original question.
As to why Linux won over the PC/UNIX strains... I think we all agree that
one of the PC/UNIXes was going to be the winner; the question really is why
did Linux win, and not a BSD flavor?
Tonal languages are real fun. I'm living and working in Bangkok,
Thailand and, being slightly tone deaf, am still struggling.
Which reminds me, regarding binary there are 10 types of people, those
who understand and those who don't :-)
Cheers,
rudi
Noel:
Instead, you have to modify the arguments so that the re-tried call takes up
where it left off (in the example above, tries to read 5 characters, starting
5 bytes into the buffer). The hard part is that the return value (of the
number of characters actually read) has to count the 5 already read! Without
the proper design of the system call interface, this can be hard - how does
the system distinguish between the _first_ attempt at a system call (in which
the 'already done' count is 0), and a _later_ attempt? If the user passes in
the 'already done' count, it's pretty straightforward - otherwise, not so
much!
====
Sometime in the latter days of the Research system (somewhere
between when the 9/e and 10/e manuals were published), I had
an inspiration about that, and changed things as follows:
When a system call like read is interrupted by a signal:
-- If no characters have been copied into the user's
buffer yet, return -1 and set errno to EINTR (as would
always have been done in Heritage UNIX).
-- If some data has already been copied out, return the
number of characters copied.
So no data would be lost. Programs that wanted to keep
reading into the same buffer (presumably until a certain
terminator character is encountered or the buffer is full
or EOF) would have to loop, but a program that didn't loop
in that case was broken anyway: it probably wouldn't work
right were its input coming from a pipe or a network connection.
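Such a loop is the standard idiom for reading from pipes and sockets. Here is a minimal sketch in C (my illustration, not code from the Research system; the function name `readn` is my own):

```c
#include <errno.h>
#include <unistd.h>

/*
 * Read up to n bytes from fd, looping over partial reads.
 * Returns the number of bytes read (less than n only at EOF),
 * or -1 on error. A short count from read() -- whether caused
 * by a signal, a pipe, or a network connection -- just makes
 * the loop continue where it left off.
 */
ssize_t readn(int fd, void *buf, size_t n)
{
	char *p = buf;
	size_t left = n;

	while (left > 0) {
		ssize_t r = read(fd, p, left);
		if (r < 0) {
			if (errno == EINTR)	/* nothing copied yet: retry */
				continue;
			return -1;
		}
		if (r == 0)			/* EOF */
			break;
		p += r;				/* partial read: advance and loop */
		left -= r;
	}
	return n - left;
}
```

Under the semantics described above, a signal arriving mid-read simply produces a short count, and this loop absorbs it with no special casing and no data loss.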
I don't remember any programs breaking when I made that change,
but since it's approaching 30 years since I did it, I don't
know whether I can trust my memory. Others on this list may
have clearer memories.
All this was a reaction to the messy (both in semantics and
in implementation) compromise that had come from BSD, to
have separate `restart the system call' and `interrupt the
system call' states. I could see why they did it, but was
never satisfied with the result. If only I'd had my inspiration
some years earlier, when there was a chance of it getting out
into the real world and influencing POSIX and so on. Oh, well.
Norman Wilson
Toronto ON
>On Tue, 21 Feb 2017 19:08:33 -0800 Cory Smelosky wrote:
>
>>On Tue, Feb 21, 2017, at 17:22, Rudi Blom wrote:
>> Probably my (misplaced?) sense of humour, but I can't help it.
>>
>> After reading all comment I feel I have to mention I had a look at
>> freeDOS :-).
>>
>> Cheers,
>> rudi
>
>Do I need to pull out TOPS-10 filesystem code now, too? ;)
In 1967 I was 12, and probably had barely discovered science fiction
novels and computers.
Just quickly downloaded a TOPS-10 OS Commands Manual (from 1988), but
there's no mention of the Level-D filesystem.
Probably my (misplaced?) sense of humour, but I can't help it.
After reading all comment I feel I have to mention I had a look at freeDOS :-)
Cheers,
rudi
All, after getting your feedback, I've reorganised the Unix Archive at
http://www.tuhs.org/Archive/
I'm sure there will be some rough edges, let me know if there is anything
glaringly obvious.
I'd certainly like a few helpers to take over responsibility for specific
sections, e.g. UCB, DEC.
Cheers all, Warren
P.S. It will take a while for the mirrors to pick this up.
> 2) **Most** Operating systems do not support /dev/* based access to SCSI.
> This includes a POSIX certified system like Mac OS X.
>
> 3) **Most** Operating systems do not even support a file descriptor based
> interface to SCSI commands.
> This includes a POSIX certified system like Mac OS X.
Had Ken thought that way, Unix's universal byte-addressable file format
would never have happened; this mailing list would not exist; and we
all might still be fluent in dialects of JCL. dd was sufficient glue
to bridge the gap between Unix and **Most** Operating Systems.
Meanwhile everyday use of Unix was freed from the majority's folly.
Doug