> From: Larry McVoy <lm(a)mcvoy.com>
I am a completely non-LISP person (I think my brain was wired in C before C
existed :-), but...
> Nobody has written a serious operating system
Well, the LISP Machine OS was written entirely in LISP. Dunno if you call that
a 'serious OS', but it was a significantly more capable OS than, say,
DOS. (OK, there was a lot of microcode that did a lot of the low-level stuff,
but...)
> or a serious $BIG_PROJECT in Lisp.
Have you ever seen a set of Symbolics manuals? Sylph-like, it wasn't!
> Not one that has been commercially successful, so far as I know.
It's true that Symbolics _eventually_ crashed, but I think the biggest factor
there was that commodity microprocessors (e.g. Pentium) were getting faster so
much more quickly than Symbolics' custom LISP hardware that the whole rationale
for Symbolics (custom hardware to run LISP fast) went away. They still exist as
a software company selling their coding environment, FWIW.
> C performs far better even though it is, in the eyes of lisp people, far
> more awkward to do things.
I think it depends on what you're doing. For some kinds of things, LISP is
probably better.
I mean, for most of the kind of things I do, I think C is the bees' knees
(well, except I had to add conditions and condition handlers when I went to
write a compiler in it), but for some of the AI projects I know a little
about, LISP seems (from a distance, admittedly) to be a better match.
Noel
On Feb 15, 2018, Ian Zimmerman <itz(a)very.loosely.org> wrote:
>>
>> So, how's this relevant to Unix? Well, I'd like to know more about the
>> historical interplay between the Unix and Lisp communities. What about
>> the Lisp done at Berkeley on the VAX (Franz Lisp).
>
> I know one of the Franz founders, I'll ask him when I have a chance.
There is some information about Franz Lisp and its origins here:
http://www.softwarepreservation.org/projects/LISP/maclisp_family/#Franz_Lis…
(And lots more information about many other varieties of Lisp at the same web site.)
On Sat, Feb 3, 2018 at 5:59 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Sat, 3 Feb 2018, Arthur Krewat wrote:
>
>> I would imagine that Windows wouldn't be what it is today without UNIX.
>> Matter of fact, Windows NT (which is what Windows has been based on since
>> Windows ME went away) is really DEC's VMS underneath the covers at least to
>> a small extent.
>>
>
> I thought that NT has a POSIX-y kernel, which is why it was so reliable?
> Or was VMS a POSIX-like system? I only used it for a couple of years in
> the early 80s (up to 4.0, I think), and never dug inside it; to me, it was
> just RSX-11/RSTS-11 on steroids.
The design of the original NT kernel was overseen by Dave Cutler, of VMS
and RSX-11M fame, and had a very strong and apparent VMS influence. Some
VAX wizards I know told me that they saw a lot of VMS in NT's design, but
that it probably wasn't as good (different design goals, etc: apparently
Gates wanted DOS++ and a quick time to market; Cutler wanted to do a *real*
OS and they compromised to wind up with VMS--).
It's true that there was (is? I don't know anymore...) a POSIX subsystem,
but that seemed more oriented at being a marketing check in the box for
sales to the US government and DoD (which had "standardized" on POSIX and
made it a requirement when investing in new systems).
Nowadays, I understand that one can run Linux binaries natively; the
Linux-compatibility subsystem will even `apt-get install` dependencies for
you. Satya Nadella's company isn't your father's Microsoft anymore. VSCode
(their new snazzy editor that apparently all the kids love) is Open Source.
Note that there is some irony in the NT/POSIX thing: the US Government
standardized on Windows about two decades ago and now can't seem to figure
out how to get off of it.
A short story I can't resist telling: a couple of years ago, some folks
tried to recruit me back into the Marine Corps in some kind of technical
capacity. I asked if I'd be doing, you know, technical stuff and was told
that, since I was an officer no, I wouldn't. Not really interested. I ended
up going to a bar with a recon operator (Marine special operations) to get
the straight scoop and talking to a light colonel (that's a Lieutenant
Colonel) on the phone for an hour for the hard sell. Over a beer, the recon
bubba basically said, "It was weird. I went back to the infantry." The
colonel kept asking me why I didn't run Windows: "but it's the most popular
operating system in the world!" Actually, I suspect Linux and BSD in the
guise of iOS/macOS is running on a lot more devices than Windows at this
point. I didn't bother pointing that out to him.
>> Would VMS become what it was without UNIX's influence? Would UNIX become
>> what it later was without VMS?
>>
>> Would UNIX exist, or even be close to what it became without DEC?
>>
>
> I've oft wondered that, but we have to use a new thread to avoid
> embarrassing Ken :-)
>
The speculation of, "what would have happened?" is interesting, though of
course unanswerable. I suspect that had it not been for Unix, we'd all be
running software that was closer to what you'd find on a mainframe or RT-11.
- Dan C.
> Already 20 years ago I met a guy (master's degree, university) who never freed dynamically allocated memory. He told me he was 'instantiating an object', but had no idea what a heap is, or what dynamically allocated memory means.
Years ago, I had a new programmer whom I just couldn't teach. He never understood the difference between an array and a pointer, and apparently couldn't be bothered to learn.
After stringing him along for three months, I was on my way into his office to fire him when I found out he had quit, but not before he had checked a bunch of dreck into our source code control system.
I thought I backed all his commits out at the time.
Years later, I was running "purify" on our product looking for memory leaks, and found that a small utility function predating the source code control system was leaking. That struck me as odd, since it had been there FOREVER and was well tested. I brought up the source code system and checked it anyhow, and found that the aforementioned programmer had checked in exactly one change: he had deleted the "free" call in it.
I KNOW what happened. He did something else to corrupt the malloc heap in his code, and that often causes a core dump in a subsequent malloc/free call. Apparently this was the place it bit him, so he just deleted the free call there.
So, in:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s2/mv.c
what's the point of this piece of code:
    p = place;
    p1 = p;
    while(*p++ = *argp3++);
    p2 = p;
    while(*p++ = *argp4++);
    execl("/bin/cp","cp", p1, p2, 0);
I mean, I get that it's copying the two strings pointed to by 'argp3' and
'argp4' into a temporary buffer at 'place', and leaving 'p1' and 'p2' as
pointers to the copies of said strings, but... why is it doing that?
I at first thought that maybe the execl() call was smashing the stack (and
thus the copies pointed to by 'argp3' and 'argp4'), or something, but I don't
think it does that. So why couldn't the code have just been:
    execl("/bin/cp","cp", argp3, argp4, 0);
Is this code maybe just a left-over from some previous variant?
Noel
> From: Dave Horsfall <dave(a)horsfall.org>
> I'd like to see it handle "JSR PC,@(SP)+"...
Heh!
But that does point out that the general concept is kind of confused - at
least, if you hope to get a fully working program out the far end. The only
way to do that is build (effectively) a simulator, one that _exactly_
re-creates the effects on the memory and registers of the original program.
Only instead of reading binary machine code, this one's going to read in the
machine language source, and produce a custom simulator, one that can run only
one program - the one fed into it.
Think of it as a 'simulator compiler'! :-)
Noel
>> I was wondering what it would take to convert the v6/v7 basic program
>> into something that can be run today.
>
> Hmmm... If it were C-generated then it would be (somewhat) easy, but it's
> hand-written and hand-optimised... You'd have to do some functional
> analysis on it e.g. what does this routine do, etc.
>
>> Its 2128 lines. It doesn't have that fun instruction in it :)
>
> I know! Say ~2,000 lines, say ~100 people on this list, distributed
> computing to the rescue! That's only 20 lines each, so it ought to be a
> piece of cake :-)
I'm up for that! However, only if the resulting C program can be compiled/run
on a V6/PDP11 again.
Let's assume that reverse engineering a subroutine of 20 lines takes
an hour. That then makes for 100 hours. If 10 people participate and
contribute one hour/routine per week, it will be done by May.
However, the initial analysis of the code architecture is a (time) hurdle.
Paul
PS: the FORTRAN 66 of V6 is also assembler-only...
IMO:
1) It kinda did catch on, in the form of macOS, but there was a time
when it was nearly dead as the major vendors moved to System V. For
some reason, Sun was the last major vendor to make the move, but they
caught most of the flak.
2) I think the main reason BSD nearly died, was the AT&T lawsuit. At
the time, Linux appeared to be a safer bet legally.
3) Linux got a reputation as an OS you had to be an expert to install,
so lots of people started installing it to "prove themselves".
This was sort of true back when Linux came as 2 floppy images, but
didn't remain true for very long.
4) I believe the SCO lawsuit "against Linux" was too little, too late
to kill Linux's first mover advantage in the opensource *ix
department.
5) I think FreeBSD's ports and similar huge-source-tree approaches
didn't work out as well as Linux developers contributing their changes
upstream.