> From: Larry McVoy <lm(a)mcvoy.com>
> I don't know all the details but lisp and performance is not a thing.
This isn't really about Unix, but I hate to see inaccuracies go into
archives...
You might want to read:
http://multicians.org/lcp.html
Of course, when it comes to the speed/efficiency of the compiled code, much
depends on the program/programmer. If one uses CONS wildly, there will have to
be garbage collection, which is of course not fast. But with code written to
stay away from expensive constructs, my understanding is that 'lcp' and
NCOMPLR produced pretty amazing object code.
Noel
Actually, Algol 60 did allow functions and procedures as arguments (with correct static scoping), but not as results, so they weren’t “first class” in the Scheme sense. The Algol 60 report (along with its predecessor and successor) is available, among other places, here:
http://www.softwarepreservation.org/projects/ALGOL/standards/
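The distinction is easy to see from C, which has the same restriction for much the same reason: you can pass a function down, but you can't return one that needs its defining environment, because there are no closures. A minimal sketch (names all mine, purely illustrative):

    #include <stdio.h>

    /* Passing a procedure as an argument -- a "downward funarg" --
       works in C with plain function pointers, much as it did in
       Algol 60. */
    static int inc(int x) { return x + 1; }

    static int twice(int (*f)(int), int x) { return f(f(x)); }

    /* Returning a procedure is where both languages stop short of
       Scheme: C can return a bare function pointer, but it cannot
       capture 'n' -- the environment a returned function would need
       is gone with the stack frame. */
    static int (*make_adder(int n))(int) {
        (void)n;        /* nothing useful can be done with it here */
        return inc;     /* the best C can manage without closures */
    }

    int main(void) {
        printf("%d\n", twice(inc, 3));      /* 5 */
        printf("%d\n", make_adder(10)(3));  /* 4, not the hoped-for 13 */
        return 0;
    }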
On Feb 16, 2018, Bakul Shah <bakul(a)bitblocks.com> wrote:
> They did lexical scoping "right", no doubt. But back when
> Landin first found that lambda calculus was useful for
> modeling programming languages these concepts were not clearly
> understood. I do not recall reading anything about whether
> Algol designers not allowing full lexical scoping was due to an
> oversight or realizing that efficient implementation of
> functional arguments was not possible. Maybe Algol's call by
> name was deemed sufficient? At any rate, Algol's not having
> full lexical scoping does not mean one can simply reject the
> idea of being influenced by it. Often at the start there is
> lots of fumbling before people get it right. Maybe someone
> should ask Steele?
Clueless or careless?
A customer program worked for many years till one of the transaction
messages had a few bytes added.
Looking into it, I discovered that the program had only worked because
the receive buffer happened to be followed by another buffer, which was
used in a later sequence. Only when that second buffer also overflowed
did some critical integers get overwritten; they were then used as
indexes into tables, which gave a lot of fun.
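For anyone who wants to see the failure mode in miniature, here is a sketch; the layout and names are invented, but the mechanism is the one described above:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical layout, much simplified: the receive buffer happens
       to be followed by a second buffer used later, so a modest overrun
       lands in memory nobody has read yet -- and the program "works". */
    struct state {
        char rcv_buf[64];     /* transaction message lands here */
        char later_buf[64];   /* used in a later sequence */
        int  table_index;     /* the critical integer */
    };

    static void receive(struct state *s, const char *msg, size_t len) {
        memcpy(s->rcv_buf, msg, len);   /* no length check: fine until
                                           len > 128, then table_index
                                           gets overwritten */
    }

    int main(void) {
        struct state s = { .table_index = 7 };
        char big[160];
        memset(big, 'x', sizeof big);   /* the message, a few bytes bigger */
        receive(&s, big, sizeof big);   /* overruns both buffers */
        printf("table_index is now %d\n", s.table_index);  /* garbage */
        return 0;
    }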
Well, as all here know, C is fun :-)
> From: Larry McVoy <lm(a)mcvoy.com>
I am a completely non-LISP person (I think my brain was wired in C before C
existed :-), but...
> Nobody has written a serious operating system
Well, the LISP Machine OS was written entirely in LISP. Dunno if you call that
a 'serious OS', but it was a significantly more capable OS than, say,
DOS. (OK, there was a lot of microcode that did a lot of the low-level stuff,
but...)
> or a serious $BIG_PROJECT in Lisp.
Have you ever seen a set of Symbolics manuals? Sylph-like, it wasn't!
> Not one that has been commercially successful, so far as I know.
It's true that Symbolics _eventually_ crashed, but I think the biggest factor
there was that commodity microprocessors (e.g. Pentium) improved so much
faster than Symbolics' custom LISP hardware that the whole rationale for
Symbolics (custom hardware to run LISP fast) went away. They still exist as a
software company selling their coding environment, FWIW.
> C performs far better even though it is, in the eyes of lisp people, far
> more awkward to do things.
I think it depends on what you're doing. For some kinds of things, LISP is
probably better.
I mean, for most of the kind of things I do, I think C is the bee's knees
(well, except I had to add conditions and condition handlers when I went to
write a compiler in it), but for some of the AI projects I know a little
about, LISP seems (from a distance, admittedly) to be a better match.
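For the curious, the usual way to bolt conditions and handlers onto C is setjmp/longjmp; a minimal sketch of the sort of scaffolding involved (all names invented):

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf handler;          /* the innermost condition handler */

    static void raise_condition(int code) {
        longjmp(handler, code);      /* unwind to whoever set 'handler' */
    }

    static void compile_expr(int broken) {
        if (broken)
            raise_condition(42);     /* signal instead of returning */
        printf("compiled fine\n");
    }

    int main(void) {
        switch (setjmp(handler)) {
        case 0:                      /* normal path */
            compile_expr(1);
            return 0;
        default:                     /* any condition lands here */
            printf("caught a condition\n");
            return 1;
        }
    }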
Noel
On Feb 15, 2018, Ian Zimmerman <itz(a)very.loosely.org> wrote:
>>
>> So, how's this relevant to Unix? Well, I'd like to know more about the
>> historical interplay between the Unix and Lisp communities. What about
>> the Lisp done at Berkeley on the VAX (Franz Lisp).
>
> I know one of the Franz founders, I'll ask him when I have a chance.
There is some information about Franz Lisp and its origins here:
http://www.softwarepreservation.org/projects/LISP/maclisp_family/#Franz_Lis…
(And lots more information about many other varieties of Lisp at the same web site.)
On Sat, Feb 3, 2018 at 5:59 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Sat, 3 Feb 2018, Arthur Krewat wrote:
>
>> I would imagine that Windows wouldn't be what it is today without UNIX.
>> Matter of fact, Windows NT (which is what Windows has been based on since
>> Windows ME went away) is really DEC's VMS underneath the covers at least to
>> a small extent.
>>
>
> I thought that NT has a POSIX-y kernel, which is why it was so reliable?
> Or was VMS a POSIX-like system? I only used it for a couple of years in
> the early 80s (up to 4.0, I think), and never dug inside it; to me, it was
> just RSX-11/RSTS-11 on steroids.
The design of the original NT kernel was overseen by Dave Cutler, of VMS
and RSX-11M fame, and had a very strong and apparent VMS influence. Some
VAX wizards I know told me that they saw a lot of VMS in NT's design, but
that it probably wasn't as good (different design goals, etc: apparently
Gates wanted DOS++ and a quick time to market; Cutler wanted to do a *real*
OS and they compromised to wind up with VMS--).
It's true that there was (is? I don't know anymore...) a POSIX subsystem,
but that seemed more oriented at being a marketing check in the box for
sales to the US government and DoD (which had "standardized" on POSIX and
made it a requirement when investing in new systems).
Nowadays, I understand that one can run Linux binaries natively; the
Linux-compatibility subsystem will even `apt-get install` dependencies for
you. Satya Nadella's company isn't your father's Microsoft anymore. VSCode
(their new snazzy editor that apparently all the kids love) is Open Source.
Note that there is some irony in the NT/POSIX thing: the US Government
standardized on Windows about two decades ago and now can't seem to figure
out how to get off of it.
A short story I can't resist telling: a couple of years ago, some folks
tried to recruit me back into the Marine Corps in some kind of technical
capacity. I asked if I'd be doing, you know, technical stuff, and was told
that, since I was an officer, no, I wouldn't. Not really interested. I ended
up going to a bar with a recon operator (Marine special operations) to get
the straight scoop and talking to a light colonel (that's a Lieutenant
Colonel) on the phone for an hour for the hard sell. Over a beer, the recon
bubba basically said, "It was weird. I went back to the infantry." The
colonel kept asking me why I didn't run Windows: "but it's the most popular
operating system in the world!" Actually, I suspect Linux and BSD in the
guise of iOS/macOS is running on a lot more devices than Windows at this
point. I didn't bother pointing that out to him.
>> Would VMS become what it was without UNIX's influence? Would UNIX become
>> what it later was without VMS?
>>
>> Would UNIX exist, or even be close to what it became without DEC?
>>
>
> I've oft wondered that, but we have to use a new thread to avoid
> embarrassing Ken :-)
>
The speculation of, "what would have happened?" is interesting, though of
course unanswerable. I suspect that had it not been for Unix, we'd all be
running software that was closer to what you'd find on a mainframe or RT-11.
- Dan C.
> Already 20 years ago I met a guy (master's degree, university) who never freed dynamically allocated memory. He told me he was 'instantiating an object', but had no idea what a heap is, or what dynamically allocated memory means.
Years ago, I had a new programmer whom I just couldn't teach. He never understood the difference between an array and a pointer, and apparently couldn't be bothered to learn.
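For the record, the distinction he never got fits in a few lines of C:

    #include <stdio.h>

    char a[] = "hello";   /* six bytes of storage, right here */
    char *p  = "hello";   /* a pointer, aimed at a literal elsewhere */

    int main(void) {
        /* sizeof sees through the "arrays are just pointers" myth */
        printf("%zu %zu\n", sizeof a, sizeof p);  /* 6 vs. 8 on LP64 */
        a[0] = 'H';    /* fine: the array is writable storage */
     /* p[0] = 'H';       undefined: string literals may be read-only */
        return 0;
    }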
After stringing him along for three months, I was on my way into his office to fire him when I found out he had quit, but not before he checked a bunch of dreck into our source code control system.
I thought I had backed all his commits out at the time.
Years later, I was running "purify" on our product looking for memory leaks. I found a small utility function, one that predated the source code control system, leaking. This, I thought, was odd, as it had been there FOREVER and was well tested. I brought up the source code control system and checked anyhow, and found that the aforementioned programmer had checked in exactly one change: he had deleted the "free" call in it.
I KNOW what happened. He did something else that corrupted the malloc heap in his code, and that often causes a core dump in a subsequent malloc/free call. Apparently this was the place it bit him, so he just deleted the free call there.
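For anyone who hasn't been bitten by this one: an overrun of a malloc'd block scribbles on the allocator's bookkeeping, and the core dump arrives later, in some innocent malloc or free. A minimal sketch (whether it actually dumps core depends on the allocator):

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *a = malloc(16);
        char *b = malloc(16);
        if (!a || !b)
            return 1;

        /* the real bug: overrun 'a', scribbling on the allocator's
           bookkeeping between the blocks */
        memset(a, 'x', 48);

        /* the crash: free() trips over the trashed metadata here, in
           innocent code far from the overrun -- which is why deleting
           the free() "fixes" it */
        free(b);
        free(a);
        return 0;
    }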
So, in:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s2/mv.c
what's the point of this piece of code:
    p = place;
    p1 = p;
    while(*p++ = *argp3++);
    p2 = p;
    while(*p++ = *argp4++);
    execl("/bin/cp","cp", p1, p2, 0);
I mean, I get that it's copying the two strings pointed to by 'argp3' and
'argp4' into a temporary buffer at 'place', and leaving 'p1' and 'p2' as
pointers to the copies of said strings, but... why is it doing that?
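For anyone rusty on the idiom: each 'while' line is the classic inline strcpy, copying up to and including the NUL. A modern paraphrase of what the fragment leaves behind, with stand-in values:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char place[200];
        const char *argp3 = "srcfile", *argp4 = "dstfile";  /* stand-ins */

        char *p1 = place;                       /* copy of argp3 */
        strcpy(p1, argp3);
        char *p2 = place + strlen(argp3) + 1;   /* copy of argp4, right after */
        strcpy(p2, argp4);

        /* 'place' now holds "srcfile\0dstfile\0" back to back */
        printf("p1=%s p2=%s\n", p1, p2);
        return 0;
    }

That is, 'place' ends up holding the two strings back to back, with 'p1' and 'p2' pointing at them - which only sharpens the question of why the copies are needed at all.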
I at first thought that maybe the execl() call was smashing the stack (and
thus the copies pointed to by 'argp3' and 'argp4'), or something, but I don't
think it does that. So why couldn't the code have just been:
execl("/bin/cp","cp", argp3, argp4, 0);
Is this code maybe just a left-over from some previous variant?
Noel
> From: Dave Horsfall <dave(a)horsfall.org>
> I'd like to see it handle "JSR PC,@(SP)+"...
Heh!
But that does point out that the general concept is kind of confused - at
least, if you hope to get a fully working program out the far end. The only
way to do that is build (effectively) a simulator, one that _exactly_
re-creates the effects on the memory and registers of the original program.
Only instead of reading binary machine code, this one's going to read in the
machine language source, and produce a custom simulator, one that can run only
one program - the one fed into it.
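Concretely, each line of assembler source would be compiled into C (say) that updates a simulated machine state; an instruction like JSR PC,@(SP)+ - the classic PDP-11 coroutine swap - shows why: its target is only known at run time, so the generated simulator has to keep real registers and a real stack. A toy of what the generated code might look like (all of it invented for illustration):

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t r[8];           /* r[6] = SP, r[7] = PC */
    static uint16_t mem[32768];     /* 64KB, word-addressed for simplicity */

    /* What a 'simulator compiler' might emit for JSR PC,@(SP)+ :
       fetch the word at (SP), bump SP, then push the return address.
       Net effect: swap PC with the top of stack - a coroutine call.
       The returned value is the next PC, known only at run time. */
    static uint16_t jsr_pc_deferred_autoinc(uint16_t return_pc) {
        uint16_t target = mem[r[6] >> 1];   /* @(SP) */
        r[6] += 2;                          /* the '+' of (SP)+ */
        r[6] -= 2;                          /* JSR pushes the link reg... */
        mem[r[6] >> 1] = return_pc;         /* ...which here is PC itself */
        return target;
    }

    int main(void) {
        r[6] = 0x1000;                      /* SP */
        mem[r[6] >> 1] = 0x2000;            /* coroutine we'll jump into */
        uint16_t next = jsr_pc_deferred_autoinc(0x1004);
        printf("next PC %06o, stack top now %06o\n",
               next, mem[r[6] >> 1]);
        return 0;
    }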
Think of it as a 'simulator compiler'! :-)
Noel