I've sent a couple of you private messages with some more details of
why I ask this, but I'll bring the larger question to debate here:
have POSIX and LSB lost their usefulness/relevance? If so, given that
ISVs like Ansys are not going to go 'FOSS' and make their sources
available (ignore religious beliefs; it just is not their business
model), how do we get the level of precision that allows the 'binary
only' part of the market to continue to create applications?
Seriously, please try to stay away from religion on this question.
Clearly, a large number of ISVs have traditionally used interface
specifications. To me it started with things like the old Cobol and
Fortran standards for the languages. That was not good enough, since
the systems diverged, and /usr/group and then IEEE/ANSI/ISO did POSIX.
Clearly, POSIX enabled Unix implementations such as Linux to shine,
although Linux does not doggedly follow it. Apple was once POSIX
conformant, but I don't think they worry too much about it now. Linux
created the LSB, but I see fewer and fewer references to it.
I worry that without a real binary definition, it's darned hard (at
least in the higher end of the business where I live day-to-day) to
get ISVs to care.
What do you folks think?
Clem
As an aside about Wolfram and SMP (and one that actually has
something to do with UNIX):
I ran the VAX on which Wolfram et al (and it was very much et al)
developed SMP. It started out running UNIX/TS 1.0. I know how
that system was snuck out of Bell Labs, but if I told you I'd have
to terminate you with extreme prejudice. (I wasn't involved
anyway.)
SMP really needed dynamic paging; the TS 1.0 kernel had only
swapping. We had quite a few discussions about what to do.
Moving wholesale to 3BSD or early 4BSD (this was in 1981)
would have been a big upheaval for our entire user community.
Those systems were also notorious at the time for their delicate
stability: some people reported that they ran well, others that
they crashed frequently. Our existing system was pretty solid,
and I had already put some work into making it more so (better
handling of low-level machine errors, for example).
Somehow we ended up deciding that the least-painful path was
to lift the VM code out of 4BSD and shoehorn it into our
existing kernel, creating what we called Bastardized Paging
UNIX. I did most of the work; I was younger and more energetic
back then. Also considerably grumpier. In the heart of the
page-in (I think) code, the Berkeley guys had written a single
C function that stretched to about ten printed pages. (For those
too young to remember printers, that means about 600 lines.)
I was then and still am adamant that that's the wrong way to
write anything, but I didn't want to take the time to rewrite
it all, so (being young and grumpy) I relieved my feelings by
adding a grumpy comment at the top of the source file.
I also wrote a paper about the work, which was published in
(of all places) AUUGN. I haven't read it in years but it was
probably a bit snotty. It nevertheless ended up causing a
local UNIX-systems-software company to head-hunt me (but at
the time I had no interest in leaving Caltech), so it must
not have been too rude.
What days those were, when a single person could understand
enough of the OS to do stuff like that in only a month or two,
and get it pretty much right too. I did end up finding some
interesting race-condition bugs, probably introduced by me, but
fascinating to track down; e.g. something that went wrong only
if a page fault happened at exactly the right time with respect
to something else.
Norman Wilson
Toronto ON
Donald ODana:
Already 20 years ago I met a guy (master's degree, university) who
never freed dynamically allocated memory. He told me he was
'instantiating an object', but had no idea what a heap is, or what
dynamically allocated memory means.
====
This is the sort of programmer for whom garbage collection was named:
his programs are a collection of garbage.
Norman Wilson
Toronto ON
(In 1127-snark mode this evening)
>Date: Sat, 17 Feb 2018 17:47:22 +1100 (EST)
>From: Dave Horsfall <dave(a)horsfall.org>
>To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
>Subject: [TUHS] Of birthdays etc
>Message-ID: <alpine.BSF.2.21.1802171649520.798(a)aneurin.horsfall.org>
>Content-Type: text/plain; format=flowed; charset=US-ASCII
>
>...
>Harris' Lament? Look it up with your favourite search engine (I don't use Google).
>
Probably in early 1995 I dabbled a bit in AltaVista Search, so even
now I'm somehow still using Yahoo :-)
Keep it coming, Dave; it's appreciated, at least by me.
From a former DECcie,
uncle rubl
Blimey... How was I to know that a throw-away remark would almost develop
into a shitfight? It would help if people changed the Subject line too,
as I'm sure that Ken must've been a little peeved... It would also help
if users didn't bloody top-post either, but I suspect that I've lost that
fight.
Anyway, this whole business started when I thought it might be a good idea
to post reminders of historical events here, as I do with some of the
other lists that I infest^W infect^W inhabit. I figured that the old
farts here might like to be reminded of (IMHO) significant events, and
similarly the youngsters might want to be reminded that there was indeed
life before Linux (which, by the way, I happen to loathe, but that's a
different story).
I'm glad that some people appreciate it; and don't worry, Steffen, you'll
soon catch up, as they should all be in the archives :-) A long-term goal
(if I live that long) is to set up one of those "this day in history"
sites, but it looks like Harris' Lament[*] has already applied :-(
I've had a number of corrections (thanks!), some weird comments on
pronunciation (an Englishman can probably pick my ancestry from me saying
"castle" as "c-AH-stle" and "dance" as "d-A-nce" etc), but oddly enough no
criticism (well, unless I'm talking about mounting a magtape as a
filesystem; no, I will not forget the implication that I was a liar), and
Warren has yet to spank me...
For the morbidly curious I keep these events in Calendar on my MacBook
(which actually spends most of its time in Terminal, and I don't even know
how to use the Finder!), and am always noting things which interest me and
therefore possibly others.
Anyway, thanks all; it is an honour and a privilege to share a mailing
list with some of the people who wrote the software that I have both used
in the past and still use to this day.
[*]
Harris' Lament? Look it up with your favourite search engine (I don't use
Google).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> On Feb 14, 2018, Dave Horsfall <dave(a)horsfall.org> wrote:
>
> Computer pioneer Niklaus Wirth was born on this day in 1934; he basically
> designed ALGOL, one of the most influential languages ever, with just
> about every programming language in use today tracing its roots to it.
Wirth designed many languages, including Euler, Algol W, Pascal,
Modula, and Oberon, but he did not design Algol; more specifically, he
did not design Algol 60. Instead, a committee (J. W. Backus, F. L.
Bauer, J. Green, C. Katz, J. McCarthy, P. Naur, A. J. Perlis, H.
Rutishauser, K. Samelson, B. Vauquois, J. H. Wegstein, A. van
Wijngaarden, and M. Woodger) designed it, and Peter Naur edited the
remarkable Algol 60 specification. A few others, including Edsger
Dijkstra, who completed the first implementation, participated in
meetings leading up to the final design.
From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Like PL/I, it also
> borrowed the indispensable notion of structs from business languages
> (Flowmatic, Comtran, Cobol).
That is an interesting insight. I always thought that structs were
inspired by the assembler DORG construct, and hence the shared namespace
for members.
The above insight goes some way to explain why the PDP-11 'as' did not
have a DORG construct, but early C did have 'struct'.
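As a small aside, the shared namespace left a visible legacy: member
names in the traditional Unix structs carry per-struct prefixes, which
is just what you'd expect if a member name was once a bare offset,
like an assembler symbol. A hypothetical sketch in modern C, not a
claim about the actual history:

    #include <stdio.h>
    #include <sys/stat.h>

    /* The st_ prefix on every member of struct stat is a relic of
     * early C, where all struct members lived in one namespace and a
     * member name was effectively just an offset. */
    int main(void)
    {
        struct stat sb;

        if (stat("/etc/passwd", &sb) == 0)
            printf("size: %ld\n", (long)sb.st_size);
        return 0;
    }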
Paul
So people have called me on the claim that lisp is not fast. Here's a
rebuttal.
Please write a clone of GNU grep in lisp to demonstrate that the claim
that lisp is slower than C is false.
Best of luck, and I'll be super impressed if you can get even remotely
close without dropping into C or assembler. If you do get close, I
will withdraw my claim, stand corrected, point future "lisp is slow"
people at the lisp-grep, and buy you dinner and/or drinks.
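For calibration, here's a minimal fixed-string matcher in C. It is
nothing like GNU grep (which, as I understand it, uses Boyer-Moore and
avoids splitting the input into lines), but it's the sort of baseline
a lisp clone would first have to beat; all names here are mine:

    #include <stdio.h>
    #include <string.h>

    /* Minimal fixed-string "grep": print stdin lines containing
     * argv[1]. A baseline sketch only. */
    int main(int argc, char **argv)
    {
        char line[4096];

        if (argc < 2) {
            fprintf(stderr, "usage: %s string\n", argv[0]);
            return 2;
        }
        while (fgets(line, sizeof line, stdin))
            if (strstr(line, argv[1]))
                fputs(line, stdout);
        return 0;
    }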
--lm
> From: Larry McVoy <lm(a)mcvoy.com>
> the proof here is to show up with a pure lisp grep that is fast as the C
> version. ... I've never seen a lisp program that out performed a well
> written C program.
Your exact phrase (which my response was in reply to) was "lisp and
performance is not a thing". You didn't say 'LISP is not just as fast as C' -
a different thing entirely. I disagreed with your original statement, which
seems to mean 'LISP doesn't perform well'.
Quite a few people spent quite a lot of time making LISP compiler output fast,
to the point that it was possible to say "this compiler is also intended to
compete with the S-1 Pascal and FORTRAN compilers for quality of compiled
numeric code" [Brooks, Gabriel and Steele, 1982] and "with the development of
the S-1 Lisp compiler, it once again became feasible to implement Lisp in Lisp
and to expect similar performance to the best hand-tuned,
assembly-language-based Lisp systems" [Steele and Gabriel, 1993].
Noel
> Computer pioneer Niklaus Wirth was born on this day in 1934; he basically
> designed ALGOL, one of the most influential languages ever, with just
> about every programming language in use today tracing its roots to it.
Rather than "tracing its roots to it", I'd say "has some roots in it".
Algol per se hardly made a ripple in the US market, partly due to
politics and habit, but also because it didn't espouse separate
compilation. However, as asserted above, it had a profound impact on
language designers and counts many languages as descendants.
To bring the subject back to Unix, C followed Fortran's modularity and
Algol's block structure. (But it reached back across the definitive Algol
60 to pick up the "for" statement from Algol 58.) Like PL/I, it also
borrowed the indispensable notion of structs from business languages
(Flowmatic, Comtran, Cobol). It adopted pointers from Lisp, as polished
by BCPL (pointer arithmetic) and PL/I (the -> operator). For better or
worse, it went its own way by omitting multidimensional arrays.
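To make two of those borrowings concrete, here is a toy sketch (names
invented) showing BCPL-flavoured pointer arithmetic next to the
PL/I-derived -> operator:

    #include <stdio.h>

    struct point { int x, y; };

    int main(void)
    {
        struct point pts[3] = { {1, 2}, {3, 4}, {5, 6} };
        struct point *p = pts;

        p = p + 2;                      /* scaled by the element size */
        printf("%d %d\n", p->x, p->y);  /* member through a pointer */
        return 0;
    }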
So C has many roots. It just isn't fashionable in computer-language
circles to highlight Cobol in your family tree.
Doug
> From: Larry McVoy <lm(a)mcvoy.com>
> I don't know all the details but lisp and performance is not a thing.
This isn't really about Unix, but I hate to see inaccuracies go into
archives...
You might want to read:
http://multicians.org/lcp.html
Of course, when it comes to the speed/efficiency of the compiled code,
much depends on the program/programmer. If one uses CONS wildly, there
will have to be garbage collection, which is of course not fast. But
for code written to stay away from expensive constructs, my
understanding is that 'lcp' and NCOMPLR produced pretty amazing object
code.
Noel
Actually, Algol 60 did allow functions and procedures as arguments (with correct static scoping), but not as results, so they weren’t “first class” in the Scheme sense. The Algol 60 report (along with its predecessor and successor) is available, among other places, here:
http://www.softwarepreservation.org/projects/ALGOL/standards/
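In C terms (a loose analogy, with invented names): functions pass
downward as arguments just fine, much as in Algol 60, but a nested
function that captures a local and escapes upward, a first-class
closure in the Scheme sense, has no counterpart, because nothing keeps
the captured environment alive:

    #include <stdio.h>

    /* Functions as arguments work: apply f twice. */
    static int inc(int x) { return x + 1; }
    static int twice(int (*f)(int), int x) { return f(f(x)); }

    int main(void)
    {
        printf("%d\n", twice(inc, 3));  /* prints 5 */
        return 0;
    }

    /* Functions as *results* that capture locals (upward funargs)
     * are the part C, like Algol 60, does not attempt. */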
On Feb 16, 2018, Bakul Shah <bakul(a)bitblocks.com> wrote:
> They did lexical scoping "right", no doubt. But back when
> Landin first found that lambda calculus was useful for
> modeling programming languages these concepts were not clearly
> understood. I do not recall reading anything about whether
> Algol designers not allowing full lexical scoping was due to an
> oversight or realizing that efficient implementation of
> functional arguments was not possible. Maybe Algol's call by
> name was deemed sufficient? At any rate, Algol's not having
> full lexical scoping does not mean one can simply reject the
> idea of being influenced by it. Often at the start there is
> lots of fumbling before people get it right. Maybe someone
> should ask Steele?
Clueless or careless?
A customer program had worked for many years, till one of the
transaction messages had a few bytes added.
Looking into it, I discovered that the program had only worked because
the receive buffer was followed by another buffer which was used in a
later sequence. Only when that second buffer also overflowed did some
critical integers get overwritten and used as indexes into tables,
which gave a lot of fun.
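A hypothetical reconstruction in miniature (every name is made up):

    #include <stdio.h>
    #include <string.h>

    /* Sketch of the failure mode; undefined behaviour, of course,
     * which is the point. */
    struct transaction {
        char recv_buf[128];   /* the message used to fit here...   */
        char later_buf[128];  /* ...then quietly spilled into this */
        int  table_index;     /* clobbered only when later_buf
                               * also overflowed                   */
    };

    int main(void)
    {
        struct transaction t = { .table_index = 3 };
        char msg[300];

        memset(msg, 'x', sizeof msg - 1);
        msg[sizeof msg - 1] = '\0';
        strcpy(t.recv_buf, msg);   /* overruns both buffers */
        printf("index is now %d\n", t.table_index);
        return 0;
    }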
Well, as all here know, C is fun :-)
> From: Larry McVoy <lm(a)mcvoy.com>
I am a completely non-LISP person (I think my brain was wired in C
before C existed :-), but...
> Nobody has written a serious operating system
Well, the LISP Machine OS was written entirely in LISP. Dunno if you
call that a 'serious OS', but it was a significantly more capable OS
than, say, DOS. (OK, there was a lot of microcode that did a lot of
the low-level stuff, but...)
> or a serious $BIG_PROJECT in Lisp.
Have you ever seen a set of Symbolics manuals? Sylph-like, it wasn't!
> Not one that has been commercially successful, so far as I know.
It's true that Symbolics _eventually_ crashed, but I think the biggest
factor there was that commodity microprocessors (e.g. the Pentium) got
faster so much more quickly than Symbolics' custom LISP hardware that
the whole rationale for Symbolics (custom hardware to run LISP fast)
went away. They still exist as a software company selling their coding
environment, FWIW.
> C performs far better even though it is, in the eyes of lisp people, far
> more awkward to do things.
I think it depends on what you're doing. For some kinds of things, LISP is
probably better.
I mean, for most of the kind of things I do, I think C is the bees' knees
(well, except I had to add conditions and condition handlers when I went to
write a compiler in it), but for some of the AI projects I know a little
about, LISP seems (from a distance, admittedly) to be a better match.
Noel
On Feb 15, 2018, Ian Zimmerman <itz(a)very.loosely.org> wrote:
>>
>> So, how's this relevant to Unix? Well, I'd like to know more about the
>> historical interplay between the Unix and Lisp communities. What about
>> the Lisp done at Berkeley on the VAX (Franz Lisp).
>
> I know one of the Franz founders, I'll ask him when I have a chance.
There is some information about Franz Lisp and its origins here:
http://www.softwarepreservation.org/projects/LISP/maclisp_family/#Franz_Lis…
(And lots more information about many other varieties of Lisp at the same web site.)
On Sat, Feb 3, 2018 at 5:59 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Sat, 3 Feb 2018, Arthur Krewat wrote:
>
>> I would imagine that Windows wouldn't be what it is today without UNIX.
>> Matter of fact, Windows NT (which is what Windows has been based on since
>> Windows ME went away) is really DEC's VMS underneath the covers at least to
>> a small extent.
>>
>
> I thought that NT has a POSIX-y kernel, which is why it was so reliable?
> Or was VMS a POSIX-like system? I only used it for a couple of years in
> the early 80s (up to 4.0, I think), and never dug inside it; to me, it was
> just RSX-11/RSTS-11 on steroids.
The design of the original NT kernel was overseen by Dave Cutler, of VMS
and RSX-11M fame, and had a very strong and apparent VMS influence. Some
VAX wizards I know told me that they saw a lot of VMS in NT's design, but
that it probably wasn't as good (different design goals, etc: apparently
Gates wanted DOS++ and a quick time to market; Cutler wanted to do a *real*
OS and they compromised to wind up with VMS--).
It's true that there was (is? I don't know anymore...) a POSIX subsystem,
but that seemed more oriented at being a marketing check in the box for
sales to the US government and DoD (which had "standardized" on POSIX and
made it a requirement when investing in new systems).
Nowadays, I understand that one can run Linux binaries natively; the
Linux-compatibility subsystem will even `apt-get install` dependencies
for you. Satya Nadella's company isn't your father's Microsoft
anymore. VSCode (their new snazzy editor that apparently all the kids
love) is open source.
Note that there is some irony in the NT/POSIX thing: the US Government
standardized on Windows about two decades ago and now can't seem to figure
out how to get off of it.
A short story I can't resist telling: a couple of years ago, some folks
tried to recruit me back into the Marine Corps in some kind of technical
capacity. I asked if I'd be doing, you know, technical stuff and was told
that, since I was an officer no, I wouldn't. Not really interested. I ended
up going to a bar with a recon operator (Marine special operations) to get
the straight scoop and talking to a light colonel (that's a Lieutenant
Colonel) on the phone for an hour for the hard sell. Over a beer, the recon
bubba basically said, "It was weird. I went back to the infantry." The
colonel kept asking me why I didn't run Windows: "but it's the most popular
operating system in the world!" Actually, I suspect Linux and BSD in the
guise of iOS/macOS is running on a lot more devices than Windows at this
point. I didn't bother pointing that out to him.
>> Would VMS become what it was without UNIX's influence? Would UNIX
>> become what it later was without VMS?
>>
>> Would UNIX exist, or even be close to what it became without DEC?
>>
>
> I've oft wondered that, but we have to use a new thread to avoid
> embarrassing Ken :-)
>
The speculation of, "what would have happened?" is interesting, though of
course unanswerable. I suspect that had it not been for Unix, we'd all be
running software that was closer to what you'd find on a mainframe or RT-11.
- Dan C.
> Already 20 years ago I met a guy (master's degree, university) who
> never freed dynamically allocated memory. He told me he was
> 'instantiating an object', but had no idea what a heap is, or what
> dynamically allocated memory means.
Years ago, I had a new programmer whom I just couldn't teach. He never
understood the difference between an array and a pointer, and
apparently couldn't be bothered to learn.
After stringing him along for three months, I was on my way into his
office to fire him when I found out he had quit, but not before he
checked a bunch of dreck into our source code control system.
I thought I backed all his commits out at the time.
Years later I was running "purify" on our product looking for memory
leaks. I found a small utility function, one that predated the source
code control system, leaking. This, I thought, was odd, as it had been
there FOREVER and was well tested. I brought up the source code system
and checked it anyhow, and found that the aforementioned programmer
had checked in exactly one change: he had deleted the "free" call in
it.
I KNOW what happened. He did something else to corrupt the malloc heap
in his code, and that often causes a core dump in a subsequent
malloc/free call. Apparently this was the place where it struck him,
so he just deleted the free call there.
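For the youngsters, a made-up example of why the crash shows up in an
innocent-looking call, far from the real bug:

    #include <stdlib.h>
    #include <string.h>

    /* Made-up illustration: the overrun corrupts the allocator's
     * bookkeeping, and the crash surfaces in a later, unrelated
     * malloc() or free(). */
    int main(void)
    {
        char *a = malloc(16);
        char *b = malloc(16);

        if (a == NULL || b == NULL)
            return 1;
        memset(a, 'x', 32);   /* the real bug: writes past a's end */
        free(b);              /* often dies here, nowhere near it  */
        free(a);
        return 0;
    }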
So, in:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s2/mv.c
what's the point of this piece of code:
p = place;
p1 = p;
while(*p++ = *argp3++);
p2 = p;
while(*p++ = *argp4++);
execl("/bin/cp","cp", p1, p2, 0);
I mean, I get that it's copying the two strings pointed to by 'argp3' and
'argp4' into a temporary buffer at 'place', and leaving 'p1' and 'p2' as
pointers to the copies of said strings, but... why is it doing that?
I at first thought that maybe the execl() call was smashing the stack (and
thus the copies pointed to by 'argp3' and 'argp4'), or something, but I don't
think it does that. So why couldn't the code have just been:
execl("/bin/cp","cp", argp3, argp4, 0);
Is this code maybe just a left-over from some previous variant?
Noel
> From: Dave Horsfall <dave(a)horsfall.org>
> I'd like to see it handle "JSR PC,@(SP)+"...
Heh!
But that does point out that the general concept is kind of confused - at
least, if you hope to get a fully working program out the far end. The only
way to do that is build (effectively) a simulator, one that _exactly_
re-creates the effects on the memory and registers of the original program.
Only instead of reading binary machine code, this one's going to read in the
machine language source, and produce a custom simulator, one that can run only
one program - the one fed into it.
Think of it as a 'simulator compiler'! :-)
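A rough sketch of what such a compiler might emit for Dave's favourite
instruction, with all names and representation choices invented here:

    #include <stdint.h>

    /* Simulated PDP-11 state; r[6] is SP, r[7] is PC. Memory is kept
     * word-addressed for simplicity, hence the >> 1. */
    struct pdp11 {
        uint16_t r[8];
        uint16_t mem[32768];
    };

    /* Generated code for JSR PC,@(SP)+: fetch the target through the
     * top of stack (autoincrementing SP), then push the old PC. The
     * net effect swaps PC with the top of stack: a coroutine jump. */
    static void sim_jsr_pc_at_sp_plus(struct pdp11 *s)
    {
        uint16_t target = s->mem[s->r[6] >> 1]; /* @(SP)+ fetch */
        s->r[6] += 2;                           /* ...autoincrement */
        s->r[6] -= 2;                           /* JSR pushes old PC */
        s->mem[s->r[6] >> 1] = s->r[7];
        s->r[7] = target;                       /* jump */
    }

    int main(void)
    {
        static struct pdp11 s;

        s.r[6] = 0x1000;              /* SP */
        s.r[7] = 0x0200;              /* PC */
        s.mem[s.r[6] >> 1] = 0x0400;  /* target on top of stack */
        sim_jsr_pc_at_sp_plus(&s);
        return s.r[7] == 0x0400 ? 0 : 1;
    }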
Noel
>> I was wondering what it would take to convert the v6/v7 basic program
>> into something that can be run today.
>
> Hmmm... If it were C-generated then it would be (somewhat) easy, but it's
> hand-written and hand-optimised... You'd have to do some functional
> analysis on it e.g. what does this routine do, etc.
>
>> Its 2128 lines. It doesn't have that fun instruction in it :)
>
> I know! Say ~2,000 lines, say ~100 people on this list, distributed
> computing to the rescue! That's only 20 lines each, so it ought to be a
> piece of cake :-)
I'm up for that! However, only if the resulting C program can be compiled/run
on a V6/PDP11 again.
Let's assume that reverse engineering a subroutine of 20 lines takes
an hour. That then makes for 100 hours. If 10 people participate and
contribute one hour/routine per week, it will be done by May.
However, the initial analysis of the code architecture is a (time) hurdle.
Paul
PS: the Fortran 66 of V6 is also assembler-only...
IMO:
1) It kinda did catch on, in the form of macOS, but there was a time
when it was nearly dead as the major vendors moved to System V. For
some reason, Sun was the last major vendor to make the move, but they
caught most of the flack.
2) I think the main reason BSD nearly died was the AT&T lawsuit. At
the time, Linux appeared to be a safer bet legally.
3) Linux got a reputation as an OS you had to be an expert to install,
so lots of people started installing it to "prove themselves". This
was sort of true back when Linux came as 2 floppy images, but it
didn't remain true for very long.
4) I believe the SCO lawsuit "against Linux" was too little, too late
to kill Linux's first mover advantage in the opensource *ix
department.
5) I think FreeBSD's ports and similar huge-source-tree approaches
didn't work out as well as Linux developers contributing their changes
upstream.
Hi all,
Would anyone here be able to help me troubleshoot my qd32 controller? I
have a pdp11/73 that's mostly working, boots 2.11 from rl02 okay, but I
need my big disk to work so I can load the rest of the distro.
I've been following the QD32 manual to enter the geometry of my real,
working M2333 (jumpered correctly according to the manuals), but when
I load the special command into the QD32's SP register that is
supposed to copy the geometry table from PDP-11 memory into the
NOVRAM, I get a bad status value back from the SP register, and the
controller remains unresponsive when I try to store the geometry. If
I go ahead and try the built-in QD32 format command, it responds
similarly. When I pull in mkfs from tape (vtserver) and try anyway,
despite the failures, to run mkfs on the M2333, I get an !online error
from the standalone Unix mkfs. The disk does respond (the select light
flashes and I can hear the heads actuating), but without geometry and
format I'm obviously dead in the water.
Any suggestions on how to proceed?
thx
jake