Someone on one of these lists seems to be at ok-labs.com; well...
aneurin% host ok-labs.com
;; connection timed out; no servers could be reached
aneurin%
Houston, we have a problem... Could whoever is responsible please
sprinkle some fairy dust somewhere?
Thanks.
-- Dave
My all time favorite presentation on tail-recursion:
https://www.youtube.com/watch?v=-PX0BV9hGZY
On 2/22/22 4:39 AM, Ralph Corderoy wrote:
> Hi Otto,
>
>> MacOS uses the GNU implementation, which has a long-standing issue with
>> deep recursion. It cannot even handle the tail-recursive calls used
>> here and will run out of its stack.
> When learning dc and seeing it relied on tail calls, the first thing
> I did was check it did tail-call elimination, and it did. That was
> GNU dc.
>
> Trying just now, I see no growth in memory usage despite heavy CPU load
> shown by TIME increasing.
>
> $ dc
> !ps u `pidof dc`
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> ralph 11489 0.0 0.0 2332 1484 pts/1 S+ 10:33 0:00 dc
> [lmx]smlmx
> ^C
> Interrupt!
> !ps u `pidof dc`
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> ralph 11489 75.5 0.0 2332 1488 pts/1 S+ 10:33 0:46 dc
>
> The memory used remained at that level during the macro execution too,
> watched from outside.
>
> Do you have more detail on what GNU dc can't handle? dc without
> tail-call elimination is a bit crippled.
>
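(Aside: a minimal dc sketch of the kind of loop one can use to check this,
assuming GNU dc's register, duplicate, and comparison operators -- the only
recursive call is in tail position, so with tail-call elimination the memory
use should stay flat:)

$ dc
[1-d0<L]sL   # macro L: subtract 1, duplicate, re-run L while the value > 0
1000000 lLx  # count down from a million
p            # prints 0 once the loop finishes
q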
Sad news...
-- Dave
---------- Forwarded message ----------
Date: Tue, 15 Feb 2022 16:17:04 -0500
From: John P. Linderman <jpl.jpl(a)gmail.com>
To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
Subject: [TUHS] Lorinda Cherry
I got this from a friend today (15 February):
===========
I'm sorry to report that Lorinda passed away a few days ago. I got a call
from her sister today. Apparently the dog walker hadn't seen her for a few
days and called the police. The police entered the house and found her
there. Her sister says they are assuming either a heart attack or a stroke.
[TUHS to Bcc, +COFF <coff(a)minnie.tuhs.org> ]
This isn't exactly COFF material, but I don't know what list is more
appropriate.
On Thu, Feb 3, 2022 at 9:41 PM Jon Steinhart <jon(a)fourwinds.com> wrote:
> Adam Thornton writes:
> > Do the august personages on this list have opinions about Rust?
> > People who generally have tastes consonant with mine tell me I'd like
> Rust.
>
> Well, I'm not an august personage and am not a Rust programmer. I did
> spend a while trying to learn rust a while ago and wasn't impressed.
>
> Now, I'm heavily biased in that I think that it doesn't add value to keep
> inventing new languages to do the same old things, and I didn't see
> anything
> in Rust that I couldn't do in a myriad of other languages.
>
I'm a Rust programmer, mostly using it for bare-metal kernel programming
(though in my current gig, I find myself mostly in Rust
userspace...ironically, it's back to C for the kernel). That said, I'm not
a fan-boy for the language: it's not perfect.
I've written basically four kernels in Rust now, to varying degrees of
complexity from, "turn the computer on, spit hello-world out of the UART,
and halt" to most of a v6 clone (which I really need to get around to
finishing) to two rather more complex ones. I've done one ersatz kernel in
C, and worked on a bunch more in C over the years. Between the two
languages, I'd pick Rust over C for similar projects.
Why? Because it really doesn't just do the same old things: it adds new
stuff. Honest!
Further, the sad reality (and the tie-in with TUHS/COFF) is that modern C
has strayed far from its roots as a vehicle for systems programming, in
particular, for implementing operating system kernels (
https://arxiv.org/pdf/2201.07845.pdf). C _implementations_ target the
abstract machine defined in the C standard, not hardware, and they use
"undefined behavior" as an excuse to make aggressive optimizations that
change the semantics of one's program in such a way that some of the tricks
you really do have to do when implementing an OS are just not easily done.
For example, consider this code:
uint16_t mul(uint16_t a, uint16_t b) { return a * b; }
Does that code ever exhibit undefined behavior? The answer is that "it
depends, but on most platforms, yes." Why? Because most often uint16_t is a
typedef for `unsigned short int`, and because `short int` is of lesser
"rank" than `int` and usually not as wide, the "usual arithmetic
conversions" will apply before the multiplication. This means that the
unsigned shorts will be converted to (signed) int. But on many
platforms `int` will be a 32-bit integer (even on 64-bit platforms!). However,
the range of an unsigned 16-bit integer is such that the product of two
uint16_t's can be larger than whatever is representable in a signed 32-bit
int, leading to overflow, and signed integer overflow is undefined behavior.
But does that
_matter_ in practice? Potentially: since signed int overflow is UB, the
compiler can decide it would never happen. And so if the compiler decides,
for whatever reason, that (say) a saturating multiplication is the best way
to implement that multiplication, then that simple single-expression
function will yield results that (I'm pretty sure...) the programmer did
not anticipate for some subset of inputs. How do you fix this?
uint16_t mul(uint16_t a, uint16_t b) { unsigned int aa = a, bb = b; return aa * bb; }
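(To make the failure mode concrete -- an illustrative sketch of my own, not
from any project mentioned here, assuming a platform where int is 32 bits and
uint16_t is unsigned short:)

#include <stdint.h>

uint16_t mul_ub(uint16_t a, uint16_t b) {
    /* a and b promote to (signed) int; 0xFFFF * 0xFFFF = 0xFFFE0001,
     * which exceeds INT_MAX (0x7FFFFFFF), so the multiplication can
     * overflow a signed int: undefined behavior. */
    return a * b;
}

uint16_t mul_ok(uint16_t a, uint16_t b) {
    /* Promote to unsigned int first: unsigned arithmetic is defined to
     * wrap modulo 2^N, so the multiplication is always well-defined. */
    unsigned int aa = a, bb = b;
    return (uint16_t)(aa * bb);
}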
That may sound very hypothetical, but similar things have shown up in the
wild: https://people.csail.mit.edu/nickolai/papers/wang-undef-2012-08-21.pdf
In practice, this one is unlikely. But it's not impossible: the compiler
would be right, the programmer would be wrong. One thing I've realized
about C is that successive generations of compilers have tightened the
noose on UB so that code that has worked for *years* all of a sudden breaks
one day. There be dragons in our code.
After being bit one too many times by such issues in C I decided to
investigate alternatives. The choices at the time were either Rust or Go:
for the latter, one gets a nice, relatively simple language, but a big
complex runtime. For the former, you get a big, ugly language, but a
minimal runtime akin to C: to get it going, you really don't have to do
much more than set up a stack and jump to a function. While people have
built systems running Go at the kernel level (
https://pdos.csail.mit.edu/papers/biscuit.pdf) that seemed like a pretty
heavy lift. On the other hand, if Rust could deliver on a quarter of the
promises it made, I'd be ahead of the game. That was sometime in the latter
half of 2018 and since then I've generally been pleasantly surprised at how
much it really does deliver.
For the above example, integer overflow is defined to trap. If you want
wrapping (or saturating!) semantics, you request those explicitly:
fn mul(a: u16, b: u16) -> u16 { a.wrapping_mul(b) }
This is perfectly well-defined, and guaranteed to work pretty much forever.
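(A small sketch of my own showing the standard u16 methods for spelling out
the intended overflow behavior; note that a plain `a * b` in today's Rust
panics on overflow in debug builds and wraps in release builds, but is never
UB:)

fn show(a: u16, b: u16) {
    let w = a.wrapping_mul(b);                  // wraps modulo 2^16
    let s = a.saturating_mul(b);                // clamps at u16::MAX
    let c: Option<u16> = a.checked_mul(b);      // None on overflow
    let (v, overflowed) = a.overflowing_mul(b); // wrapped value plus a flag
    println!("{w} {s} {c:?} {v} {overflowed}");
}

fn main() {
    show(400, 400); // 160_000 does not fit in a u16
}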
But, my real issue came from some of the tutorials that I perused. Rust is
> being sold as "safer". As near as I can tell from the tutorials, the model
> is that nothing works unless you enable it. Want to be able to write a
> variable? Turn that on. So it seemed like the general style was to write
> code and then turn various things on until it ran.
>
That's one way to look at it, but I don't think that's the intent: the
model is rather, "immutable by default."
Rust forces you to think about mutability, ownership, and the semantics of
taking references, because the compiler enforces invariants on all of those
things in a way that pretty much no other language does. It is opinionated,
and not shy about sharing those opinions.
To me, this implies a mindset that programming errors are more important
> than thinking errors, and that one should hack on things until they work
> instead of thinking about what one is doing. I know that that's the
> modern definition of programming, but will never be for me.
It's funny, I've had the exact opposite experience.
I have found that it actually forces you to invest a _lot_ more up-front
thought about what you're doing. Writing code first, and then sprinkling in
`mut` and `unsafe` until it compiles is a symptom of writing what we called
"crust" on my last project at Google: that is, "C in Rust syntax." When I
convinced our team to switch from C(++) to Rust, but none of us were really
particularly adept at the language, and all hit similar walls of
frustration; at one point, an engineer quipped, "this language has a
near-vertical learning curve." And it's true that we took a multi-week
productivity hit, but once we reached a certain level of familiarity,
something equally curious happened: our debugging load went way, _way_ down
and we started moving much faster.
It turned out it was harder to get a Rust program to build at first,
particularly with the bad habits we'd built up over decades of whatever
languages we came from, but once it did, those programs very often ran
correctly the first time. You had to think _really hard_ about what data
structures to use, their ownership semantics, their visibility, locking,
etc. A lot of us had to absorb an emotional gut punch when the compiler
showed us that things we _knew_ were correct were, in fact, not correct.
But once code compiled, it tended not to have the kinds of errors that were
insta-panics or triple faults (or worse, silent corruption you only noticed
a million instructions later): no dangling pointers, no use-after-free
bugs, no data races, no integer overflow, no out-of-bounds array
references, etc. Simply put, the language _forced_ a level of discipline on
us that even veteran C programmers didn't have.
It also let us program at a moderately higher level of abstraction;
off-by-one errors were gone because we had things like iterators. ADTs and
a "Maybe" monad (the `Result<T,E>` type) greatly improved our error
handling. `match` statements have to be exhaustive, so you can't add a
variant to an enum and forget to update code to account for it in just that
one place (the compiler squawks at you). It's a small point, but the `?`
operator removed a lot of tedious boilerplate from our code, making things
clearer without sacrificing robust failure handling. Tuples for multiple
return values instead of using pointers for output arguments (that have to
be manually checked for validity!) are really useful. Pattern matching and
destructuring in a fast systems language? Good to go.
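(An illustrative sketch of my own -- the type and function names here are
invented, not taken from any project discussed above -- of the Result, `?`,
and exhaustive-match combination being described:)

#[derive(Debug)]
enum Cmd {
    Read { addr: u64 },
    Write { addr: u64, val: u32 },
}

#[derive(Debug)]
enum DevError {
    BadAddr(u64),
}

fn check(addr: u64) -> Result<u64, DevError> {
    if addr < 0x1000 { Err(DevError::BadAddr(addr)) } else { Ok(addr) }
}

fn run(cmd: Cmd) -> Result<u32, DevError> {
    // `?` propagates errors without boilerplate; the match must cover every
    // Cmd variant, so adding a new variant later is a compile error until
    // this code is updated to handle it.
    match cmd {
        Cmd::Read { addr } => { check(addr)?; Ok(0) }
        Cmd::Write { addr, val } => { check(addr)?; Ok(val) }
    }
}

fn main() {
    println!("{:?}", run(Cmd::Read { addr: 0x10 }));            // Err(BadAddr(16))
    println!("{:?}", run(Cmd::Write { addr: 0x2000, val: 7 })); // Ok(7)
}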
In contrast, I ran into a "bug" of sorts with KVM due to code I wrote that
manifested itself as an "x86 emulation error" when it was anything but: I
was turning on paging very early in boot, and I had manually set up an
identity mapping for the low 4GiB of address space for the jump from 32-bit
to 64-bit mode. I used gigabyte pages since it was easy, and I figured it
would be supported, but I foolishly didn't check the CPU features when
running this under virtualization for testing and got that weird KVM error.
What was going on? It turned out KVM in this case didn't support gig pages,
but the hardware did; the software worked just fine until the first time
the kernel went to do IO. Then, when the hypervisor went to fetch the
instruction bytes to emulate the IO instruction, it saw the gig-sized pages
and errored. Since the incompatibility was manifest deep in the bowels of
the instruction emulation code, that was the error that returned, even
though it had nothing to do with instruction emulation. It would have been
nice to plumb through some kind of meaningful error message, but in C
that's annoying at best. In Rust, it's trivial.
https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/
70% of CVEs out of Microsoft over the last 15 years have been memory safety
issues, and while we may poo-poo MSFT, they've got some really great
engineers and let's be honest: Unix and Linux aren't that much better in
this department. Our best and brightest C programmers continue to turn out
highly buggy programs despite 50 years of experience.
But it's not perfect. The allocator interface was a pain (it's defined to
panic on allocation failure; I'm cool with a NULL return), though work is
ongoing in this area. There's no ergonomic way to initialize an object
'in-place' (https://mcyoung.xyz/2021/04/26/move-ctors/) and there's no
great way to say, essentially, "this points at RAM; even though I haven't
initialized it, just trust me don't poison it" (
https://users.rust-lang.org/t/is-it-possible-to-read-uninitialized-memory-w…
-- we really need a `freeze` operation). However, right now? I think it
sits at a local maximum for systems languages targeting bare-metal.
- Dan C.
Oh dear. This is getting a little heated. TUHS to Bcc:, replies to COFF.
On Sun, Feb 6, 2022 at 8:15 AM Ed Carp <erc(a)pobox.com> wrote:
> Since you made this personal and called me out specifically, I will
> respond:
>
> "In what way is automatic memory management harder, more unsafe, and
> less robust than hand-written memory management using malloc and
> free?"
>
> Because there's no difference in the two. Someone had to write the
> "automatic memory management", right?
>
I cannot agree with this; there is a big difference.
With GC, you are funneling all of the fiddly bits of dealing with memory
management through a runtime that is written by a very small pool of people
who are intimately familiar with the language, the runtime, the compilation
environment, and so on. That group of subject matter experts produces a
system that is tested by every application (much like the _implementation_
of malloc/free itself, which is not usually reproduced by every programmer
who _uses_ malloc/free).
It's like in "pure" functional languages such as Haskell, where everything
is immutable: that doesn't mean that registers don't change values, or that
memory cells don't get updated, or that IO doesn't happen, or the clock
doesn't tick. Rather, it means that the programmer makes a tradeoff where
they cede control over those things to the compiler and a runtime written
by a constrained set of contributors, in exchange for guarantees those
things make about the behavior of the program.
With manual malloc/free, one smears responsibility for getting it right
across every program that does dynamic memory management. Some get it
right; many do not.
In many ways, the difference between automatic and manual memory management
is like the difference between programming in assembler and programming in
a high-level language. People have written reliable, robust assembler for
decades (look at the airline industry), but few people would choose to do
so today; why? Because it's tedious and life is too short as it is.
Further, the probability of error is greater than in a high-level language;
why tempt fate?
[snip]
> "This discussion should probably go to COFF, or perhaps I should just
> leave the list. I am starting to feel uncomfortable here. Too much
> swagger."
>
> I read through the thread. Just because people don't agree with each
> other doesn't equate to "swagger". I've seen little evidence of
> anything other than reasoned analysis and rational, respectful
> discussion. Was there any sort of personal attacks that I missed?
>
It is very difficult, in a forum like this, to divine intent. I know for a
fact that I've written things to this list that were interpreted very
differently than I meant them.
That said, there has definitely been an air that those who do not master
manual memory management are just being lazy and that "new" programmers are
unskilled. Asserting that this language or that is "ours" due to its
authors while that is "theirs" or belongs solely to some corporate sponsor
is a bit much. The reality is that languages and operating systems and
hardware evolve over time, and a lot of the practices we took for granted
10 years ago deserve reexamination in the light of new context. There's
nothing _wrong_ with that, even if it may be uncomfortable (I know it is
for me).
The fact of the matter is, code written with malloc/free, if written
> carefully, will run for *years*. There are Linux boxes that have been
> running for literally years without being rebooted, and mainframes and
> miniframes that get booted only when a piece of hardware fails.
>
That there exist C programs that have run for many years without faults is
indisputable. Empirically, people _can_ write reliable C programs, but it
is often harder than it seems to do so, particularly since the language
standard gives so much latitude for implementations to change semantics in
surprising ways over time. Just in the past couple of weeks a flaw was
revealed in some Linux daemon that allowed privilege escalation to
root...due to improper memory management. That flaw had been in production
for _12 years_. Sadly, this is not an isolated incident.
That said, does manual memory management have a place in modern computing?
Of course it does, as you rightly point out. So does assembly language.
Rust came up in the context of this thread as a GC'd language, and it may
be worth mentioning that Rust uses manual memory management; the language
just introduces some facilities that make this safer. For instance, the
concept of ownership is elevated to first-class status in Rust, and there
are rules about taking references to things; when something's owner goes
out of scope, it is "dropped", but the compiler statically enforces that
there are no outstanding references to that thing. Regardless, when dealing
with some resource it is often the programmer's responsibility to make sure
that a suitable drop implementation exists. FWIW, I used to sit down the
hall from a large subgroup of the Go developers; we usually ate lunch
together. I know that many of them shared my opinion that Rust and Go are
very complementary. No one tool is right for all tasks.
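(A minimal sketch of my own of the ownership and drop behavior described
above -- illustrative only:)

struct Guard(&'static str);

impl Drop for Guard {
    // Runs exactly once, when the owning value goes out of scope.
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let outer = Guard("outer");
    {
        let inner = Guard("inner");
        println!("borrowed: {}", &inner.0); // shared borrow ends here
    } // `inner` is dropped here, deterministically
    // Holding a reference to `inner` past this point would be rejected at
    // compile time ("borrowed value does not live long enough").
    println!("still own {}", outer.0);
} // `outer` is dropped here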
- Dan C.
Replying on COFF as firmly in COFF territory.
On 2022-02-01 16:50, Win Treese wrote:
>> On Feb 1, 2022, at 1:19 PM, Noel Chiappa <jnc(a)mercury.lcs.mit.edu> wrote:
>>
>> From: Clem Cole
>>> So by the late 70s/early 80s, [except for MIT where LISP/Scheme reigned]
>> Not quite. The picture is complicated, because outside the EECS department,
>> they all did their own thing - e.g. in the mid-70's I took a programming
>> intro course in the Civil Engineering department which used Fortran. But in
>> EECS, in the mid-70's, their intro programming course used assembler
>> (PDP-11), Algol, and LISP - very roughly, a third of the time in each. Later
>> on, I think it used CLU (hey, that was MIT-grown :-). I think Scheme was used
>> later. In both of these cases, I have no idea if it was _only_ CLU/Scheme, or
>> if they did part of it in other languages.
> I took 6.001 (with Scheme) in the spring of 1983, which was using a course
> handout version of what became Structure and Interpretation of Computer
> Programs by Sussman and Abelson. My impression was that it had been
> around for a year before that, but not much more, and it was part of
> revamping the EECS core curriculum at the time.
I recall that one of the SICP authors wrote an interesting summary of
6.001 (with Scheme) but I cannot find it.
Incidentally, SICP with Javascript will be released next year:
https://mitpress.mit.edu/books/structure-and-interpretation-computer-progra…
N.
> In at least the early 80s, CLU was used in 6.170, Software Engineering
> Laboratory, in which a big project was writing a compiler.
>
> And Fortran was still being taught for the other engineering departments.
> In 1982(ish), those departments had the Joint Computing Facility for a lot
> of their computing, of which the star then was a new VAX 11/782.
>
> - Win
>
> From: Peter Jeremy
> all (AFAIK) PDP-11's were microcoded
Not the -11/20; it pre-dated the fast, cheap ROMs needed to go the microcode
route, so it used a state machine. All the others were, though (well, I don't
know about the Mentec ones).
Noel
=> COFF
On 2022-Jan-30 10:07:15 -0800, Dan Stromberg <drsalists(a)gmail.com> wrote:
>On Sun, Jan 30, 2022 at 8:58 AM David Barto <david(a)kdbarto.org> wrote:
>
>> Yes, the UCSD P-code interpreter was ported to 4.1 BSD on the VAX and it
>> ran natively there. I used it on sdcsvax in my senior year (1980).
>
>This reminds me of a question I've had percolating in the back of my mind.
>
>Was UCSD Pascal "compiled" or "interpreted" or both?
>
>And is Java? They both have a byte code interpreter.
A bit late to the party but my 2¢:
I think it's fairly clear that both UCSD Pascal and Java are compiled
- to binary machine code for a p-code machine or JVM respectively.
That's no different to compiling (eg) C to PDP-11 or amd64 binary
machine code.
As for how the machine code is executed:
* p-code was typically interpreted but (as mentioned elsewhere) there
were a number of hardware implementations.
* Java bytecode is often executed using a mixture of interpretation
and (JIT) compilation to the host's machine code. Again there are
a number of hardware implementations.
And looking the other way, all (AFAIK) PDP-11's were microcoded,
therefore you could equally well say that PDP-11 machine code is being
interpreted by the microcode on a "real" PDP-11. And, nowadays,
PDP-11 machine code is probably more commonly interpreted using
something like simh than being run on a hardware PDP-11.
Typical amd64 implementations are murkier - with machine code being
further converted ("compiled"?) into a variable number of micro-ops
that have their own caches and are then executed on the actual CPU.
(And, going back in time, the Transmeta Crusoe explicitly did JIT
conversion from IA-32 machine code to its own proprietary machine code).
--
Peter Jeremy
On Mon, Jan 31, 2022 at 10:17 AM Paul Winalski <paul.winalski(a)gmail.com>
wrote:
> On 1/30/22, Steve Nickolas <usotsuki(a)buric.co> wrote:
> > And I think I've heard the Infocom compilers' bytecode called "Z-code" (I
> > use this term too).
> That is correct. The Infocom games ran on an interpreter for an
> abstract machine called the Z-machine. Z-code is the Z-machine's
> instruction set. There is a freeware implementation out there called
> Frotz.
>
>
There's a reasonably functional Frotz implementation for TOPS-20, as it
happens. The ZIP interpreter was easier to port to 2.11BSD on the PDP-11.
https://github.com/athornton/tops20-frotz
Adam
"Reflections on Trusting Trust" plus the fact that no one has designed new
real computers at the gate level for at least 30 years, maybe longer--it's
done in an HDL of some kind, which is to say, software--means it's already
way, way too late.
I for one welcome our new non-biological overlords.
Adam
Hi all,
Given the recent (awesome) discussions about the history of *roff and TeX, I
thought I'd ask about where Brian Reid's Scribe system fits in with all this.
His thesis is available online here:
http://reports-archive.adm.cs.cmu.edu/anon/scan/CMU-CS-81-100.pdf, and in my
opinion is very interesting (also cites papers on roff and TeX). Does anybody
know if Scribe was ever used on Unix systems? Does it exist at all today?
Thanks :)
Josh
Moving to COFF.
John Labovitz wrote:
>>> The earliest known text-formatting software, TJ-2, was created by
>>> MIT-trained computer scientist Peter Samson in 1963.
>>
>> I see claimed predecessors are JUSTIFY and TJ-1. How do you feel
>> about those?
>
> I’m sure I looked for TJ-1 when I did this research — an obvious
> question, given the ‘2’ suffix — but didn’t find anything then. I’m
> not familiar with JUSTIFY.
Note, later there was also a TJ6 for the PDP-6, written by Richard
Greenblatt.
> Do you have links/info for those?
This one mentions TJ-1 near the end:
https://www.computerhistory.org/pdp-1/_media/pdf/DEC.pdp_1.1972.102650621.p…
The TJ-2 page on Wikipedia mentions JUSTIFY and links here:
http://www.ultimate.com/phil/pdp10/
Moving to COFF, but Brian Dear's "The Friendly Orange Glow", about PLATO,
talks a lot about some of the cool stuff happening in the middle of the
country.
https://www.amazon.com/Friendly-Orange-Glow-Untold-Cyberculture/dp/11019736…
And later, of course, NCSA Mosaic.
On Wed, Jan 12, 2022 at 4:15 PM Greg 'groggy' Lehey <grog(a)lemis.com> wrote:
> On Tuesday, 11 January 2022 at 14:34:16 -0500, John Cowan wrote:
> > On Tue, Jan 11, 2022 at 1:37 PM Dan Cross <crossd(a)gmail.com> wrote:
> >> It seems like Unix is largely a child of the coasts.
> >
> > We can add the eastern coast of Australia, where the original
> > Wollongong group made the first V6 port to the Interdata 7/32 (not
> > to be confused with the Labs port to the 8/32).
>
> To be fair, in the case of Australia almost everybody is on the east
> coast, though we have had our share of FreeBSD core team members from
> the "west coast" (which is really only Perth).
>
> Greg
> --
> Sent from my desktop computer.
> Finger grog(a)lemis.com for PGP public key.
> See complete headers for address and phone numbers.
> This message is digitally signed. If your Microsoft mail program
> reports problems, please read http://lemis.com/broken-MUA.php
>
Taking this to COFF...
> On Jan 10, 2022, at 7:13 PM, Lyndon Nerenberg (VE7TFX/VE6BBM) <lyndon(a)orthanc.ca> wrote:
>
> Greg 'groggy' Lehey writes:
>
>> As long as man pages are formatted with ?roff, I don't see it going
>> away. I don't suppose many people use troff any more, but there are
>> enough of us, and as long as man pages stay the way they are, I don't
>> think we're in any danger.
>
> Well there is mandoc(1). But as time goes by they just seem to be
> re-implementing nroff. Of course that *must* be easier than just
> learning n/troff in the first place :-P
As someone who did a lot of a Ph.D. in the history of computing, and then went into IT because he liked eating protein sometimes:
The great secret is that NO ONE EVER READS THE LITERATURE.
We have now made all the mistakes at least four times:
Once for each of mainframes, minis, micros, and mobile.
You can be a rock star at any development or operations job, even if you are, like me, a Bear Of Little Brain, simply by having some idea of what was tried already to solve a problem like this, and why it didn't work.
Which you can get by actually stopping to read up about your problem before diving headfirst into coding up a solution for it.
If you happen to get stinking rich from this advice, you can buy me a bottle of whiskey sometime.
Adam
I think I've posted this question before, perhaps on TUHS, but I'll ask
again.
I have a PDP-11/03 with a Sykes Twin 8" Floppy Drive unit. It has its own
controller card, so I'm not sure if it's RX01/RX02 compatible. Problem is,
I need a bootstrap program for it. I can't find a technical manual for it,
so I'm stuck.
I see the source for LSX has a driver for the Sykes, so I may be able to
install and mount it on MX, which I'm preparing with Noel's help for my
machine, without booting from it. I'm hoping that Heinz, or someone who had
a Sykes drive in that era still has the bootstrap code.
Paul
*Paul Riley*
Moving to COFF since, while this is a UNIX issue, it's really about attitude,
experience and perspective.
On Thu, Dec 30, 2021 at 8:01 PM Rob Pike <robpike(a)gmail.com> wrote:
> Grumpy hat on.
>
> Sometimes the Unix community suffers from the twin attitudes of a)
> believing if it can't be done perfectly, any improvement shouldn't be
> attempted at all and b) it's already done as well as is possible anyway.
>
> I disagree with both of these positions, obviously, but have given up
> pushing against them.
>
> We're in the 6th decade of Unix and we still suffer from unintended,
> fixable consequences of decisions made long long ago.
>
> Grumpy hat off.
>
> -rob
>
While I often agree with you and am a huge fan of your work both written
and programming, I am going to take a different position:
I am very much into researching different solutions and love exploring them
and seeing how to apply the lessons, but *just because we can make a
change*, *does not always mean we should*. IMO: *Economics has to play
into the equation*.
I offer the IPv4 to IPv6 fiasco as an example of a change because we could
(and we thought it would help - hey I did in the early 1990s), but it
failed for economic reasons. In the end, any real change has to take into
account some level of economics.
The examples of the differences in the shell are actually a different issue
-- that was territorial and not economics -- each vendor adding stuff that
helped them (and drove ISVs/end users of multiple platforms crazy). The
reality with SunOS sh vs Ultrix sh vs HP-UX sh vs System V (att sh) was yet
another case of similar-but-different -- every manufacturer's take on a
V7-derived sh was a little different -- including AT&T's, Korn's, et al. For
that matter, you (Rob) created a new command syntax with Plan9 [although you
did not try to be and never claimed to be V7 compatible -- to your point, you
break things where you think it matters, and as a researcher I accept
that]. Because all the manufacturers were a little different, that was
exactly why IEEE said -- wait a minute -- let's define a base syntax which
will work everywhere, something we can all agree on, and if we all
support it -- great. We did that, and we call that POSIX (and because it
was designed by compromise and committee - like a camel it has some humps).
*But that does mean compromise -- some agreed set of 'sh' basics needs to
remain the base level.*
The problem Ted and Larry describe is real ... research *vs.* production.
So it raises the question: at what point does it become sensible (worth
it / economically viable) to move on?
Apple famously breaks things and it drives me bonkers because many (most, I
would suggest) of those changes are hardly worth it -- be it my iPhone or
my Mac. I just want to use the darned thing. BTW: Last week, the clowns at
Tesla rolled out a new UI for my Model S --- ugh -- because they could
(now I'm fumbling trying to deal with the climate system or the radio -- it
would not have been bad if they had rolled out the new UI on a simulator for
my iPad so I could at least get used to it -- but I'm having to learn it live
-- what a PITA -- that really makes me grumpy).
What I ask this august body to consider, before we start looking at these
changes, is what we are really getting in return when a new implementation
breaks something that worked before. *e.g.* I did not think systemd bought
end users much value; much like IPv6, in practice it was thought to solve
many problems, but did not buy that much and has caused (and continues to
cause) many more.
In biology every so often we have an "ice age" that kills a few things off and
we get to start over. That rarely happens in technology, except when a real
Christensen-style disruption takes place -- which is based on economics --
a new market values the new idea and the old market dies off. I believe
that from the batch/mainframe 1960s/early 70s world, Unix was just that --
but we got to start over because the economics of 'open systems' and the
>>IP<< being 'freely available' [which compared to VMS and other really
proprietary systems] did kill them off. I also think that the economics
of completely free (Linux) ended up killing the custom Unix diversions.
Frankly, if (at the beginning) Plan9 had been a tad easier/cheaper/more
economical for >>everyone<< in the community to obtain (unlike at the
original Unix release time, Plan9 was not under the same rules because AT&T
was under different rules and HW cost rules had changed things), it
>>might<< have been the strong strain that killed off the old. If IPv6 had
been (in practice) cheaper to use than IPv4 [which is what I personally
thought the ISPs would do with it - since it had been designed to help them]
and not made a premium feature (i.e. had they made it economical to change),
it might have killed off IPv4.
Look at 7 decades of programming language design: just being 'better' is
not good enough. As I have said here and in many other places, the reality is
that Fortran still pays the salary of people like me in the HPC area [and I
don't see Julia or, for that matter, my own company's pretty flower - Data
Parallel C++ - making inroads soon]. It's possible that Rust as a systems
programming language >>might<< prove economical enough to replace C. I
personally hope Go makes the inroads to replace C++ in user space. But for
either to do that, there has to be an economic reason - a no-brainer for
management.
What got us here was a discussion of the original implementation of
directory files, WRT links and how paths are traversed. The basic
argument comes from issues with how and when objects are named. Rob, I
agree with you that just because UNIX (or any other system) used a scheme
previously does not make it the end-all. And I do believe that rethinking
some of the choices made 5-6 decades ago is in order. But I ask that the
analysis of the new versus the old take into account how to mitigate the
damage done. If its economics prove valuable, the evolution to using it
will allow a stronger strain to take over; but just because something is new
vs. the old does not make it valuable.
Respectfully ....
Happy new year everyone and hopefully 2022 proves a positive time for all
of you.
Clem
Moving to COFF, perhaps prematurely, but...
It feels weird to be a Unix native (which I consider myself: got my first
taste of Irix and SVR3 in 1989, went to college where it was a Sun-mostly
environment, started running Linux on my own machines in 1992 and never
stopped). (For purposes of this discussion, of course Linux is Unix.)
It feels weird the same way it was weird when I was working for Express
Scripts, and then ESRX bought Medco, and all of a sudden we were the 500-lb
Gorilla. That's why I left: we (particularly my little group) had been
doing some fairly cool and innovative stuff, and after that deal closed, we
switched over entirely to playing defense, and it got really boring really
fast. My biggest win after that was showing that Pega ran perfectly fine
on Tomcat, which caused IBM to say something like "oh did we say $5 million
a year to license Websphere App Server? Uh...we meant $50K." So I saved
them a lot of money but it sucked to watch several months' work flushed
down the toilet, even though the savings to the company was many times my
salary for those months.
But the weird part is similar: Unix won. Windows *lost*. Sure, corporate
desktops still mostly run Windows, and those people who use it mostly hate
it. But people who like using computers...use Macs (or, sure, Linux, and
then there are those weirdos like me who enjoy running all sorts of
ancient-or-niche-systems, many of which are Unix). And all the people who
don't care do computing tasks on their phones, which are running either
Android--a Unix--or iOS--also a Unix. It's ubiquitous. It's the air you
breathe. It's no longer strange to be a Unix user, it means you use a
21st-century electronic device.
And, sure, it's got its warts, but it's still basically the least-worst
thing out there. And it continues to flabbergast me that a typesetting
system designed to run on single-processor 16-bit machines has, basically,
conquered the world.
Adam
P.S. It's also about time, he said with a sigh of relief, having been an
OS/2 partisan, and a BeOS partisan, back in the day. Nice to back a
winning horse for once.
On Thu, Dec 30, 2021 at 6:46 PM Bakul Shah <bakul(a)iitbombay.org> wrote:
> ?
>
> I was just explaining Ts'o's point, not agreeing with it. The first
> example I
> gave works just fine on plan9 (unlike on unix). And since it doesn't allow
> renames, the scenario Ts'o outlines can't happen there! But we were
> discussing Unix here.
>
> As for symlinks, if we have to have them, storing a path actually makes
> their
> use less surprising.
>
> We're in the 6th decade of Unix and we still suffer from unintended,
> fixable consequences of decisions made long long ago.
>
>
> No argument here. Perhaps you can suggest a path for fixing?
>
> On Dec 30, 2021, at 5:00 PM, Rob Pike <robpike(a)gmail.com> wrote:
>
> Grumpy hat on.
>
> Sometimes the Unix community suffers from the twin attitudes of a)
> believing if it can't be done perfectly, any improvement shouldn't be
> attempted at all and b) it's already done as well as is possible anyway.
>
> I disagree with both of these positions, obviously, but have given up
> pushing against them.
>
> We're in the 6th decade of Unix and we still suffer from unintended,
> fixable consequences of decisions made long long ago.
>
> Grumpy hat off.
>
> -rob
>
>
> On Fri, Dec 31, 2021 at 11:44 AM Bakul Shah <bakul(a)iitbombay.org> wrote:
>
>> On Dec 30, 2021, at 2:31 PM, Dan Cross <crossd(a)gmail.com> wrote:
>> >
>> > On Thu, Dec 30, 2021 at 11:41 AM Theodore Ts'o <tytso(a)mit.edu> wrote:
>> >>
>> >> The other problem with storing the path as a string is that if
>> >> higher-level directories get renamed, the path would become
>> >> invalidated. If you store the cwd as "/foo/bar/baz/quux", and someone
>> >> renames "/foo/bar" to "/foo/sadness" the cwd-stored-as-a-string would
>> >> become invalidated.
>> >
>> > Why? Presumably as you traversed the filesystem, you'd cache, (path
>> > component, inode) pairs and keep a ref on the inode. For any given
>> > file, including $CWD, you'd know it's pathname from the root as you
>> > accessed it, but if it got renamed, it wouldn't matter because you'd
>> > have cached a reference to the inode.
>>
>> Without the ".." entry you can't map a dir inode back to a path.
>> Note that something similar can happen even today:
>>
>> $ mkdir ~/a; cd ~/a; rm -rf ~/a; cd ..
>> cd: no such file or directory: ..
>>
>> $ mkdir -p ~/a/b; ln -s ~/a/b b; cd b; mv ~/a/b ~/a/c; cd ../b
>> ls: ../b: No such file or directory
>>
>> You can't protect the user from every such case. Storing a path
>> instead of the cwd inode simply changes the symptoms.
>>
>>
>>
>
(Moving to COFF, tuhs on bcc.)
On Tue, Dec 28, 2021 at 01:45:14PM -0800, Greg A. Woods wrote:
> > There have been patches proposed, but it turns out the sticky wicket
> > is that we're out of signal numbers on most architectures.
>
> Huh. What an interesting "excuse"! (Not that I know anything useful
> about the implementation in Linux....)
If I recall correctly, the last time someone tried to submit patches,
they overloaded some signal that was in use, and it was NACK'ed on
that basis. I personally didn't care, because on my systems, I'll use a
GUI program like xload, or if I need something more detailed, GKrellM.
(And GKrellM can be used to remotely monitor servers as well.)
> > SIGLOST - Term File lock lost (unused)
> > SIGSTKFLT - Term Stack fault on coprocessor (unused)
>
> If SIGLOST were used/needed it would seem like a very bad system design.
It's used in Solaris to report that the client NFSv4 code could not
recover a file lock on recovery. So that means one of the first
places to look would be to see if Ganesha (an open-source NFSv4
user-space client) isn't using SIGLOST (or might have plans to use
SIGLOST in the future).
For a remote / distributed file system, Brewer's Theorem applies
--- Consistency, Availability, Partition tolerance --- choose any
two, but you're not always going to be able to get all three.
Cheers,
- Ted
On my Windows 11 notebook with WSL2 + Linux I got as default
rubl@DESKTOP-NQR082T:~$ echo $PS1
\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$
rubl@DESKTOP-NQR082T:~$ uname -a
Linux DESKTOP-NQR082T 5.10.74.3-microsoft-standard-WSL2+ #4 SMP Sun Dec 19
16:25:10 +07 2021 x86_64 x86_64 x86_64 GNU/Linux
rubl@DESKTOP-NQR082T:~$
--
The more I learn the better I understand I know nothing.
On 12/22/21, Adam Thornton <athornton(a)gmail.com> wrote:
> MacOS finally pushed me to zsh. So I went all the way and installed
> oh-my-zsh. It makes me feel very dirty, and I have a two-line prompt (!!),
> but I can't deny it's convenient.
>
> tickets/DM-32983 ✗
> adam@m1-wired:~/git/jenkins-dm-jobs$
>
> (and in my terminal, the X glyph next to my git branch showing the status
> is dirty is red while the branch name is green)
>
> and if something doesn't exit with rc=0...
>
> adam@m1-wired:~/git/jenkins-dm-jobs$ fart
> zsh: command not found: fart
> tickets/DM-32983 ✗127 ⚠️
> adam@m1-wired:~/git/jenkins-dm-jobs$
>
> Then I also get the little warning glyph and the rc of the last command in
> my prompt.
>
> But then I'm also now using Fira Code with ligatures in my terminal, so
> I've pretty much gone full Red Lightsaber.
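(For anyone curious, a rough sketch -- not Adam's actual oh-my-zsh setup --
of getting the exit-status part of such a prompt with plain zsh prompt
escapes:)

# %~ is the current directory, %(?..X) expands X only when the last exit
# status is non-zero, %? is that status, and %# is the prompt character.
PROMPT='%~ %(?..[exit %?] )%# '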
I try to keep my prompt as simple as possible. For years I have been using:
moon $
That 's it. No fancy colors, not even displaying current working
directory. I have an alias 'p' for that.
--Andy
On 2021-12-23 11:00, Larry McVoy wrote:
> On Thu, Dec 23, 2021 at 03:29:18PM +0000, Dr Iain Maoileoin wrote:
>>> Probably boomer doing math wrong.
>> I might get flamed for this comment, but is a number divided by a number not
>> arithmetic? I can't see any maths in there.
> That's just a language thing, lots of people in the US call arithmetic
> math. I'm 100% positive that that is not just me.
Classes in elementary grades are called "math classes" (but then there
is Serre's book).
N.
-tuhs +coff
On Thu, Dec 23, 2021 at 11:47 AM Dr Iain Maoileoin <
iain(a)csp-partnership.co.uk> wrote:
I totally agree. My question is about language use (or drift) - nothing
> else. In Scotland - amongst the young - "Arithmetic" is now referred
> to as "Maths". I am aware of the transition but cant understand what
> caused it to happen! I dont know if other countries had/have the same
> slide from a specific to a general - hence the questions - nothing deeper.
>
Language change is inexplicable in general. About all we know is that some
directions of change are more likely than others: we no more know *why*
language changes than we know *why* the laws of physics are what they are.
Both widening (_dog_ once meant 'mastiff') and narrowing (_deer_ once meant
'animal') are among the commonest forms of semantic change.
In particular, in the 19C _arithmetic_ meant 'number theory', and so the
part concerned with the computation of "ambition, distraction,
uglification, and derision" (Lewis Carroll) was _elementary arithmetic_.
(Before that it was _algorism_.) When _higher arithmetic_ got its own
name, the _elementary_ part was dropped in accordance with Grice's Maxim of
Quantity ("be as informative as you can, giving as much information as
necessary, but no more"). This did not happen to _algebra_, which still
can mean either elementary or abstract algebra, still less to _geometry_.
In addition, from the teacher's viewpoint school mathematics is a
continuum, including the elementary parts of arithmetic, algebra, geometry,
trigonometry, and in recent times probability theory and statistics, for
which there is no name other than _mathematics_ when taken collectively.
> In lower secondary school we would go to both Arithmetic AND also to
> Maths classes.
>
What was taught in the latter?