(moving to COFF)
On Tue, Jan 3, 2023 at 9:55 AM Marshall Conover <marzhall.o(a)gmail.com>
wrote:
> Along these lines but not quite, Jupyter Notebooks have stood out to me as
> another approach to this concept, with behavior I'd like to see implemented
> in a shell.
>
>
Well, you know, if you're OK with bash:
https://github.com/takluyver/bash_kernel
Or zsh: https://pypi.org/project/zsh-jupyter-kernel/
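(If memory serves, the bash kernel installs with the usual Python
two-step, though check its README rather than trusting my recollection:

  pip install bash_kernel
  python -m bash_kernel.install

and "Bash" then shows up in the list of available kernels.)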
One of the big things I do is work on the Notebook Aspect of the Rubin
Science Platform. Each JupyterLab notebook session is tied to a "kernel"
which is a specific language-and-extension environment. At the Rubin
Observatory, we support a Python 3.10 environment with our processing
pipeline included. JupyterLab itself is capable of handling other
languages (notably Julia and R, which are the other two from which the name
"Jupyter" came), and many more have been adapted to the notebook
environment (including at least the two shells above). And while
researchers are
welcome to write their own code and work with raw images, we work under the
presumption that almost everyone is going to use the software tools the
Rubin Observatory provides to work with the data it collects, because
writing your own data processing pipeline from scratch is a pretty
monumental project.
Most astronomers are perfectly happy with what we provide, which is Python
plus our processing pipelines, which are all Python from the
scientist-facing perspective (much of the pipeline code is implemented in
C++ for speed, but it then gets Python bindings, so unless you're actually
working very deep down in the image processing code, it's Python as far as
you're concerned). However, a certain class of astronomers still loves
their FORTRAN. This class, unfortunately, tends to be the older ones,
which means the ones who have societal stature, tenure, can relatively
easily get big grants, and wield a lot of power within their institutions.
I know that it is *possible* to run gfortran as a Jupyter kernel. I've
seen it done. I was impressed.
Fortunately, no one with the power to make it stick has demanded we provide
a kernel like that. The initial provision of the service wouldn't be the
problem; it's that the support would be a nightmare. No one on my team is
great at FORTRAN; I'm probably the closest to fluent, and I'm not very, and
I really don't enjoy working in it. Because FORTRAN doesn't lend itself
easily to the kind of REPL exploration that notebooks are all about, and
because someone who loves FORTRAN and hates Python probably has a very
different concept of what good software engineering practices look like
than I do, trying to support someone working in a notebook environment in
a FORTRAN kernel would very likely be exquisitely painful. And
fortunately, since there are no FORTRAN bindings to the C++
classes providing the core algorithms, much less FORTRAN bindings to the
Python implementations (because all the things that don't particularly need
to be fast are just written in Python in the first place), a gfortran
kernel wouldn't be nearly as useful as our Python-plus-our-tools.
Now, sure, if we had paying customers who were willing to throw enough
money at us to make it worth the pain of both bringing a FORTRAN
implementation to feature parity with the reference environment and making
a gfortran kernel available, then we would do it. But I get paid a salary
that is not directly tied to my support burden, and I get to spend a lot
more of my time working on fun things and providing features for
astronomers who are not mired in 1978 if I can avoid spending my time
supporting huge time sinks that aren't in widespread use. We are scoped to
provide Python. We are not scoped to provide FORTRAN. We are not making
money off of sales: we're employed to provide stable infrastructure
services so the scientists using our platform and observatory can get their
research done. And thus far we've been successful in saying "hey, we've
got finite resources, we're not swimming in spare cycles, no, we can't
support [ x for x in
things-someone-wants-but-are-not-in-the-documented-scope ]". (To be fair,
this has more often been things like additional visualization libraries
than whole new languages, but the principle is the same.) We have a
process for proposing items for inclusion, and it's not at all rare that we
add them, but it's generally a considered decision about how generally
useful the proposed item will be for the project as a whole and how much
time it's likely to consume to support.
So this gave me a little satori about why I think POSIX.2 is a perfectly
reasonable spec to support and why I'm not wild about making all my shell
scripts instead compatible with the subset of v7 sh that works (almost)
everywhere. It's not all that much more work up front, but odds are that a
customer that wants to run new software, but who can't guarantee a POSIX
/bin/sh, will be a much more costly customer to support than one who can,
just as someone who wants a notebook environment but insists on FORTRAN in
it is very likely going to be much harder to support than someone who's
happy with the Python environment we already supply.
Adam
[Apologies for resending; I messed up and used the old Minnie address
for COFF in the Cc]
On Mon, Jan 2, 2023 at 1:36 PM Dan Cross <crossd(a)gmail.com> wrote:
>
> On Mon, Jan 2, 2023 at 1:13 PM Clem Cole <clemc(a)ccc.com> wrote:
> > Maybe this should go to COFF but Adam I fear you are falling into a trap that is easy to fall into - old == unused
> >
> > One of my favorite stories in the computer restoration community is from 5-10 years ago, when the LCM+L in Seattle was restoring the CDC-6000 that they got from Purdue. Core memory is difficult to get, so they made a card and physical module that could plug into their system, both electrically and mechanically equivalent, using modern semiconductors. A few weeks later they announced that they had the system running and had built this module. They were approached by the USAF asking if they could get a copy of the design. It seems there was still at least one CDC-6600 running a particular critical application somewhere.
> >
> > This is similar to the PDP-11s and Vaxen that are supposed to be still in the hydroelectric grid [a few years ago there was an ad for an RSX and VMS programmer to go to Canada running in the Boston newspapers - I know someone who did a small amount of consulting on that one].
>
> One of my favorite stories along these lines is the train signalling
> system in Melbourne, running on a "PDP-11". I quote PDP-11 because
> that is now virtualized:
> https://www.equicon.de/images/Virtualisierung/LegacyTrainControlSystemStabi…
>
> Indeed older systems show up in surprising places. I was once on the
> bridge of a US Naval vessel in the late '00s and saw a SPARCstation 20
> running Solaris (with the CDE login screen). I don't recall what it
> was doing, but it was a tad surprising.
>
> I do worry about legacy systems in critical situations, but then, I've
> been in a firefight when the damned tactical computer with the satcomm
> link rebooted and we didn't have VHF comms with the battlespace owner.
> That was not particularly fun.
>
> - Dan C.
So, all the shell-portability talk on TUHS reminds me of something I believe I saw back in the 90s, and then failed to find a few years ago when I went looking.
But my Google-fu is not great, so maybe I just didn't look in the right place.
I was trying to port Frotz to TOPS-20, because I wanted to run the Infocom games on TOPS-20 on an emulated PDP-10. (The only further link in the causal chain was a warning in the Frotz sources that it assumed 8-bit bytes and if you wanted to try to port it to a 36-bit environment, good luck; this is the difference between "stuff I do for fun" and "stuff that needs a business justification".) I had a K&R C compiler available, but the sources were all ANSI C.
I had remembered that deprotoize had been part of an early GCC, and I did manage to find deprotoize sources, buried, I think, in some dusty piece of the Apple toolchain. But I also have a vague memory that GCC at some point probably in the mid-to-late 1990s came with something that was halfway between autoconf and Perl's bootstrapper. I *think* it was a bunch of shell scripts that could put together a minimal C subset compiler, which then could be used to build the rest of GCC. I'm pretty sure it was released as a reaction to the unbundling of C compilers when Unix vendors realized that was a thing they could do.
I could not find that thing at all.
Did I hallucinate it? It seems like it would have been an immensely useful tool at the time.
I ended up writing my own very half-assed deprotoizer and symbol mangler (only the first six characters of the function name were significant, because that's how the TOPS-20 linker works, and I don't know that I could have gotten past that even with an ANSI compiler without having to do significant toolchain work) which got me over the hump, but I have remained curious whether there really was that nifty GCC bootstrapper or whether I made that up.
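For the curious, the six-character uniquifying is the sort of thing a dozen lines of Python will do; a hypothetical sketch (not the tool I actually wrote, which is long gone):

def shorten(names):
    # Map identifiers to names that are unique in their first six
    # characters, as a six-significant-character linker requires.
    seen, mapping = set(), {}
    for name in names:
        short = name[:6]
        n = 0
        while short in seen:
            # Overwrite the tail with a counter until unique.
            suffix = str(n)
            short = name[:6 - len(suffix)] + suffix
            n += 1
        seen.add(short)
        mapping[name] = short
    return mapping

print(shorten(['restart_screen', 'restart_header', 'restore']))
# {'restart_screen': 'restar', 'restart_header': 'resta0', 'restore': 'restor'}

The real job also means rewriting every use site in the source, of course, but the renaming itself is the easy part.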
Adam
Hi Branden,
> Paul Ruizendaal wrote:
> > That was my immediate pain point in doing the D1 SoC port.
> > Unfortunately, the manufacturer only released the DRAM init code as
> > compiler ‘-S’ output and the 1,400 page datasheet does not discuss
> > its registers. Maybe this is atypical, as I heard in the above
> > keynote that NXP provides 8,000 page datasheets with their SoCs.
...
> I don't think it's atypical. I was pretty annoyed trying to use the
> data sheet to program a simple timer chip on the ODROID-C2
...
> OS nerds don't generally handle procurement themselves. Instead,
> purchasing managers do, and those people don't have to face the pain.
...
> Data sheets are only as good as they need to be to move the product,
> which means they don't need to be good at all, since the people who
> pay for them look only at the advertised feature list and the price.
I think it comes down to the background of the chip designer. I've
always found NXP very good: their documentation of a chip is extensive;
it doesn't rely on referring to external source code; and they're
responsive when I've found the occasional error, both confirming the
correction and committing to its future publication.
On the other hand, TI left a bad taste. The documentation isn't good,
and they rely on a forum to mop up all the problems, but it's pot luck
which staffer answers; perennial problems can easily be found by a
forum search, never with a satisfactory answer.
My guess is Allwinner, maker of Paul's D1 SoC, has a language barrier
and a very fast-moving market to dissuade them from putting too much
effort into documentation. Many simpler chips from China, e.g. a JPEG
encoder, come with a couple of pages listing features and some C written
by a chip designer or copied from a rival.
In my experience, chip selection is done by technical people, not
procurement. It's too complex a task, even just choosing from those of
one supplier like NXP, as there is often a compromise to make which
affects the rest of the board design. That's where FPGAs have an
allure, but unfortunately not in low-power designs.
--
Cheers, Ralph.
> On Dec 31, 2022, at 6:40 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
>
> All true except for the Forth choice. It's as bad as, maybe worse
> than, choosing Tcl for your language. I've written a ton of Tcl but I
> need the Tk GUI part so I put up with Tcl to get it. I'd never
> push Tcl as a language that other people had to use. Same thing
> with Forth.
>
> I dunno what I'd pick, Perl in the old days, Python now (not that
> I care for Python but everyone can program it). Just pick something
> that is trivial for someone to pick up.
(Moved to COFF)
I rather like FORTH. Its chief virtues are that it is both tiny and extensible. It was developed as a telescope control language, as I recall, and in highly constrained environments gives you a great deal of expressivity for a teeny tiny bit of interpreter code. I adored my HP 28S and still do: that was Peak Calculator, and its UI is basically a FORTH interpreter (which also, of course, functions just fine as an RPN calculator if you don't want to bother with flow control constructs).
But I also make the slightly more controversial claim that FORTH is just LISP stood up on end.
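To make "tiny and extensible" concrete, here's a toy RPN evaluator in Python (my sketch, not anything from a real FORTH or the 28S): a stack plus a dictionary of words is most of what a FORTH inner interpreter is.

def rpn(source):
    # Toy FORTH-flavoured evaluator: numbers push onto the stack,
    # words pop operands and push (or print) results.
    stack = []
    words = {
        '+':   lambda s: s.append(s.pop() + s.pop()),
        '*':   lambda s: s.append(s.pop() * s.pop()),
        'dup': lambda s: s.append(s[-1]),
        '.':   lambda s: print(s.pop()),
    }
    for tok in source.split():
        if tok in words:
            words[tok](stack)
        else:
            stack.append(int(tok))
    return stack

rpn('2 3 + dup * .')   # prints 25: (2 + 3) squared

Extending the language is just adding entries to the dictionary, and the program is a flat list of words where LISP would have a nested list of forms, which is all I mean by "stood up on end".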
These days I think the right choice for those sorts of applications would be Micropython. Yes, a full-on Python interpreter is heavyweight, but Micropython gives you a lot of functionality in (comparatively) little space. It runs fine on a $4 Pi Pico, for instance, which has IIRC 256KB RAM.
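For instance, here's the canonical first MicroPython program for a Pico, complete as written (assuming the rp2 port, whose machine module provides Pin; the "LED" alias for the on-board LED is firmware-specific):

from machine import Pin
import time

led = Pin("LED", Pin.OUT)   # on-board LED on the Pico
while True:                 # blink forever
    led.toggle()
    time.sleep(0.5)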
And if you find yourself missing TCL, there's always Powershell, which is like what would happen if bash and TCL had a really ugly baby that just wouldn't shut up. The amazing thing is that access to all the system DLLs makes it *almost* worth putting up with Powershell.
Adam
Hi Larry,
> I hate Python because there is no printf in the base language.
There's print(), and the format-string part has been promoted into the
language as the ‘%’ operator, applied when the left operand is a string.
The ‘%’ expression returns a string.
>>> '%10d' % (6*7)
'        42'
>>> '%s %s!\n' % ('hello', 'world')
'hello world!\n'
>>> '%10d' % 6*7
'         6         6         6         6         6         6         6'
>>> print('foo')
foo
>>> print('%.3s' % 'barxyzzy')
bar
>>>
So similar to AWK or Perl's sprintf() and Go's fmt.Sprintf() in that it
returns a dynamically-allocated string.
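And if one really misses printf itself, a couple of lines gets most of the way there; a minimal sketch (my addition, not anything in the standard library):

import sys

def printf(fmt, *args):
    # Format with '%' and write without an implicit newline,
    # like C's printf(3).
    sys.stdout.write(fmt % args)

printf('%s %s!\n', 'hello', 'world')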
--
Cheers, Ralph.
Sharing because I can't justify this sort of purchase, but perhaps someone here is in with, or knows someone who works at, a museum or well-funded research outfit that might be keen on this sort of thing:
https://www.biblio.com/book/extraordinary-archive-original-house-publicatio…
Many books from OSU, lots of Digital literature for various PDPs, NBS reports, some Tektronix stuff, and a host of OSU-produced computing literature, plus more. This came up while I was searching for UNIX docs but I don't actually see anything UNIX there besides a mention that the PDP-11 ran it.
There is no explicit mention that cherry-picking is off the table, so if contacted, the seller may be amenable to working on a subset. Either way, I figured this would pique the interest of a few folks around here.
I also have to wonder if this bears any relation to my acquisition of the UNIX System V literature set this past year. That was likewise sourced from OSU, from a retiring professor who mentioned oodles of old tech materials they were going to be auctioning off over the next few years; this very well could be from that same stash.
- Matt G.
Hi,
Branden wrote:
> the amount of space in the instruction for encoding registers seems to
> me to have played a major role in the design of the RV32I/E and C
> (compressed) extension instruction formats of RISC-V.
Before RISC-V, the need for code density caused ARM to move away from
the original, highly regular instruction format of one 32-bit word per
instruction to the Thumb and then Thumb-2 encodings. IIRC, Thumb moved
to 16 bits per instruction, which Thumb-2 expanded to also allow some
32-bit instructions. The mobile market was getting going, and the
storage options and their financial and power costs meant code density
mattered more.
The original ARM instructions had the top four bits hold the ‘condition
code’, which decided whether the instruction was executed, so the top hex
nibble was readable.
0                                              f
eq ne cs cc mi pl vs vc hi ls ge lt gt le al nv
Data-processing instructions, like
and rd, rn, rm ; d = n & m
aligned each four-bit field identifying one of the sixteen registers on a
nibble boundary, so again it was readable as hex.
xxxx000a aaaSnnnn ddddcccc ctttmmmm
The ‘a aaa’ above wasn't aligned, but still neatly picked which of the
sixteen data-processing instructions was used.
and eor sub rsb add adc sbc rsc tst teq cmp cmn orr mov bic mvn
And so it went on. A SoftWare Interrupt had an aligned 1111 to select
it and the low twenty-four bits as the interrupt number.
xxxx1111 yyyyyyyy yyyyyyyy yyyyyyyy
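To see how mechanical the decoding is, here's a toy decoder in Python for the register-to-register data-processing form above (my illustration only; it ignores the S bit, shifts, and immediate operands):

CONDS = 'eq ne cs cc mi pl vs vc hi ls ge lt gt le al nv'.split()
OPS = 'and eor sub rsb add adc sbc rsc tst teq cmp cmn orr mov bic mvn'.split()

def decode_dp(word):
    # Field layout: xxxx 000a aaaS nnnn dddd cccc cttt mmmm
    cond = (word >> 28) & 0xf   # condition code, the readable top nibble
    op   = (word >> 21) & 0xf   # which of the sixteen DP instructions
    rn   = (word >> 16) & 0xf   # first operand register
    rd   = (word >> 12) & 0xf   # destination register
    rm   =  word        & 0xf   # second operand register
    suffix = '' if CONDS[cond] == 'al' else CONDS[cond]
    return '%s%s r%d, r%d, r%d' % (OPS[op], suffix, rd, rn, rm)

print(decode_dp(0xE0021003))   # -> and r1, r2, r3 ('always' condition)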
I assume this neat arrangement helped keep the decoding circuitry small
leading to a simpler design and lower power consumption. The latter was
important because Acorn, the ARM chip's designer, wanted a cheaper
plastic case rather than ceramic so they set a design limit of 1 W. Due
to the poor tooling available, it came in at 0.1 W after allowing for a
margin of error. This was so low that Acorn were surprised when an
early board ran without power connected to the ARM; they found it was
clocking just from the leakage of the surrounding support chips.
Also, Acorn's Roger Wilson, who designed the ARM's instruction set, was
an expert assembly programmer, e.g. he wrote the 16 KiB BASIC ROM for the
6502, so he approached it from the programmer's viewpoint as well as that
of the chip designer he became.
Thumb and Thumb-2 naturally had to destroy all this, so instructions are
no longer orthogonal. Having coded swtch() in assembler for various ARM
Cortex M-..., it's a pain to have to keep checking which instructions are
available on a given model and which registers they can access. On ARM 2,
there were few rules to remember and writing assembler was fun.
--
Cheers, Ralph.
I’m wondering if anyone here is able to assist …
tvtwm is Tom LaStrange’s modified version of the twm window manager: Tom’s Virtual TWM. Somewhat unusually, tvtwm modelled the screen as a single large window and the display was a viewport that could be shifted around it, rather than the now-standard model of distinct virtual desktops.
I’m trying to rebuild the history of its releases. I have the initial release, and the first 6 patches Tom published via comp.sources.x, the 7th patch from the X11R5 contrib tape, the 10th patch from the X11R6 contrib tape, the widely available patch 11, and an incomplete patch 12 via personal email from Chris Ross, who took over maintenance from Tom at some point.
So, I’m looking for patch 8 and/or patch 9 (either one would do, since I can reconstruct the other from what I have plus one).
I’ve failed to find either of them. I’m not sure how they were distributed, and my searches have proven fruitless so far.
Does anyone here happen to have a trove of X11 window manager source code tucked away?
Thanks in advance,
d
Forwarded separately as original bounced due to use of old 'minnie'
address. Sorry :-(
---------- Forwarded message ---------
From: Rudi Blom <rudi.j.blom(a)gmail.com>
Date: Thu, 15 Dec 2022 at 10:18
Subject: [COFF] Re: DevOps/SRE [was Re: [TUHS] Re: LOC [was Re: Re: Re.:
Princeton's "Unix: An Oral History": who was in the team in "The Attic"?]
To: <mparson(a)bl.org>
Cc: coff <coff(a)minnie.tuhs.org>
<snip>
Our basic tooling is github enterprise for source and saltstack is our
config management/automation framework.
Their work-flow is supposed to basically be:
<snip>
Makes me wonder how current twitter deadlines are affecting 'quality', of
the code, that is. Twits and tweets are a different matter :-)
Cheers,
uncle rubl
--
The more I learn the better I understand I know nothing.