On Feb 3, 2023, at 8:26 AM, Will Senn <will.senn(a)gmail.com> wrote:
>
> I can't seem to get away from having to highlight and mark up the stuff I read. I love pdf's searchability of words, but not for quickly locating a section, or just browsing and studying them. I can flip pages much faster with paper than an ebook it seems :).
You can annotate, highlight, and mark up PDFs; there are apps for that,
though I’m not very familiar with them as I don’t mark up even paper
copies. On an iPad you can easily annotate PDFs with an Apple Pencil.
> From: Dennis Boone <drb(a)msu.edu>
>
> * Don't use JPEG 2000 and similar compression algorithms that try to
> re-use blocks of pixels from elsewhere in the document -- too many
> errors, and they're errors of the sort that can be critical. Even if
> the replacements use the correct code point, they're distracting as
> hell in a different font, size, etc.
I wondered why certain images were the way they were; this
probably explains a lot.
> * OCR-under is good. I use `ocrmypdf`, which uses the Tesseract engine.
Thanks for the tips.
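For my own notes, the basic invocation seems to be just this (untested on
my end, and the flags vary a bit between versions):

   ocrmypdf --deskew --rotate-pages -l eng input.pdf output.pdf

which puts the recognized text under the page images and can square up
crooked pages at the same time.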
> * Bookmarks for pages / table of contents entries / etc are mandatory.
> Very few things make a scanned-doc PDF less useful than not being able
> to skip directly to a document indicated page.
I wish. This is a tough one. I generally end up ditching the
bookmarks to make a better pdf. I need to look into extracting bookmarks
and whether they can be re-added without everything getting wonky.
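One thing I mean to try (an untested sketch, and the page numbers in the
dump only stay valid if the page count doesn't change) is having pdftk
dump the bookmarks as plain text and write them back into the rebuilt file:

   pdftk original.pdf dump_data output meta.txt
   pdftk rebuilt.pdf update_info meta.txt output rebuilt-with-bookmarks.pdf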
> * I like to see at least 300 dpi.
Yes, me too, but I've found that this often results in files that are too
big when fixing existing scans; if I'm creating them myself, they're fine.
> * Don't scan in color mode if the source material isn't color. Grey
> scale or even "line art" works fine in most cases. Using one bit per
> pixel means you can use G4 compression for colorless pages.
Amen :).
>
> * Do reduce the color depth of pages that do contain color if you can.
> The resulting PDF can contain a mix of image types. I've worked with
> documents that did use color where four or eight colors were enough,
> and the whole document could be mapped to them. With care, you _can_
> force the scans down to two or three bits per pixel.
> * Do insert sensible metadata.
>
> * Do try to square up the inevitably crooked scans, clean up major
> floobydust and whatever crud around the edges isn't part of the paper,
> etc. Besides making the result more readable, it'll help the OCR. I
> never have any luck with automated page orientation tooling for some
> reason, so end up just doing this with Gimp.
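When I don't want to fire up Gimp for every page, ImageMagick can do a
rough first pass (just a sketch; the deskew threshold and the fuzz for
trimming edge crud usually need tuning per document):

   convert page.png -deskew 40% -fuzz 20% -trim +repage page-clean.png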
Great points. Thanks.
-will
That was the title of a sidebar in Australia's "Silicon Chip" electronics
magazine this month, and referred to the alleged practice of running
scientific and engineering programs many times to ensure consistent
output, as hardware error checks weren't the best in those days (bit-flips
due to electrical noise etc).
Anyway, the mag is seeking corroboration on this (credit given where due,
of course); I find it a bit hard to believe that machines capable of
running complex programs did not have e.g. parity checking...
Thanks.
-- Dave
Will Senn wrote in
<3808a35c-2ee0-2081-4128-c8196b4732c0(a)gmail.com>:
|Well, I just read this as Rust is dead... here's hoping, but seriously,
|if we're gonna go off and have a language vs language discussion, I
|personally don't think we've had any real language innovation since
|Algol 60, well maybe lisp... sheesh, surely we're in COFF territory.
It has evangelists on all fronts. ...Yes, it was only that while
I was writing the message I reread about Vala of the GNOME
project, which seems to be a modern language with many beneficial
properties still, growing out of a Swiss university (is that a bad
sign, to come from Switzerland and more out of research?), and it
had the support of Ubuntu and many other parts of the GNOME project.
Still it is said to be dead. I scrubbed that part of my message.
But maybe thus a "dead" relation between the lines remained.
Smalltalk is also such a thing, though not from Switzerland.
An Ach! on the mystery of human behaviour. Or, as the wonderful
Marcel Reich-Ranicki said, "Wir sehen es betroffen, den Vorhang
zu, und alle Fragen offen" ("Dismayed, we see the curtain closed,
and all the questions open").
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
[TUHS to Bcc]
On Wed, Feb 1, 2023 at 2:11 PM Rich Salz <rich.salz(a)gmail.com> wrote:
> On Wed, Feb 1, 2023 at 1:33 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
>> In the annals of UNIX gaming, have there ever been notable games that have operated as multiple processes, perhaps using formal IPC or even just pipes or shared files for communication between separate processes (games with networking notwithstanding)?
>
> https://www.unix.com/man-page/bsd/6/hunt/
> source at http://ftp.funet.fi/pub/unix/4.3bsd/reno/games/hunt/hunt/
Hunt was the one that I thought of immediately. We used to play that
on Suns and VAXen and it could be lively.
There were a number of such games, as Clem mentioned; others I
remember were xtrek, hearts, and various Chess and Go servers.
- Dan C.
Switching to COFF
On Wed, Feb 1, 2023 at 1:33 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> In the annals of UNIX gaming, have there ever been notable games that have
> operated as multiple processes, perhaps using formal IPC or even just pipes
> or shared files for communication between separate processes (games with
> networking notwithstanding)?
>
Yes - there were a number of them, both for UNIX and otherwise. Some
spanned the Arpanet back in the day on the PDP-10's. There was an early
first-person shooter game that I remember that ran on the PDP-10s on
ADM3As and VT52s and worked that way. You flew into space and fought each
other.
CMU's (Steve Rubin's) Trip was a stand-alone program - sort of the
grand-daddy of the Star Trek games. It ran on a GDP2 (Triple-Drip Graphics
Wonder) and had a dedicated 11/20. It used multiple processes to do
everything. You were at the Captain's chair of the Enterprise looking out
into space. You had various missions and at some point would need to
reprovision - which meant you had to dock at the 2001 space station,
including timing your rotation to line up with the docking bay like in
the movie. When you beat an alien ship you got a bottle of Coke - all
of which collected in a row at the bottom of the screen.
I did manage to save the (BLISS-11) sources to it a few years ago. One
of my dreams is to try to write a GDP simulator for SIMH and see if we can
bring it back to life. A big issue, as Rob knows, is that the GDPs had an
amazing keyboard, so duplicating it will take some thinking with modern HW;
but HW has caught up such that I think it might be possible to emulate it.
SIMH works really well with a number of the other graphics systems, and
with a modern system like my current Mac and its graphics HW, there might
be a chance.
One of my other favorites was one that ran on the Xerox Altos, whose name I
don't remember, where you wandered around the Xerox 3M Ethernet. People
would enter your system and appear on your screen. IIRC Byte Magazine did
an article that talked about it at one point -- this was all pre-Apple Macs
-- but I remember they had pictures of people playing it that I think they
took at Stanford. IIRC shortly after the X terminals appeared somebody
tried to duplicate it, or maybe that was with the Blits, but it was not
quite as good as what those of us with access to real Xerox Altos had.
COFF'd
> I think general software engineering knowledge and experience cannot be
> 'obsoleted' or made less relevant by better languages. If they help,
> great, but you have to do the other part too. As languages advance and
> get better at catching (certain kinds of) mistakes, I worry that
> engineers are not putting enough time into observation and understanding
> of how their programs actually work (or do not).
I think you nailed it there in mentioning engineers: one of the growing norms these days is making software development more accessible to a diverse set of backgrounds. No longer does a programming language have to just bridge the gap between, say, an expert mathematician and a computing device.
Now there are languages to allow UX designers to declaratively define interfaces, for data scientists to functionally define algorithms, and WYSIWYG editors for all sorts of things that were traditionally handled by hammering out code. The concern of describing a program through a standard language and the concern that language then describing the operations of a specific device have been growing more and more decoupled as time goes on, and that then puts a lot of the responsibility for "correctness" on those creating all these various languages.
Whatever concern an engineer originally had to some matter of memory safety, efficiency, concurrency, etc. is now being decided by some team working on the given language of the week, sometimes to great results, other times to disastrous ones. On the flip side, the person consuming the language or components then doesn't need to think about these things, which could go either way. If they're always going to work in this paradigm where they're offloading the concern of memory safety to their language architect of choice, then perhaps they're not shorting themselves any. However, they're then technically not seeing the big picture of what they're working on, which contributes to the diverse quality of software we have today.
Long story short, most people don't know how their programs work because they aren't really "their" programs so much as their assembly of a number of off-the-shelf or slightly tweaked components following the norms of whatever school of thought they may originate in (marketing, finance, graphic design, etc.). Sadly, this decoupling likely isn't going away, and we're only bound to see the percentage of "bad" software increase over time. That's the sort of change that over time leads to people then changing their opinions of what "bad software" is. Look at how many people gleefully accept the landscape of smart-device "apps"....
- Matt G.
Hi Dave,
COFF'd.
> > I'll never do if (a==b&&c==d), always if ((a==b)&&(c==d)).
>
> Indeed; I *always* use parentheses even if they are not necessary (for
> my benefit, at least).
I find unnecessary parentheses annoying clutter which obscures
reading. If parentheses are used only when overriding the default
precedence then this beneficially draws attention to the exception.
I doubt mandatory parentheses are used in maths formulas by those that
use them in expressions.
Whitespace is beneficial in both maths formulas and expressions. The
squashed expression above will often be spaced out more.
if (a==b && c==d)
if (a == b && c == d)
Go's source formatter will vary which operators get spaces to reflect
precedence, e.g. https://go.dev/play/p/TU95Oz57GuF shows ‘4*3’ differs.
fmt.Println(4 * 3)
fmt.Println(5 ^ 4*3)
fmt.Println(5 ^ 4*3 + 2/1.9)
--
Cheers, Ralph.
[moved to COFF]
On Sat, Jan 28, 2023 at 4:16 AM Andy Kosela <akosela(a)andykosela.com> wrote:
> Great initiative and idea! While I am personally not interested in reading
> USENET that much nowadays, the concept of providing free, public access to
> classic Internet services (public USENET, FTP, IRC, finger, etc.) gets all
> my praise. What happened to free, public services these days?
First off, what is stopping you from providing free, public access to those
services?
I don't know where you are, but I have orders of magnitude more access to
freely available content and services than I ever did in the heyday of
Usenet, etc. And for most of it, one doesn't have to be highly technical
to use it.
> Everything appears to be subscriber pay-as-you-go based. The
> commercialization killed the free spirit of Internet we all loved in the
> 90s.
>
"Free" was never really true, as it required massive subsidies of
equipment, power, bandwidth and employee time, usually w/o the direct
knowledge or consent of the entities paying for it.
It reminds me of the lemonade stands I'd occasionally run as a kid, which
were "profitable" to me because mom and dad, with their knowledge and
consent, let me pretend that the costs were $0.
--
Nevin ":-)" Liber <mailto:nevin@eviloverlord.com> +1-847-691-1404
This seems COFF, not TUHS, and mostly not digital...
I have many 4mm DAT cartridges from 20-30 years ago. Every now and then
I will access one. So far I've yet to see evidence of the media degrading.
On 1/28/2023 4:12 AM, Steve Simon wrote:
> baking old, badly stored magnetic tapes prior to reading them is a common practice.
For the last year+ I have been digitizing selected audio tapes made in
the 70s at AWHQ
(https://en.wikipedia.org/wiki/Armadillo_World_Headquarters). The ones
I've been working with are 1/4" on 10.5" reels. A printed inventory I
was given says "bake" next to almost all of the items, but so far, after
processing roughly 40 reels, I've yet to find one that seemed to need
"baking" (actually, "baking" is a bit overstated, in that best practice
is to raise temperature to roughly 150F --
https://www.radioworld.com/industry/baking-magnetic-recording-tape)
For now, I'm not able to share those AWHQ recordings, but I can share
other recordings I made in the 60s and 70s at
https://technologists.com/60sN70s/. In all those reels, many of which
are cheap, unbranded tape, I didn't find any that seemed to me to need
baking.
Charlie
--
voice: +1.512.784.7526 e-mail: sauer(a)technologists.com
fax: +1.512.346.5240 Web: https://technologists.com/sauer/
Facebook/Google/LinkedIn/Twitter: CharlesHSauer
On 28 Jan 2023 11:29 -0800, from frew(a)ucsb.edu (James Frew):
> As I was leaving the lab late one evening during this mini-crisis, I had to
> walk around a custodian who was busy giving the linoleum floor in the
> hallway its annual deep cleaning / polishing. This involved a dingus with a
> large (~18" diameter) horizontal buffing wheel, atop which sat an enormous
> (like, a cylinder about as big around as a soccer ball) electric motor,
> sparking commutator clearly visible through the vents in the metal housing.
This is probably more COFF than TUHS, but I recall a story from almost
certainly much later where someone (I think it was a secretary; for
now, let's pretend it was) had been told to change backup tapes daily
and set the freshly taken backup aside for safekeeping. Then one day
the storage failed and the backups were needed, only it turned out
when trying to restore the backups that _every_ _single_ _tape_ was
blank. Nobody, least of all the secretary, could explain how that
could have happened, and eventually, the secretary was asked to
demonstrate exactly what had been done every day. Turned out that
while getting the replacement tape, the secretary put the freshly
taken backup tape on a UPS, which apparently generated a strong
magnetic field, before setting that tape aside. So the freshly taken
backup tape was dutifully well and thoroughly erased. Nobody had
mentioned the little detail of not putting the tape near the UPS.
Oops.
--
Michael Kjörling 🏡 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
On 2023-01-27 14:23, Stuff Received wrote:
> On 2023-01-27 12:42, Henry Mensch wrote (in part):
>> I'd like to find solid Android and Windows clients so I could once
>> again use USENET.
>
> I read USENET (at Newsdemon -- USD3 monthly) with Firefox (on MacOS) but
> it presumably will also work on Windows.
Oops -- Thunderbird, not Firefox.
>
> N.
>
> (We seem to have strayed into COFF territory...)
> I have yet to look at the oral history things from Corby, etc which may
> answer this in passing.
This page:
https://multicians.org/project-mac.html
links to oral history transcripts from Fano, Corby and Dennis. Only Corby:
https://conservancy.umn.edu/handle/11299/107230
http://archive.computerhistory.org/resources/access/text/2013/05/102658041-…
was involved when Bell Labs came on board; he says:
"Also Bell Labs was interested in acquiring a new computer system. They were
quite intrigued and sympathetic to the notions of what we were doing. Ed
David down there was a key figure and an old friend of Fano. They decided to
become partners." (pg. 16, CHM interview)
"Simultaneously, Bell Labs had been looking for a new computer acquisition
for their laboratories, and they had been scouting out GE. There was some
synergism: because they knew we were interested they got interested. I think
they first began to look independently of us. But they saw the possibility
of our developing a new operating system together." (pg. 43, CBI interview)
So it sounds like it was kind of a mutual thing, aided by the connection
between Fano and David (who left Bell after Bell bailed on multics).
Noel
> From: Dan Cross
> From Acceptable Name <metta.crawler(a)gmail.com>:
Gmail has decided this machine is a source of spam, and is rejecting all email
from it, and I have yet to sort out what's going on, so someone might want to
forward anything I turn up to this person.
>> Did Bell Labs approach MIT or was it the other way around?
I looked around the Multics site, but all I could find is this: "Bell
Laboratories (BTL) joined the Multics software development effort in November
of 1964"
https://multicians.org/history.html
I did look through IEEE Annals of the History of Computing, vol. 14, no. 2,
listed there, but it's mostly about the roots of CTSS. It does have a footnote
referring to Doug, about the timing, but no detail of how Bell Labs got
involved.
I have yet to look at the oral history things from Corby, etc which may answer
this in passing.
I'll ask Jerry Saltzer if he remembers; he's about the only person left from
MIT who was around at that point.
>> Did participating in Project MAC come from researchers requesting
>> management at Bell Labs/MIT
At MIT, the 'managers' and the researchers were the same people, pretty much.
If you read the panel transcript in V14N2, Fano was the closest thing to a
manager (although he was really a professor) there was at MIT, and he talks
about not wanting to be involved in managing the thing!
Noel
[Bcc: to TUHS as it's not strictly Unix related, but relevant to the
pre-history]
This came from USENET, specifically, alt.os.multics. Since it's
unlikely anyone in a position to answer is going to see it there, I'm
reposting here:
From Acceptable Name <metta.crawler(a)gmail.com>:
>Did Bell Labs approach MIT or was it the other way around?
>Did participating in Project MAC come from researchers requesting
>management at Bell Labs/MIT or did management make the
>decision due to dealing with other managers in each of the two
>organizations? Did it grow out of an informal arrangement into
>a formal one?"
These are interesting questions. Perhaps Doug may be in the know?
- Dan C.
(Move to COFF)
> On 22 Jan 2023, at 05:43, Luther Johnson <luther(a)makerlisp.com> wrote:
>
> Yes, I know, but some of that SW development is being automated ... I'm not saying it will totally go away, but the numbers will become smaller, and the number of people who know how to do it will become smaller, and the quality will continue to deteriorate. The number of people who can detect quality problems before the failures they cause, will also get smaller. Not extinct, but endangered, and we are all endangered by the quality problems.
Is it possible that this concern mirrors that of skilled programmers seeing the introduction of high-level languages?
I’ve played with ChatGPT, and the first 10 minutes were a bit confronting. But the remainder of the hour or so I played overcame my initial concerns.
It’s amazing what can be done, especially with Javascript or Python, when you ask for something that’s fairly simple to define, and in a common application area. You can get reasonable code, refine it, ask for an altered approach, etc. It’s probably quicker than writing it yourself, especially if you’re not intimately familiar with the library being used (or even the language).
But … it pretty quickly becomes clear that there’s no semantic understanding of what’s being done behind it. And unless you specify what you want in pretty minute detail, the output is unlikely to be what you want. And, as always, going from a roughed-out implementation of the core functionality to a production-ready program is a lot of work.
In the end, it’s like having an intern with a bit of experience, Stack Overflow, and a decent aptitude driving the keyboard: you still have to break down the spec into small, detailed chunks, and while sometimes they come back with the right thing, more often, you need to walk through it line by line to correct the little mistakes.
I’m looking forward to seeing a generative model combined with static analysis, incremental compilation, unit test output: I think it will be possible for a good programmer to multiply their productivity by a few times (maybe even 10x, but I don’t see 100x). There’ll still be times when it’s simpler to just write the code yourself, rather than trying to rephrase the request.
All of which makes me think of the assembly language to high-level language transition ...
d
COFF'd
I wonder if we'll see events around 2038 that renew interest in conventional computing. There are going to be more public eyes on vintage computers and aging computational infrastructure the closer we get to that date methinks, if even just in the form of Ric Romero-esque curiosity pieces.
Hopefully the cohort of folks that dive into Fortran and Cobol for the first time to pick up some of the slack on bringing 2038-averse software and systems forward will continue to explore around the margins of their newfound skills. I know starting in assembly and C influenced me to then come to understand the bigger picture in which those languages and their paradigms developed, so hopefully the same is true of a general programming community finding itself Fortran-and-Cobol-ish for a time.
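(A quick illustration of the arithmetic, for anyone who hasn't looked: a
signed 32-bit time_t runs out 2^31 - 1 seconds after the 1970 epoch, so on
a GNU system

   date -u -d @2147483647

prints the last representable second, Tue Jan 19 03:14:07 UTC 2038; one
tick later a 32-bit counter wraps negative. BSD and macOS date spell it
date -u -r 2147483647.)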
- Matt G.
------- Original Message -------
On Saturday, January 21st, 2023 at 10:43 AM, Luther Johnson <luther(a)makerlisp.com> wrote:
> Yes, I know, but some of that SW development is being automated ... I'm
> not saying it will totally go away, but the numbers will become smaller,
> and the number of people who know how to do it will become smaller, and
> the quality will continue to deteriorate. The number of people who can
> detect quality problems before the failures they cause, will also get
> smaller. Not extinct, but endangered, and we are all endangered by the
> quality problems.
>
> On 01/21/2023 11:12 AM, arnold(a)skeeve.com wrote:
>
> > Real computers with keyboards etc won't go away; think about
> > all those servers running the backends of the apps and the
> > databases for the cool stuff on the phones. Someone is still
> > going to have to write those bits.
> >
> > Arnold
> >
> > Luther Johnson luther(a)makerlisp.com wrote:
> >
> > > Well, that's a comforting thought, I hope it goes that way.
> > >
> > > On 01/19/2023 06:10 PM, John Cowan wrote:
> > >
> > > > On Thu, Jan 19, 2023 at 5:23 PM Luther Johnson <luther(a)makerlisp.com> wrote:
> > > >
> > > > Computers that are not smart phone-like are definitely on the
> > > > endangered
> > > > species list. You know, the kind on a desk, with a keyboard ...
> > > >
> > > > I don't have statistics for this, but I doubt it. Consider amateur
> > > > radio, which has been around for a century now. Amateur stations are
> > > > an ever-shrinking fraction of all transmitters, to say nothing of
> > > > receivers, but in absolute terms there are now more than 2 million
> > > > hams in the world, which is almost certainly more than ever.
"G. Branden Robinson" <g.branden.robinson(a)gmail.com> writes:
[migrating to COFF]
[snip]
>> In that time frame there was a number of microkernel designs. One
>> that has not been mentioned was OS-9 for the 6809/68000 processor. I
>> used it pretty extensively. OS-9 was very unix like from the userland
>> POV, when you consider something like V5 unix, however it didn't share
>> any of the same command names, just many of the same concepts.
>
> This is emphatically true. I used this system as a kid on a 64KiB
> machine, and I don't remember even a mention of Unix in the doorstop of
> a manual by Dale Puckett and Peter Dibble (who gave you something like 6
> chapters of architectural background before introducing the shell
> prompt). Maybe they did mention Unix, but since it had no meaning to
> me at the time, it didn't sink in. I think it is also possible they
> avoided any names that they thought might draw legal ire from AT&T.
That is more or less me too... However, in later years when I was
familiar with Unix I looked at some of the OS-9 books, and the block
diagrams of the OS-9 OS in them could have described early Unix
pretty well.
>> It was close enough that if you had the C compiler, a very basic K&R
>> compiler, you could get some of the unix command to compile without
>> too much trouble.
>
> Years later I went to college, landed on Sun IPC workstations, and
> quickly recognized OS-9's "T/S Edit" as a vi clone, and its "T/S Word"
> as a version of nroff. There was also a "T/S Spell" product but I don't
> recall it clearly enough to venture whether it was a clone of ispell.
Ya, I think I even had a patch that turned T/S Edit into a much closer
vi clone. But, I think by then I had another vi clone already on hand.
One of the other things I did with OS-9/6809 was work on UUCP. I
didn't write the original OS-9 UUCP code, but I did modify it quite a
bit and had it talking UUCP g protocol to UUnet via a phone line. I did
write a 'rn' Usenet news reader clone and was pulling down a few news
groups as well as email every day. In the last days of that system, I
also logged into the system via a serial port, complete with username and
password prompts. This was all on a Color Computer 3 with 512K.
[snip]
>> and nothing like Mach or even Minix.
>
> With the source of all three available, a technical paper analyzing and
> contrasting them would be a worthwhile thing to have. (It's unclear to
> me if even a historical version of QNX is available for study.)
The source to OS-9/6809 would have been released by Microware a long
time ago had it not been for a particular person in the user community.
That effort got mucked up. I fell out of following it after the BSD
Unixes became available.
>> It was also very much positioned to real time OS needs of the time and
>> was not really marketed generally and unless you happened to have a
>> Color Computer from Radio Shack
>
> Lucky me! How I yearned for a 128KiB Color Computer 3 so I could
> upgrade to OS-9 Level 2 and the windowing system. (512KiB was
> preferred, but there had been a spike in RAM prices right about the time
> the machine was released. Not that greater market success would have
> kept Tandy from under-promoting and eventually killing the machine.[1])
Level II was nice. It was able to use bank switching, allowing an
arbitrary set of 8k memory blocks out of the 128k or 512k present in the
CC3 system to be mapped into the 6809's 64k address space. The Color
Computer didn't support memory protection, so no paging or any real
process protection, but this banking allowed for a lot of possibilities.
I know that there were other OS-9 systems around that ran Level II, but I
don't really know how they managed memory. I would suspect it to be
similar to the CC3, but that is just a guess on my part.
[snip]
> Regards,
> Branden
>
> [1] Here's a story you may have to sit down for from Frank Durda IV (now
> deceased) about how the same company knifed their m68k-based
> line--which ran XENIX--in the gut repeatedly. It's hard to find
> this story via Web search so I've made a Facebook post
> temporarily(?) public. I'd simply include it, but it's pretty long.
>
> https://www.facebook.com/g.branden.robinson/posts/pfbid0F8MrvauQ6KPQ1tytme9…
>
> [2] https://www.cnn.com/2000/TECH/computing/03/21/os9.suit.idg/index.html
> [3] https://appleinsider.com/articles/10/06/08/cisco_licenses_ios_name_to_apple…
> [4] https://sourceforge.net/projects/nitros9/
--
Brad Spencer - brad(a)anduin.eldar.org - KC8VKS - http://anduin.eldar.org
> On Jan 2, 2023, at 1:36 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
>
> The /bin/sh stuff made me think of an interview question I had for engineers,
> that a surprisingly few could pass:
>
> "Tell me about something you wrote that was entirely you, the docs, the
> tests, the source, the installer, everything. It doesn't have to be a
> big thing, but it has to have been successfully used by at least 10
> people who had no contact with you (other than to say thanks)."
>
> Most people fail this. I think the people who pass might look
> positively on the v7 sh stuff. But who knows?
Huh. That is a surprisingly tricky question, depending on how you want to construe "entirely you".
v1 of https://atariage.com/software_page.php?SoftwareLabelID=2023 (before Thomas Jentzsch optimized the display engine) was ... stuff I did, but obviously neither the idea nor the execution was all that original, since I used Greg Troutman's Dark Mage source, which in turn was derived from Stellar Track.
There's a certain very large text adventure I once did, which I would certainly not bring up at a real job interview since it's riotously pornographic, but it is 200,000 words of source text, got surprisingly good reviews from many people (Emily Short loved it; Jimmy Maher hated it), and I put it all together myself, but the whole thing is a hodgepodge of T.S. Eliot and The Aeneid and then a few dozen other smaller sources, all tossed in a blender. Not going to directly link it but it's not hard to find with a little Googling. The arrangement is original, sure, but its charm--such as it is--may be that it is in some ways a love letter to early D&D and its "what if Gandalf and Conan teamed up to fight Cthulhu" sort of ethos. (Jimmy Maher found the intertextuality very dense and unappetizing, whereas Emily Short really enjoyed the playfulness.)
There's https://github.com/athornton/uCA which fits the criteria but really is a very small wrapper around OpenSSL to automate SAN generation, which is a huge PITA with plain old OpenSSL. Now, of course, you wouldn't bother with this, you'd just use Let's Encrypt, but that wasn't a thing yet. Such as it is it's all me but it is entirely useless without a functional OpenSSL under it.
I'm not sure that ten other people ever used https://github.com/athornton/nerdle-solver because there may have been fewer than ten people other than me that found Nerdle all that fascinating. It was fun talking with that community and finding out that the other solver I'm aware of was completely lexical, rather than actually doing the math. But again: it's a thing that makes no sense without someone else having invented Nerdle first.
Or there's https://github.com/athornton/tmenu; probably also not actually used by ten other people, but it's the front-end of https://mvsevm.fsf.net (which certainly has been enjoyed by...uh...let's go with "at least a dozen" people). It's original work, insofar as it goes, but it (like uCA) is really just glue between other things: a web server front end, a Javascript terminal emulator, and telnet/tn3270 clients.
Which of these, if any, do you count?
(moving to COFF)
On Tue, Jan 3, 2023 at 9:55 AM Marshall Conover <marzhall.o(a)gmail.com>
wrote:
> Along these lines but not quite, Jupyter Notebooks have stood out to me as
> another approach to this concept, with behavior I'd like to see implemented
> in a shell.
>
>
Well, you know, if you're OK with bash:
https://github.com/takluyver/bash_kernel
Or zsh: https://pypi.org/project/zsh-jupyter-kernel/
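If memory serves, getting the bash kernel going is just a pip install plus
registering the kernel spec, something like:

   pip install bash_kernel
   python -m bash_kernel.install

after which it shows up in the launcher next to the Python kernels.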
One of the big things I do is work on the Notebook Aspect of the Rubin
Science Platform. Each JupyterLab notebook session is tied to a "kernel"
which is a specific language-and-extension environment. At the Rubin
Observatory, we support a Python 3.10 environment with our processing
pipeline included. Although JupyterLab itself is capable of doing other
languages (notably Julia and R, which are the other two from which the name
"Jupyter" came), many others have been adapted to the notebook environment
(including at least the two shells above). And while researchers are
welcome to write their own code and work with raw images, we work under the
presumption that almost everyone is going to use the software tools the
Rubin Observatory provides to work with the data it collects, because
writing your own data processing pipeline from scratch is a pretty
monumental project.
Most astronomers are perfectly happy with what we provide, which is Python
plus our processing pipelines, which are all Python from the
scientist-facing perspective (much of the pipeline code is implemented in
C++ for speed, but it then gets Python bindings, so unless you're actually
working very deep down in the image processing code, it's Python as far as
you're concerned). However, a certain class of astronomers still loves
their FORTRAN. This class, unfortunately, tends to be the older ones,
which means the ones who have societal stature, tenure, can relatively
easily get big grants, and wield a lot of power within their institutions.
I know that it is *possible* to run gfortran as a Jupyter kernel. I've
seen it done. I was impressed.
Fortunately, no one with the power to make it stick has demanded we provide
a kernel like that. The initial provision of the service wouldn't be the
problem. It's that the support would be a nightmare. No one on my team is
great at FORTRAN; I'm probably the closest to fluent, and I'm not very. I
really don't enjoy working in FORTRAN; FORTRAN doesn't lend itself easily
to the kind of Python REPL exploration that notebooks are all about; and
someone who loves FORTRAN and hates Python probably has a very different
concept of what good software engineering practices look like than I do.
So trying to support someone working in a notebook environment with a
FORTRAN kernel would very likely be exquisitely
painful. And fortunately, since there are not FORTRAN bindings to the C++
classes providing the core algorithms, much less FORTRAN bindings to the
Python implementations (because all the things that don't particularly need
to be fast are just written in Python in the first place), a gfortran
kernel wouldn't be nearly as useful as our Python-plus-our-tools.
Now, sure, if we had paying customers who were willing to throw enough
money at us to make it worth the pain and both bring a FORTRAN
implementation to feature-parity with the reference environment and make a
gfortran kernel available, then we would do it. But I get paid a salary
that is not directly tied to my support burden, and I get to spend a lot
more of my time working on fun things and providing features for
astronomers who are not mired in 1978 if I can avoid spending my time
supporting huge time sinks that aren't in widespread use. We are scoped to
provide Python. We are not scoped to provide FORTRAN. We are not making
money off of sales: we're employed to provide stable infrastructure
services so the scientists using our platform and observatory can get their
research done. And thus far we've been successful in saying "hey, we've
got finite resources, we're not swimming in spare cycles, no, we can't
support [ x for x in
things-someone-wants-but-are-not-in-the-documented-scope ]". (To be fair,
this has more often been things like additional visualization libraries
than whole new languages, but the principle is the same.) We have a
process for proposing items for inclusion, and it's not at all rare that we
add them, but it's generally a considered decision about how generally
useful the proposed item will be for the project as a whole and how much
time it's likely to consume to support.
So this gave me a little satori about why I think POSIX.2 is a perfectly
reasonable spec to support and why I'm not wild about making all my shell
scripts instead compatible with the subset of v7 sh that works (almost)
everywhere. It's not all that much more work up front, but odds are that a
customer that wants to run new software, but who can't guarantee a POSIX
/bin/sh, will be a much more costly customer to support than one who can,
just as someone who wants a notebook environment but insists on FORTRAN in
it is very likely going to be much harder to support than someone who's
happy with the Python environment we already supply.
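(To make the trade-off concrete, here's roughly the sort of thing I'd have
to give up -- ordinary POSIX sh on the left, a v7-safe spelling on the
right, give or take:

   ver=$(uname -r)        ver=`uname -r`
   base=${file%.c}        base=`expr "$file" : '\(.*\)\.c'`

and it's death by a thousand backticks from there.)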
Adam
[Apologies for resending; I messed up and used the old Minnie address
for COFF in the Cc]
On Mon, Jan 2, 2023 at 1:36 PM Dan Cross <crossd(a)gmail.com> wrote:
>
> On Mon, Jan 2, 2023 at 1:13 PM Clem Cole <clemc(a)ccc.com> wrote:
> > Maybe this should go to COFF but Adam I fear you are falling into a trap that is easy to fall into - old == unused
> >
> > One of my favorite stories in the computer restoration community is from 5-10 years ago, when the LCM+L in Seattle was restoring the CDC-6000 that they got from Purdue. Core memory is difficult to get, so they made a card and physical module that could plug into their system that is both electrically and mechanically equivalent, using modern semiconductors. A few weeks later they announced that they had the system running and had built this module. They got approached by the USAF asking if they could get a copy of the design. Seems there was still at least one CDC-6600 running a particular critical application somewhere.
> >
> > This is similar to the PDP-11s and Vaxen that are supposed to be still in the hydroelectric grid [a few years ago there was an ad for an RSX and VMS programmer to go to Canada running in the Boston newspapers - I know someone that did a small amount of consulting on that one].
>
> One of my favorite stories along these lines is the train signalling
> system in Melbourne, running on a "PDP-11". I quote PDP-11 because
> that is now virtualized:
> https://www.equicon.de/images/Virtualisierung/LegacyTrainControlSystemStabi…
>
> Indeed older systems show up in surprising places. I was once on the
> bridge of a US Naval vessel in the late '00s and saw a SPARCstaton 20
> running Solaris (with the CDE login screen). I don't recall what it
> was doing, but it was a tad surprising.
>
> I do worry about legacy systems in critical situations, but then, I've
> been in a firefight when the damned tactical computer with the satcomm
> link rebooted and we didn't have VHF comms with the battlespace owner.
> That was not particularly fun.
>
> - Dan C.
So, all the shell-portability talk on TUHS reminds me of something I believe I saw back in the 90s, and then failed to find a few years ago when I went looking.
But my Google-fu is not great, so maybe I just didn't look in the right place.
I was trying to port Frotz to TOPS-20, because I wanted to run the Infocom games on TOPS-20 on an emulated PDP-10. (The only further link in the chain of causality was a warning in the Frotz sources that it assumed 8-bit bytes and if you wanted to try to port it to a 36-bit environment, good luck; this is the difference between "stuff I do for fun" and "stuff that needs a business justification".) I had a K&R C compiler available, but the sources were all ANSI C.
I had remembered that deprotoize had been part of an early GCC, and I did manage to find deprotoize sources, buried, I think, in some dusty piece of the Apple toolchain. But I also have a vague memory that GCC at some point probably in the mid-to-late 1990s came with something that was halfway between autoconf and Perl's bootstrapper. I *think* it was a bunch of shell scripts that could put together a minimal C subset compiler, which then could be used to build the rest of GCC. I'm pretty sure it was released as a reaction to the unbundling of C compilers when Unix vendors realized that was a thing they could do.
I could not find that thing at all.
Did I hallucinate it? It seems like it would have been an immensely useful tool at the time.
I ended up writing my own very half-assed deprotoizer and symbol mangler (only the first six characters of the function name were significant, because that's how the TOPS-20 linker works, and I don't know that I could have gotten past that even with an ANSI compiler without having to do significant toolchain work) which got me over the hump, but I have remained curious whether there really was that nifty GCC bootstrapper or whether I made that up.
Adam
Hi Branden,
> Paul Ruizendaal wrote:
> > That was my immediate pain point in doing the D1 SoC port.
> > Unfortunately, the manufacturer only released the DRAM init code as
> > compiler ‘-S’ output and the 1,400 page datasheet does not discuss
> > its registers. Maybe this is a-typical, as I heard in the above
> > keynote that NXP provides 8,000 page datasheets with their SoC’s.
...
> I don't think it's atypical. I was pretty annoyed trying to use the
> data sheet to program a simple timer chip on the ODROID-C2
...
> OS nerds don't generally handle procurement themselves. Instead,
> purchasing managers do, and those people don't have to face the pain.
...
> Data sheets are only as good as they need to be to move the product,
> which means they don't need to be good at all, since the people who
> pay for them look only at the advertised feature list and the price.
I think it comes down to the background of the chip designer. I've
always found NXP very good: their documentation of a chip is extensive;
it doesn't rely on referring to external source code; and they're
responsive when I've found the occasional error, both confirming the
correction and committing to its future publication.
On the other hand, TI left a bad taste. The documentation isn't good
and they rely on a forum to mop up all the problems but it's pot luck
which staffer answers and perennial problems can easily be found by a
forum search, never with a satisfactory answer.
My guess is Allwinner, maker of Paul's D1 SoC, has a language barrier
and a very fast-moving market to dissuade them from putting too much
effort into documentation. Many simpler chips from China, e.g. a JPEG
encoder, come with a couple of pages listing features and some C written
by a chip designer or copied from a rival.
In my experience, chip selection is done by technical people, not
procurement. It's too complex a task, even just choosing from those of
one supplier like NXP, as there is often a compromise to make which
affects the rest of the board design. That's where FPGAs have an
allure, but unfortunately not in low-power designs.
--
Cheers, Ralph.
> On Dec 31, 2022, at 6:40 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
>
> All true except for the Forth choice. It's as bad, maybe worse, as
> choosing Tcl for your language. I've written a ton of Tcl but I
> need the Tk GUI part so I put up with Tcl to get it. I'd never
> push Tcl as a language that other people had to use. Same thing
> with Forth.
>
> I dunno what I'd pick, Perl in the old days, Python now (not that
> I care for Python but everyone can program it). Just pick something
> that is trivial for someone to pick up.
(Moved to COFF)
I rather like FORTH. Its chief virtues are that it is both tiny and extensible. It was developed as a telescope control language, as I recall, and in highly constrained environments gives you a great deal of expressivity for a teeny tiny bit of interpreter code. I adored my HP 28S and still do: that was Peak Calculator, and its UI is basically a FORTH interpreter (which also, of course, functions just fine as an RPN calculator if you don't want to bother with flow control constructs).
But I also make the slightly more controversial claim that FORTH is just LISP stood up on end.
These days I think the right choice for those sorts of applications would be Micropython. Yes, a full-on Python interpreter is heavyweight, but Micropython gives you a lot of functionality in (comparatively) little space. It runs fine on a $4 Pi Pico, for instance, which has IIRC 256KB RAM.
And if you find yourself missing TCL, there's always Powershell, which is like what would happen if bash and TCL had a really ugly baby that just wouldn't shut up. The amazing thing is that access to all the system DLLs makes it *almost* worth putting up with Powershell.
Adam