[moved to COFF]
On Sat, Jan 28, 2023 at 4:16 AM Andy Kosela <akosela(a)andykosela.com> wrote:
> Great initiative and idea! While I am personally not interested in reading
> USENET that much nowadays, the concept of providing free, public access to
> classic Internet services (public USENET, FTP, IRC, finger, etc.) gets all
> my praise. What happened to free, public services these days?
First off, what is stopping you from providing free, public access to those
services?
I don't know where you are, but I have orders of magnitude more access to
freely available content and services than I ever did in the heyday of
Usenet, etc. And for most of it, one doesn't have to be highly technical
to use it.
> Everything appears to be subscriber pay-as-you-go based. The
> commercialization killed the free spirit of Internet we all loved in the
> 90s.
>
"Free" was never really true, as it required massive subsidies of
equipment, power, bandwidth and employee time, usually w/o the direct
knowledge or consent of the entities paying for it.
It reminds me of the lemonade stands I'd occasionally run as a kid, which
were "profitable" to me because mom and dad, with their knowledge and
consent, let me pretend that the costs were $0.
--
Nevin ":-)" Liber <mailto:nevin@eviloverlord.com> +1-847-691-1404
This seems COFF, not TUHS, and mostly not digital...
I have many 4mm DAT cartridges from 20-30 years ago. Every now and then
I will access one. So far I've yet to see evidence of the media degrading.
On 1/28/2023 4:12 AM, Steve Simon wrote:
> baking old, badly stored magnetic tapes prior to reading them is a common practice.
For the last year+ I have been digitizing selected audio tapes made in
the 70s at AWHQ
(https://en.wikipedia.org/wiki/Armadillo_World_Headquarters) The ones
I've been working with are 1/4" on 10.5" reels. A printed inventory I
was given says "bake" next to almost all of the items, but so far, after
processing roughly 40 reels, I've yet to find one that seemed to need
"baking" (actually, "baking" is a bit overstated, in that best practice
is to raise temperature to roughly 150F --
https://www.radioworld.com/industry/baking-magnetic-recording-tape)
For now, I'm not able to share those AWHQ recordings, but I can share
other recordings I made in the 60s and 70s at
https://technologists.com/60sN70s/. In all those reels, many of which
are cheap, unbranded tape, I didn't find any that seemed to me to need
baking.
Charlie
--
voice: +1.512.784.7526 e-mail: sauer(a)technologists.com
fax: +1.512.346.5240 Web: https://technologists.com/sauer/
Facebook/Google/LinkedIn/Twitter: CharlesHSauer
On 28 Jan 2023 11:29 -0800, from frew(a)ucsb.edu (James Frew):
> As I was leaving the lab late one evening during this mini-crisis, I had to
> walk around a custodian who was busy giving the linoleum floor in the
> hallway its annual deep cleaning / polishing. This involved a dingus with a
> large (~18" diameter) horizontal buffing wheel, atop which sat an enormous
> (like, a cylinder about as big around as a soccer ball) electric motor,
> sparking commutator clearly visible through the vents in the metal housing.
This is probably more COFF than TUHS, but I recall a story from almost
certainly much later where someone (I think it was a secretary; for
now, let's pretend it was) had been told to change backup tapes daily
and set the freshly taken backup aside for safekeeping. Then one day
the storage failed and the backups were needed, only it turned out
when trying to restore the backups that _every_ _single_ _tape_ was
blank. Nobody, least of all the secretary, could explain how that
could have happened, and eventually, the secretary was asked to
demonstrate exactly what had been done every day. Turned out that
while getting the replacement tape, the secretary put the freshly
taken backup tape on a UPS, which apparently generated a strong
magnetic field, before setting that tape aside. So the freshly taken
backup tape was dutifully well and thoroughly erased. Nobody had
mentioned the little detail of not putting the tape near the UPS.
Oops.
--
Michael Kjörling 🏡 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
On 2023-01-27 14:23, Stuff Received wrote:
> On 2023-01-27 12:42, Henry Mensch wrote (in part):
>> I'd like to find solid Android and Windows clients so I could once
>> again use USENET.
>
> I read USENET (at Newsdemon -- USD3 monthly) with Firefox (on MacOS) but
> presumably it will also work on Windows.
Oops -- Thunderbird, not Firefox.
>
> N.
>
> (We seem to have strayed into COFF territory...)
> I have yet to look at the oral history things from Corby, etc which may
> answer this in passing.
This page:
https://multicians.org/project-mac.html
links to oral history transcripts from Fano, Corby and Dennis. Only Corby:
https://conservancy.umn.edu/handle/11299/107230
http://archive.computerhistory.org/resources/access/text/2013/05/102658041-…
was involved when Bell Labs came on board; he says:
"Also Bell Labs was interested in acquiring a new computer system. They were
quite intrigued and sympathetic to the notions of what we were doing. Ed
David down there was a key figure and an old friend of Fano. They decided to
become partners." (pg. 16, CHM interview)
"Simultaneously, Bell Labs had been looking for a new computer acquisition
for their laboratories, and they had been scouting out GE. There was some
synergism: because they knew we were interested they got interested. I think
they first began to look independently of us. But they saw the possibility
of our developing a new operating system together." (pg. 43, CBI interview)
So it sounds like it was kind of a mutual thing, aided by the connection
between Fano and David (who left Bell after Bell bailed on multics).
Noel
> From: Dan Cross
> From Acceptable Name <metta.crawler(a)gmail.com>:
Gmail has decided this machine is a source of spam, and is rejecting all email
from it, and I have yet to sort out what's going on, so someone might want to
forward anything I turn up to this person.
>> Did Bell Labs approach MIT or was it the other way around?
I looked around the Multics site, but all I could find is this: "Bell
Laboratories (BTL) joined the Multics software development effort in November
of 1964"
https://multicians.org/history.html
I did look through IEEE Annals of the History of Computing, vol. 14, no. 2,
listed there, but it's mostly about the roots of CTSS. It does have a footnote
referring to Doug, about the timing, but no detail of how Bell Labs got
involved.
I have yet to look at the oral history things from Corby, etc which may answer
this in passing.
I'll ask Jerry Saltzer if he remembers; he's about the only person left from
MIT who was around at that point.
>> Did participating in Project MAC come from researchers requesting
>> management at Bell Labs/MIT
At MIT, the 'managers' and the researchers were the same people, pretty much.
If you read the panel transcript in V14N2, Fano was the closest thing to a
manager (although he was really a professor) there was at MIT, and he talks
about not wanting to be involved in managing the thing!
Noel
[Bcc: to TUHS as it's not strictly Unix related, but relevant to the
pre-history]
This came from USENET, specifically, alt.os.multics. Since it's
unlikely anyone in a position to answer is going to see it there, I'm
reposting here:
From Acceptable Name <metta.crawler(a)gmail.com>:
>Did Bell Labs approach MIT or was it the other way around?
>Did participating in Project MAC come from researchers requesting
>management at Bell Labs/MIT or did management make the
>decision due to dealing with other managers in each of the two
>organizations? Did it grow out of an informal arrangement into
>a formal one?"
These are interesting questions. Perhaps Doug may be in the know?
- Dan C.
(Move to COFF)
> On 22 Jan 2023, at 05:43, Luther Johnson <luther(a)makerlisp.com> wrote:
>
> Yes, I know, but some of that SW development is being automated ... I'm not saying it will totally go away, but the numbers will become smaller, and the number of people who know how to do it will become smaller, and the quality will continue to deteriorate. The number of people who can detect quality problems before the failures they cause, will also get smaller. Not extinct, but endangered, and we are all endangered by the quality problems.
Is it possible that this concern mirrors that of skilled programmers seeing the introduction of high-level languages?
I’ve played with ChatGPT, and the first 10 minutes were a bit confronting. But the remainder of the hour or so I played overcame my initial concerns.
It’s amazing what can be done, especially with Javascript or Python, when you ask for something that’s fairly simple to define, and in a common application area. You can get reasonable code, refine it, ask for an altered approach, etc. It’s probably quicker than writing it yourself, especially if you’re not intimately familiar with the library being used (or even the language).
But … it pretty quickly becomes clear that there’s no semantic understanding of what’s being done behind it. And unless you specify what you want in pretty minute detail, the output is unlikely to be what you want. And, as always, going from a roughed-out implementation of the core functionality to a production-ready program is a lot of work.
In the end, it’s like having an intern with a bit of experience, Stack Overflow, and a decent aptitude driving the keyboard: you still have to break down the spec into small, detailed chunks, and while sometimes they come back with the right thing, more often, you need to walk through it line by line to correct the little mistakes.
I’m looking forward to seeing a generative model combined with static analysis, incremental compilation, unit test output: I think it will be possible for a good programmer to multiply their productivity by a few times (maybe even 10x, but I don’t see 100x). There’ll still be times when it’s simpler to just write the code yourself, rather than trying to rephrase the request.
All of which makes me think of the assembly language to high-level language transition ...
d
COFF'd
I wonder if we'll see events around 2038 that renew interest in conventional computing. There are going to be more public eyes on vintage computers and aging computational infrastructure the closer we get to that date, methinks, if only in the form of Ric Romero-esque curiosity pieces.
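The rollover itself is easy to demonstrate; here's a quick sketch (illustrative Python, not anyone's production code) of the signed 32-bit time_t wraparound that 2038 is all about:

```python
import struct
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds since the 1970 epoch and runs
# out just after 2038-01-19 03:14:07 UTC.
last = 2**31 - 1
print(datetime.fromtimestamp(last, timezone.utc))  # 2038-01-19 03:14:07+00:00

# One second later, reinterpreted as a signed 32-bit value, goes negative:
wrapped = struct.unpack("<i", struct.pack("<I", last + 1))[0]
print(wrapped)  # -2147483648
```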
Hopefully the cohort of folks that dive into Fortran and Cobol for the first time to pick up some of the slack on bringing 2038-averse software and systems forward will continue to explore around the margins of their newfound skills. I know starting in assembly and C influenced me to then come to understand the bigger picture in which those languages and their paradigms developed, so hopefully the same is true of a general programming community finding itself Fortran-and-Cobol-ish for a time.
- Matt G.
------- Original Message -------
On Saturday, January 21st, 2023 at 10:43 AM, Luther Johnson <luther(a)makerlisp.com> wrote:
> Yes, I know, but some of that SW development is being automated ... I'm
> not saying it will totally go away, but the numbers will become smaller,
> and the number of people who know how to do it will become smaller, and
> the quality will continue to deteriorate. The number of people who can
> detect quality problems before the failures they cause, will also get
> smaller. Not extinct, but endangered, and we are all endangered by the
> quality problems.
>
> On 01/21/2023 11:12 AM, arnold(a)skeeve.com wrote:
>
> > Real computers with keyboards etc won't go away; think about
> > all those servers running the backends of the apps and the
> > databases for the cool stuff on the phones. Someone is still
> > going to have to write those bits.
> >
> > Arnold
> >
> > Luther Johnson luther(a)makerlisp.com wrote:
> >
> > > Well, that's a comforting thought, I hope it goes that way.
> > >
> > > On 01/19/2023 06:10 PM, John Cowan wrote:
> > >
> > > > On Thu, Jan 19, 2023 at 5:23 PM Luther Johnson <luther(a)makerlisp.com
> > > > mailto:luther@makerlisp.com> wrote:
> > > >
> > > > Computers that are not smart phone-like are definitely on the
> > > > endangered
> > > > species list. You know, the kind on a desk, with a keyboard ...
> > > >
> > > > I don't have statistics for this, but I doubt it. Consider amateur
> > > > radio, which has been around for a century now. Amateur stations are
> > > > an ever-shrinking fraction of all transmitters, to say nothing of
> > > > receivers, but in absolute terms there are now more than 2 million
> > > > hams in the world, which is almost certainly more than ever.
"G. Branden Robinson" <g.branden.robinson(a)gmail.com> writes:
[migrating to COFF]
[snip]
>> In that time frame there were a number of microkernel designs. One
>> that has not been mentioned was OS-9 for the 6809/68000 processor. I
>> used it pretty extensively. OS-9 was very unix like from the userland
>> POV, when you consider something like V5 unix, however it didn't share
>> any of the same command names, just many of the same concepts.
>
> This is emphatically true. I used this system as a kid on a 64KiB
> machine, and I don't remember even a mention of Unix in the doorstop of
> a manual by Dale Puckett and Peter Dibble (who gave you something like 6
> chapters of architectural background before introducing the shell
> prompt). Maybe they did mention Unix, but since it had no meaning to
> me at the time, it didn't sink in. I think it is also possible they
> avoided any names that they thought might draw legal ire from AT&T.
That is more or less me too... However, in later years when I was
familiar with Unix I looked at some of the OS-9 books and the block
diagrams in the books about the OS-9 OS could have described early Unix
pretty well.
>> It was close enough that if you had the C compiler, a very basic K&R
>> compiler, you could get some of the unix commands to compile without
>> too much trouble.
>
> Years later I went to college, landed on Sun IPC workstations, and
> quickly recognized OS-9's "T/S Edit" as a vi clone, and its "T/S Word"
> as a version of nroff. There was also a "T/S Spell" product but I don't
> recall it clearly enough to venture whether it was a clone of ispell.
Ya, I think I even had a patch that turned T/S Edit into a much closer
vi clone. But, I think by then I had another vi clone already on hand.
One of the other things I did with OS-9/6809 was work on UUCP. I
didn't write the original OS-9 UUCP code, but I did modify it quite a
bit and had it talking UUCP g protocol to UUnet via a phone line. I did
write a 'rn' Usenet news reader clone and was pulling down a few news
groups as well as email every day. In the last days of that system, I
also logged into the system via a serial port complete with Username and
password prompts. This was all on a Color Computer 3 with 512K.
[snip]
>> and nothing like Mach or even Minix.
>
> With the source of all three available, a technical paper analyzing and
> contrasting them would be a worthwhile thing to have. (It's unclear to
> me if even a historical version of QNX is available for study.)
The source to OS-9/6809 would have been released by Microware a long
time ago had it not been for a particular person in the user community;
the effort got mucked up. I fell out of following it after the BSD
Unixes became available.
>> It was also very much positioned to real time OS needs of the time and
>> was not really marketed generally and unless you happened to have a
>> Color Computer from Radio Shack
>
> Lucky me! How I yearned for a 128KiB Color Computer 3 so I could
> upgrade to OS-9 Level 2 and the windowing system. (512KiB was
> preferred, but there had been a spike in RAM prices right about the time
> the machine was released. Not that greater market success would have
> kept Tandy from under-promoting and eventually killing the machine.[1])
Level II was nice. It was able to use bank switching and would allow a
set of random 8k memory blocks out of the 128k or 512k present in the
CC3 system to be mapped into the 6809 64k address space. The Color
Computer didn't support memory protection, so no paging or any real
process protection, but this banking allowed for a lot of possibilities.
I know that there were other OS-9 systems around that ran Level II, but I
don't really know how they managed memory. I would suspect it to be
similar to the CC3, but that is just a guess on my part.
[snip]
> Regards,
> Branden
>
> [1] Here's a story you may have to sit down for from Frank Durda IV (now
> deceased) about how the same company knifed their m68k-based
> line--which ran XENIX--in the gut repeatedly. It's hard to find
> this story via Web search so I've made a Facebook post
> temporarily(?) public. I'd simply include it, but it's pretty long.
>
> https://www.facebook.com/g.branden.robinson/posts/pfbid0F8MrvauQ6KPQ1tytme9…
>
> [2] https://www.cnn.com/2000/TECH/computing/03/21/os9.suit.idg/index.html
> [3] https://appleinsider.com/articles/10/06/08/cisco_licenses_ios_name_to_apple…
> [4] https://sourceforge.net/projects/nitros9/
--
Brad Spencer - brad(a)anduin.eldar.org - KC8VKS - http://anduin.eldar.org
> On Jan 2, 2023, at 1:36 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
>
> The /bin/sh stuff made me think of an interview question I had for engineers,
> that a surprisingly few could pass:
>
> "Tell me about something you wrote that was entirely you, the docs, the
> tests, the source, the installer, everything. It doesn't have to be a
> big thing, but it has to have been successfully used by at least 10
> people who had no contact with you (other than to say thanks)."
>
> Most people fail this. I think the people who pass might look
> positively on the v7 sh stuff. But who knows?
Huh. That is a surprisingly tricky question, depending on how you want to construe "entirely you".
v1 of https://atariage.com/software_page.php?SoftwareLabelID=2023 (before Thomas Jentzsch optimized the display engine) was ... stuff I did, but obviously neither the idea nor the execution was all that original, since I used Greg Troutman's Dark Mage source, which in turn was derived from Stellar Track.
There's a certain very large text adventure I once did, which I would certainly not bring up at a real job interview since it's riotously pornographic, but it is 200,000 words of source text, got surprisingly good reviews from many people (Emily Short loved it; Jimmy Maher hated it), and I put it all together myself, but the whole thing is a hodgepodge of T.S. Eliot and The Aeneid and then a few dozen other smaller sources, all tossed in a blender. Not going to directly link it but it's not hard to find with a little Googling. The arrangement is original, sure, but its charm--such as it is--may be that it is in some ways a love letter to early D&D and its "what if Gandalf and Conan teamed up to fight Cthulhu" sort of ethos. (Jimmy Maher found the intertextuality very dense and unappetizing, whereas Emily Short really enjoyed the playfulness.)
There's https://github.com/athornton/uCA which fits the criteria but really is a very small wrapper around OpenSSL to automate SAN generation, which is a huge PITA with plain old OpenSSL. Now, of course, you wouldn't bother with this, you'd just use Let's Encrypt, but that wasn't a thing yet. Such as it is it's all me but it is entirely useless without a functional OpenSSL under it.
I'm not sure that ten other people ever used https://github.com/athornton/nerdle-solver because there may have been fewer than ten people other than me that found Nerdle all that fascinating. It was fun talking with that community and finding out that the other solver I'm aware of was completely lexical, rather than actually doing the math. But again: it's a thing that makes no sense without someone else having invented Nerdle first.
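For what it's worth, "actually doing the math" is simple to sketch. This toy validity check (illustrative Python only, not the code in that repo) shows the difference from a purely lexical solver: a candidate guess has to be arithmetically true, not just look like an equation:

```python
def is_valid_guess(eq: str) -> bool:
    """Toy Nerdle-style check: one '=', digits and + - * / on the left,
    and the left side must actually evaluate to the right side."""
    lhs, _, rhs = eq.partition("=")
    if not rhs or "=" in rhs:
        return False
    if not all(c in "0123456789+-*/" for c in lhs + rhs):
        return False
    try:
        # eval is acceptable here because the alphabet is restricted above
        return eval(lhs) == int(rhs)
    except (SyntaxError, ValueError, ZeroDivisionError):
        return False

print(is_valid_guess("12+35=47"))  # True
print(is_valid_guess("12+35=48"))  # False
```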
Or there's https://github.com/athornton/tmenu; probably also not actually used by ten other people, but it's the front-end of https://mvsevm.fsf.net (which certainly has been enjoyed by...uh...let's go with "at least a dozen" people). It's original work, insofar as it goes, but it (like uCA) is really just glue between other things: a web server front end, a Javascript terminal emulator, and telnet/tn3270 clients.
Which of these, if any, do you count?
(moving to COFF)
On Tue, Jan 3, 2023 at 9:55 AM Marshall Conover <marzhall.o(a)gmail.com>
wrote:
> Along these lines but not quite, Jupyter Notebooks have stood out to me as
> another approach to this concept, with behavior I'd like to see implemented
> in a shell.
>
>
Well, you know, if you're OK with bash:
https://github.com/takluyver/bash_kernel
Or zsh: https://pypi.org/project/zsh-jupyter-kernel/
One of the big things I do is work on the Notebook Aspect of the Rubin
Science Platform. Each JupyterLab notebook session is tied to a "kernel"
which is a specific language-and-extension environment. At the Rubin
Observatory, we support a Python 3.10 environment with our processing
pipeline included. Although JupyterLab itself is capable of doing other
languages (notably Julia and R, which are the other two from which the name
"Jupyter" came), many others have been adapted to the notebook environment
(including at least the two shells above). And while researchers are
welcome to write their own code and work with raw images, we work under the
presumption that almost everyone is going to use the software tools the
Rubin Observatory provides to work with the data it collects, because
writing your own data processing pipeline from scratch is a pretty
monumental project.
Most astronomers are perfectly happy with what we provide, which is Python
plus our processing pipelines, which are all Python from the
scientist-facing perspective (much of the pipeline code is implemented in
C++ for speed, but it then gets Python bindings, so unless you're actually
working very deep down in the image processing code, it's Python as far as
you're concerned). However, a certain class of astronomers still loves
their FORTRAN. This class, unfortunately, tends to be the older
astronomers: the ones with societal stature and tenure, who can relatively
easily get big grants and wield a lot of power within their institutions.
I know that it is *possible* to run gfortran as a Jupyter kernel. I've
seen it done. I was impressed.
Fortunately, no one with the power to make it stick has demanded we provide
a kernel like that. The initial provision of the service wouldn't be the
problem. It's that the support would be a nightmare. No one on my team is
great at FORTRAN; I'm probably the closest to fluent, and I'm not very, and
I really don't enjoy working in it. Because FORTRAN really doesn't
lend itself easily to the kind of Python REPL exploration that notebooks
are all about, and because someone who loves FORTRAN and hates Python
probably has a very different concept of what good software engineering
practices look like than I do, trying to support someone working in a
FORTRAN kernel in a notebook environment would very likely be exquisitely
painful. And fortunately, since there are no FORTRAN bindings to the C++
classes providing the core algorithms, much less FORTRAN bindings to the
Python implementations (because all the things that don't particularly need
to be fast are just written in Python in the first place), a gfortran
kernel wouldn't be nearly as useful as our Python-plus-our-tools.
Now, sure, if we had paying customers who were willing to throw enough
money at us to make it worth the pain and both bring a FORTRAN
implementation to feature-parity with the reference environment and make a
gfortran kernel available, then we would do it. But I get paid a salary
that is not directly tied to my support burden, and I get to spend a lot
more of my time working on fun things and providing features for
astronomers who are not mired in 1978 if I can avoid spending my time
supporting huge time sinks that aren't in widespread use. We are scoped to
provide Python. We are not scoped to provide FORTRAN. We are not making
money off of sales: we're employed to provide stable infrastructure
services so the scientists using our platform and observatory can get their
research done. And thus far we've been successful in saying "hey, we've
got finite resources, we're not swimming in spare cycles, no, we can't
support [ x for x in
things-someone-wants-but-are-not-in-the-documented-scope ]". (To be fair,
this has more often been things like additional visualization libraries
than whole new languages, but the principle is the same.) We have a
process for proposing items for inclusion, and it's not at all rare that we
add them, but it's generally a considered decision about how generally
useful the proposed item will be for the project as a whole and how much
time it's likely to consume to support.
So this gave me a little satori about why I think POSIX.2 is a perfectly
reasonable spec to support and why I'm not wild about making all my shell
scripts instead compatible with the subset of v7 sh that works (almost)
everywhere. It's not all that much more work up front, but odds are that a
customer that wants to run new software, but who can't guarantee a POSIX
/bin/sh, will be a much more costly customer to support than one who can,
just as someone who wants a notebook environment but insists on FORTRAN in
it is very likely going to be much harder to support than someone who's
happy with the Python environment we already supply.
Adam
[Apologies for resending; I messed up and used the old Minnie address
for COFF in the Cc]
On Mon, Jan 2, 2023 at 1:36 PM Dan Cross <crossd(a)gmail.com> wrote:
>
> On Mon, Jan 2, 2023 at 1:13 PM Clem Cole <clemc(a)ccc.com> wrote:
> > Maybe this should go to COFF but Adam I fear you are falling into a trap that is easy to fall into - old == unused
> >
> > One of my favorite stories in the computer restoration community is from 5-10 years ago, when the LCM+L in Seattle was restoring the CDC-6000 that they got from Purdue. Core memory is difficult to get, so they made a card and physical module that could plug into their system that is both electrically and mechanically equivalent using modern semiconductors. A few weeks later they announced that they had the system running and had built this module. They got approached by the USAF asking if they could get a copy of the design. Seems there was still at least one CDC-6600 running a particular critical application somewhere.
> >
> > This is similar to the PDP-11s and Vaxen that are supposed to still be in the hydroelectric grid [a few years ago there was an ad for an RSX and VMS programmer to go to Canada running in the Boston newspapers - I know someone that did a small amount of consulting on that one].
>
> One of my favorite stories along these lines is the train signalling
> system in Melbourne, running on a "PDP-11". I quote PDP-11 because
> that is now virtualized:
> https://www.equicon.de/images/Virtualisierung/LegacyTrainControlSystemStabi…
>
> Indeed older systems show up in surprising places. I was once on the
> bridge of a US Naval vessel in the late '00s and saw a SPARCstation 20
> running Solaris (with the CDE login screen). I don't recall what it
> was doing, but it was a tad surprising.
>
> I do worry about legacy systems in critical situations, but then, I've
> been in a firefight when the damned tactical computer with the satcomm
> link rebooted and we didn't have VHF comms with the battlespace owner.
> That was not particularly fun.
>
> - Dan C.
So, all the shell-portability talk on TUHS reminds me of something I believe I saw back in the 90s, and then failed to find a few years ago when I went looking.
But my Google-fu is not great, so maybe I just didn't look in the right place.
I was trying to port Frotz to TOPS-20, because I wanted to run the Infocom games on TOPS-20 on an emulated PDP-10. (The only further causality to the chain was a warning in the Frotz sources that it assumed 8-bit bytes and if you wanted to try to port it to a 36-bit environment, good luck; this is the difference between "stuff I do for fun" and "stuff that needs a business justification".) I had a K&R C compiler available, but the sources were all ANSI C.
I had remembered that deprotoize had been part of an early GCC, and I did manage to find deprotoize sources, buried, I think, in some dusty piece of the Apple toolchain. But I also have a vague memory that GCC at some point probably in the mid-to-late 1990s came with something that was halfway between autoconf and Perl's bootstrapper. I *think* it was a bunch of shell scripts that could put together a minimal C subset compiler, which then could be used to build the rest of GCC. I'm pretty sure it was released as a reaction to the unbundling of C compilers when Unix vendors realized that was a thing they could do.
I could not find that thing at all.
Did I hallucinate it? It seems like it would have been an immensely useful tool at the time.
I ended up writing my own very half-assed deprotoizer and symbol mangler (only the first six characters of the function name were significant, because that's how the TOPS-20 linker works, and I don't know that I could have gotten past that even with an ANSI compiler without having to do significant toolchain work) which got me over the hump, but I have remained curious whether there really was that nifty GCC bootstrapper or whether I made that up.
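The six-character-significance problem is easy to sketch, at least in toy form. This is not my actual mangler, just an illustration (in Python, for brevity) of the renaming a linker like that forces on you:

```python
def mangle(names, significant=6):
    """Rename symbols for a linker that only honors the first
    `significant` characters of each name. First user of a prefix keeps
    its name; later collisions get a numeric suffix packed inside the
    significant region. Toy version: it doesn't re-check the renamed
    form against every other name."""
    seen = {}
    result = {}
    for name in names:
        prefix = name[:significant]
        n = seen.get(prefix, 0)
        seen[prefix] = n + 1
        result[name] = name if n == 0 else name[:significant - 2] + "%02d" % n
    return result

print(mangle(["restart_game", "restart_timer"]))
# -> {'restart_game': 'restart_game', 'restart_timer': 'rest01'}
```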
Adam
Hi Branden,
> Paul Ruizendaal wrote:
> > That was my immediate pain point in doing the D1 SoC port.
> > Unfortunately, the manufacturer only released the DRAM init code as
> > compiler ‘-S’ output and the 1,400 page datasheet does not discuss
> > its registers. Maybe this is a-typical, as I heard in the above
> > keynote that NXP provides 8,000 page datasheets with their SoC’s.
...
> I don't think it's atypical. I was pretty annoyed trying to use the
> data sheet to program a simple timer chip on the ODROID-C2
...
> OS nerds don't generally handle procurement themselves. Instead,
> purchasing managers do, and those people don't have to face the pain.
...
> Data sheets are only as good as they need to be to move the product,
> which means they don't need to be good at all, since the people who
> pay for them look only at the advertised feature list and the price.
I think it comes down to the background of the chip designer. I've
always found NXP very good: their documentation of a chip is extensive;
it doesn't rely on referring to external source code; and they're
responsive when I've found the occasional error, both confirming the
correction and committing to its future publication.
On the other hand, TI left a bad taste. The documentation isn't good
and they rely on a forum to mop up all the problems but it's pot luck
which staffer answers and perennial problems can easily be found by a
forum search, never with a satisfactory answer.
My guess is Allwinner, maker of Paul's D1 SoC, has a language barrier
and a very fast-moving market to dissuade them from putting too much
effort into documentation. Many simpler chips from China, e.g. a JPEG
encoder, come with a couple of pages listing features and some C written
by a chip designer or copied from a rival.
In my experience, chip selection is done by technical people, not
procurement. It's too complex a task, even just choosing from those of
one supplier like NXP, as there is often a compromise to make which
affects the rest of the board design. That's where FPGAs have an
allure, but unfortunately not in low-power designs.
--
Cheers, Ralph.
> On Dec 31, 2022, at 6:40 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
>
> All true except for the Forth choice. It's as bad, maybe worse, as
> choosing Tcl for your language. I've written a ton of Tcl but I
> need the Tk GUI part so I put up with Tcl to get it. I'd never
> push Tcl as a language that other people had to use. Same thing
> with Forth.
>
> I dunno what I'd pick, Perl in the old days, Python now (not that
> I care for Python but everyone can program it). Just pick something
> that is trivial for someone to pick up.
(Moved to COFF)
I rather like FORTH. Its chief virtues are that it is both tiny and extensible. It was developed as a telescope control language, as I recall, and in highly constrained environments gives you a great deal of expressivity for a teeny tiny bit of interpreter code. I adored my HP 28S and still do: that was Peak Calculator, and its UI is basically a FORTH interpreter (which also, of course, functions just fine as an RPN calculator if you don't want to bother with flow control constructs).
But I also make the slightly more controversial claim that FORTH is just LISP stood up on end.
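To make the "tiny but expressive" point concrete, here is a toy sketch of a FORTH-style interpreter in Python (a stand-in, of course, not real FORTH): a stack plus a word dictionary is essentially the whole machine, and the prefix form `(+ 1 (* 2 3))` becomes the postfix `3 2 * 1 +`.

```python
# Toy illustration of how little code a FORTH-style interpreter needs:
# a data stack and a dictionary of "words" are essentially everything.
import operator

def forth(source, stack=None):
    """Evaluate a whitespace-separated RPN string; return the stack."""
    stack = stack if stack is not None else []
    words = {
        "+": operator.add, "-": operator.sub,
        "*": operator.mul, "/": operator.floordiv,
    }
    for tok in source.split():
        if tok in words:
            b, a = stack.pop(), stack.pop()   # top of stack is rightmost operand
            stack.append(words[tok](a, b))
        else:
            stack.append(int(tok))            # anything else is a literal
    return stack

print(forth("3 2 * 1 +"))  # the LISP form (+ 1 (* 2 3)) stood up on end
```

A real FORTH adds defining words and direct memory access, but the core loop really is about this small.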
These days I think the right choice for those sorts of applications would be Micropython. Yes, a full-on Python interpreter is heavyweight, but Micropython gives you a lot of functionality in (comparatively) little space. It runs fine on a $4 Pi Pico, for instance, which has 264 KB of RAM.
And if you find yourself missing TCL, there's always Powershell, which is like what would happen if bash and TCL had a really ugly baby that just wouldn't shut up. The amazing thing is that access to all the system DLLs makes it *almost* worth putting up with Powershell.
Adam
Hi Larry,
> I hate Python because there is no printf in the base language.
There's print(), with the format-string part promoted into the
language as the ‘%’ operator when the left operand is a string;
the ‘%’ expression returns a string.
>>> '%10d' % (6*7)
'        42'
>>> '%s %s!\n' % ('hello', 'world')
'hello world!\n'
>>> '%10d' % 6*7
'         6         6         6         6         6         6         6'
>>> print('foo')
foo
>>> print('%.3s' % 'barxyzzy')
bar
>>>
So it's similar to AWK's and Perl's sprintf() and Go's fmt.Sprintf() in
that it returns a dynamically allocated string.
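If one really misses printf itself, the pieces above compose into one in a couple of lines (a hedged sketch; the name printf here is my own, not anything in the standard library):

```python
import sys

def printf(fmt, *args):
    """Minimal printf built on the '%' operator.

    Writes to stdout with no implicit newline, unlike print().
    """
    sys.stdout.write(fmt % args)

printf("%s has %d users\n", "coff", 42)
```

This is roughly what the C call would do; the only difference from print('...' % args) is that no trailing newline or separator is added for you.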
--
Cheers, Ralph.
Sharing because I can't justify this sort of purchase but perhaps someone is in with or knows someone who works at a museum or well-funded research outfit that might be keen on this sort of thing:
https://www.biblio.com/book/extraordinary-archive-original-house-publicatio…
Many books from OSU, lots of Digital literature for various PDPs, NBS reports, some Tektronix stuff, and a host of OSU-produced computing literature, plus more. This came up while I was searching for UNIX docs but I don't actually see anything UNIX there besides a mention that the PDP-11 ran it.
The listing doesn't explicitly rule out cherry-picking, so if contacted the seller may be amenable to working on a subset. Either way, I figured this would pique the interest of a few folks around here.
I also have to wonder if this bears any relation to my acquisition of the UNIX System V literature set this past year. That was likewise sourced from OSU, from a retiring professor who mentioned oodles of old tech materials they were going to be auctioning off over the next few years; this could very well be from that same stash.
- Matt G.
Hi,
Branden wrote:
> the amount of space in the instruction for encoding registers seems to
> me to have played a major role in the design of the RV32I/E and C
> (compressed) extension instruction formats of RISC-V.
Before RISC-V, the need for code density caused ARM to move away from
the original, highly regular instruction format of one 32-bit word per
instruction to Thumb and then Thumb-2 encoding. IIRC, Thumb moved to
16 bits per instruction, which Thumb-2 expanded to also allow some
32-bit instructions. The mobile market was getting going, and the
storage options and their financial and power costs meant code density
mattered more.
The original ARM instructions had the top four bits hold the ‘condition
code’ which decided if the instruction was executed, thus the top hex
nibble was readable.
 0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
eq ne cs cc mi pl vs vc hi ls ge lt gt le al nv
Data processing instructions, like
and rd, rn, rm ; d = n & m
aligned each of the four-bits to identify which of the sixteen registers
were used on nibble boundaries so again it was readable as hex.
xxxx000a aaaSnnnn ddddcccc ctttmmmm
The ‘a aaa’ above wasn't aligned, but still neatly picked which of the
sixteen data-processing instructions was used.
and eor sub rsb add adc sbc rsc tst teq cmp cmn orr mov bic mvn
And so it went on. A SoftWare Interrupt had an aligned 1111 to select
it and the low twenty-four bits as the interrupt number.
xxxx1111 yyyyyyyy yyyyyyyy yyyyyyyy
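The nibble alignment can be made concrete with a short Python sketch that picks the readable fields out of a data-processing word using the layout above (field positions follow the ‘xxxx000a aaaSnnnn ddddcccc ctttmmmm’ diagram; checking that bits 27–25 really are 000 is omitted for brevity):

```python
# Condition mnemonics indexed by the top hex nibble of an ARM instruction.
COND = "eq ne cs cc mi pl vs vc hi ls ge lt gt le al nv".split()
# Data-processing opcodes, the 'a aaa' bits (24..21).
DPOP = ("and eor sub rsb add adc sbc rsc "
        "tst teq cmp cmn orr mov bic mvn").split()

def decode_dp(insn):
    """Pick apart a 32-bit ARM data-processing instruction word."""
    return {
        "cond": COND[insn >> 28],          # top hex nibble: condition
        "op":   DPOP[(insn >> 21) & 0xF],  # which of the 16 operations
        "rn":   (insn >> 16) & 0xF,        # nibble-aligned registers
        "rd":   (insn >> 12) & 0xF,
        "rm":   insn & 0xF,
    }

print(decode_dp(0xE0000001))  # and r0, r0, r1 -- 'al': always executed
```

Every field but the opcode falls on a hex-digit boundary, which is why a raw word like E0000001 is readable straight off a hex dump.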
I assume this neat arrangement helped keep the decoding circuitry small
leading to a simpler design and lower power consumption. The latter was
important because Acorn, the ARM chip's designer, wanted a cheaper
plastic case rather than ceramic so they set a design limit of 1 W. Due
to the poor tooling available, it came in at 0.1 W after allowing for a
margin of error. This was so low that Acorn were surprised when an
early board ran without power connected to the ARM; they found it was
clocking just from the leakage of the surrounding support chips.
Also, Acorn's Roger Wilson who designed the ARM's instruction set was an
expert assembly programmer, e.g. he wrote the 16 KiB BASIC ROM for 6502,
so he approached it from the programmer's viewpoint as well as the chip
designer he became.
Thumb and Thumb-2 naturally had to destroy all this so instructions are
now not orthogonal. Having coded swtch() in assembler for various ARM
Cortex M-..., it's a pain to keep checking which instructions are
available on a given model and which registers they can access. On the
ARM 2, there were few rules to remember and writing assembler was fun.
--
Cheers, Ralph.
I’m wondering if anyone here is able to assist …
tvtwm is Tom LaStrange’s modified version of the twm window manager: Tom’s Virtual TWM. Somewhat unusually, tvtwm modelled the screen as a single large window and the display was a viewport that could be shifted around it, rather than the now-standard model of distinct virtual desktops.
I’m trying to rebuild the history of its releases. I have the initial release, and the first 6 patches Tom published via comp.sources.x, the 7th patch from the X11R5 contrib tape, the 10th patch from the X11R6 contrib tape, the widely available patch 11, and an incomplete patch 12 via personal email from Chris Ross, who took over maintenance from Tom at some point.
So, I’m looking for patch 8 and/or patch 9 (either one would do, since I can reconstruct the other from what I have plus one).
I’ve failed to find either of them. I’m not sure how they were distributed, and my searches have proven fruitless so far.
Does anyone here happen to have a trove of X11 window manager source code tucked away?
Thanks in advance,
d
Forwarded separately as original bounced due to use of old 'minnie'
address. Sorry :-(
---------- Forwarded message ---------
From: Rudi Blom <rudi.j.blom(a)gmail.com>
Date: Thu, 15 Dec 2022 at 10:18
Subject: [COFF] Re: DevOps/SRE [was Re: [TUHS] Re: LOC [was Re: Re: Re.:
Princeton's "Unix: An Oral History": who was in the team in "The Attic"?]
To: <mparson(a)bl.org>
Cc: coff <coff(a)minnie.tuhs.org>
<snip>
Our basic tooling is github enterprise for source and saltstack is our
config management/automation framework.
Their work-flow is supposed to basically be:
<snip>
Makes me wonder how current twitter deadlines are affecting 'quality', of
the code that is. Twits and tweets are a different matter :-)
Cheers,
uncle rubl
--
The more I learn the better I understand I know nothing.
On 14 Dec 2022 06:54 -0500, from brad(a)anduin.eldar.org (Brad Spencer):
> [...] but you needed to know 6809 or 68000 assembly to create anything
> new for the OS itself,
Wasn't that the norm at the time, though? As I recall one of the
things that really set UNIX apart from other operating systems up
until about the early 1990s was precisely how machine-independent it
was by virtue of (with the exception of the early versions) having
been written in something other than assembler.
--
✍ Michael Kjörling 🏡 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
Since I haven't seen it mentioned here: According to various sources,
Fred Brooks passed away on 17th November - see
https://en.wikipedia.org/wiki/Fred_Brooks
--
Peter Jeremy