[bumping to COFF]
On Wed, Mar 8, 2023 at 2:05 PM ron minnich <rminnich(a)gmail.com> wrote:
> The wheel of reincarnation discussion got me to thinking:
>
> What I'm seeing is reversing the rotation of the wheel of reincarnation. Instead of pulling the task (e.g. graphics) from a special purpose device back into the general purpose domain, the general purpose computing domain is pushed into the special purpose device.
>
> I first saw this almost 10 years ago with a WLAN modem chip that ran linux on its 4 core cpu, all of it in a tiny package. It was faster, better, and cheaper than its traditional embedded predecessor -- because the software stack was less dedicated and single-company-created. Take Linux, add some stuff, voila! WLAN modem.
>
> Now I'm seeing it in peripheral devices that have, not one, but several independent SoCs, all running Linux, on one card. There's even been a recent remote code exploit on, ... an LCD panel.
>
> Any of these little devices, with the better part of a 1G flash and a large part of 1G DRAM, dwarfs anything Unix ever ran on. And there are more and more of them, all over the little PCB in a laptop.
>
> The evolution of platforms like laptops to becoming full distributed systems continues.
> The wheel of reincarnation spins counter clockwise -- or sideways?
About a year ago, I ran across an email written a decade or more prior
on some mainframe mailing list where someone wrote something like,
"wow! It just occurred to me that my Athlon machine is faster than the
ES/3090-600J I used in 1989!" Some guy responded angrily, rising to
the wounded honor of IBM, raving about how preposterous this was
because the mainframe could handle a thousand users logged in at one
time and there's no way this Linux box could ever do that.
I was struck by the absurdity of that; it's such a ridiculous
non-comparison. The mainframe had layers of terminal concentrators,
3270 controllers, IO controllers, etc, etc, and a software ecosystem
that made heavy use of all of that, all to keep user interaction _off_
of the actual CPU (I guess freeing that up to run COBOL programs in
batch mode...); it's not as though every time a mainframe user typed
something into a form on their terminal it interrupted the primary
CPU.
Of course, the first guy was right: the AMD machine probably _was_
more capable than a 3090 in terms of CPU performance, RAM and storage
capacity, and raw bandwidth between the CPU and IO subsystems. But the
3090 was really more like a distributed system than the Athlon box
was, with all sorts of offload capabilities. For that matter, a
thousand users probably _could_ telnet into the Athlon system. With
telnet in line mode, it'd probably even be decently responsive.
So often it seems to me like end-user systems are just continuing to
adopt "large system" techniques. Nothing new under the sun.
> I'm no longer sure the whole idea of the wheel or reincarnation is even applicable.
I often feel like the wheel has fallen onto its side, and we're
continually picking it up from the edge and flipping it over, ad
nauseam.
- Dan C.
Hi Steffen,
COFF'd.
> Very often i find myself needing a restart necessity, so "continue
> N" would that be. Then again when "N" is a number instead of
> a label this is a (let alone maintainance) mess but for shortest
> code paths.
Do you mean ‘continue’, which re-tests the condition, or more like Perl's
‘redo’, which re-starts the loop's body?
‘The "redo" command restarts the loop block without evaluating the
conditional again. The "continue" block, if any, is not executed.’
— perldoc -f redo
So like a ‘goto redo’ in
    while (...) {
    redo:
        ...
        if (...)
            goto redo;
        ...
    }
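To make the contrast concrete, here is a small compilable C sketch (my
own made-up example, not anything from Steffen's code): ‘continue’ goes
back to the loop's test, while the goto-based redo re-enters the body
without re-testing.

    #include <stdio.h>

    int main(void)
    {
        /* "continue": control returns to the loop's increment and test. */
        for (int i = 0; i < 3; i++) {
            if (i == 1)
                continue;            /* i is incremented and re-tested */
            printf("continue loop: i = %d\n", i);
        }

        /* "redo": the body runs again; the test is not re-evaluated. */
        int tries = 0;
        for (int i = 0; i < 3; i++) {
        redo:
            printf("redo loop: i = %d, tries = %d\n", i, tries);
            if (i == 1 && tries++ < 2)
                goto redo;           /* re-enter the body for the same i */
        }
        return 0;
    }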
--
Cheers, Ralph.
TLDR exceptions don't make it better, they make it different.
The Mesa and Cedar languages at PARC CSL were intended to be "Systems Languages" and fully embraced exceptions.
The problem is that it is extremely tempting for the author of a library to use them, and equally tempting for the authors of library calls used by the first library, and so on.
At the application level, literally anything can happen on any call.
The Cedar OS was a library OS, where applications ran in the same address space, since there was no VM. In 1982 or so I set out to write a shell for it and was determined that, regardless of what happened, the shell should not crash, so I tried to guard every single call with handlers for every exception it could raise.
This was an immensely frustrating process, because while the language suggested that the author of a library catch exceptions as they passed through and translate them into one at the package level, that is a terrible idea in its own way: you can't debug, because the state of the ultimate problem has been lost. So no one did this, and at the top level literally any exception could occur.
Another thing that happens with exceptions is that programmers get the bright idea to use them for conditions which are uncommon, but expected, so any user of the function has to write complicated code to deal with these cases.
On the whole, I came away with a great deal of grudging respect for ERRNO as striking a great balance between ease of use and specificity.
I also evolved Larry's Theory of Exceptions, which is that it is the programmer's job to sort exceptional conditions into actionable categories: (1) resolvable by the user (bad arguments); (2) temporary (out of network sockets or whatever); (3) resolvable by the sysadmin (config); (4) a real bug, resolvable by the author.
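As a rough sketch of what that sorting can look like with plain errno
(the bucket assignments below are my own illustrative guesses, not
Larry's):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Sort an errno from open(2) into the four buckets above. */
    static const char *classify(int err)
    {
        switch (err) {
        case EINVAL:
        case ENOENT:  return "user: check the path or arguments you gave";
        case EAGAIN:
        case EMFILE:
        case ENFILE:  return "temporary: out of descriptors, try again later";
        case EACCES:
        case EROFS:   return "sysadmin: permissions or mount configuration";
        default:      return "bug: please report this to the author";
        }
    }

    int main(int argc, char **argv)
    {
        int fd = open(argc > 1 ? argv[1] : "prog.conf", O_RDONLY);
        if (fd < 0) {
            fprintf(stderr, "open: %s (%s)\n", strerror(errno), classify(errno));
            return 1;
        }
        close(fd);
        return 0;
    }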
The usual practice of course is the popup "Received unknown error, OK?"
-Larry
> On Mar 10, 2023, at 8:15 AM, Ralph Corderoy <ralph(a)inputplus.co.uk> wrote:
>
> Hi Noel,
>
>>> if you say above that most people are unfamiliar with them due to
>>> their use of goto then that's probably wrong
>>
>> I didn't say that.
>
> Thanks for clarifying; I did know it was a possibility.
>
>> I was just astonished that in a long thread about handling exceptional
>> conditions, nobody had mentioned . . . exceptions. Clearly, either
>> unfamiliarity (perhaps because not many laguages provide them - as you
>> point out, Go does not), or not top of mind.
>
> Or perhaps those happy to use gotos also tend to be those who dislike
> exceptions. :-)
>
> Anyway, I'm off-TUHS-pic so follow-ups set to goto COFF.
>
> --
> Cheers, Ralph.
On Fri, Mar 10, 2023 at 6:15 AM Ralph Corderoy <ralph(a)inputplus.co.uk>
wrote:
> Hi Noel,
>
> > > if you say above that most people are unfamiliar with them due to
> > > their use of goto then that's probably wrong
> >
> > I didn't say that.
>
> Thanks for clarifying; I did know it was a possibility.
>
Exception handling is a great leap sideways. It's a supercharged goto with
steroids on top. In some ways more constrained, in other ways more prone to
abuse.
Example:
I diagnosed performance problems in a program that would call into
'waiting' threads that would read data from a pipe and then queue work.
Easy, simple, straightforward design. Except they used exceptions to
process the packets rather than a proper lockless producer/consumer
queue.
Exceptions are great for keeping the code linear, logically ignoring error
conditions while still having them handled "somewhere" above the current
code, and writing the code such that when it gets an abort, partial work
is cleaned up and trashed.
Global exception handlers are both good and bad. All errors become
tracebacks to where they occurred. People often don't disambiguate between
expected and unexpected exceptions, so programming errors get lumped in
with remote devices committing protocol errors, which get lumped in with
the typo in your config file that means /dve/ttyU2 doesn't exist. It can
be hard for the user to know what comes next when it's all jumbled
together. In-line error handling, at least, can catch the expected things
and give a more reasonable error near to where it happened, so I know
whether my next step is vi prog.conf or email support(a)prog.com.
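As a made-up illustration of that in-line style (the file names and the
single ENOENT case are just invented for the example):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    /* Report the expected failure where it happens, so the user knows
     * whether to reach for vi or for email. */
    static int open_serial(const char *path)
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0) {
            if (errno == ENOENT)
                fprintf(stderr, "%s does not exist; check the port line "
                    "in prog.conf\n", path);
            else
                fprintf(stderr, "open %s: %s (please report this)\n",
                    path, strerror(errno));
        }
        return fd;
    }

    int main(int argc, char **argv)
    {
        return open_serial(argc > 1 ? argv[1] : "/dve/ttyU2") < 0;
    }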
So it's a hate-hate relationship with both. What do I hate the least?
That's a three-drink minimum for the answer.
Warner
(Moving to COFF)
On Mon, Mar 06, 2023 at 03:24:29PM -0800, Larry McVoy wrote:
> But even that seems suspect, I would think they could put some logic
> in there that just doesn't feed power to the GPU if you aren't using
> it but maybe that's harder than I think.
>
> If it's not about power then I don't get it, there are tons of transistors
> waiting to be used, they could easily plunk down a bunch of GPUs on the
> same die so why not? Maybe the dev timelines are completely different
> (I suspect not, I'm just grabbing at straws).
Other potential reasons:
1) Moving functionality off-CPU also allows for those devices to have
their own specialized video memory that might be faster (SDRAM) or
dual-ported (VRAM) without having to add that complexity to the more
general system DRAM and/or the CPU's Northbridge.
2) In some cases, having an off-chip co-processor may not need any
access to the system memory at all. An example of this is the "bump
in the wire" in-line crypto engines (ICE) which sit between the
Southbridge and the eMMC/UFS flash storage device. If you are using an
Android device, it's likely to have an ICE. The big advantage is that
it avoids needing a bounce buffer on the write path, where the
file system encryption layer has to copy-and-encrypt data from the
page cache to a bounce buffer, and the encrypted block then gets
DMA'ed to the storage device. (A toy sketch of the two write paths
appears at the end of this message.)
3) From an architectural perspective, not all use cases need various
co-processors, whether for doing cryptography, running some kind of
machine-learning module, or manipulating images to simulate bokeh,
create HDR images, etc. While RISC-V does have the concept of
instruction set extensions, which can be developed without getting
permission from the "owners" of the core CPU ISA (e.g., ARM, Intel,
etc.), it's a lot more convenient for someone who doesn't need to bend
the knee to ARM, Inc. (or their new corporate overlords) or Intel, to
simply put that extension outside the core ISA.
(More recently, there has been an interesting lawsuit about whether it's
"allowed" to put a 3rd party co-processor on the same SoC without
paying $$$$$ to the corporate overlord, which may make this point moot
--- although it might cause people to simply switch to another ISA
that doesn't have this kind of lawsuit-happy rent-seeking....)
In any case, if you don't need to play Quake at 240 frames per
second, then there's no point putting the GPU into the core CPU
architecture. It may also turn out that the kind of co-processor which
is optimized for running ML models is different, and it is often
easier to change the programming model for a GPU than to change a
CPU's ISA.
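Going back to point 2), here is a toy user-space model of the
bounce-buffer copy versus the inline-crypto write path. It is purely
illustrative: "encryption" is an XOR, "DMA" is a memcpy, and none of
the function names correspond to real kernel interfaces.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLK 16

    /* Stand-in for real encryption; good enough to show the data flow. */
    static void toy_encrypt(uint8_t *dst, const uint8_t *src, uint8_t key)
    {
        for (int i = 0; i < BLK; i++)
            dst[i] = src[i] ^ key;
    }

    /* Software FS encryption: page cache -> bounce buffer -> "DMA". */
    static void write_sw_crypto(const uint8_t *page_cache, uint8_t *device)
    {
        uint8_t bounce[BLK];                  /* the extra copy */
        toy_encrypt(bounce, page_cache, 0x5a);
        memcpy(device, bounce, BLK);          /* ciphertext is DMA'ed */
    }

    /* Inline crypto engine: "DMA" straight from the page cache; the ICE
     * between the controller and the flash encrypts the data in flight. */
    static void write_ice(const uint8_t *page_cache, uint8_t *device)
    {
        toy_encrypt(device, page_cache, 0x5a);   /* done by the hardware */
    }

    int main(void)
    {
        uint8_t cache[BLK] = "hello, pagecach";
        uint8_t dev1[BLK], dev2[BLK];
        write_sw_crypto(cache, dev1);
        write_ice(cache, dev2);
        printf("both paths store the same bytes: %s\n",
               memcmp(dev1, dev2, BLK) ? "no" : "yes");
        return 0;
    }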
- Ted
Hi Phil,
Copying to the COFF list, hope that's okay. I thought it might interest
them.
> > $ units -1v '26^3 16 bit' 64KiB
>
> Works only for GNU units.
That's interesting, thanks.
I've access to a FreeBSD 12.3-RELEASE-p6, if that version number means
something to you. Its units groks ^ to mean power when applied to a
unit, as the fine units(1) says, but not to a number. Whereas * works.
    $ units yd^3 ft^3
            * 27
            / 0.037037037
    $
    $ units 6\*7 21
            * 2
            / 0.5
    $
    $ units 2^4 64
            * 0.03125
            / 32
    $
The last one silently treats 2^4 as 2; I'd say that's a bug.
It has Ki- and byte, allowing
    $ units -t Kibyte bit
    8192
but lacks GNU's
    B       byte
Fair enough, though I think that's common enough now to be included.
FreeBSD also seems to have another bug: demanding a space between the
quantity and the unit for fundamental ‘!’ units.
    $ units m 8m
    conformability error
            1 m
            8
    $ units m '8 m'
            * 0.125
            / 8
    $
I found this when attempting the obvious
    $ units Kibyte 8bit
    conformability error
            8192 bit
            8
    $ units Kibyte '8 bit'
            * 1024
            / 0.0009765625
    $
Whilst I'm not a GNU acolyte, in this case its version of units does
seem to have had a bit more TLC. :-)
--
Cheers, Ralph.
John Cowan <cowan(a)ccil.org> writes:
>> which Rob Austein re-wrote into "Alice's PDP-10".
> I didn't know that one was done at MIT.
This spells out the details:
https://www.hactrn.net/sra/alice/alice.glossary