Hi Steffen,
COFF'd.
> Very often i find myself needing a restart necessity, so "continue
> N" would that be. Then again when "N" is a number instead of
> a label this is a (let alone maintainance) mess but for shortest
> code paths.
Do you mean ‘continue’ which re-tests the condition or more like Perl's
‘redo’ which re-starts the loop's body?
‘The "redo" command restarts the loop block without evaluating the
conditional again. The "continue" block, if any, is not executed.’
— perldoc -f redo
So like a ‘goto redo’ in
    while (...) {
    redo:
        ...
        if (...)
            goto redo;
        ...
    }
--
Cheers, Ralph.
TLDR exceptions don't make it better, they make it different.
The Mesa and Cedar languages at PARC CSL were intended to be "Systems Languages" and fully embraced exceptions.
The problem is that it is extremely tempting for the author of a library to use them, and equally tempting for the authors of library calls used by the first library, and so on.
At the application level, literally anything can happen on any call.
The Cedar OS was a library OS, where applications ran in the same address space, since there was no VM. In 1982 or so I set out to write a shell for it, and was determined that regardless of what happened, the shell should not crash, so I guarded every single call with handlers for every exception it could raise.
This was an immensely frustrating process because while the language suggested that the author of a library capture exceptions on the way by and translate them to one at the package level, this is a terrible idea in its own way, because you can't debug: the state of the ultimate problem was lost. So no one did this, and at the top level, literally any exception could occur.
Another thing that happens with exceptions is that programmers get the bright idea to use them for conditions which are uncommon, but expected, so any user of the function has to write complicated code to deal with these cases.
On the whole, I came away with a great deal of grudging respect for ERRNO as striking a great balance between ease of use and specificity.
I also evolved Larry's Theory of Exceptions, which is that it is the
programmer's job to sort exceptional conditions into actionable
categories: (1) resolvable by the user (bad arguments), (2) temporary
(out of network sockets or whatever), (3) resolvable by the sysadmin
(config), (4) a real bug, resolvable by the author.
The usual practice, of course, is the popup "Received unknown error, OK?"
-Larry
> On Mar 10, 2023, at 8:15 AM, Ralph Corderoy <ralph(a)inputplus.co.uk> wrote:
>
> Hi Noel,
>
>>> if you say above that most people are unfamiliar with them due to
>>> their use of goto then that's probably wrong
>>
>> I didn't say that.
>
> Thanks for clarifying; I did know it was a possibility.
>
>> I was just astonished that in a long thread about handling exceptional
>> conditions, nobody had mentioned . . . exceptions. Clearly, either
> unfamiliarity (perhaps because not many languages provide them - as you
>> point out, Go does not), or not top of mind.
>
> Or perhaps those happy to use gotos also tend to be those who dislike
> exceptions. :-)
>
> Anyway, I'm off-TUHS-topic so follow-ups set to goto COFF.
>
> --
> Cheers, Ralph.
On Fri, Mar 10, 2023 at 6:15 AM Ralph Corderoy <ralph(a)inputplus.co.uk>
wrote:
> Hi Noel,
>
> > > if you say above that most people are unfamiliar with them due to
> > > their use of goto then that's probably wrong
> >
> > I didn't say that.
>
> Thanks for clarifying; I did know it was a possibility.
>
Exception handling is a great leap sideways. It's a supercharged goto
with steroids on top. In some ways more constrained, in other ways more
prone to abuse.
Example:
I diagnosed performance problems in a program that would call into
'waiting' threads that would read data from a pipe and then queue work.
Easy, simple, straightforward design. Except they used exceptions to then
process the packets rather than having a proper lockless producer /
consumer queue.
Exceptions are great for keeping the code linear and ignoring error
conditions logically, but still having them handled "somewhere" above the
current code and writing the code such that when it gets an abort, partial
work is cleaned up and trashed.
Global exception handlers are both good and bad. All errors become
tracebacks to where the error occurred. People often don't disambiguate
between expected and unexpected exceptions, so programming errors get
lumped in with remote devices committing protocol errors, which in turn
get lumped in with your config file having a typo so that /dve/ttyU2
doesn't exist. It can be hard for the user to know what comes next when
it's all jumbled together. In-line error handling, at least, can catch
the expected things and give a more reasonable error near to where it
happened, so I know whether my next step is vi prog.conf or email
support(a)prog.com.
So it's a hate-hate relationship with both. What do I hate the least?
That's a three drink minimum for the answer.
Warner
(Moving to COFF)
On Mon, Mar 06, 2023 at 03:24:29PM -0800, Larry McVoy wrote:
> But even that seems suspect, I would think they could put some logic
> in there that just doesn't feed power to the GPU if you aren't using
> it but maybe that's harder than I think.
>
> If it's not about power then I don't get it, there are tons of transistors
> waiting to be used, they could easily plunk down a bunch of GPUs on the
> same die so why not? Maybe the dev timelines are completely different
> (I suspect not, I'm just grabbing at straws).
Other potential reasons:
1) Moving functionality off-CPU also allows for those devices to have
their own specialized video memory that might be faster (SDRAM) or
dual-ported (VRAM) without having to add that complexity to the more
general system DRAM and/or the CPU's Northbridge.
2) In some cases, an off-chip co-processor may not need any access to
the system memory at all. An example of this is the "bump in the wire"
in-line crypto engine (ICE), which is located between the Southbridge
and the eMMC/UFS flash storage device. If you are using an Android
device, it's likely to have an ICE. The big advantage is that
it avoids needing to have a bounce buffer on the write path, where the
file system encryption layer has to copy-and-encrypt data from the
page cache to a bounce buffer, and then the encrypted block will then
get DMA'ed to the storage device.
3) From an architectural perspective, not all use cases need various
co-processors, whether for doing cryptography, running some kind of
machine-learning model, or manipulating images to simulate bokeh,
create HDR images, etc. While RISC-V does have the concept of
instruction set extensions, which can be developed without getting
permission from the "owners" of the core CPU ISA (e.g., ARM, Intel,
etc.), it's a lot more convenient for someone who doesn't need to bend
the knee to ARM, Inc. (or their new corporate overlords) or Intel to
simply put that extension outside the core ISA.
(More recently, there is an interesting lawsuit about whether it's
"allowed" to put a 3rd party co-processor on the same SOC without
paying $$$$$ to the corporate overlord, which may make this point moot
--- although it might cause people to simply switch to another ISA
that doesn't have this kind of lawsuit-happy rent-seeking....)
In any case, if you don't need to play Quake with 240 frames per
second, then there's no point putting the GPU in the core CPU
architecture, and it may turn out that the kind of co-processor which
is optimized for running ML models is different, and it is often
easier to make changes to the programming model for a GPU, compared to
making changes to a CPU's ISA.
- Ted
Hi Phil,
Copying to the COFF list, hope that's okay. I thought it might interest
them.
> > $ units -1v '26^3 16 bit' 64KiB
>
> Works only for GNU units.
That's interesting, thanks.
I've access to a FreeBSD 12.3-RELEASE-p6, if that version number means
something to you. Its units groks ^ to mean power when applied to a
unit, as the fine units(1) manual says, but not to a number, whereas
* works.
$ units yd^3 ft^3
* 27
/ 0.037037037
$
$ units 6\*7 21
* 2
/ 0.5
$
$ units 2^4 64
* 0.03125
/ 32
$
The last one silently treats 2^4 as 2; I'd say that's a bug.
It has Ki- and byte allowing
$ units -t Kibyte bit
8192
but lacks GNU's
B byte
Fair enough, though I think that's common enough now to be included.
FreeBSD also seems to have another bug: demanding a space between the
quantity and the unit for fundamental ‘!’ units.
$ units m 8m
conformability error
1 m
8
$ units m '8 m'
* 0.125
/ 8
$
I found this when attempting the obvious
$ units Kibyte 8bit
conformability error
8192 bit
8
$ units Kibyte '8 bit'
* 1024
/ 0.0009765625
$
Whilst I'm not a GNU acolyte, in this case its version of units does
seem to have had a bit more TLC. :-)
--
Cheers, Ralph.
John Cowan <cowan(a)ccil.org> writes:
>> which Rob Austein re-wrote into "Alice's PDP-10".
> I didn't know that one was done at MIT.
This spells out the details:
https://www.hactrn.net/sra/alice/alice.glossary
[COFF]
On Mon, Feb 27, 2023 at 4:16 PM Chet Ramey <chet.ramey(a)case.edu> wrote:
> On 2/27/23 4:01 PM, segaloco wrote:
> > The official Rust book lists a blind script grab from a website piped into a shell as their "official" install mechanism.
>
> Well, I suppose if it's from a trustworthy source...
>
> (Sorry, my eyes rolled so hard they're bouncing on the floor right now.)
I find this a little odd. If I go back to O'Reilly books from the
early 90s, there was advice to do all sorts of suspect things in them,
such as fetching random bits and pieces from random FTP servers (or
even using email to fetch tarballs [!!]). Or downloading shell
archives from USENET.
And of course you _can_ download the script and read through it if you want.
And no one forces anyone to use `rustup`. Most vendors ship some
version of Rust through their package management system these days.
- Dan C.
On Mon, Feb 27, 2023 at 5:06 PM KenUnix <ken.unix.guy(a)gmail.com> wrote:
> Have they not heard of common sense? Whenever I get something from git I look through it to
> check for something suspicious before using it and then and only then do I do make install.
Up to what size? What about the dependencies? How about the compiler
that compiles it all?
I have a copy of the Linux kernel I checked out on my machine; it's
many millions of lines of code; sorry, I haven't read all of that. I
often install things using the operating system's package manager; I
haven't read through all that code, either. Life's too short as it is!
> And today's cookie cutter approach to writing software means they are not learning anything
> but copy paste. Where's the innovation?
I imagine that similar complaints were heard when people made the
switch from programming in machine code to symbolic assemblers, and
then again from assembler to higher-level languages (FORTRAN! COBOL!
PL/I!). And so on.
Consider that, perhaps, the innovation is in how those things are all
combined to do something useful for users. My ability to search, read
documents, listen to music, watch real-time video, etc, is way beyond
anything I could do on the machines of the early 90s.
Not everything that the kids do these days is for the better, but not
everything is terrible, either. This list, and TUHS, bluntly, too
often make the mistake of assuming that it is. Innovation didn't stop
in 1989.
- Dan C.
> On Mon, Feb 27, 2023 at 4:22 PM Dan Cross <crossd(a)gmail.com> wrote:
>>
>> [COFF]
>>
>> On Mon, Feb 27, 2023 at 4:16 PM Chet Ramey <chet.ramey(a)case.edu> wrote:
>> > On 2/27/23 4:01 PM, segaloco wrote:
>> > > The official Rust book lists a blind script grab from a website piped into a shell as their "official" install mechanism.
>> >
>> > Well, I suppose if it's from a trustworthy source...
>> >
>> > (Sorry, my eyes rolled so hard they're bouncing on the floor right now.)
>>
>> I find this a little odd. If I go back to O'Reilly books from the
>> early 90s, there was advice to do all sorts of suspect things in them,
>> such as fetching random bits of pieces from random FTP servers (or
>> even using email fetch tarballs [!!]). Or downloading shell archives
>> from USENET.
>>
>> And of course you _can_ download the script and read through it if you want.
>>
>> And no one forces anyone to use `rustup`. Most vendors ship some
>> version of Rust through their package management system these days.
>>
>> - Dan C.
>
>
>
> --
> End of line
> JOB TERMINATED
>
>
On Mon, Feb 27, 2023 at 4:52 PM Michael Stiller <mstiller(a)me.com> wrote:
> > I find this a little odd. If I go back to O'Reilly books from the
> > early 90s, there was advice to do all sorts of suspect things in them,
> > such as fetching random bits of pieces from random FTP servers (or
> > even using email fetch tarballs [!!]). Or downloading shell archives
> > from USENET.
> >
> > And of course you _can_ download the script and read through it if you want.
>
> This does not help, you can detect that on the server and send something else.
What? You've already downloaded the script. Once it's on your local
machine, why would you download it again?
> https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/
If I really wanted to see whether it had been tampered with, perhaps
spin up a sacrificial machine and run,
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | tee the.script | sh
and compare to the output of,
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs > the.script.nopipeshell
- Dan C.
[Redirecting to COFF; TUHS to Bcc:]
On Mon, Feb 27, 2023 at 3:46 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> I see the wisdom in your last line there, I've typed and deleted a response to this email 4 times, each one more convoluted than the last.
>
> The short of my stance though is, as a younger programmer (29), I am certainly not a fan of these trends that are all too common in my generation. That said, I've set foot in one single software-related class in my life (high-school Java class) and so I don't really know what is being taught to folks going the traditional routes. All I know from my one abortive semester of college is that I didn't see a whole lot of reliance on individual exploration of concepts in classes, just everyone working to a one-size-fits-all understanding of how to be a good employee in a given subject area. Of course, this is also influenced by my philosophy and biases and such, and only represents 4-5 months of observation, but if my minimal experience with college is to be believed, I have little faith that educational programs are producing much more than meat filters between StackOverflow and <insert code editor here>. No offense to said meat filters, people gotta work, but there is something lost when the constant march of production torpedoes individual creativity. Then again, do big firms want sophisticated engineers or are we too far gone into assembly line programming with no personal connection to any of the products? I'm glad I'm as personally involved in the stuff I work with, I could see myself slipping into the same patterns of apathy if I was a nameless face in a sea of coders on some project I don't even know the legal name of any given day.
This is an extraordinarily complicated subject, and it's really full
of nuance. In general, I think your categorization is unfair.
It sounds like you had a bad experience in your first semester of
college. I can sympathize; I did too.
But a thing to bear in mind is that in the first year, universities
are taking kids (and yes, they are kids...sorry young folks, I don't
mean that as a pejorative, but consider the context! For most young
people this is their first experience living on their own, their first
_real_ taste of freedom, and the first where they're about to be
subject to rigorous academic expectations without a lot of systemic
support) with wildly uneven academic and social backgrounds and
preparing them for advanced study in a particular field...one that
most haven't even identified for themselves yet. For the precocious
student, this will feel stifling; for many others it will be a
struggle. What, perhaps, you see as lack of intellectual curiosity may
have in fact been the outward manifestations of that struggle.
That said...Things are, legitimately, very different today than they
were when Unix was young. The level of complexity has skyrocketed in
every dimension, and things have gotten to the point where hack upon
hack has congealed into a system that's nearly bursting at the seams.
It's honestly amazing that anything works at all.
That said, good things have been invented since 1985, and the way many
of us "grew up" thinking about problems doesn't always apply anymore.
The world changes; c'est la vie.
- Dan C.
> ------- Original Message -------
> On Monday, February 27th, 2023 at 12:22 PM, arnold(a)skeeve.com <arnold(a)skeeve.com> wrote:
>
>
> > Chet Ramey chet.ramey(a)case.edu wrote:
> >
> > > On 2/27/23 3:04 PM, arnold(a)skeeve.com wrote:
> > >
> > > > IMHO the dependence upon IDEs is crippling; they cut & paste to the
> > > > almost total exclusion of the keyboard, including when shell completion
> > > > would be faster.
> > >
> > > Don't forget cargo-culting by pasting shell commands they got from the web
> > > and barely understand, if at all.
> >
> >
> > Yeah, really.
> >
> > I do what I can, but it's a very steep uphill battle, as most
> > don't even understand that they're missing something, or that
> > they could learn it if they wanted to.
> >
> > I think I'll stop ranting before I really get going. :-)
> >
> > Arnold