Complaining about the wild confabulations of LLMs is like complaining about the food
service on the Wright Brothers' Flyer. Obviously, they are not good at this, and users
must be cautioned to view every AI output as a rough draft. Certainly don't forward it
immediately to your teacher or a judge.
I have found Claude to be fantastically helpful in programming. First, it replaces man
pages, Stack Overflow, and Apple's wretched documentation, with just a few quick words
typed in.
I am working on an iOS app written in Rejective C, implementing a little-language
protocol to update the app from a server using SSL.
I got a shell version of the protocol working first, through a simple TCP port run
from inetd.
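For anyone who hasn't wired one up lately, an inetd setup of this sort just maps a TCP port to the script, with the socket on stdin/stdout. A minimal sketch follows; the service name, port, and script path are my own placeholders, not the actual ones:

```shell
# Hypothetical /etc/services entry (name and port are illustrative):
#   appupdate  4000/tcp
#
# Hypothetical /etc/inetd.conf entry: for each connection on the
# appupdate port, inetd runs the script with the socket as
# stdin/stdout, so the script just reads and writes the protocol.
appupdate  stream  tcp  nowait  nobody  /usr/local/bin/update-proto.sh  update-proto.sh
```

The charm of this arrangement is that the protocol script can be tested from a plain terminal before any networking, or TLS, is involved at all.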
Then Claude helped me pick the openssl client options and stunnel.conf settings to get
the protocol running over that. I _think_ I have the signature properties right for the
security I need.
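The stunnel-plus-openssl plumbing presumably looked something like the sketch below. Every hostname, port, and cert path here is a placeholder of mine, not the actual configuration:

```shell
# --- Hypothetical stunnel.conf (server side) ---
# stunnel terminates TLS and forwards plaintext to the inetd service,
# so the protocol script itself never touches SSL.
#
#   cert = /etc/stunnel/server.pem
#   key  = /etc/stunnel/server.key
#   [update]
#   accept  = 4433              ; TLS port exposed to clients
#   connect = 127.0.0.1:4000    ; plaintext inetd service

# Smoke-testing the TLS side from a shell: connect, verify the server
# certificate against a local CA file, and fail hard on a bad cert.
openssl s_client -connect server.example.com:4433 \
    -CAfile ca.crt -verify_return_error -quiet
```

With that working, typing the protocol into `openssl s_client` exercises the same path the app will use, which makes it a fair baseline to hand to the LLM.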
SSL is a nightmare, and especially on iOS, with complicated code, deprecated stuff, and
so on. And, at 73 years old, I don’t retain all of this stuff the way I used to.
After slogging through a few non-functioning versions for a couple of days, I told
Claude this morning:
- hey this shell script works with these certs
- give me the routines that do the same thing in the app.
Worked the first time. I am delighted.
I don’t care that AI said I got my PhD at Hopkins during my early years at the Labs.
ches
On May 26, 2025, at 12:40 PM, Norman Wilson <norman(a)oclsc.org> wrote:
That's why I think Norman has sussed it out accurately. LLMs are
fantastic bullshit generators in the Harry G. Frankfurt sense,[1]
wherein utterances are undertaken neither to enlighten nor to deceive,
but to construct a simulacrum of plausible discourse. BSing is a close
cousin to filibustering, where even plausibility is discarded, often for
the sake of running out a clock or impeding achievement of consensus.