It's like Wikipedia. "Community consensus" always requires further
investigation if you're serious about the topic. As long as sources are
provided, you can go to those sources yourself and judge their accuracy
and relevance. If you just want to toss out a question so that you can
get an answer, LLMs are the future. If you want to actually think about
the topic and what is being discussed, the resources are all there for
you - provided it's done in the right way.
-Henry
On Mon, 26 May 2025 at 12:46, Norman Wilson <norman@oclsc.org> wrote:
G. Branden Robinson:
That's why I think Norman has sussed it out accurately. LLMs are
fantastic bullshit generators in the Harry G. Frankfurt sense,[1]
wherein utterances are undertaken neither to enlighten nor to deceive,
but to construct a simulacrum of plausible discourse. BSing is a close
cousin to filibustering, where even plausibility is discarded, often for
the sake of running out a clock or impeding achievement of consensus.
====
That's exactly what I had in mind.
I think I had read Frankfurt's book before I first started
calling LLMs bullshit generators, but I can't remember for
sure. I don't plan to ask ChatGPT (which still, at least
sometimes, credits me with far greater contributions to Unix
than I have actually made).
Here's an interesting paper I stumbled across last week
which presents the case better than I could:
https://link.springer.com/article/10.1007/s10676-024-09775-5
To link this back to actual Unix history (or something much
nearer to it), I realized that `bullshit generator' was a
reasonable summary of what LLMs do after also realizing that
an LLM is pretty much just a much-fancier and better-automated
descendant of Mark V Shaney:
https://en.wikipedia.org/wiki/Mark_V._Shaney
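For anyone who never ran into Mark V Shaney: the trick underneath it
is a word-level Markov chain - tally which words follow each short
prefix in a body of text, then walk those tallies at random. A rough
sketch in Python follows (the two-word window, the function names, and
the corpus.txt filename are my own illustrative choices, not details
of the original program):

    # A toy word-level Markov chain in the spirit of Mark V Shaney.
    # The two-word prefix window and the names here are illustrative;
    # they are not taken from the original program.
    import random
    from collections import defaultdict

    def build_chain(words):
        # Map each two-word prefix to the words seen following it.
        chain = defaultdict(list)
        for a, b, c in zip(words, words[1:], words[2:]):
            chain[(a, b)].append(c)
        return chain

    def babble(chain, length=50):
        # Start from a random prefix and follow random suffixes.
        prefix = random.choice(list(chain))
        out = list(prefix)
        for _ in range(length):
            followers = chain.get(prefix)
            if not followers:
                break
            nxt = random.choice(followers)
            out.append(nxt)
            prefix = (prefix[1], nxt)
        return " ".join(out)

    # "corpus.txt" is a placeholder for whatever text you feed it.
    words = open("corpus.txt").read().split()
    print(babble(build_chain(words)))

The point of the comparison is the sampling step: an LLM's statistics
are enormously richer, but the output is still drawn from "what tends
to come next" rather than from any model of what is true.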
Norman Wilson
Toronto ON