The EFF just published an article on the rise and fall of Gopher on
their Deeplinks blog.
"Gopher: When Adversarial Interoperability Burrowed Under the
Gatekeepers' Fortresses"
https://www.eff.org/deeplinks/2020/02/gopher-when-adversarial-interoperabil…
I thought it might be of interest to people here.
--
Michael Kjörling • https://michael.kjorling.se • michael(a)kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
Cc: to COFF, as this isn't so Unix-y anymore.
On Tue, May 26, 2020 at 12:22 PM Christopher Browne <cbbrowne(a)gmail.com>
wrote:
> [snip]
> The Modula family seemed like the better direction; those were still
> Pascal-ish, but had nice intentional extensions so that they were not
> nearly so "impotent." I recall it being quite popular, once upon a time,
> to write code in Modula-2, and run it through a translator to mechanically
> transform it into a compatible subset of Ada for those that needed DOD
> compatibility. The Modula-2 compilers were wildly smaller and faster for
> getting the code working, you'd only run the M2A part once in a while
> (probably overnight!)
>
Wirth's languages (and books!!) are quite nice, and it always surprised and
kind of saddened me that Oberon didn't catch on more.
Of course Pascal was designed specifically for teaching. I learned it in
high school (at the time, it was the language used for the US "AP Computer
Science" course), but I was coming from C (with a little FORTRAN sprinkled
in) and found it generally annoying; I missed Modula-2, but I thought
Oberon was really slick. The default interface (which inspired Plan 9's
'acme') had this neat graphical sorting simulation: one could select
different algorithms and vertical bars of varying height were sorted into
ascending order to form a rough triangle; one could clearly see the
inefficiency of e.g. bubble sort vs. heapsort. I seem to recall there was
a way to set up the (ordinarily randomized) initial conditions to trigger
worst-case behavior for quicksort.
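Already-sorted input is the classic trigger when the pivot is chosen
naively. A little C sketch (my illustration, nothing to do with the actual
Oberon code):

    #include <stdio.h>

    /* Naive quicksort with a first-element pivot: on sorted input every
     * partition is maximally lopsided, so comparisons grow as ~N*N/2. */
    static long ncmp;

    static void swap(int *a, int i, int j)
    {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    static void quick(int *a, int lo, int hi)
    {
        if (lo >= hi)
            return;
        int p = a[lo], i = lo;
        for (int j = lo + 1; j <= hi; j++) {
            ncmp++;
            if (a[j] < p)
                swap(a, ++i, j);
        }
        swap(a, lo, i);
        quick(a, lo, i - 1);
        quick(a, i + 1, hi);
    }

    int main(void)
    {
        enum { N = 1000 };
        static int a[N];

        for (int i = 0; i < N; i++)
            a[i] = i;                   /* sorted: the worst case */
        quick(a, 0, N - 1);
        printf("N=%d: %ld comparisons (about N*N/2)\n", N, ncmp);
        return 0;
    }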
I have a vague memory of showing it off in my high school CS class.
- Dan C.
Hi all, I have a strange question and I'm looking for pointers.
Assume that you can multiply two 8-bit values in hardware and get a 16-bit
result (e.g. ROM lookup table). It's straightforward to use this to multiply
two 16-bit values:
    AABB *
    CCDD
    ----
    PPPP = BB*DD
  QQQQ00 = BB*CC
  RRRR00 = AA*DD
SSSS0000 = AA*CC
--------
32-bit result
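In C, that schoolbook expansion looks something like the sketch below;
mul8x8() is a stand-in for the ROM lookup:

    #include <stdint.h>

    /* Stand-in for the hardware/ROM 8x8 -> 16 multiply. */
    static uint16_t mul8x8(uint8_t a, uint8_t b)
    {
        return (uint16_t)a * b;
    }

    /* 16x16 -> 32 from four 8x8 -> 16 partial products, as diagrammed. */
    uint32_t mul16x16(uint16_t x, uint16_t y)
    {
        uint8_t aa = x >> 8, bb = x & 0xff;             /* x = AABB */
        uint8_t cc = y >> 8, dd = y & 0xff;             /* y = CCDD */

        uint32_t p = mul8x8(bb, dd);                    /* PPPP     */
        uint32_t q = (uint32_t)mul8x8(bb, cc) << 8;     /* QQQQ00   */
        uint32_t r = (uint32_t)mul8x8(aa, dd) << 8;     /* RRRR00   */
        uint32_t s = (uint32_t)mul8x8(aa, cc) << 16;    /* SSSS0000 */

        return p + q + r + s;   /* the 32-bit adds absorb the carries */
    }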
But if the hardware can only provide the low eight bits of the 8-bit by
8-bit multiply, is it still possible to do a 16-bit by 16-bit multiply?
Next question: is it possible to do 16-bit division when the hardware
can only do 8-bit divided by 8-bit? Ditto 16-bit modulo with only 8-bit
modulo?
Yes, I could sit down and nut it all out from scratch, but I assume that
somewhere this has already been done and I could use the results.
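For what it's worth, the classical route for the division half is binary
restoring long division (shift-and-subtract), which needs no divide
hardware at all, and the remainder gives the 16-bit modulo for free. A
rough C sketch, not tuned for a TTL CPU:

    #include <stdint.h>

    /* Binary restoring division: 16-bit quotient and remainder using
     * only shifts, compares and subtracts (den must be nonzero).
     * The remainder doubles as the 16-bit modulo. */
    uint16_t div16(uint16_t num, uint16_t den, uint16_t *rem)
    {
        uint32_t r = 0;     /* 17 bits suffice; 32 for convenience */
        uint16_t q = 0;

        for (int i = 15; i >= 0; i--) {
            r = (r << 1) | ((num >> i) & 1);    /* bring down next bit */
            if (r >= den) {                     /* trial subtraction */
                r -= den;
                q |= (uint16_t)(1u << i);
            }
        }
        if (rem != 0)
            *rem = (uint16_t)r;
        return q;
    }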
Thanks in advance for any pointers.
Warren
** Back story. I'm designing an 8-bit TTL CPU which has 8-bit multiply, divide
and modulo in a ROM table. I'd like to write subroutines to do 16-bit and
32-bit integer maths.
On Sun, May 17, 2020 at 12:24 PM Paul Winalski <paul.winalski(a)gmail.com>
wrote:
> On 5/16/20, Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
> >
> > Why was there no byte or "mem" type?
>
> These days machine architecture has settled on the 8-bit byte as the
> unit for addressing, but it wasn't always the case. The PDP-10
> addressed memory in 36-bit units. The character manipulating
> instructions could deal with a variety of different byte lengths: you
> could store six 6-bit BCD characters per machine word,
Was this perhaps a typo for nine 4-bit BCD digits? I have heard that one
reason for the 36-bit word size of computers of that era was that the main
competition at the time came from mechanical calculators, which had
9-digit precision. 9*4=36, so nine BCD digits could fit into a single
word, for parity with the competition.
Six 6-bit characters would certainly hold Baudot data, and I thought the
Univac/CDC machines supported a 6-bit character set? Does this live on in
the Unisys 1100-series machines? I see some reference to FIELDATA online.
I feel like this might be drifting into COFF territory now; Cc'ing there.
> or five ASCII
> 7-bit characters (with a bit left over), or four 8-bit characters
> (ASCII plus parity, with four bits left over), or four 9-bit
> characters.
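To make the packing arithmetic concrete, a toy sketch (modern C, with a
64-bit host integer modeling the 36-bit word; purely illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack n characters of w bits each into a 36-bit "word",
     * left-justified: 6x6 and 4x9 fill it exactly, 5x7 leaves one
     * spare bit, 4x8 leaves four. */
    static uint64_t pack36(const uint8_t *ch, int n, int w)
    {
        uint64_t word = 0;

        for (int i = 0; i < n; i++)
            word = (word << w) | (ch[i] & ((1u << w) - 1));
        return (word << (36 - n * w)) & ((1ULL << 36) - 1);
    }

    int main(void)
    {
        const uint8_t text[] = { 'H', 'E', 'L', 'L', 'O' };

        printf("5 x 7-bit: %012llo\n", (unsigned long long)pack36(text, 5, 7));
        printf("4 x 9-bit: %012llo\n", (unsigned long long)pack36(text, 4, 9));
        return 0;
    }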
>
> Regarding a "mem" type, take a look at BLISS. The only data type that
> language has is the machine word.
>
> > +getfield(buf)
> > +char buf[];
> > +{
> > + int j;
> > + char c;
> > +
> > + j = 0;
> > + while((c = buf[j] = getc(iobuf)) >= 0)
> > + if(c==':' || c=='\n') {
> > + buf[j] =0;
> > + return(1);
> > + } else
> > + j++;
> > + return(0);
> > +}
> >
> > so here the EOF was different and char was signed 7-bit it seems.
>
> That makes perfect sense if you're dealing with ASCII, which is a
> 7-bit character set.
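That `>= 0` test only works while every valid character leaves the sign
bit clear. A modernized rendering of the loop above (my illustration, not
the historical code) has to keep the result in an int and compare against
EOF, since with 8-bit characters 0377 in a signed char is negative too:

    #include <stdio.h>

    /* getfield(), restated in modern C: getc() returns an int so that
     * EOF (-1) is distinguishable from any 8-bit character. */
    int getfield(FILE *fp, char *buf)
    {
        int c, j = 0;

        while ((c = getc(fp)) != EOF) {
            buf[j] = c;
            if (c == ':' || c == '\n') {
                buf[j] = '\0';
                return 1;
            }
            j++;
        }
        buf[j] = '\0';
        return 0;
    }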
To bring it back slightly to Unix: when Mary Ann and I were playing around
with First Edition on the emulated PDP-7 at LCM+L during the Unix50 event
at last USENIX, I have a vague recollection that the B routine for reading
a character from stdin was either `getchar` or `getc`, and that it did
some magic to extract a character from half of an 18-bit word (maybe it
just zeroed the upper half of the word or something).
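Pure speculation, but the sort of thing I mean might have looked like
this (two 9-bit characters per 18-bit word, modeled in ordinary C; the
names and packing are made up):

    /* Hypothetical: return the two 9-bit halves of a buffered 18-bit
     * word on alternate calls, masking to 9 bits. A real routine would
     * refill `word` from the input once both halves are consumed. */
    static unsigned word = ('H' << 9) | 'I';    /* made-up packed pair */
    static int half;

    int getchar9(void)
    {
        int c;

        if (half == 0) {
            c = (word >> 9) & 0777;     /* high half first */
            half = 1;
        } else {
            c = word & 0777;            /* then the low half */
            half = 0;
        }
        return c;
    }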
If I had to guess, I imagine that the coincidence between "character" and
"byte" in C is a quirk of this history, as opposed to any special hidden
meaning regarding textual vs binary data, particularly since Unix makes no
real distinction between the two: files are just unstructured bags of
bytes. They're called 'char' because that was just the way things had
always been.
- Dan C.
On May 14, 2020, at 10:32 AM, Larry McVoy <lm(a)mcvoy.com> wrote:
> I'm being a whiney grumpy old man,
I’ve been one of those since I was, like, 20. I am finally growing into it. It’s kinda nice.
Adam