On 8/1/20 9:13 AM, Larry McVoy wrote:
> My dad wasn't famous, but he had a PhD in physics. He never asked people
> to call him Dr McVoy. As we grew up and realized he could be called that
> we asked him why not. He said it sounds fancy, the only time he used it
> was when he wanted a table at a crowded restaurant (which was very rare,
> Madison didn't pay him very well).
>
> Somehow that stuck with me and I've always been sort of wary of people
> who use their title. The people I admire never did.
>
> Someone on the list said that they thought Dennis wouldn't appreciate
> it if we got his PhD official. I couldn't put my finger on it at the
> time, but I agreed. And I think it is because the people who are really
> great don't need or want the fancy title. I may be over thinking it,
> but Dennis does not need the title, it does nothing to make his legacy
> better, his legacy is way way more than that title.
>
> Which is a long ramble to say I agree with Markus.
I agree with your dad, completely, it's fancy. I too am uncomfortable
with the title. I think it's because I was a street kid and as the
saying goes, you can take the kid out of the street, but you can't take
the street out of the kid. I work in the academy, so it's prevalent, but
I find it pretentious to insist on people calling you doctor. I ask
people to just call me Will. It's interesting to watch the reactions.
Some folks are glad to, some are fearful to (mostly students), and some
outright reject the proposition (mostly those pretentious types).
With regards to Dennis and his view on things, I haven't the slightest
clue, but if someone were to present him with an honorary degree, it
would be their attempt to recognize his exemplary contributions and
would not be meant as anything other than highest praise. As someone who
loves programming in C, I'm a direct beneficiary of his legacy and would
gladly support his being recognized in this manner. I know several
people who have been granted honorary doctorates, at least one of whom
had no prior degree. They accepted, and enjoyed telling their close
friends that they would now have to call them doctor, but otherwise took
it as a compliment and an honor and didn't bother with the title.
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> From: Lars Brinkhoff
> I have seen conflicting information in sources dating from the 70s when
> the TV-11 and XGP-11 were very much in use.
> For several reasons, I believe the TV-11 was the first machine attached
> to the 10-11 interface, and the XGP-11 came second. This would lend
> some weak support for the theory that the first would be a 11/20 and the
> second a 11/10.
Yeah, but Clem's note reinforces my vague memory that the XGP-11 was an
-11/20.
I wish we had a picture of the Knight TV system (the system, not a
terminal). It's an extremely significant system - I believe it may have been the
first bit-mapped computer display system ever; and thus the prototype, in some
sense, for the display of every single personal computer (including phones)
now extant - and so there _ought_ to be a photo of it. But looking online for
a while, I can turn up almost nothing about it! (I guess we should do a page
about it on the CHWiki...)
(Repeat my prior grump about how the AI Lab did all sorts of ground-breaking
stuff, but because it was just 'tools', and not their main research focus, it's
hard to find out about a lot of it, e.g. the inter-ITS network file
system.)
But if you can find an image, even a low-res picture of that end of the AI Lab
machine room, we can tell what model the TV-11 is - early 11's had integrated
front panels, which are different for every model:
http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/PDP-11_Models.html
so you don't even need to be able to read anything to tell a /20 from a /10.
It was in a dual (I think - maybe triple, it's been a looooooong time :-)
rack which IIRC was along the side wall (i.e. the short building side) next to
the AI KA10 (which was sort of along the long wall, up in the corner).
I don't know if the XGP-11 code is still extant (my copy of the ITS filesystem
is offline right at the moment), but even if we look at the code, I'm not sure
we could tell; there are some _very minor_ programming differences between the
/20 and /10 (e.g. V bit on SWAB) - see the table at the end of the PDP-11
Architecture Handbook - but I'd be surprised if the code used any.
Surely there has to be _some_ picture of the machine room which shows it, even
if in the background.
> I did bring it up with TK at some point.
Try RG, too.
Noel
> From: Angelo Papenhoff
> Well the TV-11 is a tough question. I originally wrote an 11/05 emulator
> because some document said it was an 11/10 (which is the same thing).
> But other sources claimed it was an 11/20.
Hmmm. My memory was that it was an -11/05-10 (they are identical, except for
the paint on the front panel; and I don't recall it in enough detail to say),
but perhaps I'm wrong?
Or maybe it was an -11/20 early, and then it got replaced with an -11/10? (I
have a _very_ vague memory that the XGP's -11 was a /20, but I wouldn't put
much weight on that.)
Moon or TK or someone might remember better.
Noel
Angelo Papenhoff wrote:
> Noel Chiappa wrote:
>> If the KE11 is needed to run some application on the -11/04, there
>> are KE11-B's (program compatible, but a single hex card) available,
> ISTR. For emulation, something (SIMH?) supports it, since the TV-11
> on ITS (now running in emulation, I'm pretty sure) uses it.
>
> Well the TV-11 is a tough question. I originally wrote an 11/05 emulator
> because some document said it was an 11/10 (which is the same thing).
> But other sources claimed it was an 11/20.
To clarify, the emulated TV-11 is *not* in any way based on SIMH. There
are more machines to potentially hook up to the 10-11 interface, but I'm
quite unsure if SIMH is the right vehicle for those.
But this is now clearly out of TUHS territory. CC to coff only.
"You'd be out of your mind to blindly run the shell on some anonymous shar
file..."
But but but all the cool kids tell you to install their new Javascript
framework with:
"curl https://rocketviagra.ru/distro/latest.sh | sudo /bin/bash"
Get offa my lawn.
On Fri, Jul 24, 2020 at 7:01 PM Dave Horsfall <dave(a)horsfall.org> wrote:
> On Fri, 24 Jul 2020, Random832 wrote:
>
> > For whatever it's worth, you can do the exact same thing as vi with sed
> > in this case: 1,/====/d
>
> It's been a while since I had to use it, but didn't "unshar" do this sort
> of thing, and in a safe manner too? You'd be out of your mind to blindly
> run the shell on some anonymous shar file...
>
> -- Dave
>
If you're old enough to remember 'ADVENT' and were around the geeks when it
was a craze on the ARPAnet in the late 70s, you might find this article,
which was in my feed last night, fun:
https://onezero.medium.com/the-woman-who-inspired-one-of-the-first-hit-vide…
For those who did not, it was the world's first adventure game (no
graphics, just solving a series of puzzles while wandering through a
cave). It was originally written in Fortran-IV for the PDP-10/20 with a
small assembler assist to handle RAD50 for the input. [FYI: MIT's Haystack
observatory is about 2 miles as the crow flies from my house, on top of
the next hill over, in the town next to mine, Westford. Groton, MA is the
town after that.]
This article is an interesting read (about 20 mins) with stuff I
never knew. I knew a divorced Will Crowther worked at BBN and wrote the
game Adventure for his daughters to play when they visited him. I also
knew that he had been a caver and that the cave in the game was modeled
after Kentucky's Mammoth Caves. I did not know until a few years ago,
[from a friend of my wife's, Madeliene Needles] that at some time they were
living in Groton (because Crowther's ex-wife was working at Haystack with
Madeliene for a while). As this article tells the story, it was Patricia
Crowther who actually did the mapping work.
FWIW: As a fun factoid, today the Stanford version is one of the tests
used by the old DEC and now the Intel Fortran-2018 compiler to verify that
the compiler can still compile fixed-format FORTRAN-IV and ensure the
resulting program still works. And of course, being 'packrat Clem', my own
'advent' map is in my filing cabinet in the basement, written on the back
of '132-column green bar' computer paper of course.
Clem
For the folks that are interested, more good stuff including a number of
versions of the code can be found at: https://rickadams.org/adventure/
> Did the non-Unix people also pull pranks like the watertower?
Every year when our director, Arun Netravali of center 1135, went on
vacation, Scott Knauer, a department head, would pull some kind of stunt.
One year he covered the carpeting in Arun's office with green astroturf,
so it looked like half of a tennis court. Another year, he recruited many
of us to blow up balloons, which he collected in Arun's office by aiming
a huge fan in that direction while we pitched inflated balloons into the
corridor. Scott, being Scott, completely topped off the balloons by
lifting the ceiling tiles near the office door. I recall walking into
Building 3 on the day Arun was scheduled to return and seeing balloons
struggling to escape from every open window of his office.
Building 3 lacked the large stairwells like the one that housed the Peter
face made of magnets. But Scott compensated by projecting Arun's image
along a long corridor and covering it with magnets, so Arun's face was
visible on a sidewall as you approached the end of the corridor.
I found the playful attitude at the Labs in general to be an indicator of
the out-of-the-box thinking that a research organization thrives on.
Some of us remember this commercial, predicting the future. Note that
Ethernet is a bus topology at this point.
Anyway, Allen Kay recently uploaded a copy of the wonderful and futurist
“Xerox Information Outlet” TV commercial to YouTube. A number of us think it
aired in the late ’70s or early ’80s, i.e. around the time of the
DEC/Xerox/Intel “Blue Book” definition of Ethernet.
I understand the back story on the commercial is this:
The PARC guys did the drawing on the wall with a pale blue pencil so the
actor would see it, but the camera wouldn’t. All he had to do was trace the
lines.
Take 1. His delivery was perfect, but his drawing looked like a giant
smashed spider.
Take 2. Again, a flawless reading. This time his work of art was about
11”x17”. You had to squint to see it. The director yelled, cut! Then he
said to the actor, “Come here for a second.” He came forward. “Turn
around,” said the director. The actor did an about-face. They both stared
at the wall. Like talking to a 4-year old, the director said,
“Look...what... you... did.” “Whoops!” said the talent.
Take 3. The drawing was great, but he flubbed the last… ah damn…
Take 4. Started out fine. We held our breath. Good...good...good.
“...and...Cut!! Perfect!” the director shouted.
https://www.youtube.com/watch?v=m2WgFpyL2Pk
On 7/6/20 7:30 PM, Greg 'groggy' Lehey wrote:
> People (not just Clem), when you change the topic, can you please
> modify the Subject: to match? I'm not overly interested in uucp,
> but editors are a completely different matter. I'm sure I'm not the
> only one, so many interested parties will miss these replies.
I see this type of change happen — in my not so humble opinion — /way/
/too/ /often/.
So, I'm wondering if people are interested in configuring the TUHS and / or
COFF mailing lists (Mailman) with topics. That way people could
subscribe to the topics that they are interested in and not receive
copies of topics they aren't interested in.
I'm assuming that TUHS and COFF are still on Mailman mailing lists and
that Warren would be amenable to such.
To clarify, it would still be the same mailing list(s) as they exist
today. They would just have the optional ability to pick
interesting topics, where topics would be based on keywords in the
message body.
I'm just trying to gauge people's interest in this idea.
--
Grant. . . .
unix || die
On Wed, 8 Jul 2020, Jacob Goense wrote:
> When I have tuned out of a long running thread and the topic drifts
> significantly I'm always grateful to the kind soul that tags..:
>
> Subject: Re: new [was: old]
Yeah, I try and remember to do that...
-- Dave
The Internet's original graphics trio of "*Booth, Beatty, and Barsky*" (who
had been affectionately dubbed the graphics industry's "*Killer Bs*") lost
one of their great minds and personalities this last week. With great
sadness in my heart, I regret to inform you that John Beatty has passed
away. I was informed this AM by his partner in so many enterprises, Kelly
Booth, that John died peacefully in his sleep in the early morning on
Thursday, July 2, 2020.
When we look at the wonders of the Internet's growth in the use of computer
graphics, we view his legacy. But more than that, John was a good friend of
many of us and, of course, we all miss him.
Kelly tells me that the obituary with further information is being prepared
by John’s sister, Jean Beatty, as well as others. We'll be sure to try to pass
on a copy (or a link if it is on the web) when it has been released.
Clem
Does anyone have any experience with UUCP on macOS or *BSD systems who
would be willing to help me figure something out?
I'm working on adding a macOS X system to my micro UUCP network and
running into some problems.
- uuto / uucp copy files from my non-root / non-(_)uucp user to the
UUCP spool. But the (demand based) "call" (pipe over SSH) is failing.
- running "uucico -r1 -s <remote system name> -f" as the (_)uucp user
(via sudo) works.
- I'm using the pipe port type and running things over an SSH connection.
- The (_)uucp user can ssh to the remote system as expected
without any problems or being prompted for a password. (Service
specific keys w/ forced command.)
I noticed that the following files weren't set UID or GID like they are
on Linux. But I don't know if that's a macOS and / or *BSD difference
when compared to Linux.
/usr/bin/uucp
/usr/bin/uuname
/usr/bin/uustat
/usr/bin/uux
/usr/sbin/uucico
/usr/sbin/uuxqt
Adding the set UID & GID bits allowed things to mostly work.
Aside: Getting contemporary macOS to a state where I could edit the
(/usr/share/uucp/) sys & port files was a treat.
I figured that it was worth inquiring if anyone had any more experience
/ tips / gotchas before I go bending ~> breaking things even more.
Thank you.
--
Grant. . . .
unix || die
In 1966, engineers at IBM invented a method of speeding up execution
without adding a lot of very expensive memory. They called their
invention the muffer. The name did not catch on, so they picked another
name and submitted an article to the IBM Systems Journal. The editor noted
that their second name was heavily overused and suggested a third name,
which the engineers accepted. The third name was cache.
(Muffer was short for memory buffer.)
This from "IBM's 360 and Early 370 Systems", MIT Press. I found this
an amusing tidbit of history -- hopefully so may others.
N.
Redirected to COFF, since this isn't very TUHS related.
Richard Salz wrote:
> Noel Chiappa wote:
>> MLDEV on ITS would, I think, fit under that description.
>> I don't know if there's a paper on it; it's mid-70's.
I don't think there's anything like an academic paper.
The earliest evidence I found for the MLDEV facility, or a predecessor,
is from 1972:
https://retrocomputing.stackexchange.com/questions/13709/what-are-some-earl…
> A web search for "its mldev" finds several things (mostly by Lars Brinkhoff
> it seems) , including
> https://github.com/larsbrinkhoff/mldev/blob/master/doc/mldev.protoc
That's just a copy I made.
ABC-TV "Grantchester" (a detective-priest series[*]) featured someone who
died from mercury poisoning, when someone opened the tanks. Apart from
the beautiful Teletype in the foreground, there was a machine with lots of
dials in the background; obviously some early computer, but there wasn't
enough footage for me to tell which one. It is a British show, if that helps.
[*]
I suppose that I need not remind any Monty Python freaks of these lines:
"There's another dead bishop on the landing, Vicar-Sergeant!"
"Err, Detective-Parsons, Madam."
Etc... Otherwise I'd go on all day :-)
-- Dave
[ -TUHS and +COFF ]
On Mon, Jun 8, 2020 at 1:49 AM Lars Brinkhoff <lars(a)nocrew.org> wrote:
> Chris Torek wrote:
> > You pay (sometimes noticeably) for the GC, but the price is not
> > too bad in less time-critical situations. The GC has a few short
> > stop-the-world points but since Go 1.6 or so, it's pretty smooth,
> > unlike what I remember from 1980s Lisp systems. :-)
>
> I'm guessing those 1980s Lisp systems would also be pretty smooth on
> 2020s hardware.
>
I worked on a big system written in Common Lisp at one point, and it used a
pretty standard Lispy garbage collector. It spent something like 15% of its
time in GC. It was sufficiently bad that we forced a full GC between every
query.
The Go garbage collector, on the other hand, is really neat. I believe it
reserves something like 25% of available CPU time for itself, but it runs
concurrently with your program: having to stop the world for GC is very,
very rare in Go programs and even when you have to do it, the concurrent
code (which, by the way, can make use of full physical parallelism on
multicore systems) has already done much of the heavy lifting, meaning you
spend less time in a GC pause. It's a very impressive piece of engineering.
- Dan C.
> I've never heard of the dimensionality of an array in such a context
> described in terms of a sum type,[...]
I was also puzzled by this remark. Perhaps Bram was thinking of an
infinite sum type such as
Arrays_0 + Arrays_1 + Arrays_2 + ...
where "Arrays_n" is the type of arrays of size n. However, I don't see
a way of defining this without dependent (or at least indexed) types.
> [...] but have often heard of it described in terms of a dependent
> type.
I think that's right, yes. We want the type checker to decide, at
compile time, whether a certain operation on arrays, such as scalar
product, or multiplication with a matrix, will respect the types. That
means providing the information about the size to the type checker:
types need to know about values, hence the need for dependent types.
Ideally, a lot of the type information can then be erased at run time,
so that we don't need the arrays to carry their sizes around. But the
distinction between what can and what cannot be erased at run time is
muddled in a dependently-typed programming language.
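For what it's worth, here is a minimal Lean 4 sketch of that packaging, just
to make the sum-versus-dependent-type point concrete (the names are mine and
purely illustrative):

    -- Length-indexed vectors: the length n is part of the type.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

    -- The "infinite sum" Arrays_0 + Arrays_1 + Arrays_2 + ... packaged as
    -- one (dependent) sigma type: a length paired with a vector of that length.
    def AnyVec (α : Type) : Type := (n : Nat) × Vec α n

    -- A size-aware operation: taking the head of an empty vector is rejected
    -- by the type checker rather than caught by a runtime check.
    def Vec.head {α : Type} {n : Nat} : Vec α (n + 1) → α
      | .cons a _ => a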
Best wishes,
Cezar
Dan Cross <crossd(a)gmail.com> writes:
> [-TUHS, +COFF as per wkt's request]
>
> On Sun, Jun 7, 2020 at 8:26 PM Bram Wyllie <bramwyllie(a)gmail.com> wrote:
>
>> Dependent types aren't needed for sum types though, which is what you'd
>> normally use for an array that carries its size, correct?
>>
>
> No, I don't think so. I've never heard of the dimensionality of an array in
> such a context described in terms of a sum type, but have often heard of it
> described in terms of a dependent type. However, I'm no type theorist.
>
> - Dan C.
[-TUHS, +COFF as per wkt's request]
On Sun, Jun 7, 2020 at 8:26 PM Bram Wyllie <bramwyllie(a)gmail.com> wrote:
> Dependent types aren't needed for sum types though, which is what you'd
> normally use for an array that carries its size, correct?
>
No, I don't think so. I've never heard of the dimensionality of an array in
such a context described in terms of a sum type, but have often heard of it
described in terms of a dependent type. However, I'm no type theorist.
- Dan C.
Moving to COFF where this discussion really belongs ...
On Sun, Jun 7, 2020 at 2:51 PM Nemo Nusquam <cym224(a)gmail.com> wrote:
> On 06/07/20 11:26, Clem Cole wrote (in part):
> > Neither language is used for anything in production in our world at this
> point.
>
> They seem to be used in some worlds: https://blog.golang.org/10years and
> https://www.rust-lang.org/production
Nemo,
That was probably not my clearest wording. I did not mean to imply that either
Go or Rust was not being used, by any stretch of the imagination.
My point was that in the SW development environments in which I reside (HPC, some
startups here in New England and the Bay Area, as well as Intel in general)
-- Go and Rust both have smaller use cases compared to C and C++ (much less
Python and Java for that matter). And I really know of no 'money' project
that relies yet on either. That does not mean I know of no one using
either; I know of projects using both (including a couple of my own),
but no one who has done anything production or deployed 'mission-critical'
SW with one or the other. Nor does that mean it has not happened, it just
means I have not been exposed to it.
I also am saying that, in my own personal opinion, I expect that to change, in
particular with Go in userspace code - possibly having a chance to push out
Java and hopefully pushing out C++ a bit.
My response was to an earlier comment about C's popularity WRT C++. I
answered with my experience, and I widened it to suggest that maybe C++ was
not the guaranteed incumbent as the winner for production. What I did not
say then, but alluded to, is that nothing in nearly 70
years has displaced Fortran, which >>is<< still the #1 language for
production codes (as you saw with the Archer statistics I pointed out).
Reality time ... Intel, IBM, *et al,* spend a lot of money making sure
that there are >>production quality<< Fortran compilers easily available.
Today's modern society depends on it, from weather prediction to energy, to
pharma, chemistry, and physics. As I have said here and in other places, over my
career, Fortran has paid me and my peeps' salary. It is the production
enabler, and without a solid answer for a Fortran solution, you are
unlikely to make too much progress, certainly in the HPC space.
Let me take this in a slightly different direction. I tend to use the
'follow the money' rule as a way to root out what people care about.
Where do firms spend money to create or purchase tools to help their staff?
The answer is in tools that give them a return that they can measure. So,
using that rule: what programming languages have the largest ecosystems
of tools that help find performance and operation issues? Fortran, C, and C++
have the largest that I know of. My guess would be that Java and maybe
JavaScript/PHP would be next, but I don't know of any offhand.
If I look at Intel, where do we spend money on development tools? C/C++
and Fortran (which all use a common backend) are #1. Then we invest in
other versions of the same (GCC/LLVM) for particular things we care
about. After that, it's Python, and it used to be Java and maybe some
JavaScript. Why? Because ensuring that those ecosystems are solid on
devices that we make is good for us, even if it means we help some of our
competitors' devices also. But our investment helps us, and Fortran and
C/C++ are where people use our devices (and our most profitable versions in
particular), so it's in our own best interest to make sure there are tools to
bring out the best.
BTW: I might suggest you take a peek at where other firms do the same
thing, and I think you'll find the follow-the-money rule is helpful for
understanding what people care the most about.
Hi
I'm wondering if anybody knows what happened with this? (Besides the
fact it crashed and burnt. I mean, where are the source trees?)
lowendmac.com/2016/nutek-mac-clones/
It was a Macintosh clone that reverse-engineered the Mac interface and
appearance, using the X Window System as the GUI library - replacing what in
Macs of that era was locked into the Mac system BIOS.
As such, it's an interesting use of the X Window System.
Wesley Parish
Cc: to COFF, as this isn't so Unix-y anymore.
On Tue, May 26, 2020 at 12:22 PM Christopher Browne <cbbrowne(a)gmail.com>
wrote:
> [snip]
> The Modula family seemed like the better direction; those were still
> Pascal-ish, but had nice intentional extensions so that they were not
> nearly so "impotent." I recall it being quite popular, once upon a time,
> to write code in Modula-2, and run it through a translator to mechanically
> transform it into a compatible subset of Ada for those that needed DOD
> compatibility. The Modula-2 compilers were wildly smaller and faster for
> getting the code working, you'd only run the M2A part once in a while
> (probably overnight!)
>
Wirth's languages (and books!!) are quite nice, and it always surprised and
kind of saddened me that Oberon didn't catch on more.
Of course Pascal was designed specifically for teaching. I learned it in
high school (at the time, it was the language used for the US "AP Computer
Science" course), but I was coming from C (with a little FORTRAN sprinkled
in) and found it generally annoying; I missed Modula-2, but I thought
Oberon was really slick. The default interface (which inspired Plan 9's
'acme') had this neat graphical sorting simulation: one could select
different algorithms and vertical bars of varying height were sorted into
ascending order to form a rough triangle; one could clearly see the
inefficiency of e.g. Bubble sort vs Heapsort. I seem to recall there was a
way to set up the (ordinarily randomized) initial conditions to trigger
worst-case behavior for quick.
I have a vague memory of showing it off in my high school CS class.
- Dan C.
Hi all, I have a strange question and I'm looking for pointers.
Assume that you can multiply two 8-bit values in hardware and get a 16-bit
result (e.g. ROM lookup table). It's straightforward to use this to multiply
two 16-bit values:
AABB *
CCDD
----
PPPP = BB*DD
QQQQ00 = BB*CC
RRRR00 = AA*DD
SSSS0000 = AA*CC
--------
32-bit result
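In C terms, what I have in mind is something like this minimal sketch, where
mul8x8_16() just stands in for the ROM lookup (the function names are mine):

    #include <stdint.h>

    /* Stand-in for the hardware/ROM 8-bit x 8-bit -> 16-bit multiply. */
    static uint16_t mul8x8_16(uint8_t a, uint8_t b)
    {
        return (uint16_t)a * (uint16_t)b;
    }

    /* 16x16 -> 32-bit multiply from four 8x8 -> 16-bit partial products,
     * exactly as in the diagram above. */
    static uint32_t mul16x16_32(uint16_t x, uint16_t y)
    {
        uint8_t aa = x >> 8, bb = x & 0xFF;          /* x = AABB */
        uint8_t cc = y >> 8, dd = y & 0xFF;          /* y = CCDD */

        uint32_t r = mul8x8_16(bb, dd);              /* PPPP     */
        r += (uint32_t)mul8x8_16(bb, cc) << 8;       /* QQQQ00   */
        r += (uint32_t)mul8x8_16(aa, dd) << 8;       /* RRRR00   */
        r += (uint32_t)mul8x8_16(aa, cc) << 16;      /* SSSS0000 */
        return r;
    }

As a quick sanity check, mul16x16_32(0xFFFF, 0xFFFF) should come out to
4294836225 (0xFFFE0001).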
But if the hardware can only provide the low eight bits of the 8-bit by
8-bit multiply, is it still possible to do a 16-bit by 16-bit multiply?
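My guess is yes: drop down to 4-bit halves, so that no partial product ever
exceeds 8 bits (15 * 15 = 225). A hedged sketch, again with made-up names;
corrections welcome:

    #include <stdint.h>

    /* Stand-in for hardware returning only the LOW 8 bits of an 8x8 multiply. */
    static uint8_t mul8x8_lo(uint8_t a, uint8_t b)
    {
        return (uint8_t)(a * b);
    }

    /* Rebuild a full 8x8 -> 16-bit multiply from low-byte-only multiplies by
     * working in 4-bit nibbles, so no partial product can overflow 8 bits. */
    static uint16_t mul8x8_16(uint8_t x, uint8_t y)
    {
        uint8_t xh = x >> 4, xl = x & 0x0F;
        uint8_t yh = y >> 4, yl = y & 0x0F;

        uint16_t r = mul8x8_lo(xl, yl);              /* at most 225 */
        r += (uint16_t)mul8x8_lo(xl, yh) << 4;
        r += (uint16_t)mul8x8_lo(xh, yl) << 4;
        r += (uint16_t)mul8x8_lo(xh, yh) << 8;
        return r;
    }

With a full 8x8 -> 16 rebuilt that way, the 16x16 scheme above applies
unchanged.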
Next question, is it possible to do 16-bit division when the hardware
can only do 8-bit divided by 8-bit. Ditto 16-bit modulo with only 8-bit
modulo?
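For division and modulo, my suspicion is that the 8-bit divide ROM isn't even
needed: plain binary shift-and-subtract long division over 16 bits only needs
shifts, compares, and subtracts, and gives quotient and remainder together.
A rough sketch (names mine, and it ignores division by zero):

    #include <stdint.h>

    /* 16-bit unsigned divide and modulo by shift-and-subtract long division:
     * no multiply or hardware divide required. */
    static void divmod16(uint16_t num, uint16_t den, uint16_t *quot, uint16_t *rem)
    {
        uint16_t q = 0, r = 0;
        int i;

        for (i = 15; i >= 0; i--) {
            r = (r << 1) | ((num >> i) & 1);   /* bring down the next bit */
            if (r >= den) {                    /* does the divisor go in? */
                r -= den;
                q |= (uint16_t)1 << i;
            }
        }
        *quot = q;    /* num / den */
        *rem  = r;    /* num % den */
    }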
Yes, I could sit down and nut it all out from scratch, but I assume that
somewhere this has already been done and I could use the results.
Thanks in advance for any pointers.
Warren
** Back story. I'm designing an 8-bit TTL CPU which has 8-bit multiply, divide
and modulo in a ROM table. I'd like to write subroutines to do 16-bit and
32-bit integer maths.
On Sun, May 17, 2020 at 12:24 PM Paul Winalski <paul.winalski(a)gmail.com>
wrote:
> On 5/16/20, Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
> >
> > Why was there no byte or "mem" type?
>
> These days machine architecture has settled on the 8-bit byte as the
> unit for addressing, but it wasn't always the case. The PDP-10
> addressed memory in 36-bit units. The character manipulating
> instructions could deal with a variety of different byte lengths: you
> could store six 6-bit BCD characters per machine word,
Was this perhaps a typo for 9 4-bit BCD digits? I have heard that a reason
for the 36-bit word size of computers of that era was that the main
competition at the time was against mechanical calculators, which had
9-digit precision. 9*4=36, so 9 BCD digits could fit into a single word,
for parity with the competition.
6x6-bit data would certainly hold BAUDOT data, and I thought the Univac/CDC
machines supported a 6-bit character set? Does this live on in the Unisys
1100-series machines? I see some reference to FIELDATA online.
I feel like this might be drifting into COFF territory now; Cc'ing there.
> or five ASCII
> 7-bit characters (with a bit left over), or four 8-bit characters
> (ASCII plus parity, with four bits left over), or four 9-bit
> characters.
>
> Regarding a "mem" type, take a look at BLISS. The only data type that
> language has is the machine word.
>
> > +getfield(buf)
> > +char buf[];
> > +{
> > + int j;
> > + char c;
> > +
> > + j = 0;
> > + while((c = buf[j] = getc(iobuf)) >= 0)
> > + if(c==':' || c=='\n') {
> > + buf[j] =0;
> > + return(1);
> > + } else
> > + j++;
> > + return(0);
> > +}
> >
> > so here the EOF was different and char was signed 7-bit it seems.
>
> That makes perfect sense if you're dealing with ASCII, which is a
> 7-bit character set.
To bring it back slightly to Unix, when Mary Ann and I were playing around
with First Edition on the emulated PDP-7 at LCM+L during the Unix50 event
last USENIX, I have a vague recollection that the B routine for reading a
character from stdin was either `getchar` or `getc`. I had some impression
that this did some magic necessary to extract a character from half of an
18-bit word (maybe it just zeroed the upper half of a word or something).
If I had to guess, I imagine that the coincidence between "character" and
"byte" in C is a quirk of this history, as opposed to any special hidden
meaning regarding textual vs binary data, particularly since Unix makes no
real distinction between the two: files are just unstructured bags of
bytes, they're called 'char' because that was just the way things had
always been.
- Dan C.
On May 14, 2020, at 10:32 AM, Larry McVoy <lm(a)mcvoy.com> wrote:
> I'm being a whiney grumpy old man,
I’ve been one of those since I was, like, 20. I am finally growing into it. It’s kinda nice.
Adam