All, I've also set this up to try out for the video chats:
https://meet.tuhs.org/COFF
Password to join is "unix" at the moment.
I just want to test it to confirm that it works; I'll be heading
out the door to go to the shops soon.
Cheers, Warren
Jon Steinhart wrote in
<202002120012.01C0CpEC3910426(a)darkstar.fourwinds.com>:
|Steffen Nurpmeso writes:
|> Of course you are right, you will likely need to focus your mind,
|> and that requires an intellectual context, knowledge, to base upon.
|
|Interesting that you mention this as I'm about to leave for a multi-day
|advanced yoga workshop. One of the things that I like about yoga is that
Then I wish you a good time, and a deep breath!
|you do have to learn to focus your mind, and it's amazingly difficult to
|be focused on something as seemingly simple as standing up straight. I
|don't think that it's reasonable to expect people to be able to focus
|without training. Can you imagine if a computer tried to follow all of
|your fleeting thoughts?
I feel clearly overrated. The last time I had such fleeting thoughts,
I know when it was, and it was no good: I "collapsed with overflow" like
John Falken's computer in WarGames. But I have the impression that "the
only winning move is not to play" is not very hip.
|In some respects, this takes me back to the early days of speech
|recognition. I remember people enthusiastically telling me how it would
|solve the problem of repetitive stress injuries. They were surprised when
|I pointed out that most people who use their voice in their work actually
|take vocal training; RSIs are not uncommon among performers.
|
|So really, what problem are we trying to solve here? I would claim that
|the problem is signal-to-noise ratio degradation that's a result of too
|many people "learning to code" who have never learned to think. Much like
|I feel that it became harder to find good music when MIDI was invented
|because there was all of a sudden a lot more noise masquerading as music.
I am chewing on that one. You can count yourself lucky to have lived in
times with great classical music artists as well as a tremendous flurry of
styles, ideas, etc. otherwise. In the 60s and 70s, and even the first half
of the 80s, so much happened. Not only in music. Just take psychological
treatment: before, there were lobotomies and electric shocks, and learned
people stood on that ground solid as rocks, but then it exploded.
Today the situation is really bad. And that "everyone is an artist" was
surely as naive as "everyone shall learn coding".
But I have spent long hours in MIDI piano rolls and I think you
are right. Unfortunately.
|I'm reminded of a Usenix panel session that I moderated on the future of
|window systems a long time ago. Rob was on the panel as was some guy whose
|name I can't remember from Silicon Graphics. The highlight of the
|presentation was when Robin asked the question "So, if I understand what
|the SGI person is saying, it doesn't matter how ugly your shirt is, you
|can always cover it up with a nice jacket...." While she was asking the
|question Rob anticipated the rest of the question and started unbuttoning
|his shirt.
|
|So maybe I'm just an old-school minimalist, but I think that the biggest
|problem that needs solving is good low-level abstractions that are simple
|and work and don't have to be papered over with layer upon layer on top of
|them. I just find myself without the patience to learn all of the magic
|incantations of the package of the week.
I like that.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
> From: Clem Cole
> Noel's email has much wisdom. New is not necessarily better and old
> fashioned is not always a bad thing.
For those confused by the reference, it's to an email that didn't go to the
whole list (I was not sure if people would be interested):
>> One of my favourite sayings (original source unknown; I saw it in
>> "Shockwave Rider"): "There are two kinds of fool. One says 'This is
>> old, and therefore good'; the other says 'This is new, and therefore
>> better'."
Noel
moving to COFF
On Tue, Feb 11, 2020 at 5:00 AM Rob Pike <robpike(a)gmail.com> wrote:
> My general mood about the current standard way of nerd working is how
> unimaginative and old-fashioned it feels.
>
...
>
> But I'm a grumpy old man and getting far off topic. Warren should cry,
> "enough!".
>
> -rob
>
@Rob - I hear you and I'm sure there is a solid amount of wisdom in your
words. But I caution that just because something is old-fashioned does
not necessarily make it wrong (much less bad).
I ask you to take a look at the ARCHER statistics of code running in
production (ARCHER is a large HPC site in Europe):
http://archer.ac.uk/status/codes/
I think there are similar stats available for places like CERN, LRZ, and
some of the US labs, but I know of these so I point to them.
Please note that Fortran is #1 (about 80%) followed by C @ about 10%, C++ @
8%, Python @ 1% and all the others at 1%.
Why is that? The math has not changed ... and open up any of those codes
and what do you see: solving systems of differential equations with linear
algebra. It's the same math that human 'computers' did by hand in the
1950s. There are no 'tensor flows' or ML searches running Spark in there.
Sorry, Google/AWS et al. Nothing 'modern' and fresh -- just solid simple
science being done by scientists who don't care about the computer or sexy
new computer languages.
IIRC, you trained as a physicist, so I think you understand their
thinking. *They care about getting their science done.*
By the way, a related thought comes from a good friend of mine from college
who used to be the Chief Metallurgist for the US Gov (NIST in Colorado).
He's back in the private sector now (because he could not stomach current
American politics), but he made an important observation/comment to me a
couple of years ago. They have 60+ years of metallurgical data that he
and his peeps have been using with known Fortran codes. If we gave him
new versions of those analytical programs now in your favorite new HLL -
pick one - your Go (which I love), C++ (which I loathe), DPC++, Rust, Python
- whatever, the scientists would have to reconfirm previous results. They
are not going to do that. It's not economical. They 'know' how the data
works, the types of errors they have, how the programs behave, *etc*.
So to me, the bottom line is that just because it's old-fashioned does not
make it bad. I don't want to write an OS in Fortran-2018, but I can write a
system that supports code compiled with my sexy new Fortran-2018 compiler.
That is to say, the challenge for >>me<< is to build him a new
supercomputer that can run those codes without changing what they are
doing, and have them scale to 1M nodes, *etc*.
Took this to coff since it's really hardware and non-Unix...
On 2/8/20 1:59 PM, Noel Chiappa wrote:
> > From: Dave Horsfall<dave(a)horsfall.org>
>
> > [ Getting into COFF territory, I think ]
>
> In all fairness, the entire field didn't really appreciate the metastability
> issue until the LINC guys at WUSTL did a big investigation of it, and then
> started a big campaign to educate everyone about it - it wasn't DEC being
> particularly clueless.
>
>
> > Hey, if the DEC marketoids didn't want 3rd-party UNIBUS implementations
> > then why was it published?
>
> Well, exactly - but it's useful to remember the differing situation for DEC
> from 1970 (first PDP-11's) and later.
>
> In 1970 DEC was mostly selling to scientists/engineers, who wanted to hook up
> to some lab equipment they'd built, and OEM's, who often wanted to use a mini
> to control some value-added gear of their own devising. An open bus was really
> necessary for those markets. Which is why the 1970 PDP-11/20 manual goes into
> a lot of detail on how to interface to the PDP-11's UNIBUS.
>
> Later, of course, they were in a different business model.
>
> Noel
My old Field Service memory is that DEC never really went after Unibus
interfaces, and the spec was open. It was connections to the big old
Massbus for things like tapes and disks that they kept closed and used
patent protection on, along with the SBI and the later VAX BI bus. DEC
was the only maker of the BIIC chip for the VAXBI and they wouldn't sell
it to competitors...
Braegan (may be a spelling error) made interfaces to connect Calcomp
hard disks to the PDP11's on a Massbus. IIRC they were shut down hard
with legal action. I had a customer with a Unisys (formerly RCA)
Spectra 70 system that had Braegan Calcomp drives with an Eatontown, NJ
based Diva Disk controller. My tech school instructor, in his pre-DEC
career, worked for Diva Disk as an engineer.
Systems Industries (later EMC) cloned the Massbus Adapter on the SBI bus
and didn't directly share the bus or controller with DEC-sold disk drives,
so the SI-9400 showed up on DEC 11/780's (and I think they had an 11/70
controller as well). DEC, IIRC, went after them about using the SBI
backplane interconnect.
A Google search turned up this note about EMC memory boards in VAXes, but
it also mentions DEC patent suits against people who used the Massbus. I
don't remember that on Unibus devices like the controllers from Emulex
and others (until they tried to deal with the VAX BI bus -- a DEC-only
chip -- or the MSCP disk subsystems).
Like you say, different time, different business model. Many inside DEC
wanted them to OEM-sell VAX chips like they did the PDP-11 LSI/F11/J11
chips. There are a number of DECcies who feel that attitude came over
with the influx of IBM'ers and others who came to DEC in the Vax period
to sell into the Data Centers.
They were really protecting the "family-er-crown jewels" back then to
the company's detriment.
Old Computerworld and Datamation adverts along with PR releases are what
I find when searching, unfortunately. Here's a suit against EMC --
which cloned DEC memory products and interfaced to the SBI 11/78x bus.
https://books.google.com/books?id=0sNDKMzgG8gC&pg=RA1-PA70&dq=DEC%2BMassbus…
Along with the DIVA Computroller V there's another picture at the left
of the page with a different emulating controller.
Here's a legal CDC 9766 (I think) on a Plessey controller that plugged
into an RH70 but didn't use the actual DEC Massbus -- probably the CDC A
and B SMD (Storage Module Device, IIRC) cables...
https://books.google.com/books?id=-Nentjp6qSMC&pg=RA1-PA66&dq=eatontown,+nj…
DEC even took the Emulex controllers on service contract in the late 80's.
Bill
[x-posting to COFF]
Idea: anybody interested in a regular video chat? I was thinking of
one that progresses(*) through three different timezones (Asia/Aus/NZ,
then the Americas, then Europe/Africa) so that everybody should be
able to get to two of the three timezones.
(* like a progressive dinner)
30-60 minutes each one, general old computing. Perhaps a guest speaker
now and then with a short presentation. Perhaps a theme now and then.
Perhaps just chew the fat, shoot the breeze as well.
Platform: Zoom or I'd be happy to set up a private Jitsi instance.
Something else?
How often: perhaps weekly or fortnightly through the three timezones,
so it would cycle back every three or six weeks.
Comments, suggestions?!
Cheers, Warren
Moving to COFF to avoid the wrath of wkt.
On Friday, 7 February 2020 at 18:54:33 -0500, Richard Salz wrote:
> BDS C stood for Brain-Damaged Software, it was the work of one guy (Leor
> Zolman). I think it was used to build the Mark of the Unicorn stuff
> (MINCE, Mince is not complete emacs, and Scribble, a scribe clone).
Correct. That's how I came in contact with it (and Emacs, for that
matter).
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
On Saturday, 8 February 2020 at 9:37:22 +1100, Dave Horsfall wrote:
> On Fri, 7 Feb 2020, Greg 'groggy' Lehey wrote:
>
>> But over the years I've been surprised how many people have been fooled.
>
> I'm sure that we've all pulled pranks like that. My favourite was piping
> the output of "man" (a shell script on that system) through "Valley Girl"
> (where each "!" was followed e.g. by "Gag me with a spoon!" etc).
>
> Well, $BOSS came into the office after a "heavy" night, and did something
> like "man uucp", not quite figuring out what was wrong; I was summoned
> shortly afterwards, as I was the only possible culprit...
That brings back another recollection, not Unix-related.
In about 1978 I was getting fed up with the lack of clear text error
messages from Tandem's Guardian operating system. A typical message
might be
FILE SYSTEM ERROR 011
Yes, Tandem didn't use leading 0 to indicate octal. This basically
meant ENOENT, but it was all that the end user saw. By chance I had
been hacking in the binaries and found ways to catch such messages and
put them through a function which converted them into clear text
messages. For reasons that no longer make sense to me, I stored the
texts in an external file, which required a program to update it.
Early one morning I was playing around with this, and for the fun of
it I changed the text for error 11 from "File not found" to "Please
enter FUP PURGE ! *" (effectively rm -f *).
I was still giggling about this when the project manager came to me
and said "Mr. Lehey, I think I've done something silly".
Thank God for backups! We were in a big IBM shop, and the operators
religiously ran a backup every night. Nothing lost.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
On Fri, 7 Feb 2020, Rudi Blom wrote:
>>> Regarding Nasa's Tidbinbilla Tracking station, someone suggested to me
>>> they might have had MODCOMPs
>>
>> Dunno about Tidbinbilla, but Parkes ("The Dish") has a roomful of Linux
>> boxen; I didn't have time to enquire further.
>
> The question was
>
> "Does anyone on this list know anyone who worked at a tracking station
> during the 60s and 70s? They might be able to help fill in the details."
>
> Maybe MODCOMP, but at THAT time for sure no Linux.
I didn't say there was.... Where did you get that idea?
-- Dave
>From: Dave Horsfall <dave(a)horsfall.org>
>To: Computer Old Farts Followers <coff(a)tuhs.org>
>Date: Fri, 7 Feb 2020 07:04:43 +1100 (EST)
>Subject: Re: [COFF] How much Fortran?
>On Thu, 6 Feb 2020, Rudi Blom wrote:
>
>>Regarding Nasa's Tidbinbilla Tracking station, someone suggested to me they might have had MODCOMPs
>
>Dunno about Tidbinbilla, but Parkes ("The Dish") has a roomful of Linux boxen; I didn't have time to enquire further.
>-- Dave
The question was
"Does anyone on this list know anyone who worked at a tracking station
during the 60s and 70s? They might be able to help fill in the
details."
Maybe MODCOMP, but at THAT time for sure no Linux.
Cheers
Regarding Nasa's Tidbinbilla Tracking station, someone suggested to me
they might have had MODCOMPs
https://en.wikipedia.org/wiki/MODCOMP
Cheers,
uncle rubl
===========
From: Wesley Parish <wobblygong(a)gmail.com>
To: Computer Old Farts Followers <coff(a)tuhs.org>
Date: Tue, 4 Feb 2020 14:25:25 +1300
Subject: Re: [COFF] How much Fortran?
My thoughts exactly. I was once lucky enough to visit NASA's
Tidbinbilla Tracking Station in the ACT just a few miles out of
Canberra c. 1976 or 77, and they had some sizeable minicomputers in
their computer room. (How many I don't know.) I imagine they would've
been used to record the transmissions on tape and do some preliminary
processing, before sending the tapes to NASA HQ in the States for
storage and further analysis.
I think what NASA did with their early probes would've made Real
Programmers (TM) sit up and gasp. :)
Does anyone on this list know anyone who worked at a tracking station
during the 60s and 70s? They might be able to help fill in the
details.
Wesley Parish
I recall reading in an old manpage that the (patented) set-uid bit was to
solve the MOO problem. I've searched around, but cannot find anything
relevant. Anyone know?
-- Dave
I despair of getting an attachment through. Let's hope a link survives.
https://www.dropbox.com/s/2lfmrkp34j9z68n/Huffman.jpg?dl=0
The NYC Math Museum (MoMath) had/has an origami exhibit. Seems David
Huffman was interested in origami as well as compression.
=> coff since it's non-Unix
On 2020-Jan-22 13:42:44 -0500, Noel Chiappa <jnc(a)mercury.lcs.mit.edu> wrote:
>Pretty interesting machine, if you study its instruction set, BTW; with no
>stack, subroutines are 'interesting'.
"no stack" was fairly standard amongst early computers. Note the the IBM
S/360 doesn't have a stack..
The usual approach to subroutines was to use some boilerplate as part of the
"call" or function prologue that stashed a return address in a known
location (storing it in the word before the function entry or patching the
"return" branch were common aproaches). Of course this made recursion
"hard" (re-entrancy typically wasn't an issue) and Fortran and Cobol (at
least of that vintage) normally don't support recursion for that reason.
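For anyone who hasn't seen the idiom, here's a rough sketch in C (purely an
illustration, not any particular machine's calling sequence) of what having
only a single fixed "return address" word per subroutine implies, and why it
rules out recursion and re-entrancy:

#include <stdio.h>

/*
 * Illustration only: on a stackless machine the call sequence stored the
 * return address in one fixed word tied to the subroutine (e.g. the word
 * just before its entry point), and "return" was an indirect jump through
 * that word.  Mimic that single word with a static slot and watch a
 * recursive call clobber it.
 */

static int sum_return_slot;     /* the one and only "return address" word */

static int sum_to(int n)
{
    sum_return_slot = 100 + n;  /* caller "stores its return address" here */

    int result = (n <= 1) ? 1 : n + sum_to(n - 1);  /* the inner call
                                                       overwrites the slot */

    /* When this activation "returns", the slot no longer holds the value
     * saved on entry -- on real hardware we would jump back to the wrong
     * place.  Hence: no recursion, no re-entrancy. */
    printf("sum_to(%d): saved %d on entry, slot holds %d at return\n",
           n, 100 + n, sum_return_slot);
    return result;
}

int main(void)
{
    printf("result = %d\n", sum_to(3));
    return 0;
}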
--
Peter Jeremy
Moving to COFF
On Tue, Jan 21, 2020 at 12:53 PM Jon Forrest <nobozo(a)gmail.com> wrote:
> As I remember the Z8000 was going to be the great white hope that
> would continue Zilog's success with the Z80 into modern times.
> But, it obviously didn't happen.
>
> Why?
>
A really good question. I will offer my opinion as someone who lived
through those times.
The two contemporary chips of that time were the Intel 8086 and Z8000.
Certainly, between those two, the Zilog chip was a better chip from a
software standpoint. The funny part was that Moto had been pushing the
6809 against those two. The story of the IBM/PC and Moto is infamous.
Remember, the 68K was a skunkworks project and was not something they were
talking about.
Why IBM picked the 8086/8088 over the Z8000 I do not know. I'm >>guessing<<
total system cost and maybe vendor preference. The team that developed the
PC had been using the 8085 for another project, so the jump of vendors to
Zilog would have been more difficult (Moto and IBM corporate had been tight
for years; MECL was designed by Moto for IBM for the System/360). I do
know they picked Intel over the 6809, even though they had the X-series
device in Yorktown (just like we had it at Tektronix) and had wanted to use
what would become the 68000.
In the end, other than Forest's scheme, none of them could do VM without a
lot of help. If I had not known about the X-series chip (or had been given
a couple of them), I think Roger and I would have used the Z8000 for
Magnolia. But I know Roger and I liked it better; as did most of our
peeps in Tek Labs at the time. IIRC our thinking was that the Z8000 had an
"almost" good enough instruction set, but since many of the processor's
addressing modes were missing on some/most of the instructions, it made
writing a compiler more difficult (Bill Wulf used to describe this as an
'irregular' instruction set). And even though the address space was large,
you still had to contend with a non-linear segmented addressing scheme.
So I think once the 68000 came on the scene for real, it had the advantage
of the best instruction set (all instructions worked as expected and were
symmetric) and it looked pretty darned close to a PDP-11. The large linear
address space was a huge win, and even though it was built as a 16-bit chip
internally (i.e. a 16-bit barrel shifter, needing 2 ticks for all 32-bit
ops), all the registers were defined as 32 bits wide. I think we saw it
as becoming a full 32-bit device the soonest and with the least issues.
Hi
This isn't precisely Unix-related, but I'm wondering about the Third
Ronnie's SDI's embedded systems. Is there anyone alive who knows just
what they were? I'm also wondering, since the "Star Wars" program
seemed to go off the boil at the end of the "Cold War", and the
embedded systems were made with the US taxpayer's dollar, whether or
not they are now public domain - since iirc, US federal law mandates
that anything made with the taxpayer's dollar is owned by the taxpayer
and is thus in the public domain. I'm wondering about starting a
Freedom of Information request to find all of that out, but I don't
quite know how to go about it. (FWVLIW, I'm a fan of outer space
exploration (and commercial use) and a trove of realtime, embedded
source code dealing with satellites would be a treasure indeed. It'd
raise the bar and lower the cost of entry into that market.)
Also, more Unixy: what was the status of the POSIX realtime standards at
the time, and what precise relation did they have to Unix?
Thanks
Wesley Parish
Does anyone know if there is a book or in-depth article about the Sprint/Spartan system, named Safeguard, hardware and software?
There is very little about it available online (see http://www.nuclearabms.info/Computers.html) but it was apparently an amazing programming effort running on UNIVAC.
Arrigo
moving to COFF
On Wed, Jan 22, 2020 at 1:06 PM Pete Wright <pete(a)nomadlogic.org> wrote:
> I also seem to remember him telling me about working on the Patriot
> missile system, although I am not certain if I am remembering correctly
> that this was something he did at Apollo or at another company in the
> Boston area.
>
The Patriot was/is Raytheon in Andover, MA not Apollo (Chelmsford - two
towns west). Cannot speak for today, but when it was developed the source
code was in Ada. I knew the Chief Scientist/PI for the original Patriot
system (who died of a massive stroke a few years back -- my wife used to
take care of his now 30-40 yo kids when they were small and she was a tad
younger).
During the first Gulf War, he basically did not sleep the whole first
month. As I understand it, Raytheon normally took 3-6 months per SW
release. During the war, they put out an update every couple of days and
Willman once said they were working non-stop on the codebase, dealing with
issues they had never seen or ever simulated. I gather it was quite
exciting ... sigh. We got him to give a couple of talks at some local
IEEE functions describing the SW engineering process they had used.
Willman was one of the people who got me to respect Ada and the job his
folks had to do. He once told me that at some point Raytheon had a
contract supporting the Polaris system for the US Navy. The Navy had long
ago lost the source. They had disassembled it and were patching what they
had. Yeech!!!! He also once made another comment to me (in the late
1980s IIRC) that the DoD wanted Ada because they wanted the source to be
part of the specifications and wanted a more explicit language that they
could use for those specs. I have no idea how much that has proven
to be true.
On 1/17/20, Rob Pike <robpike(a)gmail.com> wrote:
> I am convinced that large-scale modern compute centers would be run very
> differently, with fewer or at least lesser problems, if they were treated
> as a single system rather than as a bunch of single-user computers ssh'ed
> together.
>
> But history had other ideas.
>
[moving to COFF since this isn't specific to historic Unix]
For applications (or groups of related applications) that are already
distributed across multiple machines I'd say "cluster as a single
system" definitely makes sense, but I still stand by what I said
earlier about it not being relevant for things like workstations, or
for server applications that are run on a single machine. I think
clustering should be an optional subsystem, rather than something that
is deeply integrated into the core of an OS. With an OS that has
enough extensibility, it should be possible to have an optional
clustering subsystem without making it feel like an afterthought.
That is what I am planning to do in UX/RT, the OS that I am writing.
The "core supervisor" (seL4 microkernel + process server + low-level
system library) will lack any built-in network support and will just
have support for local file servers using microkernel IPC. The network
stack, 9P client filesystem, 9P server, and clustering subsystem will
all be separate regular processes. The 9P server will use regular
read()/write()-like APIs rather than any special hooks (there will be
read()/write()-like APIs that expose the message registers and shared
buffer to make this more efficient), and similarly the 9P client
filesystem will use the normal API for local filesystem servers (which
will also use read()/write() to send messages). The clustering
subsystem will work by intercepting process API calls and forwarding
them to either the local process server or to a remote instance as
appropriate. Since UX/RT will go even further than Plan 9 with its
file-oriented architecture and make process APIs file-oriented, this
will be transparent to applications. Basically, the way that the
file-oriented process API will work is that every process will have a
special "process server connection" file descriptor that carries all
process server API calls over a minimalist form of RPC, and it will be
possible to redirect this to an intermediary at process startup (of
course, this redirection will be inherited by child processes
automatically).
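To make the intercept-and-forward idea a bit more concrete, here is a
purely hypothetical sketch in C -- none of the structure layouts or names
below are actual UX/RT or seL4 interfaces, just an illustration of an
intermediary that sits on the redirected process-server connection
descriptor and either services each request locally or relays it to the
node that owns the target:

#include <stdio.h>
#include <unistd.h>

/* Made-up minimalist RPC framing, for illustration only. */
struct ps_request {
    int  opcode;               /* e.g. spawn, wait, signal ... */
    int  target_node;          /* 0 = this node                */
    char args[56];
};

static void handle_locally(const struct ps_request *r)
{
    printf("local process server: op %d (%s)\n", r->opcode, r->args);
}

static void forward_to_node(const struct ps_request *r)
{
    /* In a real system this would go over the network/9P transport. */
    printf("forwarding op %d to node %d\n", r->opcode, r->target_node);
}

static void intermediary_loop(int conn_fd)
{
    struct ps_request req;
    while (read(conn_fd, &req, sizeof req) == (ssize_t)sizeof req) {
        if (req.target_node == 0)
            handle_locally(&req);
        else
            forward_to_node(&req);
    }
}

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0)        /* stand-in for the redirected connection fd */
        return 1;

    struct ps_request a = { 1, 0, "spawn /bin/date" };
    struct ps_request b = { 2, 3, "wait pid 42" };
    write(fds[1], &a, sizeof a);
    write(fds[1], &b, sizeof b);
    close(fds[1]);

    intermediary_loop(fds[0]);
    return 0;
}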
Originally, I meant to reply to the Linux-origins thread by pointing to
AST's take on the matter but I failed to find it. So, instead, here is
something to warm the cockles of troff users:
From https://www.cs.vu.nl/~ast/home/faq.html
Q: What typesetting system do you use?
A: All my typesetting is done using troff. I don't have any need to see
what the output will look like. I am quite convinced that troff will
follow my instructions dutifully. If I give it the macro to insert a
second-level heading, it will do that in the correct font and size, with
the correct spacing, adding extra space to align facing pages down to
the pixel if need be. Why should I worry about that? WYSIWYG is a step
backwards. Human labor is used to do that which the computer can do
better. Also, using troff means that the text is in ASCII, and I have a
bunch of shell scripts that operate on files (whole chapters) to do
things like produce a histogram by year of all the references. That
would be much harder and slower if the text were kept in some
manufacturer's proprietary format.
Q: What's wrong with LaTeX?
A: Nothing, but real authors use troff.
N.
From the museum pages, via the KG-84 picture, I ended up at the wiki.
Reading a bit on crypto devices, I stumbled over the M-209 and
"US researcher Dennis Ritchie has described a 1970s collaboration with
James Reeds and Robert Morris on a ciphertext-only attack on the M-209
that could solve messages of at least 2000–2500 letters.[3] Ritchie
relates that, after discussions with the NSA, the authors decided not
to publish it, as they were told the principle was applicable to
machines then still in use by foreign governments.[3]"
https://en.wikipedia.org/wiki/M-209
The paper
https://cryptome.org/2015/12/ReedsTheHagelinCipherBellLabs1978.pdf
ends with
"The program takes about two minutes to produce a solution on a DEC PDP-11/70."
No info on the program coding.
More info around the story from Ritchie himself
https://www.bell-labs.com/usr/dmr/www/crypt.html
A post in a Facebook IBM retirees group about an IBM PC museum (in
Germany?) made me follow up with reference to Glenn's museum. Glenn
showed me around the museum last time I saw him at Centaur, but I did
not know until today about the Web presence at http://www.glennsmuseum.com/.
Rise of the Centaur (https://www.imdb.com/title/tt5690958/) includes
Glenn discussing some of the museum.
Charlie
--
voice: +1.512.784.7526 e-mail: sauer(a)technologists.com
fax: +1.512.346.5240 Web: https://technologists.com/sauer/
Facebook/Google/Skype/Twitter: CharlesHSauer
-TUHS, +COFF, in line with Warren's wishes.
On Sun, Jan 12, 2020 at 7:36 PM Bakul Shah <bakul(a)bitblocks.com> wrote:
> There is similar code in the FreeBSD kernel. Embedding head and next ptrs
> reduces memory allocation and improves cache locality somewhat. Since C
> doesn't have generics, they try to gain the same functionality with
> macros. See
>
> https://github.com/freebsd/freebsd/blob/master/sys/sys/queue.h
>
> Not that this is the same as what Linux does (which I haven't dug into) but
> I suspect they may have had similar motivation.
>
I was actually going to say, "blame Berkeley." As I understand it, this
code originated in BSD, and the Linux implementation is at least inspired
by the BSD code. There was code for singly and doubly linked lists, queues,
FIFOs, etc.
I can actually understand the motivation: lists, etc, are all over the
place in a number of kernels. The code to remove an element from a list is
trivial, but also tedious and repetitive: if it can be wrapped up into a
macro, why not do so? It's one less thing to mess up.
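For anyone who hasn't used them, here is a minimal example of the BSD
<sys/queue.h> LIST macros (the struct names are just for illustration; the
header ships with the BSDs and most Linux libcs carry a copy):

#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

struct job {
    int id;
    LIST_ENTRY(job) entries;     /* embeds the linkage pointers in-line */
};

LIST_HEAD(job_list, job);        /* declares the "struct job_list" head */

int main(void)
{
    struct job_list head;
    LIST_INIT(&head);

    /* Insert a few elements at the head of the list. */
    for (int i = 1; i <= 3; i++) {
        struct job *j = malloc(sizeof *j);
        j->id = i;
        LIST_INSERT_HEAD(&head, j, entries);
    }

    /* Removal is one macro -- the tedious pointer surgery is hidden. */
    struct job *j = LIST_FIRST(&head);
    LIST_REMOVE(j, entries);
    free(j);

    LIST_FOREACH(j, &head, entries)
        printf("job %d\n", j->id);
    return 0;
}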
I agree it's gone off the rails, however.
- Dan C.
All, can we move this not-really-Unix discussion to COFF?
Thanks, Warren
P.S. A bit more self-regulation too, please. You shouldn't need me to point
out when the topic has drifted so far :-)