On Wed, Jan 4, 2017 at 11:17 AM, ron minnich <rminnich(a)gmail.com> wrote:
> Larry, had Sun open sourced SunOS, as you fought so hard to make happen,
> Linux might not have happened as it did. SunOS was really good. Chalk up
> another win for ATT!
>
FWIW: I disagree. For details look at my discussion of rewriting Linux
in RUST
<https://www.quora.com/Would-it-be-possible-advantageous-to-rewrite-the-Linu…>
on Quora. But a quick point is this: Linux originally took off (and was
successful) not because of the GPL, but in spite of it; only later did the
GPL help it. But it was not the GPL per se that made Linux vs BSD vs SunOS
et al.
What made Linux happen was the BSDi/UCB vs AT&T case. At the time, a
lot of hackers (myself included) thought the case was about *copyright*.
It was not; it was about *trade secret* and the ideas around UNIX, *i.e.*
folks like us were "mentally contaminated" with the AT&T Intellectual Property.
When the case came, folks like me that were running 386BSD, which would
later beget FreeBSD et al, got scared. At that time, *BSD (and SunOS)
were much farther along in development and stability. But many of
us thought Linux would insulate us from losing UNIX on cheap HW because
there was no AT&T copyrighted code in it. Sadly, the truth is that if
AT&T had won the case, *all UNIX-like systems* would have had to be removed
from the market in the USA and EU [NATO allies for sure].
That said, the fact that *BSD and Linux were in the wild would have made
the ruling hard to enforce, and at a "Free" (as in beer) price it might have
been hard to make it stick. But it was a misunderstanding of the legal
situation that made Linux "valuable" to us, not the implementation.
If SunOS had been available, it would not have been any different. It
would have been thought of as based on the AT&T IP, both trade secret and
original copyright.
Clem
>Date: Mon, 09 Jan 2017 08:45:47 -0700
>From: arnold(a)skeeve.com
>To: rochkind(a)basepath.com
>Cc: tuhs(a)tuhs.org
>Subject: Re: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code
>Message-ID: <201701091545.v09FjlXE027448(a)freefriends.org>
>Content-Type: text/plain; charset=us-ascii
>
>I remember the Bournegol well; I did some hacking on the BSD shell.
>
>In general, it wasn't too unusual for people from Pascal backgrounds to
>do similar things, e.g.
>
> #define repeat do {
> #define until(cond) } while (! (cond))
>
>(I remember for me personally that do...while sure looked weird for
>my first few years of C programming. :-)
>
>(Also, I would not recommend doing that; I'm just noting that
>people often did do stuff like that.)
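Fleshed out, the repeat/until pattern above compiles as ordinary C. A
minimal, self-contained sketch (a hypothetical example, not from any real
shell source):

    #include <stdio.h>

    #define repeat do {
    #define until(cond) } while (! (cond))

    int main(void)
    {
        int i = 0;

        repeat
            printf("%d\n", i++);      /* body of the "repeat" loop */
        until(i == 3);                /* expands to: } while (! (i == 3)); */
        return 0;
    }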
When the Philips computer division worked on MPX (multi-processor
UNIX) in the late '80s, they had an include file 'syntax.h' which did a
lot of that Pascal-like mapping.
Here is part of it:
/* For a full explanation see the file syntax.help */
#define IF if(
#define THEN ){
#define ELSIF }else if(
#define ELSE }else{
#define ENDIF }
#define NOT !
#define AND &&
#define OR ||
#define CASE switch(
#define OF ){
#define ENDCASE break;}
#define WHEN break;case
#define CWHEN case
#define IMPL :
#define COR :case
#define BREAK break
#define WHENOTHERS break;default
#define CWHENOTHERS default
#define SELECT do{{
#define SWHEN }if(
#define SIMPL ){
#define ENDSELECT }}while(0)
#define SCOPE {
#define ENDSCOPE }
#define BLOCK {
#define ENDBLOCK }
#define FOREVER for(;;
#define FOR for(
#define SKIP
#define COND ;
#define STEP ;
#define LOOP ){
#define ENDLOOP }
#define NULLOOP ){}
#define WHILE while(
#define DO do{
#define UNTIL }while(!(
#define ENDDO ))
#define EXITWHEN(e) if(e)break
#define CONTINUE continue
#define RETURN return
#define GOTO goto
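For flavor, a fragment written against those macros might have looked
like this -- a hypothetical reconstruction, not actual MPX code:

    #include <stdio.h>
    #include "syntax.h"   /* the header quoted above */

    int main(void)
    {
        int i;

        /* Expands to: for( i = 0 ; i < 10 ; i++ ){ ... } */
        FOR i = 0 COND i < 10 STEP i++ LOOP
            IF i % 2 == 0 THEN
                printf("%d is even\n", i);
            ELSE
                printf("%d is odd\n", i);
            ENDIF
        ENDLOOP
        return 0;
    }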
I was in building 5 at Sun when they were switching to SVr4 which became
Solaris 2.0 (I think). Building 5 housed the kernel people at Sun.
John Pope was the poor bastard who got stuck with doing the bring up.
Everyone hated him for doing it, we all wanted it to fail.
I was busting my ass on something in SunOS 4.x and I was there late into
the night, frequently to around midnight or beyond. So was John.
We became close friends. We both moved to San Francisco and ended up
commuting to Mountain View together (and hit the bars together).
John was just at my place, here's a few pictures for those who might
be interested. He's a great guy, got stuck with a shitty job.
http://www.mcvoy.com/lm/2016-pope/
--lm
> On Jan 9, 2017, at 6:00 PM,"Steve Johnson" <scj(a)yaccman.com> wrote:
>
> I can certainly confirm that Steve Bourne not only knew Algol 68, he
> was quite an evangelist for it.
Bourne had led the Algol68C development team at Cambridge until 1975. See http://www.softwarepreservation.org/projects/ALGOL/algol68impl/#Algol68C .
> if-fi and case-esac notation from Algol came to shell [via Steve Bourne]
There was some pushback which resulted in the strange compromise
of if-fi, case-esac, do-done. Alas, the details have slipped from
memory. Help, scj?
doug
All, I'm not sure if you know of Walter Müller's work on implementing
a PDP-11 on FPGAs: https://wfjm.github.io/home/w11/. He sent me this e-mail
with an excellent source code cross-reference of the 2.11BSD kernel:
P.S.: a long time ago I wrote a source code viewer for 2.11BSD and OSes with
a similar file and directory layout. I made a few tune-ups lately
and wrote some sort of introduction, see
https://wfjm.github.io/home/ouxr/
Might be helpful for you in case you inspect 2.11BSD source code.
Cheers all, Warren
I was amused this morning to see a post on the tack-devel(a)lists.sourceforge.net
mailing list (TACK = The Amsterdam Compiler Kit) from David Given,
who writes:
>> ...
>> ... I took some time off from thinking about register allocation (ugh)
>> and ported the ABC B compiler to the ACK. It's now integrated into the
>> system and everything.
>>
>> B is Ken Thompson and Dennis Ritchie's untyped programming language
>> which later acquired types and turned into K&R C. Everything's a machine
>> word, and pointers are *word* addresses, not byte addresses.
>>
>> The port's a bit clunky and doesn't generate good code, but it works and
>> it passes its own tests. It runs on all supported backends. There's not
>> much standard library, though.
>>
>> Example:
>>
>> https://github.com/davidgiven/ack/blob/default/examples/hilo.b
>>
>> (Also, in the process it found lots of bugs in the PowerPC mcg backend,
>> now fixed, as well as several subtle bugs in the PowerPC ncg backend; so
>> that's good. I'm pretty sure that this is the only B compiler for the
>> PowerPC in existence.)
>> ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> Date: Fri, 6 Jan 2017 20:09:18 -0700
> From: Warner Losh <imp(a)bsdimp.com>
> To: "Greg 'groggy' Lehey" <grog(a)lemis.com>
> Cc: Clem Cole <clemc(a)ccc.com>, The Eunuchs Hysterical Society
> <tuhs(a)tuhs.org>
> Subject: Re: [TUHS] SunOS vs Linux
>
>> On Friday, 6 January 2017 at 9:27:36 -0500, Clem Cole wrote:
>>
>> I think that if SunOS 4 had been released to the world at the right
>> time, the free BSDs wouldn't have happened in the way they did either;
>> they would have evolved intimately coupled with SunOS.
>
> With the right license (BSD), I'd go so far as to saying there'd be no
> BSD 4.4, or if there was, it would have been rebased from the SunOS
> base... There were discussions between CSRG and Sun about Sun donating
> its reworked VM and VFS to Berkeley to replace the Mach VM that was
> in there... Don't know the scope of these talks, or if they included
> any of the dozens of other areas that Sun improved from its BSD 4.3
> base... The talks fell apart over the value of the code, if the rumors
> I've heard are correct.
>
> Warner
Since I was involved with the negotiations with Sun, I can speak
directly to this discussion. The 4.2BSD VM was based on the
implementation done by Ozalp Babaoglu that was incorporated into
the BSD kernel by Bill Joy. It was very VAX centric and was not
able to handle shared read-write mappings.
Before Bill Joy left Berkeley for Sun, he wrote up the API
specification for the mmap interface but did not finish an
implementation. At Sun, he was very involved in the implementation
though did not write much (if any) of the code for the SunOS VM.
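For readers who came along later: the interface Bill specified survives
essentially unchanged as POSIX mmap. A minimal sketch of a shared
read-write mapping -- the very case the old VAX-centric VM could not
handle (the file name is hypothetical):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat st;
        int fd = open("data.bin", O_RDWR);    /* hypothetical file */

        if (fd < 0 || fstat(fd, &st) < 0) {
            perror("data.bin");
            return 1;
        }

        /* A shared read-write mapping: stores through p go back to
           the file and are visible to other processes mapping it. */
        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
            MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        p[0] = '!';
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }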
The original plan was to ship 4.2BSD with an mmap implementation,
but with Bill's departure that did not happen. So, it fell to me
to sort out how to get it into 4.3BSD. CSRG did not have the
resources to do it from scratch (there were only three of us).
So, I researched existing implementations and it came down to
the SunOS and MACH implementations. The obvious choice was SunOS,
so I approached Sun about contributing their implementation to
Berkeley. We had had a lot of cooperation about exchanging bug
fixes, so this is not as crazy as it seems.
The Sun engineers were all for it, and convinced their managers
to push my request up the hierarchy. Skipping over lots of drama
it eventually got to Scott McNealy who was dubious, but eventually
bought into the idea and cleared it. At that point it went to the
Sun lawyers to draw up the paperwork. The lawyers came back and
said that "giving away SunOS technology could lead to a stockholder
lawsuit concerning the giving away of stockholder assets." End of
discussion. We had to go with MACH.
Kirk McKusick
This is a history list, and I'm going to try to answer this to give some
historical context and hopefully end what is otherwise a thread that I'm not
sure adds much to the history of UNIX one way or the other. Some people
love the GPL, some do not. I'll gladly take some of this off list. But I
would like to see us not devolve TUHS into a my-favorite-license or
my-favorite-unix discussion.
On Thu, Jan 5, 2017 at 9:09 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> That makes sense to me, the GPL was hated inside of Sun, it was considered
>
> a virus. The idea that you used a tiny bit of GPLed code and then
> everything
>
> else is GPLed was viewed as highway robbery.
>
I'm not a lawyer, nor do I play one. I am speaking for myself, not Intel,
here, so take what I have to say with that in mind. Note I do teach the
required "GPL and Copyright Course" for all Intel SW folks, so I have had
some training and I do have some opinions. I have also lived this through 5
startups and a number of large firms, both inside and as a consultant.
Basically, history has shown that both viral and non-viral licenses
have their place. Before I worked at Intel I admit I was pretty much
negative on the GPL "virus", and I >>mostly<< still am. IMHO, it's done
more damage than it has helped, and the CMU/MIT/BSD style "Dead Fish"
license has done far more positive *for the industry at large* than the GPL
in the long run. But I admit I'm a capitalist, and I see the value in
letting someone make some profit from their work. All I have seen the
virus do in the long run is make firms hire lawyers to figure out how to
deal with it.
There is a lot of misinformation about the term "open source" .... open
source does not mean "free" as in beer. It means available and "open" to
be read and modified. Unix has >>always<< been open and available - which
is why there are so many versions of Unix (and Linux). The question was
the *price* of the license and who held it. Most hackers actually did have
access, as this list shows -- we had it from our universities for little
money, or from our employers for much more. The GPL, and the virus it
carries, does not protect anyone from this diversity. In fact, in some ways
it makes it harder. The diversity comes from the marketplace. The problem
is that in the computer business diversity can be bad, and keeping things
"my way" is better for the owner of the gold (be it a firm like IBM, DEC,
or Microsoft, or a technology like Linux).
What the GPL is >>supposed<< to do is ensure that secrets are not locked up
and that all can see and share in the ideas. This is a great idea in
theory; the risk is that if you have IP that you want to somehow protect,
as Larry suggests, the virus can put your IP in danger. To the credit of
firms like Intel, GE, IBM et al, they have learned how to firewall
their >>important<< IP with processes and procedures to protect it (which
is exactly what rms did not want to have happen, BTW). [In my experience,
it made the locks even tighter than before], although it has made some
things more available. I know this rankles some folks. There are
positives and negatives to each way of doing things.
IMO, history has shown that it has been the economics of >>Clay
Christensen style disruption<<, not a license, that changed things in our
industry. When a UNIX of any flavor (Linux, *BSD, SunOS, Minix, etc.)
and low-cost HW both came within reach, enough hackers did something.
Different legal events pushed one version ahead of others, and the
technology had to be "good enough" -- but it was economics, not license,
that made the difference. License played into the economics for sure,
but in the end it was free (as in beer) vs $s that made it all work.
Having lived through the completely open, completely closed, GPLed and
dead-fish worlds of the computer industry, I'm not sure we are really any
farther ahead in practice. We just have to be careful, and more lawyers
make more money - but that's me being a cynic.
Anyway, I hope we can keep from devolving away from real history.
Clem
One significant area of non-compliance with unix conventions is its
case-insensitive filesystem (HFS and variants like HFS+, if I recall). I
think this is partly for historical reasons, to make Classic / MacOS9
emulation easier during the transition. But I could never understand why
they did this; they could have put case insensitivity in their shell and
apps without breaking the filesystem.
Anyway, despite its being unix I can't really see it gaining much traction
with serious unix users (when did you last get a 404 from a major website
with a tagline "Apache running on MacOSX"?). The MacPorts and Fink repos
are a really bad and patchy implementation of something like
apt/ctan/cpan/etc (I think at least one of those repos builds from
source, with the attendant advantages and problems), it does not support X
properly, the dylibs are non-standard, and everything is a bit broken
compared with Linux (or FreeBSD). Apple does not really have the motivation
or the manpower to create the modern, clean system unix users expect.
Open-sourcing Darwin was supposed to open it up to user-contributed
enhancements, but Apple was never serious about this; it was just a sop to
people who claimed (correctly) that Apple was riding on the back of open
source and giving nothing back to the community. Since Apple refused to
release any important code like drivers or bootstrap routines, the Darwin
release was never really any more usable than something like 4.4BSD-Lite.
People who loved their Macs and loved unix and dreamed of someday running
the Mac UI on top of a proper unix put significant effort into supplying
the missing pieces, but were rebuffed by Apple at every turn: Apple would
constantly make new releases with even more missing pieces and breakage,
and eventually stopped making open source releases at all, leaving a lot of
people crushed and very bitter.
As for me, I got on the Apple bandwagon briefly in 2005 or so. At that time
I was experimenting with RedHat, but my primary development machines were
Windows 98 and 2000 (occasionally XP). My assessment was that RedHat was
not ready for desktop use, since I had trouble with stuff like printers and
scanners that required me to stay with Windows (this was probably
surmountable, but I did not have the knowledge or really the desire to
spend time debugging it). That's why I selected Apple as a "compromise
unix" which should connect to my devices easily. I got enthusiastic and
spent a good $4k on new hardware. Shortly afterwards Apple announced the
Intel transition, so I realized my brand new gear would soon be obsolete
and unsupported. I was still pretty happy though. Two things took the shine
off eventually. (a) I spilt champagne on my machine and tore it down, to
discover that my beautiful, elegant and spare (on the outside) machine was
a horrible hodgepodge of strange piggyback PCBs and third-party gear (on
the inside). This apparently happened because options like the backlit
keyboard had become standard equipment at some point but Apple had never
redesigned them into the motherboard; the whole thing was horribly
complicated and fragile and never worked well after the teardown. (b) I got
seriously into FreeBSD and Linux and soon discovered the shortcomings of
the Mac as a serious development machine: everything was just slightly
incompatible, leading to wasted time.
Happily, matters have improved a lot. Lately I was setting up some Windows
7 and 10 machines for my wife to use MS Office on for her uni work. Both
had serious driver issues like "The graphics card has crashed and
recovered". On the Windows 10 machine, despite it being BRAND NEW out of
the box and manufacturer-preloaded, the wifi also did not work; it
constantly crashed, requiring a reboot. Windows Update did not fix these
problems. Downloading and trying various updated drivers from the
manufacturer's website seems to have fixed them for now, except on the
Windows 7 machine, where the issue is noted and listed as "won't fix"
because the graphics card is out of date and the fixed driver won't load on
that machine. Given that this seems to be the landscape even for people who
are happy to spend the $$ on official, manufacturer-supported,
Windows-based solutions, Linux looks pretty easy to install and use by
comparison. Not problem-free, but with fewer problems, and easier-to-fix
problems.
It appears to me that with the growing complexity of the hardware due to
the millions of compatibility layers and ad hoc protocols built into it,
the job of the manufacturers and official OS or driver writers gets harder
and harder, whereas the crowdsourced principle of open source shows its
value since the gear is better tested in a wider variety of realistic
situations.
cheers, Nick
On Jan 1, 2017 9:46 AM, "David" <david(a)kdbarto.org> wrote:
> On Dec 31, 2016, at 8:58 AM, tuhs-request(a)minnie.tuhs.org wrote:
>
> From: Michael Kjörling <michael(a)kjorling.se>
> To: tuhs(a)tuhs.org
> Subject: Re: [TUHS] Historic Linux versions not on kernel.org
> Message-ID: <20161231111339.GK576(a)yeono.kjorling.se>
> Content-Type: text/plain; charset=utf-8
>
> I might be colored by the fact that I'm running Linux myself, but I'd
> say that those are almost certainly worth preserving somehow,
> somewhere. Linux and OS X are the Unix-like systems people are most
> likely to come in contact with these days
MacOS X is a certified Unix (tm) OS. Not Unix-Like.
http://www.opengroup.org/openbrand/register/apple.htm
It has been so since 10.0. Since 10.5 (Leopard) it has been so noted on the
above Open Group page. The Open Group only lists the most recent release
however.
The Tech Brief for 10.7 (http://images.apple.com/media/us/osx/2012/docs/OSX_for_UNIX_Users_TB_July2011.pdf)
also notes the compliance.
David
I left Digital in 1994, so I don’t know much about the later evolution of the Alphaservers, but 1998 would have been about right for an EV-56 (EV-5 shrink) or EV-6. There’s a Wikipedia article about all the different systems but most of the dates are missing.
The white label parts are all PAL22V10-15s. The 8 square chips are cache SRAMs, and most of the SOIC jellybeans are bus transceivers to connect the CPU to RAM and I/O. The PC-derived stuff is in the back corner. There are 16 DIMM slots to make two ranks of 64-bit RAM out of 8-bit DIMMs. We usually ran with a SCSI card, an ethernet card, and an 8514 graphics card plugged into the riser.
-L
> On 2017, Jan 5, at 5:55 PM, ron minnich <rminnich(a)gmail.com> wrote:
>
> What version of this would I have bought ca. 1998? I had 16 of some kind of Alpha nodes in AMD sockets, interconnected with SCI for encoding videos. I ended up writing and releasing what I think were the first open source drivers for SCI -- it took a long time to get Dolphin to let me release them.
>
> The DIPs with white labels -- are those PALs or somethin? Or are the labels just to cover up part names :-)
>
> On Thu, Jan 5, 2017 at 2:39 PM Lawrence Stewart <lstewart2(a)gmail.com> wrote:
> Alphas in PC boxes! I dug around in the basement and found my Beta (photo attached).
>
> This was from 1992 or 1993 I think. This is an EV-3 or EV-4 in a low profile PC box using pc peripherals. Dave Conroy designed the hardware, I did the console ROMS (BIOS equivalent) and X server, and Tom Levergood ported OSF-1. A joint project of DEC Semiconductor Engineering and the DEC Cambridge Research Lab. I think about 20 were built, and the idea kickstarted a line of low end Alphaservers.
>
> This was a typical Conroy minimalist design, crunching the off-chip caches, PC junk I/O, ISA bus, and 64 MBytes of RAM into this little space. I think one gate array would replace about half of the chips.
>
> -L
>
>
All, following on from the "lost ports" thread, I might remind you all that
I'm keeping a hidden archive of Unix material which cannot be made public
due to copyright and other reasons. The goal is to ensure that these bits
don't > /dev/null, even if we can't (yet) do anything with them.
If you have anything that could be added to the archive, please let me know.
My rules are: I don't divulge what's in the archive, nor who I got stuff
from. There have been very few exceptions. I have sent copies of the archive
to two important historical computer organisations who must abide by the
same rules. I think I've had one or two individuals who were desperate to
get software to run on their old kit, and I've "loaned" some bits to them.
Anyway, that's it. If that seems reasonable to you, and you want an off-site
backup of your bits, I'm happy to look after them for you.
Cheers, Warren
On 4 January 2017 at 13:51, Steve Johnson <scj(a)yaccman.com> wrote (in part):
> These rules provided rich fodder for Lint, when it came along, [...]
All this lint talk caused me to reread your Lint article, but there's no
history there. Was there a specific incident that begat lint?
N.
So there are a few ports I know of that I wonder if they ever made it back
into that great github repo. I don't think they did.
harris
gould
That weird BBN 20-bit machine
(20 bits? true story: 5 4-bit modules fit in a 19" rack. So 20 bits)
Alpha port (Tru64)
Precision Architecture
Unix port to Cray vector machines
others? What's the list of "lost machines" look like? Would companies
consider a donation, do you think?
If that Cray port is of any interest I have a thread I can push on maybe.
but another true story: I visited DEC in 2000 or so, as LANL was about to
spend about $120M on an Alpha system. The question came up about the SRM
firmware for Alpha. As it was described to me, it was written in BLISS and
the only machine left that could build it was an 11/750, "somewhere in the
basement, man, we haven't turned that thing on in years". I suspect there's
a lot of these containing oxide oersteds of interest.
ron
(Yes, a repeat, but this momentous event only happens every few years.)
The International Earth Rotation Service has announced that there will be
a Leap Second inserted at 23:59:59 UTC on the 31st December, due to the
earth slowly slowing down. It's fun to listen to how the time beeps
handle it; will your GPS clock display 23:59:60, or will it go nuts
(because the programmer was an idiot)?
I actually have a recording of the last one, over at
www.horsfall.org/leapsecond.webm (yes, I am a tragic geek).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
>Date: Wed, 4 Jan 2017 16:41:07 -0500
>From: "Ron Natalie" <ron(a)ronnatalie.com>
>To: "'ron minnich'" <rminnich(a)gmail.com>, <tuhs(a)minnie.tuhs.org>
>Subject: Re: [TUHS] lost ports
>Message-ID: <01c001d266d3$42294820$c67bd860$(a)ronnatalie.com>
>Content-Type: text/plain; charset="utf-8"
...
>I did kernel work on the PA for HP also worked on their X server (did a few other X server >over the years).
>The hard part would be finding anybody from these companies who could even remember
>they made computers let alone had UNIX software.
I worked for the computer division in Philips Electronics, DEC,
Compaq, HP, HPE and still remember some of it :-)
I wasn't involved in OS development, but in testing, turnover to
National Sales Organisations, etc. Even now, at a customer site, I
still have a few aDEC400xP servers from 1992 running SCO UNIX 3.2V4.2
(last update 1999). Also a few AlphaServers with Digital UNIX, Tru64;
finally some Itanium servers with HP-UX 11.23/11.31.
The big/little-endian issue especially gave our customer (and therefore
me) a few headaches. Imagine getting a chunk of shared memory and
casting pointers, assuming the 'system' takes care of alignment. Big
surprise for the customer moving from Tru64 to HP-UX.
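A minimal sketch of that trap, assuming a 32-bit int (Tru64 on Alpha is
little-endian; HP-UX on PA-RISC and Itanium is big-endian):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Pretend another process wrote these four bytes into a
           shared-memory segment. */
        unsigned char shm[4] = { 0x01, 0x02, 0x03, 0x04 };
        unsigned int v;

        /* memcpy sidesteps the alignment problem, but nothing "takes
           care of" byte order: this prints 0x04030201 on little-endian
           Tru64/Alpha and 0x01020304 on big-endian HP-UX. */
        memcpy(&v, shm, sizeof v);
        printf("0x%08x\n", v);
        return 0;
    }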
I just went looking at the v6 source to confirm a memory, namely that cpp
was only invoked if a # was the first character in the file. Hence, this:
https://github.com/dspinellis/unix-history-repo/blob/Research-V6-Snapshot-D…
People occasionally forgot this, and hilarity ensued.
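A sketch of the convention, modernized just enough to still compile:

    #
    /* V6 style: the lone '#' above, as the very first character of the
       file, is what told cc to run the preprocessor at all.  Forget it,
       and the #include and #define below reached the compiler proper
       unexpanded. */
    #include <stdio.h>

    #define GREETING "hello, world\n"

    int main(void)
    {
        printf(GREETING);
        return 0;
    }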
Now I'm curious. Anyone know when that convention ended?
ron
Goodness, I go to sleep, wake up 8 hours later and there's 50 messages in
the TUHS mailing list. Some of these do relate to the history of Unix, but
some are getting quite off-topic.
So, can I get you all to just pause before you send in a reply and ask:
is this really relevant to the history of Unix, and does it contribute
in a meaningful way to the conversation?
Looks like we lost Armando, that's a real shame.
Cheers, Warren
Peter Salus writes "The other innovation present in the Third Edition
was the pipe" ("A Quarter Century of Unix", p. 50). Yet, in the
corresponding sys/ken/sysent.c, the pipe system call seems to be a stub:
1, &fpe, /* 40 = fpe */
0, &dup, /* 41 = dup */
0, &nosys, /* 42 = pipe */
1, &times, /* 43 = times */
On the other hand, the Fourth Edition manual documents the pipe system
call, the construction of pipelines through the shell, and the use of wc
as a filter (without an input file, as was required in the Second Edition).
Would it therefore be correct to say that pipes were introduced in the
Fourth rather than the Third Edition?
>
> What made Linux happen was the BSDi/UCB vs AT&T case. At the time, a
> lot of hackers (myself included) thought the case was about *copyright*.
> It was not; it was about *trade secret* and the ideas around UNIX, *i.e.*
> folks like us were "mentally contaminated" with the AT&T Intellectual Property.
>
Wasn’t there a Usenix button with the phrase “Mentally Contaminated” on it?
I’m sure I’ve got it around here somewhere. Or is my memory suffering from
parity errors?
David
> keeping the code I work on portable between Linux and the Mac requires
> more than a bit of ‘ifdef’ hell.
Curmudgeonly comment: I bristle at the coupling of "ifdef" and "portable".
Ifdefs that adjust code for different systems are prima facie
evidence of NON-portability. I'll buy "configurable" as a descriptor
for such ifdef'ed code, but not "portable".
And, while I am venting about ifdef:
As a matter of style, ifdefs are global constructs. Yet they often
have local effects like an if statement. Why do we almost always write
#ifdef LINUX
linux code
#else
default unix code
#endif
instead of the much cleaner
if(LINUX)
linux code
else
default unix code
In early days the latter would have cluttered precious memory
with unreachable code, but now we have optimizing compilers
that will excise the useless branch just as effectively as cpp.
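A minimal sketch of the latter style (the names are hypothetical):

    #include <stdio.h>

    /* Set once, in a single configuration header. */
    enum { LINUX = 1 };

    static const char *tmpdir(void)
    {
        if (LINUX)
            return "/tmp";        /* linux code */
        else
            return "/var/tmp";    /* default unix code */
    }

    int main(void)
    {
        printf("%s\n", tmpdir());
        return 0;
    }

A side benefit: unlike the cpp version, both branches are parsed and
type-checked on every platform, so the unused one cannot silently rot.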
Much as the trait of overeating has been ascribed to our
hunter ancestors' need to eat fully when nature provided,
so overuse of ifdef echoes coding practices tuned to
the capabilities of bygone computing systems.
"Ifdef hell" is a fitting image for what has to be one of
Unix's least felicitous contributions to computing. Down
with ifdef!
Doug
From: "Doug McIlroy" <doug(a)cs.dartmouth.edu>
Subject:Re: [TUHS] Mac OS X is Unix
> keeping the code I work on portable between Linux and the Mac requires
> more than a bit of ‘ifdef’ hell.
| Curmudgeonly comment: I bristle at the coupling of "ifdef" and "portable".
| Ifdefs that adjust code for different systems are prima facie
| evidence of NON-portability. I'll buy "configurable" as a descriptor
| for such ifdef'ed code, but not "portable".
| <snip>
| "Ifdef hell" is a fitting image for what has to be one of
| Unix's least felicitous contributions to computing. Down
| with ifdef!
| Doug
Doug makes a very good point about ifdef hell. Though I’d claim that it isn’t even “configurable” at some level.
Several years ago I was working at Megatek, a graphics h/w vendor. We were porting the X11 suite to various new boards at the rate of about one a week, it seemed. Needless to say, the code became such a mishmash of ifdefs that you couldn't figure out what some functions did any longer. You just hoped and prayed that your patch worked properly on the various hardware you were targeting and didn't break it for anyone else. You ran the unit tests; if they passed, you pushed your change and then hid under a rock for a while until you were sure it was safe to come out again.
David