> /proc came from research
Indeed it did.
> /proc was done by Roger at AT&T (maybe USL). I recall him telling me that
> he was not the original author though and that it came from PWB.
Roger Faulkner's /proc article, recently cited in tuhs, begins with
acknowledgment to Tom Killian, who originated /proc in research.
(That was Tom's spectacular debut when he switched from high-energy
physics, at Argonne IIRC, to CS at Bell Labs.)
doug
I asked:
> I wonder where the inspiration for the Unix job control came from? In
> particular, I can't help but notice that Control-Z does something very
> similar in the PDP-10 Incompatible Timesharing System.
Jim Kulp answered:
> The ITS capabilities were certainly part of the inspiration. It was a
> combination of frustrations and gaps in UNIX with some of those
> features found in ITS that resulted in the final package of features.
Casual interest:
Anyone ever used RJE from SYS-III - the IBM mainframe remote job entry
system? I started on Edition 7 on an Interdata, so I am (pretty much) too young
for that era, unless I am fooling myself.
-Steve
Doug McIlroy:
There was some pushback which resulted in the strange compromise
of if-fi, case-esac, do-done. Alas, the details have slipped from
memory. Help, scj?
====
do-od would have required renaming the long-tenured od(1).
I remember a tale--possibly chat in the UNIX Room at one point in
the latter 1980s--that Steve tried and tried and tried to convince
Ken to rename od, in the name of symmetry and elegance. Ken simply
said no, as many times as it took. I don't remember who I heard this
from; anyone still in touch with Ken who can ask him?
Norman Wilson
Toronto ON
Reputed origins of SVR4:
> From SunOS:
> ...
> NFS
And, sadly, NFS is still with us, having somehow upstaged Peter
Weinberger's RFS (R for remote) that appeared at the same time.
NFS allows one to add computers to a file system, but not to
combine the file systems of multiple computers, as RFS did
by mapping uids: NFS:RFS::LAN:WAN.
Doug
> From: Chet Ramey
> /proc was done by Roger at AT&T (maybe USL). I recall him telling me
> that he was not the original author though and that it came from PWB.
> The original implementation was done by Tom Killian for 8th Edition.
I wonder if >pdd (which dates to somewhere in the mid-60's; I'm too lazy to
look up the exact date) was in any way an inspiration for /proc?
Noel
Wasn't ksh SVR4? It was in the Xelos sources @Concurrent Computer, which was an SVR2 port. Xelos didn't do paging, but the source in 87 or 88 or so had ksh in it.
I built it for SVR4 on my Xelos 3230 back in the day.
Bill
Sent from my android device.
> On 10 Jan 2017, at 16:16, pechter(a)gmail.com wrote:
>
> Wasn't ksh SVR4? It was in the Xelos sources @Concurrent Computer, which was an SVR2 port. Xelos didn't do paging, but the source in 87 or 88 or so had ksh in it.
>
> I built it for SVR4 on my Xelos 3230 back in the day.
ksh goes back as far as SVR2.
>
> Bill
>
> Sent from my android device.
>
> -----Original Message-----
> From: Berny Goodheart <berny(a)berwynlodge.com>
> To: tuhs(a)minnie.tuhs.org
> Sent: Tue, 10 Jan 2017 10:12
> Subject: [TUHS] the guy who brought up SVr4 on Sun machines
>
> I have been trawling through these many threads of interest lately, so I thought I should chip in.
>
> "SVr4 was not based on SunOS, although it incorporated
> many of the best features of SunOS 4.x".
>
> IMHO this statement is almost true (there were many great features from BSD too!).
> SunOS 5.0 was ported from SVR4 in early 1991 and released as Solaris 2.0 in 1992 for desktop only.
> Back in the late 80s, Sun and AT&T partnered development efforts so it’s no surprise that SunOS morphed into SVR4. Indeed it was Sun and AT&T who were the founding members of Unix International…with an aim to provide direction and unification of SVR4.
> I remember when I went to work for Sun (much later in 2003), and found that the code base was remarkably similar to the SVR4 code (if not exact in many areas).
>
> Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. But I am sure many of you will put me right if I am wrong ;)
>
> From BSD:
> TCP/IP
> C Shell
> Sockets
> Process groups and job Control
> Some signals
> FFS in UFS guise
> Multi groups/file ownership
> Some system calls
> COFF
>
> From SunOS:
> vnodes
> VFS
> VM
> mmap
> LWP and kernel threads
> /proc
> Dynamic linking extensions
> NFS
> RPC
> XDR
>
> From SVR3:
> .so libs
> revamped signals and trampoline code
> VFSSW
> RFS
> STREAMS and TLI
> IPC (Shared memory, Message queues, semaphores)
>
> Additional features in SVR4 from USL:
> new boot process.
> ksh
> real time extensions
> Service access facility
> Enhancements to STREAMS
> ELF
>
>
>
>
>
> From: Berny Goodheart
> From BSD:
> Process groups and job Control
The intermediate between V6 and V7 which ran on several MIT machines (I think
it was an early PWB - I should retrieve it and make it available to the Unix
archive, it's an interesting system) had 'process groups', but I don't know if
the concept was the same as BSD process groups.
Noel
> From: Tony Finch
> The other classic of Algol 68 literature
No roundup of classic Algol 68 literature would be complete without Hoare's
"The Emperor's Old Clothes".
I assume everyone here has read it, but on the off-chance there is someone
who hasn't, a copy is here:
http://zoo.cs.yale.edu/classes/cs422/2014/bib/hoare81emperor.pdf
and I cannot recommend it more highly.
Noel
On Wed, Jan 4, 2017 at 11:17 AM, ron minnich <rminnich(a)gmail.com> wrote:
> Larry, had Sun open sourced SunOS, as you fought so hard to make happen,
> Linux might not have happened as it did. SunOS was really good. Chalk up
> another win for ATT!
>
FWIW: I disagree. For details look at my discussion of rewriting Linux
in Rust
<https://www.quora.com/Would-it-be-possible-advantageous-to-rewrite-the-Linu…>
on Quora. But a quick point is this: Linux originally took off (and was
successful) not because of the GPL but in spite of it; later the GPL would
help it. But it was not the GPL per se that made Linux win out over BSD, SunOS, et
al.
What made Linux happen was the BSDi/UCB vs AT&T case. At the time, a
lot of hackers (myself included) thought the case was about *copyright*.
It was not; it was about *trade secret* and the ideas around UNIX, *i.e.*
folks like us were "mentally contaminated" with the AT&T Intellectual Property.
When the case came, folks like me who were running 386BSD (which would
later beget FreeBSD et al) got scared. At that time, *BSD (and SunOS)
were much farther along in development and stability. But many of
us thought Linux would insulate us from losing UNIX on cheap HW because
there was no AT&T-copyrighted code in it. Sadly, the truth is that if
AT&T had won the case, *all UNIX-like systems* would have had to be removed
from the market in the USA and EU [NATO allies for sure].
That said, the fact that *BSD and Linux were in the wild would have made it
hard to enforce, and at a "free" (as in beer) price it may have been hard to
make it stick. But the point is that it was a misunderstanding of the legal
situation that made Linux "valuable" to us, not the implementation.
If SunOS had been available, it would not have been any different. It
would have been thought of as based on the AT&T IP, both trade secret and
original copyright.
Clem
>Date: Mon, 09 Jan 2017 08:45:47 -0700
>From: arnold(a)skeeve.com
>To: rochkind(a)basepath.com
>Cc: tuhs(a)tuhs.org
>Subject: Re: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code
>Message-ID: <201701091545.v09FjlXE027448(a)freefriends.org>
>Content-Type: text/plain; charset=us-ascii
>
>I remember the Bournegol well; I did some hacking on the BSD shell.
>
>In general, it wasn't too unusual for people from Pascal backgrounds to
>do similar things, e.g.
>
> #define repeat do {
> #define until(cond) } while (! (cond))
>
>(I remember that for me personally, do...while sure looked weird for
>my first few years of C programming. :-)
>
>(Also, I would not recommend doing that; I'm just noting that
>people often did do stuff like that.)
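(A minimal sketch, not part of the quoted mail, of how those two macros read in use; the comments show what the preprocessor produces:)

    #include <stdio.h>

    #define repeat do {
    #define until(cond) } while (! (cond))

    int main(void)
    {
        int i = 0;
        repeat                      /* expands to: do { */
            printf("%d\n", i++);
        until(i == 3);              /* expands to: } while (! (i == 3)); */
        return 0;
    }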
When the Philips computer division worked on MPX (multi-processor
UNIX) in the late '80s, they had an include file 'syntax.h' which did a
lot of that Pascal-like mapping.
Here part of it:
/* For a full explanation see the file syntax.help */
#define IF if(
#define THEN ){
#define ELSIF }else if(
#define ELSE }else{
#define ENDIF }
#define NOT !
#define AND &&
#define OR ||
#define CASE switch(
#define OF ){
#define ENDCASE break;}
#define WHEN break;case
#define CWHEN case
#define IMPL :
#define COR :case
#define BREAK break
#define WHENOTHERS break;default
#define CWHENOTHERS default
#define SELECT do{{
#define SWHEN }if(
#define SIMPL ){
#define ENDSELECT }}while(0)
#define SCOPE {
#define ENDSCOPE }
#define BLOCK {
#define ENDBLOCK }
#define FOREVER for(;;
#define FOR for(
#define SKIP
#define COND ;
#define STEP ;
#define LOOP ){
#define ENDLOOP }
#define NULLOOP ){}
#define WHILE while(
#define DO do{
#define UNTIL }while(!(
#define ENDDO ))
#define EXITWHEN(e) if(e)break
#define CONTINUE continue
#define RETURN return
#define GOTO goto
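For illustration only (my own sketch, not taken from syntax.help), here is roughly how a small function written against those macros might have looked; the plain C each construct expands to is shown in the comments:

    #include "syntax.h"

    int sign(int x)
    {
        IF x > 0 THEN           /* if( x > 0 ){       */
            RETURN 1;           /*     return 1;      */
        ELSIF x < 0 THEN        /* }else if( x < 0 ){ */
            RETURN -1;          /*     return -1;     */
        ELSE                    /* }else{             */
            RETURN 0;           /*     return 0;      */
        ENDIF                   /* }                  */
    }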
I was in building 5 at Sun when they were switching to SVr4 which became
Solaris 2.0 (I think). Building 5 housed the kernel people at Sun.
John Pope was the poor bastard who got stuck with doing the bring up.
Everyone hated him for doing it, we all wanted it to fail.
I was busting my ass on something in SunOS 4.x and I was there late into
the night, frequently to around midnight or beyond. So was John.
We became close friends. We both moved to San Francisco and ended up
commuting to Mountain View together (and hit the bars together).
John was just at my place, here's a few pictures for those who might
be interested. He's a great guy, got stuck with a shitty job.
http://www.mcvoy.com/lm/2016-pope/
--lm
> On Jan 9, 2017, at 6:00 PM, "Steve Johnson" <scj(a)yaccman.com> wrote:
>
> I can certainly confirm that Steve Bourne not only knew Algol 68, he
> was quite an evangelist for it.
Bourne had led the Algol68C development team at Cambridge until 1975. See http://www.softwarepreservation.org/projects/ALGOL/algol68impl/#Algol68C .
> if-fi and case-esac notation from Algol came to shell [via Steve Bourne]
There was some pushback which resulted in the strange compromise
of if-fi, case-esac, do-done. Alas, the details have slipped from
memory. Help, scj?
doug
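(For anyone who has not met them: the compromise keywords are exactly what the Bourne shell and its descendants still use. A minimal illustrative snippet:)

    for f in *.c
    do
        case $f in
        main.c) echo "entry point" ;;
        *)      echo "other source" ;;
        esac
        if test -r "$f"
        then
            echo "$f is readable"
        fi
    done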
All, I'm not sure if you know of Walter Müller's work on implementing
a PDP-11 on FPGAs: https://wfjm.github.io/home/w11/. He sent me this e-mail
with an excellent source code cross-reference of the 2.11BSD kernel:
P.S.: a long time ago I wrote a source code viewer for 2.11BSD and OSes with
a similar file and directory layout. I made a few tune-ups lately
and wrote some sort of introduction; see
https://wfjm.github.io/home/ouxr/
Might be helpful for you in case you inspect 2.11BSD source code.
Cheers all, Warren
I was amused this morning to see a post on the tack-devel(a)lists.sourceforge.net
mailing list (TACK = The Amsterdam Compiler Kit) today from David Given,
who writes:
>> ...
>> ... I took some time off from thinking about register allocation (ugh)
>> and ported the ABC B compiler to the ACK. It's now integrated into the
>> system and everything.
>>
>> B is Ken Thompson and Dennis Ritchie's untyped programming language
>> which later acquired types and turned into K&R C. Everything's a machine
>> word, and pointers are *word* addresses, not byte addresses.
>>
>> The port's a bit clunky and doesn't generate good code, but it works and
>> it passes its own tests. It runs on all supported backends. There's not
>> much standard library, though.
>>
>> Example:
>>
>> https://github.com/davidgiven/ack/blob/default/examples/hilo.b
>>
>> (Also, in the process it found lots of bugs in the PowerPC mcg backend,
>> now fixed, as well as several subtle bugs in the PowerPC ncg backend; so
>> that's good. I'm pretty sure that this is the only B compiler for the
>> PowerPC in existence.)
>> ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> Date: Fri, 6 Jan 2017 20:09:18 -0700
> From: Warner Losh <imp(a)bsdimp.com>
> To: "Greg 'groggy' Lehey" <grog(a)lemis.com>
> Cc: Clem Cole <clemc(a)ccc.com>, The Eunuchs Hysterical Society
> <tuhs(a)tuhs.org>
> Subject: Re: [TUHS] SunOS vs Linux
>
>> On Friday, 6 January 2017 at 9:27:36 -0500, Clem Cole wrote:
>>
>> I think that if SunOS 4 had been released to the world at the right
>> time, the free BSDs wouldn't have happened in the way they did either;
>> they would have evolved intimately coupled with SunOS.
>
> With the right license (BSD), I'd go so far as to say there'd be no
> BSD 4.4, or if there was, it would have been rebased from the SunOS
> base... There were discussions between CSRG and Sun about Sun donating
> its reworked VM and VFS to Berkeley to replace the Mach VM that was
> in there... Don't know the scope of these talks, or if they included
> any of the dozens of other areas that Sun improved from its BSD 4.3
> base... The talks fell apart over the value of the code, if the rumors
> I've heard are correct.
>
> Warner
Since I was involved with the negotiations with Sun, I can speak
directly to this discussion. The 4.2BSD VM was based on the
implementation done by Ozalp Babaoglu that was incorporated into
the BSD kernel by Bill Joy. It was very VAX centric and was not
able to handle shared read-write mappings.
Before Bill Joy left Berkeley for Sun, he wrote up the API
specification for the mmap interface but did not finish an
implementation. At Sun, he was very involved in the implementation
though did not write much (if any) of the code for the SunOS VM.
The original plan was to ship 4.2BSD with an mmap implementation,
but with Bill's departure that did not happen. So, it fell to me
to sort out how to get it into 4.3BSD. CSRG did not have the
resources to do it from scratch (there were only three of us).
So, I researched existing implementations and it came down to
the SunOS and MACH implementations. The obvious choice was SunOS,
so I approached Sun about contributing their implementation to
Berkeley. We had had a lot of cooperation about exchanging bug
fixes, so this is not as crazy as it seems.
The Sun engineers were all for it, and convinced their managers
to push my request up the hierarchy. Skipping over lots of drama
it eventually got to Scott McNealy who was dubious, but eventually
bought into the idea and cleared it. At that point it went to the
Sun lawyers to draw up the paperwork. The lawyers came back and
said that "giving away SunOS technology could lead to a stockholder
lawsuit concerning the giving away of stockholder assets." End of
discussion. We had to go with MACH.
Kirk McKusick
This is a history list, and I'm going to try to answer this to give some
historical context and hopefully end what is otherwise a thread that I'm not
sure adds much to the history of UNIX one way or the other. Some people
love the GPL, some do not. I'll gladly take some of this off list. But I
would like to see us not devolve TUHS into a my-favorite-license or
favorite-unix discussion.
On Thu, Jan 5, 2017 at 9:09 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> That makes sense to me, the GPL was hated inside of Sun, it was considered
> a virus. The idea that you used a tiny bit of GPLed code and then everything
> else is GPLed was viewed as highway robbery.
>
I'm not a lawyer, nor do I play one. I am speaking for myself, not Intel, so
take what I have to say with that in mind. Note I do teach the required
"GPL and Copyright Course" for all Intel SW folks, so I have had some
training and I do have some opinions. I have also lived this through 5
startups and a number of large firms, both inside them and as a consultant.
Basically, history has shown that both viral and non-viral licenses
have their place. Before I worked at Intel I admit I was pretty much
negative on the GPL "virus", and I >>mostly<< still am. IMHO, it's done
more damage than it has helped, and the CMU/MIT/BSD style "dead fish"
license has done far more positive *for the industry at large* than the GPL
in the long run. But I admit, I'm a capitalist and I see the value in
letting someone make some profit from their work. All I have seen the
virus do in the long run is make firms hire lawyers to figure out how to
deal with it.
There is a lot of misinformation about the term "open source" .... open
source does not mean "free" as in beer. It means available and "open" to
be read and modified. Unix has >>always<< been open and available - which
is why there are so many versions of Unix (and Linux). The question was
the *price* of the license and who had it. Most hackers actually did have
access, as this list shows -- we had it from our universities for little money
or from our employers for much more. The GPL and the virus it carries do not protect
anyone from this diversity. In fact, in some ways it makes it harder.
The diversity comes from the marketplace. The problem is that in the
computer business, the diversity can be bad, and keeping things "my way" is
better for the owner of the gold (be it a firm like IBM, DEC, or Microsoft)
or a technology like Linux.
What the GPL is >>supposed<< to do is ensure that secrets are not locked up and
that all can see and share in the ideas. This is a great idea in
theory; the risk is that if you have IP that you want to somehow protect,
as Larry suggests, the virus can put your IP in danger. To the credit of
firms like Intel, GE, IBM et al, they have learned how to firewall
their >>important<< IP with processes and procedures to protect it (which
is exactly what rms did not want to have happen, BTW). [In my experience,
it made the locks even tighter than before], although it has made some
things more available. I know this rankles some folks. There are
positives and negatives to each way of doing things.
IMO, history has shown that it has been the economics of >>Clayton
Christensen style disruption<<, not a license, that changed things in our
industry. When the price of a UNIX of some version (Linux, *BSD, SunOS,
Minix, etc...) dropped and the low-cost HW came to be, enough hackers did
something. Different legal events pushed one version ahead of others, and
the technology had to be "good enough" -- but it was economics, not
license, that made the difference. License played into the economics for
sure, but in the end it was free (as in beer) vs $s that made it all work.
Having lived through the completely open, completely closed, GPLed and
dead-fish worlds of the computer industry, I'm not sure we are really any
farther ahead in practice. We just have to be careful, and more lawyers
make more money - but that's me being a cynic.
Anyway, I hope we can keep from devolving away from real history.
Clem
One significant area of non-compliance with unix conventions is its
case-insensitive filesystem (HFS and variants like HFS+ if I recall). I think
this is partly for historical reasons, to make Classic / MacOS 9 emulation
easier during the transition. But I could never understand why they did
this; they could have put case insensitivity in their shell and apps
without breaking the filesystem.
Anyway, despite its being unix, I can't really see it gaining much traction
with serious unix users (when did you last get a 404 from a major website
with a tagline "Apache running on MacOSX"?). The MacPorts and Fink repos
are a really bad and patchy implementation of something like
apt/ctan/cpan/etc (I think possibly at least one of those repos builds from
source, with attendant advantages/problems), it does not support X properly,
the dylibs are non-standard, everything is a bit broken compared with Linux
(or FreeBSD), and Apple does not really have the motivation or the manpower
to create a modern, clean system like unix users expect.
Open-sourcing Darwin was supposed to open it up to user-contributed
enhancements, but Apple was never serious about this; it was just a sop to
people who claimed (correctly) that Apple was riding on the back of open
source and giving nothing back to the community. Since Apple refused to
release any important code like drivers or bootstrap routines, the Darwin
release was never really any more usable than something like 4.4BSD-Lite.
People who loved their Macs and loved unix and dreamed of someday running
the Mac UI on top of a proper unix, put significant effort into supplying
the missing pieces but were rebuffed by Apple at every turn, Apple would
constantly make new releases with even more missing pieces and breakage and
eventually stopped making any open source releases at all, leaving a lot of
people crushed and very bitter.
As for me I got on the Apple bandwagon briefly in 2005 or so, at that time
I was experimenting with RedHat but my primary development machines were
Windows 98 and 2000 (occasionally XP). My assessment was RedHat was not
ready for desktop use, since I had trouble with stuff like printers and
scanners that required me to stay with Windows (actually this was probably
surmountable but I did not have the knowledge or really the desire to spend
time debugging it). That's why I selected Apple as a "compromise unix"
which should connect to my devices easily. I got enthusiastic and spent a
good $4k on new hardware. Shortly afterwards Apple announced the Intel
transition so I realized my brand new gear would soon be obsolete and
unsupported. I was still pretty happy though. Two things took the shine off
eventually (a) I spilt champagne on my machine, tore it down to discover my
beautiful and elegant and spare (on the outside) machine was a horrible
hodgepodge of strange piggyback PCBs and third party gear (on the inside),
this apparently happened because options like the backlit keyboard had
become standard equipment at some point but Apple had never redesigned them
into the motherboard, the whole thing was horribly complicated and fragile
and never worked well after the teardown (b) I got seriously into FreeBSD
and Linux and soon discovered the shortcomings of the Mac as a serious
development machine, everything was just slightly incompatible leading to
time waste.
Happily matters have improved a lot. Lately I was setting up some Windows 7
and 10 machines for my wife to use MS Office on for her uni work. Both had
serious driver issues like "The graphics card has crashed and recovered".
And on the Windows 10 machine, despite it being BRAND NEW out of the box
and manufacturer preloaded, the wifi also did not work, constantly crashed
requiring a reboot. Windows Update did not fix these problems. Downloading
and trying various updated drivers from the manufacturer's website seems to
have fixed them for now, except on the Windows 7 machine, where the issue is
noted and listed as "won't fix" because the graphics card is out of date and
the fixed driver won't load on this machine. Given this seems to be the landscape
even for people who are happy to spend the $$ on the official manufacturer
supported Windows based solutions, Linux looks pretty easy to install and
use by comparison. Not problem free, but may have fewer problems and easier
to fix problems.
It appears to me that with the growing complexity of the hardware due to
the millions of compatibility layers and ad hoc protocols built into it,
the job of the manufacturers and official OS or driver writers gets harder
and harder, whereas the crowdsourced principle of open source shows its
value since the gear is better tested in a wider variety of realistic
situations.
cheers, Nick
On Jan 1, 2017 9:46 AM, "David" <david(a)kdbarto.org> wrote:
> On Dec 31, 2016, at 8:58 AM, tuhs-request(a)minnie.tuhs.org wrote:
>
> From: Michael Kjörling <michael(a)kjorling.se>
> To: tuhs(a)tuhs.org
> Subject: Re: [TUHS] Historic Linux versions not on kernel.org
> Message-ID: <20161231111339.GK576(a)yeono.kjorling.se>
> Content-Type: text/plain; charset=utf-8
>
> I might be colored by the fact that I'm running Linux myself, but I'd
> say that those are almost certainly worth preserving somehow,
> somewhere. Linux and OS X are the Unix-like systems people are most
> likely to come in contact with these days
MacOS X is a certified Unix (tm) OS. Not Unix-Like.
http://www.opengroup.org/openbrand/register/apple.htm
It has been so since 10.0. Since 10.5 (Leopard) it has been so noted on the
above Open Group page. The Open Group only lists the most recent release
however.
The Tech Brief for 10.7 (http://images.apple.com/media/us/osx/2012/docs/OSX_for_UNIX_Users_TB_July2011.pdf) also notes the compliance.
David
I left Digital in 1994, so I don't know much about the later evolution of the AlphaServers, but 1998 would have been about right for an EV-56 (EV-5 shrink) or EV-6. There's a Wikipedia article about all the different systems but most of the dates are missing.
The white-label parts are all PAL22V10-15s. The 8 square chips are cache SRAMs, and most of the SOIC jellybeans are bus transceivers to connect the CPU to RAM and I/O. The PC-derived stuff is in the back corner. There are 16 DIMM slots to make two ranks of 54 bit RAM out of 8-bit DIMMs. We usually ran with a SCSI card, an Ethernet card, and an 8514 graphics card plugged into the riser.
-L
> On 2017, Jan 5, at 5:55 PM, ron minnich <rminnich(a)gmail.com> wrote:
>
> What version of this would I have bought ca. 1998? I had 16 of some kind of Alpha nodes in AMD sockets, interconnected with SCI for encoding videos. I ended up writing and releasing what I think were the first open source drivers for SCI -- it took a long time to get Dolphin to let me release them.
>
> The DIPs with white labels -- are those PALs or somethin? Or are the labels just to cover up part names :-)
>
> On Thu, Jan 5, 2017 at 2:39 PM Lawrence Stewart <lstewart2(a)gmail.com> wrote:
> Alphas in PC boxes! I dug around in the basement and found my Beta (photo attached).
>
> This was from 1992 or 1993 I think. This is an EV-3 or EV-4 in a low profile PC box using pc peripherals. Dave Conroy designed the hardware, I did the console ROMS (BIOS equivalent) and X server, and Tom Levergood ported OSF-1. A joint project of DEC Semiconductor Engineering and the DEC Cambridge Research Lab. I think about 20 were built, and the idea kickstarted a line of low end Alphaservers.
>
> This was a typical Conroy minimalist design, crunching the off-chip caches, PC junk I/O, ISA bus, and 64 MBytes of RAM into this little space. I think one gate array would replace about half of the chips.
>
> -L
>
>
> <IMG_0939.JPG>
All, following on from the "lost ports" thread, I might remind you all that
I'm keeping a hidden archive of Unix material which cannot be made public
due to copyright and other reasons. The goal is to ensure that these bits
don't > /dev/null, even if we can't (yet) do anything with them.
If you have anything that could be added to the archive, please let me know.
My rules are: I don't divulge what's in the archive, nor who I got stuff
from. There have been very few exceptions. I have sent copies of the archive
to two important historical computer organisations who must abide by the
same rules. I think I've had one or two individuals who were desperate to
get software to run on their old kit, and I've "loaned" some bits to them.
Anyway, that's it. If that seems reasonable to you, and you want an off-site
backup of your bits, I'm happy to look after them for you.
Cheers, Warren
On 4 January 2017 at 13:51, Steve Johnson <scj(a)yaccman.com> wrote (in part):
> These rules provided rich fodder for Lint, when it came along, [...]
All this lint talk caused me to reread your Lint article, but there's no
history there. Was there a specific incident that begat lint?
N.