I'd totally subscribe to your newsletter :P
that's cool, there is a tape dump of the old stuff on bitsavers... the
UniSoft port I think was the original stuff before Bill showed up?
http://bitsavers.trailing-edge.com/bits/Sun/UniSoft_1.3/
along with some ROM images
http://bitsavers.trailing-edge.com/bits/Sun/sun1/
but more pictures and whatnot are always interesting!
-----Original Message-----
From: Earl Baugh
To: Clem Cole
Cc: tuhs(a)minnie.tuhs.org
Sent: 4/10/21 4:02 AM
Subject: Re: [TUHS] SUN (Stanford University Network) was PC Unix
I’ve done a fair amount of research on Sun 1’s since I have one ( and it
has one of the original 68k motherboards with the original PROMs ).
It’s on my list to create a Sun 1 registry along the lines of the Apple
1 registry ( https://www.apple1registry.com/ ).
Right now, I can positively identify 24 machines that still exist. Odd
serial numbering makes it very hard to know exactly how many they made.
Cisco was sued by Stanford over the Sun 1. From what I read, they made
off with some Stanford property ( SW and HW ). Wikipedia mentions this (
and I have some supporting documents as well ). They ended up licensing
from Stanford as part of the settlement. From what I’ve gathered, VLSI
licensed the design from Stanford, not from Andy directly. However, they
only produced a few machines, and Andy wasn’t all that happy with that.
That was one of the impetuses for getting Sun formed and licensing the
same design. I also believe another company ( or two ) licensed the
design but either produced no machines or very, very few.
You can tell the difference between VLSI boards and the Sun Microsystems
boards because “SUN” is all capitalized on the VLSI boards ( and is
“Sun” on the others ), at least on the few I’ve seen pictures of.
The design was also licensed to SGI — I’ve seen a prototype SGI board
that’s the same thing with a larger PCB to allow some extensions.
And the original CPU boards didn’t have an MMU. They could only run Sun
OS up to version 0.9, I believe. When Bill Joy got there, again from
what I’ve gathered, he wanted to bring more of the BSD code over, and
they had to change the system board. This is why you see the Sun 1/150
model number ( as an upgrade to the original Sun 1/100 designation ).
The rack-mounted Sun 1/120 was changed to the 1/170. The same upgraded
CPU board was used in the Sun 2/120, at least initially.
The original Sun OS wasn’t BSD based. It was a V32 variant, I believe.
And the original CPU boards were returned to Sun, I believe as part of
the upgrade from the 1/100 to the 1/150. ( Given people had just paid
$10,000 for a machine, having to replace the entire machine would’ve
been bad from a customer perspective. ) Sun did board-upgrade trade-ups
after this ( I worked at a company where we purchased an upgrade from a
Sun 3/140 to a Sun 3/110; the upgrade consisted of a CPU board swap and
a different badge for the box ).
Sun then, from what I can tell, sold the original CPU boards to a German
company that was producing a V32 system. They changed out the PROMs, but
you can see the Sun logo and part numbers on the boards.
I could go on and on about this topic.
A Sun 1 was a “bucket list” machine for me, and I am still happy that
some friends were willing to take a 17-hour road trip from Atlanta to
Minnesota to pick mine up.
After unparking the drive heads, it booted up on the first try. ( I was
only willing to try that without a bunch of testing work because I have
some spare power supplies and a couple of plastic tubs of Multibus
boards that came with it. )
Earl
Sent from my iPhone
On Apr 9, 2021, at 11:13 AM, Clem Cole <clemc(a)ccc.com> wrote:
On Fri, Apr 9, 2021 at 10:10 AM Tom Lyon <pugs(a)ieee.org> wrote:
Prior to Sun, Andy had a company called VLSI Technology, Inc. which
licensed SUN designs to 5-10 companies, including Forward Technology and
CoData, IIRC. The SUN IPR effectively belonged to Andy, but I don't
know what kind of legal arrangement he had with Stanford. But the
design was not generally public, and relied on CAD tools only extant on
the Stanford PDP-10. Cisco did start with the SUN-1 processor, though
whether they got it from Andy or direct from Stanford is not known to
me. When Cisco started (1984), the Sun-1 was long dead already at Sun.
Bits passing in the night -- this very much is what I remember and
experienced.
Is there any solid info on the Stanford SUN boards? I just know the SUN-1
was based around them, but they aren't the same thing? And apparently cisco
used them as well but 'borrowed' someone's RTOS design as the basis for IOS?
There was some lawsuit and Stanford got cisco network gear for years for
free but they couldn't take stock for some reason?
I see more and more of these CP/M SBCs on ebay/online, and it seems odd
that there are no 'DIY' SUN boards... Or were they not all that open,
hence why they kind of disappeared?
-----Original Message-----
From: Jon Steinhart
To: tuhs(a)minnie.tuhs.org
Sent: 4/8/21 7:04 AM
Subject: Re: [TUHS] PC Unix
Larry McVoy writes:
> On Thu, Apr 08, 2021 at 12:18:04AM +0200, Thomas Paulsen wrote:
> > >From: John Gilmore <gnu(a)toad.com>
> > >Sun was making 68000-based systems in 1981, before the IBM PC was
created.
> >
> > Sun was founded on February 24, 1982. The Sun-1 was launched in May
1982.
> >
> > https://en.wikipedia.org/wiki/Sun_Microsystems
> > https://en.wikipedia.org/wiki/Sun-1
>
> John may be sort of right, I bet avb was building 68k machines at
> Stanford before SUN was founded. Sun stood for Stanford University
> Network I believe.
>
> --lm
Larry is correct. I remember visiting a friend of mine, Gary Newman,
who was working at Lucasfilm in '81. He showed me a bunch of stuff
that they were doing on Stanford University Network boards.
Full disclosure: it was Gary, and Paul Rubinfeld (who ended up at DEC
and, I believe, was the architect for the MicroVAX), who told me about
the Explorer Scout post at BTL, which is how I met Heinz.
Jon
> From: Jason Stevens
> apparently cisco used them as well but 'borrowed' someone's RTOS design
> as the basis for IOS? There was some lawsuit and Stanford got cisco
> network gear for years for free but they couldn't take stock for some
> reason?
I don't know the whole story, but there was some kind of scandal; I vaguely
recall stories about 'missing' tapes being 'found' under the machine room
raised floor...
The base software for the Cisco multi-protocol router was code done by William
(Bill) Yeager at Stanford (it handled IP and PUP); I have a vague memory that
his initially ran on PDP-11's, like mine. (I think their use of that code was
part of the scandal, but I've forgotten the details.)
> From: Tom Lyon
> the design ... relied on CAD tools only extant on the Stanford PDP-10.
Sounds like SUDS?
Noel
> I developed LSX at Bell Labs in Murray Hill NJ in the 1974-1975
> timeframe.
> An existing C compiler made it possible without too much effort. The
> UNIX
> source was available to Universities by then. I also developed Mini-UNIX
> for the PDP11/10 (also no memory protection) in the 1976 timeframe.
> This source code was also made available to Universities, but the source
> code for LSX was not.
>
> Peter Weiner founded INTERACTIVE Systems Corp. (ISC) in June 1977,
> the first commercial company to license UNIX source from Western
> Electric, for $20,000. Binary licenses were available at the same time.
> I joined ISC in May of 1978, when ISC was the first company to offer
> UNIX support services to third parties. There was never any talk about
> licensing UNIX source code from Western Electric (WE) from the founding
> of ISC to when the Intel 8086 micro became available in 1981.
> DEC never really targeted the PC market with the LSI-11 micro,
> and WE never made it easy to license binary copies of UNIX,
> so LSX never really caught on in the commercial market.
> ISC was in the business of porting the UNIX source code to other
> computers, micro to mainframe, as new computer architectures
> were developed.
>
> Heinz
The Wikipedia page for ISC has the following paragraphs:
"Although observers in the early 1980s expected that IBM would choose Microsoft Xenix or a version from AT&T Corporation as the Unix for its microcomputer, PC/IX was the first Unix implementation for the IBM PC XT available directly from IBM. According to Bob Blake, the PC/IX product manager for IBM, their "primary objective was to make a credible Unix system - [...] not try to 'IBM-ize' the product. PC-IX is System III Unix." PC/IX was not, however, the first Unix port to the XT: Venix/86 preceded PC/IX by about a year, although it was based on the older Version 7 Unix.
The main addition to PC/IX was the INed screen editor from ISC. INed offered multiple windows and context-sensitive help, paragraph justification and margin changes, although it was not a fully fledged word processor. PC/IX omitted the System III FORTRAN compiler and the tar file archiver, and did not add BSD tools like vi or the C shell. One reason for not porting these was that in PC/IX, individual applications were limited to a single segment of 64 kB of RAM.
To achieve good filesystem performance, PC/IX addressed the XT hard drive directly, rather than doing this through the BIOS, which gave it a significant speed advantage compared to MS-DOS. Because of the lack of true memory protection in the 8088 chips, IBM only sold single-user licenses for PC/IX.
The PC/IX distribution came on 19 floppy disks and was accompanied by a 1,800-page manual. Installed, PC/IX took approximately 4.5 MB of disk space. An editorial by Bill Machrone in PC Magazine at the time of PC/IX's launch flagged the $900 price as a show stopper given its lack of compatibility with MS-DOS applications. PC/IX was not a commercial success although BYTE in August 1984 described it as "a complete, usable single-user implementation that does what can be done with the 8088", noting that PC/IX on the PC outperformed Venix on the PDP-11/23.”
It seems like Venix/86 came out in Spring 1983 and PC/IX in Spring 1984. I guess by then RAM had become cheap enough that running in 64KB of core was no longer a requirement and LSX and MX did not make sense anymore. Does that sound right?
I heard a while back, that the reason that Microsoft has avoided *ix so
meticulously, was that back when they sold Xenix to SCO, as part of the
deal Microsoft signed a noncompete agreement that prevented them from
selling anything at all similar to *ix.
True?
I first encountered the fuzz-testing work of Barton Miller (Computer
Sciences Department, University of Wisconsin in Madison) and his
students and colleagues in their original paper on the subject
An empirical study of the reliability of UNIX utilities
Comm. ACM 33(12) 32--44 (December 1990)
https://doi.org/10.1145/96267.96279
which was followed by
Fuzz Revisited: A Re-examination of the Reliability of UNIX Utilities and Services
University of Wisconsin CS TR 1268 (18 February 1995)
ftp://ftp.cs.wisc.edu/pub/techreports/1995/TR1268.pdf
and
An Empirical Study of the Robustness of MacOS Applications Using Random Testing
ACM SIGOPS Operating Systems Review 41(1) 78--86 (January 2007)
https://doi.org/10.1145/1228291.1228308
I have used their techniques and tools many times in testing my own,
and other, software.
By chance, I found today in Web searches on another subject that
Miller's group has a new paper in press in the journal IEEE
Transactions on Software Engineering:
The Relevance of Classic Fuzz Testing: Have We Solved This One?
https://doi.org/10.1109/TSE.2020.3047766
https://arxiv.org/abs/2008.06537
https://ieeexplore.ieee.org/document/9309406
I track that journal at
http://www.math.utah.edu/pub/tex/bib/ieeetranssoftwengYYYY.{bib,html}
[YYYY = 1970 to 2020, by decade], but the new paper has not yet been
assigned a journal issue, so I had not seen it before today.
The Miller group work over 33 years has examined the reliability of
common Unix tools in the face of unexpected input, and in the original
work that began in 1988, they were able to demonstrate a significant
failure rate in common, and widely used, Unix-family utilities.
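The core of their technique is simple enough to reproduce in a few lines
of shell: pipe a burst of random bytes into a utility's standard input
and look at how it exits. A minimal sketch (the input size is my own
arbitrary choice, not a value from the Miller papers, and the
classification relies on the usual shell convention that an exit status
above 128 means death by signal):

```shell
# One round of classic fuzz testing: feed random bytes to a utility on
# stdin and classify how it exits.  An exit status above 128
# conventionally means the process was killed by a signal
# (e.g. 139 = 128 + SIGSEGV).
fuzz_once() {
    dd if=/dev/urandom bs=1000 count=1 2>/dev/null | "$@" >/dev/null 2>&1
    status=$?
    if [ "$status" -gt 128 ]; then
        echo crash
    else
        echo ok
    fi
}

# Example: try a handful of common utilities on random input.
for tool in cat sort wc; do
    echo "$tool: `fuzz_once $tool`"
done
```

The real test suite is more elaborate (it also distinguishes hangs, and
fuzzes interactive and X-based programs), but this captures the idea
that made the 1990 results so striking.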
Despite wide publicity of their first paper, things have not got much
better, even with the reimplementation of software tools in `safe'
languages such as Rust.
In each paper, they analyze the reasons for the exposed bugs, and
sadly, much the same reasons still exist in their latest study, and in
several cases, have been introduced into code since their first work.
The latest paper also contains mention of Plan 9, which moved
bug-prone input line editing into the window system, and of bugs in
pdftex (they say latex, but I suspect they mean pdflatex, not latex
itself: pdflatex is a macro-package enhanced layer over the pdftex
engine, which is a modified TeX engine). The latter are significant
to me and my friends and colleagues in the TeX community, and for the
TeX Live 2021 production team
http://www.math.utah.edu/pub/texlive-utah/
especially because this year, Don Knuth revisited TeX and Metafont,
produced new bug-fixed versions of both, plus updated anniversary
editions of his five-volume Computers & Typesetting book series. His
recent work is described in a new paper announced this morning:
The \TeX{} tuneup of 2021
TUGboat 42(1) ??--?? February 2021
https://tug.org/TUGboat/tb42-1/tb130knuth-tuneup21.pdf
Perhaps one or more list members might enjoy the exercise of applying
the Miller-group fuzz tests (all of which are available from a Web
site
ftp://ftp.cs.wisc.edu/paradyn/fuzz/fuzz-2020/
as discussed in their paper) to 1970s and 1980s vintage Unix systems
that they run on software-simulated CPUs (or rarely, on real vintage
hardware).
The Unix tools of those decades were generally much smaller (in lines
of code), and most were written by the expert Unix pioneers at Bell
Labs. It would be of interest to compare the tool failure rates in
vintage Unix with tool versions offered by commercial distributions,
the GNU Project, and the FreeBSD community, all of which are treated
in the 2021 paper.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
I'm not sure why people, even in a group devoted to history like
ours, focus so much on whether a journal is issued in print or
only electronically. The latter has become more and more common.
On one hand, I too find that if something is available only
electronically I'm more likely to put off reading it, probably
because back issues don't pile up as visibly.
On the other, in recent years I've been getting behind in my
reading of periodicals of all sorts, and so far as I can tell
that has nothing to do with whether a given periodical arrives
on paper. If anything, electronic access makes it more likely
I'll be able to catch up, because it's easier to carry a bunch
of back issues around on a USB stick or loaded into a tablet or
the like than to lug around lots of hardcopy. The biggest
burden has been that imposed by PDF files, which are often
carefully constructed to be appallingly cumbersome to read
unless viewed on a letter-paper/A4-sized screen (or printed
out). HTML used to be better, though the ninnies who design
web pages to look like magazine ads have spoiled a lot of
that over the years.
Since I often want to read PDF files when travelling (e.g.
conference proceedings while at the conference) I finally
invested in a large-screened tablet.
Even so, I have a big pile of back issues of ;login:, CACM
(until ACM's policies, having little to do with the journal,
recently drove me away), Rail Passenger Association News,
and Consumer Reports waiting to be read. And sometimes I'm
months behind on this list.
My advice to those who find electronic-only publications
cumbersome is to invest in either a good tablet or a good
printer. I have and use both. There's no substitute for
a large, high-quality screen, and sometimes there's no
substitute for paper that I can flip back and forth, but
I'm fine with supplying those myself.
I'm still looking for a nice brass-bound leather tablet case,
though.
Norman Wilson
Toronto ON
I'm having a bit of trouble with a couple of RD52 drives, and I suspect
that I need a newer formatting program. The formatter floppy in my XXDP
kit includes ZRQC-C-0 (ZRQCC0.BIC), and I understand that revision C is
really old, and I should be running at least F, preferably H.
Does anyone have (or can make) an image of a newer version of the RX50
formatter floppy? I've got an RX50 drive in my 11/83 with 2.11BSD, so
it would be a simple matter to make a bootable floppy there if I just
had the bits to write... :)
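(For the record, the write itself would just be a size sanity check and
a dd; the raw-device name below is a placeholder guess, so substitute
whatever your configuration actually calls the RX50 unit:

```shell
# An RX50 floppy holds 80 tracks x 10 sectors x 512 bytes = 409600
# bytes.  Sanity-check the image size, then write it out with dd.
# The default device name is a guess; pass the real RX50 raw device.
write_rx50() {
    image=$1
    device=${2:-/dev/rrx2a}   # placeholder device name
    size=`wc -c < "$image"` || return 1
    if [ "$size" -ne 409600 ]; then
        echo "unexpected image size: $size" >&2
        return 1
    fi
    dd if="$image" of="$device" bs=512
}
```

so it really is only the bits that are missing.)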
-tih
--
Most people who graduate with CS degrees don't understand the significance
of Lisp. Lisp is the most important idea in computer science. --Alan Kay
> IBM famously failed to buy the well-established CP/M in
> 1980. (CP/M had been introduced in 1974, before the
> advent of the LSI-11 on which LSX ran.) By then IBM had
> settled on Basic and Intel. I do not believe they ever
> considered Unix and DEC, nor that AT&T considered
> selling to IBM. (AT&T had--fortunately--long since been
> rebuffed in an attempt to sell to DEC.)
>
> Doug
Besides all the truth or legend around flying and signing NDAs, I think there were clear economic reasons for ending up with Microsoft’s DOS, and for the precursor to that: picking the 8088.
[1] By 1980 there were an estimated 8,000 software packages for CP/M available, many aimed at small business. IBM was targeting that. The availability of source-level converters for 8080 code to 8088 code made porting economically feasible for the (cottage) ISVs. This must have been a strong argument in favour of picking the 8088 for the original PC.
[2] In line with their respective tried and tested business models, Digital Research offered CP/M-86 with a per-copy license structure. Microsoft offered QDOS with a one-off license structure. The latter was economically more attractive to IBM. I don’t think either side expected clones to happen the way they did, although they did probably factor in the appearance of non-compatible work-alikes.
Although some sources suggest that going with the 68000 and/or Unix were considered, it would have left the new machine without an instant base of affordable small business applications. Speed to market was a leading paradigm for the PC's design team.
> There is some information and demos of the early 8086/80286 Xenix,
> including the IBM rebranded PC Xenix 1.0 on pcjs.org
>
> https://www.pcjs.org/software/pcx86/sys/unix/ibm/xenix/1.0/
>
> And if you have a modern enough browser you can run them from the browser as
> well!
>
> It's amazing that CPU's are fast enough to run interpreted emulation that is
> faster than the old machines of the day.
That is a cool link. At the bottom of the page are two images of floppy disks. These show an ISC copyright notice. Maybe this is because the floppies contained “extensions” rather than Xenix itself.
===
Note that "IBM Xenix 1.0" is actually the same as MS Xenix 3.0, and arrived after MS Xenix had been available for 4 years (initially for the PDP-11 and shortly after for other CPUs):
http://seefigure1.com/2014/04/15/xenixtime.html
Rob Ferguson writes:
"From 1986 to 1989, I worked in the Xenix group at Microsoft. It was my first job out of school, and I was the most junior person on the team. I was hopelessly naive, inexperienced, generally clueless, and borderline incompetent, but my coworkers were kind, supportive and enormously forgiving – just a lovely bunch of folks.
Microsoft decided to exit the Xenix business in 1989, but before the group was dispersed to the winds, we held a wake. Many of the old hands at MS had worked on Xenix at some point, so the party was filled with much of the senior development staff from across the company. There was cake, beer, and nostalgia; stories were told, most of which I can’t repeat. Some of the longer-serving folks dug through their files to find particularly amusing Xenix-related documents, and they were copied and distributed to the attendees.
If memory serves, it was a co-operative effort between a number of the senior developers to produce this timeline detailing all the major releases of Xenix.
I have no personal knowledge of the OEM relationships before 1986, and I do know that there were additional minor ports and OEMs that aren’t listed on the timeline (e.g. NS32016, IBM PS/2 MCA-bus, Onyx, Spectrix), but to the best of my understanding this hits the major points.
Since we’re on the topic, I should say that I’ve encountered a surprising amount of confusion about the history of Xenix. So, here are some things I know:
Xenix was a version of AT&T UNIX, ported and packaged by Microsoft. It was first offered for sale to the public in the August 25, 1980 issue of Computerworld.
It was originally priced between $2000 and $9000 per copy, depending on the number of users.
MS owned the Xenix trademark and had a master UNIX license with AT&T, which allowed them to sub-licence Xenix to other vendors.
Xenix was licensed by a variety of OEMs, and then either bundled with their hardware or sold as an optional extra. Ports were available for a variety of different architectures, including the Z-8000, Motorola 68000, NS16032, and various Intel processors.
In 1983, IBM contracted with Microsoft to port Xenix to their forthcoming 80286-based machines (codenamed “Salmon”); the result was “IBM Personal Computer XENIX” for the PC/AT.
By this time, there was growing retail demand for Xenix on IBM-compatible personal computer hardware, but Microsoft made the strategic decision not to sell Xenix in the consumer market; instead, they entered into an agreement with a company called the Santa Cruz Operation to package, sell and support Xenix for those customers.
Even with outsourcing retail development to SCO, Microsoft was still putting significant effort into Xenix:
• Ports to new architectures, the large majority of the core kernel and driver work, and extensive custom tool development were all done by Microsoft. By the time of the Intel releases, there was significant kernel divergence from the original AT&T code.
• The main Microsoft development products (C compiler, assembler, linker, debugger) were included with the Intel-based releases of Xenix, and there were custom internally-developed toolchains for other architectures. Often, the latest version of the tools appeared on Xenix well before they were available on DOS.
• The character-oriented versions of Microsoft Word and Multiplan were both ported to Xenix.
• MS had a dedicated Xenix documentation team, which produced custom manuals and tutorials.
As late as the beginning of 1985, there was some debate inside of Microsoft whether Xenix should be the 16-bit “successor” to DOS; for a variety of reasons – mostly having to do with licensing, royalties, and ownership of the code, but also involving a certain amount of ego and politics – MS and IBM decided to pursue OS/2 instead. That marked the end of any further Xenix investment at Microsoft, and the group was left to slowly atrophy.
The final Xenix work at Microsoft was an effort with AT&T to integrate Xenix support into the main System V.3 source code, producing what we unimaginatively called the “Merged Product” (noted by the official name of “UNIX System V, r3.2” in the timeline above).
Once that effort was completed, all Intel-based releases of UNIX from AT&T incorporated Xenix support; in return, Microsoft received royalties for every copy of Intel UNIX that AT&T subsequently licensed.
It will suffice, perhaps, to simply note that this was a good deal for Microsoft.”
It would be so cool if these early (1980-1984) Xenix versions were available for historical examination and study.