On Thursday, 7 March 2024 at 1:47:26 -0500, Jeffry R. Abramson wrote:
>
> I eventually reverted back to Linux because it was clear that the
> user community was getting much larger, I was using it
> professionally at work and there was just a larger range of
> applications available. Lately, I find myself getting tired of the
> bloat and how big and messy and complicated it has all gotten.
> Thinking of looking for something simpler and was just wondering
> what do other old timers use for their primary home computing needs?
I'm surprised how few of the responders use BSD. My machines all
(currently) run FreeBSD, with the exception of a Microsoft box
(distress.lemis.com) that I use remotely for photo processing. I've
tried Linux (used to work developing Linux kernel code), but I
couldn't really make friends with it. It sounds like our reasons are
similar.
More details:
1977-1984: CP/M, 86-DOS
1984-1990: MS-DOS
1991-1992: Interactive UNIX
1992-1997: BSD/386, BSD/OS
1997-now: FreeBSD
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
This is UNIX history, but since the Internet's history and Unix history are
so intertwined, I'm going to risk the wrath of the IH moderators to try to
explain, as I was one of the folks who was at the table in those times
and participated in my small way in both events: the birth of the Internet
and the spreading of the UNIX IP.
More details can be found in a paper I did a few years ago:
https://technique-societe.cnam.fr/colloque-international-unix-en-france-et-…
[If you cannot find it and are interested send me email off list and I'll
forward it].
And ... if people want to continue this discussion -- please, please, move
it to the more appropriate COFF mailing list:
https://www.tuhs.org/cgi-bin/mailman/listinfo/coff - which I have CC'ed in
this reply.
On Fri, Mar 8, 2024 at 11:32 PM Greg Skinner via Internet-history <
internet-history(a)elists.isoc.org> wrote:
> Forwarded for Barbara
>
> > I will admit your response is confusing me. My post only concerns what
> I think I remember as a problem in getting BSD UNIX, in particular the
> source code. Nothing about getting something we wanted to use on a
> hardware platform from one of the commercial vendors. We needed the BSD
> source but got hung up.
>
Let me see if I can explain better ...
Assuming you were running BSD UNIX on a VAX, your team would have needed
two things:
   - an AT&T license for 32/V [Research Version 7 -- ported to a VAX/780 at
AT&T], and
   - a license for BSD 3, later 4, then 4.1, *etc*., from the Regents of
the University of CA.
The first license gave your team a few core rights from AT&T:
   1. the right to run UNIX binaries on a single CPU (which was named in
your license),
   2. the right to look at and modify the sources,
   3. the right to create derivative works from the AT&T IP, and
   4. the right to exchange your derivative works with other people who
held a similar license from AT&T.
[AT&T had been forced to allow this access (license) to their IP under the
rules of the 1956 consent decree - see the paper for more details. But
remember: as part of the consent decree that allowed it a legal monopoly
on the phone system, AT&T had to make its IP available to the US Gov't --
which I'm guessing is the crux of Barbara's question/observation].
For not-for-profits (University/Research), a small fee was allowed to be
charged (order of 1-2 hundred $s) to process the paperwork and copy the mag
tape. But their IP came without any warranty, and you had to hold AT&T
harmless if you used it. In those days, we referred to this as *the UNIX IP
was abandoned on your doorstep.* BTW: This license allowed the research
sites to move AT&T derivative work (binaries) within their site freely.
Still, if you look at the license carefully, most had a restriction
(often/usually ignored at the universities) that the sources were supposed
to only be available on the original CPU named in their specific license.
Thus, if you held a University license, no fees were charged to run the
AT&T IP on other CPUs --> however, licensees were not allowed to use it
for "commercial" purposes at the University [BTW: this clause was often
ignored, although a group of us hackers at CMU famously went on strike
in the late 1970s until the University obtained at least one commercial
license]. The agreement was that a single CPU should be officially bound
for all commercial use at that institution. I am aware that Case Western
got a similar license soon after CMU did (their folks had found out about
the CMU strike/license). But I do not know if MIT, Stanford, or UCB
officials ever came clean on that part and paid for a commercial license
(depending on the type of license, the cost was on the order of $20K-25K
for the first CPU and $7K-10K for each CPU afterward - each of these
"additional CPUs" could also have the sources, but had to be named in an
appendix to the license with AT&T). I believe that some of the larger
state schools, like Penn State, Rutgers, Purdue, and UW, started to follow
that practice by the time UNIX began to spread around each campus.
That said, a different license for UNIX-based IP could be granted by the
Regents of the University of CA and managed by its "Industrial
Liaison Office" at UCB (the 'ILO' - the same folks that offered licenses
for tools like SPICE, SPLICE, MOTIS, *et al.*). This license gave the holder
the right to examine and use UCB's derivative works on anything, as long
as you acknowledged that you got it from UCB and held the
Regents blameless [we often called this the 'dead-fish license' -- *you
could make a chip, make a computer, or even wrap dead fish in it.* But you
had to say you started with something from the Regents, and they were not
to be blamed for what you did with it].
The Regents were exercising rights 3 and 4 from AT&T. Thus, a team that
wanted to obtain the Berkeley Software Distribution for UNIX (*a.k.a.* BSD)
needed to demonstrate that it held the appropriate license from AT&T
[send a copy of the signature page from your license to the ILO] before UCB
would release the bits. The ILO also charged a small processing fee, on
the order of $1K. [The original BSD is unnumbered, although most refer to
it today as 1BSD to differentiate it from later BSD releases of UNIX].
Before I go on, in those times, the standard way we operated was that you
needed to have a copy of someone else's signature page to share things. In
what would later become USENIX (truth here - I'm an ex-president of the
same), you could only get invited and come to a conference if you were
licensed from AT&T. That was not a big deal. We all knew each other.
FWIW: at different times in my career, I have kept a hanging file in a
cabinet with copies of these signature pages from different folks with
whom I would share mag tapes (remember, this was pre-Internet, and many of
the folks using UNIX were not part of the ARPAnet).
However, the song has other verses that make this a little confusing.
If your team obtained a *commercial use license* from AT&T, it could
further obtain a *commercial redistribution license*. This was initially
granted for the Research Seventh Edition. It was later rewritten (with the
business terms changing each time) for what would eventually be called
System III [1], and then for the different System V releases. The price of
the redistribution license for V7 was $150K, plus a sliding scale per CPU
running the AT&T IP, depending on the number of CPUs you needed. With this,
the single-CPU restriction on the sources was removed.
So ... if you had a redistribution license, you could also get a license
from the Regents, and as long as you obeyed their rules, you could sell a
copy of UNIX to run on any licensed target. Traditionally, the hardware was
part of the same purchase when bought from a firm like DEC, IBM,
Masscomp, *etc*. However, separate SW licenses were sold via firms such as
Microsoft and Mt. Xinu. The purchaser of a *binary license* from one of
those firms did not have the right to do anything but use the AT&T
derivative work. If your team had a binary license, you could not obtain
any of the BSD distributions until the so-called 'Net2' BSD release [and
I'm going to ignore the whole AT&T/BSDi/Regents case here, as it is not
relevant to Barbara's question/comment].
So the question is, how did a DoD contractor, be it BBN, Ford Aerospace,
SRI, etc., originally get access to the UNIX IP? Universities and
traditional research teams could get a research license. Commercial firms
like DEC needed a commercial license. Folks with DoD contracts were in a
hazy area. The original V5 commercial license was written for Rand, a DoD
contractor. However, as discussed here on the IH mailing list and
elsewhere, some places like BBN had access to the core UNIX IP as part of
their DoD contracts. I believe Ford Aerospace was working together with
AT&T as part of another US Gov't project - which is how UNIX got there
originally (Ford Aero could use it for that project, but not the folks at
Ford Motors, for instance).
The point is, if you accessed the *IP indirectly* like that, then
your site probably did not have a negotiated license with a signature page
to send to someone.
@Barbara, I cannot say for sure, but if this was either a PDP-11 or a VAX
and you wanted one of the BSDs, I guess/suspect that your team was
dealing with an indirect path to AT&T licensing -- your site's license
might have come via a US Gov't contract, not directly. So trying to get a
BSD tape directly from the ILO might have been more difficult without a
signature page.
So, rolling back to the original question: you could get access to the BSD
sources, but you had to demonstrate to the ILO folks in UCB's Cory Hall
that you were legally allowed access to the AT&T IP in source-code form.
That demonstration was traditionally fulfilled with a xerographic copy of
the signature page for your institution, which the ILO kept on file. That
said, if you had legal access to the AT&T IP by indirect means, I do not
know how the ILO completed that check or what they needed to protect the
Regents.
Clem
[1] What would be called System III from a marketing standpoint was
originally developed as PWB 3.0. This was the system a number of firms,
including my own, were discussing with AT&T at the famous meetings at
'Ricky's Hyatt' during the price (re)negotiations after the original V7
redistribution license.
On 3/7/24, Tom Lyon <pugs78(a)gmail.com> wrote:
> For no good reason, I've been wondering about the early history of C
> compilers that were not derived from Ritchie, Johnson, and Snyder at Bell.
> Especially for x86. Anyone have tales?
> Were any of those compilers ever used to port UNIX?
>
[topic of interest to COFF, as well, I think]
DEC's Ultrix for VAX and MIPS used off-the-shelf Unix cc. I don't
recall what they used for Alpha.
The C compiler for VAX/VMS was written by Dave Cutler's team at
DECwest in Seattle. The C front end generated intermediate language
(IL) for Cutler's VAX Code Generator (VCG), which was designed to be a
common back end for DEC's compilers for VAX/VMS. His team also
licensed the Freiburghouse PL/I front end (commercial version of a
PL/I compiler originally done for Multics) and modified it to generate
VCG IL. The VCG was also the back end for DEC's Ada compiler. VCG
was superseded by the GEM back end, which supported Alpha and Itanium.
A port of GEM to x86 was in progress at the time Compaq sold off the
Alpha technology (including GEM and its C and Fortran front ends) to
Intel.
Currently I've only got an older laptop at home still running Windows 10
Pro. Mostly I use a company provided HP ProBook 440 G7 with Windows 11 Pro.
I installed WSL2 to run Ubuntu 20.04 if only because I wanted to mount UFS
ISO images 😊
Still employed, I have access to lots of UNIX servers: SCO UNIX 3.2V4.2 on
Intel-based servers, Tru64 on AlphaServers, HP-UX 11.23/11.31 on Itanium
servers. There's an rx2660 server I can call my own, but I can hear it
even in a test room, so I'm reluctant to take it home. My electricity bill
would also explode, I think.
Cheers,
uncle rubl
--
The more I learn the better I understand I know nothing.
Dropped TUHS; added COFF.
> * Numerous editors show up on different systems, including STOPGAP on
> the MIT PDP6, eventually SOS, TECO, EMACs, etc., and most have some
> concept of a 'line of text' to distinguish from a 'card image.'
I'd like to expand on this, since I never heard about STOPGAP or SOS on the
MIT PDP-6/10 computers. TECO was ported over to the 6 only a few weeks
after delivery, and it seems to have been the major editor ever since.
Did you think of the SAIL PDP-6?
> From: Clem Cole
> the idea of a text editor existed long before Ken's version of QED,
> much less, ed(1). Most importantly, Ken's QED came after the original
> QED, which came after other text editors.
Yes; some of the history is given here:
An incomplete history of the QED Text Editor
https://www.bell-labs.com/usr/dmr/www/qed.html
Ken would have run into the original on the Berkeley Time-Sharing System; he
apparently wrote the CTSS one based on his experience with the one on the BTSS.
Oddly enough, CTSS seems to have not had much of an editor before. The
Programmer's Guide has an entry for 'Edit' (Section AH.3.01), but 'edit file'
seems to basically do a (in later terminology) 'cat >> file'. Section AE
seems to indicate that most 'editing' was done by punching new cards on a
key-punch!
The PDP-1 was apparently similar, except that it used paper tape. Editing
paper tapes was difficult enough that Dan Murphy came up with TECO - original
name 'Tape Editor and Corrector':
https://opost.com/tenex/anhc-31-4-anec.pdf
> Will had asked -- how did people learn to use reg-ex?
I learned it from reading the 'sh' and 'ed' V6 man pages.
The MIT V6 systems had TECO (with a ^R mode even), but I started out with ed,
since it was more like editors I had previously used.
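For readers who want a feel for what those V6 man pages taught: the core notation was small and has survived nearly unchanged. A quick illustration using Python's re module for convenience (ed(1) used basic REs, so only the overlapping subset is shown; the file names are made up):

```python
import re

lines = ["main.c", "Makefile", "notes.txt", "main.o"]

# ^ anchors at line start, $ at line end, . matches any character,
# [...] is a character class -- all straight out of the V6 ed/sh pages.
c_files  = [l for l in lines if re.search(r"\.c$", l)]       # ends in .c
starts_m = [l for l in lines if re.search(r"^main", l)]      # begins with main
any_ext  = [l for l in lines if re.search(r"^main\..$", l)]  # main. plus one char

print(c_files)   # ['main.c']
print(starts_m)  # ['main.c', 'main.o']
print(any_ext)   # ['main.c', 'main.o']
```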
Noel
Might interest the bods here too...
-- Dave
---------- Forwarded message ----------
From: Paul Ruizendaal
To: "tuhs(a)tuhs.org" <tuhs(a)tuhs.org>
Subject: [TUHS] RIP Niklaus Wirth, RIP John Walker
Earlier this year two well known computer scientists passed away.
On New Year’s Day it was Niklaus Wirth aged 90. A month later it was John Walker aged 75. Both have some indirect links to Unix.
For Wirth the link is that a few sources claim that Plan 9 and the Go language are in part influenced by the design ideas of Oberon, the language and the OS. Maybe others on this list know more about those influences.
For Walker, the link is via the company that he was running as a side-business before he got underway with AutoCAD: https://www.fourmilab.ch/documents/marinchip/
In that business he was selling a 16-bit system for the S-100 bus, based around the TI9900 CPU (which from a programmer's perspective is quite similar to a PDP11). For that system he wrote a Unix-like operating system around 1978-1980, called NOS/MT. He had never worked with Unix, but had read the BSTJ issues about it. It was fully written in assembler.
The design was rather unique, maybe inspired by Heinz Lycklama’s “Satellite Processor” paper in BSTJ 57-6. It has a central microkernel that handles message exchange, process scheduling and memory management. Each system call is a message. However, the system call message is then passed on to a privileged “fat kernel” process that handles it. The idea was to provide multiprocessor and network transparency: the microkernel could decide to run processes on other boards in the same rack or on remote systems over a network. Also the kernel processes could be remote. Hence its name “Network Operating System / Multitasking” or “NOS/MT”.
The system calls are pretty similar to Unix. The file system is implemented very similar to Unix (with i-nodes etc.), with some notable differences (there are file locking primitives and formatting a disk is a system call). File handles are not shareable, so special treatment for stdin/out/err is hardcoded. Scheduling and memory management are totally different -- unsurprising as in both cases it reflects the underlying hardware.
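The message-passing design described above can be caricatured in a few lines of code. This is an illustrative model only -- the operation names and message layout are invented, not taken from NOS/MT:

```python
import threading
from queue import Queue

kernel_queue = Queue()  # the microkernel's message exchange

def fat_kernel():
    """Privileged 'fat kernel' process: receives syscall messages and acts."""
    files = {}  # toy stand-in for file system state
    while True:
        op, args, reply = kernel_queue.get()
        if op == "write":
            name, data = args
            files[name] = files.get(name, b"") + data
            reply.put(len(data))
        elif op == "read":
            reply.put(files.get(args[0], b""))
        elif op == "shutdown":
            reply.put(None)
            return

def syscall(op, *args):
    """Client side: every system call is just a message plus a reply."""
    reply = Queue()
    kernel_queue.put((op, args, reply))
    return reply.get()

threading.Thread(target=fat_kernel, daemon=True).start()
syscall("write", "motd", b"hello")
result = syscall("read", "motd")
print(result)  # b'hello'
syscall("shutdown")
```

Because the client only ever sees the message queue, the kernel is free to route the message to a server on another board or another machine -- which is exactly the transparency NOS/MT was after.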
Just as NOS/MT was getting into a usable state, John decided to pivot to packaged software including a precursor of what would become the AutoCAD package. What was there worked and saw some use in the UK and Denmark in the 1980’s -- there are emulators that can still run it, along with its small library of tools and applications. “NOS/MT was left in an arrested state” as John puts it. I guess it will remain one of those many “what if” things in computer history.
A friend of mine has a DECmate-II word processor. It is in perfect
working order except for one thing. The field encoding the current
date/time has overflowed. It is impossible to set a date/time in the
21st century.
He says that the software in question is a version of WPS-8 for the
PDP-8. It should be possible to fix the date/time problem by dumping
the DECmate's ROM and disassembling the code. It ought not be too
hard to locate the date/time encode/decode routine and come up with a
fix to the time epoch problem.
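The symptom is the classic packed-date overflow. The real WPS-8 field widths would have to come from the disassembly suggested above; the failure mode can be illustrated with hypothetical widths in a 12-bit PDP-8-style word:

```python
# Hypothetical field widths for illustration only -- WPS-8's real encoding
# would have to come from disassembling the ROM.
BASE_YEAR = 1985   # assumed epoch
YEAR_BITS = 3      # a 3-bit year field covers only BASE_YEAR..BASE_YEAR+7

def encode_date(year, month, day):
    """Pack a date into a single 12-bit word: 3 year + 4 month + 5 day bits."""
    off = year - BASE_YEAR
    if not 0 <= off < (1 << YEAR_BITS):
        raise OverflowError(f"year {year} does not fit in {YEAR_BITS} bits")
    return (off << 9) | (month << 5) | day

def decode_date(word):
    return (BASE_YEAR + (word >> 9), (word >> 5) & 0xF, word & 0x1F)

print(decode_date(encode_date(1992, 12, 31)))  # last representable date
try:
    encode_date(2001, 1, 1)
except OverflowError as e:
    print("cannot set:", e)   # the 21st-century symptom
```

A fix of the kind Paul describes would widen the year field or move the base year, then re-patch the encode/decode routine in ROM.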
Is anyone out there familiar with the DECmate-II software? Or, even
better, knows how to get its source code?
Advice greatly appreciated.
-Paul W.
I've just uploaded a couple new items to archive.org that
folks may find interesting:
https://archive.org/details/5ess-2000-switch-es5431-office-data-base-1998
https://archive.org/details/5ess-2000-switch-es5432-system-analysis-1998
Linked above are the ES5431 (Office Data Base Installation)
and ES5432 (System Analysis) training CDs as produced by
Bell Laboratories (Lucent era) for the 5ESS-2000 switch.
Among other things, these CDs contain a 5ESS simulator which
you can see a screenshot of here:
https://www.classicrotaryphones.com/forum/index.php?topic=28001.msg269652#m…
I was able to successfully get it to run on Windows 98 SE in
a virtual machine, although I did break one rule of archiving
optical media in that I didn't take ISO rips. I intend to
throw an old FreeBSD hard disk in that computer sometime soon
and do some proper rips with dd(1). In the meantime,
this means using the above archives presents only a partial
experience, in that the Training section of the software
appears to depend on the original discs being inserted.
In any case, the simulator interests me greatly. I intend
to do a little digging around in it as time goes on to see
if there may be traces of 3B20 emulation or DMERT in the
guts. I'm not holding my breath, but who knows. Either way,
it'll be interesting to play with. Thus far I've only
verified the simulator launches, but have done nothing
with it yet. Picked up Steele's Common Lisp (2nd Edition)
in the same eBay session so time will be split between this,
learning Lisp, and plenty of other little oddball projects
I have going, but if I find anything interesting I'll be
sure to share.
Given that Nokia is shedding 5ESS stuff pretty heavily right
now (or so I've heard) I have to wonder if more of this
stuff will start popping up in online market places. Word
over on the telephone forum is that some folks in Nokia
do have an interest in preserving 5ESS knowledge and
materials but are getting the expected apathy and lack of
engagement from higher ups. Hopefully this at least means
Nokia doesn't mind too much this stuff getting archived if
they don't have to do any of the footwork :)
- Matt G.
On Wed Feb 23 16:33, 1994, I turned on the web service on my machine
"minnie", originally minnie.cs.adfa.edu.au, now minnie.tuhs.org (aka
www.tuhs.org) The web service has been running continuously for thirty
years, except for occasional downtimes and hardware/software upgrades.
I think this makes minnie one of the longest running web services
still in existence :-)
For your enjoyment, I've restored a snapshot of the web site from
around mid-1994. It is visible at https://minnie.tuhs.org/94Web/
Some hyperlinks are broken.
## Web Logs
The web logs show me testing the service locally on Feb 23 1994,
with the first international web fetches on Feb 26:
```
sparcserve.cs.adfa.oz.au [Wed Feb 23 16:33:13 1994] GET / HTTP/1.0
sparcserve.cs.adfa.oz.au [Wed Feb 23 16:33:18 1994] GET /BSD.html HTTP/1.0
sparcserve.cs.adfa.oz.au [Wed Feb 23 16:33:20 1994] GET /Images/demon1.gif HTTP/1.0
...
estcs1.estec.esa.nl [Sat Feb 26 01:48:21 1994] GET /BSD-info/BSD.html HTTP/1.0
estcs1.estec.esa.nl [Sat Feb 26 01:48:30 1994] GET /BSD-info/Images/demon1.gif HTTP/1.0
estcs1.estec.esa.nl [Sat Feb 26 01:49:46 1994] GET /BSD-info/cdrom.html HTTP/1.0
shazam.cs.iastate.edu [Sat Feb 26 06:31:20 1994] GET /BSD-info/BSD.html HTTP/1.0
shazam.cs.iastate.edu [Sat Feb 26 06:31:24 1994] GET /BSD-info/Images/demon1.gif HTTP/1.0
dem0nmac.mgh.harvard.edu [Sat Feb 26 06:32:04 1994] GET /BSD-info/BSD.html HTTP/1.0
dem0nmac.mgh.harvard.edu [Sat Feb 26 06:32:10 1994] GET /BSD-info/Images/demon1.gif HTTP/1.0
```
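The log format above is simple enough to pull apart mechanically. A small sketch (the regex and field names are my own, matching only the format shown):

```python
import re
from datetime import datetime

# Matches the 1994 log format shown above:
#   host [Day Mon DD HH:MM:SS YYYY] METHOD path HTTP/x.y
LINE = re.compile(r"^(\S+) \[([^\]]+)\] (\S+) (\S+) (\S+)$")

def parse(line):
    host, stamp, method, path, proto = LINE.match(line).groups()
    when = datetime.strptime(stamp, "%a %b %d %H:%M:%S %Y")
    return host, when, method, path, proto

host, when, method, path, proto = parse(
    "estcs1.estec.esa.nl [Sat Feb 26 01:48:21 1994] GET /BSD-info/BSD.html HTTP/1.0")
print(host, when.year, path)  # estcs1.estec.esa.nl 1994 /BSD-info/BSD.html
```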
## Minnie to This Point
Minnie originally started life in May 1991 as an FTP server running KA9Q NOS
on an IBM XT with a 30M RLL disk, see https://minnie.tuhs.org/minannounce.txt
By February 1994 Minnie was running FreeBSD 1.0e on a 386DX25 with 500M
of disk space, 8M of RAM and a 10Base2 network connection. I'd received a copy
of the BSDisc Vol.1 No.1 in December 1993. According to the date on the file
`RELNOTES.FreeBSD` on the CD, FreeBSD 1.0e was released on Oct 28 1993.
## The Web Server
I'd gone to a summer conference in Canberra in mid-February 1994 (see
pg. 29 of https://www.tuhs.org/Archive/Documentation/AUUGN/AUUGN-V15.1.pdf
and https://minnie.tuhs.org/94Web/Canberra-AUUG/cauugs94.html, 10am)
and I'd seen the Mosaic web browser in action. With FreeBSD running on
minnie, it seemed like a good idea to set up a web server on her.
NCSA HTTPd server v1.1 had been released at the end of Jan 1994, see
http://1997.webhistory.org/www.lists/www-talk.1994q1/0282.html
It was the obvious choice to be the web server on minnie.
## Minnie from Then to Now
You can read more about minnie's history and her hardware/software
evolution here: https://minnie.tuhs.org/minnie.html
I obtained the "tuhs.org" domain in May 2000 and switched minnie's
domain name from "minnie.cs.adfa.edu.au" to "minnie.tuhs.org".
Cheers!
Warren
P.S. I couldn't wait until Friday to post this :-)
After learning of Dave's death, a professor I very much enjoyed as a U
of Delaware EE student, I came across this page
https://www.eecis.udel.edu/~mills/gallery/gallery9.html
This reminds me of his lectures, the occasional 90 degree turn into who
knows what, but guaranteed to be interesting. And if anyone has a UDel
Hollerith card they're willing to part with, please get in touch. I
have none. :-(
Mike Markowski
Howdy folks, just finished an exciting series of repairs and now have a DEC VT100 plumbed into a Western Electric Data Set 103J. I was able to supply an answer tone (~2250Hz) at which point the modem began transmitting data. I could then pull the answer tone down and the connection remained, with keypresses on the VT100 properly translating to noise on the line.
Really all I have left is to see if it can do the real thing. I'm keeping an eye out for another such modem, but in the meantime, is anyone aware of any 300-baud systems out there in the world that are currently accepting dial-ins? I don't have POTS at home but they do at my music practice space, and if there is such a machine out there, I kinda wanna take my terminal and modem down there and see if I can straight up call a computer over this thing.
I've got other experiments planned too like just feeding it 300-baud modem noise to see if I get the proper text on the screen, that sort of thing, but figured this would be an interesting possibility to put feelers out for.
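As an aside for anyone wanting to synthesize that test signal in software: the Bell 103 answer-side mark tone is specified at 2225 Hz (close to the ~2250 Hz measured above). A minimal sketch generating one second of it at an assumed 8 kHz sample rate:

```python
import math

RATE = 8000    # assumed sample rate, samples per second
FREQ = 2225.0  # Bell 103 answer-side mark tone, Hz

# One second of the tone as floating-point samples in [-1, 1].
samples = [math.sin(2 * math.pi * FREQ * n / RATE) for n in range(RATE)]

# Rough sanity check: count positive-going zero crossings,
# which occur roughly once per cycle of the tone.
crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
print(crossings)
```

Writing the samples out through a sound card (or a WAV file) should hold the modem's carrier detect, the same way the bench-supplied answer tone did.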
On that same note, if I get another modem and a stable POTS number to expose it via, I'm considering offering the same: a 300-baud UNIX-y system folks can just call and experiment with (realistically probably a SimH machine with the pty/tty socat'd together).
- Matt G.
Seen on TUHS...
-- Dave
---------- Forwarded message ----------
Date: Sat, 20 Jan 2024 12:27:41 +1000
From: George Michaelson <ggm(a)algebras.org>
To: The Eunuchs Hysterical Society <TUHS(a)tuhs.org>
Subject: [TUHS] (Off topic) Dave Mills
Dave Mills, of fuzzball and ntp fame, one time U Delaware died on the 17th
of January.
He was an interesting, entertaining, prolific and rather idiosyncratic
emailer. Witty and informative.
G
Not really UNIX -- so I'm BCC TUHS and moving to COFF
On Tue, Jan 9, 2024 at 12:19 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> On the subject of troff origins, in a world where troff didn't exist, and
> one purchases a C/A/T, what was the general approach to actually using the
> thing? Was there some sort of datasheet the vendor supplied that the end
> user would have to program a driver around, or was there any sort of
> example code or other materials provided to give folks a leg up on using
> their new, expensive instrument? Did they have any "packaged bundles" for
> users of prominent systems such as 360/370 OSs or say one of the DEC OSs?
>
Basically, the phototypesetter part was turnkey with a built-in
minicomputer with a paper tape unit, later a micro and a floppy disk as a
cost reduction. The preparation for the typesetter was often done
independently, but often the vendor offered some system to prepare the PPT
or Floppy. Different typesetter vendors targeted different parts of the
market, from small local independent newspapers (such as the one my sister
and her husband owned and ran in North Andover MA for many years), to
systems that the Globe or the Times might use. Similarly, books and
magazines might use different systems (IIRC, the APS-5 was originally
targeted at large book publishers). This was all referred to as the
'pre-press' industry, and there were lots of players in its different parts.
Large firms that produced documentation, such as DEC, AT&T *et al*., and
even some universities, might own their own gear, or they might send it out
to be set.
The software varied greatly, depending on the target customer. For
instance, by the early 80s, the Boston Globe's input system was still
terrible - even though the computers had gotten better. I had a couple of
friends working there, and they used to b*tch about it. But big newspapers
(and I expect many other large publishers) were often heavy union shops on
the back end (layout and presses), so the editors just wanted to set strips
of "column wide" text as the layout was manual. I've forgotten the name of
the vendor of the typesetter they used, but it was one of the larger firms
-- IIRC, it had a DG Nova in it. My sister used CompuGraphic Gear, which
was based on 8085's. She had two custom editing stations and the
typesetter itself (it sucked). The whole system was under $35K in
late-1970s money - but targeted at small newspapers like hers. In the
mid-1980s, I got her a Masscomp at a reduced price and put 6 Wyse-75
terminals on it, so she could have her folks edit their stories with vi,
run spell, and some of the other UNIX tools. I then reverse-engineered the
floppy enough to split out the format she wanted for her stories -- she
used a manual layout scheme. She still had to use the custom stuff for
headlines and some other parts, but it was a load faster and more parallel
(for instance, we wrote an awk script to generate the School Lunch menus,
which they published each week).
[TUHS bcc, moved to COFF]
On Thursday, January 4th, 2024 at 10:26 AM, Kevin Bowling <kevin.bowling(a)kev009.com> wrote:
> For whatever reason, intel makes it difficult to impossible to remove
> the ME in later generations.
Part of me wonders if the general computing industry is starting to cheat off of the smartphone sector's homework, this phenomenon where whole critical components of a hardware device you literally own are still heavily controlled and provisioned by the vendor unless you do a whole bunch of tinkering to break through their stuff and "root" your device. That I can fully pay for and own a "computer" and I am not granted full root control over that device is one of the key things that keeps "smart" devices besides my work issued mobile at arms length.
For me this smells of the same stuff, they've gotten outside of the lane of *essential to function* design decisions and instead have now put in a "feature" that you are only guaranteed to opt out of by purchasing an entirely different product. In other words, the only guaranteed recourse if a CPU has something like this going on is to not use that CPU, rather than as the device owner having leeway to do what you want. Depends on the vendor really, some give more control than others, but IMO there is only one level of control you give to someone who has bought and paid for a complete device: unlimited. Anything else suggests they do not own the device, it is a permanently leased product that just stops requiring payments after a while, but if I don't get the keys, I don't consider myself to own it, I'm just borrowing it, kinda like how the Bell System used to own your telephone no matter how many decades it had been sitting on your desk.
My two cents, much of this can also be said of BIOS, UEFI, anything else that gets between you and the CPUs reset vector. Is it a nice option to have some vendor provided blob to do your DRAM training, possibly transition out of real mode, enumerate devices, whatever. Absolutely, but it's nice as an *option* that can be turned off should I want to study and commit to doing those things myself. I fear we are approaching an age where the only way you get reset vector is by breadboarding your own thing. I get wanting to protect users from say bricking the most basic firmware on a board, but if I want to risk that, I should be completely free to do so on a device I've fully paid for. For me the key point of contention is choice and consent. I'm fine having this as a selectable option. I'm not fine with it becoming an endemic "requirement." Are we there yet? Can't say, I don't run anything serious on x86-family stuff, not that ARM and RISC-V don't also have weird stuff like this going on. SBI and all that are their own wonderful kettle of fish.
BTW sorry that's pretty rambly, the lack of intimate user control over especially smart devices these days is one of the pillars of my gripes with modern tech. Only time will tell how this plays out. Unfortunately the general public just isn't educated enough (by design, not their own fault) on their rights to really get a big push on a societal scale to change this. People just want I push button I get Netflix, they'll happily throw all their rights in the garbage over bread and circuses....but that ain't new...
- Matt G.
Hello, I wanted to share this evening a scan I just finished. It is linked to in this thread where I will continue adding ESS-related materials (for those interested in that sort of thing):
http://www.classicrotaryphones.com/forum/index.php?topic=28001.0
The document therein is the Guide to Stored Program Control Switching, a Bell System document concerning #1 and #2 ESS telephone exchanges and varieties of networking connections. It's interesting in that the guide is three separate little flip decks of cards, some of which can represent steps in a network. Flip to the components you need and the traces at the edges meet to create a diagram of that particular configuration. I quite like the concept, seems like a great way to visualize variable networks.
Anywho, as mentioned I'm going to be putting some more ESS stuff up there, mostly related to 5ESS and 3B20 computers. That seems to now squarely be my main documentation focus since I'm starting to bleed the Bell System UNIX well a bit dry of stuff I can randomly find on eBay. 5ESS and 3B20 are still adjacent enough that I'm sure UNIX-y things will be scattered throughout this material.
Finally, a long shot, but is anyone aware of preserved copies of (or anyone in possession of) the March 1980 issue of the Bell Laboratories Record? An index on archive.org indicates that issue has a focus piece on the 3B20 which I am quite interested in getting eyes on. I've come across several other copies from around this time frame, some of which I've purchased to scan, but not this one yet.
As always happy to answer any questions about what I'm working on or consider scanning jobs for other documentation folks have leads on, happy new year everyone!
- Matt G.
[TUHS as Bcc]
I just saw sad news from Bertrand Meyer. Apparently, Niklaus Wirth
passed away on the 1st. :-(
I think it's fair to say that it is nearly impossible to overstate his
influence on modern programming.
- Dan C.
Dyslexia sucks... sorry, if it was not obvious, please globally substitute
s:STOP/STOP:START/STOP:
Small PS below...
On Sat, Dec 30, 2023 at 9:27 PM Clement T Cole via groups.io <clemc=
ccc.com(a)groups.io> wrote:
>
> I did not say that or imply it. But variable vs. fixed blocking has
> implications on both mechanical requirements and ends up being reflected in
> how the sw handles it. Traditional 9-track allows you to mix record sizes
> on each tape. Streamer formats don’t traditionally allow that because they
> restrict / remove inter record gaps in the same manner 9-track supports.
> This increases capacity of the tape (less waste).
>
In my explanation, I may have been a tad confusing. When I say fixed
records -- I mean on-tape fixed records, what the QIC-24/120/150 standard
refers to as: "*A group of consecutive bits comprising of a preamble, data
block marker, a single data block, block address and CRC and postamble*"
[the standard previously defines a data block as 512 consecutive bytes] --
*i.e.*, what you would see if you put an o'scope on the tape head and looked
at the bit stream (see page 16 of QIC-120 Rev F - Section 5 "Recorded Block"
for a picture of this format -- note it can only address 2^20 blocks per track,
but it supports addressing to 256 tracks -- with 15 tracks of QIC-120 that
means 15728640 unique 512-byte blocks).
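A quick sanity check of that addressing arithmetic, using only the numbers quoted above (512-byte blocks, a 20-bit per-track block address, 15 tracks); this is a sketch of the figures in the text, not anything lifted from the QIC spec itself:

```python
# QIC-120 block-addressing arithmetic from the figures quoted above.
BLOCK_SIZE = 512          # bytes per on-tape data block
BLOCKS_PER_TRACK = 2**20  # 20-bit block address field per track
TRACKS = 15               # QIC-120 track count

unique_blocks = BLOCKS_PER_TRACK * TRACKS
print(unique_blocks)      # 15728640, matching the number in the text
```

Note this is only the addressing limit; the physical capacity of the cartridge is far smaller.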
STOP/STOP does something similar but encodes the LRECL used [I don't have
the ANSI tape standard handy - but I remember seeing a wonderful picture of
all this in said documents when I was first educated about tapes in my
old IBM days ~50 years ago]. After each record, STOP/STOP needs an
"Inter-Record Gap" (IRG) to allow for the motor's spin-up/spin-down time
before continuing the bit stream of the next data block. The IRG distance
is something like 5-10 mm [which is a great deal compared to the size of a
bit when using GCR encoding (which is what 6250 BPI and QIC both use)].
These gaps take space (capacity) from the tape, so people tend to write
with larger blocking factors [UNIX traditionally uses 10240 bytes - other
systems use other values - as I said, I believe the standard says the max
is 64K bytes].
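The capacity cost of those gaps is easy to estimate. A rough sketch, assuming a 0.3" (~7.6 mm) gap from the 5-10 mm range above and treating 6250 BPI as 6250 bytes per linear inch (9-track writes a full byte across the tape width); all figures are illustrative, not taken from the ANSI standard:

```python
# Fraction of a start/stop tape that holds data (vs. inter-record gaps)
# for a given blocking factor (LRECL).  Assumed: 6250 bytes/inch linear
# density and a 0.3-inch gap -- both rough figures from the text above.
def data_fraction(lrecl, bytes_per_inch=6250, gap_inches=0.3):
    data_inches = lrecl / bytes_per_inch
    return data_inches / (data_inches + gap_inches)

for lrecl in (512, 10240, 65536):
    print(f"LRECL {lrecl:6d}: {data_fraction(lrecl):.1%} of the tape is data")
```

With 512-byte records barely a fifth of the tape holds data, which is why blocking factors like UNIX's 10240 bytes were the norm.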
Since streamers (like QIC tape) are supposed to run continuously, the QIC
inter-record gaps resemble fixed disk records and can be extremely small.
Remember, each bit is measured in micrometers -- about *2 micrometers*,
IIRC, for QIC and around 10 for ½" formats -- again, I would need to
check the ANSI spec, which is not handy. But this is a huge space savings
even with a smallish block (512) -- again, this was lifted from disk
technology of the day, which had often standardized on 512 8-bit-byte blocks
by then.
BTW: this is again why I suspect a TK25 tape is not going to be readable
on a QIC-24/120/150 drive if, indeed, page 1-5 of the TK25 user manual
says it supports four different block sizes [1K/2K/4K/8K]. First, the
data block format would have to be variable across 4 sizes, and second, the
preamble would need to encode what size block to expect on read.
Unfortunately, that document does not say much more about the
physical tape format other than that it can use cartridges "similar to ANSI
Standard X3.55-1982" (which is a 3M DC-600A tape cartridge), has "11
tracks, 8000 bpi" recording density (with 10000 flux reversals per inch),
using a "single track NRZI data in a serpentine pattern, with 4-5 run length
limited code similar to GCR."
That said, most modern SW will allow you to *write* records of different
sizes (LRECL) in the user software, but the QIC drives and, I believe, things
like DAT and Exabyte will only write 512-byte blocks, so somewhere between
your user program and the tape itself, the write will be broken into N
512-byte blocks, with the last block padded (typically with nulls) to 512
bytes. My memory is that the QIC standard is silent on where that is done,
but I suspect it's done in the controller and the driver is forced to send it
512-byte blocks.
So, while you may define blocks of different sizes, unlike ½", it will
always be written as 512-byte blocks.
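That carve-and-pad step can be sketched as follows; this is an illustration of the behavior described above, not actual driver or controller code, and the function name is made up:

```python
# Break an arbitrary-size record into fixed 512-byte on-tape blocks,
# NUL-padding the final block -- the blocking described as happening
# somewhere between the user program and the QIC tape itself.
def to_tape_blocks(record: bytes, block_size: int = 512) -> list[bytes]:
    blocks = []
    for off in range(0, len(record), block_size):
        chunk = record[off:off + block_size]
        blocks.append(chunk.ljust(block_size, b"\x00"))  # pad a short tail
    return blocks

print(len(to_tape_blocks(b"x" * 10240)))  # 20 -- a 10240-byte record fills
                                          # exactly twenty 512-byte blocks
```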
That said, using larger record sizes in your application SW can have huge
performance wins (which I mentioned in my first message) - *e.g.*, keeping
the drive streaming as more user data has been locked down in memory for a
DMA. But by the time the driver and the controller are finished, it's fixed
512-byte blocks on the tape.
One other thing WRT QIC, which differs from other schemes. I
previously mentioned tape files - a feature of the ½" physical tape formats
not typically supported for QIC tapes. QIC has an interesting feature that
allows a block to be rewritten and replaced later on the tape (see the
section of the spec/your user manual on "rewritten" or "replacement"
blocks). I've forgotten all the details, but I seem to remember that
feature was why multiple tape files were difficult to implement.
Someone who knows more about tapes may remember the details/be able to
explain -- I remember dealing with tape files was a PITA in QIC, and the
logic in a standard ½" tape driver could not simply be cloned for the QIC
driver.
> From: Derek Fawcus
> How early does that have to be? MP/M-1.0 (1979 spec) mentions this, as
> "Resident System Processes" ... It was a banked switching, multiuser,
> multitasking system for a Z80/8080.
Anything with a microprocessor is, by definition, late! :-)
I'm impressed, in retrospect, with how quickly the world went from processors
built with transistors, through processors built out of discrete ICs, to
microprocessors. To give an example: the first DEC machine with an IC
processor was the -11/20, in 1970 (the KI10 was 1972); starting with the
LSI-11, in 1975, DEC started using microprocessors; the last PDP-11 with a
CPU made out of discrete ICs was the -11/44, in 1979. All -11's produced
after that used microprocessors.
So just 10 years... Wow.
Noel
We should move to COFF (cc’ed) for any further discussion. This is way off
topic for simh.
Below
Sent from a handheld expect more typos than usual
On Sat, Dec 30, 2023 at 7:59 PM Nigel Johnson MIEEE via groups.io
<nw.johnson=ieee.org(a)groups.io> wrote:
> First of all, 7-track vs 9-track - when you are streaming in serpentine
> mode, it is whatever you can fit into the tape width having regard to the
> limitations of the stepper motor accuracy.
>
Agreed. It's the physical size of the head and the encoding magnetics. In
parallel formats you have n heads together, all reading or writing together
into n analog circuits - a rake across the ground, if you will. Serial, of
course, is like a single pencil line, with the head on a servo starting in
the center of the tape and, when you hit the physical EOT, moving up or down
as appropriate.
It is nothing to do with the number of bits per data unit.
>
I did not say that or imply it. But variable vs. fixed blocking has
implications on both mechanical requirements and ends up being reflected in
how the sw handles it. Traditional 9-track allows you to mix record sizes
on each tape. Streamer formats don’t traditionally allow that because they
restrict / remove inter record gaps in the same manner 9-track supports.
This increases capacity of the tape (less waste).
Just for comparison: at 6250 BPI, a traditional 2400' ½" tape writing fixed
blocks of 10240 8-bit bytes gets about 150 Mbytes. A ¼" DC-6150 tape using
QIC-150, only one fourth the length and half as wide, gets the same capacity,
and they both use the same core scheme to encode the bits. QIC writes
smaller bits and wastes less tape on IRGs.
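That 150 Mbyte figure checks out on the back of an envelope. A sketch assuming a 0.3-inch inter-record gap (the exact gap size is my assumption, consistent with the 5-10 mm range quoted earlier):

```python
# Capacity of a 2400-foot 1/2" tape at 6250 BPI with 10240-byte blocks
# and an assumed 0.3-inch inter-record gap per block.
TAPE_FEET, BPI, LRECL, GAP_IN = 2400, 6250, 10240, 0.3
tape_inches = TAPE_FEET * 12
record_inches = LRECL / BPI + GAP_IN    # data plus its gap
records = int(tape_inches / record_inches)
print(records * LRECL // 10**6)         # 152 -- i.e. ~150 Mbytes
```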
That all said, looking at the TK25 specs: besides being 11 tracks, it also
supports a small number of different block sizes (LRECL) - unlike QIC.
Nothing like 9-track, which can handle a large range of LRECLs. What I
don't see in the TK25 specs is whether you can mix them on a tape, or whether
the size is coded once for each tape as opposed to in each record.
BTW, while I don't think ANSI condones it, some 9-track units like the
Storage Tek ones could not only write different LRECLs but could write
using different encodings (densities) on the same medium. This sad trick
confused many drives when you moved the tape to a drive that could not. I
have some interesting customer stories from living through those issues. But
I digress ...
FWIW, as I said before, I do have a lot of experience with what it takes to
support this stuff and what you have to do to decode it, the drivers for
same, et al. I never considered myself a tape expert - there are many who
know way more than I - but I have lived with, experienced, and had to support
a number of these systems, and have learned the hard way how these schemes
can go south when trying to recover data.
Back in the beginning of my career, we had Uniservo VIC drives which were
> actually 7-bit parallel! (200, 556, and 800 bpi! NRZI)
>
Yep, same here. ½" was 5, 7, and 9 bits in parallel originally. I was at a
GE-635 shop in the late 1960s and then an IBM shop in the early 70s. And of
course I saw my favorite tapes of all - original DECtape. I've also watched
things change with serial recording and the use of serpentine encoding.
You might find it amusing -- early 1980s Masscomp machines had a special
½" drive with a huge number of serpentine tracks; I've forgotten the exact
count. They used traditional ½" spools from 3M and the like, but r/w was
custom to the drive. I've forgotten the capacity, but at the time it was
huge. What I remember is that it had much higher capacity and reliability
than Exabyte, which at the time was the capacity leader. The USAF AWACS
planes had 2, plus a spare, talking to the /700 systems doing the I/O - they
were sucking up everything in the air and recording it as digital signals.
The tape units were used to record all that data. An airman spent his whole
time loading and unloading tapes. Very cool system.
> Some things about the 92192 drive: it was 8" cabinet format in a 5.25
> inch world so needed an external box. It also had an annoying habit, given
> Control Data's proclivity for perfection, that when you put a cartridge in,
> it ran it back and forth for five minutes before coming ready to ensure
> even tension on the tape!
>
> The formatter-host adapter bus was not QIC36, so Emulex had to make a
> special controller, the TC05, to handle the CDC Proprietary format. The
> standard was QIC-36, although I think that Tandberg had a standard of their
> own.
>
Very likely. When those all came on the scene, there were a number of
interfaces and encoding schemes. I was not involved in any of the politics,
but QIC ended up as the encoding standard and SCSI the interface.
IIRC, the first QIC drives both Masscomp and Apollo used were QIC-36, via a
SCSI converter board SCS made for both of us. I don't think Sun used it.
Later, Archive and I think Wangtek made the SCSI interface standard on the
drives.
> I was wrong about the 9-track versus 7, the TC05/sentinel combination
> writes 11 tracks! The standard 1/4' cartridge media use QIC24, which
> specifies 9 tracks. I just knew it was not 9!
>
It also means it was not a QIC standard, as I don't believe they had one
between QIC-24-DC and QIC-120-DC. Which I would think means that if this
tape came from a TK25, I doubt either Steve's or my drives will read it -
he'll need to find someone with a TK25 - which I have never seen
personally.
> That's all I know!
>
fair enough
Clem
I've got an exciting piece of hardware to pair with the VT100 I recently got, a Western Electric Dataphone 300. The various status lights and tests seem to work, and the necessary cabling is in place as far as the unit is concerned. However, it did not come with the accompanying telephone. I believe but can't verify yet that the expected telephone is or resembles a *565HK(M) series telephone, the ones with one red and five clear buttons along the bottom, otherwise resembling a standard WECo telephone.
Pictured: http://www.classicrotaryphones.com/forum/index.php?action=dlattach;attach=4…
Thus far I've found myself confused on the wiring expectations. There is a power line going into a small DC brick, one DB-25 port on the back terminating in a female 25-pair amphenol cable, and another DB-25 port with a ribbon extension plugged in. My assumptions thus far have been the amphenol plugs into a *565HK(M) or similar series telephone and the DB-25 then plugs into the serial interface of whichever end of the connection it represents. However, while this is all fine and dandy, it's missing one important part...a connection to the outside world. I've found no documentation describing this yet, although a few pictures from auctions that included a telephone seemed to have a standard telephone cable also coming out of the back of the telephone terminating in either a 6 or 8-conductor modular plug. The pictures were too low-res to tell which.
Would anyone happen to know anything concrete about the wiring situation with these, or have some documentation hints? I've tried some general web searches for documentation concerning the Dataphone 300 and the 103J Data Set configuration and haven't turned up wiring-specific information. If nothing else I might just tap different places on the network block of the 2565HKM I've got plugged into it and see if anything resembling a telephone signal pops up when running some serial noise in at an appropriate baud. My fear is that the wiring differences extend beyond the tap to the CO/PBX line and that there are different wiring expectations in the 25-pair as well; this and my other appropriate telephone are both 1A2 wired, I believe. Still working on that KSU...
Any help is much appreciated, lotsa little details in these sorts of things, but once I get it working I intend to do some documentation and teardown photos. I don't want to take it apart yet and run the risk of doing something irreversible. I want to make sure it gets a chance to serve up some serial chit chat as weird telephone noises.
- Matt G.
FYI: Tim was Mr. 36-bit kernel and I/O system until he moved to the VAX and
later Alpha (and Intel).
The CMU device he refers to was the XGP, a Xerox long-distance fax
(LDX). Stanford and MIT would get them too, shortly thereafter.
---------- Forwarded message ---------
From: Timothe Litt
Date: Thu, Dec 21, 2023 at 1:52 PM
Subject: Re: Fwd: [COFF] IBM 1403 line printer on DEC computers?
To: Clem Cole
I don't recall ever seeing a 1403 on a DECsystem-10 or DECSYSTEM-20. I
suppose someone could have connected one to a Systems Concepts channel...
or the DX20 Massbus -> IBM MUX/SEL channel used for the STC (TU70/1/2)
tape and (RP20=STC 8650) disk drives. (A KMC11-based device.) Not
sure why anyone would.
Most of the DEC printers on the -10/20 were Dataproducts buy-outs, and were
quite competent: 1,000 - 1,250 LPM. Earlier, we also bought from MDS and
Anelex; good performance (1,000 LPM), but they needed more TLC from FS. The
majority were drum printers; the LP25 was a band printer, and lighter duty
(~300 LPM).
Traditionally, we had long-line interfaces to allow all the dust and mess
to be located outside the machine room. Despite filters, dust doesn't go
well with removable disk packs. ANF-10 (and eventually DECnet) remote
stations provided distributed printing.
CMU had a custom interface to some XeroX printer - that begat Scribe.
The LN01 brought laser printing - light duty, but was nice for those
endless status reports and presentations. I think the guts were Canon -
but in any case a Japanese buyout. Postscript. Networked.
For high volume printing internally, we used XeroX laser printers when they
became available. Not what you'd think of today - these were huge,
high-volume devices. Bigger than the commercial copiers you'd see in print
shops. (Perhaps interestingly, internally they used PDP-11s running
11M.) Networked, not direct attach. They also were popular in IBM shops.
We eventually released the software to drive them (DQS) as part of GALAXY.
The TU7x were solid drives - enough so that the SDC used them for making
distribution tapes. The copy software managed to keep 8 drives spinning at
125/200 ips - which was non-trivial on TOPS-20.
The DX20/TX0{2,3}/TU7x *was* eventually made available for VAX - IIRC as
part of the "Migration" strategy to keep customers when the -10/20 were
killed. I think CSS did the work on that for the LCG PL. Tapes only - I
don't think anyone wanted the disks by then - we had cheaper dual-porting
via the HSC/CI, and larger disks.
The biggest issue for printers on VAX was the omission of VFU support.
Kinda hard to print paychecks and custom forms without it - especially if
you're porting COBOL from the other 3-letter company. Technically, the
(Unibus) LP20 could have been used, but wasn't. CSS eventually solved
that with some prodding from Aquarius - I pushed that among other high-end
I/O requirements.
On 21-Dec-23 12:29, Clem Cole wrote:
Tim - care to take a stab at this?