I was wondering if anyone close to Early Unix and Bell Labs would offer some comments on the
evolution of Unix and the quality of decisions made by AT&T senior managers.
Tom Wolfe did an interesting piece on Fairchild / Silicon Valley,
where he highlights the difference between SV’s management style
and the “East Coast” Management style.
[ Around 2000, “Silicon Valley” changed from being ‘chips & hardware’ to ’software’ & systems ]
[ with chip making, every new generation / technology step resets competition, monopolies can’t be maintained ]
[ Microsoft showed that Software is the opposite. Vendor Lock-in & monopolies are common, even easy for aggressive players ]
Noyce & Moore ran Fairchild Semiconductor, but Fairchild Camera & Instrument was ‘East Coast’
or “Old School” - extracting maximum profit.
It seems to me, an outsider, that AT&T management saw how successful Unix was
and decided they could apply their size, “marketing knowhow” and client lists
to becoming a big player in Software & Hardware.
This appears to be a large part of why AT&T accepted the 1984 divestiture: the settlement freed it to enter the computer business.
In another decade, they gave up and got out of Unix.
Another decade on, AT&T was bought by one of its former Baby Bells, SBC, which kept the AT&T name.
SBC had understood that the future growth market for telephony was “Mobile”
and, instead of “Traditional” Telco pricing - “What the market will bear” plus Gross Margins over 90% -
SBC adopted more of a Silicon Valley pricing approach: modest Gross Margins
and high “pass through” rates - handing most or all cost reductions on to customers.
If you’re in a Commodity market, passing on cost savings to customers is “Profit Maximising”.
That isn’t only because Commodity markets are highly competitive: Volumes drive profit,
and lower prices stimulate demand, and hence Volumes. [ Price Elasticity of Demand ]
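[ A worked example, with illustrative numbers only, not actual figures: if elasticity is 2, a 10% price cut lifts Volume by roughly 20%, so revenue moves to 0.90 x 1.20 = 1.08, up 8%. If unit costs also fall with Volume, profit rises further still. ]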
Kenneth Flamm has written a lot on “Pass Through” in Silicon Chip manufacture.
Just to close the loop, Bell Labs, around 1966, hired Fred Terman, former Dean at Stanford,
to write a proposal for “Silicon Valley East”.
AT&T management were fully aware of California and perhaps saw it as a long-term threat.
How could they replicate in New Jersey the powerhouse of innovation that was happening in California?
Many places in many countries looked at this and a few even tried.
Apparently South Korea is the only attempt that did reasonably well.
I haven’t included links, but Gordon Bell, known for formulating a law of computer ‘classes’,
did forecast early that MOS/CMOS chips would overtake Bipolar - used by Mainframes - in speed.
MOS/CMOS gave a way to use all those transistors on a chip that Moore’s Law would provide,
and with CPUs on a few chips, or just one, the price of systems would plummet.
He forecast the cutover in 1985 and was right.
The MIPS R2000 blazed past every other chip the year it was released.
And of course, the folk at MIPS understood that building their own O/S, tools, libraries etc
was a fool’s errand - they had Unix experience and ported a version.
By 1991, IBM was almost the Last Man Standing of the original 1970s “IBM & the BUNCH”,
and its mainframe revenues were collapsing. In 1991 and 1992, IBM racked up the largest
corporate losses in US history to that time, then managed to survive.
Linux has, in my mind, proven the original mid-1970’s position of CSRC/1127
that Software has to be ‘cheap’, even ‘free’
- because it’s a Commodity and can be ’substituted’ by others.
=================================
1956 - AT&T / IBM Consent Decree: ‘no computers, no software’
1974 - CACM article; CSRC/1127 doing Software Research, no commercial Software allowed
1984 - AT&T divested, now doing commercial Software & Computers
1985 - MIPS R2000: x2 throughput at the same clock speed; CMOS CPUs soon overtook bipolar/ECL
1994 - AT&T sells Unix
1996 - “Tri-vestiture”: Bell Labs to Lucent, some staff to AT&T Research
2005 - SBC buys AT&T (long-lines + 4 Baby Bells)
=================================
Code Critic
John Lions wrote the first, and perhaps only, literary criticism of Unix, sparking one of open source's first legal battles.
Rachel Chalmers
November 30, 1999
https://www.salon.com/test2/1999/11/30/lions_2/
"By the time the seventh edition system came out, the company had begun to worry more about the intellectual property issues and trade secrets and so forth," Ritchie explains.
"There was somewhat of a struggle between us in the research group who saw the benefit in having the system readily available,
and the Unix Support Group ...
Even though in the 1970s Unix was not a commercial proposition,
USG and the lawyers were cautious.
At any rate, we in research lost the argument."
This awkward situation lasted nearly 20 years.
Even as USG became Unix System Laboratories (USL) and was half divested to Novell,
which in turn sold it to the Santa Cruz Operation (SCO),
Ritchie never lost hope that the Lions books could see the light of day.
He leaned on company after company.
"This was, after all, 25-plus-year-old material, but when they would ask their lawyers,
they would say that they couldn't see any harm at first glance,
but there was a sort of 'but you never know ...' attitude, and they never got the courage to go ahead," he explains.
Finally, at SCO [ by July 1996 ], Ritchie hit paydirt.
He already knew Mike Tilson, an SCO executive.
With the help of his fellow Unix gurus Peter Salus and Berny Goodheart, Ritchie brought pressure to bear.
"Mike himself drafted a 'grant of permission' letter," says Ritchie,
"'to save the legal people from doing the work!'"
Research, at last, had won.
=================================
Tom Wolfe, Esquire, 1983, on Bob Noyce:
“The Tinkerings of Robert Noyce”, Esquire, December 1983
http://classic.esquire.com/the-tinkerings-of-robert-noyce/
=================================
Special Places
IEEE Spectrum Magazine
May 2000
Robert W. Lucky (Bob Lucky)
https://web.archive.org/web/20030308074213/http://www.boblucky.com/reflect/…
https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=803583
Why does place matter? Why does it matter where we live and work today when the world is so connected that we're never out of touch with people or information?
The problem is, even if they get da Vinci, it won't work.
There's just something special about Florence, and it doesn't travel.
Just as in this century many places have tried to build their own Silicon Valley.
While there have been some successes in Boston, Research Triangle Park, Austin, and Cambridge in the U.K., to name a few significant places, most attempts have paled in comparison to the Bay Area prototype.
In the mid-1960s New Jersey brought in Fred Terman, the Dean at Stanford and architect of Silicon Valley, and commissioned him to start a Silicon Valley East.
[ Terman retired from Stanford in 1965 ]
=================================
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
[TUHS to Bcc]
On Wed, Feb 1, 2023 at 3:23 PM Douglas McIlroy
<douglas.mcilroy(a)dartmouth.edu> wrote:
> > In the annals of UNIX gaming, have there ever been notable games that have operated as multiple processes, perhaps using formal IPC or even just pipes or shared files for communication between separate processes?
>
> I don't know any Unix examples, but DTSS (Dartmouth Time Sharing
> System) "communication files" were used for the purpose. For a fuller
> story see https://www.cs.dartmouth.edu/~doug/DTSS/commfiles.pdf
Interesting. This is now being discussed on the Multicians list (which
had a DTSS emulator! Done for use by SIPB). Warren Montgomery
discussed communication files under DTSS for precisely this kind of
thing; apparently he had a chess program he may have run under them.
Barry Margolin responded that he wrote a multiuser chat program using
them on the DTSS system at Grumman.
Margolin suggests a modern Unix-ish analogue may be pseudo-ttys, which
came up here earlier (I responded pointing to your wonderful note
linked above).
> > This is probably a bit more Plan 9-ish than UNIX-ish
>
> So it was with communication files, which allowed IO system calls to
> be handled in userland. Unfortunately, communication files were
> complicated and turned out to be an evolutionary dead end. They
> had no ancestral connection to successors like pipes and Plan 9.
> Equally unfortunately, 9P, the very foundation of Plan 9, seems to
> have met the same fate.
I wonder if there was an analogy to multiplexed files, which I admit
to knowing very little about. A cursory glance at mpx(2) on 7th
Edition at least suggests some surface similarities.
- Dan C.
I don't know if a thousand users ever logged in there at one time, but
they do tend to have a lot of simultaneous logins.
On Mon, Mar 13, 2023 at 6:16 PM Peter Pentchev <roam(a)ringlet.net> wrote:
>
> On Wed, Mar 08, 2023 at 02:52:43PM -0500, Dan Cross wrote:
> > [bumping to COFF]
> >
> > On Wed, Mar 8, 2023 at 2:05 PM ron minnich <rminnich(a)gmail.com> wrote:
> > > The wheel of reincarnation discussion got me to thinking:
> [snip]
> > > The evolution of platforms like laptops to becoming full distributed systems continues.
> > > The wheel of reincarnation spins counter clockwise -- or sideways?
> >
> > About a year ago, I ran across an email written a decade or more prior
> > on some mainframe mailing list where someone wrote something like,
> > "wow! It just occurred to me that my Athlon machine is faster than the
> > ES/3090-600J I used in 1989!" Some guy responded angrily, rising to
> > the wounded honor of IBM, raving about how preposterous this was
> > because the mainframe could handle a thousand users logged in at one
> > time and there's no way this Linux box could ever do that.
> [snip]
> > For that matter, a
> > thousand users probably _could_ telnet into the Athlon system. With
> > telnet in line mode, it'd probably even be decently responsive.
>
> sdf.org (formerly sdf.lonestar.org) comes to mind...
>
> G'luck,
> Peter
>
> --
> Peter Pentchev roam(a)ringlet.net roam(a)debian.org pp(a)storpool.com
> PGP key: http://people.FreeBSD.org/~roam/roam.key.asc
> Key fingerprint 2EE7 A7A5 17FC 124C F115 C354 651E EFB0 2527 DF13
Hi,
I'd like some thoughts and input on extended regular expressions used
with grep, specifically GNU grep -E / egrep.
What are the pros / cons to creating extended regular expressions like
the following:
^\w{3}
vs:
^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)
Or:
[ :[:digit:]]{11}
vs:
( 1| 2| 3| 4| 5| 6| 7| 8| 9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31)
(0|1|2)[[:digit:]]:(0|1|2|3|4|5)[[:digit:]]:(0|1|2|3|4|5)[[:digit:]]
I'm currently eliding the 61st (60) second, the 32nd day, and dealing
with February having fewer days for simplicity.
These are for matching timestamps like the following in log files:
Mar 2 03:23:38
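For concreteness, this is how I've been eyeballing candidates side by side (a sketch; the file name and log line are made-up test data, and \w relies on GNU grep):

# one hypothetical syslog-style line to test against
printf 'Mar  2 03:23:38 myhost sshd[1234]: accepted\n' > sample.log

# loose form: any three word characters at the start of the line
grep -E '^\w{3}' sample.log

# strict form: only the twelve real month abbreviations
grep -E '^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)' sample.log

Both print the line, but the loose form would also accept garbage like "Foo 2 03:23:38".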
I'm working on organically training logcheck to match known good log
entries. So I'm *DEEP* in the bowels of extended regular expressions
(GNU egrep) that run over all logs hourly. As such, I'm interested in
making sure that my REs are both efficient and accurate, or at least not
WILDLY badly structured. The pedantic part of me wants to avoid
wildcard-type matches (\w), even if they are bounded (\w{3}), unless it
truly is for unpredictable text.
I'd appreciate any feedback and recommendations from people who have
been using and / or optimizing (extended) regular expressions for longer
than I have been using them.
Thank you for your time and input.
--
Grant. . . .
unix || die
Hi Emanuel,
I believe I may have the install disks for ESIX, SVR4. It actually was distributed in this beautiful box with over 100 5.25” floppy disks.
As things progressed, ESIX was distributed on a streaming tape cartridge. That was so much faster than swapping floppy disks for the install.
The nice thing about the ESIX SVR4 was the documentation. It was essentially the AT&T SVR4 books with a white ESIX cover slapped on it.
If you want to copy the disks and make them accessible to our UNIX community, let me know. Since it’s part of my collection I would ask that you return them to me. Send me an email directly if you’re interested and I will see if I can locate them for you.
Bill Corcoran
> On Jun 11, 2023, at 8:47 AM, emanuel stiebler <emu(a)e-bbes.com> wrote:
> Hi,
> anybody still has the install media for that?
> We used it in the office long ago, but I lost the install disk in my last moving :(
>
> THANKS!
Apropos the ESIX SVR4 distro on floppies or streaming tape mentioned by Bill Corcoran
<https://www.tuhs.org/mailman3/hyperkitty/list/coff@tuhs.org/message/WEJQQCJ…>
In the mid 1980’s I worked for a small Australian outfit that did “Unix”.
One of the things we did was distributing software, which required writing to many media.
There was a very clever script that broke the distribution into many parts, if needed,
to suit the size of the distribution media. [ tape, 3.5” floppy, 5.25” floppy, etc ]
Over the years I’ve tried to recreate a version and not succeeded :(
There was a ‘create the distro’ step of the pipeline which gathered the input,
followed by a loop that used ‘dd’ to block the stream into media-sized parts.
I’ve never figured out how to use ‘dd’ so that it returns after a single part is written
without either closing the input - killing the pipeline - or causing the rest of the data
to be discarded.
The script let our admin staff reliably create distros on whatever media was requested.
Any suggestions or hints?
I’m thinking this is obvious, but in the man pages I’ve read, I haven’t found an answer.
It could be that modern versions of ‘dd’ don’t have this behaviour.
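For what it’s worth, the closest I’ve come is something like this - only a sketch, and it leans on GNU dd’s ‘iflag=fullblock’, so it can’t be how the original 1980s script did it:

#!/bin/sh
# Split one stream into floppy-sized parts without killing the pipe.
# Every dd inside the { } group shares the same stdin, so each call
# resumes reading where the previous one stopped.
tar cf - ./distro | {
  part=1
  while :; do
    # 2880 x 512 bytes = one 1.44MB floppy image
    dd of="part$part.img" bs=512 count=2880 iflag=fullblock status=none
    # dd exits 0 even at EOF, so test for an empty output file instead
    [ -s "part$part.img" ] || { rm -f "part$part.img"; break; }
    part=$((part + 1))
  done
}

Each pass fills part1.img, part2.img, ... ready to be written out to the real media.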
cheers
steve
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Good afternoon folks, I just wanted to ask if anyone is aware of online marketplaces I should be looking at in my constant scouring for historical documentation materials?
Presently I've got a policy of checking eBay and Biblio pretty regularly for UNIX material, occasionally searching for a few other odds and ends subject-wise, but I'm starting to wonder if there are other avenues flying under my radar where folks might be more likely to be selling for instance 70s and 80s UNIX manuals, paper copies of old standards, hardware docs from IBM and DEC, etc.
If you have any suggestions, especially those that don't require me to set up yet another account to keep track of, I'd surely appreciate it. Also consider this my way of saying if you have something to sell, I'll gladly consider it, although I am being pretty selective on matters of historical/research significance that are currently obscure, so sorry if I won't buy your twelfth copy of K&R C, even if it is signed!
- Matt G.
Hello, I've got a question I'm puzzling on that someone here may have some info on.
Are there any known lists/promo material/price sheets from between 80-83 regarding WECo computing hardware such as the 3B20D and 3B20S? More broadly, is it documented at all what hardware models made it out before the removal of the Bell logo and transition from WECo to AT&T ownership of the 3B and related technologies?
Aside from the cover illustration of a 3B20S on the UNIX 4.1 manual and having seen a MAC-Tutor on eBay once, I can't say I've seen any other WECo branded computation hardware with Bell logos. The only photos I can find of a 3B20D are a later AT&T branded issue.
Any leads? Would it have just been BellMAC stuff and 3B20 systems before the change in logo? Based on the manual I recently received, the 3B5 may have also made it out during the WECo period but after dropping the Bell logo, somewhere between the consent decree being entered and the completion of divesting WECo.
- Matt G.
P.S. In the bigger picture, I'm slowly starting to aggregate info together on Bell/WECo's computer hardware activities tangential to but distinct from UNIX developments. Stuff like the 3B computers, BellMAC stuff, etc. If there's already a community/resources in this focused area I'd happily divert those efforts to a more focused collective.
Good afternoon everyone. I've been thinking about the color/contrast landscape of computing today and have a bit of a nebulous quandary that I wonder if anyone would have some insight on.
So terminals, they started as typewriters with extra steps, a white piece of paper on a reel being stamped with dark ink to provide feedback from the machine. When video terminals hit the market, the display was a black screen with white, orange, green, or whatever other color of phosphor they bothered to smear on the surface of the tube. Presumably this display style was chosen as on a CRT, you're only lighting phosphor where there is actually an image, unlike the LCD screens of today. So there was a complete contrast shift from dark letters on white paper to light letters on an otherwise unlit pane of glass.
Step forward to graphical systems and windows on the Alto? Light background with dark text.
Windows on the Macintosh? Light background with dark text.
Windows on MS Windows? Light backgrounds with dark text.
Default HTML rendering in browsers? Light backgrounds with dark text.
Fast forward to today, and it seems that dark themes are all the rage: light characters on an otherwise dark background. This would've made so much sense during the CRT era, as every part of the screen representing a black pixel gets no drawing; but when CRTs were king, the predominant visual style was dark on light, like a piece of paper, rather than light on dark, like a video terminal. Now, in the day and age of LCDs, where every pixel is lit regardless, we're finally flipping the script and putting light characters on dark backgrounds, long after any hardware benefit (that I'm aware of) would be attained by minimizing the amount of "lit surface" on the screen.
Anyone know if this has all been coincidental, or if the decision for graphical user interfaces and such to predominantly use white/light colors for backgrounds was a relatively intentional measure around the industry? Or is it really just that that's how Xerox's system looked, and it was all a domino effect after that? At the end of the day I'm really just puzzling over why computing jumped into the minimalism seen on terminal screens, keeping from driving CRTs super hard, but then when GUIs first started appearing, they didn't just organically align with what was most efficient for a CRT. I recognize this is based largely in subjective views of how something should look too, so I'm not really expecting a "Person XYZ authoritatively decided on <date> that GUI elements shall overwhelmingly only be dark on light", just some thoughts on how we got going down this path with color schemes in computing. Thanks all!
- Matt G.