That rang a Bell with VAX as IBM-killer :-)
Even if it's on an MS site it's still a nice article.
https://news.microsoft.com/features/the-engineers-engineer-computer-industr…
Take care and stay healthy,
uncle rubl
>From: Dave Horsfall <dave(a)horsfall.org>
>To: Computer Old Farts Followers <coff(a)tuhs.org>
>Cc:
>Bcc:
>Date: Mon, 26 Jul 2021 11:54:44 +1000 (EST)
>Subject: [COFF] In these COVID times...
>I wonder how many bods remember: "Nothing sucks like a VAX!"?
>
>-- Dave
--
The more I learn the better I understand I know nothing.
Thank you, Doug.
On Wed, Jul 14, 2021 at 10:22 PM Douglas McIlroy <
douglas.mcilroy(a)dartmouth.edu> wrote:
> The open source movement was a revival of the old days of SHARE and other
> user groups.
>
Amen, my basic point, although I was also trying to point out that these
user groups got started *because the vendors gave out the sources to their
products.* We SHARED patches and features. DECUS started out the same
way. For instance, many/most PDP-10 OS's used the DEC compilers and often
even found a way to run TOPS-10 binaries by emulating the UUOs. The
IBM/360 world worked pretty much the same way. My own experience was that
the compilers (e.g. WATFIV-FTNG-ALGOLW-PL/1) and language interpreters
(APL-Snobol) for TSS and MTS had been 'ported' from the IBM-supplied
OS [my own first job was doing just that].
The same story was true for the PDP-8 with DOS-8/TSS-8 and the like. By the
time of the PDP-11, while some of the DEC source code was available (such
as the Fortran-IV for RT-11/RSX), since it took a PDP-10 running BLISS to
support it, DEC had its protection - moving it/stealing it would have been
harder. By the time of the VAX, DEC was charging a lot of money for SW and
it was actually a revenue stream, so they kept a lot more locked up and
had started to do the same with the PDP-10 world.
So, the available/unavailable source issue came when things started to get
closed up, which really started with the rise of the SW industry and making
revenue from the use of your SW. OEMs and ISVs started to be a lot less
willing to reveal what they thought was their 'special sauce.' Some/many
end-users started to balk. RMS just took it to a new level - just look at
how he reacted to Symbolics going closed source :-)
The question that used to come up (and still does to an extent) is how are
the engineers and teams of people that developed the SW going to be
paid/remunerated for their work? The RMS/GNU answer had been service
revenue [and living like a student in a rent-controlled APT in
Central Sq]. What has happened for most of the biggest FOSS projects is
that the salaries are paid by firms like my own that pay developers to work
on the SW, and most FOSS projects die when the developer/maintainer is
unable to continue (if not just gets bored).
In fact, [I cannot say I personally know this - but I have read internal
memos that make the claim] Intel pays for more Linux developers, and now
LLVM developers, than any other firm. What's interesting is that Intel does
not really sell its HW product directly to end-users. We sell to others
that use our chips to make their products. We have finally moved to the
support model for the compilers (I've personally been fighting that battle
for 15 years).
So back to my basic point ... while giving the *behavior* a name, the
*idea* of "Open Source" is really not anything new. While it may be new in
their lifetime/experience, it is frankly at minimum a sad, if not outright
disingenuous, statement for people to try to imply otherwise because
they are unwilling to look back into history and understand it, much less
accept it as fact. Trying to rewrite history is just not pretty to
witness. And I am pleased to see that a few folks (like Larry) that have
lived a little in both eras have tried to pass the torch with a more
complete history.
Clem.
[-TUHS] [+COFF]
On Fri, Jul 16, 2021 at 4:06 AM Lars Brinkhoff <lars(a)nocrew.org> wrote:
> On ITS it only ever stored characters as full 36-bit words! So sizeof
> char == 1 == sizeof int. This is allowed per the C standard. (Maybe it
> was updated somewhere else, I dunno.)
>
The ZETA-C compiler ran on the Symbolics Lisp Machine and translated C into
Zetalisp; since everything was a Lisp object, from the C perspective all
elementary types had sizeof == 1 also. The modern Vacietis compiler to
Common Lisp uses the same design for its data, though it does not share any
code. C pointers are represented by CL closures.
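As an aside, here is a minimal, portable C illustration of why that is
conforming: sizeof(char) is 1 by definition, and the standard only requires
that int be at least as wide as char, so both being 1 is legal (assumes any
C99 compiler; the printed values naturally vary by platform):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* sizeof(char) == 1 by definition; sizeof(int) >= sizeof(char)
           is all the standard demands, so 1 == 1 is conforming. */
        printf("CHAR_BIT     = %d\n", CHAR_BIT);
        printf("sizeof(char) = %zu\n", sizeof(char));
        printf("sizeof(int)  = %zu\n", sizeof(int));
        /* On a 36-bit word-addressed machine an implementation could
           report CHAR_BIT == 36 and both sizes == 1. */
        return 0;
    }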
Moved to the COFF list.
> Yes, WAITS is what I was thinking of. As I mentioned in my previous
> mail, it feels like the SAIL timesharing systems get mentioned briefly
> in a lot of accounts of historical computing, sometimes with mention
> that they had some sort of (relatively) advanced video terminals, but
> no in-depth descriptions of the actual hardware/software environment.
I agree WAITS gets very little attention, particularly in relation to
the great number of things pioneered at SAIL.
I'm involved in making emulators for some of the hardware. SAIL started
out with a couple of PDP-1 timesharing systems with vector displays from
Philco. But that's almost a prehistoric era.
The PDP-6/10 started with another vector system from III. It could
support up to 12 displays, but only ever had 6. A raster display system
was added in the early 70s. It must have been one of the very first
bitmapped display systems. It came from the Data Disc company and used a
disk for storage. It was dual-ported: the computer could write data,
and the displays could read. 64 displays were supported.
The III and DD displays used the SAIL keyboard which introduced the META
key.
The Data Disc displays and SAIL keyboard heavily influenced Tom Knight
at MIT to make a similar system for their AI lab PDP-10 running ITS.
On 14 Jul 2021 22:21 -0400, from douglas.mcilroy(a)dartmouth.edu (Douglas McIlroy):
> IBM provided source code for the Fortran II compiler.
More recently than that, for the original IBM PC anyone could get (I
believe) the complete schematics, detailed technical information, and
a commented ROM BIOS source code listing just by purchasing their
Technical Reference for, what, $50 or thereabouts?
It certainly wasn't open source according to the Open Source
Definition, but it certainly was _available_ to anyone who wanted a
copy.
What kind of company does that today, in a similar market segment?
--
Michael Kjörling • https://michael.kjorling.se • michael(a)kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
Hi Stephen,
I noticed you shared an article from KiCAD.org when you talked about PCB
design software tools, here:
https://minnie.tuhs.org/Blog/2019_06_18_cscvon8_lessons_learned.html
I thought that the article from KiCAD.org is actually pretty good, but we
recently published an article that goes much deeper and talks about the ten
best PCB design software tools in 2021.
PCB design software is a program that helps electronics engineers design
printed circuit board (PCB) layouts. This design tool helps users
collaborate on various projects, access libraries of previously created
components, verify the accuracy of their circuit schematic design, and
more.
The article talks about the best PCB design software tools available in the
market today and provides answers to the following questions:
- What is PCB design software and why do you need it?
- What are the evaluation criteria for PCB design software?
- What are the key features to look for when choosing PCB design tools?
- What are other recommended PCB design software tools?
We quote 30 different sources in the article -- it's quite authoritative.
Here's the article, if you're interested:
https://www.wellpcb.com/special/pcb-design-software.html
Would you consider sharing our article with your readers by linking to it?
Anyone who is looking for the right PCB design software to use might find
this very useful.
Please let me know if you have any questions and thank you for your time.
Cheers,
-Jesse
--
Jesse Davidson, Editor
5 Ross Rd
Durham, NH 03824
BTW, if you didn't like getting this email, please reply with something
like "please don't email me anymore", and I'll make sure that we don't.
Sigh ... Warren, I am going to ask for your indulgence once here on TUHS as
I try to get any *new* discussion moved to COFF, but I guess it's time to
renew this history, as enough people have joined the list since the last
time this was all discussed ... I'll do this once -- please take any other
discussion off this list. It has been argued too many times. Many of the
actors in this drama are part of the list. Sadly we have lost a few,
sometimes because of the silliness of the argument/trying to give people
credit or not/personal preferences, etc.
If you want to comment, please go back and read both the TUHS and COFF
archives and I suspect your point may have already been made. *If you
really do have something new, please move to COFF.*
On Wed, Jul 14, 2021 at 4:21 AM Angus Robinson <angus(a)fairhaven.za.net>
wrote:
> Looking at a few online sources, Linus actually said when "386BSD came
> out, Linux was already in a usable state, that I never really thought about
> switching. If 386BSD had been available when I started on Linux, Linux
> would probably never had happened".
>
A number of us, such as Larry and I, have discussed this a bunch, both
online and in person. What would become 386BSD was actually available as
early as 1988, but you needed to know the public FTP address of where to
get it at UCB (the UCB licensees had access to that FTP server). Bostic was
still working on what would become the 'NET' release, but this tarball
offered a bootable system and had things in it that AT&T would later
require UCB to remove. In fact, this system would have X10 ported to it
and was a reasonably complete 'distro' in today's terms.
By formal definition, the tarball and the rest of UNIX from Research is,
and always has been, '*Open Source*' in that the sources were available.
*But they were licensed*. This was fairly typical of much early software,
BTW. The binary-only nature came about only with the minicomputers.
The tarball in question was fairly easy to find in the wild, but to use the
sources as a system, you technically needed an AT&T license. And
practically, you needed access to a BSD box to rebuild them, which took a
license - although by then SunOS was probably close enough (I do not
know anyone that tried it).
The sources in the tarball were not '*Free and Open Source*' -- which
becomes the crux of the issue. [Sadly the OSS folks have confused this
over the years and that important detail is lost]. Many people, such as
myself, got worried when the AT&T suit began and started hacking on Linux
at that point, as the not-nearly-as-mature but sort-of-working version
without networking or graphics had appeared [386BSD had both and a real
installer - more in a minute].
FWIW: Linus could have had access to the BSD-for-a-386 tarball if he had
asked in the right place. But as he has said later in time, he wanted to
write his own OS and did not bother to ask the right folks at his
university or try to get permission. Although he has said he had access to
a Sun3 and that it was the impetus for his work. This is an important point
that Larry reminds us of: many institutions kept the sources locked away,
like his U of Wisconsin. Other places were more liberal about access. IIRC
Larry sometimes refers to it as the "UNIX Club."
In my own case, I was running what would become 386BSD on my Wyse 32:16 box
at home and on an NCR 386-based system at Clemson, as I was consulting for
them at the time. I also helped Bill with the PC/AT disk driver [WD1003 and
later WD7000/SCSI controllers], as I had access to the docs from WD, which
Bill did not. I think I still have a photocopy of them.
What basically happened is that BSDi forked, and that begat a number of
things, from hurt feelings to a famous lawsuit. A number of us thought
the latter was about copyright (we were wrong; it was about trade secrets).
We were worried that the AT&T copyright would cause UNIX for an inexpensive
processor to disappear. We >>thought<< (incorrectly) that the license
that Linux was using, the GPL, would save us. Turns out >>legally<< it
would not have, if AT&T had won, at least in the USA and most NATO Allies -
the trade secret applied to all implementations of Ken, Dennis, and the
rest of the BTL folks' ideas. All of the Unix-like systems were in
violation at this point. BSDi/UCB was where AT&T started. The problem is
that while the court found that AT&T did create and own the >>ideas<< (note
ideas are not the source code implementation of the ideas), they could not
call the UNIX 'IP' trade secrets, since the AT&T people had published it
all academically in books like Maurice Bach's, and moreover they had been
forced by the 1956 consent decree to make the license available; they had
taught an industry. BTW: It's not just software; the transistor 'gets
out' of AT&T under the same type of rules.
In reality, like PGP, since there was lots of UNIX-based IP in other
places, it is hard to see in practice how AT&T could have enforced the
trade secret. But again -- remember Charlie Brown (AT&T CEO) wanted to go
after IBM, thinking the big money in computers was in the mainframe. So
they did believe that they could exert pressure on UNIX-like systems at the
higher end, and they might have been able to enforce that.
On 06/07/2021, Larry McVoy <lm(a)mcvoy.com> wrote:
>
> http://lkml.iu.edu/hypermail/linux/kernel/0106.2/0405.html
>
> I wasn't completely right 20 years ago but I was close. I'm tired,
> if you want to know where I'm wrong, ask and I'll tell you how I
> tried to get Linus to fix it.
>
> In general, Rob was on point. He usually is.
>
I've never been a fan of clone(). It always strikes me as an elegant
simplification at first, but the practical realization (on Linux, that is)
requires several rather ugly library-level hacks to make it work right for
typical use cases.
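To make that concrete for readers who have never called the raw interface,
here is a minimal Linux clone() sketch (real clone(2) API; the stack size
and flag choices are just illustrative). Note that the caller, not the
kernel, must allocate the child's stack and know which way it grows --
exactly the sort of detail a thread library has to paper over:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    #define STACK_SIZE (1024 * 1024)

    static int counter;

    static int child_fn(void *arg)
    {
        ++*(int *)arg;   /* visible to the parent: CLONE_VM shares memory */
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(STACK_SIZE);
        if (!stack) { perror("malloc"); return 1; }

        /* Stacks grow down on most architectures, so pass the top. */
        pid_t pid = clone(child_fn, stack + STACK_SIZE,
                          CLONE_VM | SIGCHLD, &counter);
        if (pid == -1) { perror("clone"); return 1; }

        waitpid(pid, NULL, 0);
        printf("counter = %d\n", counter);   /* prints 1 */
        free(stack);
        return 0;
    }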
UX/RT will use the "processes are containers for threads" model rather
than rfork()/clone() since that's the model the seL4 kernel basically
uses (in a very generalized form with address spaces, capability
spaces, and threads being separate objects and each thread being
associated with a capability space and address space), and it would
also be slightly easier to create the helper threads that will be
required in certain parts of the IPC transport layer.
The base process creation primitive (efork(), for "empty/eviscerated
fork") will create a completely blank non-runnable child process with
no memory mappings or file descriptors, and return a context FD that
the parent can use to manipulate the state of the child with normal
APIs, including copying FDs and memory mappings. To actually start the
child the parent will perform an exec*() within the child context
(either a regular exec*() to make the child run a different program,
or a new eexec() function that takes an entry point rather than a
command line to run the process with whatever memory mappings were set
up), after which point the parent will no longer be able to manipulate
the child's state.
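A hypothetical sketch of that spawn path, in C, for concreteness -- the
names efork() and eexec() come from the description above, but every
signature here (and the fd_copy()/exec_in() helpers) is invented purely
for illustration:

    #include <unistd.h>

    /* All declarations below are hypothetical. */
    int efork(void);                          /* blank child; returns a context FD */
    int fd_copy(int child, int from, int to); /* copy an FD into the child */
    int exec_in(int child, const char *path,  /* exec*() within the child context */
                char *const argv[], char *const envp[]);

    int spawn(const char *path, char *const argv[], char *const envp[])
    {
        int child = efork();          /* non-runnable: no mappings, no FDs */
        if (child < 0)
            return -1;

        /* Populate the child through its context FD before it runs. */
        for (int fd = 0; fd < 3; fd++)
            fd_copy(child, fd, fd);

        /* Starting the child; afterwards the parent can no longer
           manipulate the child's state. (eexec() would instead take an
           entry point into already-set-up mappings.) */
        if (exec_in(child, path, argv, envp) < 0) {
            close(child);
            return -1;
        }
        return child;
    }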
This will eliminate the overhead of fork() for spawning processes
running other programs, but will still allow for a library-level
fork() implementation that has comparable overhead to traditional
implementations. Also, it will do what Plan 9 never did and make the
process/memory APIs file-oriented (I still don't get why Plan 9 went
with a rather limited anonymous memory API rather than using
memory-mapped files for everything).
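For contrast, plain POSIX already lets a program treat a file as ordinary
memory; the suggestion above just takes that to its logical conclusion and
backs *all* memory with files. A minimal sketch (the path /tmp/heap is only
an example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/heap", O_RDWR | O_CREAT, 0600);
        if (fd < 0 || ftruncate(fd, 4096) < 0) { perror("setup"); return 1; }

        /* The region behaves like ordinary anonymous memory, but it is
           named and shareable through the filesystem. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "hello from a file-backed mapping");
        puts(p);

        munmap(p, 4096);
        close(fd);
        return 0;
    }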
Also, we're straying a bit from historical Unix here and should have
probably moved to COFF several messages ago.