Hi Stephen,
I noticed you shared an article from KiCAD.org when you talked about PCB
design software tools, here:
https://minnie.tuhs.org/Blog/2019_06_18_cscvon8_lessons_learned.html
I thought the article from KiCAD.org was actually pretty good, but we
recently published an article that goes much deeper and covers the ten
best PCB design software tools in 2021.
PCB design software is a program that helps electronic engineers design
printed circuit board (PCB) layouts. This design tool helps users
collaborate on various projects, access libraries of previously created
components, check the accuracy of their circuit schematic designs, and
more.
The article talks about the best PCB design software tools available in the
market today and provides answers to the following questions:
- What is PCB design software, and why do you need it?
- What are the evaluation criteria for PCB design software?
- What are the key features to look for when choosing PCB design tools?
- What are other recommended PCB design software tools?
We quote 30 different sources in the article -- it's quite authoritative.
Here's the article, if you're interested:
https://www.wellpcb.com/special/pcb-design-software.html
Would you consider sharing our article with your readers by linking to it?
Anyone who is looking for the right PCB design software to use might find
this very useful.
Please let me know if you have any questions and thank you for your time.
Cheers,
-Jesse
--
Jesse Davidson, Editor
5 Ross Rd
Durham, NH 03824
BTW, if you didn't like getting this email, please reply with something
like "please don't email me anymore", and I'll make sure that we don't.
Sigh ... Warren, I am going to ask for your indulgence once here on TUHS as
I try to get any *new* discussion moved to COFF, but I guess it's time to
renew this history as enough people have joined the list since the last
time this was all discussed ... I'll do this once -- please take any other
discussion off this list. It has been argued too many times. Many of the
actors in this drama are part of the list. Sadly we have lost a few,
sometimes because of the silliness of the argument/trying to give people
credit or not/personal preferences, etc.
If you want to comment, please go back and read both the TUHS and COFF
archives and I suspect your point may have already been made. *If you
really do have something new, please move to COFF.*
On Wed, Jul 14, 2021 at 4:21 AM Angus Robinson <angus(a)fairhaven.za.net>
wrote:
> Looking at a few online sources, Linus actually said when "386BSD came
> out, Linux was already in a usable state, that I never really thought about
> switching. If 386BSD had been available when I started on Linux, Linux
> would probably never had happened".
>
A number of us, such as Larry and I, have discussed this a bunch both online
and in person. What would become 386BSD was actually available as early
as 1988, but you needed to know the public FTP address at UCB where to get
it (the UCB licensees had access to that FTP server). Bostic was
still working on what would become the 'NET' release, but this tarball
offered a bootable system and did have things in it that later AT&T would
require UCB to remove. In fact, this system would have X10 ported to it
and was a reasonably complete 'distro' in today's terms.
By formal definition, the tarball and the rest of UNIX from Research is, and
always has been, '*Open Source*' in that the sources were available. *But
they were licensed*. This was fairly typical of much early software, BTW.
Binary-only distribution only came about with the minicomputers.
The tarball in question was fairly easy to find in the wild, but to use the
sources as a system you technically needed an AT&T license. And
practically you needed access to a BSD box to rebuild them, which took a
license - although by then SunOS was probably close enough, though I do
not know anyone who tried it.
The sources in the tarball were not '*Free and Open Source*' -- which
becomes the crux of the issue. [Sadly the OSS folks have confused this
over the years and that important detail is lost.] Many people, such as
myself, got worried when the AT&T suit began and started hacking on Linux
at that point, as a not nearly as mature but sort-of-working version without
networking or graphics had appeared [386BSD had both and a real installer -
more in a minute].
FWIW: Linus could have had access to the BSD for a 386 tarball if he had
asked in the right place. But as he has said later in time, he wanted to
write his own OS and did not bother to ask the right folks at his
university or try to get permission. Although he has said he had access to
a Sun3 and that this was the impetus for his work. This is an important
point that Larry reminds us of: many institutions kept the sources locked
away, like his U of Wis. Other places were more liberal about access. IIRC
Larry sometimes refers to it as the "UNIX Club."
In my own case, I was running what would become 386BSD on my Wyse 32:16 box
at home and on an NCR 386-based system in Clemson, as I was consulting for
them at the time. I also helped Bill with the PC/AT disk driver [WD1003 and
later WD7000/SCSI controllers], as I had access to the docs from WD which
Bill did not. I think I still have a photocopy of them.
What basically happened is that BSDi forked, and that begat a number of
things, from hurt feelings to a famous lawsuit. A number of us thought
the latter was about copyright (we were wrong; it was about trade secrets).
We were worried that the AT&T copyright would cause UNIX for an inexpensive
processor to disappear. We >>thought<< (incorrectly) that the license
that Linux was using, the GPL, would save us. Turns out >>legally<< it
would not have, if AT&T had won, at least in the USA and most NATO Allies -
the trade secret applied to all implementations of Ken, Dennis, and the
rest of the BTL folk's ideas. All of the Unix-like systems were in
violation at this point. BSDi/UCB was where AT&T started. The problem is
that while the court found that AT&T did create and own the >>ideas<< (note:
ideas are not the source-code implementation of those ideas), they could not
call the UNIX 'IP' trade secrets, since the AT&T people had published it all
academically in books like Maurice Bach's, they had been forced by the 1956
consent decree to make the license available, and they had taught an
industry. BTW: it's not just software; the transistor 'gets out' of AT&T
under the same type of rules.
In reality, as with PGP, since there was lots of UNIX-based IP in other
places, it is hard to see in practice how AT&T could have enforced the
trade secret. But again -- remember Charlie Brown (AT&T CEO) wanted to go
after IBM, thinking the big money in computers was in the mainframe. So
they did believe that they could exert pressure on UNIX-like systems for
the higher end, and they might have been able to enforce that.
On 06/07/2021, Larry McVoy <lm(a)mcvoy.com> wrote:
>
> http://lkml.iu.edu/hypermail/linux/kernel/0106.2/0405.html
>
> I wasn't completely right 20 years ago but I was close. I'm tired,
> if you want to know where I'm wrong, ask and I'll tell you how I
> tried to get Linus to fix it.
>
> In general, Rob was on point. He usually is.
>
I've never been a fan of clone(). It seems like an elegant simplification
at first, but the practical realization (on Linux, that is) requires
several rather ugly library-level hacks to make it work right for typical
use cases.
UX/RT will use the "processes are containers for threads" model rather
than rfork()/clone() since that's the model the seL4 kernel basically
uses (in a very generalized form with address spaces, capability
spaces, and threads being separate objects and each thread being
associated with a capability space and address space), and it would
also be slightly easier to create the helper threads that will be
required in certain parts of the IPC transport layer.
The base process creation primitive (efork(), for "empty/eviscerated
fork") will create a completely blank non-runnable child process with
no memory mappings or file descriptors, and return a context FD that
the parent can use to manipulate the state of the child with normal
APIs, including copying FDs and memory mappings. To actually start the
child the parent will perform an exec*() within the child context
(either a regular exec*() to make the child run a different program,
or a new eexec() function that takes an entry point rather than a
command line to run the process with whatever memory mappings were set
up), after which point the parent will no longer be able to manipulate
the child's state.
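To make that concrete, spawning another program would look roughly like
the C sketch below. This is only an illustration of the intended flow:
apart from efork() and eexec(), the helper names (ctx_dup_fd(),
ctx_execve()) and all of the signatures are placeholders, not a finalized
API.

    /* Sketch only: ctx_dup_fd() and ctx_execve() are placeholder names,
     * and none of these signatures are final. */
    int efork(void);                            /* blank, non-runnable child; returns a context FD */
    int ctx_dup_fd(int ctx, int pfd, int cfd);  /* copy one of the parent's FDs into the child */
    int ctx_execve(int ctx, const char *path,
                   char *const argv[], char *const envp[]);  /* exec*() within the child context */

    int spawn(const char *path, char *const argv[], char *const envp[])
    {
        int ctx = efork();            /* child starts with no mappings and no FDs */
        if (ctx < 0)
            return -1;

        /* copy just the parent's stdio into the child; nothing else is inherited */
        for (int fd = 0; fd < 3; fd++)
            ctx_dup_fd(ctx, fd, fd);

        /* exec within the child context; after this the parent can no
         * longer manipulate the child's state */
        return ctx_execve(ctx, path, argv, envp);
    }
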
This will eliminate the overhead of fork() for spawning processes
running other programs, but will still allow for a library-level
fork() implementation that has comparable overhead to traditional
implementations. Also, it will do what Plan 9 never did and make the
process/memory APIs file-oriented (I still don't get why Plan 9 went
with a rather limited anonymous memory API rather than using
memory-mapped files for everything).
Also, we're straying a bit from historical Unix here and should have
probably moved to COFF several messages ago.
I know of two early computer (in the stored program sense) programming
books.
1951: Preparation of Programs for an Electronic Digital Computer (Wilkes,
Wheeler, & Gill)
1957: Digital Computer Programming (McCracken)
What others were published prior to the McCracken text?
Excluded are lecture compendia and symposia proceedings, such as:
1946: Moore School Lectures
1947: Proceedings of a Symposium on Large-Scale Digital Calculating
Machinery
1951: Proceedings of a Second Symposium on Large-Scale Digital
Calculating Machinery
1953: Faster Than Thought, A Symposium On Digital Computing Machines
These were principally about designs for, and experience with, new hardware.
I'm curious about texts specifically focused on the act of programming.
Were there others prior to McCracken?
paul
Seen in my calendar yesterday:
Jun 15 UNIVAC I delivered to the Census Bureau, 1951
70 years! And Unix has been around for nearly 52 of those years.
Amusingly, though, the next entry is:
Jun 16 First publicized programming error at Census Bureau, 1951
Which suggests that they were able to install the machine and get
it running in only one day.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
From wiki:
"The first Univac was accepted by the United States Census Bureau on
March 31, 1951, and was dedicated on June 14 that year.[3][4] "
https://en.wikipedia.org/wiki/UNIVAC_I
--
The more I learn the better I understand I know nothing.
The EFF just published an article on the rise and fall of Gopher on
their Deeplinks blog.
"Gopher: When Adversarial Interoperability Burrowed Under the
Gatekeepers' Fortresses"
https://www.eff.org/deeplinks/2020/02/gopher-when-adversarial-interoperabil…
I thought it might be of interest to people here.
--
Michael Kjörling • https://michael.kjorling.se • michael(a)kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”