On Fri, Nov 20, 2020 at 4:33 PM Henry Bent <henry.r.bent(a)gmail.com> wrote:
I know I have asked this before, but I am curious about any new replies or insight. How did package management start?
Really good question. I thought the PKG_ADD we had on Masscomp in '83 was
grabbed from PWB 3.0. Unfortunately, Warren's stuff does not include it, but
that archive is known to be missing things (like SCCS, which was first
distributed as part of PWB 1.0 and every version after).
So here is what I remember ...
When we did the '85 /usr/group standard, one of the things we argued about
was how an ISV would >>deliver<< a binary, *i.e.* 'interchange' between two
systems, to use the TOPS-10/TOPS-20 terminology (which is actually what we
were using, since most of us were familiar with same). By the time we got
to IEEE P1003, the whole reason USTAR was created was to solve that - which
begat the famous Tar Wars of the Research (TAR format) *vs.* AT&T (cpio)
types. [USTAR was a compromise and, as I have said previously, was picked
due to the code in Ken's original implementation of the header CKSUM, so it
had an unintended extension mechanism, and because it was ASCII - cpio was
binary in those days AND could not be extended - so older readers could at
least read a new tape.]
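For anyone who has not looked at the format lately, here is a rough sketch of
that CKSUM trick (a modern Python illustration, not the historical code;
'example.tar' is just a placeholder name):

    # Sketch of how a (us)tar reader validates a 512-byte header: sum every
    # byte as unsigned, with the 8-byte chksum field (at offset 148) counted
    # as ASCII spaces, then compare against the stored value, which is kept
    # as octal ASCII.  Because the header is plain ASCII text, older readers
    # could simply skip fields they did not understand.

    def ustar_checksum_ok(header: bytes) -> bool:
        assert len(header) == 512
        stored = int(header[148:156].replace(b"\0", b" ").strip() or b"0", 8)
        computed = sum(header[:148]) + 8 * ord(" ") + sum(header[156:])
        return stored == computed

    # Example run against the first header block of any tar archive:
    with open("example.tar", "rb") as f:
        print(ustar_checksum_ok(f.read(512)))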
The 'install' was left to each ISV, and the assumption had been that you
would use a USTAR tape (and eventually the PAX program) to read the bits, but
each ISV did their own 'installer.' The idea of keeping a system-wide DB
of what was installed was still in the future. PWB 3.0/System III
PKG_ADD was primitive, but my memory is that it was the first attempt. I do
remember it was on a number of System III based systems, but it was very much
tied to installing the AT&T-supplied SW - which I suspect was a leftover from
AT&T's external maneuver of trying to supply everything. It was difficult
for ISVs to use, and I don't remember many doing so.
As you point out, the first commercial UNIX I remember that really tried to
solve it was Ultrix, which had something (setld) both for their own use and
for their ISVs - which frankly sucked, and which I personally hated and
railed against. But to DEC's credit, it was there. It was modeled after a
similar tool for VMS. Truth is, for a while it was the best. The biggest
thing that setld did (which in practice it did poorly) was try to keep a
DB of what you installed, so that an admin could type a command and see what
had been loaded and when, and also what licenses were installed to run
purchased software. Basically, it was driven by field service and SW
licensing.
When FreeBSD 1.0 came out, the big thing Jordan Hubbard did (and it was much
better than Linux installs for a long time) was work on install >>for a new
system<<. He also created the idea of 'packages,' which were all of the
thousands of UNIX tools that people had ported to FreeBSD and which could
optionally be installed. I think it really was the first with that name
and with most of the features we know. By today's measure, again, it was
crude; my memory is that unlike setld, since it was not managing licenses, he
didn't think to add a DB/log of what was being installed. He did not try
to solve the 'update' problem when a new version of FreeBSD was released,
BTW. Basically, you needed to do a new install.
Roll forward a couple of years, and Linux eventually picked up Jordan's
basic installer framework, which vastly improved the out-of-box experience
for some of the Linux distros. But the important thing that RH did beyond
FreeBSD was to create RPM, which added a setld-like DB to the scheme, not for
licenses, but so that you could easily do updates, add options, etc. They
combined Jordan's install ideas and package ideas, which was cool for a
system where you got/get everything from the distro.
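As a rough illustration of what that DB buys you today (a hedged sketch that
just shells out to the stock rpm query interface; the output will of course
differ per system):

    # Sketch: ask the RPM database "what is installed, and at what version?"
    # - the setld-style bookkeeping described above, minus the license
    # tracking.  Assumes a stock rpm(8) binary on PATH.
    import subprocess

    out = subprocess.run(
        ["rpm", "-qa", "--queryformat", "%{NAME} %{VERSION}-%{RELEASE}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in sorted(out.splitlines()):
        print(line)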
The truth is, none of Research UNIX, FreeBSD, nor Linux really put in the
effort that DEC, Masscomp, Sun, IBM, and HP did on how to update a system.
*i.e.* I'm currently running version 10.13.5 and I want to get to 10.14.2
-- what needs to be installed, and how will it affect already-installed and
running ISV codes? [IMO Microsoft is the worst and Apple is not much
better.]
Linux is a weird one. Because of the 'open source' thinking, the idea of
keeping old binaries running is not the high-order bit. DEC, IBM, HP,
Masscomp, and to some extent Sun and SGI, because they had a market for
commercial SW, have tried to keep old binaries going.
So ... now we have apt-get - which, for what it is, works pretty well, but
it still does not solve the problem someone like my firm, which sells
commercial SW, has. FWIW: since I actually wrote the spec for it inside
Intel, I can tell you the design/goal/direction given to the install
teams: my employer distributes using RPM, and it >>is supposed<< to work
unmodified with an RPM-based install (*i.e.* be 'socially compliant' with the
norms of a more commercial-like Linux site). The >>idea<< is that the RPMs
are supposed to be able to be automatically converted to Yum and a few other
formats (check the specs for each tool, however - this is not a
warranty from me - YMMV - I'm just telling you what I >>personally<< scream
at the team when I discover they did not test properly, as sometimes they do
break that - which can cause big issues when trying to install on a
supercomputer). The >>idea<< is that the current generation of package
tools, like the setld of yesterday, will allow the admin to see what's
running on the local system.
Clem