> - separation of code and data using read-only and read/write file systems
I'll bite. How do you install code in a read-only file system? And
where does a.out go?
My guess is that /bin is in a file system of its own. Executables from
/etc and /lib are probably there too. On the other hand, I guess
users' personal code is still read/write.
I agree that such an arrangement is prudent. I don't see a way,
though, to update bin without disrupting most running programs.
Doug
All,
I was introduced to Unix in the mid-1990s through my wife's VMS account
at UT Arlington, where they had a portal to the WWW. I was able to
download Slackware with the 0.9 kernel on 11 floppies, including X11. I
installed this on my system at the time - either a DEC Rainbow 100B? or
a hand-me-down generic PC. A few years later, at Western Illinois
University, they had some Sun workstations and I loved working with
them. It would be several years later, though, that I would actually
use unix in a work setting - 1998. I don't even remember what brand of
unix, but I think it was, again, Sun, though no GUI, so not as much
love. Still, I was able to use rcs, and when my Windows-bound buddies
lost a week's work because of some snafu with their backups, I didn't
lose anything - jackflash was the name of the server - good memories
:). However, after this it was all DOS and Windows until 2005.
I'd been eyeing Macs for some time. I like the visual aesthetics and
obvious design considerations. But, in 2005, I finally had a bonus big
enough to actually buy one. I bought a G5 24" iMac and fell in love with
Mac. Next, it was a 15" G4 PowerBook. I loved those Macs until Intel
came around, and then it was game over, no more PCs in my life (not
really, but emotionally, this was how I felt). With Mac going Intel, I
could dual-boot into Windows, triple-boot into Linux, and quadruple-boot
into FreeBSD, and I could ditch Fink and finally manage my unix tools
properly (arguable, I know) with Homebrew or MacPorts (lately, I've gone
back to MacPorts due to Homebrew's lack of support for older OS
versions, and for MacPorts' seeming rationality).
Anyhow, I have thoroughly enjoyed the Mac ride, but with Catalina the
ride got really bumpy (too much phoning home, and no more 32-bit
programs - since Adobe Acrobat X, which I own outright, isn't 64-bit,
among other apps, this is just not an option for me), and with Big Sur
it's gotten worse, potholes, sinkholes, and suchlike, and the interface
is downright patronizing (remember Microsoft Bob?). So here I am, Mr.
Run-Any-Cutting-Edge-OS-Anytime guy, hanging on tooth and nail to macOS
Mojave, where I still have a modicum of control over my environment.
My thought for the day and question for the group is... It seems that
the options for a free operating system (free as in freedom) are
becoming ever more limited. Microsoft, this week, announced that their
Edge update will remove Edge Legacy and IE while doing the update -
nuts; the Mac's desktop is turning into iOS - ew, ick; and Linux is the
Wild West meets dictatorship, with major corporations (Microsoft,
Oracle, IBM, etc.) moving in to set its direction. FreeBSD we've beaten
to death over the last couple of weeks, so I'll leave it out of the mix
for now. What in our unix past speaks to the current circumstance, and
what
do those of you who lived those events see as possibilities for the next
revolution - and, will unix be part of it?
And a bonus question: why, oh why, can't we have a contained kernel
that provides minimal functionality (dare I say microkernel), that is
securable, with layers above it on which everything else can run, with
auditing and suchlike for traceability?
Hi,
As I find myself starting yet another project that wants to use ANSI
control sequences for colorization of text, I find myself -- yet again
-- wondering if there is a better way to generate the output from the
code in a way that respects TERMinal capabilities.
Is there a better / different control sequence that I can ~> should use
for colorizing / stylizing output that will account for the differences
in capabilities between a VT100 and an XTerm?
Can I wrap things that I output so that I don't send color control
sequences to a TERMinal that doesn't support them?
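One well-trodden approach (a minimal sketch, assuming a POSIX shell and
the terminfo-based tput utility from ncurses) is to ask the terminal
database what the current TERM can do, and emit color sequences only
when it advertises them:

    # Query terminfo for this $TERM; fall back to no color if the
    # terminal reports fewer than 8 colors or tput fails entirely.
    if colors=$(tput colors 2>/dev/null) && [ "${colors:-0}" -ge 8 ]
    then
        red=$(tput setaf 1)     # set foreground to color 1 (red)
        bold=$(tput bold)
        reset=$(tput sgr0)      # restore default attributes
    else
        red= bold= reset=
    fi
    printf '%serror:%s disk on fire\n' "$bold$red" "$reset"

With TERM=vt100, tput reports no color capability, the variables stay
empty, and the same printf degrades gracefully to plain text.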
--
Grant. . . .
unix || die
The recent discussions on the TUHS list of whether /bin and /usr/bin
are different, or symlinked, brought to mind the limited disk and tape
sizes of the 1970s and 1980s. Especially the lower-cost tape
technologies had issues with correct recognition of an end-of-tape
condition, making it hard to span a dump across tape volumes, and
strongly suggesting that directory tree sizes be limited to what could
fit on a single tape.
I made an experiment today across a broad range of operating systems
(many with multiple versions in our test farm), and produced these two
tables, where version numbers are included only if the O/S changed
practices:
------------------------------------------------------------------------
Systems with /bin a symlink to /usr/bin (or both to yet another common
directory) [42 major variants]:
ArchLinux        Kali                 RedHat 8
Arco             Kubuntu 19, 20       Q4OS
Bitrig           Lite                 ScientificLinux 7
CentOS 7, 8      Lubuntu 19           Septor
ClearLinux       Mabox                Solaris 10, 11
Debian 10, 11    Mageia               Solydk
Deepin           Manjaro              Sparky
DilOS            Mint 20              Springdale
Dyson            MXLinux 19           Ubuntu 19, 20, 21
Fedora           Neptune              UCS
Gnuinos          Netrunner            Ultimate
Gobolinux        Oracle Linux         Unleashed
Hefftor          Parrot 4.7           Void
IRIX             PureOS               Xubuntu 19, 20
------------------------------------------------------------------------
Systems with separate /bin and /usr/bin [60 major variants]:
Alpine           Hipster              OS108
AltLinux         KaOS                 Ovios
Antix            KFreeBSD             PacBSD
Bitrig           Kubuntu 18           Parrot 4.5
Bodhi            LibertyBSD           PCBSD
CentOS 5, 6      LMDE                 PCLinuxOS
ClonOS           Lubuntu 17           Peppermint
Debian 7--10     LXLE                 Salix
DesktopBSD       macOS                ScientificLinux 6
Devuan           MidnightBSD          SlackEX
DragonFlyBSD     Mint 18--20          Slackware
ElementaryOS     MirBSD               Solus
FreeBSD 9--13    MXLinux 17, 18       T2
FuryBSD          NetBSD 6--10         Trident
Gecko            NomadBSD             Trisquel
Gentoo           OmniOS               TrueOS
GhostBSD         OmniTribblix         Ubuntu 14--18
GNU/Hurd         OpenBSD              Xubuntu 18
HardenedBSD      OpenMandriva         Zenwalk
Helium           openSUSE             Zorinos
------------------------------------------------------------------------
Some names appear in both tables, indicating a transition from
separate directories to symlinked directories in more recent O/S
releases.
Many of these system names are spelled in mixed lettercase, and if
I've botched some of them, I extend my apologies to their authors.
Some of those systems run on multiple CPU architectures, and our test
farm exploits that; however, I found no instance of the CPU type
changing the separation or symbolic linking of /bin and /usr/bin.
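For anyone who wants to repeat the experiment, the per-system probe
amounts to something like this (a minimal sketch, assuming POSIX sh;
the "both symlinked to a third directory" case calls for the same test
applied to /usr/bin as well):

    # Classify one system: is /bin its own directory or a symlink?
    if [ -h /bin ]
    then
        ls -ld /bin    # prints the symlink and its target
    else
        echo "/bin is a real directory, separate from /usr/bin"
    fi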
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
To fill out the historical record, the earliest doctype I know of
was a shell (not rc) script. From my basement heater that happens
to run 10/e:
b$ man doctype | uniq

DOCTYPE(1)                                                 DOCTYPE(1)

NAME
     doctype - guess command line for formatting a document

SYNOPSIS
     doctype [ option ... ] [ file ]

DESCRIPTION
     Doctype guesses and prints on the standard output the command
     line for printing a document that uses troff(1), related
     preprocessors like eqn(1), and the ms(6) and mm macro packages.

     Option -n invokes nroff instead of troff.  Other options are
     passed to troff.

EXAMPLES
     eval `doctype chapter.?` | apsend
          Typeset files named chapter.0, chapter.1, ...

SEE ALSO
     troff(1), eqn(1), tbl(1), refer(1), prefer(1), pic(1),
     ideal(1), grap(1), ped(9.1), mcs(6), ms(6), man(6)

BUGS
     It's pretty dumb about guessing the proper macro package.

Page 1                            Tenth Edition (printed 2/24/2021)
doctype(1) is in the 8/e manual, so it existed in early 1985;
I bet it's actually older than that. The manual page is on
the V8 tape, but, oddly, not the program; neither is it in
the V10 pseudo-tape I cobbled together for Warren long ago.
I'm not sure why not.
The version in rc is, of course, a B-movie remake of the
original.
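For flavor, the heart of such a script could have been little more
than a few greps over the input (a hypothetical reconstruction in
POSIX sh, not the actual 8/e script, which was surely smarter about
macro packages, as the BUGS note admits; it assumes file arguments
are given):

    #!/bin/sh
    # doctype-ish sketch: guess a formatting pipeline for the files
    files=$*
    pipe=
    grep -q '^\.PS' $files && pipe='pic | '           # pic(1) pictures
    grep -q '^\.TS' $files && pipe="${pipe}tbl | "    # tbl(1) tables
    grep -q '^\.EQ' $files && pipe="${pipe}eqn | "    # eqn(1) equations
    mac=
    grep -q '^\.PP' $files && mac=' -ms'              # crude ms detection
    echo "cat $files | ${pipe}troff$mac"

An eval of its output, as in the manual's EXAMPLES section, then runs
the guessed pipeline; the -n/nroff option and the other preprocessors
(refer, grap, ideal) would extend the same pattern.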
Norman Wilson
Toronto ON
Lately, I've been playing around in v6 unix and mini-unix with a goal of
better understanding how things work and maybe doing a little hacking.
As my fooling around progressed, it became clear that moving files into
and out of the v6 unix world was a bit tedious. So it occurred to me
that having a way to mount a v6 filesystem under linux or another modern
unix would be kind of ideal. At the same time it also occurred to me
that writing such a tool would be a great way to sink my teeth into the
details of old Unix code.
I am aware of Amit Singh's ancientfs tool for osxfuse, which implements
a user-space v6 filesystem (among other things) for MacOS. However,
being read-only, it's not particularly useful for my problem. So I set
out to create my own FUSE-based filesystem capable of both reading and
writing v6 disk images. The result is a project I call retro-fuse,
which is now up on github for anyone to enjoy
(https://github.com/jaylogue/retro-fuse)
A novel (or perhaps just peculiar) feature of retro-fuse is that, rather
than being a wholesale re-implementation of the v6 filesystem, it
incorporates the actual v6 kernel code itself, "lightly" modernized to
work with current compilers, and reconfigured to run as a Unix process.
Most of the file-handling code of the kernel is there, down to a trivial
block device driver that reflects I/O into the host OS. There's also a
filesystem initialization feature that incorporates code from the
original mkfs tool.
Currently, retro-fuse only works on linux. But once I get access to my
mac again in a couple of weeks, I'll port it to MacOS as well. I also
hope to expand it to support other filesystems, such as v7 or the early
BSDs, but we'll see when that happens.
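For anyone curious what that looks like in practice, a session on
linux would run along these lines (a hypothetical sketch: the command
name v6fs and its arguments are assumptions, not retro-fuse's actual
CLI, which the github README documents; fusermount is the standard
FUSE unmount tool on Linux):

    mkdir /tmp/v6                # mount point on the host
    v6fs v6root.img /tmp/v6      # hypothetical: mount a v6 image via FUSE
    ls /tmp/v6/usr/source        # browse the v6 tree with ordinary tools
    cp hello.c /tmp/v6/usr/      # move files into the v6 world...
    cp /tmp/v6/unix .            # ...and out of it again
    fusermount -u /tmp/v6        # unmount when done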
As I expected, this was a fun and very educational project to work on.
It forced me to really understand what was going on in the kernel (and
to really pay attention to what Lions was saying). It also gave me a
little view into what it was like to work on Unix back in the day.
Hopefully someone else will find my little self-education project useful
as well.
--Jay
Some additions:
Systems with /bin a symlink to /usr/bin
Digital UNIX 4.0
Tru64 UNIX 5.0 to 5.1B
HP-UX 11i 11.23 and 11.31
Systems with separate /bin and /usr/bin
SCO UNIX 3.2 V4.0 to V4.2
--
The more I learn the better I understand I know nothing.
> I can imagine a simple perl (or python or whatever) script that would run
> through groff input [and] determine which preprocessors are actually
> needed ...
Brian imagined such and implemented it way back when. Though I used
it, I've forgotten its name. One probably could have fooled it by
tricks like calling pic only in a .so file and perhaps renaming .so.
But I never heard of it failing in real life. It does impose an extra
pass over the input, but may well save a pass compared to the
defensive groff -pet that I often use or to the rerun necessary when I
forget to mention some or all of the filters.
All,
So, we've been talking low-level design for a while. I thought I would
ask a fundamental question. In days of old, we built small
single-purpose utilities and used pipes to pipeline the data and
transformations. Even back in the day, it seemed that there was tension
to add yet another option to every utility. Today, as I was marveling at
groff's abilities with regard to printing my man pages directly to my
printer in 2021, I read the groff(1) page:
example here: https://linux.die.net/man/1/groff
What struck me (the wrong way) was the second paragraph of the description:
The groff program allows to control the whole groff system by command
line options. This is a great simplification in comparison to the
classical case (which uses pipes only).
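For concreteness, here is roughly what the two styles look like side
by side (a minimal sketch; the file name and the -ms macro package are
placeholders):

    # the classical case: each preprocessor is its own pipeline stage
    pic paper.ms | tbl | eqn | troff -ms

    # the groff way: options stand in for the pipe stages
    groff -p -t -e -ms paper.ms

Both produce the same typesetter output; the argument is over which is
simpler to reason about.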
Here is the current plethora of options:
groff [-abcegilpstzCEGNRSUVXZ] [-d cs] [-f fam] [-F dir] [-I dir] [-L
arg] [-m name] [-M dir] [-n num] [-o list] [-P arg] [-r cn] [-T dev] [-w
name] [-W name] [file ...]
Now, I appreciate groff, don't get me wrong, but my sensibilities were
offended by the idea that a kazillion options were in any way simpler
than pipelining single-purpose utilities. What say you? Is this the
perfected logical extension of the unix pioneers' work, or have we gone
horribly off the trail?
Regards,
Will