I've assembled some notes from old manuals and other sources
on the formats used for on-disk file systems through the
Seventh Edition:
http://www.cita.utoronto.ca/~norman/old-unix/old-fs.html
Additional notes, comments on style, and whatnot are welcome.
(It may be sensible to send anything in the last two categories
directly to me, rather than to the whole list.)
Computer pioneer Niklaus Wirth was born on this day in 1934; he helped
design ALGOL, one of the most influential languages ever, with just
about every programming language in use today tracing its roots to it.
His name is pronounced "vurt" but he would accept "worth", and he joked
that you could call him by name or by value (you need to know ALGOL to
understand).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
The recent discussion of long-lived applications, and backwards
compatibility in Unix, got me thinking about the history of shared
objects. My own experience with Linux and MacOS is that
statically-linked applications tend to continue working from release
to release, but shared objects provided by the OS tend not to be
backwards compatible, and you often have to carry around with your
application the exact C runtime and other shared objects your program
was linked against. This is in stark contrast to shared libraries on
VMS, where great care is taken to maintain strict backward
compatibility release to release.
What is the history of shared objects on Unix? When did they first
appear, and with what object/executable file format? The a.out ZMAGIC
format doesn't seem to support them. I don't recall if COFF does.
Mach-O, at least the macOS dialect of it, supports dynamic libraries.
ELF supports them.
Also, when was symbol preemption invented? Traditional shared library
designs such as those in IBM System/370, VMS, and Windows NT don't have
it. As one who worked on optimizations in compilers, I came to hate
symbol preemption because it prohibits many useful optimizations. ELF
does provide a way to turn it off, but it's on by default--you have to
explicitly declare symbols as protected or hidden via source language
pragmas to get rid of it.
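(For the curious, here is a minimal sketch of what turning preemption
off looks like in source, assuming GCC or Clang on an ELF target; the
attribute and pragma spellings are the GNU ones, and the foo_* names
are made up purely for illustration:

    /* foo.c -- sketch of opting symbols out of ELF symbol preemption.
     * Build with something like: cc -fPIC -shared -o libfoo.so foo.c */

    /* "protected": still exported from the .so, but references made
     * from inside the library bind locally and cannot be preempted,
     * so the compiler is free to inline or otherwise optimize them. */
    __attribute__((visibility("protected")))
    int foo_add(int a, int b) { return a + b; }

    /* "hidden": not exported at all; purely internal to the library. */
    __attribute__((visibility("hidden")))
    int foo_scale(int x) { return 2 * x; }

    /* The pragma form covers a whole region of declarations. */
    #pragma GCC visibility push(hidden)
    int foo_private_counter;
    #pragma GCC visibility pop

    /* Default visibility: this one can still be preempted by a
     * definition in the executable or an earlier shared object. */
    int foo_version(void) { return 1; }

Building the whole library with -fvisibility=hidden and then marking
only the public entry points visible achieves the same thing
wholesale.)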
-Paul W.
Time for another hand-grenade in the duck pond :-) Or as we call it
down-under, "stirring the possum".
On this day in 2010, it was found unanimously that Novell, not SCO, owned
"Unix". SCO appealed later, and it was dismissed "with prejudice"; SCO
shares plummeted as a result.
As an aside, this was the first and only time that I was on IBM's side,
and I still wonder whether M$ was bankrolling SCO in an effort to wipe
Linux off the map; what sort of an idiot would take on IBM?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
On this day in 1778 businessman Oliver Pollock created the "$" sign, and
now we see it everywhere: shell prompts and variables, macro strings, Perl
variables, system references (SYS$BLAH, SWAP$SYS, etc), etc; where would
we be without it?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
[TUHS] long lived programs (was Re: RIP John Backus)
> Every year someone takes some young hotshot and points them at some
> "impossible" thing and one of them makes it work. I don't see that
> changing.
Case in point.
We hired Tom Killian, a young high-energy physicist disenchanted
with contributing to hundred-author papers. He'd done plenty of
instrument programming, but no operating systems. So, high-energy
as he was, he cooked up an exercise to get his feet wet.
The result: /proc
Doug
On 3/17/2018 12:22 PM, Arthur Krewat <krewat(a)kilonet.net> wrote:
> Leave it to IBM to do something backwards.
>
> Of course, that was in 1954, so I can't complain, it was 11 years before
> I was born. But that's ... odd.
>
> Was subtraction easier than addition with digital electronics back then?
> I would think that they were both the same level of effort (clock
> cycles) so why do something obviously backwards logically?
Subtraction was done by taking the two's complement and adding. I
suspect the CPU architect (Gene Amdahl -- not exactly a dullard)
intended programmers to store array elements at increasing memory
addresses, and reference an array element relative to the address of the
last element plus one. This would allow a single index register (and
there were only three) to be used as the index and the (decreasing)
count. See the example on page 97 of:
James A. Saxon
Programming the IBM 7090: A Self-Instructional Programmed Manual
Prentice-Hall, 1963
http://www.bitsavers.org/pdf/ibm/7090/books/Saxon_Programming_the_IBM_7090_…
The Fortran compiler writers decided to reverse the layout of array
elements so a Fortran subscript could be used directly in an index register.
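(A rough modern illustration -- C rather than 704/7090 assembler, and
entirely my own sketch -- of the idiom Saxon describes, with one
decreasing register doing double duty as loop count and index, and the
effective address formed as "last element plus one" minus the index:

    /* backwards.c -- effective address = address field minus index,
     * mimicking the 704/709/7090 index registers with C pointers. */
    #include <stdio.h>

    #define N 5

    int main(void)
    {
        int a[N] = {10, 20, 30, 40, 50};
        int *one_past_last = a + N;    /* "last element plus one" */

        /* ix is both the remaining count and the amount subtracted
         * from the address; it only ever counts down toward zero. */
        for (int ix = N; ix > 0; ix--)
            printf("a[%d] = %d\n", N - ix, *(one_past_last - ix));

        return 0;
    }

This walks a[0] through a[4] in increasing-address order even though
the register itself only decreases.)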
Hi,
The Hacker's Dictionary says that daemons were so named in CTSS. I'm
guessing then that Ken Thompson brought them into Unix? I've noticed that
more recent implementations of init have shunned the traditional
terminology in favor of the more prosaic word "services". For example,
Solaris now has SMF, the Service Management Facility, and systemd, the
Linux replacement for init, has services as well. It makes me a little sad,
because it feels like some of the imaginativeness, fancifulness, and
playfulness that imbue the Unix spirit are being lost.
[try-II]
On Fri, Mar 23, 2018 at 6:43 AM, Tim Bradshaw <tfb(a)tfeb.org> wrote:
> On 22 Mar 2018, at 21:05, Bakul Shah <bakul(a)bitblocks.com> wrote:
>
>
> I was thinking about a similar issue after reading Bradshaw's
> message about FORTRAN programs being critical to his country's
> security. What happens in 50-100 years when such programs have
> been in use for a long time but none of the original authors
> may be alive? The world may have moved on to newer languages
> and there may be very few people who study "ancient" computer
> languages and even they won't have in-depth experience to
> understand the nuances & traps of these languages well enough.
> No guarantee that FORTRAN will be in much use then! Will it be
> like in science fiction where ancient spaceships continue
> working but no one knows what to do when they break?
>
>
> My experience of large systems like this is that this isn't how they work
> at all. The program I deal with (which is around 5 million lines naively
> (counting a lot of stuff which probably is not source but is in the source
> tree)) is looked after by probably several hundred people. It's been
> through several major changes in its essential guts and in the next ten
> years or so it will be entirely replaced by a new version of itself to deal
> with scaling problems inherent in the current implementation. We get a new
> machine every few years onto which it needs to be ported, and those
> machines have not all been just faster versions of the previous one, and
> will probably never be so.
>
> What it doesn't do is to just sit there as some sacred artifact which
> no-one understands, and it's unlikely ever to do so. The requirements for
> it to become like that would be at least that the technology of large-scale
> computers was entirely stable, compilers, libraries and operating systems
> had become entirely stable and people had stopped caring about making it do
> what it does better. None of those things seems very likely to me.
>
> (Just to be clear: this thing isn't simulating bombs: it's forecasting the
> weather.)
>
+1 - exactly my point.
We have drifted a bit from pure UNIX, but I actually do think this is
relevant to UNIX history. Once UNIX started to run on systems targeting
HPC loads where Fortran was the dominant programming language, UNIX
quickly displaced custom OSs and became the dominant target, even if at
the beginning of that transition the 'flavor' of UNIX did vary (we
probably can and should discuss how that happened and why independently
-- although I will point out the UNIX/Linux implementation running at,
say, LLNL != the version running at, say, NASA Moffett). And the truth
is that today, for small experiments, you probably run Fortran on
Windows on your desktop. But for 'production' the primary OS for
Fortran is a UNIX flavor of some type, and it has been that way since
the mid-1980s - really starting with the UNIX wars of that time.
As I also have said here and elsewhere, HPC and very much its
lubricant, Fortran, are not something 'academic CS types' like to study
these days - even though Fortran (HPC) pays my salary and many of ours.
Yet it runs on the system those same academic types all prefer - *i.e.*
Ken and Dennis' ideas. The primary difference is the type of program
the users are running. But Ken and Dennis' ideas work well for almost
all users and span specific application markets.
Here is a picture I did a few years ago for a number of Intel execs.
At the time I was trying to explain to them that HPC is not a single
style of application, and also to help them understand that there are
two types of value - the code itself and the data. Some markets (*e.g.*
financial) use public data but the methods they use to crunch it
(*i.e.* the codes) are private, while other market segments might have
private data (*e.g.* oil and gas) but different customers use the same
or similar codes to crunch it.
For this discussion, think about how much of the code I show below is
complex arithmetic - some of it is Google-style searching, but a lot of
it is just plain nasty math. The 'nasty math' has not changed over the
years, and thus those codes are dominated by Fortran. [Note: Steve has
pointed out that with AI maybe the math could change in the future -
but certainly so far, the history of these markets is basically one of
differential equation solvers.]
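(To make 'nasty math' concrete, here is a toy sketch -- my own, in C
for brevity, and not taken from any of these codes -- of the kind of
kernel such solvers spend their time in: one explicit finite-difference
time step of the 1-D heat equation du/dt = k * d2u/dx2:

    /* heat_step.c -- toy explicit step; the real codes do this in
     * Fortran, in 3-D, over millions of cells and time steps. */
    #include <stdio.h>

    #define NX 8

    static void step(const double u[NX], double unew[NX],
                     double k, double dt, double dx)
    {
        unew[0] = u[0];            /* hold the boundary values fixed */
        unew[NX - 1] = u[NX - 1];
        for (int i = 1; i < NX - 1; i++)
            unew[i] = u[i]
                + k * dt / (dx * dx) * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
    }

    int main(void)
    {
        double u[NX] = {0, 0, 0, 1, 1, 0, 0, 0}, unew[NX];
        step(u, unew, 1.0, 0.1, 1.0);    /* k*dt/dx^2 = 0.1, stable */
        for (int i = 0; i < NX; i++)
            printf("%g ", unew[i]);
        printf("\n");
        return 0;
    }

The loop body is nothing but array indexing and floating-point
arithmetic, which is exactly what Fortran compilers have been tuned to
optimize for sixty years.)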
As Tim says, I really cannot see that changing, and the reason (I
believe) is that I do not see any compelling economic reason to do so.
Clem
On this day in 1978 Kurt Shoens placed the following comment in
def.h (the fork i maintain keeps it in nail.h):
/*
* Mail -- a mail program
*
* Author: Kurt Shoens (UCB) March 25, 1978
*/
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)