> Are there languages that copy arrays in function calls by default?
> Pascal is an example.
Pascal's var convention, where the distinction between value
and reference is made once and for all for each argument of
each function, is sound. The flexibility of PL/I, where the
distinction is made at every call (parenthesize the name to
pass an array by value) is finicky, though utterly general.
> Where is all that [memory] going to come from if you pass a
> large array on a memory-constrained system of specs common back in the
> days when C was designed
Amusingly, under the customary linkage method in the even earlier
days when Fortran was designed, pass-by-reference entailed a big
overhead that could easily dominate pass-by-value for small arrays.
[In the beginning, when CPUs had only one register, subroutine
preambles plugged the reference into every mention of that variable
throughout the body of the subroutine. This convention persisted
in Fortran, which was designed for a machine with three index
registers. Since reference variables were sometimes necessary
(think of swap(a,b) for example) they were made standard.]
Doug
> From: Random832
> It seems to me that this check is central to being able to (or not)
> modify the in-core image of any process at all other than the one being
> traced (say, by attaching to a SUID program that has already dropped
> privileges, and making changes that will affect the next time it is
> run).
Right, good catch: if you have a program that was _both_ sticky and SUID, when
the system is idle (so the text copy in the swap area won't get recycled),
call up a copy under the debugger, patch it, exit (leaving the patched copy),
and then re-run it without the debugger.
I'd have to check the handling of patched sticky pure texts - to see if they
are retained or not.
{Checks code.}
Well, the code to do with pure texts is _very_ different between V6 and
PWB1.
The exact approach above might not work in V6, because the modified (in-core)
copies of pure texts are simply deleted when the last user exits them. But it
might be possible for a slight variant to work; leave the copy under the
debugger (which will prevent the in-core copy from being discarded), and then
run it again without the debugger. That might do it.
Under PWB1, I'm not sure if any variant would work (very complicated, and I'm
fading). There's an extra flag bit, XWRIT, which is set when a pure text is
written into; when the last user stops using the in-core pure text, the
modified text is written to swap. (It looks like the in-core copy is always
discarded when the last user stops using it.) But the check for sticky would
probably stop a sticky pure-text being modified? But maybe the approach that
seems like it would work under V6 (leave the patched, debugger copy running,
and start a new instance) looks like it should work here too.
So maybe the sticky thing is irrelevant? On both V6 and PWB1, it just needs a
pure text which is SETUID: start under the debugger, patch, leave running, and
start a _new_ copy, which will run the patched version as the SUID user.
Noel
> From: Clem Cole
> I said -- profil - I intended to say ptrace(2)
Is that the one where running an SUID program under the debugger allowed one
to patch the in-core image of said program?
If so, I have a story, and a puzzle, about that.
A couple of us, including Jim Gettys (later of X-windows fame) were on our way
out to dinner one evening (I don't recall when, alas, but I didn't meet him
until '80 or so), and he mentioned this horrible Unix security bug that had
just been found. All he would tell me about it (IIRC) was that it involved
ptrace.
So, over dinner (without the source) I figured out what it had to be:
patching SUID programs. So I asked him if that was what it was, and I don't
recall his exact answer, but I vaguely recall he hemmed and hawed in a way
that let me know I'd worked it out.
So when we got back from dinner, I looked at the source to our system to see
if I was right, and.... it had already been fixed! Here's the code:
if (xp->x_count!=1 || xp->x_iptr->i_mode&ISVTX)
goto error;
Now, we'd been running that system since '77 (when I joined CSR), without any
changes to that part of the OS, so I'm pretty sure this fix pre-dates your
story?
So when I saw your email about this, I wondered 'did that bug get fixed at
MIT when some undergrad used it to break in' (I _think_ ca. '77 is when they
switched the -11/45 used for the undergrad CS programming course from an OS
called Delphi to Unix), or did it come with PWB1? (Like I said, that system
was mostly PWB1.)
So I just looked in the PWB1 sources, and... there it is, the _exact_ same
fix. So we must have got it from PWB1.
So now the question is: did the PWB guys find and fix this, and forget to
tell the research guys? Or did they tell them, and the research guys blew
them off? Or what?
Noel
> We all took the code back and promised to get patches out ASAP and not tell any one about it.
Fascinating. Changes were installed frequently in the Unix lab, mostly
at night without fanfare. But an actual zero-day should have been big
enough news for me to have heard about. I'm pretty sure I didn't; Dennis
evidently kept his counsel.
Doug
> From: "Ron Natalie"
> Ordered writes go back to the original BSD fast file system, no? I seem
> to recall that when we switched from our V6/V7 disks, the filesystem got
> a lot more stable in crashes.
I had a vague memory of reading about that, so I looked in the canonical FFS
paper (McKusick et al, "A Fast File System for UNIX" [1984]) but found no
mention of it.
I did find a paper about 'fsck' (McKusick, Kowalski, "Fsck: The UNIX File
System Check Program") which talks (in Section 2.5. "Updates to the file
system") about how "problem[s] with asynchronous inode updates can be avoided
by doing all inode deallocations synchronously", but it's not clear if they're
talking about something that was actually done, or just saying
(hypothetically) that that's how one would fix it.
Is it possible that the changes to the file system (e.g. the way free blocks
were kept) made it more crash-proof?
Noel
> The problem with that is that * doesn't really bind to the type name.
> It binds to the variable.
>
> char* cp1, cp2; // cp1 is pointer to char, cp2 is just a char.
>
> I always found it confusing that the * is used to indicate a pointer
> here, whereas when you want to change an lvalue to a pointer, you use
> &.
The way to read it is that you are declaring *cp1 as a char.
-- Richard
> From: Warner Losh
> There's a dcheck.c in the TUHS v7 sources. How's that related?
That was one of the earlier tools - not sure how far back it goes, but it's in
V6 though not V5. It consistency-checks the directory tree. Another tool, 'icheck',
consistency-checks file blocks and the free list.
Noel
I've made available on GitHub a series of tables showing the evolution
of Unix facilities (as documented in the man pages) over the system's
lifetime [1] and two diagrams where I attempted to draw the
corresponding architecture [2]. I've also documented the process in a
short blog post [3]. I'd welcome any suggestions for corrections and
improvements you may have, particularly for the architecture diagrams.
[1] https://dspinellis.github.io/unix-history-man/
[2] https://dspinellis.github.io/unix-architecture/
[3] https://www.spinellis.gr/blog/20170510/
Cheers,
Diomidis
On 6 May 2017 at 11:23, ron minnich <rminnich(a)gmail.com> wrote (in part):
[...]
> Lest you think things are better now, Linux uses self modifying code to
> optimize certain critical operations, and at one talk I heard the speaker
> say that he'd like to put more self modifying code into Linux, "because it's
> fun". Oh boy.
Fun, indeed! Even self-modifying chips are being touted -- Yikes!
N.
> tr -cs A-Za-z '\n' |
> tr A-Z a-z |
> sort |
> uniq -c |
> sort -rn |
> sed ${1}q
>
> This is real genius.
Not genius. Experience. In the Bentley/Knuth/McIlroy paper I said,
"[Old] Unix hands know instinctively how to solve this one in a jiffy."
While that is certainly true, the script was informed by my having
written "spell", which itself was an elaboration of a model
pioneered by Steve Johnson. By 1986, when BKM was published,
the lore was baked in: word-processing scripts in a similar
vein were stock in trade.
A very early exercise of this sort was Dennis Ritchie's
enumeration of anagrams in the unabridged Merriam-Webster.
Since the word list barely fit on the tiny disk of the time,
the job entailed unimaginable marshalling of resources. I
was mightily impressed then, and still am.
Doug
On Thu, May 4, 2017 at 7:14 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> some of those Berkeley flags (not specifically for cat, but almost
> certainly including those for cat) were really quite useful.
Amen!!! I think that this is a key point. What is in good taste or good
style? Doug's disdain for the results of:
less --help | wc
is found in bad design, a poor user interface, little fore thought, *etc.*
Most of what is there has been added over Eric Shienbrood's
original more(1). I do not believe most of it is used that often, but some of
it is, of course, and you can not tell who uses what!! So how do you decide
what to get rid of? How do you learn what people really need -- IMO: much of
that is *experience* and this thing we call 'good taste.' As opposed to what I
think happened with less(1) and many similar programs (programmers
peeing on the code because they could and the source was available -- I can
add this feature and I think it is cool -- as opposed to asking what we really
need, and not having a 'master builder' arbitrating or vetting things).
The problem we have is that we don't yet have a way of defining good taste.
One might suggest that it takes years of civilization, and also that
tastes change over time. Pike's minimalistic view (which I think is
taken to the extreme in the joke about the automobile dashboard on Brian
Kernighan's car) sets the bar at one end, and is probably one of the reasons
why UNIX had a bad reputation as difficult, certainly among non-computer
scientists, when it first appeared. The other extreme is systems that put
you in a box and never let you do anything but what you were told to do,
which I find just as frightening, and frustration builds when I use them.
Clearly, systems so noisy that you can not find what you really want or
need are another dimension of the same bad design. So what to do? [more in
a minute...]
Larry is right. Many of the 'features' added to UNIX (and Linux) over time
have been and *are* useful. Small and simple as it was (and I really
admire Ken, Dennis and the Team for its creation), in 2017 I really
don't want to run the Sixth Edition for my day-to-day work anymore - which
I happily did in 1977. But the problem is, as we got the 'unlimited address
space' of 32 and 64 bits, and more room for more 'useful' things, we also
got a great deal of rubbish and waste. I read Doug's point about
less --help | wc as saying that there are so many thorns and weeds, it is
hard to see the flowers in the garden.
I'd like to observe that most college courses in CS I have seen
talk about how to construct programs, algorithms, and structures - i.e. the
mechanics of some operation. But this discussion is about the human
element: what we feel is good or bad, and how that relates to how we use
the program.
I think about my friends that have degrees in literature, art and
architecture. In all cases, they spend a lot of time examining past
examples, of good and bad - and thinking and discussing what makes them so.
I'm actually happy to see that it is Professor McIlroy who is one of the
folks taking a stand on the current craziness. I think this is only going to
get better with a new crop of students who have been trained in 'good
taste.' So, I wonder, do any of the schools like Dartmouth and the like
teach courses that study 'style' and taste in CS? Yes, it is a young
field, but we have been around long enough that we do have a body of work,
good and bad, to consider.
I think there is a book or two and a few lectures in there somewhere.
Thoughts?
Clem
> From: Michael Kjörling <michael(a)kjorling.se>
> To: tuhs(a)minnie.tuhs.org
> Subject: Re: [TUHS] Discuss of style and design of computer programs
> from a user stand point
>
> I would actually take that one step further: When you are writing
> code, you are _first and foremost_ communicating with whatever human
> will need to read or modify the code later. That human might be you, a
> colleague, or the violent psychopath who knows both where you live and
> where your little kids go to school (might as well be you). You should
> strive to write the code accordingly, _even if_ the odds of the threat
> ever materializing are slim at most. Style matters a lot, there.
>
Interesting: I was going to say much the same thing about the violent psychopath
who has to maintain your code after you leave. When I lectured at UCSD, or was
giving talks on style for ViaSat, I always said the same thing:
Whatever you write, the fellow who is going to wind up maintaining it is a known
axe killer, now released from prison, completely reformed. He learned computer
programming on MS/DOS 3.1 and a slightly broken version of Pascal. He will be
given your home phone number and address so if he has any questions about the
code you wrote he can get in contact with you.
This always got a few chuckles. I then pointed out that whenever anyone gets code
that someone else wrote, the recipient always thinks that they can ‘clean up’ what
is there because the original author clearly doesn’t understand what proper code
looks like.
Over time, I've learned that everyone has a style when writing code, just like
handwriting, and given enough time I can spot the author of a block of code just
from the indenting, the placement of ( and ) around a statement, and other small traits.
What makes good code is the ability to convey the meaning of the algorithm
from the original author to all those who come after. Sometimes even the most
unusual code can be quite clear, while the most cleanly formatted and commented
code can be opaque to all.
David
On 3 May 2017 at 09:09, Arthur Krewat <krewat(a)kilonet.net> wrote:
> Not to mention, you can cat multiple files - as in concatenate :)
Along these lines, who said "Cat went to Berkeley, came back waving flags"?
N.
> I believe I was the last person to modify the Linux man page macros.
> Their current maintainer is not the kind of groff expert to whom it
> would occur to modify them; it would work as well to ask me questions
Question #1. Which tmac file do they use? If it's not in the groff
package, where can it be found?
Doug
OK, I recall a note dmr wrote probably in the late 70s/early 80s when folks
at UCB had (iirc) extended the symbol name size in C programs to
essentially unlimited. This followed on (iirc) file names going beyond 14
characters.
The rough outline was that dmr was calling out the revisions for being too
general, and the phrase "BSD sins" sticks in my head (sins as a verb).
I'm reminded of this by something that happened with some interns recently,
as they wanted to make something immensely complex to cover a case that
basically never happened. I was trying to point out that you can go
overboard on that sort of thing, and it would have been nice to have such a
quote handy -- anyone else remember it?
ron
There you go:
http://harmful.cat-v.org/cat-v/
Em 2 de mai de 2017 17:29, "Diomidis Spinellis" <dds(a)aueb.gr> escreveu:
On 02/05/2017 19:11, Steve Johnson wrote:
> I recall a paper Dennis wrote (maybe more like a note) that was titled
> echo -c considered harmful
> (I think it was -c). It decried the tendency, now completely out of
> control, for everybody and their dog to piddle on perfectly good code
> just because it's "open".
>
There's definitely Rob Pike's talk "UNIX Style, or cat -v Considered
Harmful", which he delivered at the 1983 Usenix Association Conference and
Software Tools Users Group Summer Conference. Unfortunately, I can't find
it online. It's interesting that the talk's date is now closer to the
birth of Unix than to the present.
Diomidis
I'm this close to figuring out how to get NetBSD to work on fs-uae with
no prior Amiga experience. Searching around the English Amiga users'
board for clues, I found a guide on downloading and installing Amix,
complete with Amix download links. I haven't tried it myself - I'm still
working on my BSD tangent. But for anyone interested:
http://eab.abime.net/showthread.php?t=86480
> From: Josh Good
> Would the command "cd /tmp ; rm -rf .*" be able to kill a V6 ... system?
Looking at the vanilla 'rm' source for V6, it cannot/does not delete
directories; one has to use the special 'rmdir' command for that. But,
somewhat to my surprise, it does support both the '-r' and '-f' flags, which I
thought were later. (Although not as 'stacked' flags, so you'd have to say
'rm -r -f'.)
So, assuming one did that, _and_ (important caveat!) _performed that command
as root_, it probably would empty out the entire directory tree. (I checked,
and "cd /tmp ; echo .*" evaluates to ". .." on V6.)
Noel
The JHU version of the V6 kernel and the mount program were modified (or
should I say buggered) so that unprivileged users could mount user packs.
There were certain restrictions added as well: no setuid on mounted
volumes etc.
The problem came up that people would mount them using relative paths and
the mtab wouldn't really show who was using the disk as a result. I
suggested we just further bugger it by making the program chdir to '/dev'
first. That way you wouldn't have to put /dev/ on the drive device and
you'd have to give an absolute path for the mount point (or at least one
relative to /dev). I pointed out to my coworker that there was nothing in
/dev/ to mount on. He started trying it. Well the kernel issued errors
for trying to use a special file as a mount point. He then tried "."
Due to a combination of bugs that worked!
The only problem is: how do you unmount it? The /dev nodes had been
replaced by the root directory of my user pack. Oh well, go halt and
reboot.
There were supposed to be protections against this. Mind you I did not
have root access at this point (just a lowly student operator), so we
decided to see where else we could mount. Sure enough cd /etc/ and mount
on "." there. We made up our own password file. It had one account with
uid 0 and the name "Game Player" in the gcos field. About this time one of the
system managers called and told us to halt the machine, as it had been
hacked. I told him we were responsible and we'd undo what we did.
I think by this time Mike Muuss came out and gave me the "mount" source and
told me to fix it.
Tim Newsham:
I'm not sure what fd 3 is intended to be, but it's the telnet socket in p9p.
====
By the 10/e days, file descriptor 3 was /dev/tty. There was
no more magic driver for /dev/tty; the special file still
existed, but it was a link to /dev/fd/3.
Similarly /dev/stdin stdout stderr were links to /dev/fd/0 1 2.
(I mean real links, not mere symbolic ones.)
I have a vague recollection that early on /dev/tty was fd/127
instead, but that changed somewhere in the middle 8/e era.
None of which says what Plan 9 did with that file descriptor,
though I suppose it could possibly have copied the /dev/tty
use.
And none of that excuses the hard-coded magic number file
descriptor, but hackers will be hackers.
Norman Wilson
Toronto ON
Here are my notes to run 8th Edition Research Unix on SIMH.
http://9legacy.org/9legacy/doc/simh/v8
These notes are quite raw and unpolished, but should be
sufficient to get Unix running on SIMH.
Feel free to use, improve and share.
--
David du Colombier
Many years ago I was at Burroughs and they wanted to do Unix (4.1c) on a new machine. Fine. We all started on the project porting from a Vax. So far so good. Then a new PM came in and said that intel was the future and we needed to use their machines for the host of the port. And an intel rep brought in their little x86 box running some version of Unix (Xenix?, I didn’t go anywhere near the thing). My boss, who was running the Unix port project did the following:
Every Friday evening he would log into the intel box as root and run "/bin/rm -rf /" from the console. Then turn off the console and walk away.
Monday morning found the box dead and the intel rep would be called to come and ‘fix’ his box.
This went on for about 4 weeks, and finally my boss asked the intel rep what was wrong with his machine.
The rep replied that this was ‘normal’ for the hardware/software and we would just have to “get used to it”.
The PM removed the intel box a couple of days later.
David
> On Apr 25, 2017, at 7:19 AM, tuhs-request(a)minnie.tuhs.org wrote:
>
> From: Larry McVoy <lm(a)mcvoy.com>
> To: Clem Cole <clemc(a)ccc.com>
> Cc: Larry McVoy <lm(a)mcvoy.com>, TUHS main list <tuhs(a)minnie.tuhs.org>
> Subject: Re: [TUHS] was turmoil, moving to rm -rf /
>
> Whoever was the genius that put mknod in /etc has my gratitude.
> We had other working Masscomp boxen but after I screwed up that
> badly nobody would let me near them until I fixed mine :)
>
> And you have to share who it was, I admitted I did it, I think
> it's just a thing many people do..... Once :)