Hi all,
I've been using window(1) on my simh-emulated 11/73, but it can't handle
terminals much larger than 80x24, failing with "Out of memory."
I'd like to use window(1) to drive a big xterm, like 132x66, for
instance, because I'd like to reduce the number of telnet connections to
the host.
How does one go about analyzing and remediating the memory contention in
this environment?
If anyone's interested, we could set up a pair programming session to
work on it together, which I think would be most instructive, for me, at
least.
Bear in mind that this is just for pdp11 voyeurism / fun.
thx
jake
I've been thinking more about early yacc.
It's not mentioned explicitly, but I'm wondering if early yacc's output
(say in Unix version 3) was in the B language, since yacc itself was
written in B? It seems logical, but I can't back up this assertion as
there's no executable or source code that I can find. I assume there
had to be some sort of B compiler at some point, but the
hybrid v1/v2 Unix I've looked at doesn't have it.
And I'm still wondering what yacc was used for in the Unix v5 era.
There's no *.y at all, e.g. no expr and no bc. I still have some hopes
of modifying bc to run on Unix v5, or at least getting some simple
yacc program to work under the v5 version.
Mark
I just saw this video mentioned on reddit...
https://www.youtube.com/watch?v=XvDZLjaCJuw
UNIX: Making Computers Easier To Use -- AT&T Archives film from 1982, Bell
Laboratories
It features many of the Bell UNIX folks, and even includes a brief example
of speak in action at about the 15:20 mark.
It's really cool to see the proliferation of UNIX by 1982 inside Bell.
The Xerox Alto and CP/M are not Unix-derived, but the first in
particular influenced the design of Unix workstations and the X11
Window System in the 1980s, so this story may be of interest to list
readers:
Exposed: Xerox Alto and CP/M OS source code released
The Computer History Museum has made the code behind yet more
historic software available for download
http://www.itworld.com/article/2838925/exposed-xerox-alto-and-cp-m-os-sourc…
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
On 2014-10-28 13:42, Clem Cole <clemc(a)ccc.com> wrote:
> yes:
> http://repository.cmu.edu/cgi/viewcontent.cgi?article=3241&context=compsci
Cool. I knew CMU did a lot of things with 11/40 machines. I didn't know
they had modified them to be able to write their own microcode, but
thinking about it, it should have been obvious. As they did
multiprocessor systems based on the 11/40, they would have had to modify
the microcode anyway.
> I had a 60 running v7 years later. We also toyed with adding CSV/CRET
> but never did it because we got an 11/70.
>> On Oct 27, 2014, at 9:09 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
>>
>>> On Mon, 27 Oct 2014, Clem Cole wrote:
>>>
>>> [...] because the CMU 11/40E had special CSV/CRET microcode which we
>>> could not use on the 11/34.
>>
>> The 40E had microcode whilst the vanilla 40 didn't? I thought only the 60
>> was micro-programmable; I never did get around to implementing CSV/CRET on
>> our 60 (Digital had a bunch of them when a contract with a publishing
>> house fell through).
DEC actually made two PDP-11s that were micro-programmable: the 11/60
and the 11/03 (if I remember right). DEC never had microprogramming for
the 11/40, but obviously CMU did that.
Ronald Natalie<ron(a)ronnatalie.com> wrote:
>> On Oct 27, 2014, at 10:06 PM, Clem Cole <clemc(a)ccc.com> wrote:
>>
>> yes: http://repository.cmu.edu/cgi/viewcontent.cgi?article=3241&context=compsci
>>
>> I had a 60 running v7 years later. We also toyed with adding CSV/CRET but never did it because we got an 11/70.
> Problem with the 60 was it lacked Split I/D (as did the 40's). We kind of relied on that for the kernels towards the end of the PDP-11 days.
> We struggled with the lack of I/D on the 11/34 and 11/23 at BRL but finally gave up when TCP came along. We just didn't have enough segments to handle all the overlaying we needed to do. I recycled all the non-split-I/D machines into BRL GATEWAYS.
>
> Of course, there was the famous (or infamous) MARK instruction. This thing was sort of a kludge: you actually pushed the instruction on the stack and then did the RTS into the stack to execute the MARK to pop the stack and jump back to the caller. I know of no compiler (either DEC-written or UNIX) that used the silly thing. It obviously wouldn't work in split I/D mode anyhow. Years later, while sitting in some DEC product announcement presentation, they announced the new T-11 chip (the single chip PDP-11) and the speaker said that it supported the entire instruction set with the exception of MARK. Me and one other PDP-11 trivia guy are going "What? No MARK instruction?" in the back of the room.
Yurg... The MARK instruction was just silly. I never knew of anyone who
actually used it. Rumors have it that DEC just came up with it to be
able to extend some patent for a few more years related to the whole
PDP-11 architecture.
Clem Cole<clemc(a)ccc.com> wrote:
>> Problem with the 60 was it lacked Split I/D (as did the 40's).
>
> A problem was that it was a 40-class processor and, as you say, that means it
> was shared I/D (i.e. pure 16 bits) - so it lacked the 45-class 17th bit.
> The 60 went into history as the machine that went from product to
> "traditional products" faster than any other DEC product (IIRC 9 months).
> I'm always surprised to hear of folks that had them because so few were
> actually made.
I picked up four 11/60 machines from a place in the late 80s. I still
have a complete set of CPU cards, but threw the last machine away about
10 years ago.
> I've forgotten the details now, but they also had some issues when running
> UNIX. Steve Glaser and I chased those for a long time. The 60 had the HCM
> instruction sequences (halt and confuse microcode) which were somewhat
> random, although UNIX seemed to hit them. DEC envisioned it as a commercial
> machine and added decimal arithmetic to it for RSTS Cobol. I'm not sure
> RSX was even supported on it.
RSX-11M supports it. So do RSTS/E and RT-11. RSX-11M-PLUS obviously
doesn't, since it has a minimum requirement of 22-bit addressing.
The microcode-specific instructions are interesting, but in general
they shouldn't crash things; the kernel, of course, is a different story. :-)
Johnny
has anyone ever tried to compile any of the old C compilers with a 'modern'
C compiler?
I tried a few from the 80's (Microsoft/Borland) and there is a bunch of
weird stuff where integers suddenly become structs, and structures reference
fields that aren't in that struct:

c01.c
    register int t1;
    ....
    t1->type = UNSIGN;
And my favorite, which is closing a bunch of file handles for the heck of it,
and redirecting stdin/out/err from within the program instead of just
opening the file and using fread/fwrite:

c00.c
    if (freopen(argv[2], "w", stdout)==NULL ||
        (sbufp=fopen(argv[3],"w"))==NULL)
How did any of this compile? How did this stuff run without clobbering
each-other?
I don't know why but I started to look at this stuff with some half hearted
attempt at getting Apout running on Windows. Naturally there is no fork, so
when a child process dies, the whole thing crashes out. I guess I could
simulate a fork with threads and containing all the cpu variables to a
structure for each thread, but that sounds like a lot of work for a limited
audience.
But there really is some weird stuff in v7's c compiler.
Wow BSD on a supercomputer! That sounds pretty cool!
http://web.ornl.gov/info/reports/1986/3445600639931.pdf
From here it mentions it could scale to 16 process execution modules
(CPUs?)
while here http://ftp.arl.mil/mike/comphist/hist.html it mentions 4 PEMs
which each could run 8 processes.
It still looks like an amazing machine.
-----Original Message-----
From: Ronald Natalie
To: Noel Chiappa
Cc: tuhs(a)minnie.tuhs.org
Sent: 10/27/14 11:09 PM
Subject: Re: [TUHS] speaking of early C compilers
We thought the kernels got cleaned up a lot by the time we got to the
BSD releases. We were wrong.
When porting our variant of the 4 BSD to the Denelcor HEP supercomputer
we found a rather amusing failure.
The HEP was a 64 bit word machine but it had partial words of 16 and 32
bits. The way it handled these was to encode the word size in the
lower bits of the address (since the bottom three weren't used in word
addressing anyhow). If the bottom three were zero, then it was the
full word. If it was 2 or 6, it was the left or right half word, and
1,3, 5, and 7 yielded the four quarter words. (Byte operations used
different instructions so they directly addressed the memory).
Now Mike Muuss who did the C compiler port made sure that all the casts
did the right thing. If you cast "int *" to "short *" it would tweak
the low order bits to make things work. However the BSD kernel in
several places did what I call conversion by union: essentially this:
    union carbide {
        char  *c;
        short *s;
        int   *i;
    } u;

    u.s = ...some valid short*...;
    int *ip = u.i;
Note the compiler has no way of telling that you are storing and
retrieving through different union members and hence the low order bits
ended up reflecting the wrong word size and this led to some flamboyant
failures. I then spent a week running around the kernel making these
void* and fixing up all the access points to properly cast the accesses
to it.
The other amusing thing was what to call the data types. Since this
was a word machine, there was a real predisposition to call the 64 bit
sized thing "int" but that meant we needed another typename for the 32
bit thing (since we decided to leave short for the 16 bit integer).
I lobbied hard for "medium" but we ended up using int32. Of course,
this is long before the C standards ended up reserving the _ prefix for
the implementation.
The aforementioned fact that all structure members shared the same
namespace in the original implementation is why the practice of using
letter prefixes on them (like b_flags and b_next, rather than just
flags or next) persisted long after the C compiler got this issue
resolved.
Frankly, I really wish they'd have fixed arrays in C to be properly
functioning types at the same time they fixed structs to be proper types
as well. Around the time of the typesetter or V7 releases we could
assign and return structs but arrays still had the silly "devolve into
pointers" behavior that persists unto this day and still causes problems
among the newbies.
_______________________________________________
TUHS mailing list
TUHS(a)minnie.tuhs.org
https://minnie.tuhs.org/mailman/listinfo/tuhs
> From: Dave Horsfall <dave(a)horsfall.org>
> What, as opposed to spelling creat() with an "e"?
Actually, that one never bothered me at all!
I tended to be more annoyed by _extra_ characters; e.g. the fact that 'change
directory' was (in standard V6) "chdir" (as opposed to just plain "cd") I
found far more irritating! Why make that one _five_ characters, when most
common commands are two?! (cc, ld, mv, rm, cp, etc, etc, etc...)
Noel
Norman Wilson writes today:
>> ...
>> -- Dennis, in one of his retrospective papers (possibly that
>> in the 1984 all-UNIX BLTJ issue, but I don't have it handy at
>> the moment) remarked about ch becoming chdir but couldn't
>> remember why that happened.
>> ...
The reference below contains on page 5 this comment by Dennis:
>> (Incidentally, chdir was spelled ch; why this was expanded when we
>> went to the PDP-11 I don't remember)
@String{pub-PH = "Pren{\-}tice-Hall"}
@String{pub-PH:adr = "Upper Saddle River, NJ 07458, USA"}
@Book{ATT:AUS86-2,
author = "AT{\&T}",
key = "ATT",
title = "{AT}{\&T UNIX} System Readings and Applications",
volume = "II",
publisher = pub-PH,
address = pub-PH:adr,
pages = "xii + 324",
year = "1986",
ISBN = "0-13-939845-7",
ISBN-13 = "978-0-13-939845-2",
LCCN = "QA76.76.O63 U553 1986",
bibdate = "Sat Oct 28 08:25:58 2000",
bibsource = "http://www.math.utah.edu/pub/tex/bib/master.bib",
acknowledgement = ack-nhfb,
xxnote = "NB: special form AT{\&T} required to get correct
alpha-style labels.",
}
That chapter of that book comes from this paper:
@String{j-ATT-BELL-LAB-TECH-J = "AT\&T Bell Laboratories Technical Journal"}
@Article{Ritchie:1984:EUT,
author = "Dennis M. Ritchie",
title = "Evolution of the {UNIX} time-sharing system",
journal = j-ATT-BELL-LAB-TECH-J,
volume = "63",
number = "8 part 2",
pages = "1577--1593",
month = oct,
year = "1984",
CODEN = "ABLJER",
DOI = "http://dx.doi.org/10.1002/j.1538-7305.1984.tb00054.x",
ISSN = "0748-612X",
ISSN-L = "0748-612X",
bibdate = "Fri Nov 12 09:17:39 2010",
bibsource = "Compendex database;
http://www.math.utah.edu/pub/tex/bib/bstj1980.bib",
abstract = "This paper presents a brief history of the early
development of the UNIX operating system. It
concentrates on the evolution of the file system, the
process-control mechanism, and the idea of pipelined
commands. Some attention is paid to social conditions
during the development of the system.",
acknowledgement = ack-nhfb,
fjournal = "AT\&T Bell Laboratories Technical Journal",
topic = "computer systems programming",
}
Incidentally, on modern systems with tcsh and csh, I use both chdir
and cd; the long form does the bare directory change, whereas the
short form is an alias that also updates the shell prompt string and
the terminal window title.
I also have a personal alias "xd" (eXchange Directory) that is short
for the tcsh & bash sequence "pushd !*; cd .", allowing easy jumping
back and forth between pairs of directories, with updating of prompts
and window titles.
> From: Jason Stevens
> has anyone ever tried to compile any of the old C compilers with a
> 'modern' C compiler?
> ...
> How did any of this compile? How did this stuff run without clobbering
> each-other?
As Ron Natalie said, the early kernels are absolutely littered with all sorts
of stuff that, by today's standards, are totally unacceptable. Using a
variable declared as an int as a pointer, using a variable declared as a
'foo' pointer as a 'bar' pointer, yadda-yadda.
I ran (tripped, actually :-) across several of these while trying to get my
pipe-splicing code to work. (I used Version 6 since i) I am _totally_
familiar with it, and ii) it's what I had running.)
For example, I tried to be all nice and modern and declared my pointer
variables to be the correct type. The problem is that Unix generated unique
IDs to sleep on with code like "sleep(p+1, PPIPE)", and the value generated
by "p+1" depends on what type "p" is declared as - and if you look in pipe.c,
you'll see it's often declared as an int pointer. So when _I_ wrote
"sleep((p + 1), PPIPE)", with "p" declared as a "struct file" pointer, I got
the wrong number.
I can only speculate as to why they wrote code like this. I think part of it
is, as Brantley Coile points out, historical artifacts due to the evolution
of C from (originally) BCPL. That may have gotten them used to writing code
in a certain way - I don't know. I also expect the modern mindset (of being
really strict about types, and formal about converting data from one to
another) was still evolving back then - partly because they often didn't
_have_ the tools (e.g. casts) to do it right. Another possibility is that
they were using line editors, and maintaining more extensive source is a pain
with an editor like that. Why write "struct file *p" when you can just write
"*p"? And of course everyone was so space-conscious back then, with those tiny
disks (an RK05 pack is, after all, only 2.5MB - only slightly larger than a
3.5" floppy!) that every byte counted.
I have to say, though, that it's really kind of jarring to read this stuff.
I have so much respect for their overall structure (the way the kernel is
broken down into sub-systems, and the sub-systems into routines), how they
managed to get a very powerful (by anyone's standards, even today's) OS into
such a small amount of code... And the _logic_ of any given routine is
usually quite nice, too: clear and efficient. And I love their commenting
style - no cluttering up the code with comments unless there's something that
really needs elucidation, just a short header to say, at a high level, what
the routine does (and sometimes how and why).
So when I see these funky declarations (e.g. "int *p" for something that's
_only_ going to be used to point to a "struct file"), I just cringe - even
though I sort of understand (see above) why it's like that. It's probably the
thing I would most change, if I could.
Noel
Noel Chiappa:
> I tended to be more annoyed by _extra_ characters; e.g. the fact that
> 'change directory' was (in standard V6) "chdir" (as opposed to just
> plain "cd") I found far more irritating! Why make that one _five_
> characters, when most common commands are two?! (cc, ld, mv, rm, cp,
> etc, etc, etc...)
In the earliest systems, e.g. that on the PDP-7, the change-directory
command was just `ch'.
Two vague memories about the change:
-- Dennis, in one of his retrospective papers (possibly that
in the 1984 all-UNIX BLTJ issue, but I don't have it handy at
the moment) remarked about ch becoming chdir but couldn't
remember why that happened.
-- Someone else, possibly Tom Duff, once suggested to me that
in the earliest systems, the working directory was the only
thing that could be changed: no chown, no chmod. Hence just
ch for chdir. I don't know offhand whether that's true, but
it makes a good story.
Personally I'd rather have to type chdir and leav off th
trailing e on many other words than creat if it let me off
dealing with pieces of key system infrastructure that insist
on printing colour-change ANSI escape sequences (with, so far
as I can tell, no way to disable them) and give important files
names beginning with - so that grep pattern * produces an error.
But that happens in Linux, not UNIX.
Norman Wilson
Toronto ON
Thanks for clearing up the whole members-out-of-nowhere thing.
I had thought (ha ha) that since I don't have a working fork, I could just
rebuild CC as a native executable, and then just call apout for each stage,
but I never realized how interdependent they all are, at least C0 to C1.
It's crazy to think of how much this stuff cost once upon a time.
And now we live in the era of javascript pdp-11's
http://pdp11.aiju.de/
-----Original Message-----
From: Brantley Coile
To: Jason Stevens
Cc: tuhs(a)minnie.tuhs.org
Sent: 10/27/14 9:03 PM
Subject: Re: [TUHS] speaking of early C compilers
Early C allowed you to use the '->' operator with any scalar. See early
C reference manuals. This is the reason there is one operator to access
a member of a structure using a pointer and another, '.', to access a
member in a static structure. The B language had no types, everything
was a word, and dmr evolved C from B. At first it made sense to use the
'->' operator to mean add a constant to whatever is on the left and use
as an l-value.
You will also find that member names share a single name space. The
simple symbol table had a bit in each entry to delineate members from
normal variables. You could only use the same member name in two
different structs if the members had the same offsets. In other words,
it was legal to add a member name to the symbol table that was already
there if the value of the symbol was the same as the existing entry.
Dennis' compilers kept some backward compatibility even after the
language evolved away from them.
This really shows the value of evolving software instead of thinking one
has all the answers going into development. If one follows the
development of C one sees the insights learned as they went. The study
of these early Unix systems has a great deal to teach that will be
valuable in the post-Moore's-law age. Much of the world's software will
need a re-evolution.
By the way, did you notice the compiler overwrites itself? We used to
have to work in tiny spaces. Four megabytes was four million dollars.
Sent from my iPad
> On Oct 27, 2014, at 6:42 AM, Jason Stevens <jsteve(a)superglobalmegacorp.com> wrote:
>
> has anyone ever tried to compile any of the old C compilers with a 'modern'
> C compiler?
>
> I tried a few from the 80's (Microsoft/Borland) and there is a bunch of
> weird stuff where integers suddenly become structs, structures reference
> fields that aren't in that struct,
>
> c01.c
>     register int t1;
>     ....
>     t1->type = UNSIGN;
>
> And my favorite which is closing a bunch of file handles for the heck of it,
> and redirecting stdin/out/err from within the program instead of just
> opening the file and using fread/fwrite..
>
> c00.c
>     if (freopen(argv[2], "w", stdout)==NULL ||
>         (sbufp=fopen(argv[3],"w"))==NULL)
>
> How did any of this compile? How did this stuff run without clobbering
> each-other?
>
> I don't know why but I started to look at this stuff with some half hearted
> attempt at getting Apout running on Windows. Naturally there is no fork, so
> when a child process dies, the whole thing crashes out. I guess I could
> simulate a fork with threads and containing all the cpu variables to a
> structure for each thread, but that sounds like a lot of work for a limited
> audience.
>
> But there really is some weird stuff in v7's c compiler.
> From: random832(a)fastmail.us
> Did casting not exist back then?
No, not in the early V6 compiler. It was only added as of the Typesetter
compiler. (I think if you look in those 'Recent C Changes' things I sent in
recently {Oct 17}, you'll find mention of it.)
Noel
Have you looked at http://real-votrax.no-ip.org/ ?
They have a Votrax hooked up, and yes, it'll use the phonemes that speak
generates. It just likes things to be upper case though.
So..
hello
!p
,h,e0,l,o0,o1,-1
works more like
H E0 L O0 O1 PA1
I wonder if anyone's generated wav's for each of the phonemes, then you
could hook up a line printer or something that'll read it as a pipe and just
play the wav's as needed..
It is rough 1970's speech synthesis, but I had one of those Intellivoice
things as a kid, so I kinda like it.
-----Original Message-----
From: Mark Longridge
To: tuhs
Sent: 10/13/14 8:57 AM
Subject: [TUHS] Getting Unix v5 to talk
Thanks to the efforts of Jonathan Gevaryahu I have managed
to get the Unix v5 speak utility to compile and execute.
All this was done using the simh emulator emulating a
PDP-11/70.
Jonathan managed to extract enough of speak.c to reconstruct it
to the point it could be compiled with v5 cc. I believe it
was necessary to look at speak.o to accomplish this.
Jonathan also states that there are more interesting things
that could possibly be recovered from v6doc.tar.gz
One can look at speak.c source here:
http://www.maxhost.org/other/speak.c
Now that we have speak compiled, we can go a bit further:
cat speak.v - | speak -v null
generates speak.m from ascii file speak.v
speak speak.m
computer
!p (prints out phonetics for working word)
which outputs:
,k,a0,m,p,E2,U1,t,er,-1
ctrl-d exits
Looking at speak.c we can see that it opens /dev/vs.
Fortunately we have the file /usr/sys/dmr/vs.c to look at
so this could be compiled into the kernel although I haven't
done this as yet.
speak.c looks like Unix v5 era code. My understanding is that
Unix v5 appeared in June 1974 and the comments say 'Copyright 1974'
so it seems plausible.
I'm intrigued by the possibility of getting Unix v5 to talk.
Mark
> From: "Engel, Michael" <M.Engel(a)leedsbeckett.ac.uk>
> The machine has a Multibus FD controller with its own 8085 CPU and a
> uPD765, connected to a Toshiba 5.25" DD floppy drive (720 kB, 80
> tracks, 9 sectors of 512 bytes), the model identifier is DDF5-30C-34I
> ... I couldn't find any information on that drive online, so I hesitate
> to simply connect a more modern drive due to possible pinout differences.
> ...
> I also found out a bit more on the SMD disk controller. It seems to be
> an OEM variant of the Micro Computer Technology MCT-4300 controller.
> The only place I could find this mentioned was in a catalog of Multibus
> boards on archive.org.
> ...
> So, if you happen to have any information on the Codata floppy
> controller, the Toshiba floppy or the MCT-4300 SMD disk controller, I
> would be happy to hear from you...
I don't, but can I suggest the Classic Computers mailing list:
http://www.classiccmp.org/mailman/listinfo/cctalk
They seem to have an extremely deep well of knowledge, and perhaps someone
there can help? (I'd rate the odds very high on the floppy drive.)
Noel
Hi,
it's time for an update on our progress with the Codata machine.
The serial interface problem was not caused by a defective transceiver
chip (which I found out after buying a couple…), but by an extreme
amount of noise on the (quite long and old) serial cable we used to
connect the machine to the PC acting as a terminal. Using a USB
to serial adapter and a short 9-to-25-pin adapter cable solved this
problem. Well, try the obvious things first (using a scope helped).
The second CPU board also works, so we could build a complete
second machine with our spare boards if we have another multibus
backplane...
We could get the machine up and booting from the first 8" hard disk
last Friday. Luckily, an old version of Kermit was installed and we
were able to transmit a large part of the root file system from single
user mode - especially the Unix kernels, driver sources, the build
directories for the kernel, include files and the build directory for
the standalone boot floppies. All with a speed of 500 bytes/s (9600
bps serial line minus kermit overhead). cksum was used to confirm
that the files were transferred correctly (this was the only checksumming
tool that was available on the Codata, I didn't want to mount the fs
read-write and compile software before completing the backup).
I had to shut the machine down on Friday evening (security policy
that kicks you out of the building here), since I didn't want to leave
it running unattended over the weekend. Unfortunately, the disk
seems to have developed a bad sector in the autoconfiguration
region (the system seems to be quite modern in this respect).
The kernel can be booted successfully, but it refuses to mount the
root fs, complaining about a timeout. There seems to be a complete
root file system on the second disk (the firmware is able to read files
from the disk, but it doesn't offer a feature to list directories…), but the
kernel on the second disk also is hardwired to mount its root fs from the
first disk. Trying to connect disk 2 as disk 1 resulted in a non-booting
system...
The good news is that both root file systems seem to be reasonably
intact so far, I can read text files from the boot monitor. So our next
step to backup the rest of the system is to build an emergency boot
floppy. At the moment, however, the Codata refuses to talk to its
floppy drive. The machine has a Multibus FD controller with its own
8085 CPU and a uPD765, connected to a Toshiba 5.25" DD floppy
drive (720 kB, 80 tracks, 9 sectors of 512 bytes), the model identifier
is DDF5-30C-34I (printed on the motor assembly). I couldn't find
any information on that drive online, so I hesitate to simply
connect a more modern drive due to possible pinout differences.
I also found out a bit more on the SMD disk controller. It seems to
be an OEM variant of the Micro Computer Technology MCT-4300
controller. The only place I could find this mentioned was in a
catalog of Multibus boards on archive.org. It has its own driver
(cd.c), there is a separate one for the Interphase 2180 and an
additional one for the Codata MFM controller.
So, if you happen to have any information on the Codata floppy
controller, the Toshiba floppy or the MCT-4300 SMD disk controller,
I would be happy to hear from you...
-- Michael
> From: Greg 'groggy' Lehey <grog(a)lemis.com>
> This is really an identifier issue
Probably actually a function of the relocatable object format / linker on the
machines in question, which in most (all?) cases predated C itself.
> it's documented in K&R 1st edition, page 179:
Oooh, good piece of detective work!
Noel
Hi folks,
I've been looking at Unix v5 cc limitations.
It seems like early cc could only use variable and function names up
to 8 characters.
This limitation occurs in v5, v6 and v7.
But when using the nm utility to print out the name list I see
function test1234() listed as:
000044T _test123
That seems to suggest that only the first 7 characters are
significant, but when looking at other sources they stated that one
can use up to 8 characters.
I hacked up a short program to test this:
main()
{
    test1234();
    test1235();
}

test1234()
{
    printf("\nWorking");
}

test1235()
{
    printf("\nAlso working");
}
This generated:
Multiply defined: test5.o;_test123
So it would seem that function names can only be 7 characters in
length. I am not sure if limitations of early cc were documented
anywhere. When I backported unirubik to v5 it compiled the longer
functions without any problem.
Did anyone document these sorts of limitations of early cc? Does
anyone remember when cc started to use function names longer than 7
characters?
Mark
> From: Mark Longridge <cubexyz(a)gmail.com>
> It seems like early cc could only use variable and function names up to
> 8 characters.
> This limitation occurs in v5, v6 and v7.
> ...
> That seems to suggest that only the first 7 characters are significant,
> but when looking at other sources they stated that one can use up to 8
> characters.
The a.out symbol tables use 8-character fields to hold symbol names. However,
C automagically and unavoidably prepends an _ to all externals (I forget
about automatics, registers, etc - too tired to check right now), making the
limit for C names 7 characters.
> I am not sure if limitations of early cc were documented anywhere.
I remember reading the above.
Other limits... well, you need to remember that C was still changing in that
period, so limits were a moving target.
> When I backported unirubik to v5 it compiled the longer functions
> without any problem.
ISTR that C truncated external names longer than 7 characters. Probably the
ones in that program were all unique within 7, so you won.
> Did anyone document these sorts of limitations of early cc?
I seem to recall at least one document from that period (I think pertaining
to the so-called 'Typesetter C') about 'changes to C'.
Also, I have started a note with a list of 'issues with C when you're
backporting V7 and later code to V6', I'll see if I can dig them out tomorrow.
Noel
Afternoon,
# /etc/mkfs /dev/rrp1g 145673
isize = 65488
m/n = 3 500
write error: 2
# file rp0g
rp0g: block special (0/6)
# file rp1g
rp1g: block special (0/14)
# file rp0a
rp0a: block special (0/0)
# file rp1a
rp1a: block special (0/8)
# file rrp0a
rrp0a: character special (4/0)
# file rrp1a
rrp1a: character special (4/8)
# file rrp0g
rrp0g: character special (4/6)
# file rrp1g
rrp1g: character special (4/14)
DESCRIPTION
Files with minor device numbers 0 through 7 refer to various
portions of drive 0; minor devices 8 through 15 refer to
drive 1, etc.
The origin and size of the pseudo-disks on each drive are as
follows:
What am I forgetting? I have an image attached; I have modified hp.c to
define NHP as 2.
Is it a conflict between rp.c and hp.c? (I patched hp.c to have NHP 2 after
patching NURP in rp.c to be 2.)
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
> From: jnc(a)mercury.lcs.mit.edu (Noel Chiappa)
>> Did anyone document these sorts of limitations of early cc?
> I seem to recall at least one document from that period (I think
> pertaining to the so-called 'Typesetter C') about 'changes to C'.
> ...
> I'll see if I can dig them out tomorrow.
OK, there are three documents which sort of fall into this class. First,
there is something titled "New C Compiler Features", no date, available here:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=Interdata732/usr/doc/cdoc/news…
It appears to describe an early version of the so-called
'Typesetter C', mentioned in other documents, so this would be circa 1976 or
so.
There is a second document, untitled, no date, which I have not been able to
locate online at all. I scanned my hard-copy, available here:
http://ana-3.lcs.mit.edu/~jnc/history/unix/CImprovements1.jpg
..
http://ana-3.lcs.mit.edu/~jnc/history/unix/CImprovements5.jpg
From the content, it seems to be from shortly after the previous one, so say,
circa 1977.
Sorry about the poor readability (it looked fine on the monitor of the
machine my scanner is attached to); fudging with contrast would probably make
it more readable. When I get the MIT V6 Unix tapes read (they have been sent
off to a specialist in reading old tapes, results soon, I hope) I might be
able to get more info (e.g. date/filename), and machine-readable source.
Finally, there is "Recent Changes to C", from November 15, 1978, available
here:
http://cm.bell-labs.com/cm/cs/who/dmr/cchanges.pdf
which documents a few final bits.
There is of course also Dennis M. Ritchie, "The Development of the C
Language", available here:
http://cm.bell-labs.com/who/dmr/chist.html
which is a good, interesting history of C.
> Also, I have started a note with a list of 'issues with C when you're
> backporting V7 and later code to V6'
I found several documents which are bits and pieces of this.
http://ana-3.lcs.mit.edu/~jnc/history/unix/C_Backport.txt
http://ana-3.lcs.mit.edu/~jnc/history/unix/V6_C.txt
Too busy to really clean them up at the moment.
Noel
Back in the 80s, in my university days, I was using ISPS (Instruction Set Processor Simulator, if I remember correctly), a software tool to simulate CPUs. It ran on a VAX with BSD 4.2. I have been unable to find any reference to it on the Internet. Does anyone on this list know anything about this software?
Thanks
Luca
> From: Mark Longridge
> Fortunately we have the file /usr/sys/dmr/vs.c to look at so this could
> be compiled into the kernel although I haven't done this as yet.
The vs.c seems to be a Votrax speech synthesizer hooked up to a DC11
interface. Do any of the simulators support the DC11? If not, adding the
driver won't do you much good.
Noel
PS: I seem to recall the DSSR group on the 4th floor at LCS actually had one
of these, back in the day. The sound quality was pretty marginal, as I recall!
Thanks to the efforts of Jonathan Gevaryahu I have managed
to get the Unix v5 speak utility to compile and execute.
All this was done using the simh emulator emulating a
PDP-11/70.
Jonathan managed to extract enough of speak.c to reconstruct it
to the point where it could be compiled with the v5 cc. I believe it
was necessary to look at speak.o to accomplish this.
Jonathan also states that there are more interesting things
that could possibly be recovered from v6doc.tar.gz.
One can look at speak.c source here:
http://www.maxhost.org/other/speak.c
Now that we have speak compiled, we can go a bit further:
cat speak.v - | speak -v null
generates speak.m from ascii file speak.v
speak speak.m
computer
!p (prints out phonetics for the working word)
which outputs:
,k,a0,m,p,E2,U1,t,er,-1
ctrl-d exits
Looking at speak.c we can see that it opens /dev/vs.
Fortunately we have the file /usr/sys/dmr/vs.c to look at
so this could be compiled into the kernel although I haven't
done this as yet.
speak.c looks like Unix v5 era code. My understanding is that
Unix v5 appeared in June 1974 and the comments say 'Copyright 1974'
so it seems plausible.
I'm intrigued by the possibility of getting Unix v5 to talk.
Mark