Hi,
I have a project to revive the C compiler from V7/V10.
I wanted to check if anyone here knows about the memory management in
the compiler (c0 only for now). I am trying to migrate the memory
management to malloc/free but I am struggling to understand exactly
how memory is being managed.
Thanks and Regards
Dibyendu
I am fairly sure the Interdata port from Wollongong used the V6 C compiler, and this lived on in the "official" V7 port from Perkin-Elmer; it still used the V6 compiler.
I remember the pain of the global namespace for structure members.
-Steve
> >> Even high-school employees could make lasting contributions. I am
> >> indebted to Steve for a technique he conceived during his first summer
> >> assignment: using macro definitions as if they were units of associative
> >> memory. This view of macros stimulated previously undreamed-of uses.
>
> > Can you give some examples of what this looked like?
>
See attached for an answer to Arnold's question
Doug
> Did the non-Unix people also pull pranks like the watertower?
One of my favorites was by John Kelly, a Texas original,
who refused the department-head perk of a rug so he
could stamp his cigarettes out on the vinyl floor.
John came from Visual and Acoustics Research, where
digital signal processing pressed the frontiers of
computing. Among his publications was the completely
synthetic recording of "Daisy, Daisy" released
circa 1963.
Kelly electrified the computer center with a
blockbuster prank a year or two before that. As
was typical of many machine rooms, a loudspeaker
hooked to the low-order bit of the accumulator
played gentle white noise in the background. The
noise would turn into a shriek when the computer
got into a tight loop, calling the operators to
put the program out of its misery.
Out of the blue one day, the loudspeaker called
for help more articulately: "Help, I'm caught in
a loop. Help, I'm caught in a loop. ..." it
intoned in a slow Texas drawl. News of the talking
computer spread instantly and folks crowded into
the machine room to marvel before the operators
freed the poor prisoner.
Doug
Dan Cross:
I'll confess I haven't looked _that_ closely, but I rather imagine that the
V10 compiler is a descendant of PCC rather than Dennis's V6/V7 PDP-11
compiler.
====
Correct. From 8/e to the end of official Research UNIX,
cc was pcc2 with a few research-specific hacks.
As Dan says, lcc was there too, but not used a lot.
I'm not sure which version of lcc it was; probably it
was already out-of-date.
In my private half-baked 10/e descendant system, which
runs only on MicroVAXes in my basement, cc is an lcc
descendant instead. I took the lcc on which the book
was based and re-ported it to the VAX to get an ISO-
compliant C compiler, and made small changes to libc
and /usr/include to afford ISO-C compliance there too.
The hardest but most-interesting part was optimizing.
lcc does a lot of optimization work by itself, and
initially I'd hoped to dispense with a separate c2
pass entirely, but that turns out not to be feasible
on machines like the VAX or the PDP-11: internally
lcc separates something like
c = *p++;
into two operations
c = *p;
p++;
and makes two distinct calls to the code generator.
To sew them back together from
cvtbl (p),c
incl p
to
cvtbl (p)+,c
requires external help; lcc just can't see that
what it thinks of as two distinct expressions
can be combined.
It's more than 15 years since I last looked at any
of this stuff, but I vaguely remember that lcc has
its own interesting (but ISO/POSIX-compatible)
memory-allocation setup. It allows several nested
contexts' worth of allocation, freeing an inner
context when there's no longer any need for it.
For example, once the compiler has finished with
a function and has no further need for its local
symbols, it frees the associated memory.
See the lcc book for details. Read the book anyway;
it's the one case I know of in which the authors
followed strict Literate Programming rules and made
a big success of it. Not only is the compiler well-
documented, but the result is a wonderful tour
through the construction and design decisions of a
large program that does real work.
Norman Wilson
Toronto ON
> From: Dibyendu Majumdar
> the C compiler from V7/V10. I wanted to check if anyone here knows
> about the memory management in the compiler (c0 only for now). I am
> trying to migrate the memory management to malloc/free but I am
> struggling to understand exactly how memory is being managed.
Well, I don't know much about the V7 compiler; the V6 one, which I have looked
at, doesn't (except for the optimizer, C2) use allocated memory at all.
The V7 compiler seems to use sbrk() (the system call to manage the location of
the end of a process' data space), and manage the additional data space
'manually'; it does not seem to use a true generic heap. See gblock() in
c01.c:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V7/usr/src/cmd/c/c01.c
which seems to use two static variables (curbase and coremax) to manage
that additional memory:
	p = curbase;
	if ((curbase =+ n) >= coremax) {
		if (sbrk(1024) == -1) {
			error("Out of space");
			exit(1);
		}
		coremax =+ 1024;
	}
	return(p);
My guess is that there's no 'free' at all; each C source file is processed
by a new invocation of C0, and the old 'heap' is thrown away when the
process exits when it gets to the EOF.
Noel
> From: Larry
> It's possible the concept existed in some other OS but I'm not aware of
> it.
It's pretty old. Both TENEX and ITS had the ability to map file pages into a
process' address space. Not sure if anything else of that vintage had it (not
sure what other systems back then _had_ paging, other than Atlas; those two
both had custom KA10 homebrew hardware mods to support paging). Of course,
there's always Multics... (sorry, Ken :-).
Noel
fyi warner,
in your talk, you referred to Alex Fraser at bell labs.
he was my first director, and always went by “sandy”.
i asked my wife (who was a secretary at the time) and she
said he was occasionally referred to as Alex. certainly, every
time i saw anything written by him or references to a talk by him
always used “sandy”.
andrew
>
> Message: 6
> Date: Fri, 5 Jun 2020 16:51:27 -0600
> From: Warner Losh <imp(a)bsdimp.com>
> To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
> Subject: [TUHS] My BSDcan talk
> Message-ID:
> <CANCZdfpq8tiDYe2iVeFh1h0VMDK+4B=kXuGSJ3iNmtjbzHQT6Q(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> OK. Must be off my game... I forgot to tell people about my BSDcan talk
> earlier today. It was streamed live, and will be online in a week or
> three...
>
> It's another similar to the last two. I've uploaded a version to youtube
> until the conference has theirs ready. It's a private link, but should work
> for anybody that has it. Now that I've given my talk it's cool to share
> more widely... https://www.youtube.com/watch?v=NRq8xEvFS_g
>
> The link at the end is wrong. https://github.com/bsdimp/bsdcan2020-demos is
> the proper link.
>
> Please let me know what you think.
>
> Warner
>
Just saw your BSDcan talk. Great stuff, so much progress in the last five years. Just wanna say thanks. When I started looking into ancient systems, it was hard finding anything coherent on the historical side beyond manuals and this list (thankful to Warren & co for the list). Your talk is packed with interesting information and really pulls together the recent pieces.
Great job, Warner.
I needed to look up something in the Bell System Technical Journal
(Wikipedia didn't have it) and discovered that the old Alcatel-Lucent
site that used to host a free archive of BSTJ no longer seems extant.
(No surprise, the Web is nothing if not ephemeral.)
After a bit of Googling, I did find that the archives are now residing
at <https://archive.org/details/bstj-archives> and found what I was
looking for there.
Hope others find this link useful. At least until it too "sublimates".
Kirk McKusick
Looking at the 6th edition man page tty(2), I see
Carriage-return delay type 1 lasts about .08 seconds and is
suitable for the Terminet 300. Delay type 2 lasts about .16
seconds and is suitable for the VT05 and the TI 700. Delay
type 3 is unimplemented and is 0.
New-line delay type 1 is dependent on the current column and
is tuned for Teletype model 37's. Type 2 is useful for the
VT05 and is about .10 seconds. Type 3 is unimplemented and
is 0.
Why would the VT05 (a VDU) need a delay for carriage return?
I can just about imagine that it might need one for linefeed
if it shifted the characters in memory.
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
On 2020-07-29 15:17, Paul Koning wrote:
>
>
>> On Jul 29, 2020, at 5:50 AM, Johnny Billquist <bqt(a)softjar.se> wrote:
>>
>> Just a small comment. Whoever it was that thought DECtape was a tape was making a serious mistake. DECtapes are very different from magtapes.
>>
>> Johnny
>
> Depends on what you're focusing on. Most tapes are not random-write. DECtape and EL-X1 tape are exceptional in that respect. But tapes, DECtape include, have access time proportional to delta block number (and that time is large) unlike disks.
>
> From the point of view of I/O semantics, the first point is significant and the second one not so much.
True. But seek times are in the end only relevant as an aspect of the
speed of the thing, nothing else.
However, seek times on DECtape aren't really comparable to magtape
either, because DECtape deals with absolute block numbers. So you can
always, no matter where you are, find out where you are and how far you
will need to move to get to the correct block.
With magtapes, this is pretty much impossible. You'll have to rewind,
and then start seeking.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
OK, I was able to locate 2bsd.tar.gz and spencer_2bsd.tar.gz in the
archive. Neither is an installation tape. It appears that they are just
tarballs of their respective systems (there are very minor differences
between the two).
In the TAPE file in the tarball, it talks about reading the tar program
off of the tape using:
dd if=/dev/mt0 bs=1b skip=1 of=tar
Well, tar is definitely not located at that address, which implies that
the tarball isn't a distro tape. This note in the archive used to read:
...
The remaining gzipped tar files are other 2BSD distributions supplied by
Keith Bostic, except for spencer_2bsd.tar.gz which came from Henry Spencer.
They do not contain installation tape images. The 2.9BSD-Patch directory
contains patches to 2.9BSD dated August 85, and again supplied by Keith Bostic.
...
now it reads:
...
2.11BSD 2.11BSD-pl195.tar is a copy of 2.11BSD at patch level 195, supplied
by Tom Ivar Helbekkmo. spencer_2bsd.tar.gz is a version of 2BSD which came
from Henry Spencer.
...
I recall having to do something with cont.a files, which are not present
on these images. So, my question is: does anyone know of or have an
actual 2bsd tape/tape image?
Thanks,
Will
Here's where I found the tarballs:
https://www.tuhs.org/Archive/Distributions/UCB/
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
All,
About a week ago, Bill English passed away. He was a Xerox guy, who
along with Douglas Engelbart of "Mother of all demos" fame, created our
beloved mouse:
https://www.bbc.com/news/technology-53638033
I remember, back in the mid-1980's being part of a focus group
evaluating Microsoft's mouse. Wow, time flies.
-Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> From: Lars Brinkhoff
> I haven't investigated it thoroughly, but I do see a file .DOVR.;.SPOOL
> 8 written in C by Eliot Moss.
> ...
> When sending to the DOVER, the spooler waits until Spruce is
> free before sending another file.
Ah, so there was a spooler on the ITS machine as well; I didn't know/remember
that.
I checked on CSR, and it did use TFTP to send it to the Alto spooler:
HOST MIT-SPOOLER, LCS 2/200,SERVER,TFTPSP,ALTO,[SPOOLER]
I vaguely recall the Dover being named 'Spruce', but that name wasn't in the
host table... I have this vague memory that 'MIT-Spooler' was the Alto which
drove the Dover, but now that I think about it, it might have been another one
(which ran only TFTP->EFTP spooler software). IIRC the Dover was a pain to run;
it required a very high bit rate, and the software to massage it was very
tense; so it may have made sense to do the TFTP->EFTP (I'm pretty sure the
vanilla Dover spoke EFTP, but maybe I'm wrong, and it used the PUP stream
protocol) in another machine.
It'd be interesting to look at the Dover spooler on ITS, and see if/how one
got to the CHAOS network from C - and if so, how it identified the protocol
translating box.
Noel
> From: Will Senn
> $c
> 0177520: ~signal(016,01) from ~sysinit+034
> 0177542: ~sysinit() from ~main+010
> 0177560: _main() from start+0104
> If this means it got signal 16... or 1 from the sysinit call (called
> from main)
I'm not sure that interpretation is correct. I think that trace shows signal()
being called from sysinit().
On V6, signal() was a system call which one could use to set the handlers for
signals (or set them to be ignored, or back to the default action). In 2.11
it seems to be a shim layer which provides the same interface, but uses
the Berserkly signal system interface underneath:
https://www.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/include/signal.h
https://www.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/man/cat3/signal.0
So maybe the old binary for kermit is still trying to use the (perhaps
now-removed) signal system call?
Noel
> From: Lars Brinkhoff
> the Dover printer spooler was written using Snyder's C compiler
I'm not sure if that's correct. I don't remember with crystal clarity all the
details of how we got files to the Dover, but here's what I recall (take with
1/2 a grain of salt, my memory may have dropped some bits). To start with,
there were different paths from the CHAOS and TCP/IP worlds. IIRC, there was a
spooler on the Alto which ran the Dover, and the two worlds had separate paths
to get to it.
From the CHAOS world, there was a protocol translation which ran on whatever
machine had the AI Lab's 3Mbit Ethernet interface - probably MIT-AI's
CHAOS-11? If you look at the Macro-11 code from that, you should see it - IIRC
it translated (on the fly) from CHAOS to EFTP, the PUP protocol which the
spooler ran 'natively'.
From the IP world, IIRC, Dave Clark had adapted his Alto TCP/IP stack (written
in BCPL) to run in the spooler alongside the PUP software; it included a TFTP
server, and people ran TFTP from TCP/IP machines to talk to it. (IP access to
the 3Mbit Ethernet was via another UNIBUS Ethernet interface which was plugged
into an IP router which I had written. The initial revision was in Macro-11; a
massive kludge which used hairy macrology to produce N^2 discrete code paths,
one for every pair of interfaces on the machine. Later that was junked, and
replaced with the 'C Gateway' code.)
I can, if people are interested, look on the MIT-CSR machine dump I have
to see how it (a TCP/IP machine) printed on the Dover, to confirm that
it used TFTP.
I don't recall a role for any PDP-10 C code, though. I don't think there was a
spooler anywhere except on the Dover's Alto. Where did that bit about the
PDP-10 spooler in C come from, may I enquire? Was it a CMU thing, or something
like that?
Noel
> My unscientific survey of summer students was that they either came
> from scouts, or were people working on advanced degrees in college.
Not all high-school summer employees were scouts (or scout equivalents -
kids who had logins on BTL Unix machines). I think in particular of Steve
Johnson and Stu Feldman, who eventually became valued permanent employees.
The labs also hired undergrad summer employees. I was one.
Even high-school employees could make lasting contributions. I am
indebted to Steve for a technique he conceived during his first summer
assignment: using macro definitions as if they were units of associative
memory. This view of macros stimulated previously undreamed-of uses.
Doug
I'm running 211bsd pl 431 in SimH on FreeBSD. I've got networking
working on a tap interface both inbound and outbound. I still have a few
issues hanging around that are bugging me, but I'll eventually get to
them. One that is of concern at the moment is kermit. It is in the
system under /usr/new/kermit. When I call it, I get:
kermit
Bad system call - core dumped
I don't see core anywhere and if I did, I'd need to figure out what to
do with it anyway (maybe adb), but I'm wondering if anyone's used kermit
successfully who is on pl 431 or knows what's going on?
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
I've always been intrigued with regexes. When I was first exposed to
them, I was mystified and lost in the greediness of matches. Now, I use
them regularly, but still have trouble using them. I think it is because
I don't really understand how they work.
My question for y'all has to do with early unix. I have a copy of
Thompson, K. (1968). Regular expression search algorithm. Communications
of the ACM, 11(6), 419-422. It is interesting as an example of
Thompson's thinking about regexes. In this paper, he presents a
non-backtracking, efficient, algorithm for converting a regex into an
IBM 7094 (whatever that is) program that can be run against text input
that generates matches. It's cool. It got me to thinking maybe the way
to understand the unix regex lies in a careful investigation into how it
is implemented (original thought, right?). So, here I am again to ask
your indulgence as the latecomer wannabe unix apprentice. My thought is
that ed is where it begins and might be a good starting point, but I'm
not sure - what say y'all?
I also have a copy of the O'Reilly Mastering Regular Expressions book,
but that's not really the kind of thing I'm talking about. My question
is more basic than how to use regexes practically. I would like to
understand them at a parsing level/state change level (not sure that's
the correct way to say it, but I'm really new to this kind of lingo).
When I'm done with my stepping through the source, I want to be able to
reason that this is why that search matched that text and not this text
and why the search was greedy, or not greedy because of this logic here...
If my question above isn't focused or on topic enough, here's an
alternative set to ruminate on and hopefully discuss:
1. What's the provenance of regex in unix (when did it appear, in what
form, etc)?
2. What are the 'best' implementations throughout unix (keep it pre 1980s)?
3. What are some of the milestones along the way (major changes, forks,
disagreements)?
4. Where, in the source, or in a paper, would you point someone to
wanting to better understand the mechanics of regex?
Thanks!
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> 1. What's the provenance of regex in unix (when did it appear, in what form, etc)?
> 2. What are the 'best' implementations throughout unix (keep it pre1980s)?
> 3. What are some of the milestones along the way (major changes, forks, disagreements)?
The editor ed was in Unix from day 1. For the necessarily tiny
implementation, Ken discarded various features
from the ancestral qed. Among the casualties was alternation
in regular expressions. It has never fully returned.
Ken's original paper described a method for simulating all paths
of a nondeterministic finite automaton in parallel, although he
didn't describe it in these exact terms. This meant he had to
keep track of up to n possible states, where n is the number of
terminal symbols in the regular expression.
"Computing Reviews" published a scathing critique of the paper:
everyone knows a deterministic automaton can recognize regular
expressions with one state transition per input character; what
a waste of time to have to keep track of multiple states! What the
review missed was that the size of the DFA can be exponential in n.
For one-shot use, as in an editor, it can take far longer to construct
the DFA than to run it.
This lesson came home with a vengeance when Al Aho wrote egrep,
which implemented full regular expressions as DFA's. I happened
to be writing calendar(1) at the same time, and used egrep to
search calendar files for dates in rather free formats for today
and all days through the next working day. Here's an example
(egrep interprets newline as "|"):
(^|[ (,;])(([Aa]ug[^ ]* *|(08|8)/)0*1)([^0123456789]|$)
(^|[ (,;])((\* *)0*1)([^0123456789]|$)
(^|[ (,;])(([Aa]ug[^ ]* *|(08|8)/)0*2)([^0123456789]|$)
(^|[ (,;])((\* *)0*2)([^0123456789]|$)
(^|[ (,;])(([Aa]ug[^ ]* *|(08|8)/)0*3)([^0123456789]|$)
(^|[ (,;])((\* *)0*3)([^0123456789]|$)
Much to Al's chagrin, this regular expression took the better
part of a minute to compile into a DFA, which would then run in
microseconds. The trouble was that the DFA was enormously
bigger than the input--only a tiny fraction of the machine's
states would be visited; the rest were useless. That led
him to the brilliant idea of constructing the machine on
the fly, creating only the states that were pertinent to
the input at hand. That innovation made the DFA again
competitive with an NFA.
Doug
This topic is still primarily UNIX but is getting near the edge of COFF, so
I'll CC there if people want to follow up.
As I mentioned to Will, during the time Research was doing the work/put out
their 'editions', the 'releases' were a bit more ephemeral - really a set
of bits (binary and hopefully matching source, but maybe not always)
that became a point in time. With the 4th (and I think 5th) Editions it was the
state of a disk pack when the bits were copied, but by the 6th Edition, as Noel
points out, there was a 'master tape' that the first site at an
institution received upon execution of a signed license, so the people at
each institution (MIT, Purdue, CMU, Harvard) passed those bits around
inside.
But what is more, as Noel pointed out, we all passed source code and
binaries between each other, so the DNA was fairly mixed up [sorry Larry - it
really was 'Open Source' between the licensees]. Sadly, it means some
things that actually originated at one location and on one system are
sometimes credited to some other place because the >>wide<< release was
in USG or BSD [think Jim Kulp's job control, which ended up in the kernel
and csh(1) as part of 4BSD; our recent discussions on the list about
more/pg/less; the different networking changes from all of MIT/UofI/Rand;
Goble's FS fixes to make the thing more crash resilient; the early Harvard
ar changes - a.k.a. newar(1), which became ar(1); CMU fsck; etc.].
Eventually, the AT&T Unix Support Group (USG) was stood up in Summit, as I
understand it, originally for the Operating Companies as they wanted to use
UNIX (but not for the licenses, originally). Steve Johnson moved from
Research over there and can tell you many more of the specifics.
Eventually (i.e., post-Judge Green), distribution to the world moved from
MH's Research and the Patent Licensing teams to USG and AT&T North Carolina
business folks.
That said, when the distribution of UNIX moved to USG in Summit, things started
to get a bit more formal. But there were still differences inside, as we have
tried to unravel: PWB/TS and eventually System x. FWIW, BSD went
through the same thing. The first BSD's are really the binary state of the
world on the Cory 11/70, later 'Ernie.' By the time CSRG gets stood
up because their official job (like USG) is to support Unix for DARPA, Sam
and company are acting a bit more like traditional SW firms with alpha/beta
releases and a more formal build process. Note that 2.X never really
went through that, so we are all witnessing the wonderful efforts to try to
rebuild early 2.X BSD, and see that the ephemeral nature of the bits has
become more obvious.
As a side story ... the fact is that even for professional SW houses, it
was not as pure as it should be. To be honest, knowing the players and
processes involved, I highly doubt DEC could rebuild early editions of VMS,
particularly since the 'source control' system was a physical flag in
Cutler's office.
The fact is that the problem of which bits were used to make what other
bits was widespread enough throughout the industry that in the mid-late 80s
when Masscomp won the bid to build the system that NASA used to control the
space shuttle post-Challenger, a clause of the contract was that we had to
put an archive of the bits running on the build machine ('Yeti'), a copy of
the prints and even microcode/PAL versions so that Ford Aerospace (the
prime contractor) could rebuild the exact system we used to build the
binaries for them if we went bankrupt. I actually had a duplicate of that
Yeti as my home system ('Xorn') in my basement when I made some money for a
couple of years as a contract/on-call person for them every time the
shuttle flew.
Anyway - the point is that documentation and actual bits being out of sync
is nothing new. Companies work hard to try to keep it together, but
different projects work at different speeds. In fact, the 'train release'
model is usually what people fall into. You schedule a release of
some piece of SW, and anything that goes with it has to be on the train or
it must wait for the next one. So developers and marketing people in firms
argue over what gets to be the 'engine' [hint: often it's HW releases, which are
a terrible idea, but that's a topic for COFF].
> From: Warner Losh
> 8 I think was the limit.
IIRC, you could use longer names than that (in C), but external references
only used the first 7 (in C - C symbols had a leading '_' tacked on; I used to
know why), 8 (in assembler).
> Could that cause this error?
Seems unlikely - see below.
> The error comes from lookloc. This is called for external type
> relocations. It searches the local symbol table for something that
> matches the relocation entry. This error happens when it can't find
> it...
Someone who actually looked at the source:
https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/ld.c
instead of just guessing. Give that man a star!
I spent a while looking at the code, trying to figure out i) how it works, and
ii) what's going wrong with that message, but I don't have a definitive
answer. The code is not super well commented, so one has to actually
understand what it's doing! :-)
It seems to my initial perusal that it maintains two symbol tables, one for
globals (which accumulates as each file is processed), and one for locals
(which is discarded/reset for each file). As Warner mentioned, the message
appears when a local symbol referenced in the relocation information in the
current file can't be found (in the local symbol table).
It's not, I think, simply due to too many local symbols in an input file -
there seems to be a check for that as it's reading the input file symbol
table:
	if (lp >= &local[NSYMPR])
		error(1, "Local symbol overflow");
	*lp++ = symno;
	*lp++ = sp;
although of course there could be a bug which breaks this check. It seems to
me that this is an 'impossible' error, one which can only happen due to i) a
bug in the loader (a fencepost error, or something), or ii) an error in the
input a.out file.
I don't want to spend more time on it, since I'm not sure if you've managed to
bypass the problem. If not, let me know, and we'll track it down. (This may
involve you adding some printf's so we have more info about the details.)
Noel
I finally munged lbforth.c (https://gist.github.com/lbruder/10007431) into
compiling cleanly on mostly-stock v7 with the system compiler (lbforth
itself does fine on 211BSD, but it needs a little help to build in a real
K&R environment).
Which would be nice, except that when it gets to the linker....
$ cc -o 4th forth.c
ld:forth.o: Local symbol botch
WTF?
How do I begin to debug this?
Adam