There was nothing unique about the size or the object code of Dennis's C
compiler. In the 1960s, Digitek had a thriving business of making Fortran
compilers for all manner of machines. To optimize space usage, the
compilers' internal memory model comprised variable-size movable tables,
called "rolls". To exploit this non-native architecture, the compilers
themselves were interpreted, although they generated native code. Bob
McClure tells me he used one on an SDS910 that had 8K 16-bit words.
Dennis was one-up on Digitek in having a self-maintaining compiler. Thus,
when he implemented an optimization, the source would grow, but the
compiler binary might even shrink thanks to self-application.
Doug
nl(1) uses the notable character sequences “\:\:\:”, “\:\:”, and “\:” to delimit header, body, and trailer sections within its input.
I wondered if anyone was able to shed light on the reason those were adopted as the defaults?
I would have expected perhaps something compatible with *roff (like .\" something).
FreeBSD claims nl first appeared in System III (although it previously claimed SVR2), but I haven’t dug into the implementation any further.
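For reference, the defaults behave like this (a minimal sketch; the sample file name is made up). Each delimiter string must appear alone on a line, and by default only body lines are numbered:

```shell
# Build a file with header, body, and trailer sections, then number it.
# printf %s does not interpret backslashes, so the delimiters pass through literally.
printf '%s\n' '\:\:\:' 'A header' '\:\:' 'first body line' '\:' 'A footer' > sections.txt
nl sections.txt
# The body line comes out numbered; the header and footer lines do not.
rm -f sections.txt
```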
Thanks in advance,
d
While the idea of small tools that do one job well is the core tenet of
what I think of as the UNIX philosophy, this goes a bit beyond UNIX, so I
have moved this discussion to COFF and BCCing TUHS for now.
The key is that not all "bloat" is the same (really)—or maybe one person's
bloat is another person's preference. That said, NIH leads to pure bloat
with little to recommend it, while multiple offerings are a choice. Maybe the difference between the two is just one person's view over another's.
On Fri, May 10, 2024 at 6:08 AM Rob Pike <robpike(a)gmail.com> wrote:
> Didn't recognize the command, looked it up. Sigh.
>
Like Rob -- this was a new one for me, too.
I looked, and it is on the SYS3 tape; see:
https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/man/man1/nl.1
> pr -tn <file>
>
> seems sufficient for me, but then that raises the question of your
> question.
>
Agreed, that has been burned into the ROMs in my fingers since the
mid-1970s 😀
BTW: SYS3 has pr(1) with both switches too (more in a minute)
> I've been developing a theory about how the existence of something leads
> to things being added to it that you didn't need at all and only thought of
> when the original thing was created.
>
That is a good point, and I generally agree with you.
> Bloat by example, if you will. I suspect it will not be a popular theory,
> however accurately it may describe the technological world.
>
Of course, sometimes the new features >>are<< easier (more natural *for
some people*). And herein lies the core problem. The bloat is often
repetitive, and I suggest that it is often implemented in the wrong place -
and usually for the wrong reasons.
Bloat comes about because somebody thinks they need some feature and
probably doesn't understand that it is already there or how they can use it. But if they do know about it, their tool must be set up to exploit it - so they do not need to reinvent it. GUI-based tools are notorious for this
failure. Everyone seems to have a built-in (unique) editor, or a private
way to set up configuration options et al. But ... that walled garden is
comfortable for many users and >>can be<< useful sometimes.
Long ago, UNIX programmers learned that looking for $EDITOR in the environment was way better than creating one. Configuration was ASCII text, stored in /etc for system-wide settings and in dot files in the home directory for users. But it also means the >>output<< of each tool needs to be usable by each other (*i.e.*, docx or xlsx files are a no-no).
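The lookup itself is one line of shell; a minimal sketch (the fallback choice of vi and the function name are mine):

```shell
#!/bin/sh
# Honor the $EDITOR convention: use the user's chosen editor,
# falling back to vi when the variable is unset or empty.
choose_editor() {
    printf '%s\n' "${EDITOR:-vi}"
}

EDITOR=emacs
choose_editor    # prints "emacs"
unset EDITOR
choose_editor    # prints "vi"
```

Tools like crontab(1) still follow exactly this convention, spawning the user's editor on a temporary file rather than embedding one.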
For example, for many things on my Mac, I do use the GUI-based tools --
there is no doubt they are better integrated with the core Mac system >>for
some tasks.<< But only if I obey a set of rules Apple decrees. For
instance, this email reader is easier much of the time than MH (or the HM
front end, for that matter), which I used for probably 25-30 years. But on
my Mac, I always have 4 or 5 iterm2(1) open running zsh(1) these days. And,
much of my typing (and everything I do as a programmer) is done in the shell
(including a simple text editor, not an 'IDE'). People who love IDEs swear
by them -- I'm just not impressed - there is nothing they do for me that
makes it easier, and I would just have to learn yet another scheme.
That said, sadly, Apple is forcing me to learn yet another debugger since
none of the traditional UNIX-based ones still work on the M1-based systems.
But at least LLDB is in the same key as sdb/dbx/gdb *et al*., so it is a
PITA but not a huge thing as, in the end, LLDB is still based on the UNIX idea of single, well-designed, task-specific tools that each do one job and can work with each other.
FWIW: I was recently a tad gob-smacked by the core idea of UNIX and its tools, which I have taken for granted since the 1970s.
It turns out that I've been helping with the PiDP-10 users (all of the
PiDPs are cool, BTW). Before I saw UNIX, I was paid to program a PDP-10. In
fact, my first UNIX job was helping move programs from the 10 to the UNIX.
Thus ... I had been thinking that doing a little PDP-10 hacking shouldn't be too hard, dusting off some of that old knowledge. While some of it has, of course, come back, daily I am discovering that small things that are so natural with a few simple tools can be hard on those systems.
I am realizing (rediscovering) that the "build it into my tool" approach was the norm in those days. So instead of a pr(1) command, there was a tool that
created output to the lineprinter. You give it a file, and it is its job to
figure out what to do with it, so it has its set of features (switches) -
so "bloat" is that each tool (like many current GUI tools) has private ways
of doing things. If the maker of tool X decided to support some idea, there was no guarantee they would do it like tool Y. The problem, of course, was that tools X and Y
had to 'know about' each type of file (in IBM terms, use its "access
method"). Yes, the engineers at DEC, in their wisdom, tried to
"standardize" those access methods/switches/features >>if you implemented
them<< -- but they are not all there.
This leads me back to the question Rob raises. Years ago, I got into an
argument with Dave Cutler RE: UNIX *vs.* VMS. Dave's #1 complaint about
UNIX in those days was that it was not "standardized." Every program was
different, and more to Dave's point, there was no attempt to make switches
or errors the same (getopt(3) had been introduced but was not being used by most applications). He hated that tar/tp used "keys" and tools like cpio
used switches. Dave hated that I/O was so simple - in his world, all user programs should, of course, use his RMS access method [1]. VMS, TOPS, *etc.*,
tried to maintain a system-wide error scheme, and users could look up errors in a system DB by error number, *etc*. Simply put, VMS is
very "top-down."
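Adopting the getopt convention was (and is) cheap; the shell's getopts builtin enforces the same style of option parsing that getopt(3) does in C. A minimal sketch (the option letters and the echo are invented for illustration):

```shell
#!/bin/sh
# Standard-style option parsing: -v is a flag, -n takes an argument.
verbose=0
count=1
while getopts 'n:v' opt; do
    case $opt in
        n) count=$OPTARG ;;
        v) verbose=1 ;;
        *) echo "usage: $0 [-v] [-n count] [file ...]" >&2; exit 2 ;;
    esac
done
shift $((OPTIND - 1))    # drop the parsed options, leaving the operands
echo "count=$count verbose=$verbose args=$*"
```

Run as `script -v -n 3 foo`, it prints "count=3 verbose=1 args=foo"; every tool that parses options this way gets clustering, option arguments, and `--` termination for free.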
My point with Dave was that by being "bottom-up," the best ideas in UNIX
were able to rise. And yes, it did mean some rough edges and repeated
implementations of the same idea. But UNIX offered a choice, and while Rob and I like and find pr -tn perfectly acceptable, thank you, clearly someone else desired the features that nl provides. The folks that put together System 3 offered both solutions and let the user choose.
This, of course, comes across as bloat, but maybe that is a type of bloat that is not so bad?
My own thinking is this - get things down to the basics and simplest primitives and then build back up. It's okay to offer choices, as long as the foundation is simple and clean. To me, bloat becomes an issue when you do the same thing over and over again, particularly because you can not utilize what is there already. The worst example is NIH - which happens way more than it should.
I think the kind of bloat that GUI tools and TOPS et al. created forces
recreation, not reuse. But offering choice at the expense of multiple
tools that do the same things strikes me as reasonable/probably a good
thing.
[1] BTW: One of my favorite DEC stories WRT VMS engineering has to do with the RMS I/O system. Supporting C using VMS was a bit of a PITA.
Eventually, the VMS engineers added Stream I/O - which simplified the C
runtime, but it was also made available for all technical languages.
Fairly soon after it was released, the DEC Marketing folks discovered
almost all new programs, regardless of language, had started to use Stream
I/O and many older programs were being rewritten by customers to use it. In
fact, inside of DEC itself, the languages group eventually rewrote things
like the FTN runtime to use streams, making it much smaller/easier to
maintain. My line in the old days: "It's not so bad that every I/O call has to offer 1000 options; it's that Dave has to check each one for every I/O." It's a classic example of how you can easily build RMS I/O out of stream-based I/O, but the other way around is much harder. My point here is to *use
the right primitives*. RMS may have made it easier to build RDB, but it
impeded everything else.
> On Wed, 8 May 2024 14:12:15 -0400, Clem Cole <clemc(a)ccc.com> wrote:
>
> FWIW: The DEC Mod-II and Mod-III
> were new implementations from DEC WRL or SRC (I forget). They targeted
> Alpha and I, maybe Vax. I'd have to ask someone like Larry Stewart or Jeff
> Mogul who might know/remember, but I thought that the front end to the DEC
> MOD2 compiler might have been partly based on Wirth's but rewritten, and by
> the time of the MOD3 FE was a new one originally written using the previous
> MOD2 compiler -- but I don't remember that detail.
Michael Powell at DEC WRL wrote a Modula-2 compiler that generated VAX code. Here’s an extract from announcement.d accompanying a 1992 release of the compiler from gatekeeper.dec.com:
The compiler was designed and built by Michael L. Powell, and originally
released in 1984. Joel McCormack sped the compiler up, fixed lots of bugs, and
swiped/wrote a User's Manual. Len Lattanzi ported the compiler to the MIPS.
Later, Paul Rovner and others at DEC SRC designed Modula-2+ (a language extension with exceptions, threads, garbage collection, and runtime type dispatch). The Modula-2+ compiler was originally based on Powell’s compiler. Modula-2+ ran on the VAX.
Here’s a DEC SRC research report on Modula-2+:
http://www.bitsavers.org/pdf/dec/tech_reports/SRC-RR-3.pdf
Modula-3 was designed at DEC SRC and Olivetti Labs. It had a portable implementation (using the GCC back end) and ran on a number of machines including Alpha.
Paul
Sorry for the dual-list post; I don’t know who monitors COFF, the proper place for this.
There may be a good timeline of the early decades of Computer Science and its evolution at universities in some countries, but I’m missing it.
Doug McIlroy lived through all this, I hope he can fill in important gaps in my little timeline.
It seems from the 1967 letter that defining the field was part of the zeitgeist leading up to the NATO conference.
1949 ACM founded
1958 First ‘freshman’ computer course in USA, Perlis @ CMU
1960 IBM 1400 - affordable & ‘reliable’ transistorised computers arrived
1965 MIT / Bell / General Electric begin Multics project.
CMU establishes Computer Sciences Dept.
1967 “What is Computer Science” letter by Newell, Perlis, Simon
1968 “Software Crisis” and 1st NATO Conference
1969 Bell Labs withdraws from Multics
1970 GE sells its computer business, including Multics, to Honeywell
1970 PDP-11/20 released
1974 Unix issue of CACM
=========
The arrival of transistorised computers - cheaper, more reliable, smaller & faster - was a trigger for the accelerated uptake of computers.
The IBM 1400-series was offered for sale in 1960, becoming the first (large?) computer to sell 10,000 units - a marker of both effective marketing & sales and attractive pricing.
The 360-series, IBM’s “bet the company” machine, was in full development when the 1400 was released.
=========
Attached is a text file, a reformatted version of a 1967 letter to ’Science’ by Allen Newell, Alan J. Perlis, and Herbert A. Simon:
"What is computer science?”
<https://www.cs.cmu.edu/~choset/whatiscs.html>
=========
A 1978 masters thesis on Early Australian Computers (back to the 1950s, mainly 1960s) cites a 17 June 1960 CSIRO report estimating 1,000 computers in the US and 100 in the UK, with no estimate mentioned for Western Europe.
The thesis has a long discussion of what to count as a (digital) ‘computer’ -
sources used different definitions, resulting in very different numbers,
making it difficult to reconcile early estimates, especially across continents & countries.
Reverse estimating to 1960 from the “10,000” NATO estimate of 1968, with a 1- or 2-year doubling time,
gives a range of 200-1,000, including the “100” in the UK.
Licklider and later directors of ARPA’s IPTO threw millions into Computing research in the 1960’s, funding research and University groups directly.
[ UCB had many projects/groups funded, including the CSRG creating BSD & TCP/IP stack & tools ]
Obviously there was more to the “Both sides of the Atlantic” argument of E.W. Dijkstra and Alan Kay - funding and numbers of installations were very different.
The USA had a substantially larger installed base of computers, even per person,
and with more university graduates trained in programming, a higher take-up in the private sector, not just the public sector and defence, was possible.
=========
<https://www.acm.org/about-acm/acm-history>
In September 1949, a constitution was instituted by membership approval.
————
<https://web.archive.org/web/20160317070519/https://www.cs.cmu.edu/link/inst…>
In 1958, Perlis began teaching the first freshman-level computer programming course in the United States at Carnegie Tech.
In 1965, Carnegie Tech established its Computer Science Department with a $5 million grant from the R.K. Mellon Foundation. Perlis was the first department head.
=========
From the 1968 NATO report [pg 9 of pdf ]
<http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF>
Helms:
In Europe alone there are about 10,000 installed computers — this number is increasing at a rate of anywhere from 25 per cent to 50 per cent per year.
The quality of software provided for these computers will soon affect more than a quarter of a million analysts and programmers.
d’Agapeyeff:
In 1958 a European general purpose computer manufacturer often had less than 50 software programmers,
now they probably number 1,000-2,000 people; what will be needed in 1978?
_Yet this growth rate was viewed with more alarm than pride._ (comment)
=========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Where did chunix (which contains chaos.c) and several other branches of the
v8 /usr/sys tree on TUHS come from? This stuff does not appear in the v8
manual. I don't recall a Lisp machine anywhere near the Unix room, nor any
collaborations that involved a Lisp machine.
Doug
I wonder if anyone can shed any light on the timing and rationale for
the introduction of “word erase” functionality to the kernel terminal
driver. My surface skim earlier leads me to believe it came to Unix
with 4BSD, but it was not reincorporated into 8th Edition or later,
nor did it make it to Plan 9 (which did incorporate ^U for the "line
kill" command). TOPS-20 supports it via the familiar ^W, but I'm not
sure about other PDP-10 OSes (Lars?). Multics does not support it.
VMS does not support it.
What was the proximal inspiration? The early terminal drivers seem to
use the Multics command editing suite (`#` for erase/backspace, `@`
for line kill), though at some point that changed, one presumes as
TTYs fell out of favor and display terminals came to the fore.
- Dan C.
I've been doing some research on Lisp machines and came across an
interesting tidbit: there was Chaosnet support in Unix v8, e.g.
https://www.tuhs.org/cgi-bin/utree.pl?file=V8/usr/sys/chunix/chaos.c
Does anyone remember why that went in? My first guess would be for
interoperability with the Symbolics users at Bell Labs (see Bromley's
"Lisp Lore", 1986), but that's just speculation.
john
Wikipedia has a brief page on cscope, which has a link to
https://cscope.sourceforge.net/history.html
written by Harold Bamford, in which he talks about the
early days of cscope at Bell Labs and its inventor Joe Steffan.
I wondered if anyone can add any interesting information about using
cscope on their projects or anything about its development.
-Marcus.
> You can always read Josh Fisher's book on the "Bulldog" compiler, I
> believe he did this work at Yale.
Are you thinking of John Ellis’s thesis:
Bulldog: A Compiler for VLIW Architectures
John R. Ellis
February 1985
http://www.cs.yale.edu/publications/techreports/tr364.pdf
Fisher was Ellis’s advisor. The thesis was also published in ACM’s Doctoral Dissertation Award:
https://mitpress.mit.edu/9780262050340/bulldog/
I believe Ellis still has a tape with his thesis software on it, but I don’t know if he’s been able to read it.
Hello Everyone
One of the Polish academic institutions was getting rid of old IT-related stuff
and they were kind enough to give me all Solaris related stuff, including
lots (and I mean lots) of installation CD-ROMs, documentation, manuals,
and some solaris software, mostly compilers and scientific stuff.
If anyone would be interested, feel free to contact me and I'd be happy to
share - almost everything is in more than a few copies and I have no
intention of keeping everything for myself.
Currently all of the stuff is located in Warsaw, Poland.
Best regards,
mjb
--
Maciej Jan Broniarz
> [Tex's] oversetting of lines caused by the periodic failure of the
> paragraph-justification algorithms drove me nuts.
Amen. If TeX can't do what it thinks is a good job, it throws a fit
and violates the margin, hoping to force a rewrite. Fortunately,
one can shut off the line-break algorithm and simply fill greedily.
The command to do so is \sloppy--an ironic descriptor of text
that looks better, albeit not up to TeX's discriminating standard.
Further irony: when obliged to write in TeX, I have resorted to
turning \sloppy mode on globally.
Apologies for airing an off-topic pet peeve,
Doug
I happened upon
https://old.gustavobarbieri.com.br/trabalhos-universidade/mc722/lowney92mul…
and I am curious as to whether any of the original Multiflow compilers
survive. I had never heard of them before now, but the fact that they were
licensed to so many influential companies makes me think that there might
be folks on this list who know of its history.
-Henry
ACPI has 4-byte identifiers (guess why!), but I just wondered, writing some
assembly:
is it globl, not global, or glbl, because globl would be a one-word
constant on the PDP-10 (5 7-bit bytes)?
Not entirely off track, netbsd at some point (still does?) ran on the
PDP-10.
> "BI" fonts can, it seems, largely be traced to the impact
> of PostScript
There was no room for BI on the C/A/T. It appeared in
troff upon the taming of the Linotron 202, just after v7
and five years before PostScript.
> Seventh Edition Unix shipped a tc(1) command to help
> you preview your troff output with that device before you
> spent precious departmental money sending it to the
> actual typesetter.
Slight exaggeration. It wasn't money, it was time and messing
with film cartridges, chemicals, and wet prints. You could buy a
lot of typesetter film and developer for the price of a 4014.
Doug
yeah that was the one that I'd first mentioned.
Although I was more interested in when/where the 386 PCC came from.
Seems at best all those sources are locked away.
____
| From: Angus Robinson
| To: Jason Stevens
| Cc: TUHS main list
| Sent: March 25, 2024 09:17 AM
| Subject: Re: [TUHS] 386 PCC
|
|
| Is this it ?
|
| https://web.archive.org/web/20071017025542/http://pcc.ludd.ltu.se/
|
| Kind Regards,
| Angus Robinson
|
|
| On Sun, Mar 24, 2024 at 2:13 AM Jason Stevens <
| jsteve(a)superglobalmegacorp.com> wrote:
|
|
| I'd been on this whole rabbithole exploration thing of
| those MIT PCC 8086
| uploads that have been on the site & on bitsavers, it
| had me wondering is
| there any version of PCC that targeted the 386?
|
| While rebuilding all the 8086 port stuff, and MIT
| PC/IP was fun, it'd be
| kind of interesting to see if anything that ancient
| could be forced to work
| with a DOS Extender..
|
| I know there was the Anders Magnusson one in 2007,
| although the site is now
| offline. But surely there must have been another one
| between 1988/2007?
|
| Thanks!
|
|
|
|
I'd been on this whole rabbithole exploration thing of those MIT PCC 8086
uploads that have been on the site & on bitsavers, it had me wondering is
there any version of PCC that targeted the 386?
While rebuilding all the 8086 port stuff, and MIT PC/IP was fun, it'd be
kind of interesting to see if anything that ancient could be forced to work
with a DOS Extender..
I know there was the Anders Magnusson one in 2007, although the site is now
offline. But surely there must have been another one between 1988/2007?
Thanks!
Not that I'm looking for drama but any idea what happened?
Such a shame it just evaporated.
____
| From: arnold(a)skeeve.com
| To: tuhs@tuhs.org;jsteve@superglobalmegacorp.com
| Cc:
| Sent: March 25, 2024 08:46 AM
| Subject: Re: [TUHS] 386 PCC
|
|
| Jason Stevens <jsteve(a)superglobalmegacorp.com> wrote:
|
| > I know there was the Anders Magnusson one in 2007,
| although the site is now
| > offline.
|
| A mirror of that work is available at
| https://github.com/arnoldrobbins/pcc-revived.
| It's current as of the last time the main site was
| still online,
| back in the fall of 2023.
|
| Magnusson has more than once said he's working to get
| things back
| online, but nothing has happened yet. I check weekly.
|
| FWIW,
|
| Arnold
|
Hi Everyone,
I’m cleaning the office and I have the following free books available first-come, first-served (just pay shipping).
“Solaris Internals.” Richard McDougall and Jim Mauro. 2007 Second Edition. 1020pp hardbound. (2 copies)
“Sun Performance and Tuning - Java and the Internet.“ Adrian Cockcroft and Richard Pettit. 1998 Second Edition. 587pp softbound.
“DTrace - Dynamic Tracing in Oracle Solaris, MacOSX, and FreeBSD.” Brendan Gregg and Jim Mauro. 2011. 1115 pp softbound. (2 copies)
“Oracle Database 11g Release 2 High Availability.” Scott Jesse, Bill Burton, & Bryan Vongray. 2011 Second Edition. 515pp softbound.
“Oracle Solaris 11 System Administration - The Complete Reference.” Michael Jang, Harry Foxwell, Christine Tran, & Alan Formy-Duval. 2013. 582pp softbound. (12 copies). NOTE, this is an older edition not the one covering 11.2.
“Strategies for Real-Time System Specification.” Derek Hatley & Imtiaz Pirbhai. 1988. 386pp hardbound.
“Mathematica.” Stephen Wolfram. 1991 Second Edition. 961pp hardbound. (Anyone want to save this from the landfill?)
Please send me mail off-list with your name and address and I’ll let you know shipping cost.
I expect to have additional books later this year.
Regards,
Stephen
> From: Rich Salz <rich.salz(a)gmail.com>
>> Don't forget the Imagen's
>>
>
> What, no Dover "call key operator"? :) (It was a Xerox product based on
> their 9700 copier.)
Actually, it was based on a Xerox 7000:
"The Dover is strip-down [sic] Xerox 7000 Reduction Duplicator. All optical system, electronics, contact relays, top harness, control console and related components are eliminated from the Xerox 7000. The paper feeder, paper transports, engines, solenoid, paper path sensing switches and related components are not disturbed. …"
http://www.bitsavers.org/pdf/xerox/dover/dover.pdf
Evenin' all...
I have a vague recollection that /dev/tty8 was the console in Edition 5
(we only used it briefly until Ed 6 appeared), but cannot find a reference
to it; lots of stuff about Penguin/OS though...
Something to do with 0-7 being the mux, so "8" was left (remember that
/dev/tty and /dev/console didn't exist back then), mayhaps?
Thanks.
-- Dave
> There was lawyerly concern about the code being stolen.
Not always misplaced. There was a guy in Boston who sold Unix look-alike
programs. A quick look at the binary revealed perfect correlation with our
C source. Coincidentally, DEC had hired this person as a consultant in
connection with cross-licensing negotiations with AT&T. Socializing at
the end of a day's negotiations, our lawyer somehow managed to turn the
conversation to software piracy. He discussed a case he was working on,
and happened to have some documents about it in his briefcase. He pulled
out a page of disassembled binary and a page of source code and showed them to
the consultant.
After a little study, the consultant confidently opined that the binary was
obviously compiled from that source. "Would it surprise you," the lawyer
asked, "if I told you that this is yours and that is ours?" The consultant
did not attend the following day's meeting.
Doug
In another thread there's been some discussion of Coherent. I just came
across this very detailed history, just posted last month. There's much
more to it than I knew.
https://www.abortretry.fail/p/the-mark-williams-company
Marc
VP/ix ran on both System III and UNIX System V/386 Release 3.2.
I do still have a copy of the VP/ix Environment documentation
and the diskettes for the software. I have the "Introduction to the
VP/ix Environment" for further reference for interested folks.
Also found some information about VP/ix on these web pages:
1.
https://virtuallyfun.com/2020/11/29/fun-with-vp-ix-under-interactive-unix-s…
2.
https://techmonitor.ai/technology/interactive_systems_is_adding_to_vpix_wit…
3.
https://manualzz.com/doc/7267897/interactive-unix-system-v-386-r3.2-v4.1---…
It's been a long time since I looked at this.
Heinz
On 3/13/2024 8:53 AM, Clem Cole wrote:
> Thanks. Fair enough. You mentioned PC/IX as /ISC's System III/
>
> I'm not sure I ever ran ISC's System III port—only the V.3 port -
> which was the basis for their ATT, Intel, and IBM work and later sold
> directly. I'm fairly sure ISC also called that port PC/IX, but they
> might have added something to say with 386 in the name—I've forgotten.
> [Heinz probably can clarify here]. Anyway, this is likely the source
> of my thinking. FWIW: The copy of PC/IX for the 386 (which I still
> have on a system I have not booted in ages) definitely has VPIX.
>
> On Wed, Mar 13, 2024 at 11:28 AM Marc Rochkind <mrochkind(a)gmail.com>
> wrote:
>
> @Clem Cole <mailto:clemc@ccc.com>,
>
> I don't remember what it was. But, the XT had an 8088, so
> certainly no 386 technology was involved.
>
> Marc
>
> On Wed, Mar 13, 2024 at 8:38 AM Clem Cole <clemc(a)ccc.com> wrote:
>
> @Marc
>
> On Tue, Mar 12, 2024 at 1:18 PM Marc Rochkind
> <mrochkind(a)gmail.com> wrote:
>
> At a trade show, I bought a utility that allowed me to run
> PC-DOS under PC/IX. I'm sure it wasn't a virtual machine.
> Rather, it just swapped back and forth. (Guessing a bit
> there.)
>
> Hmm ... you sure it was not either VPIX or DOS/Merge -- ISC
> built VPIX in cooperation with the Phoenix Tech folks for
> PC/IX. I always bought a copy with it, but it may have been an
> option. LCC did DOS/Merge originally as part of the AIX work
> for IBM and would become a core part of OS/2 Warp IIRC. Both
> Merge and VPIX had some rough edges but certainly worked fine
> for DOS 3.3 programs. The issue tended to be Win and DOS
> graphics-based programs/games that played fast and loose,
> bypassing the DOS OS interface and accessing the HW directly.
> For instance, I never got the flight simulator (Air War over
> Germany) for Dad's WWII plane (P-47 Thunderbolt) to run under
> either (i.e., only under DOS directly on the HW. FWIW: In that
> mode, Dad said the simulator flew a lot like how he remembered
> it).
>
> Both Merge and VPIX used the 386 VM support and a bunch of
> work in the core OS. Heinz would have to fill us in here.
> The version of the 386 port ISC delivered to AT&T and Intel
> only had the kernel changes to allow the VM support for VPIX
> to be linked in, but it was not there. IICR (and I'm not
> sure I am) is that Merge could run on PC/IX also, but you had
> to replace a couple of kernel modules. It certainly would
> work on the AT&T and Intel versions.
>
>
>
> --
> /My new email address is mrochkind(a)gmail.com/
>