[TUHS to Bcc:, +COFF]
On Wed, Jul 10, 2024 at 5:26 PM John Levine <johnl(a)taugh.com> wrote:
> It appears that Noel Chiappa <jnc(a)mercury.lcs.mit.edu> said:
> > From: Dan Cross
> >
> > > These techniques are rather old, and I think go back much further than
> > > we're suggesting. Knuth mentions nested translations in TAOCP ...
> > > suggesting the technique was well-known as early as the mid-1960s.
>
> Knuth was talking about simulating one machine on another, interpreting
> one instruction at a time. As he notes, the performance is generally awful,
> although IBM did microcode emulation of many of their second-generation
> machines on S/360, which all (for business reasons) ran faster than the
> real machines. Unsurprisingly, you couldn't emulate a 7094 on anything
> smaller than a 360/65.

It's not clear to me why you suggest with such evident authority that
Knuth was referring only to serialized instruction emulation, and not
something like JIT'ed code; true, he doesn't specify one way or the
other, but I find it specious to conclude that this implies the
technique wasn't already in use, or at least known. But certainly by
then, JIT'ing techniques for "interpreted" programming languages were
known; it doesn't seem like a great leap to extend that to binary
translation. Of course, that's speculation on my part, and I could
certainly be wrong.
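
For concreteness, the serialized style in question is just a
fetch/decode/dispatch loop. A minimal sketch in C, using a toy
accumulator ISA of my own invention (nothing from TAOCP):

    /* Serialized interpretation: every guest instruction pays the full
     * fetch/decode/dispatch cost on every single execution. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_LOAD, OP_ADD, OP_HALT };
    struct insn { uint8_t op; int32_t imm; };

    int main(void) {
        struct insn prog[] = { {OP_LOAD, 7}, {OP_ADD, 35}, {OP_HALT, 0} };
        int32_t acc = 0;
        for (size_t pc = 0; ; pc++) {       /* fetch */
            switch (prog[pc].op) {          /* decode + dispatch */
            case OP_LOAD: acc  = prog[pc].imm; break;
            case OP_ADD:  acc += prog[pc].imm; break;
            case OP_HALT: printf("acc = %d\n", acc); return 0;
            }
        }
    }

Every iteration re-pays decode and dispatch overhead for work the host
could do in a single instruction, which is exactly where the awful
performance comes from.
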
> We've been discussing batch or JIT translation of code which gives
> much better performance without a lot of hardware help.

JIT'd performance of binary transliteration is certainly going to be
_better_ than strict emulation, but it is unlikely to be _as good_ as
native code. Indeed, this is still an active area of research; see,
e.g., LuaJIT, or https://www.mattkeeter.com/blog/2022-10-04-ssra/
(disclaimer: Matt's a colleague of mine).
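
For contrast, here's the same toy guest program put through a minimal
batch translator: decode the guest code once, emit equivalent x86-64
instructions into an executable buffer, then run them natively. This is
only a sketch, assuming x86-64 and POSIX mmap(2); a real translator
also deals with control flow, translation caches, and W^X policies
(map RW, then mprotect(2) to RX):

    /* Batch binary translation: guest -> host x86-64, decoded once. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    enum { OP_LOAD, OP_ADD, OP_HALT };
    struct insn { uint8_t op; int32_t imm; };
    typedef int32_t (*translated)(void);

    int main(void) {
        struct insn prog[] = { {OP_LOAD, 7}, {OP_ADD, 35}, {OP_HALT, 0} };
        uint8_t *buf = mmap(NULL, 4096, PROT_READ|PROT_WRITE|PROT_EXEC,
                            MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;

        size_t n = 0;
        for (size_t pc = 0; ; pc++) {
            switch (prog[pc].op) {
            case OP_LOAD:                  /* mov $imm32,%eax */
                buf[n++] = 0xb8;
                memcpy(buf + n, &prog[pc].imm, 4); n += 4;
                break;
            case OP_ADD:                   /* add $imm32,%eax */
                buf[n++] = 0x05;
                memcpy(buf + n, &prog[pc].imm, 4); n += 4;
                break;
            case OP_HALT:                  /* ret; result is in %eax */
                buf[n++] = 0xc3;
                printf("acc = %d\n", ((translated)buf)());
                return 0;
            }
        }
    }

Here the decode cost is paid once per guest instruction rather than
once per execution, which is the whole advantage of batch or JIT
translation over strict interpretation; even so, per the above, the
generated code still won't match what a native compiler produces.
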
- Dan C.