Since MINIX was a UNIX V7 clone for teaching, I figure this is at least somewhat on-topic.
I’ve wanted to port MINIX 1.5 for M68000 to other systems besides Amiga, Atari ST, and the classic Mac, but trying to do that within a system emulator is a pain and doesn’t help you use a modern editor or SCM system. So I took the Musashi M68000 emulator and, using the MINIX 1.5 sources for Atari ST for reference, I’ve implemented a system call emulator that’s now _almost_ sufficient to run /usr/bin/cc.
It’s up on GitHub at https://github.com/eschaton/MINIXCompat and I’ve released it under an MIT license. It requires my forked version of the Musashi project, which supports implementing a TRAP instruction via a callback; that hook is what lets system calls be handled on the host side. I reference the fork via a submodule so it can be kept at least logically distinct from the rest of the code. There’s no Makefile since I’m developing with Xcode on macOS, but I expect to write one at some point so it can also run on NetBSD and Linux; that should be straightforward.
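To give a flavor of how the TRAP hook works, here’s a minimal sketch in C of the shape such a callback takes. The function names, the TRAP vector, and the register conventions below are hypothetical stand-ins for illustration, not the actual Musashi or MINIXCompat interfaces:

    /* Hypothetical bridge from a guest TRAP instruction to a host-side
     * MINIX system call.  Assumes a forked emulator core that invokes
     * a callback for each TRAP #n the guest executes.  The accessor
     * names, vector number, and register conventions are assumptions,
     * not the real Musashi/MINIXCompat interfaces. */
    #include <stdint.h>

    extern uint32_t emu_get_dreg(int n);                  /* read Dn  */
    extern void     emu_set_dreg(int n, uint32_t value);  /* write Dn */
    extern int32_t  host_minix_syscall(uint32_t msg_addr);

    /* Invoked by the emulator when the guest executes TRAP #vector. */
    int trap_callback(int vector)
    {
        if (vector != 0)    /* assume syscalls arrive via TRAP #0     */
            return 0;       /* not ours: let the emulator's default run */

        /* Assume the guest passes a pointer to its MINIX message in D1
         * and expects the result in D0, mirroring a sendrec()-style call. */
        uint32_t msg = emu_get_dreg(1);
        emu_set_dreg(0, (uint32_t)host_minix_syscall(msg));
        return 1;           /* handled on the host side */
    }

The nice property of this arrangement is that the guest binary runs unmodified: it executes the same TRAP it would on real hardware, and the host decides what that TRAP means.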
-- Chris
> I was able to rebuild both the UNSW and the native PWB compiler on PWB
> 1.0, but not to backport either to vanilla v6.
Any idea what the problem was? I'm curious, because we ran a version of the
Typesetter compiler on the MIT systems, which ran an enhanced V6.
Noel
Marshall Kirk McKusick gave a talk on the history of
the BSD Daemon:
https://www.youtube.com/watch?v=AeDaD-CEzzg
"This video tells the history of the BSD Daemon. It
starts with the first renditions in the 1970s of the
daemons that help UNIX systems provide services to
users. These early daemons were the inspiration for
the well-known daemon created by John Lasseter in the
early 1980s that became synonymous with BSD as they
adorned the covers of the first three editions of `The
Design and Implementation of the BSD Operating System'
textbooks. The talk will also highlight many of the
shirt designs that featured the BSD Daemon."
On Sun, Oct 20, 2024 at 01:23:23AM -0400, Dan Plassche wrote:
>
> On Sat, 19 Oct 2024, Jonathan Gray wrote:
>
> > PWB was an early external distribution with troff.
> >
> > Documents for the PWB/UNIX Time-Sharing System
> > https://datamuseum.dk/wiki/Bits:30007124
> > https://bitsavers.org/pdf/att/unix/PWB_UNIX/
> >
> > NROFF/TROFF User's Manual
> > October 11, 1976
> > datamuseum.dk, pp 325-357
> > bitsavers, pp 217-249
> >
> > Addendum to the NROFF/TROFF User's Manual
> > May 1977
> > datamuseum.dk, p 358
> > bitsavers, p 250
> >
> > fonts described in:
> > Administrative Advice for PWB/UNIX
> > 23. PHOTOTYPESETTING EQUIPMENT AND SUPPLIES
> > datamuseum.dk, p 647
>
> Thank you Jonathan. I was previously not sure where to place the
> PWB documentation in the timeline but a clearer picture is
> emerging.
>
> Based on the v6 "NROFF User's Manual" revised in 1974 and
> published in 1975, I can now see that the PWB documentation with
> the "NROFF/TROFF User's Manual" from 1976-77 has most of the
> content that later appears in v7. The major change immediately
> beforehand was the rewrite of troff into C.[1] Some clear
> differences are the combination of nroff and troff manpages and
> the addition of troff specific features like the special fonts
> into the user's manual.
>
> [1]. Apparently in 1976:
> https://www.tuhs.org/Archive/Distributions/USDL/unix_program_description-tr…
"It was rewritten in C around 1975"
Kernighan in CSTR 97, A Typesetter-independent TROFF
I've seen references to
"Documents for Use with the Phototypesetter (Version 7)"
which was likely distributed with the licensed phototypesetter tape in 1977.
What may have been the manual distributed with that tape is also close to v7.
https://www.tuhs.org/cgi-bin/utree.pl?file=Interdata732/usr/source/troff/doc
https://www.tuhs.org/Archive/Distributions/Other/Interdata/
tuhs Applications/Spencer_Tapes/unsw3.tar.gz
usr/source/formatters/troff/doc/
The "man 1 iconv" page on both HP-UX 11i 11.23 (Aug 2003) and 11.31 (Feb
2007) remark that iconv was developed by HP.
Cheers,
uncle rubl
--
The more I learn the better I understand I know nothing.
So a project I'm working on includes a need to store UTF-8 Japanese kana text in source files for readability, but then to process those source files through tools only guaranteed to support single-byte code points, with something mapping the UTF-8 code points to single-byte code points in the destination execution environment. After a bit of futzing, I've landed on the definition of iconv(1) provided by the Single UNIX Specification to push this character-mapping concern to the tip of my pipelines. It is working well thus far and insulates the utilities down-pipe from needing multi-byte support (I'm looking at you, Apple).
I started thumbing through my old manuals and noted that iconv(1) is not a historic utility; rather, SUS picked it up from HP-UX along the way.
Was there any older utility or set of practices for converting files between character encodings besides the ASCII/EBCDIC stuff in dd(1)? As I understand it, iconv(1) just recognizes sequences of bytes, maps each to a symbolic name, then emits the complementary sequence of bytes assigned to that symbolic name in a second charmap file. That sounds like a simple filter operation that could be done in a few other ways; a sketch of one such filter follows below. I'm curious whether any particular approach was relatively ubiquitous, or whether this was an exercise largely left to the individual, with solutions wide and varied. My tool chain doesn't need to work on historic UNIX, but it would be cool to understand how to make it work on the least common denominator.
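For reference, the same mapping is easy to express directly against the iconv(3) C API that the utility wraps. Here's a minimal sketch of such a filter, assuming POSIX iconv(3) is available; the target encoding name is illustrative (and platform-dependent), since a real deployment would name whatever single-byte set the destination environment actually uses:

    /* Minimal iconv(3) filter: UTF-8 on stdin, a single-byte encoding
     * on stdout.  Assumes POSIX iconv(3); encoding names vary by
     * platform and the target here is illustrative only. */
    #include <iconv.h>
    #include <stdio.h>

    int main(void)
    {
        iconv_t cd = iconv_open("ISO-8859-1", "UTF-8");  /* to, from */
        if (cd == (iconv_t)-1) {
            perror("iconv_open");
            return 1;
        }

        char in[4096], out[8192];
        size_t n;
        while ((n = fread(in, 1, sizeof in, stdin)) > 0) {
            char *ip = in, *op = out;
            size_t ileft = n, oleft = sizeof out;
            /* Caveat: a multi-byte sequence split across reads yields
             * EINVAL here; a production filter would carry the partial
             * sequence over to the next read.  Omitted for brevity. */
            if (iconv(cd, &ip, &ileft, &op, &oleft) == (size_t)-1) {
                perror("iconv");  /* e.g. EILSEQ: no mapping exists */
                return 1;
            }
            fwrite(out, 1, sizeof out - oleft, stdout);
        }
        iconv_close(cd);
        return 0;
    }

Dropped at the tip of a pipeline, something like this keeps everything downstream strictly single-byte, which is exactly the insulation I was after.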
- Matt G.
>>> malloc(0) isn't undefined behaviour but implementation defined.
>>
>> In modern C there is no difference between those two concepts.
> Can you explain more about your view
There certainly is a difference, but in this case the practical
implications are the same: avoid malloc(0). malloc(0) lies at the high end
of a range of severity of concerns about implementation-definedness. At the
low end are things like the size of ints, which only affects applications
that may confront very large numbers. In the middle is the default
signedness of chars, which generally may be mitigated by explicit type
declarations.
For the size of ints, C offers guardrails like INT_MAX. There is no test to
discern what an error return from malloc(0) means.
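Concretely, this is the test one would like to write, and can't; a minimal
sketch, where what NULL means is precisely the implementation-defined part:

    /* malloc(0): the standard lets an implementation return either a
     * null pointer or a unique pointer that must not be dereferenced.
     * So a NULL return is ambiguous between success and failure. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *p = malloc(0);
        if (p == NULL) {
            /* Either a successful zero-size allocation or an
             * out-of-memory failure; no portable test can say which. */
            puts("NULL: success or error?  No way to tell.");
        } else {
            /* A valid, unique pointer: free it, never dereference it. */
            puts("non-NULL zero-size object");
            free(p);
        }
        return 0;
    }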
Is there any other C construct that implementation-definedness renders
useless?
Doug