I have successfully gotten System V running on a PDP-11 sim.
I have been trying to add serial lines like on Version 7 but
have had no success.
What would be necessary under System V on a sim to do so?
I have already tried the SIMH group but got no working answers.
If direct telnet is a better way, please let me know.
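For reference, here is what I have been trying on the SIMH side, along
with my guesses at the System V half (the device names, major number,
and inittab entry are assumptions on my part, not known-good values):

    ; simulator configuration, before boot
    set dz enabled
    set dz lines=8
    attach dz 4000

and then inside System V:

    # mknod /dev/tty00 c 11 0
    # grep tty00 /etc/inittab
    00:2:respawn:/etc/getty tty00 9600

The idea being that 'telnet localhost 4000' lands on the first line.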
Thanks
Ken
--
WWL 📚
Ca. 1981, if memory serves, even small numbers of TCP connections were
not common.
I was told at some point that Sun used UDP for NFS for that reason. It was
a reasonably big deal when we started to move to TCP for NFS ca. 1990 (my
memory of the date -- I know I did it on my own for SunOS as an experiment
when I worked at the SRC -- it seemed to come into general use about that
time).
What kind of numbers of TCP connections would be reasonable in 1980, 1990,
2000, and 2010?
I sort of think I know, but I sort of think I'm probably wrong.
So I decided to keep the momentum and have just finished the first pass of a Fifth Edition manual restoration based on the same process I used for 3B20 4.1:
https://gitlab.com/segaloco/v5man
There were a few pages missing from the extant PDF scan, at least as far as pages that appear in both the V4 and V6 sources go; those are handled by seeing how the V5 source of the few programs in question compares with V6. I'll note which pages required this in a second pass.
I've set my sights on V1 and V2 next, using V3's extant roff sources as a starting point, so more to come.
- Matt G.
From reading a lot of papers on the origins of TCP I can confirm that people appear to have been thinking in terms of a dozen connections per machine, maybe half that on 16-bit hardware, around 1980. Maybe their expectations for PDP-10’s were higher; I have not looked into that much.
> From: Tom Lyon <pugs78(a)gmail.com>
> Sun chose UDP for NFS at a point when few if any people believed TCP could
> go fast.
> I remember (early 80s) being told that one couldn't use TCP/IP in LANs
> because they were WAN protocols. In the late 80s, WAN people were saying
> you couldn't use TCP/IP because they were LAN protocols.
I’m not disputing the above, but there was a lot of focus on making TCP go fast enough for LAN usage in 1981-1984. For example, see this 1981 post by Fabry/Joy in the TCP-IP mailing list: https://www.rfc-editor.org/in-notes/museum/tcp-ip-digest/tcp-ip-digest.v1n6…
There are a few other similar messages to the list from around that time.
An early issue was checksumming, which initially took 25% of the CPU on a
VAX-11/750 when TCP was heavily used. Ideas like "trailing headers" (so
that the data was block aligned) were likewise driven by the search for
LAN performance. Timeouts were reduced from 5s and 2s to 0.5s and 0.2s,
and using a software interrupt instead of a kernel thread made the stack
more performant.

It always seemed to me that the BBN-CSRG controversy over TCP code spurred
both teams ahead, with BBN more focussed on WAN use cases and CSRG more on
LAN use cases. I would argue that no other contemporary network stack had
this dual optimisation, with the possible exception of Datakit.
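To make the checksumming point concrete: the inner loop is essentially
the one's-complement sum later written up in RFC 1071, run over every
packet. A minimal C sketch (mine, not the actual BSD code):

    /* One's-complement Internet checksum (RFC 1071 style).
       A per-word loop like this over every packet is why the
       checksum figured so prominently in CPU profiles. */
    unsigned short
    cksum(unsigned char *buf, int len)
    {
        unsigned long sum = 0;

        while (len > 1) {
            sum += (buf[0] << 8) | buf[1];
            buf += 2;
            len -= 2;
        }
        if (len == 1)
            sum += buf[0] << 8;           /* pad the odd byte */
        while (sum >> 16)                 /* fold the carries back in */
            sum = (sum & 0xffff) + (sum >> 16);
        return (unsigned short)~sum;
    }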
Guys,
Find attached an updated date.c for Y2K for System V
E.g.: date 0309182123 (mmddHHMMyy)
Also works:
# date +%D
03/09/23
# date +%y%m%d%H%M
2303091823
Interestingly, Version 7 wants: date 2303091821 (yymmddhhmm)
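(Not the attached file, just an illustration of the usual trick: with
only two year digits, something has to pick the century. A common
choice, and an assumption on my part about any given date.c, is a
1970-2069 window:)

    /* Two-digit-year pivot: 70-99 -> 19xx, 00-69 -> 20xx.
       Illustration only, not the attached date.c. */
    int
    fullyear(int yy)
    {
        return yy >= 70 ? 1900 + yy : 2000 + yy;
    }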
Ken
--
WWL 📚
> From: Kenneth Goodwin
> The first frame buffers from Evans and Sutherland were at University of
> Utah, DOD SITES and NYIT CGL as I recall.
> Circa 1974 to 1978.
Were those on PDP-11's, or PDP-10's? (Really early E+S gear attached to
PDP-10's; '74-'78 sounds like an interim period.)
Noel
In PWB1, support for 'huge' files appears to have been removed. If one
compares bmap() in PWB1's subr.c with V6's, the "'huge' fetch of double
indirect block" code is gone. I guess PWB didn't need very large (> 8*256*512
= 1,048,576 bytes) files? I'm not sure what the _benefits_ of removing it
were, though - unless PWB was generating lots of files of between 7*256*512
and 8*256*512 bytes in length, and they wanted to avoid the overhead of the
double-indirect block? (The savings in code space are derisory - unlike in
LSX/MINI-UNIX.) Anyone know?
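For anyone checking the arithmetic, the limits fall straight out of
512-byte blocks and 256 block numbers per indirect block (my figures
below, not from the source):

    /* V6 size limits implied by bmap(): 8 indirect pointers in
       the inode, the 8th doubly indirect for a 'huge' file. */
    #include <stdio.h>

    int main(void)
    {
        long blk = 512, nptr = 256;
        long large = 8 * nptr * blk;                /* 1,048,576  */
        long huge = (7 * nptr + nptr * nptr) * blk; /* 34,471,936 */
        printf("large %ld, huge %ld\n", large, huge);
        return 0;
    }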
Noel
I am confused on the history of the frame buffer device.
On Linux, it seems that the fbdev interface (/dev/fb*) originated in 1999 from work done by Martin Schaller and Geert Uytterhoeven (and some input from Fabrice Bellard?).
However, it would seem at first glance that early SunOS also had a frame buffer device (/dev/cgoneX, /dev/bwoneX, etc.) which was similar in nature (a character device that could be mmap’ed to give access to the hardware frame buffer, with ioctl’s to probe and configure the hardware). Is that correct, or were these entirely different in nature?
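For reference, the Linux pattern I have in mind is roughly the
following (a sketch only, assuming /dev/fb0, the fbdev ioctls, and a
32 bits-per-pixel mode; error checking elided):

    /* Query geometry, map the frame buffer, paint one pixel. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/fb.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;

        ioctl(fd, FBIOGET_VSCREENINFO, &var);   /* mode and depth   */
        ioctl(fd, FBIOGET_FSCREENINFO, &fix);   /* layout and size  */

        uint8_t *fb = mmap(0, fix.smem_len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        /* white pixel at (10,10), assuming 32 bpp */
        *(uint32_t *)(fb + 10 * fix.line_length + 10 * 4) = 0xffffffff;
        return 0;
    }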
Paul
The wheel of reincarnation discussion got me to thinking:
What I'm seeing is reversing the rotation of the wheel of reincarnation.
Instead of pulling the task (e.g. graphics) from a special purpose device
back into the general purpose domain, the general purpose computing domain
is pushed into the special purpose device.
I first saw this almost 10 years ago with a WLAN modem chip that ran Linux
on its 4-core CPU, all of it in a tiny package. It was faster, better, and
cheaper than its traditional embedded predecessor -- because the software
stack was less dedicated and less of a single-company creation. Take Linux,
add some stuff, voila! WLAN modem.
Now I'm seeing it in peripheral devices that have, not one, but several
independent SoCs, all running Linux, on one card. There's even been a
recent remote-code exploit on ... an LCD panel.
Any of these little devices, with the better part of 1 GB of flash and a
large part of 1 GB of DRAM, dwarfs anything Unix ever ran on. And there are
more of them, all over the little PCB in a laptop.
The evolution of platforms like laptops to becoming full distributed
systems continues. The wheel of reincarnation spins counterclockwise -- or
sideways?
I'm no longer sure the whole idea of the wheel of reincarnation is even
applicable.
Rob Pike:
As observed by many others, there is far more grunt today in the graphics
card than the CPU, which in Sutherland's timeline would mean it was time to
push that power back to the CPU. But no.
====
Indeed. Instead we are evolving ways to use graphics cards to
do general-purpose computation, and assembling systems that have
many graphics cards not to do graphics but to crunch numbers.
My current responsibilities include running a small stable of
those, because certain computer-science courses consider it
important that students learn to use them.
I sometimes wonder when someone will think of adding secondary
storage and memory management and network interfaces to GPUs,
and push to run Windows on them.
Norman Wilson
Toronto ON