I seem to remember that Sun was trying to sell boxes to the airline / reservation
industry, and one of the ways they came up with to make Solaris handle thousands
of ASCII terminals was to push the character (line discipline) code into STREAMS
modules, in order to eliminate the multiple user/kernel crossings per character handled…
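
For what it's worth, here is a minimal user-space sketch of what that configuration
looks like, assuming the usual Solaris module names (ptem, ldterm, ttcompat); the
point is that the line-discipline processing lives in pushable kernel modules, so
per-character handling stays in the kernel instead of crossing into user space for
each character:

    /* Sketch: set up a Solaris pseudo-terminal slave by pushing the
     * conventional STREAMS modules.  Error handling is abbreviated. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stropts.h>

    int setup_slave(const char *slave_path)
    {
        int fd = open(slave_path, O_RDWR);

        if (fd < 0) {
            perror("open");
            return -1;
        }
        /* Each I_PUSH stacks a module between the stream head and the
         * driver; character processing then happens inside the kernel
         * as messages flow through the stack. */
        if (ioctl(fd, I_PUSH, "ptem") < 0 ||     /* pseudo-terminal emulation */
            ioctl(fd, I_PUSH, "ldterm") < 0 ||   /* terminal line discipline */
            ioctl(fd, I_PUSH, "ttcompat") < 0)   /* V7/BSD ioctl compatibility */
            perror("I_PUSH");
        return fd;
    }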
Joe
Joe McGuckin
ViaNet Communications
joe(a)via.net
650-207-0372 cell
650-213-1302 office
650-969-2124 fax
On Apr 12, 2020, at 3:03 AM, Paul Ruizendaal <pnr(a)planet.nl> wrote:
Date: Sat, 11 Apr 2020 08:44:28 -0700
From: Larry McVoy
On Sat, Apr 11, 2020 at 11:38:44AM -0400, Norman Wilson wrote:
-- Stream I/O system added; all communication-device drivers (serial ports,
Ethernet, Datakit) changed to work with streams. Pipes were streams.
How was performance? Was this Dennis' streams, not Sys V STREAMS?
It was streams, not STREAMS.
I ported Lachman's/Convergent's STREAMS-based TCP/IP stack to the ETA 10 Unix
and SCO Unix, and performance just sucked. Ditto for the Solaris port (which I
did not do; I don't think it made any difference who did the port, though).
STREAMS is outside the limited scope I try to restrain myself to, but I’m intrigued.
What in the above case caused the poor performance?
There was a debate on the LKML in the late 1990s where Caldera wanted STREAMS support in
Linux. To the extent the arguments were technical *), my understanding of them is that
the main showstopper was that STREAMS would make ‘zero copy’ networking impossible. If
so, then it is a comment more about the underlying buffer strategy than about STREAMS itself.
Did STREAMS also perform poorly in the 1986 context in which they were developed?
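
To illustrate what ‘buffer strategy’ means here: data moves through a STREAMS stack as
mblk_t message blocks, and a module or driver that wants the data in a buffer of its own
typically allocates a fresh block and copies into it. A rough sketch using the standard
DDI calls (allocb, putnext, freemsg); the function itself is hypothetical, not taken
from any real driver:

    #include <sys/stream.h>
    #include <sys/strsun.h>
    #include <sys/ddi.h>
    #include <sys/sunddi.h>

    /* Hypothetical put-routine helper: copy an incoming message block
     * into a newly allocated one and pass the copy downstream. */
    static void
    pass_along(queue_t *q, mblk_t *mp)
    {
        size_t  len = MBLKL(mp);              /* bytes in this block */
        mblk_t *np  = allocb(len, BPRI_MED);  /* fresh kernel buffer */

        if (np == NULL) {
            freemsg(mp);                      /* drop on allocation failure */
            return;
        }
        bcopy(mp->b_rptr, np->b_wptr, len);   /* the copy that zero-copy schemes avoid */
        np->b_wptr += len;
        freemsg(mp);
        putnext(q, np);                       /* hand the copy to the next module */
    }

If that reading is right, it is this per-hop allocate-and-copy pattern that the zero-copy
argument was aimed at, rather than the module interface itself.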
Paul
*) Other arguments pro and con included forward maintenance and market need, but I’m not
so interested in those aspects of the debate.