Apologies to TUHS - but please don't think Fortran did not impact
UNIX and its peers. We owe that community our jobs, for it created the
market in which we all would build systems and eventually improve them.
Note: I'm CCing COFF - you want to continue this...
On Mon, Jun 12, 2023 at 5:39 PM G. Branden Robinson <
g.branden.robinson(a)gmail.com> wrote:
> It's an ill wind that blows a Fortran runtime using the same convention.
>
Be careful there, weedhopper ... Fortran gave a lot to computing
(including UNIX) and frankly still does. I did not write too much
Fortran as a professional (mostly early in my career), but I did spend 50+
years ensuring that the results of the Fortran compiler ran >>really well<<
on the systems I built. As a former colleague of Paul W's and mine once said,
"*Any computer executive that does not take Fortran seriously will not have
their job very long.* It pays our salary."
It's still the #1 language for science (it's also not the same language my
Father learned in the late 50s/early 60s, much less the one I learned 15
years later). Check out: In what type of work is the Fortran Programming
Language most used today
<https://www.quora.com/In-what-type-of-work-is-the-Fortran-programming-langu…>,
Is Fortran still alive
<https://www.quora.com/Is-Fortran-still-alive/answer/Clem-Cole>, and Is Fortran
obsolete <https://www.quora.com/Is-Fortran-obsolete/answer/Clem-Cole>.
FWIW: These days, the Intel Fortran compiler (and eventually the LLVM one,
of which Intel is the primary developer) calls the C/C++ common runtime for
support. Most libraries are written in C, C++, (or assembler in some very
special cases) - so now it's C that keeps Fortran alive. But "in the
beginning" it was all about Fortran because that paid the bills then and
still does today.
Sorry this is very tangential to the list but figured some folks here might have some knowledge just what with our proximity to Western Electric lore by way of UNIX. Still, didn't feel UNIX-y at all so COFF instead of TUHS.
Anywho, spotted something particularly interesting in my rounds of checking for eBay postings: https://www.ebay.com/itm/385635333789?hash=item59c9a8369d:g:8TMAAOSw3kJkblS…
After the link is an auction for a police badge bearing the word "Police", but also labeled "Western Electric, Co.", along with the seal of North Carolina. I did a bit of searching, and while I could find plenty of WECo badges labeled security, plant protection, etc., I can't find any others specifically using the word "Police". The latter term has governmental implications that other terms do not, and it's got me kinda curious whether there was ever a time WECo or the Bell System at large actually had authority from the government to accredit their own personnel as "police" and not simply as security, guards, etc., and what sort of legal statutes would be involved. I'm not interested in purchasing this either way, but it'd be amusing if this is some facsimile; I don't know that reporting it as such would bubble up through eBay's systems, though.
Also given the sensitivity of discussions of law enforcement these days, I'm simply interested in whether there is any accessible documentation or history of WECo and/or Bell's relationship with formal U.S. law enforcement agencies, not any discussion on the propriety of this. If you want to chat philosophy, email me privately, but that sort of public discussion is too sensitive for me to want to wade into in mixed company.
- Matt G.
Useful Shell Scripts: Network Connections, Logins, and
*Blocking hacking attempts*
#1. See how many remote IPs are connecting to the machine
See how many remote IPs are connecting to the local machine (whether
through ssh or web or ftp). Use netstat -atn to view the status of all
connections on the machine: -a views all, -t displays only tcp connection
information, and -n displays addresses in numeric format. Local Address (the
fourth column) is the IP and port information of the machine; Foreign
Address (the fifth column) is the IP and port information of the remote
host. Use awk to display only the data in column 5, then take the IP
address in front of the colon. sort groups identical addresses together so
that uniq -c can collapse the duplicates and count them, and a final
sort -rn orders the result by count.
netstat -atn | awk '{print $5}' | awk -F: '{print $1}' | sort | uniq -c | sort -rn
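One refinement, assuming Linux-style netstat output (an assumption, not from the original): the header lines netstat prints also reach awk, so their words can show up among the counted "addresses". A sketch that keeps only tcp rows and strips the port with a colon-aware split:

```shell
#!/bin/sh
# Count established/listening connections per remote IP.
# /^tcp/ skips netstat's two header lines; split() drops the :port
# suffix from the Foreign Address column.
netstat -atn | awk '/^tcp/ {split($5, a, ":"); print a[1]}' | sort | uniq -c | sort -rn
```

The leading count from uniq -c makes the busiest remote IPs float to the top.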
#2. Detect file consistency in specified directories of two servers
Detect the consistency of files in specified directories on two servers
by comparing the md5 values of the files on each server.
#!/bin/bash
dir=/data/web
b_ip=xxx.xxx.xxx.xxx
#Iterate through all the files in the specified directory and use them
#as arguments to the md5sum command to get the md5 values of all the
#files and write them to the specified file
find $dir -type f | xargs md5sum > /tmp/md5_a.txt
ssh $b_ip "find $dir -type f | xargs md5sum > /tmp/md5_b.txt"
scp $b_ip:/tmp/md5_b.txt /tmp
#Compare file names as traversal objects one by one
for f in `awk '{print $2}' /tmp/md5_a.txt`
do
    #The standard is machine a. When machine b does not have the file
    #being traversed, directly output the non-existent result
    if grep -qw "$f" /tmp/md5_b.txt
    then
        md5_a=`grep -w "$f" /tmp/md5_a.txt | awk '{print $1}'`
        md5_b=`grep -w "$f" /tmp/md5_b.txt | awk '{print $1}'`
        #Output the result of the file change if the md5 values are
        #inconsistent when the file exists on both machines
        if [ "$md5_a" != "$md5_b" ]
        then
            echo "$f changed."
        fi
    else
        echo "$f deleted."
    fi
done
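A terser cross-check, once both md5 lists exist, is to hand the whole comparison to diff instead of looping (a sketch, not from the original article; file paths as in the script above):

```shell
#!/bin/sh
# Compare the two md5sum listings directly. Lines appearing on only
# one side are files that differ, or that exist on only one machine.
sort -k2 /tmp/md5_a.txt > /tmp/md5_a.sorted
sort -k2 /tmp/md5_b.txt > /tmp/md5_b.sorted
# "<" lines come from machine a's view, ">" lines from machine b's.
diff /tmp/md5_a.sorted /tmp/md5_b.sorted
```

Note that diff exits nonzero when differences are found, which is handy for scripting but needs `|| true` under `set -e`.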
#3. Detect network interface card traffic and record it in the log
according to the specified format
Detect the network interface card traffic and record it in the log
according to the specified format, and record it once a minute. The log
format is as follows:
- 2019–08–12 20:40
- ens33 input: 1234bps
- ens33 output: 1235bps
#!/bin/bash
while :
do
    LANG=en
    logfile=/tmp/`date +%d`.log
    #Redirect the output of the following command execution to the logfile log
    exec >> $logfile
    date +"%F %H:%M"
    #The unit of traffic counted by the sar command is kB/s, and the log
    #format is bps, so multiply by 1000*8
    sar -n DEV 1 59 | grep Average | grep ens33 | awk '{print $2,"\t","input:","\t",$5*1000*8,"bps","\n",$2,"\t","output:","\t",$6*1000*8,"bps"}'
    echo "####################"
    #Because it takes 59 seconds to execute the sar command, sleep is not required
done
#4. Iptables automatically blocks IPs that visit websites frequently
Block IPs with more than 100 accesses per minute
- According to the Nginx log
#!/bin/bash
DATE=$(date +%d/%b/%Y:%H:%M)
ABNORMAL_IP=$(tail -n5000 access.log | grep "$DATE" | awk '{a[$1]++}END{for(i in a)if(a[i]>100)print i}')
#First, tail prevents the file from being too large and slow to read,
#and the number can be adjusted for the maximum number of visits per
#minute. awk cannot filter the log directly because it contains special
#characters.
for IP in $ABNORMAL_IP; do
    if [ $(iptables -vnL | grep -c "$IP") -eq 0 ]; then
        iptables -I INPUT -s $IP -j DROP
    fi
done
- Connections established over TCP
#!/bin/bash
ABNORMAL_IP=$(netstat -an | awk '$4~/:80$/ && $6~/ESTABLISHED/{gsub(/:[0-9]+/,"",$5);a[$5]++}END{for(i in a)if(a[i]>100)print i}')
#gsub removes the colon and port from the fifth column (client IP)
for IP in $ABNORMAL_IP; do
    if [ $(iptables -vnL | grep -c "$IP") -eq 0 ]; then
        iptables -I INPUT -s $IP -j DROP
    fi
done
Block IPs with more than 10 SSH attempts per minute
- Get login status via lastb
#!/bin/bash
#Day of the week, month, day and hour:minute; %e displays single
#digits as " 7", while %d displays "07"
DATE=$(date +"%a %b %e %H:%M")
ABNORMAL_IP=$(lastb | grep "$DATE" | awk '{a[$3]++}END{for(i in a)if(a[i]>10)print i}')
for IP in $ABNORMAL_IP; do
    if [ $(iptables -vnL | grep -c "$IP") -eq 0 ]; then
        iptables -I INPUT -s $IP -j DROP
    fi
done
- Get login status from logs
#!/bin/bash
DATE=$(date +"%b %d %H")
ABNORMAL_IP="$(tail -n10000 /var/log/auth.log | grep "$DATE" | awk '/Failed/{a[$(NF-3)]++}END{for(i in a)if(a[i]>5)print i}')"
for IP in $ABNORMAL_IP; do
    if [ $(iptables -vnL | grep -c "$IP") -eq 0 ]; then
        iptables -A INPUT -s $IP -j DROP
        echo "$(date +"%F %T") - iptables -A INPUT -s $IP -j DROP" >> ~/ssh-login-limit.log
    fi
done
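The blockers in #4 are per-minute checks; rather than wrapping them in a loop, they would typically be driven from cron. A sketch of /etc/crontab entries (the script names and paths are hypothetical, not from the original):

```
# m h dom mon dow user  command
*   * *   *   *  root   /usr/local/sbin/block-http-abuse.sh
*   * *   *   *  root   /usr/local/sbin/block-ssh-abuse.sh
```

Each run then inspects only the most recent minute of logs, which is exactly what the DATE variables in the scripts already select for.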
Might come in handy...
--
End of line
> On May 11, 2023, at 12:38 PM, Clem Cole <clemc(a)ccc.com> wrote:
>
> I'm one of the many legacies of the over 50 years of teaching by Dan Siewiorek -- remember in the 1970s there was an infamous band (named after an interesting object BTW).
Ah, so Dan Siewiorek is Steely Dan IV, _not_ from Yokohama. Or perhaps Steely Dan V, from neither Yokohama nor Annandale-on-Hudson.
Adam
On 2023-05-04 10:58, Ralph Corderoy wrote:
> Twitter co-founder Jack Dorsey has been using it for a while.
Not only has he been using it for a while already, but he's also
contributing code and funding the developers of some projects (clients
and relays) with 14 BTC through fiatjaf, Nostr's creator.
> I suggest any further chat about Nostr moves to coff(a)tuhs.org.
CC'd.
Cheers.
Ángel
I've just today received a COBOL manual I ordered to find quite the nice surprise.
The manual itself is: "IBM OS Full American National Standard COBOL". It is listed as File No. S360-24, Order No. GC28-6396-4. On the back of the first page this is noted as the "Fifth Edition (September 1973)" and that the current edition "is a reprint of GC28-6396-3, incorporating changes released in TNL GN28-1002." Copyright year chain ends at 1972.
However, in addition to this manual are three addenda:
The first is a memo from Tim S., "Systems Analyst", addressed and cc'd to a few folks, providing an up-to-date listing (as of March 12*, 1976) of IBM System Reference Library materials. The attachment includes, among other things, documents for S/360, S/370, OS/360, BOS/360, OS/VS, and programming and diagnostic utilities. Each reference includes a volume number and an "SRL" number, whose definition I couldn't find, but presumably it's just a catalog number of some kind.
The second is a scan of a 31 page, hand-written document titled "COBOL Compiler Release 2.2" providing information on the "March 11, 1979, Release 2.2 of the COBOL compiler...IBM's implementation of the ANSI 1974 Standard for COBOL. The previous Release 1.1 implemented the 1968 ANSI Standard." The document goes on to detail numerous changes between these revisions.
Lastly is a Technical Newsletter bearing the same File and Order numbers as the full manual, but with a date of May 15, 1974 and newsletter number GN28-1048. This page bears a copyright chain out to 1974 and is simply a set of replacement pages for the manual, as was common at the time. The text indicates that all changes are denoted with a vertical bar printed to the left of the change, so this is essentially a diff between the Fifth Edition manual above and...wait for it..."Fourth Edition (May 1972); Fifth Edition (September 1973)". Strangely, the copyright notice on the back still indicates the same edition, but adds a reference back to the Fourth Edition as well. Strange, one of life's little mysteries? The copyright chain here is only out to 1973; never sure how much that means at any given instant. In any case, I couldn't find any evidence in the manual-proper of previous such updates being applied; in other words, no vertical bars spotted flipping through the pages at least.
Both the replacement pages and the catalog are still stapled together, and the manual-proper still contains the pages (that I spot checked) slated for replacement. It seems the original was even bound itself at one point, indicated by the ghost of a glued spine still lingering on the end of the pages, but both the replacement pages and manual itself also have 3-hole punches and are bound in an Acco binder. If the manual had a true cover, it's long gone.
Figured I'd share some of those details in case anything in this is in want of further illumination. For the record, the Sixth and Seventh editions of this same document appear to be on archive.org. I haven't plumbed their depths searching for evidence of aforementioned diff pages, they're probably just scans of complete published copies.
So all of this, for me at least, raises the question: is there any sort of equivalent to troff sources for documents from Big Blue? Truth be told, I only ordered this to have a paper COBOL reference on hand, should one ever need such a thing. If there are such document sources, I'd happily add "patching" them to produce a restoration of this to my studies. At the very least the two smaller addenda will get a scan here pretty soon.
- Matt G.
P.S. While my main focus is Bell UNIX documentation, I do peek around for stuff like this from time to time, but I'm much less inclined to spring for something without some functional value to me. That said, I'm looking for documents all the time, so if anyone has any tips on stuff that isn't well preserved in the public record that I should add to my searches, I'm happy to keep an eye out. I'm coming to quite enjoy finding things and getting them on the record.
Apologies, this was meant to go to another mailing list. I also posted
to COFF, so send any follow-ups there.
John Cowan wrote:
> I attended CWRU in 1975-76 and programmed the 1108 (abs, alphabetic, arccos,
> arcsin, arctan) with punch cards so I am definitely interested if the
> material is still available.
Thank you, I'll fill you in on the details.
*Unix on a 3B2-700 won't boot*
I have been going round and round getting it to boot and am at
the point where it might be the sd630.img disk image.
It keeps hanging in "DIAGNOSTICS".
I have reloaded all the files to no avail. Does anyone have a
*known working copy* of *sd630.img* they could share as a gzip ?
Other sims work fine like 3b2-400, Interdata-32 and PDP-11.
Ken
--
WWL 📚
Hello,
I received word from someone who went to Case Western Reserve
University, and is willing to send early 1970s ephemera to someone
interested in going through it. The description is:
"I've go stuff from my course work done on our Univac 1108/ChiOS system,
program listing, cpu code cards, etc."
Any takers?
Best regards,
Lars Brinkhoff
I have a 3b2/400 emulator running Unix V r3 fine,
but I have two questions.
Unix is set up with IP 10.0.2.15
I can telnet off it great *but* can not telnet into it. Is there a step I
am missing?
In the sim ini file I have set:
set NI enabled
attach NI nat:
Someone suggested:
attach nat:tcp=2323:10.0.2.15:23,tcp=2121:10.0.2.15:21
but that did not work.
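One detail to compare against the working line earlier in the ini: that suggestion omits the device name. Assuming the 3B2 simulator's Ethernet device is NI, as in the `attach NI nat:` line above, the form to try would be (a sketch, not a verified fix):

```
set NI enabled
attach NI nat:tcp=2323:10.0.2.15:23,tcp=2121:10.0.2.15:21
```

With that in place, `telnet localhost 2323` on the host should be forwarded to the guest's telnet port, provided the guest's telnetd is actually listening.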
This is what I get at boot time:
NAT args:
NAT network setup:
gateway =10.0.2.2/24(255.255.255.0)
DNS =10.0.2.3
dhcp_start =10.0.2.15
Protocol[State] FD Source Address Port Dest. Address Port RecvQ
SendQ
/home/ken/MYSIMS/System-V-r3/boot.ini-51> attach NI nat:
%SIM-INFO: Eth: opened OS device nat:
Thanks,
Ken
--
WWL 📚
I sent this to coff, but it bounced. Trying again.
[-tuhs] [+coff]
On Sun, Apr 2, 2023 at 3:39 AM Noel Hunt <noel.hunt(a)gmail.com> wrote:
Charles li reis, nostre emperesdre magnes, Set anz totz pleinz ad ested in
> Espagnes.
>
> A translation would be most helpful. It looks like a mixture
> of Spanish and Mediaeval French...ah, it is La Chanson de
> Roland.
>
Yes, it's Old French, and means "Charles the king, our great emperor[*] /
Seven full years has been in Spain." You pronounce it pretty much like
Spanish, except for the "z" which is pronounced "ts".
[*] Old French had two noun cases, nominative and oblique (a combination of
the Latin genitive, dative, accusative, and ablative). In 99% of modern
French nouns, only the oblique survives. In particular, "emperesdre" is
the old nominative of "empereor"; it survives today in the name
"L[']empriere". A dozen nouns picked up different semantics in the
nominative and both survived: sire/seigneur, prêtre/Provoire (proper name),
copain/compagnon, pâtre/pasteur, chantre/chanteur , maire/majeur,
gars/garçon, and (most surprising) on/homme. In a few nouns, only the
nominative survives: soeur, peintre, traître (English traitor is from the
oblique), and the names Charles, Georges, James (now in English only),
Hugues, Marie, and Eve.
>
[ Please post follow-ups to COFF ]
Ron,
Thanks for the history, enjoyed very much.
Quite relevant to early Unix, intertwined with VAXen, the IP stack from UCB, NSF-net & fakery.
The earliest documented Trojan, Unix or not, would be Ken's login/cc hack in his "Reflections on Trusting Trust" paper.
It was 1986 when Clifford Stoll tracked a KGB recruit who broke into MILNET, then created the first "honeypot" to trap him.
<https://en.wikipedia.org/wiki/Clifford_Stoll#Career>
<https://en.wikipedia.org/wiki/The_Cuckoo%27s_Egg_(book)>
1986 was also the first known PC virus according to Kaspersky.
<https://www.kaspersky.com.au/resource-center/threats/a-brief-history-of-com…
“Brain (boot) , the first PC virus, began infecting 5.2" floppy disks in 1986.”
On 2nd November 1988, the Morris worm escaped from a lab
& overloaded the Internet for a week,
causing CERT to be formed that same month in response.
<https://en.wikipedia.org/wiki/CERT_Coordination_Center>
The SANS Institute was formed the next year, 1989, creating structured training & security materials.
<https://en.wikipedia.org/wiki/SANS_Institute>
This structured, co-ordinated response, led by technical folk, not NatSec/ Intelligence/ Criminal investigation bodies,
created CVEs, Common Vulnerabilities and Exposures, as a way to identify & name
unique attacks & vectors, track them, and make vendors aware, forcing publicity & responses.
<https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures>
<https://cve.mitre.org>
The Internet eventually became a significant theatre of Crime & Espionage, Commercial & National Security.
Mandiant was formed in 2004 to identify, track, and find the sources of APTs, Advanced Persistent Threats.
In 2010, they described the APTs they tracked in their “M-trends” newsletter.
In Feb 2013, Mandiant publicly described “APT1” and the military unit & location they believed ran it.
<https://en.wikipedia.org/wiki/Mandiant>
<https://en.wikipedia.org/wiki/Advanced_persistent_threat>
<https://www.lawfareblog.com/mandiant-report-apt1>
<https://www.mandiant.com/resources/blog/mandiant-exposes-apt1-chinas-cyber-…>
=============
> On 2 Apr 2023, at 02:34, Ron Natalie <ron(a)ronnatalie.com> wrote:
>
> Once again, I must dredge up this post from 1991….
=============
For future reference, Kremvax lives! [ datestamp in email header ]
iMac1:steve$ host kremvax.demos.su
kremvax.demos.su has address 194.87.0.20
kremvax.demos.su mail is handled by 100 relay2.demos.su.
kremvax.demos.su mail is handled by 50 relay1.demos.su.
iMac1:steve$ ping -c2 kremvax.demos.su
PING kremvax.demos.su (194.87.0.20): 56 data bytes
64 bytes from 194.87.0.20: icmp_seq=0 ttl=46 time=336.127 ms
64 bytes from 194.87.0.20: icmp_seq=1 ttl=46 time=335.823 ms
--- kremvax.demos.su ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 335.823/335.975/336.127/0.152 ms
=============
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Hi.
I made an error on data entry installing TCP Networking on a 3b2/400 on a
SIM.
Is there a way to re-start the CONFIGURATION process without having to
start over from scratch re-installing the entire system?
The section covering hostname,host's network number, etc.
Thanks,
Ken
--
WWL 📚
[Redirected to COFF for some anecdotal E&S-related history and non-UNIX
terminal room nostalgia.]
On 3/7/23 9:43 PM, Lars Brinkhoff wrote:
> Noel Chiappa wrote:
>>> The first frame buffers from Evans and Sutherland were at University
>>> of Utah, DOD SITES and NYIT CGL as I recall. Circa 1974 to 1978.
>>
>> Were those on PDP-11's, or PDP-10's? (Really early E+S gear attached to
>> PDP-10's; '74-'78 sounds like an interim period.)
>
> The Picture System from 1974 was based on a PDP-11/05. It looks like
> vector graphics rather than a frame buffer though.
>
> http://archive.computerhistory.org/resources/text/Evans_Sutherland/EvansSut…
E&S LDS-1s used PDP-10s as their host systems. LDS-2s could at least in
principle use several different hosts (including spanning a range of
word sizes, e.g., a SEL-840 with 24 bit words or a 16 bit PDP-11.)
The Line Drawing Systems drove calligraphic displays. No frame buffers.
The early Picture Systems (like the brochure referenced by Lars) also
drove calligraphic displays but did sport a line segment "refresh
buffer" so that screen refreshes weren't dependent on the whole
pipeline's performance.
At least one heavily customized LDS-2 (described further below) produced
raster output by 1974 (and likely earlier in design and testing) and had
a buffer for raster refresh which exhibited some of what we think of as
the functionality of a frame buffer fitting the time frame referenced by
Noel for other E&S products.
On 3/8/23 10:21 AM, Larry McVoy wrote:
> I really miss terminal rooms. I learned so much looking over the
> shoulders of more experienced people.
Completely agree. They were the "playground learning" that did it all:
educate, build craft and community, and occasionally bestow humility.
Although it completely predates frame buffer technology, the PDP-10
terminal room of the research computing environment at CWRU in the 1970s
was especially remarkable as well as personally influential. All
(calligraphic) graphics terminals and displays (though later a few
Datapoint CRTs appeared.) There was an LDS-1 hosted on the PDP-10 and
later an LDS-2 (which was co-located but not part of the PDP-10
environment.)
The chair of the department, Edward (Ted) Glaser, had been recruited
from MIT in 1968 and was heavily influential in guiding the graphics
orientation of the facilities, and later, in the design of the
customized LDS-2. Especially remarkable as he had been blind since he
was 8. He had a comprehensive vision of systems and thinking about them
that influenced a lot about the department's programs and research.
When I arrived in 1972, I only had a fleeting overlap with the LDS-1 to
experience some of its games (color wheel lorgnettes and carrier
landings!). The PDP-10 was being modified for TENEX and the LDS-1 was
being decommissioned. I recall a tablet and button box for LDS-1 input
devices.
The room was kept dimly lit with the overhead lighting off and only the
glow of the displays and small wattage desk lamps. It shared the raised
floor environment with the PDP-10 machine room (though was walled off
from it) and so had a "quiet-loud" aura from all the white noise. The
white noise cocooned you but permitted conversation and interaction with
others that didn't usually disturb the uninvolved.
The luxury terminals were IMLAC PDS-1s. There was a detachable switch
and indicator console that could be swapped between them for debugging
or if you simply liked having the blinking lights in view. When not in
use for real work the IMLACs would run Space War, much to the detriment
of IMLAC keyboards. They could handle pretty complex displays, like, a
screen full of dense text before flicker might set in. Light pens
provided pointing input.
The bulk of the terminals were an array of DEC VT02s. Storage tube
displays (so no animation possible), but with joysticks for pointing and
interacting. There were never many VT02s made and we always believed we
had the largest single collection of them.
None of these had character generators. The LDS-1 and the IMLACs drew
their own characters programmatically. A PDP-8/I drove the VT02s and
stroked all the characters. It did it at about 2400 baud but when the 8
got busy you could perceive the drawing of the characters like a scribe
on speed. If you stood way back to take in the room you could also watch
the PDP-8 going around as the screens brightened momentarily as the
characters/images were drawn. I was told that CWRU wrote the software
for the PDP-8 and gave it to DEC, in return DEC gave CWRU $1 and the
biggest line printer they sold. (The line printer did upper and lower
case, and the University archivists swooned when presented with theses
printed on it -- RUNOFF being akin to magic in a typewriter primitive
world.)
Until the Datapoint terminals arrived all the devices in the room either
were computers themselves or front-ended by one. Although I only saw it
happen once, the LDS-1 with its rather intimate connection to the -10
was particularly attuned to the status of TOPS-10 and would flash
"CRASH" before users could tell that something was wrong vs. just being
slow.
(We would later run TOPS-10 for amusement. The system had 128K words in
total: 4 MA10 16K bays and 1 MD10 64K bay. TENEX needed a minimum of 80K
to "operate" though it'd be misleading to describe that configuration as
"running". If we lost the MD10 bay that meant no TENEX so we had a
DECtape-swapping configuration of TOPS-10 for such moments because,
well, a PDP-10 with 8 DECtapes twirling is pretty humorously theatrical.)
All the displays (even the later Datapoints) had green or blue-green
phosphors. This had the side effect that after several hours of
staring at them made anything which was white look pink. This was
especially pronounced in the winter in that being Cleveland it wasn't
that unusual to leave to find a large deposit of seemingly psychedelic
snow that hadn't been there when you went in.
The LDS-2 arrived in the winter of 1973-4. It was a highly modified
LDS-2 that produced raster graphics and shaded images in real-time. It
was the first system to do that and was called the Case Shaded Graphics
System (SGS). (E&S called it the Halftone System as it wouldn't do color
in real-time. In addition to a black & white raster display, It had a
35mm movie camera, a Polaroid camera, and an RGB filter that would
triple-expose each frame and so in a small way retained the charm of the
lorgnettes used on the LDS-1 to make color happen but not in real-time.)
It was hosted by a PDP-11/40 running RT-11.
Declining memory prices helped enable the innovations in the SGS as it
incorporated more memory components than the previous calligraphic
systems. The graphics pipeline was extended such that after translation
and clipping there was a Y-sort box that ordered the polygons from top
to bottom for raster scanning followed by a Visible Surface Processor
that separated hither from yon and finally a Gouraud Shader that
produced the final image to a monitor or one of the cameras. Physically
the system was 5 or maybe 6 bays long not including the 11/40 bay.
The SGS had some teething problems after its delivery. Ivan Sutherland
even came to Cleveland to work on it though he has claimed his main
memory of that is the gunfire he heard from the Howard Johnson's hotel
next to campus. The University was encircled by several distressed
communities at the time. A "bullet hole through glass" decal appeared on
the window of the SGS's camera bay to commemorate his experience.
The SGS configuration was unique but a number of its elements were
incorporated into later Picture Systems. It's my impression that the LDS
systems were pretty "one off" and the Picture Systems became the
(relative) "volume, off the shelf" product from E&S. (I'd love to read a
history of all the things E&S did in that era.)
By 1975-6 the SGS was being used by projects ranging from SST stress
analyses to mathematicians producing videos of theoretical concepts. The
exaggerated images of stresses on aircraft structures got pretty widely
distributed and referenced at the time. The SGS was more of a production
system used by other departments and entities rather than computer
graphics research as such, in some ways its (engineering) research
utility was achieved by its having existed. One student, Ben Jones,
created an extended ALGOL-60 to allow programming in something other
than the assembly language.
As the SGS came online in 1975 the PDP-10 was being decommissioned and
the calligraphic technologies associated with it vanished along with it.
A couple of years later a couple of Teraks appeared and by the end of
the 1970s frame buffers as we generally think of them were economically
practical. That along with other processing improvements rendered the
SGS obsolete, and so it was decommissioned in 1980 and donated to the
Computer History Museum where I imagine it sits in storage next to a
LINC-8 or the Ark of the Covenant or something.
One of the SGS's bays (containing the LDS-2 Channel Control, the front
of the pipeline LDS program interpreter running out of the host's
memory) and the PDP-11 interface is visible via this link:
https://www.computerhistory.org/collections/catalog/102691213
The bezels on the E&S bays were cosmetically like the DEC ones of the
same era. They were all smoked glass so the blinking lights were visible
but had to be raised if you wanted to see the identifying legends for them.
Hi,
Has anyone been successful in communicating using cu or some
other method to transfer files between two SIMs running Unix V?
If so I would appreciate some help.
Thanks,
Ken
--
WWL 📚
Fortran question for Unix System-5 r3.
When executing Fortran programs requiring input, the screen will
show only a blank screen (no prompt). After entering input anyway, the
program completes under Unix System V *r3*.
When the same program is compiled under Unix System V *r1* it
works as expected.
Sounds like on Unix System V *r3* the output buffer is not being flushed.
I tried re-compiling F77. No help.
Fortran code follows:
      PROGRAM EASTER
      INTEGER YEAR,METCYC,CENTRY,ERROR1,ERROR2,DAY
      INTEGER EPACT,LUNA
C A PROGRAM TO CALCULATE THE DATE OF EASTER
      PRINT '(A)',' INPUT THE YEAR FOR WHICH EASTER'
      PRINT '(A)',' IS TO BE CALCULATED'
      PRINT '(A)',' ENTER THE WHOLE YEAR, E.G. 1978 '
      READ *,YEAR
C CALCULATING THE YEAR IN THE 19 YEAR METONIC CYCLE-METCYC
      METCYC = MOD(YEAR,19)+1
      IF(YEAR.LE.1582)THEN
         DAY = (5*YEAR)/4
         EPACT = MOD(11*METCYC-4,30)+1
      ELSE
C CALCULATING THE CENTURY-CENTRY
         CENTRY = (YEAR/100)+1
C ACCOUNTING FOR ARITHMETIC INACCURACIES
C IGNORES LEAP YEARS ETC.
         ERROR1 = (3*CENTRY/4)-12
         ERROR2 = ((8*CENTRY+5)/25)-5
C LOCATING SUNDAY
         DAY = (5*YEAR/4)-ERROR1-10
C LOCATING THE EPACT(FULL MOON)
         EPACT = MOD(11*METCYC+20+ERROR2-ERROR1,30)
         IF(EPACT.LT.0)EPACT=30+EPACT
         IF((EPACT.EQ.25.AND.METCYC.GT.11).OR.EPACT.EQ.24)THEN
            EPACT=EPACT+1
         ENDIF
      ENDIF
C FINDING THE FULL MOON
      LUNA=44-EPACT
      IF(LUNA.LT.21)THEN
         LUNA=LUNA+30
      ENDIF
C LOCATING EASTER SUNDAY
      LUNA=LUNA+7-(MOD(DAY+LUNA,7))
C LOCATING THE CORRECT MONTH
      IF(LUNA.GT.31)THEN
         LUNA = LUNA - 31
         PRINT '(A,I5)',' FOR THE YEAR ',YEAR
         PRINT '(A,I3)',' EASTER FALLS ON APRIL ',LUNA
      ELSE
         PRINT '(A,I5)',' FOR THE YEAR ',YEAR
         PRINT '(A,I3)',' EASTER FALLS ON MARCH ',LUNA
      ENDIF
      END
Any help would be appreciated,
Ken
--
WWL 📚
To make exception handling robust, I think every exception needs to be explicitly handled somewhere. If an exception is not handled by a function, that fact must be specified in the function declaration. In effect the compiler can check that every exception has a handler somewhere. I think you can implement it using different syntactic sugar than Go's obnoxious error handling, but it is basically the same (though you may be tempted to make it more efficient).
> On Mar 10, 2023, at 6:21 AM, Larry Stewart <stewart(a)serissa.com> wrote:
> TLDR exceptions don't make it better, they make it different.
>
> The Mesa and Cedar languages at PARC CSL were intended to be "Systems Languages" and fully embraced exceptions.
>
> The problem is that it is extremely tempting for the author of a library to use them, and equally tempting for the authors of library calls used by the first library, and so on.
> At the application level, literally anything can happen on any call.
>
> The Cedar OS was a library OS, where applications ran in the same address space, since there was no VM. In 1982 or so I set out to write a shell for it, and was determined that regardless of what happened, the shell should not crash, so I set out to guard every single call with handlers for every exception they could raise.
>
> This was an immensely frustrating process because while the language suggested that the author of a library capture exceptions on the way by and translate them to one at the package level, this is a terrible idea in its own way, because you can't debug - the state of the ultimate problem was lost. So no one did this, and at the top level, literally any exception could occur.
>
> Another thing that happens with exceptions is that programmers get the bright idea to use them for conditions which are uncommon, but expected, so any user of the function has to write complicated code to deal with these cases.
>
> On the whole, I came away with a great deal of grudging respect for ERRNO as striking a great balance between ease of use and specificity.
>
> I also evolved Larry's Theory of Exceptions, which is that it is the programmer's job to sort exceptional conditions into actionable categories: (1) resolvable by the user (bad arguments) (2) Temporary (out of network sockets or whatever) (3) resolvable by the sysadmin (config) (4) real bug, resolvable by the author.
>
> The usual practice of course is the popup "Received unknown error, OK?"
>
> -Larry
>
>> On Mar 10, 2023, at 8:15 AM, Ralph Corderoy <ralph(a)inputplus.co.uk> wrote:
>>
>> Hi Noel,
>>
>>>> if you say above that most people are unfamiliar with them due to
>>>> their use of goto then that's probably wrong
>>> I didn't say that.
>>
>> Thanks for clarifying; I did know it was a possibility.
>>
>>> I was just astonished that in a long thread about handling exceptional
>>> conditions, nobody had mentioned . . . exceptions. Clearly, either
>>> unfamiliarity (perhaps because not many languages provide them - as you
>>> point out, Go does not), or not top of mind.
>>
>> Or perhaps those happy to use gotos also tend to be those who dislike
>> exceptions. :-)
>>
>> Anyway, I'm off-TUHS-pic so follow-ups set to goto COFF.
>>
>> --
>> Cheers, Ralph.
[bumping to COFF]
On Wed, Mar 8, 2023 at 2:05 PM ron minnich <rminnich(a)gmail.com> wrote:
> The wheel of reincarnation discussion got me to thinking:
>
> What I'm seeing is reversing the rotation of the wheel of reincarnation. Instead of pulling the task (e.g. graphics) from a special purpose device back into the general purpose domain, the general purpose computing domain is pushed into the special purpose device.
>
> I first saw this almost 10 years ago with a WLAN modem chip that ran linux on its 4 core cpu, all of it in a tiny package. It was faster, better, and cheaper than its traditional embedded predecessor -- because the software stack was less dedicated and single-company-created. Take Linux, add some stuff, voila! WLAN modem.
>
> Now I'm seeing it in peripheral devices that have, not one, but several independent SoCs, all running Linux, on one card. There's even been a recent remote code exploit on, ... an LCD panel.
>
> Any of these little devices, with the better part of a 1G flash and a large part of 1G DRAM, dwarfs anything Unix ever ran on. And there are more and more of them, all over the little PCB in a laptop.
>
> The evolution of platforms like laptops to becoming full distributed systems continues.
> The wheel of reincarnation spins counter clockwise -- or sideways?
About a year ago, I ran across an email written a decade or more prior
on some mainframe mailing list where someone wrote something like,
"wow! It just occurred to me that my Athlon machine is faster than the
ES/3090-600J I used in 1989!" Some guy responded angrily, rising to
the wounded honor of IBM, raving about how preposterous this was
because the mainframe could handle a thousand users logged in at one
time and there's no way this Linux box could ever do that.
I was struck by the absurdity of that; it's such a ridiculous
non-comparison. The mainframe had layers of terminal concentrators,
3270 controllers, IO controllers, etc, etc, and a software ecosystem
that made heavy use of all of that, all to keep user interaction _off_
of the actual CPU (I guess freeing that up to run COBOL programs in
batch mode...); it's not as though every time a mainframe user typed
something into a form on their terminal it interrupted the primary
CPU.
Of course, the first guy was right: the AMD machine probably _was_
more capable than a 3090 in terms of CPU performance, RAM and storage
capacity, and raw bandwidth between the CPU and IO subsystems. But the
3090 was really more like a distributed system than the Athlon box
was, with all sorts of offload capabilities. For that matter, a
thousand users probably _could_ telnet into the Athlon system. With
telnet in line mode, it'd probably even be decently responsive.
So often it seems to me like end-user systems are just continuing to
adopt "large system" techniques. Nothing new under the sun.
> I'm no longer sure the whole idea of the wheel of reincarnation is even applicable.
I often feel like the wheel has fallen onto its side, and we're
continually picking it up from the edge and flipping it over, ad
nauseam.
- Dan C.
Hi Steffen,
COFF'd.
> Very often i find myself needing a restart necessity, so "continue
> N" would that be. Then again when "N" is a number instead of
> a label this is a (let alone maintainance) mess but for shortest
> code paths.
Do you mean ‘continue’ which re-tests the condition or more like Perl's
‘redo’ which re-starts the loop's body?
‘The "redo" command restarts the loop block without evaluating the
conditional again. The "continue" block, if any, is not executed.’
— perldoc -f redo
So like a ‘goto redo’ in

    while (...) {
    redo:
        ...
        if (...)
            goto redo;
        ...
    }
--
Cheers, Ralph.
TLDR exceptions don't make it better, they make it different.
The Mesa and Cedar languages at PARC CSL were intended to be "Systems Languages" and fully embraced exceptions.
The problem is that it is extremely tempting for the author of a library to use them, and equally tempting for the authors of library calls used by the first library, and so on.
At the application level, literally anything can happen on any call.
The Cedar OS was a library OS, where applications ran in the same address space, since there was no VM. In 1982 or so I set out to write a shell for it, and was determined that regardless of what happened, the shell should not crash, so I set out to guard every single call with handlers for every exception they could raise.
This was an immensely frustrating process because while the language suggested that the author of a library capture exceptions on the way by and translate them to one at the package level, this is a terrible idea in its own way, because you can't debug - the state of the ultimate problem was lost. So no one did this, and at the top level, literally any exception could occur.
Another thing that happens with exceptions is that programmers get the bright idea to use them for conditions which are uncommon, but expected, so any user of the function has to write complicated code to deal with these cases.
On the whole, I came away with a great deal of grudging respect for ERRNO as striking a great balance between ease of use and specificity.
I also evolved Larry's Theory of Exceptions, which is that it is the programmer's job to sort exceptional conditions into actionable categories: (1) resolvable by the user (bad arguments) (2) Temporary (out of network sockets or whatever) (3) resolvable by the sysadmin (config) (4) real bug, resolvable by the author.
The usual practice of course is the popup "Received unknown error, OK?"
-Larry
> On Mar 10, 2023, at 8:15 AM, Ralph Corderoy <ralph(a)inputplus.co.uk> wrote:
>
> Hi Noel,
>
>>> if you say above that most people are unfamiliar with them due to
>>> their use of goto then that's probably wrong
>>
>> I didn't say that.
>
> Thanks for clarifying; I did know it was a possibility.
>
>> I was just astonished that in a long thread about handling exceptional
>> conditions, nobody had mentioned . . . exceptions. Clearly, either
>> unfamiliarity (perhaps because not many languages provide them - as you
>> point out, Go does not), or not top of mind.
>
> Or perhaps those happy to use gotos also tend to be those who dislike
> exceptions. :-)
>
> Anyway, I'm off-TUHS-pic so follow-ups set to goto COFF.
>
> --
> Cheers, Ralph.
On Fri, Mar 10, 2023 at 6:15 AM Ralph Corderoy <ralph(a)inputplus.co.uk>
wrote:
> Hi Noel,
>
> > > if you say above that most people are unfamiliar with them due to
> > > their use of goto then that's probably wrong
> >
> > I didn't say that.
>
> Thanks for clarifying; I did know it was a possibility.
>
Exception handling is a great leap sideways. It's a supercharged goto with
steroids on top. In some ways more constrained, in other ways more prone to
abuse.
Example:
I diagnosed performance problems in a program that would call into
'waiting' threads that would read data from a pipe and then queue work.
Easy, simple, straightforward design. Except they used exceptions to then
process the packets rather than having a proper lockless producer /
consumer queue.
Exceptions are great for keeping the code linear and ignoring error
conditions logically, but still having them handled "somewhere" above the
current code and writing the code such that when it gets an abort, partial
work is cleaned up and trashed.
Global exception handlers are both good and bad. All errors become
tracebacks to where they occurred. People often don't disambiguate between
expected and unexpected exceptions, so programming errors get lumped in
with remote devices committing protocol errors, which get lumped in with
your config file having a typo so that /dev/ttyU2 doesn't exist. It can be
hard for the user to know what comes next when it's all jumbled together.
In-line error handling, at least, can catch the expected things and give a
more reasonable error near to where it happened, so I know whether my next
step is vi prog.conf or email support(a)prog.com.
So it's a hate-hate relationship with both. What do I hate the least?
That's a three-drink minimum for the answer.
Warner