Fortran question for Unix System V r3.

When running a Fortran program that requires input under Unix System V
r3, the prompt never appears; the screen stays blank. If input is
entered anyway, the program completes. When the same program is
compiled under Unix System V r1, it works as expected. It sounds as
though on Unix System V r3 the output buffer is not being flushed
before the read. I tried re-compiling f77; no help.
Fortran code follows:
      PROGRAM EASTER
      INTEGER YEAR,METCYC,CENTRY,ERROR1,ERROR2,DAY
      INTEGER EPACT,LUNA
C     A PROGRAM TO CALCULATE THE DATE OF EASTER
      PRINT '(A)',' INPUT THE YEAR FOR WHICH EASTER'
      PRINT '(A)',' IS TO BE CALCULATED'
      PRINT '(A)',' ENTER THE WHOLE YEAR, E.G. 1978 '
      READ *,YEAR
C     CALCULATING THE YEAR IN THE 19 YEAR METONIC CYCLE-METCYC
      METCYC = MOD(YEAR,19)+1
      IF(YEAR.LE.1582)THEN
         DAY = (5*YEAR)/4
         EPACT = MOD(11*METCYC-4,30)+1
      ELSE
C     CALCULATING THE CENTURY-CENTRY
         CENTRY = (YEAR/100)+1
C     ACCOUNTING FOR ARITHMETIC INACCURACIES
C     IGNORES LEAP YEARS ETC.
         ERROR1 = (3*CENTRY/4)-12
         ERROR2 = ((8*CENTRY+5)/25)-5
C     LOCATING SUNDAY
         DAY = (5*YEAR/4)-ERROR1-10
C     LOCATING THE EPACT(FULL MOON)
         EPACT = MOD(11*METCYC+20+ERROR2-ERROR1,30)
         IF(EPACT.LT.0)EPACT=30+EPACT
         IF((EPACT.EQ.25.AND.METCYC.GT.11).OR.EPACT.EQ.24)THEN
            EPACT=EPACT+1
         ENDIF
      ENDIF
C     FINDING THE FULL MOON
      LUNA=44-EPACT
      IF(LUNA.LT.21)THEN
         LUNA=LUNA+30
      ENDIF
C     LOCATING EASTER SUNDAY
      LUNA=LUNA+7-(MOD(DAY+LUNA,7))
C     LOCATING THE CORRECT MONTH
      IF(LUNA.GT.31)THEN
         LUNA = LUNA - 31
         PRINT '(A,I5)',' FOR THE YEAR ',YEAR
         PRINT '(A,I3)',' EASTER FALLS ON APRIL ',LUNA
      ELSE
         PRINT '(A,I5)',' FOR THE YEAR ',YEAR
         PRINT '(A,I3)',' EASTER FALLS ON MARCH ',LUNA
      ENDIF
      END
Any help would be appreciated,
Ken
--
WWL 📚
To make exception handling robust, I think every exception needs to be explicitly handled somewhere. If an exception is not handled by a function, that fact must be specified in the function's declaration. In effect, the compiler can then check that every exception has a handler somewhere. I think you could implement this with different syntactic sugar than Go's obnoxious error handling, but it would be basically the same (though you may be tempted to make it more efficient).
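As a purely illustrative sketch of that proposal (all names here are invented, and the check is done at run time with a decorator rather than at compile time as the proposal envisions):

```python
# Sketch: a function declares which exceptions it may let escape.
# Declared exceptions propagate normally; anything undeclared is
# flagged as a leak, standing in for the proposed compiler check.

def may_raise(*declared):
    """Mark which exception types a function is allowed to let escape."""
    def wrap(fn):
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except declared:
                raise  # declared: propagate to whoever handles it
            except Exception as exc:
                # Undeclared: the "compiler check" fires here instead.
                raise RuntimeError(
                    f"{fn.__name__} leaked undeclared {type(exc).__name__}"
                ) from exc
        inner.__name__ = fn.__name__
        return inner
    return wrap

@may_raise(ValueError)
def parse_port(text):
    return int(text)  # int() raises ValueError on bad input
```

Here parse_port("80") returns normally, parse_port("nope") raises the declared ValueError, and a function that raised an undeclared KeyError would be flagged at the call site.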
> On Mar 10, 2023, at 6:21 AM, Larry Stewart <stewart(a)serissa.com> wrote:
> TLDR exceptions don't make it better, they make it different.
>
> The Mesa and Cedar languages at PARC CSL were intended to be "Systems Languages" and fully embraced exceptions.
>
> The problem is that it is extremely tempting for the author of a library to use them, and equally tempting for the authors of library calls used by the first library, and so on.
> At the application level, literally anything can happen on any call.
>
> The Cedar OS was a library OS, where applications ran in the same address space, since there was no VM. In 1982 or so I set out to write a shell for it, and was determined that regardless of what happened, the shell should not crash, so I set out to guard every single call with handlers for every exception they could raise.
>
> This was an immensely frustrating process because while the language suggested that the author of a library capture exceptions on the way by and translate them to one at the package level, this is a terrible idea in its own way, because you can't debug - the state of the ultimate problem was lost. So no one did this, and at the top level, literally any exception could occur.
>
> Another thing that happens with exceptions is that programmers get the bright idea to use them for conditions which are uncommon, but expected, so any user of the function has to write complicated code to deal with these cases.
>
> On the whole, I came away with a great deal of grudging respect for ERRNO as striking a great balance between ease of use and specificity.
>
> I also evolved Larry's Theory of Exceptions, which is that it is the programmer's job to sort exceptional conditions into actionable categories: (1) resolvable by the user (bad arguments) (2) Temporary (out of network sockets or whatever) (3) resolvable by the sysadmin (config) (4) real bug, resolvable by the author.
>
> The usual practice of course is the popup "Received unknown error, OK?"
>
> -Larry
>
>> On Mar 10, 2023, at 8:15 AM, Ralph Corderoy <ralph(a)inputplus.co.uk> wrote:
>>
>> Hi Noel,
>>
>>>> if you say above that most people are unfamiliar with them due to
>>>> their use of goto then that's probably wrong
>>> I didn't say that.
>>
>> Thanks for clarifying; I did know it was a possibility.
>>
>>> I was just astonished that in a long thread about handling exceptional
>>> conditions, nobody had mentioned . . . exceptions. Clearly, either
>>> unfamiliarity (perhaps because not many languages provide them - as you
>>> point out, Go does not), or not top of mind.
>>
>> Or perhaps those happy to use gotos also tend to be those who dislike
>> exceptions. :-)
>>
>> Anyway, I'm off-TUHS-pic so follow-ups set to goto COFF.
>>
>> --
>> Cheers, Ralph.
[bumping to COFF]
On Wed, Mar 8, 2023 at 2:05 PM ron minnich <rminnich(a)gmail.com> wrote:
> The wheel of reincarnation discussion got me to thinking:
>
> What I'm seeing is reversing the rotation of the wheel of reincarnation. Instead of pulling the task (e.g. graphics) from a special purpose device back into the general purpose domain, the general purpose computing domain is pushed into the special purpose device.
>
> I first saw this almost 10 years ago with a WLAN modem chip that ran linux on its 4 core cpu, all of it in a tiny package. It was faster, better, and cheaper than its traditional embedded predecessor -- because the software stack was less dedicated and single-company-created. Take Linux, add some stuff, voila! WLAN modem.
>
> Now I'm seeing it in peripheral devices that have, not one, but several independent SoCs, all running Linux, on one card. There's even been a recent remote code exploit on, ... an LCD panel.
>
> Any of these little devices, with the better part of a 1G flash and a large part of 1G DRAM, dwarfs anything Unix ever ran on. And there are more and more of them, all over the little PCB in a laptop.
>
> The evolution of platforms like laptops to becoming full distributed systems continues.
> The wheel of reincarnation spins counter clockwise -- or sideways?
About a year ago, I ran across an email written a decade or more prior
on some mainframe mailing list where someone wrote something like,
"wow! It just occurred to me that my Athlon machine is faster than the
ES/3090-600J I used in 1989!" Some guy responded angrily, rising to
the wounded honor of IBM, raving about how preposterous this was
because the mainframe could handle a thousand users logged in at one
time and there's no way this Linux box could ever do that.
I was struck by the absurdity of that; it's such a ridiculous
non-comparison. The mainframe had layers of terminal concentrators,
3270 controllers, IO controllers, etc, etc, and a software ecosystem
that made heavy use of all of that, all to keep user interaction _off_
of the actual CPU (I guess freeing that up to run COBOL programs in
batch mode...); it's not as though every time a mainframe user typed
something into a form on their terminal it interrupted the primary
CPU.
Of course, the first guy was right: the AMD machine probably _was_
more capable than a 3090 in terms of CPU performance, RAM and storage
capacity, and raw bandwidth between the CPU and IO subsystems. But the
3090 was really more like a distributed system than the Athlon box
was, with all sorts of offload capabilities. For that matter, a
thousand users probably _could_ telnet into the Athlon system. With
telnet in line mode, it'd probably even be decently responsive.
So often it seems to me like end-user systems are just continuing to
adopt "large system" techniques. Nothing new under the sun.
> I'm no longer sure the whole idea of the wheel of reincarnation is even applicable.
I often feel like the wheel has fallen onto its side, and we're
continually picking it up from the edge and flipping it over, ad
nauseam.
- Dan C.
Hi Steffen,
COFF'd.
> Very often I find myself needing a restart, so "continue N" would
> be it. Then again, when "N" is a number instead of a label this is
> a (not to mention maintenance) mess except for the shortest code
> paths.
Do you mean ‘continue’ which re-tests the condition or more like Perl's
‘redo’ which re-starts the loop's body?
‘The "redo" command restarts the loop block without evaluating the
conditional again. The "continue" block, if any, is not executed.’
— perldoc -f redo
So like a ‘goto redo’ in
    while (...) {
    redo:
        ...
        if (...)
            goto redo;
        ...
    }
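In a language without goto (Python here, purely as an illustration, since Perl's redo is the real thing), the same effect can be had by wrapping the loop body in an inner loop:

```python
# Emulating Perl's "redo": the inner loop re-runs the body without
# re-testing the outer loop's condition or advancing to the next item.

def process(items):
    results = []
    for item in items:
        while True:                   # body wrapper: the "redo" target
            item = item.strip()
            if item.startswith(">"):  # e.g. peel one quote level, then redo
                item = item[1:]
                continue              # re-run the body with the same item
            results.append(item)
            break                     # body done; proceed to the next item
    return results
```

With this, process([">> a", "b"]) peels both ">" markers off the first item before moving on, yielding ["a", "b"].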
--
Cheers, Ralph.
TLDR exceptions don't make it better, they make it different.
The Mesa and Cedar languages at PARC CSL were intended to be "Systems Languages" and fully embraced exceptions.
The problem is that it is extremely tempting for the author of a library to use them, and equally tempting for the authors of library calls used by the first library, and so on.
At the application level, literally anything can happen on any call.
The Cedar OS was a library OS, where applications ran in the same address space, since there was no VM. In 1982 or so I set out to write a shell for it, and was determined that regardless of what happened, the shell should not crash, so I set out to guard every single call with handlers for every exception they could raise.
This was an immensely frustrating process because while the language suggested that the author of a library capture exceptions on the way by and translate them to one at the package level, this is a terrible idea in its own way, because you can't debug - the state of the ultimate problem was lost. So no one did this, and at the top level, literally any exception could occur.
Another thing that happens with exceptions is that programmers get the bright idea to use them for conditions which are uncommon, but expected, so any user of the function has to write complicated code to deal with these cases.
On the whole, I came away with a great deal of grudging respect for ERRNO as striking a great balance between ease of use and specificity.
I also evolved Larry's Theory of Exceptions, which is that it is the programmer's job to sort exceptional conditions into actionable categories: (1) resolvable by the user (bad arguments) (2) Temporary (out of network sockets or whatever) (3) resolvable by the sysadmin (config) (4) real bug, resolvable by the author.
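Those four categories can be sketched concretely (the exception classes and the mapping below are invented examples, not from the original post):

```python
# Sketch of "Larry's Theory of Exceptions": the program, not the user,
# sorts failures into actionable categories before reporting them.

USER, TEMPORARY, SYSADMIN, BUG = "user", "temporary", "sysadmin", "bug"

CATEGORY = {
    ValueError:        USER,       # bad arguments: resolvable by the user
    ConnectionError:   TEMPORARY,  # out of sockets etc.: retry later
    PermissionError:   SYSADMIN,   # permissions/config: call the admin
    FileNotFoundError: SYSADMIN,
}

def categorize(exc):
    for etype, cat in CATEGORY.items():
        if isinstance(exc, etype):
            return cat
    return BUG                     # anything unexpected is the author's bug

def run(action):
    try:
        return action()
    except Exception as exc:
        # Better than the popup "Received unknown error, OK?"
        print(f"[{categorize(exc)}] {exc}")
        raise
```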
The usual practice of course is the popup "Received unknown error, OK?"
-Larry
> On Mar 10, 2023, at 8:15 AM, Ralph Corderoy <ralph(a)inputplus.co.uk> wrote:
>
> Hi Noel,
>
>>> if you say above that most people are unfamiliar with them due to
>>> their use of goto then that's probably wrong
>>
>> I didn't say that.
>
> Thanks for clarifying; I did know it was a possibility.
>
>> I was just astonished that in a long thread about handling exceptional
>> conditions, nobody had mentioned . . . exceptions. Clearly, either
>> unfamiliarity (perhaps because not many languages provide them - as you
>> point out, Go does not), or not top of mind.
>
> Or perhaps those happy to use gotos also tend to be those who dislike
> exceptions. :-)
>
> Anyway, I'm off-TUHS-pic so follow-ups set to goto COFF.
>
> --
> Cheers, Ralph.
On Fri, Mar 10, 2023 at 6:15 AM Ralph Corderoy <ralph(a)inputplus.co.uk>
wrote:
> Hi Noel,
>
> > > if you say above that most people are unfamiliar with them due to
> > > their use of goto then that's probably wrong
> >
> > I didn't say that.
>
> Thanks for clarifying; I did know it was a possibility.
>
Exception handling is a great leap sideways. It's a supercharged goto on
steroids: in some ways more constrained, in other ways more prone to
abuse.
Example:
I diagnosed performance problems in a program that would call into
'waiting' threads that would read data from a pipe and then queue work.
Easy, simple, straightforward design. Except they used exceptions to then
process the packets rather than having a proper lockless producer /
consumer queue.
Exceptions are great for keeping the code linear and ignoring error
conditions logically, but still having them handled "somewhere" above the
current code and writing the code such that when it gets an abort, partial
work is cleaned up and trashed.
Global exception handlers are both good and bad. All errors become
tracebacks to where they occurred. People often don't disambiguate
between expected and unexpected exceptions, so programming errors get
lumped in with remote devices committing protocol errors, which get
lumped in with your config file having a typo so that /dve/ttyU2
doesn't exist. It can be hard for the user to know what comes next when
it's all jumbled together. In-line error handling, at least, can catch
the expected things and give a more reasonable error near where it
happened, so I know whether my next step is vi prog.conf or emailing
support(a)prog.com.
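A minimal sketch of that in-line style (the file names are invented): expected failures are caught where they occur and turned into a message that tells the user their next step, while anything unexpected still propagates to a traceback.

```python
import sys

# Expected cases are handled in-line with an actionable message;
# a genuine bug falls through to the global handler as a traceback.

def open_config(path="prog.conf"):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        sys.exit(f"{path} not found: create it or check your install")
    except PermissionError:
        sys.exit(f"cannot read {path}: fix its permissions")
```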
So it's a hate-hate relationship with both. What do I hate the least?
That's a three-drink minimum for the answer.
Warner
(Moving to COFF)
On Mon, Mar 06, 2023 at 03:24:29PM -0800, Larry McVoy wrote:
> But even that seems suspect, I would think they could put some logic
> in there that just doesn't feed power to the GPU if you aren't using
> it but maybe that's harder than I think.
>
> If it's not about power then I don't get it, there are tons of transistors
> waiting to be used, they could easily plunk down a bunch of GPUs on the
> same die so why not? Maybe the dev timelines are completely different
> (I suspect not, I'm just grabbing at straws).
Other potential reasons:
1) Moving functionality off-CPU also allows for those devices to have
their own specialized video memory that might be faster (SDRAM) or
dual-ported (VRAM) without having to add that complexity to the more
general system DRAM and/or the CPU's Northbridge.
2) In some cases, an off-chip co-processor may not need any access to
system memory at all. An example of this is the "bump in the wire"
in-line crypto engine (ICE), which is located between the Southbridge
and the eMMC/UFS flash storage device. If you are using an Android
device, it likely has an ICE. The big advantage is that it avoids
needing a bounce buffer on the write path, where the file system
encryption layer has to copy-and-encrypt data from the page cache to a
bounce buffer, from which the encrypted block then gets DMA'ed to the
storage device.
3) From an architectural perspective, not all use cases need various
co-processors, whether for doing cryptography, running some kind of
machine-learning model, or manipulating images to simulate bokeh,
create HDR images, etc. While RISC-V does have the concept of
instruction set extensions, which can be developed without getting
permission from the "owners" of the core CPU ISA (e.g., ARM, Intel,
etc.), it's a lot more convenient for someone who doesn't want to bend
the knee to ARM, Inc. (or their new corporate overlords) or Intel to
simply put that extension outside the core ISA.
(More recently, there is an interesting lawsuit about whether it's
"allowed" to put a 3rd party co-processor on the same SOC without
paying $$$$$ to the corporate overlord, which may make this point moot
--- although it might cause people to simply switch to another ISA
that doesn't have this kind of lawsuit-happy rent-seeking....)
In any case, if you don't need to play Quake with 240 frames per
second, then there's no point putting the GPU in the core CPU
architecture, and it may turn out that the kind of co-processor which
is optimized for running ML models is different, and it is often
easier to make changes to the programming model for a GPU, compared to
making changes to a CPU's ISA.
- Ted
Hi Phil,
Copying to the COFF list, hope that's okay. I thought it might interest
them.
> > $ units -1v '26^3 16 bit' 64KiB
>
> Works only for GNU units.
That's interesting, thanks.
I've access to a FreeBSD 12.3-RELEASE-p6, if that version number means
something to you. Its units groks ^ to mean power when applied to a
unit, as the fine units(1) says, but not to a number. Whereas * works.
$ units yd^3 ft^3
* 27
/ 0.037037037
$
$ units 6\*7 21
* 2
/ 0.5
$
$ units 2^4 64
* 0.03125
/ 32
$
The last one silently treats 2^4 as 2; I'd say that's a bug.
It has Ki- and byte allowing
$ units -t Kibyte bit
8192
but lacks GNU's
B byte
Fair enough, though I think that's common enough now to be included.
FreeBSD also seems to have another bug: demanding a space between the
quantity and the unit for fundamental ‘!’ units.
$ units m 8m
conformability error
1 m
8
$ units m '8 m'
* 0.125
/ 8
$
I found this when attempting the obvious
$ units Kibyte 8bit
conformability error
8192 bit
8
$ units Kibyte '8 bit'
* 1024
/ 0.0009765625
$
Whilst I'm not a GNU acolyte, in this case its version of units does
seem to have had a bit more TLC. :-)
--
Cheers, Ralph.
John Cowan <cowan(a)ccil.org> writes:
>> which Rob Austein re-wrote into "Alice's PDP-10".
> I didn't know that one was done at MIT.
This spells out the details:
https://www.hactrn.net/sra/alice/alice.glossary
[COFF]
On Mon, Feb 27, 2023 at 4:16 PM Chet Ramey <chet.ramey(a)case.edu> wrote:
> On 2/27/23 4:01 PM, segaloco wrote:
> > The official Rust book lists a blind script grab from a website piped into a shell as their "official" install mechanism.
>
> Well, I suppose if it's from a trustworthy source...
>
> (Sorry, my eyes rolled so hard they're bouncing on the floor right now.)
I find this a little odd. If I go back to O'Reilly books from the
early 90s, there was advice to do all sorts of suspect things in them,
such as fetching random bits of pieces from random FTP servers (or
even using email fetch tarballs [!!]). Or downloading shell archives
from USENET.
And of course you _can_ download the script and read through it if you want.
And no one forces anyone to use `rustup`. Most vendors ship some
version of Rust through their package management system these days.
- Dan C.
On Mon, Feb 27, 2023 at 5:06 PM KenUnix <ken.unix.guy(a)gmail.com> wrote:
> Have they not heard of common sense? Whenever I get something from git I look through it to
> check for something suspicious before using it and then and only then do I do make install.
Up to what size? What about the dependencies? How about the compiler
that compiles it all?
I have a copy of the Linux kernel I checked out on my machine; it's
many millions of lines of code; sorry, I haven't read all of that. I
often install things using the operating system's package manager; I
haven't read through all that code, either. Life's too short as it is!
> And today's cookie cutter approach to writing software means they are not learning anything
> but copy paste. Where's the innovation?
I imagine that when people made the switch from programming in machine
code to symbolic assemblers, and then again from assembler to
higher-level languages (FORTRAN! COBOL! PL/I!), similar complaints were
made. And so on.
Consider that, perhaps, the innovation is in how those things are all
combined to do something useful for users. My ability to search, read
documents, listen to music, watch real-time video, etc, is way beyond
anything I could do on the machines of the early 90s.
Not everything that the kids do these days is for the better, but not
everything is terrible, either. This list, and TUHS, bluntly, too
often makes the mistake of assuming that it is. Innovation didn't stop
in 1989.
- Dan C.
> On Mon, Feb 27, 2023 at 4:22 PM Dan Cross <crossd(a)gmail.com> wrote:
>>
>> [COFF]
>>
>> On Mon, Feb 27, 2023 at 4:16 PM Chet Ramey <chet.ramey(a)case.edu> wrote:
>> > On 2/27/23 4:01 PM, segaloco wrote:
>> > > The official Rust book lists a blind script grab from a website piped into a shell as their "official" install mechanism.
>> >
>> > Well, I suppose if it's from a trustworthy source...
>> >
>> > (Sorry, my eyes rolled so hard they're bouncing on the floor right now.)
>>
>> I find this a little odd. If I go back to O'Reilly books from the
>> early 90s, there was advice to do all sorts of suspect things in them,
>> such as fetching random bits of pieces from random FTP servers (or
>> even using email fetch tarballs [!!]). Or downloading shell archives
>> from USENET.
>>
>> And of course you _can_ download the script and read through it if you want.
>>
>> And no one forces anyone to use `rustup`. Most vendors ship some
>> version of Rust through their package management system these days.
>>
>> - Dan C.
>
>
>
> --
> End of line
> JOB TERMINATED
>
>
On Mon, Feb 27, 2023 at 4:52 PM Michael Stiller <mstiller(a)me.com> wrote:
> > I find this a little odd. If I go back to O'Reilly books from the
> > early 90s, there was advice to do all sorts of suspect things in them,
> > such as fetching random bits of pieces from random FTP servers (or
> > even using email fetch tarballs [!!]). Or downloading shell archives
> > from USENET.
> >
> > And of course you _can_ download the script and read through it if you want.
>
> This does not help, you can detect that on the server and send something else.
What? You've already downloaded the script. Once it's on your local
machine, why would you download it again?
> https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/
If I really wanted to see whether it had been tampered with, perhaps
spin up a sacrificial machine and run,
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | tee the.script | sh
and compare to the output of,
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs > the.script.nopipeshell
- Dan C.
[Redirecting to COFF; TUHS to Bcc:]
On Mon, Feb 27, 2023 at 3:46 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> I see the wisdom in your last line there, I've typed and deleted a response to this email 4 times, each one more convoluted than the last.
>
> The short of my stance, though, is that as a younger programmer (29), I am certainly not a fan of these trends that are all too common in my generation. That said, I've set foot in one single software-related class in my life (a high-school Java class), so I don't really know what is being taught to folks going the traditional routes.
>
> All I know from my one abortive semester of college is that I didn't see a whole lot of reliance on individual exploration of concepts in classes, just everyone working toward a one-size-fits-all understanding of how to be a good employee in a given subject area. Of course, this is also influenced by my philosophy and biases, and only represents 4-5 months of observation, but if my minimal experience with college is to be believed, I have little faith that educational programs are producing much more than meat filters between StackOverflow and <insert code editor here>.
>
> No offense to said meat filters, people gotta work, but something is lost when the constant march of production torpedoes individual creativity. Then again, do big firms want sophisticated engineers, or are we too far gone into assembly-line programming with no personal connection to any of the products? I'm glad I'm as personally involved as I am in the stuff I work with; I could see myself slipping into the same patterns of apathy if I were a nameless face in a sea of coders on some project whose legal name I don't even know on any given day.
This is an extraordinarily complicated subject, and it's really full
of nuance. In general, I think your categorization is unfair.
It sounds like you had a bad experience in your first semester of
college. I can sympathize; I did too.
But a thing to bear in mind is that in the first year, universities
are taking kids (and yes, they are kids...sorry young folks, I don't
mean that as a pejorative, but consider the context! For most young
people this is their first experience living on their own, their first
_real_ taste of freedom, and the first where they're about to be
subject to rigorous academic expectations without a lot of systemic
support) with wildly uneven academic and social backgrounds and
preparing them for advanced study in a particular field...one that
most haven't even identified for themselves yet. For the precocious
student, this will feel stifling; for many others it will be a
struggle. What, perhaps, you see as lack of intellectual curiosity may
have in fact been the outward manifestations of that struggle.
That said...Things are, legitimately, very different today than they
were when Unix was young. The level of complexity has skyrocketed in
every dimension, and things have gotten to the point where hack upon
hack has congealed into a system that's nearly bursting at the seams.
It's honestly amazing that anything works at all.
That said, good things have been invented since 1985, and the way many
of us "grew up" thinking about problems doesn't always apply anymore.
The world changes; c'est la vie.
- Dan C.
> ------- Original Message -------
> On Monday, February 27th, 2023 at 12:22 PM, arnold(a)skeeve.com <arnold(a)skeeve.com> wrote:
>
>
> > Chet Ramey chet.ramey(a)case.edu wrote:
> >
> > > On 2/27/23 3:04 PM, arnold(a)skeeve.com wrote:
> > >
> > > > IMHO the dependence upon IDEs is crippling; they cut & paste to the
> > > > almost total exclusion of the keyboard, including when shell completion
> > > > would be faster.
> > >
> > > Don't forget cargo-culting by pasting shell commands they got from the web
> > > and barely understand, if at all.
> >
> >
> > Yeah, really.
> >
> > I do what I can, but it's a very steep uphill battle, as most
> > don't even understand that they're missing something, or that
> > they could learn it if they wanted to.
> >
> > I think I'll stop ranting before I really get going. :-)
> >
> > Arnold
COFF transfer, TUHS Bcc'd to know where this thread went.
Between the two if you're not doing UNIX-specific things but just trying to resurrect/restore these, COFF will probably be the better place for further discussion. @OP if you're not a member of COFF already, you should be able to reach out to Warren Toomey regarding subscription.
If you're feeling particularly adventurous, NetBSD still supports VAX in some manner: http://wiki.netbsd.org/ports/vax/
YMMV, but I've had some success with NetBSD on some pretty oddball stuff. As the old saying goes, "Of course it runs NetBSD". You might be able to find some old VMS stuff for them as well, but I wouldn't know where to point you other than bitsavers. There's some other archival site out there with a bunch of old DEC stuff but I can never seem to find it when I search for it, only by accident. Best of luck!
- Matt G.
------- Original Message -------
On Wednesday, February 22nd, 2023 at 10:08 AM, jnc(a)mercury.lcs.mit.edu <jnc(a)mercury.lcs.mit.edu> wrote:
> > From: Maciej Jan Broniarz
>
>
> > Our local Hackroom acquired some VAX Station machines.
>
>
> Exactly what sort of VAXstations? There are several different kinds; one:
>
> http://gunkies.org/wiki/VAXstation_100
>
> doesn't even include a VAX; it's just a branding deal from DEC Marketing.
> Start with finding out exactly which kind(s) of VAXstation you have.
>
> Noel
This is far afield even for COFF, so apologies up front. Machines and
OSes we fondly remember get older day by day. But many labs I worked in
during undergrad & grad years and then in the workforce always had a
radio going, and music never seems to age. When I hear Earth, Wind &
Fire's "September" or Doobie Brothers' "What a Fool Believes," it's
RSTS/E on a PDP11/70 as a teen, my first exposure to computers.
Kraftwerk and Big Audio Dynamite mean Unix with Mike Muuss at Ballistic
Research Lab in the early 90s. I had PX (military Post Exchange)
privileges which Mike used to the fullest to buy fantastic lab
speakers. The old ENIAC room, our work space, had thick walls. :-)
I wonder if particular music transports any others back to computing
days of old. The current lab I'm in receives exactly 1 radio station
from a local high school and streaming is blocked. Not sure that any new
musical memories will be formed for my ever nearer days of retirement!
Musically yours,
Mike Markowski
Jonathan Gray wrote:
>> Any chance this DOS supdup software is still around?
>
> https://web.mit.edu/Saltzer/www/publications/pcip-1985.pdf
> http://www.bitsavers.org/bits/MIT/pc-ip/
Great, thanks!
It's a bit sad to read in supdup.mss "Unfortunately, very few machines
have TCP/Supdup servers. The only servers known to us are on Mit-MC and
Su-AI, and 4.2 Unix machines running a server we distribute." At this
point, three old ITS machines had recently fallen over, one after the
other, and MC was the only one left standing. But not long after, four
new ones would appear. One of which is still up and running!
s/TUHS/COFF/
Theodore Ts'o wrote:
> The only ones I saw were PC/AT's (that is, the ones with the '286 CPU) that
> ran DOS and which were essentially used only to telnet to the Vax
> 750's (or supdup to the MIT AI / LCS lab machines, but most
> undergraduates didn't have access to those computers
Any chance this DOS supdup software is still around?
Was it part of PC/TCP? I searched around and found this:
https://windowsbulletin.com/files/exe/ftp-software-inc/pc-tcp
[TUHS to Bcc: and +COFF]
On Wed, Feb 8, 2023 at 3:50 PM Warner Losh <imp(a)bsdimp.com> wrote:
> [snip]
> The community aspect of open source was there in spades as well, with people helping other people and sharing fixes. But it was complicated by restrictive license agreements and somewhat (imho) overzealous protection of 'rights' at times that hampered things and would have echoes in later open source licenses and attitudes that would develop in response. Even though the term 'open source' wasn't coined until 1998, the open source ethos was present in many of the early computer users groups, not least the unix ones.
Don't forget SHARE! Honestly, I think the IBM mainframe community
doesn't get its due. There was actually a lot of good stuff there.
> USENET amplified it, plus let in the unwashed masses who also had useful contributions (in addition to a lot of noise)... then things got really crowded with noise when AOL went live... And I'm sure there's a number of other BBS and/or compuserve communities I'm giving short-shrift here because I wasn't part of them in real time.
The phenomenon of "September" being the time when all the new
undergrads got their accounts and discovered USENET and the
shenanigans that ensued was well-known. Eternal September when AOL got
connected was a serious body blow.
As for BBSes...I'd go so far as to say that the BBS people were the
AOL people before the AOL people were the AOL people.
A takeaway from both was that communities with established norms but
no way beside social pressure to enforce them have a hard time
scaling. USENET worked when the user population was small and mostly
amenable to a set of shared goals centered around information exchange
(nevermind the Jim Flemings and other well-known cranks of the world).
But integrating someone into the fold took effort both on the
community's part as well as the user; when it wasn't obvious that
intrinsic motivation was required, or hordes of users just weren't
interested, it didn't work very well.
I think this is something we see over and over again with social networks.
- Dan C.
Good morning all, I was wondering if anyone in this group was aware of any known preservation of VAX/VMS 4.4 source code?
Just saw this on eBay: https://www.ebay.com/itm/195582389147?hash=item2d899e6b9b:g:neYAAOSwmQJj3EkH
I certainly don't have the equipment for this in my arsenal, but at the same time, if this represents long-lost source code, I'd happily try to nab it and then get it to someone who can do the restoration work from it.
Thoughts?
- Matt G.
On Fri, Feb 03, 2023 at 06:36:34AM +0000, Lars Brinkhoff wrote:
> Dan Cross wrote:
> > So, the question becomes: what _is_ that forum, if such a thing
> > exists at all?
>
> Some options:
>
> - Cctalk email list.
(cc-ed to coff, of coffse...)
I used to hang out on the IBM-MAIN mailing list, too. While they
mostly deal with modern mainframes and current problems, they also
occasionally mention an old story or two. Actually, since the
mainframe is such a living-fossil thing, the whole conversation
sometimes feels as if it were about something upgraded continuously
since the 1960s. Most of it is incomprehensible to me (I never had
proper mainframe training, or even improper training; they deal with
stuff in a unique way and have their own acronyms for things; there
are some intro books, but there is not enough time*energy), but it is
also a bit educational - a bit today, a bit next week, etc.
> - ClassicCMP Discord.
> - Retrocomputingforum.com.
> - Various Facebook groups.
Web stuff, requiring JavaScript to work, ugh, ugh-oh. Mostly, it boils
down to the fact that one cannot easily curl the text from those other
places (AFAICT), so it is hard to awk that text into mbox format and
read it comfortably.
--
Regards,
Tomasz Rola
--
** A C programmer asked whether computer had Buddha's nature. **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened... **
** **
** Tomasz Rola mailto:tomasz_rola@bigfoot.com **
All,
I thought I would post something here that wasn't DOA over on tuhs and
see if it would fly here instead. I have been treating coff as the
place where off-topic tuhs posts go to die, but after the latest thread
bemoaning the lack of a place for topics tangential to unix, I thought
I'd actually start a coff thread! Here goes...
I read a tremendous number of documents from the web, or at least read
parts of them - to the tune of maybe 50 or so a week. It is appalling to
me in this era that we can't get better at scanning. Be that as it may,
the needle doesn't seem to have moved appreciably in the last decade or
so and it's a little sad. Sure, if folks print to pdf, it's great. But,
if they scan a doc, not so great, even today.
Rather than worry about the scanning aspects, I am more interested in
what to do with those scans. Can they be handled in such a way as to
give them new life? Unlike the scanning side of things, I have found
quite a bit of movement in the area of being able to work with the pdfs
and I'd really like to get way better at it. If I get a bad scanned pdf,
if I can make it legible on screen, legible on print, and searchable,
I'm golden. Sadly, that's way harder than it sounds, or, in my opinion,
than it should be.
I recently put together a workflow that is tenable, if time consuming.
If you're interested in the details, I've shared them:
https://decuser.github.io/pdfs/2023/02/01/pdf-cleanup-workflow.html
In the note, I leverage a lot of great tools that have significantly
improved over the years to the point where they do a great job at what
they do. But, there's lots of room for improvement. Particularly in the
area of image tweaking around color and highlights and such.
The note is mac-centric in that I use a mac, otherwise, all of the tools
work on modern *nix and with a little abstract thought, windows too.
In my world, here's what happens:
* find a really interesting topic and along the way, collect pdfs to read
* open the pdf and find it salient, but not so readable, with sad
printability, and no or broken OCR
* I begin the process of making the pdf better with the aforementioned
goals aforethought
The process in a nutshell:
1. Extract the images to individual tiffs (so many tools can't work with
multi-image tiffs)
   * pdfimages from poppler works great for this
2. Adjust the color (it seems impossible to do this without a batch
capable gui app)
   * I use Photoscape X for this - just click batch and make
adjustments to all of the images using the same settings
3. Resize the images - most pdfs have super wonky sizes
   * I use convert from imagemagick for this and I compress the tiffs
while I'm converting them
4. Recombine the images into a multi-tiff image
   * I use tiffcp from libtiff for this
5. OCR the reworked image set
   * I use tesseract for this - It's gotten so much better it's ridiculous
This process results in a pdf that meets the objectives.
It's not horribly difficult to do and it's not horribly time consuming.
It represents many, many attempts to figure out this thorny problem.
I'd really like to get away from needing Photoscape X, though. Then I
could entirely automate the workflow in bash...
The problem is that the image adjustments are the most critical step
still done by hand - image extraction, resizing, compression,
recombining images, OCR (I still can't believe it), and outputting a
pdf are now taken care of by command line tools that work well.
I wouldn't mind using a gui to figure out some color setting (Grayscale,
Black and White, or Color) and increase/decrease values for shadows and
highlights if those could then be mapped to command line arguments of a
tool that could apply them, though. Cuz, then the workflow could be,
extract a good representative page as image, open it, figure out the
color settings, and then use those settings with toolY as part of the
scripted workflow.
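For what it's worth, ImageMagick can apply grayscale and shadow/highlight-style adjustments from the command line, which might serve as the "toolY" in that workflow. A sketch, assuming the level endpoints (the placeholder values below) have been worked out by eye on a representative page first:

```shell
# Hypothetical: approximate a GUI grayscale + shadow/highlight tweak
# with ImageMagick. The -level endpoints (10% for shadows, 90% for
# highlights) are placeholder values to tune per document.
convert sample-page.tif -colorspace Gray -level 10%,90% adjusted-page.tif
```

The same command, dropped into a loop over the extracted page images, would close the last GUI gap in the pipeline.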
Here are the objectives for easy reference:
1. The PDF needs to be readable on a decent monitor (zooming in doesn't
distort the readability, pixelation that is systematic is ok, but not
preferred). Yes, I know it's got a degree of subjectivity, but blobby,
bleeding text is out of scope!
2. The PDF needs to print with a minimum of artifact (weird shadows,
bleeding and blob are out). It needs to be easy to read.
3. The PDF needs to be searchable with good accuracy (generally, bad
scans have no ocr, or ocr that doesn't work).
Size is a consideration, but depends greatly on the value of the work.
My own calculus goes like this - if it's modern work, it should be way
under 30 MB. If it's print to pdf, it should be way under 10 MB (remember
when you thought you'd never use 10 MB of space... for all of your files
and the os). If it is significant and rare, less than 150 MB can work.
Obviously, this is totally subjective, your calculus is probably quite
different.
The reason this isn't posted over in pdf.scans.discussion is that even
if there were such a place, it'd be filled with super technical
gibberish about color depth and the perils of gamma radiation or
somesuch. We, as folks interested in preserving the past have a more
pragmatic need for a workable solution that is attainable to mortals.
So, with that as a bit of background, let me ask what I asked previously
in a different way on tuhs, here in coff - what's your experience with
using sad pdfs? Do you just live with them as they are, or do you try to
fix them and how, or do you use a workflow and get good results?
Later,
Will
Oh, and of course I would cc the old address!
Reply on the correct COFF address <coff(a)tuhs.org>
Sheesh.
On 2/3/23 11:26 AM, Will Senn wrote:
> We're in COFF territory again. I am enjoying the conversation, but
> let's self monitor. Perhaps, a workflow for this is that when we drift
> off into non-unix history discussion, we cc: COFF and tell folks to
> continue there? As a test I cced it on this email, don't reply all to
> this list. Just let's talk about it over in coff. If you aren't on
> coff join it.
>
> If you aren't sure or think most folks on the list want to discuss it.
> Post it on COFF, if you don't get any traction, reference the COFF
> thread and tease it in TUHS.
>
> This isn't at all a gripe - I heart all of our discussions, but I
> agree that it's hard to keep it history related here with no outlet
> for tangential discussion - so, let's put coff to good use and try it
> for those related, but not quite discussions.
>
> Remember, don't reply to TUHS on this email :)!
>
> - will
>
> On 2/3/23 11:11 AM, Steve Nickolas wrote:
>> On Fri, 3 Feb 2023, Larry McVoy wrote:
>>
>>> Some things will never go away, like keep your fingers off of my L1
>>> cache lines. I think it's mostly lost because of huge memories, but
>>> one of the things I love about early Unix is how small everything was.
>>> Most people don't care, but if you want to go really fast, there is no
>>> replacement for small.
>>>
>>> Personally, I'm fine with some amount of "list about new systems where
>>> we can ask about history because that helps us build those new
>>> systems".
>>> Might be just me, I love systems discussions.
>>
>> I find a lot of my own stuff is like this - kindasorta fits and
>> kindasorta doesn't for similar reasons.
>>
>> (Since a lot of what I've been doing lately is creating a
>> SysV-flavored rewrite of Unix from my own perspective as a
>> 40-something who actually got most of my experience coding for
>> 16-bits and MS-DOS, and speaks fluent but non-native C. I'm sure it
>> comes out in my coding style.)
>>
>> -uso.
>