(Moving to COFF)
On Mon, Mar 06, 2023 at 03:24:29PM -0800, Larry McVoy wrote:
> But even that seems suspect, I would think they could put some logic
> in there that just doesn't feed power to the GPU if you aren't using
> it but maybe that's harder than I think.
>
> If it's not about power then I don't get it, there are tons of transistors
> waiting to be used, they could easily plunk down a bunch of GPUs on the
> same die so why not? Maybe the dev timelines are completely different
> (I suspect not, I'm just grabbing at straws).
Other potential reasons:
1) Moving functionality off-CPU also allows for those devices to have
their own specialized video memory that might be faster (SDRAM) or
dual-ported (VRAM) without having to add that complexity to the more
general system DRAM and/or the CPU's Northbridge.
2) In some cases, an off-chip co-processor may not need any access to
the system memory at all. An example of this is the "bump in the
wire" in-line crypto engine (ICE), which sits between the Southbridge
and the eMMC/UFS flash storage device. If you are using an Android
device, it's likely to have an ICE. The big advantage is that it
avoids needing a bounce buffer on the write path, where the file
system encryption layer would have to copy-and-encrypt data from the
page cache into a bounce buffer, and the encrypted block would then
get DMA'ed to the storage device.
3) From an architectural perspective, not all use cases need various
co-processors, whether for doing cryptography, running some kind of
machine-learning model, or manipulating images to simulate bokeh,
create HDR images, etc. While RISC-V does have the concept of
instruction set extensions, which can be developed without getting
permission from the "owners" of the core CPU ISA (e.g., ARM, Intel,
etc.), it's a lot more convenient for someone who doesn't want to bend
the knee to ARM, Inc. (or their new corporate overlords) or Intel to
simply put that extension outside the core ISA.
(More recently, there is an interesting lawsuit about whether it's
"allowed" to put a 3rd party co-processor on the same SOC without
paying $$$$$ to the corporate overlord, which may make this point moot
--- although it might cause people to simply switch to another ISA
that doesn't have this kind of lawsuit-happy rent-seeking....)
In any case, if you don't need to play Quake at 240 frames per
second, then there's no point putting the GPU into the core CPU
architecture; it may turn out that the kind of co-processor which is
optimized for running ML models is different from a GPU anyway, and it
is often easier to make changes to the programming model of a GPU than
to make changes to a CPU's ISA.
- Ted
Hi Phil,
Copying to the COFF list, hope that's okay. I thought it might interest
them.
> > $ units -1v '26^3 16 bit' 64KiB
>
> Works only for GNU units.
That's interesting, thanks.
I've access to a FreeBSD 12.3-RELEASE-p6, if that version number means
something to you. Its units groks ^ to mean power when applied to a
unit, as the fine units(1) says, but not to a number. Whereas * works.
$ units yd^3 ft^3
* 27
/ 0.037037037
$
$ units 6\*7 21
* 2
/ 0.5
$
$ units 2^4 64
* 0.03125
/ 32
$
The last one silently treats 2^4 as 2; I'd say that's a bug.
It has Ki- and byte allowing
$ units -t Kibyte bit
8192
but lacks GNU's
B byte
Fair enough, though I think that's common enough now to be included.
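(With GNU units, for what it's worth, that definition lets the prefixed
spelling work directly; on the copy here at least,

$ units -t KiB bit
8192

which matches the Kibyte form above, though I've only tried the one GNU
version.)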
FreeBSD also seems to have another bug: demanding a space between the
quantity and the unit for fundamental ‘!’ units.
$ units m 8m
conformability error
1 m
8
$ units m '8 m'
* 0.125
/ 8
$
I found this when attempting the obvious
$ units Kibyte 8bit
conformability error
8192 bit
8
$ units Kibyte '8 bit'
* 1024
/ 0.0009765625
$
Whilst I'm not a GNU acolyte, in this case its version of units does
seem to have had a bit more TLC. :-)
--
Cheers, Ralph.
John Cowan <cowan(a)ccil.org> writes:
>> which Rob Austein re-wrote into "Alice's PDP-10".
> I didn't know that one was done at MIT.
This spells out the details:
https://www.hactrn.net/sra/alice/alice.glossary
[COFF]
On Mon, Feb 27, 2023 at 4:16 PM Chet Ramey <chet.ramey(a)case.edu> wrote:
> On 2/27/23 4:01 PM, segaloco wrote:
> > The official Rust book lists a blind script grab from a website piped into a shell as their "official" install mechanism.
>
> Well, I suppose if it's from a trustworthy source...
>
> (Sorry, my eyes rolled so hard they're bouncing on the floor right now.)
I find this a little odd. If I go back to O'Reilly books from the
early 90s, there was advice to do all sorts of suspect things in them,
such as fetching random bits and pieces from random FTP servers (or
even using email to fetch tarballs [!!]). Or downloading shell archives
from USENET.
And of course you _can_ download the script and read through it if you want.
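That is, something along the lines of (the saved file name is just
whatever you fancy):

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs -o rustup-init.sh
less rustup-init.sh    # actually read it
sh rustup-init.sh      # then run it, if it passes muster

rather than piping curl straight into sh.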
And no one forces anyone to use `rustup`. Most vendors ship some
version of Rust through their package management system these days.
- Dan C.
On Mon, Feb 27, 2023 at 5:06 PM KenUnix <ken.unix.guy(a)gmail.com> wrote:
> Have they not heard of common sense? Whenever I get something from git I look through it to
> check for something suspicious before using it and then and only then do I do make install.
Up to what size? What about the dependencies? How about the compiler
that compiles it all?
I have a copy of the Linux kernel I checked out on my machine; it's
many millions of lines of code; sorry, I haven't read all of that. I
often install things using the operating system's package manager; I
haven't read through all that code, either. Life's too short as it is!
> And today's cookie cutter approach to writing software means they are not learning anything
> but copy paste. Where's the innovation?
I imagine much the same was said when people made the switch from
programming in machine code to symbolic assemblers, and then again
from assembler to higher-level languages (FORTRAN! COBOL! PL/I!). And so on.
Consider that, perhaps, the innovation is in how those things are all
combined to do something useful for users. My ability to search, read
documents, listen to music, watch real-time video, etc, is way beyond
anything I could do on the machines of the early 90s.
Not everything that the kids do these days is for the better, but not
everything is terrible, either. This list, and TUHS, bluntly, too
often makes the mistake of assuming that it is. Innovation didn't stop
in 1989.
- Dan C.
> On Mon, Feb 27, 2023 at 4:22 PM Dan Cross <crossd(a)gmail.com> wrote:
>>
>> [COFF]
>>
>> On Mon, Feb 27, 2023 at 4:16 PM Chet Ramey <chet.ramey(a)case.edu> wrote:
>> > On 2/27/23 4:01 PM, segaloco wrote:
>> > > The official Rust book lists a blind script grab from a website piped into a shell as their "official" install mechanism.
>> >
>> > Well, I suppose if it's from a trustworthy source...
>> >
>> > (Sorry, my eyes rolled so hard they're bouncing on the floor right now.)
>>
>> I find this a little odd. If I go back to O'Reilly books from the
>> early 90s, there was advice to do all sorts of suspect things in them,
>> such as fetching random bits of pieces from random FTP servers (or
>> even using email fetch tarballs [!!]). Or downloading shell archives
>> from USENET.
>>
>> And of course you _can_ download the script and read through it if you want.
>>
>> And no one forces anyone to use `rustup`. Most vendors ship some
>> version of Rust through their package management system these days.
>>
>> - Dan C.
>
>
>
> --
> End of line
> JOB TERMINATED
>
>
On Mon, Feb 27, 2023 at 4:52 PM Michael Stiller <mstiller(a)me.com> wrote:
> > I find this a little odd. If I go back to O'Reilly books from the
> > early 90s, there was advice to do all sorts of suspect things in them,
> > such as fetching random bits of pieces from random FTP servers (or
> > even using email fetch tarballs [!!]). Or downloading shell archives
> > from USENET.
> >
> > And of course you _can_ download the script and read through it if you want.
>
> This does not help, you can detect that on the server and send something else.
What? You've already downloaded the script. Once it's on your local
machine, why would you download it again?
> https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/
If I really wanted to see whether it had been tampered with, perhaps
spin up a sacrificial machine and run,
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | tee the.script | sh
and compare to the output of,
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs > the.script.nopipeshell
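and then something like

cmp the.script the.script.nopipeshell

would show whether the server fed different bytes to the piped and
non-piped fetches.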
- Dan C.
[Redirecting to COFF; TUHS to Bcc:]
On Mon, Feb 27, 2023 at 3:46 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> I see the wisdom in your last line there, I've typed and deleted a response to this email 4 times, each one more convoluted than the last.
>
> The short of my stance though is, as a younger programmer (29), I am certainly not a fan of these trends that are all too common in my generation. That said, I've set foot in one single softare-related class in my life (highschool Java class) and so I don't really know what is being taught to folks going the traditional routes. All I know from my one abortive semester of college is that I didn't see a whole lot of reliance on individual exploration of concepts in classes, just everyone working to a one-size-fits-all understanding of how to be a good employee in a given subject area. Of course, this is also influenced by my philosophy and biases and such, and only represents 4-5 months of observation, but if my minimal experience with college is to be believed, I have little faith that educational programs are producing much more than meat filters between StackOverflow and <insert code editor here>. No offense to said meat filters, people gotta work, but there is something lost when the constant march of production torpedoes individual creativity. Then again, do big firms want sophisticated engineers or are we too far gone into assembly line programming with no personal connection to any of the products? I'm glad I'm as personally involved in the stuff I work with, I could see myself slipping into the same patterns of apathy if I was a nameless face in a sea of coders on some project I don't even know the legal name of any given day.
This is an extraordinarily complicated subject, and it's really full
of nuance. In general, I think your categorization is unfair.
It sounds like you had a bad experience in your first semester of
college. I can sympathize; I did too.
But a thing to bear in mind is that in the first year, universities
are taking kids (and yes, they are kids...sorry young folks, I don't
mean that as a pejorative, but consider the context! For most young
people this is their first experience living on their own, their first
_real_ taste of freedom, and the first where they're about to be
subject to rigorous academic expectations without a lot of systemic
support) with wildly uneven academic and social backgrounds and
preparing them for advanced study in a particular field...one that
most haven't even identified for themselves yet. For the precocious
student, this will feel stifling; for many others it will be a
struggle. What, perhaps, you see as lack of intellectual curiosity may
have in fact been the outward manifestations of that struggle.
That said...Things are, legitimately, very different today than they
were when Unix was young. The level of complexity has skyrocketed in
every dimension, and things have gotten to the point where hack upon
hack has congealed into a system that's nearly bursting at the seams.
It's honestly amazing that anything works at all.
That said, good things have been invented since 1985, and the way many
of us "grew up" thinking about problems doesn't always apply anymore.
The world changes; c'est la vie.
- Dan C.
> ------- Original Message -------
> On Monday, February 27th, 2023 at 12:22 PM, arnold(a)skeeve.com <arnold(a)skeeve.com> wrote:
>
>
> > Chet Ramey chet.ramey(a)case.edu wrote:
> >
> > > On 2/27/23 3:04 PM, arnold(a)skeeve.com wrote:
> > >
> > > > IMHO the dependence upon IDEs is crippling; they cut & paste to the
> > > > almost total exclusion of the keyboard, including when shell completion
> > > > would be faster.
> > >
> > > Don't forget cargo-culting by pasting shell commands they got from the web
> > > and barely understand, if at all.
> >
> >
> > Yeah, really.
> >
> > I do what I can, but it's a very steep uphill battle, as most
> > don't even understand that they're missing something, or that
> > they could learn it if they wanted to.
> >
> > I think I'll stop ranting before I really get going. :-)
> >
> > Arnold
COFF transfer, TUHS Bcc'd so folks know where this thread went.
Between the two, if you're not doing UNIX-specific things but just trying to resurrect/restore these, COFF will probably be the better place for further discussion. @OP if you're not a member of COFF already, you should be able to reach out to Warren Toomey regarding subscription.
If you're feeling particularly adventurous, NetBSD still supports VAX in some manner: http://wiki.netbsd.org/ports/vax/
YMMV, but I've had some success with NetBSD on some pretty oddball stuff. As the old saying goes, "Of course it runs NetBSD". You might be able to find some old VMS stuff for them as well, but I wouldn't know where to point you other than bitsavers. There's some other archival site out there with a bunch of old DEC stuff but I can never seem to find it when I search for it, only by accident. Best of luck!
- Matt G.
------- Original Message -------
On Wednesday, February 22nd, 2023 at 10:08 AM, jnc(a)mercury.lcs.mit.edu <jnc(a)mercury.lcs.mit.edu> wrote:
> > From: Maciej Jan Broniarz
>
>
> > Our local Hackroom acquired some VAX Station machines.
>
>
> Exactly what sort of VAXstations? There are several different kinds; one:
>
> http://gunkies.org/wiki/VAXstation_100
>
> doesn't even include a VAX; it's just a branding deal from DEC Marketing.
> Start with finding out exactly which kind(s) of VAXstation you have.
>
> Noel
This is far afield even for COFF, so apologies up front. Machines and
OSes we fondly remember get older day by day. But many labs I worked in
during undergrad & grad years and then in the workforce always had a
radio going, and music never seems to age. When I hear Earth, Wind &
Fire's "September" or Doobie Brothers' "What a Fool Believes," it's
RSTS/E on a PDP11/70 as a teen, my first exposure to computers.
Kraftwerk and Big Audio Dynamite mean Unix with Mike Muuss at Ballistic
Research Lab in the early 90s. I had PX (military Post Exchange)
privileges which Mike used to the fullest to buy fantastic lab
speakers. The old ENIAC room, our work space, had thick walls. :-)
I wonder if particular music transports any others back to computing
days of old. The current lab I'm in receives exactly 1 radio station
from a local high school and streaming is blocked. Not sure that any new
musical memories will be formed for my ever nearer days of retirement!
Musically yours,
Mike Markowski
Jonathan Gray wrote:
>> Any chance this DOS supdup software is still around?
>
> https://web.mit.edu/Saltzer/www/publications/pcip-1985.pdf
> http://www.bitsavers.org/bits/MIT/pc-ip/
Great, thanks!
It's a bit sad to read in supdup.mss "Unfortunately, very few machines
have TCP/Supdup servers. The only servers known to us are on Mit-MC and
Su-AI, and 4.2 Unix machines running a server we distribute." At this
point, three old ITS machines had recently fallen over, one after the
other, and MC was the only one left standing. But not long after, four
new ones would appear. One of which is still up and running!
s/TUHS/COFF/
Theodore Ts'o wrote:
> The only I saw were PC/AT's (that is, the ones with the '286 CPU) that
> ran DOS and which were essentially used only to telnet to the Vax
> 750's (or supdup to the MIT AI / LCS lab machines, but most
> undergraduates didn't have access to those computers
Any chance this DOS supdup software is still around?
Was it part of PC/TCP? I searched around and found this:
https://windowsbulletin.com/files/exe/ftp-software-inc/pc-tcp
[TUHS to Bcc: and +COFF]
On Wed, Feb 8, 2023 at 3:50 PM Warner Losh <imp(a)bsdimp.com> wrote:
> [snip]
> The community aspect of open source was there in spades as well, with people helping other people and sharing fixes. But it was complicated by restrictive license agreements and somewhat (imho) overzealous protection of 'rights' at times that hampered things and would have echos in later open source licenses and attitudes that would develop in response. Even though the term 'open source' wasn't coined until 1998, the open source ethos were present in many of the early computer users groups, not least the unix ones.
Don't forget SHARE! Honestly, I think the IBM mainframe community
doesn't get its due. There was actually a lot of good stuff there.
> USENET amplified it, plus let in the unwashed masses who also had useful contributions (in addition to a lot of noise)... then things got really crowded with noise when AOL went live... And I'm sure there's a number of other BBS and/or compuserve communities I'm giving short-shrift here because I wasn't part of them in real time.
The phenomenon of "September" being the time when all the new
undergrads got their accounts and discovered USENET and the
shenanigans that ensued was well-known. Eternal September when AOL got
connected was a serious body blow.
As for BBSes...I'd go so far as to say that the BBS people were the
AOL people before the AOL people were the AOL people.
A takeaway from both was that communities with established norms but
no way besides social pressure to enforce them have a hard time
scaling. USENET worked when the user population was small and mostly
amenable to a set of shared goals centered around information exchange
(nevermind the Jim Flemings and other well-known cranks of the world).
But integrating someone into the fold took effort both on the
community's part as well as the user; when it wasn't obvious that
intrinsic motivation was required, or hordes of users just weren't
interested, it didn't work very well.
I think this is something we see over and over again with social networks.
- Dan C.
Good morning all, I was wondering if anyone in this group was aware of any known preservation of VAX/VMS 4.4 source code?
Just saw this on eBay: https://www.ebay.com/itm/195582389147?hash=item2d899e6b9b:g:neYAAOSwmQJj3EkH
I certainly don't have the equipment for this in my arsenal, but at the same time, if this represents long-lost source code, I'd happily try and nab it and then get it to someone who can do the restoration work from this.
Thoughts?
- Matt G.
On Fri, Feb 03, 2023 at 06:36:34AM +0000, Lars Brinkhoff wrote:
> Dan Cross wrote:
> > So, the question becomes: what _is_ that forum, if such a thing
> > exists at all?
>
> Some options:
>
> - Cctalk email list.
(cc-ed to coff, of coffse...)
I used to hang out on the IBM-MAIN mailing list, too. While they are,
mostly, dealing with modern mainframes and current problems, they also
occasionally mention an old story or two. Actually, since the mainframe
is such a living fossil of a thing, the whole talk sometimes feels as if
it were about something upgraded continuously from the 1960s. Most of
it is incomprehensible to me (never had proper mainframe training, or
an improper one, and they deal with stuff in a unique way, have their own
acronyms for things, there are some intro books but there is not
enough time*energy), but it is also a bit educational - a bit today, a bit
next week etc.
> - ClassicCMP Discord.
> - Retrocomputingforum.com.
> - Various Facebook groups.
Web stuff, requiring Javascript to work, ugh, ugh-oh. Mostly, it boils
down to the fact that one cannot easily curl the text from those other
places (AFAICT). So it is hard to awk this text into mbox format and
read it comfortably.
--
Regards,
Tomasz Rola
--
** A C programmer asked whether computer had Buddha's nature. **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened... **
** **
** Tomasz Rola mailto:tomasz_rola@bigfoot.com **
All,
I thought I would post something here that wasn't DOA over on tuhs and
see if it would fly here instead. I have been treating coff as the
place where off-topic tuhs posts go to die, but
after the latest thread bemoaning a place to go for topics tangential to
unix, I thought I'd actually start a coff thread! Here goes...
I read a tremendous number of documents from the web, or at least read
parts of them - to the tune of maybe 50 or so a week. It is appalling to
me in this era that we can't get better at scanning. Be that as it may,
the needle doesn't seem to have moved appreciably in the last decade or
so and it's a little sad. Sure, if folks print to pdf, it's great. But,
if they scan a doc, not so great, even today.
Rather than worry about the scanning aspects, I am more interested in
what to do with those scans. Can they be handled in such a way as to
give them new life? Unlike the scanning side of things, I have found
quite a bit of movement in the area of being able to work with the pdfs
and I'd really like to get way better at it. If I get a badly scanned pdf
and I can make it legible on screen, legible in print, and searchable,
I'm golden. Sadly, that's way harder than it sounds, or, in my opinion,
than it should be.
I recently put together a workflow that is tenable, if time consuming.
If you're interested in the details, I've shared them:
https://decuser.github.io/pdfs/2023/02/01/pdf-cleanup-workflow.html
In the note, I leverage a lot of great tools that have significantly
improved over the years to the point where they do a great job at what
they do. But, there's lots of room for improvement. Particularly in the
area of image tweaking around color and highlights and such.
The note is mac-centric in that I use a mac, otherwise, all of the tools
work on modern *nix and with a little abstract thought, windows too.
In my world, here's what happens:
* find a really interesting topic and along the way, collect pdfs to read
* open the pdf and find it salient, but not so readable, with sad
printability, and no or broken OCR
* I begin the process of making the pdf better with the aforementioned
goals aforethought
The process in a nutshell:
1. Extract the images to individual tiffs (so many tools can't work with
multi-image tiffs)
* pdfimages from poppler works great for this
2. Adjust the color (it seems impossible to do this without a batch
capable gui app)
* I use Photoscape X for this - just click batch and make
adjustments to all of the images using the same settings
3. Resize the images - most pdfs have super wonky sizes
* I use convert from imagemagick for this and I compress the tiffs
while I'm converting them
4. Recombine the images into a multi-tiff image
* I use tiffcp from libtiff for this
5. OCR the reworked image set
* I use tesseract for this - It's gotten so much better it's ridiculous
This process results in a pdf that meets the objectives.
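In rough shell terms, the command-line portion looks something like this
(file names and settings are illustrative, not gospel, and step 2 still
happens by hand in Photoscape X in between):

# 1. extract each page of the scan as an individual tiff
pdfimages -tiff scan.pdf page

# (2. batch color adjustment in Photoscape X, by hand)

# 3. resize and compress the adjusted pages
for f in page-*.tif; do
    convert "$f" -resize 2550x3300 -compress lzw "adj-$f"
done

# 4. recombine into a single multi-page tiff
tiffcp adj-page-*.tif book.tif

# 5. OCR and emit a searchable pdf
tesseract book.tif book pdf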
It's not horribly difficult to do and it's not horribly time consuming.
It represents many, many attempts to figure out this thorny problem.
I'd really like to get away from needing Photoscape X, though. Then I
could entirely automate the workflow in bash...
The problem is that the image adjustments are the most critical - image
extraction, resize, compression, recombining images, ocr (I still can't
believe it), and outputting a pdf are now taken care of by command line
tools that work well.
I wouldn't mind using a gui to figure out some color setting (Grayscale,
Black and White, or Color) and increase/decrease values for shadows and
highlights if those could then be mapped to command line arguments of a
tool that could apply them, though. Cuz, then the workflow could be,
extract a good representative page as image, open it, figure out the
color settings, and then use those settings with toolY as part of the
scripted workflow.
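For what it's worth, I suspect imagemagick could stand in for at least
part of that GUI step; something like

convert page-000.tif -colorspace Gray -level 12%,88% adj-page-000.tif

is what I'd try first (the -level percentages are invented for
illustration, not tuned against real scans), with -level pushing the
shadows and highlights much like the GUI sliders do. I just haven't sat
down and validated it yet.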
Here are the objectives for easy reference:
1. The PDF needs to be readable on a decent monitor (zooming in doesn't
distort the readability, pixelation that is systematic is ok, but not
preferred). Yes, I know it's got a degree of subjectivity, but blobby,
bleeding text is out of scope!
2. The PDF needs to print with a minimum of artifact (weird shadows,
bleeding and blob are out). It needs to be easy to read.
3. The PDF needs to be searchable with good accuracy (generally, bad
scans have no ocr, or ocr that doesn't work).
Size is a consideration, but depends greatly on the value of the work.
My own calculus goes like this - if it's modern work, it should be way
under 30mbs. If it's print to pdf, it should be way under 10mb (remember
when you thought you'd never use 10mb of space... for all of your files
and the os). If it is significant and rare, less than 150mbs can work.
Obviously, this is totally subjective, your calculus is probably quite
different.
The reason this isn't posted over in pdf.scans.discussion is that even
if there were such a place, it'd be filled with super technical
gibberish about color depth and the perils of gamma radiation or
somesuch. We, as folks interested in preserving the past have a more
pragmatic need for a workable solution that is attainable to mortals.
So, with that as a bit of background, let me ask what I asked previously
in a different way on tuhs, here in coff - what's your experience with
using sad pdfs? Do you just live with them as they are, or do you try to
fix them and how, or do you use a workflow and get good results?
Later,
Will
Oh, and of course I would cc the old address!
Reply on the correct COFF address <coff(a)tuhs.org>
Sheesh.
On 2/3/23 11:26 AM, Will Senn wrote:
> We're in COFF territory again. I am enjoying the conversation, but
> let's self monitor. Perhaps, a workflow for this is that when we drift
> off into non-unix history discussion, we cc: COFF and tell folks to
> continue there? As a test I cced it on this email, don't reply all to
> this list. Just let's talk about it over in coff. If you aren't on
> coff join it.
>
> If you aren't sure or think most folks on the list want to discuss it.
> Post it on COFF, if you don't get any traction, reference the COFF
> thread and tease it in TUHS.
>
> This isn't at all a gripe - I heart all of our discussions, but I
> agree that it's hard to keep it history related here with no outlet
> for tangential discussion - so, let's put coff to good use and try it
> for those related, but not quite discussions.
>
> Remember, don't reply to TUHS on this email :)!
>
> - will
>
> On 2/3/23 11:11 AM, Steve Nickolas wrote:
>> On Fri, 3 Feb 2023, Larry McVoy wrote:
>>
>>> Some things will never go away, like keep your fingers off of my L1
>>> cache lines. I think it's mostly lost because of huge memories, but
>>> one of the things I love about early Unix is how small everything was.
>>> Most people don't care, but if you want to go really fast, there is no
>>> replacement for small.
>>>
>>> Personally, I'm fine with some amount of "list about new systems where
>>> we can ask about history because that helps us build those new
>>> systems".
>>> Might be just me, I love systems discussions.
>>
>> I find a lot of my own stuff is like this - kindasorta fits and
>> kindasorta doesn't for similar reasons.
>>
>> (Since a lot of what I've been doing lately is creating a
>> SysV-flavored rewrite of Unix from my own perspective as a
>> 40-something who actually got most of my experience coding for
>> 16-bits and MS-DOS, and speaks fluent but non-native C. I'm sure it
>> comes out in my coding style.)
>>
>> -uso.
>
On Feb 3, 2023, at 8:26 AM, Will Senn <will.senn(a)gmail.com> wrote:
>
> I can't seem to get away from having to highlight and mark up the stuff I read. I love pdf's searchability of words, but not for quickly locating a section, or just browsing and studying them. I can flip pages much faster with paper than an ebook it seems :).
You can annotate, highlight and mark up pdfs. There are apps for that,
though I’m not very familiar with them as I don’t mark up even paper
copies. On an iPad you can easily annotate pdfs with an Apple Pencil.
> From: Dennis Boone <drb(a)msu.edu>
>
> * Don't use JPEG 2000 and similar compression algorithms that try to
> re-use blocks of pixels from elsewhere in the document -- too many
> errors, and they're errors of the sort that can be critical. Even if
> the replacements use the correct code point, they're distracting as
> hell in a different font, size, etc.
I wondered why certain images were the way they were; this
probably explains a lot.
> * OCR-under is good. I use `ocrmypdf`, which uses the Tesseract engine.
Thanks for the tips.
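From a quick skim of the ocrmypdf docs, it looks like something along
the lines of

ocrmypdf --deskew --rotate-pages --clean input.pdf output.pdf

would fold the OCR (and a bit of cleanup) into a single pass, though I
haven't actually tried it in the workflow yet.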
> * Bookmarks for pages / table of contents entries / etc are mandatory.
> Very few things make a scanned-doc PDF less useful than not being able
> to skip directly to a document indicated page.
I wish. This is a tough one. I generally sacrifice the bookmarks,
ditching them to make a better pdf. I need to look into extracting
bookmarks and whether they can be re-added without everything getting wonky.
> * I like to see at least 300 dpi.
Yes, me too, but I've found that this often results in files that are
too big (when fixing existing scans); if I'm creating them myself, they're fine.
> * Don't scan in color mode if the source material isn't color. Grey
> scale or even "line art" works fine in most cases. Using one pixel
> means you can use G4 compression for colorless pages.
Amen :).
>
> * Do reduce the color depth of pages that do contain color if you can.
> The resulting PDF can contain a mix of image types. I've worked with
> documents that did use color where four or eight colors were enough,
> and the whole document could be mapped to them. With care, you _can_
> force the scans down to two or three bits per pixel.
> * Do insert sensible metadata.
>
> * Do try to square up the inevitably crooked scans, clean up major
> floobydust and whatever crud around the edges isn't part of the paper,
> etc. Besides making the result more readable, it'll help the OCR. I
> never have any luck with automated page orientation tooling for some
> reason, so end up just doing this with Gimp.
Great points. Thanks.
-will
That was the title of a sidebar in Australia's "Silicon Chip" electronics
magazine this month, and referred to the alleged practice of running
scientific and engineering programs many times to ensure consistent
output, as hardware error checks weren't the best in those days (bit-flips
due to electrical noise etc).
Anyway, the mag is seeking corroboration on this (credit given where due,
of course); I find it a bit hard to believe that machines capable of
running complex programs did not have e.g. parity checking...
Thanks.
-- Dave
Will Senn wrote in
<3808a35c-2ee0-2081-4128-c8196b4732c0(a)gmail.com>:
|Well, I just read this as Rust is dead... here's hoping, but seriously,
|if we're gonna go off and have a language vs language discussion, I
|personally don't think we've had any real language innovation since
|Algol 60, well maybe lisp... sheesh, surely we're in COFF territory.
It has evangelists on all fronts. ...Yes, it was only that while
I was writing the message I reread about Vala of the GNOME
project, which seems to be a modern language with many beneficial
properties still, growing out of a Swiss university (is it a bad
sign to come from Switzerland, and more out of research?), and it
had the support of Ubuntu and many other parts of the GNOME project.
Still it is said to be dead. I scrubbed that part of my message.
But maybe a "dead" relation between the lines thus remained.
Smalltalk is also such a thing, though not from Switzerland.
An Ach! on the mystery of human behaviour. Or, like the wonderful
Marcel Reich-Ranicki said, "Wir sehen es betroffen, den Vorhang
zu, und alle Fragen offen" ("Concerned we see, the curtain closed,
and all the Questions open").
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
[TUHS to Bcc]
On Wed, Feb 1, 2023 at 2:11 PM Rich Salz <rich.salz(a)gmail.com> wrote:
> On Wed, Feb 1, 2023 at 1:33 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
>> In the annals of UNIX gaming, have there ever been notable games that have operated as multiple processes, perhaps using formal IPC or even just pipes or shared files for communication between separate processes (games with networking notwithstanding)?
>
> https://www.unix.com/man-page/bsd/6/hunt/
> source at http://ftp.funet.fi/pub/unix/4.3bsd/reno/games/hunt/hunt/
Hunt was the one that I thought of immediately. We used to play that
on Suns and VAXen and it could be lively.
There were a number of such games, as Clem mentioned; others I
remember were xtrek, hearts, and various Chess and Go servers.
- Dan C.
Switching to COFF
On Wed, Feb 1, 2023 at 1:33 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> In the annals of UNIX gaming, have there ever been notable games that have
> operated as multiple processes, perhaps using formal IPC or even just pipes
> or shared files for communication between separate processes (games with
> networking notwithstanding)?
>
Yes - there were a number of them, both for UNIX and otherwise. Some
spanned the Arpanet back in the day on the PDP-10's. There was an early
first-person shooter game that I remember that ran on the PDP-10s on
ADM3As and VT52s that worked that way. You flew into space and fought each
other.
CMU's (Steve Rubin's) Trip was a stand-alone program - sort of the
grand-daddy of the Star Trek games. It ran on a GDP2 (Triple-Drip Graphics
Wonder) and had a dedicated 11/20. It used multiple processes to do
everything. You were at the Captain's chair of the Enterprise looking out
into space. You had various missions and at some point would need to
reprovision - which meant you had to dock at the 2001 space
station, including timing your rotation to line up with the docking bay like in
the movie. When you beat an alien ship you got a bottle of Coke - all
of which collected in a row at the bottom of the screen.
I did manage to save the (BLISS-11) sources to it a few years ago. One
of my dreams is to try to write a GDP simulator for SIMH and see if we can
bring it back to life. A big issue, as Rob knows, is the GDPs had an amazing
keyboard, so duplicating it will take some thinking with modern HW; but HW
has caught up such that I think it might be possible to emulate it. SIMH
works really well with a number of the other graphics systems, and with a
modern system like my current Mac and its graphics HW, there might be a
chance.
One of my other favorites was one that ran on the Xerox Altos, whose name I
don't remember, where you wandered around the Xerox 3M ethernet. People
would enter your system and appear on your screen. IIRC Byte Magazine did
an article that talked about it at one point -- this was all pre-Apple Macs
- but I remember they had pictures of people playing it that I think they
took at Stanford. IIRC shortly after the X terminals appeared somebody
tried to duplicate it, or maybe that was with the Blits, but it was not
quite as good, at least to those of us that had access to real Xerox Altos.
COFF'd
> I think general software engineering knowledge and experience cannot be
> 'obsoleted' or made less relevant by better languages. If they help,
> great, but you have to do the other part too. As languages advance and
> get better at catching (certain kinds of) mistakes, I worry that
> engineers are not putting enough time into observation and understanding
> of how their programs actually work (or do not).
I think you nailed it there mentioning engineers in that one of the growing norms these days is making software development more accessible to a diverse set of backgrounds. No longer does a programming language have to just bridge the gap between, say, an expert mathematician and a compute device.
Now there are languages to allow UX designers to declaratively define interfaces, for data scientists to functionally define algorithms, and WYSIWYG editors for all sorts of things that were traditionally handled by hammering out code. The concern of describing a program through a standard language and the concern that language then describing the operations of a specific device have been growing more and more decoupled as time goes on, and that then puts a lot of the responsibility for "correctness" on those creating all these various languages.
Whatever concern an engineer originally had to some matter of memory safety, efficiency, concurrency, etc. is now being decided by some team working on the given language of the week, sometimes to great results, other times to disastrous ones. On the flip side, the person consuming the language or components then doesn't need to think about these things, which could go either way. If they're always going to work in this paradigm where they're offloading the concern of memory safety to their language architect of choice, then perhaps they're not shorting themselves any. However, they're then technically not seeing the big picture of what they're working on, which contributes to the diverse quality of software we have today.
Long story short, most people don't know how their programs work because they aren't really "their" programs so much as their assembly of a number of off-the-shelf or slightly tweaked components following the norms of whatever school of thought they may originate in (marketing, finance, graphic design, etc.). Sadly, this decoupling likely isn't going away, and we're only bound to see the percentage of "bad" software increase over time. That's the sort of change that over time leads to people then changing their opinions of what "bad software" is. Look at how many people gleefully accept the landscape of smart-device "apps"....
- Matt G.
Hi Dave,
COFF'd.
> > I'll never do if (a==b&&c==d), always if ((a==b)&&(c==d)).
>
> Indeed; I *always* use parentheses even if they are not necessary (for
> my benefit, at least).
I find unnecessary parentheses annoying clutter which obscures
reading. If parentheses are used only when overriding the default
precedence then this beneficially draws attention to the exception.
I doubt mandatory parentheses are used in maths formulas by those that
use them in expressions.
Whitespace is beneficial in both maths formulas and expressions. The
squashed expression above will often be spaced more.
if (a==b && c==d)
if (a == b && c == d)
Go's source formatter will vary which operators get spaces to reflect
precedence, e.g. https://go.dev/play/p/TU95Oz57GuF shows ‘4*3’ differs.
fmt.Println(4 * 3)
fmt.Println(5 ^ 4*3)
fmt.Println(5 ^ 4*3 + 2/1.9)
--
Cheers, Ralph.