The EFF just published an article on the rise and fall of Gopher on
their Deeplinks blog.
"Gopher: When Adversarial Interoperability Burrowed Under the
Gatekeepers' Fortresses"
https://www.eff.org/deeplinks/2020/02/gopher-when-adversarial-interoperabil…
I thought it might be of interest to people here.
--
Michael Kjörling • https://michael.kjorling.se • michael(a)kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
Does anyone have experience with UUCP on macOS or *BSD systems and
would be willing to help me figure something out?
I'm working on adding a macOS system to my micro UUCP network and
running into some problems.
- uuto / uucp copy files from my non-root / non-(_)uucp user to the
UUCP spool. But the (demand-based) call (a pipe over SSH) is failing.
- running "uucico -r1 -s <remote system name> -f" as the (_)uucp user
(via sudo) works.
- I'm using the pipe port type and running things over an SSH connection.
- The (_)uucp user can ssh to the remote system as expected
without any problems or being prompted for a password. (Service
specific keys w/ forced command.)
I noticed that the following files weren't set-UID or set-GID like they
are on Linux. But I don't know if that's a macOS and/or *BSD difference
compared to Linux.
/usr/bin/uucp
/usr/bin/uuname
/usr/bin/uustat
/usr/bin/uux
/usr/sbin/uucico
/usr/sbin/uuxqt
Adding the set UID & GID bits allowed things to mostly work.
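Concretely, what I did was roughly the following (a sketch from
memory; the _uucp:_uucp owner/group is my assumption about how macOS
names the UUCP account, so verify against your own system first):

    # Restore the set-UID/set-GID bits the Linux binaries carry,
    # assuming they should run as the _uucp user and group:
    sudo chown _uucp:_uucp /usr/bin/uucp /usr/bin/uuname /usr/bin/uustat \
        /usr/bin/uux /usr/sbin/uucico /usr/sbin/uuxqt
    sudo chmod u+s,g+s /usr/bin/uucp /usr/bin/uuname /usr/bin/uustat \
        /usr/bin/uux /usr/sbin/uucico /usr/sbin/uuxqt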
Aside: Getting contemporary macOS to let me edit the
(/usr/share/uucp/) sys & port files was a treat.
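For reference, the relevant bits of my config look roughly like this
(a hand-wavy sketch; "remotesys" and the ssh options are placeholders
for my real setup, and the exact Taylor UUCP keywords are worth
double-checking against the info pages):

    # port file: a pipe port that tunnels over SSH
    port ssh-pipe
    type pipe
    command /usr/bin/ssh -a -x -q -o BatchMode=yes uucp@remotesys

    # sys file: a system that calls out via that port
    system remotesys
    port ssh-pipe
    time any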
I figured it was worth asking whether anyone has more experience /
tips / gotchas before I go bending ~> breaking things even more.
Thank you.
--
Grant. . . .
unix || die
Redirected to COFF, since this isn't very TUHS-related.
Richard Salz wrote:
> Noel Chiappa wrote:
>> MLDEV on ITS would, I think, fit under that description.
>> I don't know if there's a paper on it; it's mid-70's.
I don't think there's anything like an academic paper.
The earliest evidence I found for the MLDEV facility, or a predecessor,
is from 1972:
https://retrocomputing.stackexchange.com/questions/13709/what-are-some-earl…
> A web search for "its mldev" finds several things (mostly by Lars
> Brinkhoff, it seems), including
> https://github.com/larsbrinkhoff/mldev/blob/master/doc/mldev.protoc
That's just a copy I made.
ABC-TV "Grantchester" (a detective-priest series[*]) featured someone who
died from mercury poisoning, when someone opened the tanks. Apart from
the beautiful Teletype in the foreground, there was a machine with lots of
dials in the background; obviously some early computer, but not enough
footage for me to tell which one, but is a British show if that helps.
[*]
I suppose that I need not remind any Monty Python freaks of these lines:
"There's another dead bishop on the landing, Vicar-Sergeant!"
"Err, Detective-Parsons, Madam."
Etc... Otherwise I'd go on all day :-)
-- Dave
[ -TUHS and +COFF ]
On Mon, Jun 8, 2020 at 1:49 AM Lars Brinkhoff <lars(a)nocrew.org> wrote:
> Chris Torek wrote:
> > You pay (sometimes noticeably) for the GC, but the price is not
> > too bad in less time-critical situations. The GC has a few short
> > stop-the-world points but since Go 1.6 or so, it's pretty smooth,
> > unlike what I remember from 1980s Lisp systems. :-)
>
> I'm guessing those 1980s Lisp systems would also be pretty smooth on
> 2020s hardware.
>
I worked on a big system written in Common Lisp at one point, and it used a
pretty standard Lispy garbage collector. It spent something like 15% of its
time in GC. It was sufficiently bad that we forced a full GC between every
query.
The Go garbage collector, on the other hand, is really neat. I believe it
reserves something like 25% of available CPU time for itself, but it runs
concurrently with your program: having to stop the world for GC is very,
very rare in Go programs and even when you have to do it, the concurrent
code (which, by the way, can make use of full physical parallelism on
multicore systems) has already done much of the heavy lifting, meaning you
spend less time in a GC pause. It's a very impressive piece of engineering.
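To make that concrete, here is a tiny sketch (my own illustration,
nothing canonical; the numbers printed will vary by machine and Go
version) showing a program forcing a full collection, the way we did
between queries on the Lisp system, and reading the pause statistics
afterward:

    // Sketch: observing and forcing Go's garbage collector.
    package main

    import (
        "fmt"
        "runtime"
        "runtime/debug"
    )

    func main() {
        debug.SetGCPercent(100) // the default: collect when the heap doubles

        var before runtime.MemStats
        runtime.ReadMemStats(&before)

        // Allocate ~64 MiB in small pieces to give the collector work.
        work := make([][]byte, 0, 1024)
        for i := 0; i < 1024; i++ {
            work = append(work, make([]byte, 64<<10))
        }
        _ = work

        runtime.GC() // force a full collection, like the per-query GC above

        var after runtime.MemStats
        runtime.ReadMemStats(&after)
        fmt.Printf("GC cycles: %d, cumulative pause: %d ns\n",
            after.NumGC-before.NumGC, after.PauseTotalNs-before.PauseTotalNs)
    }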
- Dan C.
> I've never heard of the dimensionality of an array in such a context
> described in terms of a sum type,[...]
I was also puzzled by this remark. Perhaps Bram was thinking of an
infinite sum type such as
Arrays_0 + Arrays_1 + Arrays_2 + ...
where "Arrays_n" is the type of arrays of size n. However, I don't see
a way of defining this without dependent (or at least indexed) types.
> [...] but have often heard of it described in terms of a dependent
> type.
I think that's right, yes. We want the type checker to decide, at
compile time, whether a certain operation on arrays, such as scalar
product, or multiplication with a matrix, will respect the types. That
means providing the information about the size to the type checker:
types need to know about values, hence the need for dependent types.
Ideally, a lot of the type information can then be erased at run time,
so that we don't need the arrays to carry their sizes around. But the
distinction between what can and what cannot be erased at run time is
muddled in a dependently-typed programming language.
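To make this concrete, here is a minimal sketch in Lean 4 (my own
illustration, not anything from the thread): a length-indexed vector
type whose size is part of the type, a scalar product that the type
checker accepts only for equal lengths, and the "infinite sum" above
packaged as a dependent pair.

    -- Length-indexed vectors: the size n lives in the type.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : α → Vec α n → Vec α (n + 1)

    -- Scalar product: only well-typed for two vectors of the same
    -- length; a length mismatch is a compile-time type error.
    def dot : Vec Nat n → Vec Nat n → Nat
      | .nil,       .nil       => 0
      | .cons x xs, .cons y ys => x * y + dot xs ys

    -- "Arrays_0 + Arrays_1 + Arrays_2 + ..." as a dependent pair:
    -- a size n together with a vector of exactly that size.
    def AnyVec (α : Type) : Type := (n : Nat) × Vec α n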
Best wishes,
Cezar
[-TUHS, +COFF as per wkt's request]
On Sun, Jun 7, 2020 at 8:26 PM Bram Wyllie <bramwyllie(a)gmail.com> wrote:
> Dependent types aren't needed for sum types though, which is what you'd
> normally use for an array that carries its size, correct?
>
No, I don't think so. I've never heard of the dimensionality of an array in
such a context described in terms of a sum type, but have often heard of it
described in terms of a dependent type. However, I'm no type theorist.
- Dan C.
Moving to COFF where this discussion really belongs ...
On Sun, Jun 7, 2020 at 2:51 PM Nemo Nusquam <cym224(a)gmail.com> wrote:
> On 06/07/20 11:26, Clem Cole wrote (in part):
> > Neither language is used for anything in production in our world at this
> point.
>
> They seem to be used in some worlds: https://blog.golang.org/10years and
> https://www.rust-lang.org/production
Nemo,
That was probably not my clearest wording. I did not mean to imply that
either Go or Rust is not being used, by any stretch of the imagination.
My point was that in the SW development environments where I reside
(HPC, some startups here in New England and the Bay Area, as well as
Intel in general), Go and Rust both have smaller use cases compared to
C and C++ (much less Python and Java, for that matter). And I know of
really no 'money' project that yet relies on either. That does not mean
I know of no one using either; I know of projects using both (including
a couple of my own), but none that has done anything production or
deployed 'mission-critical' SW with one or the other. Nor does that
mean it has not happened; it just means I have not been exposed to it.
I am also saying that, in my own personal opinion, I expect that to
change, in particular with Go in userspace code - possibly having a
chance to push out Java and hopefully pushing out C++ a bit.
My response was to an earlier comment about C's popularity WRT C++. I
answered with my experience, and I widened it to suggest that maybe C++
was not the guaranteed incumbent as the winner for production. What I
did not say then, but alluded to, is that nothing in nearly 70 years
has displaced Fortran, which >>is<< still the #1 language for
production codes (as you saw in the Archer statistics I pointed out).
Reality time ... Intel, IBM, *et al.* spend a lot of money making sure
that there are >>production quality<< Fortran compilers easily
available. Today's modern society depends on it, from weather
prediction to energy, to pharma, chemistry, and physics. As I have said
here and in other places, over my career Fortran has paid my and my
peeps' salaries. It is the production enabler, and without a solid
answer for a Fortran solution, you are unlikely to make much progress,
certainly in the HPC space.
Let me take this in a slightly different direction. I tend to use
'follow the money' as a way to root out what people care about. Where
do firms spend money to create or purchase tools to help their staff?
The answer is on tools that give them a return they can measure. So,
using that rule: which programming languages have the largest
ecosystems of tools that help find performance and operational issues?
Fortran, C, and C++ have the largest that I know of. My guess would be
that Java and maybe JavaScript/PHP come next, but I can't speak to
those firsthand.
If I look at Intel, where do we spend money on development tools? C/C++
and Fortran (which all use a common backend) are #1. Then we invest in
other versions of the same (GCC/LLVM), particularly for things we care
about. After that it's Python, and it used to be Java and maybe some
JavaScript. Why? Because ensuring that those ecosystems are solid on
devices that we make is good for us, even if it means we help some of
our competitors' devices also. But our investment helps us, and Fortran
and C/C++ are where people use our devices (and our most profitable
versions in particular), so it's in our own best interest to make sure
there are tools that bring out the best.
BTW: I might suggest you take a peek at where other firms do the same
thing; I think you'll find the 'follow the money' rule is helpful for
understanding what people care about most.