On Jan 4, 2023, at 9:00 AM, Warner Losh
<imp@bsdimp.com> wrote:
The best programmers I've ever worked with understood teamwork, and the team produced
something way better than what any one of us could do (this was back in the days before
egoless programming, CI, code reviews, etc., so we invented the bits that worked for us on
the fly). The thing is, every single person on that team could (and often did) work on any
aspect of the product, be it the documentation (though the tech writers usually did that),
the code (the programmers usually did that, but the tech writers committed fixes to the
example code that was in the book), the printer being out of toner/paper, the soda
supply in the closet running out, the snacks we got at Costco running low, stuffing
product into boxes to ship to customers, handling customer calls, talking to
customers at a sales booth at a technical conference, presenting papers at conferences,
etc. Nobody did anything entirely by themselves. We interviewed several 'lone
wolves' who had done it all, but found the one we hired couldn't integrate into
our pack because they couldn't be part of a team and put the team first and the group's
needs ahead of their own. That's the genesis of my mistrust of this question, or at
least of the premise behind it. And Dan, these 'scut tasks' weren't about
hazing, but just about doing what needed to be done...
One of the things that makes working in my team at the Rubin Observatory the best job
I've ever had is that our manager is brilliant at hiring smart generalists who can
play nice together. It's not that we can all do everything: I'm pretty terrible
at RDBMS stuff, for instance (I have learned a fair bit about time-series databases in the
last year), but in a pinch, we can step in and try and get each other unstuck, and between
all of us we have a lot of experience and our hunches have gotten pretty good.
And honestly, it's just a lot more fun to have other people to bounce ideas off of
and to make the stuff I'm writing better through thoughtful code reviews. Sure, I
have done solo projects that saw the light of day, some very, very large (that text
adventure is the second-biggest Inform 7 project I'm aware of, and it took a decade
or so of free-time screwing around, on and off; it's got at least a good-sized
novella's worth of text displayed to the player, and all the logic wrapped around that
probably doubles the size[*]), but the stuff I am most proud of currently (which is the
conversion of the JupyterHub KubeSpawner to coroutines) was a maintenance project.
I didn't own the original codebase; my work went through several rounds of internal
review before we submitted it upstream, and then it went through a couple more rounds of
review with the project maintainers before they accepted it. But accept it they did, and
our spawn error rate is less than 10% of what it was with the thread-based version. And
to get that last 10% down significantly further, we're going to have to abandon their
spawning model entirely, which is the right decision for the Rubin Observatory but almost
certainly the wrong one for the vast majority of sites that want to run JupyterHub/Lab
under K8s.
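(For anyone wondering what "conversion to coroutines" looks like in practice, here's a
minimal, made-up sketch of the general shape of the change -- not KubeSpawner's actual
code or API; spawn_pod and the timeout values are invented for illustration. The idea is
that blocking, thread-per-spawn calls become coroutines that a single asyncio event loop
can await, time out, and cancel cleanly.)

    import asyncio

    async def spawn_pod(user: str) -> str:
        # Stand-in for asking Kubernetes for a pod and waiting for it to come
        # up; a real spawner would be watching the API server, not sleeping.
        await asyncio.sleep(0.1)
        return f"{user}: pod running"

    async def main() -> None:
        # All spawns share one event loop instead of one thread apiece, and a
        # stuck spawn can be abandoned with an ordinary timeout.
        results = await asyncio.gather(
            *(asyncio.wait_for(spawn_pod(f"user-{i}"), timeout=30) for i in range(3))
        )
        for line in results:
            print(line)

    asyncio.run(main())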
Adam
[*] Inform 7 is a weird language. It's fundamentally declarative, and in some sense
wants to make the experience of writing a text adventure a lot like the experience of
playing a text adventure.