This is pretty much how we work today. Build or crossbuild the kernel
on a host with your editor of choice and hopefully tons of cores like
my dual-socket desktop. Then netboot the device under test/victim.
Panic, edit, repeat, no panic :)
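Concretely, the loop is roughly the minimal sketch below (Python just
for illustration; the paths, the make invocation, and the power-cycle
helper are all stand-ins for whatever your lab actually uses):

    import subprocess

    # Everything here is a made-up placeholder for your own setup.
    KERNEL = "obj/kernel"              # crossbuild output on the big host
    TFTP = "/srv/tftp/victim/kernel"   # image the victim netboots

    subprocess.run(["make", "-j32"], check=True)        # build where the cores are
    subprocess.run(["cp", KERNEL, TFTP], check=True)    # stage for netboot
    subprocess.run(["pdu-cycle", "victim"], check=True) # hypothetical power-cycle tool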
With modern CPUs and NICs you can pretty easily do diskless now. But
for "scale out" designs there's something to be said for having lots
of fully independent systems, especially if the application software
can avoid any artificial locality dependence.
On Thu, Sep 28, 2017 at 3:20 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> On Fri, Sep 29, 2017 at 08:08:16AM +1000, Dave Horsfall wrote:
> > On Thu, 28 Sep 2017, Clem Cole wrote:
> > > Truth is that a Sun-3 running 'diskless' != an Apollo running
> > > 'twinned.' [There is a famous Clem story I'll not repeat here from
> > > Masscomp about a typo I made, but your imagination would probably be
> > > right - when I refused to build a diskless system for Masscomp]....
> > Not the infamous "dikless" workstation? I remember a riposte from a
> > woman (on Usenet?), saying she didn't know that it was an option...
> I dunno why all the hating on diskless. They actually work; I used the
> heck out of them. For kernel work, stacking one on top of the other,
> the test machine being diskless, was a cheap way to get a setup.
> Sure, disk was better, and if your workload was write-heavy they
> sucked (*), but for testing, for editing, that sort of thing, they were
> fine.
> --lm
> (*) I did a distributed make when I was working on clusters. The compiles
> ran on a pile of clients, all the data was on the NFS server; I started
> the build on the server, did the compiles remotely, and did the link
> locally. Got a 12x speedup on a 16-node + server setup. The other kernel
> hacks were super jealous. They were all sharing a big SMP machine with
> a Solaris that didn't scale for shit; I was way faster.
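A rough sketch of the kind of fan-out Larry describes, in Python purely
for illustration; the hostnames, paths, and compiler invocation are all
invented, and it assumes a shared NFS source tree plus passwordless ssh
to each client:

    import glob
    import os
    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from itertools import cycle

    # Hostnames and paths are made up; the shape is what matters.
    CLIENTS = ["client%d" % i for i in range(1, 17)]  # 16 build clients
    SRC = "/net/server/usr/src/kernel"                # NFS tree everyone mounts

    def compile_remote(job):
        """Compile one file on a client; the .o lands in the shared tree."""
        host, cfile = job
        obj = cfile[:-2] + ".o"
        subprocess.run(["ssh", host,
                        "cd %s && cc -O2 -c %s -o %s" % (SRC, cfile, obj)],
                       check=True)
        return obj

    os.chdir(SRC)  # run this on the server itself
    sources = sorted(glob.glob("*.c"))
    with ThreadPoolExecutor(max_workers=len(CLIENTS)) as pool:
        # Round-robin the sources across the clients.
        objects = list(pool.map(compile_remote, zip(cycle(CLIENTS), sources)))

    # Do the link locally: it's the write-heavy step, which is exactly
    # where diskless clients hurt.
    subprocess.run(["cc", "-o", "kernel"] + objects, check=True)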