Tyler - I'm with Jon on this. I'll pick on Apple here. It used to be that a
huge difference between MSFT software and MacOS was that the systems folks at
Apple really tested the system, and the result was that Mac OS was
really stable. My system never panicked except when I ran Windows under
Parallels. Starting 3-4 years ago, that stopped being true. Crashes occur,
just like Windows BSoDs. It's not unusual for my Mac to panic just letting
it run overnight - which is just backups and the like. Yes, I have
multiple monitors, a zillion windows open, etc.
I come downstairs and the screen is blank (it should be, I have it turn off
after no activity), but when I move the mouse or try to type something,
nothing wakes the system up again. I've chased it to Mac OS running out of
memory and not gracefully handling the low-memory situation. Sad, because I
have 16G of RAM, a 1T SSD, and many TB of storage on Thunderbolt 3.
Look, I grew up with a 256K-byte RAM Unix V6 system on an 11/34, with 3 RK05s
and an RK07 for storage. We swapped. Yeah, I never ran a window manager, but
we had a number of 9600-baud terminals on DH11s and we were happy. You could
see it swapping like mad, but that system never crashed. It just ran and
ran and ran.
IMO, this is what Jon is referring to. Those systems were stable
because we tested them and found and fixed the issues. These days, Apple
no longer cares about Mac OS because iOS is where they now put their
effort, though I'm not super impressed there either, but I also don't
push it like I do Mac OS. Sad really. If I could get the day-to-day
applications that I need to work on FreeBSD, I suspect I would be there in
a heartbeat.
Clem
On Sat, Jan 30, 2021 at 3:07 PM Tyler Adams <coppero1237(a)gmail.com> wrote:
Really? Except for one particularly incompetent team, I cannot recall
working with or reviewing code that sacrificed clarity for performance.
Tyler
On Sat, Jan 30, 2021 at 9:51 PM Jon Steinhart <jon(a)fourwinds.com> wrote:
Tyler Adams writes:
For sure, I've seen at least two interesting changes:
- market forces have pushed fast iteration and fast prototyping into the
mainstream in the form of Silicon Valley "fail fast" culture and the
"agile" culture. This, over the disastrous "waterfall" style, has led to a
momentous improvement in overall productivity.
- As coders get pulled away from the machine and performance is less and
less in coders' hands, engineers aren't sucked into (premature) optimization
as much.
It's interesting in more than one way.
The "fail fast" culture seems to result in a lot more failure than I find
acceptable.
As performance is less in coders' hands, performance is getting worse. I
haven't seen less premature optimization; I've just seen more premature
optimization that didn't optimize anything.
My take is that the above changes have resulted in less reliable products
with poor performance being delivered more quickly. I'm just kind of weird
in that I'd prefer better products delivered more slowly. Especially since
much of what counts as a product these days is just churn to keep people
buying, not to provide things that are actually useful.