On 10/21/19, Dave Horsfall <dave@horsfall.org> wrote:
> Am I the only one who remembers the Defectium (as we called it)? Intel
> denied the problem until their noses got rubbed into it, after which
> they instructed Sales to refuse replacements for any chip that failed
> after running a demo program that demonstrated said defect, claiming
> that it would hardly ever happen.
I'm sure that's become a textbook case study in classes on public
relations. It really WAS an obscure corner case, and every CPU chip
has an errata list, but that's not the point. Intel would have been
far better off admitting the problem and replacing the chips from the
get-go. In the end they had to replace them anyway, and the $$$ cost
to Intel's reputation way outstripped the cost of replacing the chips.
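
For anyone who never ran one of those demo programs, the usual check
boiled down to a single division. A sketch from memory, using the
widely circulated 4195835/3145727 test case (not necessarily the exact
program Dave means): on a correct FPU the residual below is exactly 0,
while the flawed Pentiums famously returned 256.

    #include <stdio.h>

    int main(void)
    {
        double x = 4195835.0;
        double y = 3145727.0;

        /* x - (x/y)*y comes out exactly 0 when the divide is right;
           the buggy FDIV returned a quotient wrong in the 5th
           significant digit, leaving a residual of 256. */
        double residual = x - (x / y) * y;

        printf("residual = %g (%s)\n", residual,
               residual == 0.0 ? "divide looks OK" : "FDIV bug?");
        return 0;
    }
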
David Letterman even did a "9.9998 reasons to buy genuine Intel"
routine. That, for me, was the definitive proof that computers had
gone mainstream in society.
> Err, would you fly on an aircraft designed by Defectiums? Or cross a
> bridge, etc?
I'm much more alarmed by the lack of memory error detection and
correction on a lot of modern computers. This is one of my big
concerns with the use of GPUs for heavy-duty computation. GPUs
typically don't have error-detecting memory, because for graphics the
worst consequence of a memory error is a bad pixel or two on the
display. I'd not like to cross a bridge whose design software used
CUDA.
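
For what it's worth, the CUDA runtime will at least tell you whether a
given board has ECC switched on. A rough sketch, querying device 0 as
an arbitrary example:

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        cudaDeviceProp prop;
        cudaError_t err = cudaGetDeviceProperties(&prop, 0);

        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceProperties: %s\n",
                    cudaGetErrorString(err));
            return 1;
        }

        /* ECCEnabled is non-zero when the device supports ECC and
           it is currently turned on. */
        printf("%s: ECC %s\n", prop.name,
               prop.ECCEnabled ? "enabled" : "disabled");
        return 0;
    }

As far as I know the compute-oriented Tesla-class boards can enable
ECC; the consumer cards generally can't, which is exactly the worry
above.
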
-Paul W.