On 28 Jun 2018 18:29 -0400, from tytso(a)mit.edu (Theodore Y. Ts'o):
> And if you are really worried about potential
> problems with Spectre and Meltdown, what that means is that sharing
> caches is perilous. So if you have 128 wimpy cores, you need 128
> separate I and D caches. If you have 32 stronger cores, you need 32
> separate I and D caches.
What's more, I suspect that in order to get good performance out of
those wimpy cores, you'd need _more_ cache per core rather than less
or the same, simply because there's less of an advantage in raw clock.
One doesn't have to look hard to find examples where adding or
increasing cache in a CPU (these days on-die) has, at least for
workloads that are able to use such cache effectively, led to huge
improvements in overall performance, even at similar clock rates.
Of course, I can't help but find it interesting that we're having this
discussion at all about a language that is approaching 50 years old by
now (Wikipedia puts the earliest design in 1969, which sounds about
right, and even K&R C is 40 years old by now). Sure, C has evolved --
for example, C11 added language constructs for multithreaded
programming, including the _Thread_local storage-class specifier --
but it's still in active use and it's still recognizably an evolved
version of the language specified in K&R. I can pull out the manual
for a pre-ANSI C compiler and look at the code samples; sure, there
are things about that code that a modern compiler barfs at, but it's
quite easy to just move a few things around a little and end up with
something pretty close to modern C (albeit code that doesn't take
advantage of new features, obviously). I wonder how many of today's
languages we'll be able to say the same thing about in 2040-2050-ish.
--
Michael Kjörling • https://michael.kjorling.se • michael(a)kjorling.se
“The most dangerous thought that you can have as a creative person
is to think you know what you’re doing.” (Bret Victor)